Multilevel modeling of single-case data: A comparison of maximum likelihood and Bayesian estimation.
Moeyaert, Mariola; Rindskopf, David; Onghena, Patrick; Van den Noortgate, Wim
2017-12-01
The focus of this article is to describe Bayesian estimation, including construction of prior distributions, and to compare parameter recovery under the Bayesian framework (using weakly informative priors) and the maximum likelihood (ML) framework in the context of multilevel modeling of single-case experimental data. Bayesian estimation results were found to be similar to ML estimation results in terms of the treatment effect estimates, regardless of the functional form and degree of information included in the prior specification in the Bayesian framework. In terms of the variance component estimates, both the ML and Bayesian estimation procedures result in biased and less precise variance estimates when the number of participants is small (i.e., 3). By increasing the number of participants to 5 or 7, the relative bias falls close to 5% and more precise estimates are obtained for all approaches, except for the inverse-Wishart prior using the identity matrix. When a more informative prior was added, more precise estimates for the fixed effects and random effects were obtained, even when only 3 participants were included. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
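For orientation, a minimal two-level model of the kind compared in such studies can be written as follows; the notation and the choice of a phase dummy are illustrative assumptions, not the authors' exact specification:

```latex
% Level 1: measurement occasion i within participant j (D_ij = 0 baseline, 1 treatment)
y_{ij} = \beta_{0j} + \beta_{1j} D_{ij} + e_{ij}, \qquad e_{ij} \sim N(0, \sigma_e^2)
% Level 2: participant-specific intercept and treatment effect
\beta_{0j} = \gamma_{00} + u_{0j}, \quad \beta_{1j} = \gamma_{10} + u_{1j}, \qquad (u_{0j}, u_{1j})^\top \sim N(\mathbf{0}, \Sigma_u)
```

In the Bayesian variant, the between-participant covariance matrix Σu receives a prior such as the inverse-Wishart discussed above, while ML estimates it directly.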
Space Shuttle propulsion parameter estimation using optimal estimation techniques
NASA Technical Reports Server (NTRS)
1983-01-01
The fifth monthly progress report includes corrections and additions to the previously submitted reports. The addition of the SRB propellant thickness as a state variable is included with the associated partial derivatives. During this reporting period, preliminary results of the estimation program checkout were presented to NASA technical personnel.
The recursive maximum likelihood proportion estimator: User's guide and test results
NASA Technical Reports Server (NTRS)
Vanrooy, D. L.
1976-01-01
Implementation of the recursive maximum likelihood proportion estimator is described. A user's guide to programs as they currently exist on the IBM 360/67 at LARS, Purdue is included, and test results on LANDSAT data are described. On Hill County data, the algorithm yields results comparable to the standard maximum likelihood proportion estimator.
Kirkham, Amy A; Pauhl, Katherine E; Elliott, Robyn M; Scott, Jen A; Doria, Silvana C; Davidson, Hanan K; Neil-Sztramko, Sarah E; Campbell, Kristin L; Camp, Pat G
2015-01-01
To determine the utility of equations that use the 6-minute walk test (6MWT) results to estimate peak oxygen uptake (V̇o2) and peak work rate in chronic obstructive pulmonary disease (COPD) patients in a clinical setting. This study included a systematic review to identify published equations estimating peak V̇o2 and peak work rate in watts in COPD patients, and a retrospective chart review of data from a hospital-based pulmonary rehabilitation program. The following variables were abstracted from the records of 42 consecutively enrolled COPD patients: measured peak V̇o2 and peak work rate achieved during a cycle ergometer cardiopulmonary exercise test, 6MWT distance, age, sex, weight, height, forced expiratory volume in 1 second, forced vital capacity, and lung diffusion capacity. Peak V̇o2 and peak work rate were estimated from 6MWT distance using the published equations. The error associated with using estimated peak V̇o2 or peak work rate to prescribe aerobic exercise intensities of 60% and 80% was calculated. Eleven equations from 6 studies were identified. Agreement between estimated and measured values was poor to moderate (intraclass correlation coefficients = 0.11-0.63). The error associated with using estimated peak V̇o2 or peak work rate to prescribe exercise intensities of 60% and 80% of measured values ranged from mean differences of 12 to 35 and 16 to 47 percentage points, respectively. There is poor to moderate agreement between measured peak V̇o2 and peak work rate and estimations from equations that use 6MWT distance, and the use of the estimated values for prescription of aerobic exercise intensity would result in large error. Equations estimating peak V̇o2 and peak work rate are of low utility for prescribing exercise intensity in pulmonary rehabilitation programs.
On-line implementation of nonlinear parameter estimation for the Space Shuttle main engine
NASA Technical Reports Server (NTRS)
Buckland, Julia H.; Musgrave, Jeffrey L.; Walker, Bruce K.
1992-01-01
We investigate the performance of a nonlinear estimation scheme applied to the estimation of several parameters in a performance model of the Space Shuttle Main Engine. The nonlinear estimator is based upon the extended Kalman filter, which has been augmented to provide estimates of several key performance variables. The estimated parameters are directly related to the efficiency of both the low pressure and high pressure fuel turbopumps. Decreases in the parameter estimates may be interpreted as degradations in turbine and/or pump efficiencies, which can be useful measures for an online health monitoring algorithm. This paper extends previous work, which has focused on off-line parameter estimation, by investigating the filter's on-line potential from a computational standpoint. In addition, we examine the robustness of the algorithm to unmodeled dynamics. The filter uses a reduced-order model of the engine that includes only fuel-side dynamics. The on-line results produced during this study are comparable to off-line results generated previously. The results show that the parameter estimates are sensitive to dynamics not included in the filter model. Off-line results using an extended Kalman filter with a full order engine model to address the robustness problems of the reduced-order model are also presented.
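A minimal sketch of the state-augmentation idea behind such a filter is given below, using an invented scalar system rather than the SSME model: the unknown parameter is appended to the state vector with random-walk dynamics, so the EKF tracks it alongside the state.

```python
import numpy as np

# Augmented-state EKF sketch: estimate state x and a slowly varying parameter theta.
# Illustrative system (not the SSME model): x' = -theta*x + u, with x measured in noise.
def f(z, u, dt):                      # propagate augmented state z = [x, theta]
    x, th = z
    return np.array([x + dt * (-th * x + u), th])   # theta follows a random walk

def F_jac(z, u, dt):                  # Jacobian of f with respect to z
    x, th = z
    return np.array([[1.0 - dt * th, -dt * x],
                     [0.0,            1.0  ]])

H = np.array([[1.0, 0.0]])            # we measure x only

def ekf_step(z, P, u, y, dt, Q, R):
    z_pred = f(z, u, dt)                              # predict
    F = F_jac(z, u, dt)
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R                          # update
    K = P_pred @ H.T @ np.linalg.inv(S)
    z_new = z_pred + (K @ (y - H @ z_pred)).ravel()
    P_new = (np.eye(2) - K @ H) @ P_pred
    return z_new, P_new

# Track a true theta of 2.0 starting from an initial guess of 1.0.
rng = np.random.default_rng(0)
dt, theta_true, x = 0.01, 2.0, 1.0
z, P = np.array([1.0, 1.0]), np.eye(2)
Q, R = np.diag([1e-6, 1e-4]), np.array([[1e-2]])
for _ in range(2000):
    x += dt * (-theta_true * x + 1.0)            # truth propagation with input u = 1
    y = x + 0.1 * rng.standard_normal()          # noisy measurement
    z, P = ekf_step(z, P, 1.0, np.array([y]), dt, Q, R)
print("estimated theta:", z[1])                  # should approach 2.0
```

A sustained drop in the estimated parameter is then the health-monitoring signal the abstract describes: in the real application the parameter maps to turbopump efficiency rather than a toy decay rate.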
Estimates of advection and diffusion in the Potomac estuary
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elliott, A.J.
1976-01-01
A two-layered dispersion model, suitable for application to partially mixed estuaries, has been developed to provide hydrological interpretation of the results of biological sampling. The model includes horizontal and vertical advection plus both horizontal and vertical diffusion. A pseudo-geostrophic method, which includes a damping factor to account for internal eddy friction, is used to estimate the horizontal advective fluxes, and the results are compared with field observations. A salt balance model is then used to estimate the effective diffusivities in the Potomac estuary during the spring of 1974.
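For orientation, the balance described (horizontal and vertical advection plus horizontal and vertical diffusion) corresponds to a transport equation of the following generic form; a two-layer formulation can be viewed as a vertical discretization of this balance:

```latex
\frac{\partial c}{\partial t} + u\,\frac{\partial c}{\partial x} + w\,\frac{\partial c}{\partial z}
= \frac{\partial}{\partial x}\!\left(K_x\,\frac{\partial c}{\partial x}\right)
+ \frac{\partial}{\partial z}\!\left(K_z\,\frac{\partial c}{\partial z}\right)
```

where c is the transported concentration (salinity, in the salt-balance step), u and w are the horizontal and vertical advective velocities, and K_x, K_z are the effective diffusivities the model estimates.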
Ares I-X Best Estimated Trajectory Analysis and Results
NASA Technical Reports Server (NTRS)
Karlgaard, Christopher D.; Beck, Roger E.; Starr, Brett R.; Derry, Stephen D.; Brandon, Jay; Olds, Aaron D.
2011-01-01
The Ares I-X trajectory reconstruction produced best estimated trajectories of the flight test vehicle ascent through stage separation, and of the first and upper stage entries after separation. The trajectory reconstruction process combines on-board, ground-based, and atmospheric measurements to produce the trajectory estimates. The Ares I-X vehicle had a number of on-board and ground-based sensors available, including inertial measurement units, radar, air-data sensors, and weather balloons. However, due to problems with calibrations and/or data, not all of the sensor data were used. The trajectory estimate was generated using an Iterative Extended Kalman Filter algorithm, which is an industry-standard processing algorithm for filtering and estimation applications. This paper describes the methodology and results of the trajectory reconstruction process, including flight data preprocessing and input uncertainties, trajectory estimation algorithms, output transformations, and comparisons with preflight predictions.
Communications availability: Estimation studies at AMSC
NASA Technical Reports Server (NTRS)
Sigler, C. Edward, Jr.
1994-01-01
The results of L-band communications availability work performed to date are presented. Results include an L-band communications availability estimate model and field propagation trials using an INMARSAT-M terminal. American Mobile Satellite Corporation's (AMSC's) primary concern centers on the availability of voice communications intelligibility, with secondary concerns for circuit-switched data and fax. The model estimates for representative terrain/vegetation areas are applied to the contiguous U.S. for overall L-band communications availability estimates.
Quantitative estimation of source complexity in tsunami-source inversion
NASA Astrophysics Data System (ADS)
Dettmer, Jan; Cummins, Phil R.; Hawkins, Rhys; Jakir Hossen, M.
2016-04-01
This work analyses tsunami waveforms to infer the spatiotemporal evolution of sea-surface displacement (the tsunami source) caused by earthquakes or other sources. Since the method considers sea-surface displacement directly, no assumptions about the fault or seafloor deformation are required. While this approach cannot address seismic aspects of rupture, it greatly simplifies the tsunami source estimation, making it much less dependent on subjective fault and deformation assumptions. This results in a more accurate sea-surface displacement evolution in the source region. The spatial discretization is by wavelet decomposition represented by a trans-dimensional (trans-D) Bayesian tree structure. Wavelet coefficients are sampled by a reversible-jump algorithm, and additional coefficients are included only when required by the data. Therefore, source complexity is consistent with data information (parsimonious) and the method can adapt locally in both time and space. Since the source complexity is unknown and adapts locally, no regularization is required, resulting in more meaningful displacement magnitudes. By estimating displacement uncertainties in a Bayesian framework we can study the effect of parametrization choice on the source estimate. Uncertainty arises from observation errors and from limitations in the parametrization's ability to fully explain the observations. As a result, parametrization choice is closely related to uncertainty estimation and profoundly affects inversion results. Therefore, parametrization selection should be included in the inference process. Our inversion method is based on Bayesian model selection, which includes the choice of parametrization in the inference process and makes it data driven. The trans-D model for the spatio-temporal discretization includes model selection naturally and efficiently in the inference by sampling probabilistically over parametrizations. The trans-D process results in better uncertainty estimates, since the parametrization adapts parsimoniously (in both time and space) according to the local data resolving power, and the uncertainty about the parametrization choice is included in the uncertainty estimates. We apply the method to the tsunami waveforms recorded for the great 2011 Japan tsunami. All data are recorded on high-quality sensors (ocean-bottom pressure sensors, GPS gauges, and DART buoys). The sea-surface Green's functions are computed by JAGURS and include linear dispersion effects. By treating the noise level at each gauge as unknown, individual gauge contributions to the source estimate are appropriately and objectively weighted. The results show previously unreported detail of the source, quantify uncertainty spatially, and produce excellent data fits. The source estimate shows an elongated peak trench-ward from the hypocentre that closely follows the trench, indicating significant sea-floor deformation near the trench. Also notable is a bi-modal (negative to positive) displacement feature in the northern part of the source near the trench. The feature has ~2 m amplitude and is clearly resolved by the data with low uncertainties.
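The reversible-jump step mentioned above accepts a proposed move between parametrizations (e.g., the birth or death of a wavelet coefficient) with the standard Metropolis-Hastings-Green probability; the generic form is shown below, while the paper's specific proposal densities are not reproduced here:

```latex
\alpha = \min\left\{1,\;
\frac{p(\mathbf{d}\mid\mathbf{m}')\,p(\mathbf{m}')\,q(\mathbf{m}\mid\mathbf{m}')}
     {p(\mathbf{d}\mid\mathbf{m})\,p(\mathbf{m})\,q(\mathbf{m}'\mid\mathbf{m})}\,
\lvert\mathbf{J}\rvert\right\}
```

where m and m′ are the current and proposed models (possibly of different dimension), d the waveform data, q the proposal densities, and |J| the Jacobian of the dimension-matching transformation.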
Huang, Jian; Zhang, Cun-Hui
2013-01-01
The ℓ1-penalized method, or the Lasso, has emerged as an important tool for the analysis of large data sets. Many important results have been obtained for the Lasso in linear regression which have led to a deeper understanding of high-dimensional statistical problems. In this article, we consider a class of weighted ℓ1-penalized estimators for convex loss functions of a general form, including the generalized linear models. We study the estimation, prediction, selection and sparsity properties of the weighted ℓ1-penalized estimator in sparse, high-dimensional settings where the number of predictors p can be much larger than the sample size n. Adaptive Lasso is considered as a special case. A multistage method is developed to approximate concave regularized estimation by applying an adaptive Lasso recursively. We provide prediction and estimation oracle inequalities for single- and multi-stage estimators, a general selection consistency theorem, and an upper bound for the dimension of the Lasso estimator. Important models including the linear regression, logistic regression and log-linear models are used throughout to illustrate the applications of the general results. PMID:24348100
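A minimal sketch of the recursive adaptive-Lasso idea in the linear-regression special case: each stage solves a weighted ℓ1 problem with weights built from the previous stage's coefficients, implemented here by column rescaling. The weight rule 1/(|β|+ε) and all names are illustrative choices, not the paper's exact construction.

```python
import numpy as np
from sklearn.linear_model import Lasso

def multistage_adaptive_lasso(X, y, alpha=0.1, stages=3, eps=1e-4):
    """Approximate concave-penalized estimation by applying a weighted Lasso
    recursively. A weighted l1 penalty with weights w_j is equivalent to a
    plain Lasso on rescaled columns X_j / w_j, with coefficients mapped back."""
    n, p = X.shape
    w = np.ones(p)                          # stage 1: ordinary Lasso
    beta = np.zeros(p)
    for _ in range(stages):
        Xw = X / w                          # column rescaling encodes the weights
        model = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000).fit(Xw, y)
        beta = model.coef_ / w              # map back to the original scale
        w = 1.0 / (np.abs(beta) + eps)      # heavier penalty on small coefficients
    return beta

# Illustration on synthetic sparse data with p > n.
rng = np.random.default_rng(1)
n, p = 100, 200
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:5] = [3, -2, 1.5, 2, -1]
y = X @ beta_true + 0.5 * rng.standard_normal(n)
beta_hat = multistage_adaptive_lasso(X, y)
print("selected support:", np.flatnonzero(np.abs(beta_hat) > 1e-8))
```

The later stages shrink small first-stage coefficients toward zero more aggressively, which is the sense in which the recursion approximates a concave penalty.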
Astone, Pia; Weinstein, Alan; Agathos, Michalis; Bejger, Michał; Christensen, Nelson; Dent, Thomas; Graff, Philip; Klimenko, Sergey; Mazzolo, Giulio; Nishizawa, Atsushi; Robinet, Florent; Schmidt, Patricia; Smith, Rory; Veitch, John; Wade, Madeline; Aoudia, Sofiane; Bose, Sukanta; Calderon Bustillo, Juan; Canizares, Priscilla; Capano, Colin; Clark, James; Colla, Alberto; Cuoco, Elena; Da Silva Costa, Carlos; Dal Canton, Tito; Evangelista, Edgar; Goetz, Evan; Gupta, Anuradha; Hannam, Mark; Keitel, David; Lackey, Benjamin; Logue, Joshua; Mohapatra, Satyanarayan; Piergiovanni, Francesco; Privitera, Stephen; Prix, Reinhard; Pürrer, Michael; Re, Virginia; Serafinelli, Roberto; Wade, Leslie; Wen, Linqing; Wette, Karl; Whelan, John; Palomba, C; Prodi, G
2015-01-01
The Amaldi 10 Parallel Session C2 on gravitational wave (GW) search results, data analysis and parameter estimation included three lively sessions of lectures by 13 presenters, and 34 posters. The talks and posters covered a huge range of material, including results and analysis techniques for ground-based GW detectors, targeting anticipated signals from different astrophysical sources: compact binary inspiral, merger and ringdown; GW bursts from intermediate mass binary black hole mergers, cosmic string cusps, core-collapse supernovae, and other unmodeled sources; continuous waves from spinning neutron stars; and a stochastic GW background. There was considerable emphasis on Bayesian techniques for estimating the parameters of coalescing compact binary systems from the gravitational waveforms extracted from the data from the advanced detector network. This included methods to distinguish deviations of the signals from what is expected in the context of General Relativity.
Phase and Pupil Amplitude Recovery for JWST Space-Optics Control
NASA Technical Reports Server (NTRS)
Dean, B. H.; Zielinski, T. P.; Smith, J. S.; Bolcar, M. R.; Aronstein, D. L.; Fienup, J. R.
2010-01-01
This slide presentation reviews the phase and pupil amplitude recovery for the James Webb Space Telescope (JWST) Near Infrared Camera (NIRCam). It includes views of the Integrated Science Instrument Module (ISIM), the NIRCam, examples of Phase Retrieval Data, Ghost Irradiance, Pupil Amplitude Estimation, Amplitude Retrieval, Initial Plate Scale Estimation using the Modulation Transfer Function (MTF), Pupil Amplitude Estimation vs. lambda, Pupil Amplitude Estimation vs. Number of Images, Pupil Amplitude Estimation vs. Rotation (clocking), and Typical Phase Retrieval Results. Also included is information about the phase retrieval approach, Non-Linear Optimization (NLO) Optimized Diversity Functions, and Least Square Error vs. Starting Pupil Amplitude.
Estimation of population mean under systematic sampling
NASA Astrophysics Data System (ADS)
Noor-ul-amin, Muhammad; Javaid, Amjad
2017-11-01
In this study, we propose a generalized ratio estimator under non-response for systematic random sampling. We also generate a class of estimators as special cases of the generalized estimator, using different combinations of the coefficients of correlation, kurtosis and variation. The mean square errors and the mathematical conditions under which the proposed estimators are more efficient are also derived. A numerical illustration using three populations is included to support the results.
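For context, the classical ratio estimator that this class generalizes, and its first-order mean square error in the simple-random-sampling form (the systematic-sampling and non-response versions modify this), are:

```latex
\bar{y}_R = \bar{y}\,\frac{\bar{X}}{\bar{x}}, \qquad
\mathrm{MSE}(\bar{y}_R) \approx \frac{1-f}{n}\left(S_y^2 + R^2 S_x^2 - 2R\rho S_x S_y\right), \qquad
R = \frac{\bar{Y}}{\bar{X}}
```

where x is the auxiliary variable with known population mean X̄, f = n/N is the sampling fraction, and ρ is the correlation between x and y.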
Blinded sample size re-estimation in three-arm trials with 'gold standard' design.
Mütze, Tobias; Friede, Tim
2017-10-15
In this article, we study blinded sample size re-estimation in the 'gold standard' design with internal pilot study for normally distributed outcomes. The 'gold standard' design is a three-arm clinical trial design that includes an active and a placebo control in addition to an experimental treatment. We focus on the absolute margin approach to hypothesis testing in three-arm trials, in which the non-inferiority of the experimental treatment and the assay sensitivity are assessed by pairwise comparisons. We compare several blinded sample size re-estimation procedures in a simulation study assessing operating characteristics including power and type I error. We find that sample size re-estimation based on the popular one-sample variance estimator results in overpowered trials. Moreover, sample size re-estimation based on unbiased variance estimators such as the Xing-Ganju variance estimator results in underpowered trials, as expected, because an overestimation of the variance, and thus of the sample size, is in general required for the re-estimation procedure to eventually meet the target power. To overcome this problem, we propose an inflation factor for the sample size re-estimation with the Xing-Ganju variance estimator and show that this approach results in adequately powered trials. Because of favorable features of the Xing-Ganju variance estimator, such as unbiasedness and a distribution independent of the group means, the inflation factor does not depend on the nuisance parameter and can therefore be calculated prior to a trial. Moreover, we prove that sample size re-estimation based on the Xing-Ganju variance estimator does not bias the effect estimate. Copyright © 2017 John Wiley & Sons, Ltd.
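Schematically, the re-estimation step at the internal pilot has the familiar normal-approximation form below; the symbols are generic (a two-arm comparison with margin-adjusted difference Δ, one pairwise comparison of the three-arm design), and the paper's inflation factor c enters multiplicatively:

```latex
\hat{n} \;=\; c \cdot \frac{2\,\hat{\sigma}^2 \left(z_{1-\alpha} + z_{1-\beta}\right)^2}{\Delta^2}
```

with σ̂² the blinded variance estimate (e.g., the Xing-Ganju estimator) computed from the pilot data and c ≥ 1 chosen so that the procedure meets the target power; c = 1 recovers the uninflated rule.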
Worldwide Ocean Optics Database (WOOD)
2002-09-30
Computed results will include properties derived from empirical algorithms (e.g., beam attenuation estimated from diffuse attenuation and backscatter data), and error estimates will also be provided for the computed results. The database covers optical properties including diffuse attenuation, beam attenuation, and scattering. Data from ONR-funded bio-optical cruises will be given priority for loading.
Evaluation of Piloted Inputs for Onboard Frequency Response Estimation
NASA Technical Reports Server (NTRS)
Grauer, Jared A.; Martos, Borja
2013-01-01
Frequency response estimation results are presented using piloted inputs and a real-time estimation method recently developed for multisine inputs. A nonlinear simulation of the F-16 and a Piper Saratoga research aircraft were subjected to different piloted test inputs while the short period stabilator/elevator to pitch rate frequency response was estimated. Results show that the method can produce accurate results using wide-band piloted inputs instead of multisines. A new metric is introduced for evaluating which data points to include in the analysis and recommendations are provided for applying this method with piloted inputs.
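A minimal sketch of empirical frequency response estimation from input/output records via FFT ratios; the paper's real-time recursive method and multisine-specific details are not reproduced here, and the first-order system is invented for illustration.

```python
import numpy as np

def frequency_response(u, y, dt, f_lo=0.1, f_hi=2.0):
    """Estimate H(jw) = Y(w)/U(w) over a frequency band from sampled
    input u (e.g., stabilator deflection) and output y (e.g., pitch rate)."""
    n = len(u)
    U = np.fft.rfft(u * np.hanning(n))        # windowed transforms
    Y = np.fft.rfft(y * np.hanning(n))
    f = np.fft.rfftfreq(n, dt)
    band = (f >= f_lo) & (f <= f_hi)          # keep only the excited band
    H = Y[band] / U[band]
    return f[band], 20 * np.log10(np.abs(H)), np.degrees(np.angle(H))

# Illustration: first-order lag y' = -2y + 2u driven by a wide-band input.
rng = np.random.default_rng(2)
dt, n = 0.02, 4096
u = rng.standard_normal(n)                    # stand-in for a wide-band piloted input
y = np.zeros(n)
for k in range(n - 1):                        # Euler integration of the lag
    y[k + 1] = y[k] + dt * (-2.0 * y[k] + 2.0 * u[k])
f, mag_db, phase_deg = frequency_response(u, y, dt)
print(f[:3], mag_db[:3])                      # low-frequency gain should be near 0 dB
```

The point made in the abstract is that the excitation need not be a designed multisine: any piloted input with adequate power across the band of interest makes the ratio above well conditioned.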
Computer simulation results of attitude estimation of earth orbiting satellites
NASA Technical Reports Server (NTRS)
Kou, S. R.
1976-01-01
Computer simulation results of attitude estimation of Earth-orbiting satellites (including Space Telescope) subjected to environmental disturbances and noises are presented. A decomposed linear recursive filter and a Kalman filter were used as estimation tools. Six programs were developed for this simulation; all were written in the BASIC language and were run on HP 9830A and HP 9866A computers. Simulation results show that a decomposed linear recursive filter is accurate in estimation and fast in response time. Furthermore, for higher order systems, this filter has computational advantages (i.e., fewer integration and roundoff errors) over a Kalman filter.
Crustal dynamics project data analysis, 1987. Volume 2: Mobile VLBI geodetic results, 1982-1986
NASA Technical Reports Server (NTRS)
Ma, C.; Ryan, J. W.
1987-01-01
The Goddard VLBI group reports the results of analyzing 101 Mark III data sets acquired from mobile observing sites through the end of 1986 and available to the Crustal Dynamics Project. The fixed VLBI observations at Hat Creek, Ft. Davis, Mojave, and OVRO are included as they participate heavily in the mobile schedules. One large solution GLB171 was used to obtain baseline length and transverse evolutions. Radio source positions were estimated globally, while nutation offsets were estimated from each data set. The results include 28 mobile sites.
Estimating acreage by double sampling using LANDSAT data
NASA Technical Reports Server (NTRS)
Pont, F.; Horwitz, H.; Kauth, R. (Principal Investigator)
1982-01-01
Double sampling techniques employing LANDSAT data for estimating the acreage of corn and soybeans were investigated and evaluated. The evaluation was based on estimated costs and correlations between two existing procedures having differing cost/variance characteristics, and included consideration of their individual merits when coupled with a fictional 'perfect' procedure of zero bias and variance. Two features of the analysis are: (1) the simultaneous estimation of two or more crops; and (2) the imposition of linear cost constraints among two or more types of resource. A reasonably realistic operational scenario was postulated. The costs were estimated from current experience with the measurement procedures involved, and the correlations were estimated from a set of 39 LACIE-type sample segments located in the U.S. Corn Belt. For a fixed variance of the estimate, double sampling with the two existing LANDSAT measurement procedures can result in a 25% or 50% cost reduction. Double sampling that included the fictional perfect procedure resulted in a more cost-effective combination when it was used with the lower cost/higher variance representative of the existing procedures.
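The two-phase (double sampling) regression estimator underlying designs of this kind can be sketched as follows; the paper's multi-crop, cost-constrained version generalizes it:

```latex
\hat{\bar{y}}_{ds} = \bar{y}_s + b\,(\bar{x}' - \bar{x}_s), \qquad
\mathrm{Var}(\hat{\bar{y}}_{ds}) \approx \frac{S_y^2\,(1-\rho^2)}{n} + \frac{\rho^2 S_y^2}{n'}
```

where the cheap procedure is applied to a large first-phase sample of size n′ (yielding x̄′), the expensive procedure to a subsample of size n, b is the regression slope of y on x, and ρ their correlation; the linear cost constraint then trades n against n′.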
Estimating Bottleneck Bandwidth using TCP
NASA Technical Reports Server (NTRS)
Allman, Mark
1998-01-01
Various issues associated with estimating bottleneck bandwidth using TCP are presented in viewgraph form. Specific topics include: 1) Why TCP is wanted to estimate the bottleneck bandwidth; 2) Setting ssthresh to an appropriate value to reduce loss; 3) Possible packet-pair solutions; and 4) Preliminary results: ACTS and the Internet.
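The packet-pair idea mentioned in the viewgraphs reduces to a one-line computation: the bottleneck link serializes back-to-back segments, so their arrival spacing equals one segment's transmission time on that link. A toy sketch (names illustrative):

```python
def packet_pair_bandwidth(segment_bytes, arrival_gap_s):
    """Bottleneck bandwidth estimate from one packet pair: the bottleneck
    link serializes back-to-back packets, so their arrival spacing equals
    the transmission time of one segment on that link."""
    return 8 * segment_bytes / arrival_gap_s      # bits per second

# Example: 1460-byte TCP segments arriving 12 ms apart -> ~0.97 Mbit/s.
print(packet_pair_bandwidth(1460, 0.012))
```

In practice many pairs are filtered and aggregated, since queueing behind cross traffic stretches or compresses individual gaps.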
NASA Technical Reports Server (NTRS)
Ryan, J. W.; Ma, C.
1987-01-01
The Goddard VLBI group reports the results of analyzing Mark III data sets from fixed observatories through the end of 1986 and available to the Crustal Dynamics Project. All full-day data from POLARIS/IRIS are included. The mobile VLBI sites at Platteville (Colorado), Penticton (British Columbia), and Yellowknife (Northwest Territories) are also included since these occupations bear on the study of plate stability. Two large solutions, GLB121 and GLB122, were used to obtain Earth rotation parameters and baseline evolutions, respectively. Radio source positions were estimated globally while nutation offsets were estimated from each data set. The results include 25 sites and 108 baselines.
NASA Technical Reports Server (NTRS)
Kelly, G. M.; Mcconnell, J. G.; Findlay, J. T.; Heck, M. L.; Henry, M. W.
1984-01-01
The STS-11 (41-B) postflight data processing has been completed and the results published. The final reconstructed entry trajectory is presented. The various atmospheric sources available for this flight are discussed. Aerodynamic Best Estimate of Trajectory (BET) generation and plots from this file are presented. A definition of the major maneuvers effected is given. Physical constants, including spacecraft mass properties; final residuals from the reconstruction process; trajectory parameter listings; and an archival section are included.
Space shuttle propulsion estimation development verification
NASA Technical Reports Server (NTRS)
Rogers, Robert M.
1989-01-01
The application of extended Kalman filtering to estimating Space Shuttle propulsion performance, i.e., specific impulse, from flight data in a post-flight processing computer program is detailed. The flight data used include inertial platform acceleration, SRB head pressure, SSME chamber pressure and flow rates, and ground-based radar tracking data. The key feature in this application is the model used for the SRBs, which is a nominal or reference quasi-static internal ballistics model normalized to the propellant burn depth. Dynamic states of mass overboard and propellant burn depth are included in the filter model to account for real-time deviations from the reference model. Aerodynamic, plume, wind, and main engine uncertainties are also included for an integrated system model. Assuming uncertainty within the propulsion system model and attempting to estimate its deviations represents a new application of parameter estimation for rocket-powered vehicles. Illustrations from the results of applying this estimation approach to several missions show good-quality propulsion estimates.
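For reference, the performance parameter being estimated, specific impulse, is defined by

```latex
I_{sp} = \frac{F}{\dot{m}\,g_0}
```

where F is thrust, ṁ is the propellant mass flow rate (tied to the filter's mass-overboard state), and g0 = 9.80665 m/s²; deviations of the filter states from the reference ballistics model propagate directly into this estimate.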
Kirk, Martyn D; Pires, Sara M; Black, Robert E; Caipo, Marisa; Crump, John A; Devleesschauwer, Brecht; Döpfer, Dörte; Fazil, Aamir; Fischer-Walker, Christa L; Hald, Tine; Hall, Aron J; Keddy, Karen H; Lake, Robin J; Lanata, Claudio F; Torgerson, Paul R; Havelaar, Arie H; Angulo, Frederick J
2015-12-01
Foodborne diseases are important worldwide, resulting in considerable morbidity and mortality. To our knowledge, we present the first global and regional estimates of the disease burden of the most important foodborne bacterial, protozoal, and viral diseases. We synthesized data on the number of foodborne illnesses, sequelae, deaths, and Disability Adjusted Life Years (DALYs), for all diseases with sufficient data to support global and regional estimates, by age and region. The data sources included varied by pathogen and included systematic reviews, cohort studies, surveillance studies and other burden of disease assessments. We sought relevant data circa 2010, and included sources from 1990-2012. The number of studies per pathogen ranged from as few as 5 studies for bacterial intoxications through to 494 studies for diarrheal pathogens. To estimate mortality for Mycobacterium bovis infections and morbidity and mortality for invasive non-typhoidal Salmonella enterica infections, we excluded cases attributed to HIV infection. We excluded stillbirths in our estimates. We estimate that the 22 diseases included in our study resulted in two billion (95% uncertainty interval [UI] 1.5-2.9 billion) cases, over one million (95% UI 0.89-1.4 million) deaths, and 78.7 million (95% UI 65.0-97.7 million) DALYs in 2010. To estimate the burden due to contaminated food, we then applied proportions of infections that were estimated to be foodborne from a global expert elicitation. Waterborne transmission of disease was not included. We estimate that 29% (95% UI 23-36%) of cases caused by diseases in our study, or 582 million (95% UI 401-922 million), were transmitted by contaminated food, resulting in 25.2 million (95% UI 17.5-37.0 million) DALYs. Norovirus was the leading cause of foodborne illness causing 125 million (95% UI 70-251 million) cases, while Campylobacter spp. caused 96 million (95% UI 52-177 million) foodborne illnesses. Of all foodborne diseases, diarrheal and invasive infections due to non-typhoidal S. enterica infections resulted in the highest burden, causing 4.07 million (95% UI 2.49-6.27 million) DALYs. Regionally, DALYs per 100,000 population were highest in the African region followed by the South East Asian region. Considerable burden of foodborne disease is borne by children less than five years of age. Major limitations of our study include data gaps, particularly in middle- and high-mortality countries, and uncertainty around the proportion of diseases that were foodborne. Foodborne diseases result in a large disease burden, particularly in children. Although it is known that diarrheal diseases are a major burden in children, we have demonstrated for the first time the importance of contaminated food as a cause. There is a need to focus food safety interventions on preventing foodborne diseases, particularly in low- and middle-income settings.
NASA Astrophysics Data System (ADS)
Bai, H.; Gong, C.; Wang, M.; Zhang, Z.
2017-12-01
Precipitation susceptibility to aerosol perturbations plays a key role in understanding aerosol-cloud interactions and constraining aerosol indirect effects. However, large discrepancies exist among previous satellite estimates of precipitation susceptibility. In this paper, multi-sensor aerosol and cloud products, including those from CALIPSO, CloudSat, MODIS, and AMSR-E from June 2006 to April 2011, are analyzed to estimate precipitation susceptibility (including precipitation frequency susceptibility SPOP, precipitation intensity susceptibility SI, and precipitation rate susceptibility SR) in warm marine clouds. Our results show that SPOP is relatively robust across independent LWP products and diverse rain products. In contrast, the behavior of SI is more sensitive to the choice of LWP or rain products. Our results further show that SPOP strongly depends on atmospheric stability, with larger values under more stable environments. Precipitation susceptibility calculated with respect to cloud droplet number concentration (CDNC) is generally much larger than that estimated with respect to aerosol index (AI), which results from the weak dependency of CDNC on AI.
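The susceptibility metrics named here are conventionally defined as negative log-log derivatives of a precipitation quantity with respect to the aerosol proxy; the abstract does not spell the definition out, so this standard form is an assumption:

```latex
S_{\mathrm{POP}} = -\frac{\partial \ln(\mathrm{POP})}{\partial \ln N_d}
```

with POP the probability of precipitation and N_d the cloud droplet number concentration (or the aerosol index AI, when susceptibility is computed with respect to aerosol); S_I and S_R are defined analogously for intensity and rate.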
Estimation of delays and other parameters in nonlinear functional differential equations
NASA Technical Reports Server (NTRS)
Banks, H. T.; Lamm, P. K. D.
1983-01-01
A spline-based approximation scheme for nonlinear nonautonomous delay differential equations is discussed. Convergence results (using dissipative type estimates on the underlying nonlinear operators) are given in the context of parameter estimation problems which include estimation of multiple delays and initial data as well as the usual coefficient-type parameters. A brief summary of some of the related numerical findings is also given.
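Schematically, the estimation problem treated is of the following generic form, with the delays themselves among the unknowns:

```latex
\dot{x}(t) = f\bigl(t,\, x(t),\, x(t-\tau_1), \ldots, x(t-\tau_m);\, \beta\bigr), \qquad
\min_{q = (\tau_1,\ldots,\tau_m,\,\beta,\,\phi)} \;\sum_i \bigl\| x(t_i; q) - y_i \bigr\|^2
```

where φ is the initial data on the delay interval and y_i are observations; the spline scheme replaces x(t; q) by a finite-dimensional approximation whose convergence the dissipative-type estimates guarantee.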
Grey literature in meta-analyses.
Conn, Vicki S; Valentine, Jeffrey C; Cooper, Harris M; Rantz, Marilyn J
2003-01-01
In meta-analysis, researchers combine the results of individual studies to arrive at cumulative conclusions. Meta-analysts sometimes include "grey literature" in their evidential base, which includes unpublished studies and studies published outside widely available journals. Because grey literature is a source of data that might not employ peer review, critics have questioned the validity of its data and the results of meta-analyses that include it. To examine evidence regarding whether grey literature should be included in meta-analyses and strategies to manage grey literature in quantitative synthesis. This article reviews evidence on whether the results of studies published in peer-reviewed journals are representative of results from broader samplings of research on a topic as a rationale for inclusion of grey literature. Strategies to enhance access to grey literature are addressed. The most consistent and robust difference between published and grey literature is that published research is more likely to contain results that are statistically significant. Effect size estimates of published research are about one-third larger than those of unpublished studies. Unfunded and small sample studies are less likely to be published. Yet, importantly, methodological rigor does not differ between published and grey literature. Meta-analyses that exclude grey literature likely (a) over-represent studies with statistically significant findings, (b) inflate effect size estimates, and (c) provide less precise effect size estimates than meta-analyses including grey literature. Meta-analyses should include grey literature to fully reflect the existing evidential base and should assess the impact of methodological variations through moderator analysis.
Latest NASA Instrument Cost Model (NICM): Version VI
NASA Technical Reports Server (NTRS)
Mrozinski, Joe; Habib-Agahi, Hamid; Fox, George; Ball, Gary
2014-01-01
The NASA Instrument Cost Model, NICM, is a suite of tools that allows for probabilistic cost estimation of NASA's space-flight instruments at both the system and subsystem level. NICM also includes the ability to perform cost estimation by analogy as well as joint confidence level (JCL) analysis. The latest version of NICM, Version VI, was released in Spring 2014. This paper will focus on the new features released with NICM VI, which include: 1) the NICM-E cost estimating relationship, which is applicable to instruments flying on Explorer-like class missions; 2) a new cluster analysis capability which, alongside the results of the parametric cost estimation for the user's instrument, provides a visualization of the instrument's similarity to previously flown instruments; and 3) new cost estimating relationships for in-situ instruments.
45 CFR 284.11 - What definitions apply to this part?
Code of Federal Regulations, 2010 CFR
2010-10-01
... METHODOLOGY FOR DETERMINING WHETHER AN INCREASE IN A STATE OR TERRITORY'S CHILD POVERTY RATE IS THE RESULT OF... estimating the number and percentage of children in poverty in each State. These methods may include national estimates based on the Current Population Survey; the Small Area Income and Poverty Estimates; the annual...
The report gives results of a first attempt to estimate global and country-specific methane (CH4) emissions from sewers and on-site wastewater treatment systems, including latrines and septic sewage tanks. It follows a report that includes CH4 and nitrous oxide (N2O) estimates fro...
Emergent constraints for aerosol indirect effects
NASA Astrophysics Data System (ADS)
Wang, M.; Zhang, S.; Gong, C.; Ghan, S. J.
2016-12-01
Methane Emissions in the U.S. GHG Inventory
NASA Astrophysics Data System (ADS)
Weitz, M.
2017-12-01
The EPA's annual Inventory of U.S. Greenhouse Gas Emissions and Sinks (GHG Inventory) includes detailed national estimates of anthropogenic methane emissions. In recent years, new data have become available on methane emissions across a number of anthropogenic sources in the U.S. The GHG Inventory has incorporated newly available data and includes updated emissions estimates from a number of categories. This presentation will discuss the latest GHG Inventory results, including results for the oil and gas, waste, and agriculture sectors. The presentation will also discuss key areas for research, and processes for updating data in the GHG Inventory.
Methods for Estimating Water Withdrawals for Mining in the United States, 2005
Lovelace, John K.
2009-01-01
The mining water-use category includes groundwater and surface water that is withdrawn and used for nonfuels and fuels mining. Nonfuels mining includes the extraction of ores, stone, sand, and gravel. Fuels mining includes the extraction of coal, petroleum, and natural gas. Water is used for mineral extraction, quarrying, milling, and other operations directly associated with mining activities. For petroleum and natural gas extraction, water often is injected for secondary oil or gas recovery. Estimates of water withdrawals for mining are needed for water planning and management. This report documents methods used to estimate withdrawals of fresh and saline groundwater and surface water for mining during 2005 for each county and county equivalent in the United States, Puerto Rico, and the U.S. Virgin Islands. Fresh and saline groundwater and surface-water withdrawals during 2005 for nonfuels- and coal-mining operations in each county or county equivalent in the United States, Puerto Rico, and the U.S. Virgin Islands were estimated. Fresh and saline groundwater withdrawals for oil and gas operations in counties of six states also were estimated. Water withdrawals for nonfuels and coal mining were estimated by using mine-production data and water-use coefficients. Production data for nonfuels mining included the mine location and weight (in metric tons) of crude ore, rock, or mineral produced at each mine in the United States, Puerto Rico, and the U.S. Virgin Islands during 2004. Production data for coal mining included the weight, in metric tons, of coal produced in each county or county equivalent during 2004. Water-use coefficients for mined commodities were compiled from various sources including published reports and written communications from U.S. Geological Survey National Water-use Information Program (NWUIP) personnel in several states. Water withdrawals for oil and gas extraction were estimated for six States including California, Colorado, Louisiana, New Mexico, Texas, and Wyoming, by using data from State agencies that regulate oil and gas extraction. Total water withdrawals for mining in a county were estimated by summing estimated water withdrawals for nonfuels mining, coal mining, and oil and gas extraction. The results of this study were distributed to NWUIP personnel in each State during 2007. NWUIP personnel were required to submit estimated withdrawals for numerous categories of use in their States to a national compilation team for inclusion in a national report describing water use in the United States during 2005. NWUIP personnel had the option of submitting the estimates determined by using the methods described in this report, a modified version of these estimates, or their own set of estimates or reported data. Estimated withdrawals resulting from the methods described in this report may not be included in the national report; therefore the estimates are not presented herein in order to avoid potential inconsistencies with the national report. Water-use coefficients for specific minerals also are not presented to avoid potential disclosure of confidential production data provided by mining operations to the U.S. Geological Survey.
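The core computation is a production-weighted sum; schematically, for a county c (symbols generic, not the report's notation):

```latex
W_c \;=\; \sum_{i \in c} P_i\, k_i
```

where P_i is the 2004 production (metric tons) of commodity i at mines in county c and k_i is that commodity's water-use coefficient (volume withdrawn per ton); the separately estimated oil- and gas-extraction withdrawals are then added to W_c.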
NASA Astrophysics Data System (ADS)
Kitterød, Nils-Otto
2017-08-01
Unconsolidated sediment cover thickness (D) above bedrock was estimated by using a publicly available well database from Norway, GRANADA. General challenges associated with such databases typically involve clustering and bias. However, if information about the horizontal distance to the nearest bedrock outcrop (L) is included, does the spatial estimation of D improve? This idea was tested by comparing two cross-validation results: ordinary kriging (OK) where L was disregarded; and co-kriging (CK) where cross-covariance between D and L was included. The analysis showed only minor differences between OK and CK with respect to differences between estimation and true values. However, the CK results gave in general less estimation variance compared to the OK results. All observations were declustered and transformed to standard normal probability density functions before estimation and back-transformed for the cross-validation analysis. The semivariogram analysis gave correlation lengths for D and L of approx. 10 and 6 km. These correlations reduce the estimation variance in the cross-validation analysis because more than 50 % of the data material had two or more observations within a radius of 5 km. The small-scale variance of D, however, was about 50 % of the total variance, which gave an accuracy of less than 60 % for most of the cross-validation cases. Despite the noisy character of the observations, the analysis demonstrated that L can be used as secondary information to reduce the estimation variance of D.
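A minimal ordinary-kriging sketch of the kind cross-validated here, using an isotropic exponential covariance with a roughly 10 km range; the study's declustering, normal-score transform, and co-kriging with L are omitted, and all names and numbers are illustrative.

```python
import numpy as np

def exp_cov(h, sill=1.0, rng_km=10.0):
    """Exponential covariance consistent with a ~10 km correlation length."""
    return sill * np.exp(-h / rng_km)

def ordinary_krige(xy_obs, z_obs, xy_tgt):
    """Ordinary kriging at one target point: solve the kriging system with
    a Lagrange multiplier enforcing that the weights sum to one."""
    n = len(z_obs)
    d = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
    A = np.empty((n + 1, n + 1))
    A[:n, :n] = exp_cov(d)
    A[n, :n] = A[:n, n] = 1.0                  # unbiasedness constraint
    A[n, n] = 0.0
    b = np.empty(n + 1)
    b[:n] = exp_cov(np.linalg.norm(xy_obs - xy_tgt, axis=-1))
    b[n] = 1.0
    w = np.linalg.solve(A, b)
    est = w[:n] @ z_obs
    var = exp_cov(0.0) - w[:n] @ b[:n] - w[n]  # kriging (estimation) variance
    return est, var

# Toy use: estimate sediment thickness D at (5, 5) km from three wells.
xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
D = np.array([12.0, 4.0, 8.0])
print(ordinary_krige(xy, D, np.array([5.0, 5.0])))
```

The co-kriging variant adds the cross-covariance between D and L to the left-hand system, which is what reduces the estimation variance reported above.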
Garmann, D; McLeay, S; Shah, A; Vis, P; Maas Enriquez, M; Ploeger, B A
2017-07-01
The pharmacokinetics (PK), safety and efficacy of BAY 81-8973, a full-length, unmodified, recombinant human factor VIII (FVIII), were evaluated in the LEOPOLD trials. The aim of this study was to develop a population PK model based on pooled data from the LEOPOLD trials and to investigate the importance of including samples with FVIII levels below the limit of quantitation (BLQ) to estimate half-life. The analysis included 1535 PK observations (measured by the chromogenic assay) from 183 male patients with haemophilia A aged 1-61 years from the 3 LEOPOLD trials. The limit of quantitation was 1.5 IU/dL for the majority of samples. Population PK models that included or excluded BLQ samples were used for FVIII half-life estimations, and simulations were performed using both estimates to explore the influence on the time below a determined FVIII threshold. In the data set used, approximately 16.5% of samples were BLQ, which is not uncommon for FVIII PK data sets. The structural model to describe the PK of BAY 81-8973 was a two-compartment model similar to that seen for other FVIII products. If BLQ samples were excluded from the model, FVIII half-life estimations were longer compared with a model that included BLQ samples. It is essential to assess the importance of BLQ samples when performing population PK estimates of half-life for any FVIII product. Exclusion of BLQ data from half-life estimations based on population PK models may result in an overestimation of half-life and underestimation of time under a predetermined FVIII threshold, resulting in potential underdosing of patients. © 2017 Bayer AG. Haemophilia Published by John Wiley & Sons Ltd.
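For reference, the standard two-compartment disposition model referred to, parametrized by clearance CL, central and peripheral volumes V1 and V2, and inter-compartmental clearance Q, is (the trial's covariate model and BLQ likelihood handling are not shown):

```latex
\frac{dA_1}{dt} = -\left(\frac{CL}{V_1} + \frac{Q}{V_1}\right) A_1 + \frac{Q}{V_2} A_2, \qquad
\frac{dA_2}{dt} = \frac{Q}{V_1} A_1 - \frac{Q}{V_2} A_2, \qquad C = \frac{A_1}{V_1}
```

Including BLQ observations (e.g., via a censored-data likelihood) changes the estimated terminal slope, which is why half-life estimates shift when such samples are excluded.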
Schmitt, Neal; Golubovich, Juliya; Leong, Frederick T L
2011-12-01
The impact of measurement invariance and the provision for partial invariance in confirmatory factor analytic models on factor intercorrelations, latent mean differences, and estimates of relations with external variables is investigated for measures of two sets of widely assessed constructs: Big Five personality and the six Holland interests (RIASEC). In comparing models that include provisions for partial invariance with models that do not, the results indicate quite small differences in parameter estimates involving the relations between factors, one relatively large standardized mean difference in factors between the subgroups compared and relatively small differences in the regression coefficients when the factors are used to predict external variables. The results provide support for the use of partially invariant models, but there does not seem to be a great deal of difference between structural coefficients when the measurement model does or does not include separate estimates of subgroup parameters that differ across subgroups. Future research should include simulations in which the impact of various factors related to invariance is estimated.
The dependability of medical students' performance ratings as documented on in-training evaluations.
van Barneveld, Christina
2005-03-01
To demonstrate an approach to obtain an unbiased estimate of the dependability of students' performance ratings during training, when the data-collection design includes nesting of student in rater, unbalanced nest sizes, and dependent observations. In 2003, two variance components analyses of in-training evaluation (ITE) report data were conducted using urGENOVA software. In the first analysis, the dependability for the nested and unbalanced data-collection design was calculated. In the second analysis, an approach using multiple generalizability studies was used to obtain an unbiased estimate of the student variance component, resulting in an unbiased estimate of dependability. Results suggested that there is bias in estimates of the dependability of students' performance on ITEs that is attributable to the data-collection design. When the bias was corrected, the results indicated that the dependability of ratings of student performance was almost zero. The combination of the multiple generalizability studies method and the use of specialized software provides an unbiased estimate of the dependability of ratings of student performance on ITE scores for data-collection designs that include nesting of student in rater, unbalanced nest sizes, and dependent observations.
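In generalizability-theory terms, the dependability coefficient at issue has, for a simple crossed student-by-rater design, the generic form below; the study's nested, unbalanced design complicates the variance-component estimates that enter it:

```latex
\Phi = \frac{\sigma_p^2}{\sigma_p^2 + \dfrac{\sigma_r^2 + \sigma_{pr,e}^2}{n_r}}
```

where σp² is the student (person) variance component, σr² the rater component, σpr,e² the residual, and n_r the number of raters; the bias discussed above enters through the estimate of σp².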
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koehler, J.; Sylte, W.W.
1997-12-31
The deposition of atmospheric polyaromatic hydrocarbons (PAHs) into San Diego Bay was evaluated at an initial study level. This study was part of an overall initial estimate of PAH waste loading to San Diego Bay from all environmental pathways. The study of air pollutant deposition to water bodies has gained increased attention, both as a component of Total Maximum Daily Load (TMDL) determinations required under the Clean Water Act and pursuant to federal funding authorized by the 1990 Clean Air Act Amendments to study the atmospheric deposition of hazardous air pollutants to the Great Waters, which include coastal waters. To date, studies under the Clean Air Act have included the Great Lakes, Chesapeake Bay, Lake Champlain, and Delaware Bay. Given the limited resources of this initial study for San Diego Bay, the focus was on maximizing the use of existing data and information. The approach developed included the statistical evaluation of measured atmospheric PAH concentrations in the San Diego area, the extrapolation of EPA study results of atmospheric PAH concentrations above Lake Michigan to supplement the San Diego data, the estimation of dry and wet deposition with published calculation methods considering local wind and rainfall data, and the comparison of resulting PAH deposition estimates for San Diego Bay with estimated PAH emissions from ship and commercial boat activity in the San Diego area. The resulting PAH deposition and ship emission estimates were within the same order of magnitude. Since a significant contributor to the atmospheric deposition of PAHs to the Bay is expected to be from shipping traffic, this result provides a check on the order of magnitude of the PAH deposition estimate. Also, when compared against initial estimates of PAH loading to San Diego Bay from other environmental pathways, the atmospheric deposition pathway appears to be a significant contributor.
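The dry- and wet-deposition estimates referenced typically reduce to flux calculations of this generic form (the study's published methods may differ in detail):

```latex
F_{\mathrm{dry}} = v_d\, C_{\mathrm{air}}, \qquad F_{\mathrm{wet}} = P\, W\, C_{\mathrm{air}}
```

where v_d is a deposition velocity, C_air the atmospheric PAH concentration, P the precipitation rate, and W the dimensionless scavenging (washout) ratio; multiplying the summed flux by the Bay's surface area gives the loading estimate.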
NASA Astrophysics Data System (ADS)
Papanastasiou, Dimitrios K.; Beltrone, Allison; Marshall, Paul; Burkholder, James B.
2018-05-01
Hydrochlorofluorocarbons (HCFCs) are ozone depleting substances and potent greenhouse gases that are controlled under the Montreal Protocol. However, the majority of the 274 HCFCs included in Annex C of the protocol do not have reported global warming potentials (GWPs) which are used to guide the phaseout of HCFCs and the future phase down of hydrofluorocarbons (HFCs). In this study, GWPs for all C1-C3 HCFCs included in Annex C are reported based on estimated atmospheric lifetimes and theoretical methods used to calculate infrared absorption spectra. Atmospheric lifetimes were estimated from a structure activity relationship (SAR) for OH radical reactivity and estimated O(1D) reactivity and UV photolysis loss processes. The C1-C3 HCFCs display a wide range of lifetimes (0.3 to 62 years) and GWPs (5 to 5330, 100-year time horizon) dependent on their molecular structure and the H-atom content of the individual HCFC. The results from this study provide estimated policy-relevant GWP metrics for the HCFCs included in the Montreal Protocol in the absence of experimentally derived metrics.
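For context, the GWP over a time horizon H for a compound i with radiative efficiency RE_i (obtained here from the theoretically calculated infrared spectra) and atmospheric lifetime τ_i follows the standard formulation (unit conversions omitted):

```latex
\mathrm{GWP}_i(H) = \frac{\mathrm{AGWP}_i(H)}{\mathrm{AGWP}_{\mathrm{CO_2}}(H)}
= \frac{\mathrm{RE}_i\, \tau_i \left(1 - e^{-H/\tau_i}\right)}{\mathrm{AGWP}_{\mathrm{CO_2}}(H)}
```

which makes explicit why both the lifetime estimate (from the OH SAR, O(1D) reactivity, and photolysis losses) and the calculated spectrum enter each reported GWP.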
Page, Matthew J; McKenzie, Joanne E; Kirkham, Jamie; Dwan, Kerry; Kramer, Sharon; Green, Sally; Forbes, Andrew
2014-10-01
Systematic reviews may be compromised by selective inclusion and reporting of outcomes and analyses. Selective inclusion occurs when there are multiple effect estimates in a trial report that could be included in a particular meta-analysis (e.g. from multiple measurement scales and time points) and the choice of effect estimate to include in the meta-analysis is based on the results (e.g. statistical significance, magnitude or direction of effect). Selective reporting occurs when the reporting of a subset of outcomes and analyses in the systematic review is based on the results (e.g. a protocol-defined outcome is omitted from the published systematic review). To summarise the characteristics and synthesise the results of empirical studies that have investigated the prevalence of selective inclusion or reporting in systematic reviews of randomised controlled trials (RCTs), investigated the factors (e.g. statistical significance or direction of effect) associated with the prevalence and quantified the bias. We searched the Cochrane Methodology Register (to July 2012), Ovid MEDLINE, Ovid EMBASE, Ovid PsycINFO and ISI Web of Science (each up to May 2013), and the US Agency for Healthcare Research and Quality (AHRQ) Effective Healthcare Program's Scientific Resource Center (SRC) Methods Library (to June 2013). We also searched the abstract books of the 2011 and 2012 Cochrane Colloquia and the article alerts for methodological work in research synthesis published from 2009 to 2011 and compiled in Research Synthesis Methods. We included both published and unpublished empirical studies that investigated the prevalence and factors associated with selective inclusion or reporting, or both, in systematic reviews of RCTs of healthcare interventions. We included empirical studies assessing any type of selective inclusion or reporting, such as investigations of how frequently RCT outcome data are selectively included in systematic reviews based on the results, how often outcomes and analyses are discrepant between the protocol and the published review, or how often non-significant outcomes are only partially reported in the full text or summary of systematic reviews. Two review authors independently selected empirical studies for inclusion, extracted the data and performed a risk of bias assessment. A third review author resolved any disagreements about inclusion or exclusion of empirical studies, data extraction and risk of bias. We contacted authors of included studies for additional unpublished data. Primary outcomes included overall prevalence of selective inclusion or reporting, association between selective inclusion or reporting and the statistical significance of the effect estimate, and association between selective inclusion or reporting and the direction of the effect estimate. We combined prevalence estimates and risk ratios (RRs) using a random-effects meta-analysis model. Seven studies met the inclusion criteria. No studies had investigated selective inclusion of results in systematic reviews, or discrepancies in outcomes and analyses between systematic review registry entries and published systematic reviews. Based on a meta-analysis of four studies (including 485 Cochrane Reviews), 38% (95% confidence interval (CI) 23% to 54%) of systematic reviews added, omitted, upgraded or downgraded at least one outcome between the protocol and published systematic review. The association between statistical significance and discrepant outcome reporting between protocol and published systematic review was uncertain.
The meta-analytic estimate suggested an increased risk of adding or upgrading (i.e. changing a secondary outcome to primary) when the outcome was statistically significant, although the 95% CI included no association and a decreased risk as plausible estimates (RR 1.43, 95% CI 0.71 to 2.85; two studies, n = 552 meta-analyses). Also, the meta-analytic estimate suggested an increased risk of downgrading (i.e. changing a primary outcome to secondary) when the outcome was statistically significant, although the 95% CI included no association and a decreased risk as plausible estimates (RR 1.26, 95% CI 0.60 to 2.62; two studies, n = 484 meta-analyses). None of the included studies had investigated whether the association between statistical significance and adding, upgrading or downgrading of outcomes was modified by the type of comparison, direction of effect or type of outcome; or whether there is an association between direction of the effect estimate and discrepant outcome reporting. Several secondary outcomes were reported in the included studies. Two studies found that reasons for discrepant outcome reporting were infrequently reported in published systematic reviews (6% in one study and 22% in the other). One study (including 62 Cochrane Reviews) found that 32% (95% CI 21% to 45%) of systematic reviews did not report all primary outcomes in the abstract. Another study (including 64 Cochrane and 118 non-Cochrane reviews) found that statistically significant primary outcomes were more likely to be completely reported in the systematic review abstract than non-significant primary outcomes (RR 2.66, 95% CI 1.81 to 3.90). None of the studies included systematic reviews published after 2009, when reporting standards for systematic reviews (Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) Statement, and Methodological Expectations of Cochrane Intervention Reviews (MECIR)) were disseminated, so the results might not be generalisable to more recent systematic reviews. Discrepant outcome reporting between the protocol and published systematic review is fairly common, although the association between statistical significance and discrepant outcome reporting is uncertain. Complete reporting of outcomes in systematic review abstracts is associated with statistical significance of the results for those outcomes. Systematic review outcomes and analysis plans should be specified prior to seeing the results of included studies to minimise post-hoc decisions that may be based on the observed results. Modifications that occur once the review has commenced, along with their justification, should be clearly reported. Effect estimates and CIs should be reported for all systematic review outcomes regardless of the results. The lack of research on selective inclusion of results in systematic reviews needs to be addressed, and studies that avoid the methodological weaknesses of existing research are also needed.
Hammarstrom, Jane M.; Bookstrom, Arthur A.; Dicken, Connie L.; Drenth, Benjamin J.; Ludington, Steve; Robinson, Gilpin R.; Setiabudi, Bambang Tjahjono; Sukserm, Wudhikarn; Sunuhadi, Dwi Nugroho; Wah, Alexander Yan Sze; Zientek, Michael L.
2013-01-01
This assessment includes an overview of the assessment results with summary tables. Detailed descriptions of each tract are included in appendixes, with estimates of numbers of undiscovered deposits, and probabilistic estimates of amounts of copper, molybdenum, gold, and silver that could be contained in undiscovered deposits for each permissive tract. A geographic information system (GIS) that accompanies the report includes tract boundaries and a database of known porphyry copper deposits and significant prospects.
Ludington, Steve; Mihalasky, Mark J.; Hammarstrom, Jane M.; Robinson, Gilpin R.; Frost, Thomas P.; Gans, Kathleen D.; Light, Thomas D.; Miller, Robert J.; Alexeiev, Dmitriy V.
2012-01-01
This report includes an overview of the assessment results and summary tables. Descriptions of each tract are included in appendixes, with estimates of numbers of undiscovered deposits, and probabilistic estimates of amounts of copper, molybdenum, gold, and silver that could be contained in undiscovered deposits for each permissive tract. A geographic information system that accompanies the report includes tract boundaries and a database of known porphyry copper deposits and prospects.
NASA Technical Reports Server (NTRS)
Huffman, George J.; Adler, Robert F.; Rudolf, Bruno; Schneider, Udo; Keehn, Peter R.
1995-01-01
The 'satellite-gauge model' (SGM) technique is described for combining precipitation estimates from microwave satellite data, infrared satellite data, rain gauge analyses, and numerical weather prediction models into improved estimates of global precipitation. Throughout, monthly estimates on a 2.5 degrees x 2.5 degrees lat-long grid are employed. First, a multisatellite product is developed using a combination of low-orbit microwave and geosynchronous-orbit infrared data in the latitude range 40 degrees N - 40 degrees S (the adjusted geosynchronous precipitation index) and low-orbit microwave data alone at higher latitudes. Then the rain gauge analysis is brought in, weighting each field by its inverse relative error variance to produce a nearly global, observationally based precipitation estimate. To produce a complete global estimate, the numerical model results are used to fill data voids in the combined satellite-gauge estimate. Our sequential approach to combining estimates allows a user to select the multisatellite estimate, the satellite-gauge estimate, or the full SGM estimate (observationally based estimates plus the model information). The primary limitation in the method is imperfections in the estimation of relative error for the individual fields. The SGM results for one year of data (July 1987 to June 1988) show important differences from the individual estimates, including model estimates as well as climatological estimates. In general, the SGM results are drier in the subtropics than the model and climatological results, reflecting the relatively dry microwave estimates that dominate the SGM in oceanic regions.
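The inverse-error-variance weighting at the heart of the SGM combination step is straightforward to illustrate. Below is a minimal sketch, not the authors' code; the grid shape, field values and error variances are all hypothetical:

```python
import numpy as np

def combine_fields(estimates, error_variances):
    """Inverse-error-variance weighted combination of gridded fields:
    each field is weighted by 1/variance, so low-error inputs dominate."""
    weights = [1.0 / v for v in error_variances]
    weighted_sum = sum(w * e for w, e in zip(weights, estimates))
    return weighted_sum / sum(weights)

# Hypothetical 2.5-degree monthly fields (mm/day) on a 72 x 144 grid
rng = np.random.default_rng(0)
multisatellite = rng.gamma(2.0, 1.5, size=(72, 144))
gauge_analysis = rng.gamma(2.0, 1.5, size=(72, 144))
combined = combine_fields(
    [multisatellite, gauge_analysis],
    [np.full((72, 144), 1.00),    # assumed satellite error variance
     np.full((72, 144), 0.25)])   # assumed gauge error variance
```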
Ares I-X Best Estimated Trajectory and Comparison with Pre-Flight Predictions
NASA Technical Reports Server (NTRS)
Karlgaard, Christopher D.; Beck, Roger E.; Derry, Stephen D.; Brandon, Jay M.; Starr, Brett R.; Tartabini, Paul V.; Olds, Aaron D.
2011-01-01
The Ares I-X trajectory reconstruction produced best estimated trajectories of the flight test vehicle ascent through stage separation, and of the first and upper stage entries after separation. The trajectory reconstruction process combines on-board, ground-based, and atmospheric measurements to produce the trajectory estimates. The Ares I-X vehicle had a number of on-board and ground based sensors that were available, including inertial measurement units, radar, air data, and weather balloons. However, due to problems with calibrations and/or data, not all of the sensor data were used. The trajectory estimate was generated using an Iterative Extended Kalman Filter algorithm, which is an industry standard processing algorithm for filtering and estimation applications. This paper describes the methodology and results of the trajectory reconstruction process, including flight data preprocessing and input uncertainties, trajectory estimation algorithms, output transformations, and comparisons with preflight predictions.
Three-dimensional FLASH Laser Radar Range Estimation via Blind Deconvolution
2009-10-01
Range estimation of a three-dimensional FLASH laser radar scene can result in errors due to several factors, including the optical spatial impulse response (spatial blur), detector blurring, photon noise, timing jitter, and inter-sample targets. Unlike previous research, this paper accounts for pixel coupling by defining the range image mathematical model as a 2D convolution between the system spatial impulse response and the object (target).
Nonmarket economic user values of the Florida Keys/Key West
Vernon R. Leeworthy; J. Michael Bowker
1997-01-01
This report provides estimates of the nonmarket economic user values for recreating visitors to the Florida Keys/Key West that participated in natural resource-based activities. Results from estimated travel cost models are presented, including visitors' responses to prices and estimated per person-trip user values. Annual user values are also calculated and presented...
Cost estimators for construction of forest roads in the central Appalachians
Deborah A. Layton; Chris O. LeDoux; Curt C. Hassler
1992-01-01
Regression equations were developed for estimating the total cost of road construction in the central Appalachian region. Estimators include methods for predicting total costs for roads constructed using hourly rental methods and roads built on a total-job bid basis. Results show that total-job bid roads cost up to five times as much as roads built when equipment...
Chandon, Pierre; Ordabayeva, Nailya
2017-02-01
Five studies show that people, including experts such as professional chefs, estimate quantity decreases more accurately than quantity increases. We argue that this asymmetry occurs because physical quantities cannot be negative. Consequently, there is a natural lower bound (zero) when estimating decreasing quantities but no upper bound when estimating increasing quantities, which can theoretically grow to infinity. As a result, the "accuracy of less" disappears (a) when a numerical or a natural upper bound is present when estimating quantity increases, or (b) when people are asked to estimate the (unbounded) ratio of change from 1 size to another for both increasing and decreasing quantities. Ruling out explanations related to loss aversion, symbolic number mapping, and the visual arrangement of the stimuli, we show that the "accuracy of less" influences choice and demonstrate its robustness in a meta-analysis that includes previously published results. Finally, we discuss how the "accuracy of less" may explain asymmetric reactions to the supersizing and downsizing of food portions, some instances of the endowment effect, and asymmetries in the perception of increases and decreases in physical and psychological distance. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Optimal Doppler centroid estimation for SAR data from a quasi-homogeneous source
NASA Technical Reports Server (NTRS)
Jin, M. Y.
1986-01-01
This correspondence briefly describes two Doppler centroid estimation (DCE) algorithms, provides a performance summary for these algorithms, and presents the experimental results. These algorithms include that of Li et al. (1985) and a newly developed one that is optimized for quasi-homogeneous sources. The performance enhancement achieved by the optimal DCE algorithm is clearly demonstrated by the experimental results.
Padoan, Andrea; Antonelli, Giorgia; Aita, Ada; Sciacovelli, Laura; Plebani, Mario
2017-10-26
The present study was prompted by the ISO 15189 requirements that medical laboratories should estimate measurement uncertainty (MU). The method used to estimate MU included: (a) identification of quantitative tests, (b) classification of tests in relation to their clinical purpose, and (c) identification of criteria to estimate the different MU components. Imprecision was estimated using long-term internal quality control (IQC) results of the year 2016, while results from external quality assessment schemes (EQAs) obtained in the period 2015-2016 were used to estimate bias and bias uncertainty. A total of 263 measurement procedures (MPs) were analyzed. On the basis of test purpose, in 51 MPs imprecision only was used to estimate MU; among the remaining MPs, the bias component was not estimable for 22 MPs because EQAs results did not provide reliable statistics. For a total of 28 MPs, two or more MU values were calculated on the basis of analyte concentration levels. Overall, results showed that uncertainty of bias is a minor factor contributing to MU, the bias component being the most relevant contributor for all the studied sample matrices. The model chosen for MU estimation allowed us to derive a standardized approach for bias calculation, with respect to the fitness-for-purpose of test results. Measurement uncertainty estimation could readily be implemented in medical laboratories as a useful tool for monitoring the analytical quality of test results, since the estimates combine the long-term imprecision IQC results with bias derived from EQAs results.
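A common way to operationalize this combination (e.g. the Nordtest approach) is a root-sum-of-squares of the long-term imprecision and a bias component pooled from EQA results. The sketch below is an illustration under that assumption, not the authors' code, and all numbers are hypothetical:

```python
import math

def expanded_uncertainty(sd_iqc, eqa_biases, u_assigned, k=2.0):
    """Combine long-term IQC imprecision with a bias component from EQA
    results: u = sqrt(u_imprecision^2 + u_bias^2), where u_bias pools the
    RMS of observed biases with the uncertainty of the EQA assigned values
    (a Nordtest-style formulation, assumed here for illustration)."""
    rms_bias = math.sqrt(sum(b * b for b in eqa_biases) / len(eqa_biases))
    u_bias = math.sqrt(rms_bias ** 2 + u_assigned ** 2)
    return k * math.sqrt(sd_iqc ** 2 + u_bias ** 2)  # expanded, coverage k

# Hypothetical analyte: IQC SD 0.08 mmol/L, three EQA biases, assigned-value u
print(expanded_uncertainty(0.08, [0.05, -0.03, 0.06], 0.04))
```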
Bundschuh, Mirco; Newman, Michael C; Zubrod, Jochen P; Seitz, Frank; Rosenfeldt, Ricki R; Schulz, Ralf
2015-03-01
We argued recently that the positive predictive value (PPV) and the negative predictive value (NPV) are valuable metrics to include during null hypothesis significance testing: They inform the researcher about the probability of statistically significant and non-significant test outcomes actually being true. Although commonly misunderstood, a reported p value estimates only the probability of obtaining the results or more extreme results if the null hypothesis of no effect was true. Calculations of the more informative PPV and NPV require an a priori estimate of the probability (R). The present document discusses challenges of estimating R.
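Given an a priori probability that a real effect exists, both predictive values follow directly from the test's error rates via Bayes' rule. A minimal sketch (standard identities; the numbers are hypothetical):

```python
def ppv_npv(alpha, power, prior):
    """PPV: probability a statistically significant result reflects a true
    effect; NPV: probability a non-significant result reflects a true null.
    `prior` is the a priori probability that a real effect exists."""
    ppv = power * prior / (power * prior + alpha * (1.0 - prior))
    npv = ((1.0 - alpha) * (1.0 - prior) /
           ((1.0 - alpha) * (1.0 - prior) + (1.0 - power) * prior))
    return ppv, npv

# Hypothetical: alpha = 0.05, power = 0.8, one-in-four chance of a real effect
print(ppv_npv(0.05, 0.80, 0.25))  # -> (~0.84, ~0.93)
```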
A tool for the estimation of the distribution of landslide area in R
NASA Astrophysics Data System (ADS)
Rossi, M.; Cardinali, M.; Fiorucci, F.; Marchesini, I.; Mondini, A. C.; Santangelo, M.; Ghosh, S.; Riguer, D. E. L.; Lahousse, T.; Chang, K. T.; Guzzetti, F.
2012-04-01
We have developed a tool in R (the free software environment for statistical computing, http://www.r-project.org/) to estimate the probability density and the frequency density of landslide area. The tool implements parametric and non-parametric approaches to the estimation of the probability density and the frequency density of landslide area, including: (i) Histogram Density Estimation (HDE), (ii) Kernel Density Estimation (KDE), and (iii) Maximum Likelihood Estimation (MLE). The tool is available as a standard Open Geospatial Consortium (OGC) Web Processing Service (WPS), and is accessible through the web using different GIS software clients. We tested the tool to compare Double Pareto and Inverse Gamma models for the probability density of landslide area in different geological, morphological and climatological settings, and to compare landslides shown in inventory maps prepared using different mapping techniques, including (i) field mapping, (ii) visual interpretation of monoscopic and stereoscopic aerial photographs, (iii) visual interpretation of monoscopic and stereoscopic VHR satellite images and (iv) semi-automatic detection and mapping from VHR satellite images. Results show that both models are applicable in different geomorphological settings. In most cases the two models provided very similar results. Non-parametric estimation methods (i.e., HDE and KDE) provided reasonable results for all the tested landslide datasets. For some of the datasets, MLE failed to provide a result owing to convergence problems. The two tested models (Double Pareto and Inverse Gamma) produced very similar results for large and very large datasets (> 150 samples). Differences in the modeling results were observed for small datasets affected by systematic biases. A distinct rollover was observed in all analyzed landslide datasets, except for a few datasets obtained from landslide inventories prepared through field mapping or by semi-automatic mapping from VHR satellite imagery. The tool can also be used to evaluate the probability density and the frequency density of landslide volume.
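Two of the three estimators are available off the shelf in scipy; the sketch below illustrates a KDE and an Inverse Gamma maximum-likelihood fit on synthetic areas. It is an illustration only, not the published tool: the Double Pareto model has no standard scipy implementation, and the data here are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical landslide areas (m^2); real inventories are heavy-tailed
areas = stats.invgamma.rvs(a=1.4, scale=1e3, size=500, random_state=1)

# Non-parametric estimate: Gaussian KDE of log10(area)
kde = stats.gaussian_kde(np.log10(areas))

# Parametric estimate: maximum-likelihood fit of the Inverse Gamma model
a_hat, loc_hat, scale_hat = stats.invgamma.fit(areas, floc=0)
grid = np.logspace(1, 6, 200)
pdf_mle = stats.invgamma.pdf(grid, a_hat, loc=loc_hat, scale=scale_hat)
```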
Probability based remaining capacity estimation using data-driven and neural network model
NASA Astrophysics Data System (ADS)
Wang, Yujie; Yang, Duo; Zhang, Xu; Chen, Zonghai
2016-05-01
Because lithium-ion batteries are assembled into packs in large numbers and are complex electrochemical devices, their monitoring and safety are key issues for applications of battery technology. An accurate estimation of battery remaining capacity is crucial for optimizing vehicle control, preventing the battery from over-charging and over-discharging, and ensuring safety during its service life. The remaining capacity estimation of a battery includes the estimation of state-of-charge (SOC) and state-of-energy (SOE). In this work, a probability based adaptive estimator is presented to obtain accurate and reliable estimation results for both SOC and SOE. For the SOC estimation, an nth-order RC equivalent circuit model is combined with an electrochemical model to obtain more accurate voltage prediction results. For the SOE estimation, a sliding window neural network model is proposed to investigate the relationship between the terminal voltage and the model inputs. To verify the accuracy and robustness of the proposed model and estimation algorithm, experiments under different dynamic operation current profiles are performed on commercial 1665130-type lithium-ion batteries. The results illustrate that accurate and robust estimation can be obtained by the proposed method.
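The paper's adaptive estimator and neural network are beyond a short sketch, but two of the building blocks are easy to show: coulomb counting for SOC and a first-order RC equivalent circuit for terminal voltage. All cell parameters and the OCV curve below are hypothetical:

```python
import math

def simulate_cell(currents, dt, capacity_ah, soc0, r0, r1, c1, ocv):
    """Coulomb-counting SOC plus the terminal voltage of a first-order RC
    equivalent circuit (discharge current positive)."""
    soc, u1, tau = soc0, 0.0, r1 * c1
    trace = []
    for i in currents:
        soc -= i * dt / (capacity_ah * 3600.0)          # coulomb counting
        u1 = u1 * math.exp(-dt / tau) + r1 * (1.0 - math.exp(-dt / tau)) * i
        trace.append((soc, ocv(soc) - r0 * i - u1))     # (SOC, voltage)
    return trace

# Hypothetical 2.5 Ah cell with a linear OCV curve, 1 A discharge for 60 s
trace = simulate_cell([1.0] * 60, dt=1.0, capacity_ah=2.5, soc0=0.9,
                      r0=0.05, r1=0.02, c1=2000.0,
                      ocv=lambda s: 3.0 + 1.2 * s)
```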
Inertial and time-of-arrival ranging sensor fusion.
Vasilyev, Paul; Pearson, Sean; El-Gohary, Mahmoud; Aboy, Mateo; McNames, James
2017-05-01
Wearable devices with embedded kinematic sensors including triaxial accelerometers, gyroscopes, and magnetometers are becoming widely used in applications for tracking human movement in domains that include sports, motion gaming, medicine, and wellness. The kinematic sensors can be used to estimate orientation, but can only estimate changes in position over short periods of time. We developed a prototype sensor that includes ultra wideband ranging sensors and kinematic sensors to determine the feasibility of fusing the two sensor technologies to estimate both orientation and position. We used a state space model and applied the unscented Kalman filter to fuse the sensor information. Our results demonstrate that it is possible to estimate orientation and position with less error than is possible with either sensor technology alone. In our experiment we obtained a position root mean square error of 5.2 cm and orientation error of 4.8° over a 15 min recording. Copyright © 2017 Elsevier B.V. All rights reserved.
Baker, David R; Barron, Leon; Kasprzyk-Hordern, Barbara
2014-07-15
This paper presents, for the first time, community-wide estimation of drug and pharmaceutical consumption in England using wastewater analysis and a large number of compounds. Among the groups of compounds studied were: stimulants, hallucinogens and their metabolites, opioids, morphine derivatives, benzodiazepines, antidepressants and others. The results showed the usefulness of wastewater analysis for providing estimates of local community drug consumption. Where target compounds could be compared with NHS prescription statistics, good agreement was apparent between the two sets of data. These compounds include oxycodone, dihydrocodeine, methadone, tramadol, temazepam and diazepam. In contrast, discrepancies were observed for propoxyphene, codeine, dosulepin and venlafaxine (over-estimations in each case except codeine). Potential reasons for discrepancies include: sales of drugs sold without prescription and not included within NHS data, abuse of a drug trafficked through illegal sources, different consumption patterns in different areas, direct disposal leading to over-estimations when using the parent compound as the drug target residue, and excretion factors not being representative of the local community. Notably, using a metabolite (rather than the parent drug) as a biomarker leads to higher certainty in the obtained estimates. With regard to illicit drugs, consistent and logical results were reported. Monitoring of these compounds over a one-week period highlighted the expected recreational use of many of these drugs (e.g. cocaine and MDMA) and the more consistent use of others (e.g. methadone). Copyright © 2014 Elsevier B.V. All rights reserved.
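The back-calculation underlying such estimates is a short chain of unit conversions: measured concentration times daily flow gives a load, which is scaled by the excretion fraction (and, for metabolite targets, a molecular-weight ratio) and normalized by population. A sketch under those standard assumptions, with hypothetical numbers:

```python
def consumption_mg_per_day_per_1000(conc_ng_l, flow_m3_d, population,
                                    excretion_pct, mw_ratio=1.0):
    """Back-calculate community consumption from a wastewater sample.
    mw_ratio converts a metabolite load back to the parent drug
    (MW_parent / MW_metabolite); use 1.0 when targeting the parent."""
    load_mg_d = conc_ng_l * flow_m3_d * 1000.0 / 1e6   # ng/L x L/d -> mg/d
    consumed_mg_d = load_mg_d * (100.0 / excretion_pct) * mw_ratio
    return consumed_mg_d / (population / 1000.0)

# Hypothetical: a cocaine metabolite at 800 ng/L, 50,000 m3/d of wastewater,
# 150,000 inhabitants, 29% excretion, MW ratio ~1.05
print(consumption_mg_per_day_per_1000(800, 50000, 150000, 29, 1.05))
```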
AMT-200S Motor Glider Parameter and Performance Estimation
NASA Technical Reports Server (NTRS)
Taylor, Brian R.
2011-01-01
Parameter and performance estimation of an instrumented motor glider was conducted at the National Aeronautics and Space Administration Dryden Flight Research Center in order to provide the necessary information to create a simulation of the aircraft. An output-error technique was employed to generate estimates from doublet maneuvers, and performance estimates were compared with results from a well-known flight-test evaluation of the aircraft in order to provide a complete set of data. Aircraft specifications are given along with information concerning instrumentation, flight-test maneuvers flown, and the output-error technique. Discussion of Cramer-Rao bounds based on both white noise and colored noise assumptions is given. Results include aerodynamic parameter and performance estimates for a range of angles of attack.
Experimental Bayesian Quantum Phase Estimation on a Silicon Photonic Chip.
Paesani, S; Gentile, A A; Santagati, R; Wang, J; Wiebe, N; Tew, D P; O'Brien, J L; Thompson, M G
2017-03-10
Quantum phase estimation is a fundamental subroutine in many quantum algorithms, including Shor's factorization algorithm and quantum simulation. However, results so far have cast doubt on its practicability for near-term, non-fault-tolerant quantum devices. Here we report experimental results demonstrating that this intuition need not be true. We implement a recently proposed adaptive Bayesian approach to quantum phase estimation and use it to simulate molecular energies on a silicon quantum photonic device. The approach is verified to be well suited for prethreshold quantum processors by investigating its superior robustness to noise and decoherence compared to the iterative phase estimation algorithm. This shows a promising route to unlock the power of quantum phase estimation much sooner than previously believed.
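Adaptive Bayesian phase estimation maintains a posterior over the unknown phase and updates it after each experiment. The grid-based sketch below uses a commonly assumed likelihood, P(0 | φ; M, θ) = (1 + cos(Mφ + θ))/2, and hypothetical experiment settings; it illustrates the idea, not the authors' implementation:

```python
import numpy as np

def bayes_update(prior, phi, outcome, m, theta):
    """One Bayesian update on a phase grid for an experiment with m
    repetitions and controlled phase theta."""
    p0 = (1.0 + np.cos(m * phi + theta)) / 2.0
    likelihood = p0 if outcome == 0 else 1.0 - p0
    posterior = prior * likelihood
    return posterior / np.trapz(posterior, phi)

phi = np.linspace(-np.pi, np.pi, 2001)
posterior = np.full_like(phi, 1.0 / (2.0 * np.pi))   # flat prior
true_phi, rng = 0.7, np.random.default_rng(0)
for m, theta in [(1, 0.0), (2, 0.3), (4, 1.1), (8, 0.6)]:
    p0 = (1.0 + np.cos(m * true_phi + theta)) / 2.0
    outcome = int(rng.random() > p0)                  # simulated measurement
    posterior = bayes_update(posterior, phi, outcome, m, theta)
phi_map = phi[np.argmax(posterior)]                   # MAP phase estimate
```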
Economic Consequences and Potentially Preventable Costs Related to Osteoporosis in the Netherlands.
Dunnewind, Tom; Dvortsin, Evgeni P; Smeets, Hugo M; Konijn, Rob M; Bos, Jens H J; de Boer, Pieter T; van den Bergh, Joop P; Postma, Maarten J
2017-06-01
Osteoporosis often does not involve symptoms, and so the actual number of patients with osteoporosis is higher than the number of diagnosed individuals. This underdiagnosis results in a treatment gap. To estimate the total health care resource use and costs related to osteoporosis in the Netherlands, explicitly including fractures, and to estimate the proportion of fracture costs that are linked to the treatment gap and might therefore be potentially preventable; and, on the basis of these findings, to formulate strategies to optimize osteoporosis care and treatment and reduce its related costs. In this retrospective study, data of the Achmea Health Database representing 4.2 million Dutch inhabitants were used to investigate the economic consequences of osteoporosis in the Netherlands in 2010. Specific cohorts were created to identify osteoporosis-related fractures and their costs. In addition, costs of pharmaceutical treatment for osteoporosis were included. Using data from the literature, the treatment gap was estimated. Sensitivity analysis was performed on the base-case results. A total of 108,013 individuals with a history of fractures were included in this study. In this population, 59,193 patients were using anti-osteoporotic medication and 86,776 patients were using preventive supplements. A total of 3,039 osteoporosis-related fractures occurred. The estimated total costs were €465 million. On the basis of data presented in the literature, the treatment gap in our study population was estimated to vary from 60% to 72%. The estimated total costs corrected for treatment gap were €1.15 to €1.64 billion. These results indicate room for improvement in the health care policy against osteoporosis. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
PERIODIC AUTOREGRESSIVE-MOVING AVERAGE (PARMA) MODELING WITH APPLICATIONS TO WATER RESOURCES.
Vecchia, A.V.
1985-01-01
Results involving correlation properties and parameter estimation for autoregressive-moving average models with periodic parameters are presented. A multivariate representation of the PARMA model is used to derive parameter space restrictions and difference equations for the periodic autocorrelations. Close approximation to the likelihood function for Gaussian PARMA processes results in efficient maximum-likelihood estimation procedures. Terms in the Fourier expansion of the parameters are sequentially included, and a selection criterion is given for determining the optimal number of harmonics to be included. Application of the techniques is demonstrated through analysis of a monthly streamflow time series.
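As a bare-bones illustration of periodic parameters, the sketch below fits a periodic AR(1), a special case of the PARMA family, by season-wise least squares on synthetic monthly data. The series is hypothetical; the paper's likelihood-based procedure and Fourier parameterization are more elaborate.

```python
import numpy as np

def fit_periodic_ar1(x, period=12):
    """Season-wise least-squares fit of x_t = phi_s * x_{t-1} + e_t,
    one coefficient per season s (a minimal periodic AR model)."""
    t = np.arange(len(x))
    phi = np.zeros(period)
    for s in range(period):
        idx = np.where((t % period == s) & (t > 0))[0]
        prev, curr = x[idx - 1], x[idx]
        phi[s] = prev.dot(curr) / prev.dot(prev)
    return phi

# Hypothetical monthly anomaly series with seasonally varying persistence
rng = np.random.default_rng(2)
x = rng.normal(size=600)
for t in range(1, 600):
    x[t] += 0.5 * np.cos(2 * np.pi * (t % 12) / 12.0) * x[t - 1]
print(fit_periodic_ar1(x))
```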
Estimating Power System Dynamic States Using Extended Kalman Filter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Zhenyu; Schneider, Kevin P.; Nieplocha, Jaroslaw
2014-10-31
The state estimation tools which are currently deployed in power system control rooms are based on a steady-state assumption. As a result, the suite of operational tools that rely on state estimation results as inputs do not have dynamic information available and their accuracy is compromised. This paper investigates the application of Extended Kalman Filtering techniques for estimating dynamic states in the state estimation process. The newly formulated "dynamic state estimation" includes true system dynamics reflected in differential equations, unlike previously proposed "dynamic state estimation," which considers only time-variant snapshots based on steady-state modeling. This new dynamic state estimation using the Extended Kalman Filter has been successfully tested on a multi-machine system. Sensitivity studies with respect to noise levels, sampling rates, model errors, and parameter errors are presented as well to illustrate the robust performance of the developed dynamic state estimation process.
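The EKF predict/update cycle itself is standard and compact. The sketch below is a generic textbook formulation with a toy two-state system standing in for rotor dynamics; it is not the paper's formulation, and all values are hypothetical:

```python
import numpy as np

def ekf_step(x, P, z, f, h, F, H, Q, R):
    """One predict/update cycle of an extended Kalman filter: f and h are
    the nonlinear dynamics and measurement maps; F and H return their
    Jacobians; Q and R are process and measurement noise covariances."""
    x_pred = f(x)                                  # propagate the dynamics
    Fk = F(x)
    P_pred = Fk @ P @ Fk.T + Q
    Hk = H(x_pred)
    S = Hk @ P_pred @ Hk.T + R                     # innovation covariance
    K = P_pred @ Hk.T @ np.linalg.inv(S)           # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ Hk) @ P_pred
    return x_new, P_new

# Toy stand-in for swing-equation-like dynamics: rotor angle and speed,
# with an angle measurement (all values hypothetical)
dt = 0.01
f = lambda x: np.array([x[0] + dt * x[1], x[1] - dt * 0.5 * np.sin(x[0])])
F = lambda x: np.array([[1.0, dt], [-dt * 0.5 * np.cos(x[0]), 1.0]])
h = lambda x: np.array([x[0]])
H = lambda x: np.array([[1.0, 0.0]])
x, P = np.array([0.1, 0.0]), np.eye(2)
x, P = ekf_step(x, P, np.array([0.12]), f, h, F, H,
                1e-5 * np.eye(2), np.array([[1e-3]]))
```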
Scarborough, Peter; Harrington, Richard A.; Mizdrak, Anja; Zhou, Lijuan Marissa; Doherty, Aiden
2014-01-01
Noncommunicable disease (NCD) scenario models are an essential part of the public health toolkit, allowing for an estimate of the health impact of population-level interventions that are not amenable to assessment by standard epidemiological study designs (e.g., health-related food taxes and physical infrastructure projects) and extrapolating results from small samples to the whole population. The PRIME (Preventable Risk Integrated ModEl) is an openly available NCD scenario model that estimates the effect of population-level changes in diet, physical activity, and alcohol and tobacco consumption on NCD mortality. The structure and methods employed in the PRIME are described here in detail, including the development of open source code that will support a PRIME web application to be launched in 2015. This paper reviews scenario results from eleven papers that have used the PRIME, including estimates of the impact of achieving government recommendations for healthy diets, health-related food taxes and subsidies, and low-carbon diets. Future challenges for NCD scenario modelling, including the need for more comparisons between models and the improvement of future prediction of NCD rates, are also discussed. PMID:25328757
Log Pearson type 3 quantile estimators with regional skew information and low outlier adjustments
Griffis, V.W.; Stedinger, Jery R.; Cohn, T.A.
2004-01-01
The recently developed expected moments algorithm (EMA) [Cohn et al., 1997] does as well as maximum likelihood estimators at estimating log‐Pearson type 3 (LP3) flood quantiles using systematic and historical flood information. Needed extensions include use of a regional skewness estimator and its precision to be consistent with Bulletin 17B. Another issue addressed by Bulletin 17B is the treatment of low outliers. A Monte Carlo study compares the performance of Bulletin 17B using the entire sample with and without regional skew with estimators that use regional skew and censor low outliers, including an extended EMA estimator, the conditional probability adjustment (CPA) from Bulletin 17B, and an estimator that uses probability plot regression (PPR) to compute substitute values for low outliers. Estimators that neglect regional skew information do much worse than estimators that use an informative regional skewness estimator. For LP3 data the low outlier rejection procedure generally results in no loss of overall accuracy, and the differences between the MSEs of the estimators that used an informative regional skew are generally modest in the skewness range of real interest. Samples contaminated to model actual flood data demonstrate that estimators which give special treatment to low outliers significantly outperform estimators that make no such adjustment.
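The basic building blocks, an LP3 quantile from the moments of the log flows and a Bulletin 17B-style weighting of station and regional skew by their mean square errors, can be sketched briefly. This is an illustration on synthetic data; EMA itself and the low-outlier procedures compared in the study are beyond this sketch.

```python
import numpy as np
from scipy import stats

def lp3_quantile(peaks, return_period, g_regional=None,
                 mse_station=None, mse_regional=None):
    """LP3 flood quantile from the moments of log10 flows; if a regional
    skew is supplied, station and regional skews are weighted inversely
    to their mean square errors, as in Bulletin 17B."""
    y = np.log10(peaks)
    mean, sd = y.mean(), y.std(ddof=1)
    g = stats.skew(y, bias=False)
    if g_regional is not None:
        g = (mse_regional * g + mse_station * g_regional) / \
            (mse_regional + mse_station)
    k = stats.pearson3.ppf(1.0 - 1.0 / return_period, g)  # frequency factor
    return 10.0 ** (mean + k * sd)

# Hypothetical 40-year annual peak series (cfs), 100-year quantile
rng = np.random.default_rng(3)
peaks = 10.0 ** rng.normal(3.0, 0.3, size=40)
print(lp3_quantile(peaks, 100, g_regional=-0.1,
                   mse_station=0.30, mse_regional=0.12))
```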
48 CFR 1816.405-274 - Award fee evaluation factors.
Code of Federal Regulations, 2011 CFR
2011-10-01
... omission by the contractor that results in compromise of classified information, illegal technology... information technology services, equipment or property damage from vandalism greater than $250,000, or theft... negotiated estimated cost of the contract. This estimated cost may include the value of undefinitized change...
48 CFR 1816.405-274 - Award fee evaluation factors.
Code of Federal Regulations, 2012 CFR
2012-10-01
... omission by the contractor that results in compromise of classified information, illegal technology... information technology services, equipment or property damage from vandalism greater than $250,000, or theft... negotiated estimated cost of the contract. This estimated cost may include the value of undefinitized change...
48 CFR 1816.405-274 - Award fee evaluation factors.
Code of Federal Regulations, 2014 CFR
2014-10-01
... omission by the contractor that results in compromise of classified information, illegal technology... information technology services, equipment or property damage from vandalism greater than $250,000, or theft... negotiated estimated cost of the contract. This estimated cost may include the value of undefinitized change...
Burden of disease and costs of aneurysmal subarachnoid haemorrhage (aSAH) in the United Kingdom
2010-01-01
Background To estimate life years and quality-adjusted life years (QALYs) lost and the economic burden of aneurysmal subarachnoid haemorrhage (aSAH) in the United Kingdom including healthcare and non-healthcare costs from a societal perspective. Methods All UK residents in 2005 with aSAH (International Classification of Diseases 10th revision (ICD-10) code I60). Sex and age-specific abridged life tables were generated for a general population and aSAH cohorts. QALYs in each cohort were calculated adjusting the life tables with health-related quality of life (HRQL) data. Healthcare costs included hospital expenditure, cerebrovascular rehabilitation, primary care and community health and social services. Non-healthcare costs included informal care and productivity losses arising from morbidity and premature death. Results A total of 80,356 life years and 74,807 quality-adjusted life years were estimated to be lost due to aSAH in the UK in 2005. aSAH costs the National Health Service (NHS) £168.2 million annually with hospital inpatient admissions accounting for 59%, community health and social services for 18%, aSAH-related operations for 15% and cerebrovascular rehabilitation for 6% of the total NHS estimated costs. The average per patient cost for the NHS was estimated to be £23,294. The total economic burden (including informal care and using the human capital method to estimate production losses) of aSAH in the United Kingdom was estimated to be £510 million annually. Conclusion The economic and disease burden of aSAH in the United Kingdom is reported in this study. Decision-makers can use these results to complement other information when informing prevention policies in this field and to relate health care expenditures to disease categories. PMID:20423472
Method for Estimating Water Withdrawals for Livestock in the United States, 2005
Lovelace, John K.
2009-01-01
Livestock water use includes ground water and surface water associated with livestock watering, feedlots, dairy operations, and other on-farm needs. The water may be used for drinking, cooling, sanitation, waste disposal, and other needs related to the animals. Estimates of water withdrawals for livestock are needed for water planning and management. This report documents a method used to estimate withdrawals of fresh ground water and surface water for livestock in 2005 for each county and county equivalent in the United States, Puerto Rico, and the U.S. Virgin Islands. Categories of livestock included dairy cattle, beef and other cattle, hogs and pigs, laying hens, broilers and other chickens, turkeys, sheep and lambs, all goats, and horses (including ponies, mules, burros, and donkeys). Use of the method described in this report could result in more consistent water-withdrawal estimates for livestock that can be used by water managers and planners to determine water needs and trends across the United States. Water withdrawals for livestock in 2005 were estimated by using water-use coefficients, in gallons per head per day for each animal type, and livestock-population data. Coefficients for various livestock for most States were obtained from U.S. Geological Survey water-use program personnel or U.S. Geological Survey water-use publications. When no coefficient was available for an animal type in a State, the median value of reported coefficients for that animal was used. Livestock-population data were provided by the National Agricultural Statistics Service. County estimates were further divided into ground-water and surface-water withdrawals for each county and county equivalent. County totals from 2005 were compared to county totals from 1995 and 2000. Large deviations from 1995 or 2000 livestock withdrawal estimates were investigated and generally were due to comparison with reported withdrawals, differences in estimation techniques, differences in livestock coefficients, or use of livestock-population data from different sources. The results of this study were distributed to U.S. Geological Survey water-use program personnel in each State during 2007. Water-use program personnel are required to submit estimated withdrawals for all categories of use in their States to the National Water-Use Information Program for inclusion in a national report describing water use in the United States during 2005. Water-use program personnel had the option of submitting these estimates, a modified version of these estimates, or their own set of estimates or reported data. Estimated withdrawals resulting from the method described in this report are not presented herein to avoid potential inconsistencies with estimated withdrawals for livestock that will be presented in the national report, as different methods used by water-use personnel may result in different withdrawal estimates. Estimated withdrawals also are not presented to avoid potential disclosure of data for individual livestock operations.
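The coefficient method reduces to multiplying each county's animal populations by per-head water-use coefficients and summing over animal types. A minimal sketch with hypothetical numbers (the report's actual coefficients vary by State and animal type):

```python
def county_withdrawal_mgal_per_day(head_counts, coefficients):
    """County livestock withdrawal: head count times a gallons-per-head-
    per-day coefficient, summed over animal types, in Mgal/d."""
    gal_per_day = sum(head_counts[a] * coefficients[a] for a in head_counts)
    return gal_per_day / 1e6

# Hypothetical county; coefficients in gal/head/day
heads = {"dairy_cattle": 4200, "beef_cattle": 15300, "hogs": 8800}
coefs = {"dairy_cattle": 35.0, "beef_cattle": 12.0, "hogs": 4.0}
print(county_withdrawal_mgal_per_day(heads, coefs))  # ~0.37 Mgal/d
```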
Calculating system reliability with SRFYDO
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morzinski, Jerome; Anderson - Cook, Christine M; Klamann, Richard M
2010-01-01
SRFYDO is a process for estimating reliability of complex systems. Using information from all applicable sources, including full-system (flight) data, component test data, and expert (engineering) judgment, SRFYDO produces reliability estimates and predictions. It is appropriate for series systems with possibly several versions of the system which share some common components. It models reliability as a function of age and up to 2 other lifecycle (usage) covariates. Initial output from its Exploratory Data Analysis mode consists of plots and numerical summaries so that the user can check data entry and model assumptions, and help determine a final form for the system model. The System Reliability mode runs a complete reliability calculation using Bayesian methodology. This mode produces results that estimate reliability at the component, sub-system, and system level. The results include estimates of uncertainty, and can predict reliability at some not-too-distant time in the future. This paper presents an overview of the underlying statistical model for the analysis, discusses model assumptions, and demonstrates usage of SRFYDO.
Empirical Allometric Models to Estimate Total Needle Biomass For Loblolly Pine
Hector M. de los Santos-Posadas; Bruce E. Borders
2002-01-01
Empirical geometric models based on the cone surface formula were adapted and used to estimate total dry needle biomass (TNB) and live branch basal area (LBBA). The results suggest that the empirical geometric equations produced good fit and stable parameters while estimating TNB and LBBA. The data used include trees from a 12-year-old spacing study and a set of...
Estimating abundance of mountain lions from unstructured spatial sampling
Russell, Robin E.; Royle, J. Andrew; Desimone, Richard; Schwartz, Michael K.; Edwards, Victoria L.; Pilgrim, Kristy P.; Mckelvey, Kevin S.
2012-01-01
Mountain lions (Puma concolor) are often difficult to monitor because of their low capture probabilities, extensive movements, and large territories. Methods for estimating the abundance of this species are needed to assess population status, determine harvest levels, evaluate the impacts of management actions on populations, and derive conservation and management strategies. Traditional mark–recapture methods do not explicitly account for differences in individual capture probabilities due to the spatial distribution of individuals in relation to survey effort (or trap locations). However, recent advances in the analysis of capture–recapture data have produced methods estimating abundance and density of animals from spatially explicit capture–recapture data that account for heterogeneity in capture probabilities due to the spatial organization of individuals and traps. We adapt recently developed spatial capture–recapture models to estimate density and abundance of mountain lions in western Montana. Volunteers and state agency personnel collected mountain lion DNA samples in portions of the Blackfoot drainage (7,908 km2) in west-central Montana using 2 methods: snow back-tracking mountain lion tracks to collect hair samples and biopsy darting treed mountain lions to obtain tissue samples. Overall, we recorded 72 individual capture events, including captures both with and without tissue sample collection and hair samples resulting in the identification of 50 individual mountain lions (30 females, 19 males, and 1 unknown sex individual). We estimated lion densities from 8 models containing effects of distance, sex, and survey effort on detection probability. Our population density estimates ranged from a minimum of 3.7 mountain lions/100 km2 (95% CI 2.3–5.7) under the distance only model (including only an effect of distance on detection probability) to 6.7 (95% CI 3.1–11.0) under the full model (including effects of distance, sex, survey effort, and distance × sex on detection probability). These numbers translate to a total estimate of 293 mountain lions (95% CI 182–451) to 529 (95% CI 245–870) within the Blackfoot drainage. Results from the distance model are similar to previous estimates of 3.6 mountain lions/100 km2 for the study area; however, results from all other models indicated greater numbers of mountain lions. Our results indicate that unstructured spatial sampling combined with spatial capture–recapture analysis can be an effective method for estimating large carnivore densities.
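The distance effect in such models is typically a half-normal detection function: detection probability decays with the distance between a detector and an individual's activity center. A minimal sketch with hypothetical parameters (the study's models also include sex and effort covariates):

```python
import numpy as np

def detection_prob(trap_xy, center_xy, p0, sigma):
    """Half-normal detection function common in spatial capture-recapture:
    p0 is baseline detection at the activity center and sigma sets the
    spatial scale of movement."""
    d2 = np.sum((np.asarray(trap_xy) - np.asarray(center_xy)) ** 2)
    return p0 * np.exp(-d2 / (2.0 * sigma ** 2))

# Hypothetical: baseline detection 0.2, sigma = 3 km, trap ~2.8 km away
print(detection_prob((10.0, 4.0), (12.0, 6.0), p0=0.2, sigma=3.0))
```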
A cautionary note on Bayesian estimation of population size by removal sampling with diffuse priors.
Bord, Séverine; Bioche, Christèle; Druilhet, Pierre
2018-05-01
We consider the problem of estimating a population size by removal sampling when the sampling rate is unknown. Bayesian methods are now widespread and allow prior knowledge to be included in the analysis. However, we show that Bayes estimates based on default improper priors lead to improper posteriors or infinite estimates. Similarly, weakly informative priors give unstable estimators that are sensitive to the choice of hyperparameters. By examining the likelihood, we show that population size estimates can be stabilized by penalizing small values of the sampling rate or large values of the population size. Based on theoretical results and simulation studies, we propose some recommendations on the choice of the prior. We then apply our results to real datasets. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Revisiting the Estimation of Dinosaur Growth Rates
Myhrvold, Nathan P.
2013-01-01
Previous growth-rate studies covering 14 dinosaur taxa, as represented by 31 data sets, are critically examined and reanalyzed by using improved statistical techniques. The examination reveals that some previously reported results cannot be replicated by using the methods originally reported; results from new methods are in many cases different, in both the quantitative rates and the qualitative nature of the growth, from results in the prior literature. Asymptotic growth curves, which have been hypothesized to be ubiquitous, are shown to provide best fits for only four of the 14 taxa. Possible reasons for non-asymptotic growth patterns are discussed; they include systematic errors in the age-estimation process and, more likely, a bias toward younger ages among the specimens analyzed. Analysis of the data sets finds that only three taxa include specimens that could be considered skeletally mature (i.e., having attained 90% of maximum body size predicted by asymptotic curve fits), and eleven taxa are quite immature, with the largest specimen having attained less than 62% of predicted asymptotic size. The three taxa that include skeletally mature specimens are included in the four taxa that are best fit by asymptotic curves. The totality of results presented here suggests that previous estimates of both maximum dinosaur growth rates and maximum dinosaur sizes have little statistical support. Suggestions for future research are presented. PMID:24358133
Space Shuttle propulsion parameter estimation using optimal estimation techniques, volume 1
NASA Technical Reports Server (NTRS)
1983-01-01
The mathematical developments and their computer program implementation for the Space Shuttle propulsion parameter estimation project are summarized. The estimation approach chosen is extended Kalman filtering with a modified Bryson-Frazier smoother. Its use here is motivated by the objective of obtaining better estimates than those available from filtering and of eliminating the lag associated with filtering. The estimation technique uses as the dynamical process the six-degree-of-freedom equations of motion, resulting in twelve state vector elements. In addition to these, mass and solid propellant burn depth serve as the "system" state elements. The "parameter" state elements can include deviations from reference values in aerodynamic coefficients, inertia, center of gravity, atmospheric wind, and other quantities. Propulsion parameter state elements have been included not as options like those just discussed, but as the main parameter states to be estimated. The mathematical developments were completed for all these parameters. Since the system dynamics and measurement processes are non-linear functions of the states, the mathematical developments are taken up almost entirely by the linearization of these equations as required by the estimation algorithms.
Aynekulu, Ermias; Pitkänen, Sari; Packalen, Petteri
2016-01-01
It has been suggested that above-ground biomass (AGB) inventories should include tree height (H), in addition to diameter (D). As H is a difficult variable to measure, H-D models are commonly used to predict H. We tested a number of approaches for H-D modelling, including additive terms which increased the complexity of the model, and observed how differences in tree-level predictions of H propagated to plot-level AGB estimations. We were especially interested in detecting whether the choice of method can lead to bias. The compared approaches, listed in order of increasing complexity, were: (B0) AGB estimations from D only; (B1) involving also H obtained from a fixed-effects H-D model; (B2) involving also species; (B3) including also between-plot variability as random effects; and (B4) involving multilevel nested random effects for grouping plots in clusters. In light of the results, the modelling approach affected the AGB estimation significantly in some cases, although differences were negligible for some of the alternatives. The most important differences were found between including H or not in the AGB estimation. We observed that AGB predictions without H information were very sensitive to the environmental stress parameter (E), which can induce a critical bias. Regarding the H-D modelling, the most relevant effect was found when species was included as an additive term. We presented a two-step methodology that succeeded in identifying the species for which modifying the general H-D relation was warranted. Based on the results, our final choice was the single-level mixed-effects model (B3), which accounts for the species but also for the plot random effects reflecting site-specific factors such as soil properties and degree of disturbance. PMID:27367857
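The sensitivity to H and to E described above can be seen directly in the commonly cited pantropical AGB models of Chave et al. (2014), to which the abstract's E parameter refers. The sketch below is an illustration of those published forms, not the models fitted in this study; the coefficients should be checked against the source and the tree values are hypothetical:

```python
import math

def agb_with_height(d_cm, h_m, rho):
    """Pantropical AGB (kg) from diameter (cm), height (m) and wood
    density (g/cm^3), per the commonly cited Chave et al. (2014) form."""
    return 0.0673 * (rho * d_cm ** 2 * h_m) ** 0.976

def agb_without_height(d_cm, rho, e):
    """Height-free variant driven by the environmental stress parameter E;
    bias in E propagates directly into the biomass estimate."""
    ln_agb = (-1.803 - 0.976 * e + 0.976 * math.log(rho)
              + 2.673 * math.log(d_cm) - 0.0299 * math.log(d_cm) ** 2)
    return math.exp(ln_agb)

# Hypothetical tree: D = 30 cm, H = 22 m, wood density 0.6 g/cm^3, E = 0.2
print(agb_with_height(30.0, 22.0, 0.6), agb_without_height(30.0, 0.6, 0.2))
```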
Thomas, Richard M; Parks, Connie L; Richard, Adam H
2016-09-01
A common task in forensic anthropology involves the estimation of the biological sex of a decedent by exploiting the sexual dimorphism between males and females. Estimation methods are often based on analysis of skeletal collections of known sex and most include a research-based accuracy rate. However, the accuracy rates of sex estimation methods in actual forensic casework have rarely been studied. This article uses sex determinations based on DNA results from 360 forensic cases to develop accuracy rates for sex estimations conducted by forensic anthropologists. The overall rate of correct sex estimation from these cases is 94.7% with increasing accuracy rates as more skeletal material is available for analysis and as the education level and certification of the examiner increases. Nine of 19 incorrect assessments resulted from cases in which one skeletal element was available, suggesting that the use of an "undetermined" result may be more appropriate for these cases. Published 2016. This article is a U.S. Government work and is in the public domain in the USA.
Test suite for image-based motion estimation of the brain and tongue
NASA Astrophysics Data System (ADS)
Ramsey, Jordan; Prince, Jerry L.; Gomez, Arnold D.
2017-03-01
Noninvasive analysis of motion has important uses as qualitative markers for organ function and to validate biomechanical computer simulations relative to experimental observations. Tagged MRI is considered the gold standard for noninvasive tissue motion estimation in the heart, and this has inspired multiple studies focusing on other organs, including the brain under mild acceleration and the tongue during speech. As with other motion estimation approaches, using tagged MRI to measure 3D motion includes several preprocessing steps that affect the quality and accuracy of estimation. Benchmarks, or test suites, are datasets of known geometries and displacements that act as tools to tune tracking parameters or to compare different motion estimation approaches. Because motion estimation was originally developed to study the heart, existing test suites focus on cardiac motion. However, many fundamental differences exist between the heart and other organs, such that parameter tuning (or other optimization) with respect to a cardiac database may not be appropriate. Therefore, the objective of this research was to design and construct motion benchmarks by adopting an "image synthesis" test suite to study brain deformation due to mild rotational accelerations, and a benchmark to model motion of the tongue during speech. To obtain a realistic representation of mechanical behavior, kinematics were obtained from finite-element (FE) models. These results were combined with an approximation of the acquisition process of tagged MRI (including tag generation, slice thickness, and inconsistent motion repetition). To demonstrate an application of the presented methodology, the effect of motion inconsistency on synthetic measurements of head-brain rotation and deformation was evaluated. The results indicated that acquisition inconsistency is roughly proportional to head rotation estimation error. Furthermore, when evaluating non-rigid deformation, the results suggest that inconsistent motion can yield "ghost" shear strains, which are a function of slice acquisition viability as opposed to a true physical deformation.
A new software for deformation source optimization, the Bayesian Earthquake Analysis Tool (BEAT)
NASA Astrophysics Data System (ADS)
Vasyura-Bathke, H.; Dutta, R.; Jonsson, S.; Mai, P. M.
2017-12-01
Modern studies of crustal deformation and the related source estimation, including magmatic and tectonic sources, increasingly use non-linear optimization strategies to estimate geometric and/or kinematic source parameters, and often consider geodetic and seismic data jointly. Bayesian inference is increasingly being used for estimating posterior distributions of deformation source model parameters, given measured/estimated/assumed data and model uncertainties. For instance, some studies consider uncertainties of a layered medium and propagate these into source parameter uncertainties, while others use informative priors to reduce the model parameter space. In addition, innovative sampling algorithms have been developed to efficiently explore the high-dimensional parameter spaces. Compared to earlier studies, these improvements have resulted in overall more robust source model parameter estimates that include uncertainties. However, the computational burden of these methods is high and estimation codes are rarely made available along with the published results. Even if the codes are accessible, it is usually challenging to assemble them into a single optimization framework as they are typically coded in different programming languages. Therefore, further progress and future applications of these methods/codes are hampered, while reproducibility and validation of results has become essentially impossible. In the spirit of providing open-access and modular codes to facilitate progress and reproducible research in deformation source estimations, we undertook the effort of developing BEAT, a python package that comprises all the above-mentioned features in one single programming environment. The package builds on the pyrocko seismological toolbox (www.pyrocko.org), and uses the pymc3 module for Bayesian statistical model fitting. BEAT is an open-source package (https://github.com/hvasbath/beat), and we encourage and solicit contributions to the project. Here, we present our strategy for developing BEAT and show application examples, especially the effect of including the model prediction uncertainty of the velocity model in subsequent source optimizations: a full moment tensor, a Mogi source, and a moderate strike-slip earthquake.
Program review presentation to Level 1, Interagency Coordination Committee
NASA Technical Reports Server (NTRS)
1982-01-01
Progress in the development of crop inventory technology is reported. Specific topics include the results of a thematic mapper analysis, variable selection studies/early season estimator improvements, the agricultural information system simulator, large unit proportion estimation, and development of common features for multi-satellite information extraction.
49 CFR 211.9 - Content of rulemaking and waiver petitions.
Code of Federal Regulations, 2010 CFR
2010-10-01
... including an evaluation of anticipated impacts of the action sought; each evaluation shall include an estimate of resulting costs to the private sector, to consumers, and to Federal, State and local governments as well as an evaluation of resulting benefits, quantified to the extent practicable. Each...
Determining wave direction using curvature parameters.
de Queiroz, Eduardo Vitarelli; de Carvalho, João Luiz Baptista
2016-01-01
The curvature of the sea surface was tested as a parameter for estimating wave direction, in search of better estimates in shallow waters, where waves of different sizes, frequencies and directions intersect and the wave field is difficult to characterize. We used numerical simulations of the sea surface to determine wave direction calculated from the curvature of the waves. Using 1000 numerical simulations, the statistical variability of the wave direction was determined. The results showed good performance by the curvature parameter for estimating wave direction. Accuracy in the estimates was improved by including wave slope parameters in addition to curvature. The results indicate that curvature is a promising technique for estimating wave direction.
•In this study, the accuracy and precision of curvature parameters to measure wave direction are analyzed using a model simulation that generates 1000 wave records with directional resolution.
•The model allows the simultaneous simulation of time-series wave properties such as sea surface elevation, slope and curvature, and these were used to analyze the variability of estimated directions.
•The simultaneous acquisition of slope and curvature parameters can contribute to estimating wave direction, thus increasing the accuracy and precision of results.
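The slope-based ingredient can be sketched compactly: the dominant direction follows from the principal axis of the surface-slope field (an axial quantity, so a 180-degree ambiguity remains). This synthetic example illustrates the idea and is not the authors' method:

```python
import numpy as np

def direction_from_slopes(eta, dx, dy):
    """Dominant wave direction (degrees) from the principal axis of the
    sea-surface slope field (eta_x, eta_y)."""
    ey, ex = np.gradient(eta, dy, dx)          # d(eta)/dy, d(eta)/dx
    theta = 0.5 * np.arctan2(2.0 * np.mean(ex * ey),
                             np.mean(ex ** 2) - np.mean(ey ** 2))
    return np.degrees(theta)

# Hypothetical monochromatic wave travelling toward 30 degrees
x, y = np.meshgrid(np.arange(0.0, 200.0, 2.0), np.arange(0.0, 200.0, 2.0))
k, ang = 0.1, np.radians(30.0)
eta = np.cos(k * (x * np.cos(ang) + y * np.sin(ang)))
print(direction_from_slopes(eta, 2.0, 2.0))   # ~30
```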
Estimating the Effects of the Terminal Area Productivity Program
NASA Technical Reports Server (NTRS)
Lee, David A.; Kostiuk, Peter F.; Hemm, Robert V., Jr.; Wingrove, Earl R., III; Shapiro, Gerald
1997-01-01
The report describes methods and results of an analysis of the technical and economic benefits of the systems to be developed in the NASA Terminal Area Productivity (TAP) program. A runway capacity model using parameters that reflect the potential impact of the TAP technologies is described. The runway capacity model feeds airport specific models which are also described. The capacity estimates are used with a queuing model to calculate aircraft delays, and TAP benefits are determined by calculating the savings due to reduced delays. The report includes benefit estimates for Boston Logan and Detroit Wayne County airports. An appendix includes a description and listing of the runway capacity model.
Curran, Christopher A.; Eng, Ken; Konrad, Christopher P.
2012-01-01
Regional low-flow regression models for estimating Q7,10 at ungaged stream sites are developed from the records of daily discharge at 65 continuous gaging stations (including 22 discontinued gaging stations) for the purpose of evaluating explanatory variables. By incorporating the base-flow recession time constant τ as an explanatory variable in the regression model, the root-mean-square error for estimating Q7,10 at ungaged sites can be lowered to 72 percent (for known values of τ), which is 42 percent less than if only basin area and mean annual precipitation are used as explanatory variables. If partial-record sites are included in the regression data set, τ must be estimated from pairs of discharge measurements made during continuous periods of declining low flows. Eight measurement pairs are optimal for estimating τ at partial-record sites, and result in a lowering of the root-mean-square error by 25 percent. A low-flow survey strategy that includes paired measurements at partial-record sites requires additional effort and planning beyond a standard strategy, but could be used to enhance regional estimates of τ and potentially reduce the error of regional regression models for estimating low-flow characteristics at ungaged sites.
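The paired-measurement estimate of τ can be illustrated with the exponential base-flow model; a minimal sketch, assuming Q(t) = Q0·exp(-t/τ) and hypothetical measurement pairs (the study found about eight pairs optimal):

```python
import numpy as np

def recession_time_constant(q1, q2, dt_days):
    """Base-flow recession time constant (days) from one pair of
    discharge measurements q1 > q2 taken dt_days apart during a
    continuous low-flow recession, assuming exponential recession
    Q(t) = Q0 * exp(-t / tau). A sketch, not the report's method."""
    return dt_days / np.log(q1 / q2)

# Hypothetical (Q1, Q2, days-apart) pairs pooled into one site estimate.
pairs = [(12.0, 9.5, 10), (8.7, 7.1, 9), (6.4, 5.0, 11)]
tau = np.mean([recession_time_constant(q1, q2, dt) for q1, q2, dt in pairs])
```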
Field evaluation of distance-estimation error during wetland-dependent bird surveys
Nadeau, Christopher P.; Conway, Courtney J.
2012-01-01
Context: The most common methods to estimate detection probability during avian point-count surveys involve recording a distance between the survey point and individual birds detected during the survey period. Accurately measuring or estimating distance is an important assumption of these methods; however, this assumption is rarely tested in the context of aural avian point-count surveys. Aims: We expand on recent bird-simulation studies to document the error associated with estimating distance to calling birds in a wetland ecosystem. Methods: We used two approaches to estimate the error associated with five surveyors' distance estimates between the survey point and calling birds, and to determine the factors that affect a surveyor's ability to estimate distance. Key results: We observed biased and imprecise distance estimates when estimating distance to simulated birds in a point-count scenario (mean error = -9 m, s.d. = 47 m) and when estimating distances to real birds during field trials (mean error = 39 m, s.d. = 79 m). The amount of bias and precision in distance estimates differed among surveyors; surveyors with more training and experience were less biased and more precise when estimating distance to both real and simulated birds. Three environmental factors were important in explaining the error associated with distance estimates: the measured distance from the bird to the surveyor, the volume of the call, and the species of bird. Surveyors tended to make large overestimations to birds close to the survey point, which is an especially serious error in distance sampling. Conclusions: Our results suggest that distance-estimation error is prevalent, but surveyor training may be the easiest way to reduce it. Implications: The present study has demonstrated how relatively simple field trials can be used to estimate the error associated with distance estimates used to estimate detection probability during avian point-count surveys. Evaluating distance-estimation errors will allow investigators to better evaluate the accuracy of avian density and trend estimates. Moreover, investigators who evaluate distance-estimation errors could employ recently developed models to incorporate distance-estimation error into analyses. We encourage further development of such models, including their inclusion in distance-analysis software.
Study on paddy rice yield estimation based on multisource data and the Grey system theory
NASA Astrophysics Data System (ADS)
Deng, Wensheng; Wang, Wei; Liu, Hai; Li, Chen; Ge, Yimin; Zheng, Xianghua
2009-10-01
Paddy rice is an important crop. Whereas previous studies of paddy rice yield estimation have usually taken only remote sensing data or only meteorological data as the influencing factors, we combine remote sensing and meteorological data to bring the monitoring result closer to reality. Although grey system theory has been used in many fields, it has rarely been applied to paddy rice yield estimation. This study introduces it to paddy rice yield estimation and builds a yield estimation model, which can address the small-data-set problem that deterministic models cannot solve. Some regions of the Jianghan plain were selected as the study area. The data include multi-temporal remote sensing images, meteorological data, and statistical data. The remote sensing data are the 16-day composite MODIS images (250-m spatial resolution). The meteorological data include monthly average temperature, sunshine duration, and rainfall amount. The statistical data are the long-term paddy rice yields of the study area. First, the paddy rice planting area is extracted from the multi-temporal MODIS images with the help of GIS and RS. Then, taking the paddy rice yield as the reference sequence and the MODIS and meteorological data as the comparative sequences, the grey relational coefficients are computed and the yield estimation factors are selected based on grey system theory. Finally, using these factors, the yield estimation model is established and the results are tested. The results indicated that the method is feasible and the conclusions are credible. It can provide a scientific method and reference value for regional paddy rice remote sensing yield estimation.
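For readers unfamiliar with the factor-selection step, the sketch below computes grey relational grades of candidate factors against a yield series; rho = 0.5 is the customary distinguishing coefficient, and all series values are hypothetical, not the paper's data.

```python
import numpy as np

def grey_relational_grades(reference, comparatives, rho=0.5):
    """Grey relational grade of each comparative sequence (a candidate
    yield-estimation factor, e.g., an NDVI or temperature series)
    against the reference sequence (historical yield). Sequences are
    min-max normalized first. A generic GRA sketch only."""
    def norm(x):
        x = np.asarray(x, float)
        return (x - x.min()) / (x.max() - x.min())

    x0 = norm(reference)
    deltas = [np.abs(x0 - norm(xi)) for xi in comparatives]
    dmin = min(d.min() for d in deltas)   # global two-level minimum
    dmax = max(d.max() for d in deltas)   # global two-level maximum
    return [float(np.mean((dmin + rho * dmax) / (d + rho * dmax))) for d in deltas]

# Hypothetical 5-year series: yield vs. an NDVI index and mean temperature.
grades = grey_relational_grades(
    [6.1, 6.4, 5.9, 6.8, 7.0],
    [[0.61, 0.66, 0.58, 0.71, 0.74], [24.1, 25.0, 23.5, 25.8, 26.2]])
```

Factors with grades closest to 1 would be retained for the yield model.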
Prediction and assimilation of surf-zone processes using a Bayesian network: Part II: Inverse models
Plant, Nathaniel G.; Holland, K. Todd
2011-01-01
A Bayesian network model has been developed to simulate a relatively simple problem of wave propagation in the surf zone (detailed in Part I). Here, we demonstrate that this Bayesian model can provide both inverse modeling and data-assimilation solutions for predicting offshore wave heights and depth estimates given limited wave-height and depth information from an onshore location. The inverse method is extended to allow data assimilation using observational inputs that are not compatible with deterministic solutions of the problem. These inputs include sand bar positions (instead of bathymetry) and estimates of the intensity of wave breaking (instead of wave-height observations). Our results indicate that wave breaking information is essential to reduce prediction errors. In many practical situations, this information could be provided from a shore-based observer or from remote-sensing systems. We show that various combinations of the assimilated inputs significantly reduce the uncertainty in the estimates of water depths and wave heights in the model domain. Application of the Bayesian network model to new field data demonstrated significant predictive skill (R2 = 0.7) for the inverse estimate of a month-long time series of offshore wave heights. The Bayesian inverse results include uncertainty estimates that were shown to be most accurate when given uncertainty in the inputs (e.g., depth and tuning parameters). Furthermore, the inverse modeling was extended to directly estimate tuning parameters associated with the underlying wave-process model. The inverse estimates of the model parameters not only showed an offshore wave height dependence consistent with results of previous studies but the uncertainty estimates of the tuning parameters also explain previously reported variations in the model parameters.
Design of Low-Cost Vehicle Roll Angle Estimator Based on Kalman Filters and an IoT Architecture.
Garcia Guzman, Javier; Prieto Gonzalez, Lisardo; Pajares Redondo, Jonatan; Sanz Sanchez, Susana; Boada, Beatriz L
2018-06-03
In recent years, there have been many advances in vehicle technologies based on the efficient use of real-time data provided by embedded sensors. Some of these technologies can help avoid a crash or reduce its severity, such as the Roll Stability Control (RSC) systems for commercial vehicles. In RSC, several critical variables, such as sideslip and roll angle, can only be measured directly using expensive equipment, and such devices would increase the price of commercial vehicles. Nevertheless, sideslip and roll angle values can be estimated using MEMS sensors in combination with data-fusion algorithms. The objectives of this research were to integrate roll angle estimators based on linear and unscented Kalman filters, to evaluate the precision of the results obtained, and to determine whether the hard real-time processing constraints are met when embedding this kind of estimator in IoT architectures based on low-cost equipment deployable in commercial vehicles. An experimental testbed composed of a van with two sets of low-cost kits was set up, the first including a Raspberry Pi 3 Model B and the other an Intel Edison System on Chip. This experimental environment was tested under different conditions for comparison. The results obtained from the low-cost experimental kits, based on IoT architectures and including estimators based on Kalman filters, provide accurate roll angle estimation. These results also show that the processing time to acquire the data and execute the Kalman-filter estimations fulfills hard real-time constraints.
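As a rough illustration of the estimator family involved (not the authors' filter, model, or tuning), a minimal linear Kalman filter fusing a gyroscope roll rate with an accelerometer-derived roll angle might look like this:

```python
import numpy as np

def kf_roll(gyro_rate, accel_roll, dt, q=1e-4, r=1e-2):
    """Minimal linear Kalman filter for roll angle. State is
    [roll, gyro bias]; the gyro rate drives the prediction and the
    accelerometer-derived roll is the measurement. Noise levels q
    and r are placeholder tunings, not the paper's values."""
    F = np.array([[1.0, -dt], [0.0, 1.0]])  # state transition
    B = np.array([dt, 0.0])                 # control (gyro rate) input
    H = np.array([[1.0, 0.0]])              # we observe roll only
    Q = q * np.eye(2)                       # process noise covariance
    R = np.array([[r]])                     # measurement noise covariance
    x, P, out = np.zeros(2), np.eye(2), []
    for w, z in zip(gyro_rate, accel_roll):
        x = F @ x + B * w                   # predict
        P = F @ P @ F.T + Q
        y = z - H @ x                       # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        x = x + (K @ y).ravel()             # update
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return out
```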
Space Shuttle propulsion parameter estimation using optimal estimation techniques
NASA Technical Reports Server (NTRS)
1983-01-01
This fourth monthly progress report again contains corrections and additions to the previously submitted reports. The additions include a simplified SRB model that is directly incorporated into the estimation algorithm and provides the required partial derivatives. The resulting partial derivatives are analytical rather than numerical as would be the case using the SOBER routines. The filter and smoother routine developments have continued. These routines are being checked out.
Buried transuranic wastes at ORNL: Review of past estimates and reconciliation with current data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trabalka, J.R.
1997-09-01
Inventories of buried (generally meaning disposed of) transuranic (TRU) wastes at Oak Ridge National Laboratory (ORNL) have been estimated for site remediation and waste management planning over a period of about two decades. Estimates were required because of inadequate waste characterization and incomplete disposal records. For a variety of reasons, including changing definitions of TRU wastes, differing objectives for the estimates, and poor historical data, the published results have sometimes been in conflict. The purpose of this review was (1) to attempt to explain both the rationale for and differences among the various estimates, and (2) to update the estimates based on more recent information obtained from waste characterization and from evaluations of ORNL waste data bases and historical records. The latter included information obtained from an expert panel's review and reconciliation of inconsistencies in data identified during preparation of the ORNL input for the third revision of the Baseline Inventory Report for the Waste Isolation Pilot Plant. The results summarize current understanding of the relationship among past estimates of buried TRU wastes and provide the most up-to-date information on recorded burials. The limitations of available information on the latter, and thus the need for improved waste characterization, are highlighted.
Anglemyer, Andrew; Horvath, Hacsi T; Bero, Lisa
2014-04-29
Researchers and organizations often use evidence from randomized controlled trials (RCTs) to determine the efficacy of a treatment or intervention under ideal conditions. Studies of observational designs are often used to measure the effectiveness of an intervention in 'real world' scenarios. Numerous study designs and modifications of existing designs, including both randomized and observational, are used for comparative effectiveness research in an attempt to give an unbiased estimate of whether one treatment is more effective or safer than another for a particular population. A systematic analysis of study design features, risk of bias, parameter interpretation, and effect size for all types of randomized and non-experimental observational studies is needed to identify specific differences in design types and potential biases. This review summarizes the results of methodological reviews that compare the outcomes of observational studies with randomized trials addressing the same question, as well as methodological reviews that compare the outcomes of different types of observational studies. Our objectives were to assess the impact of study design (including RCTs versus observational study designs) on the effect measures estimated; to explore methodological variables that might explain any differences identified; and to identify gaps in the existing research comparing study designs. We searched seven electronic databases from January 1990 to December 2013. Along with MeSH terms and relevant keywords, we used the sensitivity-specificity balanced version of a validated strategy to identify reviews in PubMed, augmented with one term ("review" in article titles) so that it better targeted narrative reviews. No language restrictions were applied. We examined systematic reviews that were designed as methodological reviews to compare quantitative effect size estimates measuring efficacy or effectiveness of interventions tested in trials with those tested in observational studies. Comparisons included RCTs versus observational studies (including retrospective cohorts, prospective cohorts, case-control designs, and cross-sectional designs). Reviews were not eligible if they compared randomized trials with other studies that had used some form of concurrent allocation. In general, outcome measures included relative risks or rate ratios (RR), odds ratios (OR), and hazard ratios (HR). Using results from observational studies as the reference group, we examined the published estimates to see whether there was a relatively larger or smaller effect in the ratio of odds ratios (ROR). Within each identified review, if an estimate comparing results from observational studies with RCTs was not provided, we pooled the estimates for observational studies and RCTs. Then, we estimated the ratio of ratios (risk ratio or odds ratio) for each identified review using observational studies as the reference category. Across all reviews, we synthesized these ratios to get a pooled ROR comparing results from RCTs with results from observational studies. Our initial search yielded 4406 unique references. Fifteen reviews met our inclusion criteria, 14 of which were included in the quantitative analysis. The included reviews analyzed data from 1583 meta-analyses that covered 228 different medical conditions. The mean number of included studies per paper was 178 (range 19 to 530). Eleven (73%) reviews had low risk of bias for explicit criteria for study selection, nine (60%) had low risk of bias for investigators' agreement for study selection, five (33%) included a complete sample of studies, seven (47%) assessed the risk of bias of their included studies, seven (47%) controlled for methodological differences between studies, eight (53%) controlled for heterogeneity among studies, nine (60%) analyzed similar outcome measures, and four (27%) were judged to be at low risk of reporting bias. Our primary quantitative analysis, including 14 reviews, showed that the pooled ROR comparing effects from RCTs with effects from observational studies was 1.08 (95% confidence interval (CI) 0.96 to 1.22). Of the 14 reviews included in this analysis, 11 (79%) found no significant difference between observational studies and RCTs. One review suggested observational studies had larger effects of interest, and two reviews suggested observational studies had smaller effects of interest. Similar to the effect across all included reviews, effects from reviews comparing RCTs with cohort studies had a pooled ROR of 1.04 (95% CI 0.89 to 1.21), with substantial heterogeneity (I² = 68%). Three reviews compared effects of RCTs and case-control designs (pooled ROR 1.11, 95% CI 0.91 to 1.35). No significant difference in point estimates across heterogeneity, pharmacological intervention, or propensity score adjustment subgroups was noted. No reviews had compared RCTs with observational studies that used two of the most common causal inference methods, instrumental variables and marginal structural models. Our results across all reviews (pooled ROR 1.08) are very similar to results reported by similarly conducted reviews. As such, we have reached similar conclusions: on average, there is little evidence for significant effect estimate differences between observational studies and RCTs, regardless of specific observational study design, heterogeneity, or inclusion of studies of pharmacological interventions. Factors other than study design per se need to be considered when exploring reasons for a lack of agreement between results of RCTs and observational studies. Our results underscore that it is important for review authors to consider not only study design but also the level of heterogeneity in meta-analyses of RCTs or observational studies. A better understanding of how these factors influence study effects might yield estimates reflective of true effectiveness.
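The pooling step described above can be sketched as inverse-variance meta-analysis on the log scale; the helper and input values below are illustrative, not the review's data:

```python
import numpy as np

def pooled_ror(rors, cis):
    """Fixed-effect inverse-variance pooling of review-level ratios of
    odds ratios (ROR = OR_RCT / OR_observational). `cis` holds each
    ROR's 95% confidence limits; standard errors are back-calculated
    from the CI width on the log scale. An illustrative sketch only."""
    log_r = np.log(rors)
    se = (np.log([hi for _, hi in cis]) - np.log([lo for lo, _ in cis])) / (2 * 1.96)
    w = 1.0 / se**2
    pooled = np.sum(w * log_r) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    return np.exp(pooled), (np.exp(pooled - 1.96 * pooled_se),
                            np.exp(pooled + 1.96 * pooled_se))

# Hypothetical review-level RORs with their 95% CIs.
ror, ci = pooled_ror([1.04, 1.11, 0.95],
                     [(0.89, 1.21), (0.91, 1.35), (0.80, 1.13)])
```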
Automatic Estimation of the Radiological Inventory for the Dismantling of Nuclear Facilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garcia-Bermejo, R.; Felipe, A.; Gutierrez, S.
The estimation of the radiological inventory of nuclear facilities to be dismantled is a process that includes information related to the physical inventory of the whole plant and the radiological survey. The radiological inventory for all components and civil structures of the plant can be estimated with mathematical models using a statistical approach. A computer application has been developed in order to obtain the radiological inventory in an automatic way. Results: A computer application has been developed that estimates the radiological inventory from the radiological measurements of the characterization program. The application includes the statistical functions needed for estimating central tendency and variability, e.g., mean, median, variance, confidence intervals, coefficients of variation, etc. This computer application is a necessary tool for estimating the radiological inventory of a nuclear facility and a powerful aid to decision-making in future sampling surveys.
Can price get the monkey off our back? A meta-analysis of illicit drug demand.
Gallet, Craig A
2014-01-01
Because of the increased availability of price data over the past 15 years, several studies have estimated the demand for illicit drugs, providing 462 estimates of the price elasticity. Results from estimating several meta-regressions reveal that these price elasticity estimates are influenced by a number of study characteristics. For instance, the price elasticity differs across drugs, with its absolute value being smallest for marijuana, compared with cocaine and heroin. Furthermore, price elasticity estimates are sensitive to whether demand is modeled in the short-run or the long-run, measures of quantity and price, whether or not alcohol and other illicit drugs are included in the specification of demand, and the location of demand. However, a number of other factors, including the functional form of demand, several specification issues, the type of data and method used to estimate demand, and the quality of the publication outlet, have less influence on the price elasticity. Copyright © 2013 John Wiley & Sons, Ltd.
Small pollutant concentration gradients between levels above a plant canopy result in large uncertainties in estimated air–surface exchange fluxes when using existing micrometeorological gradient methods, including the aerodynamic gradient method (AGM) and the modified Bowen rati...
INCREASING THE ACCURACY OF MAYFIELD ESTIMATES USING KNOWLEDGE OF NEST AGE
This presentation will focus on the error introduced in nest-survival modeling when nest-cycles are assumed to be of constant length. I will present the types of error that may occur, including biases resulting from incorrect estimates of expected values, as well as biases that o...
Cost Allocation Issues in Interlibrary Systems.
ERIC Educational Resources Information Center
Alexander, Ernest R.
1985-01-01
In comparing methods of allocating service transaction costs among member libraries of interlibrary systems, questions of how costs are to be estimated, and what cost elements are to be included are critical. Different approaches of estimation yield varying results. Actual distribution of units accounts for greatest variance in allocations. (CDD)
College Quality and Early Adult Outcomes
ERIC Educational Resources Information Center
Long, Mark C.
2008-01-01
This paper estimates the effects of various college qualities on several early adult outcomes, using panel data from the National Education Longitudinal Study. I compare the results using ordinary least squares with three alternative methods of estimation, including instrumental variables, and the methods used by Dale and Krueger [(2002).…
Estimating home-range size: when to include a third dimension?
Monterroso, Pedro; Sillero, Neftalí; Rosalino, Luís Miguel; Loureiro, Filipa; Alves, Paulo Célio
2013-01-01
Most studies dealing with home ranges consider the study areas as if they were totally flat, working only in two dimensions, when in reality they are irregular surfaces displayed in three dimensions. By disregarding the third dimension (i.e., topography), home-range size underestimates the surface actually occupied by the animal, potentially leading to misinterpretations of the animals' ecological needs. We explored the influence of considering the third dimension in the estimation of home-range size by modeling the variation between the planimetric and topographic estimates at several spatial scales. Our results revealed that planimetric approaches underestimate home-range size, with underestimates ranging from nearly zero up to 22%. The difference between planimetric and topographic estimates of home-range sizes produced highly robust models using the average slope as the sole independent factor. Moreover, our models suggest that planimetric estimates in areas with an average slope of 16.3° (±0.4) or more will incur errors ≥5%. Alternatively, the altitudinal range can be used as an indicator of the need to include topography in home-range estimates. Our results confirmed that home-range estimates can be significantly biased when topography is disregarded. We suggest that study areas where home-range studies will be performed should first be scoped for their altitudinal range, which can serve as an indicator of the need for posterior use of average-slope values to model the surface area used and/or available for the studied animals. PMID:23919170
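A minimal sketch of the kind of planimetric-to-topographic correction involved, assuming each terrain cell's surface area scales as 1/cos(slope); note that a uniform 16.3° slope gives a correction of about 4.2%, the same order as the 5% error threshold reported above. This is a generic DEM-based correction, not the authors' exact model.

```python
import numpy as np

def topographic_area(planimetric_cells, slope_deg):
    """Approximate true surface area from planimetric cell areas and
    per-cell slope: a flat cell of area A maps onto an inclined
    surface of area A / cos(slope). Inputs are hypothetical."""
    slope = np.deg2rad(np.asarray(slope_deg, float))
    return float(np.sum(np.asarray(planimetric_cells, float) / np.cos(slope)))

# At a uniform 16.3 degree slope the correction factor is ~1.042.
ratio = 1.0 / np.cos(np.deg2rad(16.3))
```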
Balzer, Laura B; Zheng, Wenjing; van der Laan, Mark J; Petersen, Maya L
2018-01-01
We often seek to estimate the impact of an exposure naturally occurring or randomly assigned at the cluster level. For example, the literature on neighborhood determinants of health continues to grow. Likewise, community randomized trials are applied to learn about real-world implementation, sustainability, and population effects of interventions with proven individual-level efficacy. In these settings, individual-level outcomes are correlated due to shared cluster-level factors, including the exposure, as well as social or biological interactions between individuals. To flexibly and efficiently estimate the effect of a cluster-level exposure, we present two targeted maximum likelihood estimators (TMLEs). The first TMLE is developed under a non-parametric causal model, which allows for arbitrary interactions between individuals within a cluster. These interactions include direct transmission of the outcome (i.e. contagion) and influence of one individual's covariates on another's outcome (i.e. covariate interference). The second TMLE is developed under a causal sub-model assuming the cluster-level and individual-specific covariates are sufficient to control for confounding. Simulations compare the alternative estimators and illustrate the potential gains from pairing individual-level risk factors and outcomes during estimation, while avoiding unwarranted assumptions. Our results suggest that estimation under the sub-model can result in bias and misleading inference in an observational setting. Incorporating working assumptions during estimation is more robust than assuming they hold in the underlying causal model. We illustrate our approach with an application to HIV prevention and treatment.
Assaad, Houssein I; Choudhary, Pankaj K
2013-01-01
L-statistics form an important class of estimators in nonparametric statistics. Its members include trimmed means and sample quantiles and functions thereof. This article is devoted to theory and applications of L-statistics for repeated measurements data, wherein the measurements on the same subject are dependent and the measurements from different subjects are independent. This article has three main goals: (a) Show that the L-statistics are asymptotically normal for repeated measurements data. (b) Present three statistical applications of this result, namely, location estimation using trimmed means, quantile estimation and construction of tolerance intervals. (c) Obtain a Bahadur representation for sample quantiles. These results are generalizations of similar results for independently and identically distributed data. The practical usefulness of these results is illustrated by analyzing a real data set involving measurement of systolic blood pressure. The properties of the proposed point and interval estimators are examined via simulation.
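As a concrete (and simplified) illustration of an L-statistic applied to repeated measurements, the sketch below computes a trimmed mean over pooled measurements and bootstraps subjects, rather than individual observations, so that within-subject dependence is respected; the data and resampling scheme are hypothetical, not the article's asymptotic theory.

```python
import numpy as np
from scipy import stats

def trimmed_mean_by_subject(data, prop=0.1, n_boot=1000):
    """10%-trimmed mean of pooled repeated measurements, with a crude
    subject-level bootstrap standard error. An illustrative sketch."""
    pooled = np.concatenate(list(data.values()))
    est = stats.trim_mean(pooled, prop)
    rng = np.random.default_rng(0)
    subjects = list(data.keys())
    boots = []
    for _ in range(n_boot):
        sample = rng.choice(subjects, size=len(subjects), replace=True)
        boots.append(stats.trim_mean(np.concatenate([data[s] for s in sample]), prop))
    return est, float(np.std(boots))

# Hypothetical systolic blood pressure readings (mmHg) per subject.
sbp = {"s1": np.array([121., 124., 119.]),
       "s2": np.array([135., 138., 132.]),
       "s3": np.array([118., 115., 120.])}
est, se = trimmed_mean_by_subject(sbp)
```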
WEIGHTED LIKELIHOOD ESTIMATION UNDER TWO-PHASE SAMPLING
Saegusa, Takumi; Wellner, Jon A.
2013-01-01
We develop asymptotic theory for weighted likelihood estimators (WLE) under two-phase stratified sampling without replacement. We also consider several variants of WLEs involving estimated weights and calibration. A set of empirical process tools is developed, including a Glivenko–Cantelli theorem, a theorem for rates of convergence of M-estimators, and a Donsker theorem for the inverse probability weighted empirical processes under two-phase sampling and sampling without replacement at the second phase. Using these general results, we derive asymptotic distributions of the WLE of a finite-dimensional parameter in a general semiparametric model where an estimator of a nuisance parameter is estimable either at regular or nonregular rates. We illustrate these results and methods in the Cox model with right censoring and interval censoring. We compare the methods via their asymptotic variances under both sampling without replacement and the more usual (and easier to analyze) assumption of Bernoulli sampling at the second phase. PMID:24563559
Tarone, Aaron M; Foran, David R
2011-01-01
Forensic entomologists use size and developmental stage to estimate blow fly age, and from those, a postmortem interval. Since such estimates are generally accurate but often lack precision, particularly in the older developmental stages, alternative aging methods would be advantageous. Presented here is a means of incorporating developmentally regulated gene expression levels into traditional stage and size data, with a goal of more precisely estimating developmental age of immature Lucilia sericata. Generalized additive models of development showed improved statistical support compared to models that did not include gene expression data, resulting in an increase in estimate precision, especially for postfeeding third instars and pupae. The models were then used to make blind estimates of development for 86 immature L. sericata raised on rat carcasses. Overall, inclusion of gene expression data resulted in increased precision in aging blow flies. © 2010 American Academy of Forensic Sciences.
Bonilla, M.G.; Mark, R.K.; Lienkaemper, J.J.
1984-01-01
In order to refine correlations of surface-wave magnitude, fault rupture length at the ground surface, and fault displacement at the surface by including the uncertainties in these variables, the existing data were critically reviewed and a new data base was compiled. Earthquake magnitudes were redetermined as necessary to make them as consistent as possible with the Gutenberg methods and results, which necessarily make up much of the data base. Measurement errors were estimated for the three variables for 58 moderate to large shallow-focus earthquakes. Regression analyses were then made utilizing the estimated measurement errors. The regression analysis demonstrates that the relations among the variables magnitude, length, and displacement are stochastic in nature. The stochastic variance, introduced in part by incomplete surface expression of seismogenic faulting, variation in shear modulus, and regional factors, dominates the estimated measurement errors. Thus, it is appropriate to use ordinary least squares for the regression models, rather than regression models based upon an underlying deterministic relation with the variance resulting from measurement errors. Significant differences exist in correlations of certain combinations of length, displacement, and magnitude when events are grouped by fault type or by region, including attenuation regions delineated by Evernden and others. Subdivision of the data results in too few data for some fault types and regions, and for these only regressions using all of the data as a group are reported. Estimates of the magnitude and the standard deviation of the magnitude of a prehistoric or future earthquake associated with a fault can be made by correlating M with the logarithms of rupture length, fault displacement, or the product of length and displacement. Fault rupture area could be reliably estimated for about 20 of the events in the data set. Regression of MS on rupture area did not result in a marked improvement over regressions that did not involve rupture area. Because no subduction-zone earthquakes are included in this study, the reported results do not apply to such zones.
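The regression step can be sketched as ordinary least squares of magnitude on log rupture length, with the residual standard deviation serving as the uncertainty of a predicted magnitude; the coefficients and data below are hypothetical, not the paper's regressions.

```python
import numpy as np

def regress_magnitude_on_log_length(mags, lengths_km):
    """OLS fit of M = a + b*log10(L) with the residual standard
    deviation as prediction uncertainty. An illustrative sketch."""
    x = np.log10(lengths_km)
    b, a = np.polyfit(x, mags, 1)           # slope, intercept
    resid = mags - (a + b * x)
    s = np.sqrt(np.sum(resid**2) / (len(mags) - 2))  # residual std dev
    return a, b, s

# Hypothetical (M, surface rupture length in km) pairs.
M = np.array([6.5, 7.0, 7.4, 6.8, 7.8])
L = np.array([18., 60., 110., 35., 300.])
a, b, s = regress_magnitude_on_log_length(M, L)
M_pred = a + b * np.log10(75.0)  # magnitude estimate for a 75 km rupture
```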
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tibuleac, Ileana
2016-06-30
A new, cost effective and non-invasive exploration method using ambient seismic noise has been tested at Soda Lake, NV, with promising results. The material included in this report demonstrates that, with the advantage of initial S-velocity models estimated from ambient noise surface waves, the seismic reflection survey, although with lower resolution, reproduces the results of the active survey when the ambient seismic noise is not contaminated by strong cultural noise. Ambient noise resolution is less at depth (below 1000 m) compared to the active survey. In general, the results are promising and useful information can be recovered from ambient seismic noise, including dipping features and fault locations.
Robust Magnetotelluric Impedance Estimation
NASA Astrophysics Data System (ADS)
Sutarno, D.
2010-12-01
Robust magnetotelluric (MT) response function estimators are now in standard use by the induction community. Properly devised and applied, these have the ability to reduce the influence of unusual data (outliers). The estimators always yield impedance estimates that are better than conventional least-squares (LS) estimation, because 'real' MT data almost never satisfy the statistical assumptions of Gaussianity and stationarity upon which normal spectral analysis is based. This paper discusses the development and application of robust estimation procedures, which can be classified as M-estimators, to MT data. Starting with a description of the estimators, special attention is given to the recent development of bounded-influence robust estimation, including utilization of the Hilbert Transform (HT) operation on causal MT impedance functions. The resulting robust performance is illustrated using synthetic as well as real MT data.
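A generic sketch of the M-estimation machinery (iteratively reweighted least squares with Huber weights) follows; it omits the bounded-influence and Hilbert-transform refinements the abstract highlights, and the data model is only the standard linear regression underlying impedance estimation.

```python
import numpy as np

def huber_irls(X, y, k=1.5, iters=20):
    """Iteratively reweighted least squares with Huber weights for a
    (possibly complex-valued) linear regression y = X @ beta + e, the
    core of an M-estimator for MT impedance (electric field predicted
    from magnetic field channels). A generic sketch only."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(iters):
        r = y - X @ beta
        scale = np.median(np.abs(r)) / 0.6745 + 1e-12  # robust MAD scale
        u = np.abs(r) / scale
        w = np.where(u <= k, 1.0, k / u)               # Huber weights
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return beta
```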
Hill, Mary Catherine
1992-01-01
This report documents a new version of the U.S. Geological Survey modular, three-dimensional, finite-difference, ground-water flow model (MODFLOW) which, with the new Parameter-Estimation Package that also is documented in this report, can be used to estimate parameters by nonlinear regression. The new version of MODFLOW is called MODFLOWP (pronounced MOD-FLOW-P), and functions nearly identically to MODFLOW when the Parameter-Estimation Package is not used. Parameters are estimated by minimizing a weighted least-squares objective function by the modified Gauss-Newton method or by a conjugate-direction method. Parameters used to calculate the following MODFLOW model inputs can be estimated: transmissivity and storage coefficient of confined layers; hydraulic conductivity and specific yield of unconfined layers; vertical leakance; vertical anisotropy (used to calculate vertical leakance); horizontal anisotropy; hydraulic conductance of the River, Streamflow-Routing, General-Head Boundary, and Drain Packages; areal recharge rates; maximum evapotranspiration; pumpage rates; and the hydraulic head at constant-head boundaries. Any spatial variation in parameters can be defined by the user. Data used to estimate parameters can include existing independent estimates of parameter values, observed hydraulic heads or temporal changes in hydraulic heads, and observed gains and losses along head-dependent boundaries (such as streams). Model output includes statistics for analyzing the parameter estimates and the model; these statistics can be used to quantify the reliability of the resulting model, to suggest changes in model construction, and to compare results of models constructed in different ways.
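The modified Gauss-Newton iteration at the heart of such a parameter-estimation package can be sketched as follows, with a finite-difference Jacobian and no damping or Marquardt modification; `sim` is a hypothetical stand-in for a forward model run returning simulated equivalents of the observations.

```python
import numpy as np

def gauss_newton(sim, p0, obs, weights, iters=10):
    """Gauss-Newton minimization of the weighted least-squares
    objective sum(w * (obs - sim(p))**2). A bare-bones sketch of the
    general algorithm, not MODFLOWP's implementation."""
    p = np.asarray(p0, float)
    W = np.diag(weights)
    for _ in range(iters):
        r = obs - sim(p)
        # Forward-difference Jacobian of the simulated values.
        J = np.empty((len(obs), len(p)))
        for j in range(len(p)):
            dp = np.zeros_like(p)
            dp[j] = 1e-6 * max(abs(p[j]), 1.0)
            J[:, j] = (sim(p + dp) - sim(p)) / dp[j]
        # Normal equations step: (J'WJ) dp = J'Wr.
        p = p + np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
    return p
```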
Entry Debris Field Estimation Methods and Application to Compton Gamma Ray Observatory Disposal
NASA Technical Reports Server (NTRS)
Mrozinski, Richard B.
2001-01-01
For public safety reasons, the Compton Gamma Ray Observatory (CGRO) was intentionally deorbited on June 4, 2000. This deorbit was NASA's first intentional controlled deorbit of a satellite, and more will come, including the eventual deorbit of the International Space Station. To maximize public safety, satellite deorbit planning requires conservative estimates of the debris footprint size and location. These estimates are needed to properly design a deorbit sequence that places the debris footprint over unpopulated areas, including protection for deorbit contingencies. This paper details a method for estimating the length (range), width (crossrange), and location of entry and breakup debris footprints. This method utilizes a three degree-of-freedom Monte Carlo simulation incorporating uncertainties in all aspects of the problem, including vehicle and environment uncertainties. The method incorporates a range of debris characteristics based on historical data in addition to any vehicle-specific debris catalog information. This paper describes the method in detail, and presents results of its application as used in planning the deorbit of the CGRO.
Oelze, Michael L; Mamou, Jonathan
2016-02-01
Conventional medical imaging technologies, including ultrasound, have continued to improve over the years. For example, in oncology, medical imaging is characterized by high sensitivity, i.e., the ability to detect anomalous tissue features, but the ability to classify these tissue features from images often lacks specificity. As a result, a large number of biopsies of tissues with suspicious image findings are performed each year with a vast majority of these biopsies resulting in a negative finding. To improve specificity of cancer imaging, quantitative imaging techniques can play an important role. Conventional ultrasound B-mode imaging is mainly qualitative in nature. However, quantitative ultrasound (QUS) imaging can provide specific numbers related to tissue features that can increase the specificity of image findings leading to improvements in diagnostic ultrasound. QUS imaging can encompass a wide variety of techniques including spectral-based parameterization, elastography, shear wave imaging, flow estimation, and envelope statistics. Currently, spectral-based parameterization and envelope statistics are not available on most conventional clinical ultrasound machines. However, in recent years, QUS techniques involving spectral-based parameterization and envelope statistics have demonstrated success in many applications, providing additional diagnostic capabilities. Spectral-based techniques include the estimation of the backscatter coefficient (BSC), estimation of attenuation, and estimation of scatterer properties such as the correlation length associated with an effective scatterer diameter (ESD) and the effective acoustic concentration (EAC) of scatterers. Envelope statistics include the estimation of the number density of scatterers and quantification of coherent to incoherent signals produced from the tissue. Challenges for clinical application include correctly accounting for attenuation effects and transmission losses and implementation of QUS on clinical devices. Successful clinical and preclinical applications demonstrating the ability of QUS to improve medical diagnostics include characterization of the myocardium during the cardiac cycle, cancer detection, classification of solid tumors and lymph nodes, detection and quantification of fatty liver disease, and monitoring and assessment of therapy.
The dosimetric impact of including the patient table in CT dose estimates
NASA Astrophysics Data System (ADS)
Nowik, Patrik; Bujila, Robert; Kull, Love; Andersson, Jonas; Poludniowski, Gavin
2017-12-01
The purpose of this study was to evaluate the dosimetric impact of including the patient table in Monte Carlo CT dose estimates for both spiral scans and scan projection radiographs (SPR). CT scan acquisitions were simulated for a Siemens SOMATOM Force scanner (Siemens Healthineers, Forchheim, Germany) with and without a patient table present. An adult male, an adult female, and a pediatric female voxelized phantom were simulated. The simulated scans included tube voltages of 80 and 120 kVp. Spiral scans simulated without a patient table resulted in effective doses that were overestimated by approximately 5% compared to the same simulations performed with the patient table present. Doses in selected individual organs (breast, colon, lung, red bone marrow, and stomach) were overestimated by up to 8%. Effective doses from SPR acquired with the x-ray tube stationary at 6 o'clock (posterior-anterior) were overestimated by 14-23% when the patient table was not included, with individual organ dose discrepancies (breast, colon, lung, red bone marrow, and stomach) all exceeding 13%. The reference entrance skin dose to the back was overestimated by 6-15% in this situation. These results highlight the importance of including the patient table in patient dose estimates for such scan situations.
NASA Astrophysics Data System (ADS)
Radun, Jenni; Leisti, Tuomas; Virtanen, Toni; Nyman, Göte; Häkkinen, Jukka
2014-11-01
To understand the viewing strategies employed in a quality estimation task, we compared two visual tasks: quality estimation and difference estimation. The estimation was done for a pair of natural images having small global changes in quality. Two groups of observers estimated the same set of images, but with different instructions: one group estimated the difference in quality and the other the difference between the image pairs. The results demonstrated the use of different visual strategies in the two tasks. The quality estimation was found to include more visual planning during the first fixation than the difference estimation, but afterward needed only a few long fixations on the semantically important areas of the image. The difference estimation used many short fixations. Salient image areas were mainly attended to when these areas were also semantically important. The results support the hypothesis that these tasks' general characteristics (evaluation time, number of fixations, area fixated on) show differences in processing, but also suggest that examining only single fixations when comparing tasks is too narrow a view. When planning a subjective experiment, one must remember that a small change in the instructions might lead to a noticeable change in viewing strategy.
Evaluation of wind field statistics near and inside clouds using a coherent Doppler lidar
NASA Astrophysics Data System (ADS)
Lottman, Brian Todd
1998-09-01
This work proposes advanced techniques for measuring the spatial wind field statistics near and inside clouds using a vertically pointing solid-state coherent Doppler lidar on a fixed ground-based platform. The coherent Doppler lidar is an ideal instrument for high spatial and temporal resolution velocity estimates. The basic parameters of lidar are discussed, including a complete statistical description of the Doppler lidar signal. This description is extended to cases with simple functional forms for aerosol backscatter and velocity. An estimate for the mean velocity over a sensing volume is produced by estimating the mean spectra. There are many traditional spectral estimators, which are useful for conditions with slowly varying velocity and backscatter. A new class of (novel) estimators is introduced that produces reliable velocity estimates for conditions with large variations in aerosol backscatter and velocity with range, such as cloud conditions. Performance of traditional and novel estimators is computed for a variety of deterministic atmospheric conditions using computer-simulated data. Wind field statistics are produced from actual data for a cloud deck and for multi-layer clouds. Unique results include detection of possible spectral signatures for rain, estimates for the structure function inside a cloud deck, reliable velocity estimation techniques near and inside thin clouds, and estimates for simple wind field statistics between cloud layers.
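The mean-spectrum velocity estimate mentioned above can be sketched as the first moment of averaged periodograms; a textbook-style illustration only, which ignores the noise-floor subtraction and the novel estimators a real implementation would need.

```python
import numpy as np

def mean_radial_velocity(iq, fs, wavelength):
    """Mean radial velocity for one range gate from the first moment
    of the Doppler power spectrum of complex (I/Q) lidar returns:
    v = (lambda / 2) * f_mean. `iq` may be (n_pulses, n_samples), in
    which case periodograms are averaged over pulses. A sketch only."""
    spec = np.abs(np.fft.fft(iq, axis=-1)) ** 2
    if spec.ndim > 1:
        spec = spec.mean(axis=0)           # average periodograms
    freqs = np.fft.fftfreq(spec.shape[-1], d=1.0 / fs)
    f_mean = np.sum(freqs * spec) / np.sum(spec)
    return 0.5 * wavelength * f_mean
```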
Inferences about landbird abundance from count data: recent advances and future directions
Nichols, J.D.; Thomas, L.; Conn, P.B.; Thomson, David L.; Cooch, Evan G.; Conroy, Michael J.
2009-01-01
We summarize results of a November 2006 workshop dealing with recent research on the estimation of landbird abundance from count data. Our conceptual framework includes a decomposition of the probability of detecting a bird potentially exposed to sampling efforts into four separate probabilities. Primary inference methods are described and include distance sampling, multiple observers, time of detection, and repeated counts. The detection parameters estimated by these different approaches differ, leading to different interpretations of resulting estimates of density and abundance. Simultaneous use of combinations of these different inference approaches can not only lead to increased precision but also provide the ability to decompose components of the detection process. Recent efforts to test the efficacy of these different approaches using natural systems and a new bird radio test system provide sobering conclusions about the ability of observers to detect and localize birds in auditory surveys. Recent research is reported on efforts to deal with such potential sources of error as bird misclassification, measurement error, and density gradients. Methods for inference about spatial and temporal variation in avian abundance are outlined. Discussion topics include opinions about the need to estimate detection probability when drawing inference about avian abundance, methodological recommendations based on the current state of knowledge, and suggestions for future research.
Molar axis estimation from computed tomography images.
Dongxia Zhang; Yangzhou Gan; Zeyang Xia; Xinwen Zhou; Shoubin Liu; Jing Xiong; Guanglin Li
2016-08-01
Estimation of the tooth axis is needed for some clinical dental treatments. Existing methods require segmenting the tooth volume from Computed Tomography (CT) images and then estimating the axis from the volume. However, they may fail for molars because tooth segmentation from CT images is challenging, and current segmentation methods may produce poor results, especially for tilted molars, which causes the axis estimation to fail. To resolve this problem, this paper proposes a new method for molar axis estimation from CT images. The key innovation is that, instead of estimating the 3D axis of each molar from the segmented volume, the method estimates the 3D axis from two projection images. The method includes three steps. (1) The 3D images of each molar are projected onto two 2D image planes. (2) The molar contour is segmented and the contour's 2D axis is extracted in each 2D projection image; Principal Component Analysis (PCA) and a modified symmetry-axis detection algorithm are employed to extract the 2D axis from the segmented molar contour. (3) A 3D molar axis is obtained by combining the two 2D axes. Experimental results verified that the proposed method is effective in estimating the molar axis from CT images.
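Step (2)'s PCA-based axis extraction can be sketched as follows: the first principal component of the segmented contour points gives the in-plane axis direction. The symmetry-axis refinement and the 2D-to-3D combination of step (3) are omitted, and the function name is hypothetical.

```python
import numpy as np

def contour_axis_2d(points):
    """Principal axis of a segmented 2D contour via PCA: returns the
    contour centroid and the unit eigenvector of the covariance
    matrix with the largest eigenvalue. An illustrative sketch of
    the PCA step only."""
    pts = np.asarray(points, float)
    centroid = pts.mean(axis=0)
    centered = pts - centroid
    cov = centered.T @ centered / len(pts)
    vals, vecs = np.linalg.eigh(cov)
    return centroid, vecs[:, np.argmax(vals)]  # unit axis direction
```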
An Updated TRMM Composite Climatology of Tropical Rainfall and Its Validation
NASA Technical Reports Server (NTRS)
Wang, Jian-Jian; Adler, Robert F.; Huffman, George; Bolvin, David
2013-01-01
An updated 15-yr Tropical Rainfall Measuring Mission (TRMM) composite climatology (TCC) is presented and evaluated. This climatology is based on a combination of individual rainfall estimates made with data from the primary TRMM instruments: the TRMM Microwave Imager (TMI) and the Precipitation Radar (PR). This combination climatology of passive microwave retrievals, radar-based retrievals, and an algorithm using both instruments simultaneously provides a consensus TRMM-based estimate of mean precipitation. The dispersion of the three estimates, as indicated by the standard deviation sigma among the estimates, is presented as a measure of confidence in the final estimate and as an estimate of the uncertainty thereof. The procedures utilized by the compositing technique, including adjustments and quality-control measures, are described. The results give a mean value of the TCC of 4.3 mm day^-1 for the deep tropical ocean belt between 10 deg N and 10 deg S, with lower values outside that band. In general, the TCC values confirm ocean estimates from the Global Precipitation Climatology Project (GPCP) analysis, which is based on passive microwave results adjusted for sampling by infrared-based estimates. The pattern of uncertainty estimates shown by sigma is seen to be useful to indicate variations in confidence. Examples include differences between the eastern and western portions of the Pacific Ocean and high values in coastal and mountainous areas. Comparison of the TCC values (and the input products) to gauge analyses over land indicates the value of the radar-based estimates (small biases) and the limitations of the passive microwave algorithm (relatively large biases). Comparison with surface gauge information from western Pacific Ocean atolls shows a negative bias (16%) for all the TRMM products, although the representativeness of the atoll gauges of open-ocean rainfall is still in question.
The Economic Burden of Child Maltreatment in the United States And Implications for Prevention
Fang, Xiangming; Brown, Derek S.; Florence, Curtis; Mercy, James A.
2013-01-01
Objectives To present new estimates of the average lifetime costs per child maltreatment victim and aggregate lifetime costs for all new child maltreatment cases incurred in 2008 using an incidence-based approach. Methods This study used the best available secondary data to develop cost-per-case estimates. For each cost category, the paper used attributable costs whenever possible. For those categories for which attributable cost data were not available, costs were estimated as the product of the incremental effect of child maltreatment on a specific outcome multiplied by the estimated cost associated with that outcome. The estimate of the aggregate lifetime cost of child maltreatment in 2008 was obtained by multiplying per-victim lifetime cost estimates by the estimated cases of new child maltreatment in 2008. Results The estimated average lifetime cost per victim of nonfatal child maltreatment is $210,012 in 2010 dollars, including $32,648 in childhood health care costs; $10,530 in adult medical costs; $144,360 in productivity losses; $7,728 in child welfare costs; $6,747 in criminal justice costs; and $7,999 in special education costs. The estimated average lifetime cost per death is $1,272,900, including $14,100 in medical costs and $1,258,800 in productivity losses. The total lifetime economic burden resulting from new cases of fatal and nonfatal child maltreatment in the United States in 2008 is approximately $124 billion. In sensitivity analysis, the total burden is estimated to be as large as $585 billion. Conclusions Compared with other health problems, the burden of child maltreatment is substantial, indicating the importance of prevention efforts to address the high prevalence of child maltreatment. PMID:22300910
Decision rules for unbiased inventory estimates
NASA Technical Reports Server (NTRS)
Argentiero, P. D.; Koch, D.
1979-01-01
An efficient and accurate procedure for estimating inventories from remote sensing scenes is presented. In place of the conventional and expensive full-dimensional Bayes decision rule, a one-dimensional feature extraction and classification technique was employed. It is shown that this efficient decision rule can be used to develop unbiased inventory estimates and that for large sample sizes typical of satellite derived remote sensing scenes, resulting accuracies are comparable or superior to more expensive alternative procedures. Mathematical details of the procedure are provided in the body of the report and in the appendix. Results of a numerical simulation of the technique using statistics obtained from an observed LANDSAT scene are included. The simulation demonstrates the effectiveness of the technique in computing accurate inventory estimates.
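One standard way to obtain unbiased inventory (proportion) estimates from an inexpensive classifier is to invert the classifier's confusion matrix; the sketch below illustrates that idea under hypothetical two-class numbers, and is not necessarily the report's exact derivation.

```python
import numpy as np

def unbiased_proportions(raw_counts, confusion):
    """Correct raw classified pixel counts for misclassification.
    If confusion[i, j] = P(classified as class i | true class j),
    the expected observed proportions are confusion @ p_true, so an
    unbiased inventory solves that linear system. A sketch only."""
    p_obs = np.asarray(raw_counts, float)
    p_obs = p_obs / p_obs.sum()
    return np.linalg.solve(np.asarray(confusion, float), p_obs)

# Hypothetical two-class scene (crop vs. other): 52% of pixels were
# classified as crop, but the classifier is imperfect.
counts = [5200, 4800]
C = np.array([[0.9, 0.2],
              [0.1, 0.8]])
p_true = unbiased_proportions(counts, C)  # ~[0.457, 0.543]
```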
Identification and feedback control in structures with piezoceramic actuators
NASA Technical Reports Server (NTRS)
Banks, H. T.; Ito, K.; Wang, Y.
1992-01-01
In this lecture we give fundamental well-posedness results for a variational formulation of a class of damped second order partial differential equations with unbounded input or control coefficients. Included as special cases in this class are structures with piezoceramic actuators. We consider approximation techniques leading to computational methods in the context of both parameter estimation and feedback control problems for these systems. Rigorous convergence results for parameter estimates and feedback gains are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, A; Pasciak, A
Purpose: Skin dosimetry is important for fluoroscopically-guided interventions, as peak skin doses (PSD) that result in skin reactions can be reached during these procedures. The purpose of this study was to assess the accuracy of different indirect dose estimates and to determine if PSD can be calculated within ±50% for embolization procedures. Methods: PSD were measured directly using radiochromic film for 41 consecutive embolization procedures. Indirect dose metrics from the procedures were collected, including reference air kerma (RAK). Four different estimates of PSD were calculated and compared, along with RAK, to the measured PSD. The indirect estimates included a standard method, use of detailed information from the RDSR, and two simplified calculation methods. Indirect dosimetry was compared with direct measurements, including an analysis of the uncertainty associated with film dosimetry. Factors affecting the accuracy of the indirect estimates were examined. Results: PSD calculated with the standard calculation method were within ±50% for all 41 procedures. This was also true for a simplified method using a single source-to-patient distance (SPD) for all calculations. RAK was within ±50% for all but one procedure. Cases for which RAK or calculated PSD exhibited large differences from the measured PSD were analyzed, and two causative factors were identified: 'extreme' SPD and large contributions to RAK from rotational angiography or runs acquired at large gantry angles. When calculated uncertainty limits [-12.8%, 10%] were applied to directly measured PSD, most indirect PSD estimates remained within ±50% of the measured PSD. Conclusions: Using indirect dose metrics, PSD can be determined within ±50% for embolization procedures, and usually to within ±35%. RAK can be used without modification to set notification limits and substantial radiation dose levels. These results can be extended to similar procedures, including vascular and interventional oncology. Film dosimetry is likely an unnecessary effort for these types of procedures.
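A simplified version of the kind of indirect PSD calculation described (inverse-square scaling of RAK to the skin plane plus nominal correction factors) might look like the sketch below; the parameter names and factor values are typical-magnitude placeholders, not the study's method or numbers.

```python
def estimate_psd(rak_gy, spd_cm, ref_dist_cm=60.0,
                 backscatter=1.35, table_factor=0.85):
    """Indirect peak-skin-dose estimate from reference air kerma:
    scale RAK from the reference point distance to the actual
    source-to-patient distance (inverse square), then apply nominal
    backscatter and table-transmission factors. A hedged sketch with
    placeholder factor values."""
    return rak_gy * (ref_dist_cm / spd_cm) ** 2 * backscatter * table_factor

# Example: 2.4 Gy RAK with the skin 55 cm from the source.
psd = estimate_psd(2.4, 55.0)
```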
School Cost Functions: A Meta-Regression Analysis
ERIC Educational Resources Information Center
Colegrave, Andrew D.; Giles, Margaret J.
2008-01-01
The education cost literature includes econometric studies attempting to determine economies of scale, or estimate an optimal school or district size. Not only do their results differ, but the studies use dissimilar data, techniques, and models. To derive value from these studies requires that the estimates be made comparable. One method to do…
Be the Volume: A Classroom Activity to Visualize Volume Estimation
ERIC Educational Resources Information Center
Mikhaylov, Jessica
2011-01-01
A hands-on activity can help multivariable calculus students visualize surfaces and understand volume estimation. This activity can be extended to include the concepts of Fubini's Theorem and the visualization of the curves resulting from cross-sections of the surface. This activity uses students as pillars and a sheet or tablecloth for the…
The new version of EPA’s positive matrix factorization (EPA PMF) software, 5.0, includes three error estimation (EE) methods for analyzing factor analytic solutions: classical bootstrap (BS), displacement of factor elements (DISP), and bootstrap enhanced by displacement (BS-DISP)...
Children's mathematical performance: five cognitive tasks across five grades.
Moore, Alex M; Ashcraft, Mark H
2015-07-01
Children in elementary school, along with college adults, were tested on a battery of basic mathematical tasks, including digit naming, number comparison, dot enumeration, and simple addition or subtraction. Beyond cataloguing performance on these standard tasks in Grades 1 to 5, we also examined relationships among the tasks, including previously reported results on a number line estimation task. Accuracy and latency improved across grades for all tasks, and classic interaction patterns were found, for example, a speed-up of subitizing and counting, increasingly shallow slopes in number comparison, and progressive speeding of responses especially to larger addition and subtraction problems. Surprisingly, digit naming was faster than subitizing at all ages, arguing against a pre-attentive processing explanation for subitizing. Estimation accuracy and speed were strong predictors of children's addition and subtraction performance. Children who gave exponential responses on the number line estimation task were slower at counting in the dot enumeration task and had longer latencies on addition and subtraction problems. The results provided further support for the importance of estimation as an indicator of children's current and future mathematical expertise. Copyright © 2015 Elsevier Inc. All rights reserved.
Sensitivity of estimated muscle force in forward simulation of normal walking
Xiao, Ming; Higginson, Jill
2009-01-01
Generic muscle parameters are often used in muscle-driven simulations of human movement to estimate individual muscle forces and function. The results may not be valid since muscle properties vary from subject to subject. This study investigated the effect of using generic parameters in a muscle-driven forward simulation on muscle force estimation. We generated a normal walking simulation in OpenSim and examined the sensitivity of individual muscle forces to perturbations in muscle parameters, including the number of muscles, maximum isometric force, optimal fiber length and tendon slack length. We found that when changing the number of muscles included in the model, only the magnitude of the estimated muscle forces was affected. Our results also suggest it is especially important to use accurate values of tendon slack length and optimal fiber length for ankle plantarflexors and knee extensors. Changes in force production by one muscle were typically compensated for by changes in force production by muscles in the same functional muscle group, or by the antagonistic muscle group. Conclusions regarding muscle function based on simulations with generic musculoskeletal parameters should be interpreted with caution. PMID:20498485
Irrigation water demand: A meta-analysis of price elasticities
NASA Astrophysics Data System (ADS)
Scheierling, Susanne M.; Loomis, John B.; Young, Robert A.
2006-01-01
Metaregression models are estimated to investigate sources of variation in empirical estimates of the price elasticity of irrigation water demand. Elasticity estimates are drawn from 24 studies reported in the United States since 1963, including mathematical programming, field experiments, and econometric studies. The mean price elasticity is 0.48. Long-run elasticities, those that are most useful for policy purposes, are likely larger than the mean estimate. Empirical results suggest that estimates may be more elastic if they are derived from mathematical programming or econometric studies and calculated at a higher irrigation water price. Less elastic estimates are found to be derived from models based on field experiments and in the presence of high-valued crops.
Ertefaie, Ashkan; Flory, James H; Hennessy, Sean; Small, Dylan S
2017-06-15
Instrumental variable (IV) methods provide unbiased treatment effect estimation in the presence of unmeasured confounders under certain assumptions. To provide valid estimates of treatment effect, treatment effect confounders that are associated with the IV (IV-confounders) must be included in the analysis, and not including observations with missing values may lead to bias. Missing covariate data are particularly problematic when the probability that a value is missing is related to the value itself, which is known as nonignorable missingness. In such cases, imputation-based methods are biased. Using health-care provider preference as an IV method, we propose a 2-step procedure with which to estimate a valid treatment effect in the presence of baseline variables with nonignorable missing values. First, the provider preference IV value is estimated by performing a complete-case analysis using a random-effects model that includes IV-confounders. Second, the treatment effect is estimated using a 2-stage least squares IV approach that excludes IV-confounders with missing values. Simulation results are presented, and the method is applied to an analysis comparing the effects of sulfonylureas versus metformin on body mass index, where the variables baseline body mass index and glycosylated hemoglobin have missing values. Our result supports the association of sulfonylureas with weight gain. © The Author 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
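A minimal numpy sketch of the second step described above, two-stage least squares with a provider-preference instrument, is shown below. The simulated data, instrument strength, and effect size are illustrative, and the first-stage random-effects estimation of the preference IV is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

z = rng.normal(size=n)                  # provider-preference IV (from step 1)
u = rng.normal(size=n)                  # unmeasured confounder
treat = (0.8 * z + u + rng.normal(size=n)) > 0
y = 1.5 * treat + u + rng.normal(size=n)    # true treatment effect = 1.5

# Stage 1: regress treatment on the instrument (plus intercept).
Z = np.column_stack([np.ones(n), z])
treat_hat = Z @ np.linalg.lstsq(Z, treat.astype(float), rcond=None)[0]

# Stage 2: regress the outcome on the fitted treatment.
X = np.column_stack([np.ones(n), treat_hat])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
print(f"2SLS effect: {beta[1]:.3f} (truth 1.5)")

# Naive OLS is biased by the unmeasured confounder u.
Xn = np.column_stack([np.ones(n), treat.astype(float)])
print(f"naive OLS effect: {np.linalg.lstsq(Xn, y, rcond=None)[0][1]:.3f}")
```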
Li, Haojie; Graham, Daniel J
2016-08-01
This paper estimates the causal effect of 20mph zones on road casualties in London. Potential confounders in the key relationship of interest are included within outcome regression and propensity score models, and the models are then combined to form a doubly robust estimator. A total of 234 treated zones and 2844 potential control zones are included in the data sample. The propensity score model is used to select a viable control group which has common support in the covariate distributions. We compare the doubly robust estimates with those obtained using three other methods: inverse probability weighting, regression adjustment, and propensity score matching. The results indicate that 20mph zones have had a significant causal impact on road casualty reduction in both absolute and proportional terms. Copyright © 2016 Elsevier Ltd. All rights reserved.
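For readers unfamiliar with the doubly robust construction, the following is a minimal sketch of its AIPW form, which is consistent if either the outcome regression or the propensity score model is correctly specified. The synthetic zone-level covariates and the effect size are illustrative, not the paper's data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(2)
n = 4_000
x = rng.normal(size=(n, 3))                       # zone-level covariates
ps_true = 1 / (1 + np.exp(-(x @ [0.5, -0.5, 0.3])))
t = rng.random(n) < ps_true                       # treated ("20mph zone")
y = x @ [1.0, 0.5, -0.2] - 0.8 * t + rng.normal(size=n)  # true effect = -0.8

# Outcome regressions, fit separately in treated and control zones.
mu1 = LinearRegression().fit(x[t], y[t]).predict(x)
mu0 = LinearRegression().fit(x[~t], y[~t]).predict(x)

# Propensity score model.
e = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]

# Augmented inverse-probability-weighted (doubly robust) ATE.
ate = np.mean(t * (y - mu1) / e - (~t) * (y - mu0) / (1 - e) + mu1 - mu0)
print(f"doubly robust ATE: {ate:.3f} (truth -0.8)")
```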
Remote sensing in Iowa agriculture. [cropland inventory, soils, forestland, and crop diseases
NASA Technical Reports Server (NTRS)
Mahlstede, J. P. (Principal Investigator); Carlson, R. E.
1973-01-01
The author has identified the following significant results. Results include the estimation of forested and crop vegetation acreages using the ERTS-1 imagery. The methods used to achieve these estimates still require refinement, but the results appear promising. Practical applications would be directed toward achieving current land use inventories of these natural resources. These data are presently collected by sampling-type surveys. If ERTS-1 can observe this and area estimates can be determined accurately, then a step forward has been achieved. Cost-benefit relationships will have to be favorable. Problems still exist in these estimation techniques due to the diversity of the scene observed in the ERTS-1 imagery covering other parts of Iowa. This is due to the influence of topography and soils upon the adaptability of the vegetation to specific areas of the state. The state mosaic produced from ERTS-1 imagery shows these patterns very well. Research directed to acreage estimates is continuing.
Improving estimates of tree mortality probability using potential growth rate
Das, Adrian J.; Stephenson, Nathan L.
2015-01-01
Tree growth rate is frequently used to estimate mortality probability. Yet, growth metrics can vary in form, and the justification for using one over another is rarely clear. We tested whether a growth index (GI) that scales the realized diameter growth rate against the potential diameter growth rate (PDGR) would give better estimates of mortality probability than other measures. We also tested whether PDGR, being a function of tree size, might better correlate with the baseline mortality probability than direct measurements of size such as diameter or basal area. Using a long-term dataset from the Sierra Nevada, California, U.S.A., as well as existing species-specific estimates of PDGR, we developed growth–mortality models for four common species. For three of the four species, models that included GI, PDGR, or a combination of GI and PDGR were substantially better than models without them. For the fourth species, the models including GI and PDGR performed roughly as well as a model that included only the diameter growth rate. Our results suggest that using PDGR can improve our ability to estimate tree survival probability. However, in the absence of PDGR estimates, the diameter growth rate was the best empirical predictor of mortality, in contrast to assumptions often made in the literature.
The relative impact of baryons and cluster shape on weak lensing mass estimates of galaxy clusters
NASA Astrophysics Data System (ADS)
Lee, B. E.; Le Brun, A. M. C.; Haq, M. E.; Deering, N. J.; King, L. J.; Applegate, D.; McCarthy, I. G.
2018-05-01
Weak gravitational lensing depends on the integrated mass along the line of sight. Baryons contribute to the mass distribution of galaxy clusters and the resulting mass estimates from lensing analysis. We use the cosmo-OWLS suite of hydrodynamic simulations to investigate the impact of baryonic processes on the bias and scatter of weak lensing mass estimates of clusters. These estimates are obtained by fitting NFW profiles to mock data using MCMC techniques. In particular, we examine the difference in estimates between dark matter-only runs and those including various prescriptions for baryonic physics. We find no significant difference in the mass bias when baryonic physics is included, though the overall mass estimates are suppressed when feedback from AGN is included. For the lowest-mass systems for which a reliable mass can be obtained (M200 ≈ 2 × 10^14 M⊙), we find a bias of ≈ -10 per cent. The magnitude of the bias tends to decrease for higher mass clusters, consistent with no bias for the most massive clusters which have masses comparable to those found in the CLASH and HFF samples. For the lowest mass clusters, the mass bias is particularly sensitive to the fit radii and the limits placed on the concentration prior, rendering reliable mass estimates difficult. The scatter in mass estimates between the dark matter-only and the various baryonic runs is less than between different projections of individual clusters, highlighting the importance of triaxiality.
Multiple-rule bias in the comparison of classification rules
Yousefi, Mohammadmahdi R.; Hua, Jianping; Dougherty, Edward R.
2011-01-01
Motivation: There is growing discussion in the bioinformatics community concerning overoptimism of reported results. Two approaches contributing to overoptimism in classification are (i) the reporting of results on datasets for which a proposed classification rule performs well and (ii) the comparison of multiple classification rules on a single dataset that purports to show the advantage of a certain rule. Results: This article provides a careful probabilistic analysis of the second issue and the ‘multiple-rule bias’, resulting from choosing a classification rule having minimum estimated error on the dataset. It quantifies this bias corresponding to estimating the expected true error of the classification rule possessing minimum estimated error and it characterizes the bias from estimating the true comparative advantage of the chosen classification rule relative to the others by the estimated comparative advantage on the dataset. The analysis is applied to both synthetic and real data using a number of classification rules and error estimators. Availability: We have implemented in C code the synthetic data distribution model, classification rules, feature selection routines and error estimation methods. The code for multiple-rule analysis is implemented in MATLAB. The source code is available at http://gsp.tamu.edu/Publications/supplementary/yousefi11a/. Supplementary simulation results are also included. Contact: edward@ece.tamu.edu Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:21546390
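A small simulation makes the multiple-rule bias concrete: on each small dataset, choose the rule with minimum cross-validated error, then compare that winning estimate with the chosen rule's error on a large held-out set. The rules and sample sizes below are arbitrary choices for illustration, not those used in the article.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

rules = [LinearDiscriminantAnalysis(),
         KNeighborsClassifier(n_neighbors=3),
         DecisionTreeClassifier(max_depth=3, random_state=0)]

gaps = []
for rep in range(100):
    X, y = make_classification(n_samples=2_060, n_features=10,
                               random_state=rep)
    Xtr, ytr, Xte, yte = X[:60], y[:60], X[60:], y[60:]
    # Estimated error of each rule on the small sample (5-fold CV).
    est = [1 - cross_val_score(r, Xtr, ytr, cv=5).mean() for r in rules]
    best = int(np.argmin(est))
    # "True" error of the chosen rule, from the large held-out set.
    true = 1 - rules[best].fit(Xtr, ytr).score(Xte, yte)
    gaps.append(true - est[best])

print(f"mean optimism of the winning rule: {np.mean(gaps):+.3f}")
```

The positive mean gap is the bias the article quantifies: the minimum of several noisy error estimates systematically understates the chosen rule's true error.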
Pham, Ba'; Klassen, Terry P; Lawson, Margaret L; Moher, David
2005-08-01
To assess whether language of publication restrictions impact the estimates of an intervention's effectiveness, whether such impact is similar for conventional medicine and complementary medicine interventions, and whether the results are influenced by publication bias and statistical heterogeneity. We set out to examine the extent to which including reports of randomized controlled trials (RCTs) in languages other than English (LOE) influences the results of systematic reviews, using a broad dataset of 42 language-inclusive systematic reviews, involving 662 RCTs, including both conventional medicine (CM) and complementary and alternative medicine (CAM) interventions. For CM interventions, language-restricted systematic reviews, compared with language-inclusive ones, did not introduce biased results in terms of estimates of intervention effectiveness (random-effects ratio of odds ratios, ROR = 1.02; 95% CI = 0.83-1.26). For CAM interventions, however, language-restricted systematic reviews resulted in a 63% smaller protective effect estimate than language-inclusive reviews (random-effects ROR = 1.63; 95% CI = 1.03-2.60). Language restrictions do not change the results of CM systematic reviews but do substantially alter the results of CAM systematic reviews. These findings are robust even after sensitivity analyses and do not appear to be influenced by statistical heterogeneity or publication bias.
Intertemporal consumption with directly measured welfare functions and subjective expectations
Kapteyn, Arie; Kleinjans, Kristin J.; van Soest, Arthur
2010-01-01
Euler equation estimation of intertemporal consumption models requires many, often unverifiable assumptions. These include assumptions on expectations and preferences. We aim at reducing some of these requirements by using direct subjective information on respondents’ preferences and expectations. The results suggest that individually measured welfare functions and expectations have predictive power for the variation in consumption across households. Furthermore, estimates of the intertemporal elasticity of substitution based on the estimated welfare functions are plausible and of a similar order of magnitude as other estimates found in the literature. The model favored by the data only requires cross-section data for estimation. PMID:20442798
Channel estimation based on quantized MMP for FDD massive MIMO downlink
NASA Astrophysics Data System (ADS)
Guo, Yao-ting; Wang, Bing-he; Qu, Yi; Cai, Hua-jie
2016-10-01
In this paper, we consider channel estimation for Massive MIMO systems operating in frequency division duplexing mode. By exploiting the sparsity of propagation paths in the Massive MIMO channel, we develop a compressed sensing (CS) based channel estimator which can reduce the pilot overhead. Compared with conventional least squares (LS) and linear minimum mean square error (LMMSE) estimation, the proposed algorithm, based on quantized multipath matching pursuit (MMP), reduces the pilot overhead and performs better than other CS algorithms. The simulation results demonstrate the advantage of the proposed algorithm over various existing methods including the LS, LMMSE, CoSaMP and conventional MMP estimators.
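For orientation, here is a minimal sketch of orthogonal matching pursuit (OMP), the single-path greedy search that MMP generalizes to a tree of candidate supports. The pilot matrix and sparse channel below are synthetic stand-ins, and quantization of the measurements is omitted.

```python
import numpy as np

rng = np.random.default_rng(4)
n_pilot, n_taps, sparsity = 32, 128, 4

A = rng.normal(size=(n_pilot, n_taps)) / np.sqrt(n_pilot)   # pilot matrix
h = np.zeros(n_taps)                                        # sparse channel
h[rng.choice(n_taps, sparsity, replace=False)] = rng.normal(size=sparsity)
y = A @ h + 0.01 * rng.normal(size=n_pilot)                 # received pilots

# Greedy pursuit: repeatedly pick the column best correlated with the residual.
support, r = [], y.copy()
for _ in range(sparsity):
    support.append(int(np.argmax(np.abs(A.T @ r))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    r = y - A[:, support] @ coef

h_hat = np.zeros(n_taps)
h_hat[support] = coef
print(f"relative error: {np.linalg.norm(h_hat - h) / np.linalg.norm(h):.3e}")
```

The pilot-overhead saving comes from n_pilot being much smaller than n_taps, which is only possible because the channel is sparse.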
NASA Astrophysics Data System (ADS)
Zhao, Zhanfeng; Illman, Walter A.
2018-04-01
Previous studies have shown that geostatistics-based transient hydraulic tomography (THT) is robust for subsurface heterogeneity characterization through the joint inverse modeling of multiple pumping tests. However, the hydraulic conductivity (K) and specific storage (Ss) estimates can be smooth or even erroneous for areas where pumping/observation densities are low. This renders the imaging of interlayer and intralayer heterogeneity of highly contrasting materials including their unit boundaries difficult. In this study, we further test the performance of THT by utilizing existing and newly collected pumping test data of longer durations that showed drawdown responses in both aquifer and aquitard units at a field site underlain by a highly heterogeneous glaciofluvial deposit. The robust performance of the THT is highlighted through the comparison of different degrees of model parameterization including: (1) the effective parameter approach; (2) the geological zonation approach relying on borehole logs; and (3) the geostatistical inversion approach considering different prior information (with/without geological data). Results reveal that the simultaneous analysis of eight pumping tests with the geostatistical inverse model yields the best results in terms of model calibration and validation. We also find that the joint interpretation of long-term drawdown data from aquifer and aquitard units is necessary in mapping their full heterogeneous patterns including intralayer variabilities. Moreover, as geological data are included as prior information in the geostatistics-based THT analysis, the estimated K values increasingly reflect the vertical distribution patterns of permeameter-estimated K in both aquifer and aquitard units. Finally, the comparison of various THT approaches reveals that differences in the estimated K and Ss tomograms result in significantly different transient drawdown predictions at observation ports.
Methods for measuring utilization of mental health services in two epidemiologic studies
NOVINS, DOUGLAS K.; BEALS, JANETTE; CROY, CALVIN; MANSON, SPERO M.
2015-01-01
Objectives of Study Psychiatric epidemiologic studies often include two or more sets of questions regarding service utilization, but the agreement across these different questions and the factors associated with their endorsement have not been examined. The objectives of this study were to describe the agreement of different sets of mental health service utilization questions that were included in the American Indian Service Utilization Psychiatric Epidemiology Risk and Protective Factors Project (AI-SUPERPFP), and compare the results to similar questions included in the baseline National Comorbidity Survey (NCS). Methods Responses to service utilization questions by 2878 AI-SUPERPFP and 5877 NCS participants were examined by calculating estimates of service use and agreement (κ) across the different sets of questions. Logistic regression models were developed to identify factors associated with endorsement of specific sets of questions. Results In both studies, estimates of mental health service utilization varied across the different sets of questions. Agreement across the different question sets was marginal to good (κ = 0.27–0.69). Characteristics of identified service users varied across the question sets. Limitations Neither survey included data to examine the validity of participant responses to service utilization questions. Recommendations for Further Research Question wording and placement appear to impact estimates of service utilization in psychiatric epidemiologic studies. Given the importance of these estimates for policy-making, further research into the validity of survey responses as well as impacts of question wording and context on rates of service utilization is warranted. PMID:18767205
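The κ values reported above can be computed directly from paired endorsements of two question sets; a minimal sketch for the binary case:

```python
import numpy as np

def cohens_kappa(a, b):
    """Chance-corrected agreement between two binary response vectors."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    po = np.mean(a == b)                                        # observed
    pe = a.mean() * b.mean() + (1 - a.mean()) * (1 - b.mean())  # by chance
    return (po - pe) / (1 - pe)

# Hypothetical endorsements of two differently worded service-use questions.
q1 = [1, 1, 0, 0, 1, 0, 1, 0, 0, 0]
q2 = [1, 0, 0, 0, 1, 0, 1, 1, 0, 0]
print(f"kappa = {cohens_kappa(q1, q2):.2f}")
```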
Overcoming bias in estimating the volume-outcome relationship.
Tsai, Alexander C; Votruba, Mark; Bridges, John F P; Cebul, Randall D
2006-02-01
To examine the effect of hospital volume on 30-day mortality for patients with congestive heart failure (CHF) using administrative and clinical data in conventional regression and instrumental variables (IV) estimation models. The primary data consisted of longitudinal information on comorbid conditions, vital signs, clinical status, and laboratory test results for 21,555 Medicare-insured patients aged 65 years and older hospitalized for CHF in northeast Ohio in 1991-1997. The patient was the primary unit of analysis. We fit a linear probability model to the data to assess the effects of hospital volume on patient mortality within 30 days of admission. Both administrative and clinical data elements were included for risk adjustment. Linear distances between patients and hospitals were used to construct the instrument, which was then used to assess the endogeneity of hospital volume. When only administrative data elements were included in the risk adjustment model, the estimated volume-outcome effect was statistically significant (p=.029) but small in magnitude. The estimate was markedly attenuated in magnitude and statistical significance when clinical data were added to the model as risk adjusters (p=.39). IV estimation shifted the estimate in a direction consistent with selective referral, but we were unable to reject the consistency of the linear probability estimates. Use of only administrative data for volume-outcomes research may generate spurious findings. The IV analysis further suggests that conventional estimates of the volume-outcome relationship may be contaminated by selective referral effects. Taken together, our results suggest that efforts to concentrate hospital-based CHF care in high-volume hospitals may not reduce mortality among elderly patients.
Effects of Directed Energy Weapons
1994-01-01
[Fragmentary excerpt: notes on the law of conservation of energy; an estimate of the energy required to brew a 6 oz cup of coffee; and use of the thermal diffusivity of the target material (see Figure 1–5) to estimate the threshold for melting under a laser of intensity S.]
Crustal dynamics project data analysis fixed station VLBI geodetic results
NASA Technical Reports Server (NTRS)
Ryan, J. W.; Ma, C.
1985-01-01
The Goddard VLBI group reports the results of analyzing the fixed observatory VLBI data available to the Crustal Dynamics Project through the end of 1984. All POLARIS/IRIS full-day data are included. The mobile site at Platteville, Colorado is also included since its occupation bears on the study of plate stability. Data from 1980 through 1984 were used to obtain the catalog of site and radio source positions labeled S284C. Using this catalog two types of one-day solutions were made: (1) to estimate site and baseline motions; and (2) to estimate Earth rotation parameters. A priori Earth rotation parameters were interpolated to the epoch of each observation from BIH Circular D.
A TRMM-Based System for Real-Time Quasi-Global Merged Precipitation Estimates
NASA Technical Reports Server (NTRS)
Starr, David OC. (Technical Monitor); Huffman, G. J.; Adler, R. F.; Stocker, E. F.; Bolvin, D. T.; Nelkin, E. J.
2002-01-01
A new processing system has been developed to combine IR and microwave data into 0.25 degree x 0.25 degree gridded precipitation estimates in near-real time over the latitude band plus or minus 50 degrees. Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI) precipitation estimates are used to calibrate Special Sensor Microwave/Imager (SSM/I) estimates, and Advanced Microwave Sounding Unit (AMSU) and Advanced Microwave Scanning Radiometer (AMSR) estimates, when available. The merged microwave estimates are then used to create a calibrated IR estimate in a Probability-Matched-Threshold approach for each individual hour. The microwave and IR estimates are combined for each 3-hour interval. Early results will be shown, including typical tropical and extratropical storm evolution and examples of the diurnal cycle. Major issues will be discussed, including the choice of IR algorithm, the approach for merging the IR and microwave estimates, extension to higher latitudes, retrospective processing back to 1999, and extension to the GPCP One-Degree Daily product (for which the authors are responsible). The work described here provides one approach to using data from the future NASA Global Precipitation Measurement program, which is designed to provide full global coverage by low-orbit passive microwave satellites every three hours beginning around 2008.
Reaeration equations derived from U.S. geological survey database
Melching, C.S.; Flores, H.E.
1999-01-01
Accurate estimation of the reaeration-rate coefficient (K2) is extremely important for waste-load allocation. Currently available K2 estimation equations generally yield poor estimates when applied to stream conditions different from those for which the equations were derived because they were derived from small databases composed of potentially highly inaccurate measurements. A large data set of K2 measurements made with tracer-gas methods was compiled from U.S. Geological Survey studies. This compilation included 493 reaches on 166 streams in 23 states. Careful screening to detect and eliminate erroneous measurements reduced the data set to 371 measurements. These measurements were divided into four subgroups on the basis of flow regime (channel control or pool and riffle) and stream scale (discharge greater than or less than 0.556 m3/s). Multiple linear regression in logarithms was applied to relate K2 to 12 stream hydraulic and water-quality characteristics. The resulting best-estimation equations had the form of semiempirical equations that included the rate of energy dissipation and discharge or depth and width as variables. For equation verification, a data set of K2 measurements made with tracer-gas procedures by other agencies was compiled from the literature. This compilation included 127 reaches on at least 24 streams in at least seven states. The standard error of estimate obtained when applying the developed equations to the U.S. Geological Survey data set ranged from 44 to 61%, whereas the standard error of estimate was 78% when applied to the verification data set.
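A minimal sketch of "multiple linear regression in logarithms" of the kind described, using synthetic data with an energy-dissipation proxy and depth as predictors; the coefficients and the percent-error conversion are illustrative, not the study's fitted equations.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 371
slope_v = rng.lognormal(-4.0, 0.8, n)   # slope x velocity (energy dissipation)
depth = rng.lognormal(0.0, 0.5, n)      # mean depth (m)
k2 = np.exp(1.0 + 0.7 * np.log(slope_v) - 0.6 * np.log(depth)
            + rng.normal(0.0, 0.3, n))  # synthetic "measured" K2 (1/day)

# Regression in logarithms: log K2 = b0 + b1*log(SV) + b2*log(H).
X = np.column_stack([np.ones(n), np.log(slope_v), np.log(depth)])
b, *_ = np.linalg.lstsq(X, np.log(k2), rcond=None)
resid = np.log(k2) - X @ b
se_pct = 100 * np.sqrt(np.exp(resid.var()) - 1)   # rough percent error
print(f"fitted exponents {b[1]:.2f}, {b[2]:.2f}; standard error ~{se_pct:.0f}%")
```

Exponentiating the fitted relation recovers a semiempirical power-law equation of the form the study reports.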
Endogenous pain modulation in chronic orofacial pain: a systematic review and meta-analysis.
Moana-Filho, Estephan J; Herrero Babiloni, Alberto; Theis-Mahon, Nicole R
2018-06-15
Abnormal endogenous pain modulation was suggested as a potential mechanism for chronic pain, i.e., increased pain facilitation and/or impaired pain inhibition underlying symptom manifestation. Endogenous pain modulation function can be tested using psychophysical methods such as temporal summation of pain (TSP) and conditioned pain modulation (CPM), which assess pain facilitation and inhibition, respectively. Several studies have investigated endogenous pain modulation function in patients with nonparoxysmal orofacial pain (OFP) and reported mixed results. This study aimed to provide, through a qualitative and quantitative synthesis of the available literature, overall estimates for TSP/CPM responses in patients with OFP relative to controls. MEDLINE, Embase, and the Cochrane databases were searched, and references were screened independently by 2 raters. Twenty-six studies were included for qualitative review, and 22 studies were included for meta-analysis. Traditional meta-analysis and robust variance estimation were used to synthesize overall estimates for standardized mean difference. The overall standardized estimate for TSP was 0.30 (95% confidence interval: 0.11-0.49; P = 0.002), with moderate between-study heterogeneity (Q [df = 17] = 41.8, P = 0.001; I² = 70.2%). Conditioned pain modulation's estimated overall effect size was large but above the significance threshold (estimate = 1.36; 95% confidence interval: -0.09 to 2.81; P = 0.066), with very large heterogeneity (Q [df = 8] = 108.3, P < 0.001; I² = 98.0%). Sensitivity analyses did not affect the overall estimate for TSP; for CPM, the overall estimate became significant if specific random-effect models were used or if the most influential study was removed. Publication bias was not present for TSP studies, whereas it substantially influenced CPM's overall estimate. These results suggest increased pain facilitation and a trend toward pain inhibition impairment in patients with nonparoxysmal OFP.
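The pooled values above are standard random-effects estimates. A minimal DerSimonian-Laird sketch showing how an overall standardized mean difference, its standard error, and I² follow from per-study effects; the inputs are illustrative, not the review's data.

```python
import numpy as np

def random_effects_smd(d, v):
    """DerSimonian-Laird random-effects pooling.

    d: per-study standardized mean differences; v: their variances.
    Returns the pooled estimate, its standard error, and I^2 (%).
    """
    d, v = np.asarray(d, float), np.asarray(v, float)
    w = 1 / v
    d_fixed = np.sum(w * d) / np.sum(w)
    q = np.sum(w * (d - d_fixed) ** 2)          # Cochran's Q
    df = len(d) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)               # between-study variance
    w_star = 1 / (v + tau2)
    pooled = np.sum(w_star * d) / np.sum(w_star)
    se = np.sqrt(1 / np.sum(w_star))
    i2 = max(0.0, 100 * (q - df) / q) if q > 0 else 0.0
    return pooled, se, i2

d = [0.2, 0.5, 0.1, 0.4, 0.3]          # hypothetical TSP-like effects
v = [0.02, 0.05, 0.03, 0.04, 0.02]
print("pooled SMD %.2f (SE %.2f), I2 = %.0f%%" % random_effects_smd(d, v))
```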
Bonilla, Manuel G.; Mark, Robert K.; Lienkaemper, James J.
1984-01-01
In order to refine correlations of surface-wave magnitude, fault rupture length at the ground surface, and fault displacement at the surface by including the uncertainties in these variables, the existing data were critically reviewed and a new data base was compiled. Earthquake magnitudes were redetermined as necessary to make them as consistent as possible with the Gutenberg methods and results, which make up much of the data base. Measurement errors were estimated for the three variables for 58 moderate to large shallow-focus earthquakes. Regression analyses were then made utilizing the estimated measurement errors. The regression analysis demonstrates that the relations among the variables magnitude, length, and displacement are stochastic in nature. The stochastic variance, introduced in part by incomplete surface expression of seismogenic faulting, variation in shear modulus, and regional factors, dominates the estimated measurement errors. Thus, it is appropriate to use ordinary least squares for the regression models, rather than regression models based upon an underlying deterministic relation in which the variance results primarily from measurement errors. Significant differences exist in correlations of certain combinations of length, displacement, and magnitude when events are grouped by fault type or by region, including attenuation regions delineated by Evernden and others. Estimates of the magnitude and the standard deviation of the magnitude of a prehistoric or future earthquake associated with a fault can be made by correlating Ms with the logarithms of rupture length, fault displacement, or the product of length and displacement. Fault rupture area could be reliably estimated for about 20 of the events in the data set. Regression of Ms on rupture area did not result in a marked improvement over regressions that did not involve rupture area. Because no subduction-zone earthquakes are included in this study, the reported results do not apply to such zones.
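The correlations described take the familiar form Ms = a + b·log10(L). A minimal sketch of how such a regression is applied prospectively to a fault; the coefficients and standard deviation below are placeholders, not the study's fitted values.

```python
import numpy as np

# Placeholder coefficients for Ms = a + b*log10(rupture length in km);
# s is the standard deviation of Ms about the regression.
a, b, s = 6.0, 0.7, 0.3

def magnitude_from_rupture_length(length_km):
    """Expected magnitude and its standard deviation for a given length."""
    return a + b * np.log10(length_km), s

ms, sd = magnitude_from_rupture_length(40.0)
print(f"estimated Ms = {ms:.2f} +/- {sd:.2f}")
```

Because the relations are stochastic rather than deterministic, the standard deviation s, not just the central estimate, belongs in any hazard application.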
Accounting for Incomplete Species Detection in Fish Community Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
McManamay, Ryan A; Orth, Dr. Donald J; Jager, Yetta
2013-01-01
Riverine fish assemblages are heterogeneous and very difficult to characterize with a one-size-fits-all approach to sampling. Furthermore, detecting changes in fish assemblages over time requires accounting for variation in sampling designs. We present a modeling approach that permits heterogeneous sampling by accounting for site and sampling covariates (including method) in a model-based framework for estimation (versus a sampling-based framework). We snorkeled during three surveys and electrofished during a single survey in a suite of delineated habitats stratified by reach types. We developed single-species occupancy models to determine covariates influencing patch occupancy and species detection probabilities, whereas community occupancy models estimated species richness in light of incomplete detections. For most species, information-theoretic criteria showed higher support for models that included patch size and reach as covariates of occupancy. In addition, models including patch size and sampling method as covariates of detection probabilities also had higher support. Detection probability estimates for snorkeling surveys were higher for larger non-benthic species, whereas electrofishing was more effective at detecting smaller benthic species. The number of sites and sampling occasions required to accurately estimate occupancy varied among fish species. For rare benthic species, our results suggested that a higher number of occasions, and especially the addition of electrofishing, may be required to improve detection probabilities and obtain accurate occupancy estimates. Community models suggested that richness was 41% higher than the number of species actually observed and that the addition of an electrofishing survey increased estimated richness by 13%. These results can be useful to future fish assemblage monitoring efforts by informing sampling designs, such as site selection (e.g., stratifying based on patch size) and determining the effort required (e.g., number of sites versus occasions).
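A minimal sketch of the single-species occupancy likelihood that underlies models like these: each site is occupied with probability psi and, if occupied, the species is detected on each survey with probability p. Covariates are omitted, and the binomial coefficient is dropped because it does not affect the maximum likelihood estimates.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(10)
n_sites, n_occ = 120, 4
psi_true, p_true = 0.6, 0.3
z = rng.random(n_sites) < psi_true                        # latent occupancy
y = (rng.random((n_sites, n_occ)) < p_true) & z[:, None]  # detection history

def nll(theta):
    psi, p = 1 / (1 + np.exp(-theta))       # logit scale -> probabilities
    det = y.sum(axis=1)
    # Occupied-and-observed term plus the all-zero-history term.
    lik = psi * p**det * (1 - p)**(n_occ - det) + (det == 0) * (1 - psi)
    return -np.log(lik).sum()

res = minimize(nll, x0=[0.0, 0.0])
psi_hat, p_hat = 1 / (1 + np.exp(-res.x))
print(f"psi = {psi_hat:.2f}, p = {p_hat:.2f}")
```

The naive species count corresponds to assuming p = 1; estimating p jointly is what lets the community models above report richness well above the raw observed counts.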
Observed galaxy number counts on the lightcone up to second order: I. Main result
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bertacca, Daniele; Maartens, Roy; Clarkson, Chris, E-mail: daniele.bertacca@gmail.com, E-mail: roy.maartens@gmail.com, E-mail: chris.clarkson@gmail.com
2014-09-01
We present the galaxy number overdensity up to second order in redshift space on cosmological scales for a concordance model. The result contains all general relativistic effects up to second order that arise from observing on the past light cone, including all redshift effects, lensing distortions from convergence and shear, and contributions from velocities, Sachs-Wolfe, integrated SW and time-delay terms. This result will be important for accurate calculation of the bias on estimates of non-Gaussianity and on precision parameter estimates, introduced by nonlinear projection effects.
Human Body 3D Posture Estimation Using Significant Points and Two Cameras
Juang, Chia-Feng; Chen, Teng-Chang; Du, Wei-Chin
2014-01-01
This paper proposes a three-dimensional (3D) human posture estimation system that locates 3D significant body points based on 2D body contours extracted from two cameras without using any depth sensors. The 3D significant body points that are located by this system include the head, the center of the body, the tips of the feet, the tips of the hands, the elbows, and the knees. First, a linear support vector machine- (SVM-) based segmentation method is proposed to distinguish the human body from the background in red, green, and blue (RGB) color space. The SVM-based segmentation method uses not only normalized color differences but also the included angle between pixels in the current frame and the background in order to reduce shadow influence. After segmentation, 2D significant points in each of the two extracted images are located. A significant point volume matching (SPVM) method is then proposed to reconstruct the 3D significant body point locations by using 2D posture estimation results. Experimental results show that the proposed SVM-based segmentation method shows better performance than other gray level- and RGB-based segmentation approaches. This paper also shows the effectiveness of the 3D posture estimation results in different postures. PMID:24883422
A New Formulation of the Filter-Error Method for Aerodynamic Parameter Estimation in Turbulence
NASA Technical Reports Server (NTRS)
Grauer, Jared A.; Morelli, Eugene A.
2015-01-01
A new formulation of the filter-error method for estimating aerodynamic parameters in nonlinear aircraft dynamic models during turbulence was developed and demonstrated. The approach uses an estimate of the measurement noise covariance to identify the model parameters, their uncertainties, and the process noise covariance, in a relaxation method analogous to the output-error method. Prior information on the model parameters and uncertainties can be supplied, and a post-estimation correction to the uncertainty was included to account for colored residuals not considered in the theory. No tuning parameters, needing adjustment by the analyst, are used in the estimation. The method was demonstrated in simulation using the NASA Generic Transport Model, then applied to flight data from the subscale T-2 jet-engine transport aircraft. Modeling results in different levels of turbulence were compared with results from time-domain output-error and frequency-domain equation-error methods to demonstrate the effectiveness of the approach.
Estimating the parasitaemia of Plasmodium falciparum: experience from a national EQA scheme
2013-01-01
Background To examine performance of the identification and estimation of percentage parasitaemia of Plasmodium falciparum in stained blood films distributed in the UK National External Quality Assessment Scheme (UKNEQAS) Blood Parasitology Scheme. Methods Analysis of performance for the diagnosis and estimation of the percentage parasitaemia of P. falciparum in Giemsa-stained thin blood films was made over a 15-year period to look for trends in performance. Results An average of 25% of participants failed to estimate the percentage parasitaemia, 17% overestimated and 8% underestimated, whilst 5% misidentified the malaria species present. Conclusions Although the results achieved by participants for other blood parasites have shown an overall improvement, the level of performance for estimation of the parasitaemia of P. falciparum remains unchanged over 15 years. Possible reasons include incorrect calculation, not examining the correct part of the film and not examining an adequate number of microscope fields. PMID:24261625
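The quantity participants were asked to report is the standard thin-film calculation, a sketch of which is:

```python
def percent_parasitaemia(parasitized_rbc, total_rbc):
    """Thin-film estimate: parasitized red cells per red cells examined."""
    return 100.0 * parasitized_rbc / total_rbc

# e.g. 57 parasitized cells among 2,000 RBCs counted over several fields
print(f"{percent_parasitaemia(57, 2000):.1f}%")   # -> 2.9%
```

The failure modes listed in the conclusions map directly onto this ratio: miscounting either term, or counting too few fields for the denominator to be representative.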
Alomari, Mahmoud A; Shqair, Dana M; Khabour, Omar F; Alawneh, Khaldoon; Nazzal, Mahmoud I; Keewan, Esraa F
2012-01-01
Exercise testing is associated with barriers that prevent frequent measurement of cardiovascular (CV) endurance (CVE). A recent nonexercise model (NM) is claimed to estimate CVE without exercise. This study examined relationships between NM-estimated CVE and measures of obesity, physical fitness (PF), blood glucose and lipids, and circulation in 188 asymptomatic young (18-40 years) adults. Estimated CVE correlated favorably with measures of PF (r = 0.4-0.5), including handgrip strength, distance in the 6-minute walk test, and shoulder press and leg extension strengths; obesity (r = 0.2-0.7), including % body fat, body water content, fat mass, muscle mass, BMI, waist and hip circumferences, and waist/hip ratio; circulation (r = 0.2-0.3), including blood pressures, blood flow, and vascular resistance; and blood profile (r = 0.2-0.5), including glucose, total cholesterol, LDL-C, HDL-C, and triglycerides. Additionally, differences (P < 0.05) in the examined measures were found between the high, average, and low estimated CVE groups. Most of these measures are CV disease risk factors and metabolic syndrome components. These results strengthen the scientific case for the NM, which can thus be used in clinical and nonclinical settings.
Direct process estimation from tomographic data using artificial neural systems
NASA Astrophysics Data System (ADS)
Mohamad-Saleh, Junita; Hoyle, Brian S.; Podd, Frank J.; Spink, D. M.
2001-07-01
The paper deals with the goal of component fraction estimation in multicomponent flows, a critical measurement in many processes. Electrical capacitance tomography (ECT) is a well-researched sensing technique for this task, due to its low cost, non-intrusion, and fast response. However, typical systems, which include practicable real-time reconstruction algorithms, give inaccurate results, and existing approaches to direct component fraction measurement are flow-regime dependent. In the investigation described, an artificial neural network approach is used to directly estimate the component fractions in gas-oil, gas-water, and gas-oil-water flows from ECT measurements. A 2D finite-element electric field model of a 12-electrode ECT sensor is used to simulate ECT measurements of various flow conditions. The raw measurements are reduced to a mutually independent set using principal components analysis and used with their corresponding component fractions to train multilayer feed-forward neural networks (MLFFNNs). The trained MLFFNNs are tested with patterns consisting of unlearned ECT simulated and plant measurements. Results included in the paper have a mean absolute error of less than 1% for the estimation of various multicomponent fractions of the permittivity distribution. They are also shown to give improved component fraction estimation compared to a well-known direct ECT method.
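A minimal sketch of the PCA-plus-feed-forward-network chain described above, using synthetic stand-in "measurements" rather than finite-element ECT simulations; a 12-electrode sensor yields 66 independent electrode-pair measurements, and everything else here is an illustrative assumption.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(9)
n, n_meas = 2_000, 66                # 12 electrodes -> 66 capacitance pairs
frac = rng.random(n)                 # true component fraction per synthetic flow
sens = rng.normal(size=n_meas)       # fake linearized sensitivity map
M = np.outer(frac, sens) + 0.02 * rng.normal(size=(n, n_meas))

model = make_pipeline(
    PCA(n_components=10),            # reduce to a mutually independent set
    MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0))
model.fit(M[:1500], frac[:1500])
err = np.abs(model.predict(M[1500:]) - frac[1500:])
print(f"mean absolute error: {err.mean():.4f}")
```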
The Response of Abortion Demand to Changes in Abortion Costs
ERIC Educational Resources Information Center
Medoff, Marshall H.
2008-01-01
This study uses pooled cross-section time-series data, over the years 1982, 1992 and 2000, to estimate the impact of various restrictive abortion laws on the demand for abortion. This study complements and extends prior research by explicitly including the price of obtaining an abortion in the estimation. The empirical results show that the real…
The Biasing Effects of Unmodeled ARMA Time Series Processes on Latent Growth Curve Model Estimates
ERIC Educational Resources Information Center
Sivo, Stephen; Fan, Xitao; Witta, Lea
2005-01-01
The purpose of this study was to evaluate the robustness of estimated growth curve models when there is stationary autocorrelation among manifest variable errors. The results suggest that when, in practice, growth curve models are fitted to longitudinal data, alternative rival hypotheses to consider would include growth models that also specify…
Increasing precision of turbidity-based suspended sediment concentration and load estimates.
Jastram, John D; Zipper, Carl E; Zelazny, Lucian W; Hyer, Kenneth E
2010-01-01
Turbidity is an effective tool for estimating and monitoring suspended sediments in aquatic systems. Turbidity can be measured in situ remotely and at fine temporal scales as a surrogate for suspended sediment concentration (SSC), providing opportunity for a more complete record of SSC than is possible with physical sampling approaches. However, there is variability in turbidity-based SSC estimates and in sediment loadings calculated from those estimates. This study investigated the potential to improve turbidity-based SSC, and by extension the resulting sediment loading estimates, by incorporating hydrologic variables that can be monitored remotely and continuously (typically 15-min intervals) into the SSC estimation procedure. On the Roanoke River in southwestern Virginia, hydrologic stage, turbidity, and other water-quality parameters were monitored with in situ instrumentation; suspended sediments were sampled manually during elevated turbidity events; samples were analyzed for SSC and physical properties including particle-size distribution and organic C content; and rainfall was quantified by geologic source area. The study identified physical properties of the suspended-sediment samples that contribute to SSC estimation variance and hydrologic variables that explained variability of those physical properties. Results indicated that the inclusion of any of the measured physical properties in turbidity-based SSC estimation models reduces unexplained variance. Further, the use of hydrologic variables to represent these physical properties, along with turbidity, resulted in a model, relying solely on data collected remotely and continuously, that estimated SSC with less variance than a conventional turbidity-based univariate model, allowing a more precise estimate of sediment loading. Modeling results are consistent with known mechanisms governing sediment transport in hydrologic systems.
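A minimal sketch of the improvement described: adding a continuously monitored hydrologic variable (stage, as a placeholder) to a log-log turbidity rating reduces the unexplained variance in SSC. The data and coefficients are synthetic.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 300
turb = rng.lognormal(3.0, 0.8, n)        # turbidity
stage = rng.lognormal(0.5, 0.3, n)       # hydrologic stage (m)
ssc = np.exp(0.4 + 0.9 * np.log(turb) + 0.5 * np.log(stage)
             + rng.normal(0.0, 0.2, n))  # synthetic SSC (mg/L)

# Univariate turbidity rating versus the multivariate model.
X1 = np.column_stack([np.ones(n), np.log(turb)])
X2 = np.column_stack([np.ones(n), np.log(turb), np.log(stage)])
for name, X in [("turbidity only", X1), ("turbidity + stage", X2)]:
    b, *_ = np.linalg.lstsq(X, np.log(ssc), rcond=None)
    rmse = np.sqrt(np.mean((np.log(ssc) - X @ b) ** 2))
    print(f"{name:18s} residual SD (log units): {rmse:.3f}")
```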
Penetration of UV Radiation in the Earth's Oceans
NASA Technical Reports Server (NTRS)
Mitchell, B. Greg; Lubin, Dan
2005-01-01
This project was a collaboration between SIO/UCSD and NASA/GSFC to develop a global estimation of the penetration of UV light into open ocean waters, and into coastal waters. We determined the ocean UV reflectance spectra seen by satellites above the atmosphere by combining existing sophisticated radiative transfer models with in situ UV Visible data sets to improve coupled radiance estimates both underwater and within the atmosphere. Results included improved estimates of surface spectral irradiance, 0.3-1.0 micron, and estimates of photosynthetic inhibition, DNA mutation, and CO production. Data sets developed under this proposal have been made publicly available via submission to the SeaWiFS Bio-Optical Archive and Storage System. Numerous peer-reviewed publications and conference proceedings and abstracts resulted from the work supported by this research award.
Notes on a New Coherence Estimator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bickel, Douglas L.
This document discusses some interesting features of the new coherence estimator in [1]. The estimator is derived from a slightly different viewpoint. We discuss a few properties of the estimator, including the probability density function of the denominator of the new estimator, which is a new feature of this estimator. Finally, we present an approximate equation for analysis of the sensitivity of the estimator to knowledge of the noise value.
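For orientation, the conventional sample coherence estimate that such estimators refine is sketched below; this is the standard magnitude-coherence form, not the new estimator of [1].

```python
import numpy as np

def sample_coherence(f, g):
    """Conventional coherence magnitude between two complex signals,
    e.g. co-registered interferometric SAR image patches."""
    f, g = np.ravel(f), np.ravel(g)
    num = np.abs(np.vdot(g, f))            # |sum f * conj(g)|
    den = np.sqrt(np.sum(np.abs(f)**2) * np.sum(np.abs(g)**2))
    return num / den

rng = np.random.default_rng(7)
f = rng.normal(size=64) + 1j * rng.normal(size=64)
g = 0.8 * f + 0.6 * (rng.normal(size=64) + 1j * rng.normal(size=64))
print(f"estimated coherence: {sample_coherence(f, g):.2f}")
```

In the new estimator the corresponding denominator is modified, and its probability density function is what the note presents.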
The Estimated Annual Cost of Uterine Leiomyomata in the United States
CARDOZO, Eden R.; CLARK, Andrew D.; BANKS, Nicole K.; HENNE, Melinda B.; STEGMANN, Barbara J.; SEGARS, James H.
2011-01-01
Objective To estimate the total annual societal cost of uterine fibroids in the United States, based on direct and indirect costs, including associated obstetric complications. Study Design A systematic review of the literature was conducted to estimate the number of women seeking treatment for symptomatic fibroids annually, the costs of medical and surgical treatment, work lost and obstetric complications attributable to fibroids. Total annual costs were converted to 2010 U.S. dollars. A sensitivity analysis was performed. Results The estimated annual direct costs (surgery, hospital admissions, outpatient visits, medications) were $4.1 to $9.4 billion. Estimated lost work costs ranged from $1.55 to $17.2 billion annually. Obstetric outcomes attributed to fibroids resulted in a cost of $238 million to $7.76 billion annually. Uterine fibroids were estimated to cost the US $5.9 to $34.4 billion annually. Conclusions Obstetric complications associated with fibroids contributed significantly to their economic burden. Lost work costs may account for the largest proportion of societal costs due to fibroids. PMID:22244472
Using GIS-based methods and lidar data to estimate rooftop solar technical potential in US cities
NASA Astrophysics Data System (ADS)
Margolis, Robert; Gagnon, Pieter; Melius, Jennifer; Phillips, Caleb; Elmore, Ryan
2017-07-01
We estimate the technical potential of rooftop solar photovoltaics (PV) for select US cities by combining light detection and ranging (lidar) data, a validated analytical method for determining rooftop PV suitability employing geographic information systems, and modeling of PV electricity generation. We find that rooftop PV’s ability to meet estimated city electricity consumption varies widely—from meeting 16% of annual consumption (in Washington, DC) to meeting 88% (in Mission Viejo, CA). Important drivers include average rooftop suitability, household footprint/per-capita roof space, the quality of the solar resource, and the city’s estimated electricity consumption. In addition to city-wide results, we also estimate the ability of aggregations of households to offset their electricity consumption with PV. In a companion article, we will use statistical modeling to extend our results and estimate national rooftop PV technical potential. In addition, our publically available data and methods may help policy makers, utilities, researchers, and others perform customized analyses to meet their specific needs.
Modeling SMAP Spacecraft Attitude Control Estimation Error Using Signal Generation Model
NASA Technical Reports Server (NTRS)
Rizvi, Farheen
2016-01-01
Two ground simulation software packages are used to model the SMAP spacecraft dynamics. The CAST software uses a higher-fidelity model than the ADAMS software. The ADAMS software models the spacecraft plant, controller, and actuator models, and assumes a perfect sensor and estimator model. In this simulation study, the spacecraft dynamics results from the ADAMS software are used because the CAST software is unavailable. The main source of spacecraft dynamics error in the higher-fidelity CAST software is the estimation error. A signal generation model is developed to capture the effect of this estimation error in the overall spacecraft dynamics. This signal generation model is then included in the ADAMS software spacecraft dynamics estimate such that the results are similar to CAST. The signal generation model has a mean, variance, and power spectral density similar to those of the true CAST estimation error. In this way, the ADAMS software can still be used while capturing the higher-fidelity spacecraft dynamics modeling from the CAST software.
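A minimal sketch of a signal generation model in this spirit: color white noise to a target PSD shape, then scale it to the target mean and variance. The low-pass corner frequency and error magnitudes below are assumptions for illustration, not CAST values.

```python
import numpy as np

rng = np.random.default_rng(8)
n, dt = 4096, 0.1                        # samples, sample period (s)

# Target PSD shape for the estimation-error signal: first-order low-pass
# with an assumed 0.05 Hz corner.
freqs = np.fft.rfftfreq(n, dt)
shape = 1 / np.sqrt(1 + (freqs / 0.05) ** 2)

white = rng.normal(size=n)
colored = np.fft.irfft(np.fft.rfft(white) * shape, n)

# Scale to the target mean and variance of the estimation error.
target_mean, target_std = 0.0, 2e-4      # e.g. attitude error (rad)
signal = target_mean + target_std * (colored - colored.mean()) / colored.std()
print(f"mean {signal.mean():.2e}, std {signal.std():.2e}")
```

Injecting such a signal into the lower-fidelity simulation reproduces the statistical character of the missing estimator error without rerunning the higher-fidelity tool.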
Cho, Il Haeng; Park, Kyung S; Lim, Chang Joo
2010-02-01
In this study, we described the characteristics of five different biological age (BA) estimation algorithms: (i) multiple linear regression, (ii) principal component analysis, and the somewhat unique methods developed by (iii) Hochschild, (iv) Klemera and Doubal, and (v) a variant of Klemera and Doubal's method. The objective of this study is to find the most appropriate method of BA estimation by examining the association between the Work Ability Index (WAI) and the differences of each algorithm's estimates from chronological age (CA). The WAI was found to be a measure that reflects an individual's current health status rather than deterioration strictly dependent on age. Experiments were conducted on 200 Korean male participants using a BA estimation system developed principally under the concepts of non-invasiveness, simple operation, and human function-based measurement. Using the empirical data, BA estimation as well as various analyses, including correlation analysis and discriminant function analysis, was performed. As a result, the empirical data confirmed that Klemera and Doubal's method with uncorrelated variables from principal component analysis produces relatively reliable and acceptable BA estimates. 2009 Elsevier Ireland Ltd. All rights reserved.
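For reference, the core of Klemera and Doubal's estimate (their BA_E, before the chronological-age term is added) combines per-biomarker linear regressions on age. A minimal sketch with made-up biomarker regressions:

```python
import numpy as np

def kd_biological_age(x, q, k, s):
    """Klemera-Doubal BA_E from biomarker values x and per-biomarker
    regressions on chronological age, x_j = q_j + k_j * age,
    with residual standard deviations s_j."""
    x, q, k, s = map(np.asarray, (x, q, k, s))
    return np.sum((x - q) * k / s**2) / np.sum((k / s) ** 2)

# Three illustrative biomarkers (all regression values are made up).
x = [1.30, 95.0, 0.80]          # measured values
q = [1.00, 70.0, 1.10]          # regression intercepts
k = [0.01, 0.50, -0.005]        # slopes per year of age
s = [0.10, 8.00, 0.08]          # residual standard deviations
print(f"BA_E = {kd_biological_age(x, q, k, s):.1f} years")
```

Each biomarker is weighted by (k_j/s_j)^2, so markers that track age tightly dominate the estimate, which is the property that distinguishes this method from plain multiple linear regression.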
A new multistage groundwater transport inverse method: presentation, evaluation, and implications
Anderman, Evan R.; Hill, Mary C.
1999-01-01
More computationally efficient methods of using concentration data are needed to estimate groundwater flow and transport parameters. This work introduces and evaluates a three‐stage nonlinear‐regression‐based iterative procedure in which trial advective‐front locations link decoupled flow and transport models. Method accuracy and efficiency are evaluated by comparing results to those obtained when flow‐ and transport‐model parameters are estimated simultaneously. The new method is evaluated as conclusively as possible by using a simple test case that includes distinct flow and transport parameters, but does not include any approximations that are problem dependent. The test case is analytical; the only flow parameter is a constant velocity, and the transport parameters are longitudinal and transverse dispersivity. Any difficulties detected using the new method in this ideal situation are likely to be exacerbated in practical problems. Monte‐Carlo analysis of observation error ensures that no specific error realization obscures the results. Results indicate that, while this, and probably other, multistage methods do not always produce optimal parameter estimates, the computational advantage may make them useful in some circumstances, perhaps as a precursor to using a simultaneous method.
Isonymy structure of Sucre and Táchira, two Venezuelan states.
Rodríguez-Larralde, A; Barrai, I
1997-10-01
The isonymy structure of two Venezuelan states, Sucre and Táchira, is described using the surnames of the Register of Electors updated in 1991. The frequency distribution of surnames pooled together by sex was obtained for the 57 counties of Sucre and the 52 counties of Táchira, based on total population sizes of 158,705 and 160,690 individuals, respectively. The coefficient of consanguinity resulting from random isonymy (φii), Karlin and McGregor's ν, and the proportion of the population included in surnames represented only once (estimator A) and in the seven most frequent surnames (estimator B) were calculated for each county. RST, a measure of microdifferentiation, was estimated for each state. The Euclidean distance between pairs of counties within states was calculated together with the corresponding geographic distances. The correlations between their logarithmic transformations were significant in both cases, indicating differentiation of surnames by distance. Dendrograms based on the Euclidean distance matrix were constructed. From them a first approximation of the effect of internal migration within states was obtained. Ninety-six percent of the coefficient of consanguinity resulting from random isonymy is determined by the proportion of the population included in the seven most frequent surnames, whereas between 72% and 88% of Karlin and McGregor's ν for Sucre and Táchira, respectively, is determined by the proportion of the population included in surnames represented only once. Surnames with generalized and with focal distribution were identified for both states, to be used as possible indicators of the geographic origin of their carriers. Our results indicate that Táchira's counties, on average, tend to be more isolated than Sucre's counties, as measured by RST, estimator B, and φii. Comparisons with the results obtained for other Venezuelan states and other non-Venezuelan populations are also given.
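A minimal sketch of the isonymy quantities used above: unbiased random isonymy from surname counts, and the random consanguinity estimate phi = I/4 of the Crow-Mange convention. The surname counts are illustrative.

```python
from collections import Counter

def random_isonymy(surnames):
    """Unbiased random isonymy I = sum n_i(n_i - 1) / (N(N - 1)),
    and the random-consanguinity estimate phi = I / 4."""
    counts = Counter(surnames)
    n_total = sum(counts.values())
    i = sum(c * (c - 1) for c in counts.values()) / (n_total * (n_total - 1))
    return i, i / 4

names = ["Garcia"] * 40 + ["Rodriguez"] * 25 + ["Perez"] * 10 + ["Rivas"] * 5
i, phi = random_isonymy(names)
print(f"I = {i:.4f}, phi = {phi:.5f}")
```

Estimators A and B in the abstract are then simple tail shares of the same frequency distribution: the fraction of people carrying unique surnames and the fraction carrying the seven most frequent ones.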
Estimating milk yield and value losses from increased somatic cell count on US dairy farms.
Hadrich, J C; Wolf, C A; Lombard, J; Dolak, T M
2018-04-01
Milk loss due to increased somatic cell counts (SCC) results in economic losses for dairy producers. This research uses 10 mo of consecutive dairy herd improvement data from 2013 and 2014 to estimate milk yield loss using SCC as a proxy for clinical and subclinical mastitis. A fixed effects regression was used to examine factors that affected milk yield while controlling for herd-level management. Breed, milking frequency, days in milk, seasonality, SCC, cumulative months with SCC greater than 100,000 cells/mL, lactation, and herd size were variables included in the regression analysis. The cumulative months with SCC above a threshold was included as a proxy for chronic mastitis. Milk yield loss increased as the number of test days with SCC ≥100,000 cells/mL increased. Results from the regression were used to estimate a monetary value of milk loss related to SCC as a function of cow and operation related explanatory variables for a representative dairy cow. The largest losses occurred from increased cumulative test days with a SCC ≥100,000 cells/mL, with daily losses of $1.20/cow per day in the first month to $2.06/cow per day in mo 10. Results demonstrate the importance of including the duration of months above a threshold SCC when estimating milk yield losses. Cows with chronic mastitis, measured by increased consecutive test days with SCC ≥100,000 cells/mL, resulted in higher milk losses than cows with a new infection. This provides farm managers with a method to evaluate the trade-off between treatment and culling decisions as it relates to mastitis control and early detection. Copyright © 2018 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
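A hedged sketch of the kind of fixed-effects regression described (not the authors' exact specification) might look as follows; all column names and coefficients are hypothetical stand-ins for cow test-day records.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
# Synthetic cow test-day records; names are hypothetical stand-ins.
df = pd.DataFrame({
    "herd": rng.integers(0, 15, n),
    "season": rng.integers(0, 4, n),
    "lactation": rng.integers(1, 5, n),
    "dim": rng.uniform(5, 305, n),              # days in milk
    "log_scc": rng.normal(4.5, 0.6, n),         # log10 somatic cell count
    "months_high_scc": rng.integers(0, 10, n),  # test days >= 100,000 cells/mL
})
df["milk_kg"] = (35 - 0.02 * df.dim - 2.0 * (df.log_scc - 4.5)
                 - 0.5 * df.months_high_scc + rng.normal(0, 3, n))

# Herd and season fixed effects absorb herd-level management and seasonality;
# the cumulative months above the SCC threshold proxies chronic mastitis.
fit = smf.ols("milk_kg ~ C(herd) + C(season) + C(lactation) + dim"
              " + log_scc + months_high_scc", data=df).fit()
print(fit.params[["log_scc", "months_high_scc"]])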
Dong, M C; van Vleck, L D
1989-03-01
Variance and covariance components for milk yield, survival to second freshening, and calving interval in first lactation were estimated by REML with the expectation-maximization algorithm for an animal model that included herd-year-season effects. Cows without a calving interval but with milk yield were included. Each of the four data sets of 15 herds included about 3,000 Holstein cows. Relationships across herds were ignored to enable inversion of the coefficient matrix of the mixed model equations. Quadratics and their expectations were accumulated herd by herd. Heritability of milk yield (.32) agrees with reports obtained by the same methods. Heritabilities of survival (.11) and calving interval (.15) are slightly larger, and genetic correlations smaller, than results from different methods of estimation. The genetic correlation between milk yield and calving interval (.09) indicates that genetic ability to produce more milk is slightly associated with decreased fertility.
NASA Technical Reports Server (NTRS)
Boland, J. S., III
1975-01-01
A general simulation program (GSP) involving nonlinear state estimation for space vehicle flight navigation systems is presented. A complete explanation of the iterative guidance mode guidance law, derivation of the dynamics, coordinate frames, and state estimation routines is given to clarify the assumptions and approximations involved, so that simulation results can be placed in their proper perspective. A complete set of computer acronyms and their definitions, as well as explanations of the subroutines used in the GSP simulator, are included. To facilitate input/output, a complete set of compatible numbers, with units, is included to aid in data development. Format specifications, output data phrase meanings and purposes, and computer card data input are clearly spelled out. A large number of simulation and analytical studies were used to determine the validity of the simulator itself as well as of the various data runs.
Remote sensing of agricultural crops and soils
NASA Technical Reports Server (NTRS)
Bauer, M. E. (Principal Investigator)
1983-01-01
Research in the correlative and noncorrelative approaches to image registration and the spectral estimation of corn canopy phytomass and water content is reported. Scene radiation research results discussed include: corn and soybean LANDSAT MSS classification performance as a function of scene characteristics; estimating crop development stages from MSS data; the interception of photosynthetically active radiation in corn and soybean canopies; costs of measuring leaf area index of corn; LANDSAT spectral inputs to crop models, including the use of the greenness index to assess crop stress and the evaluation of MSS data for estimating corn and soybean development stages; field research experiment design, data acquisition, and preprocessing; and Sun-view angle studies of corn and soybean canopies in support of vegetation canopy reflection modeling.
2013-09-01
model and the BRDF in the SRP model are not consistent with each other, then the resulting estimated albedo-areas and mass are inaccurate and biased... This work studies the use of physically consistent BRDF-SRP models for mass estimation. Simulation studies are used to provide an indication of the... benefits of using these new models. An unscented Kalman filter approach that includes BRDF and mass parameters in the state vector is used. The ...
Shariat, Mohammad Hassan; Gazor, Saeed; Redfearn, Damian
2016-08-01
In this paper, we study the problem of cardiac conduction velocity (CCV) estimation for sequential intracardiac mapping. We assume that the intracardiac electrograms of several cardiac sites are sequentially recorded, their activation times (ATs) are extracted, and the corresponding wavefronts are specified. The locations of the mapping catheter's electrodes and the ATs of the wavefronts are used for the CCV estimation. We assume that the extracted ATs include some estimation errors, which we model as zero-mean white Gaussian noise with known variances. Assuming stable planar wavefront propagation, we derive the maximum likelihood CCV estimator for the case in which the synchronization times between the various recording sites are unknown. We analytically evaluate the performance of the CCV estimator and provide its mean square estimation error. Our simulation results confirm the accuracy of the proposed method and the error analysis of the proposed CCV estimator.
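A simplified version of the planar-wavefront fit can be written as a weighted least-squares problem. The sketch below ignores the unknown inter-site synchronization times that the paper's maximum likelihood estimator handles, and all geometry and noise values are synthetic.

import numpy as np

def ccv_wls(xy, at, sigma):
    """Weighted least-squares fit of a planar wavefront: AT_i = t0 + s . r_i,
    where s is the slowness vector; conduction velocity is 1/||s||.
    xy: (n,2) electrode positions (mm); at: activation times (ms);
    sigma: per-AT noise standard deviations (ms)."""
    A = np.column_stack([np.ones(len(at)), xy])  # columns: t0, sx, sy
    w = 1.0 / np.asarray(sigma)                  # whiten by known noise SDs
    coef, *_ = np.linalg.lstsq(A * w[:, None], at * w, rcond=None)
    t0, sx, sy = coef
    speed = 1.0 / np.hypot(sx, sy)               # mm/ms, i.e. m/s
    return speed, t0

rng = np.random.default_rng(2)
xy = rng.uniform(0, 20, (12, 2))                 # one catheter placement
true_s = np.array([1 / 0.8, 0.0])                # 0.8 mm/ms wavefront along x
at = 5.0 + xy @ true_s + rng.normal(0, 0.5, 12)
print(ccv_wls(xy, at, sigma=np.full(12, 0.5)))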
Incidence of induced abortion in Malawi, 2015
Mhango, Chisale; Philbin, Jesse; Chimwaza, Wanangwa; Chipeta, Effie; Msusa, Ausbert
2017-01-01
Background In Malawi, abortion is legal only if performed to save a woman’s life; other attempts to procure an abortion are punishable by 7–14 years imprisonment. Most induced abortions in Malawi are performed under unsafe conditions, contributing to Malawi’s high maternal mortality ratio. Malawians are currently debating whether to provide additional exceptions under which an abortion may be legally obtained. An estimated 67,300 induced abortions occurred in Malawi in 2009 (equivalent to 23 abortions per 1,000 women aged 15–44), but changes since 2009, including dramatic increases in contraceptive prevalence, may have affected abortion rates. Methods We conducted a nationally representative survey of health facilities to estimate the number of cases of post-abortion care, as well as a survey of knowledgeable informants to estimate the probability of needing and obtaining post-abortion care following induced abortion. These data were combined with national population and fertility data to determine current estimates of induced abortion and unintended pregnancy in Malawi using the Abortion Incidence Complications Methodology. Results We estimate that approximately 141,044 (95% CI: 121,161–160,928) induced abortions occurred in Malawi in 2015, translating to a national rate of 38 abortions per 1,000 women aged 15–49 (95% CI: 32–43), which varied by geographical zone (range: 28–61). We estimate that 53% of pregnancies in Malawi are unintended, and that 30% of unintended pregnancies end in abortion. Given the challenges of estimating induced abortion, and the assumptions required for calculation, results should be viewed as approximate estimates rather than exact measures. Conclusions The estimated abortion rate in 2015 is higher than in 2009 (potentially due to methodological differences), but similar to recent estimates from nearby countries, including Tanzania (36), Uganda (39), and regional estimates for Eastern and Southern Africa (34–35). Over half of pregnancies in Malawi are unintended. Our findings should inform ongoing efforts to reduce maternal morbidity and mortality and to improve public health in Malawi. PMID:28369114
Estimating lifetime and age-conditional probabilities of developing cancer.
Wun, L M; Merrill, R M; Feuer, E J
1998-01-01
Lifetime and age-conditional risk estimates of developing cancer provide a useful summary to the public of the current cancer risk and how this risk compares with earlier periods and among select subgroups of society. These reported estimates, commonly quoted in the popular press, have the potential to promote early detection efforts, to increase cancer awareness, and to serve as an aid in study planning. However, they can also be easily misunderstood and frightening to the general public. The Surveillance, Epidemiology, and End Results (SEER) Program of the National Cancer Institute and the American Cancer Society have recently begun including in annual reports lifetime and age-conditional risk estimates of developing cancer. These risk estimates are based on incidence rates that reflect new cases of the cancer in a population free of the cancer. To compute these estimates involves a cancer prevalence adjustment that is computed cross-sectionally from current incidence and mortality data derived within a multiple decrement life table. This paper presents a detailed description of the methodology for deriving lifetime and age-conditional risk estimates of developing cancer. In addition, an extension is made which, using a triple decrement life table, adjusts for a surgical procedure that removes individuals from the risk of developing a given cancer. Two important results which provide insights into the basic methodology are included in the discussion. First, the lifetime risk estimate does not depend on the cancer prevalence adjustment, although this is not the case for age-conditional risk estimates. Second, the lifetime risk estimate is always smaller when it is corrected for a surgical procedure that takes people out of the risk pool to develop the cancer. The methodology is applied to corpus and uterus NOS cancers, with a correction made for hysterectomy prevalence. The interpretation and limitations of risk estimates are also discussed.
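The core multiple-decrement calculation can be sketched as follows with illustrative (non-SEER) rates: within each age band, the probability of a cancer diagnosis competes with cancer-free mortality, and the band contributions sum to the lifetime risk.

import math

# lam: age-specific cancer incidence rate; mu: competing (cancer-free)
# mortality rate, both per person-year within 20-year age bands.
bands = [(0, 20, 1e-5, 5e-4), (20, 40, 1e-4, 1e-3),
         (40, 60, 1e-3, 5e-3), (60, 80, 3e-3, 3e-2), (80, 100, 5e-3, 1e-1)]

alive_free = 1.0        # probability of being alive and cancer-free
lifetime_risk = 0.0
for (a0, a1, lam, mu) in bands:
    width = a1 - a0
    total = lam + mu
    # probability the next event in this band is a cancer diagnosis
    p_cancer = (lam / total) * (1.0 - math.exp(-total * width))
    lifetime_risk += alive_free * p_cancer
    alive_free *= math.exp(-total * width)

print(f"lifetime risk of developing the cancer: {lifetime_risk:.3f}")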
Stedman, Margaret R; Feuer, Eric J; Mariotto, Angela B
2014-11-01
The probability of cure is a long-term prognostic measure of cancer survival. Estimates of the cure fraction, the proportion of patients "cured" of the disease, are based on extrapolating survival models beyond the range of data. The objective of this work is to evaluate the sensitivity of cure fraction estimates to model choice and study design. Data were obtained from the Surveillance, Epidemiology, and End Results (SEER)-9 registries to construct a cohort of breast and colorectal cancer patients diagnosed from 1975 to 1985. In a sensitivity analysis, cure fraction estimates are compared from different study designs with short- and long-term follow-up. Methods tested include: cause-specific and relative survival, parametric mixture, and flexible models. In a separate analysis, estimates are projected for 2008 diagnoses using study designs including the full cohort (1975-2008 diagnoses) and restricted to recent diagnoses (1998-2008) with follow-up to 2009. We show that flexible models often provide higher estimates of the cure fraction compared to parametric mixture models. Log normal models generate lower estimates than Weibull parametric models. In general, 12 years is enough follow-up time to estimate the cure fraction for regional and distant stage colorectal cancer but not for breast cancer. 2008 colorectal cure projections show a 15% increase in the cure fraction since 1985. Estimates of the cure fraction are model and study design dependent. It is best to compare results from multiple models and examine model fit to determine the reliability of the estimate. Early-stage cancers are sensitive to survival type and follow-up time because of their longer survival. More flexible models are susceptible to slight fluctuations in the shape of the survival curve which can influence the stability of the estimate; however, stability may be improved by lengthening follow-up and restricting the cohort to reduce heterogeneity in the data. Published by Oxford University Press 2014.
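As a concrete example of the parametric mixture approach, a Weibull mixture cure model can be fit by maximum likelihood; the sketch below uses synthetic censored data, not SEER records, and the parameter values are arbitrary.

import numpy as np
from scipy.optimize import minimize

def neg_loglik(theta, t, event):
    """Weibull mixture cure model: S(t) = c + (1 - c) * exp(-(t/scale)^shape),
    with c the cure fraction. theta holds (logit c, log shape, log scale)."""
    c = 1.0 / (1.0 + np.exp(-theta[0]))
    shape, scale = np.exp(theta[1]), np.exp(theta[2])
    s_u = np.exp(-(t / scale) ** shape)                       # uncured survival
    f_u = (shape / scale) * (t / scale) ** (shape - 1) * s_u  # uncured density
    ll = np.where(event, np.log((1 - c) * f_u), np.log(c + (1 - c) * s_u))
    return -ll.sum()

rng = np.random.default_rng(3)
n, true_cure = 2000, 0.4
cured = rng.random(n) < true_cure
t_latent = 5.0 * rng.weibull(1.5, n)   # event times for the uncured
t_latent[cured] = np.inf               # cured patients never fail
censor = rng.uniform(0, 15, n)         # administrative follow-up
t = np.minimum(t_latent, censor)
event = t_latent <= censor

fit = minimize(neg_loglik, x0=[0.0, 0.0, 1.0], args=(t, event),
               method="Nelder-Mead")
c_hat = 1.0 / (1.0 + np.exp(-fit.x[0]))
print("estimated cure fraction:", round(c_hat, 3))

Shortening the follow-up window in this simulation (e.g. censoring at 5 instead of 15) noticeably destabilizes c_hat, echoing the abstract's point about follow-up time.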
Sun, Chuanyu; VanRaden, Paul M.; Cole, John B.; O'Connell, Jeffrey R.
2014-01-01
Dominance may be an important source of non-additive genetic variance for many traits of dairy cattle. However, nearly all prediction models for dairy cattle have included only additive effects because of the limited number of cows with both genotypes and phenotypes. The role of dominance in the Holstein and Jersey breeds was investigated for eight traits: milk, fat, and protein yields; productive life; daughter pregnancy rate; somatic cell score; fat percent and protein percent. Additive and dominance variance components were estimated and then used to estimate additive and dominance effects of single nucleotide polymorphisms (SNPs). The predictive abilities of three models with both additive and dominance effects and a model with additive effects only were assessed using ten-fold cross-validation. One procedure estimated dominance values, and another estimated dominance deviations; calculation of the dominance relationship matrix was different for the two methods. The third approach enlarged the dataset by including cows with genotype probabilities derived using genotyped ancestors. For yield traits, dominance variance accounted for 5 and 7% of total variance for Holsteins and Jerseys, respectively; using dominance deviations resulted in smaller dominance and larger additive variance estimates. For non-yield traits, dominance variances were very small for both breeds. For yield traits, including additive and dominance effects fit the data better than including only additive effects; average correlations between estimated genetic effects and phenotypes showed that prediction accuracy increased when both effects rather than just additive effects were included. No corresponding gains in prediction ability were found for non-yield traits. Including cows with derived genotype probabilities from genotyped ancestors did not improve prediction accuracy. The largest additive effects were located on chromosome 14 near DGAT1 for yield traits for both breeds; those SNPs also showed the largest dominance effects for fat yield (both breeds) as well as for Holstein milk yield. PMID:25084281
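A common way to set up such analyses is to build additive and dominance genomic relationship matrices from SNP genotypes. The sketch below uses one standard parameterization (VanRaden's G and a Vitezica-style dominance coding), which is an assumption here and may differ in detail from the procedures compared in the paper; variance component estimation itself is not shown.

import numpy as np

def genomic_relationships(geno):
    """Additive (VanRaden 2008) and dominance (Vitezica et al. 2013 style)
    genomic relationship matrices from an (n_animals x n_snps) matrix of
    genotypes coded 0/1/2 (copies of the reference allele)."""
    p = geno.mean(axis=0) / 2.0                 # allele frequencies
    q = 1.0 - p
    # Additive coding: center genotype counts at 2p.
    z = geno - 2.0 * p
    g = z @ z.T / np.sum(2.0 * p * q)
    # Dominance coding: (-2p^2, 2pq, -2q^2) for genotypes (0, 1, 2).
    w = np.select([geno == 0, geno == 1, geno == 2],
                  [-2.0 * p**2, 2.0 * p * q, -2.0 * q**2])
    d = w @ w.T / np.sum((2.0 * p * q) ** 2)
    return g, d

rng = np.random.default_rng(4)
geno = rng.integers(0, 3, size=(10, 500))       # toy genotypes for 10 cows
G, D = genomic_relationships(geno)
print(G.shape, D.shape)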
Volcanic stratospheric sulfur injections and aerosol optical depth from 500 BCE to 1900 CE
NASA Astrophysics Data System (ADS)
Toohey, Matthew; Sigl, Michael
2017-11-01
The injection of sulfur into the stratosphere by explosive volcanic eruptions is the cause of significant climate variability. Based on sulfate records from a suite of ice cores from Greenland and Antarctica, the eVolv2k database includes estimates of the magnitudes and approximate source latitudes of major volcanic stratospheric sulfur injection (VSSI) events from 500 BCE to 1900 CE, constituting an update of prior reconstructions and an extension of the record by 1000 years. The database incorporates improvements to the ice core records (in terms of synchronisation and dating) and refinements to the methods used to estimate VSSI from ice core records, and it includes first estimates of the random uncertainties in VSSI values. VSSI estimates for many of the largest eruptions, including Samalas (1257), Tambora (1815), and Laki (1783), are within 10 % of prior estimates. A number of strong events are included in eVolv2k which are largely underestimated or not included in earlier VSSI reconstructions, including events in 540, 574, 682, and 1108 CE. The long-term annual mean VSSI from major volcanic eruptions is estimated to be ~0.5 Tg [S] yr-1, ~50 % greater than a prior reconstruction due to the identification of more events and an increase in the magnitude of many intermediate events. A long-term latitudinally and monthly resolved stratospheric aerosol optical depth (SAOD) time series is reconstructed from the eVolv2k VSSI estimates, and the resulting global mean SAOD is found to be similar (within 33 %) to a prior reconstruction for most of the largest eruptions. The long-term (500 BCE-1900 CE) average global mean SAOD estimated from the eVolv2k VSSI estimates, including a constant background injection of stratospheric sulfur, is ~0.014, 30 % greater than a prior reconstruction. These new long-term reconstructions of past VSSI and SAOD variability give context to recent volcanic forcing, suggesting that the 20th century was a period of somewhat weaker than average volcanic forcing, with current best estimates of 20th century mean VSSI and SAOD values being 25 and 14 % less, respectively, than the mean of the 500 BCE to 1900 CE period. The reconstructed VSSI and SAOD data are available at https://doi.org/10.1594/WDCC/eVolv2k_v2.
Ehrlich, Emily; Bunn, Terry; Kanotra, Sarojini; Fussman, Chris; Rosenman, Kenneth D.
2016-01-01
Background The US employer-based surveillance system for work-related health conditions underestimates the prevalence of work-related dermatitis. Objective The authors sought to utilize information from workers to improve the accuracy of prevalence estimates for work-related dermatitis. Methods Three state health departments included questions in the 2011 Behavioral Risk Factor Surveillance System survey designed to ascertain the prevalence of dermatitis in the working population, as well as healthcare experiences, personal perceptions of work-relatedness, and job changes associated with dermatitis. Results The percentage of working respondents who reported receiving a clinician’s opinion that their dermatitis was work-related was between 3.8% and 10.2%. When patients’ perceptions were considered, the work-related dermatitis prevalence estimate increased to between 12.9% and 17.6%. Conclusions Including patients’ perceptions of work-relatedness produced a larger prevalence estimate for work-related dermatitis than the previously published estimate of 5.6%, which included only those cases of dermatitis attributed to work by healthcare professionals. PMID:24619601
Hubble Space Telescope Angular Velocity Estimation During the Robotic Servicing Mission
NASA Technical Reports Server (NTRS)
Thienel, Julie K.; Sanner, Robert M.
2005-01-01
In 2004 NASA began investigation of a robotic servicing mission for the Hubble Space Telescope (HST). Such a mission would require estimates of the HST attitude and rates in order to achieve a capture by the proposed Hubble robotic vehicle (HRV). HRV was to be equipped with vision-based sensors, capable of estimating the relative attitude between HST and HRV. The inertial HST attitude is derived from the measured relative attitude and the HRV computed inertial attitude. However, the relative rate between HST and HRV cannot be measured directly. Therefore, the HST rate with respect to inertial space is not known. Two approaches are developed to estimate the HST rates. Both methods utilize the measured relative attitude and the HRV inertial attitude and rates. First, a nonlinear estimator is developed. The nonlinear approach estimates the HST rate through an estimation of the inertial angular momentum. The development includes an analysis of the estimator stability given errors in the measured attitude. Second, a linearized approach is developed. The linearized approach is a pseudo-linear Kalman filter. Simulation test results for both methods are given, including scenarios with erroneous measured attitudes. Even though the development began as an application for the HST robotic servicing mission, the methods presented are applicable to any rendezvous/capture mission involving a non-cooperative target spacecraft.
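For intuition, the crudest alternative to the estimators developed here is a finite-difference rate from successive attitude quaternions. The sketch below shows that small-angle computation only; it is not the paper's momentum-based nonlinear estimator or pseudo-linear Kalman filter, and it assumes the HST inertial attitude (from the composed relative and HRV attitudes) is already available.

import numpy as np

def quat_mul(a, b):
    """Hamilton product of quaternions stored scalar-last as [x, y, z, w]."""
    av, aw = a[:3], a[3]
    bv, bw = b[:3], b[3]
    v = aw * bv + bw * av + np.cross(av, bv)
    return np.append(v, aw * bw - av @ bv)

def quat_conj(q):
    return np.array([-q[0], -q[1], -q[2], q[3]])

def rate_from_attitudes(q0, q1, dt):
    """Small-angle angular rate from two successive inertial attitude
    quaternions: delta_q = conj(q0) * q1, omega ~ 2 * vec(delta_q) / dt."""
    dq = quat_mul(quat_conj(q0), q1)
    if dq[3] < 0:          # enforce the shortest rotation
        dq = -dq
    return 2.0 * dq[:3] / dt

# Example: constant 0.1 deg/s roll over 1 s.
w = np.radians([0.1, 0.0, 0.0])
q0 = np.array([0.0, 0.0, 0.0, 1.0])
half = np.append(0.5 * w * 1.0, 1.0)
q1 = quat_mul(q0, half / np.linalg.norm(half))
print(np.degrees(rate_from_attitudes(q0, q1, 1.0)))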
Estimating the cost-effectiveness of 54 weeks of infliximab for rheumatoid arthritis.
Wong, John B; Singh, Gurkirpal; Kavanaugh, Arthur
2002-10-01
To estimate the cost-effectiveness of infliximab plus methotrexate for active, refractory rheumatoid arthritis. We projected the 54-week results from a randomized controlled trial of infliximab into lifetime economic and clinical outcomes using a Markov computer simulation model. Direct and indirect costs, quality of life, and disability estimates were based on trial results; Arthritis, Rheumatism, and Aging Medical Information System (ARAMIS) database outcomes; and published data. Results were discounted using the standard 3% rate. Because most well-accepted medical therapies have cost-effectiveness ratios below $50,000 to $100,000 per quality-adjusted life-year (QALY) gained, results below this range were considered to be "cost-effective." At 3 mg/kg, each infliximab infusion would cost $1393. When compared with methotrexate alone, 54 weeks of infliximab plus methotrexate decreased the likelihood of having advanced disability from 23% to 11% at the end of 54 weeks, which projected to a lifetime marginal cost-effectiveness ratio of $30,500 per discounted QALY gained, considering only direct medical costs. When applying a societal perspective and including indirect or productivity costs, the marginal cost-effectiveness ratio for infliximab was $9100 per discounted QALY gained. The results remained relatively unchanged with variation of model estimates over a broad range of values. Infliximab plus methotrexate for 54 weeks for rheumatoid arthritis should be cost-effective with its clinical benefit providing good value for the drug cost, especially when including productivity losses. Although infliximab beyond 54 weeks will likely be cost-effective, the economic and clinical benefit remains uncertain and will depend on long-term results of clinical trials.
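The headline ratio is a discounted incremental cost-effectiveness computation; the sketch below shows only that arithmetic, with illustrative cost and QALY streams (the study's Markov model and ARAMIS-based inputs are not reproduced).

def discounted(stream, rate=0.03):
    """Present value of a yearly stream at the standard 3% discount rate."""
    return sum(v / (1.0 + rate) ** year for year, v in enumerate(stream))

# Hypothetical 30-year streams for the two strategies (values illustrative).
years = 30
cost_combo = [15000] + [4000] * (years - 1)   # infliximab + methotrexate
cost_mtx = [1500] * years                     # methotrexate alone
qaly_combo = [0.70] * years
qaly_mtx = [0.62] * years

icer = (discounted(cost_combo) - discounted(cost_mtx)) / \
       (discounted(qaly_combo) - discounted(qaly_mtx))
print(f"marginal cost-effectiveness: ${icer:,.0f} per discounted QALY")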
Lebenberg, Jessica; Lalande, Alain; Clarysse, Patrick; Buvat, Irene; Casta, Christopher; Cochet, Alexandre; Constantinidès, Constantin; Cousty, Jean; de Cesare, Alain; Jehan-Besson, Stephanie; Lefort, Muriel; Najman, Laurent; Roullot, Elodie; Sarry, Laurent; Tilmant, Christophe; Frouin, Frederique; Garreau, Mireille
2015-01-01
This work aimed at combining different segmentation approaches to produce a robust and accurate segmentation result. Three to five segmentation results of the left ventricle were combined using the STAPLE algorithm and the reliability of the resulting segmentation was evaluated in comparison with the result of each individual segmentation method. This comparison was performed using a supervised approach based on a reference method. Then, we used an unsupervised statistical evaluation, the extended Regression Without Truth (eRWT) that ranks different methods according to their accuracy in estimating a specific biomarker in a population. The segmentation accuracy was evaluated by estimating six cardiac function parameters resulting from the left ventricle contour delineation using a public cardiac cine MRI database. Eight different segmentation methods, including three expert delineations and five automated methods, were considered, and sixteen combinations of the automated methods using STAPLE were investigated. The supervised and unsupervised evaluations demonstrated that in most cases, STAPLE results provided better estimates than individual automated segmentation methods. Overall, combining different automated segmentation methods improved the reliability of the segmentation result compared to that obtained using an individual method and could achieve the accuracy of an expert.
Evaluation of line transect sampling based on remotely sensed data from underwater video
Bergstedt, R.A.; Anderson, D.R.
1990-01-01
We used underwater video in conjunction with the line transect method and a Fourier series estimator to make 13 independent estimates of the density of known populations of bricks lying on the bottom in shallows of Lake Huron. The pooled estimate of density (95.5 bricks per hectare) was close to the true density (89.8 per hectare), and there was no evidence of bias. Confidence intervals for the individual estimates included the true density 85% of the time instead of the nominal 95%. Our results suggest that reliable estimates of the density of objects on a lake bed can be obtained by the use of remote sensing and line transect sampling theory.
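The Fourier series estimator used here has a compact form; the sketch below applies it to simulated perpendicular distances, with a fixed number of terms rather than a data-driven stopping rule, and illustrative units.

import numpy as np

def fourier_density(distances, w, L, m=3):
    """Line-transect density via the Fourier series estimator:
    f(0) = 1/w + sum_k a_k, with a_k = (2/(n*w)) * sum_i cos(k*pi*x_i/w),
    and D = n * f(0) / (2*L). distances: perpendicular distances truncated
    at width w; L: total transect length; consistent length units throughout."""
    x = np.asarray(distances)
    n = len(x)
    a = [(2.0 / (n * w)) * np.cos(k * np.pi * x / w).sum() for k in range(1, m + 1)]
    f0 = 1.0 / w + sum(a)
    return n * f0 / (2.0 * L)

rng = np.random.default_rng(5)
# Simulated detections: detectability falls off with distance from the line.
x = np.abs(rng.normal(0, 2.0, 120))
x = x[x < 5.0]                                 # truncation width w = 5 m
d = fourier_density(x, w=5.0, L=2000.0, m=3)   # objects per square metre
print("density:", d * 10000, "per hectare")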
NASA Astrophysics Data System (ADS)
Eadie, Gwendolyn M.; Springford, Aaron; Harris, William E.
2017-02-01
We present a hierarchical Bayesian method for estimating the total mass and mass profile of the Milky Way Galaxy. The new hierarchical Bayesian approach further improves the framework presented by Eadie et al. and Eadie and Harris and builds upon the preliminary reports by Eadie et al. The method uses a distribution function f(E, L) to model the Galaxy and kinematic data from satellite objects, such as globular clusters (GCs), to trace the Galaxy’s gravitational potential. A major advantage of the method is that it not only includes complete and incomplete data simultaneously in the analysis, but also incorporates measurement uncertainties in a coherent and meaningful way. We first test the hierarchical Bayesian framework, which includes measurement uncertainties, using the same data and power-law model assumed in Eadie and Harris and find the results are similar but more strongly constrained. Next, we take advantage of the new statistical framework and incorporate all possible GC data, finding a cumulative mass profile with Bayesian credible regions. This profile implies a mass within 125 kpc of 4.8 × 10^11 M⊙ with a 95% Bayesian credible region of (4.0–5.8) × 10^11 M⊙. Our results also provide estimates of the true specific energies of all the GCs. By comparing these estimated energies to the measured energies of GCs with complete velocity measurements, we observe that (the few) remote tracers with complete measurements may play a large role in determining a total mass estimate of the Galaxy. Thus, our study stresses the need for more remote tracers with complete velocity measurements.
A framework for the meta-analysis of Bland-Altman studies based on a limits of agreement approach.
Tipton, Elizabeth; Shuster, Jonathan
2017-10-15
Bland-Altman method comparison studies are common in the medical sciences and are used to compare a new measure to a gold-standard (often costlier or more invasive) measure. The distribution of these differences is summarized by two statistics, the 'bias' and standard deviation, and these measures are combined to provide estimates of the limits of agreement (LoA). When these LoA are within the bounds of clinically insignificant differences, the new non-invasive measure is preferred. Very often, multiple Bland-Altman studies have been conducted comparing the same two measures, and random-effects meta-analysis provides a means to pool these estimates. We provide a framework for the meta-analysis of Bland-Altman studies, including methods for estimating the LoA and measures of uncertainty (i.e., confidence intervals). Importantly, these LoA are likely to be wider than those typically reported in Bland-Altman meta-analyses. Frequently, Bland-Altman studies report results based on repeated measures designs but do not properly adjust for this design in the analysis. Meta-analyses of Bland-Altman studies frequently exclude these studies for this reason. We provide a meta-analytic approach that allows inclusion of estimates from these studies. This includes adjustments to the estimate of the standard deviation and a method for pooling the estimates based upon robust variance estimation. An example is included based on a previously published meta-analysis. Copyright © 2017 John Wiley & Sons, Ltd.
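A naive version of the pooling problem can be sketched as follows. This is not the authors' robust-variance method or their repeated-measures adjustment; it only shows why pooled limits widen once between-study variation in the bias is acknowledged. All study summaries are invented.

import numpy as np

# Per-study Bland-Altman summaries: (bias, SD of differences, n).
studies = [(1.2, 4.0, 40), (0.8, 5.5, 25), (1.9, 4.8, 60), (0.2, 6.1, 30)]

bias = np.array([s[0] for s in studies])
sd = np.array([s[1] for s in studies])
n = np.array([s[2] for s in studies])

# Naive pooling: precision-weighted mean bias, and a pooled SD that adds
# the between-study variance of the biases to the average within-study
# variance, which widens the limits relative to any single study.
w = n / sd**2
pooled_bias = np.sum(w * bias) / np.sum(w)
between = np.var(bias, ddof=1)
pooled_sd = np.sqrt(np.average(sd**2, weights=n) + between)

loa = (pooled_bias - 1.96 * pooled_sd, pooled_bias + 1.96 * pooled_sd)
print("pooled limits of agreement:", loa)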
The Estimation of Gestational Age at Birth in Database Studies.
Eberg, Maria; Platt, Robert W; Filion, Kristian B
2017-11-01
Studies on the safety of prenatal medication use require valid estimation of the pregnancy duration. However, gestational age is often incompletely recorded in administrative and clinical databases. Our objective was to compare different approaches to estimating the pregnancy duration. Using data from the Clinical Practice Research Datalink and Hospital Episode Statistics, we examined the following four approaches to estimating missing gestational age: (1) generalized estimating equations for longitudinal data; (2) multiple imputation; (3) estimation based on fetal birth weight and sex; and (4) conventional approaches that assigned a fixed value (39 weeks for all or 39 weeks for full term and 35 weeks for preterm). The gestational age recorded in Hospital Episode Statistics was considered the gold standard. We conducted a simulation study comparing the described approaches in terms of estimated bias and mean square error. A total of 25,929 infants from 22,774 mothers were included in our "gold standard" cohort. The smallest average absolute bias was observed for the generalized estimating equation that included birth weight, while the largest absolute bias occurred when assigning 39-week gestation to all those with missing values. The smallest mean square errors were detected with generalized estimating equations while multiple imputation had the highest mean square errors. The use of generalized estimating equations resulted in the most accurate estimation of missing gestational age when birth weight information was available. In the absence of birth weight, assignment of fixed gestational age based on term/preterm status may be the optimal approach.
Comparing children's GPS tracks with geospatial proxies for exposure to junk food.
Sadler, Richard C; Gilliland, Jason A
2015-01-01
Various geospatial techniques have been employed to estimate children's exposure to environmental cardiometabolic risk factors, including junk food. But many studies uncritically rely on exposure proxies which differ greatly from actual exposure. Misrepresentation of exposure by researchers could lead to poor decisions and ineffective policymaking. This study conducts a GIS-based analysis of GPS tracks--'activity spaces'--and 21 proxies for activity spaces (e.g. buffers, container approaches) for a sample of 526 children (ages 9-14) in London, Ontario, Canada. These measures are combined with a validated food environment database (including fast food and convenience stores) to create a series of junk food exposure estimates and quantify the errors resulting from use of different proxy methods. Results indicate that exposure proxies consistently underestimate exposure to junk foods by as much as 68%. This underestimation is important to policy development because children are exposed to more junk food than estimated using typical methods. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Canfield, Stephen
1999-01-01
This work will demonstrate the integration of sensor and system dynamic data and their appropriate models using an optimal filter to create a robust, adaptable, easily reconfigurable state (motion) estimation system. This state estimation system will clearly show the application of fundamental modeling and filtering techniques. These techniques are presented at a general, first principles level, that can easily be adapted to specific applications. An example of such an application is demonstrated through the development of an integrated GPS/INS navigation system. This system acquires both global position data and inertial body data, to provide optimal estimates of current position and attitude states. The optimal states are estimated using a Kalman filter. The state estimation system will include appropriate error models for the measurement hardware. The results of this work will lead to the development of a "black-box" state estimation system that supplies current motion information (position and attitude states) that can be used to carry out guidance and control strategies. This black-box state estimation system is developed independent of the vehicle dynamics and therefore is directly applicable to a variety of vehicles. Issues in system modeling and application of Kalman filtering techniques are investigated and presented. These issues include linearized models of equations of state, models of the measurement sensors, and appropriate application and parameter setting (tuning) of the Kalman filter. The general model and subsequent algorithm is developed in Matlab for numerical testing. The results of this system are demonstrated through application to data from the X-33 Michael's 9A8 mission and are presented in plots and simple animations.
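A minimal one-dimensional analogue of the GPS/INS fusion is a Kalman filter in which inertial acceleration drives the prediction step and GPS position drives the correction. All noise settings below are illustrative, and the full attitude states and error models of the described system are not included.

import numpy as np

# State x = [position, velocity]; INS acceleration is the control input.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
B = np.array([0.5 * dt**2, dt])         # control (acceleration) input
H = np.array([[1.0, 0.0]])              # GPS measures position only
Q = 0.05 * np.outer(B, B)               # process noise from accel error
R = np.array([[4.0]])                   # GPS noise variance (m^2)

x = np.zeros(2)
P = np.eye(2) * 10.0
rng = np.random.default_rng(6)
true_pos, true_vel = 0.0, 0.0
for k in range(200):
    accel = 0.2                         # commanded acceleration
    true_vel += accel * dt
    true_pos += true_vel * dt
    # Predict with the (noisy) INS acceleration measurement.
    x = F @ x + B * (accel + rng.normal(0, 0.1))
    P = F @ P @ F.T + Q
    # Correct with the (noisy) GPS position fix.
    z = true_pos + rng.normal(0, 2.0)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P

print("estimated position/velocity:", x, " true:", true_pos, true_vel)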
Corrected score estimation in the proportional hazards model with misclassified discrete covariates
Zucker, David M.; Spiegelman, Donna
2013-01-01
We consider Cox proportional hazards regression when the covariate vector includes error-prone discrete covariates along with error-free covariates, which may be discrete or continuous. The misclassification in the discrete error-prone covariates is allowed to be of any specified form. Building on the work of Nakamura and his colleagues, we present a corrected score method for this setting. The method can handle all three major study designs (internal validation design, external validation design, and replicate measures design), both functional and structural error models, and time-dependent covariates satisfying a certain ‘localized error’ condition. We derive the asymptotic properties of the method and indicate how to adjust the covariance matrix of the regression coefficient estimates to account for estimation of the misclassification matrix. We present the results of a finite-sample simulation study under Weibull survival with a single binary covariate having known misclassification rates. The performance of the method described here was similar to that of related methods we have examined in previous works. Specifically, our new estimator performed as well as or, in a few cases, better than the full Weibull maximum likelihood estimator. We also present simulation results for our method for the case where the misclassification probabilities are estimated from an external replicate measures study. Our method generally performed well in these simulations. The new estimator has a broader range of applicability than many other estimators proposed in the literature, including those described in our own earlier work, in that it can handle time-dependent covariates with an arbitrary misclassification structure. We illustrate the method on data from a study of the relationship between dietary calcium intake and distal colon cancer. PMID:18219700
Alonso, Jordi; Vilagut, Gemma; Chatterji, Somnath; Heeringa, Steven; Schoenbaum, Michael; Üstün, T. Bedirhan; Rojas-Farreras, Sonia; Angermeyer, Matthias; Bromet, Evelyn; Bruffaerts, Ronny; de Girolamo, Giovanni; Gureje, Oye; Haro, Josep Maria; Karam, Aimee N.; Kovess, Viviane; Levinson, Daphna; Liu, Zhaorui; Mora, Maria Elena Medina; Ormel, J.; Posada-Villa, Jose; Uda, Hidenori; Kessler, Ronald C.
2010-01-01
Background The methodology commonly used to estimate disease burden, featuring ratings of severity of individual conditions, has been criticized for ignoring comorbidity. A methodology that addresses this problem is proposed and illustrated here with data from the WHO World Mental Health Surveys. Although the analysis is based on self-reports about one’s own conditions in a community survey, the logic applies equally well to analysis of hypothetical vignettes describing comorbid condition profiles. Methods Face-to-face interviews in 13 countries (six developing, nine developed; n = 31,067; response rate = 69.6%) assessed 10 classes of chronic physical and 9 of mental conditions. A visual analog scale (VAS) was used to assess overall perceived health. Multiple regression analysis with interactions for comorbidity was used to estimate associations of conditions with VAS. Simulation was used to estimate condition-specific effects. Results The best-fitting model included condition main effects and interactions of types by numbers of conditions. Neurological conditions, insomnia, and major depression were rated most severe. Adjustment for comorbidity reduced condition-specific estimates with substantial between-condition variation (.24–.70 ratios of condition-specific estimates with and without adjustment for comorbidity). The societal-level burden rankings were quite different from the individual-level rankings, with the highest societal-level rankings associated with conditions having high prevalence rather than high individual-level severity. Conclusions Plausible estimates of disorder-specific effects on VAS can be obtained using methods that adjust for comorbidity. These adjustments substantially influence condition-specific ratings. PMID:20553636
Advanced methods of structural and trajectory analysis for transport aircraft
NASA Technical Reports Server (NTRS)
Ardema, Mark D.
1995-01-01
This report summarizes the efforts in two areas: (1) development of advanced methods of structural weight estimation, and (2) development of advanced methods of trajectory optimization. The majority of the effort was spent in the structural weight area. A draft of 'Analytical Fuselage and Wing Weight Estimation of Transport Aircraft', resulting from this research, is included as an appendix.
NASA Technical Reports Server (NTRS)
1976-01-01
Results of planetary advanced studies and planning support are summarized. The scope of analyses includes cost estimation research, planetary mission performance, penetrator mission concepts for airless planets/satellites, geology orbiter payload adaptability, lunar mission performance, and advanced planning activities. Study reports and related publications are included in a bibliography section.
Incorporating structure from motion uncertainty into image-based pose estimation
NASA Astrophysics Data System (ADS)
Ludington, Ben T.; Brown, Andrew P.; Sheffler, Michael J.; Taylor, Clark N.; Berardi, Stephen
2015-05-01
A method for generating and utilizing structure from motion (SfM) uncertainty estimates within image-based pose estimation is presented. The method is applied to a class of problems in which SfM algorithms are utilized to form a geo-registered reference model of a particular ground area using imagery gathered during flight by a small unmanned aircraft. The model is then used to form camera pose estimates in near real-time from imagery gathered later. The resulting pose estimates can be utilized by any of the other onboard systems (e.g., as a replacement for GPS data) or by downstream exploitation systems, e.g., image-based object trackers. However, many of the consumers of pose estimates require an assessment of the pose accuracy. The method for generating the accuracy assessment is presented. First, the uncertainty in the reference model is estimated. Bundle adjustment (BA) is utilized for model generation. While the high-level approach for generating a covariance matrix of the BA parameters is straightforward, typical computing hardware cannot support the required operations due to the scale of the optimization problem within BA. Therefore, a series of sparse matrix operations is utilized to form an exact covariance matrix for only the parameters that are needed at a particular moment. Once the uncertainty in the model has been determined, it is used to augment Perspective-n-Point pose estimation algorithms to improve the pose accuracy and to estimate the resulting pose uncertainty. The implementation of the described method is presented along with results, including results gathered from flight test data.
Body mass estimates of hominin fossils and the evolution of human body size.
Grabowski, Mark; Hatala, Kevin G; Jungers, William L; Richmond, Brian G
2015-08-01
Body size directly influences an animal's place in the natural world, including its energy requirements, home range size, relative brain size, locomotion, diet, life history, and behavior. Thus, an understanding of the biology of extinct organisms, including species in our own lineage, requires accurate estimates of body size. Since the last major review of hominin body size based on postcranial morphology over 20 years ago, new fossils have been discovered, species attributions have been clarified, and methods improved. Here, we present the most comprehensive and thoroughly vetted set of individual fossil hominin body mass predictions to date, and estimation equations based on a large (n = 220) sample of modern humans of known body masses. We also present species averages based exclusively on fossils with reliable taxonomic attributions, estimates of species averages by sex, and a metric for levels of sexual dimorphism. Finally, we identify individual traits that appear to be the most reliable for mass estimation for each fossil species, for use when only one measurement is available for a fossil. Our results show that many early hominins were generally smaller-bodied than previously thought, an outcome likely due to larger estimates in previous studies resulting from the use of large-bodied modern human reference samples. Current evidence indicates that modern human-like large size first appeared by at least 3-3.5 Ma in some Australopithecus afarensis individuals. Our results challenge an evolutionary model arguing that body size increased from Australopithecus to early Homo. Instead, we show that there is no reliable evidence that the body size of non-erectus early Homo differed from that of australopiths, and confirm that Homo erectus evolved larger average body size than earlier hominins. Copyright © 2015 Elsevier Ltd. All rights reserved.
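Estimation equations of the kind described are typically log-log regressions on a modern reference sample of known body masses. The sketch below uses synthetic femoral-head-breadth data and a smearing correction for the back-transform; the trait choice is illustrative and none of the coefficients correspond to the study's equations.

import numpy as np

rng = np.random.default_rng(7)

# Hypothetical modern reference sample (n = 220, matching the study's size):
# femoral head breadth (mm) vs known body mass (kg); values are synthetic.
fhb = rng.uniform(38, 52, 220)
mass = 2.2 * fhb - 40 + rng.normal(0, 5, 220)

# Log-log OLS estimation equation, a common form for body mass prediction.
b, a = np.polyfit(np.log(fhb), np.log(mass), 1)
resid = np.log(mass) - (a + b * np.log(fhb))
smear = np.mean(np.exp(resid))   # Duan's smearing factor for the back-transform

def predict_mass(fossil_fhb_mm):
    return smear * np.exp(a + b * np.log(fossil_fhb_mm))

print("predicted mass for a 42 mm femoral head:", predict_mass(42.0), "kg")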
NASA Astrophysics Data System (ADS)
Ponte Castañeda, Pedro
2016-11-01
This paper presents a variational method for estimating the effective constitutive response of composite materials with nonlinear constitutive behavior. The method is based on a stationary variational principle for the macroscopic potential in terms of the corresponding potential of a linear comparison composite (LCC) whose properties are the trial fields in the variational principle. When used in combination with estimates for the LCC that are exact to second order in the heterogeneity contrast, the resulting estimates for the nonlinear composite are also guaranteed to be exact to second order in the contrast. In addition, the new method allows full optimization with respect to the properties of the LCC, leading to estimates that are fully stationary and exhibit no duality gaps. As a result, the effective response and field statistics of the nonlinear composite can be estimated directly from the appropriately optimized linear comparison composite. By way of illustration, the method is applied to a porous, isotropic, power-law material, and the results are found to compare favorably with earlier bounds and estimates. However, the basic ideas of the method are expected to work for broad classes of composite materials whose effective response can be given appropriate variational representations, including more general elasto-plastic and soft hyperelastic composites and polycrystals.
Sizing and Lifecycle Cost Analysis of an Ares V Composite Interstage
NASA Technical Reports Server (NTRS)
Mann, Troy; Smeltzer, Stan; Grenoble, Ray; Mason, Brian; Rosario, Sev; Fairbairn, Bob
2012-01-01
The Interstage Element of the Ares V launch vehicle was sized using a commercially available structural sizing software tool. Two different concepts were considered, a metallic design and a composite design. Both concepts were sized using similar levels of analysis fidelity and included the influence of design details on each concept. Additionally, the impact of the different manufacturing techniques and failure mechanisms for composite and metallic construction were considered. Significant details were included in analysis models of each concept, including penetrations for human access, joint connections, as well as secondary loading effects. The designs and results of the analysis were used to determine lifecycle cost estimates for the two Interstage designs. Lifecycle cost estimates were based on industry provided cost data for similar launch vehicle components. The results indicated that significant mass as well as cost savings are attainable for the chosen composite concept as compared with a metallic option.
Aly, Sharif S; Zhao, Jianyang; Li, Ben; Jiang, Jiming
2014-01-01
The intraclass correlation coefficient (ICC) is commonly used to estimate the similarity between quantitative measures obtained from different sources. Overdispersed data are traditionally transformed so that a linear mixed model (LMM)-based ICC can be estimated; a common transformation is the natural logarithm. The reliability of environmental sampling of fecal slurry on freestall pens has previously been estimated for Mycobacterium avium subsp. paratuberculosis using natural-logarithm-transformed culture results. Recently, the negative binomial ICC was defined based on a generalized linear mixed model for negative binomial distributed data. The current study reports a negative binomial ICC estimate that includes fixed effects, using culture results of environmental samples. Simulations using a wide variety of inputs and negative binomial distribution parameters (r; p) showed better performance of the new negative binomial ICC compared to the LMM-based ICC, even when the negative binomial data were logarithm- or square-root-transformed. A second comparison that targeted a wider range of ICC values showed that the mean of the estimated ICC closely approximated the true ICC.
A phylogeny and revised classification of Squamata, including 4161 species of lizards and snakes
2013-01-01
Background The extant squamates (>9400 known species of lizards and snakes) are one of the most diverse and conspicuous radiations of terrestrial vertebrates, but no studies have attempted to reconstruct a phylogeny for the group with large-scale taxon sampling. Such an estimate is invaluable for comparative evolutionary studies, and to address their classification. Here, we present the first large-scale phylogenetic estimate for Squamata. Results The estimated phylogeny contains 4161 species, representing all currently recognized families and subfamilies. The analysis is based on up to 12896 base pairs of sequence data per species (average = 2497 bp) from 12 genes, including seven nuclear loci (BDNF, c-mos, NT3, PDC, R35, RAG-1, and RAG-2), and five mitochondrial genes (12S, 16S, cytochrome b, ND2, and ND4). The tree provides important confirmation for recent estimates of higher-level squamate phylogeny based on molecular data (but with more limited taxon sampling), estimates that are very different from previous morphology-based hypotheses. The tree also includes many relationships that differ from previous molecular estimates and many that differ from traditional taxonomy. Conclusions We present a new large-scale phylogeny of squamate reptiles that should be a valuable resource for future comparative studies. We also present a revised classification of squamates at the family and subfamily level to bring the taxonomy more in line with the new phylogenetic hypothesis. This classification includes new, resurrected, and modified subfamilies within gymnophthalmid and scincid lizards, and boid, colubrid, and lamprophiid snakes. PMID:23627680
Lee, Vinson R.; Blew, Rob M.; Farr, Josh N.; Tomas, Rita; Lohman, Timothy G.; Going, Scott B.
2013-01-01
Objective Assess the utility of peripheral quantitative computed tomography (pQCT) for estimating whole body fat in adolescent girls. Research Methods and Procedures Our sample included 458 girls (aged 10.7 ± 1.1y, mean BMI = 18.5 ± 3.3 kg/m2) who had DXA scans for whole body percent fat (DXA %Fat). Soft tissue analysis of pQCT scans provided thigh and calf subcutaneous percent fat and thigh and calf muscle density (muscle fat content surrogates). Anthropometric variables included weight, height and BMI. Indices of maturity included age and maturity offset. The total sample was split into validation (VS; n = 304) and cross-validation (CS; n = 154) samples. Linear regression was used to develop prediction equations for estimating DXA %Fat from anthropometric variables and pQCT-derived soft tissue components in VS and the best prediction equation was applied to CS. Results Thigh and calf SFA %Fat were positively correlated with DXA %Fat (r = 0.84 to 0.85; p <0.001) and thigh and calf muscle densities were inversely related to DXA %Fat (r = −0.30 to −0.44; p < 0.001). The best equation for estimating %Fat included thigh and calf SFA %Fat and thigh and calf muscle density (adj. R2 = 0.90; SEE = 2.7%). Bland-Altman analysis in CS showed accurate estimates of percent fat (adj. R2 = 0.89; SEE = 2.7%) with no bias. Discussion Peripheral QCT derived indices of adiposity can be used to accurately estimate whole body percent fat in adolescent girls. PMID:25147482
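The validation/cross-validation workflow can be sketched with a linear model on the four predictors in the best equation. All data below are synthetic stand-ins generated for illustration, not the study's pQCT or DXA measurements.

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(8)
n = 458
# Hypothetical stand-ins for the four predictors in the best equation:
# thigh/calf subcutaneous %fat and thigh/calf muscle density (mg/cm^3).
X = np.column_stack([
    rng.normal(40, 8, n), rng.normal(35, 8, n),   # subcutaneous %fat
    rng.normal(70, 4, n), rng.normal(72, 4, n),   # muscle densities
])
dxa = (0.5 * X[:, 0] + 0.4 * X[:, 1] - 0.3 * X[:, 2] - 0.2 * X[:, 3]
       + 20 + rng.normal(0, 2.7, n))              # synthetic DXA %fat

# Mirror the study's split: validation sample to fit, cross-validation to test.
vs, cs = slice(0, 304), slice(304, None)
model = LinearRegression().fit(X[vs], dxa[vs])
pred = model.predict(X[cs])
see = np.sqrt(np.mean((dxa[cs] - pred) ** 2))
print("cross-validation SEE (%fat):", round(see, 2))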
NASA Astrophysics Data System (ADS)
Teramae, Tatsuya; Kushida, Daisuke; Takemori, Fumiaki; Kitamura, Akira
The authors previously proposed an estimation method combining the k-means algorithm and a neural network (NN) for evaluating massage. However, this method suffers from a reduced discrimination ratio for new users. There are two causes: the NN generalizes poorly, and the clusters produced by the k-means algorithm have low within-class correlation coefficients. This research therefore proposes a k-means algorithm guided by the correlation coefficient, together with incremental learning for the NN. The proposed k-means algorithm includes an evaluation function based on the correlation coefficient. In the incremental learning scheme, the NN is trained on new data with weights initialized from the existing data. The effectiveness of the proposed methods is verified by estimation results using EEG data recorded while a subject receives massage.
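The abstract does not specify the evaluation function in detail. One plausible reading, sketched below under that assumption, scores a k-means clustering by the average within-cluster correlation coefficient and keeps the best clustering; the incremental NN retraining is not shown, and the feature data are stand-ins for EEG features.

import numpy as np
from sklearn.cluster import KMeans

def mean_within_cluster_correlation(X, labels):
    """Evaluation function in the spirit of the paper: average pairwise
    Pearson correlation between samples assigned to the same cluster."""
    scores = []
    for c in np.unique(labels):
        members = X[labels == c]
        if len(members) < 2:
            continue
        r = np.corrcoef(members)            # sample-by-sample correlations
        iu = np.triu_indices_from(r, k=1)
        scores.append(r[iu].mean())
    return float(np.mean(scores))

rng = np.random.default_rng(9)
X = rng.normal(size=(60, 16))               # stand-in for EEG feature vectors

# Re-run k-means over several settings and keep the clustering whose
# classes are most internally correlated.
best = max(
    (KMeans(n_clusters=k, n_init=10, random_state=0).fit(X) for k in (2, 3, 4)),
    key=lambda km: mean_within_cluster_correlation(X, km.labels_),
)
print("chosen k:", best.n_clusters)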
Smith, Eric G.
2015-01-01
Background: Nonrandomized studies typically cannot account for confounding from unmeasured factors. Method: A method is presented that exploits the recently-identified phenomenon of “confounding amplification” to produce, in principle, a quantitative estimate of total residual confounding resulting from both measured and unmeasured factors. Two nested propensity score models are constructed that differ only in the deliberate introduction of an additional variable(s) that substantially predicts treatment exposure. Residual confounding is then estimated by dividing the change in treatment effect estimate between models by the degree of confounding amplification estimated to occur, adjusting for any association between the additional variable(s) and outcome. Results: Several hypothetical examples are provided to illustrate how the method produces a quantitative estimate of residual confounding if the method’s requirements and assumptions are met. Previously published data is used to illustrate that, whether or not the method routinely provides precise quantitative estimates of residual confounding, the method appears to produce a valuable qualitative estimate of the likely direction and general size of residual confounding. Limitations: Uncertainties exist, including identifying the best approaches for: 1) predicting the amount of confounding amplification, 2) minimizing changes between the nested models unrelated to confounding amplification, 3) adjusting for the association of the introduced variable(s) with outcome, and 4) deriving confidence intervals for the method’s estimates (although bootstrapping is one plausible approach). Conclusions: To this author’s knowledge, it has not been previously suggested that the phenomenon of confounding amplification, if such amplification is as predictable as suggested by a recent simulation, provides a logical basis for estimating total residual confounding. The method's basic approach is straightforward. The method's routine usefulness, however, has not yet been established, nor has the method been fully validated. Rapid further investigation of this novel method is clearly indicated, given the potential value of its quantitative or qualitative output. PMID:25580226
NASA Astrophysics Data System (ADS)
John, Cédric M.; Karner, Garry D.; Mutti, Maria
2004-09-01
δ18Obenthic values from Leg 194 Ocean Drilling Program Sites 1192 and 1195 (drilled on the Marion Plateau) were combined with deep-sea values to reconstruct the magnitude range of the late middle Miocene sea-level fall (13.6–11.4 Ma). In parallel, an estimate for the late middle Miocene sea-level fall was calculated from the stratigraphic relationship identified during Leg 194 and the structural relief of carbonate platforms that form the Marion Plateau. Corrections for thermal subsidence induced by Late Cretaceous rifting, flexural sediment loading, and sediment compaction were taken into account. The response of the lithosphere to sediment loading was considered for a range of effective elastic thicknesses (10 < Te < 40 km). By overlapping the sea-level range of both the deep-sea isotopes and the results from the backstripping analysis, we demonstrate that the amplitude of the late middle Miocene sea-level fall was 45–68 m (56.5 ± 11.5 m). Including an estimate for sea-level variation using the δ18Obenthic results from the subtropical Marion Plateau, the range of sea-level fall is tightly constrained between 45 and 55 m (50.0 ± 5.0 m). This result is the first precise quantitative estimate for the amplitude of the late middle Miocene eustatic fall that sidesteps the errors inherent in using benthic foraminifera assemblages to predict paleo water depth. The estimate also includes an error analysis for the flexural response of the lithosphere to both water and sediment loads. Our result implies that the extent of ice buildup in the Miocene was larger than previously estimated, and conversely that the amount of cooling associated with this event was less important.
Towards Estimating the Magnitude of Earthquakes from EM Data Collected from the Subduction Zone
NASA Astrophysics Data System (ADS)
Heraud, J. A.
2016-12-01
During the past three years, magnetometers deployed on the Peruvian coast have been providing evidence that the ULF pulses received are indeed generated at the subduction or Benioff zone. Such evidence was presented at the AGU 2015 Fall meeting, showing the results of triangulation of pulses from two magnetometers located in the central area of Peru, using data collected during a two-year period. The process has since been extended in time; only pulses associated with the occurrence of earthquakes have been used, and several pulse parameters have been used to estimate a function relating the magnitude of the earthquake to a value computed from those parameters. The results shown, including an animated data video, are a first approximation towards estimating the magnitude of an earthquake about to occur, based on electromagnetic pulses that originated at the subduction zone.
Cost analysis in support of minimum energy standards for clothes washers and dryers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1979-02-02
The results of the cost analysis of energy conservation design options for laundry products are presented. The analysis was conducted using two approaches. The first is directed toward developing industrial-engineering cost estimates of each energy conservation option; this approach yields estimates of manufacturers' costs. The second approach is directed toward determining the market price differential of energy conservation features, and its results are shown. The market cost represents the cost to the consumer; it is the final cost, and therefore includes distribution costs as well as manufacturing costs.
Reducing Contingency through Sampling at the Luckey FUSRAP Site - 13186
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frothingham, David; Barker, Michelle; Buechi, Steve
2013-07-01
Typically, the greatest risk in developing accurate cost estimates for the remediation of hazardous, toxic, and radioactive waste sites is the uncertainty in the estimated volume of contaminated media requiring remediation. Efforts to address this risk in the remediation cost estimate can result in large cost contingencies that are often considered unacceptable when budgeting for site cleanups. Such was the case for the Luckey Formerly Utilized Sites Remedial Action Program (FUSRAP) site near Luckey, Ohio, which had significant uncertainty surrounding the estimated volume of site soils contaminated with radium, uranium, thorium, beryllium, and lead. Funding provided by the American Recovery and Reinvestment Act (ARRA) allowed the U.S. Army Corps of Engineers (USACE) to conduct additional environmental sampling and analysis at the Luckey Site between November 2009 and April 2010, with the objective to further delineate the horizontal and vertical extent of contaminated soils in order to reduce the uncertainty in the soil volume estimate. Investigative work included radiological, geophysical, and topographic field surveys, subsurface borings, and soil sampling. Results from the investigative sampling were used in conjunction with Argonne National Laboratory's Bayesian Approaches for Adaptive Spatial Sampling (BAASS) software to update the contaminated soil volume estimate for the site. This updated volume estimate was then used to update the project cost-to-complete estimate using the USACE Cost and Schedule Risk Analysis process, which develops cost contingencies based on project risks. An investment of $1.1 M of ARRA funds for additional investigative work resulted in a reduction of 135,000 in-situ cubic meters (177,000 in-situ cubic yards) in the estimated base volume of contaminated soil. This refinement of the estimated soil volume resulted in a $64.3 M reduction in the estimated project cost-to-complete, through a reduction in the uncertainty in the contaminated soil volume estimate and the associated contingency costs. (authors)
Sando, Steven K.; Morgan, Timothy J.; Dutton, DeAnn M.; McCarthy, Peter M.
2009-01-01
Charles M. Russell National Wildlife Refuge (CMR) encompasses about 1.1 million acres (including Fort Peck Reservoir on the Missouri River) in northeastern Montana. To ensure that sufficient streamflow remains in the tributary streams to maintain the riparian corridors, the U.S. Fish and Wildlife Service is negotiating water-rights issues with the Reserved Water Rights Compact Commission of Montana. The U.S. Geological Survey, in cooperation with the U.S. Fish and Wildlife Service, conducted a study to gage selected streams that cross CMR for a short period and to analyze the data to estimate long-term streamflow characteristics for CMR. The long-term streamflow characteristics of primary interest include the monthly and annual 90-, 80-, 50-, and 20-percent exceedance streamflows and mean streamflows (Q.90, Q.80, Q.50, Q.20, and QM, respectively), and the 1.5-, 2-, and 2.33-year peak flows (PK1.5, PK2, and PK2.33, respectively). The Regional Adjustment Relationship (RAR) was investigated for estimating the monthly and annual Q.90, Q.80, Q.50, Q.20, and QM, and the PK1.5, PK2, and PK2.33 for the short-term CMR gaging stations (hereinafter referred to as CMR stations). The RAR was determined to provide acceptable results for estimating the long-term Q.90, Q.80, Q.50, Q.20, and QM on a monthly basis for the months of March through June, and also on an annual basis. For the months of September through January, the RAR regression equations did not provide acceptable results for any long-term streamflow characteristic. For the month of February, the RAR regression equations provided acceptable results for the long-term Q.50 and QM, but poor results for the long-term Q.90, Q.80, and Q.20. For the months of July and August, the RAR provided acceptable results for the long-term Q.50, Q.20, and QM, but poor results for the long-term Q.90 and Q.80. Estimation coefficients were developed for estimating the long-term streamflow characteristics for which the RAR did not provide acceptable results. The RAR also was determined to provide acceptable results for estimating the PK1.5, PK2, and PK2.33 for the three CMR stations that lacked suitable peak-flow records. Methods for estimating streamflow characteristics at ungaged sites also were derived. Regression analyses that relate individual streamflow characteristics to various basin and climatic characteristics for gaging stations were performed to develop regression equations to estimate streamflow characteristics at ungaged sites. Final equations for the annual Q.50, Q.20, and QM are reported. Acceptable equations also were developed for estimating QM for the months of February, March, April, June, and July, and for Q.50, Q.20, and QM on an annual basis. However, the equations for QM for the months of February, March, April, June, and July were determined to be less consistent and reliable than applying estimation coefficients to the regression-equation results for the annual QM. Acceptable regression equations also were developed for the PK1.5, PK2, and PK2.33.
NASA Astrophysics Data System (ADS)
Courchesne, Samuel
Knowledge of the dynamic characteristics of a fixed-wing UAV is necessary to design flight control laws and to build a high-quality flight simulator. The basic elements of a flight-mechanics model include the mass and inertia properties and the major aerodynamic terms; obtaining them involves a complex process combining various numerical analysis techniques and experimental procedures. This thesis focuses on estimation techniques applied to the problem of estimating stability and control derivatives from flight-test data provided by an experimental UAV. To achieve this objective, a modern identification methodology (Quad-M) is used to coordinate the processing tasks from multidisciplinary fields, such as model postulation for parameter estimation, instrumentation, the definition of flight maneuvers, and validation. The system under study is a six-degree-of-freedom nonlinear model with a linear aerodynamic model. Time-domain techniques are used for identification of the drone. The first technique, the equation-error method, is used to determine the structure of the aerodynamic model. Thereafter, the output-error method and the filter-error method are used to estimate the values of the aerodynamic coefficients. Matlab parameter-estimation scripts obtained from the American Institute of Aeronautics and Astronautics (AIAA) are used and modified as necessary to achieve the desired results. A substantial part of this research is devoted to the design of experiments, including the onboard data-acquisition system and the definition of flight maneuvers. The flight tests were conducted under stable flight conditions and with low atmospheric disturbance. Nevertheless, the identification results showed that the filter-error method is the most effective for estimating the parameters of the drone, owing to the presence of process and measurement noise. The aerodynamic coefficients are validated using a numerical vortex-method analysis. In addition, a simulation model incorporating the estimated parameters is used to compare simulated and measured state trajectories. Finally, good agreement between the results is demonstrated despite a limited amount of flight data. Keywords: drone, identification, estimation, nonlinear, flight test, system, aerodynamic coefficient.
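To make the output-error idea above concrete, the following minimal sketch fits two aerodynamic derivatives of a one-state pitch-rate model to noisy simulated flight data by minimizing the output residuals. The dynamics, parameter names (Mq, Mde), and all numerical values are invented for illustration; this is not the thesis code.

```python
# Output-error sketch: fit (Mq, Mde) of qdot = Mq*q + Mde*de to noisy data.
import numpy as np
from scipy.optimize import least_squares

dt = 0.02
t = np.arange(0, 10, dt)
de = np.where((t > 1) & (t < 2), 0.05, 0.0)       # doublet-like elevator input

def simulate(theta):
    Mq, Mde = theta
    q = np.zeros_like(t)
    for k in range(1, len(t)):                    # simple Euler integration
        q[k] = q[k - 1] + dt * (Mq * q[k - 1] + Mde * de[k - 1])
    return q

q_meas = simulate((-2.0, -15.0)) + 0.002 * np.random.randn(len(t))

res = least_squares(lambda th: simulate(th) - q_meas, x0=(-1.0, -5.0))
print("estimated (Mq, Mde):", res.x)              # should approach (-2, -15)
```

The filter-error method extends this idea by replacing the deterministic simulation with a Kalman filter, so that process noise as well as measurement noise is accounted for.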
A new Bayesian Earthquake Analysis Tool (BEAT)
NASA Astrophysics Data System (ADS)
Vasyura-Bathke, Hannes; Dutta, Rishabh; Jónsson, Sigurjón; Mai, Martin
2017-04-01
Modern earthquake source estimation studies increasingly use non-linear optimization strategies to estimate kinematic rupture parameters, often considering geodetic and seismic data jointly. However, the optimization process is complex and consists of several steps that need to be followed in the earthquake parameter estimation procedure. These include pre-describing or modeling the fault geometry, calculating the Green's functions (often assuming a layered elastic half-space), and estimating the distributed final slip and possibly other kinematic source parameters. Recently, Bayesian inference has become popular for estimating posterior distributions of earthquake source model parameters given measured/estimated/assumed data and model uncertainties. For instance, some research groups consider uncertainties of the layered medium and propagate these to the source parameter uncertainties. Other groups make use of informative priors to reduce the model parameter space. In addition, innovative sampling algorithms have been developed that efficiently explore the often high-dimensional parameter spaces. Compared to earlier studies, these improvements have resulted in overall more robust source model parameter estimates that include uncertainties. However, the computational demands of these methods are high and estimation codes are rarely distributed along with the published results. Even if codes are made available, it is often difficult to assemble them into a single optimization framework, as they are typically coded in different programming languages. Therefore, further progress and future applications of these methods/codes are hampered, while reproducibility and validation of results have become essentially impossible. In the spirit of providing open-access and modular codes to facilitate progress and reproducible research in earthquake source estimations, we undertook the effort of producing BEAT, a python package that comprises all the above-mentioned features in one single programming environment. The package is built on top of the pyrocko seismological toolbox (www.pyrocko.org) and makes use of the pymc3 module for Bayesian statistical model fitting. BEAT is an open-source package (https://github.com/hvasbath/beat) and we encourage and solicit contributions to the project. In this contribution, we present our strategy for developing BEAT, show application examples, and discuss future developments.
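Since the abstract names pymc3 as BEAT's statistical backbone, a toy example may help readers unfamiliar with that module. The model below infers a single scalar slip parameter from synthetic displacement data; it illustrates the pymc3 workflow only and is in no way BEAT's actual model or API.

```python
# Toy Bayesian source-parameter fit in pymc3: d = G * slip + noise.
import numpy as np
import pymc3 as pm

G = np.random.randn(20)                   # stand-in Green's-function coefficients
d = G * 2.5 + 0.1 * np.random.randn(20)   # synthetic data, true slip = 2.5

with pm.Model():
    slip = pm.Normal("slip", mu=0.0, sd=10.0)   # weakly informative prior
    sigma = pm.HalfNormal("sigma", sd=1.0)      # unknown data-error scale
    pm.Normal("obs", mu=G * slip, sd=sigma, observed=d)
    trace = pm.sample(1000, tune=1000, cores=1, progressbar=False)

print("slip posterior:", trace["slip"].mean(), "+/-", trace["slip"].std())
```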
NASA Astrophysics Data System (ADS)
Tran, A. P.; Dafflon, B.; Hubbard, S.
2017-12-01
Soil organic carbon (SOC) is crucial for predicting carbon-climate feedbacks in the vulnerable organic-rich Arctic region. However, this property is challenging to measure, given the general limitations of conventional core sampling and analysis methods. In this study, we develop an inversion scheme that uses single or multiple datasets, including soil liquid water content, temperature, and ERT data, to estimate the vertical profile of SOC content. Our approach relies on the fact that SOC content strongly influences soil hydrological-thermal parameters and, therefore, indirectly controls the spatiotemporal dynamics of soil liquid water content, temperature, and their correlated electrical resistivity. The scheme has several advantages. First, this is the first time SOC content is estimated using a coupled hydrogeophysical inversion. Second, by using the Community Land Model, we can account for the land-surface dynamics (evapotranspiration, snow accumulation and melting) and the ice/liquid phase transition. Third, we combine a deterministic and an adaptive Markov chain Monte Carlo optimization algorithm to better estimate the posterior distributions of the desired model parameters. Finally, the simulated subsurface variables are explicitly linked to soil electrical resistivity via petrophysical and geophysical models. We validate the developed scheme using synthetic experiments. The results show that, compared to inversion of a single dataset, joint inversion of these datasets significantly reduces parameter uncertainty. The joint inversion approach is able to estimate SOC content within the shallow active layer with high reliability. Next, we apply the scheme to estimate SOC content along an intensive ERT transect in Barrow, Alaska, using multiple datasets acquired in the 2013-2015 period. The preliminary results show a good agreement between modeled and measured soil temperature, thaw layer thickness, and electrical resistivity. The accuracy of the estimated SOC content will be evaluated by comparison with measurements from soil samples along the transect. Our study presents a new surface-subsurface, deterministic-stochastic hydrogeophysical inversion approach, as well as the benefit of including multiple types of data to estimate SOC and the associated hydrological-thermal dynamics.
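The stochastic half of such a scheme can be illustrated with a bare-bones random-walk Metropolis sampler for a single parameter. The forward model below is a made-up scalar relation standing in for the coupled hydrological-thermal-geophysical simulation; only the MCMC mechanics carry over.

```python
# Random-walk Metropolis sketch for one SOC-like parameter on [0, 1].
import numpy as np

rng = np.random.default_rng(0)
forward = lambda soc: 2.0 - 1.5 * soc              # toy resistivity-like response
obs = forward(0.3) + 0.05 * rng.normal(size=50)    # synthetic data, true SOC = 0.3

def log_post(soc):
    if not 0.0 <= soc <= 1.0:                      # uniform prior on [0, 1]
        return -np.inf
    return -0.5 * np.sum((obs - forward(soc)) ** 2) / 0.05 ** 2

chain, cur, lp = [], 0.5, log_post(0.5)
for _ in range(5000):
    prop = cur + 0.02 * rng.normal()               # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:        # Metropolis accept/reject
        cur, lp = prop, lp_prop
    chain.append(cur)

print("posterior mean:", np.mean(chain[1000:]))    # burn-in discarded
```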
The Lightning Nitrogen Oxides Model (LNOM): Status and Recent Applications
NASA Technical Reports Server (NTRS)
Koshak, William; Khan, Maudood; Peterson, Harold
2011-01-01
Improvements to the NASA Marshall Space Flight Center Lightning Nitrogen Oxides Model (LNOM) are discussed. Recent results from an August 2006 run of the Community Multiscale Air Quality (CMAQ) modeling system that employs LNOM lightning NOx (= NO + NO2) estimates are provided. The LNOM analyzes Lightning Mapping Array (LMA) data to estimate the raw (i.e., unmixed and otherwise environmentally unmodified) vertical profile of lightning NOx. The latest LNOM estimates of (a) lightning channel length distributions, (b) lightning 1-m segment altitude distributions, and (c) the vertical profile of NOx are presented. The impact of including LNOM-estimates of lightning NOx on CMAQ output is discussed.
Measurement of surface physical properties and radiation balance for KUREX-91 study
NASA Technical Reports Server (NTRS)
Walter-Shea, Elizabeth A.; Blad, Blaine L.; Mesarch, Mark A.; Hays, Cynthia J.
1992-01-01
Biophysical properties and radiation balance components were measured at the Streletskaya Steppe Reserve of the Russian Republic in July 1991. Steppe vegetation parameters characterized include leaf area index (LAI), leaf angle distribution, mean tilt angle, canopy height, leaf spectral properties, leaf water potential, fraction of absorbed photosynthetically active radiation (APAR), and incoming and outgoing shortwave and longwave radiation. Research results, biophysical parameters, radiation balance estimates, and sun-view geometry effects on estimating APAR are discussed. Incoming and outgoing radiation streams are estimated using bidirectional spectral reflectances and bidirectional thermal emittances. Good agreement between measured and modeled estimates of the radiation balance was obtained.
The impact of land use on estimates of pesticide leaching potential: Assessments and uncertainties
NASA Astrophysics Data System (ADS)
Loague, Keith
1991-11-01
This paper illustrates the magnitude of uncertainty that can exist in pesticide leaching assessments due to data uncertainties, both between soil orders and within a single soil order. The current work differs from previous efforts because the impact of uncertainty in recharge estimates is considered. The examples are for diuron leaching in the Pearl Harbor Basin. The results clearly indicate that land use has a significant impact on both estimates of pesticide leaching potential and the uncertainties associated with those estimates. It appears that the regulation of agricultural chemicals in the future should include consideration of changing land use.
Bunck, C.M.; Chen, C.-L.; Pollock, K.H.
1995-01-01
Traditional methods of estimating survival from radio-telemetry studies use either the Trent-Rongstad approach (Trent and Rongstad 1974, Heisey and Fuller 1985) or the Kaplan-Meier approach (Kaplan and Meier 1958; Pollock et al. 1989a,b). Both methods appear to require the assumption that the relocation probability for animals with a functioning radio is 1. In practice this may not always be reasonable and, in fact, is unnecessary. The number of animals at risk (i.e., the risk set) can be modified to account for uncertain relocation of individuals. This involves including only relocated animals in the risk set, instead of also including animals that were not relocated at the time but were seen later. Simulation results show that estimators and tests for comparing survival curves should be based on this modification.
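The proposed modification amounts to changing the denominator of the Kaplan-Meier product-limit estimator. A minimal sketch with invented counts: at each occasion the modified risk set counts only animals actually relocated, while the conventional risk set adds back animals missed then but seen later.

```python
# Kaplan-Meier survival with two risk-set definitions (toy counts).
import numpy as np

d       = np.array([1, 2, 1, 0, 2])       # deaths observed at each occasion
n_reloc = np.array([30, 27, 24, 22, 20])  # animals actually relocated (modified)
n_conv  = np.array([32, 29, 26, 24, 21])  # conventional risk set

km = lambda n: np.cumprod(1.0 - d / n)    # product-limit estimator
print("modified risk set S(t):", km(n_reloc).round(3))
print("conventional     S(t):", km(n_conv).round(3))
```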
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-28
...The Federal Emergency Management Agency (FEMA) has submitted the following information collection to the Office of Management and Budget (OMB) for review and clearance in accordance with the requirements of the Paperwork Reduction Act of 1995. The submission describes the nature of the information collection, the categories of respondents, the estimated burden (i.e., the time, effort and resources used by respondents to respond) and cost, and includes the actual data collection instruments FEMA will use. There has been a change in the respondents, estimated burden, and estimated total annual burden hours from the previous 30-day Notice. This change results from including the time, effort, and resources used by respondents to collect the requested information, as well as a significant expected decline in the number of respondents.
Sex differences in estimating multiple intelligences in self and others: a replication in Russia.
Furnham, Adrian; Shagabutdinova, Ksenia
2012-01-01
This was a cross-cultural study that focused on sex differences in self- and other-estimates of multiple intelligences (including 10 specified by Gardner, 1999, and three by Sternberg, 1988), as well as in an overall general intelligence estimate. It was one of a programmatic series of studies, done in over 30 countries, that have demonstrated the female "humility" and male "hubris" effect in self-estimated and other-estimated intelligence. Two hundred and thirty Russian university students estimated their own and their parents' overall intelligence and "multiple intelligences." Results revealed no sex difference in estimates of overall intelligence for either self or parents, but men rated themselves higher on spatial intelligence. This contradicted many previous findings in the area, which have shown that men rate their own overall intelligence and mathematical intelligence significantly higher than do women. Regressions indicated that estimates of verbal, logical, and spatial intelligences were the best predictors of estimates of overall intelligence, which is a consistent finding over many studies. Regressions also showed that participants' openness to experience and self-respect were good predictors of intelligence estimates. A comparison with a British sample showed that Russians gave higher mother estimates and were less likely to believe that IQ tests measure intelligence. Results were discussed in relation to the influence of gender role stereotypes on lay conceptions of intelligence across cultures.
Sensor Selection for Aircraft Engine Performance Estimation and Gas Path Fault Diagnostics
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Rinehart, Aidan W.
2015-01-01
This paper presents analytical techniques for aiding system designers in making aircraft engine health management sensor selection decisions. The presented techniques, which are based on linear estimation and probability theory, are tailored for gas turbine engine performance estimation and gas path fault diagnostics applications. They enable quantification of the performance estimation and diagnostic accuracy offered by different candidate sensor suites. For performance estimation, sensor selection metrics are presented for two types of estimators including a Kalman filter and a maximum a posteriori estimator. For each type of performance estimator, sensor selection is based on minimizing the theoretical sum of squared estimation errors in health parameters representing performance deterioration in the major rotating modules of the engine. For gas path fault diagnostics, the sensor selection metric is set up to maximize correct classification rate for a diagnostic strategy that performs fault classification by identifying the fault type that most closely matches the observed measurement signature in a weighted least squares sense. Results from the application of the sensor selection metrics to a linear engine model are presented and discussed. Given a baseline sensor suite and a candidate list of optional sensors, an exhaustive search is performed to determine the optimal sensor suites for performance estimation and fault diagnostics. For any given sensor suite, Monte Carlo simulation results are found to exhibit good agreement with theoretical predictions of estimation and diagnostic accuracies.
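The performance-estimation metric lends itself to a compact sketch. For a linear measurement model y = Hx + v, with prior covariance P0 on the health parameters and sensor-noise covariance R, the maximum a posteriori error covariance is P = (H' R^-1 H + P0^-1)^-1, and the sum of squared estimation errors is trace(P). The matrices below are random stand-ins, not the engine model from the paper.

```python
# Exhaustive sensor-suite search minimizing trace of the MAP error covariance.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
n_health = 4
H = rng.normal(size=(8, n_health))        # sensitivity of 8 sensors to health params
R = np.diag(rng.uniform(0.01, 0.1, 8))    # sensor noise covariance
P0 = np.eye(n_health)                     # prior covariance on health parameters

def ssee(suite):
    idx = list(suite)
    Hs, Rs = H[idx], R[np.ix_(idx, idx)]
    P = np.linalg.inv(Hs.T @ np.linalg.inv(Rs) @ Hs + np.linalg.inv(P0))
    return np.trace(P)                    # theoretical sum of squared errors

baseline = (0, 1, 2)
optional = [s for s in range(8) if s not in baseline]
best = min((baseline + extra for k in (1, 2)
            for extra in combinations(optional, k)), key=ssee)
print("best suite:", best, "SSEE:", round(ssee(best), 4))
```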
Tiao, J; Moore, L; Porgo, T V; Belcaid, A
2016-06-01
To assess whether the definition of an isolated hip fracture (IHF) used as an exclusion criterion influences the results of trauma center benchmarking. We conducted a multicenter retrospective cohort study with data from an integrated Canadian trauma system. The study population included all patients admitted between 1999 and 2010 to any of the 57 adult trauma centers. Seven definitions of IHF based on diagnostic codes, age, mechanism of injury, and secondary injuries, identified in a systematic review, were used. Trauma centers were benchmarked using risk-adjusted mortality estimates generated using the Trauma Risk Adjustment Model. The agreement between benchmarking results generated under different IHF definitions was evaluated with correlation coefficients on adjusted mortality estimates. Correlation coefficients >0.95 were considered to convey acceptable agreement. The study population consisted of 172,872 patients before exclusion of IHFs and between 128,094 and 139,588 patients after exclusion. Correlation coefficients between risk-adjusted mortality estimates generated in populations including and excluding IHFs varied between 0.86 and 0.90. Correlation coefficients of estimates generated under different definitions of IHF varied between 0.97 and 0.99, even when analyses were restricted to patients aged ≥65 years. Although the exclusion of patients with IHF has an influence on the results of trauma center benchmarking based on mortality, the definition of IHF in terms of diagnostic codes, age, mechanism of injury, and secondary injury has no significant impact on benchmarking results. Results suggest that there is no need to obtain formal consensus on the definition of IHF for benchmarking activities.
Gamble, John F; Nicolich, Mark J; Boffetta, Paolo
2012-08-01
A recent review concluded that the evidence from epidemiology studies was indeterminate and that additional studies were required to support the diesel exhaust-lung cancer hypothesis. This updated review includes seven recent studies. Two population-based studies concluded that significant exposure-response (E-R) trends between cumulative diesel exhaust and lung cancer were unlikely to be entirely explained by bias or confounding. Those studies have quality data on life-style risk factors, but do not allow definitive conclusions because of inconsistent E-R trends, qualitative exposure estimates and exposure misclassification (insufficient latency based on job title), and selection bias from low participation rates. Non-definitive results are consistent with the larger body of population studies. An NCI/NIOSH cohort mortality and nested case-control study of non-metal miners has some surrogate-based quantitative diesel exposure estimates (including highest exposure measured as respirable elemental carbon (REC) in the workplace) and smoking histories. The authors concluded that diesel exhaust may cause lung cancer. Nonetheless, the results are non-definitive because the conclusions are based on E-R patterns where high exposures were deleted to achieve significant results, where a posteriori adjustments were made to augment results, and where inappropriate adjustments were made for the "negative confounding" effects of smoking even though current smoking was not associated with diesel exposure and therefore could not be a confounder. Three cohort studies of bus drivers and truck drivers are in effect air pollution studies without estimates of diesel exhaust exposure and so are not sufficient for assessing the lung cancer-diesel exhaust hypothesis. Results from all occupational cohort studies with quantitative estimates of exposure have limitations, including weak and inconsistent E-R associations that could be explained by bias, confounding or chance, exposure misclassification, and often inadequate latency. In sum, the weight of evidence is considered inadequate to confirm the diesel-lung cancer hypothesis.
Keogh, Ruth H; Daniel, Rhian M; VanderWeele, Tyler J; Vansteelandt, Stijn
2018-05-01
Estimation of causal effects of time-varying exposures using longitudinal data is a common problem in epidemiology. When there are time-varying confounders, which may include past outcomes, affected by prior exposure, standard regression methods can lead to bias. Methods such as inverse probability weighted estimation of marginal structural models have been developed to address this problem. However, in this paper we show how standard regression methods can be used, even in the presence of time-dependent confounding, to estimate the total effect of an exposure on a subsequent outcome by controlling appropriately for prior exposures, outcomes, and time-varying covariates. We refer to the resulting estimation approach as sequential conditional mean models (SCMMs), which can be fitted using generalized estimating equations. We outline this approach and describe how including propensity score adjustment is advantageous. We compare the causal effects being estimated using SCMMs and marginal structural models, and we compare the two approaches using simulations. SCMMs enable more precise inferences, with greater robustness against model misspecification via propensity score adjustment, and easily accommodate continuous exposures and interactions. A new test for direct effects of past exposures on a subsequent outcome is described.
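Since the paper notes that SCMMs can be fitted with generalized estimating equations, a small simulated example using the GEE implementation in statsmodels may be useful. The data-generating values and variable names here are our own, not the paper's.

```python
# SCMM sketch: regress outcome on current exposure, conditioning on prior
# exposure, prior outcome, and a time-varying covariate, fitted via GEE.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
rows = []
for i in range(200):                          # 200 subjects, 5 visits each
    y, a = 0.0, 0.0
    for t in range(5):
        l = 0.5 * y + rng.normal()            # confounder affected by prior outcome
        a_new = float(0.8 * l + rng.normal() > 0)   # exposure depends on l
        y_new = 0.3 * a_new + 0.4 * l + 0.2 * y + rng.normal()
        rows.append(dict(id=i, a=a_new, a_prev=a, y_prev=y, l=l, y=y_new))
        y, a = y_new, a_new

df = pd.DataFrame(rows)
fit = sm.GEE.from_formula("y ~ a + a_prev + y_prev + l", groups="id", data=df,
                          family=sm.families.Gaussian(),
                          cov_struct=sm.cov_struct.Independence()).fit()
print(fit.params["a"])    # estimated effect of current exposure (true value 0.3)
```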
Alomari, Mahmoud A.; Shqair, Dana M.; Khabour, Omar F.; Alawneh, Khaldoon; Nazzal, Mahmoud I.; Keewan, Esraa F.
2012-01-01
Exercise testing is associated with barriers that prevent frequent use of cardiovascular (CV) endurance (CVE) measures. A recent nonexercise model (NM) is claimed to estimate CVE without exercise testing. This study examined the relationships of CVE, estimated using the NM, with measures of obesity, physical fitness (PF), blood glucose and lipids, and circulation in 188 asymptomatic young (18–40 years) adults. Estimated CVE correlated favorably with measures of PF (r = 0.4–0.5), including handgrip strength, distance in the 6-minute walk test, and shoulder-press and leg-extension strengths; obesity (r = 0.2–0.7), including % body fat, body water content, fat mass, muscle mass, BMI, waist and hip circumferences, and waist/hip ratio; circulation (r = 0.2–0.3), including blood pressures, blood flow, and vascular resistance; and blood profile (r = 0.2–0.5), including glucose, total cholesterol, LDL-C, HDL-C, and triglycerides. Additionally, differences (P < 0.05) in the examined measures were found between the high, average, and low estimated-CVE groups. The majority of these measures are CV disease risk factors and metabolic syndrome components. These results enhance the scientific value of the NM, which can thus be further used in clinical and nonclinical settings. PMID:22606068
Psychological impact of providing women with personalised 10-year breast cancer risk estimates.
French, David P; Southworth, Jake; Howell, Anthony; Harvie, Michelle; Stavrinos, Paula; Watterson, Donna; Sampson, Sarah; Evans, D Gareth; Donnelly, Louise S
2018-05-08
The Predicting Risk of Cancer at Screening (PROCAS) study estimated 10-year breast cancer risk for 53,596 women attending the NHS Breast Screening Programme. The present study, nested within the PROCAS study, aimed to assess the psychological impact of receiving breast cancer risk estimates based on: (a) the Tyrer-Cuzick (T-C) algorithm including breast density or (b) T-C including breast density plus single-nucleotide polymorphisms (SNPs), versus (c) comparison women awaiting results. A sample of 2138 women from the PROCAS study was stratified by testing groups: T-C only, T-C(+SNPs) and comparison women; and by 10-year risk estimates received: 'moderate' (5-7.99%), 'average' (2-4.99%) or 'below average' (<1.99%) risk. Postal questionnaires were returned by 765 (36%) women. Overall state anxiety and cancer worry were low, and similar for women in the T-C only and T-C(+SNPs) groups. Women in both the T-C only and T-C(+SNPs) groups showed lower state anxiety but slightly higher cancer worry than comparison women awaiting results. Risk information had no consistent effects on intentions to change behaviour. Most women were satisfied with the information provided. There was considerable variation in understanding. No major harms of providing women with 10-year breast cancer risk estimates were detected. Research to establish the feasibility of risk-stratified breast screening is warranted.
Estimating tree bole volume using artificial neural network models for four species in Turkey.
Ozçelik, Ramazan; Diamantopoulou, Maria J; Brooks, John R; Wiant, Harry V
2010-01-01
Tree bole volumes of 89 Scots pine (Pinus sylvestris L.), 96 Brutian pine (Pinus brutia Ten.), 107 Cilicica fir (Abies cilicica Carr.) and 67 Cedar of Lebanon (Cedrus libani A. Rich.) trees were estimated using Artificial Neural Network (ANN) models. Neural networks offer a number of advantages, including the ability to implicitly detect complex nonlinear relationships between input and output variables, which is very helpful in tree volume modeling. Two different neural network architectures were used, producing the back-propagation (BPANN) and the cascade-correlation (CCANN) artificial neural network models. In addition, tree bole volume estimates were compared to other established tree bole volume estimation techniques, including the centroid method, taper equations, and existing standard volume tables. An overview of the features of ANNs and traditional methods is presented, and the advantages and limitations of each are discussed. For validation purposes, actual volumes were determined by aggregating the volumes of measured short sections (average 1 meter) of the tree bole using Smalian's formula. The results reported in this research suggest that the selected cascade-correlation artificial neural network (CCANN) models are reliable for estimating the tree bole volume of the four examined tree species, since they gave unbiased results and were superior to almost all other methods in terms of error (%), expressed as the mean of the percentage errors. 2009 Elsevier Ltd. All rights reserved.
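For readers who want to reproduce the flavor of the BPANN approach, the sketch below trains a small back-propagation network to regress bole volume on diameter and height. The allometric generator and all constants are synthetic; the study's four species datasets are not used.

```python
# Toy back-propagation volume model: volume ~ f(dbh, height).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
dbh = rng.uniform(10, 60, 300)                 # diameter at breast height (cm)
h = rng.uniform(5, 30, 300)                    # total tree height (m)
vol = 6e-5 * dbh**1.9 * h**0.9 * rng.lognormal(0.0, 0.05, 300)  # toy allometry (m^3)

X = np.column_stack([dbh, h])
X_tr, X_te, y_tr, y_te = train_test_split(X, vol, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
net.fit(X_tr, y_tr)
print("R^2 on held-out trees:", round(net.score(X_te, y_te), 3))
```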
Belval, D.L.; Campbell, J.P.; Woodside, M.D.
1994-01-01
This report presents the results of a study by the U.S. Geological Survey, in cooperation with the Virginia Department of Environmental Quality--Division of Intergovernmental Coordination, to monitor and estimate loads of selected nutrients and suspended solids discharged to Chesapeake Bay from two major tributaries in Virginia. From July 1988 through June 1990, monitoring consisted of collecting depth-integrated, cross-sectional samples from the James and Rappahannock Rivers during storm-flow conditions and at scheduled intervals. Water-quality constituents that were monitored included total suspended solids (residue, total at 105 degrees Celsius), dissolved nitrite plus nitrate, dissolved ammonia, total Kjeldahl nitrogen (ammonia plus organic), total nitrogen, total phosphorus, dissolved orthophosphorus, total organic carbon, and dissolved silica. Daily mean load estimates of each constituent were computed by month, using a seven-parameter log-linear-regression model that uses variables of time, discharge, and seasonality. Water-quality data and constituent-load estimates are included in the report in tabular and graphic form. The data and load estimates provided in this report will be used to calibrate the computer modeling efforts of the Chesapeake Bay region, evaluate the water quality of the Bay and the major effects on the water quality, and assess the results of best-management practices in Virginia.
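The seven-parameter log-linear load model referred to above is, in generic form (the exact USGS formulation may differ in centering and bias-correction details), ln L = b0 + b1 lnQ + b2 (lnQ)^2 + b3 sin(2πT) + b4 cos(2πT) + b5 T + b6 T^2, with Q the discharge and T decimal time. A least-squares sketch on synthetic data:

```python
# Fit a 7-parameter log-linear rating-curve model by ordinary least squares.
import numpy as np

rng = np.random.default_rng(4)
T = rng.uniform(0, 2, 400)                      # decimal time (years)
lnQ = rng.normal(size=400)                      # centered log discharge
lnL = (1.0 + 0.9 * lnQ + 0.1 * lnQ**2 + 0.3 * np.sin(2 * np.pi * T)
       - 0.2 * np.cos(2 * np.pi * T) + 0.05 * T
       + rng.normal(scale=0.2, size=400))       # synthetic log loads

X = np.column_stack([np.ones_like(T), lnQ, lnQ**2,
                     np.sin(2 * np.pi * T), np.cos(2 * np.pi * T), T, T**2])
b, *_ = np.linalg.lstsq(X, lnL, rcond=None)
print("fitted coefficients:", b.round(3))
# Note: back-transforming exp(lnL) to daily loads requires a retransformation
# bias correction (e.g., a smearing estimator).
```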
The Problem With Estimating Public Health Spending.
Leider, Jonathon P
2016-01-01
Accurate information on how much the United States spends on public health is critical. These estimates affect planning efforts; reflect the value society places on the public health enterprise; and allow for the demonstration of cost-effectiveness of programs, policies, and services aimed at increasing population health. Yet, at present, there are a limited number of sources of systematic public health finance data. Each of these sources is collected in different ways, for different reasons, and so yields strikingly different results. This article aims to compare and contrast all 4 current national public health finance data sets, including data compiled by Trust for America's Health, the Association of State and Territorial Health Officials (ASTHO), the National Association of County and City Health Officials (NACCHO), and the Census, which underlie the oft-cited National Health Expenditure Account estimates of public health activity. In FY2008, ASTHO estimates that state health agencies spent $24 billion ($94 per capita on average, median $79), while the Census estimated all state governmental agencies including state health agencies spent $60 billion on public health ($200 per capita on average, median $166). Census public health data suggest that local governments spent an average of $87 per capita (median $57), whereas NACCHO estimates that reporting local health departments (LHDs) spent $64 per capita on average (median $36) in FY2008. We conclude that these estimates differ because the various organizations collect data using different means, data definitions, and inclusion/exclusion criteria--most notably around whether to include spending by all agencies versus a state/local health department, and whether behavioral health, disability, and some clinical care spending are included in estimates. Alongside deeper analysis of presently underutilized Census administrative data, we see harmonization efforts and the creation of a standardized expenditure reporting system as a way to meaningfully systematize reporting of public health spending and revenue.
Temporary Losses of Highway Capacity and Impacts on Performance: Phase 2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chin, S.M.
2004-11-10
Traffic congestion and its impacts significantly affect the nation's economic performance and the public's quality of life. In most urban areas, travel demand routinely exceeds highway capacity during peak periods. In addition, events such as crashes, vehicle breakdowns, work zones, adverse weather, railroad crossings, large trucks loading/unloading in urban areas, and other factors such as toll collection facilities and sub-optimal signal timing cause temporary capacity losses, often worsening the conditions on already congested highway networks. The impacts of these temporary capacity losses include delay, reduced mobility, and reduced reliability of the highway system. They can also cause drivers to re-route or reschedule trips. Such information is vital to formulating sound public policies for the highway infrastructure and its operation. In response to this need, Oak Ridge National Laboratory, sponsored by the Federal Highway Administration (FHWA), made an initial attempt to provide nationwide estimates of the capacity losses and delay caused by temporary capacity-reducing events (Chin et al. 2002). This study, called the Temporary Loss of Capacity (TLC) study, estimated capacity loss and delay on freeways and principal arterials resulting from fatal and non-fatal crashes, vehicle breakdowns, and adverse weather, including snow, ice, and fog. In addition, it estimated capacity loss and delay caused by sub-optimal signal timing at intersections on principal arterials. It also included rough estimates of capacity loss and delay on Interstates due to highway construction and maintenance work zones. Capacity loss and delay were estimated for calendar year 1999, except for work zone estimates, which were estimated for May 2001 to May 2002 due to data availability limitations. Prior to the first phase of this study, which was completed in May of 2002, no nationwide estimates of temporary losses of highway capacity by type of capacity-reducing event had been made. This report describes the second phase of the TLC study (TLC2). TLC2 improves upon the first study by expanding the scope to include delays from rain, toll collection facilities, railroad crossings, and commercial truck pickup and delivery (PUD) activities in urban areas. It includes estimates of work zone capacity loss and delay for all freeways and principal arterials, rather than for Interstates only. It also includes improved estimates of delays caused by fog, snow, and ice, which are based on data not available during the initial phase of the study. Finally, computational errors involving crash and breakdown delay in the original TLC report are corrected.
Bayesian parameter estimation for chiral effective field theory
NASA Astrophysics Data System (ADS)
Wesolowski, Sarah; Furnstahl, Richard; Phillips, Daniel; Klco, Natalie
2016-09-01
The low-energy constants (LECs) of a chiral effective field theory (EFT) interaction in the two-body sector are fit to observable data using a Bayesian parameter estimation framework. By using Bayesian prior probability distributions (pdfs), we quantify relevant physical expectations such as LEC naturalness and include them in the parameter estimation procedure. The final result is a posterior pdf for the LECs, which can be used to propagate uncertainty resulting from the fit to data to the final observable predictions. The posterior pdf also allows an empirical test of operator redundancy and other features of the potential. We compare results of our framework with other fitting procedures, interpreting the underlying assumptions in Bayesian probabilistic language. We also compare results from fitting all partial waves of the interaction simultaneously to cross-section data with results from fitting to extracted phase shifts, appropriately accounting for correlations in the data. Supported in part by the NSF and DOE.
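The role of a naturalness prior is easy to show in one dimension. Below, a single toy "LEC" c with prior c ~ N(0, 1) is constrained by synthetic linear pseudo-data and the posterior is evaluated on a grid; the real analysis is multi-dimensional and uses actual scattering observables.

```python
# Grid posterior for one LEC with a naturalness prior c ~ N(0, 1).
import numpy as np

rng = np.random.default_rng(5)
x = np.linspace(0, 1, 25)
y = 0.7 * x + 0.05 * rng.normal(size=x.size)     # pseudo-data, true c = 0.7

c = np.linspace(-3, 3, 2001)
log_prior = -0.5 * c**2                          # naturalness prior
resid = y[None, :] - c[:, None] * x[None, :]
log_like = -0.5 * np.sum(resid**2, axis=1) / 0.05**2
post = np.exp(log_prior + log_like - np.max(log_prior + log_like))
post /= np.trapz(post, c)                        # normalize on the grid

mean = np.trapz(c * post, c)
sd = np.sqrt(np.trapz((c - mean)**2 * post, c))
print(f"posterior c = {mean:.3f} +/- {sd:.3f}")
```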
Water-vapor pressure control in a volume
NASA Technical Reports Server (NTRS)
Scialdone, J. J.
1978-01-01
The variation with time of the partial pressure of water in a volume that has openings to the outside environment and includes vapor sources was evaluated as a function of the purging flow and its vapor content. Experimental tests to estimate the diffusion of ambient humidity through openings and to validate calculated results were included. The purging flows required to produce and maintain a certain humidity in shipping containers, storage rooms, and clean rooms can be estimated with the relationship developed here. These purging flows are necessary to prevent the contamination, degradation, and other effects of water vapor on the systems inside these volumes.
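A simplified well-mixed-volume balance reproduces the qualitative behavior described: with an internal vapor source S, a purge flow Q carrying vapor density c_in, and a diffusion conductance k through the openings to ambient density c_amb, V dc/dt = S + Q(c_in - c) + k(c_amb - c). All numbers below are invented; the report's actual relationship should be consulted for design use.

```python
# Purged-volume water-vapor balance, integrated toward steady state.
import numpy as np
from scipy.integrate import odeint

V, Q, k = 2.0, 0.01, 0.001              # volume (m^3); purge, diffusion flows (m^3/s)
S, c_in, c_amb = 1e-6, 0.5e-3, 10e-3    # source (kg/s); purge, ambient vapor (kg/m^3)

dcdt = lambda c, t: (S + Q * (c_in - c) + k * (c_amb - c)) / V
t = np.linspace(0, 3600, 200)
c = odeint(dcdt, 8e-3, t)[:, 0]         # start near ambient humidity

c_ss = (S + Q * c_in + k * c_amb) / (Q + k)   # analytic steady state
print("steady state (kg/m^3):", c_ss, " after 1 h:", c[-1])
```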
Hierarchical information fusion for global displacement estimation in microsensor motion capture.
Meng, Xiaoli; Zhang, Zhi-Qiang; Wu, Jian-Kang; Wong, Wai-Choong
2013-07-01
This paper presents a novel hierarchical information fusion algorithm to obtain human global displacement for different gait patterns, including walking, running, and hopping based on seven body-worn inertial and magnetic measurement units. In the first-level sensor fusion, the orientation for each segment is achieved by a complementary Kalman filter (CKF) which compensates for the orientation error of the inertial navigation system solution through its error state vector. For each foot segment, the displacement is also estimated by the CKF, and zero velocity update is included for the drift reduction in foot displacement estimation. Based on the segment orientations and left/right foot locations, two global displacement estimates can be acquired from left/right lower limb separately using a linked biomechanical model. In the second-level geometric fusion, another Kalman filter is deployed to compensate for the difference between the two estimates from the sensor fusion and get more accurate overall global displacement estimation. The updated global displacement will be transmitted to left/right foot based on the human lower biomechanical model to restrict the drifts in both feet displacements. The experimental results have shown that our proposed method can accurately estimate human locomotion for the three different gait patterns with regard to the optical motion tracker.
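The second-level geometric fusion step is, at its core, a minimum-variance combination of two displacement estimates. A static sketch with invented numbers (the paper's filter also handles dynamics and feeds corrections back to the feet):

```python
# Inverse-covariance (Kalman-style) fusion of two displacement estimates.
import numpy as np

x1 = np.array([1.02, 0.48]); P1 = np.diag([0.04, 0.09])   # from left lower limb
x2 = np.array([0.95, 0.55]); P2 = np.diag([0.09, 0.04])   # from right lower limb

W1, W2 = np.linalg.inv(P1), np.linalg.inv(P2)
P = np.linalg.inv(W1 + W2)                 # fused covariance
x = P @ (W1 @ x1 + W2 @ x2)                # fused global displacement
print("fused estimate:", x.round(3), "covariance diag:", np.diag(P).round(3))
```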
Nelson, Lauren; Valle, Jhaqueline; King, Galatea; Mills, Paul K; Richardson, Maxwell J; Roberts, Eric M; Smith, Daniel; English, Paul
2017-05-01
To estimate the proportion of cases and costs of the most common cancers among children aged 0 to 14 years (leukemia, lymphoma, and brain or central nervous system tumors) that were attributable to preventable environmental pollution in California in 2013. We conducted a literature review to identify preventable environmental hazards associated with childhood cancer. We combined risk estimates with California-specific exposure prevalence estimates to calculate hazard-specific environmental attributable fractions (EAFs). We combined hazard-specific EAFs to estimate EAFs for each cancer and calculated an overall EAF. Estimated economic costs included annual (indirect and direct medical) and lifetime costs. Hazards associated with childhood cancer risks included tobacco smoke, residential exposures, and parental occupational exposures. Estimated EAFs for leukemia, lymphoma, and brain or central nervous system cancer were 21.3% (range = 11.7%-30.9%), 16.1% (range = 15.0%-17.2%), and 2.0% (range = 1.7%-2.2%), respectively. The combined EAF was 15.1% (range = 9.4%-20.7%), representing $18.6 million (range = $11.6 to $25.5 million) in annual costs and $31 million in lifetime costs. Reducing environmental hazards and exposures in California could substantially reduce the human burden of childhood cancer and result in significant annual and lifetime savings.
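The attributable-fraction algebra behind such estimates is standard and compact. For exposure prevalence p and relative risk RR, the hazard-specific EAF is p(RR - 1)/(p(RR - 1) + 1), and independent hazards combine as 1 - Π(1 - EAF_i). The prevalences and risks below are hypothetical, not the study's inputs.

```python
# Hazard-specific and combined environmental attributable fractions.
def eaf(p, rr):
    return p * (rr - 1.0) / (p * (rr - 1.0) + 1.0)

def combined(fractions):
    out = 1.0
    for f in fractions:
        out *= 1.0 - f
    return 1.0 - out

# hypothetical hazards: tobacco smoke, a residential exposure, a parental
# occupational exposure
fs = [eaf(0.25, 1.3), eaf(0.10, 1.5), eaf(0.05, 1.4)]
print([round(f, 3) for f in fs], "combined:", round(combined(fs), 3))
```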
Schoenberg, Mike R; Lange, Rael T; Saklofske, Donald H; Suarez, Mariann; Brickell, Tracey A
2008-12-01
Determination of neuropsychological impairment involves contrasting obtained performances with a comparison standard, which is often an estimate of premorbid IQ. M. R. Schoenberg, R. T. Lange, T. A. Brickell, and D. H. Saklofske (2007) proposed the Child Premorbid Intelligence Estimate (CPIE) to predict premorbid Full Scale IQ (FSIQ) using the Wechsler Intelligence Scale for Children-4th Edition (WISC-IV; Wechsler, 2003). The CPIE includes 12 algorithms to predict FSIQ, 1 using demographic variables and 11 algorithms combining WISC-IV subtest raw scores with demographic variables. The CPIE was applied to a sample of children with acquired traumatic brain injury (TBI sample; n = 40) and a healthy demographically matched sample (n = 40). Paired-samples t tests found estimated premorbid FSIQ differed from obtained FSIQ when applied to the TBI sample (ps
Using the entire history in the analysis of nested case-cohort samples.
Rivera, C L; Lumley, T
2016-08-15
Countermatching designs can provide more efficient estimates than simple matching or case-cohort designs in certain situations, such as when good surrogate variables for an exposure of interest are available. We extend pseudolikelihood estimation for the Cox model under countermatching designs to models where time-varying covariates are considered. We also implement pseudolikelihood with calibrated weights to improve efficiency in nested case-control designs in the presence of time-varying variables. A simulation study is carried out, which considers four different scenarios: a binary time-dependent variable, a continuous time-dependent variable, and each of these cases with interactions included. Simulation results show that pseudolikelihood with calibrated weights under countermatching offers large gains in efficiency if compared to case-cohort. Pseudolikelihood with calibrated weights yielded more efficient estimators than pseudolikelihood estimators. Additionally, estimators were more efficient under countermatching than under case-cohort for the situations considered. The methods are illustrated using the Colorado Plateau uranium miners cohort. Furthermore, we present a general method to generate survival times with time-varying covariates. Copyright © 2016 John Wiley & Sons, Ltd.
Scope Complexity Options Risks Excursions (SCORE) Factor Mathematical Description.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gearhart, Jared Lee; Samberson, Jonell Nicole; Shettigar, Subhasini
The purpose of the Scope, Complexity, Options, Risks, Excursions (SCORE) model is to estimate the relative complexity of design variants of future warhead options, resulting in scores. SCORE factors extend this capability by providing estimates of complexity relative to a base system (i.e., all design options are normalized to one weapon system). First, a clearly defined set of scope elements for a warhead option is established. The complexity of each scope element is estimated by Subject Matter Experts (SMEs), including a level of uncertainty, relative to a specific reference system. When determining factors, complexity estimates for a scope element can be directly tied to the base system or chained together via comparable scope elements in a string of reference systems that ends with the base system. The SCORE analysis process is a growing multi-organizational Nuclear Security Enterprise (NSE) effort, under the management of the NA-12 led Enterprise Modeling and Analysis Consortium (EMAC). Historically, it has provided the data elicitation, integration, and computation needed to support the out-year Life Extension Program (LEP) cost estimates included in the Stockpile Stewardship Management Plan (SSMP).
Early Teen Marriage and Future Poverty
DAHL, GORDON B.
2010-01-01
Both early teen marriage and dropping out of high school have historically been associated with a variety of negative outcomes, including higher poverty rates throughout life. Are these negative outcomes due to preexisting differences, or do they represent the causal effect of marriage and schooling choices? To better understand the true personal and societal consequences, in this article, I use an instrumental variables (IV) approach that takes advantage of variation in state laws regulating the age at which individuals are allowed to marry, drop out of school, and begin work. The baseline IV estimate indicates that a woman who marries young is 31 percentage points more likely to live in poverty when she is older. Similarly, a woman who drops out of school is 11 percentage points more likely to be poor. The results are robust to a variety of alternative specifications and estimation methods, including limited information maximum likelihood (LIML) estimation and a control function approach. While grouped ordinary least squares (OLS) estimates for the early teen marriage variable are also large, OLS estimates based on individual-level data are small, consistent with a large amount of measurement error. PMID:20879684
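The IV logic is easy to demonstrate with simulated data: an instrument shifts the endogenous choice but affects the outcome only through it, so two-stage least squares recovers the causal effect that OLS misses. The coefficients below are invented for illustration, not estimates from the article.

```python
# Two-stage least squares vs. OLS under an unobserved confounder.
import numpy as np

rng = np.random.default_rng(6)
n = 5000
z = rng.binomial(1, 0.5, n).astype(float)      # instrument (e.g., a state-law shift)
u = rng.normal(size=n)                         # unobserved confounder
x = 0.4 * z + 0.8 * u + rng.normal(size=n)     # endogenous regressor
y = 1.0 * x - 0.8 * u + rng.normal(size=n)     # true causal effect = 1.0

Z = np.column_stack([np.ones(n), z])
x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]            # first stage
X_hat = np.column_stack([np.ones(n), x_hat])
X = np.column_stack([np.ones(n), x])
print("IV: ", np.linalg.lstsq(X_hat, y, rcond=None)[0][1])  # near 1.0
print("OLS:", np.linalg.lstsq(X, y, rcond=None)[0][1])      # biased downward
```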
Clements, Michelle N; Donnelly, Christl A; Fenwick, Alan; Kabatereine, Narcis B; Knowles, Sarah C L; Meité, Aboulaye; N'Goran, Eliézer K; Nalule, Yolisa; Nogaro, Sarah; Phillips, Anna E; Tukahebwa, Edridah Muheki; Fleming, Fiona M
2017-12-01
The development of new diagnostics is an important tool in the fight against disease. Latent Class Analysis (LCA) is used to estimate the sensitivity and specificity of tests in the absence of a gold standard. The main field diagnostic for Schistosoma mansoni infection, Kato-Katz (KK), is not very sensitive at low infection intensities. A point-of-care circulating cathodic antigen (CCA) test has been shown to be more sensitive than KK. However, CCA can return an ambiguous 'trace' result between 'positive' and 'negative', and much debate has focused on the interpretation of trace results. We show how LCA can be extended to include ambiguous trace results and analyse S. mansoni studies from both Côte d'Ivoire (CdI) and Uganda. We compare the diagnostic performance of KK and CCA and the observed results by each test to the estimated infection prevalence in the population. Prevalence by KK was higher in CdI (13.4%) than in Uganda (6.1%), but prevalence by CCA was similar between countries, both when trace was assumed to be negative (CCAtn: 11.7% in CdI and 9.7% in Uganda) and positive (CCAtp: 20.1% in CdI and 22.5% in Uganda). The estimated sensitivity of CCA was more consistent between countries than the estimated sensitivity of KK, and estimated infection prevalence did not significantly differ between CdI (20.5%) and Uganda (19.1%). The prevalence by CCA with trace as positive did not differ significantly from estimates of infection prevalence in either country, whereas both KK and CCA with trace as negative significantly underestimated infection prevalence in both countries. Incorporation of ambiguous results into an LCA enables the effect of different treatment thresholds to be directly assessed and is applicable in many fields. Our results showed that CCA with trace as positive most accurately estimated infection prevalence.
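Mechanically, extending LCA to a three-outcome test means giving the CCA arm a multinomial emission distribution in the EM fit. The sketch below shows those mechanics on invented KK-by-CCA counts; note that with a single observation per child this toy model is not fully identifiable (real analyses rely on repeated tests or additional structure), so it illustrates the algorithm only.

```python
# EM for a 2-class latent model: binary KK and 3-category CCA (neg/trace/pos).
import numpy as np

counts = np.array([[620, 110, 90],        # KK- x CCA {neg, trace, pos}
                   [ 20,  25, 135]])      # KK+ x CCA {neg, trace, pos}

pi, se_kk, sp_kk = 0.3, 0.7, 0.95         # starting values
c_inf = np.array([0.2, 0.2, 0.6])         # CCA outcome probs given infected
c_un  = np.array([0.7, 0.2, 0.1])         # ... given uninfected
N = counts.sum()

for _ in range(500):
    kk_inf = np.array([1 - se_kk, se_kk])           # P(KK | infected)
    kk_un  = np.array([sp_kk, 1 - sp_kk])           # P(KK | uninfected)
    num = pi * np.outer(kk_inf, c_inf)
    w = num / (num + (1 - pi) * np.outer(kk_un, c_un))   # E-step: P(inf | cell)
    n_inf = (counts * w).sum()
    pi = n_inf / N                                  # M-step updates
    se_kk = (counts * w)[1].sum() / n_inf
    sp_kk = (counts * (1 - w))[0].sum() / (N - n_inf)
    c_inf = (counts * w).sum(axis=0) / n_inf
    c_un  = (counts * (1 - w)).sum(axis=0) / (N - n_inf)

print(f"prevalence={pi:.3f}  KK sensitivity={se_kk:.3f}  P(CCA|inf)={c_inf.round(3)}")
```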
The economic costs of alcohol consumption in Thailand, 2006
2010-01-01
Background There is evidence that the adverse consequences of alcohol impose a substantial economic burden on societies worldwide. Given the lack of generalizability of study results across different settings, many attempts have been made to estimate the economic costs of alcohol for various settings; however, these have mostly been confined to industrialized countries. To our knowledge, there are a very limited number of well-designed studies which estimate the economic costs of alcohol consumption in developing countries, including Thailand. Therefore, this study aims to estimate these economic costs, in Thailand, 2006. Methods This is a prevalence-based, cost-of-illness study. The estimated costs in this study included both direct and indirect costs. Direct costs included health care costs, costs of law enforcement, and costs of property damage due to road-traffic accidents. Indirect costs included costs of productivity loss due to premature mortality, and costs of reduced productivity due to absenteeism and presenteeism (reduced on-the-job productivity). Results The total economic cost of alcohol consumption in Thailand in 2006 was estimated at 156,105.4 million baht (9,627 million US$ PPP) or about 1.99% of the total Gross Domestic Product (GDP). Indirect costs outweigh direct costs, representing 96% of the total cost. The largest cost attributable to alcohol consumption is that of productivity loss due to premature mortality (104,128 million baht/6,422 million US$ PPP), followed by cost of productivity loss due to reduced productivity (45,464.6 million baht/2,804 million US$ PPP), health care cost (5,491.2 million baht/339 million US$ PPP), cost of property damage as a result of road traffic accidents (779.4 million baht/48 million US$ PPP), and cost of law enforcement (242.4 million baht/15 million US$ PPP), respectively. The results from the sensitivity analysis revealed that the cost ranges from 115,160.4 million baht to 214,053.0 million baht (7,102.1 - 13,201 million US$ PPP) depending on the methods and assumptions employed. Conclusions Alcohol imposes a substantial economic burden on Thai society, and according to these findings, the Thai government needs to pay significantly more attention to implementing more effective alcohol policies/interventions in order to reduce the negative consequences associated with alcohol. PMID:20534112
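The headline total can be cross-checked from the reported components (values in millions of baht, as given above); a quick verification:

    # Cost components attributable to alcohol in Thailand, 2006 (million baht).
    components = {
        "productivity loss, premature mortality": 104128.0,
        "productivity loss, reduced productivity": 45464.6,
        "health care": 5491.2,
        "property damage (road traffic)": 779.4,
        "law enforcement": 242.4,
    }
    total = sum(components.values())
    print(total)  # 156105.6, matching the reported 156,105.4 up to rounding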
The economic consequences of neurosurgical disease in low- and middle-income countries.
Rudolfson, Niclas; Dewan, Michael C; Park, Kee B; Shrime, Mark G; Meara, John G; Alkire, Blake C
2018-05-18
OBJECTIVE The objective of this study was to estimate the economic consequences of neurosurgical disease in low- and middle-income countries (LMICs). METHODS The authors estimated gross domestic product (GDP) losses and the broader welfare losses attributable to 5 neurosurgical disease categories in LMICs using two distinct economic models. The value of lost output (VLO) model projects annual GDP losses due to neurosurgical disease during 2015-2030, and is based on the WHO's "Projecting the Economic Cost of Ill-health" tool. The value of lost economic welfare (VLW) model, which is based on the value of a statistical life and includes nonmarket losses such as the inherent value placed on good health, estimates total welfare losses resulting from neurosurgical disease in 2015 alone. RESULTS The VLO model estimates the selected neurosurgical diseases will result in $4.4 trillion (2013 US dollars, purchasing power parity) in GDP losses during 2015-2030 in the 90 included LMICs. Economic losses are projected to disproportionately affect low- and lower-middle-income countries, risking up to a 0.6% and 0.54% loss of GDP, respectively, in 2030. The VLW model evaluated 127 LMICs, and estimates that these countries experienced $3 trillion (2013 US dollars, purchasing power parity) in economic welfare losses in 2015. Regardless of the model used, the majority of the losses can be attributed to stroke and traumatic brain injury. CONCLUSIONS The economic impact of neurosurgical diseases in LMICs is significant. The magnitude of economic losses due to neurosurgical diseases in LMICs provides further motivation beyond already compelling humanitarian reasons for action.
Tug fleet and ground operations schedules and controls. Volume 3: Program cost estimates
NASA Technical Reports Server (NTRS)
1975-01-01
Cost data for the tug DDT&E and operations phases are presented. Option 6 is the recommended option selected from seven options considered and was used as the basis for ground processing estimates. Option 6 provides for processing the tug in a factory-clean environment in the low bay area of the VAB, with subsequent cleaning to visibly clean. The basis and results of the trade study used to select the Option 6 processing plan are included. Cost estimating methodology, a work breakdown structure (WBS), and a dictionary of WBS definitions are also provided.
Image interpolation by adaptive 2-D autoregressive modeling and soft-decision estimation.
Zhang, Xiangjun; Wu, Xiaolin
2008-06-01
The challenge of image interpolation is to preserve spatial details. We propose a soft-decision interpolation technique that estimates missing pixels in groups rather than one at a time. The new technique learns and adapts to varying scene structures using a 2-D piecewise autoregressive model. The model parameters are estimated in a moving window in the input low-resolution image. The pixel structure dictated by the learnt model is enforced by the soft-decision estimation process onto a block of pixels, including both observed and estimated. The result is equivalent to that of a high-order adaptive nonseparable 2-D interpolation filter. This new image interpolation approach preserves spatial coherence of interpolated images better than the existing methods, and it produces the best results so far over a wide range of scenes in both PSNR measure and subjective visual quality. Edges and textures are well preserved, and common interpolation artifacts (blurring, ringing, jaggies, zippering, etc.) are greatly reduced.
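A heavily simplified fragment of the idea, with hypothetical data: fit the four diagonal autoregression coefficients by least squares in a local window of the low-resolution image, then predict a missing high-resolution pixel from its four diagonal neighbours. The published method additionally estimates whole blocks of pixels jointly (the soft-decision step), which this sketch omits.

    import numpy as np

    def fit_diagonal_ar(window):
        # Least-squares fit of a 4-tap diagonal autoregression in a local window:
        # each interior pixel is regressed on its four diagonal neighbours.
        rows, rhs = [], []
        for i in range(1, window.shape[0] - 1):
            for j in range(1, window.shape[1] - 1):
                rows.append([window[i-1, j-1], window[i-1, j+1],
                             window[i+1, j-1], window[i+1, j+1]])
                rhs.append(window[i, j])
        coeffs, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
        return coeffs

    # Hypothetical low-resolution patch; the learnt coefficients then predict a
    # missing high-resolution pixel from its four observed diagonal neighbours.
    rng = np.random.default_rng(0)
    patch = rng.random((8, 8))
    a = fit_diagonal_ar(patch)
    neighbours = np.array([patch[3, 3], patch[3, 4], patch[4, 3], patch[4, 4]])
    print(a, float(a @ neighbours))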
An Anisotropic A posteriori Error Estimator for CFD
NASA Astrophysics Data System (ADS)
Feijóo, Raúl A.; Padra, Claudio; Quintana, Fernando
In this article, a robust anisotropic adaptive algorithm is presented, to solve compressible-flow equations using a stabilized CFD solver and automatic mesh generators. The association includes a mesh generator, a flow solver, and an a posteriori error-estimator code. The estimator was selected among several choices available (Almeida et al. (2000). Comput. Methods Appl. Mech. Engng, 182, 379-400; Borges et al. (1998). "Computational mechanics: new trends and applications". Proceedings of the 4th World Congress on Computational Mechanics, Bs.As., Argentina), giving a powerful computational tool. The main aim is to capture solution discontinuities, in this case shocks, using the least amount of computational resources, i.e. elements, compatible with a solution of good quality. This leads to high aspect-ratio elements (stretching). To achieve this, a directional error estimator was specifically selected. The numerical results show good behavior of the error estimator, resulting in strongly adapted meshes in a few steps, typically three or four iterations, enough to capture shocks using a moderate and well-distributed number of elements.
Using GIS-based methods and lidar data to estimate rooftop solar technical potential in US cities
Margolis, Robert; Gagnon, Pieter; Melius, Jennifer; ...
2017-07-06
Here, we estimate the technical potential of rooftop solar photovoltaics (PV) for select US cities by combining light detection and ranging (lidar) data, a validated analytical method for determining rooftop PV suitability employing geographic information systems, and modeling of PV electricity generation. We find that rooftop PV's ability to meet estimated city electricity consumption varies widely - from meeting 16% of annual consumption (in Washington, DC) to meeting 88% (in Mission Viejo, CA). Important drivers include average rooftop suitability, household footprint/per-capita roof space, the quality of the solar resource, and the city's estimated electricity consumption. In addition to city-wide results, we also estimate the ability of aggregations of households to offset their electricity consumption with PV. In a companion article, we will use statistical modeling to extend our results and estimate national rooftop PV technical potential. In addition, our publicly available data and methods may help policy makers, utilities, researchers, and others perform customized analyses to meet their specific needs.
NASA Technical Reports Server (NTRS)
Davis, John H.
1993-01-01
Lunar spherical harmonic gravity coefficients are estimated from simulated observations of a near-circular low altitude polar orbiter disturbed by lunar mascons. Lunar gravity sensing missions using earth-based nearside observations with and without satellite-based far-side observations are simulated and least squares maximum likelihood estimates are developed for spherical harmonic expansion fit models. Simulations and parameter estimations are performed by a modified version of the Smithsonian Astrophysical Observatory's Planetary Ephemeris Program. Two different lunar spacecraft mission phases are simulated to evaluate the estimated fit models. Results for predicting state covariances one orbit ahead are presented along with the state errors resulting from the mismodeled gravity field. The position errors from planning a lunar landing maneuver with a mismodeled gravity field are also presented. These simulations clearly demonstrate the need to include observations of satellite motion over the far side in estimating the lunar gravity field. The simulations also illustrate that the eighth degree and order expansions used in the simulated fits were unable to adequately model lunar mascons.
Estimated home ranges can misrepresent habitat relationships on patchy landscapes
Mitchell, M.S.; Powell, R.A.
2008-01-01
Home ranges of animals are generally structured by the selective use of resource-bearing patches that comprise habitat. Based on this concept, home ranges of animals estimated from location data are commonly used to infer habitat relationships. Because home ranges estimated from animal locations are largely continuous in space, the resource-bearing patches selected by an animal from a fragmented distribution of patches would be difficult to discern; unselected patches included in the home range estimate would bias an understanding of important habitat relationships. To evaluate potential for this bias, we generated simulated home ranges based on optimal selection of resource-bearing patches across a series of simulated resource distributions that varied in the spatial continuity of resources. For simulated home ranges where selected patches were spatially disjunct, we included interstitial, unselected cells most likely to be traveled by an animal moving among selected patches. We compared characteristics of the simulated home ranges with and without interstitial patches to evaluate how insights derived from field estimates can differ from actual characteristics of home ranges, depending on patchiness of landscapes. Our results showed that contiguous home range estimates could lead to misleading insights on the quality, size, resource content, and efficiency of home ranges, proportional to the spatial discontinuity of resource-bearing patches. We conclude the potential bias of including unselected, largely irrelevant patches in the field estimates of home ranges of animals can be high, particularly for home range estimators that assume uniform use of space within home range boundaries. Thus, inferences about the habitat relationships that ultimately define an animal's home range can be misleading where animals occupy landscapes with patchily distributed resources.
43 CFR 418.28 - Conditions of delivery.
Code of Federal Regulations, 2010 CFR
2010-10-01
... particulars including the known or estimated location and amounts; (3) The amount will not be included as a valid headgate delivery for purposes of computing the Project efficiency and resultant incentive credit... treated directly as a debit to Lahontan storage in the same manner as an efficiency debit. (b) District...
Handling Correlations between Covariates and Random Slopes in Multilevel Models
ERIC Educational Resources Information Center
Bates, Michael David; Castellano, Katherine E.; Rabe-Hesketh, Sophia; Skrondal, Anders
2014-01-01
This article discusses estimation of multilevel/hierarchical linear models that include cluster-level random intercepts and random slopes. Viewing the models as structural, the random intercepts and slopes represent the effects of omitted cluster-level covariates that may be correlated with included covariates. The resulting correlations between…
Rodd, Jennifer M; Vitello, Sylvia; Woollams, Anna M; Adank, Patti
2015-02-01
We conducted an Activation Likelihood Estimation (ALE) meta-analysis to identify brain regions that are recruited by linguistic stimuli requiring relatively demanding semantic or syntactic processing. We included 54 functional MRI studies that explicitly varied the semantic or syntactic processing load, while holding constant demands on earlier stages of processing. We included studies that introduced a syntactic/semantic ambiguity or anomaly, used a priming manipulation that specifically reduced the load on semantic/syntactic processing, or varied the level of syntactic complexity. The results confirmed the critical role of the posterior left Inferior Frontal Gyrus (LIFG) in semantic and syntactic processing. These results challenge models of sentence comprehension highlighting the role of the anterior LIFG for semantic processing. In addition, the results emphasise the involvement of the posterior (but not anterior) temporal lobe in both semantic and syntactic processing. Crown Copyright © 2014. Published by Elsevier Inc. All rights reserved.
An Estimate of Avian Mortality at Communication Towers in the United States and Canada
Longcore, Travis; Rich, Catherine; Mineau, Pierre; MacDonald, Beau; Bert, Daniel G.; Sullivan, Lauren M.; Mutrie, Erin; Gauthreaux, Sidney A.; Avery, Michael L.; Crawford, Robert L.; Manville, Albert M.; Travis, Emilie R.; Drake, David
2012-01-01
Avian mortality at communication towers in the continental United States and Canada is an issue of pressing conservation concern. Previous estimates of this mortality have been based on limited data and have not included Canada. We compiled a database of communication towers in the continental United States and Canada and estimated avian mortality by tower with a regression relating avian mortality to tower height. This equation was derived from 38 tower studies for which mortality data were available and corrected for sampling effort, search efficiency, and scavenging where appropriate. Although most studies document mortality at guyed towers with steady-burning lights, we accounted for lower mortality at towers without guy wires or steady-burning lights by adjusting estimates based on published studies. The resulting estimate of mortality at towers is 6.8 million birds per year in the United States and Canada. Bootstrapped subsampling indicated that the regression was robust to the choice of studies included and a comparison of multiple regression models showed that incorporating sampling, scavenging, and search efficiency adjustments improved model fit. Estimating total avian mortality is only a first step in developing an assessment of the biological significance of mortality at communication towers for individual species or groups of species. Nevertheless, our estimate can be used to evaluate this source of mortality, develop subsequent per-species mortality estimates, and motivate policy action. PMID:22558082
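The two main ingredients of such an estimate can be sketched as follows, with entirely invented study and tower data: a regression of per-tower mortality on tower height fitted to study-level results, summed over a tower database, plus bootstrapped subsampling over the studies to check robustness to study choice. The log-linear form below is a simplifying assumption, not the paper's exact model.

    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical study-level data: tower height (m) and corrected annual kills.
    height = np.array([120, 150, 180, 220, 260, 300, 350, 400, 450, 500], float)
    kills = np.array([200, 400, 700, 1200, 1900, 2800, 4100, 5600, 7500, 9800], float)

    # Hypothetical database of tower heights (m) to sum predictions over.
    towers = rng.uniform(60, 520, size=5000)

    def national_estimate(h, k, tower_heights):
        # Fit log(kills) as a linear function of height, then sum predictions.
        b1, b0 = np.polyfit(h, np.log(k), 1)
        return np.exp(b0 + b1 * tower_heights).sum()

    point = national_estimate(height, kills, towers)

    # Bootstrapped subsampling over studies to gauge sensitivity to study choice.
    boot = []
    for _ in range(1000):
        idx = rng.integers(0, len(height), len(height))
        boot.append(national_estimate(height[idx], kills[idx], towers))
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(point, lo, hi)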
NASA Astrophysics Data System (ADS)
Ma, Hongliang; Xu, Shijie
2014-09-01
This paper presents an improved real-time sequential filter (IRTSF) for magnetometer-only attitude and angular velocity estimation of a spacecraft during attitude changes (including fast, large-angle attitude maneuvers, rapid spin, or uncontrolled tumbling). In this new magnetometer-only attitude determination technique, both the attitude dynamics equation and the first time derivative of the measured magnetic field vector are introduced directly into the filtering equations, building on the traditional gyroless single-vector attitude determination method and the real-time sequential filter (RTSF) for magnetometer-only attitude estimation. The process noise model of the IRTSF includes the attitude kinematics and dynamics equations, and its measurement model consists of the magnetic field vector and its first time derivative. The observability of the IRTSF for spacecraft with small or large angular velocity changes is evaluated by an improved Lie differentiation, and the degrees of observability for different initial estimation errors are analyzed using the condition number and a solved covariance matrix. Numerical simulation results indicate that: (1) the attitude and angular velocity of the spacecraft can be estimated with sufficient accuracy using the IRTSF from magnetometer-only data; (2) compared with the RTSF, the estimation accuracies and degrees of observability of attitude and angular velocity using the IRTSF are both improved; and (3) universality: the IRTSF remains observable for any initial state estimation error vector.
Earle, P.S.; Wald, D.J.; Allen, T.I.; Jaiswal, K.S.; Porter, K.A.; Hearne, M.G.
2008-01-01
One half-hour after the May 12th Mw 7.9 Wenchuan, China earthquake, the U.S. Geological Survey’s Prompt Assessment of Global Earthquakes for Response (PAGER) system distributed an automatically generated alert stating that 1.2 million people were exposed to severe-to-extreme shaking (Modified Mercalli Intensity VIII or greater). It was immediately clear that a large-scale disaster had occurred. These alerts were widely distributed and referenced by the major media outlets and used by governments, scientific, and relief agencies to guide their responses. The PAGER alerts and Web pages included predictive ShakeMaps showing estimates of ground shaking, maps of population density, and a list of estimated intensities at impacted cities. Manual, revised alerts were issued in the following hours that included the dimensions of the fault rupture. Within a half-day, PAGER’s estimates of the population exposed to strong shaking levels stabilized at 5.2 million people. A coordinated research effort is underway to extend PAGER’s capability to include estimates of the number of casualties. We are pursuing loss models that will allow PAGER the flexibility to use detailed inventory and engineering results in regions where these data are available while also calculating loss estimates in regions where little is known about the type and strength of the built infrastructure. Prototype PAGER fatality estimates are currently implemented and can be manually triggered. In the hours following the Wenchuan earthquake, these models predicted fatalities in the tens of thousands.
Fretheim, Atle; Soumerai, Stephen B; Zhang, Fang; Oxman, Andrew D; Ross-Degnan, Dennis
2013-08-01
We reanalyzed the data from a cluster-randomized controlled trial (C-RCT) of a quality improvement intervention for prescribing antihypertensive medication. Our objective was to estimate the effectiveness of the intervention using both interrupted time-series (ITS) and RCT methods, and to compare the findings. We first conducted an ITS analysis using data only from the intervention arm of the trial because our main objective was to compare the findings from an ITS analysis with the findings from the C-RCT. We used segmented regression methods to estimate changes in level or slope coincident with the intervention, controlling for baseline trend. We analyzed the C-RCT data using generalized estimating equations. Last, we estimated the intervention effect by including data from both study groups and by conducting a controlled ITS analysis of the difference between the slope and level changes in the intervention and control groups. The estimates of absolute change resulting from the intervention were ITS analysis, 11.5% (95% confidence interval [CI]: 9.5, 13.5); C-RCT, 9.0% (95% CI: 4.9, 13.1); and the controlled ITS analysis, 14.0% (95% CI: 8.6, 19.4). ITS analysis can provide an effect estimate that is concordant with the results of a cluster-randomized trial. A broader range of comparisons from other RCTs would help to determine whether these are generalizable results. Copyright © 2013 Elsevier Inc. All rights reserved.
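A minimal sketch of the segmented-regression parameterization used in the ITS analysis, on simulated monthly data: the model carries a baseline level and trend plus a level change and a slope change coincident with the intervention. Ordinary least squares via statsmodels is used here purely for illustration; the trial comparison in the paper used generalized estimating equations, and the series below is invented.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(7)
    t = np.arange(48)                      # months 0..47
    post = (t >= 24).astype(float)         # intervention at month 24

    # Simulated prescribing series: baseline trend plus an 11-point level jump.
    y = 50 + 0.2 * t + 11.0 * post + rng.normal(0, 2, t.size)

    # Columns: intercept, baseline slope, level change, post-intervention slope change.
    X = sm.add_constant(np.column_stack([t, post, (t - 24) * post]))
    fit = sm.OLS(y, X).fit()
    print(fit.params)      # the coefficient on `post` estimates the level change
    print(fit.conf_int())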
Salas, M M S; Nascimento, G G; Huysmans, M C; Demarco, F F
2015-01-01
The main purpose of this systematic review was to estimate the prevalence of dental erosion in permanent teeth of children and adolescents. An electronic search was performed up to and including March 2014. Eligibility criteria included population-based studies in permanent teeth of children and adolescents aged 8-19 years reporting the prevalence, or data that allowed the calculation of prevalence rates, of tooth erosion. Data collection assessed information regarding geographic location, type of index used for clinical examination, sample size, year of publication, age, examined teeth and tissue exposure. The estimated prevalence of erosive wear was determined, followed by a meta-regression analysis. Twenty-two papers were included in the systematic review. The overall estimated prevalence of tooth erosion was 30.4% (95%CI 23.8-37.0). In the multivariate meta-regression model, use of the Tooth Wear Index for clinical examination, studies with samples smaller than 1000 subjects, and those conducted in the Middle East and Africa remained associated with higher dental erosion prevalence rates. Our results demonstrated that the estimated prevalence of erosive wear in permanent teeth of children and adolescents is 30.4%, with high heterogeneity between studies. Additionally, the correct choice of a clinical index for dental erosion detection and the geographic location play an important role in the large variability of erosive tooth wear in permanent teeth of children and adolescents. The prevalence of tooth erosion observed in permanent teeth of children and adolescents was considerably high. Our results demonstrated that the prevalence rate of erosive wear was influenced by methodological and diagnostic factors. When tooth erosion is assessed, the clinical index should be considered. Copyright © 2014 Elsevier Ltd. All rights reserved.
Estimation of Particulate Mass and Manganese Exposure Levels among Welders
Hobson, Angela; Seixas, Noah; Sterling, David; Racette, Brad A.
2011-01-01
Background: Welders are frequently exposed to manganese (Mn), which may increase the risk of neurological impairment. Historical exposure estimates for welding-exposed workers are needed for epidemiological studies evaluating the relationship between welding and neurological or other health outcomes. The objective of this study was to develop and validate a multivariate model to estimate quantitative levels of welding fume exposures based on welding particulate mass and Mn concentrations reported in the published literature. Methods: Articles that described welding particulate and Mn exposures during field welding activities were identified through a comprehensive literature search. Summary measures of exposure and related determinants such as year of sampling, welding process performed, type of ventilation used, degree of enclosure, base metal, and location of sampling filter were extracted from each article. The natural log of the reported arithmetic mean exposure level was used as the dependent variable in model building, while the independent variables included the exposure determinants. Cross-validation was performed to aid in model selection and to evaluate the generalizability of the models. Results: A total of 33 particulate and 27 Mn means were included in the regression analysis. The final model explained 76% of the variability in the mean exposures and included welding process and degree of enclosure as predictors. There was very little change in the explained variability and root mean squared error between the final model and its cross-validation model, indicating the final model is robust given the available data. Conclusions: This model may be improved with more detailed exposure determinants; however, the relatively large amount of variance explained by the final model, along with the positive generalizability results of the cross-validation, increases the confidence that the estimates derived from this model can be used for estimating welder exposures in the absence of individual measurement data. PMID:20870928
Considering dominance in reduced single-step genomic evaluations.
Ertl, J; Edel, C; Pimentel, E C G; Emmerling, R; Götz, K-U
2018-06-01
Single-step models including dominance can be an enormous computational task and can even be prohibitive for practical application. In this study, we try to answer the question whether a reduced single-step model is able to estimate breeding values of bulls and breeding values, dominance deviations and total genetic values of cows with acceptable quality. Genetic values and phenotypes were simulated (500 repetitions) for a small Fleckvieh pedigree consisting of 371 bulls (180 thereof genotyped) and 553 cows (40 thereof genotyped). This pedigree was virtually extended for 2,407 non-genotyped daughters. Genetic values were estimated with the single-step model and with different reduced single-step models. Including more relatives of genotyped cows in the reduced single-step model resulted in a better agreement of results with the single-step model. Accuracies of genetic values were largest with single-step and smallest with reduced single-step when only the cows genotyped were modelled. The results indicate that a reduced single-step model is suitable to estimate breeding values of bulls and breeding values, dominance deviations and total genetic values of cows with acceptable quality. © 2018 Blackwell Verlag GmbH.
Beyond Self-Report: Tools to Compare Estimated and Real-World Smartphone Use
Andrews, Sally; Ellis, David A.; Shaw, Heather; Piwek, Lukasz
2015-01-01
Psychologists typically rely on self-report data when quantifying mobile phone usage, despite little evidence of its validity. In this paper we explore the accuracy of using self-reported estimates when compared with actual smartphone use. We also include source code to process and visualise these data. We compared 23 participants’ actual smartphone use over a two-week period with self-reported estimates and the Mobile Phone Problem Use Scale. Our results indicate that estimated time spent using a smartphone may be an adequate measure of use, unless a greater resolution of data is required. Estimates concerning the number of times an individual used their phone across a typical day did not correlate with actual smartphone use. Neither estimated duration nor number of uses correlated with the Mobile Phone Problem Use Scale. We conclude that estimated smartphone use should be interpreted with caution in psychological research. PMID:26509895
NASA Instrument Cost/Schedule Model
NASA Technical Reports Server (NTRS)
Habib-Agahi, Hamid; Mrozinski, Joe; Fox, George
2011-01-01
NASA's Office of Independent Program and Cost Evaluation (IPCE) has established a number of initiatives to improve its cost and schedule estimating capabilities. One of these initiatives has resulted in the JPL-developed NASA Instrument Cost Model (NICM). NICM is a cost and schedule estimator that contains: a system level cost estimation tool; a subsystem level cost estimation tool; a database of cost and technical parameters of over 140 previously flown remote sensing and in-situ instruments; a schedule estimator; a set of rules to estimate cost and schedule by life cycle phases (B/C/D); and a novel tool for developing joint probability distributions for cost and schedule risk (Joint Confidence Level (JCL)). This paper describes the development and use of NICM, including the data normalization processes, data mining methods (cluster analysis, principal components analysis, regression analysis and bootstrap cross validation), the estimating equations themselves and a demonstration of the NICM tool suite.
Subramanian, Sundarraman
2008-01-01
This article concerns asymptotic theory for a new estimator of a survival function in the missing censoring indicator model of random censorship. Specifically, the large sample results for an inverse probability-of-non-missingness weighted estimator of the cumulative hazard function, so far not available, are derived, including an almost sure representation with rate for a remainder term, and uniform strong consistency with rate of convergence. The estimator is based on a kernel estimate for the conditional probability of non-missingness of the censoring indicator. Expressions for its bias and variance, in turn leading to an expression for the mean squared error as a function of the bandwidth, are also obtained. The corresponding estimator of the survival function, whose weak convergence is derived, is asymptotically efficient. A numerical study, comparing the performances of the proposed and two other currently existing efficient estimators, is presented. PMID:18953423
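The abstract does not write the estimator out; a plausible form consistent with the description (censoring indicators δ_i, observed only when the non-missingness indicator ξ_i = 1, inversely weighted by a kernel estimate π̂ of the conditional non-missingness probability given the observed time Z_i) would be the following sketch in LaTeX notation, offered as an assumption rather than a quotation from the paper:

    \hat{\Lambda}(t) = \sum_{i:\, Z_i \le t} \frac{\xi_i \, \delta_i / \hat{\pi}(Z_i)}{\sum_{j=1}^{n} \mathbf{1}\{Z_j \ge Z_i\}},
    \qquad
    \hat{S}(t) = \exp\{-\hat{\Lambda}(t)\}.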
Machtans, Craig S.; Thogmartin, Wayne E.
2014-01-01
The publication of a U.S. estimate of bird–window collisions by Loss et al. is an example of the somewhat contentious approach of using extrapolations to obtain large-scale estimates from small-scale studies. We review the approach by Loss et al. and other authors who have published papers on human-induced avian mortality and describe the drawbacks and advantages to publishing what could be considered imperfect science. The main drawback is the inherent and somewhat unquantifiable bias of using small-scale studies to scale up to a national estimate. The direct benefits include development of new methodologies for creating the estimates, an explicit treatment of known biases with acknowledged uncertainty in the final estimate, and the novel results. Other overarching benefits are that these types of papers are catalysts for improving all aspects of the science of estimates and for policies that must respond to the new information.
Early Universe synthesis of asymmetric dark matter nuggets
Gresham, Moira I.; Lou, Hou Keong; Zurek, Kathryn M.
2018-02-12
We compute the mass function of bound states of asymmetric dark matter - nuggets - synthesized in the early Universe. We apply our results for the nugget density and binding energy computed from a nuclear model to obtain analytic estimates of the typical nugget size exiting synthesis. We numerically solve the Boltzmann equation for synthesis including two-to-two fusion reactions, estimating the impact of bottlenecks on the mass function exiting synthesis. These results provide the basis for studying the late Universe cosmology of nuggets in a future companion paper.
Why We Should Not Be Indifferent to Specification Choices for Difference-in-Differences.
Ryan, Andrew M; Burgess, James F; Dimick, Justin B
2015-08-01
To evaluate the effects of specification choices on the accuracy of estimates in difference-in-differences (DID) models. Process-of-care quality data from Hospital Compare between 2003 and 2009. We performed a Monte Carlo simulation experiment to estimate the effect of an imaginary policy on quality. The experiment was performed for three different scenarios in which the probability of treatment was (1) unrelated to pre-intervention performance; (2) positively correlated with pre-intervention levels of performance; and (3) positively correlated with pre-intervention trends in performance. We estimated alternative DID models that varied with respect to the choice of data intervals, the comparison group, and the method of obtaining inference. We assessed estimator bias as the mean absolute deviation between estimated program effects and their true value. We evaluated the accuracy of inferences through statistical power and rates of false rejection of the null hypothesis. Performance of alternative specifications varied dramatically when the probability of treatment was correlated with pre-intervention levels or trends. In these cases, propensity score matching resulted in much more accurate point estimates. The use of permutation tests resulted in lower false rejection rates for the highly biased estimators, but the use of clustered standard errors resulted in slightly lower false rejection rates for the matching estimators. When treatment and comparison groups differed on pre-intervention levels or trends, our results supported specifications for DID models that include matching for more accurate point estimates and models using clustered standard errors or permutation tests for better inference. Based on our findings, we propose a checklist for DID analysis. © Health Research and Educational Trust.
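The baseline specification being varied can be sketched as a simple two-way DID regression with standard errors clustered at the hospital level; the matching and permutation variants discussed above layer on top of this. The data and effect size below are simulated, and the statsmodels call is one common way to obtain clustered inference.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    n_hosp, n_per = 200, 8                         # hospitals x quarterly observations
    hosp = np.repeat(np.arange(n_hosp), n_per)
    treat = (hosp < 100).astype(float)             # first 100 hospitals treated
    post = np.tile((np.arange(n_per) >= 4).astype(float), n_hosp)

    # Simulated quality score with a true 3-point treatment effect.
    y = (70 + 2 * treat + 1.5 * post + 3.0 * treat * post
         + rng.normal(0, 4, n_hosp * n_per))

    # Columns: intercept, treated, post, treated x post (the DID term).
    X = sm.add_constant(np.column_stack([treat, post, treat * post]))
    fit = sm.OLS(y, X).fit(cov_type="cluster", cov_kwds={"groups": hosp})
    print(fit.params[3], fit.bse[3])   # DID estimate and its clustered standard error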
NASA Astrophysics Data System (ADS)
Boren, E. J.; Boschetti, L.; Johnson, D.
2016-12-01
With near-future droughts predicted to become both more frequent and more intense (Allen et al. 2015, Diffenbaugh et al. 2015), the estimation of satellite-derived vegetation water content would benefit a wide range of environmental applications including agricultural, vegetation, and fire risk monitoring. No vegetation water content thematic product is currently available (Yebra et al. 2013), but the successful launch of the Landsat 8 OLI and Sentinel 2A satellites, and the forthcoming Sentinel 2B, provide the opportunity for monitoring biophysical variables at a scale (10-30m) and temporal resolution (5 days) needed by most applications. Radiative transfer models (RTM) use a set of biophysical parameters to produce an estimated spectral response and - when used in inverse mode - provide a way to use satellite spectral data to estimate vegetation biophysical parameters, including water content (Zarco-Tejada et al. 2003). Using the coupled leaf and canopy level model PROSAIL5, and Landsat 8 OLI and Sentinel 2A MSI optical satellite data, the present research compares the results of three model inversion techniques: iterative optimization (OPT), look-up table (LUT), and artificial neural network (ANN) training. Ancillary biophysical data, needed for constraining the inversion process, were collected from various crop species grown in a controlled setting and under different water stress conditions. The measurements included fresh weight, dry weight, leaf area, and spectral leaf transmittance and reflectance in the 350-2500 nm range. Plot-level data, collected coincident with satellite overpasses during three summer field campaigns in northern Idaho (2014 to 2016), are used to evaluate the results of the model inversion. Field measurements included fresh weight, dry weight, leaf area index, plant height, and top of canopy reflectance in the 350-2500 nm range. The results of the model inversion intercomparison exercise are used to characterize the uncertainties of vegetation water content estimation from Landsat 8 OLI and Sentinel 2A data.
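Of the three inversion routes, the look-up-table approach is the easiest to sketch generically: simulate a library of (parameter, spectrum) pairs with the RTM, then return the parameters whose simulated reflectance is closest to the observation. In the sketch below, forward_model is a purely hypothetical stand-in for PROSAIL, so only the inversion mechanics are meaningful.

    import numpy as np

    rng = np.random.default_rng(5)
    N_BANDS = 6            # e.g., the OLI/MSI bands used in the inversion

    def forward_model(params):
        # Placeholder for PROSAIL: maps biophysical parameters to band
        # reflectance. Purely illustrative; the real RTM is physically based.
        lai, cw = params
        base = np.linspace(0.05, 0.4, N_BANDS)
        return base * np.exp(-0.3 * cw * np.arange(N_BANDS)) * (1 - np.exp(-0.5 * lai))

    # Build the LUT by sampling the parameter space.
    lut_params = np.column_stack([rng.uniform(0.1, 6.0, 20000),      # LAI
                                  rng.uniform(0.001, 0.06, 20000)])  # water content
    lut_spectra = np.array([forward_model(p) for p in lut_params])

    def invert(observed):
        # Select the LUT entry with the smallest RMSE against the observation.
        rmse = np.sqrt(((lut_spectra - observed) ** 2).mean(axis=1))
        return lut_params[np.argmin(rmse)]

    obs = forward_model((3.0, 0.02)) + rng.normal(0, 0.005, N_BANDS)
    print(invert(obs))    # should recover parameters near (3.0, 0.02)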
Model Effects on GLAS-Based Regional Estimates of Forest Biomass and Carbon
NASA Technical Reports Server (NTRS)
Nelson, Ross F.
2010-01-01
Ice, Cloud, and land Elevation Satellite (ICESat) / Geoscience Laser Altimeter System (GLAS) waveform data are used to estimate biomass and carbon on a 1.27 × 10^6 square km study area in the Province of Quebec, Canada, below the tree line. The same input datasets and sampling design are used in conjunction with four different predictive models to estimate total aboveground dry forest biomass and forest carbon. The four models include non-stratified and stratified versions of a multiple linear model where either biomass or (biomass)^0.5 serves as the dependent variable. The use of different models in Quebec introduces differences in Provincial dry biomass estimates of up to 0.35 Gt, with a range of 4.94 +/- 0.28 Gt to 5.29 +/- 0.36 Gt. The differences among model estimates are statistically non-significant, however, and the results demonstrate the degree to which carbon estimates vary strictly as a function of the model used to estimate regional biomass. Results also indicate that GLAS measurements become problematic with respect to height and biomass retrievals in the boreal forest when biomass values fall below 20 t/ha and when GLAS 75th percentile heights fall below 7 m.
Khadadah, Mousa
2013-01-01
To evaluate the direct costs of treating asthma in Kuwait. Population figures were obtained from the 2005 census and projected to 2008. Treatment profiles were obtained from the Asthma Insights and Reality for the Gulf and Near East (AIRGNE) study. Asthma prevalence and unit cost estimates were based on results from a Delphi technique. These estimates were applied to the total Kuwaiti population aged 5 years and over to obtain the number of people diagnosed with asthma. The estimates from the Delphi exercise and the AIRGNE results were used to determine the number of asthma patients managed in government facilities. Direct drug costs were provided by the Ministry of Health. Treatment costs (Kuwaiti dinars, KD) were also calculated using the Delphi exercise and the AIRGNE data. The prevalence of asthma was estimated to be 15% of adults and 18% of children (93,923 adults; 70,158 children). Of these, 84,530 (90%) adults and 58,932 (84.0%) children were estimated to be using government healthcare facilities. Inpatient visits accounted for the largest portion of total direct costs (43%), followed by emergency room visits (29%), outpatient visits (21%) and medications (7%). The annual cost of treatment, excluding medications, was KD 29,946,776 (USD 107,076,063) for adults and KD 24,295,439 (USD 86,869,450) for children. Including medications, the total annual direct cost of asthma treatment was estimated to be over KD 58 million (USD 207 million). Asthma costs Kuwait a huge sum of money, though the estimates were conservative because only Kuwaiti nationals were included. Given the high medical expenditures associated with emergency room and inpatient visits, relative to lower medication costs, efforts should be focused on improving asthma control rather than reducing expenditure on procurement of medication. Copyright © 2012 S. Karger AG, Basel.
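The reported totals are mutually consistent: the adult and child non-medication costs sum to about KD 54.2 million, and since medications account for the remaining 7% of the total direct cost, the implied grand total is about KD 58.3 million, matching the stated "over KD 58 million":

    # Cross-check of the reported Kuwaiti asthma cost totals (KD).
    adults_excl_meds = 29_946_776
    children_excl_meds = 24_295_439
    non_meds = adults_excl_meds + children_excl_meds   # 54,242,215
    total = non_meds / 0.93    # medications are the remaining 7% of the total
    print(non_meds, round(total))                      # about 58.3 million KD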
NASA Astrophysics Data System (ADS)
Dafflon, B.; Barrash, W.; Cardiff, M.; Johnson, T. C.
2011-12-01
Reliable predictions of groundwater flow and solute transport require an estimation of the detailed distribution of the parameters (e.g., hydraulic conductivity, effective porosity) controlling these processes. However, such parameters are difficult to estimate because of the inaccessibility and complexity of the subsurface. In this regard, developments in parameter estimation techniques and investigations of field experiments are still challenging and necessary to improve our understanding and the prediction of hydrological processes. Here we analyze a conservative tracer test conducted at the Boise Hydrogeophysical Research Site in 2001 in a heterogeneous unconfined fluvial aquifer. Some relevant characteristics of this test include: variable-density (sinking) effects because of the injection concentration of the bromide tracer, the relatively small size of the experiment, and the availability of various sources of geophysical and hydrological information. The information contained in this experiment is evaluated through several parameter estimation approaches, including a grid-search-based strategy, stochastic simulation of hydrological property distributions, and deterministic inversion using regularization and pilot-point techniques. Doing this allows us to investigate hydraulic conductivity and effective porosity distributions and to compare the effects of assumptions from several methods and parameterizations. Our results provide new insights into the understanding of variable-density transport processes and the hydrological relevance of incorporating various sources of information in parameter estimation approaches. Among others, the variable-density effect and the effective porosity distribution, as well as their coupling with the hydraulic conductivity structure, are seen to be significant in the transport process. The results also show that assumed prior information can strongly influence the estimated distributions of hydrological properties.
NASA Astrophysics Data System (ADS)
Tong, M.; Xue, M.
2006-12-01
An important source of model error for convective-scale data assimilation and prediction is microphysical parameterization. This study investigates the possibility of estimating up to five fundamental microphysical parameters, which are closely involved in the definition of the drop size distributions of microphysical species in a commonly used single-moment ice microphysics scheme, using radar observations and the ensemble Kalman filter method. The five parameters include the intercept parameters for rain, snow and hail/graupel, and the bulk densities of hail/graupel and snow. Parameter sensitivity and identifiability are first examined. The ensemble square-root Kalman filter (EnSRF) is employed for simultaneous state and parameter estimation. Observing system simulation experiments are performed for a model-simulated supercell storm, in which the five microphysical parameters are estimated individually or in different combinations, starting from different initial guesses. When error exists in only one of the microphysical parameters, the parameter can be successfully estimated without exception. The estimation of multiple parameters is found to be less robust, with the end results of estimation being sensitive to the realization of the initial parameter perturbation. This is believed to be because of the reduced parameter identifiability and the existence of non-unique solutions. The results of state estimation are, however, always improved when simultaneous parameter estimation is performed, even when the estimated parameter values are not accurate.
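The simultaneous state-and-parameter estimation relies on state augmentation: the uncertain parameter is appended to the state vector and updated through its ensemble covariance with the observed quantity. A minimal serial ensemble square-root update for one scalar observation (in the Whitaker-Hamill form, with hypothetical numbers) looks like this:

    import numpy as np

    def ensrf_update(ens, obs_value, obs_var, h_index):
        # Serial EnSRF update of an augmented ensemble for one scalar observation.
        # ens: (n_members, n_state) array; trailing entries may be parameters.
        # h_index: index of the observed state variable (identity observation).
        hx = ens[:, h_index]
        hx_anom = hx - hx.mean()
        var_hx = hx_anom @ hx_anom / (len(hx) - 1)
        # Kalman gain from the ensemble covariance of the full state with Hx.
        cov = (ens - ens.mean(axis=0)).T @ hx_anom / (len(hx) - 1)
        gain = cov / (var_hx + obs_var)
        # Mean updated with the full gain; anomalies with the reduced gain.
        alpha = 1.0 / (1.0 + np.sqrt(obs_var / (var_hx + obs_var)))
        mean = ens.mean(axis=0) + gain * (obs_value - hx.mean())
        anoms = (ens - ens.mean(axis=0)) - alpha * np.outer(hx_anom, gain)
        return mean + anoms

    # Toy use: state = (observed reflectivity-like quantity, intercept parameter).
    rng = np.random.default_rng(11)
    ensemble = np.column_stack([rng.normal(30, 5, 40), rng.normal(8e6, 2e6, 40)])
    updated = ensrf_update(ensemble, obs_value=35.0, obs_var=4.0, h_index=0)
    print(updated.mean(axis=0))   # the parameter shifts via its covariance with Hx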
Resampling methods in Microsoft Excel® for estimating reference intervals
Theodorsson, Elvar
2015-01-01
Computer-intensive resampling/bootstrap methods are feasible when calculating reference intervals from non-Gaussian or small reference samples. Microsoft Excel® in version 2010 or later includes natural functions, which lend themselves well to this purpose, including recommended interpolation procedures for estimating 2.5 and 97.5 percentiles. The purpose of this paper is to introduce the reader to resampling estimation techniques in general and to using Microsoft Excel® 2010 for the purpose of estimating reference intervals in particular. Parametric methods are preferable to resampling methods when the distribution of observations in the reference samples is Gaussian or can be transformed to that distribution, even when the number of reference samples is less than 120. Resampling methods are appropriate when the distribution of data from the reference samples is non-Gaussian and when the number of reference individuals and corresponding samples is on the order of 40. At least 500-1000 random samples with replacement should be taken from the results of measurement of the reference samples. PMID:26527366
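The same resampling logic the paper implements with Excel worksheet functions can be written compactly in Python; the sketch below uses a hypothetical skewed reference sample of 40 values and the percentile method on 1000 bootstrap resamples.

    import numpy as np

    rng = np.random.default_rng(2024)
    # Hypothetical reference sample of 40 measurements (skewed, non-Gaussian).
    sample = rng.lognormal(mean=1.0, sigma=0.4, size=40)

    # 1000 bootstrap resamples with replacement.
    boot = rng.choice(sample, size=(1000, sample.size), replace=True)
    lower = np.percentile(boot, 2.5, axis=1)
    upper = np.percentile(boot, 97.5, axis=1)

    # Bootstrap point estimates of the reference limits and their uncertainty.
    print(lower.mean(), upper.mean())
    print(np.percentile(lower, [2.5, 97.5]), np.percentile(upper, [2.5, 97.5]))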
Resampling methods in Microsoft Excel® for estimating reference intervals.
Theodorsson, Elvar
2015-01-01
Computer-intensive resampling/bootstrap methods are feasible when calculating reference intervals from non-Gaussian or small reference samples. Microsoft Excel® in version 2010 or later includes natural functions, which lend themselves well to this purpose including recommended interpolation procedures for estimating 2.5 and 97.5 percentiles. The purpose of this paper is to introduce the reader to resampling estimation techniques in general and in using Microsoft Excel® 2010 for the purpose of estimating reference intervals in particular. Parametric methods are preferable to resampling methods when the distributions of observations in the reference samples is Gaussian or can transformed to that distribution even when the number of reference samples is less than 120. Resampling methods are appropriate when the distribution of data from the reference samples is non-Gaussian and in case the number of reference individuals and corresponding samples are in the order of 40. At least 500-1000 random samples with replacement should be taken from the results of measurement of the reference samples.
Hydroacoustic estimates of fish biomass and spatial distributions in shallow lakes
NASA Astrophysics Data System (ADS)
Lian, Yuxi; Huang, Geng; Godlewska, Małgorzata; Cai, Xingwei; Li, Chang; Ye, Shaowen; Liu, Jiashou; Li, Zhongjie
2017-03-01
We conducted acoustical surveys with a horizontal beam transducer to detect fish and with a vertical beam transducer to detect depth and macrophytes in two typical shallow lakes along the middle and lower reaches of the Changjiang (Yangtze) River in November 2013. Both lakes are subject to active fish management with annual stocking and removal of large fish. The purpose of the study was to compare hydroacoustic horizontal beam estimates with fish landings. The preliminary results show that the fish distribution patterns differed in the two lakes and were affected by water depth and macrophyte coverage. The hydroacoustically estimated fish biomass matched the commercial catch very well in Niushan Lake, but it was two times higher in Kuilei Lake. However, acoustic estimates included all fish, whereas the catch included only fish >45 cm (smaller ones were released). We were unable to determine the proper regression between acoustic target strength and fish length for the dominant fish species in the two lakes.
Robust inference in discrete hazard models for randomized clinical trials.
Nguyen, Vinh Q; Gillen, Daniel L
2012-10-01
Time-to-event data in which failures are only assessed at discrete time points are common in many clinical trials. Examples include oncology studies where events are observed through periodic screenings such as radiographic scans. When the survival endpoint is acknowledged to be discrete, common methods for the analysis of observed failure times include the discrete hazard models (e.g., the discrete-time proportional hazards and the continuation ratio model) and the proportional odds model. In this manuscript, we consider estimation of a marginal treatment effect in discrete hazard models where the constant treatment effect assumption is violated. We demonstrate that the estimator resulting from these discrete hazard models is consistent for a parameter that depends on the underlying censoring distribution. An estimator that removes the dependence on the censoring mechanism is proposed and its asymptotic distribution is derived. Basing inference on the proposed estimator allows for statistical inference that is scientifically meaningful and reproducible. Simulation is used to assess the performance of the presented methodology in finite samples.
Alikhani, Jamal; Takacs, Imre; Al-Omari, Ahmed; Murthy, Sudhir; Massoudieh, Arash
2017-03-01
A parameter estimation framework was used to evaluate the ability of observed data from a full-scale nitrification-denitrification bioreactor to reduce the uncertainty associated with the bio-kinetic and stoichiometric parameters of an activated sludge model (ASM). Samples collected over a period of 150 days from the effluent as well as from the reactor tanks were used. A hybrid genetic algorithm and Bayesian inference were used to perform deterministic and probabilistic parameter estimation, respectively. The main goal was to assess the ability of the data to obtain reliable parameter estimates for a modified version of the ASM. The modified ASM model includes methylotrophic processes, which play the main role in methanol-fed denitrification. Sensitivity analysis was also used to explain the ability of the data to provide information about each of the parameters. The results showed that the uncertainty in the estimates of the most sensitive parameters (including growth rate, decay rate, and yield coefficients) decreased with respect to the prior information.
Carter, Rickey E; Sonne, Susan C; Brady, Kathleen T
2005-01-01
Background Adequate participant recruitment is vital to the conduct of a clinical trial. Projected recruitment rates are often over-estimated, and the time to recruit the target population (accrual period) is often under-estimated. Methods This report illustrates three approaches to estimating the accrual period and applies the methods to a multi-center, randomized, placebo controlled trial undergoing development. Results Incorporating known sources of accrual variation can yield a more justified estimate of the accrual period. Simulation studies can be incorporated into a clinical trial's planning phase to provide estimates for key accrual summaries including the mean and standard deviation of the accrual period. Conclusion The accrual period of a clinical trial should be carefully considered, and the allocation of sufficient time for participant recruitment is a fundamental aspect of planning a clinical trial. PMID:15796782
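A simulation study of the kind recommended above can be sketched as per-center Poisson enrollment with staggered start-up, repeated many times to summarize the accrual period; all rates and delays below are hypothetical.

    import numpy as np

    rng = np.random.default_rng(99)
    target = 200                                  # participants needed
    rates = np.array([4.0, 3.0, 5.0, 2.5])        # expected enrollees/center/month
    startup = np.array([0, 2, 3, 6])              # months until each center opens

    def accrual_months(n_sims=10000, horizon=60):
        months = np.empty(n_sims)
        for s in range(n_sims):
            total = 0
            for m in range(horizon):
                active = startup <= m             # only open centers enroll
                total += rng.poisson(rates[active]).sum()
                if total >= target:
                    break
            months[s] = m + 1                     # horizon is hit if never reached
        return months

    sims = accrual_months()
    print(sims.mean(), sims.std())                # mean and SD of the accrual period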
Faggion, Clovis Mariano; Wu, Yun-Chun; Scheidgen, Moritz; Tu, Yu-Kang
2015-01-01
Background Risk of bias (ROB) may threaten the internal validity of a clinical trial by distorting the magnitude of treatment effect estimates, although some conflicting information on this assumption exists. Objective The objective of this study was to evaluate the effect of ROB on the magnitude of treatment effect estimates in randomized controlled trials (RCTs) in periodontology and implant dentistry. Methods A search for Cochrane systematic reviews (SRs), including meta-analyses of RCTs published in periodontology and implant dentistry fields, was performed in the Cochrane Library in September 2014. Random-effect meta-analyses were performed by grouping RCTs with different levels of ROBs in three domains (sequence generation, allocation concealment, and blinding of outcome assessment). To increase power and precision, only SRs with meta-analyses including at least 10 RCTs were included. Meta-regression was performed to investigate the association between ROB characteristics and the magnitudes of intervention effects in the meta-analyses. Results Of the 24 initially screened SRs, 21 SRs were excluded because they did not include at least 10 RCTs in the meta-analyses. Three SRs (two from the periodontology field) generated information for conducting 27 meta-analyses. Meta-regression did not reveal significant differences in the relationship of the ROB level with the size of treatment effect estimates, although a trend for inflated estimates was observed in domains with unclear ROBs. Conclusion In this sample of RCTs, high and (mainly) unclear risks of selection and detection biases did not seem to influence the size of treatment effect estimates, although several confounders might have influenced the strength of the association. PMID:26422698
Seismic Vulnerability Assessment for Montreal -An Application of HAZUS-MH4
NASA Astrophysics Data System (ADS)
Yu, Keyan
2011-12-01
Seismic loss estimation for Montreal, Canada is performed for a 2% in 50 years seismic hazard using the HAZUS-MH4 tool developed by the US Federal Emergency Management Agency. The software is adapted to accept a Canadian setting for the Montreal study region, which includes 522 census tracts. The accuracy of loss estimations using HAZUS is dependent on the quality and quantity of data collection and preparation. The data collected for the Montreal study region comprise: (1) the building inventory; (2) hazard maps regarding soil amplification, liquefaction, and landslides; (3) population distribution at three different times of the day; (4) census demographic information; and (5) synthetic ground motion contour maps using three different ground motion prediction equations. All these data are prepared and assembled into geodatabases that are compatible with the HAZUS software. The study estimated that roughly 5% of the building stock would be damaged, with direct economic losses evaluated at 1.4 billion dollars, for the 2% in 50 years scenario. The maximum number of casualties associated with this scenario corresponds to a time of occurrence of 2 p.m. and would result in approximately 500 people being injured. Epistemic uncertainty was considered by obtaining damage estimates for three attenuation functions that were developed for Eastern North America. The results indicate that loss estimates are highly sensitive to the choice of the attenuation function and suggest that epistemic uncertainty should be considered both in the definition of the hazard function and in loss estimation methodologies. The next steps in the study should be to increase the size of the survey area to Greater Montreal, which includes more than 3 million inhabitants, and to perform more targeted studies for critical areas such as downtown Montreal and the south-eastern tip of Montreal. The current study was performed mainly for the built environment; the next phase will need to include more information relative to lifelines and their impact on risks.
The cost of clinical mastitis in the first 30 days of lactation: An economic modeling tool.
Rollin, E; Dhuyvetter, K C; Overton, M W
2015-12-01
Clinical mastitis results in considerable economic losses for dairy producers and is most commonly diagnosed in early lactation. The objective of this research was to estimate the economic impact of clinical mastitis occurring during the first 30 days of lactation for a representative US dairy. A deterministic partial budget model was created to estimate direct and indirect costs per case of clinical mastitis occurring during the first 30 days of lactation. Model inputs were selected from the available literature, or when none were available, from herd data. The average case of clinical mastitis resulted in a total economic cost of $444, including $128 in direct costs and $316 in indirect costs. Direct costs included diagnostics ($10), therapeutics ($36), non-saleable milk ($25), veterinary service ($4), labor ($21), and death loss ($32). Indirect costs included future milk production loss ($125), premature culling and replacement loss ($182), and future reproductive loss ($9). Accurate decision making regarding mastitis control relies on understanding the economic impacts of clinical mastitis, especially the longer term indirect costs that represent 71% of the total cost per case of mastitis. Future milk production loss represents 28% of total cost, and future culling and replacement loss represents 41% of the total cost of a case of clinical mastitis. In contrast to older estimates, these values represent the current dairy economic climate, including milk price ($0.461/kg), feed price ($0.279/kg DM (dry matter)), and replacement costs ($2,094/head), along with the latest published estimates on the production and culling effects of clinical mastitis. This economic model is designed to be customized for specific dairy producers and their herd characteristics to better aid them in developing mastitis control strategies. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
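A partial budget of this kind reduces to a sum over cost components. The following reconstruction uses the per-case component estimates quoted in the abstract (2015 US dollars) to verify the reported totals and shares:

```python
# Component estimates quoted above; a partial budget is the sum of the parts.
direct = {"diagnostics": 10, "therapeutics": 36, "non-saleable milk": 25,
          "veterinary service": 4, "labor": 21, "death loss": 32}
indirect = {"future milk production loss": 125,
            "premature culling and replacement loss": 182,
            "future reproductive loss": 9}

d, i = sum(direct.values()), sum(indirect.values())
print(f"direct ${d}, indirect ${i}, total ${d + i}")   # 128, 316, 444
print(f"indirect share: {i / (d + i):.0%}")            # ~71%
print(f"culling share:  {indirect['premature culling and replacement loss'] / (d + i):.0%}")  # ~41%
```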
Ward, Zachary J.; Long, Michael W.; Resch, Stephen C.; Gortmaker, Steven L.; Cradock, Angie L.; Giles, Catherine; Hsiao, Amber; Wang, Y. Claire
2016-01-01
Background State-level estimates from the Centers for Disease Control and Prevention (CDC) underestimate the obesity epidemic because they use self-reported height and weight. We describe a novel bias-correction method and produce corrected state-level estimates of obesity and severe obesity. Methods Using non-parametric statistical matching, we adjusted self-reported data from the Behavioral Risk Factor Surveillance System (BRFSS) 2013 (n = 386,795) using measured data from the National Health and Nutrition Examination Survey (NHANES) (n = 16,924). We validated our national estimates against NHANES and estimated bias-corrected state-specific prevalence of obesity (BMI≥30) and severe obesity (BMI≥35). We compared these results with previous adjustment methods. Results Compared to NHANES, self-reported BRFSS data underestimated national prevalence of obesity by 16% (28.67% vs 34.01%), and severe obesity by 23% (11.03% vs 14.26%). Our method was not significantly different from NHANES for obesity or severe obesity, while previous methods underestimated both. Only four states had a corrected obesity prevalence below 30%, with four exceeding 40%; in contrast, most states were below 30% in CDC maps. Conclusions Twelve million adults with obesity (including 6.7 million with severe obesity) were misclassified by CDC state-level estimates. Previous bias-correction methods also resulted in underestimates. Accurate state-level estimates are necessary to plan for resources to address the obesity epidemic. PMID:26954566
Pandemic risk: how large are the expected losses?
Fan, Victoria Y; Jamison, Dean T; Summers, Lawrence H
2018-02-01
There is an unmet need for greater investment in preparedness against major epidemics and pandemics. The arguments in favour of such investment have been largely based on estimates of the losses in national incomes that might occur as the result of a major epidemic or pandemic. Recently, we extended the estimate to include the valuation of the lives lost as a result of pandemic-related increases in mortality. This produced markedly higher estimates of the full value of loss that might occur as the result of a future pandemic. We parametrized an exceedance probability function for a global influenza pandemic and estimated that the expected number of influenza-pandemic-related deaths is about 720 000 per year. We calculated the expected annual losses from pandemic risk to be about 500 billion United States dollars - or 0.6% of global income - per year. This estimate falls within - but towards the lower end of - the Intergovernmental Panel on Climate Change's estimates of the value of the losses from global warming, which range from 0.2% to 2% of global income. The estimated percentage of annual national income represented by the expected value of losses varied by country income grouping: from a little over 0.3% in high-income countries to 1.6% in lower-middle-income countries. Most of the losses from influenza pandemics come from rare, severe events.
How accurately can the peak skin dose in fluoroscopy be determined using indirect dose metrics?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, A. Kyle, E-mail: kyle.jones@mdanderson.org; Ensor, Joe E.; Pasciak, Alexander S.
Purpose: Skin dosimetry is important for fluoroscopically-guided interventions, as peak skin doses (PSD) that result in skin reactions can be reached during these procedures. There is no consensus as to whether or not indirect skin dosimetry is sufficiently accurate for fluoroscopically-guided interventions. However, measuring PSD with film is difficult and the decision to do so must be made a priori. The purpose of this study was to assess the accuracy of different types of indirect dose estimates and to determine if PSD can be calculated within ±50% using indirect dose metrics for embolization procedures. Methods: PSD were measured directly using radiochromic film for 41 consecutive embolization procedures at two sites. Indirect dose metrics from the procedures were collected, including reference air kerma. Four different estimates of PSD were calculated from the indirect dose metrics and compared along with reference air kerma to the measured PSD for each case. The four indirect estimates included a standard calculation method, the use of detailed information from the radiation dose structured report, and two simplified calculation methods based on the standard method. Indirect dosimetry results were compared with direct measurements, including an analysis of uncertainty associated with film dosimetry. Factors affecting the accuracy of the different indirect estimates were examined. Results: When using the standard calculation method, calculated PSD were within ±35% for all 41 procedures studied. Calculated PSD were within ±50% for a simplified method using a single source-to-patient distance for all calculations. Reference air kerma was within ±50% for all but one procedure. Cases for which reference air kerma or calculated PSD exhibited large (±35%) differences from the measured PSD were analyzed, and two main causative factors were identified: unusually small or large source-to-patient distances and large contributions to reference air kerma from cone beam computed tomography or acquisition runs acquired at large primary gantry angles. When calculated uncertainty limits [−12.8%, 10%] were applied to directly measured PSD, most indirect PSD estimates remained within ±50% of the measured PSD. Conclusions: Using indirect dose metrics, PSD can be determined within ±35% for embolization procedures. Reference air kerma can be used without modification to set notification limits and substantial radiation dose levels, provided the displayed reference air kerma is accurate. These results can reasonably be extended to similar procedures, including vascular and interventional oncology. Considering these results, film dosimetry is likely an unnecessary effort for these types of procedures when indirect dose metrics are available.
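A hedged sketch of the kind of "standard" indirect calculation the study evaluates: scale the displayed reference air kerma from the interventional reference point to the skin plane by the inverse-square law and apply tissue, backscatter, and table-attenuation factors. All distances and factors below are generic assumptions, not the paper's calibration:

```python
def psd_estimate(k_ref_mgy, d_ref_cm=61.0, d_skin_cm=75.0,
                 backscatter=1.3, f_tissue=1.06, table_trans=0.8):
    """Peak skin dose (mGy) from displayed reference air kerma (mGy).

    d_ref_cm:  source to interventional reference point (assumed)
    d_skin_cm: source to patient entrance skin plane (assumed)
    """
    inverse_square = (d_ref_cm / d_skin_cm) ** 2
    return k_ref_mgy * inverse_square * backscatter * f_tissue * table_trans

# Example: a 3 Gy displayed reference air kerma
print(f"{psd_estimate(3000.0):.0f} mGy")   # ~2190 mGy under these assumptions
```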
Improving Focal Depth Estimates: Studies of Depth Phase Detection at Regional Distances
NASA Astrophysics Data System (ADS)
Stroujkova, A.; Reiter, D. T.; Shumway, R. H.
2006-12-01
The accurate estimation of the depth of small, regionally recorded events continues to be an important and difficult explosion monitoring research problem. Depth phases (free surface reflections) are the primary tool that seismologists use to constrain the depth of a seismic event. When depth phases from an event are detected, an accurate source depth is easily found by using the delay times of the depth phases relative to the P wave and a velocity profile near the source. Cepstral techniques, including cepstral F-statistics, represent a class of methods designed for depth-phase detection and identification; however, they offer only a moderate level of success at epicentral distances less than 15°. This is due to complexities in the Pn coda, which can lead to numerous false detections in addition to the true phase detection. Therefore, cepstral methods cannot be used independently to reliably identify depth phases. Other evidence, such as apparent velocities, amplitudes and frequency content, must be used to confirm whether the phase is truly a depth phase. In this study we used a variety of array methods to estimate apparent phase velocities and arrival azimuths, including beam-forming, semblance analysis, MUltiple SIgnal Classification (MUSIC) (e.g., Schmidt, 1979), and cross-correlation (e.g., Cansi, 1995; Tibuleac and Herrin, 1997). To facilitate the processing and comparison of results, we developed a MATLAB-based processing tool, which allows application of all of these techniques (i.e., augmented cepstral processing) in a single environment. The main objective of this research was to combine the results of three focal-depth estimation techniques and their associated standard errors into a statistically valid unified depth estimate. The three techniques include: 1. Direct focal depth estimate from the depth-phase arrival times picked via augmented cepstral processing. 2. Hypocenter location from direct and surface-reflected arrivals observed on sparse networks of regional stations using a Grid-search, Multiple-Event Location method (GMEL; Rodi and Toksöz, 2000; 2001). 3. Surface-wave dispersion inversion for event depth and focal mechanism (Herrmann and Ammon, 2002). To validate our approach and provide quality control for our solutions, we applied the techniques to moderate-sized events (mb between 4.5 and 6.0) with known focal mechanisms. We illustrate the techniques using events observed at regional distances from the KSAR (Wonju, South Korea) teleseismic array and other nearby broadband three-component stations. Our results indicate that the techniques can produce excellent agreement between the various depth estimates. In addition, combining the techniques into a "unified" estimate greatly reduced location errors and improved robustness of the solution, even if results from the individual methods yielded large standard errors.
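The cepstral idea itself is compact: an echo (here, a depth phase such as pP) delayed by τ relative to P produces a peak in the real cepstrum at quefrency τ. The study's tool is MATLAB-based; the sketch below uses Python on a synthetic record, with all signal parameters invented:

```python
import numpy as np

rng = np.random.default_rng(3)
fs, n = 50.0, 2048                     # 50 Hz sampling
t = np.arange(n) / fs
wavelet = np.exp(-t * 8) * np.sin(2 * np.pi * 4 * t)   # simple source pulse
delay_s = 1.6                          # pP - P delay (depth-dependent)
lag = int(delay_s * fs)
x = wavelet.copy()
x[lag:] -= 0.6 * wavelet[:n - lag]     # delayed, polarity-reversed depth phase
x += 0.01 * rng.standard_normal(n)

# Real cepstrum: inverse FFT of the log amplitude spectrum
spec = np.abs(np.fft.rfft(x))
cep = np.fft.irfft(np.log(spec + 1e-12))
q = np.arange(cep.size) / fs           # quefrency axis (s)
mask = (q > 0.5) & (q < 5.0)           # search window away from the pulse itself
print(f"detected delay: {q[mask][np.argmax(np.abs(cep[mask]))]:.2f} s")  # ~1.60
```

In practice the detected delay, a near-source velocity profile, and corroborating evidence (apparent velocity, azimuth) together constrain depth, as described above.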
The Cost of Penicillin Allergy Evaluation.
Blumenthal, Kimberly G; Li, Yu; Banerji, Aleena; Yun, Brian J; Long, Aidan A; Walensky, Rochelle P
2017-09-22
Unverified penicillin allergy leads to adverse downstream clinical and economic sequelae. Penicillin allergy evaluation can be used to identify true, IgE-mediated allergy. To estimate the cost of penicillin allergy evaluation using time-driven activity-based costing (TDABC). We implemented TDABC throughout the care pathway for 30 outpatients presenting for penicillin allergy evaluation. The base-case evaluation included penicillin skin testing and a 1-step amoxicillin drug challenge, performed by an allergist. We varied assumptions about the provider type, clinical setting, procedure type, and personnel timing. The base-case penicillin allergy evaluation costs $220 in 2016 US dollars: $98 for personnel, $119 for consumables, and $3 for space. In sensitivity analyses, lower cost estimates were achieved when only a drug challenge was performed (ie, no skin test, $84) and when a nurse practitioner provider was used ($170). Adjusting for the probability of anaphylaxis did not result in a changed estimate ($220); although other analyses led to modest changes in the TDABC estimate ($214-$246), higher estimates were identified with changing to a low-demand practice setting ($268), a 50% increase in personnel times ($269), and including clinician documentation time ($288). In least- and most-costly scenario analyses, the lowest TDABC estimate was $40 and the highest was $537. Using TDABC, penicillin allergy evaluation costs $220; even with varied assumptions adjusting for operational challenges, clinical setting, and expanded testing, penicillin allergy evaluation still costs only about $540. This modest investment may be offset for patients treated with costly alternative antibiotics that also may result in adverse consequences. Copyright © 2017 American Academy of Allergy, Asthma & Immunology. Published by Elsevier Inc. All rights reserved.
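The TDABC mechanics are simple: each resource's cost is its time in the care pathway multiplied by its capacity cost rate. The minutes and rates below are invented placeholders (the paper's measured inputs produced $98 personnel and $220 total):

```python
# Time-driven activity-based costing: cost = sum of minutes x $/minute.
personnel = {
    # resource: (minutes in pathway, capacity cost rate in $ per minute)
    "allergist":  (30, 2.00),
    "nurse":      (45, 0.70),
    "front desk": (10, 0.45),
}
consumables = 119.0   # skin-test materials, amoxicillin challenge, etc.
space = 3.0

personnel_cost = sum(mins * rate for mins, rate in personnel.values())
print(f"personnel ${personnel_cost:.0f}, "
      f"total ${personnel_cost + consumables + space:.0f}")
```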
Freeman, Matthew C; Stocks, Meredith E; Cumming, Oliver; Jeandron, Aurelie; Higgins, Julian P T; Wolf, Jennyfer; Prüss-Ustün, Annette; Bonjour, Sophie; Hunter, Paul R; Fewtrell, Lorna; Curtis, Valerie
2014-08-01
To estimate the global prevalence of handwashing with soap and derive a pooled estimate of the effect of hygiene on diarrhoeal diseases, based on a systematic search of the literature. Studies with data on observed rates of handwashing with soap published between 1990 and August 2013 were identified from a systematic search of PubMed, Embase and ISI Web of Knowledge. A separate search was conducted for studies on the effect of hygiene on diarrhoeal disease that included randomised controlled trials, quasi-randomised trials with control group, observational studies using matching techniques and observational studies with a control group where the intervention was well defined. The search used Cochrane Library, Global Health, BIOSIS, PubMed, and Embase databases supplemented with reference lists from previously published systematic reviews to identify studies published between 1970 and August 2013. Results were combined using multilevel modelling for handwashing prevalence and meta-regression for risk estimates. From the 42 studies reporting handwashing prevalence we estimate that approximately 19% of the world population washes hands with soap after contact with excreta (i.e. use of a sanitation facility or contact with children's excreta). Meta-regression of risk estimates suggests that handwashing reduces the risk of diarrhoeal disease by 40% (risk ratio 0.60, 95% CI 0.53-0.68); however, when we included an adjustment for unblinded studies, the effect estimate was reduced to 23% (risk ratio 0.77, 95% CI 0.32-1.86). Our results show that handwashing after contact with excreta is poorly practiced globally, despite the likely positive health benefits. © 2014 John Wiley & Sons Ltd.
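The pooling step behind risk-ratio estimates like these is typically a random-effects meta-analysis. A minimal DerSimonian-Laird implementation, with made-up study estimates rather than the review's actual data:

```python
import numpy as np

rr = np.array([0.55, 0.70, 0.48, 0.85, 0.62])   # invented study risk ratios
se = np.array([0.15, 0.10, 0.20, 0.25, 0.12])   # SEs of log(RR)

y, v = np.log(rr), se**2
w = 1 / v
q = np.sum(w * (y - np.sum(w * y) / w.sum())**2)      # Cochran's Q
c = w.sum() - np.sum(w**2) / w.sum()
tau2 = max(0.0, (q - (len(y) - 1)) / c)               # between-study variance

w_star = 1 / (v + tau2)                               # random-effects weights
mu = np.sum(w_star * y) / w_star.sum()
se_mu = np.sqrt(1 / w_star.sum())
lo, hi = np.exp(mu - 1.96 * se_mu), np.exp(mu + 1.96 * se_mu)
print(f"pooled RR {np.exp(mu):.2f} (95% CI {lo:.2f}-{hi:.2f}), tau^2 = {tau2:.3f}")
```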
Psychometric Properties of IRT Proficiency Estimates
ERIC Educational Resources Information Center
Kolen, Michael J.; Tong, Ye
2010-01-01
Psychometric properties of item response theory proficiency estimates are considered in this paper. Proficiency estimators based on summed scores and pattern scores include non-Bayes maximum likelihood and test characteristic curve estimators and Bayesian estimators. The psychometric properties investigated include reliability, conditional…
Estimation of laser beam pointing parameters in the presence of atmospheric turbulence.
Borah, Deva K; Voelz, David G
2007-08-10
The problem of estimating mechanical boresight and jitter performance of a laser pointing system in the presence of atmospheric turbulence is considered. A novel estimator based on maximizing an average probability density function (pdf) of the received signal is presented. The proposed estimator uses a Gaussian far-field mean irradiance profile, and the irradiance pdf is assumed to be lognormal. The estimates are obtained using a sequence of return signal values from the intended target. Alternatively, one can think of the estimates being made by a cooperative target using the received signal samples directly. The estimator does not require sample-to-sample atmospheric turbulence parameter information. The approach is evaluated using wave optics simulation for both weak and strong turbulence conditions. Our results show that very good boresight and jitter estimation performance can be obtained under the weak turbulence regime. We also propose a novel technique to include the effect of very low received intensity values that cannot be measured well by the receiving device. The proposed technique provides significant improvement over a conventional approach where such samples are simply ignored. Since our method is derived from the lognormal irradiance pdf, the performance under strong turbulence is degraded. However, the ideas can be extended with appropriate pdf models to obtain more accurate results under strong turbulence conditions.
Impact of the time scale of model sensitivity response on coupled model parameter estimation
NASA Astrophysics Data System (ADS)
Liu, Chang; Zhang, Shaoqing; Li, Shan; Liu, Zhengyu
2017-11-01
That a model has sensitivity responses to parameter uncertainties is a key concept in implementing model parameter estimation using filtering theory and methodology. Depending on the nature of associated physics and characteristic variability of the fluid in a coupled system, the response time scales of a model to parameters can be different, from hourly to decadal. Unlike state estimation, where the update frequency is usually linked with observational frequency, the update frequency for parameter estimation must be associated with the time scale of the model sensitivity response to the parameter being estimated. Here, with a simple coupled model, the impact of model sensitivity response time scales on coupled model parameter estimation is studied. The model includes characteristic synoptic to decadal scales by coupling a long-term varying deep ocean with a slow-varying upper ocean forced by a chaotic atmosphere. Results show that, using the update frequency determined by the model sensitivity response time scale, both the reliability and quality of parameter estimation can be improved significantly, and thus the estimated parameters make the model more consistent with the observation. These simple model results provide a guideline for when real observations are used to optimize the parameters in a coupled general circulation model for improving climate analysis and prediction initialization.
Garcés-Vega, Francisco; Marks, Bradley P
2014-08-01
In the last 20 years, the use of microbial reduction models has expanded significantly, including inactivation (linear and nonlinear), survival, and transfer models. However, a major constraint for model development is the impossibility to directly quantify the number of viable microorganisms below the limit of detection (LOD) for a given study. Different approaches have been used to manage this challenge, including ignoring negative plate counts, using statistical estimations, or applying data transformations. Our objective was to illustrate and quantify the effect of negative plate count data management approaches on parameter estimation for microbial reduction models. Because it is impossible to obtain accurate plate counts below the LOD, we performed simulated experiments to generate synthetic data for both log-linear and Weibull-type microbial reductions. We then applied five different, previously reported data management practices and fit log-linear and Weibull models to the resulting data. The results indicated a significant effect (α = 0.05) of the data management practices on the estimated model parameters and performance indicators. For example, when the negative plate counts were replaced by the LOD for log-linear data sets, the slope of the subsequent log-linear model was, on average, 22% smaller than for the original data, the resulting model underpredicted lethality by up to 2.0 log, and the Weibull model was erroneously selected as the most likely correct model for those data. The results demonstrate that it is important to explicitly report LODs and related data management protocols, which can significantly affect model results, interpretation, and utility. Ultimately, we recommend using only the positive plate counts to estimate model parameters for microbial reduction curves and avoiding any data value substitutions or transformations when managing negative plate counts to yield the most accurate model parameters.
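The effect the authors describe can be reproduced with a few lines of simulation: substituting the LOD for below-LOD counts flattens the fitted slope of a log-linear reduction curve, while fitting only the positive counts recovers it. Values below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.repeat(np.arange(0, 11, 2), 3).astype(float)   # treatment time (min)
true_logN = 7.0 - 0.8 * t                             # log-linear reduction
obs = true_logN + rng.normal(0, 0.3, t.size)
lod = 1.0                                             # log CFU detection limit

# Practice A: substitute the LOD for below-LOD observations
sub = np.where(obs < lod, lod, obs)
slope_sub = np.polyfit(t, sub, 1)[0]

# Practice B (recommended above): keep only the positive counts
keep = obs >= lod
slope_pos = np.polyfit(t[keep], obs[keep], 1)[0]

print(f"true slope -0.80, LOD-substituted {slope_sub:.2f}, "
      f"positives-only {slope_pos:.2f}")
```

The substituted fit is biased shallow, which translates directly into the underpredicted lethality the authors report.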
Status and trends of the Lake Huron offshore demersal fish community, 1976-2012
Roseman, Edward F.; Riley, Stephen C.; Farha, Steve A.; Maitland, Bryan M.; Tucker, Taaja R.; Provo, Stacy A.; McLean, Matthew W.
2015-01-01
The USGS Great Lakes Science Center has conducted trawl surveys to assess annual changes in the offshore demersal fish community of Lake Huron since 1973. Sample sites include five ports in U.S. waters with less frequent sampling near Goderich, Ontario. The 2012 fall bottom trawl survey was carried out between 20 October and 5 November 2012 and included all U.S. ports as well as Goderich, ON. The 2012 main basin prey fish biomass estimate for Lake Huron was 97 kilotonnes, higher than the estimate in 2011 (63.2 kt), approximately one third of the maximum estimate in the time series, and nearly 6 times higher than the minimum estimate in 2009. The biomass estimates for adult alewife in 2012 were higher than 2011, but remained much lower than observed before the crash in 2004, and populations were dominated by small fish. Estimated biomass of rainbow smelt also increased and was the highest observed since 2005. Estimated adult bloater biomass in Lake Huron has been increasing in recent years, and the 2012 biomass estimate was the third highest ever observed in the survey. Biomass estimates for trout-perch and ninespine stickleback were higher than in 2011 but still remained low compared to historic estimates. The estimated biomass of deepwater and slimy sculpins increased over 2011, and slimy sculpin in particular seem to be increasing in abundance. The 2012 biomass estimate for round goby was similar to that in 2011 and was the highest observed in the survey. Substantial numbers of wild juvenile lake trout were captured again in 2012, suggesting that natural reproduction by lake trout continues to occur. The 2012 Lake Huron bottom trawl survey results suggest that several species of offshore demersal fish are beginning to increase in abundance.
Willingness to pay and determinants of choice for improved malaria treatment in rural Nepal.
Morey, Edward R; Sharma, Vijaya R; Mills, Anne
2003-07-01
A logit model is used to estimate choice among six provider types by malaria patients in rural Nepal. Patient characteristics that influence choice include travel costs, income category, household size, gender, and severity of malaria. Income effects are introduced by assuming the marginal utility of money is a step function of expenditures on the numeraire. This method of incorporating income effects is ideally suited for situations when exact income data are not available. Significant provider characteristics include wait time for treatment and wait time for laboratory results. Household willingness to pay (WTP) is estimated for increasing the number of providers and for providing more sites with blood-testing capabilities. WTP estimates vary significantly across households and allow one to assess how much different households would benefit or lose under different government proposals.
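In a logit choice model that includes a cost term, the money value of any other attribute follows from a ratio of coefficients. The values below are invented for illustration:

```python
# Marginal willingness to pay = (utility per unit attribute) / (utility per unit cost)
b_cost = -0.040   # utility per currency unit of travel cost (assumed)
b_wait = -0.025   # utility per hour of wait for laboratory results (assumed)

wtp_per_hour = b_wait / b_cost   # cost-equivalent of one hour of waiting
print(f"implied cost of one hour of wait time: {wtp_per_hour:.2f} currency units")
```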
Crustal dynamics project data analysis, 1988: VLBI geodetic results, 1979 - 1987
NASA Technical Reports Server (NTRS)
Ma, C.; Ryan, J. W.; Caprette, D.
1989-01-01
The results obtained by the Goddard VLBI (very long base interferometry) Data Analysis Team from the analysis of 712 Mark 3 VLBI geodetic data sets acquired from fixed and mobile observing sites through the end of 1987 are reported. A large solution, GLB401, was used to obtain earth rotation parameters and site velocities. A second large solution, GLB405, was used to obtain baseline evolutions. Radio source positions were estimated globally while nutation offsets were estimated from each data set. Site positions are tabulated on a yearly basis from 1979 through 1988. The results include 55 sites and 270 baselines.
Real-Time Aerodynamic Parameter Estimation without Air Flow Angle Measurements
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
2010-01-01
A technique for estimating aerodynamic parameters in real time from flight data without air flow angle measurements is described and demonstrated. The method is applied to simulated F-16 data, and to flight data from a subscale jet transport aircraft. Modeling results obtained with the new approach using flight data without air flow angle measurements were compared to modeling results computed conventionally using flight data that included air flow angle measurements. Comparisons demonstrated that the new technique can provide accurate aerodynamic modeling results without air flow angle measurements, which are often difficult and expensive to obtain. Implications for efficient flight testing and flight safety are discussed.
NASA Technical Reports Server (NTRS)
Kalayeh, H. M.; Landgrebe, D. A.
1983-01-01
A criterion which measures the quality of the estimate of the covariance matrix of a multivariate normal distribution is developed. Based on this criterion, the necessary number of training samples is predicted. Experimental results which are used as a guide for determining the number of training samples are included. Previously announced in STAR as N82-28109
A Qualitative Analysis of the Navy’s HSI Billet Structure
2008-06-01
...subspecialty code. The research results support the hypothesis that the work requirements of the July 2007 data set of 4600P-coded billets (billets
Advanced planetary analyses. [for planetary mission planning
NASA Technical Reports Server (NTRS)
1974-01-01
The results of research accomplished during this period concerning planetary mission planning are summarized. The tasks reported include cost estimation research, the planetary missions handbook, and advanced planning activities.
Hevesi, Joseph A.; Flint, Alan L.; Istok, Jonathan D.
1992-01-01
Values of average annual precipitation (AAP) may be important for hydrologic characterization of a potential high-level nuclear-waste repository site at Yucca Mountain, Nevada. Reliable measurements of AAP are sparse in the vicinity of Yucca Mountain, and estimates of AAP were needed for an isohyetal mapping over a 2600-square-mile watershed containing Yucca Mountain. Estimates were obtained with a multivariate geostatistical model developed using AAP and elevation data from a network of 42 precipitation stations in southern Nevada and southeastern California. An additional 1531 elevations were obtained to improve estimation accuracy. Isohyets representing estimates obtained using univariate geostatistics (kriging) defined a smooth and continuous surface. Isohyets representing estimates obtained using multivariate geostatistics (cokriging) defined an irregular surface that more accurately represented expected local orographic influences on AAP. Cokriging results included a maximum estimate within the study area of 335 mm at an elevation of 7400 ft, an average estimate of 157 mm for the study area, and an average estimate of 172 mm at eight locations in the vicinity of the potential repository site. Kriging estimates tended to be lower in comparison because the increased AAP expected for remote mountainous topography was not adequately represented by the available sample. Regression results between cokriging estimates and elevation were similar to regression results between measured AAP and elevation. The position of the cokriging 250-mm isohyet relative to the boundaries of pinyon pine and juniper woodlands provided indirect evidence of improved estimation accuracy because the cokriging result agreed well with investigations by others concerning the relationship between elevation, vegetation, and climate in the Great Basin. Calculated estimation variances were also mapped and compared to evaluate improvements in estimation accuracy. Cokriging estimation variances were reduced by an average of 54% relative to kriging variances within the study area. Cokriging reduced estimation variances at the potential repository site by 55% relative to kriging. The usefulness of an existing network of stations for measuring AAP within the study area was evaluated using cokriging variances, and twenty additional stations were located for the purpose of improving the accuracy of future isohyetal mappings. Using the expanded network of stations, the maximum cokriging estimation variance within the study area was reduced by 78% relative to the existing network, and the average estimation variance was reduced by 52%.
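For context on the estimation machinery, a bare-bones ordinary kriging predictor (the univariate counterpart of the cokriging used above) can be written directly from the covariance system; station coordinates, values, and the exponential covariance parameters below are invented:

```python
import numpy as np

def ok_predict(xy, z, xy0, sill=1.0, rng_par=30.0):
    """Ordinary kriging estimate and kriging variance at location xy0."""
    def cov(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
        return sill * np.exp(-d / rng_par)          # exponential covariance
    n = len(z)
    K = np.zeros((n + 1, n + 1))
    K[:n, :n] = cov(xy, xy)
    K[:n, n] = K[n, :n] = 1.0                       # unbiasedness constraint
    k = np.append(cov(xy, xy0[None, :])[:, 0], 1.0)
    w = np.linalg.solve(K, k)                       # weights + Lagrange multiplier
    return w[:n] @ z, sill - w @ k                  # estimate, kriging variance

xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 15.0], [20.0, 20.0]])  # station km
z = np.array([140.0, 155.0, 160.0, 170.0])                           # AAP, mm
est, kvar = ok_predict(xy, z, np.array([5.0, 5.0]))
print(f"AAP estimate {est:.0f} mm, kriging variance {kvar:.2f}")
```

Cokriging extends the same system with cross-covariances to a secondary variable (here, elevation), which is what reduced the estimation variances reported above.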
Covariate adjustment of event histories estimated from Markov chains: the additive approach.
Aalen, O O; Borgan, O; Fekjaer, H
2001-12-01
Markov chain models are frequently used for studying event histories that include transitions between several states. An empirical transition matrix for nonhomogeneous Markov chains has previously been developed, including a detailed statistical theory based on counting processes and martingales. In this article, we show how to estimate transition probabilities dependent on covariates. This technique may, e.g., be used for making estimates of individual prognosis in epidemiological or clinical studies. The covariates are included through nonparametric additive models on the transition intensities of the Markov chain. The additive model allows for estimation of covariate-dependent transition intensities, and again a detailed theory exists based on counting processes. The martingale setting now allows for a very natural combination of the empirical transition matrix and the additive model, resulting in estimates that can be expressed as stochastic integrals, and hence their properties are easily evaluated. Two medical examples will be given. In the first example, we study how the lung cancer mortality of uranium miners depends on smoking and radon exposure. In the second example, we study how the probability of being in response depends on patient group and prophylactic treatment for leukemia patients who have had a bone marrow transplantation. A program in R and S-PLUS that can carry out the analyses described here has been developed and is freely available on the Internet.
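The empirical transition matrix referred to above is the product-integral of the estimated cumulative intensities (the Aalen-Johansen estimator). A minimal sketch on an invented three-state history, without the covariate extension:

```python
import numpy as np

# States: 0 = healthy, 1 = ill, 2 = dead. Each event: (time, from, to);
# at_risk gives the number of subjects in each state just before each time.
events = [(1.0, 0, 1), (2.0, 0, 2), (3.0, 1, 0), (4.0, 1, 2)]
at_risk = {1.0: [5, 2, 0], 2.0: [4, 3, 1], 3.0: [3, 3, 2], 4.0: [4, 2, 2]}

P = np.eye(3)
for t, i, j in events:
    dA = np.zeros((3, 3))
    dA[i, j] = 1.0 / at_risk[t][i]     # Nelson-Aalen intensity increment
    dA[i, i] = -dA[i, j]               # rows of A sum to zero
    P = P @ (np.eye(3) + dA)           # product-integral update

print(np.round(P, 3))                  # estimated P(0, t]; rows sum to 1
```

The article's contribution is to replace the constant increments with covariate-dependent intensities from an additive model while keeping this same product structure.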
Global Maps of Temporal Streamflow Characteristics Based on Observations from Many Small Catchments
NASA Astrophysics Data System (ADS)
Beck, H.; van Dijk, A.; de Roo, A.
2014-12-01
Streamflow (Q) estimation in ungauged catchments is one of the greatest challenges facing hydrologists. We used observed Q from approximately 7500 small catchments (<10,000 km2) around the globe to train neural network ensembles to estimate temporal Q distribution characteristics from climate and physiographic characteristics of the catchments. In total 17 Q characteristics were selected, including mean annual Q, baseflow index, and a number of flow percentiles. Training coefficients of determination for the estimation of the Q characteristics ranged from 0.56 for the baseflow recession constant to 0.93 for the Q timing. Overall, climate indices dominated among the predictors. Predictors related to soils and geology were the least important, perhaps due to data quality. The trained neural network ensembles were subsequently applied spatially over the ice-free land surface including ungauged regions, resulting in global maps of the Q characteristics (0.125° spatial resolution). These maps possess several unique features: 1) they represent purely observation-driven estimates; 2) are based on an unprecedentedly large set of catchments; and 3) have associated uncertainty estimates. The maps can be used for various hydrological applications, including the diagnosis of macro-scale hydrological models. To demonstrate this, the produced maps were compared to equivalent maps derived from the simulated daily Q of five macro-scale hydrological models, highlighting various opportunities for improvement in model Q behavior. The produced dataset is available for download.
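A sketch of the ensemble idea under simple assumptions: several small neural networks trained from different random seeds, with the spread across members serving as the uncertainty estimate. The catchment predictors and target are synthetic stand-ins:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.random((500, 4))                       # e.g. aridity, slope, soil, relief
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.1 * rng.standard_normal(500)

# Ensemble of MLPs differing only in initialization
ensemble = [
    MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                 random_state=seed).fit(X, y)
    for seed in range(8)
]

x_new = rng.random((1, 4))                     # an "ungauged" catchment
preds = np.array([m.predict(x_new)[0] for m in ensemble])
print(f"Q characteristic: {preds.mean():.2f} +/- {preds.std():.2f}")
```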
Robust and efficient estimation with weighted composite quantile regression
NASA Astrophysics Data System (ADS)
Jiang, Xuejun; Li, Jingzhi; Xia, Tian; Yan, Wanfeng
2016-09-01
In this paper we introduce a weighted composite quantile regression (CQR) estimation approach and study its application in nonlinear models such as exponential models and ARCH-type models. The weighted CQR is augmented by using a data-driven weighting scheme. With the error distribution unspecified, the proposed estimators share robustness from quantile regression and achieve nearly the same efficiency as the oracle maximum likelihood estimator (MLE) for a variety of error distributions including the normal, mixed-normal, Student's t, Cauchy distributions, etc. We also suggest an algorithm for the fast implementation of the proposed methodology. Simulations are carried out to compare the performance of different estimators, and the proposed approach is used to analyze the daily S&P 500 Composite index, which verifies the effectiveness and efficiency of our theoretical results.
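The CQR objective is a sum of check losses over several quantile levels, sharing one slope but allowing a separate intercept per level. A minimal sketch with equal weights standing in for the paper's data-driven scheme, on heavy-tailed synthetic data:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
n = 300
x = rng.standard_normal(n)
y = 1.5 * x + rng.standard_t(df=3, size=n)     # heavy-tailed errors

taus = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
wts = np.ones_like(taus) / taus.size           # placeholder (equal) weights

def cqr_loss(theta):
    beta, b = theta[0], theta[1:]              # common slope, per-tau intercepts
    total = 0.0
    for w, tau, bk in zip(wts, taus, b):
        u = y - bk - beta * x
        total += w * np.sum(u * (tau - (u < 0)))   # check (pinball) loss
    return total

res = minimize(cqr_loss, np.zeros(1 + taus.size), method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-6, "fatol": 1e-6})
print(f"CQR slope estimate: {res.x[0]:.3f} (true 1.5)")
```

Averaging information across quantiles is what gives CQR its robustness to the error distribution, as the abstract notes.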
State of charge estimation in Ni-MH rechargeable batteries
NASA Astrophysics Data System (ADS)
Milocco, R. H.; Castro, B. E.
In this work we estimate the state of charge (SOC) of Ni-MH rechargeable batteries using the Kalman filter based on a simplified electrochemical model. First, we derive the complete electrochemical model of the battery which includes diffusional processes and kinetic reactions in both Ni and MH electrodes. The full model is further reduced in a cascade of two parts, a linear time invariant dynamical sub-model followed by a static nonlinearity. Both parts are identified using the current and potential measured at the terminals of the battery with a simple 1-D minimization procedure. The inverse of the static nonlinearity together with a Kalman filter provide the SOC estimation as a linear estimation problem. Experimental results with commercial batteries are provided to illustrate the estimation procedure and to show the performance.
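A toy stand-in for the final estimation stage: a scalar Kalman filter with a coulomb-counting state equation and a linearized voltage measurement. All battery constants below are invented, not the identified sub-models of the paper:

```python
import numpy as np

q_ah = 2.5                     # cell capacity (Ah), assumed
dt = 1.0                       # time step (s)
h_v = 0.6                      # assumed local slope dV/dSOC (V per unit SOC)
v0 = 1.25                      # assumed voltage offset in the linearized region
q_proc, r_meas = 1e-7, 1e-4    # process / measurement noise variances

soc, p = 0.9, 1e-2             # initial SOC guess and its variance
rng = np.random.default_rng(0)
true_soc = 0.8
for k in range(600):
    current = 1.0                                   # 1 A discharge
    true_soc -= current * dt / (q_ah * 3600)
    v_meas = v0 + h_v * true_soc + rng.normal(0, 0.01)

    # Predict (coulomb counting), then correct with the voltage measurement
    soc -= current * dt / (q_ah * 3600)
    p += q_proc
    k_gain = p * h_v / (h_v * p * h_v + r_meas)
    soc += k_gain * (v_meas - (v0 + h_v * soc))
    p *= (1 - k_gain * h_v)

print(f"estimated SOC {soc:.3f}, true SOC {true_soc:.3f}")
```

In the paper the static nonlinearity is inverted to map measured potential into the linear state space; the sketch replaces that inversion with a fixed linearization.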
Missing-value estimation using linear and non-linear regression with Bayesian gene selection.
Zhou, Xiaobo; Wang, Xiaodong; Dougherty, Edward R
2003-11-22
Data from microarray experiments are usually in the form of large matrices of expression levels of genes under different experimental conditions. Owing to various reasons, there are frequently missing values. Estimating these missing values is important because they affect downstream analysis, such as clustering, classification and network design. Several methods of missing-value estimation are in use. The problem has two parts: (1) selection of genes for estimation and (2) design of an estimation rule. We propose Bayesian variable selection to obtain genes to be used for estimation, and employ both linear and nonlinear regression for the estimation rule itself. Fast implementation issues for these methods are discussed, including the use of QR decomposition for parameter estimation. The proposed methods are tested on data sets arising from hereditary breast cancer and small round blue-cell tumors. The results compare very favorably with currently used methods based on the normalized root-mean-square error. The appendix is available from http://gspsnap.tamu.edu/gspweb/zxb/missing_zxb/ (user: gspweb; passwd: gsplab).
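The QR-based least-squares step mentioned above is standard: factor the predictor matrix once and solve a triangular system. A self-contained sketch with invented expression values:

```python
import numpy as np

rng = np.random.default_rng(2)
n_obs = 20
# Predictor genes (plus intercept) across arrays where the target is observed
X = np.column_stack([np.ones(n_obs), rng.standard_normal((n_obs, 3))])
beta_true = np.array([0.5, 1.2, -0.8, 0.3])
y = X @ beta_true + 0.1 * rng.standard_normal(n_obs)   # target gene expression

# Stable least squares via QR: X = QR, then solve R beta = Q^T y
Q, R = np.linalg.qr(X)
beta = np.linalg.solve(R, Q.T @ y)

x_missing = np.array([1.0, 0.4, -1.1, 2.0])   # predictors in the array with the gap
print(f"imputed expression value: {x_missing @ beta:.3f}")
```

In the paper this regression is run only over genes chosen by Bayesian variable selection, with a nonlinear variant as an alternative estimation rule.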
Dental age assessment of southern Chinese using the United Kingdom Caucasian reference dataset.
Jayaraman, Jayakumar; Roberts, Graham J; King, Nigel M; Wong, Hai Ming
2012-03-10
Dental age assessment is one of the most accurate methods for estimating the age of an unknown person. Demirjian's dataset on a French-Canadian population has been widely tested for its applicability on various ethnic groups including southern Chinese. Following inaccurate results from these studies, investigators are now confronted with using alternate datasets for comparison. Testing the applicability of other reliable datasets which result in accurate findings might limit the need to develop population-specific standards. Recently, a Reference Data Set (RDS) similar to Demirjian's was prepared in the United Kingdom (UK) and has been subsequently validated. The advantages of the UK Caucasian RDS include coverage of both the maxillary and mandibular dentitions, a wide age range of subjects, and the possibility of precise age estimation with the mathematical technique of meta-analysis. The aim of this study was to evaluate the applicability of the United Kingdom Caucasian RDS on southern Chinese subjects. Dental panoramic tomographs (DPT) of 266 subjects (133 males and 133 females) aged 2-21 years that were previously taken for clinical diagnostic purposes were selected and scored by a single calibrated examiner based on Demirjian's classification of tooth developmental stages (A-H). The ages corresponding to each tooth developmental stage were obtained from the UK dataset. Intra-examiner reproducibility was tested and the Cohen kappa (0.88) showed that the level of agreement was 'almost perfect'. The estimated dental age was then compared with the chronological age using a paired t-test, with statistical significance set at p<0.01. The results showed that the UK dataset underestimated the age of southern Chinese subjects by 0.24 years, but the results were not statistically significant. In conclusion, the UK Caucasian RDS may not be suitable for estimating the age of southern Chinese subjects and there is a need for an ethnic-specific reference dataset for southern Chinese. Copyright © 2011. Published by Elsevier Ireland Ltd.
The impact of organizational structure on flight software cost risk
NASA Technical Reports Server (NTRS)
Hihn, Jairus; Lum, Karen; Monson, Erik
2004-01-01
This paper summarizes the final results of the follow-up study, which updated the estimated software effort growth for the projects that were still under development and added an evaluation of organizational roles versus observed cost risk for the missions included in the original study, expanding the data set to thirteen missions.
Survey of Crop Losses in Response to Phytoparasitic Nematodes in the United States for 1994
Koenning, S. R.; Overstreet, C.; Noling, J. W.; Donald, P. A.; Becker, J. O.; Fortnum, B. A.
1999-01-01
Previous reports of crop losses to plant-parasitic nematodes have relied on published results of survey data based on certain commodities, including tobacco, peanuts, cotton, and soybean. Reports on crop-loss assessment by land-grant universities and many commodity groups generally are no longer available, with the exception of the University of Georgia, the Beltwide Cotton Conference, and selected groups concerned with soybean. The Society of Nematologists Extension Committee contacted extension personnel in 49 U.S. states for information on estimated crop losses caused by plant-parasitic nematodes in major crops for the year 1994. Included in this paper are survey results from 35 states on various crops including corn, cotton, soybean, peanut, wheat, rice, sugarcane, sorghum, tobacco, numerous vegetable crops, fruit and nut crops, and golf greens. The data are reported systematically by state and include the estimated loss, hectarage of production, source of information, nematode species or taxon when available, and crop value. The major genera of phytoparasitic nematodes reported to cause crop losses were Heterodera, Hoplolaimus, Meloidogyne, Pratylenchus, Rotylenchulus, and Xiphinema. PMID:19270925
Reconciling medical expenditure estimates from the MEPS and NHEA, 2007.
Bernard, Didem; Cowan, Cathy; Selden, Thomas; Cai, Liming; Catlin, Aaron; Heffler, Stephen
2012-01-01
To provide a comparison of health care expenditure estimates for 2007 from the Medical Expenditure Panel Survey (MEPS) and the National Health Expenditure Accounts (NHEA). Reconciling these estimates serves two important purposes. First, it is an important quality assurance exercise for improving and ensuring the integrity of each source's estimates. Second, the reconciliation provides a consistent baseline of health expenditure data for policy simulations. Our results assist researchers in adjusting MEPS to be consistent with the NHEA so that the projected costs as well as budgetary and tax implications of any policy change are consistent with national health spending estimates. The data sources are the Medical Expenditure Panel Survey, produced by the Agency for Healthcare Research and Quality and the National Center for Health Statistics, and the National Health Expenditure Accounts, produced by the Centers for Medicare & Medicaid Services' Office of the Actuary. In this study, we focus on the personal health care (PHC) sector, which includes the goods and services rendered to treat or prevent a specific disease or condition in an individual. The official 2007 NHEA estimate for PHC spending is $1,915 billion and the MEPS estimate is $1,126 billion. Adjusting the NHEA estimates for differences in underlying populations, covered services, and other measurement concepts reduces the NHEA estimate for 2007 to $1,366 billion. As a result, MEPS is $240 billion, or 17.6 percent, less than the adjusted NHEA total.
Time estimation predicts mathematical intelligence.
Kramer, Peter; Bressan, Paola; Grassi, Massimo
2011-01-01
Performing mental subtractions affects time (duration) estimates, and making time estimates disrupts mental subtractions. This interaction has been attributed to the concurrent involvement of time estimation and arithmetic with general intelligence and working memory. Given the extant evidence of a relationship between time and number, here we test the stronger hypothesis that time estimation correlates specifically with mathematical intelligence, and not with general intelligence or working-memory capacity. Participants performed a (prospective) time estimation experiment, completed several subtests of the WAIS intelligence test, and self-rated their mathematical skill. For five different durations, we found that time estimation correlated with both arithmetic ability and self-rated mathematical skill. Controlling for non-mathematical intelligence (including working memory capacity) did not change the results. Conversely, correlations between time estimation and non-mathematical intelligence either were nonsignificant, or disappeared after controlling for mathematical intelligence. We conclude that time estimation specifically predicts mathematical intelligence. On the basis of the relevant literature, we furthermore conclude that the relationship between time estimation and mathematical intelligence is likely due to a common reliance on spatial ability.
78 FR 9865 - Air Carrier Contract Maintenance Requirements; Extension of Comment Period
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-12
..., RACCA believes the proposed rulemaking would result in significant unintended consequences, including greater operator cost and manpower requirements than those estimated in the NPRM, loss of efficiency, unbudgeted loss of aircraft availability, and a substantial additional workload for the FAA that would result...
Aulenbach, Brent T.
2006-01-01
Annual stream-water loads were calculated near the outlet of four of the larger river basins (Susquehanna, St. Lawrence, Mississippi-Atchafalaya, and Columbia) in the United States for dissolved nitrite plus nitrate (NO2 + NO3) and total phosphorus using LOADEST load estimation software. Loads were estimated for the period 1968-2004; although loads estimated for individual river basins and chemical constituent combinations typically were for shorter time periods due to limitations in data availability. Stream discharge and water-quality data for load estimates were obtained from the U.S. Geological Survey (USGS) with additional stream discharge data for the Mississippi-Atchafalaya River Basin from the U.S. Army Corps of Engineers. The loads were estimated to support national assessments of changes in stream nutrient loads that are periodically conducted by Federal agencies (for example, U.S. Environmental Protection Agency) and other water- and land-resource organizations. Data, methods, and results of load estimates are summarized herein, including World Wide Web links to electronic ASCII text files containing the raw data. The load estimates are compared to dissolved NO2 + NO3 loads for three of the large river basins from 1971 to 1998 that the USGS provided during 2001 to The H. John Heinz III Center for Science, Economics and the Environment (The Heinz Center) for a report The Heinz Center published during 2002. Differences in the load estimates are the result of using the most up-to-date monitoring data since the 2001 analysis, differences in how concentrations less than the reporting limit were handled by the load estimation models, and some errors and exclusions in the 2001 analysis datasets (which resulted in some inaccurate load estimates).
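LOADEST-style estimators are, at their core, rating-curve regressions: log concentration on log flow plus seasonal terms, retransformed and multiplied by flow to get load. A skeleton on synthetic daily data (the real software additionally handles censored data and more careful retransformation bias correction):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
days = np.arange(3650)
q = np.exp(1.0 + 0.5 * np.sin(2 * np.pi * days / 365) +
           0.3 * rng.standard_normal(days.size))          # flow, m^3/s
ln_c = 0.2 + 0.4 * np.log(q) + 0.3 * np.sin(2 * np.pi * days / 365) \
       + 0.2 * rng.standard_normal(days.size)             # ln(concentration, mg/L)

X = sm.add_constant(np.column_stack([
    np.log(q),
    np.sin(2 * np.pi * days / 365),
    np.cos(2 * np.pi * days / 365),
]))
fit = sm.OLS(ln_c, X).fit()

# Naive lognormal retransformation: add sigma^2/2 in log space
ln_c_hat = fit.predict(X) + 0.5 * fit.mse_resid
load_kg_day = np.exp(ln_c_hat) * q * 86.4    # mg/L x m^3/s -> kg/day
print(f"mean daily load: {load_kg_day.mean():.0f} kg/day")
```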
Particulate air pollution and panel studies in children: a systematic review
Ward, D; Ayres, J
2004-01-01
Aims: To systematically review the results of such studies in children, estimate summary measures of effect, and investigate potential sources of heterogeneity. Methods: Studies were identified by searching electronic databases to June 2002, including those where outcomes and particulate level measurements were made at least daily for ⩾8 weeks, and analysed using an appropriate regression model. Study results were compared using forest plots, and fixed and random effects summary effect estimates obtained. Publication bias was considered using a funnel plot. Results: Twenty two studies were identified, all except two reporting PM10 (24 hour mean) >50 µg.m-3. Reported effects of PM10 on PEF were widely spread and smaller than those for PM2.5 (fixed effects summary: -0.012 v -0.063 l.min-1 per µg.m-3 rise). A similar pattern was evident for symptoms. Random effects models produced larger estimates. Overall, in between-study comparisons, panels of children with diagnosed asthma or pre-existing respiratory symptoms appeared less affected by PM10 levels than those without, and effect estimates were larger where studies were conducted in higher ozone conditions. Larger PM10 effect estimates were obtained from studies using generalised estimating equations to model autocorrelation and where results were derived by pooling subject specific regression coefficients. A funnel plot of PM10 results for PEF was markedly asymmetrical. Conclusions: The majority of identified studies indicate an adverse effect of particulate air pollution that is greater for PM2.5 than PM10. However, results show considerable heterogeneity and there is evidence consistent with publication bias, so limited confidence may be placed on summary estimates of effect. The possibility of interaction between particle and ozone effects merits further investigation, as does variability due to analytical differences that alter the interpretation of final estimates. PMID:15031404
Xue, Yang; Yang, Zhongyang; Wang, Xiaoyan; Lin, Zhipan; Li, Dunxi; Su, Shaofeng
2016-01-01
Casuarina equisetifolia is commonly planted and used in the construction of coastal shelterbelt protection in Hainan Island. Thus, it is critical to accurately estimate the tree biomass of Casuarina equisetifolia L. for forest managers to evaluate the biomass stock in Hainan. The data for this work consisted of 72 trees, which were divided into three age groups: young forest, middle-aged forest, and mature forest. The proportion of biomass from the trunk significantly increased with age (P<0.05). However, the biomass of the branch and leaf decreased, and the biomass of the root did not change. To test whether the crown radius (CR) can improve biomass estimates of C. equisetifolia, we introduced CR into the biomass models. Here, six models were used to estimate the biomass of each component, including the trunk, the branch, the leaf, and the root. In each group, we selected one model among these six models for each component. The results showed that including the CR greatly improved the model performance and reduced the error, especially for the young and mature forests. In addition, to ensure biomass additivity, the selected equation for each component was fitted as a system of equations using seemingly unrelated regression (SUR). The SUR method not only gave efficient and accurate estimates but also achieved the logical additivity. The results in this study provide a robust estimation of tree biomass components and total biomass over three groups of C. equisetifolia.
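The gain from adding CR can be seen in a stripped-down log-log allometric fit for a single component; the tree measurements below are invented, and the paper additionally fits all components jointly with SUR to enforce additivity:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 72
dbh = rng.uniform(5, 30, n)                    # diameter at breast height, cm
cr = np.clip(0.15 * dbh + rng.normal(0, 0.4, n), 0.2, None)   # crown radius, m
ln_w = -2.0 + 2.3 * np.log(dbh) + 0.5 * np.log(cr) \
       + 0.15 * rng.standard_normal(n)         # ln(trunk biomass, kg)

X1 = np.column_stack([np.ones(n), np.log(dbh)])
X2 = np.column_stack([X1, np.log(cr)])
for name, X in [("DBH only", X1), ("DBH + CR", X2)]:
    beta, res, *_ = np.linalg.lstsq(X, ln_w, rcond=None)
    print(f"{name}: residual SS {res[0]:.2f}")  # the CR term should cut the SS
```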
Estimating the Relative Water Content of Leaves in a Cotton Canopy
NASA Technical Reports Server (NTRS)
Vanderbilt, Vern; Daughtry, Craig; Kupinski, Meredith; French, Andrew; Chipman, Russell; Dahlgren, Robert
2017-01-01
Remotely sensing plant canopy water status remains a long-term goal of remote sensing research. Established approaches to estimating canopy water status (the Crop Water Stress Index, the Water Deficit Index, and the Equivalent Water Thickness) involve measurements in the thermal or reflective infrared. Here we report plant water status estimates based upon analysis of polarized visible imagery of a cotton canopy measured by the ground-based Multi-Spectral Polarization Imager (MSPI). Such estimators potentially provide access to the plant hydrological photochemistry that manifests scattering and absorption effects in the visible spectral region. Twice during one day, ±3 hours from solar noon, we collected polarized imagery and relative water content data on a cotton test plot located at the Arid Land Agricultural Research Center, United States Department of Agriculture, Maricopa, AZ. The test plot, a small portion of a large cotton field, contained stressed plants ready for irrigation. The evening prior to data collection we irrigated several rows of plants within the test plot. Thus, ground MSPI imagery from both morning and afternoon included cotton plants with a range of water statuses. Data analysis includes classifying the polarized imagery into sunlit reflecting, sunlit transmitting, shaded foliage, and bare soil. We estimate the leaf surface reflection and interior reflection based upon the per-pixel polarization and sun-view directions. We compare our cotton results with our prior polarization results for corn and soybean leaves measured in the lab and corn leaves measured in the field.
Xue, Yang; Yang, Zhongyang; Wang, Xiaoyan; Lin, Zhipan; Li, Dunxi; Su, Shaofeng
2016-01-01
Casuarina equisetifolia is commonly planted and used in the construction of coastal shelterbelt protection in Hainan Island. Thus, it is critical to accurately estimate the tree biomass of Casuarina equisetifolia L. for forest managers to evaluate the biomass stock in Hainan. The data for this work consisted of 72 trees, which were divided into three age groups: young forest, middle-aged forest, and mature forest. The proportion of biomass from the trunk significantly increased with age (P<0.05). However, the proportions of branch and leaf biomass decreased, while that of the root did not change. To test whether the crown radius (CR) can improve biomass estimates of C. equisetifolia, we introduced CR into the biomass models. Here, six models were used to estimate the biomass of each component (trunk, branch, leaf, and root). In each group, we selected one of these six models for each component. The results showed that including the CR greatly improved the model performance and reduced the error, especially for the young and mature forests. In addition, to ensure biomass additivity, the selected equation for each component was fitted as a system of equations using seemingly unrelated regression (SUR). The SUR method not only gave efficient and accurate estimates but also achieved the logical additivity. The results in this study provide a robust estimation of tree biomass components and total biomass over three groups of C. equisetifolia. PMID:27002822
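Seemingly unrelated regression can be sketched as a two-step feasible GLS over the stacked component equations. The sketch below uses invented data and assumed log-linear model forms (DBH plus crown radius); it illustrates the SUR mechanics, not the paper's exact specification:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 72                                         # trees, as in the study
dbh = rng.uniform(5, 30, n)                    # hypothetical DBH (cm)
cr = rng.uniform(1, 4, n)                      # hypothetical crown radius (m)

# Two component equations with different regressors (illustrative forms):
#   trunk : ln(B) = a0 + a1 ln(DBH)
#   branch: ln(B) = b0 + b1 ln(DBH) + b2 ln(CR)
X1 = np.column_stack([np.ones(n), np.log(dbh)])
X2 = np.column_stack([np.ones(n), np.log(dbh), np.log(cr)])
E = rng.multivariate_normal([0, 0], [[0.04, 0.02], [0.02, 0.05]], n)
y1 = -2.0 + 2.3 * np.log(dbh) + E[:, 0]
y2 = -3.0 + 1.8 * np.log(dbh) + 0.9 * np.log(cr) + E[:, 1]

# Step 1: OLS per equation, to estimate the cross-equation error covariance
b1, *_ = np.linalg.lstsq(X1, y1, rcond=None)
b2, *_ = np.linalg.lstsq(X2, y2, rcond=None)
R = np.column_stack([y1 - X1 @ b1, y2 - X2 @ b2])
S = (R.T @ R) / n                              # 2x2 residual covariance

# Step 2: feasible GLS on the stacked system (the SUR estimator)
X = np.block([[X1, np.zeros((n, X2.shape[1]))],
              [np.zeros((n, X1.shape[1])), X2]])
y = np.concatenate([y1, y2])
Omega_inv = np.kron(np.linalg.inv(S), np.eye(n))
beta_sur = np.linalg.solve(X.T @ Omega_inv @ X, X.T @ Omega_inv @ y)
print(beta_sur)                                # [a0, a1, b0, b1, b2]
```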
Estimation of submarine mass failure probability from a sequence of deposits with age dates
Geist, Eric L.; Chaytor, Jason D.; Parsons, Thomas E.; ten Brink, Uri S.
2013-01-01
The empirical probability of submarine mass failure is quantified from a sequence of dated mass-transport deposits. Several different techniques are described to estimate the parameters for a suite of candidate probability models. The techniques, previously developed for analyzing paleoseismic data, include maximum likelihood and Type II (Bayesian) maximum likelihood methods derived from renewal process theory and Monte Carlo methods. The estimated mean return time from these methods, unlike estimates from a simple arithmetic mean of the center age dates and standard likelihood methods, includes the effects of age-dating uncertainty and of open time intervals before the first and after the last event. The likelihood techniques are evaluated using Akaike’s Information Criterion (AIC) and Akaike’s Bayesian Information Criterion (ABIC) to select the optimal model. The techniques are applied to mass transport deposits recorded in two Integrated Ocean Drilling Program (IODP) drill sites located in the Ursa Basin, northern Gulf of Mexico. Dates of the deposits were constrained by regional bio- and magnetostratigraphy from a previous study. Results of the analysis indicate that submarine mass failures in this location occur primarily according to a Poisson process in which failures are independent and return times follow an exponential distribution. However, some of the model results suggest that submarine mass failures may occur quasiperiodically at one of the sites (U1324). The suite of techniques described in this study provides quantitative probability estimates of submarine mass failure occurrence, for any number of deposits and age uncertainty distributions.
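Stripped of the age-dating uncertainty and open-interval corrections that the study's estimators handle, the likelihood machinery reduces to fitting candidate renewal distributions to inter-event times and comparing AIC. A toy sketch with invented deposit ages:

```python
import numpy as np
from scipy import stats

# Hypothetical deposit ages (ka); real analyses propagate dating uncertainty.
ages = np.array([12.0, 19.5, 31.0, 38.2, 55.7, 61.3])
intervals = np.diff(np.sort(ages))            # inter-event return times

# Exponential (Poisson-process) model: MLE of the mean return time
mu_hat = intervals.mean()
loglik_exp = stats.expon.logpdf(intervals, scale=mu_hat).sum()
aic_exp = 2 * 1 - 2 * loglik_exp              # one fitted parameter

# Candidate alternative: lognormal (quasiperiodic behaviour)
shape, loc, scale = stats.lognorm.fit(intervals, floc=0)
loglik_ln = stats.lognorm.logpdf(intervals, shape, loc, scale).sum()
aic_ln = 2 * 2 - 2 * loglik_ln                # two fitted parameters

print(f"mean return time: {mu_hat:.1f} ka")
print(f"AIC exponential {aic_exp:.1f} v lognormal {aic_ln:.1f} (lower preferred)")
```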
Medical image segmentation to estimate HER2 gene status in breast cancer
NASA Astrophysics Data System (ADS)
Palacios-Navarro, Guillermo; Acirón-Pomar, José Manuel; Vilchez-Sorribas, Enrique; Zambrano, Eddie Galarza
2016-02-01
This work deals with the estimation of HER2 gene status in breast tumour images treated with in situ hybridization (ISH) techniques. We propose a simple algorithm to obtain the amplification factor of the HER2 gene. The results are very close to those obtained manually by specialists. The developed algorithm is based on colour image segmentation and has been included in a software application tool for breast tumour analysis. The tool focuses on estimating the severity of tumours, facilitating the work of pathologists and contributing to a better diagnosis.
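The abstract does not spell out the algorithm, but the general shape of such a colour-segmentation ratio estimate can be sketched as follows; the channel conventions, thresholds, and the HER2/reference signal counting are all assumptions for illustration:

```python
import numpy as np
from scipy import ndimage

def her2_amplification_ratio(rgb, her2_thresh=0.5, ref_thresh=0.5):
    """Estimate a HER2/reference signal ratio from a colour ISH image.

    Assumes (for illustration only) that HER2 signals dominate the red
    channel and reference centromere signals dominate the green channel.
    """
    img = rgb.astype(float) / 255.0
    her2_mask = (img[..., 0] - img[..., 1]) > her2_thresh    # "red" signals
    ref_mask = (img[..., 1] - img[..., 0]) > ref_thresh      # "green" signals

    _, n_her2 = ndimage.label(her2_mask)      # count discrete signal spots
    _, n_ref = ndimage.label(ref_mask)
    return n_her2 / max(n_ref, 1)             # amplification factor estimate

# Example on a random image (stand-in for a stained tumour section)
rng = np.random.default_rng(1)
ratio = her2_amplification_ratio(rng.integers(0, 256, (256, 256, 3)))
print(f"estimated amplification factor: {ratio:.2f}")
```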
NASA Technical Reports Server (NTRS)
Aldrich, R. C.; Dana, R. W.; Roberts, E. H. (Principal Investigator)
1977-01-01
The author has identified the following significant results. A stratified random sample using LANDSAT band 5 and 7 panchromatic prints resulted in estimates of water in counties with sampling errors less than ±9% (67% probability level). A forest inventory using a four band LANDSAT color composite resulted in estimates of forest area by counties that were within ±6.7% and ±3.7% respectively (67% probability level). Estimates of forest area for counties by computer assisted techniques were within ±21% of operational forest survey figures, and for all counties the difference was only one percent. Correlations of airborne terrain reflectance measurements with LANDSAT radiance verified a linear atmospheric model with an additive (path radiance) term and a multiplicative (transmittance) term. Coefficients of determination for 28 of the 32 modeling attempts, those not adversely affected by a rain shower occurring between the times of LANDSAT passage and aircraft overflights, exceeded 0.83.
Using Audit Information to Adjust Parameter Estimates for Data Errors in Clinical Trials
Shepherd, Bryan E.; Shaw, Pamela A.; Dodd, Lori E.
2013-01-01
Background Audits are often performed to assess the quality of clinical trial data, but beyond detecting fraud or sloppiness, the audit data is generally ignored. In earlier work using data from a non-randomized study, Shepherd and Yu (2011) developed statistical methods to incorporate audit results into study estimates, and demonstrated that audit data could be used to eliminate bias. Purpose In this manuscript we examine the usefulness of audit-based error-correction methods in clinical trial settings where a continuous outcome is of primary interest. Methods We demonstrate the bias of multiple linear regression estimates in general settings with an outcome that may have errors and a set of covariates for which some may have errors and others, including treatment assignment, are recorded correctly for all subjects. We study this bias under different assumptions including independence between treatment assignment, covariates, and data errors (conceivable in a double-blinded randomized trial) and independence between treatment assignment and covariates but not data errors (possible in an unblinded randomized trial). We review moment-based estimators to incorporate the audit data and propose new multiple imputation estimators. The performance of estimators is studied in simulations. Results When treatment is randomized and unrelated to data errors, estimates of the treatment effect using the original error-prone data (i.e., ignoring the audit results) are unbiased. In this setting, both moment and multiple imputation estimators incorporating audit data are more variable than standard analyses using the original data. In contrast, in settings where treatment is randomized but correlated with data errors and in settings where treatment is not randomized, standard treatment effect estimates will be biased. And in all settings, parameter estimates for the original, error-prone covariates will be biased. Treatment and covariate effect estimates can be corrected by incorporating audit data using either the multiple imputation or moment-based approaches. Bias, precision, and coverage of confidence intervals improve as the audit size increases. Limitations The extent of bias and the performance of methods depend on the extent and nature of the error as well as the size of the audit. This work only considers methods for the linear model. Settings much different than those considered here need further study. Conclusions In randomized trials with continuous outcomes and treatment assignment independent of data errors, standard analyses of treatment effects will be unbiased and are recommended. However, if treatment assignment is correlated with data errors or other covariates, naive analyses may be biased. In these settings, and when covariate effects are of interest, approaches for incorporating audit results should be considered. PMID:22848072
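As a toy version of the moment-based idea (a single error-prone covariate with classical additive error, far simpler than the settings studied in the paper), an audit subsample with the true values can be used to estimate the measurement error variance and undo the attenuation of the naive slope:

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_audit = 1000, 100
x = rng.normal(0, 1, n)                       # true covariate
y = 1.0 + 2.0 * x + rng.normal(0, 1, n)       # outcome
w = x + rng.normal(0, 0.7, n)                 # error-prone recorded covariate

# Naive slope from the error-prone data is attenuated toward zero
beta_naive = np.cov(w, y)[0, 1] / np.var(w, ddof=1)

# Audit: re-abstract the true covariate for a random subsample
idx = rng.choice(n, n_audit, replace=False)
err_var = np.var(w[idx] - x[idx], ddof=1)     # measurement error variance
reliability = 1.0 - err_var / np.var(w, ddof=1)
beta_corrected = beta_naive / reliability     # moment-based correction

print(f"naive {beta_naive:.2f}, corrected {beta_corrected:.2f} (truth 2.0)")
```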
Pihl, Michael Johannes; Jensen, Jørgen Arendt
2014-10-01
A method for 3-D velocity vector estimation using transverse oscillations is presented. The method employs a 2-D transducer and decouples the velocity estimation into three orthogonal components, which are estimated simultaneously and from the same data. The validity of the method is investigated by conducting simulations emulating a 32 × 32 matrix transducer. The results are evaluated using two performance metrics related to precision and accuracy. The study covers several parameters, including 49 flow directions, the SNR, the steering angle, and apodization type. The 49 flow directions cover the positive octant of the unit sphere. In terms of accuracy, the median bias is -2%. The precision of v(x) and v(y) depends on the flow angle β and ranges from 5% to 31% relative to the peak velocity magnitude of 1 m/s. For comparison, the range is 0.4% to 2% for v(z). The parameter study also reveals that the velocity estimation breaks down at an SNR between -6 and -3 dB. In terms of computational load, the estimation of the three velocity components requires 0.75 billion floating point operations per second (0.75 Gflops) for a realistic setup. This is well within the capability of modern scanners.
Emergency Department Length of Stay: Accuracy of Patient Estimates
Parker, Brendan T.; Marco, Catherine
2014-01-01
Introduction Managing a patient’s expectations in the emergency department (ED) environment is challenging. Previous studies have identified several factors associated with ED patient satisfaction. Lengthy wait times have been shown to be associated with dissatisfaction with ED care. Understanding that patients estimate wait times inaccurately, which could lead to lower satisfaction, provides administrators possible points of intervention to help improve accuracy of estimation and possibly satisfaction with the ED. This study was undertaken to examine the accuracy of patient estimates of time periods in an ED and to identify factors associated with accuracy. Methods In this prospective convenience-sample survey at UTMC ED, we collected data between March and July 2012. Outcome measures included the duration of each phase of ED care and patient estimates of these time periods. Results Among 309 participants, the majority underestimated the total length of stay (LOS) in the ED (median difference −7 minutes (IQR −29 to 12)). There was significant variability in ED LOS (median 155 minutes (IQR 75 to 240)). No significant associations were identified between accuracy of time estimates and gender, age, race, or insurance status. Participants with longer ED LOS demonstrated lower patient satisfaction scores (p<0.001). Conclusion Patients demonstrated inaccurate time estimates of ED treatment times, including total LOS. Patients with longer ED LOS had lower patient satisfaction scores. PMID:24672606
DOE Office of Scientific and Technical Information (OSTI.GOV)
Waldhoff, Stephanie T.; Anthoff, David; Rose, Steven K.
We use FUND 3.8 to estimate the social cost of four greenhouse gases: carbon dioxide, methane, nitrous oxide, and sulphur hexafluoride emissions. The damage potential for each gas (the ratio of the social cost of the non-carbon dioxide greenhouse gas to the social cost of carbon dioxide) is also estimated. The damage potentials are compared to several metrics, focusing in particular on the global warming potentials, which are frequently used to measure the trade-off between gases in the form of carbon dioxide equivalents. We find that damage potentials could be significantly higher than global warming potentials. This finding implies that previous papers have underestimated the relative importance of reducing non-carbon dioxide greenhouse gas emissions from an economic damage perspective. We show results for a range of sensitivity analyses: carbon dioxide fertilization on agriculture productivity, terrestrial feedbacks, climate sensitivity, discounting, equity weighting, and socioeconomic and emissions scenarios. The sensitivity of the results to carbon dioxide fertilization is a primary focus as it is an important element of climate change that has not been considered in much of the previous literature. We estimate that carbon dioxide fertilization has a large positive impact that reduces the social cost of carbon dioxide, with a much smaller effect on the other greenhouse gases. As a result, our estimates of the damage potentials of methane and nitrous oxide are much higher than estimates that ignore carbon dioxide fertilization; our base estimates that include carbon dioxide fertilization are twice the respective global warming potentials. Our base estimate of the damage potential of sulphur hexafluoride is similar to the one previous estimate, both almost three times the global warming potential.
Economic burden made celiac disease an expensive and challenging condition for Iranian patients.
Pourhoseingholi, Mohamad Amin; Rostami-Nejad, Mohammad; Barzegar, Farnoush; Rostami, Kamran; Volta, Umberto; Sadeghi, Amir; Honarkar, Zahra; Salehi, Niloofar; Asadzadeh-Aghdaei, Hamid; Baghestani, Ahmad Reza; Zali, Mohammad Reza
2017-01-01
The aim of this study was to estimate the economic burden of celiac disease (CD) in Iran. The assessment of the burden of CD has become an important primary or secondary outcome measure in clinical and epidemiologic studies. Information regarding medical costs and gluten-free diet (GFD) costs was gathered using questionnaires and checklists offered to the selected patients with CD. The data included the direct medical costs (doctor visits, hospitalization, clinical test examinations, endoscopies, etc.), GFD costs, and lost-productivity costs (as the indirect cost) for each CD patient. The factors used for cost estimation included the frequency of health resource utilization and the gluten-free diet basket. Purchasing Power Parity Dollars (PPP$) were used in order to make inter-country comparisons. A total of 213 celiac patients entered this study. The mean (standard deviation) total cost per patient per year was 3377 (1853) PPP$, comprising direct medical costs, GFD costs, and lost-productivity costs. The mean (standard deviation) medical and GFD costs were 195 (128) PPP$ and 932 (734) PPP$, respectively. The total costs of CD were significantly higher for males, and GFD and total costs were higher for unmarried patients. In conclusion, our estimate of the economic burden of CD indicates that patients face substantial expenses that might not be affordable for a good number of them. The estimated economic burden may put these patients at high risk of dietary neglect, increasing the risk of long-term complications.
Estimation of hepatitis C virus infections resulting from vertical transmission in Egypt.
Benova, Lenka; Awad, Susanne F; Miller, F DeWolfe; Abu-Raddad, Laith J
2015-03-01
Although Egypt has the highest hepatitis C virus (HCV) prevalence in the world, the ongoing level of HCV incidence in Egypt and its drivers are poorly understood. Whereas HCV mother-to-child infection is a well-established transmission route, there are no estimates of HCV infections resulting from vertical transmission for any country, including Egypt. The aim of this study was to estimate the absolute number of new HCV infections resulting from vertical transmission in Egypt. We developed a conceptual framework of HCV vertical transmission, expressed in terms of a mathematical model and based on maternal HCV antibody and viremia. The mathematical model estimated the number of HCV vertical infections nationally and for six subnational areas. Applying two vertical transmission risk estimates to the 2008 Egyptian birth cohort, we estimated that between 3,080 and 5,167 HCV infections resulted from vertical transmission among children born in 2008. HCV vertical transmission may account for half of incident cases in the <5-year age group. Disproportionately higher proportions of vertical infections were estimated in the Lower Rural and Upper Rural subnational areas. This geographical clustering was a result of higher area-level HCV prevalence among women and higher fertility rates. Vertical transmission is one of the primary HCV infection routes among children <5 years in Egypt. The absolute number of vertical transmissions and the young age at infection highlight a public health concern. These findings also emphasize the need to quantify the relative contributions of other transmission routes to HCV incidence in Egypt. © 2014 The Authors. Hepatology published by Wiley Periodicals, Inc., on behalf of the American Association for the Study of Liver Diseases.
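The backbone of such an estimate is a product of the birth cohort size, maternal antibody prevalence, the viremic fraction, and the vertical transmission risk; all numbers below are placeholders rather than the study's inputs:

```python
# Back-of-envelope structure of a vertical transmission estimate
# (all inputs are illustrative placeholders, not the study's values)
births = 1_900_000                 # annual birth cohort
anti_hcv_prev = 0.10               # maternal HCV antibody prevalence
viremic_fraction = 0.70            # antibody-positive mothers with viremia
risk_low, risk_high = 0.03, 0.05   # transmission risk per viremic birth

viremic_births = births * anti_hcv_prev * viremic_fraction
print(f"vertical infections: {viremic_births * risk_low:,.0f} "
      f"to {viremic_births * risk_high:,.0f} per year")
```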
Estimating Effects with Rare Outcomes and High Dimensional Covariates: Knowledge is Power
Ahern, Jennifer; Galea, Sandro; van der Laan, Mark
2016-01-01
Many of the secondary outcomes in observational studies and randomized trials are rare. Methods for estimating causal effects and associations with rare outcomes, however, are limited, and this represents a missed opportunity for investigation. In this article, we construct a new targeted minimum loss-based estimator (TMLE) for the effect or association of an exposure on a rare outcome. We focus on the causal risk difference and statistical models incorporating bounds on the conditional mean of the outcome, given the exposure and measured confounders. By construction, the proposed estimator constrains the predicted outcomes to respect this model knowledge. Theoretically, this bounding provides stability and power to estimate the exposure effect. In finite sample simulations, the proposed estimator performed as well, if not better, than alternative estimators, including a propensity score matching estimator, inverse probability of treatment weighted (IPTW) estimator, augmented-IPTW and the standard TMLE algorithm. The new estimator yielded consistent estimates if either the conditional mean outcome or the propensity score was consistently estimated. As a substitution estimator, TMLE guaranteed the point estimates were within the parameter range. We applied the estimator to investigate the association between permissive neighborhood drunkenness norms and alcohol use disorder. Our results highlight the potential for double robust, semiparametric efficient estimation with rare events and high dimensional covariates. PMID:28529839
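A minimal sketch of the standard TMLE algorithm for the risk difference (the baseline the paper compares against, not its new bounded variant) on simulated data:

```python
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression

def expit(x):
    return 1 / (1 + np.exp(-x))

rng = np.random.default_rng(3)
n = 5000
W = rng.normal(size=(n, 3))                          # measured confounders
A = rng.binomial(1, expit(W[:, 0] - 0.5 * W[:, 1]))  # exposure
Y = rng.binomial(1, np.clip(0.01 * (1 + 0.5 * A + 0.3 * W[:, 2]), 0, 1))

# 1) initial outcome regression Q(A, W) and counterfactual predictions
Q = LogisticRegression(max_iter=1000).fit(np.column_stack([A, W]), Y)
Q1 = Q.predict_proba(np.column_stack([np.ones(n), W]))[:, 1]
Q0 = Q.predict_proba(np.column_stack([np.zeros(n), W]))[:, 1]
QA = np.where(A == 1, Q1, Q0)

# 2) propensity score g(W) = P(A=1 | W), bounded away from 0 and 1
g = LogisticRegression(max_iter=1000).fit(W, A).predict_proba(W)[:, 1]
g = np.clip(g, 0.01, 0.99)

# 3) fluctuation ("targeting") step: regress Y on the clever covariate
#    with the logit of the initial fit as offset
eps = 1e-9
logit = lambda p: np.log(p + eps) - np.log(1 - p + eps)
H = A / g - (1 - A) / (1 - g)
fluct = sm.GLM(Y, H.reshape(-1, 1), family=sm.families.Binomial(),
               offset=logit(QA)).fit()
e = fluct.params[0]

# 4) targeted update of both counterfactual predictions, then plug in
Q1s = expit(logit(Q1) + e / g)
Q0s = expit(logit(Q0) - e / (1 - g))
print(f"TMLE risk difference estimate: {np.mean(Q1s - Q0s):.4f}")
```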
Evaluating MODIS satellite versus terrestrial data driven productivity estimates in Austria
NASA Astrophysics Data System (ADS)
Petritsch, R.; Boisvenue, C.; Pietsch, S. A.; Hasenauer, H.; Running, S. W.
2009-04-01
Sensors such as the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA's Terra satellite are developed for monitoring global and/or regional ecosystem fluxes like net primary production (NPP). Although these systems should allow us to assess carbon sequestration issues, forest management impacts, etc., relatively little is known about the consistency and accuracy of the resulting satellite-driven estimates versus production estimates derived from ground data. In this study we compare the following NPP estimation methods: (i) NPP estimates as derived from MODIS and available on the internet; (ii) estimates resulting from the off-line version of the MODIS algorithm; (iii) estimates using regional meteorological data within the off-line algorithm; (iv) NPP estimates from a species-specific biogeochemical ecosystem model adapted for Alpine conditions; and (v) NPP estimates calculated from individual tree measurements. Single tree measurements were available from 624 forested sites across Austria, but only the data from 165 sample plots included all the necessary information for performing the comparison on plot level. To ensure independence of satellite-driven and ground-based predictions, only latitude and longitude for each site were used to obtain MODIS estimates. Along with the comparison of the different methods, we discuss problems like the differing dates of field campaigns (<1999) and acquisition of satellite images (2000-2005) or incompatible productivity definitions within the methods, and propose a framework for combining terrestrial and satellite data based productivity estimates. On average, MODIS estimates agreed well with the output of the model's self-initialization (spin-up), and biomass increment calculated from tree measurements is not significantly different from model results; however, correlations between satellite-derived and terrestrial estimates are relatively poor. The differing scales (9 km² for MODIS versus 1000 m² for the sample plots), together with the heterogeneous landscape, may explain the low correlation, particularly as the correlation increases when strongly fragmented sites are left out.
2011-01-01
Background Molecular marker information is a common source to draw inferences about the relationship between genetic and phenotypic variation. Genetic effects are often modelled as additively acting marker allele effects. The true mode of biological action can, of course, be different from this plain assumption. One possibility to better understand the genetic architecture of complex traits is to include intra-locus (dominance) and inter-locus (epistasis) interaction of alleles as well as the additive genetic effects when fitting a model to a trait. Several Bayesian MCMC approaches exist for the genome-wide estimation of genetic effects with high accuracy of genetic value prediction. Including pairwise interaction for thousands of loci would probably go beyond the scope of such a sampling algorithm because then millions of effects are to be estimated simultaneously leading to months of computation time. Alternative solving strategies are required when epistasis is studied. Methods We extended a fast Bayesian method (fBayesB), which was previously proposed for a purely additive model, to include non-additive effects. The fBayesB approach was used to estimate genetic effects on the basis of simulated datasets. Different scenarios were simulated to study the loss of accuracy of prediction, if epistatic effects were not simulated but modelled and vice versa. Results If 23 QTL were simulated to cause additive and dominance effects, both fBayesB and a conventional MCMC sampler BayesB yielded similar results in terms of accuracy of genetic value prediction and bias of variance component estimation based on a model including additive and dominance effects. Applying fBayesB to data with epistasis, accuracy could be improved by 5% when all pairwise interactions were modelled as well. The accuracy decreased more than 20% if genetic variation was spread over 230 QTL. In this scenario, accuracy based on modelling only additive and dominance effects was generally superior to that of the complex model including epistatic effects. Conclusions This simulation study showed that the fBayesB approach is convenient for genetic value prediction. Jointly estimating additive and non-additive effects (especially dominance) has reasonable impact on the accuracy of prediction and the proportion of genetic variation assigned to the additive genetic source. PMID:21867519
Estimating the circuit delay of FPGA with a transfer learning method
NASA Astrophysics Data System (ADS)
Cui, Xiuhai; Liu, Datong; Peng, Yu; Peng, Xiyuan
2017-10-01
With the increase in FPGA (Field Programmable Gate Array) functionality, the FPGA has become an on-chip system platform. Due to this increased complexity, estimating FPGA delay is very challenging. To solve this problem, we propose a transfer learning estimation delay (TLED) method to simplify delay estimation across FPGAs of different speed grades. In fact, FPGAs of the same style but different speed grades come from the same process and layout, so delays are correlated across speed grades. Therefore, one speed grade is chosen to provide the basic training samples, and training samples for other speed grades are derived from them through transfer learning. We also select a few target FPGA samples as training samples. A general predictive model is trained on these samples, so a single estimation model can estimate circuit delay for FPGAs of different speed grades. The TLED framework includes three phases: 1) building a basic circuit delay library that includes multipliers, adders, shifters, and so on; these circuits are used to train and build the predictive model; 2) selecting, through comparative experiments among different algorithms, the random forest algorithm to train the predictive model; 3) predicting the target circuit delay with the predictive model. The Artix-7, Kintex-7, and Virtex-7 are selected for the experiments. Each of them includes the -1, -2, -2L, and -3 speed grades. The experiments show a delay estimation accuracy score of more than 92% with the TLED method, indicating that TLED is an efficient and effective delay assessment method, especially at the high-level synthesis stage of FPGA tools.
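One simple reading of the TLED setup: pool the abundant base-grade library samples with the few target-grade samples, add a grade indicator, and let a random forest learn the grade-to-grade shift. The features and data below are invented stand-ins, not the paper's feature set:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)

def circuit_features(n):
    # Stand-in features of library circuits (e.g. LUT count, logic depth,
    # fan-out), purely illustrative.
    return rng.uniform(0, 1, (n, 3))

# Base speed grade: plentiful delay measurements from the circuit library
Xb = circuit_features(500)
yb = 5.0 + 3.0 * Xb[:, 0] + 2.0 * Xb[:, 1] + rng.normal(0, 0.1, 500)

# Target speed grade: delays correlated with the base grade, but only a
# few labelled samples are available (the "transfer" setting)
Xt = circuit_features(20)
yt = 0.8 * (5.0 + 3.0 * Xt[:, 0] + 2.0 * Xt[:, 1]) + rng.normal(0, 0.1, 20)

# One general model over base plus target samples, with a grade indicator
# so the forest can learn the grade-to-grade shift
X = np.vstack([np.column_stack([Xb, np.zeros(500)]),
               np.column_stack([Xt, np.ones(20)])])
y = np.concatenate([yb, yt])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

Xnew = np.column_stack([circuit_features(5), np.ones(5)])  # target-grade circuits
print(model.predict(Xnew))                                 # estimated delays (ns)
```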
Methane Emissions from the Natural Gas Transmission and Storage System in the United States.
Zimmerle, Daniel J; Williams, Laurie L; Vaughn, Timothy L; Quinn, Casey; Subramanian, R; Duggan, Gerald P; Willson, Bryan; Opsomer, Jean D; Marchese, Anthony J; Martinez, David M; Robinson, Allen L
2015-08-04
The recent growth in production and utilization of natural gas offers potential climate benefits, but those benefits depend on lifecycle emissions of methane, the primary component of natural gas and a potent greenhouse gas. This study estimates methane emissions from the transmission and storage (T&S) sector of the United States natural gas industry using new data collected during 2012, including 2,292 onsite measurements, additional emissions data from 677 facilities and activity data from 922 facilities. The largest emission sources were fugitive emissions from certain compressor-related equipment and "super-emitter" facilities. We estimate total methane emissions from the T&S sector at 1,503 [1,220 to 1,950] Gg/yr (95% confidence interval) compared to the 2012 Environmental Protection Agency's Greenhouse Gas Inventory (GHGI) estimate of 2,071 [1,680 to 2,690] Gg/yr. While the overlap in confidence intervals indicates that the difference is not statistically significant, this is the result of several significant, but offsetting, factors. Factors which reduce the study estimate include a lower estimated facility count, a shift away from engines toward lower-emitting turbine and electric compressor drivers, and reductions in the usage of gas-driven pneumatic devices. Factors that increase the study estimate relative to the GHGI include updated emission rates in certain emission categories and explicit treatment of skewed emissions at both component and facility levels. For T&S stations that are required to report to the EPA's Greenhouse Gas Reporting Program (GHGRP), this study estimates total emissions to be 260% [215% to 330%] of the reportable emissions for these stations, primarily due to the inclusion of emission sources that are not reported under the GHGRP rules, updated emission factors, and super-emitter emissions.
The Community Cloud retrieval for CLimate (CC4CL) - Part 2: The optimal estimation approach
NASA Astrophysics Data System (ADS)
McGarragh, Gregory R.; Poulsen, Caroline A.; Thomas, Gareth E.; Povey, Adam C.; Sus, Oliver; Stapelberg, Stefan; Schlundt, Cornelia; Proud, Simon; Christensen, Matthew W.; Stengel, Martin; Hollmann, Rainer; Grainger, Roy G.
2018-06-01
The Community Cloud retrieval for Climate (CC4CL) is a cloud property retrieval system for satellite-based multispectral imagers and is an important component of the Cloud Climate Change Initiative (Cloud_cci) project. In this paper we discuss the optimal estimation retrieval of cloud optical thickness, effective radius and cloud top pressure based on the Optimal Retrieval of Aerosol and Cloud (ORAC) algorithm. Key to this method is the forward model, which includes the clear-sky model, the liquid water and ice cloud models, the surface model including a bidirectional reflectance distribution function (BRDF), and the "fast" radiative transfer solution (which includes a multiple scattering treatment). All of these components and their assumptions and limitations will be discussed in detail. The forward model provides the accuracy appropriate for our retrieval method. The errors are comparable to the instrument noise for cloud optical thicknesses greater than 10. At optical thicknesses less than 10, modeling errors become more significant. The retrieval method is then presented describing optimal estimation in general, the nonlinear inversion method employed, measurement and a priori inputs, the propagation of input uncertainties and the calculation of subsidiary quantities that are derived from the retrieval results. An evaluation of the retrieval was performed using measurements simulated with noise levels appropriate for the MODIS instrument. Results show errors less than 10% for cloud optical thicknesses greater than 10. Results for clouds of optical thicknesses less than 10 have errors up to 20%.
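The optimal estimation inversion is typically a Gauss-Newton iteration on a cost that balances measurement fit against the a priori. A generic sketch of that iteration, with a toy linear forward model standing in for the radiative transfer code (the matrices and covariances are invented):

```python
import numpy as np

# Toy forward model y = F(x): stand-in for the radiative transfer code
K_true = np.array([[1.0, 0.2, 0.0],
                   [0.1, 1.0, 0.3],
                   [0.0, 0.4, 1.0]])
def forward(x):                   # linear toy model; ORAC's is nonlinear
    return K_true @ x
def jacobian(x):
    return K_true

xa = np.zeros(3)                              # a priori state
Sa = np.diag([1.0, 1.0, 4.0])                 # a priori covariance
Se = np.diag([0.01, 0.01, 0.01])              # measurement noise covariance
y = forward(np.array([0.8, -0.3, 1.2])) + \
    np.random.default_rng(5).normal(0, 0.1, 3)

x = xa.copy()
Sa_i, Se_i = np.linalg.inv(Sa), np.linalg.inv(Se)
for _ in range(10):                           # Gauss-Newton iterations
    K = jacobian(x)
    S = np.linalg.inv(Sa_i + K.T @ Se_i @ K)  # posterior covariance
    x = x + S @ (K.T @ Se_i @ (y - forward(x)) - Sa_i @ (x - xa))
print("retrieved state:", x)
```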
On-line estimation of nonlinear physical systems
Christakos, G.
1988-01-01
Recursive algorithms for estimating states of nonlinear physical systems are presented. Orthogonality properties are rediscovered and the associated polynomials are used to linearize state and observation models of the underlying random processes. This requires some key hypotheses regarding the structure of these processes, which may then take account of a wide range of applications. The latter include streamflow forecasting, flood estimation, environmental protection, earthquake engineering, and mine planning. The proposed estimation algorithm may be compared favorably to Taylor series-type filters, nonlinear filters which approximate the probability density by Edgeworth or Gram-Charlier series, as well as to conventional statistical linearization-type estimators. Moreover, the method has several advantages over nonrecursive estimators like disjunctive kriging. To link theory with practice, some numerical results for a simulated system are presented, in which responses from the proposed and extended Kalman algorithms are compared. © 1988 International Association for Mathematical Geology.
NASA Astrophysics Data System (ADS)
Shrivastava, Akash; Mohanty, A. R.
2018-03-01
This paper proposes a model-based method to estimate single plane unbalance parameters (amplitude and phase angle) in a rotor using a Kalman filter and a recursive least squares based input force estimation technique. The Kalman filter based input force estimation technique requires a state-space model and response measurements. A modified system equivalent reduction expansion process (SEREP) technique is employed to obtain a reduced-order model of the rotor system so that limited response measurements can be used. The method is demonstrated using numerical simulations on a rotor-disk-bearing system. Results are presented for different measurement sets including displacement, velocity, and rotational response. Effects of measurement noise level, filter parameters (process noise covariance and forgetting factor), and modeling error are also presented, and it is observed that the unbalance parameter estimation is robust with respect to measurement noise.
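The second stage of such a scheme, recovering unbalance amplitude and phase from an estimated synchronous force, can be sketched as a recursive least squares update with a forgetting factor. The signal below is synthetic and stands in for the Kalman-filter-reconstructed force:

```python
import numpy as np

rng = np.random.default_rng(6)
omega = 2 * np.pi * 25            # rotor speed (rad/s)
dt, n = 1e-3, 4000
t = np.arange(n) * dt

# Synthetic unbalance force F*cos(w t + phi) plus noise; in the paper this
# force is first reconstructed by the Kalman-filter-based estimator.
F_true, phi_true = 3.0, 0.7
f = F_true * np.cos(omega * t + phi_true) + rng.normal(0, 0.2, n)

# RLS with forgetting factor for theta = [F cos(phi), -F sin(phi)],
# using the regressor h_t = [cos(w t), sin(w t)]
lam = 0.995
theta = np.zeros(2)
P = np.eye(2) * 1e3
for k in range(n):
    h = np.array([np.cos(omega * t[k]), np.sin(omega * t[k])])
    K = P @ h / (lam + h @ P @ h)
    theta += K * (f[k] - h @ theta)
    P = (P - np.outer(K, h @ P)) / lam

F_hat = np.hypot(theta[0], theta[1])
phi_hat = np.arctan2(-theta[1], theta[0])
print(f"amplitude {F_hat:.2f} (true {F_true}), phase {phi_hat:.2f} (true {phi_true})")
```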
Preliminary evaluation of spectral, normal and meteorological crop stage estimation approaches
NASA Technical Reports Server (NTRS)
Cate, R. B.; Artley, J. A.; Doraiswamy, P. C.; Hodges, T.; Kinsler, M. C.; Phinney, D. E.; Sestak, M. L. (Principal Investigator)
1980-01-01
Several of the projects in the AgRISTARS program require crop phenology information, including classification, acreage and yield estimation, and detection of episodic events. This study evaluates several crop calendar estimation techniques for their potential use in the program. The techniques, although generic in approach, were developed and tested on spring wheat data collected in 1978. There are three basic approaches to crop stage estimation: historical averages for an area (normal crop calendars), agrometeorological modeling of known crop-weather relationships (agromet crop calendars), and interpretation of spectral signatures (spectral crop calendars). In all, 10 combinations of planting and biostage estimation models were evaluated. Dates of stage occurrence are estimated with biases between -4 and +4 days, while root mean square errors range from 10 to 15 days. Results are inconclusive as to the superiority of any of the models, and further evaluation of the models with the 1979 data set is recommended.
Anderson, Weston; Guikema, Seth; Zaitchik, Ben; Pan, William
2014-01-01
Obtaining accurate small area estimates of population is essential for policy and health planning but is often difficult in countries with limited data. In lieu of available population data, small area estimate models draw information from previous time periods or from similar areas. This study focuses on model-based methods for estimating population when no direct samples are available in the area of interest. To explore the efficacy of tree-based models for estimating population density, we compare six different model structures including Random Forest and Bayesian Additive Regression Trees. Results demonstrate that without information from prior time periods, non-parametric tree-based models produced more accurate predictions than did conventional regression methods. Improving estimates of population density in non-sampled areas is important for regions with incomplete census data and has implications for economic, health and development policies. PMID:24992657
NASA Technical Reports Server (NTRS)
Wunsch, Carl; Stammer, Detlef
1995-01-01
Two years of altimetric data from the TOPEX/POSEIDON spacecraft have been used to produce preliminary estimates of the space and time spectra of global variability for both sea surface height and slope. The results are expressed in terms of both degree variances from spherical harmonic expansions and in along-track wavenumbers. Simple analytic approximations both in terms of piece-wise power laws and Padé fractions are provided for comparison with independent measurements and for easy use of the results. A number of uses of such spectra exist, including the possibility of combining the altimetric data with other observations, predictions of spatial coherences, and the estimation of the accuracy of apparent secular trends in sea level.
NASA Astrophysics Data System (ADS)
Zhang, Dongbo; Peng, Yinghui; Yi, Yao; Shang, Xingyu
2013-10-01
Detection of red lesions [hemorrhages (HRs) and microaneurysms (MAs)] is crucial for the diagnosis of early diabetic retinopathy. A method based on background estimation and adapted to the specific characteristics of HRs and MAs is proposed. Candidate red lesions are located by background estimation and a Mahalanobis distance measure, and then some adaptive postprocessing techniques, which include vessel detection, nonvessel exclusion based on shape analysis, and noise point exclusion by a double-ring filter (only used for MA detection), are conducted to remove nonlesion pixels. The method is evaluated on our collected image dataset, and experimental results show that it is better than or comparable to previous approaches. It is effective in reducing the false-positive and false-negative results that arise from incomplete and inaccurate vessel structure.
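A stripped-down sketch of the background estimation step: a median filter over a window much larger than any lesion estimates the slowly varying retinal background, and pixels falling well below it (in a standardized-distance sense, a 1-D analogue of the Mahalanobis measure) become candidates. The window size and threshold are assumed values:

```python
import numpy as np
from scipy import ndimage

def red_lesion_candidates(green, bg_size=25, k=2.5):
    """Locate dark candidate lesions by background estimation.

    Window size and threshold k are illustrative assumptions.
    """
    g = green.astype(float)
    bg = ndimage.median_filter(g, size=bg_size)   # background estimation
    resid = g - bg                                # lesions are darker: resid < 0
    dist = -resid / resid.std()                   # standardized darkness
    return dist > k                               # candidate red-lesion mask

# Fundus images are usually processed on the green channel, where red
# lesions have the highest contrast; a random image stands in here.
img = np.random.default_rng(7).integers(0, 256, (128, 128))
mask = red_lesion_candidates(img)
print(mask.sum(), "candidate pixels")
```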
Mooring Line Damping Estimation for a Floating Wind Turbine
Qiao, Dongsheng; Ou, Jinping
2014-01-01
The dynamic responses of mooring line serve important functions in the station keeping of a floating wind turbine (FWT). Mooring line damping significantly influences the global motions of a FWT. This study investigates the estimation of mooring line damping on the basis of the National Renewable Energy Laboratory 5 MW offshore wind turbine model that is mounted on the ITI Energy barge. A numerical estimation method is derived from the energy absorption of a mooring line resulting from FWT motion. The method is validated by performing a 1/80 scale model test. Different parameter changes are analyzed for mooring line damping induced by horizontal and vertical motions. These parameters include excitation amplitude, excitation period, and drag coefficient. Results suggest that mooring line damping must be carefully considered in the FWT design. PMID:25243231
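The energy-absorption idea reduces to integrating tension times fairlead velocity over a motion cycle and converting to an equivalent linear damping coefficient. A sketch with synthetic signals (the tension model and all numbers are invented):

```python
import numpy as np
from scipy.integrate import trapezoid

# One period of imposed sinusoidal fairlead motion and the resulting line
# tension (synthetic stand-ins; in practice these come from simulation or
# a scale-model test such as the 1/80 test in the paper)
omega, X = 2 * np.pi / 10.0, 2.0      # excitation frequency (rad/s), amplitude (m)
t = np.linspace(0, 2 * np.pi / omega, 2000)
v = X * omega * np.cos(omega * t)     # fairlead velocity
T = 5e4 + 8e3 * np.sin(omega * t) + 3e3 * np.cos(omega * t)  # tension (N), toy

# Energy absorbed by the line over one cycle: E = closed integral of T dx
E = trapezoid(T * v, t)

# Equivalent linear damping coefficient from E = pi * c * omega * X^2
c_eq = E / (np.pi * omega * X**2)
print(f"energy per cycle {E/1e3:.1f} kJ, equivalent damping {c_eq/1e3:.2f} kN*s/m")
```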
Dwivedi, Dipankar; Mohanty, Binayak P.; Lesikar, Bruce J.
2013-01-01
Microbes have been identified as a major contaminant of water resources. Escherichia coli (E. coli) is a commonly used indicator organism. It is well recognized that the fate of E. coli in surface water systems is governed by multiple physical, chemical, and biological factors. The aim of this work is to provide insight into the physical, chemical, and biological factors, along with their interactions, that are critical in the estimation of E. coli loads in surface streams. There are various models to predict E. coli loads in streams, but they tend to be system or site specific or overly complex without enhancing our understanding of these factors. Hence, based on available data, a Bayesian Neural Network (BNN) is presented for estimating E. coli loads from physical, chemical, and biological factors in streams. The BNN has the dual advantage of overcoming the absence of quality data (with regards to consistency in data) and determination of mechanistic model parameters by employing a probabilistic framework. This study evaluates whether the BNN model can be an effective alternative tool to mechanistic models for E. coli load estimation in streams. For this purpose, a comparison with a traditional model (LOADEST, USGS) is conducted. The models are compared for estimated E. coli loads based on available water quality data in Plum Creek, Texas. All the model efficiency measures suggest that overall E. coli load estimates by the BNN model are better than those by the LOADEST model on all three occasions (three-fold cross validation). Thirteen factors were used for estimating E. coli loads with the exhaustive feature selection technique, which indicated that six of the thirteen factors are important. Physical factors included temperature and dissolved oxygen; chemical factors included phosphate and ammonia; biological factors included suspended solids and chlorophyll. The results highlight that the LOADEST model estimates E. coli loads better in the smaller ranges, whereas the BNN model estimates E. coli loads better in the higher ranges. Hence, the BNN model can be used to design targeted monitoring programs and implement regulatory standards through TMDL programs. PMID:24511166
Torgerson, Paul R; Devleesschauwer, Brecht; Praet, Nicolas; Speybroeck, Niko; Willingham, Arve Lee; Kasuga, Fumiko; Rokni, Mohammad B; Zhou, Xiao-Nong; Fèvre, Eric M; Sripa, Banchob; Gargouri, Neyla; Fürst, Thomas; Budke, Christine M; Carabin, Hélène; Kirk, Martyn D; Angulo, Frederick J; Havelaar, Arie; de Silva, Nilanthi
2015-12-01
Foodborne diseases are globally important, resulting in considerable morbidity and mortality. Parasitic diseases often result in high burdens of disease in low and middle income countries and are frequently transmitted to humans via contaminated food. This study presents the first estimates of the global and regional human disease burden of 10 helminth diseases and toxoplasmosis that may be attributed to contaminated food. Data were abstracted from 16 systematic reviews or similar studies published between 2010 and 2015; from 5 disease databases accessed in 2015; and from 79 reports, 73 of which have been published since 2000, 4 published between 1995 and 2000 and 2 published in 1986 and 1981. These included reports from national surveillance systems, journal articles, and national estimates of foodborne diseases. These data were used to estimate the number of infections, sequelae, deaths, and Disability Adjusted Life Years (DALYs), by age and region for 2010. These parasitic diseases resulted in 48.4 million cases (95% uncertainty interval [UI] 43.4-79.0 million) and 59,724 (95% UI 48,017-83,616) deaths annually, resulting in 8.78 million (95% UI 7.62-12.51 million) DALYs. We estimated that 48% (95% UI 38%-56%) of cases of these parasitic diseases were foodborne, resulting in 76% (95% UI 65%-81%) of the DALYs attributable to these diseases. Overall, foodborne parasitic disease, excluding enteric protozoa, caused an estimated 23.2 million (95% UI 18.2-38.1 million) cases and 45,927 (95% UI 34,763-59,933) deaths annually, resulting in an estimated 6.64 million (95% UI 5.61-8.41 million) DALYs. Foodborne Ascaris infection (12.3 million cases, 95% UI 8.29-22.0 million) and foodborne toxoplasmosis (10.3 million cases, 95% UI 7.40-14.9 million) were the most common foodborne parasitic diseases. Human cysticercosis with 2.78 million DALYs (95% UI 2.14-3.61 million), foodborne trematodosis with 2.02 million DALYs (95% UI 1.65-2.48 million) and foodborne toxoplasmosis with 825,000 DALYs (95% UI 561,000-1.26 million) resulted in the highest burdens in terms of DALYs, mainly due to years lived with disability. Foodborne enteric protozoa, reported elsewhere, resulted in an additional 67.2 million illnesses or 492,000 DALYs. Major limitations of our study include often substantial data gaps that had to be filled by imputation, with the attendant model uncertainties. Due to resource limitations it was also not possible to consider all potentially foodborne parasites (for example Trypanosoma cruzi). Parasites are frequently transmitted to humans through contaminated food. These estimates represent an important step forward in understanding the impact of foodborne diseases globally and regionally. The disease burden due to most foodborne parasites is highly focal and results in significant morbidity and mortality among vulnerable populations.
First-Order System Least-Squares for the Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
Bochev, P.; Cai, Z.; Manteuffel, T. A.; McCormick, S. F.
1996-01-01
This paper develops a least-squares approach to the solution of the incompressible Navier-Stokes equations in primitive variables. As with our earlier work on Stokes equations, we recast the Navier-Stokes equations as a first-order system by introducing a velocity flux variable and associated curl and trace equations. We show that the resulting system is well-posed, and that an associated least-squares principle yields optimal discretization error estimates in the H(sup 1) norm in each variable (including the velocity flux) and optimal multigrid convergence estimates for the resulting algebraic system.
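Schematically (a sketch of the reformulation, not the paper's exact notation), introducing the velocity flux variable $\mathbf{U} = \nabla\mathbf{u}$ recasts the equations as the first-order system

$$
\begin{aligned}
\mathbf{U} - \nabla\mathbf{u} &= \mathbf{0},\\
-\nu\,\nabla\cdot\mathbf{U} + (\mathbf{u}\cdot\nabla)\,\mathbf{u} + \nabla p &= \mathbf{f},\\
\nabla\cdot\mathbf{u} &= 0,
\end{aligned}
$$

augmented with curl and trace constraints on $\mathbf{U}$ (e.g., $\nabla\times\mathbf{U} = \mathbf{0}$, $\operatorname{tr}\mathbf{U} = 0$), whose residuals are minimized in a least-squares functional.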
NASA Astrophysics Data System (ADS)
Guo, Shu-Juan; Fu, Xin-Chu
2010-07-01
In this paper, by applying LaSalle's invariance principle and some results about the trace of a matrix, we propose a method for estimating the topological structure of a discrete dynamical network based on the dynamical evolution of the network. The network concerned can be directed or undirected, weighted or unweighted, and the local dynamics of each node can be nonidentical. The connections among the nodes can be all unknown or partially known. Finally, two examples, including a Hénon map and a central network, are given to verify the theoretical results.
Grozdov, D S; Kolotov, V P; Lavrukhin, Yu E
2016-04-01
A method for estimating the full energy peak efficiency in the space around a scintillation detector, including in the presence of a collimator, has been developed. It is based on a mathematical convolution of the experimental results followed by data extrapolation. The efficiency data showed an average uncertainty of less than 10%. Software to calculate the integral efficiency for a nuclear power plant plume was developed. The paper also provides results of estimating nuclear power plant plume height by analysis of the spectral data. Copyright © 2016 Elsevier Ltd. All rights reserved.
Cost-benefit analysis for a lead wheel weight phase-out in Canada.
Campbell, P M; Corneau, E; Nishimura, D; Teng, E; Ekoualla, D
2018-05-06
Lead wheel weights (LWWs) have been banned in Europe and some US states, but they continue to dominate the market in Canada. Exposure to lead is associated with numerous health impacts and can result in multiple and irreversible health problems, including cognitive impairment when exposure occurs during early development. Such impacts incur high individual and social costs. The purpose of this study was to assess the costs and public health benefits of a Risk Management Strategy (RMS) that would result from a LWW phase-out in Canada and compare this to a Business-As-Usual (BAU) scenario. The contribution of LWWs to lead concentrations in media including roadway soil/dust, ambient and indoor air, and indoor dust was estimated. The Integrated Exposure Uptake Biokinetic Model for Lead in Children (IEUBK) was used to develop estimates of the blood lead levels (BLLs) in children (μg/dL) associated with the BAU and the RMS. The BLLs estimated via the IEUBK model were then used to assess the IQ decrements associated with the BAU that would be avoided under the RMS. The subsequent overall societal benefits, in terms of increased lifetime earning potential and reduced crime rate, were then estimated and compared to industry and government costs. LWWs form 72% of the Canadian wheel weight market, and more than 1500 tonnes of lead enter Canadian society annually as new LWWs attached to vehicles. We estimate that 110-131 tonnes of lead in detached WWs are abraded on roadways in Canada each year. A LWW phase-out was predicted to result in a drop in pre-school BLLs of up to 0.4 μg/dL. The estimated net benefits associated with the RMS, based on cognitive decrements avoided and hence increased lifetime earning potential (increased productivity) and reduced crime, are expected to range from C$248 million (8% discount rate) to C$1.2 billion (3% discount rate) per year. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Green, C. T.; Liao, L.; Nolan, B. T.; Juckem, P. F.; Ransom, K.; Harter, T.
2017-12-01
Process-based modeling of regional NO3- fluxes to groundwater is critical for understanding and managing water quality. Measurements of atmospheric tracers of groundwater age and dissolved-gas indicators of denitrification progress have potential to improve estimates of NO3- reactive transport processes. This presentation introduces a regionalized version of a vertical flux method (VFM) that uses simple mathematical estimates of advective-dispersive reactive transport with regularization procedures to calibrate estimated tracer concentrations to observed equivalents. The calibrated VFM provides estimates of chemical, hydrologic and reaction parameters (source concentration time series, recharge, effective porosity, dispersivity, reaction rate coefficients) and derived values (e.g. mean unsaturated zone travel time, eventual depth of the NO3- front) for individual wells. Statistical learning methods are used to extrapolate parameters and predictions from wells to continuous areas. The regional VFM was applied to 473 well samples in central-eastern Wisconsin. Chemical measurements included O2, NO3-, N2 from denitrification, and atmospheric tracers of groundwater age including carbon-14, chlorofluorocarbons, tritium, and tritiogenic helium. VFM results were consistent with observed chemistry, and calibrated parameters were in line with independent estimates. Results indicated that (1) unsaturated zone travel times were a substantial portion of the transit time to wells and streams, (2) fractions of N leached to groundwater have changed over time, with increasing fractions from manure and decreasing fractions from fertilizer, and (3) under current practices and conditions, 60% of the shallow aquifer will eventually be affected by NO3- contamination. Based on GIS coverages of variables related to soils, land use and hydrology, the VFM results at individual wells were extrapolated regionally using boosted regression trees, a statistical learning approach that related the GIS variables to the VFM parameters and predictions. Future work will explore applications at larger scales with direct integration of the statistical prediction model with the mechanistic VFM.
Du, Hua Qiang; Sun, Xiao Yan; Han, Ning; Mao, Fang Jie
2017-10-01
By synergistically using the object-based image analysis (OBIA) and the classification and regression tree (CART) methods, the distribution information, the indexes (including diameter at breast height, tree height, and crown closure), and the aboveground carbon storage (AGC) of moso bamboo forest in Shanchuan Town, Anji County, Zhejiang Province were investigated. The results showed that the moso bamboo forest could be accurately delineated by integrating multi-scale image segmentation in the OBIA technique with CART, which connected the image objects at various scales, achieving a good producer's accuracy of 89.1%. The indexes estimated by the regression tree model constructed from features extracted from the image objects reached moderate or better accuracy; the crown closure model achieved the best estimation accuracy, 67.9%. The estimation accuracy for diameter at breast height and tree height was relatively low, consistent with the conclusion that these variables cannot be estimated satisfactorily from optical remote sensing. Estimation of AGC reached relatively high accuracy, exceeding 80% in high-value regions.
Galárraga, Omar; Salinas-Rodríguez, Aarón; Sesma-Vázquez, Sergio
2009-01-01
The goal of Seguro Popular (SP) in Mexico was to improve the financial protection of the uninsured population against excessive health expenditures. This paper estimates the impact of SP on catastrophic health expenditures (CHE), as well as out-of-pocket (OOP) health expenditures, from two different sources. First, we use the SP Impact Evaluation Survey (2005–2006), and compare the instrumental variables (IV) results with the experimental benchmark. Then, we use the same IV methods with the National Health and Nutrition Survey (ENSANUT 2006). We estimate naïve models, assuming exogeneity, and contrast them with IV models that take advantage of the specific SP implementation mechanisms for identification. The IV models estimated included two-stage least squares (2SLS), bivariate probit, and two-stage residual inclusion (2SRI) models. Instrumental variables estimates resulted in comparable estimates against the “gold standard.” Instrumental variables estimates indicate a reduction of 54% in catastrophic expenditures at the national level. SP beneficiaries also had lower expenditures on outpatient and medicine expenditures. The selection-corrected protective effect is found not only in the limited experimental dataset, but also at the national level. PMID:19756796
Galárraga, Omar; Sosa-Rubí, Sandra G; Salinas-Rodríguez, Aarón; Sesma-Vázquez, Sergio
2010-10-01
The goal of Seguro Popular (SP) in Mexico was to improve the financial protection of the uninsured population against excessive health expenditures. This paper estimates the impact of SP on catastrophic health expenditures (CHE), as well as out-of-pocket (OOP) health expenditures, from two different sources. First, we use the SP Impact Evaluation Survey (2005-2006), and compare the instrumental variables (IV) results with the experimental benchmark. Then, we use the same IV methods with the National Health and Nutrition Survey (ENSANUT 2006). We estimate naïve models, assuming exogeneity, and contrast them with IV models that take advantage of the specific SP implementation mechanisms for identification. The IV models estimated included two-stage least squares (2SLS), bivariate probit, and two-stage residual inclusion (2SRI) models. Instrumental variables estimates resulted in comparable estimates against the "gold standard." Instrumental variables estimates indicate a reduction of 54% in catastrophic expenditures at the national level. SP beneficiaries also had lower expenditures on outpatient and medicine expenditures. The selection-corrected protective effect is found not only in the limited experimental dataset, but also at the national level.
Jackson, Michael L
2009-10-01
Many health outcomes exhibit seasonal variation in incidence, including accidents, suicides, and infections. For seasonal outcomes it can be difficult to distinguish the causal roles played by factors that also vary seasonally, such as weather, air pollution, and pathogen circulation. Various approaches to estimating the association between a seasonal exposure and a seasonal outcome in ecologic studies are reviewed, using studies of influenza-related mortality as an example. Because mortality rates vary seasonally and circulation of other respiratory viruses peaks during influenza season, it is a challenge to estimate which winter deaths were caused by influenza. Results of studies that estimated the contribution of influenza to all-cause mortality using different methods on the same data are compared. Methods for estimating associations between season exposures and outcomes vary greatly in their advantages, disadvantages, and assumptions. Even when applied to identical data, different methods can give greatly different results for the expected contribution of influenza to all-cause mortality. When the association between exposures and outcomes that vary seasonally is estimated, models must be selected carefully, keeping in mind the assumptions inherent in each model.
Estimating the number of female sex workers in Côte d'Ivoire: results and lessons learned.
Vuylsteke, Bea; Sika, Lazare; Semdé, Gisèle; Anoma, Camille; Kacou, Elise; Laga, Marie
2017-09-01
To report on the results of three size estimations of the populations of female sex workers (FSW) in five cities in Côte d'Ivoire and on operational lessons learned, which may be relevant for key population programmes in other parts of the world. We applied three methods: mapping and census, capture-recapture and service multiplier. All were applied between 2008 and 2009 in Abidjan, San Pedro, Bouaké, Yamoussoukro and Abengourou. Abidjan was the city with the highest number of FSW by far, with estimations between 7880 (census) and 13 714 (service multiplier). The estimations in San Pedro, Bouaké and Yamoussoukro were very similar, with figures ranging from 1160 (Yamoussoukro, census) to 1916 (San Pedro, capture-recapture). Important operational lessons were learned, including strategies for mapping, the importance of involving peer sex workers for implementing the capture-recapture and the identification of the right question for the multiplier method. Successful application of three methods to estimate the population size of FSW in five cities in Côte d'Ivoire enabled us to make recommendations for size estimations of key population in low-income countries. © 2017 John Wiley & Sons Ltd.
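The capture-recapture component of such a study is often a two-source Lincoln-Petersen estimate; a sketch using Chapman's bias-corrected form, with invented counts rather than the study's data:

```python
# Chapman's (bias-corrected) Lincoln-Petersen estimator, as used in simple
# two-source capture-recapture size estimation; all counts are illustrative.
n1 = 420   # FSW reached in the first round (e.g. given a unique object)
n2 = 510   # FSW encountered in the second round
m = 160    # of those, how many reported receiving the object (recaptures)

N_hat = (n1 + 1) * (n2 + 1) / (m + 1) - 1
var = (n1 + 1) * (n2 + 1) * (n1 - m) * (n2 - m) / ((m + 1)**2 * (m + 2))
ci = 1.96 * var**0.5
print(f"estimated FSW population: {N_hat:.0f} "
      f"(95% CI {N_hat - ci:.0f} to {N_hat + ci:.0f})")
```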
The economic burden of schizophrenia in Canada in 2004.
Goeree, R; Farahati, F; Burke, N; Blackhouse, G; O'Reilly, D; Pyne, J; Tarride, J-E
2005-12-01
To estimate the financial burden of schizophrenia in Canada in 2004. A prevalence-based cost-of-illness (COI) approach was used. The primary sources of information for the study included a review of the published literature, a review of published reports and documents, secondary analysis of administrative datasets, and information collected directly from various federal and provincial government programs and services. The literature review included publications up to April 2005 reported in MEDLINE, EMBASE, and PsycINFO. Where specific information from a province was not available, the method of mean substitution from other provinces was used. Costs incurred by various levels/departments of government were separated into healthcare and non-healthcare costs. Also included in the analysis was the value of lost productivity for premature mortality and morbidity associated with schizophrenia. Sensitivity analysis was used to test major cost assumptions used in the analysis. Where possible, all resource utilization estimates for the financial burden of schizophrenia were obtained for 2004 and are expressed in 2004 Canadian dollars (CAN dollars). The estimated number of persons with schizophrenia in Canada in 2004 was 234 305 (95% CI, 136 201-333 402). The direct healthcare and non-healthcare costs were estimated to be 2.02 billion CAN dollars in 2004. There were 374 deaths attributed to schizophrenia. This, combined with the high unemployment rate due to schizophrenia, resulted in an additional productivity morbidity and mortality loss estimate of 4.83 billion CAN dollars, for a total cost estimate in 2004 of 6.85 billion CAN dollars. By far the largest component of the total cost estimate was for productivity losses associated with morbidity in schizophrenia (70% of total costs), and the results showed that total cost estimates were most sensitive to alternative assumptions regarding the additional unemployment due to schizophrenia in Canada. Despite significant improvements in the past decade in pharmacotherapy, programs and services available for patients with schizophrenia, the economic burden of schizophrenia in Canada remains high. The most significant factor affecting the cost of schizophrenia in Canada is lost productivity due to morbidity. Programs targeted at improving patient symptoms and functioning to increase workforce participation have the potential to make a significant contribution in reducing the cost of this severe mental illness in Canada.
Global parameter estimation for thermodynamic models of transcriptional regulation.
Suleimenov, Yerzhan; Ay, Ahmet; Samee, Md Abul Hassan; Dresch, Jacqueline M; Sinha, Saurabh; Arnosti, David N
2013-07-15
Deciphering the mechanisms involved in gene regulation holds the key to understanding the control of central biological processes, including human disease, population variation, and the evolution of morphological innovations. New experimental techniques including whole genome sequencing and transcriptome analysis have enabled comprehensive modeling approaches to study gene regulation. In many cases, it is useful to be able to assign biological significance to the inferred model parameters, but such interpretation should take into account features that affect these parameters, including model construction and sensitivity, the type of fitness calculation, and the effectiveness of parameter estimation. This last point is often neglected, as estimation methods are often selected for historical reasons or for computational ease. Here, we compare the performance of two parameter estimation techniques broadly representative of local and global approaches, namely, a quasi-Newton/Nelder-Mead simplex (QN/NMS) method and a covariance matrix adaptation-evolutionary strategy (CMA-ES) method. The estimation methods were applied to a set of thermodynamic models of gene transcription fitted to regulatory elements active in the Drosophila embryo. Measuring overall fit, the global CMA-ES method performed significantly better than the local QN/NMS method on high quality data sets, but this difference was negligible on lower quality data sets with increased noise or on data sets simplified by stringent thresholding. Our results suggest that the choice of parameter estimation technique for evaluation of gene expression models depends on the quality of the data, the nature of the models, and the aims of the modeling effort. Copyright © 2013 Elsevier Inc. All rights reserved.
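As a rough illustration of the local-versus-global contrast (not the paper's code), the sketch below fits a toy sigmoid "expression" model with scipy's Nelder-Mead simplex and, standing in for CMA-ES, scipy's differential evolution; true CMA-ES is typically available through the separate `cma` package. The model and data are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize, differential_evolution

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y_obs = 1 / (1 + np.exp(-(x - 0.4) / 0.05)) + rng.normal(0, 0.05, x.size)

def sse(theta):
    """Sum of squared errors of a toy sigmoidal 'expression' model."""
    mid, width = theta
    return np.sum((y_obs - 1 / (1 + np.exp(-(x - mid) / width))) ** 2)

local_fit = minimize(sse, x0=[0.8, 0.5], method="Nelder-Mead")  # local simplex
global_fit = differential_evolution(sse, bounds=[(0, 1), (0.01, 1)])  # global
print(local_fit.fun, global_fit.fun)   # compare achieved objective values
```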
Estimated costs of production and potential prices for the WHO Essential Medicines List
Hill, Andrew M; Barber, Melissa J
2018-01-01
Introduction There are persistent gaps in access to affordable medicines. The WHO Model List of Essential Medicines (EML) includes medicines considered necessary for functional health systems. Methods A generic price estimation formula was developed by reviewing published analyses of cost of production for medicines and assuming manufacture in India, which included costs of formulation, packaging, taxation and a 10% profit margin. Data on per-kilogram prices of active pharmaceutical ingredient exported from India were retrieved from an online database. Estimated prices were compared with the lowest globally available prices for HIV/AIDS, tuberculosis (TB) and malaria medicines, and current prices in the UK, South Africa and India. Results The estimation formula had good predictive accuracy for HIV/AIDS, TB and malaria medicines. Estimated generic prices ranged from US$0.01 to US$1.45 per unit, with most in the lower end of this range. Lowest available prices were greater than estimated generic prices for 214/277 (77%) comparable items in the UK, 142/212 (67%) in South Africa and 118/298 (40%) in India. Lowest available prices were more than three times above estimated generic price for 47% of cases compared in the UK and 22% in South Africa. Conclusion A wide range of medicines in the EML can be profitably manufactured at very low cost. Most EML medicines are sold in the UK and South Africa at prices significantly higher than those estimated from production costs. Generic price estimation and international price comparisons could empower government price negotiations and support cost-effectiveness calculations. PMID:29564159
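A minimal sketch of the kind of per-unit price formula described, in Python; the non-API cost terms and the tax rate are illustrative placeholders, not the paper's calibrated values.

```python
def estimated_generic_price(api_mg_per_unit, api_usd_per_kg,
                            formulation_usd=0.01, packaging_usd=0.005,
                            tax_rate=0.27, profit_margin=0.10):
    """Per-unit generic price: API cost plus formulation and packaging,
    marked up for taxation and a 10% profit margin. All non-API numbers
    here are placeholders, not the paper's fitted values."""
    api_cost = api_mg_per_unit / 1e6 * api_usd_per_kg  # mg -> kg
    base = api_cost + formulation_usd + packaging_usd
    return base * (1 + tax_rate) * (1 + profit_margin)

# e.g. a 300 mg tablet with API exported from India at US$150/kg
print(round(estimated_generic_price(300, 150), 4))
```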
Szczegielniak, Jan; Łuniewski, Jacek; Stanisławski, Rafał; Bogacz, Katarzyna; Krajczy, Marcin; Rydel, Marek
2018-01-01
Background The six-minute walk test (6MWT) is considered to be a simple and inexpensive tool for the assessment of functional tolerance of submaximal effort. The aim of this work was 1) to characterize the nonlinear nature of the energy expenditure process due to physical activity, 2) to compare the results/scores of the submaximal treadmill exercise test and those of the 6MWT in pulmonary patients, and 3) to develop nonlinear mathematical models relating the two. Methods The study group included patients with COPD. All patients were subjected to a submaximal exercise test and a 6MWT. To develop an optimal mathematical solution and compare the results of the exercise test and the 6MWT, the least squares and genetic algorithms were employed to estimate parameters of polynomial expansion and piecewise linear models. Results Mathematical analysis enabled the construction of nonlinear models for estimating the MET result of the submaximal exercise test based on average walk velocity (or distance) in the 6MWT. Conclusions Submaximal effort tolerance in COPD patients can be effectively estimated from new, rehabilitation-oriented, nonlinear models based on the generalized MET concept and the 6MWT. PMID:29425213
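The abstract does not give the fitted coefficients; the following numpy sketch shows the general shape of such a model, fitting a polynomial expansion from 6MWT average walk velocity to the MET score using synthetic, hypothetical data.

```python
import numpy as np

# Hypothetical (velocity in km/h, MET) pairs standing in for patient data
v = np.array([2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0])
met = np.array([2.1, 2.6, 3.4, 4.5, 5.9, 7.6, 9.8])

# Third-order polynomial expansion fitted by least squares
coeffs = np.polyfit(v, met, deg=3)
predict_met = np.poly1d(coeffs)
print(predict_met(3.2))   # estimated MET for a 3.2 km/h average walk speed
```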
Serum uric acid and cancer mortality and incidence: a systematic review and meta-analysis.
Dovell, Frances; Boffetta, Paolo
2018-07-01
Elevated serum uric acid (SUA) is a marker of chronic inflammation and has been suggested to be associated with increased risk of cancer, but its antioxidant capacity would justify an anticancer effect. Previous meta-analyses did not include all available results. We conducted a systematic review of prospective studies on SUA level and risk of all cancers and specific cancers, and conducted a meta-analysis based on random-effects models for high versus low SUA level as well as for an increase in 1 mg/dl SUA. The relative risk of all cancers for high versus low SUA level was 1.11 (95% confidence interval: 0.94-1.27; 11 risk estimates); that for a 1 mg/dl increase in SUA level was 1.03 (95% confidence interval: 0.99-1.07). Similar results were obtained for lung cancer (six risk estimates) and colon cancer (four risk estimates). Results for other cancers were sparse. Elevated SUA levels appear to be associated with a modest increase in overall cancer risk, although the combined risk estimate did not reach the formal level of statistical significance. Results for specific cancers were limited and mainly negative.
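For reference, a minimal DerSimonian-Laird random-effects pooling routine of the sort such meta-analyses rely on (the paper does not name its exact estimator); the study-level inputs below are hypothetical.

```python
import numpy as np

def dersimonian_laird(logrr, se):
    """Random-effects pooled estimate from per-study log relative risks."""
    w = 1 / se ** 2                                # fixed-effect weights
    theta_f = np.sum(w * logrr) / np.sum(w)
    Q = np.sum(w * (logrr - theta_f) ** 2)         # heterogeneity statistic
    df = len(logrr) - 1
    tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_star = 1 / (se ** 2 + tau2)                  # random-effects weights
    theta = np.sum(w_star * logrr) / np.sum(w_star)
    se_theta = np.sqrt(1 / np.sum(w_star))
    return (np.exp(theta),
            np.exp(theta - 1.96 * se_theta),
            np.exp(theta + 1.96 * se_theta))

# Hypothetical study-level relative risks and standard errors
print(dersimonian_laird(np.log([1.2, 0.95, 1.15]), np.array([0.10, 0.08, 0.12])))
```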
Incorporation of MRI-AIF Information For Improved Kinetic Modelling of Dynamic PET Data
NASA Astrophysics Data System (ADS)
Sari, Hasan; Erlandsson, Kjell; Thielemans, Kris; Atkinson, David; Ourselin, Sebastien; Arridge, Simon; Hutton, Brian F.
2015-06-01
In the analysis of dynamic PET data, compartmental kinetic analysis methods require an accurate knowledge of the arterial input function (AIF). Although arterial blood sampling is the gold standard of the methods used to measure the AIF, it is usually not preferred as it is an invasive method. An alternative method is the simultaneous estimation method (SIME), where physiological parameters and the AIF are estimated together, using information from different anatomical regions. Due to the large number of parameters to estimate in its optimisation, SIME is a computationally complex method and may sometimes fail to give accurate estimates. In this work, we try to improve SIME by utilising an input function derived from a simultaneously obtained DSC-MRI scan. Assuming that the true value of one of the six parameters of the PET-AIF model can be derived from the MRI-AIF, the method is tested using simulated data. The results indicate that SIME can yield more robust results when the MRI information is included, with a significant reduction in the absolute bias of Ki estimates.
Estimating 1970-99 average annual groundwater recharge in Wisconsin using streamflow data
Gebert, Warren A.; Walker, John F.; Kennedy, James L.
2011-01-01
Average annual recharge in Wisconsin for the period 1970-99 was estimated using streamflow data from U.S. Geological Survey continuous-record streamflow-gaging stations and partial-record sites. Partial-record sites have discharge measurements collected during low-flow conditions. The average annual base flow of a stream divided by the drainage area is a good approximation of the recharge rate; therefore, once average annual base flow is determined, recharge can be calculated. Estimates of recharge for nearly 72 percent of the surface area of the State are provided. The results illustrate substantial spatial variability of recharge across the State, ranging from less than 1 inch to more than 12 inches per year. The average basin size for partial-record sites (50 square miles) was less than the average basin size for the gaging stations (305 square miles). Including results for smaller basins reveals a spatial variability that otherwise would be smoothed out using only estimates for larger basins. An error analysis indicates that the techniques used provide base flow estimates with standard errors ranging from 5.4 to 14 percent.
Thilak, Vimal; Voelz, David G; Creusere, Charles D
2007-10-20
A passive-polarization-based imaging system records the polarization state of light reflected by objects that are illuminated with an unpolarized and generally uncontrolled source. Such systems can be useful in many remote sensing applications including target detection, object segmentation, and material classification. We present a method to jointly estimate the complex index of refraction and the reflection angle (reflected zenith angle) of a target from multiple measurements collected by a passive polarimeter. An expression for the degree of polarization is derived from the microfacet polarimetric bidirectional reflectance model for the case of scattering in the plane of incidence. Using this expression, we develop a nonlinear least-squares estimation algorithm for extracting an apparent index of refraction and the reflection angle from a set of polarization measurements collected from multiple source positions. Computer simulation results show that the estimation accuracy generally improves with an increasing number of source position measurements. Laboratory results indicate that the proposed method is effective for recovering the reflection angle and that the estimated index of refraction provides a feature vector that is robust to the reflection angle.
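A minimal sketch of the joint-recovery idea: two parameters estimated from multi-angle polarization measurements by nonlinear least squares. The forward model below is a toy stand-in, not the microfacet pBRDF expression from the paper, and all values are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

def dop_model(theta_src, n_eff, theta_r):
    """Toy stand-in for the degree-of-polarization expression: smooth in the
    source angle, parameterized by an apparent index n_eff and a reflection
    angle theta_r. Not the paper's actual formula."""
    return np.sin(theta_src + theta_r) ** 2 / (n_eff + np.cos(theta_src) ** 2)

rng = np.random.default_rng(3)
theta_src = np.radians(np.linspace(20, 70, 8))   # multiple source positions
true_params = (1.5, np.radians(40))
dop_obs = dop_model(theta_src, *true_params) + 0.005 * rng.normal(size=8)

fit = least_squares(lambda p: dop_model(theta_src, *p) - dop_obs,
                    x0=[1.2, np.radians(30)],
                    bounds=([1.0, 0.0], [3.0, np.pi / 2]))
print(fit.x)   # jointly recovered (n_eff, theta_r)
```

As the abstract notes, adding more source positions adds residual equations, which generally tightens the recovered parameters.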
2014-01-01
Background This paper describes the “EMG Driven Force Estimator (EMGD-FE)”, a Matlab® graphical user interface (GUI) application that estimates skeletal muscle forces from electromyography (EMG) signals. Muscle forces are obtained by numerically integrating a system of ordinary differential equations (ODEs) that simulates Hill-type muscle dynamics and that utilises EMG signals as input. In the current version, the GUI can estimate the forces of lower limb muscles executing isometric contractions. Muscles from other parts of the body can be tested as well, although no default values for model parameters are provided. To achieve accurate evaluations, EMG collection is performed simultaneously with torque measurement from a dynamometer. The computer application guides the user, step-by-step, to pre-process the raw EMG signals, create inputs for the muscle model, numerically integrate the ODEs and analyse the results. Results An example of the application’s functions is presented using the quadriceps femoris muscle. Individual muscle force estimations for the four components as well as the knee isometric torque are shown. Conclusions The proposed GUI can estimate individual muscle forces from EMG signals of skeletal muscles. The estimation accuracy depends on several factors, including signal collection and modelling hypothesis issues. PMID:24708668
2000-01-01
Oil and natural gas account for approximately 63 percent of the world’s total energy consumption. The U.S. Geological Survey periodically estimates the amount of oil and gas remaining to be found in the world. Since 1981, each of the last four of these assessments has shown a slight increase in the combined volume of identified reserves and undiscovered resources. The latest assessment estimates the volume of technically recoverable conventional oil and gas that may be added to the world's reserves, exclusive of the United States, in the next 30 years. The USGS World Petroleum Assessment 2000 reports an increase in global petroleum resources, including a 20-percent increase in undiscovered oil and a 14-percent decrease in undiscovered natural gas compared to the previous assessment (table 1). These results have important implications for energy prices, policy, security, and the global resource balance.
Improving Factor Score Estimation Through the Use of Observed Background Characteristics
Curran, Patrick J.; Cole, Veronica; Bauer, Daniel J.; Hussong, Andrea M.; Gottfredson, Nisha
2016-01-01
A challenge facing nearly all studies in the psychological sciences is how to best combine multiple items into a valid and reliable score to be used in subsequent modelling. The most ubiquitous method is to compute a mean of items, but more contemporary approaches use various forms of latent score estimation. Regardless of approach, outside of large-scale testing applications, scoring models rarely include background characteristics to improve score quality. The current paper used a Monte Carlo simulation design to study score quality for different psychometric models that did and did not include covariates across levels of sample size, number of items, and degree of measurement invariance. The inclusion of covariates improved score quality for nearly all design factors, and in no case did the covariates degrade score quality relative to not considering the influences at all. Results suggest that the inclusion of observed covariates can improve factor score estimation. PMID:28757790
Joint Inference of Population Assignment and Demographic History
Choi, Sang Chul; Hey, Jody
2011-01-01
A new approach to assigning individuals to populations using genetic data is described. Most existing methods work by maximizing Hardy–Weinberg and linkage equilibrium within populations, neither of which will apply for many demographic histories. By including a demographic model, within a likelihood framework based on coalescent theory, we can jointly study demographic history and population assignment. Genealogies and population assignments are sampled from a posterior distribution using a general isolation-with-migration model for multiple populations. A measure of partition distance between assignments facilitates not only the summary of a posterior sample of assignments, but also the estimation of the posterior density for the demographic history. It is shown that joint estimates of assignment and demographic history are possible, including estimation of population phylogeny for samples from three populations. The new method is compared to results of a widely used assignment method, using simulated and published empirical data sets. PMID:21775468
Coherent Lidar Design and Performance Verification
NASA Technical Reports Server (NTRS)
Frehlich, Rod
1996-01-01
This final report summarizes the investigative results from the 3 complete years of funding, and the corresponding publications are listed. The first year saw the verification of beam alignment for coherent Doppler lidar in space by using the surface return. The second year saw the analysis and computerized simulation of using heterodyne efficiency as an absolute measure of performance of coherent Doppler lidar. A new method was proposed to determine the estimation error for Doppler lidar wind measurements without the need for an independent wind measurement. Coherent Doppler lidar signal covariance, including wind shear and turbulence, was derived and calculated for typical atmospheric conditions. The effects of wind turbulence defined by Kolmogorov spatial statistics were investigated theoretically and with simulations. The third year saw the performance of coherent Doppler lidar in the weak signal regime determined by computer simulations using the best velocity estimators. Improved algorithms for extracting the performance of velocity estimators with wind turbulence included were also produced.
Differentiating moss from higher plants is critical in studying the carbon cycle of the boreal biome
Yuan, Wenping; Liu, Shuguang; Dong, Wenjie; Liang, Shunlin; Zhao, Shuqing; Chen, Jingming; Xu, Wenfang; Li, Xianglan; Barr, Alan; Black, T. Andrew; Yan, Wende; Goulden, Michael; Kulmala, Liisa; Lindroth, Anders; Margolis, Hank A.; Matsuura, Yojiro; Moors, Eddy; van der Molen, Michiel; Ohta, Takeshi; Pilegaard, Kim; Varlagin, Andrej; Vesala, Timo
2014-01-01
The satellite-derived normalized difference vegetation index (NDVI), which is used for estimating gross primary production (GPP), often includes contributions from both mosses and vascular plants in boreal ecosystems. For the same NDVI, moss can generate only about one-third of the GPP that vascular plants can because of its much lower photosynthetic capacity. Here, based on eddy covariance measurements, we show that the difference in photosynthetic capacity between these two plant functional types has never been explicitly included when estimating regional GPP in the boreal region, resulting in a substantial overestimation. The magnitude of this overestimation could have important implications regarding a change from a current carbon sink to a carbon source in the boreal region. Moss abundance, associated with ecosystem disturbances, needs to be mapped and incorporated into GPP estimates in order to adequately assess the role of the boreal region in the global carbon cycle.
Oppong, Raymond; Smith, Richard D; Little, Paul; Verheij, Theo; Butler, Christopher C; Goossens, Herman; Coenen, Samuel; Moore, Michael; Coast, Joanna
2016-01-01
Background Lower respiratory tract infections (LRTIs) are a major disease burden and are often treated with antibiotics. Typically, studies evaluating the use of antibiotics focus on immediate costs of care, and do not account for the wider implications of antimicrobial resistance. Aim This study sought to establish whether antibiotics (principally amoxicillin) are cost effective in patients with LRTIs, and to explore the implications of taking into account costs associated with resistance. Design and setting Multinational randomised double-blinded trial in 2060 patients with acute cough/LRTIs recruited in 12 European countries. Method A cost-utility analysis from a health system perspective with a time horizon of 28 days was conducted. The primary outcome measure was the quality-adjusted life year (QALY). Hierarchical modelling was used to estimate incremental cost-effectiveness ratios (ICERs). Results Amoxicillin was associated with an ICER of €8216 (£6540) per QALY gained when the cost of resistance was excluded. If the cost of resistance is greater than €11 (£9) per patient, then amoxicillin treatment is no longer cost effective. Including possible estimates of the cost of resistance resulted in ICERs ranging from €14 730 (£11 949) per QALY gained — when only multidrug resistance costs and health care costs are included — to €727 135 (£589 856) per QALY gained when broader societal costs are also included. Conclusion Economic evaluation of antibiotic prescribing strategies that do not include the cost of resistance may provide misleading results that could be of questionable use to policymakers. However, further work is required to estimate robust costs of resistance. PMID:27402969
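The underlying arithmetic is an ICER with the per-patient resistance cost added to the numerator. In the sketch below, the increments are hypothetical values chosen only so the no-resistance case reproduces the reported EUR 8216/QALY baseline; the trial's true cost and QALY increments are not given in the abstract.

```python
def icer(delta_cost, delta_qaly, resistance_cost=0.0):
    """Incremental cost-effectiveness ratio; resistance_cost is a per-patient
    cost of antimicrobial resistance added to the incremental cost."""
    return (delta_cost + resistance_cost) / delta_qaly

# Hypothetical increments (EUR, QALYs) reproducing the reported baseline
d_cost, d_qaly = 8.216, 0.001
print(icer(d_cost, d_qaly))                        # ~8216 EUR per QALY
print(icer(d_cost, d_qaly, resistance_cost=11.0))  # ratio rises sharply
```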
The economic costs of alcohol consumption in Thailand, 2006.
Thavorncharoensap, Montarat; Teerawattananon, Yot; Yothasamut, Jomkwan; Lertpitakpong, Chanida; Thitiboonsuwan, Khannika; Neramitpitagkul, Prapag; Chaikledkaew, Usa
2010-06-09
There is evidence that the adverse consequences of alcohol impose a substantial economic burden on societies worldwide. Given the lack of generalizability of study results across different settings, many attempts have been made to estimate the economic costs of alcohol for various settings; however, these have mostly been confined to industrialized countries. To our knowledge, there are a very limited number of well-designed studies which estimate the economic costs of alcohol consumption in developing countries, including Thailand. Therefore, this study aims to estimate these economic costs, in Thailand, 2006. This is a prevalence-based, cost-of-illness study. The estimated costs in this study included both direct and indirect costs. Direct costs included health care costs, costs of law enforcement, and costs of property damage due to road-traffic accidents. Indirect costs included costs of productivity loss due to premature mortality, and costs of reduced productivity due to absenteeism and presenteeism (reduced on-the-job productivity). The total economic cost of alcohol consumption in Thailand in 2006 was estimated at 156,105.4 million baht (9,627 million US$ PPP) or about 1.99% of the total Gross Domestic Product (GDP). Indirect costs outweigh direct costs, representing 96% of the total cost. The largest cost attributable to alcohol consumption is that of productivity loss due to premature mortality (104,128 million baht/6,422 million US$ PPP), followed by cost of productivity loss due to reduced productivity (45,464.6 million baht/2,804 million US$ PPP), health care cost (5,491.2 million baht/339 million US$ PPP), cost of property damage as a result of road traffic accidents (779.4 million baht/48 million US$ PPP), and cost of law enforcement (242.4 million baht/15 million US$ PPP), respectively. The results from the sensitivity analysis revealed that the cost ranges from 115,160.4 million baht to 214,053.0 million baht (7,102.1 - 13,201 million US$ PPP) depending on the methods and assumptions employed. Alcohol imposes a substantial economic burden on Thai society, and according to these findings, the Thai government needs to pay significantly more attention to implementing more effective alcohol policies/interventions in order to reduce the negative consequences associated with alcohol.
Inflammatory Biomarkers and Risk of Schizophrenia: A 2-Sample Mendelian Randomization Study.
Hartwig, Fernando Pires; Borges, Maria Carolina; Horta, Bernardo Lessa; Bowden, Jack; Davey Smith, George
2017-12-01
Positive associations between inflammatory biomarkers and risk of psychiatric disorders, including schizophrenia, have been reported in observational studies. However, conventional observational studies are prone to bias, such as reverse causation and residual confounding, thus limiting our understanding of the effect (if any) of inflammatory biomarkers on schizophrenia risk. To evaluate whether inflammatory biomarkers have an effect on the risk of developing schizophrenia. Two-sample mendelian randomization study using genetic variants associated with inflammatory biomarkers as instrumental variables to improve inference. Summary association results from large consortia of candidate gene or genome-wide association studies, including several epidemiologic studies with different designs, were used. Gene-inflammatory biomarker associations were estimated in pooled samples ranging from 1645 to more than 80 000 individuals, while gene-schizophrenia associations were estimated in more than 30 000 cases and more than 45 000 ancestry-matched controls. In most studies included in the consortia, participants were of European ancestry, and the prevalence of men was approximately 50%. All studies were conducted in adults, with a wide age range (18 to 80 years). Genetically elevated circulating levels of C-reactive protein (CRP), interleukin-1 receptor antagonist (IL-1Ra), and soluble interleukin-6 receptor (sIL-6R). Risk of developing schizophrenia. Individuals with schizophrenia or schizoaffective disorders were included as cases. Given that many studies contributed to the analyses, different diagnostic procedures were used. The pooled odds ratio estimate using 18 CRP genetic instruments was 0.90 (random effects 95% CI, 0.84-0.97; P = .005) per 2-fold increment in CRP levels; consistent results were obtained using different mendelian randomization methods and a more conservative set of instruments. The odds ratio for sIL-6R was 1.06 (95% CI, 1.01-1.12; P = .02) per 2-fold increment. Estimates for IL-1Ra were inconsistent among instruments, and pooled estimates were imprecise and centered on the null. Under mendelian randomization assumptions, our findings suggest a protective effect of CRP and a risk-increasing effect of sIL-6R (potentially mediated at least in part by CRP) on schizophrenia risk. It is possible that such effects are a result of increased susceptibility to early life infection.
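A minimal inverse-variance weighted MR estimator of the kind used in such analyses (the study also applies other mendelian randomization methods); all summary statistics below are hypothetical.

```python
import numpy as np

def ivw_mr(beta_exposure, beta_outcome, se_outcome):
    """Inverse-variance weighted Mendelian randomization estimate.
    Per-variant Wald ratios beta_outcome/beta_exposure are pooled with
    weights derived from the outcome standard errors."""
    ratio = beta_outcome / beta_exposure
    w = (beta_exposure / se_outcome) ** 2
    est = np.sum(w * ratio) / np.sum(w)
    se = np.sqrt(1 / np.sum(w))
    return est, se   # log-odds scale per unit increase in the exposure

# Hypothetical summary statistics for a handful of CRP instruments
bx = np.array([0.30, 0.22, 0.15, 0.28])     # variant-CRP associations
by = np.array([-0.035, -0.020, -0.012, -0.030])  # variant-schizophrenia
print(ivw_mr(bx, by, np.array([0.010, 0.012, 0.015, 0.011])))
```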
A Novel Methodology to Estimate the Treatment Effect in Presence of Highly Variable Placebo Response
Gomeni, Roberto; Goyal, Navin; Bressolle, Françoise; Fava, Maurizio
2015-01-01
One of the main reasons for the inefficiency of multicenter randomized clinical trials (RCTs) in depression is the excessively high level of placebo response. The aim of this work was to propose a novel methodology to analyze RCTs based on the assumption that centers with high placebo response are less informative than the other centers for estimating the 'true' treatment effect (TE). A linear mixed-effect modeling approach for repeated measures (MMRM) was used as a reference approach. The new method for estimating TE was based on a nonlinear longitudinal modeling of clinical scores (NLMMRM). NLMMRM estimates TE by associating a weighting factor to the data collected in each center. The weight was defined by the posterior probability of detecting a clinically relevant difference between active treatment and placebo at that center. Data from five RCTs in depression were used to compare the performance of MMRM with NLMMRM. The results of the analyses showed an average improvement of ~15% in the TE estimated with NLMMRM when the center effect was included in the analyses. Opposite results were observed with MMRM: TE estimate was reduced by ~4% when the center effect was considered as covariate in the analysis. The novel NLMMRM approach provides a tool for controlling the confounding effect of high placebo response, to increase signal detection and to provide a more reliable estimate of the 'true' TE by controlling false negative results associated with excessively high placebo response. PMID:25895454
Truong, Q T; Nguyen, Q V; Truong, V T; Park, H C; Byun, D Y; Goo, N S
2011-09-01
We present an unsteady blade element theory (BET) model to estimate the aerodynamic forces produced by a freely flying beetle and a beetle-mimicking flapping wing system. Added mass and rotational forces are included to accommodate the unsteady force. In addition to the aerodynamic forces needed to accurately estimate the time history of the forces, the inertial forces of the wings are also calculated. All of the force components are considered based on the full three-dimensional (3D) motion of the wing. The results obtained with the present BET model are validated against data presented in a reference paper. The difference between the averages of the estimated forces (lift and drag) and the measured forces in the reference is about 5.7%. The BET model is also used to estimate the force produced by a freely flying beetle and a beetle-mimicking flapping wing system. The wing kinematics used in the BET calculation of a real beetle and the flapping wing system are captured using high-speed cameras. The results show that the average estimated vertical force of the beetle is reasonably close to the weight of the beetle, and the average estimated thrust of the beetle-mimicking flapping wing system is in good agreement with the measured value. Our results show that the unsteady lift and drag coefficients measured by Dickinson et al. are still useful for relatively higher Reynolds number cases, and the proposed BET can be a good way to estimate the force produced by a flapping wing system.
Garcia, Adriana; Masbruch, Melissa D.; Susong, David D.
2014-01-01
The U.S. Geological Survey, as part of the Department of the Interior’s WaterSMART (Sustain and Manage America’s Resources for Tomorrow) initiative, compiled published estimates of groundwater discharge to streams in the Upper Colorado River Basin as a geospatial database. For the purpose of this report, groundwater discharge to streams is the baseflow portion of streamflow that includes contributions of groundwater from various flow paths. Reported estimates of groundwater discharge were assigned as attributes to stream reaches derived from the high-resolution National Hydrography Dataset. A total of 235 estimates of groundwater discharge to streams were compiled and included in the dataset. Feature class attributes of the geospatial database include groundwater discharge (acre-feet per year), method of estimation, citation abbreviation, defined reach, and 8-digit hydrologic unit code(s). Baseflow index (BFI) estimates of groundwater discharge were calculated using an existing streamflow characteristics dataset and were included as an attribute in the geospatial database. A comparison of the BFI estimates to the compiled estimates of groundwater discharge found that the BFI estimates were greater than the reported groundwater discharge estimates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pasyanos, M E
The behavior of surface waves at long periods is indicative of subcrustal velocity structure. Using recently published dispersion models, we invert surface wave group velocities for lithospheric structure, including lithospheric thickness, over much of the Eastern Hemisphere, encompassing Eurasia, Africa, and the Indian Ocean. Thicker lithosphere under Precambrian shields and platforms is clearly observed, not only under the large cratons (West Africa, Congo, Baltic, Russia, Siberia, India), but also under smaller blocks like the Tarim Basin and Yangtze craton. In contrast, it is found that remobilized Precambrian structures like the Saharan Shield and Sino-Korean Paraplatform do not have well-established lithospheric keels. The thinnest lithospheric thickness is found under oceanic and continental rifts, as well as along convergence zones. We compare our results to thermal models of continental lithosphere, lithospheric cooling models of oceanic lithosphere, lithosphere-asthenosphere boundary (LAB) estimates from S-wave receiver functions, and velocity variations of global tomography models. In addition to comparing results for the broad region, we examine in detail the regions of Central Africa, Siberia, and Tibet. While there are clear differences in the various estimates, overall the results are generally consistent. Inconsistencies between the estimates may be due to a variety of reasons including lateral and depth resolution differences and the comparison of what may be different lithospheric features.
Two Experiments for Estimating Free Convection and Radiation Heat Transfer Coefficients
ERIC Educational Resources Information Center
Economides, Michael J.; Maloney, J. O.
1978-01-01
This article describes two simple undergraduate heat transfer experiments which may reinforce a student's understanding of free convection and radiation. Apparatus, experimental procedure, typical results, and discussion are included. (Author/BB)
Liang, Yuzhen; Torralba-Sanchez, Tifany L; Di Toro, Dominic M
2018-04-18
Polyparameter Linear Free Energy Relationships (pp-LFERs) using Abraham system parameters have many useful applications. However, developing the Abraham system parameters depends on the availability and quality of the Abraham solute parameters. Using Quantum Chemically estimated Abraham solute Parameters (QCAP) is shown to produce pp-LFERs that have lower root mean square errors (RMSEs) of predictions for solvent-water partition coefficients than parameters that are estimated using other presently available methods. pp-LFERs system parameters are estimated for solvent-water, plant cuticle-water systems, and for novel compounds using QCAP solute parameters and experimental partition coefficients. Refitting the system parameter improves the calculation accuracy and eliminates the bias. Refitted models for solvent-water partition coefficients using QCAP solute parameters give better results (RMSE = 0.278 to 0.506 log units for 24 systems) than those based on ABSOLV (0.326 to 0.618) and QSPR (0.294 to 0.700) solute parameters. For munition constituents and munition-like compounds not included in the calibration of the refitted model, QCAP solute parameters produce pp-LFER models with much lower RMSEs for solvent-water partition coefficients (RMSE = 0.734 and 0.664 for original and refitted model, respectively) than ABSOLV (4.46 and 5.98) and QSPR (2.838 and 2.723). Refitting plant cuticle-water pp-LFER including munition constituents using QCAP solute parameters also results in lower RMSE (RMSE = 0.386) than that using ABSOLV (0.778) and QSPR (0.512) solute parameters. Therefore, for fitting a model in situations for which experimental data exist and system parameters can be re-estimated, or for which system parameters do not exist and need to be developed, QCAP is the quantum chemical method of choice.
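Refitting pp-LFER system parameters is, at bottom, ordinary least squares of measured log K on the five Abraham solute descriptors plus an intercept. A minimal numpy sketch follows; the descriptor values and partition coefficients are hypothetical stand-ins for QCAP-derived inputs.

```python
import numpy as np

# Rows: solutes; columns: Abraham descriptors [E, S, A, B, V]
# (hypothetical QCAP-style values, for illustration only)
D = np.array([
    [0.61, 0.52, 0.00, 0.14, 0.72],
    [0.80, 0.88, 0.26, 0.33, 0.92],
    [0.52, 0.40, 0.00, 0.10, 0.99],
    [1.34, 1.07, 0.47, 0.36, 1.28],
    [0.94, 0.71, 0.15, 0.45, 1.05],
    [0.33, 0.42, 0.31, 0.28, 0.59],
    [0.72, 0.65, 0.09, 0.20, 0.87],
    [1.05, 0.95, 0.38, 0.50, 1.15],
])
logK = np.array([2.1, 1.3, 2.9, 1.8, 1.5, 0.6, 2.2, 1.1])  # measured log K

# log K = c + eE + sS + aA + bB + vV, solved by ordinary least squares
X = np.column_stack([np.ones(len(logK)), D])
coef, *_ = np.linalg.lstsq(X, logK, rcond=None)
rmse = np.sqrt(np.mean((X @ coef - logK) ** 2))
print(coef, rmse)
```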
NASA Astrophysics Data System (ADS)
Hagemann, M.; Gleason, C. J.
2017-12-01
The upcoming (2021) Surface Water and Ocean Topography (SWOT) NASA satellite mission aims, in part, to estimate discharge on major rivers worldwide using reach-scale measurements of stream width, slope, and height. Current formalizations of channel and floodplain hydraulics are insufficient to fully constrain this problem mathematically, resulting in an infinitely large solution set for any set of satellite observations. Recent work has reformulated this problem in a Bayesian statistical setting, in which the likelihood distributions derive directly from hydraulic flow-law equations. When coupled with prior distributions on unknown flow-law parameters, this formulation probabilistically constrains the parameter space, and results in a computationally tractable description of discharge. Using a curated dataset of over 200,000 in-situ acoustic Doppler current profiler (ADCP) discharge measurements from over 10,000 USGS gaging stations throughout the United States, we developed empirical prior distributions for flow-law parameters that are not observable by SWOT, but that are required in order to estimate discharge. This analysis quantified prior uncertainties on quantities including cross-sectional area, at-a-station hydraulic geometry width exponent, and discharge variability, that are dependent on SWOT-observable variables including reach-scale statistics of width and height. When compared against discharge estimation approaches that do not use this prior information, the Bayesian approach using ADCP-derived priors demonstrated consistently improved performance across a range of performance metrics. This Bayesian approach formally transfers information from in-situ gaging stations to remote-sensed estimation of discharge, in which the desired quantities are not directly observable. Further investigation using large in-situ datasets is therefore a promising way forward in improving satellite-based estimates of river discharge.
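A deliberately simplified grid-based version of the idea: an empirical prior on an unobservable flow-law parameter is combined with a likelihood from a Manning-type flow law. All numbers are hypothetical, and the paper's actual priors and likelihood are considerably richer.

```python
import numpy as np

# Grid over the unobservable baseline cross-sectional area A0 (m^2)
A0 = np.linspace(50, 2000, 400)

# Empirical lognormal prior; hyperparameters are hypothetical stand-ins for
# values fitted to an ADCP/gaging-station dataset
mu, sigma = np.log(400.0), 0.8
prior = np.exp(-(np.log(A0) - mu) ** 2 / (2 * sigma ** 2)) / A0

# One SWOT-like observation: width W (m), slope S (-), plus a noisy discharge
# proxy Q_obs (m^3/s); wide-channel Manning law with an assumed n = 0.03
W, S, Q_obs, n = 120.0, 1e-4, 350.0, 0.03
Q_model = (1 / n) * A0 ** (5 / 3) * W ** (-2 / 3) * np.sqrt(S)
lik = np.exp(-(np.log(Q_obs) - np.log(Q_model)) ** 2 / (2 * 0.3 ** 2))

post = prior * lik
post /= post.sum()                # normalize on the grid
print(A0[np.argmax(post)])        # posterior mode for A0
```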
Estimating evapotranspiration in natural and constructed wetlands
Lott, R. Brandon; Hunt, Randall J.
2001-01-01
Difficulties in accurately calculating evapotranspiration (ET) in wetlands can lead to inaccurate water balances—information important for many compensatory mitigation projects. Simple meteorological methods or off-site ET data often are used to estimate ET, but these approaches do not include potentially important site-specific factors such as plant community, root-zone water levels, and soil properties. The objective of this study was to compare a commonly used meterological estimate of potential evapotranspiration (PET) with direct measurements of ET (lysimeters and water-table fluctuations) and small-scale root-zone geochemistry in a natural and constructed wetland system. Unlike what has been commonly noted, the results of the study demonstrated that the commonly used Penman combination method of estimating PET underestimated the ET that was measured directly in the natural wetland over most of the growing season. This result is likely due to surface heterogeneity and related roughness efffects not included in the simple PET estimate. The meterological method more closely approximated season-long measured ET rates in the constructed wetland but may overestimate the ET rate late in the growing season. ET rates also were temporally variable in wetlands over a range of time scales because they can be influenced by the relation of the water table to the root zone and the timing of plant senescence. Small-scale geochemical sampling of the shallow root zone was able to provide an independent evaluation of ET rates, supporting the identification of higher ET rates in the natural wetlands and differences in temporal ET rates due to the timing of senescence. These discrepancies illustrate potential problems with extrapolating off-site estimates of ET or single measurements of ET from a site over space or time.
Estimates of Stellar Weak Interaction Rates for Nuclei in the Mass Range A=65-80
NASA Astrophysics Data System (ADS)
Pruet, Jason; Fuller, George M.
2003-11-01
We estimate lepton capture and emission rates, as well as neutrino energy loss rates, for nuclei in the mass range A=65-80. These rates are calculated on a temperature/density grid appropriate for a wide range of astrophysical applications including simulations of late time stellar evolution and X-ray bursts. The basic inputs in our single-particle and empirically inspired model are (i) experimentally measured level information, weak transition matrix elements, and lifetimes, (ii) estimates of matrix elements for allowed experimentally unmeasured transitions based on the systematics of experimentally observed allowed transitions, and (iii) estimates of the centroids of the GT resonances motivated by shell model calculations in the fp shell as well as by (n, p) and (p, n) experiments. Fermi resonances (isobaric analog states) are also included, and it is shown that Fermi transitions dominate the rates for most interesting proton-rich nuclei for which an experimentally determined ground state lifetime is unavailable. For the purposes of comparing our results with more detailed shell model based calculations we also calculate weak rates for nuclei in the mass range A=60-65 for which Langanke & Martinez-Pinedo have provided rates. The typical deviation in the electron capture and β-decay rates for these ~30 nuclei is less than a factor of 2 or 3 for a wide range of temperature and density appropriate for presupernova stellar evolution. We also discuss some subtleties associated with the partition functions used in calculations of stellar weak rates and show that the proper treatment of the partition functions is essential for estimating high-temperature β-decay rates. In particular, we show that partition functions based on unconverged Lanczos calculations can result in errors in estimates of high-temperature β-decay rates.
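For reference, the nuclear partition function at issue is the standard Boltzmann-weighted sum over excited states i with spins J_i and excitation energies E_i (the standard definition, not specific to this paper):

```latex
G(T) \,=\, \sum_i g_i \, e^{-E_i/(k_B T)}
     \,=\, \sum_i (2J_i + 1)\, e^{-E_i/(k_B T)}
```

Because thermally averaged rates weight each parent excited state by g_i e^{-E_i/(k_B T)} / G(T), a truncated or unconverged sum over levels directly biases high-temperature beta-decay rates, which is the subtlety the abstract flags.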
NASA Astrophysics Data System (ADS)
Petty, A.; Tsamados, M.; Kurtz, N. T.
2016-12-01
Here we present atmospheric form drag estimates over Arctic sea ice using high resolution, three-dimensional surface elevation data from NASA's Operation IceBridge Airborne Topographic Mapper (ATM), and surface roughness estimates from the Advanced Scatterometer (ASCAT). Surface features of the ice pack (e.g. pressure ridges) are detected using IceBridge ATM elevation data and a novel surface feature-picking algorithm. We use simple form drag parameterizations to convert the observed height and spacing of surface features into an effective atmospheric form drag coefficient. The results demonstrate strong regional variability in the atmospheric form drag coefficient, linked to variability in both the height and spacing of surface features. This includes form drag estimates around 2-3 times higher over the multiyear ice north of Greenland, compared to the first-year ice of the Beaufort/Chukchi seas. We compare results from both scanning and linear profiling to ensure our results are consistent with previous studies investigating form drag over Arctic sea ice. A strong correlation between ASCAT surface roughness estimates (using radar backscatter) and the IceBridge form drag results enable us to extrapolate the IceBridge data collected over the western-Arctic across the entire Arctic Ocean. While our focus is on spring, due to the timing of the primary IceBridge campaigns since 2009, we also take advantage of the autumn data collected by IceBridge in 2015 to investigate seasonality in Arctic ice topography and the resulting form drag coefficient. Our results offer the first large-scale assessment of atmospheric form drag over Arctic sea ice due to variable ice topography (i.e. within the Arctic pack ice). The analysis is being extended to the Antarctic IceBridge sea ice data, and the results are being used to calibrate a sophisticated form drag parameterization scheme included in the sea ice model CICE, to improve the representation of form drag over Arctic and Antarctic sea ice in global climate models.
State estimation improves prospects for ocean research
NASA Astrophysics Data System (ADS)
Stammer, Detlef; Wunsch, C.; Fukumori, I.; Marshall, J.
Rigorous global ocean state estimation methods can now be used to produce dynamically consistent time-varying model/data syntheses, the results of which are being used to study a variety of important scientific problems. Figure 1 shows a schematic of a complete ocean observing and synthesis system that includes global observations and state-of-the-art ocean general circulation models (OGCM) run on modern computer platforms. A global observing system is described in detail in Smith and Koblinsky [2001], and the present status of ocean modeling and anticipated improvements are addressed by Griffies et al. [2001]. Here, the focus is on the third component of state estimation: the synthesis of the observations and a model into a unified, dynamically consistent estimate.
Development of a digital automatic control law for steep glideslope capture and flare
NASA Technical Reports Server (NTRS)
Halyo, N.
1977-01-01
A longitudinal digital guidance and control law for steep glideslopes using MLS (Microwave Landing System) data is developed for CTOL aircraft using modern estimation and control techniques. The control law covers the final approach phases of glideslope capture, glideslope tracking, and flare to touchdown for automatic landings under adverse weather conditions. The control law uses a constant gain Kalman filter to process MLS and body-mounted accelerometer data to form estimates of flight path errors and wind velocities including wind shear. The flight path error estimates and wind estimates are used for feedback in generating control surface commands. Results of a digital simulation of the aircraft dynamics and the guidance and control law are presented for various wind conditions.
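A one-axis constant-gain Kalman filter sketch in the spirit described: propagate with accelerometer data, correct with MLS-like position fixes using a fixed precomputed gain. The dynamics and gain values below are hypothetical.

```python
import numpy as np

def constant_gain_kf(z_meas, accel, dt=0.05, K=np.array([0.4, 0.1])):
    """Constant-gain Kalman filter for a [position error, velocity error]
    state: time update from body-accelerometer input, measurement update
    from position fixes using a fixed gain K."""
    x = np.zeros(2)
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity transition
    B = np.array([0.5 * dt ** 2, dt])       # accelerometer input matrix
    H = np.array([1.0, 0.0])                # only position is measured
    history = []
    for z, a in zip(z_meas, accel):
        x = F @ x + B * a                   # time update
        x = x + K * (z - H @ x)             # constant-gain measurement update
        history.append(x.copy())
    return np.array(history)
```

In the design described above, the fixed gain would come from the steady-state Riccati solution for the modeled noise statistics; here K is simply asserted.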
Estimating Rooftop Suitability for PV: A Review of Methods, Patents, and Validation Techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Melius, J.; Margolis, R.; Ong, S.
2013-12-01
A number of methods have been developed using remote sensing data to estimate rooftop area suitable for the installation of photovoltaics (PV) at various geospatial resolutions. This report reviews the literature and patents on methods for estimating rooftop-area appropriate for PV, including constant-value methods, manual selection methods, and GIS-based methods. This report also presents NREL's proposed method for estimating suitable rooftop area for PV using Light Detection and Ranging (LiDAR) data in conjunction with a GIS model to predict areas with appropriate slope, orientation, and sunlight. NREL's method is validated against solar installation data from New Jersey, Colorado, and California to compare modeled results to actual on-the-ground measurements.
Elimination of Emergency Department Medication Errors Due To Estimated Weights.
Greenwalt, Mary; Griffen, David; Wilkerson, Jim
2017-01-01
From 7/2014 through 6/2015, 10 emergency department (ED) medication dosing errors were reported through the electronic incident reporting system of an urban academic medical center. Analysis of these medication errors identified inaccurate estimated weight on patients as the root cause. The goal of this project was to reduce weight-based dosing medication errors due to inaccurate estimated weights on patients presenting to the ED. Chart review revealed that 13.8% of estimated weights documented on admitted ED patients varied more than 10% from subsequent actual admission weights recorded. A random sample of 100 charts containing estimated weights revealed 2 previously unreported significant medication dosage errors (.02 significant error rate). Key improvements included removing barriers to weighing ED patients, storytelling to engage staff and change culture, and removal of the estimated weight documentation field from the ED electronic health record (EHR) forms. With these improvements, estimated weights on ED patients, and the resulting medication errors, were eliminated.
Zhan, Hanyu; Voelz, David G; Cho, Sang-Yeon; Xiao, Xifeng
2015-11-20
The estimation of the refractive index from optical scattering off a target's surface is an important task for remote sensing applications. Optical polarimetry is an approach that shows promise for refractive index estimation. However, this estimation often relies on polarimetric models that are limited to specular targets involving single surface scattering. Here, an analytic model is developed for the degree of polarization (DOP) associated with reflection from a rough surface that includes the effect of diffuse scattering. A multiplicative factor is derived to account for the diffuse component and evaluation of the model indicates that diffuse scattering can significantly affect the DOP values. The scattering model is used in a new approach for refractive index estimation from a series of DOP values that involves jointly estimating n, k, and ρ(d) with a nonlinear equation solver. The approach is shown to work well with simulation data and additive noise. When applied to laboratory-measured DOP values, the approach produces significantly improved index estimation results relative to reference values.
Probabilistic segmentation and intensity estimation for microarray images.
Gottardo, Raphael; Besag, Julian; Stephens, Matthew; Murua, Alejandro
2006-01-01
We describe a probabilistic approach to simultaneous image segmentation and intensity estimation for complementary DNA microarray experiments. The approach overcomes several limitations of existing methods. In particular, it (a) uses a flexible Markov random field approach to segmentation that allows for a wider range of spot shapes than existing methods, including relatively common 'doughnut-shaped' spots; (b) models the image directly as background plus hybridization intensity, and estimates the two quantities simultaneously, avoiding the common logical error that estimates of foreground may be less than those of the corresponding background if the two are estimated separately; and (c) uses a probabilistic modeling approach to simultaneously perform segmentation and intensity estimation, and to compute spot quality measures. We describe two approaches to parameter estimation: a fast algorithm, based on the expectation-maximization and the iterated conditional modes algorithms, and a fully Bayesian framework. These approaches produce comparable results, and both appear to offer some advantages over other methods. We use an HIV experiment to compare our approach to two commercial software products: Spot and Arrayvision.
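A stripped-down EM routine for a two-component (background versus spot) Gaussian intensity mixture conveys the estimation step; the paper's model additionally couples pixel labels through a Markov random field, which this sketch omits, and the data below are synthetic.

```python
import numpy as np

def em_two_gaussians(x, iters=50):
    """EM for a two-component Gaussian intensity mixture (background vs
    foreground). Omits the Markov random field spatial prior of the paper."""
    mu = np.percentile(x, [25, 75]).astype(float)   # crude initial means
    var = np.array([x.var(), x.var()], dtype=float)
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: per-pixel responsibilities under each component
        dens = np.stack([
            pi[k] / np.sqrt(2 * np.pi * var[k])
            * np.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)])
        resp = dens / dens.sum(axis=0)
        # M-step: reestimate weights, means, and variances
        nk = resp.sum(axis=1)
        pi = nk / x.size
        mu = (resp * x).sum(axis=1) / nk
        var = (resp * (x - mu[:, None]) ** 2).sum(axis=1) / nk
    return pi, mu, var

# Synthetic pixel intensities: dim background plus a brighter spot population
rng = np.random.default_rng(2)
pixels = np.concatenate([rng.normal(100, 10, 900), rng.normal(400, 60, 100)])
print(em_two_gaussians(pixels))
```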
A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix.
Hu, Zongliang; Dong, Kai; Dai, Wenlin; Tong, Tiejun
2017-09-21
The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been proposed in the literature for estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.
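A small illustration of why estimator choice matters: with p close to n the sample covariance is ill-conditioned, and shrinkage (one of the families of estimators compared in such studies) stabilizes the log-determinant. Ledoit-Wolf shrinkage from scikit-learn is used here as a representative method, with a stable log-determinant via slogdet.

```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(1)
n, p = 80, 60                      # n close to p: sample covariance is shaky
X = rng.standard_normal((n, p))

S = np.cov(X, rowvar=False)        # sample covariance
lw = LedoitWolf().fit(X)           # shrinkage toward a scaled identity

# Stable log-determinants; the raw determinant under/overflows for large p
print(np.linalg.slogdet(S))
print(np.linalg.slogdet(lw.covariance_))
```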
STS-40 descent BET products: Development and results
NASA Technical Reports Server (NTRS)
Oakes, Kevin F.; Wood, James S.; Findlay, John T.
1991-01-01
Descent Best Estimate Trajectory (BET) Data were generated for the final Orbiter Experiments Flight, STS-40. This report discusses the actual development of these post-flight products: the inertial BET, the Extended BET, and the Aerodynamic BET. Summary results are also included. The inertial BET was determined based on processing Tracking and Data Relay Satellite (TDRSS) coherent Doppler data in conjunction with observations from eleven C-band stations, to include data from the Kwajalein Atoll and the usual California coastal radars, as well as data from five cinetheodolite cameras in the vicinity of the runways at EAFB. The anchor epoch utilized for the trajectory reconstruction was 53,904 Greenwich Mean Time (GMT) seconds which corresponds to an altitude at epoch of approximately 708 kft. Atmospheric data to enable development of an Extended BET for this mission were usurped from the JSC operational post-flight BET. These data were evaluated based on Space Shuttle-derived considerations as well as model comparisons. The Aerodynamic BET includes configuration information, final mass properties, and both flight-determined and predicted aerodynamic performance estimates. The predicted data were based on the final pre-operational databook, updated to include flight determined incrementals based on an earlier ensemble of flights. Aerodynamic performance comparisons are presented and correlated versus statistical results based on twenty-two previous missions.
Estimating respiratory rate from FBG optical sensors by using signal quality measurement.
Yongwei Zhu; Maniyeri, Jayachandran; Fook, Victor Foo Siang; Haihong Zhang
2015-08-01
Non-intrusiveness is one of the advantages of in-bed optical sensor devices for monitoring vital signs, including heart rate and respiratory rate. Estimating respiratory rate reliably using such sensors, however, is challenging, due to body movement, signal variation across subjects and body positions, and other factors. This paper presents a method for reliable respiratory rate estimation for FBG optical sensors by introducing signal quality estimation. The method estimates the quality of the signal waveform by detecting regularly repetitive patterns using the proposed spectrum and cepstrum analysis. Multiple window sizes are used to cater for a wide range of target respiratory rates. Furthermore, the readings of multiple sensors are fused to derive a final respiratory rate. Experiments with 12 subjects and 2 body positions were conducted using the polysomnography belt signal as ground truth. The results demonstrated the effectiveness of the method.
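A minimal real-cepstrum periodicity check of the kind described (one window, one sensor; the paper fuses multiple window sizes and sensors); the parameters and synthetic signal are illustrative only.

```python
import numpy as np

def cepstral_resp_rate(x, fs, rate_band=(0.1, 0.7)):
    """Estimate a respiratory rate (Hz) from the real cepstrum of one window.
    rate_band covers plausible breathing frequencies (6-42 breaths/min)."""
    x = x - x.mean()
    spec = np.abs(np.fft.rfft(x * np.hanning(x.size)))
    ceps = np.fft.irfft(np.log(spec + 1e-12))      # real cepstrum
    q = np.arange(ceps.size) / fs                  # quefrency axis (seconds)
    band = (q >= 1 / rate_band[1]) & (q <= 1 / rate_band[0])
    peak_idx = np.argmax(ceps[band])
    return 1 / q[band][peak_idx], ceps[band][peak_idx]  # rate, quality score

# Synthetic 0.25 Hz breathing-like trace (fundamental plus one harmonic)
fs = 50
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(5)
sig = (np.sin(2 * np.pi * 0.25 * t) + 0.6 * np.sin(2 * np.pi * 0.5 * t)
       + 0.3 * rng.normal(size=t.size))
print(cepstral_resp_rate(sig, fs))
```

The cepstral peak height serves as a crude quality score: a regular breathing pattern yields a sharp peak, whereas movement-corrupted windows do not.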
Restoration of Monotonicity Respecting in Dynamic Regression
Huang, Yijian
2017-01-01
Dynamic regression models, including the quantile regression model and Aalen’s additive hazards model, are widely adopted to investigate evolving covariate effects. Yet lack of monotonicity respecting with standard estimation procedures remains an outstanding issue. Advances have recently been made, but none provides a complete resolution. In this article, we propose a novel adaptive interpolation method to restore monotonicity respecting, by successively identifying and then interpolating nearest monotonicity-respecting points of an original estimator. Under mild regularity conditions, the resulting regression coefficient estimator is shown to be asymptotically equivalent to the original. Our numerical studies have demonstrated that the proposed estimator is much smoother and may have better finite-sample efficiency than the original, as well as other competing monotonicity-respecting estimators where these are available (only in special cases). Illustration with a clinical study is provided. PMID:29430068
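The following toy sketch conveys the flavor of the approach for a non-decreasing coefficient curve: keep the estimates that already respect monotonicity and linearly interpolate the rest. It is a simplification of the authors' adaptive interpolation, which defines the nearest monotonicity-respecting points more carefully:

```python
import numpy as np

def restore_monotone(t, beta):
    """Toy monotonicity restoration: retain estimates forming a
    non-decreasing sequence and interpolate the remaining points
    between those anchors (illustrative only)."""
    keep = [0]
    for i in range(1, len(beta)):
        if beta[i] >= beta[keep[-1]]:   # respects non-decreasing order
            keep.append(i)
    keep = np.array(keep)
    return np.interp(t, t[keep], beta[keep])

t = np.linspace(0, 1, 11)
beta_hat = np.array([0.0, 0.2, 0.15, 0.3, 0.28, 0.4,
                     0.5, 0.45, 0.6, 0.7, 0.72])
print(restore_monotone(t, beta_hat))    # now non-decreasing in t
```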
Application of Multilayer Feedforward Neural Networks to Precipitation Cell-Top Altitude Estimation
NASA Technical Reports Server (NTRS)
Spina, Michelle S.; Schwartz, Michael J.; Staelin, David H.; Gasiewski, Albin J.
1998-01-01
The use of passive 118-GHz O2 observations of rain cells for precipitation cell-top altitude estimation is demonstrated by using a multilayer feedforward neural network retrieval system. Rain cell observations at 118 GHz were compared with estimates of the cell-top altitude obtained by optical stereoscopy. The observations were made at 2.4-km horizontal spatial resolution using the Millimeter-wave Temperature Sounder (MTS) scanning spectrometer aboard the NASA ER-2 research aircraft during the Genesis of Atlantic Lows Experiment (GALE) and the COoperative Huntsville Meteorological EXperiment (COHMEX) in 1986. The neural network estimator applied to MTS spectral differences between clouds and nearby clear air yielded an rms discrepancy of 1.76 km for a combined cumulus, mature, and dissipating cell set and 1.44 km for the cumulus-only set. An improvement in rms discrepancy to 1.36 km was achieved by including additional MTS information on the absolute atmospheric temperature profile. An incremental method for training neural networks was developed that yielded robust results despite the use of as few as 56 training spectra. Comparison of these results with a nonlinear statistical estimator shows that superior results can be obtained with a neural network retrieval system. Imagery of estimated cell-top altitudes was created from 118-GHz spectral imagery gathered during CAMEX, September through October 1993, and from cyclone Oliver, February 7, 1993.
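A schematic of such a retrieval setup, using scikit-learn in place of the original network and random numbers in place of MTS spectra (channel count, layer size, and data are all invented for illustration):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
# Stand-in inputs: simulated "spectral difference" channels per cell
# (the real study used MTS 118-GHz cloud-minus-clear spectra).
X = rng.standard_normal((56, 8))        # 56 training spectra, 8 channels
y = 5.0 + X @ rng.standard_normal(8) * 0.5 + 0.2 * rng.standard_normal(56)

# Small feedforward network mapping spectra to cell-top altitude (km).
net = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                   max_iter=5000, random_state=0)
net.fit(X, y)
rms = np.sqrt(np.mean((net.predict(X) - y) ** 2))
print(f"training rms discrepancy: {rms:.2f} km")
```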
Comparing Methods for Estimating Direct Costs of Adverse Drug Events.
Gyllensten, Hanna; Jönsson, Anna K; Hakkarainen, Katja M; Svensson, Staffan; Hägg, Staffan; Rehnberg, Clas
2017-12-01
To estimate how direct health care costs resulting from adverse drug events (ADEs), and their distribution, are affected by methodological decisions regarding identification of ADEs, assignment of relevant resource use to ADEs, and estimation of costs for the assigned resources. ADEs were identified from medical records and diagnostic codes for a random sample of 4970 Swedish adults during a 3-month study period in 2008 and were assessed for causality. Results were compared for five cost evaluation methods, comprising different methods for identifying ADEs, assigning resource use to ADEs, and estimating costs for the assigned resources (resource use method, proportion of registered cost method, unit cost method, diagnostic code method, and main diagnosis method). Different levels of causality for ADEs and of ADEs' contribution to health care resource use were considered. Using the five methods, the maximum estimated overall direct health care costs resulting from ADEs ranged from Sk10,000 (Sk = Swedish krona; ~€1,500 in 2016 values) using the diagnostic code method to more than Sk3,000,000 (~€414,000) using the unit cost method in our study population. The most conservative definitions of ADEs' contribution to health care resource use and of the causality of ADEs resulted in average costs per patient ranging from Sk0 using the diagnostic code method to Sk4066 (~€500) using the unit cost method. The estimated costs resulting from ADEs varied considerably depending on the methodological choices. The results indicate that costs for ADEs need to be identified through medical record review and by using detailed unit cost data. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Dechartres, Agnes; Bond, Elizabeth G; Scheer, Jordan; Riveros, Carolina; Atal, Ignacio; Ravaud, Philippe
2016-11-30
Publication bias and other reporting bias have been well documented for journal articles, but no study has evaluated the nature of results posted at ClinicalTrials.gov. We aimed to assess how many randomized controlled trials (RCTs) with results posted at ClinicalTrials.gov report statistically significant results and whether the proportion of trials with significant results differs when no treatment effect estimate or p-value is posted. We searched ClinicalTrials.gov in June 2015 for all studies with results posted. We included completed RCTs with a superiority hypothesis and considered results for the first primary outcome with results posted. For each trial, we assessed whether a treatment effect estimate and/or p-value was reported at ClinicalTrials.gov and, if yes, whether the results were statistically significant. If no treatment effect estimate or p-value was reported, we calculated the treatment effect and corresponding p-value using the results per arm posted at ClinicalTrials.gov when sufficient data were reported. From the 17,536 studies with results posted at ClinicalTrials.gov, we identified 2823 completed phase 3 or 4 randomized trials with a superiority hypothesis. Of these, 1400 (50%) reported a treatment effect estimate and/or p-value. Results were statistically significant for 844 trials (60%), with a median p-value of 0.01 (Q1-Q3: 0.001-0.26). For the 1423 trials with no treatment effect estimate or p-value posted, we could calculate the treatment effect and corresponding p-value using results reported per arm for 929 (65%). For 494 trials (35%), p-values could not be calculated, mainly because of insufficient reporting, censored data, or repeated measurements over time. For the 929 trials for which we could calculate p-values, we found statistically significant results for 342 (37%), with a median p-value of 0.19 (Q1-Q3: 0.005-0.59). Half of the trials with results posted at ClinicalTrials.gov reported a treatment effect estimate and/or p-value, with significant results for 60% of these. p-values could be calculated from results reported per arm at ClinicalTrials.gov for only 65% of the other trials. The proportion of significant results was much lower for these trials, which suggests selective posting of treatment effect estimates and/or p-values when results are statistically significant.
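For a binary primary outcome reported per arm, the kind of calculation the authors performed can be sketched as a two-proportion z-test (the counts below are hypothetical; the study's actual computations depended on each trial's outcome type):

```python
import numpy as np
from scipy.stats import norm

def two_arm_test(x1, n1, x2, n2):
    """Two-sided two-proportion z-test from per-arm event counts, the
    kind of calculation possible when a trial posts results per arm
    but no overall effect estimate or p-value."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = np.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return p1 - p2, 2 * norm.sf(abs(z))     # risk difference, p-value

# Hypothetical arm-level results for a binary primary outcome.
diff, p = two_arm_test(x1=45, n1=150, x2=30, n2=150)
print(f"risk difference = {diff:.3f}, p = {p:.3f}")
```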
Updated Magmatic Flux Rate Estimates for the Hawaii Plume
NASA Astrophysics Data System (ADS)
Wessel, P.
2013-12-01
Several studies have estimated the magmatic flux rate along the Hawaiian-Emperor Chain using a variety of methods, arriving at different results. These flux rate estimates have weaknesses because of incomplete data sets and different modeling assumptions, especially for the youngest portion of the chain (<3 Ma). While they generally agree on the first-order features, there is less agreement on the magnitude and relative size of secondary flux variations. Some of these differences arise from the use of different methodologies, but the significance of this variability is difficult to assess due to a lack of confidence bounds on the estimates obtained with these disparate methods. All methods introduce some error, but to date there has been little or no quantification of error estimates for the inferred melt flux, making an assessment problematic. Here we re-evaluate the melt flux for the Hawaii plume with the latest gridded data sets (SRTM30+ and FAA 21.1) using several methods, including the optimal robust separator (ORS) and directional median filtering (DiM) techniques. We also compute realistic confidence limits on the results. In particular, the DiM technique was specifically developed to aid in the estimation of surface loads that are superimposed on wider bathymetric swells, and it provides error estimates on the optimal residuals. Confidence bounds are assigned separately for the estimated surface load (obtained from the ORS regional/residual separation techniques) and the inferred subsurface volume (from gravity-constrained isostasy and plate flexure optimizations). These new and robust estimates will allow us to assess which secondary features in the resulting melt flux curve are significant and should be incorporated when correlating melt flux variations with other geophysical and geochemical observations.
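A one-dimensional cartoon of the regional/residual separation underlying such load estimates is sketched below (invented profile; the published ORS and DiM methods operate on 2-D grids with optimized windows and supply the error estimates discussed above):

```python
import numpy as np
from scipy.ndimage import median_filter

# Broad swell plus narrow seamount-like loads: a wide median filter
# tracks the swell (regional) while largely ignoring narrow edifices,
# so the residual isolates the surface load feeding the melt flux.
x = np.linspace(0, 2000, 2001)                     # distance (km)
swell = 1.2 * np.exp(-((x - 1000) / 600) ** 2)     # broad swell (km)
loads = sum(4.0 * np.exp(-((x - c) / 15) ** 2) for c in (700, 1000, 1300))
bathy = swell + loads

regional = median_filter(bathy, size=401)          # ~400-km window
residual = bathy - regional
print(f"estimated load cross-section: {np.trapz(residual, x):.1f} km^2")
```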
Gafford, J. Atlee; Krebill, Hope; Lai, Sue Min; Christiadi; Doolittle, Gary C.
2017-01-01
Purpose Patients benefit from receiving cancer treatment closer to home when possible and at high-volume regional centers when specialized care is required. The purpose of this analysis was to estimate the economic impact of retaining more patients in-state for cancer clinical trials and care, which might offset some of the costs of establishing broader cancer trial and treatment networks. Method Kansas Cancer Registry data were used to estimate the number of patients retained in-state for cancer care following the expansion of local cancer clinical trial options through the Midwest Cancer Alliance based at the University of Kansas Medical Center. The 2014 economic impact of this enhanced local clinical trial network was estimated in four parts: Medical spending was estimated on the basis of National Cancer Institute cost-of-care estimates. Household travel cost savings were estimated as the difference between in-state and out-of-state travel costs. Trial-related grant income was calculated from administrative records. Indirect and induced economic benefits to the state were estimated using an economic impact model. Results The authors estimated that the enhanced local cancer clinical trial network resulted in approximately $6.9 million in additional economic activity in the state in 2014, or $362,000 per patient retained in-state. This estimate includes $3.6 million in direct spending and $3.3 million in indirect economic activity. The enhanced trial network also resulted in 45 additional jobs. Conclusions Retaining patients in-state for cancer care and clinical trial participation allows patients to remain closer to home for care and enhances the state economy. PMID:28253204
Acceleration estimation using a single GPS receiver for airborne scalar gravimetry
NASA Astrophysics Data System (ADS)
Zhang, Xiaohong; Zheng, Kai; Lu, Cuixian; Wan, Jiakuan; Liu, Zhanke; Ren, Xiaodong
2017-11-01
Kinematic acceleration estimated using the Global Positioning System (GPS) is significant for airborne scalar gravimetry. As the conventional approach based on the differential global positioning system (DGPS) presents several drawbacks, including additional cost and the impracticality of setting up nearby base stations in challenging environments, we introduce an alternative approach, Modified Kin-VADASE (MKin-VADASE), based on a modified Kin-VADASE approach that does not require ground base stations. In this approach, the aircraft velocities are first estimated with the modified Kin-VADASE. Then the accelerations are obtained from the velocity estimates using a Taylor approximation differentiator. The impact of carrier-phase measurement noise and satellite ephemeris errors on the acceleration estimates is investigated carefully in the frequency domain using the fast Fourier transform (FFT). The results show that the satellite clock products have a significant impact on the acceleration estimates. The performance of MKin-VADASE, precise point positioning (PPP), and DGPS is then validated using flight tests carried out in Shanxi Province, China. The accelerations are estimated using the three approaches and then used to calculate the gravity disturbances. Finally, the analysis of crossover differences and the terrestrial gravity data are used to evaluate the accuracy of the gravity disturbance estimates. The results show that the performances of MKin-VADASE, PPP and DGPS are comparable, but the computational complexity of MKin-VADASE is greatly reduced with regard to PPP and DGPS. For the results of the three approaches, the RMS of crossover differences of gravity disturbance estimates is approximately 1-1.5 mGal at a spatial resolution of 3.5 km (half wavelength) after crossover adjustment, and the accuracy is approximately 3-4 mGal with respect to terrestrial gravity data.
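The differentiation step can be illustrated with a low-order Taylor-based (central-difference) differentiator; the sketch below uses an invented 1 Hz velocity series and is not the paper's exact filter:

```python
import numpy as np

def accel_from_velocity(v, dt):
    """Second-order central-difference differentiator (a low-order
    Taylor approximation; the paper's differentiator may use a
    longer stencil and noise shaping)."""
    a = np.empty_like(v)
    a[1:-1] = (v[2:] - v[:-2]) / (2 * dt)
    a[0] = (v[1] - v[0]) / dt            # one-sided at the ends
    a[-1] = (v[-1] - v[-2]) / dt
    return a

dt = 1.0                                 # 1 Hz velocity estimates (assumed)
t = np.arange(0, 600, dt)
v = 0.05 * np.sin(2 * np.pi * t / 120)   # slowly varying velocity (m/s)
a = accel_from_velocity(v, dt)
print(f"max |a| = {np.abs(a).max():.2e} m/s^2  (1 mGal = 1e-5 m/s^2)")
```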
Ritchie, Andrew M; Lo, Nathan; Ho, Simon Y W
2017-05-01
In Bayesian phylogenetic analyses of genetic data, prior probability distributions need to be specified for the model parameters, including the tree. When Bayesian methods are used for molecular dating, available tree priors include those designed for species-level data, such as the pure-birth and birth-death priors, and coalescent-based priors designed for population-level data. However, molecular dating methods are frequently applied to data sets that include multiple individuals across multiple species. Such data sets violate the assumptions of both the speciation and coalescent-based tree priors, making it unclear which should be chosen and whether this choice can affect the estimation of node times. To investigate this problem, we used a simulation approach to produce data sets with different proportions of within- and between-species sampling under the multispecies coalescent model. These data sets were then analyzed under pure-birth, birth-death, constant-size coalescent, and skyline coalescent tree priors. We also explored the ability of Bayesian model testing to select the best-performing priors. We confirmed the applicability of our results to empirical data sets from cetaceans, phocids, and coregonid whitefish. Estimates of node times were generally robust to the choice of tree prior, but some combinations of tree priors and sampling schemes led to large differences in the age estimates. In particular, the pure-birth tree prior frequently led to inaccurate estimates for data sets containing a mixture of inter- and intraspecific sampling, whereas the birth-death and skyline coalescent priors produced stable results across all scenarios. Model testing provided an adequate means of rejecting inappropriate tree priors. Our results suggest that tree priors do not strongly affect Bayesian molecular dating results in most cases, even when severely misspecified. However, the choice of tree prior can be significant for the accuracy of dating results in the case of data sets with mixed inter- and intraspecies sampling. [Bayesian phylogenetic methods; model testing; molecular dating; node time; tree prior.]. © The authors 2016. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For permissions, please e-mail: journals.permission@oup.com.
MSFC Sortie Laboratory Environmental Control System (ECS) phase B design study results
NASA Technical Reports Server (NTRS)
Ignatonis, A. J.; Mitchell, K. L.
1974-01-01
The Phase B effort of the Sortie Lab program has concluded. Results of that effort pertaining to the definition of the environmental control system (ECS) are presented. Numerous design studies were performed in Phase B to investigate system feasibility, complexity, weight, and cost. The results and methods employed for these design studies are included. An autonomous Sortie Lab ECS was developed which utilizes a deployed space radiator. Total system weight was projected to be 1814.4 kg, including the radiator and fluids. ECS power requirements were estimated at 950 watts.
2010-01-01
Background Estimating the economic impact of influenza is complicated because the disease may have non-specific symptoms, and many patients with influenza are registered with other diagnoses. Furthermore, in some countries like Norway, employees can be on paid sick leave for a specified number of days without a doctor's certificate ("self-reported sick leave"), and these sick leaves are not registered. Both problems result in gaps in the existing literature: costs associated with influenza-related illness and self-reported sick leave are rarely included. The aim of this study was to improve estimates of total influenza-related health-care costs and productivity losses by estimating these missing costs. Methods Using Norwegian data, the weekly numbers of influenza-attributable hospital admissions and certified sick leaves registered with other diagnoses were estimated from influenza-like illness surveillance data using quasi-Poisson regression. The number of self-reported sick leaves was estimated using a Monte Carlo simulation model of illness recovery curves based on the number of certified sick leaves. A probabilistic sensitivity analysis was conducted on the economic outcomes. Results During the 1998/99 through 2005/06 influenza seasons, the models estimated an annual average of 2700 excess influenza-associated hospitalizations in Norway, of which 16% were registered as influenza, 51% as pneumonia, and 33% with other diagnoses. The direct cost of seasonal influenza totaled US$22 million annually, including the costs of pharmaceuticals and outpatient services. The annual average number of working days lost was predicted at 793,000, resulting in an estimated productivity loss of US$231 million. Self-reported sick leave accounted for approximately one-third of the total indirect cost. During a pandemic, the total cost could rise to over US$800 million. Conclusions Influenza places a considerable burden on patients and society, with indirect costs greatly exceeding direct costs. The cost of influenza-attributable complications and the cost of self-reported sick leave represent a considerable part of the economic burden of influenza. PMID:21106057
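The excess-admissions regression can be sketched with statsmodels, where quasi-Poisson behaviour is obtained by scaling a Poisson GLM by the Pearson chi-square (all data and covariates below are synthetic stand-ins for the ILI surveillance series):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
weeks = 8 * 52
# Synthetic weekly ILI index and admissions with seasonal baseline.
ili = rng.gamma(1.2, 20, weeks) * (rng.random(weeks) < 0.3)
base = 40 + 5 * np.sin(2 * np.pi * np.arange(weeks) / 52)
admissions = rng.poisson(base + 0.15 * ili)

X = sm.add_constant(pd.DataFrame({
    "ili": ili,
    "sin52": np.sin(2 * np.pi * np.arange(weeks) / 52),
    "cos52": np.cos(2 * np.pi * np.arange(weeks) / 52)}))
# scale='X2' gives Pearson-chi2-scaled (quasi-Poisson) inference.
fit = sm.GLM(admissions, X, family=sm.families.Poisson()).fit(scale="X2")
excess = fit.predict(X) - fit.predict(X.assign(ili=0.0))
print(f"dispersion {fit.scale:.2f}; "
      f"excess admissions/season: {excess.sum() / 8:.0f}")
```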
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leung, K; Wong, M; Ng, Y
Purpose: Interventional cardiac procedures utilize frequent fluoroscopy and cineangiography, which impose considerable radiation risk to patients, especially pediatric patients. Accurate calculation of effective dose is important in order to estimate cancer risk over the rest of their lifetime. This study evaluates the difference between effective dose calculated by Monte Carlo simulation and effective dose estimated by locally-derived conversion factors (CF-local) and by commonly quoted conversion factors from Karambatsakidou et al (CF-K). Methods: Effective dose (E) of 12 pediatric patients, aged between 2.5 and 19 years, who had undergone interventional cardiac procedures, was calculated using the PCXMC-2.0 software. Tube spectrum, irradiation geometry, exposure parameters, and dose-area product (DAP) of each projection were included in the software calculation. Effective doses for each patient were also estimated by two methods: 1) CF-local: a conversion factor derived locally by generalizing the results of the 12 patients, multiplied by the DAP of each patient, gives E-local. 2) CF-K: the selected factor from the above-mentioned literature, multiplied by the DAP of each patient, gives E-K. Results: Means of E, E-local and E-K were 16.01 mSv, 16.80 mSv and 22.25 mSv, respectively. A deviation of -29.35% to +34.85% between E and E-local, and a greater deviation of -28.96% to +60.86% between E and E-K, were observed. E-K overestimated the effective dose for patients at age 7.5-19. Conclusion: Effective dose obtained by conversion factors is a simple and quick way to estimate radiation risk for pediatric patients. This study showed that estimation by CF-local may bear an error of 35% when compared with the Monte Carlo calculation. Using conversion factors derived in other studies may result in an even greater error, of up to 60%, due to factors that are not catered for in the estimation, including patient size, projection angles, exposure parameters, tube filtration, etc. Users must be aware of these potential inaccuracies when the simple conversion method is employed.
Benchmarking the Integration of WAVEWATCH III Results into HAZUS-MH: Preliminary Results
NASA Technical Reports Server (NTRS)
Berglund, Judith; Holland, Donald; McKellip, Rodney; Sciaudone, Jeff; Vickery, Peter; Wang, Zhanxian; Ying, Ken
2005-01-01
The report summarizes the results from the preliminary benchmarking activities associated with the use of WAVEWATCH III (WW3) results in the HAZUS-MH MR1 flood module. Project partner Applied Research Associates (ARA) is integrating the WW3 model into HAZUS. The current version of HAZUS-MH predicts loss estimates from hurricane-related coastal flooding by using values of surge only. Using WW3, wave setup can be included with surge. Loss estimates resulting from the use of surge-only and surge-plus-wave-setup were compared. This benchmarking study is preliminary because the HAZUS-MH MR1 flood module was under development at the time of the study. In addition, WW3 is not scheduled to be fully integrated with HAZUS-MH and available for public release until 2008.
Rapid neutral-neutral reactions at low temperatures: a new network and first results for TMC-1
NASA Astrophysics Data System (ADS)
Smith, Ian W. M.; Herbst, Eric; Chang, Qiang
2004-05-01
There is now ample evidence from an assortment of experiments, especially those involving the CRESU (Cinétique de Réaction en Ecoulement Supersonique Uniforme) technique, that a variety of neutral-neutral reactions possess no activation energy barrier and are quite rapid at very low temperatures. These reactions include both radical-radical systems and, more surprisingly, systems involving an atom or a radical and one 'stable' species. Generalizing from the small but growing number of systems studied in the laboratory, we estimate reaction rate coefficients for a larger number of such reactions and include these estimates in a new network of gas-phase reactions for use in low-temperature interstellar chemistry. Designated osu.2003, the new network is available on the World Wide Web and will be continually updated. A table of new results for molecular abundances in the dark cloud TMC-1 (CP) is provided and compared with results from an older (new standard model; nsm) network.
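Rate coefficients in such gas-phase networks are conventionally stored in the modified Arrhenius form k(T) = α(T/300)^β exp(−γ/T); the sketch below evaluates it for an invented barrierless reaction, where γ = 0 and β < 0 make the rate grow toward low temperature:

```python
import numpy as np

def rate_coefficient(alpha, beta, gamma, T):
    """Modified Arrhenius form used in gas-phase astrochemical
    networks: k(T) = alpha * (T/300)**beta * exp(-gamma/T),
    in cm^3 s^-1 for two-body reactions."""
    return alpha * (T / 300.0) ** beta * np.exp(-gamma / T)

# Illustrative barrierless radical-neutral reaction (parameter values
# are made up for this example, not taken from osu.2003).
for T in (10.0, 100.0, 300.0):
    print(f"T = {T:5.0f} K  k = {rate_coefficient(3e-10, -0.5, 0.0, T):.2e}")
```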
Cart3D Simulations for the First AIAA Sonic Boom Prediction Workshop
NASA Technical Reports Server (NTRS)
Aftosmis, Michael J.; Nemec, Marian
2014-01-01
Simulation results for the First AIAA Sonic Boom Prediction Workshop (LBW1) are presented using an inviscid, embedded-boundary Cartesian mesh method. The method employs adjoint-based error estimation and adaptive meshing to automatically determine the resolution requirements of the computational domain. Results are presented for both mandatory and optional test cases. These include an axisymmetric body of revolution, a 69° delta wing model, and a complete model of the Lockheed N+2 supersonic tri-jet with V-tail and flow-through nacelles. In addition to formal mesh refinement studies and examination of the adjoint-based error estimates, mesh convergence is assessed by presenting simulation results for meshes at several resolutions which are comparable in size to the unstructured grids distributed by the workshop organizers. The data provided include both the pressure signals required by the workshop and information on code performance in both memory and processing time. Various enhanced techniques offering improved simulation efficiency will be demonstrated and discussed.
Lives Saved Tool (LiST) costing: a module to examine costs and prioritize interventions.
Bollinger, Lori A; Sanders, Rachel; Winfrey, William; Adesina, Adebiyi
2017-11-07
Achieving the Sustainable Development Goals will require careful allocation of resources in order to achieve the highest impact. The Lives Saved Tool (LiST) has been used widely to calculate the impact of maternal, neonatal and child health (MNCH) interventions for program planning and multi-country estimation in several Lancet Series commissions. As use of the LiST model increases, many have expressed a desire to cost interventions within the model, in order to support budgeting and prioritization of interventions by countries. A limited LiST costing module was introduced several years ago, but with gaps in cost types. Updates to its inputs have now been added to make the module fully functional for a range of uses. This paper builds on previous work that developed an initial version of the LiST costing module to provide costs for MNCH interventions using an ingredients-based costing approach. Here, we update the previous (2013) econometric estimates with data newly available in 2016 and also include above-facility-level costs such as program management. The updated econometric estimates inform the percentages of intervention-level costs attributable to some direct costs and to indirect costs. These estimates add to the existing values for direct cost requirements, such as drugs and supplies and required provider time, which were already available in LiST Costing. Results generated by the LiST costing module include costs for each intervention, as well as costs disaggregated by intervention into drug and supply costs, labor costs, other recurrent costs, capital costs, and above-service-delivery costs. These results can be combined with mortality estimates to support prioritization of interventions by countries. The LiST costing module provides an option for countries to identify the resource requirements for scaling up a maternal, neonatal, and child health program, and to examine the financial impact of different resource allocation strategies. It can be a useful tool for countries as they seek to identify the best investments for scarce resources. The purpose of the LiST model is to provide a tool for making resource allocation decisions in a strategic planning process by prioritizing interventions based on their resulting impact on maternal and child mortality and morbidity.
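The ingredients-based logic can be sketched as follows (all quantities, percentages, and prices are placeholders, not LiST defaults):

```python
# Minimal ingredients-based costing sketch in the spirit of the LiST
# costing module: intervention cost = drugs/supplies + provider time,
# inflated by assumed percentages for indirect and above-facility costs.

def intervention_cost(target_pop, coverage, supplies_cost, minutes,
                      wage_per_min, indirect_pct=0.30, programme_pct=0.15):
    treated = target_pop * coverage
    direct = treated * (supplies_cost + minutes * wage_per_min)
    indirect = direct * indirect_pct        # facility overheads etc.
    programme = direct * programme_pct      # above-service delivery
    return direct + indirect + programme

# Hypothetical intervention reaching 80% of 100,000 children.
cost = intervention_cost(target_pop=100_000, coverage=0.8,
                         supplies_cost=1.50, minutes=10, wage_per_min=0.05)
print(f"total annual cost: ${cost:,.0f}")
```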
LAGEOS geodetic analysis-SL7.1
NASA Technical Reports Server (NTRS)
Smith, D. E.; Kolenkiewicz, R.; Dunn, P. J.; Klosko, S. M.; Robbins, J. W.; Torrence, M. H.; Williamson, R. G.; Pavlis, E. C.; Douglas, N. B.; Fricke, S. K.
1991-01-01
Laser ranging measurements to the LAGEOS satellite from 1976 through 1989 are related via geodetic and orbital theories to a variety of geodetic and geodynamic parameters. The SL7.1 analyses of this data set are explained, including the estimation process for geodetic parameters such as Earth's gravitational constant (GM), parameters describing the Earth's elasticity properties (Love numbers), and temporally varying geodetic parameters such as Earth's orientation (polar motion and Delta UT1) and tracking site horizontal tectonic motions. Descriptions of the reference systems, tectonic models, and adopted geodetic constants are provided; these are the framework within which the SL7.1 solution takes place. Estimates of temporal variations in non-conservative force parameters are included in these SL7.1 analyses, as well as parameters describing the orbital states at monthly epochs. This information is useful in further refining models used to describe close-Earth satellite behavior. Estimates of intersite motions and individual tracking site motions computed through the network adjustment scheme are given. Tabulations of tracking site eccentricities, data summaries, estimated monthly orbital and force model parameters, polar motion, Earth rotation, and tracking station coordinate results are also provided.
Estimation and simulation of multi-beam sonar noise.
Holmin, Arne Johannes; Korneliussen, Rolf J; Tjøstheim, Dag
2016-02-01
Methods for the estimation and modeling of noise present in multi-beam sonar data, including the magnitude, probability distribution, and spatial correlation of the noise, are developed. The methods consider individual acoustic samples and facilitate compensation of highly localized noise as well as subtraction of noise estimates averaged over time. The modeled noise is included in an existing multi-beam sonar simulation model [Holmin, Handegard, Korneliussen, and Tjøstheim, J. Acoust. Soc. Am. 132, 3720-3734 (2012)], resulting in an improved model that can be used to strengthen interpretation of data collected in situ at any signal to noise ratio. Two experiments, from the former study in which multi-beam sonar data of herring schools were simulated, are repeated with inclusion of noise. These experiments demonstrate (1) the potentially large effect of changes in fish orientation on the backscatter from a school, and (2) the estimation of behavioral characteristics such as the polarization and packing density of fish schools. The latter is achieved by comparing real data with simulated data for different polarizations and packing densities.
Variable input observer for state estimation of high-rate dynamics
NASA Astrophysics Data System (ADS)
Hong, Jonathan; Cao, Liang; Laflamme, Simon; Dodson, Jacob
2017-04-01
High-rate systems operating on the 10 μs to 10 ms timescale are likely to experience damaging effects due to rapid environmental changes (e.g., turbulence, ballistic impact). Some of these systems could benefit from real-time state estimation to enable their full potential. Examples of such systems include blast mitigation strategies, automotive airbag technologies, and hypersonic vehicles. Particular challenges in high-rate state estimation include: 1) complex time-varying nonlinearities of the system (e.g., noise, uncertainty, and disturbance); 2) rapid environmental changes; and 3) the requirement of a high convergence rate. Here, we propose using a Variable Input Observer (VIO) concept to vary the input space as the event unfolds. When systems experience high-rate dynamics, rapid changes in the system occur. To investigate the VIO's potential, a VIO-based neuro-observer is constructed and studied using experimental data collected from a laboratory impact test. Results demonstrate that the input space is unique to different impact conditions, and that adjusting the input space throughout the dynamic event produces better estimations than using a traditional fixed input space strategy.
Development of Neuromorphic Sift Operator with Application to High Speed Image Matching
NASA Astrophysics Data System (ADS)
Shankayi, M.; Saadatseresht, M.; Bitetto, M. A. V.
2015-12-01
There has always been a speed/accuracy trade-off in the photogrammetric mapping process, including feature detection and matching. Most previous research has improved algorithm speed through simplifications, or improved the accuracy of the image matching process through software modifications. This research instead tries to improve the speed of the same algorithm, without changing its accuracy, by using neuromorphic techniques. We have developed a general design of a neuromorphic ASIC to handle algorithms such as SIFT, and have investigated the neural assignment in each step of the SIFT algorithm. With a rough estimation based on the delays of the elements used, including MACs and comparators, we have estimated the resulting chip's performance for three scenarios: Full HD video (videogrammetry), 24 MP imagery (UAV photogrammetry), and an 88 MP image sequence. Our estimates come to approximately 3,000 fps for Full HD video, 250 fps for the 24 MP image sequence, and 68 fps for the 88 MP UltraCam image sequence, which would be a huge improvement for current photogrammetric processing systems. We also estimated a power consumption of less than 10 watts, which is far below that of current workflows.
Deriving Leaf Area Index (LAI) from multiple lidar remote sensing systems
NASA Astrophysics Data System (ADS)
Tang, H.; Dubayah, R.; Zhao, F.
2012-12-01
LAI is an important biophysical variable linking the biogeochemical cycles of earth systems. Observations from passive optical remote sensing are plagued by saturation, and results from different passive and active sensors are often inconsistent. Recently, lidar remote sensing has been applied to derive vertical canopy structure, including LAI and its vertical profile. In this research we compare LAI retrievals from three different types of lidar sensors. The study areas include the La Selva Biological Station in Costa Rica and the Sierra Nevada Forest in California. We first obtain independent LAI estimates from different lidar systems, including airborne lidar (LVIS), spaceborne lidar (GLAS) and ground lidar (Echidna). LAI retrievals are then evaluated between sensors as a function of scale, land cover type and sensor characteristics. We also assess the accuracy of these LAI products against ground measurements. By providing a link between ground observations, ground lidar, aircraft and space-based lidar, we hope to demonstrate a path for deriving more accurate estimates of LAI on a global basis, and to provide a more robust means of validating passive optical estimates of this important variable.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-21
...) of CAA and 40 CFR Part 1042, Subpart D). Estimated number of respondents: 200 (total, including..., depending on the program. Total estimated burden: 3,012 hours per year. Burden is defined at 5 CFR 1320.03(b) Total estimated cost: Estimated total annual costs: $200,000 (per year), includes an estimated $65,155...
The Influence of Mean Trophic Level on Biomass and Production in Marine Ecosystems
NASA Astrophysics Data System (ADS)
Woodson, C. B.; Schramski, J.
2016-02-01
The oceans have faced rapid removal of top predators, causing a reduction in the mean trophic level of many marine ecosystems as food webs are fished down. However, estimating the pre-exploitation biomass of the ocean has been difficult. Historical population sizes have been estimated using population dynamics models, archaeological or historical records, fisheries data, living memory, ecological monitoring data, genetics, and metabolic theory. In this talk, we expand on the use of metabolic theory by including complex trophic webs to estimate pre-exploitation levels of marine biomass. Our results suggest that historical marine biomass could be as much as 10 times higher than current estimates and that the total carrying capacity of the ocean is sensitive to mean trophic level and trophic web complexity. We further show that the production levels needed to support the added biomass are possible because of biomass accumulation and predator-prey overlap in regions such as fronts. These results have important implications for marine biogeochemical cycling, fisheries management, and conservation efforts.
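A toy trophic pyramid illustrates why the mean trophic level matters for standing biomass (the transfer efficiency, production, and turnover values below are illustrative, not the calibrated model from the talk):

```python
import numpy as np

# Primary production P1 supports successive levels with transfer
# efficiency eps; standing biomass at each level is production times
# a level-specific turnover (residence) time.
eps = 0.10                           # classic ~10% transfer efficiency
P1 = 50.0                            # primary production (Gt C / yr)
turnover = np.array([0.1, 0.5, 1.0, 3.0, 10.0])   # years, levels 1-5

production = P1 * eps ** np.arange(5)             # per trophic level
biomass = production * turnover
mtl = np.sum(np.arange(1, 6) * biomass) / biomass.sum()
print("biomass by level (Gt C):", biomass.round(3))
print(f"mean trophic level of standing biomass: {mtl:.2f}")
# Removing the slow-turnover top levels lowers both total biomass
# and the mean trophic level, as described above.
```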
Real-Time Stability and Control Derivative Extraction From F-15 Flight Data
NASA Technical Reports Server (NTRS)
Smith, Mark S.; Moes, Timothy R.; Morelli, Eugene A.
2003-01-01
A real-time, frequency-domain, equation-error parameter identification (PID) technique was used to estimate stability and control derivatives from flight data. This technique is being studied to support adaptive control system concepts currently being developed by NASA (National Aeronautics and Space Administration), academia, and industry. This report describes the basic real-time algorithm used for this study and implementation issues for onboard use as part of an indirect-adaptive control system. A confidence-measures system for automated evaluation of PID results is discussed. Results calculated using flight data from a modified F-15 aircraft are presented. Test maneuvers included pilot input doublets and automated inputs at several flight conditions. Estimated derivatives are compared to aerodynamic model predictions. The data indicate that the real-time PID used for this study performs well enough to be used for onboard parameter estimation. For suitable test inputs, the parameter estimates converged rapidly to sufficient levels of accuracy. The confidence measures devised were moderately successful.
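The core of an equation-error frequency-domain fit can be sketched as below: differentiation becomes multiplication by jω, so the derivatives follow from a linear least-squares solve at a handful of frequencies (a simplification; the onboard algorithm updates the Fourier transforms recursively, and the dynamics here are an invented first-order system, not F-15 data):

```python
import numpy as np

def freq_domain_equation_error(t, x, y, freqs_hz):
    """Fit y_dot = theta^T x in the frequency domain; differentiation
    becomes multiplication by j*omega at each analysis frequency."""
    dt = t[1] - t[0]
    w = 2 * np.pi * np.asarray(freqs_hz)
    E = np.exp(-1j * np.outer(w, t)) * dt       # finite Fourier transform
    theta, *_ = np.linalg.lstsq(E @ x, 1j * w * (E @ y), rcond=None)
    return theta.real

dt = 0.02
t = np.arange(0, 20, dt)
u = np.where(t < 2, 1.0, np.where(t < 4, -1.0, 0.0))  # doublet input
y = np.zeros_like(t)
for k in range(1, t.size):                      # simulate y_dot = -2y + 5u
    y[k] = y[k - 1] + dt * (-2.0 * y[k - 1] + 5.0 * u[k - 1])
ym = y + 0.01 * np.random.default_rng(3).standard_normal(t.size)
x = np.column_stack([ym, u])
print(freq_domain_equation_error(t, x, ym, np.arange(0.05, 1.0, 0.05)))
# expect approximately [-2.0, 5.0]
```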
Gjerde, Hallvard; Normann, Per T; Christophersen, Asbjørg S; Mørland, Jørg
2011-07-15
To estimate the prevalence of driving with blood drug concentrations above the recently proposed Norwegian legal limits for drugged driving in random traffic. The results from a roadside survey of 10,816 drivers were used as the basis for the estimation, and the most prevalent drugs were included. Three approaches were used to estimate the prevalence of drug concentrations above the proposed legal limits in blood based on drug concentrations in oral fluid: comparison with drug concentrations observed in oral fluid and blood in pharmacokinetic studies; estimation of the prevalence of drug concentrations in blood by calculating the prevalence of drug concentrations in oral fluid larger than the blood limit, multiplied by mean oral fluid/blood ratios; and a mathematical simulation mimicking the relationship between drug concentration distributions in blood and oral fluid for populations of drug users. In total, alcohol or drugs were detected in 5.7% of the samples of oral fluid from drivers in normal traffic; 3.8% (n=410) were positive for the drugs that we included in the assessment. The estimation of drug concentrations in blood suggested that about 1.5% had concentrations above the proposed legal limits in blood for the studied drugs, which is about 40% of those who were positive for the drugs in oral fluid. The estimated prevalence of driving with concentrations of psychoactive drugs in blood above the proposed legal limits was 0.4% for illegal drugs and 1.1% for medicinal drugs. These may be regarded as minimum estimates, as some drugs were not included in the assessment. These prevalences are higher than the prevalence of driving with a blood alcohol concentration above the legal limit of 0.2 g/kg in Norway. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Ho, Hsing-Hao; Li, Ya-Hui; Lee, Jih-Chin; Wang, Chih-Wei; Yu, Yi-Lin; Hueng, Dueng-Yuan; Hsu, Hsian-He
2018-01-01
Purpose We estimated the volume of vestibular schwannomas by an ice cream cone formula using thin-sliced magnetic resonance images (MRI) and compared the estimation accuracy among different estimating formulas and between different models. Methods The study was approved by a local institutional review board. A total of 100 patients with vestibular schwannomas examined by MRI between January 2011 and November 2015 were enrolled retrospectively. Informed consent was waived. Volumes of vestibular schwannomas were estimated by cuboidal, ellipsoidal, and spherical formulas based on a one-component model, and by cuboidal, ellipsoidal, Linskey’s, and ice cream cone formulas based on a two-component model. The estimated volumes were compared to the volumes measured by planimetry. Intraobserver reproducibility and interobserver agreement were tested. Estimation error, including absolute percentage error (APE) and percentage error (PE), was calculated. Statistical analysis included intraclass correlation coefficient (ICC), linear regression analysis, one-way analysis of variance, and paired t-tests, with P < 0.05 considered statistically significant. Results Overall tumor size was 4.80 ± 6.8 mL (mean ± standard deviation). All ICCs were no less than 0.992, suggesting high intraobserver reproducibility and high interobserver agreement. Cuboidal formulas significantly overestimated the tumor volume by a factor of 1.9 to 2.4 (P ≤ 0.001). The one-component ellipsoidal and spherical formulas overestimated the tumor volume with APEs of 20.3% and 29.2%, respectively. The two-component ice cream cone method, ellipsoidal formula, and Linskey’s formula significantly reduced the APE to 11.0%, 10.1%, and 12.5%, respectively (all P < 0.001). Conclusion The ice cream cone method and the other two-component formulas, including the ellipsoidal and Linskey’s formulas, allow for estimation of vestibular schwannoma volume more accurately than all one-component formulas. PMID:29438424
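The one-component formulas are standard solids of the measured diameters; the two-component composition below (an intrameatal cone plus a half-ellipsoid 'scoop') is an assumed reading of the ice cream cone geometry, and the paper defines the exact construction:

```python
import math

def cuboidal(a, b, c):       # tends to overestimate (~2x in the study)
    return a * b * c

def ellipsoidal(a, b, c):    # one-component ellipsoid
    return math.pi / 6 * a * b * c

def spherical(d):            # sphere on a single diameter
    return math.pi / 6 * d ** 3

def ice_cream_cone(d_canal, l_canal, a, b, c):
    """Assumed two-component geometry: cone V = pi*d^2*h/12 for the
    intrameatal part plus half an ellipsoid for the extrameatal part
    (illustrative; see the paper for the exact construction)."""
    cone = math.pi / 12 * d_canal ** 2 * l_canal
    scoop = math.pi / 12 * a * b * c      # half of an ellipsoid
    return cone + scoop

# Hypothetical tumour: 1.0 cm canal diameter and length, 2.5 x 2.0 x 1.8 cm
# extrameatal component (cm^3 = mL).
print(f"{ice_cream_cone(1.0, 1.0, 2.5, 2.0, 1.8):.2f} mL")
```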
Deterministic absorbed dose estimation in computed tomography using a discrete ordinates method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Norris, Edward T.; Liu, Xin, E-mail: xinliu@mst.edu; Hsieh, Jiang
Purpose: Organ dose estimation for a patient undergoing computed tomography (CT) scanning is very important. Although Monte Carlo methods are considered the gold standard in patient dose estimation, the computation time required is formidable for routine clinical calculations. Here, the authors investigate a deterministic method for estimating the absorbed dose more efficiently. Methods: Compared with current Monte Carlo methods, a more efficient approach to estimating the absorbed dose is to solve the linear Boltzmann equation numerically. In this study, an axial CT scan was modeled with the software package Denovo, which solves the linear Boltzmann equation using the discrete ordinates method. The CT scanning configuration included 16 x-ray source positions, beam collimators, flat filters, and bowtie filters. The phantom was the standard 32 cm CT dose index (CTDI) phantom. Four different Denovo simulations were performed with different simulation parameters, including the number of quadrature sets and the order of the Legendre polynomial expansions. A Monte Carlo simulation was also performed for benchmarking the Denovo simulations. A quantitative comparison was made of the simulation results obtained by the Denovo and Monte Carlo methods. Results: The difference between the simulation results of the discrete ordinates method and those of the Monte Carlo methods was found to be small, with a root-mean-square difference of around 2.4%. It was found that the discrete ordinates method, with a higher order of Legendre polynomial expansions, underestimated the absorbed dose near the center of the phantom (i.e., the low dose region). Simulations with quadrature set 8 and the first order of the Legendre polynomial expansions proved to be the most efficient computation method in the authors’ study. The single-thread computation time of the deterministic simulation with quadrature set 8 and the first order of the Legendre polynomial expansions was 21 min on a personal computer. Conclusions: The simulation results showed that the deterministic method can be effectively used to estimate the absorbed dose in a CTDI phantom. The accuracy of the discrete ordinates method was close to that of a Monte Carlo simulation, and the primary benefit of the discrete ordinates method lies in its rapid computation speed. It is expected that further optimization of this method for routine clinical CT dose estimation will improve its accuracy and speed.
Manns, Braden; McKenzie, Susan Q.; Au, Flora; Gignac, Pamela M.; Geller, Lawrence Ian
2017-01-01
Background: Many working-age individuals with advanced chronic kidney disease (CKD) are unable to work, or are only able to work at a reduced capacity and/or with a reduction in time at work, and receive disability payments, either from the Canadian government or from private insurers, but the magnitude of those payments is unknown. Objective: The objective of this study was to estimate Canada Pension Plan Disability Benefit and private disability insurance benefits paid to Canadians with advanced kidney failure, and how feasible improvements in prevention, identification, and early treatment of CKD and increased use of kidney transplantation might mitigate those costs. Design: This study used an analytical model combining Canadian data from various sources. Setting and Patients: This study included all patients with advanced CKD in Canada, including those with estimated glomerular filtration rate (eGFR) <30 mL/min/1.73 m2 and those on dialysis. Measurements: We combined disability estimates from a provincial kidney care program with the prevalence of advanced CKD and estimated disability payments from the Canada Pension Plan and private insurance plans to estimate overall disability benefit payments for Canadians with advanced CKD. Results: We estimate that Canadians with advanced kidney failure are receiving disability benefit payments of at least Can$217 million annually. These estimates are sensitive to the proportion of individuals with advanced kidney disease who are unable to work, and plausible variation in this estimate could mean patients with advanced kidney disease are receiving up to Can$260 million per year. Feasible strategies to reduce the proportion of individuals with advanced kidney disease, either through prevention, delay, or reduction in severity, or through increasing the rate of transplantation, could result in reductions in the cost of Canada Pension Plan and private disability insurance payments of Can$13.8 million per year within 5 years. Limitations: This study does not estimate how CKD prevention or increasing the rate of kidney transplantation might influence health care cost savings more broadly, and does not include the cost to provincial governments of programs that provide income for individuals without private insurance who do not qualify for Canada Pension Plan disability payments. Conclusions: Private disability insurance providers and federal government programs incur high costs related to individuals with advanced kidney failure, highlighting the significance of kidney disease not only to patients and their families but also to these other important stakeholders. Improvements in the care of individuals with kidney disease could reduce these costs. PMID:28491340
Newcom, D W; Baas, T J; Stalder, K J; Schwab, C R
2005-04-01
Three selection models were evaluated to compare selection candidate rankings based on EBV and to evaluate subsequent effects of model-derived EBV on the selection differential and expected genetic response in the population. Data were collected from carcass- and ultrasound-derived estimates of loin i.m. fat percent (IMF) in a population of Duroc swine under selection to increase IMF. The models compared were Model 1, a two-trait animal model used in the selection experiment that included ultrasound IMF from all pigs scanned and carcass IMF from pigs slaughtered to estimate breeding values for both carcass (C1) and ultrasound IMF (U1); Model 2, a single-trait animal model that included ultrasound IMF values on all pigs scanned to estimate breeding values for ultrasound IMF (U2); and Model 3, a multiple-trait animal model including carcass IMF from slaughtered pigs and the first three principal components from a total of 10 image parameters averaged across four longitudinal ultrasound images to estimate breeding values for carcass IMF (C3). Rank correlations between breeding value estimates for U1 and C1, U1 and U2, and C1 and C3 were 0.95, 0.97, and 0.92, respectively. Other rank correlations were 0.86 or less. In the selection experiment, approximately the top 10% of boars and 50% of gilts were selected. Selection differentials for pigs in Generation 3 were greatest when ranking pigs based on C1, followed by U1, U2, and C3. In addition, selection differential and estimated response were evaluated when simulating selection of the top 1, 5, and 10% of sires and 50% of dams. Results of this analysis indicated the greatest selection differential was for selection based on C1. The greatest loss in selection differential was found for selection based on C3 when selecting the top 10 and 1% of boars and 50% of gilts. The loss in estimated response when selecting varying percentages of boars and the top 50% of gilts was greatest when selection was based on C3 (16.0 to 25.8%) and least for selection based on U1 (1.3 to 10.9%). Estimated genetic change from selection based on carcass IMF was greater than selection based on ultrasound IMF. Results show that selection based on a combination of ultrasonically predicted IMF and sib carcass IMF produced the greatest selection differentials and should lead to the greatest genetic change.
Galactic cosmic ray radiation levels in spacecraft on interplanetary missions
NASA Technical Reports Server (NTRS)
Shinn, J. L.; Nealy, J. E.; Townsend, L. W.; Wilson, J. W.; Wood, J.S.
1994-01-01
Using the Langley Research Center Galactic Cosmic Ray (GCR) transport computer code (HZETRN) and the Computerized Anatomical Man (CAM) model, crew radiation levels inside manned spacecraft on interplanetary missions are estimated. These radiation-level estimates include particle fluxes, LET (Linear Energy Transfer) spectra, absorbed dose, and dose equivalent within various organs of interest in GCR protection studies. Changes in these radiation levels resulting from the use of various different types of shield materials are presented.
Energy efficient engine: Propulsion system-aircraft integration evaluation
NASA Technical Reports Server (NTRS)
Owens, R. E.
1979-01-01
Flight performance and operating economics of future commercial transports utilizing the energy efficient engine were assessed, as well as the probability of meeting NASA's goals for TSFC, DOC, noise, and emissions. Results of the initial propulsion system-aircraft integration evaluation presented include estimates of engine performance, predictions of fuel burns, operating costs of the flight propulsion system installed in seven selected advanced study commercial transports, estimates of noise and emissions, considerations of thrust growth, and the achievement-probability analysis.
An application of the suction analog for the analysis of asymmetric flow situations
NASA Technical Reports Server (NTRS)
Luckring, J. M.
1976-01-01
A recent extension of the suction analogy for estimation of vortex loads on asymmetric configurations is reviewed. This extension includes asymmetric augmented vortex lift and the forward sweep effect on side edge suction. Application of this extension to a series of skewed wings has resulted in an improved estimating capability for a wide range of asymmetric flow situations. Hence, the suction analogy concept now has more general applicability for subsonic lifting surface analysis.
NASA Technical Reports Server (NTRS)
Hallum, C. R.; Basu, J. P. (Principal Investigator)
1979-01-01
A natural stratum-based sampling scheme and the aggregation procedures for estimating wheat area, yield, and production and their associated prediction error estimates are described. The methodology utilizes LANDSAT imagery and agrophysical data to permit an improved stratification in foreign areas by ignoring political boundaries and restratifying along boundaries that are more homogeneous with respect to the distribution of agricultural density, soil characteristics, and average climatic conditions. A summary of test results is given including a discussion of the various problems encountered.
Estimating plant available water content from remotely sensed evapotranspiration
NASA Astrophysics Data System (ADS)
van Dijk, A. I. J. M.; Warren, G.; Doody, T.
2012-04-01
Plant available water content (PAWC) is an emergent soil property that is a critical variable in hydrological modelling. PAWC determines the active soil water storage and, in water-limited environments, is the main cause of different ecohydrological behaviour between (deep-rooted) perennial vegetation and (shallow-rooted) seasonal vegetation. Conventionally, PAWC is estimated for a combination of soil and vegetation from three variables: maximum rooting depth and the volumetric water content at field capacity and permanent wilting point, respectively. Without elaborate local field observation, large uncertainties in PAWC occur due to the assumptions associated with each of the three variables. We developed an alternative, observation-based method to estimate PAWC from precipitation observations and CSIRO MODIS Reflectance-based Evapotranspiration (CMRSET) estimates. Processing steps include (1) removing residual systematic bias in the CMRSET estimates, (2) making spatially appropriate assumptions about local water inputs and surface runoff losses, (3) using mean seasonal patterns in precipitation and CMRSET to estimate the seasonal pattern in soil water storage changes, (4) from these, calculating the mean seasonal storage range, which can be treated as an estimate of PAWC. We evaluate the resulting PAWC estimates against those determined in field experiments for 180 sites across Australia. We show that the method produces better estimates of PAWC than conventional techniques. In addition, the method provides detailed information with full continental coverage at moderate resolution (250 m) scale. The resulting maps can be used to identify likely groundwater dependent ecosystems and to derive PAWC distributions for each combination of soil and vegetation type.
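Steps (3)-(4) can be sketched with monthly climatologies: accumulate the mean-annual-closed P − ET flux and take the range of the implied storage (the numbers below are invented; the real method uses bias-corrected CMRSET at 250 m resolution):

```python
import numpy as np

# Illustrative monthly climatologies (mm/month) standing in for
# precipitation and bias-corrected CMRSET evapotranspiration.
P  = np.array([90, 80, 70, 40, 25, 20, 18, 20, 30, 50, 70, 85], float)
ET = np.array([60, 65, 70, 75, 60, 40, 30, 35, 45, 55, 60, 60], float)

flux = (P - ET) - (P - ET).mean()   # close the annual water balance
storage = np.cumsum(flux)           # relative soil water storage (mm)
pawc = storage.max() - storage.min()
print(f"estimated PAWC: {pawc:.0f} mm")
```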
Yost, Erin E; Stanek, John; DeWoskin, Robert S; Burgoon, Lyle D
2016-07-19
The United States Environmental Protection Agency (EPA) identified 1173 chemicals associated with hydraulic fracturing fluids, flowback, or produced water, of which 1026 (87%) lack chronic oral toxicity values for human health assessments. To facilitate the ranking and prioritization of chemicals that lack toxicity values, it may be useful to employ toxicity estimates from quantitative structure-activity relationship (QSAR) models. Here we describe an approach for applying the results of a QSAR model from the TOPKAT program suite, which provides estimates of the rat chronic oral lowest-observed-adverse-effect level (LOAEL). Of the 1173 chemicals, TOPKAT was able to generate LOAEL estimates for 515 (44%). To address the uncertainty associated with these estimates, we assigned qualitative confidence scores (high, medium, or low) to each TOPKAT LOAEL estimate, and found 481 to be high-confidence. For 48 chemicals that had both a high-confidence TOPKAT LOAEL estimate and a chronic oral reference dose from EPA's Integrated Risk Information System (IRIS) database, Spearman rank correlation identified 68% agreement between the two values (permutation p-value = 1 × 10^-11). These results provide support for the use of TOPKAT LOAEL estimates in identifying and prioritizing potentially hazardous chemicals. High-confidence TOPKAT LOAEL estimates were available for 389 of the 1026 hydraulic fracturing-related chemicals that lack chronic oral reference values (RfVs) and oral slope factors (OSFs) from EPA-identified sources, including a subset of chemicals that are frequently used in hydraulic fracturing fluids.
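The rank-correlation computation itself is a one-liner with SciPy (the paired values below are simulated; the paper's 68% agreement figure is its own binned metric and its p-value came from a permutation test, neither reproduced here):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
# Hypothetical paired values for chemicals with both a QSAR LOAEL
# estimate and an IRIS reference dose (log10 mg/kg-day scale).
qsar_loael = rng.normal(1.0, 1.0, 48)
iris_rfd = 0.7 * qsar_loael + rng.normal(0, 0.6, 48)

rho, p = spearmanr(qsar_loael, iris_rfd)   # asymptotic p-value
print(f"Spearman rho = {rho:.2f}, p = {p:.1e}")
```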
McAuley, L; Pham, B; Tugwell, P; Moher, D
2000-10-07
The inclusion of only a subset of all available evidence in a meta-analysis may introduce biases and threaten its validity; this is particularly likely if the subset of included studies differs from those not included, which may be the case for published versus grey literature (unpublished studies with limited distribution). We set out to examine whether exclusion of grey literature, compared with its inclusion in meta-analysis, provides different estimates of the effectiveness of interventions assessed in randomised trials. From a random sample of 135 meta-analyses, we identified and retrieved 33 publications that included both grey and published primary studies. The 33 publications contributed 41 separate meta-analyses from several disease areas. General characteristics of the meta-analyses and associated studies, and outcome data at the trial level, were collected. We explored the effects of the inclusion of grey literature on the quantitative results using logistic-regression analyses. 33% of the meta-analyses were found to include some form of grey literature. The grey literature, when included, accounted for between 4.5% and 75% of the studies in a meta-analysis. On average, published work, compared with grey literature, yielded significantly larger estimates of the intervention effect, by 15% (ratio of odds ratios = 1.15 [95% CI 1.04-1.28]). Excluding abstracts from the analysis further compounded the exaggeration (1.33 [1.10-1.60]). The exclusion of grey literature from meta-analyses can lead to exaggerated estimates of intervention effectiveness. In general, meta-analysts should attempt to identify, retrieve, and include all reports, grey and published, that meet predefined inclusion criteria.
Ladtap XL Version 2017: A Spreadsheet For Estimating Dose Resulting From Aqueous Releases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Minter, K.; Jannik, T.
LADTAP XL© is an EXCEL© spreadsheet used to estimate dose to offsite individuals and populations resulting from routine and accidental releases of radioactive materials to the Savannah River. LADTAP XL© contains two worksheets: LADTAP and IRRIDOSE. The LADTAP worksheet estimates dose for environmental pathways, including external exposure resulting from recreational activities on the Savannah River and internal exposure resulting from ingestion of water, fish, and invertebrates originating from the Savannah River. IRRIDOSE estimates offsite dose to individuals and populations from irrigation of foodstuffs with contaminated water from the Savannah River. In 2004, a complete description of the LADTAP XL© code and an associated user's manual were documented in LADTAP XL©: A Spreadsheet for Estimating Dose Resulting from Aqueous Release (WSRC-TR-2004-00059), and revised input parameters, dose coefficients, and radionuclide decay constants were incorporated into LADTAP XL© Version 2013 (SRNL-STI-2011-00238). LADTAP XL© Version 2017 is a slight modification of Version 2013, with minor changes made for more user-friendly parameter inputs and organization, updates to the time conversion factors used within the dose calculations, and a fix for an issue with the expected time build-up parameter referenced within the population shoreline dose calculations. This manual has been produced to update the code description, document verification of the models, and provide an updated user's manual. LADTAP XL© Version 2017 has been verified by Minter (2017) and is ready for use at the Savannah River Site (SRS).
Improved Methodology for Benefit Estimation of Preservation Projects
DOT National Transportation Integrated Search
2018-04-01
This research report presents an improved process for evaluating the benefits and economic tradeoffs associated with a variety of highway preservation projects. It includes a summary of results from a comprehensive phone survey concerning the use and...
NASA Technical Reports Server (NTRS)
1977-01-01
Results of planetary advanced studies and planning support are summarized. The scope of analyses includes cost estimation research, planetary mission performance, penetrator advanced studies, Mercury mission transport requirements, definition of super solar electric propulsion/solar sail mission discriminators, and advanced planning activities.
Statistical Estimation of Rollover Risk
DOT National Transportation Integrated Search
1989-08-01
This report describes the results of a statistical analysis to determine the probability of a rollover in a single vehicle accident. Over 39,000 accidents, which included 4910 rollovers, in the states of Texas, Maryland, and Washington were exam...
Stanišić Stojić, Svetlana; Stanišić, Nemanja; Stojić, Andreja
2016-07-11
To propose a new method for including the cumulative mid-term effects of air pollution in the traditional Poisson regression model and compare the temperature-related mortality risk estimates, before and after including air pollution data. The analysis comprised a total of 56,920 residents aged 65 years or older who died from circulatory and respiratory diseases in Belgrade, Serbia, and daily mean PM10, NO2, SO2 and soot concentrations obtained for the period 2009-2014. After accounting for the cumulative effects of air pollutants, the risk associated with cold temperatures was significantly lower and the overall temperature-attributable risk decreased from 8.80 to 3.00 %. Furthermore, the optimum range of temperature, within which no excess temperature-related mortality is expected to occur, was very broad, between -5 and 21 °C, which differs from the previous findings that most of the attributable deaths were associated with mild temperatures. These results suggest that, in polluted areas of developing countries, most of the mortality risk, previously attributed to cold temperatures, can be explained by the mid-term effects of air pollution. The results also showed that the estimated relative importance of PM10 was the smallest of four examined pollutant species, and thus, including PM10 data only is clearly not the most effective way to control for the effects of air pollution.
NASA Astrophysics Data System (ADS)
Kalyanapu, A. J.; Thames, B. A.
2013-12-01
Dam breach modeling often includes application of models that are sophisticated, yet computationally intensive to compute flood propagation at high temporal and spatial resolutions. This results in a significant need for computational capacity that requires development of newer flood models using multi-processor and graphics processing techniques. Recently, a comprehensive benchmark exercise titled the 12th Benchmark Workshop on Numerical Analysis of Dams was organized by the International Commission on Large Dams (ICOLD) to evaluate the performance of the various tools used for dam break risk assessment. The ICOLD workshop is focused on estimating the consequences of failure of a hypothetical dam near a hypothetical populated area with complex demographics and economic activity. The current study uses this hypothetical case study and focuses on evaluating the effects of dam breach methodologies on consequence estimation and analysis. The current study uses ICOLD hypothetical data including the topography, dam geometric and construction information, and land use/land cover data along with socio-economic and demographic data. The objective of this study is to evaluate the impacts of using four different dam breach methods on the consequence estimates used in the risk assessments. The four methodologies used are: i) Froehlich (1995), ii) MacDonald and Langridge-Monopolis 1984 (MLM), iii) Von Thun and Gillette 1990 (VTG), and iv) Froehlich (2008). To achieve this objective, three different modeling components were used. First, using HEC-RAS v.4.1, dam breach discharge hydrographs are developed. These hydrographs are then provided as flow inputs into a two-dimensional flood model named Flood2D-GPU, which leverages the computer's graphics card for greatly improved computational performance. Lastly, outputs from Flood2D-GPU, including inundated areas, depth grids, velocity grids, and flood wave arrival time grids, are input into HEC-FIA, which provides the consequence assessment for the solution to the problem statement. For the four breach methodologies, a sensitivity analysis of four breach parameters, breach side slope (SS), breach width (Wb), breach invert elevation (Elb), and time of failure (tf), is conducted. Up to 68 simulations are computed to produce breach hydrographs in HEC-RAS for input into Flood2D-GPU. The Flood2D-GPU simulation results were then post-processed in HEC-FIA to evaluate: Total Population at Risk (PAR), 14-yr and Under PAR (PAR14-), 65-yr and Over PAR (PAR65+), Loss of Life (LOL) and Direct Economic Impact (DEI). The MLM approach resulted in wide variability in simulated minimum and maximum values of the PAR, PAR65+ and LOL estimates. For PAR14- and DEI, Froehlich (1995) resulted in lower values while MLM resulted in higher estimates. This preliminary study demonstrated the relative performance of four commonly used dam breach methodologies and their impacts on consequence estimation.
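A breach-parameter sensitivity sweep of this kind can be organized as a product over parameter grids. The sketch below is a toy driver only: the grid values and the placeholder weir-type discharge function are hypothetical, since the actual study chained HEC-RAS, Flood2D-GPU, and HEC-FIA rather than a Python routine, and the function is not any of the four cited breach methods.

```python
import itertools

def breach_peak_discharge(side_slope, width_m, invert_elev_m, failure_hr):
    # Placeholder stand-in for a breach-hydrograph model (broad-crested
    # weir flavour); hypothetical reservoir level of 120 m.
    head = 120.0 - invert_elev_m
    eff_width = width_m + side_slope * head
    return 1.7 * eff_width * head**1.5 / max(failure_hr, 0.25)

grid = {
    "side_slope": [0.5, 1.0],            # H:V
    "width_m": [60, 90, 120],
    "invert_elev_m": [40, 50],
    "failure_hr": [0.5, 1.0, 2.0],
}
runs = [dict(zip(grid, combo)) for combo in itertools.product(*grid.values())]
peaks = [breach_peak_discharge(**r) for r in runs]
print(len(runs), "runs; peak range:",
      f"{min(peaks):.0f}-{max(peaks):.0f} m^3/s")
```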
A practical guideline for intracranial volume estimation in patients with Alzheimer's disease
2015-01-01
Background Intracranial volume (ICV) is an important normalization measure used in morphometric analyses to correct for head size in studies of Alzheimer disease (AD). Inaccurate ICV estimation could introduce bias in the outcome. The current study provides a decision aid for defining protocols for ICV estimation in patients with Alzheimer disease, in terms of the sampling frequencies that can be optimally used on the volumetric MRI data and the type of software most suitable for estimating the ICV measure. Methods Two groups of 22 subjects were considered, including adult controls (AC) and patients with Alzheimer disease (AD). Reference measurements were calculated for each subject by manually tracing the intracranial cavity by means of visual inspection. The reliability of the reference measurements was assured through intra- and inter-variation analyses. Three well-known, publicly available software packages (Freesurfer, FSL, and SPM) were examined in their ability to automatically estimate ICV across the groups. Results Analysis of the results supported the significant effect of estimation method, gender, cognitive condition of the subject and the interaction between method and cognitive condition factors on the measured ICV. Results of sub-sampling studies with 95% confidence showed that in order to keep the accuracy of the interleaved slice sampling protocol above 99%, the sampling period cannot exceed 20 millimeters for AC and 15 millimeters for AD. Freesurfer showed promising estimates for both adult groups. However, SPM showed more consistency in its ICV estimation over the different phases of the study. Conclusions This study emphasized the importance of selecting the appropriate protocol, the choice of the sampling period in the manual estimation of ICV, and the selection of suitable software for the automated estimation of ICV. The current study serves as an initial framework for establishing an appropriate protocol in both manual and automatic ICV estimations with different subject populations. PMID:25953026
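The manual estimate behind the sub-sampling analysis is a Cavalieri-style sum: traced cross-sectional areas times the sampling period. A minimal sketch with a hypothetical area profile (not real MRI data) shows how sparser sampling trades effort against accuracy:

```python
import numpy as np

def icv_ml(areas_mm2, sampling_period_mm):
    """Cavalieri estimate: summed slice areas x spacing, in mL."""
    return np.sum(areas_mm2) * sampling_period_mm / 1000.0  # mm^3 -> mL

slice_thickness = 1.0  # mm, acquired resolution
# Toy bell-shaped area profile over 180 slices, peaking ~11,000 mm^2.
areas = 11000 * np.exp(-0.5 * ((np.arange(180) - 90) / 55.0) ** 2)

full = icv_ml(areas, slice_thickness)
for period in (5, 10, 15, 20):          # trace every k-th slice only
    sub = icv_ml(areas[::period], slice_thickness * period)
    print(f"every {period} mm: {sub:.0f} mL ({100 * sub / full:.1f}% of full)")
```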
NREL Screens Universities for Solar and Battery Storage Potential
DOE Office of Scientific and Technical Information (OSTI.GOV)
In support of the U.S. Department of Energy's SunShot initiative, NREL provided solar photovoltaic (PV) screenings in 2016 for eight universities seeking to go solar. NREL conducted an initial technoeconomic assessment of PV and storage feasibility at the selected universities using the REopt model, an energy planning platform that can be used to evaluate RE options, estimate costs, and suggest a mix of RE technologies to meet defined assumptions and constraints. NREL provided each university with customized results, including the cost-effectiveness of PV and storage, recommended system size, estimated capital cost to implement the technology, and estimated life cycle cost savings.
An analysis of lateral stability in power-off flight with charts for use in design
NASA Technical Reports Server (NTRS)
Zimmerman, Charles H
1937-01-01
The aerodynamic and mass factors governing lateral stability are discussed and formulas are given for their estimation. Relatively simple relationships between the governing factors and the resulting stability characteristics are presented. A series of charts is included with which approximate stability characteristics may be rapidly estimated. The effects of the various governing factors upon the stability characteristics are discussed in detail. It is pointed out that much additional research is necessary both to correlate stability characteristics with riding, flying, and handling qualities and to provide suitable data for accurate estimates of those characteristics of an airplane while it is in the design stage.
[Comparison of three stand-level biomass estimation methods].
Dong, Li Hu; Li, Feng Ri
2016-12-01
At present, methods for estimating forest biomass at regional scales attract much attention from researchers, and developing stand-level biomass models is popular. Based on forestry inventory data for larch (Larix olgensis) plantations in Jilin Province, we used non-linear seemingly unrelated regression (NSUR) to estimate the parameters in two additive systems of stand-level biomass equations, i.e., stand-level biomass equations including stand variables and stand biomass equations including the biomass expansion factor (Model system 1 and Model system 2, respectively), listed the constant biomass expansion factor for larch plantations, and compared the prediction accuracy of three stand-level biomass estimation methods. The results indicated that for the two additive systems of biomass equations, the adjusted coefficient of determination (Ra^2) of the total and stem equations was more than 0.95, and the root mean squared error (RMSE), mean prediction error (MPE) and mean absolute error (MAE) were smaller. The branch and foliage biomass equations performed worse than the total and stem biomass equations, with an adjusted coefficient of determination (Ra^2) less than 0.95. The prediction accuracy of a constant biomass expansion factor was relatively lower than that of Model system 1 and Model system 2. Overall, although the stand-level biomass equation including the biomass expansion factor belongs to the volume-derived biomass estimation method and differs in essence from the stand biomass equations including stand variables, the prediction accuracy obtained by the two methods was similar. The constant biomass expansion factor had the lowest prediction accuracy and is inappropriate. In addition, to make the model parameter estimation more effective, established stand-level biomass equations should consider additivity in a system of all tree component biomass and total biomass equations.
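The contrast among the three methods can be sketched numerically. The toy below uses hypothetical coefficients and stand data (the study fit its systems with NSUR so that component equations are additive, which this sketch ignores): a stand-variable model, a variable biomass expansion factor (BEF) that shrinks as stand volume grows, and a single constant BEF.

```python
import numpy as np

basal_area = np.array([18.0, 24.0, 30.0])   # m^2/ha
height = np.array([12.0, 15.0, 18.0])       # m
volume = np.array([110.0, 180.0, 260.0])    # m^3/ha

# Model system 1 flavour: stand biomass from stand variables (Mg/ha).
biomass_sv = 0.9 * basal_area**0.8 * height**0.7

# Model system 2 flavour: volume-derived biomass via a BEF that varies
# with stand volume, versus a single constant expansion factor.
biomass_bef = volume * (0.55 + 8.0 / volume)
biomass_const = volume * 0.62

for row in zip(biomass_sv, biomass_bef, biomass_const):
    print("stand-variable: %6.1f  variable BEF: %6.1f  constant BEF: %6.1f" % row)
```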
Schmucker, Christine M; Blümle, Anette; Schell, Lisa K; Schwarzer, Guido; Oeller, Patrick; Cabrera, Laura; von Elm, Erik; Briel, Matthias; Meerpohl, Joerg J
2017-01-01
A meta-analysis as part of a systematic review aims to provide a thorough, comprehensive and unbiased statistical summary of data from the literature. However, relevant study results could be missing from a meta-analysis because of selective publication and inadequate dissemination. If missing outcome data differ systematically from published ones, a meta-analysis will be biased with an inaccurate assessment of the intervention effect. As part of the EU-funded OPEN project (www.open-project.eu) we conducted a systematic review that assessed whether the inclusion of data that were not published at all and/or published only in the grey literature influences pooled effect estimates in meta-analyses and leads to different interpretation. Systematic review of published literature (methodological research projects). Four bibliographic databases were searched up to February 2016 without restriction of publication year or language. Methodological research projects were considered eligible for inclusion if they reviewed a cohort of meta-analyses which (i) compared pooled effect estimates of meta-analyses of health care interventions according to publication status of data or (ii) examined whether the inclusion of unpublished or grey literature data impacts the result of a meta-analysis. Seven methodological research projects including 187 meta-analyses comparing pooled treatment effect estimates according to different publication status were identified. Two research projects showed that published data showed larger pooled treatment effects in favour of the intervention than unpublished or grey literature data (Ratio of ORs 1.15, 95% CI 1.04-1.28 and 1.34, 95% CI 1.09-1.66). In the remaining research projects pooled effect estimates and/or overall findings were not significantly changed by the inclusion of unpublished and/or grey literature data. The precision of the pooled estimate was increased with narrower 95% confidence interval. Although we may anticipate that systematic reviews and meta-analyses not including unpublished or grey literature study results are likely to overestimate the treatment effects, current empirical research shows that this is only the case in a minority of reviews. Therefore, currently, a meta-analyst should particularly consider time, effort and costs when adding such data to their analysis. Future research is needed to identify which reviews may benefit most from including unpublished or grey data.
Capesius, Joseph P.; Arnold, L. Rick
2012-01-01
The Mass Balance results were so variable over time that they appeared suspect with respect to the concept of groundwater flow being gradual and slow. The large degree of variability in the day-to-day and month-to-month Mass Balance results is likely the result of many factors. These factors could include ungaged stream inflows or outflows, short-term streamflow losses to and gains from temporary bank storage, and any lag in streamflow accounting owing to the lag time of flow within a reach. The Pilot Point time series results were much less variable than the Mass Balance results, and extreme values were effectively constrained. Less day-to-day variability, smaller-magnitude extreme values, and smoother transitions in base-flow estimates provided by the Pilot Point method are more consistent with a conceptual model of groundwater flow being gradual and slow. The Pilot Point method provided a better fit to the conceptual model of groundwater flow and appeared to provide reasonable estimates of base flow.
Castillo, A R; St-Pierre, N R; Silva del Rio, N; Weiss, W P
2013-05-01
Thirty-nine commercial dairies in Merced County, California, were enrolled in the present study to (1) compare lactating cow mineral intakes (via drinking water and total mixed ration) with the National Research Council (NRC) requirements, (2) evaluate the association between dietary concentrations of minerals with and without drinking water and adjusted for mineral concentrations in milk, and (3) compare 4 different methods to estimate excretion of minerals using either assays or estimations of milk mineral outputs and total daily mineral intake per cow with or without minerals coming from drinking water. Dairies were selected to represent a range of herd milk yields and a range of water mineral contents. Samples of total mixed ration, drinking water, and bulk tank milk were taken on 2 different days, 3 to 7 d apart, on each farm. Across-farm medians and percentile distributions were used to analyze results. The herd median milk yield ranged (10th to 90th percentile) from less than 25 to more than 39 kg/d, and the concentration of total solids in water ranged from less than 200 to more than 1,490 mg/L. Including drinking water minerals in the diets increased dietary concentrations by <4% for all minerals except Na and Cl, which increased by 9.3 and 6.5%, respectively. Concentrations of P and K in milk were essentially the same as the NRC values used to estimate lactation requirements. However, NRC milk values of Ca, Cl, and Zn were 10 to 20% greater than dairy farm values, and Na, Cu, Fe, and Mn were no less than 36% below NRC values. Estimated excretion of minerals via manure varied substantially across farms. Farms in the 10th percentile had 2 to 3 times lower estimated mineral excretion than those in the 90th percentile (depending on the mineral). Although including water minerals increased excretion of most minerals, the actual median effect for Ca, Mg, S, Cu, Fe, and Mn was less than 5%, and about 8% for Na and Cl. Replacing assayed concentrations of minerals in milk with NRC constants resulted in reduced estimated excretion of Ca, Na, Cu, Fe, and Zn, but median differences were <5% except for Na, which was 7.5%. Results indicate that not including mineral intake via drinking water and not using assayed concentrations of milk minerals lead to errors in estimating manure excretion of minerals (e.g., Ca, Na, Cl, and S). Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
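The excretion methods compared here are mass balances: intake from feed and water minus secretion in milk. A minimal sketch for one mineral and one cow, with hypothetical daily values rather than the study's assays, shows how the water pathway enters the arithmetic:

```python
# Hypothetical daily values for sodium (Na) for a single cow.
tmr_intake_kg = 23.0          # dry matter intake from total mixed ration
tmr_na_mg_per_kg = 4500.0     # assayed Na concentration in the TMR
water_l = 110.0               # drinking water intake
water_na_mg_per_l = 80.0      # assayed Na in drinking water
milk_kg = 32.0                # milk yield
milk_na_mg_per_kg = 400.0     # assayed Na in milk (vs. an NRC constant)

intake_mg = tmr_intake_kg * tmr_na_mg_per_kg + water_l * water_na_mg_per_l
excreted_g = (intake_mg - milk_kg * milk_na_mg_per_kg) / 1000.0

share_water = 100.0 * water_l * water_na_mg_per_l / intake_mg
print(f"Na excreted: {excreted_g:.0f} g/d; water supplies {share_water:.1f}% of intake")
```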
NASA Technical Reports Server (NTRS)
Rosenthal, W. D.; Mcfarland, M. J.; Theis, S. W.; Jones, C. L. (Principal Investigator)
1982-01-01
Agricultural crop classification models using two or more spectral regions (visible through microwave) were developed and tested and biomass was estimated by including microwave with visible and infrared data. The study was conducted at Guymon, Oklahoma and Dalhart, Texas utilizing aircraft multispectral data and ground truth soil moisture and biomass information. Results indicate that inclusion of C, L, and P band active microwave data from look angles greater than 35 deg from nadir with visible and infrared data improved crop discrimination and biomass estimates compared to results using only visible and infrared data. The active microwave frequencies were sensitive to different biomass levels. In addition, two indices, one using only active microwave data and the other using data from the middle and near infrared bands, were well correlated to total biomass.
Orbit/attitude estimation with LANDSAT Landmark data
NASA Technical Reports Server (NTRS)
Hall, D. L.; Waligora, S.
1979-01-01
The use of LANDSAT landmark data for orbit/attitude and camera bias estimation was studied. The preliminary results of these investigations are presented. The Goddard Trajectory Determination System (GTDS) error analysis capability was used to perform error analysis studies. A number of questions were addressed, including parameter observability and sensitivity, and the effects on the solve-for parameter errors of data span, density, and distribution, and of a priori covariance weighting. The use of the GTDS differential correction capability with actual landmark data was examined. The rms line and element observation residuals were studied as a function of the solve-for parameter set, a priori covariance weighting, force model, attitude model and data characteristics. Sample results are presented. Finally, verification and preliminary system evaluation of the LANDSAT NAVPAK system for sequential (extended Kalman filter) estimation of orbit and camera bias parameters is given.
Space shuttle propulsion estimation development verification, volume 1
NASA Technical Reports Server (NTRS)
Rogers, Robert M.
1989-01-01
The results of the Propulsion Estimation Development Verification effort are summarized. A computer program developed under a previous contract (NAS8-35324) was modified to include improved models for the Solid Rocket Booster (SRB) internal ballistics, the Space Shuttle Main Engine (SSME) power coefficient model, the vehicle dynamics using quaternions, and an improved Kalman filter algorithm based on the U-D factorized algorithm. As additional output, the estimated propulsion performance for each device is computed with the associated 1-sigma bounds. The outputs of the estimation program are provided in graphical plots. An additional effort was expended to examine the use of the estimation approach to evaluate single engine test data. In addition to the propulsion estimation program PFILTER, a program was developed to produce a best estimate of trajectory (BET). This program, LFILTER, also uses the U-D factorized form of the Kalman filter, as in the propulsion estimation program PFILTER. The necessary definitions and equations explaining the Kalman filtering approach for the PFILTER program, the models used in this application for dynamics and measurements, the program description, and program operation are presented.
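The U-D form stores the covariance as P = U D Uᵀ with U unit upper-triangular and D diagonal, which avoids squaring effects that degrade a conventional covariance update. A minimal numpy sketch follows; note it rebuilds and refactorizes P for clarity, whereas Bierman's algorithm (the kind of method PFILTER/LFILTER would use) updates U and D directly, and everything here is generic textbook material rather than the report's code.

```python
import numpy as np

def udu_factorize(P):
    """Factor symmetric positive-definite P as U @ diag(d) @ U.T,
    with U unit upper-triangular (the 'U-D' form)."""
    n = P.shape[0]
    P = P.copy()
    U = np.eye(n)
    d = np.zeros(n)
    for j in range(n - 1, -1, -1):
        d[j] = P[j, j]
        U[:j, j] = P[:j, j] / d[j]
        P[:j, :j] -= d[j] * np.outer(U[:j, j], U[:j, j])
    return U, d

def scalar_measurement_update(x, U, d, h, z, r):
    """Kalman update for scalar z = h @ x + noise with variance r.
    Illustrative only: rebuilds P, then refactorizes."""
    P = U @ np.diag(d) @ U.T
    s = float(h @ P @ h + r)          # innovation variance
    k = P @ h / s                     # Kalman gain
    x = x + k * (z - h @ x)
    P = P - np.outer(k, h @ P)        # (I - k h') P
    U, d = udu_factorize(P)
    return x, U, d

x = np.array([1.0, 0.1])
U, d = udu_factorize(np.diag([4.0, 1.0]))
x, U, d = scalar_measurement_update(x, U, d, np.array([1.0, 0.0]), z=1.3, r=0.5)
print(x, d)
```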
Fatality estimator user’s guide
Huso, Manuela M.; Som, Nicholas; Ladd, Lew
2012-12-11
Only carcasses judged to have been killed after the previous search should be included in the fatality data set submitted to this estimator software. This estimator already corrects for carcasses missed in previous searches, so carcasses judged to have been missed at least once should be considered “incidental” and not included in the fatality data set used to estimate fatality. Note: When observed carcass count is <5 (including 0 for species known to be at risk, but not observed), USGS Data Series 881 (http://pubs.usgs.gov/ds/0881/) is recommended for fatality estimation.
Multidimensional Poverty in China: Findings Based on the CHNS
ERIC Educational Resources Information Center
Yu, Jiantuo
2013-01-01
This paper estimates multidimensional poverty in China by applying the Alkire-Foster methodology to the China Health and Nutrition Survey 2000-2009 data. Five dimensions are included: income, living standard, education, health and social security. Results suggest that rapid economic growth has resulted not only in a reduction in income poverty but…
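The Alkire-Foster adjusted headcount ratio M0 that underlies such estimates is compact enough to sketch. Below is a toy computation with a hypothetical deprivation matrix over the paper's five dimensions, equal weights, and a cutoff k; none of the numbers come from the CHNS.

```python
import numpy as np

# Rows: households; columns: income, living standard, education,
# health, social security. 1 = deprived in that dimension.
g0 = np.array([
    [1, 1, 0, 1, 0],
    [0, 0, 0, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 1, 0, 0, 1],
])
w = np.full(5, 1 / 5)          # equal weights summing to 1
k = 1 / 3                      # poverty cutoff on the weighted score

score = g0 @ w                 # weighted deprivation score per household
poor = score >= k              # identification step
censored = np.where(poor[:, None], g0, 0)   # censor non-poor deprivations

H = poor.mean()                             # multidimensional headcount
A = (censored @ w)[poor].mean()             # average intensity among poor
M0 = H * A                                  # adjusted headcount ratio
print(f"H={H:.3f}, A={A:.3f}, M0={M0:.3f}")
```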
Geothermal resources and reserves in Indonesia: an updated revision
NASA Astrophysics Data System (ADS)
Fauzi, A.
2015-02-01
More than 300 high- to low-enthalpy geothermal sources have been identified throughout Indonesia. From the early 1980s until the late 1990s, the geothermal potential for power production in Indonesia was estimated to be about 20 000 MWe. The most recent estimate exceeds 29 000 MWe derived from the 300 sites (Geological Agency, December 2013). This resource estimate has been obtained by adding all of the estimated geothermal potential resources and reserves classified as "speculative", "hypothetical", "possible", "probable", and "proven" from all sites where such information is available. However, this approach to estimating the geothermal potential is flawed because it includes double counting of some reserve estimates as resource estimates, thus giving an inflated figure for the total national geothermal potential. This paper describes an updated revision of the geothermal resource estimate in Indonesia using a more realistic methodology. The methodology proposes that the preliminary "Speculative Resource" category should cover the full potential of a geothermal area and form the base reference figure for the resource of the area. Further investigation of this resource may improve the level of confidence of the category of reserves but will not necessarily increase the figure of the "preliminary resource estimate" as a whole, unless the result of the investigation is higher. A previous paper (Fauzi, 2013a, b) redefined and revised the geothermal resource estimate for Indonesia. The methodology, adopted from Fauzi (2013a, b), will be fully described in this paper. As a result of using the revised methodology, the potential geothermal resources and reserves for Indonesia are estimated to be about 24 000 MWe, some 5000 MWe less than the 2013 national estimate.
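The double-counting issue can be made concrete with a toy aggregation. In the sketch below (hypothetical site figures), the naive national total adds every category, so a reserve upgraded from a site's speculative resource is counted twice; the revised total takes, per site, the larger of the preliminary speculative estimate and the sum of the investigated categories, which is our reading of the proposed methodology.

```python
# Hypothetical MWe figures per site and category.
sites = {
    "site_A": {"speculative": 100, "hypothetical": 0, "possible": 60, "proven": 25},
    "site_B": {"speculative": 80, "hypothetical": 40, "possible": 0, "proven": 0},
}

# Naive approach: add every category at every site (double counts).
naive_total = sum(sum(cats.values()) for cats in sites.values())

# Revised approach: the speculative figure covers the site's full
# potential unless investigation yields a higher combined figure.
revised_total = sum(
    max(cats["speculative"],
        cats["hypothetical"] + cats["possible"] + cats["proven"])
    for cats in sites.values()
)
print(naive_total, revised_total)  # 305 vs 180 for this toy example
```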
The Spatial Distribution of Forest Biomass in the Brazilian Amazon: A Comparison of Estimates
NASA Technical Reports Server (NTRS)
Houghton, R. A.; Lawrence, J. L.; Hackler, J. L.; Brown, S.
2001-01-01
The amount of carbon released to the atmosphere as a result of deforestation is determined, in part, by the amount of carbon held in the biomass of the forests converted to other uses. Uncertainty in forest biomass is responsible for much of the uncertainty in current estimates of the flux of carbon from land-use change. We compared several estimates of forest biomass for the Brazilian Amazon, based on spatial interpolations of direct measurements, relationships to climatic variables, and remote sensing data. We asked three questions. First, do the methods yield similar estimates? Second, do they yield similar spatial patterns of distribution of biomass? And, third, what factors need most attention if we are to predict more accurately the distribution of forest biomass over large areas? Estimates of the biomass of Amazonian forests (including dead and below-ground biomass) vary by more than a factor of two, from a low of 39 PgC to a high of 93 PgC. Furthermore, the estimates disagree as to the regions of high and low biomass. The lack of agreement among estimates confirms the need for reliable determination of aboveground biomass over large areas. Potential methods include direct measurement of biomass through forest inventories with improved allometric regression equations, dynamic modeling of forest recovery following observed stand-replacing disturbances (the approach used in this research), and estimation of aboveground biomass from airborne or satellite-based instruments sensitive to the vertical structure of plant canopies.
CAREX Canada: an enhanced model for assessing occupational carcinogen exposure
Peters, Cheryl E; Ge, Calvin B; Hall, Amy L; Davies, Hugh W; Demers, Paul A
2015-01-01
Objectives To estimate the numbers of workers exposed to known and suspected occupational carcinogens in Canada, building on the methods of CARcinogen EXposure (CAREX) projects in the European Union (EU). Methods CAREX Canada consists of estimates of the prevalence and level of exposure to occupational carcinogens. CAREX Canada includes occupational agents evaluated by the International Agency for Research on Cancer as known, probable or possible human carcinogens that were present and feasible to assess in Canadian workplaces. A Canadian Workplace Exposure Database was established to identify the potential for exposure in particular industries and occupations, and to create exposure level estimates among priority agents, where possible. CAREX EU data were reviewed for relevance to the Canadian context and the proportion of workers likely to be exposed by industry and occupation in Canada was assigned using expert assessment and agreement by a minimum of two occupational hygienists. These proportions were used to generate prevalence estimates by linkage with the Census of Population for 2006, and these estimates are available by industry, occupation, sex and province. Results CAREX Canada estimated the number of workers exposed to 44 known, probable and suspected carcinogens. Estimates of levels of exposure were further developed for 18 priority agents. Common exposures included night shift work (1.9 million exposed), solar ultraviolet radiation exposure (1.5 million exposed) and diesel engine exhaust (781 000 exposed). Conclusions A substantial proportion of Canadian workers are exposed to known and suspected carcinogens at work. PMID:24969047
Improving the accuracy of Laplacian estimation with novel multipolar concentric ring electrodes
Ding, Quan; Besio, Walter G.
2015-01-01
Conventional electroencephalography with disc electrodes has major drawbacks including poor spatial resolution, selectivity and low signal-to-noise ratio that are critically limiting its use. Concentric ring electrodes, consisting of several elements including the central disc and a number of concentric rings, are a promising alternative with potential to improve all of the aforementioned aspects significantly. In our previous work, the tripolar concentric ring electrode was successfully used in a wide range of applications demonstrating its superiority to the conventional disc electrode, in particular in accuracy of Laplacian estimation. This paper takes the next step toward further improving the Laplacian estimation with novel multipolar concentric ring electrodes by completing and validating a general approach to estimation of the Laplacian for an (n + 1)-polar electrode with n rings using the (4n + 1)-point method for n ≥ 2, which allows cancellation of all the truncation terms up to the order of 2n. An explicit formula based on inversion of a square Vandermonde matrix is derived to make computation of the multipolar Laplacian more efficient. To confirm the analytic result that the accuracy of the Laplacian estimate increases with n, and to assess the significance of this gain in accuracy for practical applications, finite element method model analysis has been performed. Multipolar concentric ring electrode configurations with n ranging from 1 ring (bipolar electrode configuration) to 6 rings (septapolar electrode configuration) were directly compared, and the obtained results suggest the significance of the increase in Laplacian accuracy caused by the increase of n. PMID:26693200
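The weight computation behind this truncation-term cancellation reduces to a small linear solve. The sketch below is our reading of the idea, not the authors' explicit formula: for rings at radii r, 2r, ..., nr, the Taylor expansion of each ring-minus-disc difference contains even powers scaled by k^(2m) for ring k, so weights solving a Vandermonde-type system in k² keep the k² (Laplacian) term and zero the terms m = 2..n. For n = 2 it recovers the familiar tripolar 16:-1 weighting.

```python
import numpy as np

def laplacian_weights(n):
    """Weights w_k with sum w_k k^2 = 1 and sum w_k k^(2m) = 0, m=2..n."""
    k = np.arange(1, n + 1, dtype=float)
    # Row m holds k^(2(m+1)) for each ring k: a Vandermonde system in k^2.
    V = np.vander(k**2, n, increasing=True).T * (k**2)
    rhs = np.zeros(n)
    rhs[0] = 1.0          # keep the Laplacian term, cancel the rest
    return np.linalg.solve(V, rhs)

for n in (1, 2, 3):
    print(n, laplacian_weights(n))   # n=2 gives [4/3, -1/12], i.e. 16:-1
```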
Comparison of screening-level and Monte Carlo approaches for wildlife food web exposure modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pastorok, R.; Butcher, M.; LaTier, A.
1995-12-31
The implications of using quantitative uncertainty analysis (e.g., Monte Carlo) and site-specific tissue residue data for wildlife exposure modeling were examined with data on trace elements at the Clark Fork River Superfund Site. Exposure of white-tailed deer, red fox, and American kestrel was evaluated using three approaches. First, a screening-level exposure model was based on conservative estimates of exposure parameters, including estimates of dietary residues derived from bioconcentration factors (BCFs) and soil chemistry. A second model without Monte Carlo was based on site-specific data for tissue residues of trace elements (As, Cd, Cu, Pb, Zn) in key dietary species and plausible assumptions for habitat spatial segmentation and other exposure parameters. Dietary species sampled included dominant grasses (tufted hairgrass and redtop), willows, alfalfa, barley, invertebrates (grasshoppers, spiders, and beetles), and deer mice. Third, the Monte Carlo analysis was based on the site-specific residue data and assumed or estimated distributions for exposure parameters. Substantial uncertainties are associated with several exposure parameters, especially BCFs, such that exposure and risk may be greatly overestimated in screening-level approaches. The results of the three approaches are compared with respect to realism, practicality, and data gaps. Collection of site-specific data on trace element concentrations in plants and animals eaten by the target wildlife receptors is a cost-effective way to obtain realistic estimates of exposure. Implications of the results for exposure and risk estimates are discussed relative to use of wildlife exposure modeling and evaluation of remedial actions at Superfund sites.
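A Monte Carlo exposure model of the third kind propagates parameter distributions through a simple dose equation. The sketch below is generic and uses hypothetical distributions, not the Clark Fork River values:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Hypothetical parameter distributions for one receptor and pathway.
soil_conc = rng.lognormal(mean=np.log(50), sigma=0.5, size=n)   # mg/kg soil
bcf = rng.lognormal(mean=np.log(0.2), sigma=0.7, size=n)        # plant/soil ratio
intake = rng.triangular(0.5, 1.0, 1.5, size=n)                  # kg dry diet/day
body_wt = rng.normal(60, 5, size=n)                             # kg

# Daily dose (mg per kg body weight per day) via plant ingestion.
dose = soil_conc * bcf * intake / body_wt

print(f"median dose: {np.median(dose):.3f} mg/kg-d")
print(f"95th percentile: {np.percentile(dose, 95):.3f} mg/kg-d")
```

The spread between the median and the upper percentile is what distinguishes this approach from a screening model built entirely from conservative point values.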
Green, Kerry M.; Stuart, Elizabeth A.
2014-01-01
Objective This study provides guidance on how propensity score methods can be combined with moderation analyses (i.e., effect modification) to examine subgroup differences in potential causal effects in non-experimental studies. As a motivating example, we focus on how depression may affect subsequent substance use differently for men and women. Method Using data from a longitudinal community cohort study (N=952) of urban African Americans with assessments in childhood, adolescence, young adulthood and midlife, we estimate the influence of depression by young adulthood on substance use outcomes in midlife, and whether that influence varies by gender. We illustrate and compare five different techniques for estimating subgroup effects using propensity score methods, including separate propensity score models and matching for men and women, a joint propensity score model for men and women with matching separately and together by gender, and a joint male/female propensity score model that includes theoretically important gender interactions with matching separately and together by gender. Results Analyses showed that estimating separate models for men and women yielded the best balance and, therefore, is a preferred technique when subgroup analyses are of interest, at least in these data. Results also showed substance use consequences of depression but no significant gender differences. Conclusions It is critical to prespecify subgroup effects before the estimation of propensity scores and to check balance within subgroups regardless of the type of propensity score model used. Results also suggest that depression may affect multiple substance use outcomes in midlife for both men and women relatively equally. PMID:24731233
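The best-performing strategy, separate propensity models with matching within each gender, can be sketched as follows. This is a minimal illustration on synthetic data with hypothetical column names ('depressed' as the exposure, x1/x2 as confounders), not the study's analysis, and it checks balance with a standardized mean difference as the abstract recommends.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def match_within_group(df, covars):
    """Fit a propensity model and do 1:1 nearest-neighbour matching
    on the propensity score (with replacement) within one subgroup."""
    ps_model = LogisticRegression().fit(df[covars], df["depressed"])
    df = df.assign(ps=ps_model.predict_proba(df[covars])[:, 1])
    treated = df[df["depressed"] == 1]
    control = df[df["depressed"] == 0]
    idx = [(control["ps"] - p).abs().idxmin() for p in treated["ps"]]
    return pd.concat([treated, control.loc[idx]])

def std_mean_diff(df, col):
    a, b = df[df["depressed"] == 1][col], df[df["depressed"] == 0][col]
    return (a.mean() - b.mean()) / np.sqrt((a.var() + b.var()) / 2)

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "gender": rng.integers(0, 2, 952),
    "x1": rng.normal(size=952),
    "x2": rng.normal(size=952),
})
df["depressed"] = (rng.random(952) < 1 / (1 + np.exp(-df["x1"]))).astype(int)

matched = pd.concat(
    match_within_group(g, ["x1", "x2"]) for _, g in df.groupby("gender")
)
print("SMD for x1 after matching:", round(std_mean_diff(matched, "x1"), 3))
```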
Johansen, M P; Barnett, C L; Beresford, N A; Brown, J E; Černe, M; Howard, B J; Kamboj, S; Keum, D-K; Smodiš, B; Twining, J R; Vandenhove, H; Vives i Batlle, J; Wood, M D; Yu, C
2012-06-15
Radiological doses to terrestrial wildlife were examined in this model inter-comparison study that emphasised factors causing variability in dose estimation. The study participants used varying modelling approaches and information sources to estimate dose rates and tissue concentrations for a range of biota types exposed to soil contamination at a shallow radionuclide waste burial site in Australia. Results indicated that the dominant factor causing variation in dose rate estimates (up to three orders of magnitude on mean total dose rates) was the soil-to-organism transfer of radionuclides that included variation in transfer parameter values as well as transfer calculation methods. Additional variation was associated with other modelling factors including: how participants conceptualised and modelled the exposure configurations (two orders of magnitude); which progeny to include with the parent radionuclide (typically less than one order of magnitude); and dose calculation parameters, including radiation weighting factors and dose conversion coefficients (typically less than one order of magnitude). Probabilistic approaches to model parameterisation were used to encompass and describe variable model parameters and outcomes. The study confirms the need for continued evaluation of the underlying mechanisms governing soil-to-organism transfer of radionuclides to improve estimation of dose rates to terrestrial wildlife. The exposure pathways and configurations available in most current codes are limited when considering instances where organisms access subsurface contamination through rooting, burrowing, or using different localised waste areas as part of their habitual routines. Crown Copyright © 2012. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Kim, Y.; Kimball, J. S.; PARK, H.; Yi, Y.
2017-12-01
The Boreal-Arctic region has experienced greater surface air temperature (SAT) warming than the global average in recent decades, which is promoting permafrost thawing and active layer deepening. Permafrost extent (PE) and active layer thickness (ALT) are key environmental indicators of recent climate change, and strongly impact other eco-hydrological processes including land-atmosphere carbon exchange. We developed a new approach for regional estimation and monitoring of PE using daily landscape freeze-thaw (FT) records derived from satellite microwave (37 GHz) brightness temperature (Tb) observations. ALT was estimated within the PE domain using empirical modeling of land cover dependent edaphic factors and an annual thawing index derived from MODIS land surface temperature (LST) observations and reanalysis-based SAT. The PE and ALT estimates were derived over the 1980-2016 satellite record and the NASA ABoVE (Arctic Boreal Vulnerability Experiment) domain encompassing Alaska and Northwest Canada. The baseline model estimates were derived at 25-km resolution, consistent with the satellite FT global record. Our results show recent widespread PE decline and deepening ALT trends, with larger spatial variability and model uncertainty along the southern PE boundary. Larger PE and ALT variability occurs over heterogeneous permafrost subzones characterized by dense vegetation and variable snow cover and organic layer conditions. We also tested alternative PE and ALT estimates derived using finer (6-km) scale satellite Tb (36.5 GHz) and FT retrievals from a calibrated AMSR-E and AMSR2 sensor record. The PE and ALT results were compared against other independent observations, including process model simulations, in situ measurements, and permafrost inventory records. A model sensitivity analysis was conducted to evaluate snow cover, soil organic layer, and vegetation composition impacts on ALT. The finer delineation of permafrost and active layer conditions provides enhanced regional monitoring of PE and ALT changes over the ABoVE domain, including heterogeneous permafrost subzones.
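Empirical ALT models driven by a thawing index commonly take a Stefan-solution form, ALT proportional to the square root of the seasonal above-freezing degree-day sum. The sketch below is a hedged illustration of that general form only; the edaphic coefficient E is a hypothetical land-cover dependent value, not the paper's calibrated factors.

```python
import numpy as np

def active_layer_thickness(daily_lst_c, e_factor):
    """ALT (m) ~ E * sqrt(thawing degree-day sum)."""
    thawing_index = np.clip(daily_lst_c, 0, None).sum()  # degC-days
    return e_factor * np.sqrt(thawing_index)

# Toy annual temperature series: sinusoid peaking near 10 degC in summer.
days = np.arange(365)
lst = 12 * np.sin(2 * np.pi * (days - 80) / 365) - 2
print(f"ALT ~ {active_layer_thickness(lst, e_factor=0.02):.2f} m")
```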
Recovery from PTSD following Hurricane Katrina
McLaughlin, Katie A.; Berglund, Patricia; Gruber, Michael J.; Kessler, Ronald C.; Sampson, Nancy A.; Zaslavsky, Alan M.
2011-01-01
Background We examined patterns and correlates of speed of recovery of estimated posttraumatic stress disorder (PTSD) among people who developed PTSD in the wake of Hurricane Katrina. Method A probability sample of pre-hurricane residents of areas affected by Hurricane Katrina was administered a telephone survey 7-19 months following the hurricane and again 24-27 months post-hurricane. The baseline survey assessed PTSD using a validated screening scale and assessed a number of hypothesized predictors of PTSD recovery that included socio-demographics, pre-hurricane history of psychopathology, hurricane-related stressors, social support, and social competence. Exposure to post-hurricane stressors and course of estimated PTSD were assessed in a follow-up interview. Results An estimated 17.1% of respondents had a history of estimated hurricane-related PTSD at baseline and 29.2% by the follow-up survey. Of the respondents who developed estimated hurricane-related PTSD, 39.0% recovered by the time of the follow-up survey with a mean duration of 16.5 months. Predictors of slow recovery included exposure to a life-threatening situation, hurricane-related housing adversity, and high income. Other socio-demographics, history of psychopathology, social support, social competence, and post-hurricane stressors were unrelated to recovery from estimated PTSD. Conclusions The majority of adults who developed estimated PTSD after Hurricane Katrina did not recover within 18-27 months. Delayed onset was common. Findings document the importance of initial trauma exposure severity in predicting course of illness and suggest that pre- and post-trauma factors typically associated with course of estimated PTSD did not influence recovery following Hurricane Katrina. PMID:21308887
Improvements in prevalence trend fitting and incidence estimation in EPP 2013
Brown, Tim; Bao, Le; Eaton, Jeffrey W.; Hogan, Daniel R.; Mahy, Mary; Marsh, Kimberly; Mathers, Bradley M.; Puckett, Robert
2014-01-01
Objective: Describe modifications to the latest version of the Joint United Nations Programme on AIDS (UNAIDS) Estimation and Projection Package component of Spectrum (EPP 2013) to improve prevalence fitting and incidence trend estimation in national epidemics and global estimates of HIV burden. Methods: Key changes made under the guidance of the UNAIDS Reference Group on Estimates, Modelling and Projections include: availability of a range of incidence calculation models and guidance for selecting a model; a shift to reporting the Bayesian median instead of the maximum likelihood estimate; procedures for comparison and validation against reported HIV and AIDS data; incorporation of national surveys as an integral part of the fitting and calibration procedure, allowing survey trends to inform the fit; improved antenatal clinic calibration procedures in countries without surveys; adjustment of national antiretroviral therapy reports used in the fitting to include only those aged 15–49 years; better estimates of mortality among people who inject drugs; and enhancements to speed fitting. Results: The revised models in EPP 2013 allow closer fits to observed prevalence trend data and reflect improving understanding of HIV epidemics and associated data. Conclusion: Spectrum and EPP continue to adapt to make better use of the existing data sources, incorporate new sources of information in their fitting and validation procedures, and correct for quantifiable biases in inputs as they are identified and understood. These adaptations provide countries with better calibrated estimates of incidence and prevalence, which increase epidemic understanding and provide a solid base for program and policy planning. PMID:25406747
Jacobs, Philip; Lier, Douglas; Gooch, Katherine; Buesch, Katharina; Lorimer, Michelle; Mitchell, Ian
2013-01-01
BACKGROUND: Approximately one in 10 hospitalized patients will acquire a nosocomial infection (NI) after admission to hospital, of which 71% are due to respiratory viruses, including the respiratory syncytial virus (RSV). NIs are concerning and lead to prolonged hospitalizations. The economics of NIs are typically described in generalized terms and specific cost data are lacking. OBJECTIVE: To develop an evidence-based model for predicting the risk and cost of nosocomial RSV infection in pediatric settings. METHODS: A model was developed, from a Canadian perspective, to capture all costs related to an RSV infection hospitalization, including the risk and cost of an NI, diagnostic testing and infection control. All data inputs were derived from published literature. Deterministic sensitivity analyses were performed to evaluate the uncertainty associated with the estimates and to explore the impact of changes to key variables. A probabilistic sensitivity analysis was performed to estimate a confidence interval for the overall cost estimate. RESULTS: The estimated cost of nosocomial RSV infection adds approximately 30.5% to the hospitalization costs for the treatment of community-acquired severe RSV infection. The net benefits of the prevention activities were estimated to be equivalent to 9% of the total RSV-related costs. Changes in the estimated hospital infection transmission rates did not have a significant impact on the base-case estimate. CONCLUSIONS: The risk and cost of nosocomial RSV infection contributes to the overall burden of RSV. The present model, which was developed to estimate this burden, can be adapted to other countries with different disease epidemiology, costs and hospital infection transmission rates. PMID:24421788
Jayaraman, Jayakumar; Wong, Hai Ming; King, Nigel M; Roberts, Graham J
2013-07-01
Estimation of the age of an individual can be performed by evaluating the pattern of dental development. A dataset for age estimation based on the dental maturity of a French-Canadian population was published over 35 years ago and has become the most widely accepted dataset. The applicability of this dataset has been tested on different population groups. The aim was to estimate the observed differences between chronological age (CA) and dental age (DA) when the French-Canadian dataset was used to estimate the age of different population groups. A systematic search of the literature for papers utilizing the French-Canadian dataset for age estimation was performed. All-language articles in the PubMed, Embase and Cochrane databases were electronically searched for the terms 'Demirjian' and 'Dental age' published between January 1973 and December 2011. A hand search of articles was also conducted. A total of 274 studies were identified, from which 34 studies were included for qualitative analysis and 12 studies were included for quantitative assessment and meta-analysis. When synthesizing the estimation results from different population groups, on average, the Demirjian dataset overestimated the age of females by 0.65 years (-0.10 years to +2.82 years) and of males by 0.60 years (-0.23 years to +3.04 years). The French-Canadian dataset overestimates the age of subjects by more than six months, and hence this dataset should be used only with considerable caution when estimating the age of groups of subjects from any global population. Copyright © 2013 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
2012-01-01
Background Researchers and policy makers have determined that accounting for productivity costs, or “indirect costs,” may be as important as including direct medical expenditures when evaluating the societal value of health interventions. These costs are also important when estimating the global burden of disease. The estimation of indirect costs is commonly done on a country-specific basis. However, there are few studies that evaluate indirect costs across countries using a consistent methodology. Methods Using the human capital approach, we developed a model that estimates productivity costs as the present value of lifetime earnings (PVLE) lost due to premature mortality. Applying this methodology, the model estimates productivity costs for 29 selected countries, both developed and emerging. We also provide an illustration of how the inclusion of productivity costs contributes to an analysis of the societal burden of smoking. A sensitivity analysis is undertaken to assess productivity costs on the basis of the friction cost approach. Results PVLE estimates were higher for certain subpopulations, such as men, younger people, and people in developed countries. In the case study, productivity cost estimates from our model showed that productivity loss was a substantial share of the total cost burden of premature mortality due to smoking, accounting for over 75 % of total lifetime costs in the United States and 67 % of total lifetime costs in Brazil. Productivity costs were much lower using the friction cost approach among those of working age. Conclusions Our PVLE model is a novel tool allowing researchers to incorporate the value of lost productivity due to premature mortality into economic analyses of treatments for diseases or health interventions. We provide PVLE estimates for a number of emerging and developed countries. Including productivity costs in a health economics study allows for a more comprehensive analysis, and, as demonstrated by our illustration, can have important effects on the results and conclusions. PMID:22731620
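The human capital calculation described here is, at its core, a discounted sum of expected future earnings. A minimal sketch follows, with hypothetical inputs (a flat earnings profile, a constant annual survival probability, and fixed growth and discount rates) rather than the study's country-specific data:

```python
def pvle(age_at_death, retirement_age=65, annual_earnings=40_000.0,
         survival=0.99, growth=0.01, discount=0.03):
    """Present value of lifetime earnings lost from age_at_death onward."""
    total = 0.0
    for t, age in enumerate(range(age_at_death, retirement_age)):
        total += (annual_earnings
                  * survival ** t          # probability of surviving year t
                  * (1 + growth) ** t      # real earnings growth
                  / (1 + discount) ** t)   # discounting to present value
    return total

# Earlier deaths forgo more discounted earnings, consistent with the
# finding that PVLE is higher for younger decedents.
for age in (25, 45, 60):
    print(age, round(pvle(age)))
```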
Hejl, H.R.
1989-01-01
The precipitation-runoff modeling system was applied to the 8.21 sq mi drainage area of the Ah-shi-sle-pah Wash watershed in northwestern New Mexico. The calibration periods were May to September of 1981 and 1982, and the verification period was May to September 1983. Twelve storms were available for calibration and 8 storms were available for verification. For calibration A (hydraulic conductivity estimated from onsite data and other storm-mode parameters optimized), the computed standard error of estimate was 50% for runoff volumes and 72% for peak discharges. Calibration B included hydraulic conductivity in the optimization, which reduced the standard error of estimate to 28% for runoff volumes and 50% for peak discharges. Optimized values for hydraulic conductivity resulted in reductions from 1.00 to 0.26 in/h and from 0.20 to 0.03 in/h for the 2 general soil groups in the calibrations. Simulated runoff volumes using 7 of the 8 storms occurring during the verification period had a standard error of estimate of 40% for verification A and 38% for verification B. Simulated peak discharges had a standard error of estimate of 120% for verification A and 56% for verification B. Including the eighth storm, which had a relatively small magnitude, in the verification analysis more than doubled the standard errors of estimate for volumes and peaks. (USGS)
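As a small illustration, a percent standard error of estimate comparing simulated with observed values can be computed as below. This is one common form of the statistic and may differ in detail (for example, log-space residuals) from the USGS computation behind the figures above.

```python
import numpy as np

def se_percent(simulated, observed):
    """Root-mean-square relative residual, in percent (assumed form)."""
    sim, obs = np.asarray(simulated, float), np.asarray(observed, float)
    resid = (sim - obs) / obs
    return 100 * np.sqrt(np.mean(resid**2))

sim = [10.2, 8.8, 15.1]   # hypothetical simulated peak discharges
obs = [9.0, 10.0, 12.0]   # hypothetical observed values
print(f"SE of estimate: {se_percent(sim, obs):.0f}%")
```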
Estimating maneuvers for precise relative orbit determination using GPS
NASA Astrophysics Data System (ADS)
Allende-Alba, Gerardo; Montenbruck, Oliver; Ardaens, Jean-Sébastien; Wermuth, Martin; Hugentobler, Urs
2017-01-01
Precise relative orbit determination is an essential element for the generation of science products from distributed instrumentation of formation flying satellites in low Earth orbit. According to the mission profile, the required formation is typically maintained and/or controlled by executing maneuvers. In order to generate consistent and precise orbit products, a strategy for maneuver handling is mandatory in order to avoid discontinuities or precision degradation before, after and during maneuver execution. Precise orbit determination offers the possibility of maneuver estimation in an adjustment of single-satellite trajectories using GPS measurements. However, a consistent formulation of a precise relative orbit determination scheme requires the implementation of a maneuver estimation strategy which can be used, in addition, to improve the precision of maneuver estimates by drawing upon the use of differential GPS measurements. The present study introduces a method for precise relative orbit determination based on a reduced-dynamic batch processing of differential GPS pseudorange and carrier phase measurements, which includes maneuver estimation as part of the relative orbit adjustment. The proposed method has been validated using flight data from space missions with different rates of maneuvering activity, including the GRACE, TanDEM-X and PRISMA missions. The results show the feasibility of obtaining precise relative orbits without degradation in the vicinity of maneuvers as well as improved maneuver estimates that can be used for better maneuver planning in flight dynamics operations.
Brenner, Hermann; Jansen, Lina
2016-02-01
Monitoring cancer survival is a key task of cancer registries, but timely disclosure of progress in long-term survival remains a challenge. We introduce and evaluate a novel method, denoted "boomerang method," for deriving more up-to-date estimates of long-term survival. We applied three established methods (cohort, complete, and period analysis) and the boomerang method to derive up-to-date 10-year relative survival of patients diagnosed with common solid cancers and hematological malignancies in the United States. Using the Surveillance, Epidemiology and End Results 9 database, we compared the most up-to-date age-specific estimates that might have been obtained with the database including patients diagnosed up to 2001 with 10-year survival later observed for patients diagnosed in 1997-2001. For cancers with little or no increase in survival over time, the various estimates of 10-year relative survival potentially available by the end of 2001 were generally rather similar. For malignancies with strongly increasing survival over time, including breast and prostate cancer and all hematological malignancies, the boomerang method provided estimates that were closest to later observed 10-year relative survival in 23 of the 34 groups assessed. The boomerang method can substantially improve up-to-dateness of long-term cancer survival estimates in times of ongoing improvement in prognosis. Copyright © 2016 Elsevier Inc. All rights reserved.
Regional estimation of extreme suspended sediment concentrations using watershed characteristics
NASA Astrophysics Data System (ADS)
Tramblay, Yves; Ouarda, Taha B. M. J.; St-Hilaire, André; Poulin, Jimmy
2010-01-01
Summary: The number of stations monitoring daily suspended sediment concentration (SSC) has been decreasing since the 1980s in North America, while suspended sediment is considered a key variable for water quality. The objective of this study is to test the feasibility of regionalising extreme SSC, i.e. estimating extreme SSC values for ungauged basins. Annual maximum SSC for 72 rivers in Canada and the USA were modelled with probability distributions in order to estimate quantiles corresponding to different return periods. Regionalisation techniques, originally developed for flood prediction in ungauged basins, were tested using the climatic, topographic, land cover and soils attributes of the watersheds. Two approaches were compared, using either physiographic characteristics or seasonality of extreme SSC to delineate the regions. Multiple regression models to estimate SSC quantiles as a function of watershed characteristics were built in each region and compared to a global model including all sites. Regional estimates of SSC quantiles were compared with the local values. Results show that regional estimation of extreme SSC is more efficient than a global regression model including all sites. Groups/regions of stations were identified, using either the watershed characteristics or the seasonality of occurrence of extreme SSC values, providing a method to better describe the extreme events of SSC. The most important variables for predicting extreme SSC are the percentage of clay in the soils, precipitation intensity and forest cover.
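The two-stage idea, fit a distribution to each station's annual maxima and then regress the resulting quantiles on watershed attributes, can be sketched with synthetic data. Everything below is hypothetical (a GEV fit per site, then a log-linear regression on the three predictors the study found most important); the study's distribution choices and regional groupings are not reproduced.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sites = 30
clay_pct = rng.uniform(5, 40, n_sites)
precip_int = rng.uniform(10, 60, n_sites)    # design precipitation intensity
forest_pct = rng.uniform(10, 90, n_sites)

q10 = np.empty(n_sites)                      # 10-year SSC quantile per site
for i in range(n_sites):
    annual_max = rng.lognormal(3 + 0.03 * clay_pct[i], 0.6, size=25)
    shape, loc, scale = stats.genextreme.fit(annual_max)
    q10[i] = stats.genextreme.ppf(1 - 1/10, shape, loc=loc, scale=scale)

# Regional regression: log-quantile on watershed characteristics.
X = np.column_stack([np.ones(n_sites), clay_pct, precip_int, forest_pct])
beta, *_ = np.linalg.lstsq(X, np.log(q10), rcond=None)

x_new = np.array([1.0, 25.0, 35.0, 50.0])    # attributes of an ungauged basin
print("predicted 10-yr SSC quantile:", round(float(np.exp(x_new @ beta)), 1))
```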
Chan, Kwun Chuen Gary; Yam, Sheung Chi Phillip; Zhang, Zheng
2015-01-01
The estimation of average treatment effects based on observational data is extremely important in practice and has been studied by generations of statisticians under different frameworks. Existing globally efficient estimators require non-parametric estimation of a propensity score function, an outcome regression function or both, but their performance can be poor in practical sample sizes. Without explicitly estimating either function, we consider a wide class of calibration weights constructed to attain an exact three-way balance of the moments of observed covariates among the treated, the control, and the combined group. The wide class includes exponential tilting, empirical likelihood and generalized regression as important special cases, and extends survey calibration estimators to different statistical problems and with important distinctions. Global semiparametric efficiency for the estimation of average treatment effects is established for this general class of calibration estimators. The results show that efficiency can be achieved by solely balancing the covariate distributions without resorting to direct estimation of the propensity score or the outcome regression function. We also propose a consistent estimator for the efficient asymptotic variance, which does not involve additional functional estimation of either the propensity score or the outcome regression functions. The proposed variance estimator outperforms existing estimators that require a direct approximation of the efficient influence function. PMID:27346982
Chan, Kwun Chuen Gary; Yam, Sheung Chi Phillip; Zhang, Zheng
2016-06-01
The estimation of average treatment effects based on observational data is extremely important in practice and has been studied by generations of statisticians under different frameworks. Existing globally efficient estimators require non-parametric estimation of a propensity score function, an outcome regression function or both, but their performance can be poor in practical sample sizes. Without explicitly estimating either function, we consider a wide class of calibration weights constructed to attain an exact three-way balance of the moments of observed covariates among the treated, the control, and the combined group. The wide class includes exponential tilting, empirical likelihood and generalized regression as important special cases, and extends survey calibration estimators to different statistical problems and with important distinctions. Global semiparametric efficiency for the estimation of average treatment effects is established for this general class of calibration estimators. The results show that efficiency can be achieved by solely balancing the covariate distributions without resorting to direct estimation of the propensity score or the outcome regression function. We also propose a consistent estimator for the efficient asymptotic variance, which does not involve additional functional estimation of either the propensity score or the outcome regression functions. The proposed variance estimator outperforms existing estimators that require a direct approximation of the efficient influence function.
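To make the balancing idea concrete, here is a sketch of calibration weights via exponential tilting, one of the special cases named above: the convex dual below is solved for weights on the treated units whose covariate moments exactly match those of the combined sample. Data are synthetic, and the setup is simplified to one of the three balance conditions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

# Covariates: combined sample and the treated subsample (hypothetical data).
X_all = rng.normal(0, 1, size=(500, 3))
treated = rng.random(500) < 0.4
X_t = np.column_stack([np.ones(treated.sum()), X_all[treated]])
target = np.column_stack([np.ones(500), X_all]).sum(axis=0)  # combined-sample moments

# Exponential-tilting dual: minimizing sum(exp(X lam)) - lam'target yields weights
# w_i = exp(x_i'lam) whose weighted covariate moments match the target exactly.
def dual(lam):
    return np.exp(X_t @ lam).sum() - lam @ target

res = minimize(dual, np.zeros(4), method="BFGS")
w = np.exp(X_t @ res.x)
print("weighted moments:", (w[:, None] * X_t).sum(axis=0).round(2))
print("target moments:  ", target.round(2))
```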
A systematic review of the reporting of tinnitus prevalence and severity.
McCormack, Abby; Edmondson-Jones, Mark; Somerset, Sarah; Hall, Deborah
2016-07-01
There is no standard diagnostic criterion for tinnitus, although some clinical assessment instruments do exist for identifying patient complaints. Within epidemiological studies the presence of tinnitus is determined primarily by self-report, typically in response to a single question. With these methods, prevalence figures vary widely. Given the variety of published estimates worldwide, we assessed and collated published prevalence estimates of tinnitus and tinnitus severity, creating a narrative synthesis of the data. The variability between prevalence estimates was investigated in order to determine any barriers to data synthesis and to identify reasons for heterogeneity. A systematic review included all adult population studies reporting the prevalence of tinnitus from January 1980 to July 2015. We searched five databases (Embase, Medline, PsychInfo, CINAHL and Web Of Science), using a combination of medical subject headings (MeSH) and relevant text words. Observational studies including cross-sectional studies were included, but studies estimating the incidence of tinnitus (e.g. cohort studies) were outside the scope of this systematic review. The databases identified 875 papers and a further 16 were identified through manual searching. After duplicates were removed, 515 remained. On the basis of the title, abstract and full-text screening, 400, 48 and 27 papers respectively were removed. This left 40 papers, reporting 39 different studies, for data extraction. Sixteen countries were represented, with the majority of the studies from the European region (38.5%). Publications since 2010 represented half of all included studies (48.7%). Overall prevalence figures for each study ranged from 5.1% to 42.7%. For the 12 studies that used the same definition of tinnitus, prevalence ranged from 11.9% to 30.3%. Twenty-six studies (66.7%) reported tinnitus prevalence by different age groups, and generally showed an increase in prevalence as age increases. Half the studies reported tinnitus prevalence by gender. The pattern generally showed higher tinnitus prevalence among males than females. There were 8 different types of definitions of tinnitus, the most common being "tinnitus lasting for more than five minutes at a time" (34.3%). Only seven studies gave any justification for the question that was used, or acknowledged the lack of standard questions for tinnitus. There is widespread inconsistency in defining and reporting tinnitus, leading to variability in prevalence estimates among studies. Nearly half of the included studies had a high risk of bias and this limits the generalisability of prevalence estimates. In addition, the available prevalence data are heterogeneous, preventing pooling of the data for meta-analysis. Sources of heterogeneity include different diagnostic criteria, different age groups, different study focus and differences in reporting and analysis of the results. Heterogeneity thus made comparison across studies impracticable. Deriving global estimates of the prevalence of tinnitus involves combining results from studies which are consistent in their definition and measurement of tinnitus, survey methodology and in the reporting and analysis of the results. Ultimately comparison among studies is unachievable without such consistency. The strength of this systematic review is in providing a record of all the available, recent epidemiological data in each global region and in making recommendations for promoting standardisation.
Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Kirchengast, Gottfried; Li, Ying; Scherllin-Pirscher, Barbara; Schwärz, Marc; Schwarz, Jakob; Nielsen, Johannes K.
2017-04-01
The GNSS radio occultation (RO) technique is an important remote sensing technique for obtaining thermodynamic profiles of temperature, humidity, and pressure in the Earth's troposphere. However, due to refraction effects of both dry ambient air and water vapor in the troposphere, retrieval of accurate thermodynamic profiles at these lower altitudes is challenging and requires suitable background information in addition to the RO refractivity information. Here we introduce a new moist air retrieval algorithm aiming to improve the quality and robustness of retrieving temperature, humidity and pressure profiles in moist air tropospheric conditions. The new algorithm consists of four steps: (1) use of prescribed specific humidity and its uncertainty to retrieve temperature and its associated uncertainty; (2) use of prescribed temperature and its uncertainty to retrieve specific humidity and its associated uncertainty; (3) use of the previous results to estimate final temperature and specific humidity profiles through optimal estimation; (4) determination of air pressure and density profiles from the results obtained before. The new algorithm does not require the elaborate matrix inversions that are otherwise widely used in 1D-Var retrieval algorithms, and it allows a transparent uncertainty propagation, whereby the uncertainties of prescribed variables are dynamically estimated accounting for their spatial and temporal variations. Estimated random uncertainties are calculated by constructing error covariance matrices from co-located ECMWF short-range forecast and corresponding analysis profiles. Systematic uncertainties are estimated by empirical modeling. The influence of regarding or disregarding vertical error correlations is quantified. The new scheme is implemented with static input uncertainty profiles in WEGC's current OPSv5.6 processing system and with full scope in WEGC's next-generation system, the Reference Occultation Processing System (rOPS). Results from both WEGC systems, the current OPSv5.6 and the next-generation rOPS, are shown and discussed, based on insights from both individual profiles and statistical ensembles, and compared to moist air retrieval results from the UCAR Boulder and ROM-SAF Copenhagen centers. The results show that the new algorithmic scheme improves the temperature, humidity and pressure retrieval performance over the previous algorithms, in particular its robustness, including integrated uncertainty estimation for large-scale applications. The new rOPS-implemented algorithm will therefore be used in the first large-scale reprocessing towards a tropospheric climate data record 2001-2016 by the rOPS, including its integrated uncertainty propagation.
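The level-by-level essence of step (3) can be illustrated with a scalar inverse-variance combination, which is what optimal estimation reduces to when vertical error correlations are disregarded; the profiles and uncertainties below are invented for the sketch.

```python
import numpy as np

# Hypothetical profiles: a retrieved estimate from steps (1)/(2) and a background
# estimate, each with its own uncertainty profile (3 levels shown).
T_ret  = np.array([280.0, 270.0, 255.0])   # K, retrieval with prescribed humidity
sig_r  = np.array([1.5, 1.0, 0.8])
T_bg   = np.array([281.0, 269.0, 256.0])   # K, background (e.g. forecast)
sig_b  = np.array([2.0, 2.0, 1.5])

# Inverse-variance (optimal) combination at each level: the diagonal case of
# optimal estimation, valid when vertical error correlations are neglected.
w_r, w_b = 1 / sig_r**2, 1 / sig_b**2
T_opt = (w_r * T_ret + w_b * T_bg) / (w_r + w_b)
sig_opt = np.sqrt(1 / (w_r + w_b))          # combined uncertainty, always <= both inputs
print(T_opt.round(2), sig_opt.round(2))
```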
Eluru, Naveen; Chakour, Vincent; Chamberlain, Morgan; Miranda-Moreno, Luis F
2013-10-01
Vehicle operating speed measured on roadways is a critical component for a host of analysis in the transportation field including transportation safety, traffic flow modeling, roadway geometric design, vehicle emissions modeling, and road user route decisions. The current research effort contributes to the literature on examining vehicle speed on urban roads methodologically and substantively. In terms of methodology, we formulate a new econometric model framework for examining speed profiles. The proposed model is an ordered response formulation of a fractional split model. The ordered nature of the speed variable allows us to propose an ordered variant of the fractional split model in the literature. The proposed formulation allows us to model the proportion of vehicles traveling in each speed interval for the entire segment of roadway. We extend the model to allow the influence of exogenous variables to vary across the population. Further, we develop a panel mixed version of the fractional split model to account for the influence of site-specific unobserved effects. The paper contributes substantively by estimating the proposed model using a unique dataset from Montreal consisting of weekly speed data (collected in hourly intervals) for about 50 local roads and 70 arterial roads. We estimate separate models for local roads and arterial roads. The model estimation exercise considers a whole host of variables including geometric design attributes, roadway attributes, traffic characteristics and environmental factors. The model results highlight the role of various street characteristics including number of lanes, presence of parking, presence of sidewalks, vertical grade, and bicycle route on vehicle speed proportions. The results also highlight the presence of site-specific unobserved effects influencing the speed distribution. The parameters from the modeling exercise are validated using a hold-out sample not considered for model estimation. The results indicate that the proposed panel mixed ordered probit fractional split model offers promise for modeling such proportional ordinal variables. Copyright © 2013 Elsevier Ltd. All rights reserved.
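A compact sketch of the core likelihood in an ordered probit fractional split model, under simplifying assumptions (no random parameters or panel mixing, synthetic covariates): observed speed-interval proportions weight the ordered-probit interval probabilities.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(5)

# Hypothetical data: for each road segment, the observed proportion of vehicles
# in each of 4 ordered speed intervals, plus segment covariates.
n_seg, n_cat = 60, 4
X = np.column_stack([np.ones(n_seg), rng.normal(0, 1, n_seg)])  # e.g. intercept, lanes
P_obs = rng.dirichlet(np.ones(n_cat), size=n_seg)               # observed proportions

def negll(theta):
    beta = theta[:2]
    # strictly increasing thresholds via cumulative sums of exponentials
    tau = np.cumsum(np.concatenate([[theta[2]], np.exp(theta[3:])]))
    xb = X @ beta
    cdf = norm.cdf(tau[None, :] - xb[:, None])                  # n_seg x (n_cat-1)
    probs = np.diff(np.hstack([np.zeros((n_seg, 1)), cdf, np.ones((n_seg, 1))]), axis=1)
    # fractional-split pseudo-log-likelihood: proportions weight log-probabilities
    return -(P_obs * np.log(np.clip(probs, 1e-12, None))).sum()

fit = minimize(negll, np.zeros(2 + (n_cat - 1)), method="BFGS")
print("beta:", fit.x[:2].round(3))
```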
The age and phylogeny of wood boring weevils and the origin of subsociality.
Jordal, Bjarte H; Sequeira, Andrea S; Cognato, Anthony I
2011-06-01
A large proportion of the hyperdiverse weevils are wood boring and many of these taxa have subsocial family structures. The origin of, and relationships among, certain wood boring weevil taxa have been difficult to resolve, and hypotheses on their phylogenies change substantially between different studies. We aimed to test the phylogenetic position and monophyly of the most prominent wood boring taxa Scolytinae, Platypodinae and Cossoninae, including a range of weevil outgroups with either the herbivorous or wood boring habit. Many putatively intergrading taxa were included in a broad phylogenetic analysis for the first time in this study, such as Schedlarius, Mecopelmus, Coptonotus, Dactylipalpus, Coptocorynus and allied Araucariini taxa, Dobionus, Psepholax, Amorphocerus-Porthetes, and some peculiar wood boring Conoderini with bark beetle behaviour. Data analyses were based on 128 morphological characters, rDNA nucleotides from the D2-D3 segment of 28S, and nucleotides and amino acids from the protein encoding gene fragments of CAD, ArgK, EF-1α and COI. Although the results varied for some of the groups between various data sets and analyses, one may conclude the following from this study: Scolytinae and Platypodinae are likely sister lineages most closely related to Coptonotus; Cossoninae is monophyletic (including Araucariini) and more distantly related to Scolytinae; Amorphocerini is not part of Cossoninae and Psepholax may belong to Cryptorhynchini. Likelihood estimation of ancestral state reconstruction of subsociality indicated five or six origins as a conservative estimate. Overall the phylogenetic results were quite dependent on morphological data and we conclude that more genetic loci must be sampled to improve phylogenetic resolution. However, some results such as the derived position of Scolytinae were consistent between morphological and molecular data. A revised time estimation of the origin of Curculionidae and various subfamily groups was made using the recently updated fossil age of Scolytinae (100 Ma), which had a significant influence on node age estimates. Copyright © 2011 Elsevier Inc. All rights reserved.
Assessment of Physical Activity and Energy Expenditure: An Overview of Objective Measures
Hills, Andrew P.; Mokhtar, Najat; Byrne, Nuala M.
2014-01-01
The ability to assess energy expenditure (EE) and estimate physical activity (PA) in free-living individuals is extremely important in the global context of non-communicable diseases including malnutrition, overnutrition (obesity), and diabetes. It is also important to appreciate that PA and EE are different constructs with PA defined as any bodily movement that results in EE and accordingly, energy is expended as a result of PA. However, total energy expenditure, best assessed using the criterion doubly labeled water (DLW) technique, includes components in addition to physical activity energy expenditure, namely resting energy expenditure and the thermic effect of food. Given the large number of assessment techniques currently used to estimate PA in humans, it is imperative to understand the relative merits of each. The goal of this review is to provide information on the utility and limitations of a range of objective measures of PA and their relationship with EE. The measures discussed include those based on EE or oxygen uptake including DLW, activity energy expenditure, physical activity level, and metabolic equivalent; those based on heart rate monitoring and motion sensors; and because of their widespread use, selected subjective measures. PMID:25988109
van der Hoop, Julie M; Vanderlaan, Angelia S M; Taggart, Christopher T
2012-10-01
Vessel strikes are the primary source of known mortality for the endangered North Atlantic right whale (Eubalaena glacialis). Multi-institutional efforts to reduce mortality associated with vessel strikes include vessel-routing amendments such as the International Maritime Organization voluntary "area to be avoided" (ATBA) in the Roseway Basin right whale feeding habitat on the southwestern Scotian Shelf. Though relative probabilities of lethal vessel strikes have been estimated and published, absolute probabilities remain unknown. We used a modeling approach to determine the regional effect of the ATBA, by estimating reductions in the expected number of lethal vessel strikes. This analysis differs from others in that it explicitly includes a spatiotemporal analysis of real-time transits of vessels through a population of simulated, swimming right whales. Combining automatic identification system (AIS) vessel navigation data and an observationally based whale movement model allowed us to determine the spatial and temporal intersection of vessels and whales, from which various probability estimates of lethal vessel strikes are derived. We estimate one lethal vessel strike every 0.775-2.07 years prior to ATBA implementation, consistent with and more constrained than previous estimates of every 2-16 years. Following implementation, a lethal vessel strike is expected every 41 years. When whale abundance is held constant across years, we estimate that voluntary vessel compliance with the ATBA results in an 82% reduction in the per capita rate of lethal strikes; very similar to a previously published estimate of 82% reduction in the relative risk of a lethal vessel strike. The models we developed can inform decision-making and policy design, based on their ability to provide absolute, population-corrected, time-varying estimates of lethal vessel strikes, and they are easily transported to other regions and situations.
Updated Value of Service Reliability Estimates for Electric Utility Customers in the United States
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sullivan, Michael; Schellenberg, Josh; Blundell, Marshall
2015-01-01
This report updates the 2009 meta-analysis that provides estimates of the value of service reliability for electricity customers in the United States (U.S.). The meta-dataset now includes 34 different datasets from surveys fielded by 10 different utility companies between 1989 and 2012. Because these studies used nearly identical interruption cost estimation or willingness-to-pay/accept methods, it was possible to integrate their results into a single meta-dataset describing the value of electric service reliability observed in all of them. Once the datasets from the various studies were combined, a two-part regression model was used to estimate customer damage functions that can be generally applied to calculate customer interruption costs per event by season, time of day, day of week, and geographical regions within the U.S. for industrial, commercial, and residential customers. This report focuses on the backwards stepwise selection process that was used to develop the final revised model for all customer classes. Across customer classes, the revised customer interruption cost model has improved significantly because it incorporates more data and does not include the many extraneous variables that were in the original specification from the 2009 meta-analysis. The backwards stepwise selection process led to a more parsimonious model that only included key variables, while still achieving comparable out-of-sample predictive performance. In turn, users of interruption cost estimation tools such as the Interruption Cost Estimate (ICE) Calculator will have less customer characteristics information to provide and the associated inputs page will be far less cumbersome. The upcoming new version of the ICE Calculator is anticipated to be released in 2015.
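The report's model details are not reproduced here, but the backwards stepwise idea itself is simple to sketch: repeatedly drop the regressor whose removal most improves an information criterion, stopping when no drop helps. The OLS/AIC implementation below is generic, and the data are synthetic.

```python
import numpy as np

def ols_aic(X, y):
    """OLS fit; Gaussian AIC = n*log(RSS/n) + 2k (constant terms dropped)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = ((y - X @ beta) ** 2).sum()
    n, k = X.shape
    return n * np.log(rss / n) + 2 * k

def backward_stepwise(X, y, names):
    keep = list(range(X.shape[1]))
    best = ols_aic(X, y)
    improved = True
    while improved and len(keep) > 1:
        improved = False
        # try dropping each remaining variable; keep the drop that lowers AIC most
        scores = [(ols_aic(X[:, [j for j in keep if j != i]], y), i) for i in keep]
        aic, drop = min(scores)
        if aic < best:
            best, improved = aic, True
            keep.remove(drop)
    return [names[j] for j in keep], best

rng = np.random.default_rng(9)
n = 300
X = np.column_stack([np.ones(n), rng.normal(size=(n, 4))])
y = X[:, 0] * 2 + X[:, 1] * 1.5 + rng.normal(size=n)   # only 2 of 5 columns matter
print(backward_stepwise(X, y, ["const", "x1", "x2", "x3", "x4"]))
```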
Costs And Savings Associated With Community Water Fluoridation In The United States.
O'Connell, Joan; Rockell, Jennifer; Ouellet, Judith; Tomar, Scott L; Maas, William
2016-12-01
The most comprehensive study of US community water fluoridation program benefits and costs was published in 2001. This study provides updated estimates using an economic model that includes recent data on program costs, dental caries increments, and dental treatments. In 2013 more than 211 million people had access to fluoridated water through community water systems serving 1,000 or more people. Savings associated with dental caries averted in 2013 as a result of fluoridation were estimated to be $32.19 per capita for this population. Based on 2013 estimated costs ($324 million), net savings (savings minus costs) from fluoridation systems were estimated to be $6,469 million and the estimated return on investment, 20.0. While communities should assess their specific costs for continuing or implementing a fluoridation program, these updated findings indicate that program savings are likely to exceed costs. Project HOPE—The People-to-People Health Foundation, Inc.
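The headline numbers can be reproduced with back-of-envelope arithmetic, taking the return on investment as net savings per dollar of program cost (which matches the reported 20.0); treating the population as exactly 211 million gives a small rounding difference against the reported $6,469 million.

```python
population = 211e6            # people served by fluoridated community water systems, 2013
savings_per_capita = 32.19    # $ saved per person from caries averted
costs = 324e6                 # $ estimated 2013 program costs

savings = population * savings_per_capita   # ~ $6,792 million
net_savings = savings - costs               # ~ $6,468 million (reported: $6,469 million)
roi = net_savings / costs                   # ~ 20.0, matching the reported figure
print(round(savings / 1e6), round(net_savings / 1e6), round(roi, 1))
```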
Network Model-Assisted Inference from Respondent-Driven Sampling Data
Gile, Krista J.; Handcock, Mark S.
2015-01-01
Respondent-Driven Sampling is a widely-used method for sampling hard-to-reach human populations by link-tracing over their social networks. Inference from such data requires specialized techniques because the sampling process is both partially beyond the control of the researcher, and partially implicitly defined. Therefore, it is not generally possible to directly compute the sampling weights for traditional design-based inference, and likelihood inference requires modeling the complex sampling process. As an alternative, we introduce a model-assisted approach, resulting in a design-based estimator leveraging a working network model. We derive a new class of estimators for population means and a corresponding bootstrap standard error estimator. We demonstrate improved performance compared to existing estimators, including adjustment for an initial convenience sample. We also apply the method and an extension to the estimation of HIV prevalence in a high-risk population. PMID:26640328
Network Model-Assisted Inference from Respondent-Driven Sampling Data.
Gile, Krista J; Handcock, Mark S
2015-06-01
Respondent-Driven Sampling is a widely-used method for sampling hard-to-reach human populations by link-tracing over their social networks. Inference from such data requires specialized techniques because the sampling process is both partially beyond the control of the researcher, and partially implicitly defined. Therefore, it is not generally possible to directly compute the sampling weights for traditional design-based inference, and likelihood inference requires modeling the complex sampling process. As an alternative, we introduce a model-assisted approach, resulting in a design-based estimator leveraging a working network model. We derive a new class of estimators for population means and a corresponding bootstrap standard error estimator. We demonstrate improved performance compared to existing estimators, including adjustment for an initial convenience sample. We also apply the method and an extension to the estimation of HIV prevalence in a high-risk population.
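For intuition about why network information matters here, the sketch below shows a simpler design-based estimator in the Volz-Heckathorn (RDS-II) style, not the authors' model-assisted estimator: under a with-replacement random walk on the network, a node's sampling probability is roughly proportional to its degree, so observations are weighted by inverse reported degree. Data are invented.

```python
import numpy as np

degrees = np.array([2, 5, 1, 8, 3, 4])     # respondents' reported network sizes
y = np.array([1, 0, 1, 0, 1, 0])           # e.g. indicator of the trait of interest

w = 1.0 / degrees                          # inverse-degree weights
prevalence_hat = (w * y).sum() / w.sum()   # ratio (Hajek-type) weighted mean
print(round(prevalence_hat, 3))
```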
Software for Estimating Costs of Testing Rocket Engines
NASA Technical Reports Server (NTRS)
Hines, Merlon M.
2004-01-01
A high-level parametric mathematical model for estimating the costs of testing rocket engines and components at Stennis Space Center has been implemented as a Microsoft Excel program that generates multiple spreadsheets. The model and the program are both denoted, simply, the Cost Estimating Model (CEM). The inputs to the CEM are the parameters that describe particular tests, including test types (component or engine test), numbers and duration of tests, thrust levels, and other parameters. The CEM estimates anticipated total project costs for a specific test. Estimates are broken down into testing categories based on a work-breakdown structure and a cost-element structure. A notable historical assumption incorporated into the CEM is that total labor times depend mainly on thrust levels. As a result of a recent modification of the CEM to increase the accuracy of predicted labor times, the dependence of labor time on thrust level is now embodied in third- and fourth-order polynomials.
Software for Estimating Costs of Testing Rocket Engines
NASA Technical Reports Server (NTRS)
Hines, Merion M.
2002-01-01
A high-level parametric mathematical model for estimating the costs of testing rocket engines and components at Stennis Space Center has been implemented as a Microsoft Excel program that generates multiple spreadsheets. The model and the program are both denoted, simply, the Cost Estimating Model (CEM). The inputs to the CEM are the parameters that describe particular tests, including test types (component or engine test), numbers and duration of tests, thrust levels, and other parameters. The CEM estimates anticipated total project costs for a specific test. Estimates are broken down into testing categories based on a work-breakdown structure and a cost-element structure. A notable historical assumption incorporated into the CEM is that total labor times depend mainly on thrust levels. As a result of a recent modification of the CEM to increase the accuracy of predicted labor times, the dependence of labor time on thrust level is now embodied in third- and fourth-order polynomials.
Software for Estimating Costs of Testing Rocket Engines
NASA Technical Reports Server (NTRS)
Hines, Merlon M.
2003-01-01
A high-level parametric mathematical model for estimating the costs of testing rocket engines and components at Stennis Space Center has been implemented as a Microsoft Excel program that generates multiple spreadsheets. The model and the program are both denoted, simply, the Cost Estimating Model (CEM). The inputs to the CEM are the parameters that describe particular tests, including test types (component or engine test), numbers and duration of tests, thrust levels, and other parameters. The CEM estimates anticipated total project costs for a specific test. Estimates are broken down into testing categories based on a work-breakdown structure and a cost-element structure. A notable historical assumption incorporated into the CEM is that total labor times depend mainly on thrust levels. As a result of a recent modification of the CEM to increase the accuracy of predicted labor times, the dependence of labor time on thrust level is now embodied in third- and fourth-order polynomials.
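The CEM's central historical assumption, labor hours driven mainly by thrust level through third- and fourth-order polynomials, can be illustrated as follows. The coefficients and labor rate below are invented for the sketch; the abstract does not give the CEM's actual values.

```python
import numpy as np

# Hypothetical 4th-order thrust-to-labor polynomial (highest power first).
coeffs = [2e-19, -1e-13, 3e-8, 0.02, 1500.0]
thrust_lbf = 300_000.0                        # test article thrust level
labor_hours = np.polyval(coeffs, thrust_lbf)  # evaluate polynomial at the thrust level
labor_cost = labor_hours * 95.0               # assumed labor rate, $/hour
print(round(labor_hours), "hours,", round(labor_cost), "$")
```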
Falk, Carl F; Cai, Li
2016-06-01
We present a semi-parametric approach to estimating item response functions (IRF) useful when the true IRF does not strictly follow commonly used functions. Our approach replaces the linear predictor of the generalized partial credit model with a monotonic polynomial. The model includes the regular generalized partial credit model at the lowest order polynomial. Our approach extends Liang's (A semi-parametric approach to estimate IRFs, Unpublished doctoral dissertation, 2007) method for dichotomous item responses to the case of polytomous data. Furthermore, item parameter estimation is implemented with maximum marginal likelihood using the Bock-Aitkin EM algorithm, thereby facilitating multiple group analyses useful in operational settings. Our approach is demonstrated on both educational and psychological data. We present simulation results comparing our approach to more standard IRF estimation approaches and other non-parametric and semi-parametric alternatives.
What You Don't Know Can Hurt You: Missing Data and Partial Credit Model Estimates
Thomas, Sarah L.; Schmidt, Karen M.; Erbacher, Monica K.; Bergeman, Cindy S.
2017-01-01
The authors investigated the effect of Missing Completely at Random (MCAR) item responses on partial credit model (PCM) parameter estimates in a longitudinal study of Positive Affect. Participants were 307 adults from the older cohort of the Notre Dame Study of Health and Well-Being (Bergeman and Deboeck, 2014) who completed questionnaires including Positive Affect items for 56 days. Additional missing responses were introduced to the data, randomly replacing 20%, 50%, and 70% of the responses on each item and each day with missing values, in addition to the existing missing data. Results indicated that item locations and person trait level measures diverged from the original estimates as the level of degradation from induced missing data increased. In addition, standard errors of these estimates increased with the level of degradation. Thus, MCAR data does damage the quality and precision of PCM estimates. PMID:26784376
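The degradation design is straightforward to reproduce: starting from a persons-by-items matrix of ordinal responses, overwrite a random 20%, 50%, or 70% of entries with missing values and refit the partial credit model at each level. A minimal sketch with synthetic responses (the PCM refit itself is left as a placeholder):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic ordinal item responses (persons x items, scores 0-3), floats so NaN works.
responses = rng.integers(0, 4, size=(307, 10)).astype(float)

def degrade(data, prop, rng):
    out = data.copy()
    mask = rng.random(out.shape) < prop   # MCAR: misses independent of everything
    out[mask] = np.nan
    return out

for prop in (0.2, 0.5, 0.7):
    degraded = degrade(responses, prop, rng)
    print(prop, round(np.isnan(degraded).mean(), 3))   # refit the PCM on `degraded` here
```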
Observation-Corrected Precipitation Estimates in GEOS-5
NASA Technical Reports Server (NTRS)
Reichle, Rolf H.; Liu, Qing
2014-01-01
Several GEOS-5 applications, including the GEOS-5 seasonal forecasting system and the MERRA-Land data product, rely on global precipitation data that have been corrected with satellite- and/or gauge-based precipitation observations. This document describes the methodology used to generate the corrected precipitation estimates and their use in GEOS-5 applications. The corrected precipitation estimates are derived by disaggregating publicly available, observationally based, global precipitation products from daily or pentad totals to hourly accumulations using background precipitation estimates from the GEOS-5 atmospheric data assimilation system. Depending on the specific combination of the observational precipitation product and the GEOS-5 background estimates, the observational product may also be downscaled in space. The resulting corrected precipitation data product is at the finer temporal and spatial resolution of the GEOS-5 background and matches the observed precipitation at the coarser scale of the observational product, separately for each day (or pentad) and each grid cell.
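The disaggregation step amounts to rescaling the background's hourly values so their daily total matches the observation while the background's sub-daily timing is preserved. A minimal sketch with illustrative values (the dry-background fallback is an assumption, not the documented GEOS-5 rule):

```python
import numpy as np

bg_hourly = np.array([0.0, 0.1, 0.4, 0.8, 0.5, 0.2] + [0.0] * 18)  # mm, 24 model hours
obs_daily = 6.0                                                     # mm, observed total

bg_daily = bg_hourly.sum()
if bg_daily > 0:
    corr_hourly = bg_hourly * (obs_daily / bg_daily)  # keep timing, match the total
else:
    # background is dry but observations are not: fall back to a uniform split
    corr_hourly = np.full(24, obs_daily / 24)
print(corr_hourly.sum())   # equals obs_daily by construction
```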
NASA Technical Reports Server (NTRS)
Liu, G.
1985-01-01
One of the major concerns in the design of an active control system is obtaining the information needed for effective feedback. This involves the combination of sensing and estimation. A sensor location index is defined as the weighted sum of the mean square estimation errors, in which the sensor locations can be regarded as estimator design parameters. The design goal is to choose these locations to minimize the sensor location index. The choice of the number of sensors is a tradeoff between the estimation quality, based upon the same performance index, and the total costs of installing and maintaining extra sensors. An experimental study for choosing the sensor location was conducted on an aeroelastic system. The system model, which includes the unsteady aerodynamics model developed by Stephen Rock, was improved. Experimental results verify the trend of the theoretical predictions of the sensor location index for different sensor locations at various wind speeds.
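A sketch of the index in the static linear case, assuming a measurement model y = Hx + v with unit noise (R = I): the estimation error covariance is P = (H'H)^-1, the index is trace(WP), and candidate sensor subsets are scored exhaustively. The geometry and weights are invented.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

# Rows of H_all: candidate sensor locations; columns: modes/states to estimate.
H_all = rng.normal(size=(8, 3))
W = np.diag([1.0, 1.0, 0.1])         # weights on each state's mean square error

def index(rows):
    H = H_all[list(rows)]
    P = np.linalg.inv(H.T @ H)       # error covariance with unit measurement noise
    return np.trace(W @ P)           # weighted sum of mean square estimation errors

best = min(combinations(range(8), 3), key=index)
print("best 3-sensor set:", best, "index:", round(index(best), 3))
```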
NASA Technical Reports Server (NTRS)
Korram, S.
1977-01-01
The design of general remote sensing-aided methodologies was studied to provide the estimates of several important inputs to water yield forecast models. These input parameters are snow area extent, snow water content, and evapotranspiration. The study area is the Feather River Watershed (780,000 hectares), Northern California. The general approach involved a stepwise sequence of identification of the required information, sample design, measurement/estimation, and evaluation of results. All the relevant and available information types needed in the estimation process were defined. These include Landsat, meteorological satellite, and aircraft imagery, topographic and geologic data, ground truth data, and climatic data from ground stations. A cost-effective multistage sampling approach was employed in the quantification of all the required parameters. The physical and statistical models for both snow quantification and evapotranspiration estimation were developed. These models use information obtained from aerial and ground data through an appropriate statistical sampling design.
Magnetospheric Multiscale (MMS) Mission Attitude Ground System Design
NASA Technical Reports Server (NTRS)
Sedlak, Joseph E.; Superfin, Emil; Raymond, Juan C.
2011-01-01
This paper presents an overview of the attitude ground system (AGS) currently under development for the Magnetospheric Multiscale (MMS) mission. The primary responsibilities for the MMS AGS are definitive attitude determination, validation of the onboard attitude filter, and computation of certain parameters needed to improve maneuver performance. For these purposes, the ground support utilities include attitude and rate estimation for validation of the onboard estimates, sensor calibration, inertia tensor calibration, accelerometer bias estimation, center of mass estimation, and production of a definitive attitude history for use by the science teams. Much of the AGS functionality already exists in utilities used at NASA's Goddard Space Flight Center with support heritage from many other missions, but new utilities are being created specifically for the MMS mission, such as for the inertia tensor, accelerometer bias, and center of mass estimation. Algorithms and test results for all the major AGS subsystems are presented here.
Online Estimation of Model Parameters of Lithium-Ion Battery Using the Cubature Kalman Filter
NASA Astrophysics Data System (ADS)
Tian, Yong; Yan, Rusheng; Tian, Jindong; Zhou, Shijie; Hu, Chao
2017-11-01
Online estimation of state variables, including state-of-charge (SOC), state-of-energy (SOE) and state-of-health (SOH), is crucial for the safe operation of lithium-ion batteries. In order to improve the estimation accuracy of these state variables, a precise battery model needs to be established. As the lithium-ion battery is a nonlinear time-varying system, the model parameters vary significantly with many factors, such as ambient temperature, discharge rate and depth of discharge. This paper presents an online estimation method of model parameters for lithium-ion batteries based on the cubature Kalman filter. The commonly used first-order resistor-capacitor equivalent circuit model is selected as the battery model, based on which the model parameters are estimated online. Experimental results show that the presented method can accurately track parameter variations under different scenarios.
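For reference, here is the first-order RC equivalent-circuit model itself, the model whose parameters (R0, R1, C1) a filter such as the CKF would track online; the parameter values, timestep, and OCV below are illustrative, and the filter is omitted.

```python
import numpy as np

R0, R1, C1 = 0.05, 0.02, 1000.0   # ohmic resistance and RC polarization pair
dt = 1.0                          # sampling interval, s
a = np.exp(-dt / (R1 * C1))       # discrete decay factor of the RC branch

def model_step(ocv, u1, i_load):
    """One model step: returns (next RC-branch voltage, terminal voltage)."""
    u1_next = a * u1 + R1 * (1 - a) * i_load   # zero-order-hold discretization
    v_t = ocv - u1_next - R0 * i_load          # discharge current taken positive
    return u1_next, v_t

u1, ocv = 0.0, 3.7
for i_load in [2.0, 2.0, 0.0, 0.0]:            # short discharge pulse, then rest
    u1, v_t = model_step(ocv, u1, i_load)
    print(round(v_t, 4))
```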
Estimation of the state of solar activity type stars by virtual observations of CrAVO
NASA Astrophysics Data System (ADS)
Dolgov, A. A.; Shlyapnikov, A. A.
2012-05-01
The results of preprocessing negatives with direct images of the sky from the CrAO glass library, which became part of the on-line archive of the Crimean Astronomical Virtual Observatory (CrAVO), are presented in this work. Based on the obtained data, parameters were estimated for the dwarf stars included in the catalog "Stars with solar-type activity" (GTSh10). The following matters are considered: the methodology for searching negatives for the positions of the studied stars and for calculating the limiting magnitude; image viewing and reduction with the facilities of the International Virtual Observatory; and the preliminary results of the photometry of the studied objects.
Infrared search and track performance estimates for detection of commercial unmanned aerial vehicles
NASA Astrophysics Data System (ADS)
Nicholas, Robert; Driggers, Ronald; Shelton, David; Furxhi, Orges
2018-04-01
Unmanned aerial vehicles (UAVs) have become more readily available in the past 5 years and are proliferating rapidly. New aviation regulations are accelerating the use of UAVs in many applications. As a result, there are increasing concerns about potential air threats in situational environments including commercial airport security and drug trafficking. In this study, radiometric signatures of commercially available miniature UAVs are determined for long-wave infrared (LWIR) bands in both clear-sky and partly cloudy conditions. Results are presented that compare LWIR performance estimates for the detection of commercial UAVs via infrared search and track (IRST) systems with two candidate sensors.
High-Resolution Time-Frequency Spectrum-Based Lung Function Test from a Smartphone Microphone
Thap, Tharoeun; Chung, Heewon; Jeong, Changwon; Hwang, Ki-Eun; Kim, Hak-Ryul; Yoon, Kwon-Ha; Lee, Jinseok
2016-01-01
In this paper, a smartphone-based lung function test, developed to estimate lung function parameters using a high-resolution time-frequency spectrum from a smartphone built-in microphone, is presented. A method of estimating the forced expiratory volume in 1 s divided by forced vital capacity (FEV1/FVC) based on the variable frequency complex demodulation method (VFCDM) is first proposed. We evaluated our proposed method on 26 subjects, including 13 healthy subjects and 13 chronic obstructive pulmonary disease (COPD) patients, by comparing with the parameters clinically obtained from pulmonary function tests (PFTs). For the healthy subjects, we found that the absolute error (AE) and root mean squared error (RMSE) of the FEV1/FVC ratio were 4.49% ± 3.38% and 5.54%, respectively. For the COPD patients, we found that the AE and RMSE were 10.30% ± 10.59% and 14.48%, respectively. For both groups, we compared the results using the continuous wavelet transform (CWT) and short-time Fourier transform (STFT), and found that VFCDM was superior to CWT and STFT. Further, to estimate other parameters, including forced vital capacity (FVC), forced expiratory volume in 1 s (FEV1), and peak expiratory flow (PEF), regression analysis was conducted to establish a linear transformation. However, the parameters FVC, FEV1, and PEF had correlation factor r values of 0.323, 0.275, and −0.257, respectively, while FEV1/FVC had an r value of 0.814. The results obtained suggest that only the FEV1/FVC ratio can be accurately estimated from a smartphone built-in microphone. The other parameters, including FVC, FEV1, and PEF, were subjective and dependent on the subject's familiarization with the test and performance of forced exhalation toward the microphone. PMID:27548164
NASA Astrophysics Data System (ADS)
Zhu, Aichun; Wang, Tian; Snoussi, Hichem
2018-03-01
This paper addresses the problems of graphical-model-based human pose estimation in still images, including the diversity of appearances and confounding background clutter. We present a new architecture for estimating human pose using a Convolutional Neural Network (CNN). Firstly, a Relative Mixture Deformable Model (RMDM) is defined for each pair of connected parts to compute the relative spatial information in the graphical model. Secondly, a Local Multi-Resolution Convolutional Neural Network (LMR-CNN) is proposed to train and learn the multi-scale representation of each body part by combining different levels of part context. Thirdly, an LMR-CNN-based hierarchical model is defined to explore the context information of limb parts. Finally, the experimental results demonstrate the effectiveness of the proposed deep learning approach for human pose estimation.
Liu, Zhijian; Li, Hao; Cao, Guoqing
2017-07-30
Indoor airborne culturable bacteria are sometimes harmful to human health, so a quick estimate of their concentration is particularly valuable. However, measuring the indoor microorganism concentration (e.g., bacteria) usually requires a large amount of time, economic cost, and manpower. In this paper, we aim to provide a quick solution: using knowledge-based machine learning to provide a quick estimate of the concentration of indoor airborne culturable bacteria from several measurable indoor environmental indicators, including indoor particulate matter (PM2.5 and PM10), temperature, relative humidity, and CO₂ concentration. Our results show that a general regression neural network (GRNN) model can provide a quick and decent estimate based on model training and testing using an experimental database with 249 data groups.
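A GRNN is, at its core, a Gaussian-kernel weighted average of the training targets, which makes a minimal sketch short; the data below are synthetic stand-ins for the five standardized indicators, and the bandwidth is an arbitrary choice rather than a tuned value.

```python
import numpy as np

def grnn_predict(X_train, y_train, x, sigma=1.0):
    """GRNN prediction: kernel-weighted average of training targets."""
    d2 = ((X_train - x) ** 2).sum(axis=1)       # squared distances to training points
    w = np.exp(-d2 / (2 * sigma ** 2))          # Gaussian pattern-layer activations
    return (w @ y_train) / w.sum()

rng = np.random.default_rng(2)
X = rng.normal(size=(249, 5))   # PM2.5, PM10, temperature, RH, CO2 (standardized)
y = X @ np.array([0.5, 0.3, -0.2, 0.4, 0.1]) + rng.normal(0, 0.1, 249)
x_new = rng.normal(size=5)
print(round(grnn_predict(X, y, x_new, sigma=0.8), 3))
```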
Turner, Rachael M; Lyons, Carrie E; Howell, Sean; Honermann, Brian; Garner, Alex; Hess III, Robert; Diouf, Daouda; Ayala, George; Sullivan, Patrick S; Millett, Greg
2018-01-01
Background Gay, bisexual, and other cisgender men who have sex with men (GBMSM) are disproportionately affected by the HIV pandemic. Traditionally, GBMSM have been deemed less relevant in HIV epidemics in low- and middle-income settings where HIV epidemics are more generalized. This is due (in part) to how important population size estimates regarding the number of individuals who identify as GBMSM are to informing the development and monitoring of HIV prevention, treatment, and care programs and coverage. However, pervasive stigma and criminalization of same-sex practices and relationships provide a challenging environment for population enumeration, and these factors have been associated with implausibly low or absent size estimates of GBMSM, thereby limiting knowledge about the dynamics of HIV transmission and the implementation of programs addressing GBMSM. Objective This study leverages estimates of the number of members of a social app geared towards gay men (Hornet) and members of Facebook using self-reported relationship interests in men, men and women, and those with at least one reported same-sex interest. Results were categorized by country of residence to validate official size estimates of GBMSM in 13 countries across five continents. Methods Data were collected through the Hornet Gay Social Network and by using an a priori determined framework to estimate the numbers of Facebook members with interests associated with GBMSM in South Africa, Ghana, Nigeria, Senegal, Côte d'Ivoire, Mauritania, The Gambia, Lebanon, Thailand, Malaysia, Brazil, Ukraine, and the United States. These estimates were compared with the most recent Joint United Nations Programme on HIV/AIDS (UNAIDS) and national estimates across 143 countries. Results The estimates that leveraged social media apps for the number of GBMSM across countries are consistently far higher than official UNAIDS estimates. Using Facebook, it is also feasible to assess the numbers of GBMSM aged 13-17 years, which demonstrate similar proportions to those of older men. There is greater consistency in Facebook estimates of GBMSM compared to UNAIDS-reported estimates across countries. Conclusions The ability to use social media for epidemiologic and HIV prevention, treatment, and care needs continues to improve. Here, a method leveraging different categories of same-sex interests on Facebook, combined with a specific gay-oriented app (Hornet), demonstrated significantly higher estimates than those officially reported. While there are biases in this approach, these data reinforce the need for multiple methods to be used to count the number of GBMSM (especially in more stigmatizing settings) to better inform mathematical models and the scale of HIV program coverage. Moreover, these estimates can inform programs for those aged 13-17 years; a group for which HIV incidence is the highest and HIV prevention program coverage, including the availability of pre-exposure prophylaxis (PrEP), is lowest. Taken together, these results highlight the potential for social media to provide comparable estimates of the number of GBMSM across a large range of countries, including some with no reported estimates. PMID:29422452
Measuring the electrical properties of soil using a calibrated ground-coupled GPR system
Oden, C.P.; Olhoeft, G.R.; Wright, D.L.; Powers, M.H.
2008-01-01
Traditional methods for estimating vadose zone soil properties using ground penetrating radar (GPR) include measuring travel time, fitting diffraction hyperbolae, and other methods exploiting geometry. Additional processing techniques for estimating soil properties are possible with properly calibrated GPR systems. Such calibration using ground-coupled antennas must account for the effects of the shallow soil on the antenna's response, because changing soil properties result in a changing antenna response. A prototype GPR system using ground-coupled antennas was calibrated using laboratory measurements and numerical simulations of the GPR components. Two methods for estimating subsurface properties that utilize the calibrated response were developed. First, a new nonlinear inversion algorithm to estimate shallow soil properties under ground-coupled antennas was evaluated. Tests with synthetic data showed that the inversion algorithm is well behaved across the allowed range of soil properties. A preliminary field test gave encouraging results, with estimated soil property uncertainties of ±1.9 and ±4.4 mS/m for the relative dielectric permittivity and the electrical conductivity, respectively. Next, a deconvolution method for estimating the properties of subsurface reflectors with known shapes (e.g., pipes or planar interfaces) was developed. This method uses scattering matrices to account for the response of subsurface reflectors. The deconvolution method was evaluated for use with noisy data using synthetic data. Results indicate that the deconvolution method requires reflected waves with a signal/noise ratio of about 10:1 or greater. When applied to field data with a signal/noise ratio of 2:1, the method was able to estimate the reflection coefficient and relative permittivity, but the large uncertainty in this estimate precluded inversion for conductivity. © Soil Science Society of America.
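As a simplified stand-in for the scattering-matrix deconvolution described above, the sketch below deconvolves a known pulse from a recorded trace by water-level (regularized spectral division); the pulse, reflector, and noise level are synthetic, and amplitude in heavily floored bands is deliberately suppressed.

```python
import numpy as np

def waterlevel_deconv(trace, wavelet, level=1e-3):
    """Spectral division with a power floor ('water level') for stability."""
    n = len(trace)
    T = np.fft.rfft(trace, n)
    Wv = np.fft.rfft(wavelet, n)
    power = np.abs(Wv) ** 2
    R = T * np.conj(Wv) / np.maximum(power, level * power.max())
    return np.fft.irfft(R, n)

rng = np.random.default_rng(4)
wavelet = np.exp(-0.5 * ((np.arange(64) - 8) / 1.0) ** 2)   # simple pulse, peak at 8
refl = np.zeros(256); refl[30] = 0.5                        # one reflector, R = 0.5
trace = np.convolve(refl, wavelet)[:256] + rng.normal(0, 1e-4, 256)

est = waterlevel_deconv(trace, np.pad(wavelet, (0, 192)))
# reflector recovered at the right sample; amplitude approaches 0.5 as the
# water level is lowered (at the price of noise amplification)
print(int(np.argmax(est)), round(float(est.max()), 2))
```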
Bayesian evidence computation for model selection in non-linear geoacoustic inference problems.
Dettmer, Jan; Dosso, Stan E; Osler, John C
2010-12-01
This paper applies a general Bayesian inference approach, based on Bayesian evidence computation, to geoacoustic inversion of interface-wave dispersion data. Quantitative model selection is carried out by computing the evidence (normalizing constants) for several model parameterizations using annealed importance sampling. The resulting posterior probability density estimate is compared to estimates obtained from Metropolis-Hastings sampling to ensure consistent results. The approach is applied to invert interface-wave dispersion data collected on the Scotian Shelf, off the east coast of Canada for the sediment shear-wave velocity profile. Results are consistent with previous work on these data but extend the analysis to a rigorous approach including model selection and uncertainty analysis. The results are also consistent with core samples and seismic reflection measurements carried out in the area.
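Annealed importance sampling, the evidence estimator named above, can be sketched on a toy 1-D problem where the evidence has a closed form for checking; the model, temperature ladder, and Metropolis step size below are illustrative choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy model: prior N(0,1), likelihood N(x; 1, 0.5^2); evidence Z = N(1; 0, 1.25).
def log_prior(x):  return -0.5 * x**2 - 0.5 * np.log(2 * np.pi)
def log_like(x):   return -0.5 * ((x - 1.0) / 0.5)**2 - np.log(0.5 * np.sqrt(2 * np.pi))

betas = np.linspace(0, 1, 51)     # tempering ladder from prior to posterior
n_chains = 2000
x = rng.normal(size=n_chains)     # exact prior samples (beta = 0)
logw = np.zeros(n_chains)

for b0, b1 in zip(betas[:-1], betas[1:]):
    logw += (b1 - b0) * log_like(x)            # importance-weight increment
    # one Metropolis step targeting prior * likelihood^b1 keeps chains on track
    prop = x + rng.normal(0, 0.5, n_chains)
    log_acc = (log_prior(prop) + b1 * log_like(prop)) - (log_prior(x) + b1 * log_like(x))
    accept = np.log(rng.random(n_chains)) < log_acc
    x = np.where(accept, prop, x)

Z_hat = np.exp(logw - logw.max()).mean() * np.exp(logw.max())  # stable mean of weights
Z_true = np.exp(-0.5 / 1.25) / np.sqrt(2 * np.pi * 1.25)
print(round(Z_hat, 3), round(Z_true, 3))
```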
A Bayesian Machine Learning Model for Estimating Building Occupancy from Open Source Data
Stewart, Robert N.; Urban, Marie L.; Duchscherer, Samantha E.; ...
2016-01-01
Understanding building occupancy is critical to a wide array of applications including natural hazards loss analysis, green building technologies, and population distribution modeling. Due to the expense of directly monitoring buildings, scientists rely in addition on a wide and disparate array of ancillary and open source information including subject matter expertise, survey data, and remote sensing information. These data are fused using data harmonization methods, which refer to a loose collection of formal and informal techniques for fusing data together to create viable content for building occupancy estimation. In this paper, we add to the current state of the art by introducing the Population Data Tables (PDT), a Bayesian model and informatics system for systematically arranging data and harmonization techniques into a consistent, transparent, knowledge learning framework that retains in the final estimation uncertainty emerging from data, expert judgment, and model parameterization. PDT probabilistically estimates ambient occupancy in units of people/1000 ft² for over 50 building types at the national and sub-national level with the goal of providing global coverage. The challenge of global coverage led to the development of an interdisciplinary geospatial informatics system tool that provides the framework for capturing, storing, and managing open source data, handling subject matter expertise, carrying out Bayesian analytics as well as visualizing and exporting occupancy estimation results. We present the PDT project, situate the work within the larger community, and report on the progress of this multi-year project.
Bracken-Grissom, Heather D; Ahyong, Shane T; Wilkinson, Richard D; Feldmann, Rodney M; Schweitzer, Carrie E; Breinholt, Jesse W; Bendall, Matthew; Palero, Ferran; Chan, Tin-Yam; Felder, Darryl L; Robles, Rafael; Chu, Ka-Hou; Tsang, Ling-Ming; Kim, Dohyup; Martin, Joel W; Crandall, Keith A
2014-07-01
Lobsters are a ubiquitous and economically important group of decapod crustaceans that include the infraorders Polychelida, Glypheidea, Astacidea and Achelata. They include familiar forms such as the spiny, slipper, clawed lobsters and crayfish and unfamiliar forms such as the deep-sea and "living fossil" species. The high degree of morphological diversity among these infraorders has led to a dynamic classification and conflicting hypotheses of evolutionary relationships. In this study, we estimated phylogenetic relationships among the major groups of all lobster families and 94% of the genera using six genes (mitochondrial and nuclear) and 195 morphological characters across 173 species of lobsters for the most comprehensive sampling to date. Lobsters were recovered as a non-monophyletic assemblage in the combined (molecular + morphology) analysis. All families were monophyletic, with the exception of Cambaridae, and 7 of 79 genera were recovered as poly- or paraphyletic. A rich fossil history coupled with dense taxon coverage allowed us to estimate and compare divergence times and origins of major lineages using two drastically different approaches. Age priors were constructed and/or included based on fossil age information or fossil discovery, age, and extant species count data. Results from the two approaches were largely congruent across deep to shallow taxonomic divergences across major lineages. The origin of the first lobster-like decapod (Polychelida) was estimated in the Devonian (∼409-372 Ma) with all infraorders present in the Carboniferous (∼353-318 Ma). Fossil calibration subsampling studies examined the influence of sampling density (number of fossils) and placement (deep, middle, and shallow) on divergence time estimates. Results from our study suggest including at least 1 fossil per 10 operational taxonomic units (OTUs) in divergence dating analyses. [Dating; decapods; divergence; lobsters; molecular; morphology; phylogenetics.]. © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved.
Torgerson, Paul R.; Devleesschauwer, Brecht; Praet, Nicolas; Speybroeck, Niko; Willingham, Arve Lee; Kasuga, Fumiko; Rokni, Mohammad B.; Zhou, Xiao-Nong; Fèvre, Eric M.; Sripa, Banchob; Gargouri, Neyla; Fürst, Thomas; Budke, Christine M.; Carabin, Hélène; Kirk, Martyn D.; Angulo, Frederick J.; Havelaar, Arie; de Silva, Nilanthi
2015-01-01
Background Foodborne diseases are globally important, resulting in considerable morbidity and mortality. Parasitic diseases often result in high burdens of disease in low and middle income countries and are frequently transmitted to humans via contaminated food. This study presents the first estimates of the global and regional human disease burden of 10 helminth diseases and toxoplasmosis that may be attributed to contaminated food. Methods and Findings Data were abstracted from 16 systematic reviews or similar studies published between 2010 and 2015; from 5 disease data bases accessed in 2015; and from 79 reports, 73 of which have been published since 2000, 4 published between 1995 and 2000 and 2 published in 1986 and 1981. These included reports from national surveillance systems, journal articles, and national estimates of foodborne diseases. These data were used to estimate the number of infections, sequelae, deaths, and Disability Adjusted Life Years (DALYs), by age and region for 2010. These parasitic diseases resulted in 48.4 million cases (95% Uncertainty intervals [UI] of 43.4–79.0 million) and 59,724 (95% UI 48,017–83,616) deaths annually, resulting in 8.78 million (95% UI 7.62–12.51 million) DALYs. We estimated that 48% (95% UI 38%-56%) of cases of these parasitic diseases were foodborne, resulting in 76% (95% UI 65%-81%) of the DALYs attributable to these diseases. Overall, foodborne parasitic disease, excluding enteric protozoa, caused an estimated 23.2 million (95% UI 18.2–38.1 million) cases and 45,927 (95% UI 34,763–59,933) deaths annually, resulting in an estimated 6.64 million (95% UI 5.61–8.41 million) DALYs. Foodborne Ascaris infection (12.3 million cases, 95% UI 8.29–22.0 million) and foodborne toxoplasmosis (10.3 million cases, 95% UI 7.40–14.9 million) were the most common foodborne parasitic diseases. Human cysticercosis with 2.78 million DALYs (95% UI 2.14–3.61 million), foodborne trematodosis with 2.02 million DALYs (95% UI 1.65–2.48 million) and foodborne toxoplasmosis with 825,000 DALYs (95% UI 561,000–1.26 million) resulted in the highest burdens in terms of DALYs, mainly due to years lived with disability. Foodborne enteric protozoa, reported elsewhere, resulted in an additional 67.2 million illnesses or 492,000 DALYs. Major limitations of our study include often substantial data gaps that had to be filled by imputation, with the attendant uncertainties that surround such models. Due to resource limitations it was also not possible to consider all potentially foodborne parasites (for example Trypanosoma cruzi). Conclusions Parasites are frequently transmitted to humans through contaminated food. These estimates represent an important step forward in understanding the impact of foodborne diseases globally and regionally. The disease burden due to most foodborne parasites is highly focal and results in significant morbidity and mortality among vulnerable populations. PMID:26633705
NASA Astrophysics Data System (ADS)
Galanti, Eli; Durante, Daniele; Finocchiaro, Stefano; Iess, Luciano; Kaspi, Yohai
2017-07-01
The upcoming Juno spacecraft measurements have the potential of improving our knowledge of Jupiter’s gravity field. The analysis of the Juno Doppler data will provide a very accurate reconstruction of spatial gravity variations, but these measurements will be very accurate only over a limited latitudinal range. In order to deduce the full gravity field of Jupiter, additional information needs to be incorporated into the analysis, especially regarding the Jovian flow structure and its depth, which can influence the measured gravity field. In this study we propose a new iterative method for the estimation of the Jupiter gravity field, using a simulated Juno trajectory, a trajectory estimation model, and an adjoint-based inverse model for the flow dynamics. We test this method both for zonal harmonics only and with a full gravity field including tesseral harmonics. The results show that this method can fit some of the gravitational harmonics better to the “measured” harmonics, mainly because of the added information from the dynamical model, which includes the flow structure. Thus, it is suggested that the method presented here has the potential of improving the accuracy of the expected gravity harmonics estimated from the Juno and Cassini radio science experiments.
The costs of turnover in nursing homes
Mukamel, Dana B.; Spector, William D.; Limcangco, Rhona; Wang, Ying; Feng, Zhanlian; Mor, Vincent
2009-01-01
Background Turnover rates in nursing homes have been persistently high for decades, with rates upwards of 100%. Objectives To estimate the net costs associated with turnover of direct care staff in nursing homes. Data and sample 902 nursing homes in California in 2005. Data included Medicaid cost reports, the Minimum Data Set (MDS), Medicare enrollment files, the Census, and the Area Resource File (ARF). Research Design We estimated total cost functions that included, in addition to exogenous outputs and wages, the facility turnover rate. Instrumental variable (IV) limited information maximum likelihood techniques were used for estimation to deal with the endogeneity of turnover and costs. Results The cost functions exhibited the expected behavior, with initially increasing and then decreasing returns to scale. The ordinary least squares estimate did not show a significant association between costs and turnover. The IV estimate of turnover costs was negative and significant (p=0.039). The marginal cost savings associated with a 10 percentage point increase in turnover for an average facility were $167,063, or 2.9% of annual total costs. Conclusion The net savings associated with turnover offer an explanation for the persistence of this phenomenon over recent decades, despite the many policy initiatives to reduce it. Future policy efforts need to recognize the complex relationship between turnover and costs. PMID:19648834
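For readers unfamiliar with the estimation strategy, the sketch below shows the mechanics of instrumental-variables estimation on simulated data. It uses two-stage least squares rather than the paper's limited-information maximum likelihood (the two are closely related), and every variable, instrument, and coefficient is hypothetical.

```python
import numpy as np

# IV sketch: turnover is endogenous (correlated with the unobserved cost
# shock u), so OLS is biased; an instrument z restores consistency.
rng = np.random.default_rng(1)
n = 902
z = rng.normal(size=n)                  # instrument, e.g. local labor-market conditions
u = rng.normal(size=n)                  # unobserved cost shock
turnover = 0.8 * z + 0.5 * u + rng.normal(size=n)  # endogenous regressor
wages = rng.normal(size=n)              # exogenous covariate
cost = 2.0 - 0.3 * turnover + 1.2 * wages + u      # true turnover effect is negative

X = np.column_stack([np.ones(n), turnover, wages])  # regressors
Z = np.column_stack([np.ones(n), z, wages])         # instruments

# Stage 1 projects X onto the instrument space; stage 2 regresses cost on it.
P = Z @ np.linalg.solve(Z.T @ Z, Z.T)               # projection onto col(Z)
beta_2sls = np.linalg.solve(X.T @ P @ X, X.T @ P @ cost)
beta_ols = np.linalg.lstsq(X, cost, rcond=None)[0]
print("OLS turnover coefficient (biased):", beta_ols[1])
print("2SLS turnover coefficient:        ", beta_2sls[1])
```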
NASA Astrophysics Data System (ADS)
Wood, W. T.; Runyan, T. E.; Palmsten, M.; Dale, J.; Crawford, C.
2016-12-01
Natural gas (primarily methane) and gas hydrate accumulations require certain biogeochemical and physical conditions, some of which are poorly sampled and/or poorly understood. We exploit recent advances in the prediction of seafloor porosity and heat flux via machine learning techniques (e.g., random forests and Bayesian networks) to predict the occurrence of gas, and subsequently gas hydrate, in marine sediments. The prediction (in fact, guided interpolation) of key parameters in this study uses a K-nearest neighbor (KNN) technique. KNN requires only minimal pre-processing of the data and predictors and minimal run-time input, so the results are almost entirely data-driven. Specifically, we use new estimates of sedimentation rate and sediment type, along with recently derived compaction modeling, to estimate profiles of porosity and age. We combined the compaction modeling with seafloor heat flux to estimate temperature as a function of depth and geologic age, which, together with estimates of organic carbon and models of methanogenesis, yields limits on the production of methane. Results include geospatial predictions of gas (and gas hydrate) accumulations, with quantitative estimates of uncertainty. The Generic Earth Modeling System (GEMS) we have developed to derive the machine learning estimates is modular and easily updated with new algorithms or data.
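The guided-interpolation step is easy to sketch: below is a minimal inverse-distance-weighted K-nearest-neighbor predictor on synthetic coordinates and values. It stands in for, but does not reproduce, the GEMS workflow or its actual predictors (sedimentation rate, heat flux, organic carbon, and so on).

```python
import numpy as np

def knn_predict(train_xy, train_val, query_xy, k=5):
    """Inverse-distance-weighted average of the k nearest training points."""
    preds = np.empty(len(query_xy))
    for i, q in enumerate(query_xy):
        d = np.linalg.norm(train_xy - q, axis=1)
        nearest = np.argsort(d)[:k]
        w = 1.0 / (d[nearest] + 1e-9)   # guard against division by zero
        preds[i] = np.sum(w * train_val[nearest]) / np.sum(w)
    return preds

# Synthetic example: sparse observations of a seafloor property
# (e.g. a porosity proxy) interpolated onto a regular grid.
rng = np.random.default_rng(2)
obs_xy = rng.uniform(0, 100, size=(300, 2))
obs_val = np.sin(obs_xy[:, 0] / 15) + 0.1 * rng.normal(size=300)
grid = np.array([[x, y] for x in range(0, 100, 10)
                        for y in range(0, 100, 10)], dtype=float)
print(knn_predict(obs_xy, obs_val, grid, k=5)[:5])
```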
Singapore’s willingness to pay for mitigation of transboundary forest-fire haze from Indonesia
NASA Astrophysics Data System (ADS)
Lin, Yuan; Wijedasa, Lahiru S.; Chisholm, Ryan A.
2017-02-01
Haze pollution over the past four decades in Southeast Asia is mainly a result of forest and peatland fires in Indonesia. The economic impacts of haze include adverse health effects and disruption to transport and tourism. Previous studies have used a variety of approaches to assess the economic impacts of the haze and the forest fires more generally, but no study has used contingent valuation to assess non-market impacts of haze on individuals. Here we apply contingent valuation to estimate impacts of haze on Singapore, one of the most severely affected countries. We used a double-bounded dichotomous-choice survey design and the Kaplan-Meier-Turnbull method to infer the distribution of Singaporeans’ willingness to pay (WTP) for haze mitigation. Our estimate of mean individual WTP was 0.97% of annual income (n = 390). To calculate total national WTP, we stratified by income, the demographic variable most strongly related to individual WTP. The total WTP estimate was S$643.5 million per year (95% CI [S$527.7 million, S$765.0 million]). This estimate is comparable in magnitude to previously estimated impacts of Indonesia’s fires and also to the estimated costs of peatland protection and restoration. We recommend that our results be incorporated into future cost-benefit analyses of the fires and mitigation strategies.
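A hedged sketch of the nonparametric WTP estimator named above: the Turnbull lower-bound mean for dichotomous-choice data. For brevity it treats responses as single-bounded (the study's double-bounded design adds a follow-up bid, with analogous pooling), and the bid levels and response counts below are made up for illustration.

```python
import numpy as np

bids = np.array([50.0, 100.0, 200.0, 400.0])  # hypothetical bid levels (S$)
n_asked = np.array([100, 100, 100, 100])      # respondents offered each bid
n_no = np.array([20, 35, 60, 85])             # "not willing to pay" counts

F = n_no / n_asked                            # empirical P(WTP < bid)
# One pass of pooling adjacent violators so F is non-decreasing in the bid
# (full PAVA would repeat until no violations remain).
for j in range(1, len(F)):
    if F[j] < F[j - 1]:
        pooled = (n_no[j - 1] + n_no[j]) / (n_asked[j - 1] + n_asked[j])
        F[j - 1] = F[j] = pooled

# Lower-bound mean: the probability mass in each interval is assigned
# to the interval's lower endpoint.
t = np.concatenate([[0.0], bids])             # t_0 = 0
Fx = np.concatenate([[0.0], F, [1.0]])        # F_0 = 0, F_{M+1} = 1
mean_lb = np.sum(t * np.diff(Fx))
print(f"Turnbull lower-bound mean WTP: S${mean_lb:.2f}")
```

Scaling such a per-respondent mean to a national total, as the study does, then amounts to multiplying stratum-level means by stratum populations, here stratified by income.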