Rosenberger, Amanda E.; Dunham, Jason B.
2005-01-01
Estimation of fish abundance in streams using the removal model or the Lincoln-Petersen mark-recapture model is a common practice in fisheries. These models produce misleading results if their assumptions are violated. We evaluated the assumptions of these two models via electrofishing of rainbow trout Oncorhynchus mykiss in central Idaho streams. For one-, two-, three-, and four-pass sampling effort in closed sites, we evaluated the influences of fish size and habitat characteristics on sampling efficiency and the accuracy of removal abundance estimates. We also examined the use of models to generate unbiased estimates of fish abundance through adjustment of total catch or biased removal estimates. Our results suggested that the assumptions of the mark-recapture model were satisfied and that abundance estimates based on this approach were unbiased. In contrast, the removal model assumptions were not met. Decreasing sampling efficiencies over removal passes resulted in underestimated population sizes and overestimates of sampling efficiency. This bias decreased, but was not eliminated, with increased sampling effort. Biased removal estimates based on different levels of effort were highly correlated with each other but were less correlated with unbiased mark-recapture estimates. Stream size decreased sampling efficiency, and stream size and instream wood increased the negative bias of removal estimates. We found that reliable estimates of population abundance could be obtained from models of sampling efficiency for different levels of effort. Validation of abundance estimates requires extra attention to routine sampling considerations but can help fisheries biologists avoid pitfalls associated with biased data and facilitate standardized comparisons among studies that employ different sampling methods.
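The two estimators contrasted above reduce to short closed-form calculations. The sketch below uses made-up catch counts rather than the study's data, and shows a Chapman-corrected Lincoln-Petersen estimate next to a two-pass removal (Zippin-type) estimate.

```python
# Hypothetical illustration (not the study's data): closed-population abundance from
# (a) Lincoln-Petersen mark-recapture and (b) a two-pass removal (Zippin-type) model.

def lincoln_petersen(n1, n2, m2):
    """Chapman's bias-corrected Lincoln-Petersen estimator.

    n1: animals marked on pass 1; n2: animals caught on pass 2; m2: marked recaptures.
    """
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

def two_pass_removal(c1, c2):
    """Two-pass removal estimate; requires declining catches (c1 > c2)."""
    if c1 <= c2:
        raise ValueError("Removal model needs c1 > c2 (declining catch).")
    n_hat = c1 ** 2 / (c1 - c2)      # estimated abundance
    p_hat = (c1 - c2) / c1           # per-pass capture probability
    return n_hat, p_hat

print(lincoln_petersen(45, 50, 18))   # ~122 fish
print(two_pass_removal(60, 25))       # (~102.9 fish, p ~ 0.58)
```

When capture probability drops on later passes, as the abstract describes, the second catch stays too high relative to the first and the removal estimate is biased low.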
The Influence of Mark-Recapture Sampling Effort on Estimates of Rock Lobster Survival
Kordjazi, Ziya; Frusher, Stewart; Buxton, Colin; Gardner, Caleb; Bird, Tomas
2016-01-01
Five annual capture-mark-recapture surveys on Jasus edwardsii were used to evaluate the effect of sample size and fishing effort on the precision of estimated survival probability. Datasets of different numbers of individual lobsters (ranging from 200 to 1,000 lobsters) were created by random subsampling from each annual survey. This process of random subsampling was also used to create 12 datasets of different levels of effort based on three levels of the number of traps (15, 30 and 50 traps per day) and four levels of the number of sampling-days (2, 4, 6 and 7 days). The most parsimonious Cormack-Jolly-Seber (CJS) model for estimating survival probability shifted from a constant model towards sex-dependent models with increasing sample size and effort. A sample of 500 lobsters or 50 traps used on four consecutive sampling-days was required to obtain precise survival estimates for males and females separately. A reduced sampling effort of 30 traps over four sampling days was sufficient if a survival estimate for both sexes combined was adequate for management of the fishery. PMID:26990561
Cost and schedule estimation study report
NASA Technical Reports Server (NTRS)
Condon, Steve; Regardie, Myrna; Stark, Mike; Waligora, Sharon
1993-01-01
This report describes the analysis performed and the findings of a study of the software development cost and schedule estimation models used by the Flight Dynamics Division (FDD), Goddard Space Flight Center. The study analyzes typical FDD projects, focusing primarily on those developed since 1982. The study reconfirms the standard SEL effort estimation model that is based on size adjusted for reuse; however, guidelines for the productivity and growth parameters in the baseline effort model have been updated. The study also produced a schedule prediction model based on empirical data that varies depending on application type. Models for the distribution of effort and schedule by life-cycle phase are also presented. Finally, this report explains how to use these models to plan SEL projects.
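As a rough illustration of the model form described (effort driven by size adjusted for reuse), the sketch below uses a generic power-law effort equation; the coefficients and the reuse weight are placeholders, not the calibrated SEL/FDD values.

```python
# Sketch of a SEL-style size-based effort model; coefficients a, b and the 20%
# reuse weight are illustrative placeholders, not the calibrated FDD parameters.

def developed_size(new_sloc, reused_sloc, reuse_weight=0.2):
    """Size adjusted for reuse: reused code counts at a fraction of its lines."""
    return new_sloc + reuse_weight * reused_sloc

def effort_staff_months(size_sloc, a=1.5, b=1.0):
    """Generic size-driven effort model: effort = a * (KSLOC)^b."""
    return a * (size_sloc / 1000.0) ** b

size = developed_size(new_sloc=40_000, reused_sloc=25_000)
print(effort_staff_months(size))   # ~67.5 staff-months with placeholder parameters
```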
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phillips, William Scott
This seminar presentation describes amplitude models and yield estimations that look at the data in order to inform legislation. The following points were brought forth in the summary: global models that will predict three-component amplitudes (R-T-Z) were produced; Q models match regional geology; corrected source spectra can be used for discrimination and yield estimation; three-component data increase coverage and reduce scatter in source spectral estimates; three-component efforts must include distance-dependent effects; a community effort on instrument calibration is needed.
Peter, R; Siegrist, J; Hallqvist, J; Reuterwall, C; Theorell, T
2002-01-01
Objectives: Associations between two alternative formulations of job stress derived from the effort-reward imbalance and the job strain model and first non-fatal acute myocardial infarction were studied. Whereas the job strain model concentrates on situational (extrinsic) characteristics, the effort-reward imbalance model analyses distinct person (intrinsic) characteristics in addition to situational ones. In view of these conceptual differences the hypothesis was tested that combining information from the two models improves the risk estimation of acute myocardial infarction. Methods: 951 male and female myocardial infarction cases and 1147 referents aged 45–64 years of The Stockholm Heart Epidemiology (SHEEP) case-control study underwent a clinical examination. Information on job stress and adverse health behaviours was derived from standardised questionnaires. Results: Multivariate analysis showed moderately increased odds ratios for either model. Yet, with respect to the effort-reward imbalance model, gender-specific effects were found: in men the extrinsic component contributed to risk estimation, whereas this was the case with the intrinsic component in women. Controlling each job stress model for the other in order to test the independent effect of either approach did not show systematically increased odds ratios. An improved estimation of acute myocardial infarction risk resulted from combining information from the two models by defining groups characterised by simultaneous exposure to effort-reward imbalance and job strain (men: odds ratio 2.02 (95% confidence interval (CI) 1.34 to 3.07); women: odds ratio 2.19 (95% CI 1.11 to 4.28)). Conclusions: Findings show an improved risk estimation of acute myocardial infarction by combining information from the two job stress models under study. Moreover, gender-specific effects of the two components of the effort-reward imbalance model were observed. PMID:11896138
Estimating parasitic sea lamprey abundance in Lake Huron from heterogenous data sources
Young, Robert J.; Jones, Michael L.; Bence, James R.; McDonald, Rodney B.; Mullett, Katherine M.; Bergstedt, Roger A.
2003-01-01
The Great Lakes Fishery Commission uses time series of transformer, parasitic, and spawning population estimates to evaluate the effectiveness of its sea lamprey (Petromyzon marinus) control program. This study used an inverse variance weighting method to integrate Lake Huron sea lamprey population estimates derived from two estimation procedures: 1) prediction of the lake-wide spawning population from a regression model based on stream size, and 2) whole-lake mark-recapture estimates. In addition, we used a re-sampling procedure to evaluate the effect of trading off sampling effort between the regression and mark-recapture models. Population estimates derived from the regression model ranged from 132,000 to 377,000 while mark-recapture estimates of marked recently metamorphosed juveniles and parasitic sea lampreys ranged from 536,000 to 634,000 and 484,000 to 1,608,000, respectively. The precision of the estimates varied greatly among estimation procedures and years. The integrated estimate of the mark-recapture and spawner regression procedures ranged from 252,000 to 702,000 transformers. The re-sampling procedure indicated that the regression model is more sensitive to reduction in sampling effort than the mark-recapture model. Reliance on either the regression or mark-recapture model alone could produce misleading estimates of abundance of sea lampreys and the effect of the control program on sea lamprey abundance. These analyses indicate that the precision of the lakewide population estimate can be maximized by re-allocating sampling effort from marking sea lampreys to trapping additional streams.
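Inverse variance weighting itself is a one-line calculation. The sketch below combines two hypothetical abundance estimates (the numbers are invented, not the Lake Huron values) to show how the more precise estimate dominates the integrated result.

```python
# Minimal inverse-variance weighting sketch with invented estimates and variances.

def inverse_variance_combine(estimates, variances):
    weights = [1.0 / v for v in variances]
    combined = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    return combined, 1.0 / sum(weights)   # combined estimate and its variance

regression_est, regression_se = 250_000, 60_000    # hypothetical spawner-regression estimate
mark_recap_est, mark_recap_se = 550_000, 150_000   # hypothetical mark-recapture estimate

est, var = inverse_variance_combine([regression_est, mark_recap_est],
                                    [regression_se ** 2, mark_recap_se ** 2])
print(round(est), round(var ** 0.5))   # ~291,000 with SE ~55,700: the precise estimate dominates
```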
Adams, A.A.Y.; Stanford, J.W.; Wiewel, A.S.; Rodda, G.H.
2011-01-01
Estimating the detection probability of introduced organisms during the pre-monitoring phase of an eradication effort can be extremely helpful in informing eradication and post-eradication monitoring efforts, but this step is rarely taken. We used data collected during 11 nights of mark-recapture sampling on Aguiguan, Mariana Islands, to estimate introduced kiore (Rattus exulans Peale) density and detection probability, and evaluated factors affecting detectability to help inform possible eradication efforts. Modelling of 62 captures of 48 individuals resulted in a model-averaged density estimate of 55 kiore/ha. Kiore detection probability was best explained by a model allowing neophobia to diminish linearly (i.e. capture probability increased linearly) until occasion 7, with additive effects of sex and cumulative rainfall over the prior 48 hours. Detection probability increased with increasing rainfall and females were up to three times more likely than males to be trapped. In this paper, we illustrate the type of information that can be obtained by modelling mark-recapture data collected during pre-eradication monitoring and discuss the potential of using these data to inform eradication and post-eradication monitoring efforts. © New Zealand Ecological Society.
NASA Technical Reports Server (NTRS)
McNeill, Justin
1995-01-01
The Multimission Image Processing Subsystem (MIPS) at the Jet Propulsion Laboratory (JPL) has managed transitions of application software sets from one operating system and hardware platform to multiple operating systems and hardware platforms. As a part of these transitions, cost estimates were generated from the personal experience of in-house developers and managers to calculate the total effort required for such projects. Productivity measures have been collected for two such transitions, one very large and the other relatively small in terms of source lines of code. These estimates used a cost estimation model similar to the Software Engineering Laboratory (SEL) Effort Estimation Model. Experience in transitioning software within JPL MIPS has uncovered a high incidence of interface complexity. Interfaces, both internal and external to individual software applications, have contributed to software transition project complexity, and thus to scheduling difficulties and larger than anticipated design work on software to be ported.
Estimating Software-Development Costs With Greater Accuracy
NASA Technical Reports Server (NTRS)
Baker, Dan; Hihn, Jairus; Lum, Karen
2008-01-01
COCOMOST is a computer program for use in estimating software development costs. The goal in the development of COCOMOST was to increase estimation accuracy in three ways: (1) develop a set of sensitivity software tools that return not only estimates of costs but also the estimation error; (2) using the sensitivity software tools, precisely define the quantities of data needed to adequately tune cost estimation models; and (3) build a repository of software-cost-estimation information that NASA managers can retrieve to improve the estimates of costs of developing software for their project. COCOMOST implements a methodology, called '2cee', in which a unique combination of well-known pre-existing data-mining and software-development- effort-estimation techniques are used to increase the accuracy of estimates. COCOMOST utilizes multiple models to analyze historical data pertaining to software-development projects and performs an exhaustive data-mining search over the space of model parameters to improve the performances of effort-estimation models. Thus, it is possible to both calibrate and generate estimates at the same time. COCOMOST is written in the C language for execution in the UNIX operating system.
A study of fault prediction and reliability assessment in the SEL environment
NASA Technical Reports Server (NTRS)
Basili, Victor R.; Patnaik, Debabrata
1986-01-01
An empirical study on estimation and prediction of faults, prediction of fault detection and correction effort, and reliability assessment in the Software Engineering Laboratory environment (SEL) is presented. Fault estimation using empirical relationships and fault prediction using curve fitting method are investigated. Relationships between debugging efforts (fault detection and correction effort) in different test phases are provided, in order to make an early estimate of future debugging effort. This study concludes with the fault analysis, application of a reliability model, and analysis of a normalized metric for reliability assessment and reliability monitoring during development of software.
NASA Software Cost Estimation Model: An Analogy Based Estimation Model
NASA Technical Reports Server (NTRS)
Hihn, Jairus; Juster, Leora; Menzies, Tim; Mathew, George; Johnson, James
2015-01-01
The cost estimation of software development activities is increasingly critical for large scale integrated projects such as those at DOD and NASA, especially as the software systems become larger and more complex. As an example, MSL (Mars Science Laboratory), developed at the Jet Propulsion Laboratory, launched with over 2 million lines of code, making it the largest robotic spacecraft ever flown (based on the size of the software). Software development activities are also notorious for their cost growth, with NASA flight software averaging over 50% cost growth. All across the agency, estimators and analysts are increasingly being tasked to develop reliable cost estimates in support of program planning and execution. While there has been extensive work on improving parametric methods there is very little focus on the use of models based on analogy and clustering algorithms. In this paper we summarize our findings on effort/cost model estimation and model development based on ten years of software effort estimation research using data mining and machine learning methods to develop estimation models based on analogy and clustering. The NASA Software Cost Model performance is evaluated by comparing it to COCOMO II, linear regression, and K-nearest neighbor prediction model performance on the same data set.
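The analogy-based idea can be reduced to a k-nearest-neighbor lookup over historical projects. The sketch below is only a toy version of that general approach; the feature set, scaling, and project records are invented and are not the NASA Software Cost Model.

```python
# Toy analogy-based (k-nearest-neighbor) effort estimator; features and records are made up.
import math

historical = [  # (ksloc, complexity 1-5, effort in work-months) - invented records
    (10, 2, 30), (25, 3, 90), (40, 4, 180), (8, 1, 20), (60, 5, 320),
]

def knn_effort(ksloc, complexity, k=2):
    """Predict effort as the mean effort of the k most similar historical projects."""
    def dist(rec):
        # crude normalization so both features contribute comparably
        return math.hypot((rec[0] - ksloc) / 60.0, (rec[1] - complexity) / 5.0)
    nearest = sorted(historical, key=dist)[:k]
    return sum(rec[2] for rec in nearest) / k

print(knn_effort(ksloc=30, complexity=3))   # 135.0: mean of the 2 closest projects
```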
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vermeul, Vincent R.; Cole, Charles R.; Bergeron, Marcel P.
2001-08-29
The baseline three-dimensional transient inverse model for the estimation of site-wide scale flow parameters, including their uncertainties, using data on the transient behavior of the unconfined aquifer system over the entire historical period of Hanford operations, has been modified to account for the effects of basalt intercommunication between the Hanford unconfined aquifer and the underlying upper basalt confined aquifer. Both the baseline and alternative conceptual models (ACM-1) considered only the groundwater flow component and corresponding observational data in the 3-D transient inverse calibration efforts. Subsequent efforts will examine both groundwater flow and transport. Comparisons of goodness of fit measures and parameter estimation results for the ACM-1 transient inverse calibrated model with those from previous site-wide groundwater modeling efforts illustrate that the new 3-D transient inverse model approach will strengthen the technical defensibility of the final model(s) and provide the ability to incorporate uncertainty in predictions related to both conceptual model and parameter uncertainty. These results, however, indicate that additional improvements are required to the conceptual model framework. An investigation was initiated at the end of this basalt inverse modeling effort to determine whether facies-based zonation would improve specific yield parameter estimation results (ACM-2). A description of the justification and methodology to develop this zonation is discussed.
Parameter Estimates in Differential Equation Models for Chemical Kinetics
ERIC Educational Resources Information Center
Winkel, Brian
2011-01-01
We discuss the need for devoting time in differential equations courses to modelling and the completion of the modelling process with efforts to estimate the parameters in the models using data. We estimate the parameters present in several differential equation models of chemical reactions of order n, where n = 0, 1, 2, and apply more general…
Kery, M.; Royle, J. Andrew; Schmid, Hans; Schaub, M.; Volet, B.; Hafliger, G.; Zbinden, N.
2010-01-01
Species' assessments must frequently be derived from opportunistic observations made by volunteers (i.e., citizen scientists). Interpretation of the resulting data to estimate population trends is plagued with problems, including teasing apart genuine population trends from variations in observation effort. We devised a way to correct for annual variation in effort when estimating trends in occupancy (species distribution) from faunal or floral databases of opportunistic observations. First, for all surveyed sites, detection histories (i.e., strings of detection-nondetection records) are generated. Within-season replicate surveys provide information on the detectability of an occupied site. Detectability directly represents observation effort; hence, estimating detectability means correcting for observation effort. Second, site-occupancy models are applied directly to the detection-history data set (i.e., without aggregation by site and year) to estimate detectability and species distribution (occupancy, i.e., the true proportion of sites where a species occurs). Site-occupancy models also provide unbiased estimators of components of distributional change (i.e., colonization and extinction rates). We illustrate our method with data from a large citizen-science project in Switzerland in which field ornithologists record opportunistic observations. We analyzed data collected on four species: the widespread Kingfisher (Alcedo atthis) and Sparrowhawk (Accipiter nisus) and the scarce Rock Thrush (Monticola saxatilis) and Wallcreeper (Tichodroma muraria). Our method requires that all observed species are recorded. Detectability was <1 and varied over the years. Simulations suggested some robustness, but we advocate recording complete species lists (checklists), rather than recording individual records of single species. The representation of observation effort with its effect on detectability provides a solution to the problem of differences in effort encountered when extracting trend information from haphazard observations. We expect our method is widely applicable for global biodiversity monitoring and modeling of species distributions. © 2010 Society for Conservation Biology.
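A stripped-down version of the single-season site-occupancy model can be fit directly to detection histories. The sketch below assumes constant occupancy (psi) and constant detection (p) and uses a crude grid search on invented histories; the published analysis allows year and effort effects that are omitted here.

```python
# Minimal single-season occupancy sketch (constant psi and p); histories are invented.
import math

histories = ["010", "000", "110", "000", "001", "000", "100", "000"]

def log_lik(psi, p):
    ll = 0.0
    for h in histories:
        k, d = len(h), h.count("1")
        # either the site is occupied (with the observed detections/non-detections),
        # or it is unoccupied, which is only possible when nothing was detected
        occupied = psi * (p ** d) * ((1 - p) ** (k - d))
        unoccupied = (1 - psi) if d == 0 else 0.0
        ll += math.log(occupied + unoccupied)
    return ll

best = max(((psi, p) for psi in [i / 100 for i in range(1, 100)]
                     for p in [i / 100 for i in range(1, 100)]),
           key=lambda t: log_lik(*t))
print(best)   # (psi_hat, p_hat): occupancy corrected for imperfect detection
```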
Ortega, Alonso; Labrenz, Stephan; Markowitsch, Hans J; Piefke, Martina
2013-01-01
In the last decade, different statistical techniques have been introduced to improve assessment of malingering-related poor effort. In this context, we have recently shown preliminary evidence that a Bayesian latent group model may help to optimize classification accuracy using a simulation research design. In the present study, we conducted two analyses. Firstly, we evaluated how accurately this Bayesian approach can distinguish between participants answering in an honest way (honest response group) and participants feigning cognitive impairment (experimental malingering group). Secondly, we tested the accuracy of our model in the differentiation between patients who had real cognitive deficits (cognitively impaired group) and participants who belonged to the experimental malingering group. All Bayesian analyses were conducted using the raw scores of a visual recognition forced-choice task (2AFC), the Test of Memory Malingering (TOMM, Trial 2), and the Word Memory Test (WMT, primary effort subtests). The first analysis showed 100% accuracy for the Bayesian model in distinguishing participants of both groups with all effort measures. The second analysis showed outstanding overall accuracy of the Bayesian model when estimates were obtained from the 2AFC and the TOMM raw scores. Diagnostic accuracy of the Bayesian model diminished when using the WMT total raw scores. Despite this, overall diagnostic accuracy can still be considered excellent. The most plausible explanation for this decrement is the low performance in verbal recognition and fluency tasks of some patients of the cognitively impaired group. Additionally, the Bayesian model provides individual estimates, p(zi|D), of examinees' effort levels. In conclusion, both high classification accuracy levels and Bayesian individual estimates of effort may be very useful for clinicians when assessing for effort in medico-legal settings.
Towards an Early Software Effort Estimation Based on Functional and Non-Functional Requirements
NASA Astrophysics Data System (ADS)
Kassab, Mohamed; Daneva, Maya; Ormandjieva, Olga
The increased awareness of the non-functional requirements as a key to software project and product success makes explicit the need to include them in any software project effort estimation activity. However, the existing approaches to defining size-based effort relationships still pay insufficient attention to this need. This paper presents a flexible, yet systematic approach to the early requirements-based effort estimation, based on Non-Functional Requirements ontology. It complementarily uses one standard functional size measurement model and a linear regression technique. We report on a case study which illustrates the application of our solution approach in context and also helps evaluate our experiences in using it.
Inferring invasive species abundance using removal data from management actions
Davis, Amy J.; Hooten, Mevin B.; Miller, Ryan S.; Farnsworth, Matthew L.; Lewis, Jesse S.; Moxcey, Michael; Pepin, Kim M.
2016-01-01
Evaluation of the progress of management programs for invasive species is crucial for demonstrating impacts to stakeholders and strategic planning of resource allocation. Estimates of abundance before and after management activities can serve as a useful metric of population management programs. However, many methods of estimating population size are too labor intensive and costly to implement, posing restrictive levels of burden on operational programs. Removal models are a reliable method for estimating abundance before and after management using data from the removal activities exclusively, thus requiring no work in addition to management. We developed a Bayesian hierarchical model to estimate abundance from removal data accounting for varying levels of effort, and used simulations to assess the conditions under which reliable population estimates are obtained. We applied this model to estimate site-specific abundance of an invasive species, feral swine (Sus scrofa), using removal data from aerial gunning in 59 site/time-frame combinations (480–19,600 acres) throughout Oklahoma and Texas, USA. Simulations showed that abundance estimates were generally accurate when effective removal rates (removal rate accounting for total effort) were above 0.40. However, when abundances were small (<50) the effective removal rate needed to accurately estimate abundances was considerably higher (0.70). Based on our post-validation method, 78% of our site/time frame estimates were accurate. To use this modeling framework it is important to have multiple removals (more than three) within a time frame during which demographic changes are minimized (i.e., a closed population; ≤3 months for feral swine). Our results show that the probability of accurately estimating abundance from this model improves with increased sampling effort (8+ flight hours across the 3-month window is best) and increased removal rate. Based on the inverse relationship between inaccurate abundances and inaccurate removal rates, we suggest auxiliary information that could be collected and included in the model as covariates (e.g., habitat effects, differences between pilots) to improve accuracy of removal rates and hence abundance estimates.
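A simplified, non-hierarchical version of an effort-adjusted removal model can be fit by maximum likelihood. The sketch below uses invented catches and flight hours and a crude grid search; it illustrates the structure (per-pass removal probability scaled by effort) rather than the paper's Bayesian hierarchical model.

```python
# Sketch of an effort-adjusted removal estimator (maximum likelihood, not the
# Bayesian hierarchical model of the paper); catches and flight hours are invented.
import math

catches = [90, 45, 30]          # animals removed on each pass
effort  = [3.0, 2.5, 3.0]       # e.g. flight hours per pass

def log_lik(N, q):
    remaining, ll = N, 0.0
    for c, e in zip(catches, effort):
        p = 1.0 - math.exp(-q * e)              # per-pass removal probability
        if c > remaining:
            return -math.inf
        # binomial log-likelihood of removing c of the remaining animals
        ll += (math.lgamma(remaining + 1) - math.lgamma(c + 1)
               - math.lgamma(remaining - c + 1)
               + c * math.log(p) + (remaining - c) * math.log(1 - p))
        remaining -= c
    return ll

best = max(((N, q / 100) for N in range(sum(catches), 600)
                         for q in range(1, 100)),
           key=lambda t: log_lik(*t))
print(best)   # (N_hat, q_hat): abundance before removals and catchability per unit effort
```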
Establishment of a center of excellence for applied mathematical and statistical research
NASA Technical Reports Server (NTRS)
Woodward, W. A.; Gray, H. L.
1983-01-01
The state of the art was assessed with regard to efforts in support of the crop production estimation problem and alternative generic proportion estimation techniques were investigated. Topics covered include modeling the greenness profile (Badhwar's model), parameter estimation using mixture models such as CLASSY, and minimum distance estimation as an alternative to maximum likelihood estimation. Approaches to the problem of obtaining proportion estimates when the underlying distributions are asymmetric are examined, including the properties of the Weibull distribution.
Szczegielniak, Jan; Łuniewski, Jacek; Stanisławski, Rafał; Bogacz, Katarzyna; Krajczy, Marcin; Rydel, Marek
2018-01-01
Background The six-minute walk test (6MWT) is considered to be a simple and inexpensive tool for the assessment of functional tolerance of submaximal effort. The aim of this work was 1) to characterize the nonlinear nature of the energy expenditure process due to physical activity, 2) to compare the results/scores of the submaximal treadmill exercise test and those of 6MWT in pulmonary patients and 3) to develop nonlinear mathematical models relating the two. Methods The study group included patients with COPD. All patients were subjected to a submaximal exercise test and a 6MWT. To develop an optimal mathematical solution and compare the results of the exercise test and the 6MWT, the least squares and genetic algorithms were employed to estimate parameters of polynomial expansion and piecewise linear models. Results Mathematical analysis enabled the construction of nonlinear models for estimating the MET result of the submaximal exercise test based on average walk velocity (or distance) in the 6MWT. Conclusions Submaximal effort tolerance in COPD patients can be effectively estimated from new, rehabilitation-oriented, nonlinear models based on the generalized MET concept and the 6MWT. PMID:29425213
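As a toy version of the fitting step, the sketch below does a least-squares polynomial fit relating average 6MWT walking velocity to the MET score from the treadmill test. The data pairs and the quadratic form are invented; the authors' actual models (polynomial expansions and piecewise-linear forms fit partly by genetic algorithms) are not reproduced here.

```python
# Illustrative least-squares fit of METs vs. 6MWT walking velocity; data are invented.
import numpy as np

velocity = np.array([0.8, 1.0, 1.2, 1.4, 1.6])   # m/s averaged over the 6-minute walk
mets     = np.array([2.1, 2.9, 3.8, 5.0, 6.5])   # MET score from the treadmill exercise test

coeffs = np.polyfit(velocity, mets, deg=2)        # quadratic captures the nonlinearity
predict = np.poly1d(coeffs)
print(predict(1.3))                                # estimated METs for a 1.3 m/s walker
```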
Baele, Guy; Lemey, Philippe; Vansteelandt, Stijn
2013-03-06
Accurate model comparison requires extensive computation times, especially for parameter-rich models of sequence evolution. In the Bayesian framework, model selection is typically performed through the evaluation of a Bayes factor, the ratio of two marginal likelihoods (one for each model). Recently introduced techniques to estimate (log) marginal likelihoods, such as path sampling and stepping-stone sampling, offer increased accuracy over the traditional harmonic mean estimator at an increased computational cost. Most often, each model's marginal likelihood will be estimated individually, which leads the resulting Bayes factor to suffer from errors associated with each of these independent estimation processes. We here assess the original 'model-switch' path sampling approach for direct Bayes factor estimation in phylogenetics, as well as an extension that uses more samples, to construct a direct path between two competing models, thereby eliminating the need to calculate each model's marginal likelihood independently. Further, we provide a competing Bayes factor estimator using an adaptation of the recently introduced stepping-stone sampling algorithm and set out to determine appropriate settings for accurately calculating such Bayes factors, with context-dependent evolutionary models as an example. While we show that modest efforts are required to roughly identify the increase in model fit, only drastically increased computation times ensure the accuracy needed to detect more subtle details of the evolutionary process. We show that our adaptation of stepping-stone sampling for direct Bayes factor calculation outperforms the original path sampling approach as well as an extension that exploits more samples. Our proposed approach for Bayes factor estimation also has preferable statistical properties over the use of individual marginal likelihood estimates for both models under comparison. Assuming a sigmoid function to determine the path between two competing models, we provide evidence that a single well-chosen sigmoid shape value requires less computational efforts in order to approximate the true value of the (log) Bayes factor compared to the original approach. We show that the (log) Bayes factors calculated using path sampling and stepping-stone sampling differ drastically from those estimated using either of the harmonic mean estimators, supporting earlier claims that the latter systematically overestimate the performance of high-dimensional models, which we show can lead to erroneous conclusions. Based on our results, we argue that highly accurate estimation of differences in model fit for high-dimensional models requires much more computational effort than suggested in recent studies on marginal likelihood estimation.
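The stepping-stone idea is easiest to see on a small conjugate example where the marginal likelihood can also be computed by brute force. The sketch below is such a toy (a normal mean with known variance), not the phylogenetic implementation discussed in the abstract; the data, prior, and temperature schedule are all invented.

```python
# Toy stepping-stone estimate of a log marginal likelihood for a conjugate normal
# model, checked against brute-force numerical integration; a sketch of the general
# technique only, with invented data and an arbitrary temperature schedule.
import math, random

random.seed(1)
data = [1.2, 0.7, 1.9, 1.4, 0.3]
sigma, mu0, tau0 = 1.0, 0.0, 2.0                  # known sd, prior mu ~ N(mu0, tau0^2)

def log_lik(mu):
    return sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
               - (x - mu) ** 2 / (2 * sigma ** 2) for x in data)

def power_posterior_draw(beta):
    # posterior with the likelihood raised to the power beta (conjugate, so still normal)
    prec = 1 / tau0 ** 2 + beta * len(data) / sigma ** 2
    mean = (mu0 / tau0 ** 2 + beta * sum(data) / sigma ** 2) / prec
    return random.gauss(mean, prec ** -0.5)

def stepping_stone(n_steps=20, n_samples=2000):
    betas = [(k / n_steps) ** 3 for k in range(n_steps + 1)]   # concentrate steps near 0
    log_m = 0.0
    for b_lo, b_hi in zip(betas, betas[1:]):
        terms = [(b_hi - b_lo) * log_lik(power_posterior_draw(b_lo))
                 for _ in range(n_samples)]
        m = max(terms)                                          # log-sum-exp for stability
        log_m += m + math.log(sum(math.exp(t - m) for t in terms) / n_samples)
    return log_m

# Brute-force reference: integrate prior * likelihood over a fine grid of mu.
grid = [-10 + i * 0.001 for i in range(20001)]
ref = math.log(sum(math.exp(log_lik(mu)
                            - 0.5 * math.log(2 * math.pi * tau0 ** 2)
                            - (mu - mu0) ** 2 / (2 * tau0 ** 2)) * 0.001 for mu in grid))
print(stepping_stone(), ref)   # the two values should agree closely
```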
Using effort information with change-in-ratio data for population estimation
Udevitz, Mark S.; Pollock, Kenneth H.
1995-01-01
Most change-in-ratio (CIR) methods for estimating fish and wildlife population sizes have been based only on assumptions about how encounter probabilities vary among population subclasses. When information on sampling effort is available, it is also possible to derive CIR estimators based on assumptions about how encounter probabilities vary over time. This paper presents a generalization of previous CIR models that allows explicit consideration of a range of assumptions about the variation of encounter probabilities among subclasses and over time. Explicit estimators are derived under this model for specific sets of assumptions about the encounter probabilities. Numerical methods are presented for obtaining estimators under the full range of possible assumptions. Likelihood ratio tests for these assumptions are described. Emphasis is on obtaining estimators based on assumptions about variation of encounter probabilities over time.
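For reference, the classical two-subclass CIR estimator that this model generalizes is a single formula; the sketch below applies it to hypothetical pre- and post-removal proportions.

```python
# Classic two-subclass change-in-ratio (CIR) abundance estimator; numbers are hypothetical.

def cir_abundance(p1, p2, removed_x, removed_total):
    """Pre-removal population size from the change in the subclass-x proportion."""
    if p1 == p2:
        raise ValueError("CIR requires the subclass proportion to change.")
    return (removed_x - removed_total * p2) / (p1 - p2)

# e.g. males are 0.60 of the population before harvest and 0.40 after,
# with 500 males among 700 animals removed in between
print(cir_abundance(p1=0.60, p2=0.40, removed_x=500, removed_total=700))   # 1100 animals
```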
2013-01-01
Background Accurate model comparison requires extensive computation times, especially for parameter-rich models of sequence evolution. In the Bayesian framework, model selection is typically performed through the evaluation of a Bayes factor, the ratio of two marginal likelihoods (one for each model). Recently introduced techniques to estimate (log) marginal likelihoods, such as path sampling and stepping-stone sampling, offer increased accuracy over the traditional harmonic mean estimator at an increased computational cost. Most often, each model’s marginal likelihood will be estimated individually, which leads the resulting Bayes factor to suffer from errors associated with each of these independent estimation processes. Results We here assess the original ‘model-switch’ path sampling approach for direct Bayes factor estimation in phylogenetics, as well as an extension that uses more samples, to construct a direct path between two competing models, thereby eliminating the need to calculate each model’s marginal likelihood independently. Further, we provide a competing Bayes factor estimator using an adaptation of the recently introduced stepping-stone sampling algorithm and set out to determine appropriate settings for accurately calculating such Bayes factors, with context-dependent evolutionary models as an example. While we show that modest efforts are required to roughly identify the increase in model fit, only drastically increased computation times ensure the accuracy needed to detect more subtle details of the evolutionary process. Conclusions We show that our adaptation of stepping-stone sampling for direct Bayes factor calculation outperforms the original path sampling approach as well as an extension that exploits more samples. Our proposed approach for Bayes factor estimation also has preferable statistical properties over the use of individual marginal likelihood estimates for both models under comparison. Assuming a sigmoid function to determine the path between two competing models, we provide evidence that a single well-chosen sigmoid shape value requires less computational efforts in order to approximate the true value of the (log) Bayes factor compared to the original approach. We show that the (log) Bayes factors calculated using path sampling and stepping-stone sampling differ drastically from those estimated using either of the harmonic mean estimators, supporting earlier claims that the latter systematically overestimate the performance of high-dimensional models, which we show can lead to erroneous conclusions. Based on our results, we argue that highly accurate estimation of differences in model fit for high-dimensional models requires much more computational effort than suggested in recent studies on marginal likelihood estimation. PMID:23497171
NASA Technical Reports Server (NTRS)
Madden, Michael G.; Wyrick, Roberta; O'Neill, Dale E.
2005-01-01
Space Shuttle Processing is a complicated and highly variable project. The planning and scheduling problem, categorized as a Resource Constrained - Stochastic Project Scheduling Problem (RC-SPSP), has a great deal of variability in the Orbiter Processing Facility (OPF) process flow from one flight to the next. Simulation Modeling is a useful tool in estimation of the makespan of the overall process. However, simulation requires a model to be developed, which itself is a labor and time consuming effort. With such a dynamic process, often the model would potentially be out of synchronization with the actual process, limiting the applicability of the simulation answers in solving the actual estimation problem. Integration of TEAMS model enabling software with our existing schedule program software is the basis of our solution. This paper explains the approach used to develop an auto-generated simulation model from planning and schedule efforts and available data.
Peterson, J.; Dunham, J.B.
2003-01-01
Effective conservation efforts for at-risk species require knowledge of the locations of existing populations. Species presence can be estimated directly by conducting field-sampling surveys or alternatively by developing predictive models. Direct surveys can be expensive and inefficient, particularly for rare and difficult-to-sample species, and models of species presence may produce biased predictions. We present a Bayesian approach that combines sampling and model-based inferences for estimating species presence. The accuracy and cost-effectiveness of this approach were compared to those of sampling surveys and predictive models for estimating the presence of the threatened bull trout (Salvelinus confluentus) via simulation with existing models and empirical sampling data. Simulations indicated that a sampling-only approach would be the most effective and would result in the lowest presence and absence misclassification error rates for three thresholds of detection probability. When sampling effort was considered, however, the combined approach resulted in the lowest error rates per unit of sampling effort. Hence, lower probability-of-detection thresholds can be specified with the combined approach, resulting in lower misclassification error rates and improved cost-effectiveness.
USDA-ARS?s Scientific Manuscript database
Although there have been efforts to improve existing soil moisture retrieval algorithms, the ability to estimate soil moisture from passive microwave observations is still hampered by problems in accurately modeling the observed microwave signal. This paper focuses on the estimation of effective sur...
Crop area estimation based on remotely-sensed data with an accurate but costly subsample
NASA Technical Reports Server (NTRS)
Gunst, R. F.
1985-01-01
Research activities conducted under the auspices of National Aeronautics and Space Administration Cooperative Agreement NCC 9-9 are discussed. During this contract period, research efforts were concentrated in two primary areas. The first area is an investigation of the use of measurement error models as alternatives to least squares regression estimators of crop production or timber biomass. The second primary area of investigation is the estimation of the mixing proportion of two-component mixture models. This report lists publications, technical reports, submitted manuscripts, and oral presentations generated by these research efforts. Possible areas of future research are mentioned.
DOT National Transportation Integrated Search
1994-10-31
The Volpe Center first estimated an inter-regional auto trip model as part of its effort to assess the market feasibility of maglev for the National Maglev Initiative (NMI). The original intent was to develop a direct demand model for estimating inte...
NASA Astrophysics Data System (ADS)
Panhwar, Sher Khan; Liu, Qun; Khan, Fozia; Siddiqui, Pirzada J. A.
2012-03-01
Using the surplus production model packages ASPIC (a stock-production model incorporating covariates) and CEDA (catch-effort data analysis), we analyzed the catch and effort data of the Sillago sihama fishery in Pakistan. ASPIC estimates the parameters MSY (maximum sustainable yield), Fmsy (fishing mortality at MSY), q (catchability coefficient), K (carrying capacity or unexploited biomass) and B1/K (initial biomass relative to carrying capacity). The estimated non-bootstrapped value of MSY based on the logistic model was 598 t and that based on the Fox model was 415 t, which showed that the Fox model estimation was more conservative than that with the logistic model. The R² with the logistic model (0.702) was larger than that with the Fox model (0.541), which indicates a better fit. The coefficient of variation (cv) of the estimated MSY was about 0.3, except for one larger value of 88.87 and one smaller value of 0.173. In contrast to the ASPIC results, the R² with the Fox model (0.651-0.692) was larger than that with the Schaefer model (0.435-0.567), indicating a better fit. The key parameters of CEDA are MSY, K, q, and r (intrinsic growth rate), and the three error assumptions in using the models are normal, log-normal and gamma. Parameter estimates from the Schaefer and Pella-Tomlinson models were similar. The MSY estimations from the above two models were 398 t, 549 t and 398 t for normal, log-normal and gamma error distributions, respectively. The MSY estimates from the Fox model were 381 t, 366 t and 366 t for the above three error assumptions, respectively. The Fox model estimates were smaller than those for the Schaefer and the Pella-Tomlinson models. In the light of the MSY estimations of 415 t from ASPIC for the Fox model and 381 t from CEDA for the Fox model, MSY for S. sihama is about 400 t. As the catch in 2003 was 401 t, we would suggest the fishery should be kept at the current level. Production models used here depend on the assumption that CPUE (catch per unit effort) data used in the study can reliably quantify temporal variability in population abundance, hence the modeling results would be wrong if such an assumption is not met. Because the reliability of this CPUE data in indexing fish population abundance is unknown, we should be cautious with the interpretation and use of the derived population and management parameters.
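For intuition about how MSY falls out of catch-and-effort data, the sketch below fits the much simpler equilibrium Schaefer approximation (CPUE linear in effort) to invented data; ASPIC and CEDA fit non-equilibrium surplus production models with observation-error structures that this toy omits.

```python
# Equilibrium Schaefer illustration with invented catch-per-unit-effort data
# (not the non-equilibrium ASPIC/CEDA fits used in the paper).

effort = [100, 150, 200, 250, 300, 350]        # e.g. thousand boat-days
cpue   = [5.8, 5.1, 4.3, 3.9, 3.0, 2.4]        # catch per unit effort

n = len(effort)
mean_e, mean_c = sum(effort) / n, sum(cpue) / n
b = (sum((e - mean_e) * (c - mean_c) for e, c in zip(effort, cpue))
     / sum((e - mean_e) ** 2 for e in effort))          # slope (negative)
a = mean_c - b * mean_e                                  # intercept

# Equilibrium Schaefer: CPUE = a + b*E, so yield Y = a*E + b*E^2 peaks at E = -a/(2b)
msy = -a ** 2 / (4 * b)
e_msy = -a / (2 * b)
print(round(msy, 1), round(e_msy, 1))   # MSY and the effort level that produces it
```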
Inferring invasive species abundance using removal data from management actions.
Davis, Amy J; Hooten, Mevin B; Miller, Ryan S; Farnsworth, Matthew L; Lewis, Jesse; Moxcey, Michael; Pepin, Kim M
2016-10-01
Evaluation of the progress of management programs for invasive species is crucial for demonstrating impacts to stakeholders and strategic planning of resource allocation. Estimates of abundance before and after management activities can serve as a useful metric of population management programs. However, many methods of estimating population size are too labor intensive and costly to implement, posing restrictive levels of burden on operational programs. Removal models are a reliable method for estimating abundance before and after management using data from the removal activities exclusively, thus requiring no work in addition to management. We developed a Bayesian hierarchical model to estimate abundance from removal data accounting for varying levels of effort, and used simulations to assess the conditions under which reliable population estimates are obtained. We applied this model to estimate site-specific abundance of an invasive species, feral swine (Sus scrofa), using removal data from aerial gunning in 59 site/time-frame combinations (480-19,600 acres) throughout Oklahoma and Texas, USA. Simulations showed that abundance estimates were generally accurate when effective removal rates (removal rate accounting for total effort) were above 0.40. However, when abundances were small (<50) the effective removal rate needed to accurately estimate abundances was considerably higher (0.70). Based on our post-validation method, 78% of our site/time frame estimates were accurate. To use this modeling framework it is important to have multiple removals (more than three) within a time frame during which demographic changes are minimized (i.e., a closed population; ≤3 months for feral swine). Our results show that the probability of accurately estimating abundance from this model improves with increased sampling effort (8+ flight hours across the 3-month window is best) and increased removal rate. Based on the inverse relationship between inaccurate abundances and inaccurate removal rates, we suggest auxiliary information that could be collected and included in the model as covariates (e.g., habitat effects, differences between pilots) to improve accuracy of removal rates and hence abundance estimates. © 2016 by the Ecological Society of America.
Software Development Cost Estimation Executive Summary
NASA Technical Reports Server (NTRS)
Hihn, Jairus M.; Menzies, Tim
2006-01-01
Identify simple, fully validated cost models that provide estimation uncertainty with the cost estimate. Based on the COCOMO variable set. Use machine learning techniques to determine: a) Minimum number of cost drivers required for NASA domain-based cost models; b) Minimum number of data records required; and c) Estimation uncertainty. Build a repository of software cost estimation information. Coordinating tool development and data collection with: a) Tasks funded by PA&E Cost Analysis; b) IV&V Effort Estimation Task and c) NASA SEPG activities.
Optimal estimation of large structure model errors. [in Space Shuttle controller design
NASA Technical Reports Server (NTRS)
Rodriguez, G.
1979-01-01
In-flight estimation of large structure model errors is usually required as a means of detecting inevitable deficiencies in large structure controller/estimator models. The present paper deals with a least-squares formulation which seeks to minimize a quadratic functional of the model errors. The properties of these error estimates are analyzed. It is shown that an arbitrary model error can be decomposed as the sum of two components that are orthogonal in a suitably defined function space. Relations between true and estimated errors are defined. The estimates are found to be approximations that retain many of the significant dynamics of the true model errors. Current efforts are directed toward application of the analytical results to a reference large structure model.
2011-01-01
Background While many pandemic preparedness plans have promoted disease control effort to lower and delay an epidemic peak, analytical methods for determining the required control effort and making statistical inferences have yet to be sought. As a first step to address this issue, we present a theoretical basis on which to assess the impact of an early intervention on the epidemic peak, employing a simple epidemic model. Methods We focus on estimating the impact of an early control effort (e.g. unsuccessful containment), assuming that the transmission rate abruptly increases when control is discontinued. We provide analytical expressions for magnitude and time of the epidemic peak, employing approximate logistic and logarithmic-form solutions for the latter. Empirical influenza data (H1N1-2009) in Japan are analyzed to estimate the effect of the summer holiday period in lowering and delaying the peak in 2009. Results Our model estimates that the epidemic peak of the 2009 pandemic was delayed for 21 days due to summer holiday. Decline in peak appears to be a nonlinear function of control-associated reduction in the reproduction number. Peak delay is shown to critically depend on the fraction of initially immune individuals. Conclusions The proposed modeling approaches offer methodological avenues to assess empirical data and to objectively estimate required control effort to lower and delay an epidemic peak. Analytical findings support a critical need to conduct population-wide serological survey as a prior requirement for estimating the time of peak. PMID:21269441
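The qualitative effect described (an early, temporary reduction in transmission delays, and can lower, the epidemic peak) can be reproduced with a minimal SIR simulation. The parameter values below are invented and the model is not the one used in the paper.

```python
# Minimal SIR sketch (not the paper's model) of how a temporary early reduction in the
# transmission rate delays, and can lower, the epidemic peak; all parameters are invented.

def sir_peak(beta0, control_factor=1.0, control_until=0.0,
             gamma=0.25, i0=1e-4, days=300, dt=0.1):
    s, i, t, peak_i, peak_t = 1.0 - i0, i0, 0.0, 0.0, 0.0
    while t < days:
        # transmission is reduced while control is in place, then reverts abruptly
        beta = beta0 * control_factor if t < control_until else beta0
        new_inf = beta * s * i * dt
        s -= new_inf
        i += new_inf - gamma * i * dt
        if i > peak_i:
            peak_i, peak_t = i, t
        t += dt
    return peak_i, peak_t

print(sir_peak(beta0=0.5))                                          # no control
print(sir_peak(beta0=0.5, control_factor=0.6, control_until=60))    # early, temporary control
```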
The EGM2008 Global Gravitational Model
NASA Astrophysics Data System (ADS)
Pavlis, N. K.; Holmes, S. A.; Kenyon, S. C.; Factor, J. K.
2008-12-01
The development of a new Earth Gravitational Model (EGM) to degree 2160 has been completed. This model, designated EGM2008, is the product of the final re-iteration of our modelling and estimation approach. Our multi-year effort has produced several Preliminary Gravitational Models (PGM) of increasingly improved performance. One of these models (PGM2007A) was provided for evaluation to an independent Evaluation Working Group, sponsored by the International Association of Geodesy (IAG). In an effort to address certain shortcomings of PGM2007A, we have considered the feedback that we received from this Working Group. As part of this effort, EGM2008 incorporates an improved version of our 5'x5' global gravity anomaly database and has benefited from the latest GRACE-based satellite-only solutions (e.g., ITG-GRACE03S). EGM2008 incorporates an improved ocean-wide set of altimetry-derived gravity anomalies that were estimated using PGM2007B (a variant of PGM2007A) and its associated Dynamic Ocean Topography (DOT) model as reference models in a "Remove-Compute-Restore" fashion. For the Least Squares Collocation estimation of our final global 5'x5' area-mean gravity anomaly database, we have consistently used PGM2007B as our reference model to degree 2160. We have developed and used a formulation that predicts area-mean gravity anomalies that are effectively band-limited to degree 2160, thereby minimizing aliasing effects during the harmonic analysis process. We have also placed special emphasis on the refinement and "calibration" of the error estimates that accompany our final combination solution EGM2008. We present the main aspects of the model's development and evaluation. This evaluation was accomplished primarily through the comparison of various model derived quantities with independent data and models (e.g., geoid undulations derived from GPS positioning and spirit levelling, astronomical deflections of the vertical, etc.). We will also present comparisons of our model-implied Dynamic Ocean Topography with other contemporary estimates (e.g., from ECCO).
NASA Astrophysics Data System (ADS)
Memon, Aamir Mahmood; Liu, Qun; Memon, Khadim Hussain; Baloch, Wazir Ali; Memon, Asfandyar; Baset, Abdul
2015-07-01
Catch and effort data were analyzed to estimate the maximum sustainable yield (MSY) of King Soldier Bream, Argyrops spinifer (Forsskål, 1775, Family: Sparidae), and to evaluate the present status of the fish stocks exploited in Pakistani waters. The catch and effort data for the 25-year period 1985-2009 were analyzed using two computer software packages, CEDA (catch and effort data analysis) and ASPIC (a surplus production model incorporating covariates). The maximum catch of 3 458 t was observed in 1988 and the minimum catch of 1 324 t in 2005, while the average annual catch of A. spinifer over the 25 years was 2 500 t. The surplus production models of Fox, Schaefer, and Pella-Tomlinson under three error assumptions of normal, log-normal and gamma are in the CEDA package, and the two surplus models of Fox and logistic are in the ASPIC package. In CEDA, the MSY was estimated by applying an initial proportion (IP) of 0.8, because the starting catch was approximately 80% of the maximum catch. Except for gamma, which showed maximization failures, the estimated MSY values using CEDA with the Fox surplus production model and two error assumptions were 1 692.08 t (R²=0.572) and 1 694.09 t (R²=0.606), respectively, and those from the Schaefer and the Pella-Tomlinson models with two error assumptions were 2 390.95 t (R²=0.563) and 2 380.06 t (R²=0.605), respectively. The MSY estimated by the Fox model was conservative compared with the Schaefer and Pella-Tomlinson models. The MSY values from the Schaefer and Pella-Tomlinson models were the same. The computed values of MSY using the ASPIC computer software program with the two surplus production models of Fox and logistic were 1 498 t (R²=0.917) and 2 488 t (R²=0.897), respectively. The estimated values of MSY using CEDA were about 1 700-2 400 t and the values from ASPIC were 1 500-2 500 t. The estimates output by the CEDA and the ASPIC packages indicate that the stock is overfished and needs effective management to reduce the fishing effort on the species in Pakistani waters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vienna, John D.; Kim, Dong-Sang; Skorski, Daniel C.
2013-07-01
Recent glass formulation and melter testing data have suggested that significant increases in waste loading in HLW and LAW glasses are possible over current system planning estimates. The data (although limited in some cases) were evaluated to determine a set of constraints and models that could be used to estimate the maximum loading of specific waste compositions in glass. It is recommended that these models and constraints be used to estimate the likely HLW and LAW glass volumes that would result if the current glass formulation studies are successfully completed. It is recognized that some of the models are preliminary in nature and will change in the coming years. Plus the models do not currently address the prediction uncertainties that would be needed before they could be used in plant operations. The models and constraints are only meant to give an indication of rough glass volumes and are not intended to be used in plant operation or waste form qualification activities. A current research program is in place to develop the data, models, and uncertainty descriptions for that purpose. A fundamental tenet underlying the research reported in this document is to try to be less conservative than previous studies when developing constraints for estimating the glass to be produced by implementing current advanced glass formulation efforts. The less conservative approach documented herein should allow for the estimate of glass masses that may be realized if the current efforts in advanced glass formulations are completed over the coming years and are as successful as early indications suggest they may be. Because of this approach there is an unquantifiable uncertainty in the ultimate glass volume projections due to model prediction uncertainties that has to be considered along with other system uncertainties such as waste compositions and amounts to be immobilized, split factors between LAW and HLW, etc.
Creating a stage-based deterministic PVA model - the western prairie fringed orchid [Exercise 12
Carolyn Hull Sieg; Rudy M. King; Fred Van Dyke
2003-01-01
Contemporary efforts to conserve populations and species often employ population viability analysis (PVA), a specific application of population modeling that estimates the effects of environmental and demographic processes on population growth rates. These models can also be used to estimate probabilities that a population will fall below a certain level. This...
NASA Astrophysics Data System (ADS)
Koch, Stefan; Mitlöhner, Johann
2010-08-01
ERP implementation projects have received enormous attention in the last years, due to their importance for organisations, as well as the costs and risks involved. The estimation of effort and costs associated with new projects therefore is an important topic. Unfortunately, there is still a lack of models that can cope with the special characteristics of these projects. As the main focus lies in adapting and customising a complex system, and even changing the organisation, traditional models like COCOMO can not easily be applied. In this article, we will apply effort estimation based on social choice in this context. Social choice deals with aggregating the preferences of a number of voters into a collective preference, and we will apply this idea by substituting the voters by project attributes. Therefore, instead of supplying numeric values for various project attributes, a new project only needs to be placed into rankings per attribute, necessitating only ordinal values, and the resulting aggregate ranking can be used to derive an estimation. We will describe the estimation process using a data set of 39 projects, and compare the results to other approaches proposed in the literature.
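A minimal sketch of the idea follows: each project attribute acts as a "voter" that ranks the projects, the per-attribute rankings are aggregated (here with a simple Borda-style count, which may differ from the rule used in the article), and the new project's position in the aggregate ranking brackets its likely effort between completed projects. All attribute values and efforts are invented.

```python
# Borda-style aggregation of per-attribute project rankings; data are invented and the
# scoring rule is only one possible social-choice rule, not necessarily the article's.

projects = {            # name: (modules customized, interfaces, sites rolled out)
    "A": (10, 3, 1), "B": (25, 8, 4), "C": (40, 12, 6), "New": (30, 9, 3),
}
known_effort = {"A": 120, "B": 400, "C": 900}     # person-days for finished projects

def borda_scores(projs):
    scores = {name: 0 for name in projs}
    n_attrs = len(next(iter(projs.values())))
    for k in range(n_attrs):                       # each attribute "votes" via its ranking
        ranked = sorted(projs, key=lambda name: projs[name][k])
        for position, name in enumerate(ranked):
            scores[name] += position               # higher attribute value -> more points
    return scores

scores = borda_scores(projects)
order = sorted(known_effort, key=lambda n: scores[n])
lower = [n for n in order if scores[n] <= scores["New"]][-1]
upper = [n for n in order if scores[n] >= scores["New"]][0]
print(f"New ranks between {lower} and {upper}: "
      f"effort roughly {known_effort[lower]}-{known_effort[upper]} person-days")
```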
Evaluating Satellite-based Rainfall Estimates for Basin-scale Hydrologic Modeling
NASA Astrophysics Data System (ADS)
Yilmaz, K. K.; Hogue, T. S.; Hsu, K.; Gupta, H. V.; Mahani, S. E.; Sorooshian, S.
2003-12-01
The reliability of any hydrologic simulation and basin outflow prediction effort depends primarily on the rainfall estimates. The problem of estimating rainfall becomes more obvious in basins with scarce or no rain gauges. We present an evaluation of satellite-based rainfall estimates for basin-scale hydrologic modeling with particular interest in ungauged basins. The initial phase of this study focuses on comparison of mean areal rainfall estimates from ground-based rain gauge network, NEXRAD radar Stage-III, and satellite-based PERSIANN (Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks) and their influence on hydrologic model simulations over several basins in the U.S. Six-hourly accumulations of the above competing mean areal rainfall estimates are used as input to the Sacramento Soil Moisture Accounting Model. Preliminary experiments for the Leaf River Basin in Mississippi, for the period of March 2000 - June 2002, reveals that seasonality plays an important role in the comparison. There is an overestimation during the summer and underestimation during the winter in satellite-based rainfall with respect to the competing rainfall estimates. The consequence of this result on the hydrologic model is that simulated discharge underestimates the major observed peak discharges during early spring for the basin under study. Future research will entail developing correction procedures, which depend on different factors such as seasonality, geographic location and basin size, for satellite-based rainfall estimates over basins with dense rain gauge network and/or radar coverage. Extension of these correction procedures to satellite-based rainfall estimates over ungauged basins with similar characteristics has the potential for reducing the input uncertainty in ungauged basin modeling efforts.
Estimating Total Deposition Using NADP & CASTNET Data
For more than 40 years, efforts have been made to estimate total sulfur and nitrogen deposition in the United States using a combination of measured concentrations in precipitation and in the air, precipitation amounts for wet deposition, and various modeled or estimated depositi...
Studies in Software Cost Model Behavior: Do We Really Understand Cost Model Performance?
NASA Technical Reports Server (NTRS)
Lum, Karen; Hihn, Jairus; Menzies, Tim
2006-01-01
While there exists extensive literature on software cost estimation techniques, industry practice continues to rely upon standard regression-based algorithms. These software effort models are typically calibrated or tuned to local conditions using local data. This paper cautions that current approaches to model calibration often produce sub-optimal models because of the large variance problem inherent in cost data and because they include far more effort multipliers than the data support. Building optimal models requires that a wider range of models be considered, while correctly calibrating these models requires rejection rules that prune variables and records and use multiple criteria for evaluating model performance. The main contribution of this paper is to document a standard method that integrates formal model identification, estimation, and validation. It also documents what we call the large variance problem that is a leading cause of cost model brittleness or instability.
DOT National Transportation Integrated Search
2000-11-01
In an effort to study occupant survivability in train collisions, analyses and tests were conducted to understand and improve the crashworthiness of rail vehicles. A collision dynamics model was developed in order to estimate the rigid body motion of...
As part of a broader exploratory effort to develop ecological risk assessment approaches to estimate potential chemical effects on non-target populations, we describe an approach for developing simple population models to estimate the extent to which acute effects on individual...
DOT National Transportation Integrated Search
2012-05-01
The Vermont Integrated Land-Use and Transportation Carbon Estimator (VILTCE) project is part of a larger effort to develop environmental metrics related to travel, and to integrate these tools into a travel model under UVM TRC Signature Project No. 1...
Battery Calendar Life Estimator Manual Modeling and Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jon P. Christophersen; Ira Bloom; Ed Thomas
2012-10-01
The Battery Life Estimator (BLE) Manual has been prepared to assist developers in their efforts to estimate the calendar life of advanced batteries for automotive applications. Testing requirements and procedures are defined by the various manuals previously published under the United States Advanced Battery Consortium (USABC). The purpose of this manual is to describe and standardize a method for estimating calendar life based on statistical models and degradation data acquired from typical USABC battery testing.
Battery Life Estimator Manual Linear Modeling and Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jon P. Christophersen; Ira Bloom; Ed Thomas
2009-08-01
The Battery Life Estimator (BLE) Manual has been prepared to assist developers in their efforts to estimate the calendar life of advanced batteries for automotive applications. Testing requirements and procedures are defined by the various manuals previously published under the United States Advanced Battery Consortium (USABC). The purpose of this manual is to describe and standardize a method for estimating calendar life based on statistical models and degradation data acquired from typical USABC battery testing.
Estimating rates of local species extinction, colonization and turnover in animal communities
Nichols, James D.; Boulinier, T.; Hines, J.E.; Pollock, K.H.; Sauer, J.R.
1998-01-01
Species richness has been identified as a useful state variable for conservation and management purposes. Changes in richness over time provide a basis for predicting and evaluating community responses to management, to natural disturbance, and to changes in factors such as community composition (e.g., the removal of a keystone species). Probabilistic capture-recapture models have been used recently to estimate species richness from species count and presence-absence data. These models do not require the common assumption that all species are detected in sampling efforts. We extend this approach to the development of estimators useful for studying the vital rates responsible for changes in animal communities over time: rates of local species extinction, turnover, and colonization. Our approach to estimation is based on capture-recapture models for closed animal populations that permit heterogeneity in detection probabilities among the different species in the sampled community. We have developed a computer program, COMDYN, to compute many of these estimators and associated bootstrap variances. Analyses using data from the North American Breeding Bird Survey (BBS) suggested that the estimators performed reasonably well. We recommend estimators based on probabilistic modeling for future work on community responses to management efforts as well as on basic questions about community dynamics.
NASA Technical Reports Server (NTRS)
Avila, Edwin M. Martinez; Muniz, Ricardo; Szafran, Jamie; Dalton, Adam
2011-01-01
Lines of code (LOC) analysis is one of the methods used to measure programmer productivity and estimate schedules of programming projects. The Launch Control System (LCS) had previously used this method to estimate the amount of work and to plan development efforts. The disadvantage of using LOC as a measure of effort is that coding accounts for only 30% to 35% of the total effort of a software project [8]. Because of this disadvantage, Jamie Szafran of the System Software Branch of Control And Data Systems (NE-C3) at Kennedy Space Center developed a web application called the Function Point Analysis (FPA) Depot, which uses function points rather than LOC to produce better estimates of the hours needed to develop each piece of software. The objective of this web application is to give the LCS software architecture team data with which to more accurately estimate the effort required to implement customer requirements. This paper describes the evolution of the domain model used for function point analysis as project managers continually strive to generate more accurate estimates.
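For readers unfamiliar with function point counting, the sketch below builds an unadjusted function point total from the five standard component types, using the commonly published average complexity weights, and converts it to hours with a purely hypothetical productivity rate; it is not the FPA Depot's actual calculation.

```python
# Average IFPUG-style weights for the five function point component types.
WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_files": 10,
    "external_interfaces": 7,
}

def unadjusted_function_points(counts):
    """Sum component counts times their average complexity weights."""
    return sum(WEIGHTS[k] * counts.get(k, 0) for k in WEIGHTS)

def effort_hours(ufp, hours_per_fp=8.0):
    """Convert function points to hours with a locally calibrated productivity rate."""
    return ufp * hours_per_fp

counts = {"external_inputs": 12, "external_outputs": 7,
          "external_inquiries": 5, "internal_files": 3, "external_interfaces": 2}
ufp = unadjusted_function_points(counts)
print(ufp, "UFP ->", effort_hours(ufp), "hours (illustrative rate)")
```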
Estimating Total Deposition Using NADP & CASTNET Data (NADP 2016 poster)
For more than 40 years, efforts have been made to estimate total sulfur and nitrogen deposition in the United States using a combination of measured concentrations in precipitation and in the air, precipitation amounts for wet deposition, and various modeled or estimated depositi...
Improving PAGER's real-time earthquake casualty and loss estimation toolkit: a challenge
Jaiswal, K.S.; Wald, D.J.
2012-01-01
We describe the on-going developments of PAGER’s loss estimation models, and discuss value-added web content that can be generated related to exposure, damage and loss outputs for a variety of PAGER users. These developments include identifying vulnerable building types in any given area, estimating earthquake-induced damage and loss statistics by building type, and developing visualization aids that help locate areas of concern for improving post-earthquake response efforts. While detailed exposure and damage information is highly useful and desirable, significant improvements are still necessary in order to improve underlying building stock and vulnerability data at a global scale. Existing efforts with the GEM’s GED4GEM and GVC consortia will help achieve some of these objectives. This will benefit PAGER especially in regions where PAGER’s empirical model is less-well constrained; there, the semi-empirical and analytical models will provide robust estimates of damage and losses. Finally, we outline some of the challenges associated with rapid casualty and loss estimation that we experienced while responding to recent large earthquakes worldwide.
A water quality model, LM3 Eutro, will be used to estimate the response of nutrient concentrations and primary productivity in Lake Michigan to nutrient loading scenarios. This work is part of a larger effort, the Future Midwestern landscapes study, that will estimate the produc...
Ward, Robert J; Griffiths, Richard A; Wilkinson, John W; Cornish, Nina
2017-12-22
A fifth of reptiles are Data Deficient, many due to unknown population status. Monitoring snake populations can be demanding due to crypsis and low population densities, with insufficient recaptures for abundance estimation via capture-mark-recapture. Alternatively, binomial N-mixture models enable abundance estimation from count data without individual identification, but have rarely been successfully applied to snake populations. We evaluated the suitability of occupancy and N-mixture methods for monitoring an insular population of grass snakes (Natrix helvetica) and considered covariates influencing detection, occupancy and abundance within remaining habitat. Snakes were elusive, with detectability increasing with survey effort (mean: 0.33 ± 0.06 s.e.m.). The probability of a transect being occupied was moderate (mean per kilometre: 0.44 ± 0.19 s.e.m.) and increased with transect length. Abundance estimates indicate a small threatened population associated with our transects (mean: 39, 95% CI: 20-169). Power analysis indicated that the survey effort required to detect occupancy declines would be prohibitive. Occupancy models fitted well, whereas N-mixture models showed poor fit, provided little extra information over occupancy models and were at greater risk of closure violation. Therefore we suggest that occupancy models are more appropriate for monitoring snakes and other elusive species, but that population trends may go undetected.
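A minimal sketch of the binomial N-mixture likelihood evaluated in this study may help: repeated counts at a site are modeled as binomial draws from a latent Poisson abundance, and the likelihood marginalizes over the unobserved abundance. The counts, truncation bound, and grid search below are illustrative, not the authors' fitted model.

```python
import numpy as np
from scipy.stats import poisson, binom

def nmixture_loglik(counts, lam, p, n_max=200):
    """Log-likelihood of repeated counts y_1..y_J at one site under an
    N-mixture model: N ~ Poisson(lam), y_j | N ~ Binomial(N, p)."""
    n = np.arange(max(counts), n_max + 1)
    prior = poisson.pmf(n, lam)                # P(N = n)
    lik_given_n = np.ones_like(n, dtype=float)
    for y in counts:
        lik_given_n *= binom.pmf(y, n, p)      # P(y_j | N = n)
    return np.log(np.sum(prior * lik_given_n))

counts = [3, 5, 2]  # counts from three survey visits at one transect
# Coarse grid search over the abundance rate and detection probability.
grid = [(lam, p) for lam in np.arange(1, 30, 0.5) for p in np.arange(0.05, 0.95, 0.05)]
best = max(grid, key=lambda t: nmixture_loglik(counts, *t))
print("approximate MLE (lambda, p):", best)
```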
Mollet, Pierre; Kery, Marc; Gardner, Beth; Pasinelli, Gilberto; Royle, Andy
2015-01-01
We conducted a survey of an endangered and cryptic forest grouse, the capercaillie Tetrao urogallus, based on droppings collected on two sampling occasions in eight forest fragments in central Switzerland in early spring 2009. We used genetic analyses to sex and individually identify birds. We estimated sex-dependent detection probabilities and population size using a modern spatial capture-recapture (SCR) model for the data from pooled surveys. A total of 127 capercaillie genotypes were identified (77 males, 46 females, and 4 of unknown sex). The SCR model yielded a total population size estimate (posterior mean) of 137.3 capercaillies (posterior sd 4.2, 95% CRI 130–147). The observed sex ratio was skewed towards males (0.63). The posterior mean of the sex ratio under the SCR model was 0.58 (posterior sd 0.02, 95% CRI 0.54–0.61), suggesting a male-biased sex ratio in our study area. A subsampling simulation study indicated that a reduced sampling effort representing 75% of the actual detections would still yield practically acceptable estimates of total size and sex ratio in our population. Hence, field work and financial effort could be reduced without compromising accuracy when the SCR model is used to estimate key population parameters of cryptic species.
SOFTCOST - DEEP SPACE NETWORK SOFTWARE COST MODEL
NASA Technical Reports Server (NTRS)
Tausworthe, R. C.
1994-01-01
The early-on estimation of required resources and a schedule for the development and maintenance of software is usually the least precise aspect of the software life cycle. However, it is desirable to make some sort of an orderly and rational attempt at estimation in order to plan and organize an implementation effort. The Software Cost Estimation Model program, SOFTCOST, was developed to provide a consistent automated resource and schedule model which is more formalized than the often-used guesswork model based on experience, intuition, and luck. SOFTCOST was developed after the evaluation of a number of existing cost estimation programs indicated that there was a need for a cost estimation program with a wide range of application and adaptability to diverse kinds of software. SOFTCOST combines several software cost models found in the open literature into one comprehensive set of algorithms that compensate for nearly fifty implementation factors relative to size of the task, inherited baseline, organizational and system environment, and difficulty of the task. SOFTCOST produces mean and variance estimates of software size, implementation productivity, recommended staff level, probable duration, amount of computer resources required, and amount and cost of software documentation. Since the confidence level for a project using mean estimates is small, the user is given the opportunity to enter risk-biased values for effort, duration, and staffing, to achieve higher confidence levels. SOFTCOST then produces a PERT/CPM file with subtask efforts, durations, and precedences defined so as to produce the Work Breakdown Structure (WBS) and schedule having the asked-for overall effort and duration. The SOFTCOST program operates in an interactive environment prompting the user for all of the required input. The program builds the supporting PERT data base in a file for later report generation or revision. The PERT schedule and the WBS schedule may be printed and stored in a file for later use. The SOFTCOST program is written in Microsoft BASIC for interactive execution and has been implemented on an IBM PC-XT/AT operating MS-DOS 2.1 or higher with 256K bytes of memory. SOFTCOST was originally developed for the Zilog Z80 system running under CP/M in 1981. It was converted to run on the IBM PC XT/AT in 1986. SOFTCOST is a copyrighted work with all copyright vested in NASA.
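SOFTCOST's mean and variance outputs feed a PERT/CPM file; the classical PERT three-point formulas give the flavor of such estimates. The sketch below applies mean = (a + 4m + b)/6 and variance = ((b - a)/6)^2 to hypothetical subtasks and sums them assuming independence; it is a generic illustration, not SOFTCOST's internal algorithm.

```python
import math

def pert_mean_var(optimistic, most_likely, pessimistic):
    """Classical PERT beta approximation for one task's effort."""
    mean = (optimistic + 4 * most_likely + pessimistic) / 6.0
    var = ((pessimistic - optimistic) / 6.0) ** 2
    return mean, var

# Illustrative subtasks (effort in staff-weeks): (optimistic, most likely, pessimistic).
subtasks = [(2, 4, 8), (5, 8, 14), (3, 5, 10)]
means, variances = zip(*(pert_mean_var(*t) for t in subtasks))

total_mean = sum(means)
total_sd = math.sqrt(sum(variances))   # assumes the subtasks are independent
print(f"total effort: {total_mean:.1f} +/- {total_sd:.1f} staff-weeks")
```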
Grant, Evan H. Campbell; Zipkin, Elise; Sillett, T. Scott; Chandler, Richard; Royle, J. Andrew
2014-01-01
Wildlife populations consist of individuals that contribute disproportionately to growth and viability. Understanding a population's spatial and temporal dynamics requires estimates of abundance and demographic rates that account for this heterogeneity. Estimating these quantities can be difficult, requiring years of intensive data collection. Often, this is accomplished through the capture and recapture of individual animals, which is generally only feasible at a limited number of locations. In contrast, N-mixture models allow for the estimation of abundance, and spatial variation in abundance, from count data alone. We extend recently developed multistate, open population N-mixture models, which can additionally estimate demographic rates based on an organism's life history characteristics. In our extension, we develop an approach to account for the case where not all individuals can be assigned to a state during sampling. Using only state-specific count data, we show how our model can be used to estimate local population abundance, as well as density-dependent recruitment rates and state-specific survival. We apply our model to a population of black-throated blue warblers (Setophaga caerulescens) that have been surveyed for 25 years on their breeding grounds at the Hubbard Brook Experimental Forest in New Hampshire, USA. The intensive data collection efforts allow us to compare our estimates to estimates derived from capture–recapture data. Our model performed well in estimating population abundance and density-dependent rates of annual recruitment/immigration. Estimates of local carrying capacity and per capita recruitment of yearlings were consistent with those published in other studies. However, our model moderately underestimated annual survival probability of yearling and adult females and severely underestimated survival probabilities for both of these male stages. The most accurate and precise estimates will necessarily require some amount of intensive data collection efforts (such as capture–recapture). Integrated population models that combine data from both intensive and extensive sources are likely to be the most efficient approach for estimating demographic rates at large spatial and temporal scales.
Disentangling sampling and ecological explanations underlying species-area relationships
Cam, E.; Nichols, J.D.; Hines, J.E.; Sauer, J.R.; Alpizar-Jara, R.; Flather, C.H.
2002-01-01
We used a probabilistic approach to address the influence of sampling artifacts on the form of species-area relationships (SARs). We developed a model in which the increase in observed species richness is a function of sampling effort exclusively. We assumed that effort depends on area sampled, and we generated species-area curves under that model. These curves can be realistic looking. We then generated SARs from avian data, comparing SARs based on counts with those based on richness estimates. We used an approach to estimation of species richness that accounts for species detection probability and, hence, for variation in sampling effort. The slopes of SARs based on counts are steeper than those of curves based on estimates of richness, indicating that the former partly reflect failure to account for species detection probability. SARs based on estimates reflect ecological processes exclusively, not sampling processes. This approach permits investigation of ecologically relevant hypotheses. The slope of SARs is not influenced by the slope of the relationship between habitat diversity and area. In situations in which not all of the species are detected during sampling sessions, approaches to estimation of species richness integrating species detection probability should be used to investigate the rate of increase in species richness with area.
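The slope comparison at the heart of this study can be illustrated with the power-law SAR, S = cA^z, fit in log-log space. The sketch below fits the slope z separately to raw species counts and to detection-corrected richness estimates; the data values are invented to mimic the qualitative result (a steeper slope for counts), not taken from the study.

```python
import numpy as np

def sar_slope(area, richness):
    """Fit log S = log c + z log A by least squares; return the slope z."""
    z, logc = np.polyfit(np.log(area), np.log(richness), 1)
    return z

area = np.array([1, 5, 10, 50, 100, 500], dtype=float)        # km^2, illustrative
counts = np.array([12, 20, 25, 42, 50, 80], dtype=float)      # observed species counts
estimates = np.array([20, 29, 34, 52, 59, 88], dtype=float)   # detection-corrected richness

print("slope from raw counts:        ", round(sar_slope(area, counts), 3))
print("slope from richness estimates:", round(sar_slope(area, estimates), 3))
```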
NASA Astrophysics Data System (ADS)
Barba, M.; Willis, M. J.; Tiampo, K. F.; Lynett, P. J.; Mätzler, E.; Thorsøe, K.; Higman, B. M.; Thompson, J. A.; Morin, P. J.
2017-12-01
We use a combination of geodetic imaging techniques and modelling efforts to examine the June 2017 Karrat Fjord, West Greenland, landslide and tsunami event. Our efforts include analysis of precursor motions extracted from Sentinel SAR interferometry that we improved with high-resolution Digital Surface Models derived from commercial imagery and geo-coded Structure from Motion analyses. We produce well-constrained estimates of landslide volume through DSM differencing by improving the ArcticDEM coverage of the region, and provide modeled tsunami run-up estimates at villages around the region, constrained with in-situ observations provided by the Greenlandic authorities. Estimates of run-up at unoccupied coasts are derived using a blend of high-resolution imagery and elevation models. We further detail post-failure slope stability for areas of interest around the Karrat Fjord region. Warming trends in the region from model and satellite analysis are combined with optical imagery to ascertain whether the influence of melting permafrost and the formation of small springs on a slight bench on the mountainside that eventually failed can be used as indicators of future events.
Silva, Déborah R O; Ligeiro, Raphael; Hughes, Robert M; Callisto, Marcos
2016-06-01
Taxonomic richness is one of the most important measures of biological diversity in ecological studies, including those with stream macroinvertebrates. However, it is impractical to measure the true richness of any site directly by sampling. Our objective was to evaluate the effect of sampling effort on estimates of macroinvertebrate family and Ephemeroptera, Plecoptera, and Trichoptera (EPT) genera richness at two scales: basin and stream site. In addition, we tried to determine which environmental factors at the site scale most influenced the amount of sampling effort needed. We sampled 39 sites in the Cerrado biome (neotropical savanna). In each site, we obtained 11 equidistant samples of the benthic assemblage and multiple physical habitat measurements. The observed basin-scale richness achieved a consistent estimation from Chao 1, Jack 1, and Jack 2 richness estimators. However, at the site scale, there was a constant increase in the observed number of taxa with increased number of samples. Models that best explained the slope of site-scale sampling curves (representing the necessity of greater sampling effort) included metrics that describe habitat heterogeneity, habitat structure, anthropogenic disturbance, and water quality, for both macroinvertebrate family and EPT genera richness. Our results demonstrate the importance of considering basin- and site-scale sampling effort in ecological surveys and that taxa accumulation curves and richness estimators are good tools for assessing sampling efficiency. The physical habitat explained a significant amount of the sampling effort needed. Therefore, future studies should explore the possible implications of physical habitat characteristics when developing sampling objectives, study designs, and calculating the needed sampling effort.
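For reference, the Chao 1 estimator used in this study extrapolates richness from the numbers of singleton and doubleton taxa. The sketch below implements the bias-corrected form, S_obs + f1(f1 - 1) / (2(f2 + 1)), on invented abundance data.

```python
import numpy as np

def chao1(abundances):
    """Bias-corrected Chao 1 richness estimate from taxon abundance counts."""
    abundances = np.asarray(abundances)
    s_obs = np.count_nonzero(abundances)
    f1 = np.sum(abundances == 1)      # singletons
    f2 = np.sum(abundances == 2)      # doubletons
    return s_obs + f1 * (f1 - 1) / (2.0 * (f2 + 1))

# Illustrative abundances of macroinvertebrate families pooled over the samples at one site.
abund = [14, 9, 7, 5, 3, 2, 2, 1, 1, 1, 1]
print("observed richness:", np.count_nonzero(abund), " Chao 1 estimate:", round(chao1(abund), 1))
```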
Reynolds, Michelle H.; Brinck, Kevin W.; Laniawe, Leona
2011-01-01
To improve the Laysan Teal population estimates, we recommend changes to the monitoring protocol. Additional years of data are needed to quantify inter-annual seasonal detection probabilities, which may allow the use of standardized direct counts as an unbiased index of population size. Survey protocols should be enhanced through frequent resights, regular survey intervals, and determining reliable standards to detect catastrophic declines and annual changes in adult abundance. In late 2009 to early 2010, 68% of the population was marked with unique color band combinations. This allowed for potentially accurate adult population estimates and survival estimates without the need to mark new birds in 2010, 2011, and possibly 2012. However, efforts should be made to replace worn or illegible bands so birds can be identified in future surveys. It would be valuable to develop more sophisticated population size and survival models using Program MARK, a state-of-the-art software package which uses likelihood models to analyze mark-recapture data. This would allow for more reliable adult population and survival estimates to compare with the "source" Laysan Teal population on Laysan Island. These models will require additional years of resight data (> 1 year) and, in some cases, an intensive annual effort of marking and recapture. Because data indicate standardized all-wetland counts are a poor index of abundance, monitoring efforts could be improved by expanding resight surveys to include all wetlands, discontinuing the all-wetland counts, and reallocating some of the wetland count effort to collect additional opportunistic resights. Approximately two years of additional bimonthly surveys are needed to validate the direct count as an appropriate index of population abundance. Additional years of individual resight data will allow estimation of adult population size, as specified in recovery criteria, and tracking of species population dynamics at Midway Atoll.
Improving waterfowl production estimates: Results of a test in the prairie pothole region
Arnold, P.M.; Cowardin, L.M.
1985-01-01
The U.S. Fish and Wildlife Service, in an effort to improve and standardize methods for estimating waterfowl production, tested a new technique in the four-county Arrowwood Wetland Management District (WMD) for three years (1982-1984). On 14 randomly selected 10.36-km2 plots, upland and wetland habitat was mapped, classified, and digitized. Waterfowl breeding pairs were counted twice each year and the proportion of wetland basins containing water was determined. Pair numbers and habitat conditions were entered into a computer model developed by Northern Prairie Wildlife Research Center. That model estimates production on small federally owned wildlife tracts, federal wetland easements, and private land. Results indicate that production estimates were most accurate for mallards (Anas platyrhynchos), the species for which the computer model and data base were originally designed. Predictions for the pintail (Anas acuta), gadwall (A. strepera), blue-winged teal (A. discors), and northern shoveler (A. clypeata) were believed to be less accurate. Modeling breeding period dynamics of a waterfowl species and making credible production estimates for a geographic area are possible if the data used in the model are adequate. The process of modeling the breeding period of a species aids in locating areas of insufficient biological knowledge. This process will help direct future research efforts and permit more efficient gathering of field data.
Redmond, Daniel P; Chiew, Yeong Shiong; Major, Vincent; Chase, J Geoffrey
2016-09-23
Monitoring of respiratory mechanics is required for guiding patient-specific mechanical ventilation settings in critical care. Many models of respiratory mechanics perform poorly in the presence of variable patient effort. Typical modelling approaches either attempt to mitigate the effect of the patient effort on the airway pressure waveforms, or attempt to capture the size and shape of the patient effort. This work analyses a range of methods to identify respiratory mechanics in volume-controlled ventilation modes when there is patient effort. The models are compared using four datasets, each with a sample of 30 breaths taken before, and 2-3 minutes after, sedation was administered. The sedation will reduce patient efforts, but the underlying pulmonary mechanical properties are unlikely to change during this short time. Model-identified parameters from breathing cycles with patient effort are compared to breathing cycles that do not have patient effort. All models have advantages and disadvantages, so model selection may be specific to the respiratory mechanics application. However, in general, the combined method of iterative interpolative pressure reconstruction and stacking multiple consecutive breaths together has the best performance over the datasets. The variability of identified elastance when there is patient effort is the lowest with this method, and there is little systematic offset in identified mechanics when sedation is administered. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
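The single-compartment model that underlies much of this literature, P = E*V + R*Q + P0, can be identified by least squares when patient effort is absent. The sketch below fits elastance, resistance, and offset pressure to one synthetic volume-controlled breath; the decelerating flow profile and parameter values are assumptions for illustration, and the paper's pressure-reconstruction and breath-stacking methods are not reproduced here.

```python
import numpy as np

def identify_mechanics(pressure, volume, flow):
    """Least-squares fit of P = E*V + R*Q + P0 over one breath."""
    X = np.column_stack([volume, flow, np.ones_like(volume)])
    (E, R, P0), *_ = np.linalg.lstsq(X, pressure, rcond=None)
    return E, R, P0

# Synthetic volume-controlled breath with a decelerating inspiratory flow profile.
t = np.linspace(0, 1.0, 200)                  # 1 s inspiration
flow = 0.8 - 0.6 * t                          # L/s
volume = np.cumsum(flow) * (t[1] - t[0])      # L
E_true, R_true, P0_true = 25.0, 8.0, 5.0      # cmH2O/L, cmH2O.s/L, cmH2O
pressure = E_true * volume + R_true * flow + P0_true
pressure += np.random.default_rng(1).normal(0, 0.2, size=t.size)  # measurement noise

print([round(x, 2) for x in identify_mechanics(pressure, volume, flow)])
```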
Student Effort and Performance over the Semester
ERIC Educational Resources Information Center
Krohn, Gregory A.; O'Connor, Catherine M.
2005-01-01
The authors extend the standard education production function and student time allocation analysis to focus on the interactions between student effort and performance over the semester. The purged instrumental variable technique is used to obtain consistent estimators of the structural parameters of the model using data from intermediate…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jensen, A.L.; Spigarelli, J.A.; Thommes, M.M.
1982-01-01
Two conventional fishery stock assessment models, the surplus-production model and the dynamic-pool model, were applied to assess the impacts of water withdrawals by electricity-generating plants, industries, and municipalities on the standing stocks and yields of alewife Alosa pseudoharengus, rainbow smelt Osmerus mordax, and yellow perch Perca flavescens in Lake Michigan. Impingement and entrainment estimates were based on data collected at 15 power plants. The surplus-production model was fitted to the three populations with catch and effort data from the commercial fisheries. Dynamic-pool model parameters were estimated from published data. The numbers entrained and impinged are large, but the proportions of the standing stocks impinged and the proportions of the eggs and larvae entrained are small. The reductions in biomass of the stocks and in maximum sustainable yields are larger than the proportions impinged. The reductions in biomass, based on 1975 data and an assumed full water withdrawal, are 2.86% for alewife, 0.76% for rainbow smelt, and 0.28% for yellow perch. Fishery models are an economical means of impact assessment in situations where catch and effort data are available for estimation of model parameters.
Methodological Framework for Analysis of Buildings-Related Programs with BEAMS, 2008
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elliott, Douglas B.; Dirks, James A.; Hostick, Donna J.
The U.S. Department of Energy’s (DOE’s) Office of Energy Efficiency and Renewable Energy (EERE) develops official “benefits estimates” for each of its major programs using its Planning, Analysis, and Evaluation (PAE) Team. PAE conducts an annual integrated modeling and analysis effort to produce estimates of the energy, environmental, and financial benefits expected from EERE’s budget request. These estimates are part of EERE’s budget request and are also used in the formulation of EERE’s performance measures. Two of EERE’s major programs are the Building Technologies Program (BT) and the Weatherization and Intergovernmental Program (WIP). Pacific Northwest National Laboratory (PNNL) supports PAE by developing the program characterizations and other market information necessary to provide input to the EERE integrated modeling analysis as part of PAE’s Portfolio Decision Support (PDS) effort. Additionally, PNNL also supports BT by providing line-item estimates for the Program’s internal use. PNNL uses three modeling approaches to perform these analyses. This report documents the approach and methodology used to estimate future energy, environmental, and financial benefits using one of those methods: the Building Energy Analysis and Modeling System (BEAMS). BEAMS is a PC-based accounting model that was built in Visual Basic by PNNL specifically for estimating the benefits of buildings-related projects. It allows various types of projects to be characterized including whole-building, envelope, lighting, and equipment projects. This document contains an overview section that describes the estimation process and the models used to estimate energy savings. The body of the document describes the algorithms used within the BEAMS software. This document serves both as stand-alone documentation for BEAMS, and also as a supplemental update of a previous document, Methodological Framework for Analysis of Buildings-Related Programs: The GPRA Metrics Effort (Elliott et al. 2004b). The areas most changed since the publication of that previous document are those discussing the calculation of lighting and HVAC interactive effects (for both lighting and envelope/whole-building projects). This report does not attempt to convey inputs to BEAMS or the methodology of their derivation.
A Bayesian approach to multisource forest area estimation
Andrew O. Finley
2007-01-01
In efforts such as land use change monitoring, carbon budgeting, and forecasting ecological conditions and timber supply, demand is increasing for regional and national data layers depicting forest cover. These data layers must permit small area estimates of forest and, most importantly, provide associated error estimates. This paper presents a model-based approach for...
Software Size Estimation Using Expert Estimation: A Fuzzy Logic Approach
ERIC Educational Resources Information Center
Stevenson, Glenn A.
2012-01-01
For decades software managers have been using formal methodologies such as the Constructive Cost Model and Function Points to estimate the effort of software projects during the early stages of project development. While some research shows these methodologies to be effective, many software managers feel that they are overly complicated to use and…
DOT National Transportation Integrated Search
2014-05-01
Travel demand forecasting models are used to predict future traffic volumes to evaluate roadway improvement alternatives. Each of the metropolitan planning organizations (MPO) in Alabama maintains a travel demand model to support planning efforts...
Estimating abundance of mountain lions from unstructured spatial sampling
Russell, Robin E.; Royle, J. Andrew; Desimone, Richard; Schwartz, Michael K.; Edwards, Victoria L.; Pilgrim, Kristy P.; Mckelvey, Kevin S.
2012-01-01
Mountain lions (Puma concolor) are often difficult to monitor because of their low capture probabilities, extensive movements, and large territories. Methods for estimating the abundance of this species are needed to assess population status, determine harvest levels, evaluate the impacts of management actions on populations, and derive conservation and management strategies. Traditional mark–recapture methods do not explicitly account for differences in individual capture probabilities due to the spatial distribution of individuals in relation to survey effort (or trap locations). However, recent advances in the analysis of capture–recapture data have produced methods estimating abundance and density of animals from spatially explicit capture–recapture data that account for heterogeneity in capture probabilities due to the spatial organization of individuals and traps. We adapt recently developed spatial capture–recapture models to estimate density and abundance of mountain lions in western Montana. Volunteers and state agency personnel collected mountain lion DNA samples in portions of the Blackfoot drainage (7,908 km2) in west-central Montana using 2 methods: snow back-tracking mountain lion tracks to collect hair samples and biopsy darting treed mountain lions to obtain tissue samples. Overall, we recorded 72 individual capture events, including captures both with and without tissue sample collection and hair samples resulting in the identification of 50 individual mountain lions (30 females, 19 males, and 1 unknown sex individual). We estimated lion densities from 8 models containing effects of distance, sex, and survey effort on detection probability. Our population density estimates ranged from a minimum of 3.7 mountain lions/100 km2 (95% CI 2.3–5.7) under the distance only model (including only an effect of distance on detection probability) to 6.7 (95% CI 3.1–11.0) under the full model (including effects of distance, sex, survey effort, and distance x sex on detection probability). These numbers translate to a total estimate of 293 mountain lions (95% CI 182–451) to 529 (95% CI 245–870) within the Blackfoot drainage. Results from the distance model are similar to previous estimates of 3.6 mountain lions/100 km2 for the study area; however, results from all other models indicated greater numbers of mountain lions. Our results indicate that unstructured spatial sampling combined with spatial capture–recapture analysis can be an effective method for estimating large carnivore densities.
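Spatially explicit capture-recapture models typically relate detection probability to the distance between an animal's activity centre and a sampling location, most commonly with a half-normal form. The sketch below evaluates p(d) = p0 * exp(-d^2 / (2*sigma^2)) and scales expected detections by per-location effort; the parameter values are illustrative and this is not the authors' fitted model.

```python
import numpy as np

def detection_prob(distance, p0, sigma):
    """Half-normal detection: p(d) = p0 * exp(-d^2 / (2 sigma^2))."""
    return p0 * np.exp(-distance**2 / (2.0 * sigma**2))

# Distances (km) from a lion's activity centre to several sampled locations.
d = np.array([0.5, 2.0, 5.0, 10.0])
p = detection_prob(d, p0=0.3, sigma=3.0)
print(np.round(p, 3))

# Expected detections across locations, as would enter an SCR encounter model,
# with per-location effort (e.g., number of track surveys) as a multiplier.
effort = np.array([2, 1, 3, 1])
print("expected detections:", round(float(np.sum(effort * p)), 2))
```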
Analysis and Management of Animal Populations: Modeling, Estimation and Decision Making
Williams, B.K.; Nichols, J.D.; Conroy, M.J.
2002-01-01
This book deals with the processes involved in making informed decisions about the management of animal populations. It covers the modeling of population responses to management actions, the estimation of quantities needed in the modeling effort, and the application of these estimates and models to the development of sound management decisions. The book synthesizes and integrates in a single volume the methods associated with these themes, as they apply to ecological assessment and conservation of animal populations. Key features: integrates population modeling, parameter estimation and decision-theoretic approaches to management in a single, cohesive framework; provides authoritative, state-of-the-art descriptions of quantitative approaches to modeling, estimation and decision-making; emphasizes the role of mathematical modeling in the conduct of science and management; utilizes a unifying biological context, consistent mathematical notation, and numerous biological examples.
Welter, David E.; Doherty, John E.; Hunt, Randall J.; Muffels, Christopher T.; Tonkin, Matthew J.; Schreuder, Willem A.
2012-01-01
An object-oriented parameter estimation code was developed to incorporate benefits of object-oriented programming techniques for solving large parameter estimation modeling problems. The code is written in C++ and is a formulation and expansion of the algorithms included in PEST, a widely used parameter estimation code written in Fortran. The new code is called PEST++ and is designed to lower the barriers of entry for users and developers while providing efficient algorithms that can accommodate large, highly parameterized problems. This effort has focused on (1) implementing the most popular features of PEST in a fashion that is easy for novice or experienced modelers to use and (2) creating a software design that is easy to extend; that is, this effort provides a documented object-oriented framework designed from the ground up to be modular and extensible. In addition, all PEST++ source code and its associated libraries, as well as the general run manager source code, have been integrated in the Microsoft Visual Studio® 2010 integrated development environment. The PEST++ code is designed to provide a foundation for an open-source development environment capable of producing robust and efficient parameter estimation tools for the environmental modeling community into the future.
NASA Astrophysics Data System (ADS)
Wang, Yu; Liu, Qun
2013-01-01
Surplus-production models are widely used in fish stock assessment and fisheries management due to their simplicity and lower data demands than age-structured models such as Virtual Population Analysis. The CEDA (catch-effort data analysis) and ASPIC (a surplus-production model incorporating covariates) computer packages are data-fitting or parameter estimation tools that have been developed to analyze catch-and-effort data using non-equilibrium surplus production models. We applied CEDA and ASPIC to the hairtail (Trichiurus japonicus) fishery in the East China Sea. Both packages produced robust results and yielded similar estimates. In CEDA, the Schaefer surplus production model with log-normal error assumption produced results close to those of ASPIC. CEDA is sensitive to the choice of initial proportion, while ASPIC is not. However, CEDA produced higher R2 values than ASPIC.
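The non-equilibrium Schaefer dynamics fitted by CEDA and ASPIC can be written as B_{t+1} = B_t + rB_t(1 - B_t/K) - C_t, with predicted CPUE = qB_t and MSY = rK/4. The sketch below projects biomass and CPUE under assumed parameters; in practice r, K, and q would be estimated by minimizing, for example, squared log-residuals between observed and predicted CPUE (the log-normal error assumption mentioned above). The catch series and parameter values are invented, not the hairtail fit.

```python
import numpy as np

def schaefer_predict(r, K, q, B0, catches):
    """Project biomass under the Schaefer model and return predicted CPUE each year."""
    B = B0
    cpue = []
    for C in catches:
        cpue.append(q * B)
        B = max(B + r * B * (1.0 - B / K) - C, 1e-6)   # keep biomass positive
    return np.array(cpue)

catches = np.array([80, 90, 100, 110, 100, 95], dtype=float)   # illustrative annual catch
pred = schaefer_predict(r=0.4, K=1000.0, q=0.001, B0=900.0, catches=catches)
print("predicted CPUE:", np.round(pred, 3))
print("MSY = rK/4 =", 0.4 * 1000.0 / 4)
```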
Terrestrial gravity data analysis for interim gravity model improvement
NASA Technical Reports Server (NTRS)
1987-01-01
This is the first status report for the Interim Gravity Model research effort that was started on June 30, 1986. The basic theme of this study is to develop appropriate models and adjustment procedures for estimating potential coefficients from terrestrial gravity data. The plan is to use the latest gravity data sets to produce coefficient estimates as well as to provide normal equations to NASA for use in the TOPEX/POSEIDON gravity field modeling program.
van Meijgaard, Jeroen; Fielding, Jonathan E
2012-01-01
Despite years of declining smoking prevalence, tobacco use is still the leading preventable contributor to illness and death in the United States, and the effect of past tobacco-use control efforts has not fully translated into improvements in health outcomes. The objective of this study was to use a life course model with multiple competing causes of death to elucidate the ongoing benefits of tobacco-use control efforts on US death rates. We used a continuous-time life course simulation model for the US population. We modeled smoking initiation and cessation and 20 leading causes of death as competing risks over the life span, with the risk of death for each cause dependent on past and current smoking status. Risk parameters were estimated using data from the National Health Interview Survey that were linked to follow-up mortality data. Up to 14% (9% for men, 14% for women) of the total gain in life expectancy since 1960 was due to tobacco-use control efforts. Past efforts are expected to further increase life expectancy by 0.9 years for women and 1.3 years for men. Additional reduction in smoking prevalence may eventually yield an average 3.4-year increase in life expectancy in the United States. Coronary heart disease is expected to increase as a share of total deaths. A dynamic individual-level model with multiple causes of death supports assessment of the delayed benefits of improved tobacco-use control efforts. We show that past smoking reduction efforts will translate into further increases in life expectancy in the coming years. Smoking will remain a major contributor to preventable illness and death, worthy of continued interventions.
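The structure of such a life-course model can be sketched as a discrete-time competing-risks simulation in which annual cause-specific death probabilities depend on smoking status. The cause list, baseline hazards, age effect, and relative risks below are made-up illustrations, not the parameters estimated from the National Health Interview Survey data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative annual death probabilities at age 18, by cause, and relative risks
# for current smokers (all values invented for the sketch).
BASE_HAZARD = {"coronary heart disease": 0.0002, "lung cancer": 0.00005, "other causes": 0.00025}
SMOKER_RR = {"coronary heart disease": 2.0, "lung cancer": 15.0, "other causes": 1.3}

def simulate_life(smoker, max_age=100):
    """Return (age at death, cause) for one simulated individual."""
    for age in range(18, max_age):
        aging = 1.09 ** (age - 18)                       # crude exponential age effect
        for cause, h in BASE_HAZARD.items():
            p = h * aging * (SMOKER_RR[cause] if smoker else 1.0)
            if rng.random() < p:
                return age, cause
    return max_age, "survived to max age"

ages_smokers = [simulate_life(True)[0] for _ in range(5000)]
ages_never = [simulate_life(False)[0] for _ in range(5000)]
print("mean age at death: smokers", round(np.mean(ages_smokers), 1),
      " never-smokers", round(np.mean(ages_never), 1))
```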
A hidden-process model for estimating prespawn mortality using carcass survey data
DeWeber, J. Tyrell; Peterson, James T.; Sharpe, Cameron; Kent, Michael L.; Colvin, Michael E.; Schreck, Carl B.
2017-01-01
After returning to spawning areas, adult Pacific salmon Oncorhynchus spp. often die without spawning successfully, which is commonly referred to as prespawn mortality. Prespawn mortality reduces reproductive success and can thereby hamper conservation, restoration, and reintroduction efforts. The primary source of information used to estimate prespawn mortality is collected through carcass surveys, but estimation can be difficult with these data due to imperfect detection and carcasses with unknown spawning status. To facilitate unbiased estimation of prespawn mortality and associated uncertainty, we developed a hidden-process mark–recovery model to estimate prespawn mortality rates from carcass survey data while accounting for imperfect detection and unknown spawning success. We then used the model to estimate prespawn mortality and identify potential associated factors for 3,352 adult spring Chinook Salmon O. tshawytscha that were transported above Foster Dam on the South Santiam River (Willamette River basin, Oregon) from 2009 to 2013. Estimated prespawn mortality was relatively low (≤13%) in most years (interannual mean = 28%) but was especially high (74%) in 2013. Variation in prespawn mortality estimates among outplanted groups of fish within each year was also very high, and some of this variation was explained by a trend toward lower prespawn mortality among fish that were outplanted later in the year. Numerous efforts are being made to monitor and, when possible, minimize prespawn mortality in salmon populations; this model can be used to provide unbiased estimates of spawning success that account for unknown fate and imperfect detection, which are common to carcass survey data.
Examining hydrogen transitions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Plotkin, S. E.; Energy Systems
2007-03-01
This report describes the results of an effort to identify key analytic issues associated with modeling a transition to hydrogen as a fuel for light duty vehicles, and using insights gained from this effort to suggest ways to improve ongoing modeling efforts. The study reported on here examined multiple hydrogen scenarios reported in the literature, identified modeling issues associated with those scenario analyses, and examined three DOE-sponsored hydrogen transition models in the context of those modeling issues. The three hydrogen transition models are HyTrans (contractor: Oak Ridge National Laboratory), MARKAL/DOE* (Brookhaven National Laboratory), and NEMS-H2 (OnLocation, Inc). The goals of these models are (1) to help DOE improve its R&D effort by identifying key technology and other roadblocks to a transition and testing its technical program goals to determine whether they are likely to lead to the market success of hydrogen technologies, (2) to evaluate alternative policies to promote a transition, and (3) to estimate the costs and benefits of alternative pathways to hydrogen development.
"Hot Spots" of Land Atmosphere Coupling
NASA Technical Reports Server (NTRS)
Koster, Randal D.; Dirmeyer, Paul A.; Guo, Zhi-Chang; Bonan, Gordan; Chan, Edmond; Cox, Peter; Gordon, T. C.; Kanae, Shinjiro; Kowalczyk, Eva; Lawrence, David
2004-01-01
Previous estimates of land-atmosphere interaction (the impact of soil moisture on precipitation) have been limited by a severe paucity of relevant observational data and by the model-dependence of the various computational estimates. To counter this limitation, a dozen climate modeling groups have recently performed the same highly-controlled numerical experiment as part of a coordinated intercomparison project. This allows, for the first time ever, a superior multi-model approach to the estimation of the regions on the globe where precipitation is affected by soil moisture anomalies during Northern Hemisphere summer. Such estimation has many potential benefits; it can contribute, for example, to seasonal rainfall prediction efforts.
Population health outcome models in suicide prevention policy.
Lynch, Frances L
2014-09-01
Suicide is a leading cause of death in the U.S. and results in immense suffering and significant cost. Effective suicide prevention interventions could reduce this burden, but policy makers need estimates of the health outcomes achieved by alternative interventions to focus implementation efforts. This study illustrates the utility of health outcome models for achieving goals defined by the National Action Alliance for Suicide Prevention's Research Prioritization Task Force; the approach is illustrated specifically with psychotherapeutic interventions to prevent suicide reattempt in emergency department settings. A health outcome model using decision analysis with secondary data was applied to estimate suicide attempts and deaths averted from evidence-based interventions. Under optimal conditions, the model estimated that over 1 year, implementing evidence-based psychotherapeutic interventions in emergency departments could decrease the number of suicide attempts by 18,737, and if offered over 5 years, it could avert 109,306 attempts. Over 1 year, the model estimated 2,498 fewer deaths from suicide, and over 5 years, about 13,928 fewer suicide deaths. Health outcome models could aid in suicide prevention policy by helping focus implementation efforts. Further research developing more sophisticated models of the impact of suicide prevention interventions that include a more complex understanding of suicidal behavior, longer time frames, and inclusion of additional outcomes that capture the full benefits and costs of interventions would be helpful next steps. Copyright © 2014 American Journal of Preventive Medicine. All rights reserved.
Ghumman, Abul Razzaq; Al-Salamah, Ibrahim Saleh; AlSaleem, Saleem Saleh; Haider, Husnain
2017-02-01
Geomorphological instantaneous unit hydrograph (GIUH) modeling usually uses geomorphologic parameters of a catchment estimated from a digital elevation model (DEM) for rainfall-runoff modeling of ungauged watersheds with limited data. Higher resolutions (e.g., 5 or 10 m) of DEM play an important role in the accuracy of rainfall-runoff models; however, such resolutions are expensive to obtain and require much greater effort and time for preparation of inputs. In this research, a modeling framework is developed to evaluate the impact of lower resolutions (i.e., 30 and 90 m) of DEM on the accuracy of the Clark GIUH model. Observed rainfall-runoff data of a 202-km2 catchment in a semiarid region were used to develop direct runoff hydrographs for nine rainfall events. A geographical information system was used to process both the DEMs. Model accuracy and errors were estimated by comparing the model results with the observed data. The study found (i) high model efficiencies, greater than 90%, for both resolutions, and (ii) that the efficiency of the Clark GIUH model does not significantly increase by enhancing the resolution of the DEM from 90 to 30 m. Thus, it is feasible to use lower resolutions (i.e., 90 m) of DEM in the estimation of peak runoff in ungauged catchments with relatively less effort. Through sensitivity analysis (Monte Carlo simulations), the kinematic wave parameter and stream length ratio are found to be the most significant parameters in velocity and peak flow estimations, respectively; thus, they need to be carefully estimated for calculation of direct runoff in ungauged watersheds using the Clark GIUH model.
Effects of sample size on estimates of population growth rates calculated with matrix models.
Fiske, Ian J; Bruna, Emilio M; Bolker, Benjamin M
2008-08-28
Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda: Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated whether sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more realistic inverse-J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
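The mechanism described above is easy to reproduce in a small simulation: vital rates estimated from n sampled individuals are plugged into a projection matrix, and the mean of the resulting dominant eigenvalues is compared with lambda computed from the true rates. The two-stage matrix, parameter values, and sample sizes below are illustrative, not the study's plant data.

```python
import numpy as np

def lam(matrix):
    """Dominant eigenvalue (asymptotic growth rate) of a projection matrix."""
    return np.max(np.abs(np.linalg.eigvals(matrix)))

def build(s_juv, s_adult, fecundity=1.2):
    """Two-stage (juvenile, adult) projection matrix."""
    return np.array([[0.0,   fecundity],
                     [s_juv, s_adult]])

rng = np.random.default_rng(2)
s_juv_true, s_adult_true = 0.3, 0.5          # low survival, where the bias is worst
true_lambda = lam(build(s_juv_true, s_adult_true))

for n in (10, 50, 500):                       # individuals sampled per stage
    lams = []
    for _ in range(2000):
        s_juv = rng.binomial(n, s_juv_true) / n       # sampled survival rates
        s_adult = rng.binomial(n, s_adult_true) / n
        lams.append(lam(build(s_juv, s_adult)))
    print(f"n = {n:4d}: mean lambda = {np.mean(lams):.4f} (true {true_lambda:.4f})")
```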
Validation of PV-RPM Code in the System Advisor Model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klise, Geoffrey Taylor; Lavrova, Olga; Freeman, Janine
2017-04-01
This paper describes efforts made by Sandia National Laboratories (SNL) and the National Renewable Energy Laboratory (NREL) to validate the SNL-developed PV Reliability Performance Model (PV-RPM) algorithm as implemented in the NREL System Advisor Model (SAM). The PV-RPM model is a library of functions that estimates component failure and repair in a photovoltaic system over a desired simulation period. The failure and repair distributions in this paper are probabilistic representations of component failure and repair based on data collected by SNL for a PV power plant operating in Arizona. The validation effort focuses on whether the failure and repair distributions used in the SAM implementation result in estimated failures that match the expected failures developed in the proof-of-concept implementation. Results indicate that the SAM implementation of PV-RPM provides the same results as the proof-of-concept implementation, indicating the algorithms were reproduced successfully.
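The core idea of this kind of reliability simulation, drawing component failure and repair times from probability distributions over the plant's life, can be sketched briefly. The exponential failure times, lognormal repair times, and rates below are assumptions for illustration, not the distributions SNL fitted to the Arizona plant data.

```python
import numpy as np

def simulate_component(failure_rate_per_yr, repair_median_days, years, rng):
    """Alternate failure and repair draws until the simulation horizon is reached.
    Returns the number of failures and the total downtime in days."""
    t, failures, downtime = 0.0, 0, 0.0
    horizon = years * 365.0
    while True:
        t += rng.exponential(365.0 / failure_rate_per_yr)   # days until next failure
        if t >= horizon:
            break
        failures += 1
        repair = rng.lognormal(mean=np.log(repair_median_days), sigma=0.5)
        downtime += repair
        t += repair
    return failures, downtime

rng = np.random.default_rng(3)
# e.g., 1000 inverters over a 25-year plant life, 0.1 failures per component-year.
results = [simulate_component(0.1, repair_median_days=7, years=25, rng=rng)
           for _ in range(1000)]
fails = np.array([r[0] for r in results])
print("mean failures per component:", fails.mean())
```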
Obesity and severe obesity forecasts through 2030.
Finkelstein, Eric A; Khavjou, Olga A; Thompson, Hope; Trogdon, Justin G; Pan, Liping; Sherry, Bettylou; Dietz, William
2012-06-01
Previous efforts to forecast future trends in obesity applied linear forecasts assuming that the rise in obesity would continue unabated. However, evidence suggests that obesity prevalence may be leveling off. This study presents estimates of adult obesity and severe obesity prevalence through 2030 based on nonlinear regression models. The forecasted results are then used to simulate the savings that could be achieved through modestly successful obesity prevention efforts. The study was conducted in 2009-2010 and used data from the 1990 through 2008 Behavioral Risk Factor Surveillance System (BRFSS). The analysis sample included nonpregnant adults aged ≥ 18 years. The individual-level BRFSS variables were supplemented with state-level variables from the U.S. Bureau of Labor Statistics, the American Chamber of Commerce Research Association, and the Census of Retail Trade. Future obesity and severe obesity prevalence were estimated through regression modeling by projecting trends in explanatory variables expected to influence obesity prevalence. Linear time trend forecasts suggest that by 2030, 51% of the population will be obese. The model estimates a much lower obesity prevalence of 42% and severe obesity prevalence of 11%. If obesity were to remain at 2010 levels, the combined savings in medical expenditures over the next 2 decades would be $549.5 billion. The study estimates a 33% increase in obesity prevalence and a 130% increase in severe obesity prevalence over the next 2 decades. If these forecasts prove accurate, this will further hinder efforts for healthcare cost containment. Copyright © 2012 Elsevier Inc. All rights reserved.
In support of the trichloroethylene (TCE) risk assessment for the Office of Air and Radiation, Office of Solid Waste and Emergency Response, and Office of Water, NERL and NCEA are developing an updated physiologically-based pharmacokinetic (PBPK) model. The PBPK modeling effort ...
Accounting for Incomplete Species Detection in Fish Community Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
McManamay, Ryan A; Orth, Dr. Donald J; Jager, Yetta
2013-01-01
Riverine fish assemblages are heterogeneous and very difficult to characterize with a one-size-fits-all approach to sampling. Furthermore, detecting changes in fish assemblages over time requires accounting for variation in sampling designs. We present a modeling approach that permits heterogeneous sampling by accounting for site and sampling covariates (including method) in a model-based framework for estimation (versus a sampling-based framework). We snorkeled during three surveys and electrofished during a single survey in a suite of delineated habitats stratified by reach types. We developed single-species occupancy models to determine covariates influencing patch occupancy and species detection probabilities, whereas community occupancy models estimated species richness in light of incomplete detections. For most species, information-theoretic criteria showed higher support for models that included patch size and reach as covariates of occupancy. In addition, models including patch size and sampling method as covariates of detection probabilities also had higher support. Detection probability estimates for snorkeling surveys were higher for larger non-benthic species, whereas electrofishing was more effective at detecting smaller benthic species. The number of sites and sampling occasions required to accurately estimate occupancy varied among fish species. For rare benthic species, our results suggested that a higher number of occasions, and especially the addition of electrofishing, may be required to improve detection probabilities and obtain accurate occupancy estimates. Community models suggested that richness was 41% higher than the number of species actually observed, and the addition of an electrofishing survey increased estimated richness by 13%. These results can be useful to future fish assemblage monitoring efforts by informing sampling designs, such as site selection (e.g. stratifying based on patch size) and determining the effort required (e.g. number of sites versus occasions).
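A minimal sketch of the single-season occupancy likelihood behind these models may be useful: a site is occupied with probability psi and, if occupied, the species is detected on each visit with probability p, so an all-zero history can arise either from an occupied site that was always missed or from an unoccupied site. The detection histories and grid search below are illustrative, without the site and method covariates used in the study.

```python
import numpy as np

def site_likelihood(history, psi, p):
    """Likelihood of one site's detection history (1 = detected) under a
    single-season occupancy model with constant detection probability p."""
    k = len(history)
    d = sum(history)
    if d > 0:       # at least one detection: the site must be occupied
        return psi * p**d * (1 - p)**(k - d)
    # never detected: either occupied but missed every visit, or unoccupied
    return psi * (1 - p)**k + (1 - psi)

histories = [[1, 0, 1], [0, 0, 0], [0, 1, 0], [0, 0, 0]]   # 4 sites, 3 visits each

def total_loglik(psi, p):
    return sum(np.log(site_likelihood(h, psi, p)) for h in histories)

grid = [(psi, p) for psi in np.arange(0.05, 1.0, 0.05) for p in np.arange(0.05, 1.0, 0.05)]
print("approximate MLE (psi, p):", max(grid, key=lambda t: total_loglik(*t)))
```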
A probabilistic assessment of health risks associated with short-term exposure to tropospheric ozone
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitfield, R.G; Biller, W.F.; Jusko, M.J.
1996-06-01
The work described in this report is part of a larger risk assessment sponsored by the U.S. Environmental Protection Agency. Earlier efforts developed exposure-response relationships for acute health effects among populations engaged in heavy exertion. Those efforts also developed a probabilistic national ambient air quality standards exposure model and a general methodology for integrating probabilistic exposure-response relationships and exposure estimates to calculate overall risk results. Recently published data make it possible to model additional health endpoints (for exposure at moderate exertion), including hospital admissions. New air quality and exposure estimates for alternative national ambient air quality standards for ozone are combined with exposure-response models to produce the risk results for hospital admissions and acute health effects. Sample results explain the methodology and introduce risk output formats.
Estimating current modal splits.
DOT National Transportation Integrated Search
2005-11-01
This project is the second part in a two-part modeling effort. In previous work*, mode choice was modeled by examining characteristics of individuals and the trips they make. A study of the choices of individuals is necessary for a fundamental un...
Diurnal and Reproductive Stage-Dependent Variation of Parental Behaviour in Captive Zebra Finches
Morvai, Boglárka; Nanuru, Sabine; Mul, Douwe; Kusche, Nina; Milne, Gregory; Székely, Tamás; Komdeur, Jan; Miklósi, Ádám
2016-01-01
Parental care plays a key role in ontogeny, life-history trade-offs, sexual selection and intra-familial conflict. Studies focusing on understanding causes and consequences of variation in parental effort need to quantify parental behaviour accurately. The applied methods are, however, diverse even for a given species and type of parental effort, and rarely validated for accuracy. Here we focus on variability of parental behaviour from a methodological perspective to investigate the effect of different samplings on various estimates of parental effort. We used nest box cameras in a captive breeding population of zebra finches, Taeniopygia guttata, a widely used model system of sexual selection, intra-familial dynamics and parental care. We investigated diurnal and reproductive stage-dependent variation in parental effort (including incubation, brooding, nest attendance and number of feedings) based on 12h and 3h continuous video-recordings taken at various reproductive stages. We then investigated whether shorter (1h) sampling periods provided comparable estimates of overall parental effort and division of labour to those of longer (3h) sampling periods. Our study confirmed female-biased division of labour during incubation, and showed that the difference between female and male effort diminishes with advancing reproductive stage. We found individually consistent parental behaviours within given days of incubation and nestling provisioning. Furthermore, parental behaviour was consistent over the different stages of incubation; however, only female brooding was consistent over nestling provisioning. Parental effort during incubation did not predict parental effort during nestling provisioning. Our analyses revealed that 1h sampling may be influenced heavily by stochastic and diurnal variation. We suggest that a single longer sampling period (3h) may provide a consistent and accurate estimate of overall parental effort during incubation in zebra finches. Due to the large within-individual variation, we suggest that repeated longer sampling over the reproductive stage may be necessary for accurate estimates of parental effort post-hatching. PMID:27973549
An approach to software cost estimation
NASA Technical Reports Server (NTRS)
Mcgarry, F.; Page, J.; Card, D.; Rohleder, M.; Church, V.
1984-01-01
A general procedure for software cost estimation in any environment is outlined. The basic concepts of work and effort estimation are explained, some popular resource estimation models are reviewed, and the accuracy of source estimates is discussed. A software cost prediction procedure based on the experiences of the Software Engineering Laboratory in the flight dynamics area and incorporating management expertise, cost models, and historical data is described. The sources of information and relevant parameters available during each phase of the software life cycle are identified. The methodology suggested incorporates these elements into a customized management tool for software cost prediction. Detailed guidelines for estimation in the flight dynamics environment developed using this methodology are presented.
A Lagrangian stochastic model is proposed as a tool that can be utilized in forecasting remedial performance and estimating the benefits (in terms of flux and mass reduction) derived from a source zone remedial effort. The stochastic functional relationships that describe the hyd...
Karr, Jonathan R.; Williams, Alex H.; Zucker, Jeremy D.; Raue, Andreas; Steiert, Bernhard; Timmer, Jens; Kreutz, Clemens; Wilkinson, Simon; Allgood, Brandon A.; Bot, Brian M.; Hoff, Bruce R.; Kellen, Michael R.; Covert, Markus W.; Stolovitzky, Gustavo A.; Meyer, Pablo
2015-01-01
Whole-cell models that explicitly represent all cellular components at the molecular level have the potential to predict phenotype from genotype. However, even for simple bacteria, whole-cell models will contain thousands of parameters, many of which are poorly characterized or unknown. New algorithms are needed to estimate these parameters and enable researchers to build increasingly comprehensive models. We organized the Dialogue for Reverse Engineering Assessments and Methods (DREAM) 8 Whole-Cell Parameter Estimation Challenge to develop new parameter estimation algorithms for whole-cell models. We asked participants to identify a subset of parameters of a whole-cell model given the model’s structure and in silico “experimental” data. Here we describe the challenge, the best performing methods, and new insights into the identifiability of whole-cell models. We also describe several valuable lessons we learned toward improving future challenges. Going forward, we believe that collaborative efforts supported by inexpensive cloud computing have the potential to solve whole-cell model parameter estimation. PMID:26020786
The Case of Effort Variables in Student Performance.
ERIC Educational Resources Information Center
Borg, Mary O.; And Others
1989-01-01
Tests the existence of a structural shift between above- and below-average students in the econometric models that explain students' grades in principles of economics classes. Identifies a structural shift and estimates separate models for above- and below-average students. Concludes that separate models as well as educational policies are…
A Cost Model for Testing Unmanned and Autonomous Systems of Systems
2011-02-01
those risks. In addition, the fundamental methods presented by Aranha and Borba to include the complexity and sizing of tests for UASoS, can be expanded...used as an input for test execution effort estimation models (Aranha & Borba, 2007). Such methodology is very relevant to this work because as a UASoS...calculate the test effort based on the complexity of the SoS. However, Aranha and Borba define test size as the number of steps required to complete
Modeling avian abundance from replicated counts using binomial mixture models
Kery, Marc; Royle, J. Andrew; Schmid, Hans
2005-01-01
Abundance estimation in ecology is usually accomplished by capture–recapture, removal, or distance sampling methods. These may be hard to implement at large spatial scales. In contrast, binomial mixture models enable abundance estimation without individual identification, based simply on temporally and spatially replicated counts. Here, we evaluate mixture models using data from the national breeding bird monitoring program in Switzerland, where some 250 1-km2 quadrats are surveyed using the territory mapping method three times during each breeding season. We chose eight species with contrasting distribution (wide–narrow), abundance (high–low), and detectability (easy–difficult). Abundance was modeled as a random effect with a Poisson or negative binomial distribution, with mean affected by forest cover, elevation, and route length. Detectability was a logit-linear function of survey date, survey date-by-elevation, and sampling effort (time per transect unit). Resulting covariate effects and parameter estimates were consistent with expectations. Detectability per territory (for three surveys) ranged from 0.66 to 0.94 (mean 0.84) for easy species, and from 0.16 to 0.83 (mean 0.53) for difficult species, depended on survey effort for two easy and all four difficult species, and changed seasonally for three easy and three difficult species. Abundance was positively related to route length in three high-abundance and one low-abundance (one easy and three difficult) species, and increased with forest cover in five forest species, decreased for two nonforest species, and was unaffected for a generalist species. Abundance estimates under the most parsimonious mixture models were between 1.1 and 8.9 (median 1.8) times greater than estimates based on territory mapping; hence, three surveys were insufficient to detect all territories for each species. We conclude that binomial mixture models are an important new approach for estimating abundance corrected for detectability when only repeated-count data are available. Future developments envisioned include estimation of trend, occupancy, and total regional abundance.
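A stripped-down version of the binomial mixture (N-mixture) model evaluated here can be written in a few lines. The sketch below simulates replicated counts and recovers mean abundance and detection probability by marginalizing over the latent site abundances; covariates on abundance and detectability, and the negative binomial option, are omitted, and the data are synthetic rather than the Swiss monitoring data.

```python
# Minimal N-mixture sketch: y[i, j] ~ Binomial(N_i, p), N_i ~ Poisson(lambda).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom, poisson
from scipy.special import expit

rng = np.random.default_rng(3)
n_sites, n_visits, lam_true, p_true = 120, 3, 4.0, 0.5
N = rng.poisson(lam_true, n_sites)
y = rng.binomial(N[:, None], p_true, (n_sites, n_visits))

def nll(theta, n_max=50):
    lam, p = np.exp(theta[0]), expit(theta[1])
    n_grid = np.arange(n_max + 1)
    site_lik = np.zeros(n_sites)
    for i in range(n_sites):
        # Marginal likelihood: sum over the latent abundance N at each site
        cond = binom.pmf(y[i][:, None], n_grid[None, :], p).prod(axis=0)
        site_lik[i] = (poisson.pmf(n_grid, lam) * cond).sum()
    return -np.log(site_lik).sum()

fit = minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
print("lambda_hat=%.2f  p_hat=%.2f" % (np.exp(fit.x[0]), expit(fit.x[1])))
```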
AERIS : State-of-the-Practice Scan of Environmental Models
DOT National Transportation Integrated Search
2011-06-24
This report has been developed under the Track 1 effort of Phase 1 of the AERIS program and presents the findings of the state-of-the-practice scan of environmental models to estimate environmental impacts (emissions, fuel consumption, etc.) due to c...
NASA Astrophysics Data System (ADS)
Thiaw, Modou; Gascuel, Didier; Jouffre, Didier; Thiaw, Omar Thiom
2009-12-01
In Senegal, two stocks of white shrimp (Penaeus notialis) are intensively exploited, one in the north and another in the south. We used surplus production models including environmental effects to analyse their changes in abundance over the past 10 years and to estimate their Maximum Sustainable Yield (MSY) and the related fishing effort (EMSY). First, yearly abundance indices were estimated from commercial statistics using GLM techniques. Then, two environmental indices were alternatively tested in the model: the coastal upwelling intensity from wind speeds provided by the SeaWiFS database and the primary production derived from satellite infrared images of chlorophyll a. Models were fitted, with or without the environmental effect, to the 1996-2005 time series. They express stock abundance and catches as functions of the fishing effort and the environmental index (when considered). For the northern stock, fishing effort and abundance fluctuate over the period without any clear trends. The model based on the upwelling index explains 64.9% of the year-to-year variability. It shows that the stock was slightly overexploited in 2002-2003 and is now close to full exploitation. Stock abundance strongly depends on environmental conditions; consequently, the MSY estimate varies from 300 to 900 tons according to the upwelling intensity. For the southern stock, fishing effort has strongly increased over the past 10 years, while abundance has been reduced 4-fold. The environment has a significant effect on abundance but only explains a small part of the year-to-year variability. The best fit is obtained using the primary production index (R² = 0.75), and the stock is now significantly overfished regardless of environmental conditions. MSY varies from 1200 to 1800 tons according to environmental conditions. Finally, in northern Senegal, the upwelling is highly variable from year to year and constitutes the major factor determining productivity. In the south, hydrodynamic processes seem to dominate and determine the primary production and the white shrimp stock productivity as well.
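For readers unfamiliar with surplus production reference points, a minimal Schaefer-type calculation is sketched below. The parameter values and the way the environmental index scales productivity are placeholders, not the fitted Senegalese models; the sketch only shows why the MSY estimate shifts with the environmental index while the effort reference point need not.

```python
# Illustrative Schaefer surplus-production reference points under an assumed
# environmental multiplier on carrying capacity; all numbers are invented.
def schaefer_msy(r: float, K: float, q: float, env_index: float = 1.0):
    """Return (MSY, E_MSY) for dB/dt = rB(1 - B/(K*env)) - qEB."""
    K_eff = K * env_index          # environment modulates capacity/productivity
    msy = r * K_eff / 4.0
    e_msy = r / (2.0 * q)
    return msy, e_msy

for env in (0.7, 1.0, 1.3):        # weak, average, strong upwelling years
    msy, e_msy = schaefer_msy(r=0.9, K=2800.0, q=1e-4, env_index=env)
    print(f"env={env:.1f}: MSY={msy:,.0f} t, E_MSY={e_msy:,.0f} effort units")
```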
Progress and limitations on quantifying nutrient and carbon loading to coastal waters
NASA Astrophysics Data System (ADS)
Stets, E.; Oelsner, G. P.; Stackpoole, S. M.
2017-12-01
Riverine export of nutrients and carbon to estuarine and coastal waters are important determinants of coastal ecosystem health and provide necessary insight into global biogeochemical cycles. Quantification of coastal solute loads typically relies upon modeling based on observations of concentration and discharge from selected rivers draining to the coast. Most large-scale river export models require unidirectional flow and thus are referenced to monitoring locations at the head of tide, which can be located far inland. As a result, the contributions of the coastal plain, tidal wetlands, and concentrated coastal development are often poorly represented in regional and continental-scale estimates of solute delivery to coastal waters. However, site-specific studies have found that these areas are disproportionately active in terms of nutrient and carbon export. Modeling efforts to upscale fluxes from these areas, while not common, also suggest an outsized importance to coastal flux estimates. This presentation will focus on illustrating how the problem of under-representation of near-shore environments impacts large-scale coastal flux estimates in the context of recent regional and continental-scale assessments. Alternate approaches to capturing the influence of the near-coastal terrestrial inputs including recent data aggregation efforts and modeling approaches will be discussed.
Link, W.A.; Sauer, J.R.; Helbig, Andreas J.; Flade, Martin
1999-01-01
Count survey data are commonly used for estimating temporal and spatial patterns of population change. Since count surveys are not censuses, counts can be influenced by 'nuisance factors' related to the probability of detecting animals but unrelated to the actual population size. The effects of systematic changes in these factors can be confounded with patterns of population change. Thus, valid analysis of count survey data requires the identification of nuisance factors and flexible models for their effects. We illustrate using data from the Christmas Bird Count (CBC), a midwinter survey of bird populations in North America. CBC survey effort has substantially increased in recent years, suggesting that unadjusted counts may overstate population growth (or understate declines). We describe a flexible family of models for the effect of effort, that includes models in which increasing effort leads to diminishing returns in terms of the number of birds counted.
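One simple member of such a family of effort-adjustment models treats the expected count as proportional to effort raised to a power b, so that b = 1 corresponds to proportional counts and b < 1 to diminishing returns. The sketch below fits that single-parameter adjustment to simulated counts by maximum likelihood; it is an illustration under invented data, not the authors' hierarchical CBC model.

```python
# Toy Poisson model with a power-law effort effect: E[count] = base * effort**b.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
effort = rng.uniform(1, 20, 300)                 # e.g. party-hours per count circle
true_b, base = 0.6, 5.0
counts = rng.poisson(base * effort**true_b)

def nll(theta):
    log_base, b = theta
    mu = np.exp(log_base + b * np.log(effort))   # log-linear in log(effort)
    return np.sum(mu - counts * np.log(mu))      # Poisson NLL up to a constant

fit = minimize(nll, x0=[0.0, 1.0], method="Nelder-Mead")
print("diminishing-returns exponent b_hat = %.2f" % fit.x[1])
```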
A generic open-source software framework supporting scenario simulations in bioterrorist crises.
Falenski, Alexander; Filter, Matthias; Thöns, Christian; Weiser, Armin A; Wigger, Jan-Frederik; Davis, Matthew; Douglas, Judith V; Edlund, Stefan; Hu, Kun; Kaufman, James H; Appel, Bernd; Käsbohrer, Annemarie
2013-09-01
Since the 2001 anthrax attack in the United States, awareness of threats originating from bioterrorism has grown. This led internationally to increased research efforts to improve knowledge of and approaches to protecting human and animal populations against the threat from such attacks. A collaborative effort in this context is the extension of the open-source Spatiotemporal Epidemiological Modeler (STEM) simulation and modeling software for agro- or bioterrorist crisis scenarios. STEM, originally designed to enable community-driven public health disease models and simulations, was extended with new features that enable integration of proprietary data as well as visualization of agent spread along supply and production chains. STEM now provides a fully developed open-source software infrastructure supporting critical modeling tasks such as ad hoc model generation, parameter estimation, simulation of scenario evolution, estimation of effects of mitigation or management measures, and documentation. This open-source software resource can be used free of charge. Additionally, STEM provides critical features like built-in worldwide data on administrative boundaries, transportation networks, or environmental conditions (eg, rainfall, temperature, elevation, vegetation). Users can easily combine their own confidential data with built-in public data to create customized models of desired resolution. STEM also supports collaborative and joint efforts in crisis situations by extended import and export functionalities. In this article we demonstrate specifically those new software features implemented to accomplish STEM application in agro- or bioterrorist crisis scenarios.
APOKASC 2.0: Asteroseismology and Spectroscopy for Cool Stars
NASA Astrophysics Data System (ADS)
Pinsonneault, Marc H.; Elsworth, Yvonne P.; APOKASC
2017-01-01
The APOGEE survey has obtained and analyzed high resolution H band spectra of more than 10,000 cool dwarfs and giants in the original Kepler fields. The APOKASC effort combines these data with asteroseismology and star spot studies, resulting in more than 7,000 stellar mass estimates for dwarfs and giants with high quality abundances, temperatures, and surface gravities. We highlight the main results from this effort so far, which include a tight correlation between surface abundances in giants and stellar mass, precise absolute gravity calibrations, and the discovery of unexpected stellar populations, such as young alpha-enhanced stars. We discuss grid modeling estimates for stellar masses and compare the absolute asteroseismic mass scale to calibrators in star clusters and the halo. Directions for future efforts are discussed.
Evaluating AIDS Prevention: Contributions of Multiple Disciplines.
ERIC Educational Resources Information Center
Leviton, Laura C., Ed.; And Others
1990-01-01
Seven essays on efforts to evaluate prevention programs aimed at the acquired immune deficiency syndrome (AIDS) are presented. Topics include public health psychology, mathematical models of epidemiology, estimates of incubation periods, ethnographic evaluations of AIDS prevention programs, an AIDS education model, theory-based evaluation, and…
Hill, Mary C.
1985-01-01
The purpose of this study was to develop a methodology to be used to investigate the aquifer characteristics and water supply potential of an aquifer system. In particular, the geohydrology of northern Long Valley, New Jersey, was investigated. Geohydrologic data were collected and analyzed to characterize the site. Analysis was accomplished by interpreting the available data and by using a numerical simulation of the watertable aquifer. Special attention was given to the estimation of hydraulic conductivity values and hydraulic conductivity structure which together define the hydraulic conductivity of the modeled aquifer. Hydraulic conductivity and all other aspects of the system were first estimated using the trial-and-error method of calibration. The estimation of hydraulic conductivity was improved using a least squares method to estimate hydraulic conductivity values and by improvements in the parameter structure. These efforts improved the calibration of the model far more than a preceding period of similar effort using the trial-and-error method of calibration. In addition, the proposed method provides statistical information on the reliability of estimated hydraulic conductivity values, calculated heads, and calculated flows. The methodology developed and applied in this work proved to be of substantial value in the evaluation of the aquifer considered.
USDA-ARS?s Scientific Manuscript database
The use of distributed parameter models to address water resource management problems has increased in recent years. Calibration is necessary to reduce the uncertainties associated with model input parameters. Manual calibration of a distributed parameter model is a very time consuming effort. There...
Modelling Sublimation of Carbon Dioxide
ERIC Educational Resources Information Center
Winkel, Brian
2012-01-01
In this article, the author reports results in their efforts to model sublimation of carbon dioxide and the associated kinetics order and parameter estimation issues in their model. They have offered the reader two sets of data and several approaches to determine the rate of sublimation of a piece of solid dry ice. They presented several models…
The impact of climate change on surface-level ozone is examined through a multiscale modeling effort that linked global and regional climate models to drive air quality model simulations. Results are quantified in terms of the relative response factor (RRFE), which estimates the ...
Hernández, Maciel M.; Valiente, Carlos; Eisenberg, Nancy; Berger, Rebecca H.; Spinrad, Tracy L.; VanSchyndel, Sarah K.; Silva, Kassondra M.; Southworth, Jody; Thompson, Marilyn S.
2017-01-01
This study evaluated the association between effortful control in kindergarten and academic achievement one year later (N = 301), and whether teacher–student closeness and conflict in kindergarten mediated the association. Parents, teachers, and observers reported on children's effortful control, and teachers reported on their perceived levels of closeness and conflict with students. Students completed the passage comprehension and applied problems subtests of the Woodcock–Johnson tests of achievement, as well as a behavioral measure of effortful control. Analytical models predicting academic achievement were estimated using a structural equation model framework. Effortful control positively predicted academic achievement even when controlling for prior achievement and other covariates. Mediation hypotheses were tested in a separate model; effortful control positively predicted teacher–student closeness and strongly, negatively predicted teacher–student conflict. Teacher–student closeness and effortful control, but not teacher–student conflict, had small, positive associations with academic achievement. Effortful control also indirectly predicted higher academic achievement through its positive effect on teacher–student closeness and via its positive relation to early academic achievement. The findings suggest that teacher–student closeness is one mechanism by which effortful control is associated with academic achievement. Effortful control was also a consistent predictor of academic achievement, beyond prior achievement levels and controlling for teacher–student closeness and conflict, with implications for intervention programs on fostering regulation and achievement concurrently. PMID:28684888
Integrating Phosphorus Movement with Soil and Water Loss in the Daily Erosion Project
NASA Astrophysics Data System (ADS)
Sklenar, Tim; Perez-Bidegain, Mario; Cruse, Richard; Gelder, Brian; Herzmann, Daryl
2016-04-01
The Daily Erosion Project (DEP) is an ongoing modelling effort which is now in its second generation. DEP provides comprehensive and dynamic estimates of sediment delivery, soil erosion, and hill slope runoff for agricultural land areas across the Midwestern United States every day for Hydrologic Unit Code 12 (HUC 12) size watersheds. Results are posted every morning on the Internet at dailyerosion.org. Currently DEP covers all of Iowa and portions of Kansas and Minnesota, but expansion of coverage is ongoing. The integration of highly resolute spatial and temporal climate data, soil properties, crop rotation and residue management data affords the opportunity to test the effects of using multiple conservation practices on the transport and fate of water borne nutrients, especially phosphorus, on the Midwestern United States agricultural landscapes. Understanding the interaction of different environmental and land management practices on phosphorus movement will allow data from the DEP to guide conservation efforts as expansion continues into surrounding Midwestern states. The presentation will provide an overview of the DEP technology, including how input data are derived and used to make daily erosion estimates on over 200,000 flowpaths in the modelling area, as well as a discussion of the ongoing phosphorus transport modelling efforts and plans for future expansion (both land area and model functionality).
NASA Technical Reports Server (NTRS)
Murphy, Patrick Charles
1985-01-01
An algorithm for maximum likelihood (ML) estimation is developed with an efficient method for approximating the sensitivities. The algorithm was developed for airplane parameter estimation problems but is well suited for most nonlinear, multivariable, dynamic systems. The ML algorithm relies on a new optimization method referred to as a modified Newton-Raphson with estimated sensitivities (MNRES). MNRES determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. The fitted surface allows sensitivity information to be updated at each iteration with a significant reduction in computational effort. MNRES determines the sensitivities with less computational effort than using either a finite-difference method or integrating the analytically determined sensitivity equations. MNRES eliminates the need to derive sensitivity equations for each new model, thus eliminating algorithm reformulation with each new model and providing flexibility to use model equations in any format that is convenient. A random search technique for determining the confidence limits of ML parameter estimates is applied to nonlinear estimation problems for airplanes. The confidence intervals obtained by the search are compared with Cramer-Rao (CR) bounds at the same confidence level. It is observed that the degree of nonlinearity in the estimation problem is an important factor in the relationship between CR bounds and the error bounds determined by the search technique. The CR bounds were found to be close to the bounds determined by the search when the degree of nonlinearity was small. Beale's measure of nonlinearity is developed in this study for airplane identification problems; it is used to empirically correct confidence levels for the parameter confidence limits. The primary utility of the measure, however, was found to be in predicting the degree of agreement between Cramer-Rao bounds and search estimates.
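The core Newton-Raphson step that MNRES accelerates can be illustrated with a toy output-error problem. In the sketch below the sensitivities are computed by finite differences, which is exactly the cost the surface-fitting idea in the abstract is meant to avoid; the model, data, and step logic are simplified assumptions, not the flight-dynamics implementation.

```python
# Simplified Gauss-Newton/Newton-Raphson update for output-error estimation,
# with finite-difference sensitivities standing in for the MNRES surface fits.
import numpy as np

def model(theta, t):
    a, b = theta
    return a * (1.0 - np.exp(-b * t))            # toy step-response output

def sensitivities(theta, t, eps=1e-6):
    """Finite-difference d(output)/d(theta); MNRES would approximate this more cheaply."""
    base = model(theta, t)
    cols = []
    for k in range(len(theta)):
        pert = np.array(theta, dtype=float)
        pert[k] += eps
        cols.append((model(pert, t) - base) / eps)
    return np.column_stack(cols)

t = np.linspace(0, 5, 50)
y = model([2.0, 1.5], t) + np.random.default_rng(5).normal(0, 0.02, t.size)

theta = np.array([1.0, 1.0])
for _ in range(10):                              # Newton-Raphson iterations
    r = y - model(theta, t)
    S = sensitivities(theta, t)
    theta = theta + np.linalg.solve(S.T @ S, S.T @ r)
print("theta_hat =", np.round(theta, 3))
```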
Baker, Matthew R; Schindler, Daniel E; Essington, Timothy E; Hilborn, Ray
2014-01-01
Few studies have considered the management implications of mortality to target fish stocks caused by non-retention in commercial harvest gear (escape mortality). We demonstrate the magnitude of this previously unquantified source of mortality and its implications for the population dynamics of exploited stocks, biological metrics, stock productivity, and optimal management. Non-retention in commercial gillnet fisheries for Pacific salmon (Oncorhynchus spp.) is common and often leads to delayed mortality in spawning populations. This represents losses, not only to fishery harvest, but also in future recruitment to exploited stocks. We estimated incidence of non-retention in Alaskan gillnet fisheries for sockeye salmon (O. nerka) and found disentanglement injuries to be extensive and highly variable between years. Injuries related to non-retention were noted in all spawning populations, and incidence of injury ranged from 6% to 44% of escaped salmon across nine river systems over five years. We also demonstrate that non-retention rates strongly correlate with fishing effort. We applied maximum likelihood and Bayesian approaches to stock-recruitment analyses, discounting estimates of spawning salmon to account for fishery-related mortality in escaped fish. Discounting spawning stock estimates as a function of annual fishing effort improved model fits to historical stock-recruitment data in most modeled systems. This suggests the productivity of exploited stocks has been systematically underestimated. It also suggests that indices of fishing effort may be used to predict escape mortality and correct for losses. Our results illustrate how explicitly accounting for collateral effects of fishery extraction may improve estimates of productivity and better inform management metrics derived from estimates of stock-recruitment analyses.
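The discounting idea can be sketched as follows: scale observed escapement down by an effort-dependent mortality fraction before fitting a stock-recruitment curve, then compare the fitted productivity with and without the discount. Everything below (the Ricker form, the discount function, the data) is hypothetical and only illustrates the mechanics, not the paper's maximum likelihood or Bayesian fits.

```python
# Toy comparison of Ricker fits with raw vs. effort-discounted spawner counts.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(6)
effort = rng.uniform(0.2, 1.0, 25)                    # relative fishing effort
escapement = rng.uniform(0.5, 2.0, 25)                # millions of fish counted

def ricker(S, a, b):
    return a * S * np.exp(-b * S)

def effective_spawners(S, effort, d_max=0.3):
    """Counted escapement minus delayed mortality that scales with effort (assumed form)."""
    return S * (1.0 - d_max * effort)

recruits = ricker(effective_spawners(escapement, effort), a=3.0, b=0.6)
recruits *= rng.lognormal(0, 0.2, 25)

fit_raw, _ = curve_fit(ricker, escapement, recruits, p0=[2.0, 0.5])
fit_adj, _ = curve_fit(ricker, effective_spawners(escapement, effort), recruits, p0=[2.0, 0.5])
print("productivity (a) without discount: %.2f, with discount: %.2f" % (fit_raw[0], fit_adj[0]))
```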
Background noise spectra of global seismic stations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wada, M.M.; Claassen, J.P.
1996-08-01
Over an extended period of time, station noise spectra were collected from various sources for use in estimating the detection and location performance of global networks of seismic stations. As the database of noise spectra enlarged and duplicate entries became available, an effort was mounted to more carefully select station noise spectra while discarding others. This report discusses the methodology and criteria by which the noise spectra were selected. It also identifies and illustrates the station noise spectra which survived the selection process and which currently contribute to the modeling efforts. The resulting catalog of noise statistics not only benefits those who model network performance but also those who wish to select stations on the basis of their noise level, as may occur in designing networks or in selecting seismological data for analysis on the basis of station noise level. In view of the various ways by which station noise was estimated by the different contributors, it is advisable that future efforts which predict network performance have available station noise data and spectral estimation methods which are compatible with the statistics underlying seismic noise. This appropriately requires (1) averaging noise over seasonal and/or diurnal cycles, (2) averaging noise over time intervals comparable to those employed by actual detectors, and (3) using logarithmic measures of the noise.
Kenneth A. Baerenklau; Armando González-Cabán; Catrina I. Páez; Edgard Chávez
2009-01-01
The U.S. Forest Service is responsible for developing tools to facilitate effective and efficient fire management on wildlands and urban-wildland interfaces. Existing GIS-based fire modeling software only permits estimation of the costs of fire prevention and mitigation efforts as well as the effects of those efforts on fire behavior. This research demonstrates how the...
Hydrogen from coal cost estimation guidebook
NASA Technical Reports Server (NTRS)
Billings, R. E.
1981-01-01
In an effort to establish baseline information whereby specific projects can be evaluated, a current set of parameters which are typical of coal gasification applications was developed. Using these parameters a computer model allows researchers to interrelate cost components in a sensitivity analysis. The results make possible an approximate estimation of hydrogen energy economics from coal, under a variety of circumstances.
OVERVIEW AND STATUS OF LAKE MICHIGAN MASS BALANCE MODELLING PROJECT
With most of the data available from the Lake Michigan Mass Balance Project field program, the modeling efforts have begun in earnest. The tributary and atmospheric load estimates are or will be completed soon, so realistic simulations for calibration are beginning. A Quality Ass...
NASA Astrophysics Data System (ADS)
Kang, Hee Joong; Zhang, Chang Ik; Lee, Eun Ji; Seo, Young Il
2015-06-01
Hairtail (Trichiurus lepturus) has traditionally been harvested by multiple gear types in the Yellow Sea and the East China Sea, but not in the East Sea (Sea of Japan), in Korean waters. Six fishery types (offshore stow net, offshore longline, large pair trawl, large purse seine, large otter trawl, and offshore angling) target the hairtail stock and account for about 90% of the total annual catch. We attempted to develop an ecosystem-based fisheries assessment approach that determines the optimal allocation of catch quotas and fishing efforts for major fisheries. We standardized fishing effort for the six hairtail fisheries using a general linear model (GLM) and then estimated maximum sustainable yield (MSY) and maximum economic yield (MEY). MSY and MEY for the hairtail stock were estimated at 100,151 mt and 97,485 mt, respectively. In addition, we carried out an ecosystem-based risk analysis to obtain a species risk index (SRI), which was used to adjust the optimal proportion of fishing effort for the six hairtail fisheries as a penalty or an incentive. As a result, fishing effort ratios were adjusted by SRI for the six fishery types. The total allowable catch (TAC) was estimated as 97,485 mt, and the maximum net profit at TAC for the hairtail fisheries was estimated as 778 billion won (USD 765 million).
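The effort-standardization step can be illustrated with a small log-linear model containing year and gear effects, one common way a GLM is used to standardize catch rates before further analysis. The design, factor levels, and data below are invented; this is not the authors' GLM specification.

```python
# Minimal CPUE standardization sketch: log(CPUE) regressed on year and gear dummies.
import numpy as np

rng = np.random.default_rng(7)
years = np.repeat(np.arange(2005, 2015), 6)
gears = np.tile(np.arange(6), 10)                     # six fishery types
log_cpue = 0.1 * (years - 2005) + 0.3 * gears + rng.normal(0, 0.2, years.size)

# Design matrix: intercept + year dummies + gear dummies (first level as baseline)
X = np.column_stack(
    [np.ones(years.size)]
    + [(years == y).astype(float) for y in np.unique(years)[1:]]
    + [(gears == g).astype(float) for g in np.unique(gears)[1:]]
)
beta, *_ = np.linalg.lstsq(X, log_cpue, rcond=None)
year_effects = np.r_[0.0, beta[1:len(np.unique(years))]]
print("standardized year effects (log scale):", np.round(year_effects, 2))
```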
Aircraft ground damage and the use of predictive models to estimate costs
NASA Astrophysics Data System (ADS)
Kromphardt, Benjamin D.
Aircraft are frequently involved in ground damage incidents, and repair costs are often accepted as part of doing business. The Flight Safety Foundation (FSF) estimates ground damage to cost operators $5-10 billion annually. Incident reports, documents from manufacturers or regulatory agencies, and other resources were examined to better understand the problem of ground damage in aviation. Major contributing factors were explained, and two versions of a computer-based model were developed to project costs and show what is possible. One objective was to determine if the models could match the FSF's estimate. Another objective was to better understand cost savings that could be realized by efforts to further mitigate the occurrence of ground incidents. Model effectiveness was limited by access to official data, and assumptions were used if data was not available. However, the models were determined to sufficiently estimate the costs of ground incidents.
NASA Astrophysics Data System (ADS)
Hobbs, J.; Turmon, M.; David, C. H.; Reager, J. T., II; Famiglietti, J. S.
2017-12-01
NASA's Western States Water Mission (WSWM) combines remote sensing of the terrestrial water cycle with hydrological models to provide high-resolution state estimates for multiple variables. The effort includes both land surface and river routing models that are subject to several sources of uncertainty, including errors in the model forcing and model structural uncertainty. Computational and storage constraints prohibit extensive ensemble simulations, so this work outlines efficient but flexible approaches for estimating and reporting uncertainty. Calibrated by remote sensing and in situ data where available, we illustrate the application of these techniques in producing state estimates with associated uncertainties at kilometer-scale resolution for key variables such as soil moisture, groundwater, and streamflow.
Single-Step BLUP with Varying Genotyping Effort in Open-Pollinated Picea glauca.
Ratcliffe, Blaise; El-Dien, Omnia Gamal; Cappa, Eduardo P; Porth, Ilga; Klápště, Jaroslav; Chen, Charles; El-Kassaby, Yousry A
2017-03-10
Maximization of genetic gain in forest tree breeding programs is contingent on the accuracy of the predicted breeding values and precision of the estimated genetic parameters. We investigated the effect of the combined use of contemporary pedigree information and genomic relatedness estimates on the accuracy of predicted breeding values and precision of estimated genetic parameters, as well as rankings of selection candidates, using single-step genomic evaluation (HBLUP). In this study, two traits with diverse heritabilities [tree height (HT) and wood density (WD)] were assessed at various levels of family genotyping efforts (0, 25, 50, 75, and 100%) from a population of white spruce (Picea glauca) consisting of 1694 trees from 214 open-pollinated families, representing 43 provenances in Québec, Canada. The results revealed that HBLUP bivariate analysis is effective in reducing the known bias in heritability estimates of open-pollinated populations, as it exposes hidden relatedness, potential pedigree errors, and inbreeding. The addition of genomic information in the analysis considerably improved the accuracy in breeding value estimates by accounting for both Mendelian sampling and historical coancestry that were not captured by the contemporary pedigree alone. Increasing family genotyping efforts were associated with continuous improvement in model fit, precision of genetic parameters, and breeding value accuracy. Yet, improvements were observed even at minimal genotyping effort, indicating that even modest genotyping effort is effective in improving genetic evaluation. The combined utilization of both pedigree and genomic information may be a cost-effective approach to increase the accuracy of breeding values in forest tree breeding programs where shallow pedigrees and large testing populations are the norm. Copyright © 2017 Ratcliffe et al.
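The single-step idea, blending the pedigree relationship matrix with a genomic relationship matrix for the genotyped subset, can be sketched with the usual H-inverse construction. The tiny matrices below are placeholders, not spruce data, and the code assumes the common Legarra/Aguilar-style formulation rather than the authors' exact implementation.

```python
# Conceptual single-step (H-matrix) sketch: pedigree A everywhere, genomic G
# substituted for the genotyped block via the standard H-inverse correction.
import numpy as np

A = np.array([[1.0, 0.5, 0.5, 0.25],
              [0.5, 1.0, 0.25, 0.5],
              [0.5, 0.25, 1.0, 0.25],
              [0.25, 0.5, 0.25, 1.0]])       # pedigree relationships, 4 trees
geno_idx = [2, 3]                             # only trees 3 and 4 genotyped
G = np.array([[1.02, 0.31],
              [0.31, 0.98]])                  # marker-based relationships

H_inv = np.linalg.inv(A)
A22_inv = np.linalg.inv(A[np.ix_(geno_idx, geno_idx)])
# Replace pedigree information with genomic information for the genotyped block
H_inv[np.ix_(geno_idx, geno_idx)] += np.linalg.inv(G) - A22_inv
print(np.round(np.linalg.inv(H_inv), 2))      # the combined H matrix
```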
Estimation of real-time runway surface contamination using flight data recorder parameters
NASA Astrophysics Data System (ADS)
Curry, Donovan
This research presents the development of an analytic process for friction coefficient estimation. Under static equilibrium, the sum of forces and moments acting on the aircraft, in the aircraft body coordinate system, while on the ground at any instant is equal to zero. Under this premise, the longitudinal, lateral, and normal forces due to landing are calculated, along with the individual deceleration components present when an aircraft comes to rest during ground roll. To validate this hypothesis, a six-degree-of-freedom aircraft model was created and landing tests were simulated on different surfaces. The simulated aircraft model includes a high-fidelity aerodynamic model, thrust model, landing gear model, friction model, and antiskid model. Three main surfaces were defined in the friction model: dry, wet, and snow/ice. Only the parameters recorded by an FDR are used directly from the aircraft model; all others are estimated or known a priori. The estimation of these unknown parameters is also presented. With all needed parameters, a comparison and validation of simulated and estimated data, under different runway conditions, is performed. Finally, this report presents the results of a sensitivity analysis to provide a measure of reliability of the analytic estimation process. Linear and non-linear sensitivity analyses were performed to quantify the uncertainty implicit in modeling estimated parameters and how it can affect the calculation of the instantaneous coefficient of friction. Reconstructing the instantaneous coefficient of friction from force and moment equilibrium about the CG at landing appears to give a reasonably accurate estimate when compared to the simulated friction coefficient. This remains true when white noise is added to the FDR and estimated parameters and when crosswind is introduced to the simulation. The linear analysis shows that the minimum frequency at which the algorithm still provides moderately accurate data is 2 Hz. In addition, the linear analysis shows that, with estimated parameters increased and decreased by up to 25% at random, high-priority parameters have to be accurate to within +/-5% to keep the change in the average coefficient of friction below 1%. The non-linear analysis shows that the algorithm can be considered reasonably accurate for all simulated cases when inaccuracies in the estimated parameters vary randomly and simultaneously up to +/-27%. In the worst case, the maximum percentage change in the average coefficient of friction is less than 10% for all surfaces.
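The force-balance premise can be reduced to a few lines: attribute the recorded longitudinal deceleration to thrust, drag, and tire friction, then divide the implied friction force by the normal force. The sketch below uses invented numbers, ignores braking-force distribution, antiskid behaviour, and the moment balance, and is only a simplified illustration of the idea, not the report's algorithm.

```python
# Back-of-the-envelope friction coefficient from a longitudinal force balance.
def friction_coefficient(mass_kg, accel_x, thrust_N, drag_N, lift_N, g=9.81):
    """mu = friction force / normal force from m*a = T - D - F_friction."""
    friction_force = thrust_N - drag_N - mass_kg * accel_x   # what is left over
    normal_force = mass_kg * g - lift_N
    return friction_force / normal_force

mu = friction_coefficient(mass_kg=60_000, accel_x=-2.5,     # m/s^2, decelerating
                          thrust_N=10_000, drag_N=40_000, lift_N=150_000)
print(f"estimated instantaneous mu = {mu:.2f}")
```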
Demographics of reintroduced populations: estimation, modeling, and decision analysis
Converse, Sarah J.; Moore, Clinton T.; Armstrong, Doug P.
2013-01-01
Reintroduction can be necessary for recovering populations of threatened species. However, the success of reintroduction efforts has been poorer than many biologists and managers would hope. To increase the benefits gained from reintroduction, management decision making should be couched within formal decision-analytic frameworks. Decision analysis is a structured process for informing decision making that recognizes that all decisions have a set of components—objectives, alternative management actions, predictive models, and optimization methods—that can be decomposed, analyzed, and recomposed to facilitate optimal, transparent decisions. Because the outcome of interest in reintroduction efforts is typically population viability or related metrics, models used in decision analysis efforts for reintroductions will need to include population models. In this special section of the Journal of Wildlife Management, we highlight examples of the construction and use of models for informing management decisions in reintroduced populations. In this introductory contribution, we review concepts in decision analysis, population modeling for analysis of decisions in reintroduction settings, and future directions. Increased use of formal decision analysis, including adaptive management, has great potential to inform reintroduction efforts. Adopting these practices will require close collaboration among managers, decision analysts, population modelers, and field biologists.
Overcommitment as a predictor of effort-reward imbalance: evidence from an 8-year follow-up study.
Feldt, Taru; Hyvönen, Katriina; Mäkikangas, Anne; Rantanen, Johanna; Huhtala, Mari; Kinnunen, Ulla
2016-07-01
The effort-reward imbalance (ERI) model includes the personal characteristic of overcommitment (OC) and the job-related characteristics of effort, reward, and ERI, all of which are assumed to play a role in an employee's health and well-being at work. The aim of the present longitudinal study was to shed more light on the dynamics of the ERI model by investigating the basic hypotheses related to the role of OC in the model, ie, to establish whether an employee's OC could be a risk factor for an increased experience of high effort, low reward, and high ERI at work. The study was based on 5-wave, 8-year follow-up data collected among Finnish professionals in 2006 (T1, N=747), 2008 (T2, N=422), 2010 (T3, N=368), 2012 (T4, N=325), and 2014 (T5, N=273). The participants were mostly male (85% at T1) and the majority of them worked in technical fields. OC, effort, reward, and ERI were measured at each time point with the 23-item ERI scale. Three cross-lagged structural equation models (SEM) were estimated and compared by using full information maximum likelihood method: (i) OC predicted later experiences of effort, reward, and ERI (normal causation model), (ii) effort, reward, and ERI predicted later OC (reversed causation model), and (iii) associations in normal causal and reversed causal models were simultaneously valid (reciprocal causation model). The results supported the normal causation model: strong OC predicted later experiences of high effort, low reward and high ERI. High OC is a risk factor for an increased experience of job strain factors; that is, high effort, low reward, and high ERI. Thus, OC is a risk factor not only for an employee's well-being and health but also for an increasing risk for perceiving adverse job strain factors in the working environment.
NASA Astrophysics Data System (ADS)
Boyer, T.; Locarnini, R. A.; Mishonov, A. V.; Reagan, J. R.; Seidov, D.; Zweng, M.; Levitus, S.
2017-12-01
Ocean heat uptake is the major factor in sequestering the Earth's Energy Imbalance (EEI). Since 2000, the National Centers for Environmental Information (NCEI) have been estimating historical ocean heat content (OHC) changes back to the 1950s, as well as monitoring recent OHC. Over these years, through worldwide community efforts, methods of calculating OHC have substantially improved. Similarly, estimation of the uncertainty of ocean heat content calculations provide new insight into how well EEI estimates can be constrained using in situ measurements and models. The changing ocean observing system, especially with the near-global year-round coverage afforded by Argo, has also allowed more confidence in regional and global OHC estimates and provided a benchmark for better understanding of historical OHC changes. NCEI is incorporating knowledge gained through these global efforts into the basic methods, instrument bias corrections, uncertainty measurements, and temporal and spatial resolution capabilities of historic OHC change estimation and recent monitoring. The nature of these improvements and their consequences for estimation of OHC in relation to the EEI will be discussed.
Uncertainty in sample estimates and the implicit loss function for soil information.
NASA Astrophysics Data System (ADS)
Lark, Murray
2015-04-01
One significant challenge in the communication of uncertain information is how to enable the sponsors of sampling exercises to make a rational choice of sample size. One way to do this is to compute the value of additional information given the loss function for errors. The loss function expresses the costs that result from decisions made using erroneous information. In certain circumstances, such as remediation of contaminated land prior to development, loss functions can be computed and used to guide rational decision making on the amount of resource to spend on sampling to collect soil information. In many circumstances the loss function cannot be obtained prior to decision making. This may be the case when multiple decisions may be based on the soil information and the costs of errors are hard to predict. The implicit loss function is proposed as a tool to aid decision making in these circumstances. Conditional on a logistical model which expresses costs of soil sampling as a function of effort, and statistical information from which the error of estimates can be modelled as a function of effort, the implicit loss function is the loss function which makes a particular decision on effort rational. In this presentation the loss function is defined and computed for a number of arbitrary decisions on sampling effort for a hypothetical soil monitoring problem. This is based on a logistical model of sampling cost parameterized from a recent geochemical survey of soil in Donegal, Ireland and on statistical parameters estimated with the aid of a process model for change in soil organic carbon. It is shown how the implicit loss function might provide a basis for reflection on a particular choice of sample size by comparing it with the values attributed to soil properties and functions. Scope for further research to develop and apply the implicit loss function to help decision making by policy makers and regulators is then discussed.
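The implicit loss function can be made concrete with a toy calculation: given an assumed cost model C(n) and an assumed error model for the estimate, the loss per unit error that would make a chosen sample size n rational follows from the first-order condition C'(n) = -lambda * e'(n). The functional forms and numbers below are placeholders, not the Donegal survey's fitted cost or error models, and the formalization itself is one simple reading of the idea described.

```python
# Toy implicit-loss calculation: what loss per unit error makes sample size n optimal?
import numpy as np

def cost(n, fixed=5_000.0, per_sample=40.0):
    return fixed + per_sample * n                 # assumed sampling logistics

def error(n, sigma=12.0):
    return sigma / np.sqrt(n)                     # assumed standard error of the estimate

def implicit_lambda(n_chosen, dn=1e-3):
    """Loss per unit of error implied by treating n_chosen as the rational choice."""
    dc = (cost(n_chosen + dn) - cost(n_chosen - dn)) / (2 * dn)
    de = (error(n_chosen + dn) - error(n_chosen - dn)) / (2 * dn)
    return -dc / de                               # first-order optimality condition

for n in (25, 100, 400):
    print(f"n={n:4d}: implied loss = {implicit_lambda(n):,.0f} per unit error")
```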
Assessing the influence of abatement efforts and other human activities on ozone levels is complicated by the atmosphere's changeable nature. Two statistical methods, the dynamic linear model(DLM) and the generalized additive model (GAM), are used to estimate ozone trends in the...
Bridging Scientific Model Outputs with Emergency Response Needs in Catastrophic Earthquake Responses
ERIC Educational Resources Information Center
Johannes, Tay W.
2010-01-01
In emergency management, scientific models are widely used for running hazard simulations and estimating losses often in support of planning and mitigation efforts. This work expands utility of the scientific model into the response phase of emergency management. The focus is on the common operating picture as it gives context to emergency…
Bieryla, Kathleen A; Anderson, Dennis E; Madigan, Michael L
2009-02-01
The main purpose of this study was to compare three methods of determining relative effort during sit-to-stand (STS). Fourteen young (mean ± SD, 19.6 ± 1.2 years old) and 17 older (61.7 ± 5.5 years old) adults completed six STS trials at three speeds: slow, normal, and fast. Sagittal plane joint torques at the hip, knee, and ankle were calculated through inverse dynamics. Isometric and isokinetic maximum voluntary contractions (MVC) for the hip, knee, and ankle were collected and used as model parameters to predict the participant-specific maximum voluntary joint torque. Three different measures of relative effort were determined by normalizing STS joint torques to three different estimates of maximum voluntary torque. Relative effort at the hip, knee, and ankle was higher when accounting for variations in maximum voluntary torque with joint angle and angular velocity (hip = 26.3 ± 13.5%, knee = 78.4 ± 32.2%, ankle = 27.9 ± 14.1%) than with methods that do not account for these variations (hip = 23.5 ± 11.7%, knee = 51.7 ± 15.0%, ankle = 20.7 ± 10.4%). At higher velocities, the difference between relative effort calculated with respect to isometric MVC and relative effort incorporating joint angle and angular velocity became more evident. Estimates of relative effort that account for the variations in maximum voluntary torque with joint angle and angular velocity may provide higher levels of accuracy than methods based on measurements of maximal isometric torques.
The Brazilian version of the effort-reward imbalance questionnaire to assess job stress.
Chor, Dóra; Werneck, Guilherme Loureiro; Faerstein, Eduardo; Alves, Márcia Guimarães de Mello; Rotenberg, Lúcia
2008-01-01
The effort-reward imbalance (ERI) model has been used to assess the health impact of job stress. We aimed at describing the cross-cultural adaptation of the ERI questionnaire into Portuguese and some psychometric properties, in particular internal consistency, test-retest reliability, and factorial structure. We developed a Brazilian version of the ERI using a back-translation method and tested its reliability. The test-retest reliability study was conducted with 111 health workers and University staff. The current analyses are based on 89 participants, after exclusion of those with missing data. Reproducibility (intraclass correlation coefficients) for the "effort", "reward", and "overcommitment" dimensions of the scale was estimated at 0.76, 0.86, and 0.78, respectively. Internal consistency (Cronbach's alpha) estimates for these same dimensions were 0.68, 0.78, and 0.78, respectively. The exploratory factorial structure was fairly consistent with the model's theoretical components. We conclude that the results of this study represent the first evidence in favor of the application of the Brazilian Portuguese version of the ERI scale in health research in populations with similar socioeconomic characteristics.
Selişteanu, Dan; Șendrescu, Dorin; Georgeanu, Vlad; Roman, Monica
2015-01-01
Monoclonal antibodies (mAbs) are at present one of the fastest growing products of pharmaceutical industry, with widespread applications in biochemistry, biology, and medicine. The operation of mAbs production processes is predominantly based on empirical knowledge, the improvements being achieved by using trial-and-error experiments and precedent practices. The nonlinearity of these processes and the absence of suitable instrumentation require an enhanced modelling effort and modern kinetic parameter estimation strategies. The present work is dedicated to nonlinear dynamic modelling and parameter estimation for a mammalian cell culture process used for mAb production. By using a dynamical model of such kind of processes, an optimization-based technique for estimation of kinetic parameters in the model of mammalian cell culture process is developed. The estimation is achieved as a result of minimizing an error function by a particle swarm optimization (PSO) algorithm. The proposed estimation approach is analyzed in this work by using a particular model of mammalian cell culture, as a case study, but is generic for this class of bioprocesses. The presented case study shows that the proposed parameter estimation technique provides a more accurate simulation of the experimentally observed process behaviour than reported in previous studies.
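A compact particle swarm optimization loop illustrates the estimation strategy: particles move through parameter space, each remembering its personal best and being pulled toward the global best, while the objective is the squared error between model output and data. The "model" below is a toy Monod-type curve with invented parameters, not the paper's mammalian cell culture model.

```python
# Minimal PSO sketch minimizing a sum-of-squares error between model and data.
import numpy as np

rng = np.random.default_rng(8)
t = np.linspace(0, 10, 40)

def model(theta, t):
    mu_max, ks = theta
    s = 5.0 * np.exp(-0.3 * t) + 0.1                 # assumed substrate profile
    return np.cumsum(mu_max * s / (ks + s)) * (t[1] - t[0])

data = model([0.8, 1.2], t) + rng.normal(0, 0.05, t.size)

def error(theta):
    return np.sum((model(theta, t) - data) ** 2)

n_particles, n_iter = 30, 200
pos = rng.uniform([0.1, 0.1], [2.0, 5.0], (n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_err = pos.copy(), np.array([error(p) for p in pos])
gbest = pbest[np.argmin(pbest_err)]

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 1))
    # Inertia plus pulls toward personal and global bests
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    err = np.array([error(p) for p in pos])
    improved = err < pbest_err
    pbest[improved], pbest_err[improved] = pos[improved], err[improved]
    gbest = pbest[np.argmin(pbest_err)]

print("estimated parameters:", np.round(gbest, 2))
```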
Validated Predictions of Metabolic Energy Consumption for Submaximal Effort Movement
Tsianos, George A.; MacFadden, Lisa N.
2016-01-01
Physical performance emerges from complex interactions among many physiological systems that are largely driven by the metabolic energy demanded. Quantifying metabolic demand is an essential step for revealing the many mechanisms of physical performance decrement, but accurate predictive models do not exist. The goal of this study was to investigate if a recently developed model of muscle energetics and force could be extended to reproduce the kinematics, kinetics, and metabolic demand of submaximal effort movement. Upright dynamic knee extension against various levels of ergometer load was simulated. Task energetics were estimated by combining the model of muscle contraction with validated models of lower limb musculotendon paths and segment dynamics. A genetic algorithm was used to compute the muscle excitations that reproduced the movement with the lowest energetic cost, which was determined to be an appropriate criterion for this task. Model predictions of oxygen uptake rate (VO2) were well within experimental variability for the range over which the model parameters were confidently known. The model's accurate estimates of metabolic demand make it useful for assessing the likelihood and severity of physical performance decrement for a given task as well as investigating underlying physiologic mechanisms. PMID:27248429
Re-analysis of Alaskan benchmark glacier mass-balance data using the index method
Van Beusekom, Ashley E.; O'Neel, Shad R.; March, Rod S.; Sass, Louis C.; Cox, Leif H.
2010-01-01
At Gulkana and Wolverine Glaciers, designated the Alaskan benchmark glaciers, we re-analyzed and re-computed the mass balance time series from 1966 to 2009 to accomplish our goal of making more robust time series. Each glacier's data record was analyzed with the same methods. For surface processes, we estimated missing information with an improved degree-day model. Degree-day models predict ablation from the sum of daily mean temperatures and an empirical degree-day factor. We modernized the traditional degree-day model and derived new degree-day factors in an effort to match the balance time series more closely. We estimated missing yearly-site data with a new balance gradient method. These efforts showed that an additional step needed to be taken at Wolverine Glacier to adjust for non-representative index sites. As with the previously calculated mass balances, the re-analyzed balances showed a continuing trend of mass loss. We noted that the time series, and thus our estimate of the cumulative mass loss over the period of record, was very sensitive to the data input, and suggest the need to add data-collection sites and modernize our weather stations.
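The degree-day relation used above is simple enough to show directly: ablation is the product of an empirical degree-day factor and the sum of positive daily mean temperatures. The factor and temperatures below are hypothetical illustration values, not the glaciers' derived factors.

```python
import numpy as np

# Degree-day ablation sketch: melt = degree-day factor * positive degree-day sum.
daily_mean_temp_c = np.array([-2.0, 1.5, 3.0, 4.2, 0.5, -1.0, 6.1])  # deg C
ddf = 4.0  # mm water equivalent per deg C per day (hypothetical)

positive_degree_days = np.sum(np.maximum(daily_mean_temp_c, 0.0))
ablation_mm_we = ddf * positive_degree_days
print(f"PDD sum = {positive_degree_days:.1f} deg C day, "
      f"ablation ≈ {ablation_mm_we:.0f} mm w.e.")
```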
Optimizing Estimated Loss Reduction for Active Sampling in Rank Learning
2008-01-01
active learning framework for SVM-based and boosting-based rank learning. Our approach suggests sampling based on maximizing the estimated loss differential over unlabeled data. Experimental results on two benchmark corpora show that the proposed model substantially reduces the labeling effort, and achieves superior performance rapidly with as much as 30% relative improvement over the margin-based sampling
An Investigation Into the Effects of Frequency Response Function Estimators on Model Updating
NASA Astrophysics Data System (ADS)
Ratcliffe, M. J.; Lieven, N. A. J.
1999-03-01
Model updating is a very active research field, in which significant effort has been invested in recent years. Model updating methodologies are invariably successful when used on noise-free simulated data, but tend to be unpredictable when presented with real experimental data that are—unavoidably—corrupted with uncorrelated noise content. In the development and validation of model-updating strategies, a random zero-mean Gaussian variable is added to simulated test data to tax the updating routines more fully. This paper proposes a more sophisticated model for experimental measurement noise, and this is used in conjunction with several different frequency response function estimators, from the classical H1 and H2 to more refined estimators that purport to be unbiased. Finite-element model case studies, in conjunction with a genuine experimental test, suggest that the proposed noise model is a more realistic representation of experimental noise phenomena. The choice of estimator is shown to have a significant influence on the viability of the FRF sensitivity method. These test cases find that the use of the H2 estimator for model updating purposes is contraindicated, and that there is no advantage to be gained by using the sophisticated estimators over the classical H1 estimator.
Jiao, Y.; Lapointe, N.W.R.; Angermeier, P.L.; Murphy, B.R.
2009-01-01
Models of species' demographic features are commonly used to understand population dynamics and inform management tactics. Hierarchical demographic models are ideal for the assessment of non-indigenous species because our knowledge of non-indigenous populations is usually limited, data on demographic traits often come from a species' native range, these traits vary among populations, and traits are likely to vary considerably over time as species adapt to new environments. Hierarchical models readily incorporate this spatiotemporal variation in species' demographic traits by representing demographic parameters as multi-level hierarchies. As is done for traditional non-hierarchical matrix models, sensitivity and elasticity analyses are used to evaluate the contributions of different life stages and parameters to estimates of population growth rate. We applied a hierarchical model to northern snakehead (Channa argus), a fish currently invading the eastern United States. We used a Monte Carlo approach to simulate uncertainties in the sensitivity and elasticity analyses and to project future population persistence under selected management tactics. We gathered key biological information on northern snakehead natural mortality, maturity and recruitment in its native Asian environment. We compared the model performance with and without hierarchy of parameters. Our results suggest that ignoring the hierarchy of parameters in demographic models may result in poor estimates of population size and growth and may lead to erroneous management advice. In our case, the hierarchy used multi-level distributions to simulate the heterogeneity of demographic parameters across different locations or situations. The probability that the northern snakehead population will increase and harm the native fauna is considerable. Our elasticity and prognostic analyses showed that intensive control efforts immediately prior to spawning and/or juvenile-dispersal periods would be more effective (and probably require less effort) than year-round control efforts. Our study demonstrates the importance of considering the hierarchy of parameters in estimating population growth rate and evaluating different management strategies for non-indigenous invasive species. © 2009 Elsevier B.V.
Developing and Testing a Model to Predict Outcomes of Organizational Change
Gustafson, David H; Sainfort, François; Eichler, Mary; Adams, Laura; Bisognano, Maureen; Steudel, Harold
2003-01-01
Objective To test the effectiveness of a Bayesian model employing subjective probability estimates for predicting success and failure of health care improvement projects. Data Sources Experts' subjective assessment data for model development and independent retrospective data on 221 healthcare improvement projects in the United States, Canada, and the Netherlands collected between 1996 and 2000 for validation. Methods A panel of theoretical and practical experts and literature in organizational change were used to identify factors predicting the outcome of improvement efforts. A Bayesian model was developed to estimate probability of successful change using subjective estimates of likelihood ratios and prior odds elicited from the panel of experts. A subsequent retrospective empirical analysis of change efforts in 198 health care organizations was performed to validate the model. Logistic regression and ROC analysis were used to evaluate the model's performance using three alternative definitions of success. Data Collection For the model development, experts' subjective assessments were elicited using an integrative group process. For the validation study, a staff person intimately involved in each improvement project responded to a written survey asking questions about model factors and project outcomes. Results Logistic regression chi-square statistics and areas under the ROC curve demonstrated a high level of model performance in predicting success. Chi-square statistics were significant at the 0.001 level and areas under the ROC curve were greater than 0.84. Conclusions A subjective Bayesian model was effective in predicting the outcome of actual improvement projects. Additional prospective evaluations as well as testing the impact of this model as an intervention are warranted. PMID:12785571
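The odds-form Bayesian update underlying a model of this kind can be shown in a few lines: prior odds of success are multiplied by expert-elicited likelihood ratios for the factors present in a given project. All numbers below are hypothetical illustrations, not the panel's actual elicited values or factor list.

```python
import numpy as np

# Odds-form Bayes update with subjective likelihood ratios (hypothetical values).
prior_odds = 1.0          # e.g., 1:1 prior odds of project success
likelihood_ratios = {     # LR > 1 favors success, LR < 1 favors failure
    "strong senior leadership support": 2.5,
    "dedicated project resources": 1.8,
    "staff resistance to change": 0.6,
}

posterior_odds = prior_odds * np.prod(list(likelihood_ratios.values()))
posterior_prob = posterior_odds / (1.0 + posterior_odds)
print(f"posterior odds = {posterior_odds:.2f}, P(success) = {posterior_prob:.2f}")
```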
Modification of a rainfall-runoff model for distributed modeling in a GIS and its validation
NASA Astrophysics Data System (ADS)
Nyabeze, W. R.
A rainfall-runoff model that can be interfaced with a Geographical Information System (GIS) to integrate the definition, measurement, and calculation of parameter values for spatial features presents considerable advantages. The modification of the GWBasic Wits Rainfall-Runoff Erosion Model (GWBRafler) to enable parameter value estimation in a GIS (GISRafler) is presented in this paper. Algorithms are applied to estimate parameter values, reducing the number of input parameters and the effort required to populate them. The use of a GIS makes the relationship between parameter estimates and cover characteristics more evident. This paper has been produced as part of research to generalize the GWBRafler on a spatially distributed basis. Modular data structures are assumed, and parameter values are weighted relative to the module area and centroid properties. Modifications to the GWBRafler enable better estimation of low flows, which are typical in drought conditions.
Spatially-explicit models of global tree density.
Glick, Henry B; Bettigole, Charlie; Maynard, Daniel S; Covey, Kristofer R; Smith, Jeffrey R; Crowther, Thomas W
2016-08-16
Remote sensing and geographic analysis of woody vegetation provide means of evaluating the distribution of natural resources, patterns of biodiversity and ecosystem structure, and socio-economic drivers of resource utilization. While these methods bring geographic datasets with global coverage into our day-to-day analytic spheres, many of the studies that rely on these strategies do not capitalize on the extensive collection of existing field data. We present the methods and maps associated with the first spatially-explicit models of global tree density, which relied on over 420,000 forest inventory field plots from around the world. This research is the result of a collaborative effort engaging over 20 scientists and institutions, and capitalizes on an array of analytical strategies. Our spatial data products offer precise estimates of the number of trees at global and biome scales, but should not be used for local-level estimation. At larger scales, these datasets can contribute valuable insight into resource management, ecological modelling efforts, and the quantification of ecosystem services.
A model to estimate cost-savings in diabetic foot ulcer prevention efforts.
Barshes, Neal R; Saedi, Samira; Wrobel, James; Kougias, Panos; Kundakcioglu, O Erhun; Armstrong, David G
2017-04-01
Sustained efforts at preventing diabetic foot ulcers (DFUs) and subsequent leg amputations are sporadic in most health care systems despite the high costs associated with such complications. We sought to estimate effectiveness targets at which cost-savings (i.e., improved health outcomes at decreased total costs) might occur. A Markov model with probabilistic sensitivity analyses was used to simulate the five-year survival, incidence of foot complications, and total health care costs in a hypothetical population of 100,000 people with diabetes. Clinical event and cost estimates were obtained from previously published trials and studies. A population without previous DFU but with 17% neuropathy and 11% peripheral artery disease (PAD) prevalence was assumed. Primary prevention (PP) was defined as reducing initial DFU incidence. PP was more than 90% likely to provide cost-savings when annual prevention costs are less than $50/person and/or annual DFU incidence is reduced by at least 25%. Efforts directed at patients with diabetes who were at moderate or high risk for DFUs were very likely to provide cost-savings if DFU incidence was decreased by at least 10% and/or the cost was less than $150 per person per year. Low-cost DFU primary prevention efforts producing even small decreases in DFU incidence may provide the best opportunity for cost-savings, especially if focused on patients with neuropathy and/or PAD. Mobile phone-based reminders, self-identification of risk factors (e.g., the Ipswich touch test), and written brochures may be among such low-cost interventions that should be investigated for cost-savings potential. Published by Elsevier Inc.
Development on electromagnetic impedance function modeling and its estimation
NASA Astrophysics Data System (ADS)
Sutarno, D.
2015-09-01
Today, electromagnetic methods such as magnetotellurics (MT) and controlled-source audio MT (CSAMT) are used in a broad variety of applications. Their usefulness in poor seismic areas and their negligible environmental impact are integral parts of effective exploration at minimum cost. As exploration has been forced into more difficult areas, the importance of MT and CSAMT, in conjunction with other techniques, has tended to grow continuously. However, important and difficult problems remain to be solved concerning our ability to collect, process, and interpret MT and CSAMT data in complex 3D structural environments. This talk aims at reviewing and discussing recent developments in MT and CSAMT impedance function modeling, along with some improvements in estimation procedures for the corresponding impedance functions. In MT impedance modeling, research efforts focus on developing numerical methods for computing the impedance functions of three-dimensional (3-D) earth resistivity models. For that reason, 3-D finite-element numerical modeling of the impedances is developed based on the edge element method. In the CSAMT case, the efforts focus on addressing the non-plane-wave problem in the corresponding impedance functions. Concerning estimation of MT and CSAMT impedance functions, research has focused on improving the quality of the estimates. To that end, a non-linear regression approach based on robust M-estimators and the Hilbert transform operating on the causal transfer functions is used to deal with outliers (abnormal data), which are frequently superimposed on the normal ambient MT and CSAMT noise fields. As validated, the proposed MT impedance modeling method gives acceptable results for standard three-dimensional resistivity models. The full-solution modeling that accommodates the non-plane-wave effect for CSAMT impedances is applied for all measurement zones, including the near-, transition-, and far-field zones, so the plane-wave correction is no longer needed for the impedances. In the resulting robust impedance estimates, outlier contamination is removed and self-consistency between the real and imaginary parts of the impedance estimates is guaranteed. Using synthetic and real MT data, it is shown that the proposed robust estimation methods always yield impedance estimates that are better than conventional least squares (LS) estimation, even under conditions of severe noise contamination. A recent development on constrained robust CSAMT impedance estimation is also discussed. Using synthetic CSAMT data, it is demonstrated that the proposed methods can produce usable CSAMT transfer functions for all measurement zones.
Occurrence and distribution of Indian primates
Karanth, K.K.; Nichols, J.D.; Hines, J.E.
2010-01-01
Global and regional species conservation efforts are hindered by poor distribution data and range maps. Many Indian primates face extinction, but assessments of population status are hindered by lack of reliable distribution data. We estimated the current occurrence and distribution of 15 Indian primates by applying occupancy models to field data from a country-wide survey of local experts. We modeled species occurrence in relation to ecological and social covariates (protected areas, landscape characteristics, and human influences), which we believe are critical to determining species occurrence in India. We found evidence that protected areas positively influence occurrence of seven species and for some species are their only refuge. We found evergreen forests to be more critical for some primates along with temperate and deciduous forests. Elevation negatively influenced occurrence of three species. Lower human population density was positively associated with occurrence of five species, and higher cultural tolerance was positively associated with occurrence of three species. We find that 11 primates occupy less than 15% of the total land area of India. Vulnerable primates with restricted ranges are Golden langur, Arunachal macaque, Pig-tailed macaque, stump-tailed macaque, Phayre's leaf monkey, Nilgiri langur and Lion-tailed macaque. Only Hanuman langur and rhesus macaque are widely distributed. We find occupancy modeling to be useful in determining species ranges, and in agreement with current species ranking and IUCN status. In landscapes where monitoring efforts require optimizing cost, effort and time, we used ecological and social covariates to reliably estimate species occurrence and focus species conservation efforts. © Elsevier Ltd.
Rayne, Sierra; Forest, Kaya
2016-09-18
The air-water partition coefficients (Kaw) for 86 large polycyclic aromatic hydrocarbons and their unsaturated relatives were estimated using high-level G4(MP2) gas and aqueous phase calculations with the SMD, IEFPCM-UFF, and CPCM solvation models. An extensive method validation effort was undertaken which involved confirming that, via comparisons to experimental enthalpies of formation, gas-phase energies at the G4(MP2) level for the compounds of interest were at or near thermochemical accuracy. Investigations of the three solvation models using a range of neutral and ionic compounds suggested that while no clear preferential solvation model could be chosen in advance for accurate Kaw estimates of the target compounds, the employment of increasingly higher levels of theory would result in lower Kaw errors. Subsequent calculations on the polycyclic aromatic and unsaturated hydrocarbons at the G4(MP2) level revealed excellent agreement for the IEFPCM-UFF and CPCM models against limited available experimental data. The IEFPCM-UFF-G4(MP2) and CPCM-G4(MP2) solvation energy calculation approaches are anticipated to give Kaw estimates within typical experimental ranges, each having general Kaw errors of less than 0.5 log10 units. When applied to other large organic compounds, the method should allow development of a broad and reliable Kaw database for multimedia environmental modeling efforts on various contaminants.
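For readers unfamiliar with the conversion step implied above, the sketch below shows the thermodynamic relation commonly used to turn a computed hydration free energy into a dimensionless air-water partition coefficient, assuming the same molar concentration standard state in both phases (Ben-Naim convention). The free-energy value is a hypothetical illustration, not one of the paper's G4(MP2) results.

```python
import math

# Kaw = c_air / c_water from a hydration free energy dG_hyd = G(aq) - G(gas):
#   log10 Kaw = dG_hyd / (ln(10) * R * T)
R = 8.31446e-3   # kJ mol^-1 K^-1
T = 298.15       # K

dG_hyd = -35.0   # kJ/mol, hypothetical value for a large PAH
log10_kaw = dG_hyd / (math.log(10) * R * T)
print(f"log10 Kaw ≈ {log10_kaw:.2f}")
```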
James T. Peterson; Jason Dunham
2003-01-01
Effective conservation efforts for at-risk species require knowledge of the locations of existing populations. Species presence can be estimated directly by conducting field-sampling surveys or alternatively by developing predictive models. Direct surveys can be expensive and inefficient, particularly for rare and difficult- to-sample species, and models of species...
Lance A. Vickers; David R. Larsen; Daniel C. Dey; Benjamin O. Knapp; John M. Kabrick
2017-01-01
Predicting the effects of silvicultural choices on regeneration has been difficult with the tools available to foresters. In an effort to improve this, we developed a collection of reproduction establishment models based on stand development hypotheses and parameterized with empirical data for several species in the Missouri Ozarks. These models estimate third-year...
Fukuda, Hiromu; Maunder, Mark N.
2017-01-01
Catch-per-unit-effort (CPUE) is often the main piece of information used in fisheries stock assessment; however, the catch and effort data that are traditionally compiled from commercial logbooks can be incomplete or unreliable due to many reasons. Pacific bluefin tuna (PBF) is a seasonal target species in the Taiwanese longline fishery. Since 2010, detailed catch information for each PBF has been made available through a catch documentation scheme. However, previously, only market landing data with a low coverage of logbooks were available. Therefore, several nontraditional procedures were performed to reconstruct catch and effort data from many alternative data sources not directly obtained from fishers for 2001–2015: (1) Estimating the catch number from the landing weight for 2001–2003, for which the catch number information was incomplete, based on Monte Carlo simulation; (2) deriving fishing days for 2007–2009 from voyage data recorder data, based on a newly developed algorithm; and (3) deriving fishing days for 2001–2006 from vessel trip information, based on linear relationships between fishing and at-sea days. Subsequently, generalized linear mixed models were developed with the delta-lognormal assumption for standardizing the CPUE calculated from the reconstructed data, and three-stage model evaluation was performed using (1) Akaike and Bayesian information criteria to determine the most favorable variable composition of standardization models, (2) overall R2 via cross-validation to compare fitting performance between area-separated and area-combined standardizations, and (3) system-based testing to explore the consistency of the standardized CPUEs with auxiliary data in the PBF stock assessment model. The last stage of evaluation revealed high consistency among the data, thus demonstrating improvements in data reconstruction for estimating the abundance index, and consequently the stock assessment. PMID:28968434
USDA-ARS?s Scientific Manuscript database
Current restoration efforts for the Chesapeake Bay watershed mandate a timeline for reducing the load of nutrients and sediment to receiving waters. The Chesapeake Bay Watershed Model (WSM) has been used for two decades to simulate hydrology and nutrient and sediment transport; however, spatial limi...
Methods to estimate irrigated reference crop evapotranspiration - a review.
Kumar, R; Jat, M K; Shankar, V
2012-01-01
Efficient water management of crops requires accurate irrigation scheduling which, in turn, requires the accurate measurement of crop water requirement. Irrigation is applied to replenish depleted moisture for optimum plant growth. Reference evapotranspiration plays an important role in the determination of water requirements for crops and irrigation scheduling. Various models/approaches, varying from empirical to physically based and distributed, are available for the estimation of reference evapotranspiration. Mathematical models are useful tools to estimate the evapotranspiration and water requirement of crops, which is essential information required to design or choose the best water management practices. In this paper the most commonly used models/approaches, which are suitable for the estimation of daily water requirement for agricultural crops grown in different agro-climatic regions, are reviewed. Further, an effort has been made to compare the accuracy of various widely used methods under different climatic conditions.
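As one concrete example from the empirical end of the spectrum of methods reviewed, the Hargreaves-Samani temperature-based equation estimates reference evapotranspiration from daily temperature extremes and extraterrestrial radiation. The input values below are hypothetical; the review itself compares many such methods.

```python
import math

# Hargreaves-Samani reference evapotranspiration (mm/day):
#   ET0 = 0.0023 * Ra * (Tmean + 17.8) * sqrt(Tmax - Tmin)
# with Ra (extraterrestrial radiation) expressed as equivalent evaporation in mm/day.
def hargreaves_et0(tmax_c, tmin_c, ra_mm_per_day):
    tmean = 0.5 * (tmax_c + tmin_c)
    return 0.0023 * ra_mm_per_day * (tmean + 17.8) * math.sqrt(tmax_c - tmin_c)

print(f"ET0 ≈ {hargreaves_et0(32.0, 18.0, 15.5):.2f} mm/day")  # hypothetical inputs
```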
2013-01-01
Background Previous global burden of disease (GBD) estimates for household air pollution (HAP) from solid cookfuel use were based on categorical indicators of exposure. Recent progress in GBD methodologies that use integrated exposure-response (IER) curves for combustion particles required the development of models to quantitatively estimate average HAP levels experienced by large populations. Such models can also serve to inform public health intervention efforts. Thus, we developed a model to estimate national household concentrations of PM2.5 from solid cookfuel use in India, together with estimates for 29 states. Methods We monitored 24-hr household concentrations of PM2.5 in 617 rural households from 4 states in India on a cross-sectional basis between November 2004 and March 2005. We then developed log-linear regression models that predict household concentrations as a function of multiple, independent household-level variables available in national household surveys and generated national/state estimates using the Indian National Family and Health Survey (NFHS 2005). Results The measured mean 24-hr concentration of PM2.5 in solid cookfuel-using households ranged from 163 μg/m3 (95% CI: 143,183; median 106; IQR: 191) in the living area to 609 μg/m3 (95% CI: 547,671; median: 472; IQR: 734) in the kitchen area. Fuel type, kitchen type, ventilation, geographical location and cooking duration were found to be significant predictors of PM2.5 concentrations in the household model. k-fold cross-validation showed a fair degree of correlation (r = 0.56) between modeled and measured values. Extrapolation of the household results by state to all solid cookfuel-using households in India, covered by NFHS 2005, resulted in a modeled estimate of 450 μg/m3 (95% CI: 318,640) and 113 μg/m3 (95% CI: 102,127), for national average 24-hr PM2.5 concentrations in the kitchen and living areas, respectively. Conclusions The model affords substantial improvement over commonly used exposure indicators such as "percent solid cookfuel use" in HAP disease burden assessments, by providing some of the first estimates of national average HAP levels experienced in India. Model estimates also add considerable strength of evidence for framing and implementation of intervention efforts at the state and national levels. PMID:24020494
Allstadt, Kate E.; Thompson, Eric M.; Hearne, Mike; Nowicki Jessee, M. Anna; Zhu, J.; Wald, David J.; Tanyas, Hakan
2017-01-01
The U.S. Geological Survey (USGS) has made significant progress toward the rapid estimation of shaking and shaking-related losses through their Did You Feel It? (DYFI), ShakeMap, ShakeCast, and PAGER products. However, quantitative estimates of the extent and severity of secondary hazards (e.g., landsliding, liquefaction) are not currently included in scenarios and real-time post-earthquake products despite their significant contributions to hazard and losses for many events worldwide. We are currently running parallel global statistical models for landslides and liquefaction developed with our collaborators in testing mode, but much work remains in order to operationalize these systems. We are expanding our efforts in this area by not only improving the existing statistical models, but also by (1) exploring more sophisticated, physics-based models where feasible; (2) incorporating uncertainties; and (3) identifying and undertaking research and product development to provide useful landslide and liquefaction estimates and their uncertainties. Although our existing models use standard predictor variables that are accessible globally or regionally, including peak ground motions, topographic slope, and distance to water bodies, we continue to explore readily available proxies for rock and soil strength as well as other susceptibility terms. This work is based on the foundation of an expanding, openly available, case-history database we are compiling along with historical ShakeMaps for each event. The expected outcome of our efforts is a robust set of real-time secondary hazards products that meet the needs of a wide variety of earthquake information users. We describe the available datasets and models, developments currently underway, and anticipated products.
Developing recreational harvest regulations for an unexploited lake trout population
Lenker, Melissa A; Weidel, Brian C.; Jensen, Olaf P.; Solomon, Christopher T.
2016-01-01
Developing fishing regulations for previously unexploited populations presents numerous challenges, many of which stem from a scarcity of baseline information about abundance, population productivity, and expected angling pressure. We used simulation models to test the effect of six management strategies (catch and release; trophy, minimum, and maximum length limits; and protected and exploited slot length limits) on an unexploited population of Lake Trout Salvelinus namaycush in Follensby Pond, a 393-ha lake located in New York State’s Adirondack Park. We combined field and literature data and mark–recapture abundance estimates to parameterize an age-structured population model and used the model to assess the effects of each management strategy on abundance, catch per unit effort (CPUE), and harvest over a range of angler effort (0–2,000 angler-days/year). Lake Trout density (3.5 fish/ha for fish ≥ age 13, the estimated age at maturity) was similar to densities observed in other unexploited systems, but growth rate was relatively slow. Maximum harvest occurred at levels of effort ≤ 1,000 angler-days/year in all the scenarios considered. Regulations that permitted harvest of large postmaturation fish, such as New York’s standard Lake Trout minimum size limit or a trophy size limit, resulted in low harvest and high angler CPUE. Regulations that permitted harvest of small and sometimes immature fish, such as a protected slot or maximum size limit, allowed high harvest but resulted in low angler CPUE and produced rapid declines in harvest with increases in effort beyond the effort consistent with maximum yield. Management agencies can use these results to match regulations to management goals and to assess the risks of different management options for unexploited Lake Trout populations and other fish species with similar life history traits.
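To make the age-structured simulation approach above concrete, the sketch below projects abundance-at-age under two contrasting regulation types using the Baranov catch equation. Survival, recruitment, selectivity, and fishing mortality values are hypothetical illustrations, not the Follensby Pond estimates or New York's actual regulations.

```python
import numpy as np

# Minimal age-structured projection for comparing harvest regulations.
n_ages, years = 30, 60
ages = np.arange(n_ages)
natural_survival = 0.85            # annual natural survival (hypothetical)
recruits = 500.0                   # recruits entering the youngest age each year
age_at_maturity = 13
M = -np.log(natural_survival)      # instantaneous natural mortality

def project(min_harvest_age, max_harvest_age, F):
    """Project abundance-at-age; return near-equilibrium annual harvest and adults."""
    n = np.zeros(n_ages)
    n[0] = recruits
    harvest = 0.0
    for _ in range(years):
        vulnerable = (ages >= min_harvest_age) & (ages <= max_harvest_age)
        f = np.where(vulnerable, F, 0.0)
        z = f + M
        deaths = n * (1.0 - np.exp(-z))
        harvest = float(np.sum(deaths * f / z))   # Baranov catch equation
        survivors = n * np.exp(-z)
        n[1:] = survivors[:-1]
        n[0] = recruits
    return harvest, float(np.sum(n[age_at_maturity:]))

scenarios = {
    "harvest only old fish (minimum/trophy-type limit)": (15, n_ages - 1, 0.2),
    "harvest only young fish (maximum-size-type limit)": (5, 12, 0.2),
}
for label, (lo_age, hi_age, F) in scenarios.items():
    h, adults = project(lo_age, hi_age, F)
    print(f"{label}: harvest ≈ {h:.0f} fish/yr, mature adults ≈ {adults:.0f}")
```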
System Identification Applied to Dynamic CFD Simulation and Wind Tunnel Data
NASA Technical Reports Server (NTRS)
Murphy, Patrick C.; Klein, Vladislav; Frink, Neal T.; Vicroy, Dan D.
2011-01-01
Demanding aerodynamic modeling requirements for military and civilian aircraft have provided impetus for researchers to improve computational and experimental techniques. Model validation is a key component of these research endeavors, so this study is an initial effort to extend conventional time history comparisons by comparing model parameter estimates and their standard errors using system identification methods. An aerodynamic model of an aircraft performing one-degree-of-freedom roll oscillatory motion about its body axes is developed. The model includes linear aerodynamics and deficiency function parameters characterizing an unsteady effect. For estimation of the unknown parameters, two techniques, harmonic analysis and two-step linear regression, were applied to roll-oscillatory wind tunnel data and to computational fluid dynamics (CFD) simulated data. The model used for this study is a highly swept wing unmanned aerial combat vehicle. Differences in response prediction, parameter estimates, and standard errors are compared and discussed.
Dunham, Kylee; Grand, James B.
2016-01-01
We examined the effects of complexity and priors on the accuracy of models used to estimate ecological and observational processes, and to make predictions regarding population size and structure. State-space models are useful for estimating complex, unobservable population processes and making predictions about future populations based on limited data. To better understand the utility of state space models in evaluating population dynamics, we used them in a Bayesian framework and compared the accuracy of models with differing complexity, with and without informative priors using sequential importance sampling/resampling (SISR). Count data were simulated for 25 years using known parameters and observation process for each model. We used kernel smoothing to reduce the effect of particle depletion, which is common when estimating both states and parameters with SISR. Models using informative priors estimated parameter values and population size with greater accuracy than their non-informative counterparts. While the estimates of population size and trend did not suffer greatly in models using non-informative priors, the algorithm was unable to accurately estimate demographic parameters. This model framework provides reasonable estimates of population size when little to no information is available; however, when information on some vital rates is available, SISR can be used to obtain more precise estimates of population size and process. Incorporating model complexity such as that required by structured populations with stage-specific vital rates affects precision and accuracy when estimating latent population variables and predicting population dynamics. These results are important to consider when designing monitoring programs and conservation efforts requiring management of specific population segments.
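For orientation, the sketch below is a minimal sequential importance sampling/resampling (SISR) filter for a simple population state-space model: log-abundance follows a random walk with drift, and counts are observed with lognormal error. All parameter values and the simulated data are hypothetical, parameters are treated as known, and the kernel-smoothed joint estimation of states and parameters used in the study is not shown.

```python
import numpy as np

rng = np.random.default_rng(1)

years, drift, process_sd, obs_sd = 25, 0.03, 0.05, 0.15
log_n_true = np.cumsum(np.concatenate(([np.log(500.0)],
                                       rng.normal(drift, process_sd, years - 1))))
counts = np.exp(log_n_true + rng.normal(0.0, obs_sd, years))   # observed counts

n_particles = 5000
particles = rng.normal(np.log(counts[0]), 0.5, n_particles)    # initial log-abundance
filtered = []

for y in range(years):
    if y > 0:
        particles = particles + rng.normal(drift, process_sd, n_particles)  # propagate
    # importance weights: likelihood of the observed count under lognormal error
    log_w = -0.5 * ((np.log(counts[y]) - particles) / obs_sd) ** 2
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    filtered.append(float(np.sum(w * np.exp(particles))))      # filtered abundance
    particles = particles[rng.choice(n_particles, n_particles, p=w)]        # resample

print(f"true final abundance ≈ {np.exp(log_n_true[-1]):.0f}, "
      f"filtered estimate ≈ {filtered[-1]:.0f}")
```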
Space shuttle propulsion estimation development verification, volume 1
NASA Technical Reports Server (NTRS)
Rogers, Robert M.
1989-01-01
The results of the Propulsion Estimation Development Verification are summarized. A computer program developed under a previous contract (NAS8-35324) was modified to include improved models for the Solid Rocket Booster (SRB) internal ballistics, the Space Shuttle Main Engine (SSME) power coefficient model, the vehicle dynamics using quaternions, and an improved Kalman filter algorithm based on the U-D factorized algorithm. As additional output, the estimated propulsion performance for each device is computed with the associated 1-sigma bounds. The outputs of the estimation program are provided in graphical plots. An additional effort was expended to examine the use of the estimation approach to evaluate single-engine test data. In addition to the propulsion estimation program PFILTER, a program was developed to produce a best estimate of trajectory (BET). The program LFILTER also uses the U-D factorized form of the Kalman filter, as in the propulsion estimation program PFILTER. The necessary definitions and equations explaining the Kalman filtering approach for the PFILTER program, the dynamics and measurement models used for this application, the program description, and program operation are presented.
Levine, Andrew J; Martin, Eileen; Sacktor, Ned; Munro, Cynthia; Becker, James
2017-06-01
Prevalence estimates of HIV-associated neurocognitive disorders (HAND) may be inflated. Estimates are determined via cohort studies in which participants may apply suboptimal effort on neurocognitive testing, thereby inflating estimates. Additionally, fluctuating HAND severity over time may be related to inconsistent effort. To address these hypotheses, we characterized effort in the Multicenter AIDS Cohort Study. After neurocognitive testing, 935 participants (525 HIV- and 410 HIV+) completed the visual analog effort scale (VAES), rating their effort from 0% to 100%. Those with <100% then indicated the reason(s) for suboptimal effort. K-means cluster analysis established 3 groups: high (mean = 97%), moderate (79%), and low effort (51%). Rates of HAND and other characteristics were compared between the groups. Linear regression examined the predictors of VAES score. Data from 57 participants who completed the VAES at 2 visits were analyzed to characterize the longitudinal relationship between effort and HAND severity. Fifty-two percent of participants reported suboptimal effort (<100%), with no difference between serostatus groups. Common reasons included "tired" (43%) and "distracted" (36%). The lowest effort group had greater asymptomatic neurocognitive impairment and minor neurocognitive disorder diagnosis (25% and 33%) as compared with the moderate (23% and 15%) and the high (12% and 9%) effort groups. Predictors of suboptimal effort were self-reported memory impairment, African American race, and cocaine use. Change in effort between baseline and follow-up correlated with change in HAND severity. Suboptimal effort seems to inflate estimated HAND prevalence and explain fluctuation of severity over time. A simple modification of study protocols to optimize effort is indicated by the results.
Two Mathematical Models of Nonlinear Vibrations
NASA Technical Reports Server (NTRS)
Brugarolas, Paul; Bayard, David; Spanos, John; Breckenridge, William
2007-01-01
Two innovative mathematical models of nonlinear vibrations, and methods of applying them, have been conceived as byproducts of an effort to develop a Kalman filter for highly precise estimation of bending motions of a large truss structure deployed in outer space from a space-shuttle payload bay. These models are also applicable to modeling and analysis of vibrations in other engineering disciplines, on Earth as well as in outer space.
NASA Technical Reports Server (NTRS)
Barghouty, A. F.
2014-01-01
Accurate estimates of electron-capture cross sections at energies relevant to the modeling of the transport, acceleration, and interaction of energetic neutral atoms (ENA) in space (approximately a few MeV per nucleon), and especially for multi-electron ions, must rely on a detailed, but computationally expensive, quantum-mechanical description of the collision process. Kuang's semi-classical approach is an elegant and efficient way to arrive at these estimates. Motivated by ENA modeling efforts for space applications, we shall briefly present this approach along with sample applications and report on current progress.
Predicting Critical Power in Elite Cyclists: Questioning the Validity of the 3-Minute All-Out Test.
Bartram, Jason C; Thewlis, Dominic; Martin, David T; Norton, Kevin I
2017-07-01
New applications of the critical-power concept, such as the modeling of intermittent-work capabilities, are exciting prospects for elite cycling. However, accurate calculation of the required parameters is traditionally time invasive and somewhat impractical. An alternative single-test protocol (3-min all-out) has recently been proposed, but validation in an elite population is lacking. The traditional approach for parameter establishment, but with fewer tests, could also prove an acceptable compromise. Six senior Australian endurance track-cycling representatives completed 6 efforts to exhaustion on 2 separate days over a 3-wk period. These included 1-, 4-, 6-, 8-, and 10-min self-paced efforts, plus the 3-min all-out protocol. Traditional work-vs-time calculations of CP and anaerobic energy contribution (W') using the 5 self-paced efforts were compared with calculations from the 3-min all-out protocol. The impact of using just 2 or 3 self-paced efforts for traditional CP and W' estimation was also explored using thresholds of agreement (8 W, 2.0 kJ, respectively). CP estimated from the 3-min all-out approach was significantly higher than from the traditional approach (402 ± 33, 351 ± 27 W, P < .001), while W' was lower (15.5 ± 3.0, 24.3 ± 4.0 kJ, P = .02). Five different combinations of 2 or 3 self-paced efforts led to CP estimates within the threshold of agreement, with only 1 combination deemed accurate for W'. In elite cyclists the 3-min all-out approach is not suitable to estimate CP when compared with the traditional method. However, reducing the number of tests used in the traditional method lessens testing burden while maintaining appropriate parameter accuracy.
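The traditional work-time calculation referenced above reduces to a linear regression: total work from each exhaustive effort is regressed on time to exhaustion, the slope is CP, and the intercept is W'. Durations and powers below are hypothetical, not the athletes' data.

```python
import numpy as np

# Traditional critical-power fit: work = W' + CP * time.
durations_s = np.array([60.0, 240.0, 360.0, 480.0, 600.0])
mean_power_w = np.array([650.0, 430.0, 405.0, 392.0, 385.0])
work_j = mean_power_w * durations_s

cp, w_prime = np.polyfit(durations_s, work_j, 1)  # slope = CP (W), intercept = W' (J)
print(f"CP ≈ {cp:.0f} W, W' ≈ {w_prime / 1000:.1f} kJ")
```

Dropping individual efforts from `durations_s` and refitting gives a quick sense of how parameter estimates shift with fewer tests, which is the trade-off examined in the study.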
Improving size estimates of open animal populations by incorporating information on age
Manly, Bryan F.J.; McDonald, Trent L.; Amstrup, Steven C.; Regehr, Eric V.
2003-01-01
Around the world, a great deal of effort is expended each year to estimate the sizes of wild animal populations. Unfortunately, population size has proven to be one of the most intractable parameters to estimate. The capture-recapture estimation models most commonly used (of the Jolly-Seber type) are complicated and require numerous, sometimes questionable, assumptions. The derived estimates usually have large variances and lack consistency over time. In capture–recapture studies of long-lived animals, the ages of captured animals can often be determined with great accuracy and relative ease. We show how to incorporate age information into size estimates for open populations, where the size changes through births, deaths, immigration, and emigration. The proposed method allows more precise estimates of population size than the usual models, and it can provide these estimates from two sample occasions rather than the three usually required. Moreover, this method does not require specialized programs for capture-recapture data; researchers can derive their estimates using the logistic regression module in any standard statistical package.
NASA Technical Reports Server (NTRS)
Wallace, Dolores R.
2003-01-01
In FY01 we learned that hardware reliability models need substantial changes to account for differences in software, thus making software reliability measurements more effective, accurate, and easier to apply. These reliability models are generally based on familiar distributions or parametric methods. An obvious question is "What new statistical and probability models can be developed using non-parametric and distribution-free methods instead of the traditional parametric method?" Two approaches to software reliability engineering appear somewhat promising. The first study, begun in FY01, is based in hardware reliability, a very well-established science that has many aspects that can be applied to software. This research effort has investigated mathematical aspects of hardware reliability and has identified those applicable to software. Currently the research effort is applying and testing these approaches to software reliability measurement. These parametric models require much project data that may be difficult to apply and interpret. Projects at GSFC are often complex in both technology and schedules. Assessing and estimating reliability of the final system is extremely difficult when various subsystems are tested and completed long before others. Parametric and distribution-free techniques may offer a new and accurate way of modeling failure time and other project data to provide earlier and more accurate estimates of system reliability.
Martin, Thomas E.; Riordan, Margaret M.; Repin, Rimi; Mouton, James C.; Blake, William M.
2017-01-01
Aim Adult survival is central to theories explaining latitudinal gradients in life history strategies. Life history theory predicts higher adult survival in tropical than north temperate regions given lower fecundity and parental effort. Early studies were consistent with this prediction, but standard-effort netting studies in recent decades suggested that apparent survival rates in temperate and tropical regions strongly overlap. Such results do not fit with life history theory. Targeted marking and resighting of breeding adults yielded higher survival estimates in the tropics, but this approach is thought to overestimate survival because it does not sample social and age classes with lower survival. We compared the effect of field methods on tropical survival estimates and their relationships with life history traits. Location Sabah, Malaysian Borneo. Time period 2008–2016. Major taxon Passeriformes. Methods We used standard-effort netting and resighted individuals of all social and age classes of 18 tropical songbird species over 8 years. We compared apparent survival estimates between these two field methods with differing analytical approaches. Results Estimated detection and apparent survival probabilities from standard-effort netting were similar to those from other tropical studies that used standard-effort netting. Resighting data verified that a high proportion of individuals that were never recaptured in standard-effort netting remained in the study area, and many were observed breeding. Across all analytical approaches, addition of resighting yielded substantially higher survival estimates than did standard-effort netting alone. These apparent survival estimates were higher than for temperate zone species, consistent with latitudinal differences in life histories. Moreover, apparent survival estimates from addition of resighting, but not from standard-effort netting alone, were correlated with parental effort as measured by egg temperature across species. Main conclusions Inclusion of resighting showed that standard-effort netting alone can negatively bias apparent survival estimates and obscure life history relationships across latitudes and among tropical species.
NASA Technical Reports Server (NTRS)
Kypuros, Javier A.; Colson, Rodrigo; Munoz, Afredo
2004-01-01
This paper describes efforts conducted to improve dynamic temperature estimations of a turbine tip clearance system to facilitate design of a generalized tip clearance controller. This work builds upon previously conducted and presented research and focuses primarily on improving dynamic temperature estimations of the primary components affecting tip clearance (i.e., the rotor, blades, and casing/shroud). The temperature profiles estimated by the previous model iteration, specifically for the rotor and blades, were found to be inaccurate and, more importantly, insufficient to facilitate controller design. Some assumptions made to facilitate the previous results were not valid, and thus improvements are presented here to better match the physical reality. As will be shown, the improved temperature sub-models match a commercially validated model and are sufficiently simplified to aid in controller design.
Palazoğlu, T K; Gökmen, V
2008-04-01
In this study, a numerical model was developed to simulate frying of potato strips and estimate acrylamide levels in French fries. Heat and mass transfer parameters determined during frying of potato strips and the formation and degradation kinetic parameters of acrylamide obtained with a sugar-asparagine model system were incorporated within the model. The effect of reducing sugar content (0.3 to 2.15 g/100 g dry matter), strip thickness (8.5 x 8.5 mm and 10 x 10 mm), and frying time (3, 4, 5, and 6 min) and temperature (150, 170, and 190 degrees C) on resultant acrylamide level in French fries was investigated both numerically and experimentally. The model appeared to closely estimate the acrylamide contents, and thereby may potentially save considerable time, money, and effort during the stages of process design and optimization.
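The kinetic component of such a model can be pictured as a first-order formation-and-degradation scheme integrated over frying time. The sketch below uses a generic scheme with hypothetical rate constants at an assumed constant crust temperature; it is not the paper's fitted sugar-asparagine kinetics or its coupled heat and mass transfer model.

```python
import numpy as np

# Generic first-order scheme: precursor -> acrylamide -> degradation products.
k_form, k_deg = 0.015, 0.008      # 1/s at an assumed constant crust temperature
precursor, acrylamide = 100.0, 0.0  # arbitrary units
dt, t_end = 0.5, 360.0            # 6 min of frying, explicit Euler steps

for _ in np.arange(0.0, t_end, dt):
    formed = k_form * precursor * dt
    degraded = k_deg * acrylamide * dt
    precursor -= formed
    acrylamide += formed - degraded

print(f"relative acrylamide level after {t_end / 60:.0f} min ≈ {acrylamide:.1f}")
```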
Identifying Aberrant Responding: Use of Multiple Measures
ERIC Educational Resources Information Center
Steinkamp, Susan Christa
2017-01-01
For test scores that rely on the accurate estimation of ability via an IRT model, their use and interpretation is dependent upon the assumption that the IRT model fits the data. Examinees who do not put forth full effort in answering test questions, have prior knowledge of test content, or do not approach a test with the intent of answering…
2013-09-30
…the Study of the Environmental Arctic Change (SEARCH) Sea Ice Outlook (SIO) effort. The SIO is an international effort to provide a community-wide summary of the expected September Arctic sea ice minimum. Monthly reports released throughout the summer synthesize community estimates of the current state and expected minimum of sea ice. Along with the backbone components of this system (NAVGEM/HYCOM/CICE), other data models have been used to…
NASA Technical Reports Server (NTRS)
Woodward, W. A.; Gray, H. L.
1983-01-01
Efforts in support of the development of multicrop production monitoring capability are reported. In particular, segment level proportion estimation techniques based upon a mixture model were investigated. Efforts have dealt primarily with evaluation of current techniques and development of alternative ones. A comparison of techniques is provided on both simulated and LANDSAT data along with an analysis of the quality of profile variables obtained from LANDSAT data.
Optimizing larval assessment to support sea lamprey control in the Great Lakes
Hansen, Michael J.; Adams, Jean V.; Cuddy, Douglas W.; Richards, Jessica M.; Fodale, Michael F.; Larson, Geraldine L.; Ollila, Dale J.; Slade, Jeffrey W.; Steeves, Todd B.; Young, Robert J.; Zerrenner, Adam
2003-01-01
Elements of the larval sea lamprey (Petromyzon marinus) assessment program that most strongly influence the chemical treatment program were analyzed, including selection of streams for larval surveys, allocation of sampling effort among stream reaches, allocation of sampling effort among habitat types, estimation of daily growth rates, and estimation of metamorphosis rates, to determine how uncertainty in each element influenced the stream selection program. First, the stream selection model based on current larval assessment sampling protocol significantly underestimated transforming sea lamprey abundance, transforming sea lampreys killed, and marginal costs per sea lamprey killed, compared to a protocol that included more years of data (especially for large streams). Second, larval density in streams varied significantly with Type-I habitat area, but not with total area or reach length. Third, the ratio of larval density between Type-I and Type-II habitat varied significantly among streams, and the optimal allocation of sampling effort varied with the proportion of habitat types and the variability of larval density within each habitat. Fourth, mean length varied significantly among streams and years. Last, size at metamorphosis varied more among years than within or among regions, and metamorphosis varied significantly among streams within regions. Study results indicate that: (1) the stream selection model should be used to identify streams with potentially high residual populations of larval sea lampreys; (2) larval sampling in Type-II habitat should be initiated in all streams by increasing sampling in Type-II habitat to 50% of the sampling effort in Type-I habitat; and (3) methods should be investigated to reduce uncertainty in estimates of sea lamprey production, with emphasis on those that reduce the uncertainty associated with larval length at the end of the growing season and those used to predict metamorphosis.
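One standard way to split sampling effort among habitat strata in proportion to stratum size and within-stratum variability of larval density is Neyman allocation, sketched below with hypothetical areas and standard deviations; this is a generic illustration, not the allocation rule adopted by the sea lamprey program.

```python
import numpy as np

# Neyman allocation: n_h proportional to (stratum size) * (within-stratum SD).
total_samples = 60
habitat = ["Type-I", "Type-II"]
area_m2 = np.array([1200.0, 4800.0])   # stratum sizes (hypothetical)
sd_density = np.array([2.5, 0.8])      # SD of larvae per m^2 (hypothetical)

weight = area_m2 * sd_density
n_h = np.round(total_samples * weight / weight.sum()).astype(int)
for h, n in zip(habitat, n_h):
    print(f"{h}: {n} samples")
```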
Fuel-cycle emissions for conventional and alternative fuel vehicles : an assessment of air toxics
DOT National Transportation Integrated Search
2000-08-01
This report provides information on recent efforts to use the Greenhouse Gases, Regulated Emissions, and Energy Use in Transportation (GREET) fuel-cycle model to estimate air toxics emissions. GREET, developed at Argonne National Laboratory, currentl...
Moving across scales: Challenges and opportunities in upscaling carbon fluxes
NASA Astrophysics Data System (ADS)
Naithani, K. J.
2016-12-01
Light use efficiency (LUE) type models are commonly used to upscale terrestrial C fluxes and estimate regional and global C budgets. Model parameters are often estimated for each land cover type (LCT) using flux observations from one or more eddy covariance towers, and then spatially extrapolated by integrating land cover, meteorological, and remotely sensed data. Decisions regarding the type of input data (spatial resolution of land cover data, spatial and temporal length of flux data), representation of landscape structure (land use vs. disturbance regime), and the type of modeling framework (common risk vs. hierarchical) all influence the estimates of CO2 fluxes and the associated uncertainties, but are rarely considered together. This work presents a synthesis of past and present efforts for upscaling CO2 fluxes and associated uncertainties in the ChEAS (Chequamegon Ecosystem Atmosphere Study) region in northern Wisconsin and the Upper Peninsula of Michigan. This work highlights two key future research needs. First, the characterization of uncertainties due to all of the abovementioned factors reflects only a (hopefully relevant) subset of the overall uncertainties. Second, interactions among these factors are likely critical, but are poorly represented by the tower network at landscape scales. Yet, results indicate significant spatial and temporal heterogeneity of uncertainty in CO2 fluxes, which can inform carbon management efforts and prioritize data needs.
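For a single grid cell, the LUE formulation typically computes gross primary productivity as a maximum light-use efficiency assigned per land cover type, down-regulated by temperature and moisture scalars and multiplied by absorbed PAR. The sketch below uses hypothetical efficiencies, scalar ramps, and inputs, not the ChEAS parameterization.

```python
import numpy as np

# GPP = eps_max(LCT) * f(Tmin) * f(VPD) * fAPAR * PAR  (hypothetical values)
eps_max = {"deciduous_forest": 1.2, "conifer_forest": 1.0, "wetland": 0.8}  # gC/MJ

def ramp(x, x0, x1):
    """Linear 0-1 scalar between x0 and x1."""
    return float(np.clip((x - x0) / (x1 - x0), 0.0, 1.0))

def gpp(lct, par_mj_m2_day, fapar, tmin_c, vpd_pa):
    t_scalar = ramp(tmin_c, -8.0, 10.0)           # cold-temperature down-regulation
    w_scalar = 1.0 - ramp(vpd_pa, 900.0, 4000.0)  # vapor pressure deficit down-regulation
    return eps_max[lct] * t_scalar * w_scalar * fapar * par_mj_m2_day

print(f"GPP ≈ {gpp('deciduous_forest', 8.0, 0.7, 12.0, 1200.0):.2f} gC m-2 day-1")
```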
Simultaneous Estimation of Electromechanical Modes and Forced Oscillations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Follum, Jim; Pierre, John W.; Martin, Russell
Over the past several years, great strides have been made in the effort to monitor the small-signal stability of power systems. These efforts focus on estimating electromechanical modes, which are a property of the system that dictate how generators in different parts of the system exchange energy. Though the algorithms designed for this task are powerful and important for reliable operation of the power system, they are susceptible to severe bias when forced oscillations are present in the system. Forced oscillations are fundamentally different from electromechanical oscillations in that they are the result of a rogue input to the system, rather than a property of the system itself. To address the presence of forced oscillations, the frequently used AutoRegressive Moving Average (ARMA) model is adapted to include sinusoidal inputs, resulting in the AutoRegressive Moving Average plus Sinusoid (ARMA+S) model. From this model, a new Two-Stage Least Squares algorithm is derived to incorporate the forced oscillations, thereby enabling the simultaneous estimation of the electromechanical modes and the amplitude and phase of the forced oscillations. The method is validated using simulated power system data as well as data obtained from the western North American power system (wNAPS) and Eastern Interconnection (EI).
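The core idea can be illustrated with a simplified stand-in: augment an autoregressive model with sine/cosine regressors at a known forcing frequency and solve by ordinary least squares, so the mode and the forcing terms are estimated together. The sketch below is an AR+S toy on synthetic data with hypothetical mode and forcing values; it is not the paper's ARMA+S two-stage algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)
fs, n = 30.0, 6000                       # sample rate (Hz), number of samples
t = np.arange(n) / fs

# Synthetic signal: lightly damped 0.8 Hz mode driven by noise, plus a 0.65 Hz forcing.
f_mode, zeta = 0.8, 0.05
wn = 2 * np.pi * f_mode
r, th = np.exp(-zeta * wn / fs), wn * np.sqrt(1 - zeta**2) / fs
a1, a2 = 2 * r * np.cos(th), -r**2
y = np.zeros(n)
e = rng.normal(size=n)
for k in range(2, n):
    y[k] = a1 * y[k - 1] + a2 * y[k - 2] + e[k]
f_forced = 0.65
y += 1.5 * np.sin(2 * np.pi * f_forced * t + 0.4)

# Least squares on lagged outputs plus sinusoidal regressors at the forcing frequency.
X = np.column_stack([y[1:-1], y[:-2],
                     np.sin(2 * np.pi * f_forced * t[2:]),
                     np.cos(2 * np.pi * f_forced * t[2:])])
coef, *_ = np.linalg.lstsq(X, y[2:], rcond=None)

# Convert the estimated AR(2) pole back to modal frequency and damping ratio.
pole = np.roots([1.0, -coef[0], -coef[1]])[0]
s = np.log(pole) * fs
print(f"estimated mode: {abs(s.imag) / (2 * np.pi):.2f} Hz, "
      f"damping ratio {-s.real / abs(s):.3f}")
# The sin/cos coefficients absorb the (AR-filtered) forcing rather than its raw amplitude.
print(f"forcing regressor amplitude: {np.hypot(coef[2], coef[3]):.2f}")
```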
Bayesian characterization of uncertainty in species interaction strengths.
Wolf, Christopher; Novak, Mark; Gitelman, Alix I
2017-06-01
Considerable effort has been devoted to the estimation of species interaction strengths. This effort has focused primarily on statistical significance testing and obtaining point estimates of parameters that contribute to interaction strength magnitudes, leaving the characterization of uncertainty associated with those estimates unconsidered. We consider a means of characterizing the uncertainty of a generalist predator's interaction strengths by formulating an observational method for estimating a predator's prey-specific per capita attack rates as a Bayesian statistical model. This formulation permits the explicit incorporation of multiple sources of uncertainty. A key insight is the informative nature of several so-called non-informative priors that have been used in modeling the sparse data typical of predator feeding surveys. We introduce to ecology a new neutral prior and provide evidence for its superior performance. We use a case study to consider the attack rates in a New Zealand intertidal whelk predator, and we illustrate not only that Bayesian point estimates can be made to correspond with those obtained by frequentist approaches, but also that estimation uncertainty as described by 95% intervals is more useful and biologically realistic using the Bayesian method. In particular, unlike in bootstrap confidence intervals, the lower bounds of the Bayesian posterior intervals for attack rates do not include zero when a predator-prey interaction is in fact observed. We conclude that the Bayesian framework provides a straightforward, probabilistic characterization of interaction strength uncertainty, enabling future considerations of both the deterministic and stochastic drivers of interaction strength and their impact on food webs.
2010-04-30
...previous and current complex SW development efforts, the program offices will have a source of objective lessons learned and metrics that can be applied...
NASA Technical Reports Server (NTRS)
Sanchez, Christopher M.
2011-01-01
NASA White Sands Test Facility (WSTF) is leading an evaluation effort in advanced destructive and nondestructive testing of composite pressure vessels and structures. WSTF is using progressive finite element analysis methods for test design and for confirmation of composite pressure vessel performance. Using composite finite element analysis models and failure theories tested in the World-Wide Failure Exercise, WSTF is able to estimate the static strength of composite pressure vessels. Additionally, test and evaluation on composites that have been impact damaged is in progress so that models can be developed to estimate damage tolerance and the degradation in static strength.
ARM-Led Improvements in Aerosols in Climate and Climate Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghan, Steven J.; Penner, Joyce E.
2016-07-25
The DOE ARM program has played a foundational role in efforts to quantify aerosol effects on climate, beginning with the early back-of-the-envelope estimates of direct radiative forcing by anthropogenic sulfate and biomass burning aerosol (Penner et al., 1994). In this chapter we review the role that ARM has played in subsequent detailed estimates based on physically-based representations of aerosols in climate models. The focus is on quantifying the direct and indirect effects of anthropogenic aerosol on the planetary energy balance. Only recently have other DOE programs applied the aerosol modeling capability to simulate the climate response to the radiative forcing.
How many species of flowering plants are there?
Joppa, Lucas N.; Roberts, David L.; Pimm, Stuart L.
2011-01-01
We estimate the probable number of flowering plants. First, we apply a model that explicitly incorporates taxonomic effort over time to estimate the number of as-yet-unknown species. Second, we ask taxonomic experts their opinions on how many species are likely to be missing, on a family-by-family basis. The results are broadly comparable. We show that the current number of species should grow by between 10 and 20 per cent. There are, however, interesting discrepancies between expert and model estimates for some families, suggesting that our model does not always completely capture patterns of taxonomic activity. The as-yet-unknown species are probably similar to those taxonomists have described recently—overwhelmingly rare and local, and disproportionately in biodiversity hotspots, where there are high levels of habitat destruction. PMID:20610425
Kowalski, Amanda
2016-01-02
Efforts to control medical care costs depend critically on how individuals respond to prices. I estimate the price elasticity of expenditure on medical care using a censored quantile instrumental variable (CQIV) estimator. CQIV allows estimates to vary across the conditional expenditure distribution, relaxes traditional censored model assumptions, and addresses endogeneity with an instrumental variable. My instrumental variable strategy uses a family member's injury to induce variation in an individual's own price. Across the conditional deciles of the expenditure distribution, I find elasticities that vary from -0.76 to -1.49, which are an order of magnitude larger than previous estimates.
Lessons learned in deploying software estimation technology and tools
NASA Technical Reports Server (NTRS)
Panlilio-Yap, Nikki; Ho, Danny
1994-01-01
Developing a software product involves estimating various project parameters. This is typically done in the planning stages of the project when there is much uncertainty and very little information. Coming up with accurate estimates of effort, cost, schedule, and reliability is a critical problem faced by all software project managers. The use of estimation models and commercially available tools in conjunction with the best bottom-up estimates of software-development experts enhances the ability of a product development group to derive reasonable estimates of important project parameters. This paper describes the experience of the IBM Software Solutions (SWS) Toronto Laboratory in selecting software estimation models and tools and deploying their use to the laboratory's product development groups. It introduces the SLIM and COSTAR products, the software estimation tools selected for deployment to the product areas, and discusses the rationale for their selection. The paper also describes the mechanisms used for technology injection and tool deployment, and concludes with a discussion of important lessons learned in the technology and tool insertion process.
Statistical aspects of carbon fiber risk assessment modeling. [fire accidents involving aircraft
NASA Technical Reports Server (NTRS)
Gross, D.; Miller, D. R.; Soland, R. M.
1980-01-01
The probabilistic and statistical aspects of the carbon fiber risk assessment modeling of fire accidents involving commercial aircraft are examined. Three major sources of uncertainty in the modeling effort are identified. These are: (1) imprecise knowledge in establishing the model; (2) parameter estimation; and (3) Monte Carlo sampling error. All three sources of uncertainty are treated and statistical procedures are utilized and/or developed to control them wherever possible.
NASA Astrophysics Data System (ADS)
Kalyanapu, A. J.; Dullo, T. T.; Gangrade, S.; Kao, S. C.; Marshall, R.; Islam, S. R.; Ghafoor, S. K.
2017-12-01
Hurricane Harvey, which made landfall in southern Texas this August, is one of the most destructive hurricanes of the 2017 hurricane season. During its active period, many areas in the coastal Texas region received more than 40 inches of rain. This downpour caused significant flooding, resulting in about 77 casualties, displacing more than 30,000 people, inundating hundreds of thousands of homes, and is currently estimated to have caused more than $70 billion in direct damage. One of the significantly affected areas is Harris County, where the city of Houston, TX is located. Covering over two HUC-8 drainage basins (~2,702 mi2), this county experienced more than 80% of its annual average rainfall during this event. This study presents an effort to reconstruct flooding caused by extreme rainfall due to Hurricane Harvey in Harris County, Texas. This computationally intensive task was performed at a 30-m spatial resolution using a rapid flood model called Flood2D-GPU, a graphics processing unit (GPU) accelerated model, on Oak Ridge National Laboratory's (ORNL) Titan Supercomputer. For this task, the hourly rainfall estimates from the National Center for Environmental Prediction Stage IV Quantitative Precipitation Estimate were fed into the Variable Infiltration Capacity (VIC) hydrologic model and Routing Application for Parallel computation of Discharge (RAPID) routing model to estimate flow hydrographs at 69 locations for Flood2D-GPU simulation. Preliminary results of the simulation, including flood inundation extents, maps of flood depths, and inundation duration, will be presented. Future efforts will focus on calibrating and validating the simulation results and assessing the flood damage for better understanding the impacts made by Hurricane Harvey.
Preliminary development of digital signal processing in microwave radiometers
NASA Technical Reports Server (NTRS)
Stanley, W. D.
1980-01-01
Topics covered involve a number of closely related tasks including: the development of several control loop and dynamic noise model computer programs for simulating microwave radiometer measurements; computer modeling of an existing stepped frequency radiometer in an effort to determine its optimum operational characteristics; investigation of the classical second order analog control loop to determine its ability to reduce the estimation error in a microwave radiometer; investigation of several digital signal processing unit designs; initiation of efforts to develop required hardware and software for implementation of the digital signal processing unit; and investigation of the general characteristics and peculiarities of digital processing noiselike microwave radiometer signals.
Polar bears in the Beaufort Sea: A 30-year mark-recapture case history
Amstrup, Steven C.; McDonald, T.L.; Stirling, I.
2001-01-01
Knowledge of population size and trend is necessary to manage anthropogenic risks to polar bears (Ursus maritimus). Despite capturing over 1,025 females between 1967 and 1998, previously calculated estimates of the size of the southern Beaufort Sea (SBS) population have been unreliable. We improved estimates of numbers of polar bears by modeling heterogeneity in capture probability with covariates. Important covariates referred to the year of the study, age of the bear, capture effort, and geographic location. Our choice of best approximating model was based on the inverse relationship between variance in parameter estimates and likelihood of the fit and suggested a growth from ≈ 500 to over 1,000 females during this study. The mean coefficient of variation on estimates for the last decade of the study was 0.16—the smallest yet derived. A similar model selection approach is recommended for other projects where a best model is not identified by likelihood criteria alone.
NASA Astrophysics Data System (ADS)
Xiong, Yan; Reichenbach, Stephen E.
1999-01-01
Understanding of hand-written Chinese characters is at such a primitive stage that models include some assumptions about hand-written Chinese characters that are simply false. So Maximum Likelihood Estimation (MLE) may not be an optimal method for hand-written Chinese character recognition. This concern motivates the research effort to consider alternative criteria. Maximum Mutual Information Estimation (MMIE) is an alternative method for parameter estimation that does not derive its rationale from presumed model correctness, but instead examines the pattern-modeling problem in automatic recognition systems from an information-theoretic point of view. The objective of MMIE is to find a set of parameters such that the resultant model allows the system to derive from the observed data as much information as possible about the class. We consider MMIE for recognition of hand-written Chinese characters using a simplified hidden Markov random field. MMIE provides improved performance over MLE in this application.
NASA Astrophysics Data System (ADS)
Zheng, Yuejiu; Ouyang, Minggao; Han, Xuebing; Lu, Languang; Li, Jianqiu
2018-02-01
State of charge (SOC) estimation is generally acknowledged as one of the most important functions in battery management systems for lithium-ion batteries in new energy vehicles. Though every effort is made for various online SOC estimation methods to reliably increase the estimation accuracy as much as possible within the limited on-chip resources, little literature discusses the error sources for those SOC estimation methods. This paper first reviews the commonly studied SOC estimation methods from a conventional classification. A novel perspective focusing on the error analysis of the SOC estimation methods is proposed. SOC estimation methods are analyzed from the views of the measured values, models, algorithms and state parameters. Subsequently, error flow charts are proposed to analyze the error sources from the signal measurement to the models and algorithms for the widely used online SOC estimation methods in new energy vehicles. Finally, with the consideration of the working conditions, choosing more reliable and applicable SOC estimation methods is discussed, and the future development of the promising online SOC estimation methods is suggested.
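One of the measurement-related error sources discussed above can be illustrated with a short sketch: a small current-sensor bias accumulating in ampere-hour (coulomb-counting) SOC estimation. All numbers below are assumed for illustration only.

```python
import numpy as np

# Illustrative sketch (assumed cell capacity, current, and bias): how a small
# current-sensor bias accumulates in coulomb-counting SOC estimation.
Q = 60 * 3600                      # cell capacity in coulombs (60 Ah), assumed
dt = 1.0                           # sampling period (s)
t = np.arange(0, 3600, dt)
i_true = 30 * np.ones_like(t)      # 30 A discharge
i_meas = i_true + 0.2              # 0.2 A sensor bias

soc_true = 0.9 - np.cumsum(i_true * dt) / Q
soc_est = 0.9 - np.cumsum(i_meas * dt) / Q
print("SOC error after 1 h: %.3f %%" % (100 * (soc_true[-1] - soc_est[-1])))
```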
Terminal Area Productivity Airport Wind Analysis and Chicago O'Hare Model Description
NASA Technical Reports Server (NTRS)
Hemm, Robert; Shapiro, Gerald
1998-01-01
This paper describes two results from a continuing effort to provide accurate cost-benefit analyses of the NASA Terminal Area Productivity (TAP) program technologies. Previous tasks have developed airport capacity and delay models and completed preliminary cost benefit estimates for TAP technologies at 10 U.S. airports. This task covers two improvements to the capacity and delay models. The first improvement is the completion of a detailed model set for the Chicago O'Hare (ORD) airport. Previous analyses used a more general model to estimate the benefits for ORD. This paper contains a description of the model details with results corresponding to current conditions. The second improvement is the development of specific wind speed and direction criteria for use in the delay models to predict when the Aircraft Vortex Spacing System (AVOSS) will allow use of reduced landing separations. This paper includes a description of the criteria and an estimate of AVOSS utility for 10 airports based on analysis of 35 years of weather data.
Modelling soil erosion at European scale: towards harmonization and reproducibility
NASA Astrophysics Data System (ADS)
Bosco, C.; de Rigo, D.; Dewitte, O.; Poesen, J.; Panagos, P.
2015-02-01
Soil erosion by water is one of the most widespread forms of soil degradation. The loss of soil as a result of erosion can lead to decline in organic matter and nutrient contents, breakdown of soil structure and reduction of the water-holding capacity. Measuring soil loss across the whole landscape is impractical and thus research is needed to improve methods of estimating soil erosion with computational modelling, upon which integrated assessment and mitigation strategies may be based. Despite these efforts, the predictive value of existing models is still limited, especially at regional and continental scale, because a systematic knowledge of local climatological and soil parameters is often unavailable. A new approach for modelling soil erosion at regional scale is proposed here. It is based on the joint use of low-data-demanding models and innovative techniques for better estimating model inputs. The proposed modelling architecture has at its basis the semantic array programming paradigm and a strong effort towards computational reproducibility. An extended version of the Revised Universal Soil Loss Equation (RUSLE) has been implemented, merging different empirical rainfall-erosivity equations within a climatic ensemble model and adding a new factor for a better consideration of soil stoniness within the model. Pan-European soil erosion rates by water have been estimated through the use of publicly available data sets and locally reliable empirical relationships. The accuracy of the results is corroborated by a visual plausibility check (63% of a random sample of grid cells are accurate, 83% at least moderately accurate, bootstrap p ≤ 0.05). A comparison with country-level statistics of pre-existing European soil erosion maps is also provided.
Devenish Nelson, Eleanor S.; Harris, Stephen; Soulsbury, Carl D.; Richards, Shane A.; Stephens, Philip A.
2010-01-01
Background Demographic models are widely used in conservation and management, and their parameterisation often relies on data collected for other purposes. When underlying data lack clear indications of associated uncertainty, modellers often fail to account for that uncertainty in model outputs, such as estimates of population growth. Methodology/Principal Findings We applied a likelihood approach to infer uncertainty retrospectively from point estimates of vital rates. Combining this with resampling techniques and projection modelling, we show that confidence intervals for population growth estimates are easy to derive. We used similar techniques to examine the effects of sample size on uncertainty. Our approach is illustrated using data on the red fox, Vulpes vulpes, a predator of ecological and cultural importance, and the most widespread extant terrestrial mammal. We show that uncertainty surrounding estimated population growth rates can be high, even for relatively well-studied populations. Halving that uncertainty typically requires a quadrupling of sampling effort. Conclusions/Significance Our results compel caution when comparing demographic trends between populations without accounting for uncertainty. Our methods will be widely applicable to demographic studies of many species. PMID:21049049
Adam Duarte,; Hatfield, Jeffrey; Todd M. Swannack,; Michael R. J. Forstner,; M. Clay Green,; Floyd W. Weckerly,
2015-01-01
Population viability analyses provide a quantitative approach that seeks to predict the possible future status of a species of interest under different scenarios and, therefore, can be important components of large-scale species’ conservation programs. We created a model and simulated range-wide population and breeding habitat dynamics for an endangered woodland warbler, the golden-cheeked warbler (Setophaga chrysoparia). Habitat-transition probabilities were estimated across the warbler's breeding range by combining National Land Cover Database imagery with multistate modeling. Using these estimates, along with recently published demographic estimates, we examined if the species can remain viable into the future given the current conditions. Lastly, we evaluated if protecting a greater amount of habitat would increase the number of warblers that can be supported in the future by systematically increasing the amount of protected habitat and comparing the estimated terminal carrying capacity at the end of 50 years of simulated habitat change. The estimated habitat-transition probabilities supported the hypothesis that habitat transitions are unidirectional, whereby habitat is more likely to diminish than regenerate. The model results indicated population viability could be achieved under current conditions, depending on dispersal. However, there is considerable uncertainty associated with the population projections due to parametric uncertainty. Model results suggested that increasing the amount of protected lands would have a substantial impact on terminal carrying capacities at the end of a 50-year simulation. Notably, this study identifies the need for collecting the data required to estimate demographic parameters in relation to changes in habitat metrics and population density in multiple regions, and highlights the importance of establishing a common definition of what constitutes protected habitat, what management goals are suitable within those protected areas, and a standard operating procedure to identify areas of priority for habitat conservation efforts. Therefore, we suggest future efforts focus on these aspects of golden-cheeked warbler conservation and ecology.
Rockwood, R Cotton; Calambokidis, John; Jahncke, Jaime
2017-01-01
Mortality from collisions with vessels is one of the main human causes of death for large whales. Ship strikes are rarely witnessed and the distribution of strike risk and estimates of mortality remain uncertain at best. We estimated ship strike mortality for blue, humpback and fin whales in U.S. West Coast waters using a novel application of a naval encounter model. Mortality estimates from the model were far higher than current minimum estimates derived from stranding records and are closer to extrapolations adjusted for detection probabilities of dead whales. Our most conservative model estimated mortality to be 7.8x, 2.0x and 2.7x the U.S. recommended limit for blue, humpback and fin whales, respectively, suggesting that death from vessel collisions may be a significant impediment to population growth and recovery. Comparing across the study area, the majority of strike mortality occurs in waters off California, from Bodega Bay south, and tends to be concentrated in a band approximately 24 Nm (44.5 km) offshore and in designated shipping lanes leading to and from major ports. While some mortality risk exists across nearly all West Coast waters, 74%, 82% and 65% of blue, humpback and fin whale mortality, respectively, occurs in just 10% of the study area, suggesting conservation efforts can be very effective if focused in these waters. Risk is highest in the shipping lanes off San Francisco and Long Beach, but only a fraction of total estimated mortality occurs in these proportionally small areas, making any conservation efforts exclusively within these areas insufficient to address overall strike mortality. We recommend combining shipping lane modifications and re-locations, ship speed reductions and creation of 'Areas to be Avoided' by vessels in ecologically important locations to address this significant source of whale mortality.
NASA Astrophysics Data System (ADS)
Lusiana, Evellin Dewi
2017-12-01
The parameters of a binary probit regression model are commonly estimated with the Maximum Likelihood Estimation (MLE) method. However, MLE has limitations if the binary data contain separation. Separation is the condition in which one or several independent variables exactly group the categories of the binary response. As a result, the MLE estimators fail to converge, so they cannot be used in modeling. One effort to resolve separation is to use Firth's approach instead. This research has two aims: first, to identify the chance of separation occurring in a binary probit regression model under the MLE method and Firth's approach; second, to compare the performance of the binary probit regression estimators obtained by the MLE method and Firth's approach using the RMSE criterion. Both are examined by simulation under different sample sizes. The results showed that the chance of separation occurring under the MLE method is higher than under Firth's approach for small sample sizes. For larger sample sizes, the probability decreased and was essentially identical between the MLE method and Firth's approach. Meanwhile, Firth's estimators have smaller RMSE than the MLE estimators, especially for smaller sample sizes; for larger sample sizes, the RMSEs are not much different. Overall, Firth's estimators outperformed the MLE estimator.
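A minimal sketch of the contrast described above, assuming simulated completely separated data and a hand-coded Firth-type (Jeffreys-prior) penalty for the probit likelihood; it is not the study's simulation design.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Sketch: probit MLE vs. a Firth-type penalized fit on completely separated data.
rng = np.random.default_rng(2)
n = 30
x = rng.normal(size=n)
y = (x > 0).astype(float)              # complete separation: x > 0 exactly predicts y = 1
X = np.column_stack([np.ones(n), x])

def neg_penalized_ll(beta, penalize=True):
    eta = X @ beta
    p = np.clip(norm.cdf(eta), 1e-10, 1 - 1e-10)
    ll = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    if penalize:                        # Firth-type penalty: + 0.5 * log|X' W X|
        w = norm.pdf(eta) ** 2 / (p * (1 - p))
        ll += 0.5 * np.linalg.slogdet(X.T @ (w[:, None] * X))[1]
    return -ll

mle = minimize(neg_penalized_ll, np.zeros(2), args=(False,), method="BFGS")
firth = minimize(neg_penalized_ll, np.zeros(2), args=(True,), method="BFGS")
print("MLE slope (unbounded under separation; optimizer returns a large value):", mle.x[1])
print("Firth-penalized slope (finite in this example):", firth.x[1])
```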
Tillman, Fred; Wiele, Stephen M.; Pool, Donald R.
2015-01-01
Population growth in the Verde Valley in Arizona has led to efforts to better understand water availability in the watershed. Evapotranspiration (ET) is a substantial component of the water budget and a critical factor in estimating groundwater recharge in the area. In this study, four estimates of ET are compared and discussed with applications to the Verde Valley. Higher potential ET (PET) rates from the soil-water balance (SWB) recharge model resulted in an average annual ET volume about 17% greater than for ET from the basin characteristics (BCM) recharge model. Annual BCM PET volume, however, was greater by about a factor of 2 or more than SWB actual ET (AET) estimates, which are used in the SWB model to estimate groundwater recharge. ET also was estimated using a method that combines MODIS-EVI remote sensing data and geospatial information and by the MODFLOW-EVT ET package as part of a regional groundwater-flow model that includes the study area. Annual ET volumes were about the same for upper-bound MODIS-EVI ET for perennial streams as for the MODFLOW ET estimates, with the small differences between the two methods having minimal impact on annual or longer groundwater budgets for the study area.
Crop Characteristics Research: Growth and Reflectance Analysis
NASA Technical Reports Server (NTRS)
Badhwar, G. D. (Principal Investigator)
1985-01-01
Much of the early research in remote sensing focused on developing spectral signatures of cover types. It was found, however, that a signature from an unknown cover class could not always be matched to a catalog value of a known cover class. This approach was abandoned and supervised classification schemes followed. These were not efficient and required extensive training. It was obvious that data acquired at a single time could not separate cover types. A large portion of the proposed research has concentrated on modeling the temporal behavior of agricultural crops and on removing the need for any training data in remote sensing surveys; the key to which is the solution of the so-called 'signature extension' problem. A clear need to develop spectral estimators of crop ontogenic stages and yield has existed even though various correlations have been developed. Considerable effort in developing techniques to estimate these variables was devoted to this work. The need to accurately evaluate existing canopy reflectance model(s), improve these models, use them to understand the crop signatures, and estimate leaf area index was the third objective of the proposed work. A synopsis of this research effort is discussed.
And the first one now will later be last: Time-reversal in cormack-jolly-seber models
Nichols, James D.
2016-01-01
The models of Cormack, Jolly and Seber (CJS) are remarkable in providing a rich set of inferences about population survival, recruitment, abundance and even sampling probabilities from a seemingly limited data source: a matrix of 1's and 0's reflecting animal captures and recaptures at multiple sampling occasions. Survival and sampling probabilities are estimated directly in CJS models, whereas estimators for recruitment and abundance were initially obtained as derived quantities. Various investigators have noted that just as standard modeling provides direct inferences about survival, reversing the time order of capture history data permits direct modeling and inference about recruitment. Here we review the development of reverse-time modeling efforts, emphasizing the kinds of inferences and questions to which they seem well suited.
Sampling effort and estimates of species richness based on prepositioned area electrofisher samples
Bowen, Z.H.; Freeman, Mary C.
1998-01-01
Estimates of species richness based on electrofishing data are commonly used to describe the structure of fish communities. One electrofishing method for sampling riverine fishes that has become popular in the last decade is the prepositioned area electrofisher (PAE). We investigated the relationship between sampling effort and fish species richness at seven sites in the Tallapoosa River system, USA based on 1,400 PAE samples collected during 1994 and 1995. First, we estimated species richness at each site using the first-order jackknife and compared observed values for species richness and jackknife estimates of species richness to estimates based on historical collection data. Second, we used a permutation procedure and nonlinear regression to examine rates of species accumulation. Third, we used regression to predict the number of PAE samples required to collect the jackknife estimate of species richness at each site during 1994 and 1995. We found that jackknife estimates of species richness generally were less than or equal to estimates based on historical collection data. The relationship between PAE electrofishing effort and species richness in the Tallapoosa River was described by a positive asymptotic curve as found in other studies using different electrofishing gears in wadable streams. Results from nonlinear regression analyses indicated that rates of species accumulation were variable among sites and between years. Across sites and years, predictions of sampling effort required to collect jackknife estimates of species richness suggested that doubling sampling effort (to 200 PAEs) would typically increase observed species richness by not more than six species. However, sampling effort beyond about 60 PAE samples typically increased observed species richness by < 10%. We recommend using historical collection data in conjunction with a preliminary sample size of at least 70 PAE samples to evaluate estimates of species richness in medium-sized rivers. Seventy PAE samples should provide enough information to describe the relationship between sampling effort and species richness and thus facilitate evaluation of sampling effort.
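The first-order jackknife and the species-accumulation calculation referred to above can be sketched as follows, using an invented samples-by-species incidence matrix rather than the Tallapoosa PAE data.

```python
import numpy as np

# Minimal sketch with made-up data: first-order jackknife richness estimate and a
# species-accumulation curve from random permutations of sample order.
rng = np.random.default_rng(3)
m, S = 100, 40                                  # samples, true species pool (assumed)
p = rng.beta(0.3, 2.0, S)                       # per-sample detection probability by species
incidence = rng.random((m, S)) < p              # True where species j occurred in sample i

observed = incidence.any(axis=0)
f1 = (incidence.sum(axis=0) == 1).sum()         # species detected in exactly one sample
s_obs = observed.sum()
s_jack1 = s_obs + f1 * (m - 1) / m              # first-order jackknife estimator
print(f"observed {s_obs} species, jackknife-1 estimate {s_jack1:.1f}")

# Species-accumulation curve (mean over random orderings of the samples)
curves = []
for _ in range(200):
    order = rng.permutation(m)
    cum = np.cumsum(incidence[order], axis=0) > 0
    curves.append(cum.sum(axis=1))
accumulation = np.mean(curves, axis=0)          # expected richness vs. number of samples
print("expected richness after 70 samples:", accumulation[69])
```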
2016-04-30
...Dabkowski, and Dixit (2015), we demonstrate that the DoDAF models required pre-MS A map to 14 of the 18 parameters of the Constructive Systems... Reference: Valerdi, R., Dabkowski, M., & Dixit, I. (2015). Reliability Improvement of Major Defense Acquisition Program Cost Estimates - Mapping DoDAF to COSYSMO.
Cost Estimation Techniques for C3I System Software.
1984-07-01
...development man-month have been determined for maxi, midi, and mini type computers. Small to median size timeshared developments used 0.2 to 1.5 hours... development schedule 1.23 1.00 1.10... Detailed Model: The final codification of the COCOMO regressions was the development of separate effort... regardless of the software structure level being estimated: DEVC -- the expected development computer (maxi, midi, mini, micro); MODE -- the expected...
NASA Astrophysics Data System (ADS)
Gilles, Luc; Wang, Lianqi; Ellerbroek, Brent
2010-07-01
This paper describes the modeling efforts undertaken in the past couple of years to derive wavefront error (WFE) performance estimates for the Narrow Field Infrared Adaptive Optics System (NFIRAOS), which is the facility laser guide star (LGS) dual-conjugate adaptive optics (AO) system for the Thirty Meter Telescope (TMT). The estimates describe the expected performance of NFIRAOS as a function of seeing on Mauna Kea, zenith angle, and galactic latitude (GL). They have been developed through a combination of integrated AO simulations, side analyses, allocations, lab and lidar experiments.
Rigby, Elizabeth A.; Haukos, David A.
2012-01-01
Previous Mottled Duck (Anas fulvigula) studies suggested that high female breeding season survival may be caused by low nesting effort, but few breeding season estimates of survival associated with nesting effort exist on the western Gulf Coast. Here, breeding season survival (N = 40) and breeding incidence (N = 39) were estimated for female Mottled Ducks on the upper Texas coast, 2006–2008. Females were fitted with backpack radio transmitters and visually relocated every 3–4 days. Weekly survival was estimated using the Known Fate procedure of program MARK with breeding incidence estimated as the annual proportion of females observed nesting or with broods. The top-ranked survival model included a body mass covariate and held weekly female survival constant across weeks and years (SW = 0.986, SE = 0.006). When compared to survival across the entire year estimated from previous band recovery and age ratio analysis, survival rate during the breeding season did not differ. Breeding incidence was well below 100% in all years and highly variable among years (15%–63%). Breeding season survival and breeding incidence were similar to estimates obtained with implant transmitters from the mid-coast of Texas. The greatest breeding incidence for both studies occurred when drought indices indicated average environmental moisture during the breeding season. The observed combination of low breeding incidence and high breeding season survival support the hypothesis of a trade-off between the ecological cost of nesting effort and survival for Mottled Duck females. Habitat cues that trigger nesting are unknown and should be investigated.
Sub-sampling genetic data to estimate black bear population size: A case study
Tredick, C.A.; Vaughan, M.R.; Stauffer, D.F.; Simek, S.L.; Eason, T.
2007-01-01
Costs for genetic analysis of hair samples collected for individual identification of bears average approximately US$50 [2004] per sample. This can easily exceed budgetary allowances for large-scale studies or studies of high-density bear populations. We used 2 genetic datasets from 2 areas in the southeastern United States to explore how reducing costs of analysis by sub-sampling affected precision and accuracy of resulting population estimates. We used several sub-sampling scenarios to create subsets of the full datasets and compared summary statistics, population estimates, and precision of estimates generated from these subsets to estimates generated from the complete datasets. Our results suggested that bias and precision of estimates improved as the proportion of total samples used increased, and heterogeneity models (e.g., Mh[CHAO]) were more robust to reduced sample sizes than other models (e.g., behavior models). We recommend that only high-quality samples (>5 hair follicles) be used when budgets are constrained, and efforts should be made to maximize capture and recapture rates in the field.
Estimating the effectiveness of further sampling in species inventories
Keating, K.A.; Quinn, J.F.; Ivie, M.A.; Ivie, L.L.
1998-01-01
Estimators of the number of additional species expected in the next Δn samples offer a potentially important tool for improving cost-effectiveness of species inventories but are largely untested. We used Monte Carlo methods to compare 11 such estimators, across a range of community structures and sampling regimes, and validated our results, where possible, using empirical data from vascular plant and beetle inventories from Glacier National Park, Montana, USA. We found that B. Efron and R. Thisted's 1976 negative binomial estimator was most robust to differences in community structure and that it was among the most accurate estimators when sampling was from model communities with structures resembling the large, heterogeneous communities that are the likely targets of major inventory efforts. Other estimators may be preferred under specific conditions, however. For example, when sampling was from model communities with highly even species-abundance distributions, estimates based on the Michaelis-Menten model were most accurate; when sampling was from moderately even model communities with S=10 species or communities with highly uneven species-abundance distributions, estimates based on Gleason's (1922) species-area model were most accurate. We suggest that use of such methods in species inventories can help improve cost-effectiveness by providing an objective basis for redirecting sampling to more-productive sites, methods, or time periods as the expectation of detecting additional species becomes unacceptably low.
Precision GPS ephemerides and baselines
NASA Technical Reports Server (NTRS)
1991-01-01
Based on the research, the area of precise ephemerides for GPS satellites, the following observations can be made pertaining to the status and future work needed regarding orbit accuracy. There are several aspects which need to be addressed in discussing determination of precise orbits, such as force models, kinematic models, measurement models, data reduction/estimation methods, etc. Although each one of these aspects was studied at CSR in research efforts, only points pertaining to the force modeling aspect are addressed.
Advancing Models and Data for Characterizing Exposures to Chemicals in Consumer Products
EPA’s Office of Research and Development (ORD) is leading several efforts to develop data and methods for estimating population chemical exposures related to the use of consumer products. New curated chemical, ingredient, and product use information are being collected fro...
Recent advances in estimating protein and energy requirements of ruminants
USDA-ARS?s Scientific Manuscript database
Considerable efforts have been made in gathering scientific data and developing feeding systems for ruminant animals in the last 50 years. Future endeavours should target the assessment, interpretation, and integration of the accumulated knowledge to develop nutrition models in a holistic and pragma...
The impact of truck repositioning on congestion and pollution in the LA basin.
DOT National Transportation Integrated Search
2011-03-01
Pollution and congestion caused by port related truck traffic is usually estimated based on careful transportation modeling and simulation. In these efforts, however, attention is normally focused on trucks on their way from a terminal at the Los Ang...
Real-time monitoring of a microbial electrolysis cell using an electrical equivalent circuit model.
Hussain, S A; Perrier, M; Tartakovsky, B
2018-04-01
Efforts in developing microbial electrolysis cells (MECs) resulted in several novel approaches for wastewater treatment and bioelectrosynthesis. Practical implementation of these approaches necessitates the development of an adequate system for real-time (on-line) monitoring and diagnostics of MEC performance. This study describes a simple MEC equivalent electrical circuit (EEC) model and a parameter estimation procedure, which enable such real-time monitoring. The proposed approach involves MEC voltage and current measurements during its operation with periodic power supply connection/disconnection (on/off operation) followed by parameter estimation using either numerical or analytical solution of the model. The proposed monitoring approach is demonstrated using a membraneless MEC with flow-through porous electrodes. Laboratory tests showed that changes in the influent carbon source concentration and composition significantly affect MEC total internal resistance and capacitance estimated by the model. Fast response of these EEC model parameters to changes in operating conditions enables the development of a model-based approach for real-time monitoring and fault detection.
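An illustrative sketch of the kind of parameter estimation described above, assuming a single-RC relaxation recorded after power-supply disconnection; the paper's actual equivalent circuit and estimation procedure are not reproduced, and all values are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative sketch only: estimate an internal resistance R and capacitance C
# from the voltage relaxation after the power supply is disconnected.
def relaxation(t, v_inf, dv, tau):
    return v_inf + dv * np.exp(-t / tau)

# Synthetic "measured" transient: applied 0.9 V, open-circuit 0.45 V, tau = R*C
t = np.linspace(0, 120, 121)                      # seconds
true_R, true_C = 5.0, 8.0                         # ohm, farad (assumed values)
v = relaxation(t, 0.45, 0.45, true_R * true_C) + np.random.default_rng(4).normal(0, 0.002, t.size)

(v_inf, dv, tau), _ = curve_fit(relaxation, t, v, p0=[0.4, 0.5, 30.0])
i_before_off = 0.09                               # current (A) just before disconnection, assumed
R_hat = dv / i_before_off                         # voltage drop attributed to the resistance
print(f"tau = {tau:.1f} s, R ~ {R_hat:.2f} ohm, C ~ {tau / R_hat:.2f} F")
```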
Boyd, Windy A.; Smith, Marjolein V.; Kissling, Grace E.; Rice, Julie R.; Snyder, Daniel W.; Portier, Christopher J.; Freedman, Jonathan H.
2009-01-01
Background The nematode Caenorhabditis elegans is being assessed as an alternative model organism as part of an interagency effort to develop better means to test potentially toxic substances. As part of this effort, assays that use the COPAS Biosort flow sorting technology to record optical measurements (time of flight (TOF) and extinction (EXT)) of individual nematodes under various chemical exposure conditions are being developed. A mathematical model has been created that uses Biosort data to quantitatively and qualitatively describe C. elegans growth, and link changes in growth rates to biological events. Chlorpyrifos, an organophosphate pesticide known to cause developmental delays and malformations in mammals, was used as a model toxicant to test the applicability of the growth model for in vivo toxicological testing. Methodology/Principal Findings L1 larval nematodes were exposed to a range of sub-lethal chlorpyrifos concentrations (0–75 µM) and measured every 12 h. In the absence of toxicant, C. elegans matured from L1s to gravid adults by 60 h. A mathematical model was used to estimate nematode size distributions at various times. Mathematical modeling of the distributions allowed the number of measured nematodes and log(EXT) and log(TOF) growth rates to be estimated. The model revealed three distinct growth phases. The points at which estimated growth rates changed (change points) were constant across the ten chlorpyrifos concentrations. Concentration response curves with respect to several model-estimated quantities (numbers of measured nematodes, mean log(TOF) and log(EXT), growth rates, and time to reach change points) showed a significant decrease in C. elegans growth with increasing chlorpyrifos concentration. Conclusions Effects of chlorpyrifos on C. elegans growth and development were mathematically modeled. Statistical tests confirmed a significant concentration effect on several model endpoints. This confirmed that chlorpyrifos affects C. elegans development in a concentration dependent manner. The most noticeable effect on growth occurred during early larval stages: L2 and L3. This study supports the utility of the C. elegans growth assay and mathematical modeling in determining the effects of potentially toxic substances in an alternative model organism using high-throughput technologies. PMID:19753116
The use of auxiliary variables in capture-recapture and removal experiments
Pollock, K.H.; Hines, J.E.; Nichols, J.D.
1984-01-01
The dependence of animal capture probabilities on auxiliary variables is an important practical problem which has not been considered in the development of estimation procedures for capture-recapture and removal experiments. In this paper the linear logistic binary regression model is used to relate the probability of capture to continuous auxiliary variables. The auxiliary variables could be environmental quantities such as air or water temperature, or characteristics of individual animals, such as body length or weight. Maximum likelihood estimators of the population parameters are considered for a variety of models which all assume a closed population. Testing between models is also considered. The models can also be used when one auxiliary variable is a measure of the effort expended in obtaining the sample.
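A related sketch, assuming a Huggins-style conditional likelihood rather than the paper's full-likelihood estimator: capture probability is modeled as a logistic function of an individual covariate, and abundance is recovered by a Horvitz-Thompson sum over the captured animals. Data and coefficients are simulated.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

# Sketch: logistic capture probability as a function of body length, fitted by a
# conditional likelihood over animals seen at least once in a closed population.
rng = np.random.default_rng(1)
N, K = 400, 4                              # true population size, capture occasions
length = rng.normal(100, 15, N)            # auxiliary variable
p_true = expit(-4 + 0.03 * length)         # true per-occasion capture probability
hist = rng.random((N, K)) < p_true[:, None]
seen = hist.any(axis=1)
x, y = length[seen], hist[seen]            # only captured animals are observed

def nll(beta):
    p = np.clip(expit(beta[0] + beta[1] * x), 1e-12, 1 - 1e-12)
    p_any = 1 - (1 - p) ** K               # probability of being caught at least once
    ll = (y * np.log(p)[:, None] + (~y) * np.log(1 - p)[:, None]).sum() - np.log(p_any).sum()
    return -ll

fit = minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
p_hat = expit(fit.x[0] + fit.x[1] * x)
N_hat = (1 / (1 - (1 - p_hat) ** K)).sum() # Horvitz-Thompson abundance estimate
print("estimated abundance:", round(N_hat), "true:", N)
```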
Magnavita, N
2007-01-01
Occupational stress is currently studied by means of the Job Demand/Control model of Karasek and the Effort/Reward Imbalance model of Siegrist. In this study we translated into Italian and validated the short forms of the Job Content Questionnaire (JCQ) and of the Effort Reward Imbalance Questionnaire (ERI). The questionnaires were applied to 531 health care workers during periodical medical examinations. Estimates of internal consistency, based on the correlation among the variables comprising each set (Cronbach's alpha), were in each case satisfactory (alpha ranging from 0.76 to 0.89), with the exception of the "control" scale of the JCQ (alpha = 0.57). Exploratory factor analysis showed that the "control" scale of the JCQ and the "reward" scale of the ERI could be divided into two and three sub-scales, respectively. The Karasek and Siegrist models made distinct contributions to explaining perceived work stress. Both the JCQ and the ERI questionnaire may be useful in occupational health.
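Cronbach's alpha, the internal-consistency statistic reported above, can be computed in a few lines; the responses below are simulated, not the study's data (only the respondent count is borrowed from the abstract).

```python
import numpy as np

# Minimal sketch: Cronbach's alpha for a questionnaire scale, computed from a
# respondents-by-items matrix of made-up Likert responses.
def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)        # shape (n_respondents, n_items)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(5)
latent = rng.normal(size=(531, 1))                # 531 respondents, as in the study
responses = np.clip(np.round(2.5 + latent + rng.normal(0, 0.8, (531, 5))), 1, 4)
print("alpha =", round(cronbach_alpha(responses), 2))
```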
Set-base dynamical parameter estimation and model invalidation for biochemical reaction networks.
Rumschinski, Philipp; Borchers, Steffen; Bosio, Sandro; Weismantel, Robert; Findeisen, Rolf
2010-05-25
Mathematical modeling and analysis have become, for the study of biological and cellular processes, an important complement to experimental research. However, the structural and quantitative knowledge available for such processes is frequently limited, and measurements are often subject to inherent and possibly large uncertainties. This results in competing model hypotheses, whose kinetic parameters may not be experimentally determinable. Discriminating among these alternatives and estimating their kinetic parameters is crucial to improve the understanding of the considered process, and to benefit from the analytical tools at hand. In this work we present a set-based framework that makes it possible to discriminate between competing model hypotheses and to provide guaranteed outer estimates on the model parameters that are consistent with the (possibly sparse and uncertain) experimental measurements. This is obtained by means of exact proofs of model invalidity that exploit the polynomial/rational structure of biochemical reaction networks, and by making use of an efficient strategy to balance solution accuracy and computational effort. The practicability of our approach is illustrated with two case studies. The first study shows that our approach allows wrong model hypotheses to be conclusively ruled out. The second study focuses on parameter estimation, and shows that the proposed method makes it possible to evaluate the global influence of measurement sparsity, uncertainty, and prior knowledge on the parameter estimates. This can help in designing further experiments leading to improved parameter estimates.
Initial dynamic load estimates during configuration design
NASA Technical Reports Server (NTRS)
Schiff, Daniel
1987-01-01
This analysis includes the structural response to shock and vibration and evaluates the maximum deflections and material stresses and the potential for the occurrence of elastic instability, fatigue and fracture. The required computations are often performed by means of finite element analysis (FEA) computer programs in which the structure is simulated by a finite element model which may contain thousands of elements. The formulation of a finite element model can be time consuming, and substantial additional modeling effort may be necessary if the structure requires significant changes after initial analysis. Rapid methods for obtaining rough estimates of the structural response to shock and vibration are presented for the purpose of providing guidance during the initial mechanical design configuration stage.
Integration, Validation, and Application of a PV Snow Coverage Model in SAM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freeman, Janine M.; Ryberg, David Severin
2017-08-01
Due to the increasing deployment of PV systems in snowy climates, there is significant interest in a method capable of estimating PV losses resulting from snow coverage that has been verified for a variety of system designs and locations. Many independent snow coverage models have been developed over the last 15 years; however, there has been very little effort verifying these models beyond the system designs and locations on which they were based. Moreover, major PV modeling software products have not yet incorporated any of these models into their workflows. In response to this deficiency, we have integrated the methodology of the snow model developed in the paper by Marion et al. (2013) into the National Renewable Energy Laboratory's (NREL) System Advisor Model (SAM). In this work, we describe how the snow model is implemented in SAM and we discuss our demonstration of the model's effectiveness at reducing error in annual estimations for three PV arrays. Next, we use this new functionality in conjunction with a long term historical data set to estimate average snow losses across the United States for two typical PV system designs. The open availability of the snow loss estimation capability in SAM to the PV modeling community, coupled with our results of the nationwide study, will better equip the industry to accurately estimate PV energy production in areas affected by snowfall.
Nateghi, Roshanak; Guikema, Seth D; Quiring, Steven M
2011-12-01
This article compares statistical methods for modeling power outage durations during hurricanes and examines the predictive accuracy of these methods. Being able to make accurate predictions of power outage durations is valuable because the information can be used by utility companies to plan their restoration efforts more efficiently. This information can also help inform customers and public agencies of the expected outage times, enabling better collective response planning, and coordination of restoration efforts for other critical infrastructures that depend on electricity. In the long run, outage duration estimates for future storm scenarios may help utilities and public agencies better allocate risk management resources to balance the disruption from hurricanes with the cost of hardening power systems. We compare the out-of-sample predictive accuracy of five distinct statistical models for estimating power outage duration times caused by Hurricane Ivan in 2004. The methods compared include both regression models (accelerated failure time (AFT) and Cox proportional hazard models (Cox PH)) and data mining techniques (regression trees, Bayesian additive regression trees (BART), and multivariate additive regression splines). We then validate our models against two other hurricanes. Our results indicate that BART yields the best prediction accuracy and that it is possible to predict outage durations with reasonable accuracy. © 2011 Society for Risk Analysis.
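A hedged sketch of one of the regression approaches compared above (an accelerated failure time model), assuming the lifelines package is available and using made-up covariates and durations rather than the Hurricane Ivan data.

```python
import numpy as np
import pandas as pd
from lifelines import WeibullAFTFitter  # assumed dependency

# Hypothetical sketch: a Weibull AFT model of outage duration as a function of
# storm and infrastructure covariates (all simulated).
rng = np.random.default_rng(6)
n = 500
df = pd.DataFrame({
    "wind_speed": rng.uniform(20, 60, n),          # m/s at the outage location (assumed)
    "tree_density": rng.uniform(0, 1, n),
})
scale = np.exp(1.0 + 0.04 * df["wind_speed"] + 0.8 * df["tree_density"])
df["duration_h"] = rng.weibull(1.5, n) * scale     # simulated restoration times (hours)
df["observed"] = 1                                 # no censoring in this toy example

aft = WeibullAFTFitter()
aft.fit(df, duration_col="duration_h", event_col="observed")
aft.print_summary()
print("predicted median durations (h):",
      aft.predict_median(df[["wind_speed", "tree_density"]].iloc[:3]).values)
```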
Paul, Susannah; Mgbere, Osaro; Arafat, Raouf; Yang, Biru; Santos, Eunice
2017-01-01
Objective The objective was to forecast and validate prediction estimates of influenza activity in Houston, TX using four years of historical influenza-like illness (ILI) from three surveillance data capture mechanisms. Background Using novel surveillance methods and historical data to estimate future trends of influenza-like illness can lead to early detection of influenza activity increases and decreases. Anticipating surges gives public health professionals more time to prepare and increase prevention efforts. Methods Data was obtained from three surveillance systems, Flu Near You, ILINet, and hospital emergency center (EC) visits, with diverse data capture mechanisms. Autoregressive integrated moving average (ARIMA) models were fitted to data from each source for week 27 of 2012 through week 26 of 2016 and used to forecast influenza-like activity for the subsequent 10 weeks. Estimates were then compared to actual ILI percentages for the same period. Results Forecasted estimates had wide confidence intervals that crossed zero. The forecasted trend direction differed by data source, resulting in lack of consensus about future influenza activity. ILINet forecasted estimates and actual percentages had the least differences. ILINet performed best when forecasting influenza activity in Houston, TX. Conclusion Though the three forecasted estimates did not agree on the trend directions, and thus, were considered imprecise predictors of long-term ILI activity based on existing data, pooling predictions and careful interpretations may be helpful for short term intervention efforts. Further work is needed to improve forecast accuracy considering the promise forecasting holds for seasonal influenza prevention and control, and pandemic preparedness.
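The ARIMA fit-and-forecast workflow described above might look roughly like the following, assuming statsmodels and a synthetic weekly ILI series in place of the study's surveillance data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Sketch with synthetic data: fit an ARIMA model to four seasons of weekly ILI
# percentages and forecast the next 10 weeks with prediction intervals.
rng = np.random.default_rng(7)
weeks = pd.date_range("2012-07-01", periods=4 * 52, freq="W")
season = 1.5 + 1.2 * np.maximum(0, np.sin(2 * np.pi * (np.arange(weeks.size) - 20) / 52)) ** 3
ili = pd.Series(season + rng.normal(0, 0.15, weeks.size), index=weeks)

res = ARIMA(ili, order=(2, 1, 1)).fit()            # order chosen for illustration only
forecast = res.get_forecast(steps=10)
print(forecast.predicted_mean.round(2))
print(forecast.conf_int(alpha=0.1).round(2))       # 90% prediction interval
```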
Latitudinal distributions of particulate carbon export across the North Western Atlantic Ocean
NASA Astrophysics Data System (ADS)
Puigcorbé, Viena; Roca-Martí, Montserrat; Masqué, Pere; Benitez-Nelson, Claudia; Rutgers van der Loeff, Michiel; Bracher, Astrid; Moreau, Sebastien
2017-11-01
234Th-derived carbon export fluxes were measured in the Atlantic Ocean under the GEOTRACES framework to evaluate basin-scale export variability. Here, we present the results from the northern half of the GA02 transect, spanning from the equator to 64°N. As a result of limited site-specific C/234Th ratio measurements, we further combined our data with previous work to develop a basin-wide C/234Th ratio depth curve. While the magnitude of organic carbon fluxes varied depending on the C/234Th ratio used, latitudinal trends were similar, with sizeable and variable organic carbon export fluxes occurring at high latitudes and low to negligible fluxes occurring in oligotrophic waters. Our results agree with previous studies, except at the boundaries between domains, where fluxes were relatively enhanced. Three different models were used to obtain satellite-derived net primary production (NPP). In general, NPP estimates had similar trends along the transect, but there were significant differences in the absolute magnitude depending on the model used. Nevertheless, organic carbon export efficiencies were generally < 25%, with the exception of a few stations located in the transition area between the riverine and the oligotrophic domains and between the oligotrophic and the temperate domains. Satellite-derived organic carbon export models from Dunne et al. (2005) (D05), Laws et al. (2011) (L11) and Henson et al. (2011) (H11) were also compared to our 234Th-derived carbon export fluxes. D05 and L11 provided estimates closest to values obtained with the 234Th approach (within a 3-fold difference), but with no clear trends. The H11 model, on the other hand, consistently provided lower export estimates. The large increase in export data in the Atlantic Ocean derived from the GEOTRACES Program, combined with satellite observations and modeling efforts, continues to improve the estimates of carbon export in this ocean basin and therefore reduce uncertainty in the global carbon budget. However, our results also suggest that tuning export models and including biological parameters at a regional scale is necessary for improving satellite-modeling efforts and providing export estimates that are more representative of in situ observations.
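A worked numeric sketch of the 234Th approach referenced above, with invented profile values: the steady-state 238U-234Th deficit is integrated over the upper water column and converted to a carbon flux with an assumed C/234Th ratio.

```python
import numpy as np

# Worked sketch with invented activities and an assumed C/234Th ratio (not the
# GA02 data): steady-state 234Th export flux converted to a POC export flux.
lam = 0.02876                                     # 234Th decay constant (1/day)
z = np.array([0, 20, 40, 60, 80, 100.0])          # depth (m)
a_u = np.full(z.size, 2.4)                        # 238U activity (dpm/L), assumed
a_th = np.array([1.6, 1.7, 1.9, 2.1, 2.3, 2.4])   # total 234Th activity (dpm/L), assumed

d = a_u - a_th
deficit = (((d[:-1] + d[1:]) / 2) * np.diff(z)).sum() * 1000   # dpm/m2 (1 m3 = 1000 L)
th_flux = lam * deficit                           # 234Th flux at 100 m, dpm/m2/day
c_to_th = 5.0                                     # µmol C per dpm, assumed ratio
poc_flux = th_flux * c_to_th / 1000               # mmol C / m2 / day
print(f"234Th flux: {th_flux:.0f} dpm m-2 d-1, POC export: {poc_flux:.2f} mmol C m-2 d-1")
```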
Near real-time forecasting for cholera decision making in Haiti after Hurricane Matthew
Camacho, Anton; Grandesso, Francesco; Cohuet, Sandra; Lemaitre, Joseph C.; Rinaldo, Andrea
2018-01-01
Computational models of cholera transmission can provide objective insights into the course of an ongoing epidemic and aid decision making on allocation of health care resources. However, models are typically designed, calibrated and interpreted post-hoc. Here, we report the efforts of a team from academia, field research and humanitarian organizations to model in near real-time the Haitian cholera outbreak after Hurricane Matthew in October 2016, to assess risk and to quantitatively estimate the efficacy of a then ongoing vaccination campaign. A rainfall-driven, spatially-explicit meta-community model of cholera transmission was coupled to a data assimilation scheme for computing short-term projections of the epidemic in near real-time. The model was used to forecast cholera incidence for the months after the passage of the hurricane (October-December 2016) and to predict the impact of a planned oral cholera vaccination campaign. Our first projection, from October 29 to December 31, predicted the highest incidence in the departments of Grande Anse and Sud, accounting for about 45% of the total cases in Haiti. The projection included a second peak in cholera incidence in early December largely driven by heavy rainfall forecasts, confirming the urgency for rapid intervention. A second projection (from November 12 to December 31) used updated rainfall forecasts to estimate that 835 cases would be averted by vaccinations in Grande Anse (90% Prediction Interval [PI] 476-1284) and 995 in Sud (90% PI 508-2043). The experience gained by this modeling effort shows that state-of-the-art computational modeling and data-assimilation methods can produce informative near real-time projections of cholera incidence. Collaboration among modelers and field epidemiologists is indispensable to gain fast access to field data and to translate model results into operational recommendations for emergency management during an outbreak. Future efforts should thus draw together multi-disciplinary teams to ensure model outputs are appropriately based, interpreted and communicated. PMID:29768401
Near real-time forecasting for cholera decision making in Haiti after Hurricane Matthew.
Pasetto, Damiano; Finger, Flavio; Camacho, Anton; Grandesso, Francesco; Cohuet, Sandra; Lemaitre, Joseph C; Azman, Andrew S; Luquero, Francisco J; Bertuzzo, Enrico; Rinaldo, Andrea
2018-05-01
Computational models of cholera transmission can provide objective insights into the course of an ongoing epidemic and aid decision making on allocation of health care resources. However, models are typically designed, calibrated and interpreted post-hoc. Here, we report the efforts of a team from academia, field research and humanitarian organizations to model in near real-time the Haitian cholera outbreak after Hurricane Matthew in October 2016, to assess risk and to quantitatively estimate the efficacy of a then ongoing vaccination campaign. A rainfall-driven, spatially-explicit meta-community model of cholera transmission was coupled to a data assimilation scheme for computing short-term projections of the epidemic in near real-time. The model was used to forecast cholera incidence for the months after the passage of the hurricane (October-December 2016) and to predict the impact of a planned oral cholera vaccination campaign. Our first projection, from October 29 to December 31, predicted the highest incidence in the departments of Grande Anse and Sud, accounting for about 45% of the total cases in Haiti. The projection included a second peak in cholera incidence in early December largely driven by heavy rainfall forecasts, confirming the urgency for rapid intervention. A second projection (from November 12 to December 31) used updated rainfall forecasts to estimate that 835 cases would be averted by vaccinations in Grande Anse (90% Prediction Interval [PI] 476-1284) and 995 in Sud (90% PI 508-2043). The experience gained by this modeling effort shows that state-of-the-art computational modeling and data-assimilation methods can produce informative near real-time projections of cholera incidence. Collaboration among modelers and field epidemiologists is indispensable to gain fast access to field data and to translate model results into operational recommendations for emergency management during an outbreak. Future efforts should thus draw together multi-disciplinary teams to ensure model outputs are appropriately based, interpreted and communicated.
Undocumented Migration from Latin America in an Era of Rising U.S. Enforcement.
Massey, Douglas S; Riosmena, Fernando
2010-07-01
Available data have consistently pointed up the failure of U.S. policies to reduce undocumented migration from Latin America. To shed light on the reasons for this failure, we estimated a series of dynamic models of undocumented entry into and exit from the United States. Our estimates suggest that undocumented migration is grounded more in mechanisms posited by social capital theory and the new economics of labor migration rather than neoclassical economics. As a result, U.S. efforts to increase the costs of undocumented entry and reduce the benefits of undocumented labor have proven unsuccessful given the widespread access of Latin Americans to migrant networks. The main effect of U.S. enforcement efforts has been to reduce the circularity of Latin American migration.
Undocumented Migration from Latin America in an Era of Rising U.S. Enforcement
MASSEY, DOUGLAS S.; RIOSMENA, FERNANDO
2010-01-01
Available data have consistently pointed up the failure of U.S. policies to reduce undocumented migration from Latin America. To shed light on the reasons for this failure, we estimated a series of dynamic models of undocumented entry into and exit from the United States. Our estimates suggest that undocumented migration is grounded more in mechanisms posited by social capital theory and the new economics of labor migration rather than neoclassical economics. As a result, U.S. efforts to increase the costs of undocumented entry and reduce the benefits of undocumented labor have proven unsuccessful given the widespread access of Latin Americans to migrant networks. The main effect of U.S. enforcement efforts has been to reduce the circularity of Latin American migration. PMID:20824109
Prediction of fishing effort distributions using boosted regression trees.
Soykan, Candan U; Eguchi, Tomoharu; Kohin, Suzanne; Dewar, Heidi
2014-01-01
Concerns about bycatch of protected species have become a dominant factor shaping fisheries management. However, efforts to mitigate bycatch are often hindered by a lack of data on the distributions of fishing effort and protected species. One approach to overcoming this problem has been to overlay the distribution of past fishing effort with known locations of protected species, often obtained through satellite telemetry and occurrence data, to identify potential bycatch hotspots. This approach, however, generates static bycatch risk maps, calling into question their ability to forecast into the future, particularly when dealing with spatiotemporally dynamic fisheries and highly migratory bycatch species. In this study, we use boosted regression trees to model the spatiotemporal distribution of fishing effort for two distinct fisheries in the North Pacific Ocean, the albacore (Thunnus alalunga) troll fishery and the California drift gillnet fishery that targets swordfish (Xiphias gladius). Our results suggest that it is possible to accurately predict fishing effort using < 10 readily available predictor variables (cross-validated correlations between model predictions and observed data ≥ 0.6). Although the two fisheries are quite different in their gears and fishing areas, their respective models had high predictive ability, even when input data sets were restricted to a fraction of the full time series. The implications for conservation and management are encouraging: Across a range of target species, fishing methods, and spatial scales, even a relatively short time series of fisheries data may suffice to accurately predict the location of fishing effort into the future. In combination with species distribution modeling of bycatch species, this approach holds promise as a mitigation tool when observer data are limited. Even in data-rich regions, modeling fishing effort and bycatch may provide more accurate estimates of bycatch risk than partial observer coverage for fisheries and bycatch species that are heavily influenced by dynamic oceanographic conditions.
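A schematic of the boosted-regression-tree approach described above, using scikit-learn's gradient boosting as a stand-in BRT implementation on synthetic data; the six predictor names and the simulated effort response are invented for illustration, not the study's variables.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
n = 2000
X = np.column_stack([
    rng.uniform(10, 25, n),      # sea surface temperature (hypothetical predictor)
    rng.lognormal(0, 1, n),      # chlorophyll-a
    rng.uniform(0, 4000, n),     # bottom depth
    rng.integers(1, 13, n),      # month
    rng.uniform(25, 45, n),      # latitude
    rng.uniform(-140, -115, n),  # longitude
])
# Synthetic "fishing effort" that responds nonlinearly to temperature and depth
y = (np.exp(-0.5 * ((X[:, 0] - 18) / 3) ** 2) * np.exp(-X[:, 2] / 2000)
     + rng.normal(0, 0.05, n))

brt = GradientBoostingRegressor(n_estimators=500, learning_rate=0.05, max_depth=3)
pred = cross_val_predict(brt, X, y, cv=5)     # out-of-sample predictions
r = np.corrcoef(pred, y)[0, 1]                # cross-validated correlation
print(f"cross-validated correlation: {r:.2f}")
```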
NASA Space Radiation Risk Project: Overview and Recent Results
NASA Technical Reports Server (NTRS)
Blattnig, Steve R.; Chappell, Lori J.; George, Kerry A.; Hada, Megumi; Hu, Shaowen; Kidane, Yared H.; Kim, Myung-Hee Y.; Kovyrshina, Tatiana; Norman, Ryan B.; Nounu, Hatem N.;
2015-01-01
The NASA Space Radiation Risk project is responsible for integrating new experimental and computational results into models to predict risk of cancer and acute radiation syndrome (ARS) for use in mission planning and systems design, as well as current space operations. The project has several parallel efforts focused on proving NASA's radiation risk projection capability in both the near and long term. This presentation will give an overview, with select results from these efforts including the following topics: verification, validation, and streamlining the transition of models to use in decision making; relative biological effectiveness and dose rate effect estimation using a combination of stochastic track structure simulations, DNA damage model calculations and experimental data; ARS model improvements; pathway analysis from gene expression data sets; solar particle event probabilistic exposure calculation including correlated uncertainties for use in design optimization.
Di Guardo, Antonio; Gouin, Todd; MacLeod, Matthew; Scheringer, Martin
2018-01-24
Environmental fate and exposure models are a powerful means to integrate information on chemicals, their partitioning and degradation behaviour, the environmental scenario and the emissions in order to compile a picture of chemical distribution and fluxes in the multimedia environment. A 1995 pioneering book, resulting from a series of workshops among model developers and users, reported the main advantages and identified needs for research in the field of multimedia fate models. Considerable efforts were devoted to their improvement in the past 25 years and many aspects were refined; notably the inclusion of nanomaterials among the modelled substances, the development of models at different spatial and temporal scales, the estimation of chemical properties and emission data, the incorporation of additional environmental media and processes, and the integration of sensitivity and uncertainty analysis in the simulations. However, some challenging issues remain and require research efforts and attention: the need for methods to estimate partition coefficients for polar and ionizable chemicals in the environment, a better description of bioavailability in different environments, as well as the requirement of injecting more ecological realism into exposure predictions to account for the diversity of ecosystem structures and functions in risk assessment. Finally, to transfer new scientific developments into the realm of regulatory risk assessment, we propose the formation of expert groups that compare, discuss and recommend model modifications and updates and help develop practical tools for risk assessment.
Gray, B.R.; Shi, W.; Houser, J.N.; Rogala, J.T.; Guan, Z.; Cochran-Biederman, J. L.
2011-01-01
Ecological restoration efforts in large rivers generally aim to ameliorate ecological effects associated with large-scale modification of those rivers. This study examined whether the effects of restoration efforts, specifically those of island construction, within a largely open water restoration area of the Upper Mississippi River (UMR) might be seen at the spatial scale of that 3476 ha area. The cumulative effects of island construction, when observed over multiple years, were postulated to have made the restoration area increasingly similar to a positive reference area (a proximate area comprising contiguous backwater areas) and increasingly different from two negative reference areas. The negative reference areas represented the Mississippi River main channel in an area proximate to the restoration area and an open water area in a related Mississippi River reach that has seen relatively little restoration effort. Inferences on the effects of restoration were made by comparing constrained and unconstrained models of summer chlorophyll a (CHL), summer inorganic suspended solids (ISS) and counts of benthic mayfly larvae. Constrained models forced trends in means or in both means and sampling variances to become, over time, increasingly similar to those in the positive reference area and increasingly dissimilar to those in the negative reference areas. Trends were estimated over 12- (mayflies) or 14-year sampling periods, and were evaluated using model information criteria. Based on these methods, restoration effects were observed for CHL and mayflies, while evidence in favour of restoration effects on ISS was equivocal. These findings suggest that the cumulative effects of island building at relatively large spatial scales within large rivers may be estimated using data from large-scale surveillance monitoring programs. Published in 2010 by John Wiley & Sons, Ltd.
Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems
Rodriguez-Fernandez, Maria; Egea, Jose A; Banga, Julio R
2006-01-01
Background We consider the problem of parameter estimation (model calibration) in nonlinear dynamic models of biological systems. Due to the frequent ill-conditioning and multi-modality of many of these problems, traditional local methods usually fail (unless initialized with very good guesses of the parameter vector). In order to surmount these difficulties, global optimization (GO) methods have been suggested as robust alternatives. Currently, deterministic GO methods cannot solve problems of realistic size within this class in reasonable computation times. In contrast, certain types of stochastic GO methods have shown promising results, although the computational cost remains large. Rodriguez-Fernandez and coworkers have presented hybrid stochastic-deterministic GO methods which could reduce computation time by one order of magnitude while guaranteeing robustness. Our goal here was to further reduce the computational effort without losing robustness. Results We have developed a new procedure based on the scatter search methodology for nonlinear optimization of dynamic models of arbitrary (or even unknown) structure (i.e. black-box models). In this contribution, we describe and apply this novel metaheuristic, inspired by recent developments in the field of operations research, to a set of complex identification problems and we make a critical comparison with respect to the previous (above mentioned) successful methods. Conclusion Robust and efficient methods for parameter estimation are of key importance in systems biology and related areas. The new metaheuristic presented in this paper aims to ensure the proper solution of these problems by adopting a global optimization approach, while keeping the computational effort under reasonable values. This new metaheuristic was applied to a set of three challenging parameter estimation problems of nonlinear dynamic biological systems, outperforming very significantly all the methods previously used for these benchmark problems. PMID:17081289
Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems.
Rodriguez-Fernandez, Maria; Egea, Jose A; Banga, Julio R
2006-11-02
We consider the problem of parameter estimation (model calibration) in nonlinear dynamic models of biological systems. Due to the frequent ill-conditioning and multi-modality of many of these problems, traditional local methods usually fail (unless initialized with very good guesses of the parameter vector). In order to surmount these difficulties, global optimization (GO) methods have been suggested as robust alternatives. Currently, deterministic GO methods cannot solve problems of realistic size within this class in reasonable computation times. In contrast, certain types of stochastic GO methods have shown promising results, although the computational cost remains large. Rodriguez-Fernandez and coworkers have presented hybrid stochastic-deterministic GO methods which could reduce computation time by one order of magnitude while guaranteeing robustness. Our goal here was to further reduce the computational effort without losing robustness. We have developed a new procedure based on the scatter search methodology for nonlinear optimization of dynamic models of arbitrary (or even unknown) structure (i.e. black-box models). In this contribution, we describe and apply this novel metaheuristic, inspired by recent developments in the field of operations research, to a set of complex identification problems and we make a critical comparison with respect to the previous (above mentioned) successful methods. Robust and efficient methods for parameter estimation are of key importance in systems biology and related areas. The new metaheuristic presented in this paper aims to ensure the proper solution of these problems by adopting a global optimization approach, while keeping the computational effort under reasonable values. This new metaheuristic was applied to a set of three challenging parameter estimation problems of nonlinear dynamic biological systems, outperforming very significantly all the methods previously used for these benchmark problems.
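The scatter-search metaheuristic itself is not reproduced here; as a stand-in, the sketch below calibrates a small two-state dynamic model with SciPy's differential_evolution (a different stochastic global optimizer), to illustrate the generic black-box model-calibration setup the abstract describes. The model, data, and bounds are all invented.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import differential_evolution

def model(t, y, k1, k2):
    a, b = y
    return [-k1 * a, k1 * a - k2 * b]            # simple two-step kinetics: A -> B -> (degraded)

t_obs = np.linspace(0, 10, 25)
true_params = (0.8, 0.3)
sol = solve_ivp(model, (0, 10), [1.0, 0.0], t_eval=t_obs, args=true_params)
rng = np.random.default_rng(2)
data = sol.y + rng.normal(0, 0.02, sol.y.shape)  # noisy "observations"

def cost(theta):
    sim = solve_ivp(model, (0, 10), [1.0, 0.0], t_eval=t_obs, args=tuple(theta))
    return np.sum((sim.y - data) ** 2)           # least-squares calibration objective

result = differential_evolution(cost, bounds=[(0.01, 5.0)] * 2, seed=2)
print("estimated parameters:", result.x)         # should recover values close to (0.8, 0.3)
```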
Kracalik, Ian T; Kenu, Ernest; Ayamdooh, Evans Nsoh; Allegye-Cudjoe, Emmanuel; Polkuu, Paul Nokuma; Frimpong, Joseph Asamoah; Nyarko, Kofi Mensah; Bower, William A; Traxler, Rita; Blackburn, Jason K
2017-10-01
Anthrax is hyper-endemic in West Africa. Despite the effectiveness of livestock vaccines in controlling anthrax, underreporting, logistics, and limited resources make implementing vaccination campaigns difficult. To better understand the geographic limits of anthrax, elucidate environmental factors related to its occurrence, and identify human and livestock populations at risk, we developed predictive models of the environmental suitability of anthrax in Ghana. We obtained data on the location and date of livestock anthrax from veterinary and outbreak response records in Ghana during 2005-2016, as well as livestock vaccination registers and population estimates of characteristically high-risk groups. To predict the environmental suitability of anthrax, we used an ensemble of random forest (RF) models built using a combination of climatic and environmental factors. From 2005 through the first six months of 2016, there were 67 anthrax outbreaks (851 cases) in livestock; outbreaks showed a seasonal peak during February through April and primarily involved cattle. There was a median of 19,709 vaccine doses [range: 0-175 thousand] administered annually. Results from the RF model suggest a marked ecological divide separating the broad areas of environmental suitability in northern Ghana from the southern part of the country. Increasing alkaline soil pH was associated with a higher probability of anthrax occurrence. We estimated 2.2 (95% CI: 2.0, 2.5) million livestock and 805 (95% CI: 519, 890) thousand low income rural livestock keepers were located in anthrax risk areas. Based on our estimates, the current anthrax vaccination efforts in Ghana cover a fraction of the livestock potentially at risk; thus, control efforts should be focused on improving vaccine coverage among high risk groups.
Allegye-Cudjoe, Emmanuel; Polkuu, Paul Nokuma; Frimpong, Joseph Asamoah; Nyarko, Kofi Mensah; Bower, William A.; Traxler, Rita
2017-01-01
Anthrax is hyper-endemic in West Africa. Despite the effectiveness of livestock vaccines in controlling anthrax, underreporting, logistics, and limited resources make implementing vaccination campaigns difficult. To better understand the geographic limits of anthrax, elucidate environmental factors related to its occurrence, and identify human and livestock populations at risk, we developed predictive models of the environmental suitability of anthrax in Ghana. We obtained data on the location and date of livestock anthrax from veterinary and outbreak response records in Ghana during 2005–2016, as well as livestock vaccination registers and population estimates of characteristically high-risk groups. To predict the environmental suitability of anthrax, we used an ensemble of random forest (RF) models built using a combination of climatic and environmental factors. From 2005 through the first six months of 2016, there were 67 anthrax outbreaks (851 cases) in livestock; outbreaks showed a seasonal peak during February through April and primarily involved cattle. There was a median of 19,709 vaccine doses [range: 0–175 thousand] administered annually. Results from the RF model suggest a marked ecological divide separating the broad areas of environmental suitability in northern Ghana from the southern part of the country. Increasing alkaline soil pH was associated with a higher probability of anthrax occurrence. We estimated 2.2 (95% CI: 2.0, 2.5) million livestock and 805 (95% CI: 519, 890) thousand low income rural livestock keepers were located in anthrax risk areas. Based on our estimates, the current anthrax vaccination efforts in Ghana cover a fraction of the livestock potentially at risk; thus, control efforts should be focused on improving vaccine coverage among high risk groups. PMID:29028799
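A toy version of the niche-modeling step described above: a small ensemble of random forest classifiers trained on synthetic presence/background points with hypothetical covariates (soil pH, rainfall, NDVI). It only illustrates how an averaged occurrence-probability surface would be produced, not the study's actual model or data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
n = 1000
soil_ph = rng.uniform(4.5, 8.5, n)
rainfall = rng.uniform(600, 1400, n)
ndvi = rng.uniform(0.1, 0.8, n)
X = np.column_stack([soil_ph, rainfall, ndvi])
# Synthetic rule: occurrence more likely on alkaline soils, mirroring the pH finding above
p = 1.0 / (1.0 + np.exp(-3.0 * (soil_ph - 7.0)))
y = rng.binomial(1, p)

ensemble = [RandomForestClassifier(n_estimators=200, random_state=s).fit(X, y)
            for s in range(5)]                                   # small ensemble of RF models
suitability = np.mean([m.predict_proba(X)[:, 1] for m in ensemble], axis=0)
print(f"mean predicted suitability on alkaline soils (pH > 7.5): "
      f"{suitability[soil_ph > 7.5].mean():.2f}")
```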
Preliminary Assessment of the Flow of Used Electronics, In ...
Electronic waste (e-waste) is the largest growing municipal waste stream in the United States. The improper disposal of e-waste has environmental, economic, and social impacts, thus there is a need for sustainable stewardship of electronics. EPA/ORD has been working to improve our understanding of the quantity and flow of electronic devices from initial purchase to final disposition. Understanding the pathways of used electronics from the consumer to their final disposition would provide insight to decision makers about their impacts and support efforts to encourage improvements in policy, technology, and beneficial use. This report documents the first stage of EPA/ORD's effort to understand the flows of used electronics and e-waste by reviewing the regulatory programs of selected states and identifying the key lessons learned and best practices that have emerged since their inception. Additionally, a proof-of-concept e-waste flow model has been developed to provide estimates of the quantity of e-waste generated annually at the national level, as well as for selected states. This report documents a preliminary assessment of available data and development of the model that can be used as a starting point to estimate domestic flows of used electronics from generation, to collection and reuse, to final disposition. The electronics waste flow model can estimate the amount of electronic products entering the EOL management phase based on unit sales data.
Estimating parameters of hidden Markov models based on marked individuals: use of robust design data
Kendall, William L.; White, Gary C.; Hines, James E.; Langtimm, Catherine A.; Yoshizaki, Jun
2012-01-01
Development and use of multistate mark-recapture models, which provide estimates of parameters of Markov processes in the face of imperfect detection, have become common over the last twenty years. Recently, estimating parameters of hidden Markov models, where the state of an individual can be uncertain even when it is detected, has received attention. Previous work has shown that ignoring state uncertainty biases estimates of survival and state transition probabilities, thereby reducing the power to detect effects. Efforts to adjust for state uncertainty have included special cases and a general framework for a single sample per period of interest. We provide a flexible framework for adjusting for state uncertainty in multistate models, while utilizing multiple sampling occasions per period of interest to increase precision and remove parameter redundancy. These models also produce direct estimates of state structure for each primary period, even for the case where there is just one sampling occasion. We apply our model to expected value data, and to data from a study of Florida manatees, to provide examples of the improvement in precision due to secondary capture occasions. We also provide user-friendly software to implement these models. This general framework could also be used by practitioners to consider constrained models of particular interest, or model the relationship between within-primary period parameters (e.g., state structure) and between-primary period parameters (e.g., state transition probabilities).
1980-10-01
Element, 64709N Prototype Manpower/Personnel Systems (U), Project Z1302-PN Officer Career Models (U), funded by the Office of the Deputy Assistant... Models for Navy Officer Billets portion of the proposed NPS research effort to develop an integrated officer system planning model ; the purpose of this...attempting to model the Naval officer force structure as a system. This study considers the primary first order factors which drive the requirements
NASA Astrophysics Data System (ADS)
Park, Jong-Yeon; Stock, Charles A.; Yang, Xiaosong; Dunne, John P.; Rosati, Anthony; John, Jasmin; Zhang, Shaoqing
2018-03-01
Reliable estimates of historical and current biogeochemistry are essential for understanding past ecosystem variability and predicting future changes. Efforts to translate improved physical ocean state estimates into improved biogeochemical estimates, however, are hindered by high biogeochemical sensitivity to transient momentum imbalances that arise during physical data assimilation. Most notably, the breakdown of geostrophic constraints on data assimilation in equatorial regions can lead to spurious upwelling, resulting in excessive equatorial productivity and biogeochemical fluxes. This hampers efforts to understand and predict the biogeochemical consequences of El Niño and La Niña. We develop a strategy to robustly integrate an ocean biogeochemical model with an ensemble coupled-climate data assimilation system used for seasonal to decadal global climate prediction. Addressing spurious vertical velocities requires two steps. First, we find that tightening constraints on atmospheric data assimilation maintains a better equatorial wind stress and pressure gradient balance. This reduces spurious vertical velocities, but those remaining still produce substantial biogeochemical biases. The remainder is addressed by imposing stricter fidelity to model dynamics over data constraints near the equator. We determine an optimal choice of model-data weights that removed spurious biogeochemical signals while benefitting from off-equatorial constraints that still substantially improve equatorial physical ocean simulations. Compared to the unconstrained control run, the optimally constrained model reduces equatorial biogeochemical biases and markedly improves the equatorial subsurface nitrate concentrations and hypoxic area. The pragmatic approach described herein offers a means of advancing earth system prediction in parallel with continued data assimilation advances aimed at fully considering equatorial data constraints.
Investigation of Models and Estimation Techniques for GPS Attitude Determination
NASA Technical Reports Server (NTRS)
Garrick, J.
1996-01-01
Much work has been done in the Flight Dynamics Analysis Branch (FDAB) in developing algorithms to meet the needs of the new and growing field of attitude determination using the Global Positioning System (GPS) constellation of satellites. Flight Dynamics has the responsibility to investigate any new technology and incorporate the innovations in the attitude ground support systems developed to support future missions. The work presented here is an investigative analysis that will produce the needed adaptation to allow the Flight Dynamics Support System (FDSS) to incorporate GPS phase measurements and produce observation measurements compatible with the FDSS. A simulator was developed to produce the necessary measurement data to test the models developed for the different estimation techniques used by FDAB. This paper gives an overview of the current modeling capabilities of the simulator models and algorithms for the adaptation of GPS measurement data and results from each of the estimation techniques. Future analysis efforts to evaluate the simulator and models against in-flight GPS measurement data are also outlined.
Soil moisture data as a constraint for groundwater recharge estimation
NASA Astrophysics Data System (ADS)
Mathias, Simon A.; Sorensen, James P. R.; Butler, Adrian P.
2017-09-01
Estimating groundwater recharge rates is important for water resource management studies. Modeling approaches to forecast groundwater recharge typically require observed historic data to assist calibration. It is generally not possible to observe groundwater recharge rates directly. Therefore, in the past, much effort has been invested to record soil moisture content (SMC) data, which can be used in a water balance calculation to estimate groundwater recharge. In this context, SMC data is measured at different depths and then typically integrated with respect to depth to obtain a single set of aggregated SMC values, which are used as an estimate of the total water stored within a given soil profile. This article seeks to investigate the value of such aggregated SMC data for conditioning groundwater recharge models in this respect. A simple modeling approach is adopted, which utilizes an emulation of Richards' equation in conjunction with a soil texture pedotransfer function. The only unknown parameters are soil texture. Monte Carlo simulation is performed for four different SMC monitoring sites. The model is used to estimate both aggregated SMC and groundwater recharge. The impact of conditioning the model to the aggregated SMC data is then explored in terms of its ability to reduce the uncertainty associated with recharge estimation. Whilst uncertainty in soil texture can lead to significant uncertainty in groundwater recharge estimation, it is found that aggregated SMC is virtually insensitive to soil texture.
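A highly simplified sketch of the conditioning idea described above: candidate soil parameters are drawn at random, a toy bucket model (standing in for the Richards-equation emulator and pedotransfer function) produces both a storage series and annual recharge, and only parameter sets whose simulated storage best matches the "observed" aggregated SMC series are retained. All forcing data and thresholds are invented.

```python
import numpy as np

rng = np.random.default_rng(4)
days = 365
rain = rng.gamma(0.4, 8.0, days)                             # synthetic daily rainfall (mm)
pet = 2.0 + 1.5 * np.sin(2 * np.pi * np.arange(days) / 365)  # synthetic potential ET (mm)

def bucket(capacity, k):
    """Toy soil bucket: returns daily storage (proxy for aggregated SMC) and annual recharge."""
    s, storage, recharge = 0.5 * capacity, [], 0.0
    for r, e in zip(rain, pet):
        s = max(s + r - min(e, s), 0.0)          # add rain, remove ET limited by storage
        drain = k * max(s - capacity, 0.0)       # drainage (recharge) above field capacity
        s -= drain
        recharge += drain
        storage.append(s)
    return np.array(storage), recharge

obs_storage, _ = bucket(120.0, 0.6)              # pretend this is the observed SMC record

candidates = []
for _ in range(2000):
    cap, k = rng.uniform(60, 200), rng.uniform(0.1, 1.0)
    sim_storage, rech = bucket(cap, k)
    rmse = np.sqrt(np.mean((sim_storage - obs_storage) ** 2))
    candidates.append((rmse, rech))
candidates.sort()                                # condition on SMC: keep the best-fitting sets
accepted = [rech for _, rech in candidates[:100]]
print(f"conditioned annual recharge: {np.mean(accepted):.0f} +/- {np.std(accepted):.0f} mm")
```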
Buderman, Frances E; Diefenbach, Duane R; Casalena, Mary Jo; Rosenberry, Christopher S; Wallingford, Bret D
2014-04-01
The Brownie tag-recovery model is useful for estimating harvest rates but assumes all tagged individuals survive to the first hunting season; otherwise, mortality between time of tagging and the hunting season will cause the Brownie estimator to be negatively biased. Alternatively, fitting animals with radio transmitters can be used to accurately estimate harvest rate but may be more costly. We developed a joint model to estimate harvest and annual survival rates that combines known-fate data from animals fitted with transmitters to estimate the probability of surviving the period from capture to the first hunting season, and data from reward-tagged animals in a Brownie tag-recovery model. We evaluated bias and precision of the joint estimator, and how to optimally allocate effort between animals fitted with radio transmitters and inexpensive ear tags or leg bands. Tagging-to-harvest survival rates from >20 individuals with radio transmitters combined with 50-100 reward tags resulted in an unbiased and precise estimator of harvest rates. In addition, the joint model can test whether transmitters affect an individual's probability of being harvested. We illustrate application of the model using data from wild turkey, Meleagris gallapavo, to estimate harvest rates, and data from white-tailed deer, Odocoileus virginianus, to evaluate whether the presence of a visible radio transmitter is related to the probability of a deer being harvested. The joint known-fate tag-recovery model eliminates the requirement to capture and mark animals immediately prior to the hunting season to obtain accurate and precise estimates of harvest rate. In addition, the joint model can assess whether marking animals with radio transmitters affects the individual's probability of being harvested, caused by hunter selectivity or changes in a marked animal's behavior.
Buderman, Frances E.; Diefenbach, Duane R.; Casalena, Mary Jo; Rosenberry, Christopher S.; Wallingford, Bret D.
2014-01-01
The Brownie tag-recovery model is useful for estimating harvest rates but assumes all tagged individuals survive to the first hunting season; otherwise, mortality between time of tagging and the hunting season will cause the Brownie estimator to be negatively biased. Alternatively, fitting animals with radio transmitters can be used to accurately estimate harvest rate but may be more costly. We developed a joint model to estimate harvest and annual survival rates that combines known-fate data from animals fitted with transmitters to estimate the probability of surviving the period from capture to the first hunting season, and data from reward-tagged animals in a Brownie tag-recovery model. We evaluated bias and precision of the joint estimator, and how to optimally allocate effort between animals fitted with radio transmitters and inexpensive ear tags or leg bands. Tagging-to-harvest survival rates from >20 individuals with radio transmitters combined with 50–100 reward tags resulted in an unbiased and precise estimator of harvest rates. In addition, the joint model can test whether transmitters affect an individual's probability of being harvested. We illustrate application of the model using data from wild turkey, Meleagris gallopavo, to estimate harvest rates, and data from white-tailed deer, Odocoileus virginianus, to evaluate whether the presence of a visible radio transmitter is related to the probability of a deer being harvested. The joint known-fate tag-recovery model eliminates the requirement to capture and mark animals immediately prior to the hunting season to obtain accurate and precise estimates of harvest rate. In addition, the joint model can assess whether marking animals with radio transmitters affects the individual's probability of being harvested, caused by hunter selectivity or changes in a marked animal's behavior.
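A stripped-down, single-season version of the joint likelihood idea: telemetry animals contribute a binomial likelihood for surviving from tagging to the hunting season, and reward-tagged animals contribute a binomial likelihood for being recovered given survival. The counts are invented, and the full Brownie model's multi-year structure and reporting rates are omitted.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom

n_radio, n_radio_alive = 25, 21        # telemetry animals surviving tagging-to-season
n_tags, n_recovered = 100, 28          # reward-tagged animals recovered during the harvest

def negloglik(params):
    phi, h = 1.0 / (1.0 + np.exp(-np.asarray(params)))   # logit scale -> probabilities
    ll = binom.logpmf(n_radio_alive, n_radio, phi)        # known-fate component
    ll += binom.logpmf(n_recovered, n_tags, phi * h)      # tag-recovery component
    return -ll

fit = minimize(negloglik, x0=[0.0, 0.0], method="Nelder-Mead")
phi_hat, h_hat = 1.0 / (1.0 + np.exp(-fit.x))
print(f"tagging-to-season survival ~ {phi_hat:.2f}, harvest rate ~ {h_hat:.2f}")
```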
Using population models to evaluate management alternatives for Gulf Striped Bass
Aspinwall, Alexander P.; Irwin, Elise R.; Lloyd, M. Clint
2017-01-01
Interstate management of Gulf Striped Bass Morone saxatilis has involved a thirty-year cooperative effort involving Federal and State agencies in Georgia, Florida and Alabama (Apalachicola-Chattahoochee-Flint Gulf Striped Bass Technical Committee). The Committee has recently focused on developing an adaptive framework for conserving and restoring Gulf Striped Bass in the Apalachicola, Chattahoochee, and Flint River (ACF) system. To evaluate the consequences and tradeoffs among management activities, population models were used to inform management decisions. Stochastic matrix models were constructed with varying recruitment and stocking rates to simulate effects of management alternatives on Gulf Striped Bass population objectives. An age-classified matrix model that incorporated stock fecundity estimates and survival estimates was used to project population growth rate. In addition, combinations of management alternatives (stocking rates, Hydrilla control, harvest regulations) were evaluated with respect to how they influenced Gulf Striped Bass population growth. Annual survival and mortality rates were estimated from catch-curve analysis, while fecundity was estimated and predicted using a linear least squares regression analysis of fish length versus egg number from hatchery brood fish data. Stocking rates and stocked-fish survival rates were estimated from census data. Results indicated that management alternatives could be an effective approach to increasing the Gulf Striped Bass population. Population abundance was greatest under maximum stocking effort, maximum Hydrilla control and a moratorium. Conversely, population abundance was lowest under no stocking, no Hydrilla control and the current harvest regulation. Stocking rates proved to be an effective management strategy; however, low survival estimates of stocked fish (1%) limited the potential for population growth. Hydrilla control increased the survival rate of stocked fish and provided higher estimates of population abundances than maximizing the stocking rate. A change in the current harvest regulation (50% harvest regulation) was not an effective alternative to increasing the Gulf Striped Bass population size. Applying a moratorium to the Gulf Striped Bass fishery increased survival rates from 50% to 74% and resulted in the largest population growth of the individual management alternatives. These results could be used by the Committee to inform management decisions for other populations of Striped Bass in the Gulf Region.
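For illustration, a minimal age-classified (Leslie) matrix projection of the kind the abstract describes, with invented fecundities, survival rates, and stocking numbers; the asymptotic growth rate is the dominant eigenvalue of the projection matrix, and stocking is added as a fixed number of age-0 recruits each year.

```python
import numpy as np

eggs = np.array([0, 0, 5e4, 1e5, 2e5])     # eggs per female by age class 0-4 (hypothetical)
egg_survival = 1e-5                        # egg-to-age-0 survival (hypothetical)
survival = [0.5, 0.6, 0.7, 0.7]            # annual survival for ages 0-3 (hypothetical)

A = np.zeros((5, 5))
A[0, :] = eggs * egg_survival              # reproduction row of the Leslie matrix
A[1, 0], A[2, 1], A[3, 2], A[4, 3] = survival
lam = np.max(np.real(np.linalg.eigvals(A)))
print(f"asymptotic growth rate lambda = {lam:.2f}")   # < 1 implies decline without stocking

# Project 20 years under a hypothetical stocking alternative of 50,000 age-0 fish per year
n = np.array([1e5, 2e4, 5e3, 2e3, 1e3])
for _ in range(20):
    n = A @ n + np.array([5e4, 0, 0, 0, 0])
print(f"abundance after 20 years with stocking: {n.sum():,.0f}")
```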
Advances in Scientific Balloon Thermal Modeling
NASA Technical Reports Server (NTRS)
Bohaboj, T.; Cathey, H. M., Jr.
2004-01-01
The National Aeronautics and Space Administration's Balloon Program office has long acknowledged that the accurate modeling of balloon performance and flight prediction is dependent on how well the balloon is thermally modeled. This ongoing effort is focused on developing accurate balloon thermal models that can be used to quickly predict balloon temperatures and balloon performance. The ability to model parametric changes is also a driver for this effort. This paper will present the most recent advances made in this area. This research effort continues to utilize the "Thermal Desktop" addition to AutoCAD for the modeling. Recent advances have been made by using this analytical tool. A number of analyses have been completed to test the applicability of this tool to the problem with very positive results. Progressively detailed models have been developed to explore the capabilities of the tool as well as to provide guidance in model formulation. A number of parametric studies have been completed. These studies have varied the shape of the structure, material properties, environmental inputs, and model geometry. These studies have concentrated on spherical "proxy models" for the initial development stages and then transitioned to the naturally shaped zero-pressure and super-pressure balloons. An assessment of required model resolution has also been determined. Model solutions have been cross-checked with known solutions via hand calculations. The comparison of these cases will also be presented. One goal is to develop analysis guidelines and an approach for modeling balloons for both simple first-order estimates and detailed full models. This paper presents the step-by-step advances made as part of this effort, capabilities, limitations, and the lessons learned. Also presented are the plans for further thermal modeling work.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oubeidillah, Abdoul A; Kao, Shih-Chieh; Ashfaq, Moetasim
2014-01-01
To extend geographical coverage, refine spatial resolution, and improve modeling efficiency, a computation- and data-intensive effort was conducted to organize a comprehensive hydrologic dataset with post-calibrated model parameters for hydro-climate impact assessment. Several key inputs for hydrologic simulation, including meteorological forcings, soil, land class, vegetation, and elevation, were collected from multiple best-available data sources and organized for 2107 hydrologic subbasins (8-digit hydrologic units, HUC8s) in the conterminous United States at a refined 1/24 degree (~4 km) spatial resolution. Using high-performance computing for intensive model calibration, a high-resolution parameter dataset was prepared for the macro-scale Variable Infiltration Capacity (VIC) hydrologic model. The VIC simulation was driven by DAYMET daily meteorological forcing and was calibrated against USGS WaterWatch monthly runoff observations for each HUC8. The results showed that this new parameter dataset may help reasonably simulate runoff at most US HUC8 subbasins. Based on this exhaustive calibration effort, it is now possible to accurately estimate the resources required for further model improvement across the entire conterminous United States. We anticipate that through this hydrologic parameter dataset, the repeated effort of fundamental data processing can be lessened, so that research efforts can emphasize the more challenging task of assessing climate change impacts. The pre-organized model parameter dataset will be provided to interested parties to support further hydro-climate impact assessment.
Modelling HIV/AIDS epidemics in sub-Saharan Africa using seroprevalence data from antenatal clinics.
Salomon, J. A.; Murray, C. J.
2001-01-01
OBJECTIVE: To improve the methodological basis for modelling the HIV/AIDS epidemics in adults in sub-Saharan Africa, with examples from Botswana, Central African Republic, Ethiopia, and Zimbabwe. Understanding the magnitude and trajectory of the HIV/AIDS epidemic is essential for planning and evaluating control strategies. METHODS: Previous mathematical models were developed to estimate epidemic trends based on sentinel surveillance data from pregnant women. In this project, we have extended these models in order to take full advantage of the available data. We developed a maximum likelihood approach for the estimation of model parameters and used numerical simulation methods to compute uncertainty intervals around the estimates. FINDINGS: In the four countries analysed, there were an estimated half a million new adult HIV infections in 1999 (range: 260 to 960 thousand), 4.7 million prevalent infections (range: 3.0 to 6.6 million), and 370 thousand adult deaths from AIDS (range: 266 to 492 thousand). CONCLUSION: While this project addresses some of the limitations of previous modelling efforts, an important research agenda remains, including the need to clarify the relationship between sentinel data from pregnant women and the epidemiology of HIV and AIDS in the general population. PMID:11477962
Multi-profile analysis of soil moisture within the U.S. Climate Reference Network
USDA-ARS?s Scientific Manuscript database
Soil moisture estimates are crucial for hydrologic modeling and agricultural decision-support efforts. These measurements are also pivotal for long-term inquiries regarding the impacts of climate change and the resulting droughts over large spatial and temporal scales. However, it has only been t...
The integrated landscape assessment project
Miles A. Hemstrom; Janine Salwasser; Joshua Halofsky; Jimmy Kagan; Cyndi Comfort
2012-01-01
The Integrated Landscape Assessment Project (ILAP) is a three-year effort that produces information, models, data, and tools to help land managers, policymakers, and others examine mid- to broad-scale (e.g., watersheds to states and larger areas) prioritization of land management actions, perform landscape assessments, and estimate potential effects of management...
Estimating Likelihood of Fetal In Vivo Interactions Using In Vitro HTS Data (Teratology meeting)
Tox21/ToxCast efforts provide in vitro concentration-response data for thousands of compounds. Predicting whether chemical-biological interactions observed in vitro will occur in vivo is challenging. We hypothesize that using a modified model from the FDA guidance for drug intera...
Power Management and Distribution (PMAD) Model Development: Final Report
NASA Technical Reports Server (NTRS)
Metcalf, Kenneth J.
2011-01-01
Power management and distribution (PMAD) models were developed in the early 1990's to model candidate architectures for various Space Exploration Initiative (SEI) missions. They were used to generate "ballpark" component mass estimates to support conceptual PMAD system design studies. The initial set of models was provided to NASA Lewis Research Center (since renamed Glenn Research Center) in 1992. They were developed to estimate the characteristics of power conditioning components predicted to be available in the 2005 timeframe. Early 90's component and device designs and material technologies were projected forward to the 2005 timeframe, and algorithms reflecting those design and material improvements were incorporated into the models to generate mass, volume, and efficiency estimates for circa 2005 components. The models are about ten years old now and NASA GRC requested a review of them to determine if they should be updated to bring them into agreement with current performance projections or to incorporate unforeseen design or technology advances. This report documents the results of this review and the updated power conditioning models and new transmission line models generated to estimate post 2005 PMAD system masses and sizes. This effort continues the expansion and enhancement of a library of PMAD models developed to allow system designers to assess future power system architectures and distribution techniques quickly and consistently.
Ultrasonic Porosity Estimation of Low-Porosity Ceramic Samples
NASA Astrophysics Data System (ADS)
Eskelinen, J.; Hoffrén, H.; Kohout, T.; Hæggström, E.; Pesonen, L. J.
2007-03-01
We report on efforts to extend the applicability of an airborne ultrasonic pulse-reflection (UPR) method towards lower porosities. UPR is a method that has been used successfully to estimate porosity and tortuosity of high porosity foams. UPR measures acoustical reflectivity of a target surface at two or more incidence angles. We used ceramic samples to evaluate the feasibility of extending the UPR range into low porosities (<35%). The validity of UPR estimates depends on pore size distribution and probing frequency as predicted by the theoretical boundary conditions of the used equivalent fluid model under the high-frequency approximation.
Population trends for North American winter birds based on hierarchical models
Soykan, Candan U.; Sauer, John; Schuetz, Justin G.; LeBaron, Geoffrey S.; Dale, Kathy; Langham, Gary M.
2016-01-01
Managing widespread and persistent threats to birds requires knowledge of population dynamics at large spatial and temporal scales. For over 100 yrs, the Audubon Christmas Bird Count (CBC) has enlisted volunteers in bird monitoring efforts that span the Americas, especially southern Canada and the United States. We employed a Bayesian hierarchical model to control for variation in survey effort among CBC circles and, using CBC data from 1966 to 2013, generated early-winter population trend estimates for 551 species of birds. Selecting a subset of species that do not frequent bird feeders and have ≥25% range overlap with the distribution of CBC circles (228 species) we further estimated aggregate (i.e., across species) trends for the entire study region and at the level of states/provinces, Bird Conservation Regions, and Landscape Conservation Cooperatives. Moreover, we examined the relationship between ten biological traits—range size, population size, migratory strategy, habitat affiliation, body size, diet, number of eggs per clutch, age at sexual maturity, lifespan, and tolerance of urban/suburban settings—and CBC trend estimates. Our results indicate that 68% of the 551 species had increasing trends within the study area over the interval 1966–2013. When trends were examined across the subset of 228 species, the median population trend for the group was 0.9% per year at the continental level. At the regional level, aggregate trends were positive in all but a few areas. Negative population trends were evident in lower latitudes, whereas the largest increases were at higher latitudes, a pattern consistent with range shifts due to climate change. Nine of 10 biological traits were significantly associated with median population trend; however, none of the traits explained >34% of the deviance in the data, reflecting the indirect relationships between population trend estimates and species traits. Trend estimates based on the CBC are broadly congruent with estimates based on the North American Breeding Bird Survey, another large-scale monitoring program. Both of these efforts, conducted by citizen scientists, will be required going forward to ensure robust inference about population dynamics in the face of climate and land cover changes.
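The published analysis uses a Bayesian hierarchical model; as a much simpler stand-in, the sketch below fits a Poisson regression with a log(party-hours) term to adjust simulated counts for survey effort, and reads the population trend (% change per year) off the year coefficient. All data here are simulated, not CBC records.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
years = np.repeat(np.arange(1966, 2014), 30)            # 30 hypothetical circles per year
effort = rng.lognormal(3.0, 0.5, years.size)            # party-hours per circle-year
true_trend = 0.009                                      # +0.9% per year, as in the text above
lam = np.exp(1.0 + true_trend * (years - 1966) + 0.8 * np.log(effort))
counts = rng.poisson(lam)

X = sm.add_constant(np.column_stack([years - 1966, np.log(effort)]))
fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
print(f"estimated trend: {100 * fit.params[1]:.2f}% per year")
```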
Finite-Size Scaling for the Baxter-Wu Model Using Block Distribution Functions
NASA Astrophysics Data System (ADS)
Velonakis, Ioannis N.; Hadjiagapiou, Ioannis A.
2018-05-01
In the present work, we present an alternative way of applying the well-known finite-size scaling (FSS) theory in the case of a Baxter-Wu model using Binder-like blocks. Binder's ideas are extended to estimate phase transition points and the corresponding scaling exponents not only for magnetic but also for energy properties, saving computational time and effort. The vast majority of our conclusions can be easily generalized to other models.
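The Baxter-Wu simulation itself is not reproduced here; this sketch only shows how a Binder-style fourth-order cumulant is computed from block magnetization samples and how it separates the ordered and disordered phases. The magnetization samples are synthetic stand-ins, not Monte Carlo output.

```python
import numpy as np

def binder_cumulant(m):
    """Fourth-order cumulant U = 1 - <m^4> / (3 <m^2>^2) of a magnetization sample."""
    return 1.0 - np.mean(m ** 4) / (3.0 * np.mean(m ** 2) ** 2)

rng = np.random.default_rng(6)
# Deep in the ordered phase the block magnetization concentrates near +/- m0, so U -> 2/3;
# deep in the disordered phase it is roughly Gaussian around zero, so U -> 0.
ordered = rng.choice([-1.0, 1.0], 100_000) * 0.9 + rng.normal(0.0, 0.02, 100_000)
disordered = rng.normal(0.0, 0.3, 100_000)
print(f"U (ordered)    ~ {binder_cumulant(ordered):.3f}")     # close to 2/3
print(f"U (disordered) ~ {binder_cumulant(disordered):.3f}")  # close to 0
```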
COSTMODL - AN AUTOMATED SOFTWARE DEVELOPMENT COST ESTIMATION TOOL
NASA Technical Reports Server (NTRS)
Roush, G. B.
1994-01-01
The cost of developing computer software consumes an increasing portion of many organizations' budgets. As this trend continues, the capability to estimate the effort and schedule required to develop a candidate software product becomes increasingly important. COSTMODL is an automated software development estimation tool which fulfills this need. Assimilating COSTMODL to any organization's particular environment can yield significant reduction in the risk of cost overruns and failed projects. This user-customization capability is unmatched by any other available estimation tool. COSTMODL accepts a description of a software product to be developed and computes estimates of the effort required to produce it, the calendar schedule required, and the distribution of effort and staffing as a function of the defined set of development life-cycle phases. This is accomplished by the five cost estimation algorithms incorporated into COSTMODL: the NASA-developed KISS model; the Basic, Intermediate, and Ada COCOMO models; and the Incremental Development model. This choice affords the user the ability to handle project complexities ranging from small, relatively simple projects to very large projects. Unique to COSTMODL is the ability to redefine the life-cycle phases of development and the capability to display a graphic representation of the optimum organizational structure required to develop the subject project, along with required staffing levels and skills. The program is menu-driven and mouse sensitive with an extensive context-sensitive help system that makes it possible for a new user to easily install and operate the program and to learn the fundamentals of cost estimation without having prior training or separate documentation. The implementation of these functions, along with the customization feature, into one program makes COSTMODL unique within the industry. COSTMODL was written for IBM PC compatibles, and it requires Turbo Pascal 5.0 or later and Turbo Professional 5.0 for recompilation. An executable is provided on the distribution diskettes. COSTMODL requires 512K RAM. The standard distribution medium for COSTMODL is three 5.25 inch 360K MS-DOS format diskettes. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. COSTMODL was developed in 1991. IBM PC is a registered trademark of International Business Machines. Borland and Turbo Pascal are registered trademarks of Borland International, Inc. Turbo Professional is a trademark of TurboPower Software. MS-DOS is a registered trademark of Microsoft Corporation.
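COSTMODL itself is a Turbo Pascal application; as an illustration of the kind of estimate it produces, the following sketch implements the Basic COCOMO effort and schedule equations (one of the five algorithms listed above) with the standard published coefficients. The 32 KLOC example input is arbitrary.

```python
# Basic COCOMO coefficients (a, b, c, d) by project mode, from the published model
COEFFS = {
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode="organic"):
    a, b, c, d = COEFFS[mode]
    effort = a * kloc ** b          # effort in person-months
    schedule = c * effort ** d      # development schedule in calendar months
    return effort, schedule, effort / schedule   # average staffing level

effort, months, staff = basic_cocomo(32, "semidetached")
print(f"{effort:.0f} person-months over {months:.1f} months (~{staff:.1f} staff)")
```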
Assessing relative abundance and reproductive success of shrubsteppe raptors
Lehman, Robert N.; Carpenter, L.B.; Steenhof, Karen; Kochert, Michael N.
1998-01-01
From 1991-1994, we quantified relative abundance and reproductive success of the Ferruginous Hawk (Buteo regalis), Northern Harrier (Circus cyaneus), Burrowing Owl (Speotyto cunicularia), and Short-eared Owl (Asio flammeus) on the shrubsteppe plateaus (benchlands) in and near the Snake River Birds of Prey National Conservation Area in southwestern Idaho. To assess relative abundance, we searched randomly selected plots using four sampling methods: point counts, line transects, and quadrats of two sizes. On a per-sampling-effort basis, transects were slightly more effective than point counts and quadrats for locating raptor nests (3.4 pairs detected/100 h of effort vs. 2.2-3.1 pairs). Random sampling using quadrats failed to detect a Short-eared Owl population increase from 1993 to 1994. To evaluate nesting success, we tried to determine reproductive outcome for all nesting attempts located during random, historical, and incidental nest searches. We compared nesting success estimates based on all nesting attempts, on attempts found during incubation, and the Mayfield model. Most pairs used to evaluate success were pairs found incidentally. Visits to historical nesting areas yielded the highest number of pairs per sampling effort (14.6/100 h), but reoccupancy rates for most species decreased through time. Estimates based on all attempts had the highest sample sizes but probably overestimated success for all species except the Ferruginous Hawk. Estimates of success based on nesting attempts found during incubation had the lowest sample sizes. All three methods yielded biased nesting success estimates for the Northern Harrier and Short-eared Owl. The estimate based on pairs found during incubation probably provided the least biased estimate for the Burrowing Owl. Assessments of nesting success were hindered by difficulties in confirming egg laying and nesting success for all species except the Ferruginous Hawk.
NASA Astrophysics Data System (ADS)
Park, Jin-Young; Lee, Dong-Eun; Kim, Byung-Soo
2017-10-01
Due to the increasing concern about climate change, efforts to reduce environmental load are continuously being made in the construction industry, and LCA (life cycle assessment) is being presented as an effective method to assess environmental load. Since LCA requires information on construction quantities for environmental load estimation, however, it is not being utilized in the environmental review in the early design phase, where it is difficult to obtain such information. In this study, a system for computing construction quantities from the standard cross sections of road drainage facilities was developed so that the quantities required for LCA can be computed using only information available in the early design phase, and a model for environmental load estimation based on these quantities was developed and verified. The results showed that the model is effective for use in the early design phase, with a mean absolute error rate of 13.39%.
NASA Astrophysics Data System (ADS)
Zheng, Yuejiu; Gao, Wenkai; Ouyang, Minggao; Lu, Languang; Zhou, Long; Han, Xuebing
2018-04-01
State-of-charge (SOC) inconsistency impacts the power, durability and safety of the battery pack. Therefore, it is necessary to measure the SOC inconsistency of the battery pack with good accuracy. We explore a novel method for modeling and estimating the SOC inconsistency of lithium-ion (Li-ion) battery pack with low computation effort. In this method, a second-order RC model is selected as the cell mean model (CMM) to represent the overall performance of the battery pack. A hypothetical Rint model is employed as the cell difference model (CDM) to evaluate the SOC difference. The parameters of mean-difference model (MDM) are identified with particle swarm optimization (PSO). Subsequently, the mean SOC and the cell SOC differences are estimated by using extended Kalman filter (EKF). Finally, we conduct an experiment on a small Li-ion battery pack with twelve cells connected in series. The results show that the evaluated SOC difference is capable of tracking the changing of actual value after a quick convergence.
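A pared-down illustration of voltage-based SOC correction: a single-cell extended Kalman filter built on an Rint model with a linearized OCV curve, rather than the paper's mean-difference / second-order RC formulation. All parameters, the OCV approximation, and the current profile are invented.

```python
import numpy as np

Q = 2.0 * 3600                     # cell capacity (A·s), hypothetical
R = 0.05                           # internal resistance (ohm), hypothetical
dt = 1.0                           # time step (s)
ocv = lambda s: 3.0 + 1.2 * s      # crude linear OCV(SOC) approximation
rng = np.random.default_rng(7)

true_soc, soc_hat, P = 0.9, 0.6, 0.1     # filter starts from a deliberately wrong SOC guess
Qproc, Rmeas = 1e-7, 1e-4                # process and measurement noise variances
for k in range(3600):
    I = 1.0 + 0.5 * np.sin(k / 300)                       # discharge current (A)
    true_soc -= I * dt / Q                                # "true" cell, for reference
    v_meas = ocv(true_soc) - I * R + rng.normal(0, 0.01)  # noisy terminal voltage
    # EKF predict: coulomb counting
    soc_hat -= I * dt / Q
    P += Qproc
    # EKF update: voltage correction (H = d OCV / d SOC = 1.2)
    H = 1.2
    K = P * H / (H * P * H + Rmeas)
    soc_hat += K * (v_meas - (ocv(soc_hat) - I * R))
    P *= (1 - K * H)

print(f"true SOC {true_soc:.3f}, estimated SOC {soc_hat:.3f}")
```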
Practical simplifications for radioimmunotherapy dosimetric models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, S.; DeNardo, G.L.; O`Donnell, R.T.
1999-01-01
Radiation dosimetry is potentially useful for assessment and prediction of efficacy and toxicity for radionuclide therapy. The usefulness of these dose estimates relies on the establishment of a dose-response model using accurate pharmacokinetic data and a radiation dosimetric model. Due to the complexity in radiation dose estimation, many practical simplifications have been introduced in the dosimetric modeling for clinical trials of radioimmunotherapy. Although research efforts are generally needed to improve the simplifications used at each stage of model development, practical simplifications are often possible for specific applications without significant consequences to the dose-response model. In the development of dosimetric methods for radioimmunotherapy, practical simplifications in the dosimetric models were introduced. This study evaluated the magnitude of uncertainty associated with practical simplifications for: (1) organ mass of the MIRD phantom; (2) radiation contribution from target alone; (3) interpolation of S value; (4) macroscopic tumor uniformity; and (5) fit of tumor pharmacokinetic data.
Bayesian spatio-temporal modeling of particulate matter concentrations in Peninsular Malaysia
NASA Astrophysics Data System (ADS)
Manga, Edna; Awang, Norhashidah
2016-06-01
This article presents an application of a Bayesian spatio-temporal Gaussian process (GP) model to particulate matter concentrations from Peninsular Malaysia. We analyze daily PM10 concentration levels from 35 monitoring sites in June and July 2011. The spatio-temporal model, set in a Bayesian hierarchical framework, allows for inclusion of informative covariates, meteorological variables and spatio-temporal interactions. Posterior density estimates of the model parameters are obtained by Markov chain Monte Carlo methods. Preliminary data analysis indicates that information on PM10 levels at sites classified as industrial locations could explain part of the space-time variation. We include the site-type indicator in our modeling efforts. Parameter estimates for the fitted GP model show significant spatio-temporal structure and a positive effect of the location-type explanatory variable. We also compute validation criteria for the out-of-sample sites that show the adequacy of the model for predicting PM10 at unmonitored sites.
NASA Astrophysics Data System (ADS)
Jain, Jalaj; Prakash, Ram; Vyas, Gheesa Lal; Pal, Udit Narayan; Chowdhuri, Malay Bikas; Manchanda, Ranjana; Halder, Nilanjan; Choyal, Yaduvendra
2015-12-01
In the present work, an effort has been made to estimate plasma parameters (electron density, electron temperature, ground state atom density, ground state ion density and metastable state density) simultaneously from the observed visible spectra of a Penning plasma discharge (PPD) source using least squares fitting. The analysis is performed for the prominently observed neutral helium lines. The Atomic Data and Analysis Structure (ADAS) database is used to provide the required collisional-radiative (CR) photon emissivity coefficient (PEC) values under the optically thin plasma condition. Under this condition, the plasma temperature estimated from the PPD is found to be rather high. The inclusion of opacity in the observed spectral lines through the PECs, together with the addition of diffusion of neutrals and metastable state species in the CR-model code analysis, improves the electron temperature estimation in the simultaneous measurement.
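A schematic of this kind of simultaneous fit, with entirely hypothetical emissivity functions standing in for the ADAS photon emissivity coefficients and synthetic "measured" intensities:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical sketch: simultaneous fit of electron temperature Te and density ne
# so that modelled line intensities match measured ones. The emissivity functions
# below are toy stand-ins for ADAS collisional-radiative PEC data.
def model_intensities(params, n0):
    te, ne = params
    pec = np.array([1.0e-9 * te**0.5, 2.0e-9 * te**0.3, 0.5e-9 * te**0.8])  # toy PECs
    return n0 * ne * pec

n0 = 1.0e13                                   # assumed ground state atom density (cm^-3)
rng = np.random.default_rng(0)
true_params = [8.0, 3.0e12]                   # "true" Te (eV) and ne (cm^-3)
measured = model_intensities(true_params, n0) * (1.0 + 0.05 * rng.standard_normal(3))

def residuals(params):
    return model_intensities(params, n0) - measured

fit = least_squares(residuals, x0=[5.0, 1.0e12], x_scale=[1.0, 1e12],
                    bounds=([0.5, 1e10], [50.0, 1e14]))
te_fit, ne_fit = fit.x
print(f"fitted Te ~ {te_fit:.2f} eV, ne ~ {ne_fit:.2e} cm^-3 (toy values)")
```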
CAN A MODEL TRANSFERABILITY FRAMEWORK IMPROVE ...
Budget constraints and policies that limit primary data collection have fueled a practice of transferring estimates (or models to generate estimates) of ecological endpoints from sites where primary data exist to sites where little to no primary data were collected. Whereas benefit transfer has been well studied, there is no comparable framework for evaluating whether model transfer between sites is justifiable. We developed and applied a transferability assessment framework to a case study involving forest carbon sequestration for soils in Tillamook Bay, Oregon. The carbon sequestration capacity of forested watersheds is an important ecosystem service in the effort to reduce atmospheric greenhouse gas emissions. We used our framework, incorporating three basic steps (model selection, defining context variables, assessing logistical constraints) for evaluating model transferability, to compare estimates of carbon storage capacity derived from two models, COMET-Farm and Yasso. We applied each model to Tillamook Bay and compared results to data extracted from the Soil Survey Geographic Database (SSURGO) using ArcGIS. Context variables considered were: geographic proximity to Tillamook, dominant tree species, climate and soil type. Preliminary analyses showed that estimates from COMET-Farm were more similar to SSURGO data, likely because model context variables (e.g. proximity to Tillamook and dominant tree species) were identical to those in Tillamook. In contras
CubeSat mission design software tool for risk estimating relationships
NASA Astrophysics Data System (ADS)
Gamble, Katharine Brumbaugh; Lightsey, E. Glenn
2014-09-01
In an effort to make the CubeSat risk estimation and management process more scientific, a software tool has been created that enables mission designers to estimate mission risks. CubeSat mission designers are able to input mission characteristics, such as form factor, mass, development cycle, and launch information, in order to determine the mission risk root causes which historically present the highest risk for their mission. Historical data was collected from the CubeSat community and analyzed to provide a statistical background to characterize these Risk Estimating Relationships (RERs). This paper develops and validates the mathematical model based on the same cost estimating relationship methodology used by the Unmanned Spacecraft Cost Model (USCM) and the Small Satellite Cost Model (SSCM). The RER development uses general error regression models to determine the best fit relationship between root cause consequence and likelihood values and the input factors of interest. These root causes are combined into seven overall CubeSat mission risks which are then graphed on the industry-standard 5×5 Likelihood-Consequence (L-C) chart to help mission designers quickly identify areas of concern within their mission. This paper is the first to document not only the creation of a historical database of CubeSat mission risks, but, more importantly, the scientific representation of Risk Estimating Relationships.
NASA Astrophysics Data System (ADS)
Farmer, W. H.; Archfield, S. A.; Over, T. M.; Kiang, J. E.
2015-12-01
In the United States and across the globe, the majority of stream reaches and rivers are substantially impacted by water use or remain ungaged. The result is large gaps in the availability of natural streamflow records from which to infer hydrologic understanding and inform water resources management. From basin-specific to continent-wide scales, many efforts have been undertaken to develop methods to estimate ungaged streamflow. This work applies and contrasts several statistical models of daily streamflow to more than 1,700 reference-quality streamgages across the conterminous United States using a cross-validation methodology. The variability of streamflow simulation performance across the country exhibits a pattern familiar from other continental-scale modeling efforts performed for the United States. For portions of the West Coast and the dense, relatively homogeneous and humid regions of the eastern United States, models produce reliable estimates of daily streamflow using many different prediction methods. Model performance for the middle portion of the United States, marked by more heterogeneous and arid conditions, and with larger contributing areas and sparser networks of streamgages, is consistently poor. A discussion of the difficulty of statistical interpolation and regionalization in these regions raises additional questions of data availability and quality, hydrologic process representation and dominance, and intrinsic variability.
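The cross-validation logic behind such an evaluation can be sketched as follows, assuming a hypothetical regionalization function `estimate_at_ungaged` and using Nash-Sutcliffe efficiency as one common performance metric (the study itself may report different metrics):

```python
import numpy as np

# Hypothetical leave-one-out cross-validation sketch for ungaged streamflow
# estimation. Each gage is treated as ungaged in turn and predicted from the rest.
def nash_sutcliffe(obs, sim):
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def estimate_at_ungaged(target_id, donor_flows):
    # Placeholder regionalization: average of donor hydrographs.
    # A real method would use drainage-area ratios, regression, or kriging.
    return np.mean(list(donor_flows.values()), axis=0)

# daily flows at a few hypothetical gages
flows = {g: np.random.default_rng(i).gamma(2.0, 5.0, size=365) for i, g in enumerate("ABCD")}

for gage, obs in flows.items():
    donors = {g: q for g, q in flows.items() if g != gage}
    sim = estimate_at_ungaged(gage, donors)
    print(f"gage {gage}: NSE = {nash_sutcliffe(obs, sim):.2f}")
```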
Structure/activity relationships for biodegradability and their role in environmental assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boethling, R.S.
1994-12-31
Assessment of biodegradability is an important part of the review process for both new and existing chemicals under the Toxic Substances Control Act. It is often necessary to estimate biodegradability because experimental data are unavailable. Structure/biodegradability relationships (SBR) are a means to this end. Quantitative SBR have been developed, but this approach has not been very useful because they apply only to a few narrowly defined classes of chemicals. In response to the need for more widely applicable methods, multivariate analysis has been used to develop biodegradability classification models. For example, recent efforts have produced four new models. Two calculate the probability of rapid biodegradation and can be used for classification; the other two models allow semi-quantitative estimation of primary and ultimate biodegradation rates. All are based on multiple regressions against 36 preselected substructures plus molecular weight. Such efforts have been fairly successful by statistical criteria, but in general are hampered by a lack of large and consistent datasets. Knowledge-based expert systems may represent the next step in the evolution of SBR. In principle such systems need not be as severely limited by imperfect datasets. However, the codification of expert knowledge and reasoning is a critical prerequisite. Results of knowledge acquisition exercises and modeling based on them will also be described.
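The classification models described regress biodegradability against counts of preselected substructures plus molecular weight; a hedged sketch of that style of model, using a logistic-regression stand-in and invented fragment counts and labels rather than the published regressions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical sketch of a fragment-based biodegradability classifier.
# Columns: counts of a few substructures plus molecular weight; labels:
# 1 = rapidly biodegradable, 0 = not. All values are invented for illustration.
X = np.array([
    # ester, aromatic_ring, halogen, mol_weight
    [1, 0, 0, 130.0],
    [0, 2, 1, 250.0],
    [2, 0, 0, 174.0],
    [0, 1, 3, 310.0],
    [1, 1, 0, 200.0],
    [0, 2, 2, 290.0],
])
y = np.array([1, 0, 1, 0, 1, 0])

clf = LogisticRegression().fit(X, y)
new_chemical = np.array([[1, 1, 0, 180.0]])
print("P(rapid biodegradation) =", clf.predict_proba(new_chemical)[0, 1])
```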
An evaluation of sex-age-kill (SAK) model performance
Millspaugh, Joshua J.; Skalski, John R.; Townsend, Richard L.; Diefenbach, Duane R.; Boyce, Mark S.; Hansen, Lonnie P.; Kammermeyer, Kent
2009-01-01
The sex-age-kill (SAK) model is widely used to estimate abundance of harvested large mammals, including white-tailed deer (Odocoileus virginianus). Despite a long history of use, few formal evaluations of SAK performance exist. We investigated how violations of the stable age distribution and stationary population assumption, changes to male or female harvest, stochastic effects (i.e., random fluctuations in recruitment and survival), and sampling efforts influenced SAK estimation. When the simulated population had a stable age distribution and λ > 1, the SAK model underestimated abundance. Conversely, when λ < 1, the SAK overestimated abundance. When changes to male harvest were introduced, SAK estimates were opposite the true population trend. In contrast, SAK estimates were robust to changes in female harvest rates. Stochastic effects caused SAK estimates to fluctuate about their equilibrium abundance, but the effect dampened as the size of the surveyed population increased. When we considered both stochastic effects and sampling error at a deer management unit scale the resultant abundance estimates were within ±121.9% of the true population level 95% of the time. These combined results demonstrate extreme sensitivity to model violations and scale of analysis. Without changes to model formulation, the SAK model will be biased when λ ≠ 1. Furthermore, any factor that alters the male harvest rate, such as changes to regulations or changes in hunter attitudes, will bias population estimates. Sex-age-kill estimates may be precise at large spatial scales, such as the state level, but less so at the individual management unit level. Alternative models, such as statistical age-at-harvest models, which require similar data types, might allow for more robust, broad-scale demographic assessments.
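The SAK calculation itself is essentially an accounting identity built from harvest and ratio data; a simplified sketch of the usual bookkeeping (not the exact formulation evaluated in the paper, and with invented inputs):

```python
# Simplified sex-age-kill (SAK) style accounting, purely illustrative.
# Inputs would normally come from harvest registration and age/sex ratio surveys.
buck_harvest = 2400            # adult males harvested
buck_harvest_rate = 0.40       # estimated harvest rate of adult males
does_per_buck = 2.1            # adult sex ratio in the pre-harvest population
fawns_per_doe = 0.9            # recruitment ratio

adult_males = buck_harvest / buck_harvest_rate
adult_females = adult_males * does_per_buck
fawns = adult_females * fawns_per_doe
total = adult_males + adult_females + fawns
print(f"pre-harvest abundance estimate: {total:.0f} deer")
```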
MESSOC capabilities and results. [Model for Estimating Space Station Operations Costs]
NASA Technical Reports Server (NTRS)
Shishko, Robert
1990-01-01
MESSOC (Model for Estimating Space Station Operations Costs) is the result of a multi-year effort by NASA to understand and model the mature operations cost of Space Station Freedom. This paper focuses on MESSOC's ability to contribute to life-cycle cost analyses through its logistics equations and databases. Together, these afford MESSOC the capability to project not only annual logistics costs for a variety of Space Station scenarios, but critical non-cost logistics results such as annual Station maintenance crewhours, upweight/downweight, and on-orbit sparing availability as well. MESSOC results using current logistics databases and baseline scenario have already shown important implications for on-orbit maintenance approaches, space transportation systems, and international operations cost sharing.
Soft computing techniques toward modeling the water supplies of Cyprus.
Iliadis, L; Maris, F; Tachos, S
2011-10-01
This research effort aims at the application of soft computing techniques to water resources management. More specifically, the target is the development of reliable soft computing models capable of estimating the water supply for the "Germasogeia" mountainous watersheds in Cyprus. Initially, ε-Regression Support Vector Machines (ε-RSVM) and fuzzy weighted ε-RSVMR models have been developed that accept five input parameters. At the same time, reliable artificial neural networks have been developed to perform the same job. The 5-fold cross-validation approach has been employed in order to eliminate bad local behaviors and to produce a more representative training data set. Thus, the fuzzy weighted Support Vector Regression (SVR) combined with the fuzzy partition has been employed in an effort to enhance the quality of the results. Several rational and reliable models have been produced that can enhance the efficiency of water policy designers. Copyright © 2011 Elsevier Ltd. All rights reserved.
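A minimal sketch of an ε-SVR evaluated with 5-fold cross-validation, assuming five synthetic input parameters in place of the hydrometeorological inputs used in the study:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

# Hypothetical sketch: epsilon-SVR with 5-fold cross-validation for water
# supply estimation. The five inputs and the target are synthetic stand-ins
# for the watershed parameters used in the study.
rng = np.random.default_rng(42)
X = rng.normal(size=(100, 5))                    # e.g. rainfall, temperature, ...
y = 2.0 * X[:, 0] - 0.5 * X[:, 3] + rng.normal(scale=0.2, size=100)

model = SVR(kernel="rbf", C=10.0, epsilon=0.1)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("5-fold R^2 scores:", np.round(scores, 3))
```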
Evaluating the capabilities of watershed-scale models in estimating sediment yield at field-scale.
Sommerlot, Andrew R; Nejadhashemi, A Pouyan; Woznicki, Sean A; Giri, Subhasis; Prohaska, Michael D
2013-09-30
Many watershed model interfaces have been developed in recent years for predicting field-scale sediment loads. They share the goal of providing data for decisions aimed at improving watershed health and the effectiveness of water quality conservation efforts. The objectives of this study were to: 1) compare three watershed-scale models (the Soil and Water Assessment Tool (SWAT), Field_SWAT, and the High Impact Targeting (HIT) model) against a calibrated field-scale model (RUSLE2) in estimating sediment yield from 41 randomly selected agricultural fields within the River Raisin watershed; 2) evaluate the statistical significance of differences among models; 3) assess the watershed models' capabilities in identifying areas of concern at the field level; 4) evaluate the reliability of the watershed-scale models for field-scale analysis. The SWAT model produced the estimates most similar to RUSLE2, providing the closest median and the lowest absolute error in sediment yield predictions, while the HIT model estimates were the worst. Concerning statistically significant differences between models, SWAT was the only model found to be not significantly different from the calibrated RUSLE2 at α = 0.05. Meanwhile, all models were incapable of identifying priority areas similarly to the RUSLE2 model. Overall, SWAT provided the most correct estimates (51%) within the uncertainty bounds of RUSLE2 and is the most reliable among the studied models, while HIT is the least reliable. The results of this study suggest that caution should be exercised when using watershed-scale models for field-level decision-making, and that field-specific data are of paramount importance. Copyright © 2013 Elsevier Ltd. All rights reserved.
Validation of a novel air toxic risk model with air monitoring.
Pratt, Gregory C; Dymond, Mary; Ellickson, Kristie; Thé, Jesse
2012-01-01
Three modeling systems were used to estimate human health risks from air pollution: two versions of MNRiskS (for Minnesota Risk Screening), and the USEPA National Air Toxics Assessment (NATA). MNRiskS is a unique cumulative risk modeling system used to assess risks from multiple air toxics, sources, and pathways on a local to a state-wide scale. In addition, ambient outdoor air monitoring data were available for estimation of risks and comparison with the modeled estimates of air concentrations. Highest air concentrations and estimated risks were generally found in the Minneapolis-St. Paul metropolitan area and lowest risks in undeveloped rural areas. Emissions from mobile and area (nonpoint) sources created greater estimated risks than emissions from point sources. Highest cancer risks were via ingestion pathway exposures to dioxins and related compounds. Diesel particles, acrolein, and formaldehyde created the highest estimated inhalation health impacts. Model-estimated air concentrations were generally highest for NATA and lowest for the AERMOD version of MNRiskS. This validation study showed reasonable agreement between available measurements and model predictions, although results varied among pollutants, and predictions were often lower than measurements. The results increased confidence in identifying pollutants, pathways, geographic areas, sources, and receptors of potential concern, and thus provide a basis for informing pollution reduction strategies and focusing efforts on specific pollutants (diesel particles, acrolein, and formaldehyde), geographic areas (urban centers), and source categories (nonpoint sources). The results heighten concerns about risks from food chain exposures to dioxins and PAHs. Risk estimates were sensitive to variations in methodologies for treating emissions, dispersion, deposition, exposure, and toxicity. © 2011 Society for Risk Analysis.
Sampling effort needed to estimate condition and species richness in the Ohio River, USA
The level of sampling effort required to characterize fish assemblage condition in a river for the purposes of bioassessment may be estimated via different approaches. However, the goal with any approach is to determine the minimum level of effort necessary to reach some specific...
Integration, Validation, and Application of a PV Snow Coverage Model in SAM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ryberg, David; Freeman, Janine
2015-09-01
Due to the increasing deployment of PV systems in snowy climates, there is significant interest in a method capable of estimating PV losses resulting from snow coverage that has been verified for a wide variety of system designs and locations. A scattering of independent snow coverage models have been developed over the last 15 years; however, there has been very little effort spent on verifying these models beyond the system design and location on which they were based. Moreover, none of the major PV modeling software products have incorporated any of these models into their workflow. In response to this deficiency, we have integrated the methodology of the snow model developed in the paper by Marion et al. [1] into the National Renewable Energy Laboratory's (NREL) System Advisor Model (SAM). In this work we describe how the snow model is implemented in SAM and discuss our demonstration of the model's effectiveness at reducing error in annual estimations for two PV arrays. Following this, we use this new functionality in conjunction with a long term historical dataset to estimate average snow losses across the United States for a typical PV system design. The open availability of the snow loss estimation capability in SAM to the PV modeling community, coupled with our results of the nation-wide study, will better equip the industry to accurately estimate PV energy production in areas affected by snowfall.
NASA Astrophysics Data System (ADS)
Beamer, J. P.; Hill, D. F.; Liston, G. E.; Arendt, A. A.; Hood, E. W.
2013-12-01
In Prince William Sound (PWS), Alaska, there is a pressing need for accurate estimates of the spatial and temporal variations in coastal freshwater discharge (FWD). FWD into PWS originates from streamflow due to rainfall, annual snowmelt, and changes in stored glacier mass and is important because it helps establish spatial and temporal patterns in ocean salinity and temperature, and is a time-varying boundary condition for oceanographic circulation models. Previous efforts to model FWD into PWS have been heavily empirical, with many physical processes absorbed into calibration coefficients that, in many cases, were calibrated to streams and rivers not hydrologically similar to those discharging into PWS. In this work we adapted and validated a suite of high-resolution (in space and time), physically-based, distributed weather, snowmelt, and runoff-routing models designed specifically for snow melt- and glacier melt-dominated watersheds like PWS in order to: 1) provide high-resolution, real-time simulations of snowpack and FWD, and 2) provide a record of historical variations of FWD. SnowModel, driven with gridded topography, land cover, and 32 years (1979-2011) of 3-hourly North American Regional Reanalysis (NARR) atmospheric forcing data, was used to simulate snowpack accumulation and melt across a PWS model domain. SnowModel outputs of daily snow water equivalent (SWE) depth and grid-cell runoff volumes were then coupled with HydroFlow, a runoff-routing model which routed snowmelt, glacier-melt, and rainfall to each watershed outlet (PWS coastline) in the simulation domain. The end product was a continuous 32-year simulation of daily FWD into PWS. In order to validate the models, SWE and snow depths from SnowModel were compared with observed SWE and snow depths from SnoTel and snow survey data, and discharge from HydroFlow was compared with observed streamflow measurements. As a second phase of this research effort, the coupled models will be set-up to run in real-time, where daily measurements from weather stations in the PWS will be used to drive simulations of snow cover and streamflow. In addition, we will deploy a strategic array of instrumentation aimed at validating the simulated weather estimates and the calculations of freshwater discharge. Upon successful implementation and validation of the modeling system, it will join established and ongoing computational and observational efforts that have a common goal of establishing a comprehensive understanding of the physical behavior of PWS.
Assessing Forest NPP: BIOME-BGC Predictions versus BEF Derived Estimates
NASA Astrophysics Data System (ADS)
Hasenauer, H.; Pietsch, S. A.; Petritsch, R.
2007-05-01
Forest productivity has always been a major issue within sustainable forest management. While in the past terrestrial forest inventory data have been the major source for assessing forest productivity, recent developments in ecosystem modeling offer an alternative approach using ecosystem models such as Biome-BGC to estimate Net Primary Production (NPP). In this study we compare two terrestrially driven approaches for assessing NPP: (i) estimates from a species-specific adaptation of the biogeochemical ecosystem model BIOME-BGC calibrated for Alpine conditions; and (ii) NPP estimates derived from inventory data using biomass expansion factors (BEF). The forest inventory data come from 624 sample plots across Austria, consist of repeated individual tree observations, and include growth as well as soil and humus information. These locations are covered with spruce, beech, oak, pine and larch stands, thus addressing the main Austrian forest types. 144 locations were previously used in a validation effort to produce species-specific parameter estimates of the ecosystem model. The remaining 480 sites are from the Austrian National Forest Soil Survey carried out at the Federal Research and Training Centre for Forests, Natural Hazards and Landscape (BFW). Using diameter at breast height (dbh) and height (h), the volume and subsequently the biomass of individual trees were calculated, aggregated for the whole forest stand, and compared with the model output. Regression analyses were performed for both volume and biomass estimates.
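A minimal sketch of the inventory-based (BEF) side of such a comparison, with invented allometric constants and expansion factors rather than the Austrian values used in the study:

```python
import math

# Hypothetical BEF-based stand biomass estimate from tree-level dbh and height.
# Form factor, wood density and the biomass expansion factor are invented values.
FORM_FACTOR = 0.5          # stem form factor (-)
WOOD_DENSITY = 0.45        # t dry matter per m^3 stem wood
BEF = 1.4                  # expands stem biomass to whole-tree biomass

trees = [                  # (dbh in m, height in m) for a small sample plot
    (0.32, 24.0), (0.27, 21.5), (0.41, 28.0), (0.19, 16.0),
]

stand_biomass = 0.0
for dbh, h in trees:
    basal_area = math.pi * (dbh / 2.0) ** 2            # m^2
    stem_volume = FORM_FACTOR * basal_area * h          # m^3
    stand_biomass += stem_volume * WOOD_DENSITY * BEF   # t dry matter

print(f"plot-level tree biomass: {stand_biomass:.2f} t dry matter")
```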
Lee, Karl K.; Risley, John C.
2002-03-19
Precipitation-runoff models, base-flow-separation techniques, and stream gain-loss measurements were used to study recharge and ground-water surface-water interaction as part of a study of the ground-water resources of the Willamette River Basin. The study was a cooperative effort between the U.S. Geological Survey and the State of Oregon Water Resources Department. Precipitation-runoff models were used to estimate the water budget of 216 subbasins in the Willamette River Basin. The models were also used to compute long-term average recharge and base flow. Recharge and base-flow estimates will be used as input to a regional ground-water flow model, within the same study. Recharge and base-flow estimates were made using daily streamflow records. Recharge estimates were made at 16 streamflow-gaging-station locations and were compared to recharge estimates from the precipitation-runoff models. Base-flow separation methods were used to identify the base-flow component of streamflow at 52 currently operated and discontinued streamflow-gaging-station locations. Stream gain-loss measurements were made on the Middle Fork Willamette, Willamette, South Yamhill, Pudding, and South Santiam Rivers, and were used to identify and quantify gaining and losing stream reaches both spatially and temporally. These measurements provide further understanding of ground-water/surface-water interactions.
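One widely used hydrograph-separation approach (not necessarily the method applied in this study) is the Lyne-Hollick recursive digital filter; a compact sketch:

```python
import numpy as np

# Lyne-Hollick one-parameter recursive digital filter for base-flow separation,
# shown as one common technique; the study may have used other separation methods.
def baseflow_separation(q, alpha=0.925):
    q = np.asarray(q, dtype=float)
    quick = np.zeros_like(q)
    for k in range(1, len(q)):
        f = alpha * quick[k - 1] + 0.5 * (1 + alpha) * (q[k] - q[k - 1])
        quick[k] = max(f, 0.0)
    return np.clip(q - quick, 0.0, q)

daily_q = [12.0, 11.5, 30.0, 55.0, 40.0, 25.0, 18.0, 15.0, 13.5, 12.8]  # hypothetical cfs
base = baseflow_separation(daily_q)
bfi = base.sum() / sum(daily_q)            # base-flow index
print("base flow:", np.round(base, 1), " BFI =", round(bfi, 2))
```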
So, Emily; Spence, Robin
2013-01-01
Recent earthquakes such as the Haiti earthquake of 12 January 2010 and the Qinghai earthquake on 14 April 2010 have highlighted the importance of rapid estimation of casualties after the event for humanitarian response. Both of these events resulted in surprisingly high death tolls, casualties and survivors made homeless. In the Mw = 7.0 Haiti earthquake, over 200,000 people perished, with more than 300,000 reported injuries and 2 million made homeless. The Mw = 6.9 earthquake in Qinghai resulted in over 2,000 deaths, with a further 11,000 people suffering serious or moderate injuries and 100,000 people left homeless in this mountainous region of China. In such events, relief efforts can benefit significantly from the rapid estimation and mapping of expected casualties. This paper contributes to ongoing global efforts to estimate probable earthquake casualties very rapidly after an earthquake has taken place. The analysis uses the assembled empirical damage and casualty data in the Cambridge Earthquake Impacts Database (CEQID) and explores data by event and across events to test the relationships of building and fatality distributions to the main explanatory variables of building type, building damage level and earthquake intensity. The prototype global casualty estimation model described here uses a semi-empirical approach that estimates damage rates for different classes of buildings present in the local building stock, and then relates fatality rates to the damage rates of each class of buildings. This approach accounts for the effect on casualties of the very different types of buildings (by climatic zone, urban or rural location, culture, income level, etc.). The resulting casualty parameters were tested against the overall casualty data from several historical earthquakes in CEQID; a reasonable fit was found.
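The semi-empirical chain described (building class, damage rate given intensity, fatality rate given damage) can be sketched as a simple expected-value calculation with invented rates:

```python
# Hypothetical sketch of a semi-empirical casualty estimate:
# expected fatalities = sum over building classes of
#   occupants * P(heavy damage | intensity) * P(death | heavy damage).
building_stock = {
    # class: (occupants exposed, damage rate at given intensity, fatality rate)
    "unreinforced masonry": (50_000, 0.30, 0.10),
    "reinforced concrete":  (80_000, 0.12, 0.05),
    "timber frame":         (30_000, 0.05, 0.01),
}

expected_fatalities = sum(
    occupants * damage_rate * fatality_rate
    for occupants, damage_rate, fatality_rate in building_stock.values()
)
print(f"expected fatalities: {expected_fatalities:,.0f}")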
Heinonen, Johannes P. M.; Palmer, Stephen C. F.; Redpath, Steve M.; Travis, Justin M. J.
2014-01-01
Individual-based models have gained popularity in ecology, and enable simultaneous incorporation of spatial explicitness and population dynamic processes to understand spatio-temporal patterns of populations. We introduce an individual-based model for understanding and predicting spatial hen harrier (Circus cyaneus) population dynamics in Great Britain. The model uses a landscape with habitat, prey and game management indices. The hen harrier population was initialised according to empirical census estimates for 1988/89 and simulated until 2030, and predictions for 1998, 2004 and 2010 were compared to empirical census estimates for respective years. The model produced a good qualitative match to overall trends between 1989 and 2010. Parameter explorations revealed relatively high elasticity in particular to demographic parameters such as juvenile male mortality. This highlights the need for robust parameter estimates from empirical research. There are clearly challenges for replication of real-world population trends, but this model provides a useful tool for increasing understanding of drivers of hen harrier dynamics and focusing research efforts in order to inform conflict management decisions. PMID:25405860
The Cost of Smoking in California.
Max, Wendy; Sung, Hai-Yen; Shi, Yanling; Stark, Brad
2016-05-01
The economic impact of smoking, including healthcare costs and the value of lost productivity due to illness and mortality, was estimated for California for 2009. Smoking-attributable healthcare costs were estimated using a series of econometric models that estimate expenditures for hospital care, ambulatory care, prescriptions, home health care, and nursing home care. Lost productivity due to illness was estimated using an econometric model predicting how smoking status affects the number of days lost from work or other activities. The value of lives lost from premature mortality due to smoking was estimated using an epidemiological approach. Almost 4 million Californians still smoke, including 146 000 adolescents. The cost of smoking in 2009 totaled $18.1 billion, including $9.8 billion in healthcare costs, $1.4 billion in lost productivity from illness, and $6.8 billion in lost productivity from premature mortality. This amounts to $487 per California resident and $4603 per smoker. Costs were greater for men than for women. Hospital costs comprised 44% of healthcare costs. Despite extensive efforts at tobacco control in California, healthcare and lost productivity costs attributable to smoking remain high. Compared to costs for 1999, the total cost was 15% greater in 2009. However, after adjusting for inflation, real costs have fallen by 13% over the past decade, indicating that efforts have been successful in reducing the economic burden of smoking in the state. © The Author 2015. Published by Oxford University Press on behalf of the Society for Research on Nicotine and Tobacco. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Alderman, Phillip D.; Stanfill, Bryan
2016-10-06
Recent international efforts have brought renewed emphasis on the comparison of different agricultural systems models. Thus far, analysis of model-ensemble simulated results has not clearly differentiated between ensemble prediction uncertainties due to model structural differences per se and those due to parameter value uncertainties. Additionally, despite increasing use of Bayesian parameter estimation approaches with field-scale crop models, inadequate attention has been given to the full posterior distributions for estimated parameters. The objectives of this study were to quantify the impact of parameter value uncertainty on prediction uncertainty for modeling spring wheat phenology using Bayesian analysis and to assess the relative contributions of model-structure-driven and parameter-value-driven uncertainty to overall prediction uncertainty. This study used a random walk Metropolis algorithm to estimate parameters for 30 spring wheat genotypes using nine phenology models based on multi-location trial data for days to heading and days to maturity. Across all cases, parameter-driven uncertainty accounted for between 19 and 52% of predictive uncertainty, while model-structure-driven uncertainty accounted for between 12 and 64%. Here, this study demonstrated the importance of quantifying both model-structure- and parameter-value-driven uncertainty when assessing overall prediction uncertainty in modeling spring wheat phenology. More generally, Bayesian parameter estimation provided a useful framework for quantifying and analyzing sources of prediction uncertainty.
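A minimal random-walk Metropolis sampler of the kind referenced, applied here to a toy one-parameter phenology-like model (required thermal time to heading) with synthetic data, purely as an illustration of the algorithm rather than any of the nine models used in the study:

```python
import numpy as np

# Random-walk Metropolis sketch for a single parameter (required thermal time),
# with a synthetic dataset of observed days to heading.
rng = np.random.default_rng(1)
true_gdd = 900.0
mean_daily_gdd = 12.0
obs_days = true_gdd / mean_daily_gdd + rng.normal(0.0, 2.0, size=20)

def log_posterior(gdd):
    if not (500.0 < gdd < 1500.0):          # flat prior on a plausible range
        return -np.inf
    pred = gdd / mean_daily_gdd
    return -0.5 * np.sum((obs_days - pred) ** 2) / (2.0 ** 2)

chain, current = [], 800.0
logp = log_posterior(current)
for _ in range(5000):
    proposal = current + rng.normal(0.0, 10.0)   # symmetric random-walk proposal
    logp_prop = log_posterior(proposal)
    if np.log(rng.uniform()) < logp_prop - logp:
        current, logp = proposal, logp_prop
    chain.append(current)

posterior = np.array(chain[1000:])               # discard burn-in
print(f"posterior mean = {posterior.mean():.1f}, 95% interval = "
      f"({np.quantile(posterior, 0.025):.1f}, {np.quantile(posterior, 0.975):.1f})")
```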
NASA Astrophysics Data System (ADS)
Castro-Bolinaga, C. F.; Zavaleta, E. R.; Diplas, P.
2015-03-01
This paper presents the preliminary results of a coupled modelling effort to study the fate of tailings (radioactive waste by-product) downstream of the Coles Hill uranium deposit located in Virginia, USA. The implementation of the overall modelling process includes a one-dimensional hydraulic model to qualitatively characterize the sediment transport process under severe flooding conditions downstream of the potential mining site, a two-dimensional ANSYS Fluent model to simulate the release of tailings from a containment cell located partially above the local ground surface into the nearby streams, and a one-dimensional finite-volume sediment transport model to examine the propagation of a tailings sediment pulse in the river network located downstream. The findings of this investigation aim to assist in estimating the potential impacts that tailings would have if they were transported into rivers and reservoirs located downstream of the Coles Hill deposit that serve as municipal drinking water supplies.
NASA Astrophysics Data System (ADS)
Bell, Kevin D.; Dafesh, Philip A.; Hsu, L. A.; Tsuda, A. S.
1995-12-01
Current architectural and design trade techniques often carry unaffordable alternatives late into the decision process. Early decisions made during the concept exploration and development (CE&D) phase will drive the cost of a program more than any other phase of development; thus, designers must be able to assess both the performance and cost impacts of their early choices. The Space Based Infrared System (SBIRS) cost engineering model (CEM) described in this paper is an end-to-end process integrating engineering and cost expertise through commonly available spreadsheet software, allowing for concurrent design engineering and cost estimation to identify and balance system drivers to reduce acquisition costs. The automated interconnectivity between subsystem models using spreadsheet software allows for the quick and consistent assessment of the system design impacts and relative cost impacts due to requirement changes. It is different from most CEM efforts attempted in the past as it incorporates more detailed spacecraft and sensor payload models, and has been applied to determine the cost drivers for an advanced infrared satellite system acquisition. The CEM comprises integrated detailed engineering and cost estimating relationships describing performance, design, and cost parameters. Detailed models have been developed to evaluate design parameters for the spacecraft bus and sensor; both step-starer and scanner sensor types incorporate models of focal plane array, optics, processing, thermal, communications, and mission performance. The current CEM effort has provided visibility into requirements, design, and cost drivers for system architects and decision makers to determine the configuration of an infrared satellite architecture that meets essential requirements cost effectively. In general, the methodology described in this paper consists of process building blocks that can be tailored to the needs of many applications. Descriptions of the spacecraft and payload subsystem models provide insight into The Aerospace Corporation's expertise and the scope of the SBIRS concept development effort.
Improving a regional model using reduced complexity and parameter estimation
Kelson, Victor A.; Hunt, Randall J.; Haitjema, Henk M.
2002-01-01
The availability of powerful desktop computers and graphical user interfaces for ground water flow models makes possible the construction of ever more complex models. A proposed copper-zinc sulfide mine in northern Wisconsin offers a unique case in which the same hydrologic system has been modeled using a variety of techniques covering a wide range of sophistication and complexity. Early in the permitting process, simple numerical models were used to evaluate the necessary amount of water to be pumped from the mine, reductions in streamflow, and the drawdowns in the regional aquifer. More complex models have subsequently been used in an attempt to refine the predictions. Even after so much modeling effort, questions regarding the accuracy and reliability of the predictions remain. We have performed a new analysis of the proposed mine using the two-dimensional analytic element code GFLOW coupled with the nonlinear parameter estimation code UCODE. The new model is parsimonious, containing fewer than 10 parameters, and covers a region several times larger in areal extent than any of the previous models. The model demonstrates the suitability of analytic element codes for use with parameter estimation codes. The simplified model results are similar to the more complex models; predicted mine inflows and UCODE-derived 95% confidence intervals are consistent with the previous predictions. More important, the large areal extent of the model allowed us to examine hydrological features not included in the previous models, resulting in new insights about the effects that far-field boundary conditions can have on near-field model calibration and parameterization. In this case, the addition of surface water runoff into a lake in the headwaters of a stream while holding recharge constant moved a regional ground watershed divide and resulted in some of the added water being captured by the adjoining basin. Finally, a simple analytical solution was used to clarify the GFLOW model's prediction that, for a model that is properly calibrated for heads, regional drawdowns are relatively unaffected by the choice of aquifer properties, but that mine inflows are strongly affected. Paradoxically, by reducing model complexity, we have increased the understanding gained from the modeling effort.
Relevance of the Implementation of Teeth in Three-Dimensional Vocal Tract Models
ERIC Educational Resources Information Center
Traser, Louisa; Birkholz, Peter; Flügge, Tabea Viktoria; Kamberger, Robert; Burdumy, Michael; Richter, Bernhard; Korvink, Jan Gerrit; Echternach, Matthias
2017-01-01
Purpose: Recently, efforts have been made to investigate the vocal tract using magnetic resonance imaging (MRI). Due to technical limitations, teeth were omitted in many previous studies on vocal tract acoustics. However, the knowledge of how teeth influence vocal tract acoustics might be important in order to estimate the necessity of…
2004-11-01
variation in ventilation rates over time and the distribution of ventilation air within a building, and to estimate the impact of envelope air-tightening efforts on infiltration rates. • It may be used to determine the indoor air quality performance of a building before construction, and to
Integrating social, economic, and ecological values across large landscapes
Jessica E. Halofsky; Megan K. Creutzburg; Miles A. Hemstrom
2014-01-01
The Integrated Landscape Assessment Project (ILAP) was a multiyear effort to produce information, maps, and models to help land managers, policymakers, and others conduct mid- to broad-scale (e.g., watersheds to states and larger areas) prioritization of land management actions, perform landscape assessments, and estimate cumulative effects of management actions for...
Quantifying VOC emissions from East Asia using 10 years of satellite observations
NASA Astrophysics Data System (ADS)
Stavrakou, T.; Muller, J. F.; Bauwens, M.; De Smedt, I.; Van Roozendael, M.; Boersma, F.; van der A, R. J.; Pierre-Francois, C.; Clerbaux, C.
2016-12-01
China's emissions are in the spotlight of efforts to mitigate climate change and improve regional and city-scale air quality. Despite growing efforts to better quantify China's emissions, the current estimates are often poor or inadequate. Complementary to bottom-up inventories, inverse modeling of fluxes has the potential to improve those estimates through the use of atmospheric observations of trace gas compounds. As formaldehyde (HCHO) is a high-yield product in the oxidation of most volatile organic compounds (VOCs) emitted by anthropogenic and natural sources, satellite observations of HCHO hold the potential to inform us on the spatial and temporal variability of the underlying VOC sources. The 10-year record of space-based HCHO column observations from the OMI instrument is used to constrain VOC emission fluxes in East Asia in a source inversion framework built on the IMAGES chemistry-transport model and its adjoint. The interannual and seasonal variability, spatial distribution and potential trends of the top-down VOC fluxes (anthropogenic, pyrogenic and biogenic) are presented and confronted to existing emission inventories, satellite observations of other species (e.g. glyoxal and nitrogen oxides), and past studies.
NASA Astrophysics Data System (ADS)
Thornton, P. E.; Nacp Site Synthesis Participants
2010-12-01
The North American Carbon Program (NACP) synthesis effort includes an extensive intercomparison of modeled and observed ecosystem states and fluxes performed with multiple models across multiple sites. The participating models span a range of complexity and intended application, while the participating sites cover a broad range of natural and managed ecosystems in North America, from the subtropics to arctic tundra, and coastal to interior climates. A unique characteristic of this collaborative effort is that multiple independent observations are available at all sites: fluxes are measured with the eddy covariance technique, and standard biometric and field sampling methods provide estimates of standing stock and annual production in multiple categories. In addition, multiple modeling approaches are employed to make predictions at each site, varying, for example, in the use of diagnostic vs. prognostic leaf area index. Given multiple independent observational constraints and multiple classes of model, we evaluate the internal consistency of observations at each site, and use this information to extend previously derived estimates of uncertainty in the flux observations. Model results are then compared with all available observations and models are ranked according to their consistency with each type of observation (high frequency flux measurement, carbon stock, annual production). We demonstrate a range of internal consistency across the sites, and show that some models which perform well against one observational metric perform poorly against others. We use this analysis to construct a hypothesis for combining eddy covariance, biometrics, and other standard physiological and ecological measurements which, as data collection proceeds over several years, would present an increasingly challenging target for next generation models.
Estimating Software Effort Hours for Major Defense Acquisition Programs
ERIC Educational Resources Information Center
Wallshein, Corinne C.
2010-01-01
Software Cost Estimation (SCE) uses labor hours or effort required to conceptualize, develop, integrate, test, field, or maintain program components. Department of Defense (DoD) SCE can use initial software data parameters to project effort hours for large, software-intensive programs for contractors reporting the top levels of process maturity,…
Competition or cooperation in transboundary fish stocks management: Insight from a dynamical model.
Nguyen, Trong Hieu; Brochier, Timothée; Auger, Pierre; Trinh, Viet Duoc; Brehmer, Patrice
2018-06-14
An idealized system of a shared fish stock associated with different exclusive economic zones (EEZ) is modelled. Parameters were estimated for the case of the small pelagic fisheries shared between Southern Morocco, Mauritania and the Senegambia. Two models of fishing effort distribution were explored. The first considers independent national fisheries in each EEZ, with a cost per unit of fishing effort that depends on local fishery policy. The second considers the case of a fully cooperative fishery performed by an international fleet freely moving across the borders. Both models are based on a set of six ordinary differential equations describing the time evolution of the fish biomass and the fishing effort. We take advantage of the two time scales to obtain a reduced model governing the total fish biomass of the system and the fishing efforts in each zone. At the fast equilibrium, the fish distribution follows the ideal free distribution according to the carrying capacity in each area. Different equilibria can be reached according to management choices. When fishing fleets are independent and national fishery policies are not harmonized, in the general case, competition leads after a few decades to a scenario where only one fishery remains sustainable. In the case of a sub-regional agreement acting on the adjustment of the cost per unit of fishing effort in each EEZ, we found that a large number of equilibria exist. In this last case, the initial distribution of fishing effort strongly impacts the optimal equilibrium that can be reached. Lastly, the country with the highest carrying capacity density may get lower landings when collaborating with other countries than if it minimises its fishing costs. The second, fully cooperative model shows that a single international fishing fleet moving freely in the fishing areas leads to a sustainable equilibrium. Such findings should encourage regional fisheries organizations to explore potential new approaches to the management of neighbouring fish stocks. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
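The flavor of such a bioeconomic model can be illustrated with a two-zone Schaefer-type system of one shared stock and one fishing-effort equation per zone (a drastically reduced stand-in for the six-equation model in the paper, with invented parameter values):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two-zone bioeconomic sketch: one shared stock with logistic growth, split
# between zones in proportion to carrying capacity, and open-access effort
# dynamics in each zone (effort expands where fishing is profitable).
r, K = 0.8, 1.0e6             # stock growth rate, total carrying capacity (t)
share = np.array([0.6, 0.4])  # fraction of carrying capacity in each EEZ
q = 4.0e-6                    # catchability
price = 1000.0                # price per tonne
cost = np.array([300.0, 400.0])  # cost per unit effort in each zone

def rhs(t, y):
    b, e1, e2 = y
    effort = np.array([e1, e2])
    rent = price * q * share * b - cost              # profit per unit effort
    db = r * b * (1.0 - b / K) - np.sum(q * effort * share * b)
    de = 1e-4 * effort * rent
    return [db, de[0], de[1]]

sol = solve_ivp(rhs, (0.0, 200.0), [0.5 * K, 1000.0, 1000.0])
b_end, e1_end, e2_end = sol.y[:, -1]
print(f"final biomass {b_end:,.0f} t, efforts {e1_end:,.0f} / {e2_end:,.0f}")
```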
NASA Astrophysics Data System (ADS)
Guerrero, César; Pedrosa, Elisabete T.; Pérez-Bejarano, Andrea; Keizer, Jan Jacob
2014-05-01
The temperature reached in soils is an important parameter for describing wildfire effects. However, methods for measuring the temperature reached in burned soils have been poorly developed. Recently, near-infrared (NIR) spectroscopy has been identified as a valuable tool for this purpose. The NIR spectrum of a soil sample contains information on the organic matter (quantity and quality), clay (quantity and quality), minerals (such as carbonates and iron oxides) and water content. Some of these components are modified by heat, and each temperature causes a characteristic group of changes, leaving a typical fingerprint on the NIR spectrum. The technique requires a model (or calibration) in which the changes in the NIR spectra are related to the temperature reached. To develop the model, several aliquots are heated at known temperatures and used as standards in the calibration set. The model then makes it possible to estimate the temperature reached in a burned sample from its NIR spectrum. However, the estimation of the temperature reached using NIR spectroscopy arises from changes in several components and cannot be attributed to changes in a single soil component. Thus, the temperature reached is estimated through the interaction between temperature and the thermo-sensitive soil components. In addition, a uniform distribution of these components cannot be expected, even at small scales. Consequently, the proportion of these soil components can vary spatially across a site. This variation will be present in the samples used to construct the model and also in the samples affected by the wildfire. Therefore, the strategies followed to develop robust models should focus on managing this expected variation. In this work we compared the prediction accuracy of models constructed with different approaches. These approaches were designed to provide insight into how to distribute the effort needed for the development of robust models, since this step is the bottleneck of the technique. In the first approach, a plot-scale model was used to predict the temperature reached in samples collected in other plots from the same site. In a plot-scale model, all the heated aliquots come from a single plot-scale sample. As expected, the results obtained with this approach were disappointing, because it assumed that a plot-scale model would be enough to represent the whole variability of the site. The accuracy (measured as the root mean square error of prediction, hereinafter RMSEP) was 86°C, and the bias was also high (>30°C). In the second approach, the temperatures predicted by several plot-scale models were averaged. The accuracy improved (RMSEP = 65°C) with respect to the first approach, because the variability from several plots was considered and biased predictions were partially counterbalanced. However, this approach requires more effort, since several plot-scale models are needed. In the third approach, the predictions were obtained with site-scale models, constructed with aliquots from several plots. In this case, the results were accurate: the RMSEP was around 40°C, the bias was very small (<1°C) and the R2 was 0.92. As expected, this approach clearly outperformed the second approach, despite requiring the same effort. In a plot-scale model, only one interaction between temperature and soil components is modelled.
In the calibration matrix of a site-scale model, by contrast, several different interactions between temperature and soil components are present. Consequently, the site-scale models were able to model the temperature reached while excluding the influence of differences in soil composition, resulting in models that are more robust with respect to that variation. In summary, the results highlight the importance of an adequate strategy for developing robust and accurate models with moderate effort, and show how a wrong strategy can result in misleading predictions.
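Calibration models of this kind are commonly built with partial least squares regression on the NIR spectra; a sketch of such a calibration and its RMSEP check, using synthetic spectra (the study does not specify its regression method, so PLS is an assumption made here for illustration):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Hypothetical NIR calibration sketch: spectra of aliquots heated at known
# temperatures train a PLS model; RMSEP is computed on held-out samples.
rng = np.random.default_rng(7)
n_samples, n_wavelengths = 120, 300
temperatures = rng.uniform(20.0, 700.0, n_samples)      # known heating temperatures
response = np.outer(temperatures, np.linspace(-1e-3, 1e-3, n_wavelengths))
spectra = response + rng.normal(scale=0.02, size=(n_samples, n_wavelengths))

X_train, X_test, y_train, y_test = train_test_split(
    spectra, temperatures, test_size=0.3, random_state=0)
model = PLSRegression(n_components=5).fit(X_train, y_train)
pred = model.predict(X_test).ravel()
rmsep = np.sqrt(np.mean((pred - y_test) ** 2))
print(f"RMSEP = {rmsep:.1f} degrees C")
```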
Curran, Christopher A.; Eng, Ken; Konrad, Christopher P.
2012-01-01
Regional low-flow regression models for estimating Q7,10 at ungaged stream sites are developed from the records of daily discharge at 65 continuous gaging stations (including 22 discontinued gaging stations) for the purpose of evaluating explanatory variables. By incorporating the base-flow recession time constant τ as an explanatory variable in the regression model, the root-mean square error for estimating Q7,10 at ungaged sites can be lowered to 72 percent (for known values of τ), which is 42 percent less than if only basin area and mean annual precipitation are used as explanatory variables. If partial-record sites are included in the regression data set, τ must be estimated from pairs of discharge measurements made during continuous periods of declining low flows. Eight measurement pairs are optimal for estimating τ at partial-record sites, and result in a lowering of the root-mean square error by 25 percent. A low-flow survey strategy that includes paired measurements at partial-record sites requires additional effort and planning beyond a standard strategy, but could be used to enhance regional estimates of τ and potentially reduce the error of regional regression models for estimating low-flow characteristics at ungaged sites.
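The base-flow recession time constant can be estimated from paired measurements taken on the same uninterrupted recession by assuming exponential decay, Q(t) = Q0 exp(-t/τ); a small sketch with hypothetical measurement pairs:

```python
import math

# Hypothetical sketch: estimate the base-flow recession time constant tau from
# paired discharge measurements (Q1, then Q2 some days later) on one recession,
# assuming Q(t) = Q0 * exp(-t / tau), so tau = dt / ln(Q1 / Q2).
measurement_pairs = [
    # (days between measurements, Q1 in cfs, Q2 in cfs)
    (10, 12.0, 9.1),
    (14, 8.5, 5.9),
    (7, 15.2, 12.6),
]

taus = [dt / math.log(q1 / q2) for dt, q1, q2 in measurement_pairs]
tau_site = sum(taus) / len(taus)
print(f"per-pair tau estimates (days): {[round(t, 1) for t in taus]}")
print(f"site estimate of tau: {tau_site:.1f} days")
```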
Quantifying the impact of longline fisheries on adult survival in the black-footed albatross
Veran, S.; Gimenez, O.; Flint, E.; Kendall, W.L.; Doherty, P.F.; Lebreton, J.D.
2007-01-01
1. Industrial longline fishing has been suspected to impact upon black-footed albatross Phoebastria nigripes populations by increasing mortality, but no precise estimates of bycatch mortality are available to ascertain this statement. We present a general framework for quantifying the relationship between the albatross population and longline fishing in the absence of reliable estimates of bycatch rate. 2. We analysed capture-recapture data of a population of black-footed albatross to obtain estimates of survival probability for this population, using several alternative models to adequately take into account heterogeneity in the recapture process. Instead of trying to estimate the number of birds killed by using various extrapolations and unchecked assumptions, we investigated the potential relationship between annual adult survival and several measures of fishing effort. Although we considered a large number of covariates, we used principal component analysis to generate a few uncorrelated synthetic variables from the set, and thus we maintained both power and robustness. 3. The average survival for 1997-2002 was 92%, a low value compared to estimates available for other albatross species. We found that one of the synthetic variables used to summarize industrial longline fishing significantly explained more than 40% of the variation in adult survival over 11 years, suggesting an impact of longline fishing on albatross survival. 4. Our analysis provides some evidence of non-linear variation in survival with fishing effort. This could indicate that below a certain level of fishing effort, deaths due to incidental catch can be partially or totally compensated for by a decrease in natural mortality. Another possible explanation is the existence of strong interspecific competition for access to the baits, reducing the risk of being accidentally hooked. 5. Synthesis and applications. The suspicion of a significant impact of longline fishing on the black-footed albatross population was supported by the combination of a low estimate of adult survival for the study period and a significant relationship between adult survival and a synthetic measure of fishing effort. This study highlights the sensitivity of the black-footed albatross to commercial longline fishing, and should exhort fishery management authorities to find adequate seabird avoidance methods and to encourage their employment.
Yield Model Development (YMD) implementation plan for fiscal years 1981 and 1982
NASA Technical Reports Server (NTRS)
Ambroziak, R. A. (Principal Investigator)
1981-01-01
A plan is described for supporting USDA crop production forecasting and estimation by (1) testing, evaluating, and selecting crop yield models for application testing; (2) identifying areas of feasible research for improvement of models; and (3) conducting research to modify existing models and to develop new crop yield assessment methods. Tasks to be performed for each of these efforts are described as well as for project management and support. The responsibilities of USDA, USDC, USDI, and NASA are delineated as well as problem areas to be addressed.
Age-specific survival of reintroduced swift fox in Badlands National Park and surrounding lands
Sasmal, Indrani; Klaver, Robert W.; Jenks, Jonathan A.; Schroeder, Greg M.
2016-01-01
In 2003, a reintroduction program was initiated at Badlands National Park (BNP), South Dakota, USA, with swift foxes (Vulpes velox) translocated from Colorado and Wyoming, USA, as part of a restoration effort to recover declining swift fox populations throughout the species' historical range. Estimates of age-specific survival are necessary to evaluate the potential for population growth of reintroduced populations. We used 7 years (2003–2009) of capture–recapture data from 243 pups, 29 yearlings, and 69 adult swift foxes at BNP and the surrounding area to construct Cormack–Jolly–Seber estimates of apparent survival within a capture–mark–recapture framework using Program MARK. The best model for estimating recapture probabilities included no differences among age classes, greater recapture probabilities during early years of the monitoring effort than later years, and variation among spring, winter, and summer. Our top-ranked survival model indicated pup survival differed from that of yearlings and adults and varied by month and year. The apparent annual survival probability of pups (0.47, SE = 0.10) in our study area was greater than that of yearlings and adults (0.27, SE = 0.08). Our results indicate low survival probabilities for a reintroduced population of swift foxes in BNP and surrounding areas. Management of reintroduced populations and future reintroductions of swift foxes should consider the effects of relatively low annual survival on population demography.
Computational methods for a three-dimensional model of the petroleum-discovery process
Schuenemeyer, J.H.; Bawiec, W.J.; Drew, L.J.
1980-01-01
A discovery-process model devised by Drew, Schuenemeyer, and Root can be used to predict the amount of petroleum to be discovered in a basin from some future level of exploratory effort; the predictions are based on historical drilling and discovery data. Because marginal costs of discovery and production are a function of field size, the model can be used to make estimates of future discoveries within deposit size classes. The modeling approach is a geometric one in which the area searched is a function of the size and shape of the targets being sought. A high correlation is assumed between the surface-projection area of the fields and the volume of petroleum. To predict how much oil remains to be found, the area searched must be computed, and the basin size and discovery efficiency must be estimated. The basin is assumed to be explored randomly rather than by pattern drilling. The model may be used to compute independent estimates of future oil at different depth intervals for a play involving multiple producing horizons. We have written FORTRAN computer programs that are used with Drew, Schuenemeyer, and Root's model to merge the discovery and drilling information and perform the necessary computations to estimate undiscovered petroleum. These programs may be modified easily for the estimation of remaining quantities of commodities other than petroleum.
New Methodology for Estimating Fuel Economy by Vehicle Class
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chin, Shih-Miao; Dabbs, Kathryn; Hwang, Ho-Ling
2011-01-01
A new methodology was developed for the Office of Highway Policy Information to generate annual estimates of average fuel efficiency and number of motor vehicles registered by vehicle class for Table VM-1 of the Highway Statistics annual publication. This paper describes the new methodology developed under this effort and compares the results of the existing manual method and the new systematic approach. The methodology developed under this study takes a two-step approach. First, preliminary fuel efficiency rates are estimated based on vehicle stock models for different classes of vehicles. Then, a reconciliation model is used to adjust the initial fuel consumption rates from the vehicle stock models and match the VMT information for each vehicle class and the reported total fuel consumption. This reconciliation model utilizes a systematic approach that produces documentable and reproducible results. The basic framework utilizes a mathematical programming formulation to minimize the deviations between the fuel economy estimates published in the previous year's Highway Statistics and the results from the vehicle stock models, subject to the constraint that fuel consumption for the different vehicle classes must sum to the total fuel consumption estimate published in Table MF-21 of the current year's Highway Statistics. The results generated from this new approach provide a smoother time series for the fuel economies by vehicle class. It also utilizes the most up-to-date and best available data with sound econometric models to generate MPG estimates by vehicle class.
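The reconciliation step lends itself to a small worked example. The Python sketch below, with entirely invented numbers, adjusts preliminary class-level fuel-consumption estimates so that they stay close to the vehicle-stock-model values while summing exactly to a published total, then reports the implied MPG by class. It illustrates only the constrained-minimization idea, not the model actually used for Table VM-1.

    # Toy reconciliation of class-level fuel consumption against a published total.
    import numpy as np
    from scipy.optimize import minimize

    prelim = np.array([60.0, 35.0, 20.0, 55.0])    # preliminary fuel use by class (billion gal, synthetic)
    vmt = np.array([1100.0, 600.0, 250.0, 300.0])  # VMT by class (billion miles, synthetic)
    total = 175.0                                  # published total fuel consumption (Table MF-21 analogue)

    def objective(x):
        # penalize relative deviation from the vehicle-stock-model estimates
        return np.sum(((x - prelim) / prelim) ** 2)

    cons = {"type": "eq", "fun": lambda x: np.sum(x) - total}
    res = minimize(objective, prelim, constraints=[cons], bounds=[(1e-6, None)] * len(prelim))

    fuel = res.x
    mpg = vmt / fuel                               # reconciled fleet MPG by vehicle class
    print("reconciled fuel use:", np.round(fuel, 1))
    print("implied MPG by class:", np.round(mpg, 1))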
Payn, Robert A.; Hall, Robert O Jr.; Kennedy, Theodore A.; Poole, Geoff C; Marshall, Lucy A.
2017-01-01
Conventional methods for estimating whole-stream metabolic rates from measured dissolved oxygen dynamics do not account for the variation in solute transport times created by dynamic flow conditions. Changes in flow at hourly time scales are common downstream of hydroelectric dams (i.e., hydropeaking), and hydrologic limitations of conventional metabolic models have resulted in a poor understanding of the controls on biological production in these highly managed river ecosystems. To overcome these limitations, we coupled a two-station metabolic model of dissolved oxygen dynamics with a hydrologic river routing model. We designed calibration and parameter estimation tools to infer values for hydrologic and metabolic parameters based on time series of water quality data, achieving the ultimate goal of estimating whole-river gross primary production and ecosystem respiration during dynamic flow conditions. Our case study data for model design and calibration were collected in the tailwater of Glen Canyon Dam (Arizona, USA), a large hydropower facility where the mean discharge was 325 m³ s⁻¹ and the average daily coefficient of variation of flow was 0.17 (i.e., the hydropeaking index averaged from 2006 to 2016). We demonstrate the coupled model's conceptual consistency with conventional models during steady flow conditions, and illustrate the potential bias in metabolism estimates with conventional models during unsteady flow conditions. This effort contributes an approach to solute transport modeling and parameter estimation that allows study of whole-ecosystem metabolic regimes across a more diverse range of hydrologic conditions commonly encountered in streams and rivers.
Global parameter estimation for thermodynamic models of transcriptional regulation.
Suleimenov, Yerzhan; Ay, Ahmet; Samee, Md Abul Hassan; Dresch, Jacqueline M; Sinha, Saurabh; Arnosti, David N
2013-07-15
Deciphering the mechanisms involved in gene regulation holds the key to understanding the control of central biological processes, including human disease, population variation, and the evolution of morphological innovations. New experimental techniques including whole genome sequencing and transcriptome analysis have enabled comprehensive modeling approaches to study gene regulation. In many cases, it is useful to be able to assign biological significance to the inferred model parameters, but such interpretation should take into account features that affect these parameters, including model construction and sensitivity, the type of fitness calculation, and the effectiveness of parameter estimation. This last point is often neglected, as estimation methods are often selected for historical reasons or for computational ease. Here, we compare the performance of two parameter estimation techniques broadly representative of local and global approaches, namely, a quasi-Newton/Nelder-Mead simplex (QN/NMS) method and a covariance matrix adaptation-evolutionary strategy (CMA-ES) method. The estimation methods were applied to a set of thermodynamic models of gene transcription for regulatory elements active in the Drosophila embryo. Measuring overall fit, the global CMA-ES method performed significantly better than the local QN/NMS method on high quality data sets, but this difference was negligible on lower quality data sets with increased noise or on data sets simplified by stringent thresholding. Our results suggest that the choice of parameter estimation technique for evaluation of gene expression models depends on the quality of the data, the nature of the models, and the aims of the modeling effort. Copyright © 2013 Elsevier Inc. All rights reserved.
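The local-versus-global contrast can be demonstrated on any multimodal objective. In the Python sketch below, SciPy's Nelder-Mead simplex plays the role of the local search, and differential evolution stands in for CMA-ES purely to avoid an extra dependency; the Rastrigin test function and the starting point are illustrative and unrelated to the transcription models in the paper.

    # Local simplex search versus a global evolutionary search on a multimodal objective.
    import numpy as np
    from scipy.optimize import minimize, differential_evolution

    def rastrigin(x):
        x = np.asarray(x)
        return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

    bounds = [(-5.12, 5.12)] * 5
    x0 = np.full(5, 3.0)                      # a deliberately poor starting point

    local = minimize(rastrigin, x0, method="Nelder-Mead")
    global_ = differential_evolution(rastrigin, bounds, seed=1)

    print("Nelder-Mead objective:      ", round(local.fun, 3))
    print("differential evolution obj.:", round(global_.fun, 3))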
2010-09-01
estimation of total exposure at any toxicological endpoint in the body. This effort is a significant contribution as it highlights future research needs...rigorous modeling of the nanoparticle transport by including physico-chemical properties of engineered particles. Similarly, toxicological dose-response...exposure risks as compared to larger sized particles of the same material. Although the toxicology of a base material may be thoroughly defined, the
Linard, Joshua I.; Schaffrath, Keelin R.
2014-01-01
Elevated concentrations of salinity and selenium in the tributaries and main-stem reaches of the Colorado River are a water-quality concern and have been the focus of remediation efforts for many years. Land-management practices with the objective of limiting the amount of salt and selenium that reaches the stream have focused on improving the methods by which irrigation water is conveyed and distributed. Federal land managers implement improvements in accordance with the Colorado River Basin Salinity Control Act of 1974, which directs Federal land managers to enhance and protect the quality of water available in the Colorado River. In an effort to assist in evaluating and mitigating the detrimental effects of salinity and selenium, the U.S. Geological Survey, in cooperation with the Bureau of Reclamation, the Colorado River Water Resources District, and the Bureau of Land Management, analyzed salinity and selenium data collected at sites to develop regression models. The study area and sites are on the Colorado River or in one of three small basins in Western Colorado: the White River Basin, the Lower Gunnison River Basin, and the Dolores River Basin. By using data collected from water years 2009 through 2011, regression models able to estimate concentrations were developed for salinity at six sites and selenium at six sites. At a minimum, data from discrete measurement of salinity or selenium concentration, streamflow, and specific conductance at each of the sites were needed for model development. Comparison of the Adjusted R2 and standard error statistics of the two salinity models developed at each site indicated the models using specific conductance as the explanatory variable performed better than those using streamflow. The addition of multiple explanatory variables improved the ability to estimate selenium concentration at several sites compared with use of solely streamflow or specific conductance. The error associated with the log-transformed salinity and selenium estimates is consistent in log space; however, when the estimates are transformed into non-log values, the error increases as the estimates decrease. Continuous streamflow and specific conductance data collected at study sites provide the means to examine temporal variability in constituent concentration and load. The regression models can estimate continuous concentrations or loads on the basis of continuous specific conductance or streamflow data. Similar estimates are available for other sites at the USGS National Real-Time Water Quality Web page (http://nrtwq.usgs.gov) and provide water-resource managers with a means of improving their general understanding of how constituent concentration or load can change annually, seasonally, or in real time.
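A minimal illustration of the surrogate-regression approach is sketched below in Python with synthetic data: a constituent concentration is regressed on specific conductance in log space, and Duan's smearing estimator corrects the retransformation bias noted above. The power-law relation and noise level are assumptions for illustration only, not values from the study sites.

    # Log-space surrogate regression with a smearing correction on retransformation.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    sc = rng.uniform(300, 3000, 120)                      # specific conductance, uS/cm (synthetic)
    true_salinity = 0.7 * sc ** 0.95                      # assumed power-law relation (illustrative)
    salinity = true_salinity * np.exp(rng.normal(0, 0.15, sc.size))

    X = sm.add_constant(np.log(sc))
    fit = sm.OLS(np.log(salinity), X).fit()

    # naive back-transformation underestimates the mean; apply Duan's smearing factor
    smear = np.mean(np.exp(fit.resid))
    sc_new = np.array([500.0, 1500.0, 2500.0])
    est = np.exp(fit.params[0] + fit.params[1] * np.log(sc_new)) * smear
    print("estimated salinity (mg/L):", np.round(est, 1))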
A diagnostic model to estimate winds and small-scale drag from Mars Observer PMIRR data
NASA Technical Reports Server (NTRS)
Barnes, J. R.
1993-01-01
Theoretical and modeling studies indicate that small-scale drag due to breaking gravity waves is likely to be of considerable importance for the circulation in the middle atmospheric region (approximately 40-100 km altitude) on Mars. Recent earth-based spectroscopic observations have provided evidence for the existence of circulation features, in particular, a warm winter polar region, associated with gravity wave drag. Since the Mars Observer PMIRR experiment will obtain temperature profiles extending from the surface up to about 80 km altitude, it will be extensively sampling middle atmospheric regions in which gravity wave drag may play a dominant role. Estimating the drag then becomes crucial to the estimation of the atmospheric winds from the PMIRR-observed temperatures. An iterative diagnostic model based upon one previously developed and tested with earth satellite temperature data will be applied to the PMIRR measurements to produce estimates of the small-scale zonal drag and three-dimensional wind fields in the Mars middle atmosphere. This model is based on the primitive equations, and can allow for time dependence (the time tendencies used may be based upon those computed in a Fast Fourier Mapping procedure). The small-scale zonal drag is estimated as the residual in the zonal momentum equation, the horizontal winds having first been estimated from the meridional momentum equation and the continuity equation. The scheme estimates the vertical motions from the thermodynamic equation, and thus needs estimates of the diabatic heating based upon the observed temperatures. The latter will be generated using a radiative model. It is hoped that the diagnostic scheme will be able to produce good estimates of the zonal gravity wave drag in the Mars middle atmosphere, estimates that can then be used in other diagnostic or assimilation efforts, as well as more theoretical studies.
Estimating migratory connectivity of birds when re-encounter probabilities are heterogeneous
Cohen, Emily B; Hostetler, Jeffrey A; Royle, J Andrew; Marra, Peter P
2014-01-01
Understanding the biology and conducting effective conservation of migratory species requires an understanding of migratory connectivity – the geographic linkages of populations between stages of the annual cycle. Unfortunately, for most species, we are lacking such information. The North American Bird Banding Laboratory (BBL) houses an extensive database of marking, recaptures and recoveries, and such data could provide migratory connectivity information for many species. To date, however, few species have been analyzed for migratory connectivity largely because heterogeneous re-encounter probabilities make interpretation problematic. We accounted for regional variation in re-encounter probabilities by borrowing information across species and by using effort covariates on recapture and recovery probabilities in a multistate capture–recapture and recovery model. The effort covariates were derived from recaptures and recoveries of species within the same regions. We estimated the migratory connectivity for three tern species breeding in North America and over-wintering in the tropics, common (Sterna hirundo), roseate (Sterna dougallii), and Caspian terns (Hydroprogne caspia). For western breeding terns, model-derived estimates of migratory connectivity differed considerably from those derived directly from the proportions of re-encounters. Conversely, for eastern breeding terns, estimates were merely refined by the inclusion of re-encounter probabilities. In general, eastern breeding terns were strongly connected to eastern South America, and western breeding terns were strongly linked to the more western parts of the nonbreeding range under both models. Through simulation, we found this approach is likely useful for many species in the BBL database, although precision improved with higher re-encounter probabilities and stronger migratory connectivity. We describe an approach to deal with the inherent biases in BBL banding and re-encounter data to demonstrate that this large dataset is a valuable source of information about the migratory connectivity of the birds of North America. PMID:24967083
The Incubation Periods of Dengue Viruses
Chan, Miranda; Johansson, Michael A.
2012-01-01
Dengue viruses are major contributors to illness and death globally. Here we analyze the extrinsic and intrinsic incubation periods (EIP and IIP), in the mosquito and human, respectively. We identified 146 EIP observations from 8 studies and 204 IIP observations from 35 studies. These data were fitted with censored Bayesian time-to-event models. The best-fitting temperature-dependent EIP model estimated that 95% of EIPs are between 5 and 33 days at 25°C, and 2 and 15 days at 30°C, with means of 15 and 6.5 days, respectively. The mean IIP estimate was 5.9 days, with 95% expected between days 3 and 10. Differences between serotypes were not identified for either incubation period. These incubation period models should be useful in clinical diagnosis, outbreak investigation, prevention and control efforts, and mathematical modeling of dengue virus transmission. PMID:23226436
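The censored fitting idea can be sketched with a simple frequentist stand-in (the study itself used Bayesian time-to-event models). The Python example below fits a lognormal incubation-period distribution to interval-censored, known-to-the-day observations by maximum likelihood; the data are simulated and the parameter values are illustrative.

    # Maximum-likelihood fit of a lognormal incubation period to interval-censored data.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import lognorm

    rng = np.random.default_rng(3)
    true_days = rng.lognormal(mean=np.log(5.9), sigma=0.35, size=200)
    lower = np.floor(true_days)            # incubation known only to the day (interval censored)
    upper = lower + 1.0

    def negloglik(params):
        mu, log_sigma = params
        sigma = np.exp(log_sigma)
        dist = lognorm(s=sigma, scale=np.exp(mu))
        prob = np.clip(dist.cdf(upper) - dist.cdf(lower), 1e-12, None)
        return -np.sum(np.log(prob))

    res = minimize(negloglik, x0=[np.log(5.0), np.log(0.5)], method="Nelder-Mead")
    mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
    print("estimated mean IIP (days):", round(float(np.exp(mu_hat + sigma_hat**2 / 2)), 2))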
Motlagh, Farhad Shafiepour; Yarmohammadian, Mohammad Hossein; Yaghoubi, Maryam
2012-03-01
One important factor in the growth, progress, and work efficiency of employees of any enterprise is the effort they make. The Supreme Leader of the Islamic Republic of Iran has also addressed the need for greater effort. The goal of this study was to determine the association of perceived organizational justice and organizational expectations with the efforts of nurses, and to provide a suitable model. The current study was a descriptive study. The study group consisted of all nurses who worked in hospitals of Isfahan. Due to some limitations, all nurses of the special unit, surgery wards, and operating room were questioned. The data collection tools were the Organizational Justice Questionnaire, the organizational expectations questionnaire, and the double effort questionnaire. Content validity of these questionnaires was confirmed after considering the experts' comments. The reliability of these questionnaires, using Cronbach's alpha, was 0.79, 0.83, and 0.92, respectively. The Pearson correlation and the structural equation model were used for the analysis of data. There was a significant correlation between perceived organizational justice and the double effort of nurses during the surgery of patients. The correlation of expectation from the job, usefulness of the job, and its attractiveness with the double effort of nurses before surgery was also statistically significant. Moreover, the root mean square error of approximation (RMSEA) was 0.087, the goodness-of-fit index (GFI) was 0.953, the chi-square value was 268.5, and the model was statistically significant (p < 0.001). Today, justice is an essential need for human life, and its importance in organizations and the social life of individuals is evident.
The New NASA Orbital Debris Engineering Model ORDEM2000
NASA Technical Reports Server (NTRS)
Liou, Jer-Chyi; Matney, Mark J.; Anz-Meador, Phillip D.; Kessler, Donald; Jansen, Mark; Theall, Jeffery R.
2002-01-01
The NASA Orbital Debris Program Office at Johnson Space Center has developed a new computer-based orbital debris engineering model, ORDEM2000, which describes the orbital debris environment in the low Earth orbit region between 200 and 2000 km altitude. The model is appropriate for those engineering solutions requiring knowledge and estimates of the orbital debris environment (debris spatial density, flux, etc.). ORDEM2000 can also be used as a benchmark for ground-based debris measurements and observations. We incorporated a large set of observational data, covering the object size range from 10 µm to 10 m, into the ORDEM2000 debris database, utilizing a maximum likelihood estimator to convert observations into debris population probability distribution functions. These functions then form the basis of debris populations. We developed a finite element model to process the debris populations to form the debris environment. A more capable input and output structure and a user-friendly graphical user interface are also implemented in the model. ORDEM2000 has been subjected to a significant verification and validation effort. This document describes ORDEM2000, which supersedes the previous model, ORDEM96. The availability of new sensor and in situ data, as well as new analytical techniques, has enabled the construction of this new model. Section 1 describes the general requirements and scope of an engineering model. Data analyses and the theoretical formulation of the model are described in Sections 2 and 3. Section 4 describes the verification and validation effort and the sensitivity and uncertainty analyses. Finally, Section 5 describes the graphical user interface, software installation, and test cases for the user.
NASA Astrophysics Data System (ADS)
Mishra, V.; Cruise, J.; Mecikalski, J. R.
2017-12-01
Much effort has been expended recently on the assimilation of remotely sensed soil moisture into operational land surface models (LSM). These efforts have normally been focused on the use of data derived from the microwave bands and results have often shown that improvements to model simulations have been limited due to the fact that microwave signals only penetrate the top 2-5 cm of the soil surface. It is possible that model simulations could be further improved through the introduction of geostationary satellite thermal infrared (TIR) based root zone soil moisture in addition to the microwave deduced surface estimates. In this study, root zone soil moisture estimates from the TIR based Atmospheric Land Exchange Inverse (ALEXI) model were merged with NASA Soil Moisture Active Passive (SMAP) based surface estimates through the application of informational entropy. Entropy can be used to characterize the movement of moisture within the vadose zone and accounts for both advection and diffusion processes. The Principle of Maximum Entropy (POME) can be used to derive complete soil moisture profiles and, fortuitously, only requires a surface boundary condition as well as the overall mean moisture content of the soil column. A lower boundary can be considered a soil parameter or obtained from the LSM itself. In this study, SMAP provided the surface boundary while ALEXI supplied the mean and the entropy integral was used to tie the two together and produce the vertical profile. However, prior to the merging, the coarse resolution (9 km) SMAP data were downscaled to the finer resolution (4.7 km) ALEXI grid. The disaggregation scheme followed the Soil Evaporative Efficiency approach and again, all necessary inputs were available from the TIR model. The profiles were then assimilated into a standard agricultural crop model (Decision Support System for Agrotechnology, DSSAT) via the ensemble Kalman Filter. The study was conducted over the Southeastern United States for the growing seasons from 2015-2017. Soil moisture profiles compared favorably to in situ data and simulated crop yields compared well with observed yields.
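The assimilation step can be illustrated with a bare-bones ensemble Kalman filter update, sketched below in Python. The ensemble, observation, and error variances are synthetic stand-ins, not the ALEXI/SMAP/DSSAT configuration; only the surface layer is observed, and the update spreads the innovation to deeper layers through the ensemble covariance.

    # Minimal ensemble Kalman filter update for a layered soil-moisture profile.
    import numpy as np

    rng = np.random.default_rng(4)
    n_ens, n_layers = 50, 4
    ensemble = np.clip(rng.normal(0.25, 0.04, (n_ens, n_layers)), 0.02, 0.45)  # prior soil moisture

    H = np.zeros(n_layers); H[0] = 1.0         # observe only the surface layer (SMAP-like)
    obs, obs_var = 0.30, 0.02**2
    obs_perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), n_ens)

    X = ensemble - ensemble.mean(axis=0)       # ensemble anomalies
    Hx = ensemble @ H                          # ensemble in observation space
    P_hh = np.var(Hx, ddof=1)
    P_xh = (X.T @ (Hx - Hx.mean())) / (n_ens - 1)
    K = P_xh / (P_hh + obs_var)                # Kalman gain, one entry per soil layer

    analysis = ensemble + np.outer(obs_perturbed - Hx, K)
    print("prior mean profile:   ", np.round(ensemble.mean(axis=0), 3))
    print("analysis mean profile:", np.round(analysis.mean(axis=0), 3))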
Kery, M.; Royle, J. Andrew; Thomson, David L.; Cooch, Evan G.; Conroy, Michael J.
2009-01-01
Species richness is the most widely used biodiversity measure. Virtually always, it cannot be observed but needs to be estimated because some species may be present but remain undetected. This fact is commonly ignored in ecology and management, although it will bias estimates of species richness and related parameters such as occupancy, turnover or extinction rates. We describe a species community modeling strategy based on species-specific models of occurrence, from which estimates of important summaries of community structure, e.g., species richness, occupancy, or measures of similarity among species or sites, are derived by aggregating indicators of occurrence for all species observed in the sample, and for the estimated complement of unobserved species. We use data augmentation for an efficient Bayesian approach to estimation and prediction under this model based on MCMC in WinBUGS. For illustration, we use the Swiss breeding bird survey (MHB) that conducts 2–3 territory-mapping surveys in a systematic sample of 267 1-km2 units on quadrat-specific routes averaging 5.1 km to obtain species-specific estimates of occupancy, and estimates of species richness of all diurnal species free of distorting effects of imperfect detectability. We introduce into our model species-specific covariates relevant to occupancy (elevation, forest cover, route length) and sampling (season, effort). From 1995 to 2004, 185 diurnal breeding bird species were known in Switzerland, and an additional 13 bred 1–3 times since 1900. 134 species were observed during MHB surveys in 254 quadrats surveyed in 2001, and our estimate of 169.9 (95% CI 151–195) therefore appeared sensible. The observed number of species ranged from 4 to 58 (mean 32.8), but with an estimated 0.7–11.2 (mean 2.6) further, unobserved species, the estimated proportion of detected species was 0.48–0.98 (mean 0.91). As is well known, species richness declined at higher elevation and fell above the timberline, and most species showed some preferred elevation. Route length had clear effects on occupancy, suggesting it is a proxy for the size of the effectively sampled area. Detection probability of most species showed clear seasonal patterns and increased with greater survey effort; these are important results for the planning of focused surveys. The main benefit of our model, and its implementation in WinBUGS for which we provide code, is its conceptual simplicity. Species richness is naturally expressed as the sum of occurrences of individual species. Information about species is combined across sites, which yields greater efficiency or may even enable estimation for sites with very few observed species in the first place. At the same time, species detections are clearly segregated into a true state process (occupancy) and an observation process (detection, given occupancy), and covariates can be readily introduced, which provides for efficient introduction of such additional information as well as sharp testing of such relationships.
Oxidative DNA damage background estimated by a system model of base excision repair
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sokhansanj, B A; Wilson, III, D M
Human DNA can be damaged by natural metabolism through free radical production. It has been suggested that the equilibrium between innate damage and cellular DNA repair results in an oxidative DNA damage background that potentially contributes to disease and aging. Efforts to quantitatively characterize the human oxidative DNA damage background level based on measuring 8-oxoguanine lesions as a biomarker have led to estimates varying over 3-4 orders of magnitude, depending on the method of measurement. We applied a previously developed and validated quantitative pathway model of human DNA base excision repair, integrating experimentally determined endogenous damage rates and model parameters from multiple sources. Our estimates of at most 100 8-oxoguanine lesions per cell are consistent with the low end of data from biochemical and cell biology experiments, a result robust to model limitations and parameter variation. Our results show the power of quantitative system modeling to interpret composite experimental data and make biologically and physiologically relevant predictions for complex human DNA repair pathway mechanisms and capacity.
Glass Property Data and Models for Estimating High-Level Waste Glass Volume
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vienna, John D.; Fluegel, Alexander; Kim, Dong-Sang
2009-10-05
This report describes recent efforts to develop glass property models that can be used to help estimate the volume of high-level waste (HLW) glass that will result from vitrification of Hanford tank waste. The compositions of acceptable and processable HLW glasses need to be optimized to minimize the waste-form volume and, hence, to save cost. A database of properties and associated compositions for simulated waste glasses was collected for developing property-composition models. This database, although not comprehensive, represents a large fraction of data on waste-glass compositions and properties that were available at the time of this report. Glass property-composition models were fit to subsets of the database for several key glass properties. These models apply to a significantly broader composition space than those previously published. These models should be considered for interim use in calculating properties of Hanford waste glasses.
Link, W.A.; Sauer, J.R.; Niven, D.K.
2006-01-01
Analysis of Christmas Bird Count (CBC) data is complicated by the need to account for variation in effort on counts and to provide summaries over large geographic regions. We describe a hierarchical model for analysis of population change using CBC data that addresses these needs. The effect of effort is modeled parametrically, with parameter values varying among strata as identically distributed random effects. Year and site effects are modeled hierarchically, accommodating large regional variation in number of samples and precision of estimates. The resulting model is complex, but a Bayesian analysis can be conducted using Markov chain Monte Carlo techniques. We analyze CBC data for American Black Ducks (Anas rubripes), a species of considerable management interest that has historically been monitored using winter surveys. Over the interval 1966-2003, Black Duck populations showed distinct regional patterns of population change. The patterns shown by CBC data are similar to those shown by the Midwinter Waterfowl Inventory for the United States.
Nuclear Explosion Monitoring Advances and Challenges
NASA Astrophysics Data System (ADS)
Baker, G. E.
2015-12-01
We address the state-of-the-art in areas important to monitoring, current challenges, specific efforts that illustrate approaches addressing shortcomings in capabilities, and additional approaches that might be helpful. The exponential increase in the number of events that must be screened as magnitude thresholds decrease presents one of the greatest challenges. Ongoing efforts to exploit repeat seismic events using waveform correlation, subspace methods, and empirical matched field processing holds as much "game-changing" promise as anything being done, and further efforts to develop and apply such methods efficiently are critical. Greater accuracy of travel time, signal loss, and full waveform predictions are still needed to better locate and discriminate seismic events. Important developments include methods to model velocities using multiple types of data; to model attenuation with better separation of source, path, and site effects; and to model focusing and defocusing of surface waves. Current efforts to model higher frequency full waveforms are likely to improve source characterization while more effective estimation of attenuation from ambient noise holds promise for filling in gaps. Censoring in attenuation modeling is a critical problem to address. Quantifying uncertainty of discriminants is key to their operational use. Efforts to do so for moment tensor (MT) inversion are particularly important, and fundamental progress on the statistics of MT distributions is the most important advance needed in the near term in this area. Source physics is seeing great progress through theoretical, experimental, and simulation studies. The biggest need is to accurately predict the effects of source conditions on seismic generation. Uniqueness is the challenge here. Progress will depend on studies that probe what distinguishes mechanisms, rather than whether one of many possible mechanisms is consistent with some set of observations.
Understanding and Predicting the Process of Software Maintenance Releases
NASA Technical Reports Server (NTRS)
Basili, Victor; Briand, Lionel; Condon, Steven; Kim, Yong-Mi; Melo, Walcelio L.; Valett, Jon D.
1996-01-01
One of the major concerns of any maintenance organization is to understand and estimate the cost of maintenance releases of software systems. Planning the next release so as to maximize the increase in functionality and the improvement in quality are vital to successful maintenance management. The objective of this paper is to present the results of a case study in which an incremental approach was used to better understand the effort distribution of releases and build a predictive effort model for software maintenance releases. This study was conducted in the Flight Dynamics Division (FDD) of NASA Goddard Space Flight Center(GSFC). This paper presents three main results: 1) a predictive effort model developed for the FDD's software maintenance release process; 2) measurement-based lessons learned about the maintenance process in the FDD; and 3) a set of lessons learned about the establishment of a measurement-based software maintenance improvement program. In addition, this study provides insights and guidelines for obtaining similar results in other maintenance organizations.
Wildhaber, M.L.; Holan, S.H.; Bryan, J.L.; Gladish, D.W.; Ellersieck, M.
2011-01-01
In 2003, the US Army Corps of Engineers initiated the Pallid Sturgeon Population Assessment Program (PSPAP) to monitor pallid sturgeon and the fish community of the Missouri River. The power analysis of PSPAP presented here was conducted to guide sampling design and effort decisions. The PSPAP sampling design has a nested structure with multiple gear subsamples within a river bend. Power analyses were based on a normal linear mixed model, using a mixed cell means approach, with variance estimates from the original data. It was found that, at current effort levels, at least 20 years for pallid and 10 years for shovelnose sturgeon is needed to detect a 5% annual decline. Modified bootstrap simulations suggest power estimates from the original data are conservative due to excessive zero fish counts. In general, the approach presented is applicable to a wide array of animal monitoring programs.
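The kind of power analysis described above can be approximated with a simple Monte Carlo sketch in Python: simulate overdispersed annual counts under a 5 percent yearly decline, fit a log-linear trend, and tally how often the decline is detected. The effort levels, overdispersion, and trend test below are invented simplifications of the mixed-model analysis, not the PSPAP design.

    # Monte Carlo power to detect a 5 percent annual decline from simulated count data.
    import numpy as np
    import statsmodels.api as sm

    def power(n_years, n_sites=25, decline=0.05, cv=0.6, n_sims=500, alpha=0.05, seed=5):
        rng = np.random.default_rng(seed)
        years = np.arange(n_years)
        detected = 0
        for _ in range(n_sims):
            mu = 20.0 * (1.0 - decline) ** years                 # expected catch per site
            # negative-binomial-like overdispersion via a gamma-Poisson mixture
            lam = rng.gamma(shape=1 / cv**2, scale=mu * cv**2, size=(n_sites, n_years))
            counts = rng.poisson(lam)
            y = np.log(counts.mean(axis=0) + 0.5)                # site-averaged log counts
            fit = sm.OLS(y, sm.add_constant(years)).fit()
            if fit.pvalues[1] < alpha and fit.params[1] < 0:
                detected += 1
        return detected / n_sims

    for yrs in (5, 10, 20):
        print(yrs, "years -> power ~", round(power(yrs), 2))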
The importance of regional models in assessing canine cancer incidences in Switzerland
Boo, Gianluca; Leyk, Stefan; Brunsdon, Christopher; Graf, Ramona; Pospischil, Andreas; Fabrikant, Sara Irina
2018-01-01
Fitting canine cancer incidences through a conventional regression model assumes constant statistical relationships across the study area in estimating the model coefficients. However, it is often more realistic to consider that these relationships may vary over space. Such a condition, known as spatial non-stationarity, implies that the model coefficients need to be estimated locally. In these kinds of local models, the geographic scale, or spatial extent, employed for coefficient estimation may also have a pervasive influence. This is because important variations in the local model coefficients across geographic scales may impact the understanding of local relationships. In this study, we fitted canine cancer incidences across Swiss municipal units through multiple regional models. We computed diagnostic summaries across the different regional models, and contrasted them with the diagnostics of the conventional regression model, using value-by-alpha maps and scalograms. The results of this comparative assessment enabled us to identify variations in the goodness-of-fit and coefficient estimates. We detected spatially non-stationary relationships, in particular, for the variables related to biological risk factors. These variations in the model coefficients were more important at small geographic scales, making a case for the need to model canine cancer incidences locally in contrast to more conventional global approaches. However, we contend that prior to undertaking local modeling efforts, a deeper understanding of the effects of geographic scale is needed to better characterize and identify local model relationships. PMID:29652921
Multi-Dimensional Calibration of Impact Dynamic Models
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.; Annett, Martin S.; Jackson, Karen E.
2011-01-01
NASA Langley, under the Subsonic Rotary Wing Program, recently completed two helicopter tests in support of an in-house effort to study crashworthiness. As part of this effort, work is on-going to investigate model calibration approaches and calibration metrics for impact dynamics models. Model calibration of impact dynamics problems has traditionally assessed model adequacy by comparing time histories from analytical predictions to test at only a few critical locations. Although this approach provides for a direct measure of the model predictive capability, overall system behavior is only qualitatively assessed using full vehicle animations. In order to understand the spatial and temporal relationships of impact loads as they migrate throughout the structure, a more quantitative approach is needed. In this work impact shapes derived from simulated time history data are used to recommend sensor placement and to assess model adequacy using time based metrics and orthogonality multi-dimensional metrics. An approach for model calibration is presented that includes metric definitions, uncertainty bounds, parameter sensitivity, and numerical optimization to estimate parameters to reconcile test with analysis. The process is illustrated using simulated experiment data.
Scope Complexity Options Risks Excursions (SCORE) Version 3.0 Mathematical Description.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gearhart, Jared Lee; Samberson, Jonell Nicole; Shettigar, Subhasini
The purpose of the Scope, Complexity, Options, Risks, Excursions (SCORE) model is to estimate the relative complexity of design variants of future warhead options. The results of this model allow those considering these options to understand the complexity tradeoffs between proposed warhead options. The core idea of SCORE is to divide a warhead option into a well-defined set of scope elements and then estimate the complexity of each scope element against a well understood reference system. The uncertainty associated with estimates can also be captured. A weighted summation of the relative complexity of each scope element is used to determine the total complexity of the proposed warhead option or portions of the warhead option (i.e., a National Work Breakdown Structure code). The SCORE analysis process is a growing multi-organizational Nuclear Security Enterprise (NSE) effort, under the management of the NA-12 led Enterprise Modeling and Analysis Consortium (EMAC), that has provided the data elicitation, integration, and computation needed to support the out-year Life Extension Program (LEP) cost estimates included in the Stockpile Stewardship Management Plan (SSMP).
The Flight Optimization System Weights Estimation Method
NASA Technical Reports Server (NTRS)
Wells, Douglas P.; Horvath, Bryce L.; McCullers, Linwood A.
2017-01-01
FLOPS has been the primary aircraft synthesis software used by the Aeronautics Systems Analysis Branch at NASA Langley Research Center. It was created for rapid conceptual aircraft design and advanced technology impact assessments. FLOPS is a single computer program that includes weights estimation, aerodynamics estimation, engine cycle analysis, propulsion data scaling and interpolation, detailed mission performance analysis, takeoff and landing performance analysis, noise footprint estimation, and cost analysis. It is well known as a baseline and common denominator for aircraft design studies. FLOPS is capable of calibrating a model to known aircraft data, making it useful for new aircraft and modifications to existing aircraft. The weight estimation method in FLOPS is known to be of high fidelity for conventional tube with wing aircraft and a substantial amount of effort went into its development. This report serves as a comprehensive documentation of the FLOPS weight estimation method. The development process is presented with the weight estimation process.
Rodhouse, T.J.; Irvine, K.M.; Vierling, K.T.; Vierling, L.A.
2011-01-01
Monitoring programs that evaluate restoration and inform adaptive management are important for addressing environmental degradation. These efforts may be well served by spatially explicit hierarchical approaches to modeling because of unavoidable spatial structure inherited from past land use patterns and other factors. We developed Bayesian hierarchical models to estimate trends from annual density counts observed in a spatially structured wetland forb (Camassia quamash [camas]) population following the cessation of grazing and mowing on the study area, and in a separate reference population of camas. The restoration site was bisected by roads and drainage ditches, resulting in distinct subpopulations ("zones") with different land use histories. We modeled this spatial structure by fitting zone-specific intercepts and slopes. We allowed spatial covariance parameters in the model to vary by zone, as in stratified kriging, accommodating anisotropy and improving computation and biological interpretation. Trend estimates provided evidence of a positive effect of passive restoration, and the strength of evidence was influenced by the amount of spatial structure in the model. Allowing trends to vary among zones and accounting for topographic heterogeneity increased precision of trend estimates. Accounting for spatial autocorrelation shifted parameter coefficients in ways that varied among zones depending on strength of statistical shrinkage, autocorrelation and topographic heterogeneity-a phenomenon not widely described. Spatially explicit estimates of trend from hierarchical models will generally be more useful to land managers than pooled regional estimates and provide more realistic assessments of uncertainty. The ability to grapple with historical contingency is an appealing benefit of this approach.
Mollenhauer, Robert; Brewer, Shannon K.
2017-01-01
Failure to account for variable detection across survey conditions constrains progressive stream ecology and can lead to erroneous stream fish management and conservation decisions. In addition to variable detection’s confounding long-term stream fish population trends, reliable abundance estimates across a wide range of survey conditions are fundamental to establishing species–environment relationships. Despite major advancements in accounting for variable detection when surveying animal populations, these approaches remain largely ignored by stream fish scientists, and CPUE remains the most common metric used by researchers and managers. One notable advancement for addressing the challenges of variable detection is the multinomial N-mixture model. Multinomial N-mixture models use a flexible hierarchical framework to model the detection process across sites as a function of covariates; they also accommodate common fisheries survey methods, such as removal and capture–recapture. Effective monitoring of stream-dwelling Smallmouth Bass Micropterus dolomieu populations has long been challenging; therefore, our objective was to examine the use of multinomial N-mixture models to improve the applicability of electrofishing for estimating absolute abundance. We sampled Smallmouth Bass populations by using tow-barge electrofishing across a range of environmental conditions in streams of the Ozark Highlands ecoregion. Using an information-theoretic approach, we identified effort, water clarity, wetted channel width, and water depth as covariates that were related to variable Smallmouth Bass electrofishing detection. Smallmouth Bass abundance estimates derived from our top model consistently agreed with baseline estimates obtained via snorkel surveys. Additionally, confidence intervals from the multinomial N-mixture models were consistently more precise than those of unbiased Petersen capture–recapture estimates due to the dependency among data sets in the hierarchical framework. We demonstrate the application of this contemporary population estimation method to address a longstanding stream fish management issue. We also detail the advantages and trade-offs of hierarchical population estimation methods relative to CPUE and estimation methods that model each site separately.
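The single-site removal estimator that multinomial N-mixture models generalize can be written in a few lines. The Python sketch below uses synthetic three-pass removal counts and profiles the multinomial likelihood over abundance with an approximate closed-form N given the per-pass capture probability p; the hierarchical models in the study extend this across sites with covariates on detection, which is not attempted here.

    # Single-site removal-sampling estimator of abundance N and capture probability p.
    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.stats import binom

    removals = np.array([34, 15, 7])                 # fish removed on passes 1..3 (synthetic)

    def profile_negloglik(p, y):
        # multinomial removal likelihood, with N profiled out via the approximate
        # closed form N ~ total / (1 - (1 - p)^k), rounded to the nearest integer
        k, total = len(y), y.sum()
        pi = p * (1 - p) ** np.arange(k)             # capture probability on each pass
        N = max(total, int(round(total / pi.sum())))
        ll = binom.logpmf(total, N, pi.sum())        # total captured out of N
        ll += np.sum(y * np.log(pi / pi.sum()))      # allocation of captures among passes
        return -ll

    res = minimize_scalar(profile_negloglik, bounds=(0.01, 0.99), args=(removals,), method="bounded")
    p_hat = res.x
    N_hat = round(removals.sum() / (1 - (1 - p_hat) ** len(removals)))
    print(f"p per pass ~ {p_hat:.2f}, abundance N ~ {N_hat}")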
Model parameter uncertainty analysis for an annual field-scale P loss model
NASA Astrophysics Data System (ADS)
Bolster, Carl H.; Vadas, Peter A.; Boykin, Debbie
2016-08-01
Phosphorous (P) fate and transport models are important tools for developing and evaluating conservation practices aimed at reducing P losses from agricultural fields. Because all models are simplifications of complex systems, there will exist an inherent amount of uncertainty associated with their predictions. It is therefore important that efforts be directed at identifying, quantifying, and communicating the different sources of model uncertainties. In this study, we conducted an uncertainty analysis with the Annual P Loss Estimator (APLE) model. Our analysis included calculating parameter uncertainties and confidence and prediction intervals for five internal regression equations in APLE. We also estimated uncertainties of the model input variables based on values reported in the literature. We then predicted P loss for a suite of fields under different management and climatic conditions while accounting for uncertainties in the model parameters and inputs and compared the relative contributions of these two sources of uncertainty to the overall uncertainty associated with predictions of P loss. Both the overall magnitude of the prediction uncertainties and the relative contributions of the two sources of uncertainty varied depending on management practices and field characteristics. This was due to differences in the number of model input variables and the uncertainties in the regression equations associated with each P loss pathway. Inspection of the uncertainties in the five regression equations brought attention to a previously unrecognized limitation with the equation used to partition surface-applied fertilizer P between leaching and runoff losses. As a result, an alternate equation was identified that provided similar predictions with much less uncertainty. Our results demonstrate how a thorough uncertainty and model residual analysis can be used to identify limitations with a model. Such insight can then be used to guide future data collection and model development and evaluation efforts.
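The two sources of uncertainty can be contrasted with a toy Monte Carlo propagation, sketched below in Python. The loss equation, coefficient values, and uncertainty ranges are placeholders rather than the APLE equations; the point is only to show how parameter and input uncertainty are sampled separately and their contributions compared.

    # Toy Monte Carlo propagation of parameter and input uncertainty through a loss equation.
    import numpy as np

    rng = np.random.default_rng(6)
    n = 10_000

    # "parameter" uncertainty: regression coefficients with standard errors (hypothetical)
    a = rng.normal(0.02, 0.004, n)               # runoff-extraction coefficient
    b = rng.normal(1.2, 0.15, n)                 # exponent on soil test P

    # "input" uncertainty: field characteristics with measurement error (hypothetical)
    soil_test_p = rng.normal(60, 10, n)          # mg/kg
    runoff = rng.lognormal(np.log(120), 0.3, n)  # mm/yr

    p_loss = a * np.maximum(soil_test_p, 0) ** b * runoff / 100.0   # kg/ha (toy model)

    lo, med, hi = np.percentile(p_loss, [2.5, 50, 97.5])
    print(f"P loss ~ {med:.2f} kg/ha (95% prediction interval {lo:.2f}-{hi:.2f})")

    # contribution of each source: hold the other at its mean value
    p_param_only = a * 60.0 ** b * 120.0 / 100.0
    p_input_only = 0.02 * np.maximum(soil_test_p, 0) ** 1.2 * runoff / 100.0
    print("spread from parameters only:", round(np.std(p_param_only), 2),
          "| from inputs only:", round(np.std(p_input_only), 2))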
Banger, Kamaljit; Yuan, Mingwei; Wang, Junming; Nafziger, Emerson D.; Pittelkow, Cameron M.
2017-01-01
Meeting crop nitrogen (N) demand while minimizing N losses to the environment has proven difficult despite significant field research and modeling efforts. To improve N management, several real-time N management tools have been developed with a primary focus on enhancing crop production. However, no coordinated effort exists to simultaneously address sustainability concerns related to N losses at field- and regional-scales. In this perspective, we highlight the opportunity for incorporating environmental effects into N management decision support tools for United States maize production systems by integrating publicly available crop models with grower-entered management information and gridded soil and climate data in a geospatial framework specifically designed to quantify environmental and crop production tradeoffs. To facilitate advances in this area, we assess the capability of existing crop models to provide in-season N recommendations while estimating N leaching and nitrous oxide emissions, discuss several considerations for initial framework development, and highlight important challenges related to improving the accuracy of crop model predictions. Such a framework would benefit the development of regional sustainable intensification strategies by enabling the identification of N loss hotspots which could be used to implement spatially explicit mitigation efforts in relation to current environmental quality goals and real-time weather conditions. Nevertheless, we argue that this long-term vision can only be realized by leveraging a variety of existing research efforts to overcome challenges related to improving model structure, accessing field data to enhance model performance, and addressing the numerous social difficulties in delivery and adoption of such tool by stakeholders. PMID:28804490
Transient modeling in simulation of hospital operations for emergency response.
Paul, Jomon Aliyas; George, Santhosh K; Yi, Pengfei; Lin, Li
2006-01-01
Rapid estimates of hospital capacity after an event that may cause a disaster can assist disaster-relief efforts. Due to the dynamics of hospitals, following such an event, it is necessary to accurately model the behavior of the system. A transient modeling approach using simulation and exponential functions is presented, along with its applications in an earthquake situation. The parameters of the exponential model are regressed using outputs from designed simulation experiments. The developed model is capable of representing transient, patient waiting times during a disaster. Most importantly, the modeling approach allows real-time capacity estimation of hospitals of various sizes and capabilities. Further, this research is an analysis of the effects of priority-based routing of patients within the hospital and the effects on patient waiting times determined using various patient mixes. The model guides the patients based on the severity of injuries and queues the patients requiring critical care depending on their remaining survivability time. The model also accounts the impact of prehospital transport time on patient waiting time.
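The exponential transient-fitting step can be sketched as follows in Python. The "simulation output" is faked with noise, and the functional form W(t) = W_ss*(1 - exp(-t/tau)) is one plausible choice rather than the paper's exact formulation; once the parameters are regressed from designed simulation runs, the fitted curve provides the kind of real-time capacity read-out described above.

    # Fit an exponential transient to (synthetic) simulated patient waiting times.
    import numpy as np
    from scipy.optimize import curve_fit

    def transient_wait(t, w_ss, tau):
        return w_ss * (1.0 - np.exp(-t / tau))

    rng = np.random.default_rng(7)
    t_hours = np.linspace(0, 48, 25)                       # time since the event
    sim_waits = transient_wait(t_hours, 90.0, 8.0) + rng.normal(0, 5, t_hours.size)

    params, cov = curve_fit(transient_wait, t_hours, sim_waits, p0=[60.0, 5.0])
    w_ss_hat, tau_hat = params
    print(f"steady-state wait ~ {w_ss_hat:.0f} min, time constant ~ {tau_hat:.1f} h")

    # the fitted curve then gives an expected wait at any elapsed time t
    print("expected wait at 12 h:", round(float(transient_wait(12.0, *params)), 1), "min")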
Computing diffuse fraction of global horizontal solar radiation: A model comparison.
Dervishi, Sokol; Mahdavi, Ardeshir
2012-06-01
For simulation-based prediction of buildings' energy use or expected gains from building-integrated solar energy systems, information on both the direct and diffuse components of solar radiation is necessary. Available measured data are, however, typically restricted to global horizontal irradiance. There have thus been many efforts in the past to develop algorithms for the derivation of the diffuse fraction of solar irradiance. In this context, the present paper compares eight models for estimating diffuse fraction of irradiance based on a database of measured irradiance from Vienna, Austria. These models generally involve mathematical formulations with multiple coefficients whose values are typically valid for a specific location. Subsequent to a first comparison of these eight models, three better-performing models were selected for a more detailed analysis. To this end, the coefficients of the models were modified to account for Vienna data. The results suggest that some models can provide relatively reliable estimations of the diffuse fractions of the global irradiance. The calibration procedure could only slightly improve the models' performance.
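For readers unfamiliar with this class of algorithms, the sketch below implements one widely cited diffuse-fraction correlation (an Erbs-type piecewise polynomial in the clearness index); it is shown only as an example of the general functional form, using commonly published coefficients rather than the Vienna-recalibrated values discussed above, and it is not necessarily one of the eight models compared in the paper.

```python
def erbs_diffuse_fraction(kt: float) -> float:
    """Diffuse fraction k_d of global horizontal irradiance as a function of the
    clearness index k_t (Erbs-type correlation, commonly published coefficients)."""
    if kt <= 0.22:
        return 1.0 - 0.09 * kt
    if kt <= 0.80:
        return (0.9511 - 0.1604 * kt + 4.388 * kt**2
                - 16.638 * kt**3 + 12.336 * kt**4)
    return 0.165

# Example: a moderately cloudy hour with clearness index 0.55
print(erbs_diffuse_fraction(0.55))
```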
Inference about density and temporary emigration in unmarked populations
Chandler, Richard B.; Royle, J. Andrew; King, David I.
2011-01-01
Few species are distributed uniformly in space, and populations of mobile organisms are rarely closed with respect to movement, yet many models of density rely upon these assumptions. We present a hierarchical model allowing inference about the density of unmarked populations subject to temporary emigration and imperfect detection. The model can be fit to data collected using a variety of standard survey methods such as repeated point counts in which removal sampling, double-observer sampling, or distance sampling is used during each count. Simulation studies demonstrated that parameter estimators are unbiased when temporary emigration is either "completely random" or is determined by the size and location of home ranges relative to survey points. We also applied the model to repeated removal sampling data collected on Chestnut-sided Warblers (Dendroica pensylvanica) in the White Mountain National Forest, USA. The density estimate from our model, 1.09 birds/ha, was similar to an estimate of 1.11 birds/ha produced by an intensive spot-mapping effort. Our model is also applicable when processes other than temporary emigration affect the probability of being available for detection, such as in studies using cue counts. Functions to implement the model have been added to the R package unmarked.
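In schematic notation (a simplification; see the paper and the unmarked documentation for the exact parameterization), the hierarchy couples a superpopulation, the subset available during each visit, and an observation model:

\[ M_i \sim \mathrm{Poisson}(\lambda_i), \qquad N_{it} \mid M_i \sim \mathrm{Binomial}(M_i, \phi), \qquad \mathbf{y}_{it} \mid N_{it} \sim \mathrm{Multinomial}\big(N_{it}, \boldsymbol{\pi}(p)\big), \]

where \(M_i\) is the superpopulation associated with survey point \(i\), \(\phi\) is the probability of being available for detection (one minus the temporary emigration probability), and \(\boldsymbol{\pi}(p)\) encodes the removal, double-observer, or distance-sampling protocol used within each count.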
Constellation Program Life-cycle Cost Analysis Model (LCAM)
NASA Technical Reports Server (NTRS)
Prince, Andy; Rose, Heidi; Wood, James
2008-01-01
The Constellation Program (CxP) is NASA's effort to replace the Space Shuttle, return humans to the moon, and prepare for a human mission to Mars. The major elements of the Constellation Lunar sortie design reference mission architecture are shown. Unlike the Apollo Program of the 1960's, affordability is a major concern of United States policy makers and NASA management. To measure Constellation affordability, a total ownership cost life-cycle parametric cost estimating capability is required. This capability is being developed by the Constellation Systems Engineering and Integration (SE&I) Directorate, and is called the Lifecycle Cost Analysis Model (LCAM). The requirements for LCAM are based on the need to have a parametric estimating capability in order to do top-level program analysis, evaluate design alternatives, and explore options for future systems. By estimating the total cost of ownership within the context of the planned Constellation budget, LCAM can provide Program and NASA management with the cost data necessary to identify the most affordable alternatives. LCAM is also a key component of the Integrated Program Model (IPM), an SE&I developed capability that combines parametric sizing tools with cost, schedule, and risk models to perform program analysis. LCAM is used in the generation of cost estimates for system level trades and analyses. It draws upon the legacy of previous architecture level cost models, such as the Exploration Systems Mission Directorate (ESMD) Architecture Cost Model (ARCOM) developed for Simulation Based Acquisition (SBA), and ATLAS. LCAM is used to support requirements and design trade studies by calculating changes in cost relative to a baseline option cost. Estimated costs are generally low fidelity to accommodate available input data and available cost estimating relationships (CERs). LCAM is capable of interfacing with the Integrated Program Model to provide the cost estimating capability for that suite of tools.
NASA Astrophysics Data System (ADS)
Mel, Riccardo; Viero, Daniele Pietro; Carniello, Luca; Defina, Andrea; D'Alpaos, Luigi
2014-09-01
Providing reliable and accurate storm surge forecasts is important for a wide range of problems related to coastal environments. In order to adequately support decision-making processes, it has also become increasingly important to be able to estimate the uncertainty associated with the storm surge forecast. The procedure commonly adopted to do this uses the results of a hydrodynamic model forced by a set of different meteorological forecasts; however, this approach requires a considerable, if not prohibitive, computational cost for real-time application. In this paper we present two simplified methods for estimating the uncertainty affecting storm surge prediction with moderate computational effort. In the first approach we use a computationally fast, statistical tidal model instead of a hydrodynamic numerical model to estimate storm surge uncertainty. The second approach is based on the observation that the uncertainty in the sea level forecast mainly stems from the uncertainty affecting the meteorological fields; this has led to the idea of estimating forecast uncertainty via a linear combination of suitable meteorological variances, directly extracted from the meteorological fields. The proposed methods were applied to estimate the uncertainty in the storm surge forecast in the Venice Lagoon. The results clearly show that the uncertainty estimated through a linear combination of suitable meteorological variances nicely matches the one obtained using the deterministic approach and overcomes some intrinsic limitations in the use of a statistical tidal model.
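In schematic form, the second approach amounts to writing the forecast error variance as a weighted combination of meteorological variances; the notation below is illustrative, and the coefficients would be calibrated rather than prescribed:

\[ \sigma^2_{\eta}(t) \approx \sum_{k} a_k\, \sigma^2_{m_k}(t), \]

where \(\sigma^2_{\eta}\) is the variance of the storm surge forecast, \(\sigma^2_{m_k}\) are variances of selected meteorological fields (e.g., wind components and sea-level pressure) taken from the set of meteorological forecasts, and \(a_k\) are calibrated weights.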
The burden of typhoid fever in low- and middle-income countries: A meta-regression approach.
Antillón, Marina; Warren, Joshua L; Crawford, Forrest W; Weinberger, Daniel M; Kürüm, Esra; Pak, Gi Deok; Marks, Florian; Pitzer, Virginia E
2017-02-01
Upcoming vaccination efforts against typhoid fever require an assessment of the baseline burden of disease in countries at risk. There are no typhoid incidence data from most low- and middle-income countries (LMICs), so model-based estimates offer insights for decision-makers in the absence of readily available data. We developed a mixed-effects model fit to data from 32 population-based studies of typhoid incidence in 22 locations in 14 countries. We tested the contribution of economic and environmental indices for predicting typhoid incidence using a stochastic search variable selection algorithm. We performed out-of-sample validation to assess the predictive performance of the model. We estimated that 17.8 million cases of typhoid fever occur each year in LMICs (95% credible interval: 6.9-48.4 million). Central Africa was predicted to experience the highest incidence of typhoid, followed by select countries in Central, South, and Southeast Asia. Incidence typically peaked in the 2-4 year old age group. Models incorporating widely available economic and environmental indicators were found to describe incidence better than null models. Recent estimates of typhoid burden may under-estimate the number of cases and magnitude of uncertainty in typhoid incidence. Our analysis permits prediction of overall as well as age-specific incidence of typhoid fever in LMICs, and incorporates uncertainty around the model structure and estimates of the predictors. Future studies are needed to further validate and refine model predictions and better understand year-to-year variation in cases.
NASA Astrophysics Data System (ADS)
Johnson, D. M.; Dorn, M. F.; Crawford, C.
2015-12-01
Since the dawn of earth observation imagery, particularly from systems like Landsat and the Advanced Very High Resolution Radiometer, there has been an overarching desire to regionally estimate crop production remotely. Research efforts integrating space-based imagery into yield models to achieve this need have indeed paralleled these systems through the years, yet development of a truly useful crop production monitoring system has been arguably mediocre in coming. As a result, relatively few organizations have operationalized the concept, and this is most acute in regions of the globe where there are not even alternative sources of crop production data being collected. However, the National Agricultural Statistics Service (NASS) has continued to push for this type of data source as a means to complement its long-standing, traditional crop production survey efforts, which are financially costly to the government and create undue respondent burden on farmers. Corn and soybeans, the two largest field crops in the United States, have been the focus of satellite-based production monitoring by NASS for the past decade. Data from the Moderate Resolution Imaging Spectroradiometer (MODIS) have been seen as the most pragmatic input source for modeling yields, primarily based on its daily revisit capabilities and reasonable ground sample resolution. The research methods presented here are broad but provide a summary of what is useful and adoptable with satellite imagery in terms of crop yield estimation. Corn and soybeans will be of particular focus, but other major staple crops like wheat and rice will also be presented. NASS will demonstrate that while MODIS provides a slew of vegetation-related products, the traditional normalized difference vegetation index (NDVI) is still ideal. Results using land surface temperature products, also generated from MODIS, will also be shown. Beyond the MODIS data itself, NASS research has also focused efforts on understanding a variety of data mining and modeling options, and results strongly lean toward solutions of ensemble decision trees like Cubist and Random Forest. Those comparisons of what are seen as best will also be shown. Finally, important model refinements accounting for temporal and spatial trends have also been considered and results will be presented.
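As a hedged sketch of the ensemble-tree approach mentioned above (here using scikit-learn's RandomForestRegressor; the predictors, units, and data are made-up stand-ins, not NASS's production pipeline):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical county-level training table: peak NDVI, mid-season land surface
# temperature (K), and a year index as predictors of corn yield (bu/acre).
X = rng.uniform([0.5, 295.0, 0.0], [0.9, 310.0, 10.0], size=(200, 3))
y = 60 + 150 * (X[:, 0] - 0.5) - 0.8 * (X[:, 1] - 300) + rng.normal(0, 8, 200)

model = RandomForestRegressor(n_estimators=300, random_state=0)
print("cross-validated R^2:", cross_val_score(model, X, y, cv=5, scoring="r2").mean())
```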
Modifying Taper-Derived Merchantable Height Estimates to Account for Tree Characteristics
James A. Westfall
2006-01-01
The U.S. Department of Agriculture Forest Service Northeastern Forest Inventory and Analysis program (NE-FIA) is developing regionwide tree-taper equations. Unlike most previous work on modeling tree form, this effort necessarily includes a wide array of tree species. For some species, branching patterns can produce undesirable tree form that reduces the merchantable...
Haas, Jessica R.; Thompson, Matthew P.; Tillery, Anne C.; Scott, Joe H.
2017-01-01
Wildfires can increase the frequency and magnitude of catastrophic debris flows. Integrated, proactive natural hazard assessment would therefore characterize landscapes based on the potential for the occurrence and interactions of wildfires and postwildfire debris flows. This chapter presents a new modeling effort that can quantify the variability surrounding a key input to postwildfire debris-flow modeling, the amount of watershed burned at moderate to high severity, in a prewildfire context. The use of stochastic wildfire simulation captures variability surrounding the timing and location of ignitions, fire weather patterns, and ultimately the spatial patterns of watershed area burned. Model results provide for enhanced estimates of postwildfire debris-flow hazard in a prewildfire context, and multiple hazard metrics are generated to characterize and contrast hazards across watersheds. Results can guide mitigation efforts by allowing planners to identify which factors may be contributing the most to the hazard rankings of watersheds.
Recent Progress Towards Predicting Aircraft Ground Handling Performance
NASA Technical Reports Server (NTRS)
Yager, T. J.; White, E. J.
1981-01-01
The significant progress which has been achieved in development of aircraft ground handling simulation capability is reviewed and additional improvements in software modeling are identified. The problem associated with providing necessary simulator input data for adequate modeling of aircraft tire/runway friction behavior is discussed and efforts to improve this complex model, and hence simulator fidelity, are described. Aircraft braking performance data obtained on several wet runway surfaces are compared to ground vehicle friction measurements and, by use of empirically derived methods, good agreement between actual and estimated aircraft braking friction from ground vehicle data is shown. The performance of a relatively new friction measuring device, the friction tester, showed great promise in providing data applicable to aircraft friction performance. Additional research efforts to improve methods of predicting tire friction performance are discussed, including use of an instrumented tire test vehicle to expand the tire friction data bank and a study of surface texture measurement techniques.
NASA Astrophysics Data System (ADS)
Wellen, Christopher; Arhonditsis, George B.; Labencki, Tanya; Boyd, Duncan
2012-10-01
Regression-type, hybrid empirical/process-based models (e.g., SPARROW, PolFlow) have assumed a prominent role in efforts to estimate the sources and transport of nutrient pollution at river basin scales. However, almost no attempts have been made to explicitly accommodate interannual nutrient loading variability in their structure, despite empirical and theoretical evidence indicating that the associated source/sink processes are quite variable at annual timescales. In this study, we present two methodological approaches to accommodate interannual variability with the Spatially Referenced Regressions on Watershed attributes (SPARROW) nonlinear regression model. The first strategy uses the SPARROW model to estimate a static baseline load and climatic variables (e.g., precipitation) to drive the interannual variability. The second approach allows the source/sink processes within the SPARROW model to vary at annual timescales using dynamic parameter estimation techniques akin to those used in dynamic linear models. Model parameterization is founded upon Bayesian inference techniques that explicitly consider calibration data and model uncertainty. Our case study is the Hamilton Harbor watershed, a mixed agricultural and urban residential area located at the western end of Lake Ontario, Canada. Our analysis suggests that dynamic parameter estimation is the more parsimonious of the two strategies tested and can offer insights into the temporal structural changes associated with watershed functioning. Consistent with empirical and theoretical work, model estimated annual in-stream attenuation rates varied inversely with annual discharge. Estimated phosphorus source areas were concentrated near the receiving water body during years of high in-stream attenuation and dispersed along the main stems of the streams during years of low attenuation, suggesting that nutrient source areas are subject to interannual variability.
Kovacs, Stephanie D; Mullholland, Kim; Bosch, Julia; Campbell, Harry; Forouzanfar, Mohammad H; Khalil, Ibrahim; Lim, Stephen; Liu, Li; Maley, Stephen N; Mathers, Colin D; Matheson, Alastair; Mokdad, Ali H; O'Brien, Kate; Parashar, Umesh; Schaafsma, Torin T; Steele, Duncan; Hawes, Stephen E; Grove, John T
2015-01-16
Pneumonia and diarrhea are leading causes of death for children under five (U5). It is challenging to estimate the total number of deaths and cause-specific mortality fractions. Two major efforts, one led by the Institute for Health Metrics and Evaluation (IHME) and the other led by the World Health Organization (WHO)/Child Health Epidemiology Reference Group (CHERG) created estimates for the burden of disease due to these two syndromes, yet their estimates differed greatly for 2010. This paper discusses three main drivers of the differences: data sources, data processing, and covariates used for modelling. The paper discusses differences in the model assumptions for etiology-specific estimates and presents recommendations for improving future models. IHME's Global Burden of Disease (GBD) 2010 study estimated 6.8 million U5 deaths compared to 7.6 million U5 deaths from CHERG. The proportional differences between the pneumonia and diarrhea burden estimates from the two groups are much larger; GBD 2010 estimated 0.847 million and CHERG estimated 1.396 million due to pneumonia. Compared to CHERG, GBD 2010 used broader inclusion criteria for verbal autopsy and vital registration data. GBD 2010 and CHERG used different data processing procedures and therefore attributed the causes of neonatal death differently. The major difference in pneumonia etiologies modeling approach was the inclusion of observational study data; GBD 2010 included observational studies. CHERG relied on vaccine efficacy studies. Greater transparency in modeling methods and more timely access to data sources are needed. In October 2013, the Bill & Melinda Gates Foundation (BMGF) hosted an expert meeting to examine possible approaches for better estimation. The group recommended examining the impact of data by systematically excluding sources in their models. GBD 2.0 will use a counterfactual approach for estimating mortality from pathogens due to specific etiologies to overcome bias of the methods used in GBD 2010 going forward.
DOE Office of Scientific and Technical Information (OSTI.GOV)
James Francfort; Kevin Morrow; Dimitri Hochard
2007-02-01
This report documents efforts to develop a computer tool for modeling the economic payback for comparative airport ground support equipment (GSE) that are propelled by either electric motors or gasoline and diesel engines. The types of GSE modeled are pushback tractors, baggage tractors, and belt loaders. The GSE modeling tool includes an emissions module that estimates the amount of tailpipe emissions saved by replacing internal combustion engine GSE with electric GSE. This report contains modeling assumptions, methodology, a user’s manual, and modeling results. The model was developed based on the operations of two airlines at four United States airports.
Ehn, S; Sellerer, T; Mechlem, K; Fehringer, A; Epple, M; Herzen, J; Pfeiffer, F; Noël, P B
2017-01-07
Following the development of energy-sensitive photon-counting detectors using high-Z sensor materials, application of spectral x-ray imaging methods to clinical practice comes into reach. However, these detectors require extensive calibration efforts in order to perform spectral imaging tasks like basis material decomposition. In this paper, we report a novel approach to basis material decomposition that utilizes a semi-empirical estimator for the number of photons registered in distinct energy bins in the presence of beam-hardening effects which can be termed as a polychromatic Beer-Lambert model. A maximum-likelihood estimator is applied to the model in order to obtain estimates of the underlying sample composition. Using a Monte-Carlo simulation of a typical clinical CT acquisition, the performance of the proposed estimator was evaluated. The estimator is shown to be unbiased and efficient according to the Cramér-Rao lower bound. In particular, the estimator is capable of operating with a minimum number of calibration measurements. Good results were obtained after calibration using less than 10 samples of known composition in a two-material attenuation basis. This opens up the possibility for fast re-calibration in the clinical routine which is considered an advantage of the proposed method over other implementations reported in the literature.
NASA Astrophysics Data System (ADS)
Ehn, S.; Sellerer, T.; Mechlem, K.; Fehringer, A.; Epple, M.; Herzen, J.; Pfeiffer, F.; Noël, P. B.
2017-01-01
Following the development of energy-sensitive photon-counting detectors using high-Z sensor materials, application of spectral x-ray imaging methods to clinical practice comes into reach. However, these detectors require extensive calibration efforts in order to perform spectral imaging tasks like basis material decomposition. In this paper, we report a novel approach to basis material decomposition that utilizes a semi-empirical estimator for the number of photons registered in distinct energy bins in the presence of beam-hardening effects which can be termed as a polychromatic Beer-Lambert model. A maximum-likelihood estimator is applied to the model in order to obtain estimates of the underlying sample composition. Using a Monte-Carlo simulation of a typical clinical CT acquisition, the performance of the proposed estimator was evaluated. The estimator is shown to be unbiased and efficient according to the Cramér-Rao lower bound. In particular, the estimator is capable of operating with a minimum number of calibration measurements. Good results were obtained after calibration using less than 10 samples of known composition in a two-material attenuation basis. This opens up the possibility for fast re-calibration in the clinical routine which is considered an advantage of the proposed method over other implementations reported in the literature.
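A minimal numerical sketch of the idea, not the authors' calibrated estimator: photon counts in each energy bin follow a polychromatic Beer-Lambert forward model, and the basis material line integrals are recovered by maximizing the Poisson likelihood. The bin spectra, attenuation values, and thicknesses below are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative two-bin, two-material setup on a coarse three-point energy grid:
# effective bin spectra S[b, e] (photons) and attenuation coefficients mu[m, e] (1/cm).
S = np.array([[4e4, 2e4, 0.0],
              [0.0, 1e4, 3e4]])
mu = np.array([[0.35, 0.25, 0.18],    # basis material 1
               [0.90, 0.55, 0.30]])   # basis material 2

def expected_counts(A):
    """Polychromatic Beer-Lambert: lambda_b = sum_E S[b, E] * exp(-sum_m A_m mu[m, E])."""
    return S @ np.exp(-(A @ mu))

def neg_log_likelihood(A, counts):
    lam = expected_counts(A)
    return np.sum(lam - counts * np.log(lam))   # Poisson NLL up to a constant

true_A = np.array([2.0, 0.5])                   # basis line integrals (cm)
counts = np.random.poisson(expected_counts(true_A))
fit = minimize(neg_log_likelihood, x0=np.array([1.0, 1.0]), args=(counts,),
               bounds=[(0, None), (0, None)])
print("ML estimate of basis line integrals:", fit.x)
```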
Producing HIV estimates: from global advocacy to country planning and impact measurement
Mahy, Mary; Brown, Tim; Stover, John; Walker, Neff; Stanecki, Karen; Kirungi, Wilford; Garcia-Calleja, Txema; Ghys, Peter D.
2017-01-01
ABSTRACT Background: The development of global HIV estimates has been critical for understanding, advocating for and funding the HIV response. The process of generating HIV estimates has been cited as the gold standard for public health estimates. Objective: This paper provides important lessons from an international scientific collaboration and provides a useful model for those producing public health estimates in other fields. Design: Through the compilation and review of published journal articles, United Nations reports, other documents and personal experience we compiled historical information about the estimates and identified potential lessons for other public health estimation efforts. Results: Through the development of core partnerships with country teams, implementers, demographers, mathematicians, epidemiologists and international organizations, UNAIDS has led a process to develop the capacity of country teams to produce internationally comparable HIV estimates. The guidance provided by these experts has led to refinements in the estimated numbers of people living with HIV, new HIV infections and AIDS-related deaths over the past 20 years. A number of important updates to the methods since 1997 resulted in fluctuations in the estimated levels, trends and impact of HIV. The largest correction occurred between the 2005 and 2007 rounds with the additions of household survey data into the models. In 2001 the UNAIDS models at that time estimated there were 40 million people living with HIV. In 2016, improved models estimate there were 30 million (27.6–32.7 million) people living with HIV in 2001. Conclusions: Country ownership of the estimation tools has allowed for additional uses of the results than had the results been produced by researchers or a team in Geneva. Guidance from a reference group and input from country teams have led to critical improvements in the models over time. Those changes have improved countries’ and stakeholders’ understanding of the HIV epidemic. PMID:28532304
Ground based mobile isotopic methane measurements in the Front Range, Colorado
NASA Astrophysics Data System (ADS)
Vaughn, B. H.; Rella, C.; Petron, G.; Sherwood, O.; Mielke-Maday, I.; Schwietzke, S.
2014-12-01
Increased development of unconventional oil and gas resources in North America has given rise to attempts to monitor and quantify fugitive emissions of methane from the industry. Emission estimates of methane from oil and gas basins can vary significantly from one study to another as well as from EPA or State estimates. New efforts are aimed at reconciling bottom-up, or inventory-based, emission estimates of methane with top-down estimates based on atmospheric measurements from aircraft, towers, mobile ground-based vehicles, and atmospheric models. Attributing airborne measurements of regional methane fluxes to specific sources is informed by ground-based measurements of methane. Stable isotopic measurements (δ13C) of methane help distinguish between emissions from the O&G industry, Confined Animal Feed Operations (CAFO), and landfills, but analytical challenges typically limit meaningful isotopic measurements to individual point sampling. We are developing a toolbox to use δ13CH4 measurements to assess the partitioning of methane emissions for regions with multiple methane sources. The method was applied to the Denver-Julesberg Basin. Here we present data from continuous isotopic measurements obtained over a wide geographic area by using MegaCore, a 1500 ft. tube that is constantly filled with sample air while driving, then subsequently analyzed at slower rates using cavity ring down spectroscopy (CRDS). Pressure, flow and calibration are tightly controlled allowing precise attribution of methane enhancements to their point of collection. Comparisons with point measurements are needed to confirm regional values and further constrain flux estimates and models. This effort was made in conjunction with several major field campaigns in the Colorado Front Range in July-August 2014, including FRAPPÉ (Front Range Air Pollution and Photochemistry Experiment), DISCOVER-AQ, and the Air Water Gas NSF Sustainability Research Network at the University of Colorado.
Ellison, Laura E.; Lukacs, Paul M.
2014-01-01
Concern for migratory tree-roosting bats in North America has grown because of possible population declines from wind energy development. This concern has driven interest in estimating population-level changes. Mark-recapture methodology is one possible analytical framework for assessing bat population changes, but sample size requirements to produce reliable estimates have not been estimated. To illustrate the sample sizes necessary for a mark-recapture-based monitoring program, we conducted power analyses using a statistical model that allows reencounters of live and dead marked individuals. We ran 1,000 simulations for each of five broad sample size categories in a Burnham joint model, and then compared the proportion of simulations in which 95% confidence intervals overlapped between and among years for a 4-year study. Additionally, we conducted sensitivity analyses of sample size to various capture probabilities and recovery probabilities. More than 50,000 individuals per year would need to be captured and released to accurately determine 10% and 15% declines in annual survival. To detect more dramatic declines of 33% or 50% in survival over four years, sample sizes of 25,000 or 10,000 per year, respectively, would be sufficient. Sensitivity analyses reveal that increasing recovery of dead marked individuals may be more valuable than increasing capture probability of marked individuals. Because of the extraordinary effort that would be required and the difficulty of attaining reliable estimates, we advise caution before such a mark-recapture effort is initiated. We make recommendations for what techniques show the most promise for mark-recapture studies of bats because some techniques violate the assumptions of mark-recapture methodology when used to mark bats.
Robust estimation of simulated urinary volume from camera images under bathroom illumination.
Honda, Chizuru; Bhuiyan, Md Shoaib; Kawanaka, Haruki; Watanabe, Eiichi; Oguri, Koji
2016-08-01
The general uroflowmetry method involves a risk of nosocomial infections and requires time and effort for recording. Medical institutions, therefore, need to measure voided volume simply and hygienically. A multiple cylindrical model that can estimate the fluid flow rate from images photographed with a camera was proposed in an earlier study. This study implemented a flow rate estimation by using a general-purpose camera system (Raspberry Pi Camera Module) and the multiple cylindrical model. However, when measurements are performed in a bathroom, variation in illumination generates large amounts of noise in extracting the liquid region, so the estimation error becomes very large. In other words, the specifications of the previous study's camera setup regarding the shutter type and the frame rate were too strict. In this study, we relax the specifications to achieve flow rate estimation using a general-purpose camera. In order to determine the appropriate approximate curve, we propose a binarization method using background subtraction at each scanning row and a curve approximation method using RANSAC. Finally, by evaluating the estimation accuracy of our experiment and by comparing it with the earlier study's results, we show the effectiveness of our proposed method for flow rate estimation.
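A rough sketch of the robust curve-approximation step (assuming scikit-learn's RANSACRegressor with quadratic features as a stand-in for the paper's RANSAC procedure; the per-row edge data are synthetic):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import RANSACRegressor

rng = np.random.default_rng(3)

# Stand-in per-row measurements: scan-row index vs. detected liquid-edge column,
# as produced by row-wise background subtraction; ~20% of rows are illumination noise.
rows = np.arange(200, dtype=float).reshape(-1, 1)
edge = 0.002 * rows.ravel() ** 2 - 0.3 * rows.ravel() + 120.0
edge[rng.choice(200, 40, replace=False)] += rng.normal(0, 60, 40)   # outlier rows

model = make_pipeline(PolynomialFeatures(degree=2),
                      RANSACRegressor(residual_threshold=10.0, random_state=0))
model.fit(rows, edge)
smooth_edge = model.predict(rows)   # robust curve passed on to flow-rate estimation
```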
Inventory and transport of plastic debris in the Laurentian Great Lakes.
Hoffman, Matthew J; Hittinger, Eric
2017-02-15
Plastic pollution in the world's oceans has received much attention, but there has been increasing concern about the high concentrations of plastic debris in the Laurentian Great Lakes. Using census data and methodologies used to study ocean debris we derive a first estimate of 9887 metric tonnes per year of plastic debris entering the Great Lakes. These estimates are translated into population-dependent particle inputs which are advected using currents from a hydrodynamic model to map the spatial distribution of plastic debris in the Great Lakes. Model results compare favorably with previously published sampling data. The samples are used to calibrate the model to derive surface microplastic mass estimates of 0.0211 metric tonnes in Lake Superior, 1.44 metric tonnes in Huron, and 4.41 metric tonnes in Erie. These results have many applications, including informing cleanup efforts, helping target pollution prevention, and understanding the inter-state or international flows of plastic pollution. Copyright © 2016 Elsevier Ltd. All rights reserved.
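A toy sketch of the advection step used to map debris, not the study's hydrodynamic model: particles released in proportion to coastal population are stepped forward with a prescribed surface-current field (the current field and units below are invented).

```python
import numpy as np

def advect(positions, velocity_at, dt, n_steps):
    """Forward-Euler advection of particle positions (N x 2, km) with a
    user-supplied function velocity_at(positions) returning velocities in km/day."""
    for _ in range(n_steps):
        positions = positions + dt * velocity_at(positions)
    return positions

# Hypothetical surface current: slow eastward drift plus a weak rotation.
def toy_current(p):
    return np.column_stack([5.0 - 0.01 * p[:, 1], 0.01 * p[:, 0]])

release = np.random.default_rng(1).uniform(0, 100, size=(1000, 2))  # population-weighted inputs
final = advect(release, toy_current, dt=1.0, n_steps=30)
```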
Inference of reactive transport model parameters using a Bayesian multivariate approach
NASA Astrophysics Data System (ADS)
Carniato, Luca; Schoups, Gerrit; van de Giesen, Nick
2014-08-01
Parameter estimation of subsurface transport models from multispecies data requires the definition of an objective function that includes different types of measurements. Common approaches are weighted least squares (WLS), where weights are specified a priori for each measurement, and weighted least squares with weight estimation (WLS(we)) where weights are estimated from the data together with the parameters. In this study, we formulate the parameter estimation task as a multivariate Bayesian inference problem. The WLS and WLS(we) methods are special cases in this framework, corresponding to specific prior assumptions about the residual covariance matrix. The Bayesian perspective allows for generalizations to cases where residual correlation is important and for efficient inference by analytically integrating out the variances (weights) and selected covariances from the joint posterior. Specifically, the WLS and WLS(we) methods are compared to a multivariate (MV) approach that accounts for specific residual correlations without the need for explicit estimation of the error parameters. When applied to inference of reactive transport model parameters from column-scale data on dissolved species concentrations, the following results were obtained: (1) accounting for residual correlation between species provides more accurate parameter estimation for high residual correlation levels whereas its influence for predictive uncertainty is negligible, (2) integrating out the (co)variances leads to an efficient estimation of the full joint posterior with a reduced computational effort compared to the WLS(we) method, and (3) in the presence of model structural errors, none of the methods is able to identify the correct parameter values.
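To make the comparison concrete, the objectives can be written in generic notation (not the paper's exact symbols). The weighted least squares objective is

\[ \Phi_{\mathrm{WLS}}(\theta) = \sum_{j}\sum_{i} w_j \left[y_{ij} - f_{ij}(\theta)\right]^2, \]

with a fixed weight \(w_j\) per species \(j\) (or, in WLS(we), weights estimated jointly with \(\theta\)), whereas the multivariate formulation minimizes the negative log-likelihood

\[ -\log L(\theta, \Sigma) = \tfrac{n}{2}\log\lvert\Sigma\rvert + \tfrac{1}{2}\sum_{i} \mathbf{r}_i(\theta)^{\top}\Sigma^{-1}\mathbf{r}_i(\theta) + \mathrm{const}, \]

where \(\mathbf{r}_i(\theta)\) stacks the residuals of all species at observation \(i\) and \(\Sigma\) is a residual covariance matrix whose variances, and selected covariances, can be integrated out analytically under suitable priors.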
Techniques and software tools for estimating ultrasonic signal-to-noise ratios
NASA Astrophysics Data System (ADS)
Chiou, Chien-Ping; Margetan, Frank J.; McKillip, Matthew; Engle, Brady J.; Roberts, Ronald A.
2016-02-01
At Iowa State University's Center for Nondestructive Evaluation (ISU CNDE), the use of models to simulate ultrasonic inspections has played a key role in R&D efforts for over 30 years. To this end, a series of wave propagation models, flaw response models, and microstructural backscatter models have been developed to address inspection problems of interest. One use of the combined models is the estimation of signal-to-noise ratios (S/N) in circumstances where backscatter from the microstructure (grain noise) acts to mask sonic echoes from internal defects. Such S/N models have been used in the past to address questions of inspection optimization and reliability. Under the sponsorship of the National Science Foundation's Industry/University Cooperative Research Center at ISU, an effort was recently initiated to improve existing research-grade software by adding a graphical user interface (GUI) to create user-friendly tools for the rapid estimation of S/N for ultrasonic inspections of metals. The software combines: (1) a Python-based GUI for specifying an inspection scenario and displaying results; and (2) a Fortran-based engine for computing defect signal and backscattered grain noise characteristics. The latter makes use of several models including: the Multi-Gaussian Beam Model for computing sonic fields radiated by commercial transducers; the Thompson-Gray Model for the response from an internal defect; the Independent Scatterer Model for backscattered grain noise; and the Stanke-Kino Unified Model for attenuation. The initial emphasis was on reformulating the research-grade code into a suitable modular form, adding the graphical user interface, and performing computations rapidly and robustly. Thus the initial inspection problem being addressed is relatively simple. A normal-incidence pulse/echo immersion inspection is simulated for a curved metal component having a non-uniform microstructure, specifically an equiaxed, untextured microstructure in which the average grain size may vary with depth. The defect may be a flat-bottomed-hole reference reflector, a spherical void, or a spherical inclusion. In future generations of the software, microstructures and defect types will be generalized, and oblique-incidence inspections will be treated as well. This paper provides an overview of the modeling approach and presents illustrative results output by the first-generation software.
Designing Fault-Injection Experiments for the Reliability of Embedded Systems
NASA Technical Reports Server (NTRS)
White, Allan L.
2012-01-01
This paper considers the long-standing problem of conducting fault-injection experiments to establish the ultra-reliability of embedded systems. There have been extensive efforts in fault injection, and this paper offers a partial summary of them, but these previous efforts have focused on realism and efficiency. Fault injections have been used to examine diagnostics and to test algorithms, but the literature does not contain any framework that says how to conduct fault-injection experiments to establish ultra-reliability. A solution to this problem integrates field-data, arguments-from-design, and fault-injection into a seamless whole. The solution in this paper is to derive a model reduction theorem for a class of semi-Markov models suitable for describing ultra-reliable embedded systems. The derivation shows that a tight upper bound on the probability of system failure can be obtained using only the means of system-recovery times, thus reducing the experimental effort to estimating a reasonable number of easily-observed parameters. The paper includes an example of a system subject to both permanent and transient faults. There is a discussion of integrating fault-injection with field-data and arguments-from-design.
Reboussin, Beth A; Preisser, John S; Song, Eun-Young; Wolfson, Mark
2012-07-01
Under-age drinking is an enormous public health issue in the USA. Evidence that community level structures may impact on under-age drinking has led to a proliferation of efforts to change the environment surrounding the use of alcohol. Although the focus of these efforts is to reduce drinking by individual youths, environmental interventions are typically implemented at the community level with entire communities randomized to the same intervention condition. A distinct feature of these trials is the tendency of the behaviours of individuals residing in the same community to be more alike than that of others residing in different communities, which is herein called 'clustering'. Statistical analyses and sample size calculations must account for this clustering to avoid type I errors and to ensure an appropriately powered trial. Clustering itself may also be of scientific interest. We consider the alternating logistic regressions procedure within the population-averaged modelling framework to estimate the effect of a law enforcement intervention on the prevalence of under-age drinking behaviours while modelling the clustering at multiple levels, e.g. within communities and within neighbourhoods nested within communities, by using pairwise odds ratios. We then derive sample size formulae for estimating intervention effects when planning a post-test-only or repeated cross-sectional community-randomized trial using the alternating logistic regressions procedure.
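For reference, the pairwise odds ratio used by alternating logistic regressions to quantify within-cluster association between responses of two members \(j\) and \(k\) of the same community (written here in generic notation) is

\[ \psi_{jk} = \frac{\Pr(Y_j = 1, Y_k = 1)\,\Pr(Y_j = 0, Y_k = 0)}{\Pr(Y_j = 1, Y_k = 0)\,\Pr(Y_j = 0, Y_k = 1)}, \]

with separate pairwise odds ratios specified for pairs within the same neighborhood and for pairs in different neighborhoods of the same community, capturing the two levels of clustering described above.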
NASA Astrophysics Data System (ADS)
Linares, R.; Palmer, D.; Thompson, D.; Koller, J.
2013-09-01
Recent events in space, including the collision of Russia's Cosmos 2251 satellite with Iridium 33 and China's Feng Yun 1C anti-satellite demonstration, have stressed the capabilities of Space Surveillance Network (SSN) and its ability to provide accurate and actionable impact probability estimates. The SSN network has the unique challenge of tracking more than 18,000 resident space objects (RSOs) and providing critical collision avoidance warnings to military, NASA, and commercial systems. However, due to the large number of RSOs and the limited number of sensors available to track them, it is impossible to maintain persistent surveillance. Observation gaps result in large propagation intervals between measurements and close approaches. Coupled with nonlinear RSO dynamics this results in difficulty in modeling the probability distribution functions (pdfs) of the RSO. In particular low-Earth orbiting (LEO) satellites are heavily influenced by atmospheric drag, which is very difficult to model accurately. A number of atmospheric models exist which can be classified as either empirical or physics-based models. The current Air Force standard is the High Accuracy Satellite Drag Model (HASDM), which is an empirical model based on observation of calibration satellites. These satellite observations are used to determine model parameters based on their orbit determination solutions. Atmospheric orbits are perturbed by a number of factors including drag coefficient, attitude, and shape of the space object. The satellites used for the HASDM model calibration process are chosen because of their relatively simple shapes, to minimize errors introduced due to shape miss-modeling. Under this requirement the number of calibration satellites that can be used for calibrating the atmospheric models is limited. Los Alamos National Laboratory (LANL) has established a research effort, called IMPACT (Integrated Modeling of Perturbations in Atmospheres for Conjunction Tracking), to improve impact assessment via improved physics-based modeling. As part of this effort calibration satellite observations are used to dynamically calibrate the physics-based model and to improve its forecasting capability. The observations are collected from a variety of sources, including from LANL's own Raven-class optical telescope. This system collects both astrometric and photometric data on space objects. The photometric data will be used to estimate the space objects' attitude and shape. Non-resolved photometric data have been studied by many as a mechanism for space object characterization. Photometry is the measurement of an object's flux or apparent brightness measured over a wavelength band. The temporal variation of photometric measurements is referred to as photometric signature. The photometric optical signature of an object contains information about shape, attitude, size and material composition. This work focuses on the processing of the data collected with LANL's telescope in an effort to use photometric data to expand the number of space objects that can be used as calibration satellites. An Unscented Kalman filter is used to estimate the attitude and angular velocity of the space object; both real data and simulated data scenarios are shown. A number of inactive space objects are used for the real data examples and good estimation results are shown.
Toward a consistent modeling framework to assess multi-sectoral climate impacts.
Monier, Erwan; Paltsev, Sergey; Sokolov, Andrei; Chen, Y-H Henry; Gao, Xiang; Ejaz, Qudsia; Couzo, Evan; Schlosser, C Adam; Dutkiewicz, Stephanie; Fant, Charles; Scott, Jeffery; Kicklighter, David; Morris, Jennifer; Jacoby, Henry; Prinn, Ronald; Haigh, Martin
2018-02-13
Efforts to estimate the physical and economic impacts of future climate change face substantial challenges. To enrich the currently popular approaches to impact analysis, which involve evaluation of a damage function or multi-model comparisons based on a limited number of standardized scenarios, we propose integrating a geospatially resolved physical representation of impacts into a coupled human-Earth system modeling framework. Large internationally coordinated exercises cannot easily respond to new policy targets, and the implementation of standard scenarios across models, institutions, and research communities can yield inconsistent estimates. Here, we argue for a shift toward the use of a self-consistent integrated modeling framework to assess climate impacts, and discuss ways the integrated assessment modeling community can move in this direction. We then demonstrate the capabilities of such a modeling framework by conducting a multi-sectoral assessment of climate impacts under a range of consistent and integrated economic and climate scenarios that are responsive to new policies and business expectations.
Parametric Cost Models for Space Telescopes
NASA Technical Reports Server (NTRS)
Stahl, H. Philip; Henrichs, Todd; Dollinger, Courtney
2010-01-01
Multivariable parametric cost models for space telescopes provide several benefits to designers and space system project managers. They identify major architectural cost drivers and allow high-level design trades. They enable cost-benefit analysis for technology development investment. And they provide a basis for estimating total project cost. A survey of historical models found that there is no definitive space telescope cost model. In fact, published models vary greatly [1]. Thus, there is a need for parametric space telescope cost models. An effort is underway to develop single variable [2] and multi-variable [3] parametric space telescope cost models based on the latest available data and applying rigorous analytical techniques. Specific cost estimating relationships (CERs) have been developed that show that aperture diameter is the primary cost driver for large space telescopes; technology development as a function of time reduces cost at the rate of 50% per 17 years; it costs less per square meter of collecting aperture to build a large telescope than a small telescope; and increasing mass reduces cost.
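The general shape of such a CER can be illustrated with a hedged sketch: cost scaling with aperture diameter to a power below 2 (so cost per square meter of collecting aperture falls with size), multiplied by a factor that halves every 17 years of technology advance, consistent with the trends reported above. The constant and exponent below are placeholders, not the published fit.

```python
def telescope_cost(aperture_m: float, launch_year: int,
                   k: float = 100.0, b: float = 1.6, ref_year: int = 2000) -> float:
    """Illustrative single-variable CER (placeholder constants, notional $M):
    cost ~ k * D**b * 0.5**((year - ref_year) / 17)."""
    return k * aperture_m**b * 0.5 ** ((launch_year - ref_year) / 17)

# A 2.4 m and an 8 m telescope launched in the same year, under the placeholder CER
print(telescope_cost(2.4, 2020), telescope_cost(8.0, 2020))
```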
Transient high frequency signal estimation: A model-based processing approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnes, F.L.
1985-03-22
By utilizing the superposition property of linear systems, a method of estimating the incident signal from reflective nondispersive data is developed. One of the basic merits of this approach is that the reflections were removed by direct application of a Wiener-type estimation algorithm after the appropriate input was synthesized. The structure of the nondispersive signal model is well documented, and thus its credence is established. The model is stated and more effort is devoted to practical methods of estimating the model parameters. Though a general approach was developed for obtaining the reflection weights, a simpler approach was employed here, since a fairly good reflection model is available. The technique essentially consists of calculating ratios of the autocorrelation function at lag zero and the lag where the incident and first reflection coincide. We initially performed our processing procedure on a measurement of a single signal. Multiple application of the processing procedure was required when we applied the reflection removal technique on a measurement containing information from the interaction of two physical phenomena. All processing was performed using SIG, an interactive signal processing package. One of the many consequences of using SIG was that repetitive operations were, for the most part, automated. A custom menu was designed to perform the deconvolution process.
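A small numerical sketch of the autocorrelation-ratio idea, simplified from the report's procedure: assuming a single reflection, y(t) = s(t) + a s(t - tau), and an incident signal whose autocorrelation is negligible at lag tau, the ratio R_y(tau)/R_y(0) is approximately a/(1 + a^2), which can be inverted for the reflection weight a.

```python
import numpy as np

rng = np.random.default_rng(0)
tau, a_true = 40, 0.6                         # reflection lag (samples) and weight

s = rng.normal(size=2000) * np.exp(-np.arange(2000) / 300.0)   # stand-in incident signal
y = s.copy()
y[tau:] += a_true * s[:-tau]                  # add one delayed, scaled reflection

def autocorr(x, lag):
    return np.dot(x[:-lag or None], x[lag:] if lag else x)

r = autocorr(y, tau) / autocorr(y, 0)         # ~ a / (1 + a^2) under the assumptions above
a_est = (1 - np.sqrt(1 - 4 * r**2)) / (2 * r)
print(a_true, round(a_est, 2))
```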
Thermal Protection System Mass Estimating Relationships for Blunt-Body, Earth Entry Spacecraft
NASA Technical Reports Server (NTRS)
Sepka, Steven A.; Samareh, Jamshid A.
2015-01-01
System analysis and design of any entry system must balance the level of fidelity for each discipline against the project timeline. One way to inject high-fidelity analysis earlier in the design effort is to develop surrogate models for the high-fidelity disciplines. Surrogate models for the Thermal Protection System (TPS) are formulated as Mass Estimating Relationships (MERs). TPS MERs are presented that predict the amount of TPS necessary for safe Earth entry for blunt-body spacecraft using simple correlations that closely match estimates from NASA's high-fidelity ablation modeling tool, the Fully Implicit Ablation and Thermal Analysis Program (FIAT). These MERs provide a first-order estimate for rapid feasibility studies. There are 840 different trajectories considered in this study, and each TPS MER has a peak heating limit. MERs for the vehicle forebody include the ablators Phenolic Impregnated Carbon Ablator (PICA) and Carbon Phenolic atop Advanced Carbon-Carbon. For the aftbody, the materials are Silicone Impregnated Reusable Ceramic Ablator (SIRCA), Acusil II, SLA-561V, and LI-900. The MERs are accurate to within 14% (at one standard deviation) of FIAT prediction, and the most any MER underpredicts FIAT TPS thickness is 18.7%. This work focuses on the development of these MERs, the resulting equations, model limitations, and model accuracy.
DeWeber, Jefferson Tyrell; Wagner, Tyler
2015-01-01
The Brook Trout Salvelinus fontinalis is an important species of conservation concern in the eastern USA. We developed a model to predict Brook Trout population status within individual stream reaches throughout the species’ native range in the eastern USA. We utilized hierarchical logistic regression with Bayesian estimation to predict Brook Trout occurrence probability, and we allowed slopes and intercepts to vary among ecological drainage units (EDUs). Model performance was similar for 7,327 training samples and 1,832 validation samples based on the area under the receiver operating curve (∼0.78) and Cohen's kappa statistic (0.44). Predicted water temperature had a strong negative effect on Brook Trout occurrence probability at the stream reach scale and was also negatively associated with the EDU average probability of Brook Trout occurrence (i.e., EDU-specific intercepts). The effect of soil permeability was positive but decreased as EDU mean soil permeability increased. Brook Trout were less likely to occur in stream reaches surrounded by agricultural or developed land cover, and an interaction suggested that agricultural land cover also resulted in an increased sensitivity to water temperature. Our model provides a further understanding of how Brook Trout are shaped by habitat characteristics in the region and yields maps of stream-reach-scale predictions, which together can be used to support ongoing conservation and management efforts. These decision support tools can be used to identify the extent of potentially suitable habitat, estimate historic habitat losses, and prioritize conservation efforts by selecting suitable stream reaches for a given action. Future work could extend the model to account for additional landscape or habitat characteristics, include biotic interactions, or estimate potential Brook Trout responses to climate and land use changes.
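In generic notation (not the exact covariates or priors of the fitted model), the hierarchical structure described above can be written as

\[ \mathrm{logit}(p_{ij}) = \beta_{0j} + \beta_{1j}\,\mathrm{Temp}_{ij} + \beta_{2j}\,\mathrm{Soil}_{ij} + \cdots, \qquad \beta_{kj} \sim \mathrm{Normal}\!\left(\mu_k + \gamma_k \bar{x}_{j},\ \sigma_k^2\right), \]

where \(p_{ij}\) is the probability that Brook Trout occupy stream reach \(i\) in ecological drainage unit \(j\), reach-scale slopes and intercepts vary among EDUs, and EDU-level summaries \(\bar{x}_{j}\) (e.g., mean predicted water temperature or mean soil permeability) act as predictors of those varying coefficients.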
Automated Estimation Of Software-Development Costs
NASA Technical Reports Server (NTRS)
Roush, George B.; Reini, William
1993-01-01
COSTMODL is an automated software-development estimation tool. Yields significant reduction in risk of cost overruns and failed projects. Accepts description of software product to be developed and computes estimates of effort required to produce it, calendar schedule required, and distribution of effort and staffing as function of defined set of development life-cycle phases. Written for IBM PC(R)-compatible computers.
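As a sketch of what such a tool computes, with placeholder coefficients and phase splits that are not COSTMODL's actual relationships:

```python
def estimate_project(ksloc: float, effort_coeff: float = 3.0,
                     size_exp: float = 1.05) -> dict:
    """Illustrative parametric estimate (placeholder constants): effort in staff-months
    from size, calendar schedule from effort, and an assumed phase breakdown."""
    effort = effort_coeff * ksloc ** size_exp            # staff-months
    schedule = 2.5 * effort ** 0.35                      # calendar months
    phases = {"design": 0.25, "code": 0.40, "test": 0.35}
    return {"effort_sm": effort, "schedule_mo": schedule,
            "by_phase": {p: f * effort for p, f in phases.items()}}

print(estimate_project(50.0))
```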
Creel survey sampling designs for estimating effort in short-duration Chinook salmon fisheries
McCormick, Joshua L.; Quist, Michael C.; Schill, Daniel J.
2013-01-01
Chinook Salmon Oncorhynchus tshawytscha sport fisheries in the Columbia River basin are commonly monitored using roving creel survey designs and require precise, unbiased catch estimates. The objective of this study was to examine the relative bias and precision of total catch estimates using various sampling designs to estimate angling effort under the assumption that mean catch rate was known. We obtained information on angling populations based on direct visual observations of portions of Chinook Salmon fisheries in three Idaho river systems over a 23-d period. Based on the angling population, Monte Carlo simulations were used to evaluate the properties of effort and catch estimates for each sampling design. All sampling designs evaluated were relatively unbiased. Systematic random sampling (SYS) resulted in the most precise estimates. The SYS and simple random sampling designs had mean square error (MSE) estimates that were generally half of those observed with cluster sampling designs. The SYS design was more efficient (i.e., higher accuracy per unit cost) than a two-cluster design. Increasing the number of clusters available for sampling within a day decreased the MSE of estimates of daily angling effort, but the MSE of total catch estimates was variable depending on the fishery. The results of our simulations provide guidelines on the relative influence of sample sizes and sampling designs on parameters of interest in short-duration Chinook Salmon fisheries.
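A condensed sketch of the kind of Monte Carlo comparison described above (hypothetical instantaneous angler counts over a 12-hour fishing day; daily effort is expanded from four sampled counts chosen by simple random or systematic selection):

```python
import numpy as np

rng = np.random.default_rng(42)
hours, n_counts, n_sim = 12, 4, 10_000
# Hypothetical true angler counts for each hour of the day (angler-hours).
true_counts = np.array([2, 5, 9, 14, 18, 20, 19, 15, 12, 8, 5, 3], dtype=float)
true_effort = true_counts.sum()

def expand(idx):
    return true_counts[idx].mean() * hours   # expanded daily effort estimate

srs = np.array([expand(rng.choice(hours, n_counts, replace=False))
                for _ in range(n_sim)])
step = hours // n_counts
syst = np.array([expand(rng.integers(0, step) + np.arange(n_counts) * step)
                 for _ in range(n_sim)])

for name, est in [("simple random", srs), ("systematic", syst)]:
    print(name, "bias:", est.mean() - true_effort,
          "MSE:", ((est - true_effort) ** 2).mean())
```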
A Research Agenda for Helminth Diseases of Humans: Modelling for Control and Elimination
Basáñez, María-Gloria; McCarthy, James S.; French, Michael D.; Yang, Guo-Jing; Walker, Martin; Gambhir, Manoj; Prichard, Roger K.; Churcher, Thomas S.
2012-01-01
Mathematical modelling of helminth infections has the potential to inform policy and guide research for the control and elimination of human helminthiases. However, this potential, unlike in other parasitic and infectious diseases, has yet to be realised. To place contemporary efforts in a historical context, a summary of the development of mathematical models for helminthiases is presented. These efforts are discussed according to the role that models can play in furthering our understanding of parasite population biology and transmission dynamics, and the effect on such dynamics of control interventions, as well as in enabling estimation of directly unobservable parameters, exploration of transmission breakpoints, and investigation of evolutionary outcomes of control. The Disease Reference Group on Helminth Infections (DRG4), established in 2009 by the Special Programme for Research and Training in Tropical Diseases (TDR), was given the mandate to review helminthiases research and identify research priorities and gaps. A research and development agenda for helminthiasis modelling is proposed based on identified gaps that need to be addressed for models to become useful decision tools that can support research and control operations effectively. This agenda includes the use of models to estimate the impact of large-scale interventions on infection incidence; the design of sampling protocols for the monitoring and evaluation of integrated control programmes; the modelling of co-infections; the investigation of the dynamical relationship between infection and morbidity indicators; the improvement of analytical methods for the quantification of anthelmintic efficacy and resistance; the determination of programme endpoints; the linking of dynamical helminth models with helminth geostatistical mapping; and the investigation of the impact of climate change on human helminthiases. It is concluded that modelling should be embedded in helminth research, and in the planning, evaluation, and surveillance of interventions from the outset. Modellers should be essential members of interdisciplinary teams, propitiating a continuous dialogue with end users and stakeholders to reflect public health needs in the terrain, discuss the scope and limitations of models, and update biological assumptions and model outputs regularly. It is highlighted that to reach these goals, a collaborative framework must be developed for the collation, annotation, and sharing of databases from large-scale anthelmintic control programmes, and that helminth modellers should join efforts to tackle key questions in helminth epidemiology and control through the sharing of such databases, and by using diverse, yet complementary, modelling approaches. PMID:22545162
Lifetime earnings for physicians across specialties.
Leigh, J Paul; Tancredi, Daniel; Jerant, Anthony; Romano, Patrick S; Kravitz, Richard L
2012-12-01
Earlier studies estimated annual income differences across specialties, but lifetime income may be more relevant given physicians' long-term commitments to specialties. Annual income and work hours data were collected from 6381 physicians in the nationally representative 2004-2005 Community Tracking Study. Data regarding years of residency were collected from AMA FREIDA. Present value models were constructed assuming 3% discount rates. Estimates were adjusted for demographic and market covariates. Sensitivity analyses included 4 alternative models involving work hours, retirement, exogenous variables, and 1% discount rate. Estimates were generated for 4 broad specialty categories (Primary Care, Surgery, Internal Medicine and Pediatric Subspecialties, and Other), and for 41 specific specialties. The estimates of lifetime earnings for the broad categories of Surgery, Internal Medicine and Pediatric Subspecialties, and Other specialties were $1,587,722, $1,099,655, and $761,402 more than for Primary Care. For the 41 specific specialties, the top 3 (with family medicine as reference) were neurological surgery ($2,880,601), medical oncology ($2,772,665), and radiation oncology ($2,659,657). The estimates from models with varying rates of retirement and including only exogenous variables were similar to those in the preferred model. The 1% discount model generated estimates that were roughly 150% larger than the 3% model. There was considerable variation in the lifetime earnings across physician specialties. After accounting for varying residency years and discounting future earnings, primary care specialties earned roughly $1-3 million less than other specialties. Earnings' differences across specialties may undermine health reform efforts to control costs and ensure adequate numbers of primary care physicians.
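The present-value comparison can be illustrated with a small sketch (3% discount rate as in the preferred model; the incomes, residency lengths, and career length below are hypothetical placeholders, not the study's estimates):

```python
def lifetime_pv(attending_income: float, residency_years: int,
                resident_salary: float = 50_000.0, career_years: int = 40,
                rate: float = 0.03) -> float:
    """Present value of earnings from the start of residency through career_years,
    discounted at `rate`; all dollar figures are placeholders."""
    pv = 0.0
    for year in range(career_years):
        income = resident_salary if year < residency_years else attending_income
        pv += income / (1 + rate) ** year
    return pv

# Hypothetical: 7-year residency at $400k/yr vs. 3-year residency at $180k/yr
print(lifetime_pv(400_000, 7) - lifetime_pv(180_000, 3))
```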
Method for six-legged robot stepping on obstacles by indirect force estimation
NASA Astrophysics Data System (ADS)
Xu, Yilin; Gao, Feng; Pan, Yang; Chai, Xun
2016-07-01
Adaptive gaits for legged robots often require force sensors installed on the foot-tips; however, impact, temperature, or humidity can affect or even damage those sensors. Efforts have been made to realize indirect force estimation on legged robots whose leg structures are based on planar mechanisms. Robot Octopus III is a six-legged robot using spatial parallel mechanism (UP-2UPS) legs. This paper proposes a novel method to realize indirect force estimation on a walking robot based on a spatial parallel mechanism. The direct kinematics model and the inverse kinematics model are established, the force Jacobian matrix is derived from the kinematics model, and thus the indirect force estimation model is established. The relation between the output torques of the three motors installed on one leg and the external force exerted on the foot tip is then described. Furthermore, an adaptive tripod static gait is designed: the robot alters its leg trajectory to step on obstacles by using the proposed adaptive gait. Both the indirect force estimation model and the adaptive gait are implemented and optimized in a real-time control system. One experiment validates the indirect force estimation model, and the adaptive gait is tested in another. Experimental results show that the robot can successfully step on a 0.2 m-high obstacle. The method thus allows the six-legged robot with spatial parallel mechanism legs to overcome obstacles without installing electric force sensors in the harsh environment at the robot's foot tips.
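The core of indirect force estimation can be illustrated with a generic quasi-static sketch in Python (not the UP-2UPS kinematics of Octopus III): joint torques relate to the foot-tip force through the force Jacobian, tau = J(q)^T F, so the contact force can be recovered from measured motor torques. The Jacobian and torque values below are hypothetical.

    import numpy as np

    # Generic sketch of indirect force estimation via a force Jacobian:
    # in quasi-static conditions tau = J(q).T @ F, so F can be recovered
    # from motor torques without a foot-mounted force sensor.
    def estimate_foot_force(jacobian, motor_torques):
        """Solve J^T F = tau for the external force F at the foot tip."""
        return np.linalg.solve(jacobian.T, motor_torques)

    J = np.array([[0.30, 0.00, 0.10],   # hypothetical Jacobian at the current pose
                  [0.00, 0.25, 0.05],
                  [0.05, 0.05, 0.20]])
    tau = np.array([3.0, 2.5, 6.0])     # measured motor torques, N*m
    print(estimate_foot_force(J, tau))  # estimated contact force components, N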
Evaluation of a methodology for model identification in the time domain
NASA Technical Reports Server (NTRS)
Beck, R. T.; Beck, J. L.
1988-01-01
A model identification methodology for structural dynamics has been applied to simulated vibrational data as a first step in evaluating its accuracy. The evaluation has taken into account a wide variety of factors which affect the accuracy of the procedure. The effects of each of these factors were observed in both the response time histories and the estimates of the parameters of the model by comparing them with the exact values of the system. Each factor was varied independently but combinations of these have also been considered in an effort to simulate real situations. The results of the tests have shown that for the chain model, the procedure yields robust estimates of the stiffness parameters under the conditions studied whenever uniqueness is ensured. When inaccuracies occur in the results, they are intimately related to non-uniqueness conditions inherent in the inverse problem and not to shortcomings in the methodology.
Williams, Christopher; Dugger, Bruce D.; Brasher, Michael G.; Coluccy, John M.; Cramer, Dane M.; Eadie, John M.; Gray, Matthew J.; Hagy, Heath M.; Livolsi, Mark; McWilliams, Scott R.; Petrie, Matthew; Soulliere, Gregory J.; Tirpak, John M.; Webb, Elisabeth B.
2014-01-01
Population-based habitat conservation planning for migrating and wintering waterfowl in North America is carried out by habitat Joint Venture (JV) initiatives and is based on the premise that food can limit demography (i.e. food limitation hypothesis). Consequently, planners use bioenergetic models to estimate food (energy) availability and population-level energy demands at appropriate spatial and temporal scales, and translate these values into regional habitat objectives. While simple in principle, there are both empirical and theoretical challenges associated with calculating energy supply and demand including: 1) estimating food availability, 2) estimating the energy content of specific foods, 3) extrapolating site-specific estimates of food availability to landscapes for focal species, 4) applicability of estimates from a single species to other species, 5) estimating resting metabolic rate, 6) estimating cost of daily behaviours, and 7) estimating costs of thermoregulation or tissue synthesis. Most models being used are daily ration models (DRMs) whose set of simplifying assumptions are well established and whose use is widely accepted and feasible given the empirical data available to populate such models. However, DRMs do not link habitat objectives to metrics of ultimate ecological importance such as individual body condition or survival, and largely only consider food-producing habitats. Agent-based models (ABMs) provide a possible alternative for creating more biologically realistic models under some conditions; however, ABMs require different types of empirical inputs, many of which have yet to be estimated for key North American waterfowl. Decisions about how JVs can best proceed with habitat conservation would benefit from the use of sensitivity analyses that could identify the empirical and theoretical uncertainties that have the greatest influence on efforts to estimate habitat carrying capacity. Development of ABMs at restricted, yet biologically relevant spatial scales, followed by comparisons of their outputs to those generated from more simplistic, deterministic models can provide a means of assessing degrees of dissimilarity in how alternative models describe desired landscape conditions for migrating and wintering waterfowl.
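As a minimal illustration of the daily-ration-model logic described above, the sketch below converts a population energy demand into a food-producing habitat objective; all numbers are illustrative placeholders rather than JV planning values.

    # Minimal daily-ration-model style calculation (illustrative numbers only):
    # translate population energy demand into a food-producing habitat objective.
    def habitat_objective_ha(bird_days, daily_energy_kcal,
                             food_kg_per_ha, energy_kcal_per_kg):
        """Hectares needed so food energy supply meets population energy demand."""
        demand_kcal = bird_days * daily_energy_kcal                 # population demand
        supply_kcal_per_ha = food_kg_per_ha * energy_kcal_per_kg    # per-hectare supply
        return demand_kcal / supply_kcal_per_ha

    # e.g. 5 million duck-use-days at ~292 kcal/day, seeds at 50 kg/ha and 2,500 kcal/kg
    print(round(habitat_objective_ha(5_000_000, 292, 50, 2_500)))   # hectares required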
Estimating the diversity of dinosaurs
NASA Astrophysics Data System (ADS)
Wang, Steve C.; Dodson, Peter
2006-09-01
Despite current interest in estimating the diversity of fossil and extant groups, little effort has been devoted to estimating the diversity of dinosaurs. Here we estimate the diversity of nonavian dinosaurs at ≈1,850 genera, including those that remain to be discovered. With 527 genera currently described, at least 71% of dinosaur genera thus remain unknown. Although known diversity declined in the last stage of the Cretaceous, estimated diversity was steady, suggesting that dinosaurs as a whole were not in decline in the 10 million years before their ultimate extinction. We also show that known diversity is biased by the availability of fossiliferous rock outcrop. Finally, by using a logistic model, we predict that 75% of discoverable genera will be known within 60-100 years and 90% within 100-140 years. Because of nonrandom factors affecting the process of fossil discovery (which preclude the possibility of computing realistic confidence bounds), our estimate of diversity is likely to be a lower bound.
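A hedged sketch of the logistic-curve approach mentioned above: fitting a logistic model to cumulative described genera over time yields an asymptote (total discoverable diversity) and a projected date by which a given fraction will be known. The discovery counts below are invented for illustration and are not the authors' data.

    import numpy as np
    from scipy.optimize import curve_fit

    # Logistic discovery curve: cumulative described genera versus time.
    def logistic(t, K, r, t_mid):
        return K / (1.0 + np.exp(-r * (t - t_mid)))

    years = np.arange(1820, 2010, 10.0)
    described = np.array([1, 2, 4, 8, 14, 22, 35, 50, 70, 95,
                          120, 150, 190, 240, 300, 360, 420, 480, 527], dtype=float)

    (K, r, t_mid), _ = curve_fit(logistic, years, described,
                                 p0=[1850, 0.05, 2000], maxfev=10000)
    # Year at which 75% of the estimated total K has been described:
    t75 = t_mid + np.log(0.75 / 0.25) / r
    print(round(K), round(t75))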
Animated-simulation modeling facilitates clinical-process costing.
Zelman, W N; Glick, N D; Blackmore, C C
2001-09-01
Traditionally, the finance department has assumed responsibility for assessing process costs in healthcare organizations. To enhance process-improvement efforts, however, many healthcare providers need to include clinical staff in process cost analysis. Although clinical staff often use electronic spreadsheets to model the cost of specific processes, PC-based animated-simulation tools offer two major advantages over spreadsheets: they allow clinicians to interact more easily with the costing model so that it more closely represents the process being modeled, and they represent cost output as a cost range rather than as a single cost estimate, thereby providing more useful information for decision making.
Characterization of Inactive Rocket Bodies Via Non-Resolved Photometric Data
NASA Astrophysics Data System (ADS)
Linares, R.; Palmer, D.; Thompson, D.; Klimenko, A.
2014-09-01
Recent events in space, including the collision of Russia's Cosmos 2251 satellite with Iridium 33 and China's Feng Yun 1C anti-satellite demonstration, have stressed the capabilities of the Space Surveillance Network (SSN) and its ability to provide accurate and actionable impact probability estimates. The SSN has the unique challenge of tracking more than 18,000 resident space objects (RSOs) and providing critical collision avoidance warnings to military, NASA, and commercial systems. However, due to the large number of RSOs and the limited number of sensors available to track them, it is impossible to maintain persistent surveillance. Observation gaps result in large propagation intervals between measurements and close approaches. Coupled with nonlinear RSO dynamics, this results in difficulty in modeling the probability distribution functions (pdfs) of the RSO. In particular, low-Earth orbiting (LEO) satellites are heavily influenced by atmospheric drag, which is very difficult to model accurately. A number of atmospheric models exist which can be classified as either empirical or physics-based models. The current Air Force standard is the High Accuracy Satellite Drag Model (HASDM), which is an empirical model based on observation of calibration satellites. These satellite observations are used to determine model parameters based on their orbit determination solutions. Atmospheric orbits are perturbed by a number of factors including drag coefficient, attitude, and shape of the space object. The satellites used for the HASDM model calibration process are chosen because of their relatively simple shapes, to minimize errors introduced due to shape mis-modeling. Under this requirement, the number of calibration satellites that can be used for calibrating the atmospheric models is limited. Los Alamos National Laboratory (LANL) has established a research effort, called IMPACT (Integrated Modeling of Perturbations in Atmospheres for Conjunction Tracking), to improve impact assessment via improved physics-based modeling. As part of this effort, calibration satellite observations are used to dynamically calibrate the physics-based model and to improve its forecasting capability. The observations are collected from a variety of sources, including from LANL's own Raven-class optical telescope. This system collects both astrometric and photometric data on space objects. The photometric data will be used to estimate the space object's attitude and shape. Non-resolved photometric data have been studied by many as a mechanism for space object characterization. Photometry is the measurement of an object's flux or apparent brightness measured over a wavelength band. The temporal variation of photometric measurements is referred to as a photometric signature. The photometric optical signature of an object contains information about shape, attitude, size and material composition. This work focuses on the processing of the data collected with LANL's telescope in an effort to use photometric data to expand the number of space objects that can be used as calibration satellites. A nonlinear least-squares method is used to estimate the attitude and angular velocity of the space object; a number of real data examples are shown. Inactive space objects are used for the real data examples and good estimation results are shown.
NASA Technical Reports Server (NTRS)
Lugo, Rafael A.; Tolson, Robert H.; Schoenenberger, Mark
2013-01-01
As part of the Mars Science Laboratory (MSL) trajectory reconstruction effort at NASA Langley Research Center, free-flight aeroballistic experiments of instrumented MSL scale models were conducted at Aberdeen Proving Ground in Maryland. The models carried an inertial measurement unit (IMU) and a flush air data system (FADS) similar to the MSL Entry Atmospheric Data System (MEADS) that provided data types similar to those from the MSL entry. Multiple sources of redundant data were available, including tracking radar and on-board magnetometers. These experimental data enabled the testing and validation of the various tools and methodologies that will be used for MSL trajectory reconstruction. The aerodynamic parameters Mach number, angle of attack, and sideslip angle were estimated using minimum variance with a priori to combine the pressure data and pre-flight computational fluid dynamics (CFD) data. Both linear and non-linear pressure model terms were also estimated for each pressure transducer as a measure of the errors introduced by CFD and transducer calibration. Parameter uncertainties were estimated using a "consider parameters" approach.
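The "minimum variance with a priori" combination can be sketched generically as a linear-Gaussian update that blends measurements with a prior estimate. The toy two-parameter, four-port example below is not the MEADS reconstruction code; the sensitivity matrix, covariances, and prior values are hypothetical.

    import numpy as np

    # Generic minimum-variance estimate with a priori information:
    # measurements y = H x + noise (covariance R) are combined with a prior
    # estimate x0 (covariance P0), e.g. from pre-flight CFD.
    def min_variance_with_prior(H, y, R, x0, P0):
        """x_hat = (H^T R^-1 H + P0^-1)^-1 (H^T R^-1 y + P0^-1 x0)."""
        Ri, P0i = np.linalg.inv(R), np.linalg.inv(P0)
        info = H.T @ Ri @ H + P0i
        x_hat = np.linalg.solve(info, H.T @ Ri @ y + P0i @ x0)
        return x_hat, np.linalg.inv(info)     # estimate and posterior covariance

    H = np.array([[1.0, 0.2], [0.9, -0.1], [1.1, 0.4], [0.8, -0.3]])  # hypothetical sensitivities
    y = np.array([1.05, 0.88, 1.20, 0.74])                            # normalized pressures
    R = np.diag([0.01] * 4)                                           # measurement covariance
    x0, P0 = np.array([1.0, 0.0]), np.diag([0.25, 0.25])              # a priori estimate
    x_hat, P = min_variance_with_prior(H, y, R, x0, P0)
    print(x_hat)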
Soniat, Thomas M.; Klinck, John M.; Powell, Eric N.; Cooper, Nathan; Abdelguerfi, Mahdi; Hofmann, Eileen E.; Dahal, Janak; Tu, Shengru; Finigan, John; Eberline, Benjamin S.; La Peyre, Jerome F.; LaPeyre, Megan K.; Qaddoura, Fareed
2012-01-01
A numerical model is presented that defines a sustainability criterion as no net loss of shell, and calculates a sustainable harvest of seed (<75 mm) and sack or market oysters (≥75 mm). Stock assessments of the Primary State Seed Grounds conducted east of the Mississippi from 2009 to 2011 show a general trend toward decreasing abundance of sack and seed oysters. Retrospective simulations provide estimates of annual sustainable harvests. Comparisons of simulated sustainable harvests with actual harvests show a trend toward unsustainable harvests toward the end of the time series. Stock assessments combined with shell-neutral models can be used to estimate sustainable harvest and manage cultch through shell planting when actual harvest exceeds sustainable harvest. For exclusive restoration efforts (no fishing allowed), the model provides a metric for restoration success-namely, shell accretion. Oyster fisheries that remove shell versus reef restorations that promote shell accretion, although divergent in their goals, are convergent in their management; both require vigilant attention to shell budgets.
Sweeney, Lisa M.; Parker, Ann; Haber, Lynne T.; Tran, C. Lang; Kuempel, Eileen D.
2015-01-01
A biomathematical model was previously developed to describe the long-term clearance and retention of particles in the lungs of coal miners. The model structure was evaluated and parameters were estimated in two data sets, one from the United States and one from the United Kingdom. The three-compartment model structure consists of deposition of inhaled particles in the alveolar region, competing processes of either clearance from the alveolar region or translocation to the lung interstitial region, and very slow, irreversible sequestration of interstitialized material in the lung-associated lymph nodes. Point estimates of model parameter values were estimated separately for the two data sets. In the current effort, Bayesian population analysis using Markov chain Monte Carlo simulation was used to recalibrate the model while improving assessments of parameter variability and uncertainty. When model parameters were calibrated simultaneously to the two data sets, agreement between the derived parameters for the two groups was very good, and the central tendency values were similar to those derived from the deterministic approach. These findings are relevant to the proposed update of the ICRP human respiratory tract model with revisions to the alveolar-interstitial region based on this long-term particle clearance and retention model. PMID:23454101
Coggins, L.G.; Pine, William E.; Walters, C.J.; Martell, S.J.D.
2006-01-01
We present a new model to estimate capture probabilities, survival, abundance, and recruitment using traditional Jolly-Seber capture-recapture methods within a standard fisheries virtual population analysis framework. This approach compares the numbers of marked and unmarked fish at age captured in each year of sampling with predictions based on estimated vulnerabilities and abundance in a likelihood function. Recruitment to the earliest age at which fish can be tagged is estimated by using a virtual population analysis method to back-calculate the expected numbers of unmarked fish at risk of capture. By using information from both marked and unmarked animals in a standard fisheries age structure framework, this approach is well suited to the sparse data situations common in long-term capture-recapture programs with variable sampling effort. © Copyright by the American Fisheries Society 2006.
Fighting with Siblings and with Peers among Urban High School Students
Johnson, Renee M.; Duncan, Dustin T.; Rothman, Emily F.; Gilreath, Tamika D.; Hemenway, David; Molnar, Beth E.; Azrael, Deborah
2014-01-01
Understanding the determinants of fighting is important for prevention efforts. Unfortunately, there is little research on how sibling fighting is related to peer fighting. Therefore, the aim of this study was to evaluate the association between sibling fighting and peer fighting. Data are from the Boston Youth Survey 2008, a school-based sample of youth in Boston, MA. To estimate the association between sibling fighting and peer fighting we ran four multivariate regression models and estimated adjusted prevalence ratios and 95% confidence intervals. We fit generalized estimating equation models to account for the fact that students were clustered within schools. Controlling for school clustering, race/ethnicity, sex, school failure, substance use, and caregiver aggression, youth who fought with siblings were 2.49 times more likely to have reported fighting with peers. To the extent that we can confirm that sibling violence is associated with aggressive behavior, we should incorporate it into violence prevention programming. PMID:25287411
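A hedged sketch of this type of analysis using statsmodels (not the authors' code): a log-link GEE with an exchangeable correlation structure accounts for clustering of students within schools, and exponentiated coefficients are read as adjusted prevalence ratios. The data frame below is simulated.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Simulate clustered binary data: students nested in schools.
    rng = np.random.default_rng(0)
    n = 1000
    df = pd.DataFrame({
        "school": rng.integers(0, 20, n),          # cluster identifier
        "sibling_fight": rng.integers(0, 2, n),
        "male": rng.integers(0, 2, n),
    })
    linpred = -1.0 + 0.9 * df.sibling_fight + 0.3 * df.male
    df["peer_fight"] = rng.binomial(1, 1 / (1 + np.exp(-linpred)))

    # Log-link GEE (modified Poisson) with exchangeable within-school correlation.
    model = smf.gee("peer_fight ~ sibling_fight + male", groups="school", data=df,
                    family=sm.families.Poisson(),
                    cov_struct=sm.cov_struct.Exchangeable())
    result = model.fit()
    print(np.exp(result.params))                   # adjusted prevalence ratios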
Scope Complexity Options Risks Excursions (SCORE) Factor Mathematical Description.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gearhart, Jared Lee; Samberson, Jonell Nicole; Shettigar, Subhasini
The purpose of the Scope, Complexity, Options, Risks, Excursions (SCORE) model is to estimate the relative complexity of design variants of future warhead options, resulting in scores. SCORE factors extend this capability by providing estimates of complexity relative to a base system (i.e., all design options are normalized to one weapon system). First, a clearly defined set of scope elements for a warhead option is established. The complexity of each scope element is estimated by Subject Matter Experts (SMEs), including a level of uncertainty, relative to a specific reference system. When determining factors, complexity estimates for a scope element can be directly tied to the base system or chained together via comparable scope elements in a string of reference systems that ends with the base system. The SCORE analysis process is a growing multi-organizational Nuclear Security Enterprise (NSE) effort, under the management of the NA-12 led Enterprise Modeling and Analysis Consortium (EMAC). Historically, it has provided the data elicitation, integration, and computation needed to support the out-year Life Extension Program (LEP) cost estimates included in the Stockpile Stewardship Management Plan (SSMP).
Constrained Kalman Filtering Via Density Function Truncation for Turbofan Engine Health Estimation
NASA Technical Reports Server (NTRS)
Simon, Dan; Simon, Donald L.
2006-01-01
Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints (which may be based on physical considerations) are often neglected because they do not fit easily into the structure of the Kalman filter. This paper develops an analytic method of incorporating state variable inequality constraints in the Kalman filter. The resultant filter truncates the PDF (probability density function) of the Kalman filter estimate at the known constraints and then computes the constrained filter estimate as the mean of the truncated PDF. The incorporation of state variable constraints increases the computational effort of the filter but significantly improves its estimation accuracy. The improvement is demonstrated via simulation results obtained from a turbofan engine model. The turbofan engine model contains 3 state variables, 11 measurements, and 10 component health parameters. It is also shown that the truncated Kalman filter may be a more accurate way of incorporating inequality constraints than other constrained filters (e.g., the projection approach to constrained filtering).
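The PDF-truncation idea can be illustrated in one dimension (the paper handles the full multivariate state): the Gaussian PDF of the unconstrained Kalman estimate is truncated at the known bounds, and the mean of the truncated PDF is taken as the constrained estimate. The numbers below are illustrative only.

    from scipy.stats import truncnorm

    # One-dimensional sketch of the PDF-truncation constrained estimate.
    def constrained_estimate(mean, std, lower, upper):
        a, b = (lower - mean) / std, (upper - mean) / std   # standardized bounds
        return truncnorm.mean(a, b, loc=mean, scale=std)

    # e.g., an unconstrained health-parameter estimate of -0.03 with sigma 0.05,
    # when the parameter is physically constrained to [0, 1]
    print(constrained_estimate(-0.03, 0.05, 0.0, 1.0))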
Estimating Evapotranspiration with Land Data Assimilation Systems
NASA Technical Reports Server (NTRS)
Peters-Lidard, C. D.; Kumar, S. V.; Mocko, D. M.; Tian, Y.
2011-01-01
Advancements in both land surface models (LSM) and land surface data assimilation, especially over the last decade, have substantially advanced the ability of land data assimilation systems (LDAS) to estimate evapotranspiration (ET). This article provides a historical perspective on international LSM intercomparison efforts and the development of LDAS systems, both of which have improved LSM ET skill. In addition, an assessment of ET estimates for current LDAS systems is provided along with current research that demonstrates improvement in LSM ET estimates due to assimilating satellite-based soil moisture products. Using the Ensemble Kalman Filter in the Land Information System, we assimilate both NASA and Land Parameter Retrieval Model (LPRM) soil moisture products into the Noah LSM Version 3.2 with the North American LDAS phase 2 (NLDAS-2) forcing to mimic the NLDAS-2 configuration. Through comparisons with two global reference ET products, one based on interpolated flux tower data and one from a new satellite ET algorithm, over the NLDAS2 domain, we demonstrate improvement in ET estimates only when assimilating the LPRM soil moisture product.
Kass, Andrea E; Balantekin, Katherine N; Fitzsimmons-Craft, Ellen E; Jacobi, Corinna; Wilfley, Denise E; Taylor, C Barr
2017-03-01
Eating disorders (EDs) are serious health problems affecting college students. This article aimed to estimate the costs, in United States (US) dollars, of a stepped care model for online prevention and treatment among US college students to inform meaningful decisions regarding resource allocation and adoption of efficient care delivery models for EDs on college campuses. Using a payer perspective, we estimated the costs of (1) delivering an online guided self-help (GSH) intervention to individuals with EDs, including the costs of "stepping up" the proportion expected to "fail"; (2) delivering an online preventive intervention compared to a "wait and treat" approach to individuals at ED risk; and (3) applying the stepped care model across a population of 1,000 students, compared to standard care. Combining results for online GSH and preventive interventions, we estimated a stepped care model would cost less and result in fewer individuals needing in-person psychotherapy (after receiving less-intensive intervention) compared to standard care, assuming everyone in need received intervention. A stepped care model was estimated to achieve modest cost savings compared to standard care, but these estimates need to be tested with sensitivity analyses. Model assumptions highlight the complexities of cost calculations to inform resource allocation, and considerations for a disseminable delivery model are presented. Efforts are needed to systematically measure the costs and benefits of a stepped care model for EDs on college campuses, improve the precision and efficacy of ED interventions, and apply these calculations to non-US care systems with different cost structures. © 2017 Wiley Periodicals, Inc.
Overcoming bias in estimating the volume-outcome relationship.
Tsai, Alexander C; Votruba, Mark; Bridges, John F P; Cebul, Randall D
2006-02-01
To examine the effect of hospital volume on 30-day mortality for patients with congestive heart failure (CHF) using administrative and clinical data in conventional regression and instrumental variables (IV) estimation models. The primary data consisted of longitudinal information on comorbid conditions, vital signs, clinical status, and laboratory test results for 21,555 Medicare-insured patients aged 65 years and older hospitalized for CHF in northeast Ohio in 1991-1997. The patient was the primary unit of analysis. We fit a linear probability model to the data to assess the effects of hospital volume on patient mortality within 30 days of admission. Both administrative and clinical data elements were included for risk adjustment. Linear distances between patients and hospitals were used to construct the instrument, which was then used to assess the endogeneity of hospital volume. When only administrative data elements were included in the risk adjustment model, the estimated volume-outcome effect was statistically significant (p=.029) but small in magnitude. The estimate was markedly attenuated in magnitude and statistical significance when clinical data were added to the model as risk adjusters (p=.39). IV estimation shifted the estimate in a direction consistent with selective referral, but we were unable to reject the consistency of the linear probability estimates. Use of only administrative data for volume-outcomes research may generate spurious findings. The IV analysis further suggests that conventional estimates of the volume-outcome relationship may be contaminated by selective referral effects. Taken together, our results suggest that efforts to concentrate hospital-based CHF care in high-volume hospitals may not reduce mortality among elderly patients.
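A minimal two-stage least squares sketch of the instrumental-variables idea, with simulated data (this is not the study's dataset or exact specification): distance to hospital instruments for volume in a linear probability model of mortality.

    import numpy as np

    # Simulate data where volume is endogenous and distance is the instrument.
    rng = np.random.default_rng(1)
    n = 5000
    distance = rng.normal(size=n)                    # instrument (excluded from stage 2)
    severity = rng.normal(size=n)                    # observed risk adjuster
    volume = 0.8 * distance + 0.4 * severity + rng.normal(size=n)   # endogenous regressor
    mortality = 0.10 - 0.02 * volume + 0.05 * severity + rng.normal(scale=0.3, size=n)

    ones = np.ones(n)
    Z = np.column_stack([ones, distance, severity])  # instruments + exogenous variables
    X_hat_cols = None

    volume_hat = Z @ np.linalg.lstsq(Z, volume, rcond=None)[0]          # first stage
    X_hat = np.column_stack([ones, volume_hat, severity])
    beta_iv = np.linalg.lstsq(X_hat, mortality, rcond=None)[0]          # second stage
    print(beta_iv)   # intercept, volume effect, severity effect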
Inverting Image Data For Optical Testing And Alignment
NASA Technical Reports Server (NTRS)
Shao, Michael; Redding, David; Yu, Jeffrey W.; Dumont, Philip J.
1993-01-01
Data from images produced by a slightly incorrectly figured concave primary mirror in a telescope are processed into an estimate of the spherical aberration of the mirror by use of an algorithm that finds the nonlinear least-squares best fit between actual images and synthetic images produced by a multiparameter mathematical model of the telescope optical system. The estimated spherical aberration is, in turn, converted into an estimate of the deviation of the reflector surface from its nominal precise shape. The algorithm was devised as part of the effort to determine the error in the surface figure of the primary mirror of the Hubble Space Telescope so that a corrective lens could be designed. Modified versions of the algorithm can also be used to find optical errors in other components of the telescope or of other optical systems, for purposes of testing, alignment, and/or correction.
Active pneumatic vibration isolation system using negative stiffness structures for a vehicle seat
NASA Astrophysics Data System (ADS)
Danh, Le Thanh; Ahn, Kyoung Kwan
2014-02-01
In this paper, an active pneumatic vibration isolation system using negative stiffness structures (NSS) for a vehicle seat in low excitation frequencies is proposed, which is named the active system with NSS. Here, the negative stiffness structures (NSS) are used to minimize the vibratory attraction of a vehicle seat. Owing to the time-varying and nonlinear behavior of the proposed system, it is not easy to build an accurate dynamic model for model-based controller design. Thus, an adaptive intelligent backstepping controller (AIBC) is designed to manage the system operation for high-isolation effectiveness. In addition, an auxiliary control effort is also introduced to eliminate the effect of the unpredictable perturbations. Moreover, a radial basis function neural network (RBFNN) model is utilized to estimate the optimal gain of the auxiliary control effort. The final control input and the adaptive law for updating coefficients of the approximate series can be obtained step by step using a suitable Lyapunov function. Afterward, the isolation performance of the proposed system is assessed experimentally. In addition, the effectiveness of the designed controller for the proposed system is also compared with that of the traditional backstepping controller (BC). The experimental results show that the isolation effectiveness of the proposed system is better than that of the active system without NSS. Furthermore, the undesirable chattering phenomenon in the control effort is considerably reduced by the estimation mechanism. Finally, some concluding remarks are given at the end of the paper.
Nóbrega, M F; Kinas, P G; Lessa, R; Ferrandis, E
2015-02-01
The sampling of fish from the artisanal fleet operating with surface lines off north-eastern Brazil was carried out between 1998 and 2000. Generalized linear models (GLMs) were used to standardize mean abundance indices using catch and fishing effort data on dolphinfish Coryphaena hippurus and to identify abundance trends in time and space, using 1215 surface line deployments. A standard relative abundance index (catch per unit effort, CPUE) was estimated for the most frequent vessels used in the sets, employing factors and coefficients generated in the GLMs. According to the models, C. hippurus catches are affected by the operating characteristics and power of different fishing vessels. These differences highlight the need for standardization of catch and effort data for artisanal fisheries. The highest mean abundance values for C. hippurus were off the state of Rio Grande do Norte, with an increasing tendency in areas with greater depths and more distant from the coast, reaching maximal values in areas whose depths range from 200 to 500 m. The highest mean abundance values occurred between April and June. The higher estimated abundance of C. hippurus in this period off the state of Rio Grande do Norte and within the 200-500 m depth range may be related to a migration pattern of food sources, as its main prey, the flying fish Hirundichthys affinis, uses floating algae as refuge and to deposit its pelagic eggs. © 2015 The Fisheries Society of the British Isles.
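A hedged sketch of GLM-based CPUE standardization with statsmodels (not the authors' model or data): catch per set is modeled with vessel and month effects plus a log-effort offset, so exponentiated coefficients give standardized relative abundance indices. The data frame below is simulated.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Simulate sets with different vessels, months, and effort levels.
    rng = np.random.default_rng(2)
    n = 500
    df = pd.DataFrame({
        "vessel": rng.integers(0, 5, n),
        "month": rng.integers(1, 13, n),
        "hooks": rng.integers(50, 300, n),         # effort per set
    })
    mu = np.exp(-3.0 + 0.1 * df.vessel + 0.05 * np.cos(2 * np.pi * df.month / 12)
                + np.log(df.hooks))
    df["catch"] = rng.poisson(mu)

    # Poisson GLM with vessel and month factors; effort enters as an offset.
    model = smf.glm("catch ~ C(vessel) + C(month)", data=df,
                    family=sm.families.Poisson(),
                    offset=np.log(df["hooks"]))
    result = model.fit()
    print(result.params.filter(like="month"))      # standardized monthly index (log scale)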
A Modeling Approach to Global Land Surface Monitoring with Low Resolution Satellite Imaging
NASA Technical Reports Server (NTRS)
Hlavka, Christine A.; Dungan, Jennifer; Livingston, Gerry P.; Gore, Warren J. (Technical Monitor)
1998-01-01
The effects of changing land use/land cover on global climate and ecosystems due to greenhouse gas emissions and changing energy and nutrient exchange rates are being addressed by federal programs such as NASA's Mission to Planet Earth (MTPE) and by international efforts such as the International Geosphere-Biosphere Program (IGBP). The quantification of these effects depends on accurate estimates of the global extent of critical land cover types such as fire scars in tropical savannas and ponds in Arctic tundra. To address the requirement for accurate areal estimates, methods for producing regional to global maps with satellite imagery are being developed. The only practical way to produce maps over large regions of the globe is with data of coarse spatial resolution, such as Advanced Very High Resolution Radiometer (AVHRR) weather satellite imagery at 1.1 km resolution or European Remote-Sensing Satellite (ERS) radar imagery at 100 m resolution. The accuracy of pixel counts as areal estimates is in doubt, especially for highly fragmented cover types such as fire scars and ponds. Efforts to improve areal estimates from coarse resolution maps have involved regression of apparent area from coarse data versus that from fine resolution in sample areas, but it has proven difficult to acquire sufficient fine scale data to develop the regression. A method for computing accurate estimates from coarse resolution maps using little or no fine data is therefore needed.
NASA Astrophysics Data System (ADS)
Newman, A. J.; Sampson, K. M.; Wood, A. W.; Hopson, T. M.; Brekke, L. D.; Arnold, J.; Raff, D. A.; Clark, M. P.
2013-12-01
Skill in model-based hydrologic forecasting depends on the ability to estimate a watershed's initial moisture and energy conditions, to forecast future weather and climate inputs, and on the quality of the hydrologic model's representation of watershed processes. The impact of these factors on prediction skill varies regionally, seasonally, and by model. We are investigating these influences using a watershed simulation platform that spans the continental US (CONUS), encompassing a broad range of hydroclimatic variation, and that uses the current simulation models of National Weather Service streamflow forecasting operations. The first phase of this effort centered on the implementation and calibration of the SNOW-17 and Sacramento soil moisture accounting (SAC-SMA) based hydrologic modeling system for a range of watersheds. The base configuration includes 630 basins in the United States Geological Survey's Hydro-Climatic Data Network 2009 (HCDN-2009, Lins 2012) conterminous U.S. basin subset. Retrospective model forcings were derived from Daymet (http://daymet.ornl.gov/), and where available, a priori parameter estimates were based on or compared with the operational NWS model parameters. Model calibration was accomplished by several objective, automated strategies, including the shuffled complex evolution (SCE) optimization approach developed within the NWS in the early 1990s (Duan et al. 1993). This presentation describes outcomes from this effort, including insights about measuring simulation skill, and on relationships between simulation skill and model parameters, basin characteristics (climate, topography, vegetation, soils), and the quality of forcing inputs. References: Thornton, P.; Thornton, M.; Mayer, B.; Wilhelmi, N.; Wei, Y.; Devarakonda, R.; Cook, R. Daymet: Daily Surface Weather on a 1 km Grid for North America. 1980-2008; Oak Ridge National Laboratory Distributed Active Archive Center: Oak Ridge, TN, USA, 2012; Volume 10.
Keisu, Britt-Inger; Öhman, Ann; Enberg, Birgit
2018-03-01
Negative aspects, staff dissatisfaction and problems related to internal organisational factors of working in elderly care are well-known and documented. Much less is known about positive aspects of working in elderly care, and therefore, this study focuses on such positive factors in Swedish elderly care. We combined two theoretical models, the effort-reward imbalance model and the Transformational Leadership Style model. The aim was to estimate the potential associations between employee-perceived transformational leadership style of their managers, and employees' ratings of effort and reward within elderly care work. The article is based on questionnaires distributed at on-site visits to registered nurses, occupational therapists, physiotherapists (high-level education) and assistant nurses (low-level education) in nine Swedish elderly care facilities. In order to grasp the positive factors of work in elderly care, we focused on balance at work, rather than imbalance. We found a significant association between employees' effort-reward balance at work and a transformational leadership style among managers. An association was also found between employees' level of education and their assessments of the first-level managers. We conclude that the first-level manager is an important actor for achieving a good workplace within elderly care, since she/he influences employees' psychosocial working environment. We also conclude that there are differences and inequalities, in terms of well-being, effort and reward at the work place, between those with academic training and those without, in that the former group more often rated their first-level manager as exhibiting a transformational leadership style, which in turn is beneficial for their psychosocial work environment. Consequently, this (re)produces inequalities in terms of well-being, effort and reward among the employees at the work place. © 2017 Nordic College of Caring Science.
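For reference, the effort-reward imbalance literature commonly summarizes the two scales as a ratio corrected for unequal item numbers; the sketch below shows that standard calculation with hypothetical scores and item counts (the exact scales and item counts used in this study may differ).

    # Common effort-reward imbalance (ERI) ratio: effort / (reward * c),
    # where c corrects for unequal numbers of effort and reward items.
    def eri_ratio(effort_score, reward_score, n_effort_items, n_reward_items):
        c = n_effort_items / n_reward_items
        return effort_score / (reward_score * c)

    # e.g., a short ERI form with 3 effort and 7 reward items (hypothetical scores)
    print(round(eri_ratio(effort_score=10, reward_score=20,
                          n_effort_items=3, n_reward_items=7), 2))
    # values > 1 indicate high effort relative to reward (imbalance)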
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dillon, Michael B.; Kane, Staci R.
A nuclear explosion has the potential to injure or kill tens to hundreds of thousands (or more) of people through exposure to fallout (external gamma) radiation. Existing buildings can protect their occupants (reducing fallout radiation exposures) by placing material and distance between fallout particles and individuals indoors. Prior efforts have determined an initial set of building attributes suitable to reasonably assess a given building's protection against fallout radiation. The current work provides methods to determine the quantitative values for these attributes from (a) common architectural features and data and (b) buildings described using the Global Earthquake Model (GEM) taxonomy. These methods will be used to improve estimates of fallout protection for operational US Department of Defense (DoD) and US Department of Energy (DOE) consequence assessment models.
NASA Technical Reports Server (NTRS)
Hops, J. M.; Sherif, J. S.
1994-01-01
A great deal of effort is now being devoted to the study, analysis, prediction, and minimization of software maintenance expected cost, long before software is delivered to users or customers. It has been estimated that, on average, the effort spent on software maintenance is as costly as the effort spent on all other software costs. Software design methods should be the starting point to aid in alleviating the problems of software maintenance complexity and high costs. Two aspects of maintenance deserve attention: (1) protocols for locating and rectifying defects, and for ensuring that no new defects are introduced in the development phase of the software process; and (2) protocols for modification, enhancement, and upgrading. This article focuses primarily on the second aspect, the development of protocols to help increase the quality and reduce the costs associated with modifications, enhancements, and upgrades of existing software. This study developed parsimonious models and a relative complexity metric for complexity measurement of software that were used to rank the modules in the system relative to one another. Some success was achieved in using the models and the relative metric to identify maintenance-prone modules.
Li, Jian; Herr, Raphael M; Allen, Joanne; Stephens, Christine; Alpass, Fiona
2017-11-25
The objective of this study was to validate a short version of the Effort-Reward-Imbalance (ERI) questionnaire in the context of New Zealand among older full-time and part-time employees. Data were collected from 1694 adults aged 48-83 years (mean 60 years, 53% female) who reported being in full- or part-time paid employment in the 2010 wave of the New Zealand Health, Work and Retirement study. Scale reliability was evaluated by item-total correlations and Cronbach's alpha. Factorial validity was assessed using multi-group confirmatory factor analyses assessing nested models of configural, metric, scalar and strict invariance across full- and part-time employment groups. Logistic regressions estimated associations of effort-reward ratio and over-commitment with poor physical/mental health, and depressive symptoms. Internal consistency of ERI scales was high across employment groups: effort 0.78-0.76; reward 0.81-0.77, and over-commitment 0.83-0.80. The three-factor model displayed acceptable fit in the overall sample (χ²/df = 10.31; CFI = 0.95; TLI = 0.94; RMSEA = 0.075), and decrements in model fit indices provided evidence for strict invariance of the three-factor ERI model across full-time and part-time employment groups. High effort-reward ratio scores were consistently associated with poor mental health and depressive symptoms for both employment groups. High over-commitment was associated with poor mental health and depressive symptoms in both groups and also with poor physical health in the full-time employment group. The short ERI questionnaire appears to be a valid instrument to assess adverse psychosocial work characteristics in older full-time and part-time employees in New Zealand.
Engineering the Business of Defense Acquisition: An Analysis of Program Office Processes
2015-05-01
Lake Michigan, the sixth largest freshwater lake in the world by surface area, was utilized as a water body for assessment within a case study. Field data collected at 116 sampling sites throughout the lake in an intensive monitoring effort were utilized for evaluation of the di...
ERIC Educational Resources Information Center
Seybert, Jef
In an effort to estimate the economic impact of Johnson County Community College (JCCC) on the Kansas City Metropolitan Area for 1988-89, the Ryan-New Jersey model was used to examine both direct and indirect economic influences of the college. Direct economic impact was assessed by examining institutional expenditures in the metropolitan area;…
Limiting assumptions in molecular modeling: electrostatics.
Marshall, Garland R
2013-02-01
Molecular mechanics attempts to represent intermolecular interactions in terms of classical physics. Initial efforts assumed a point charge located at the atom center and coulombic interactions. It has been recognized over multiple decades that simply representing electrostatics with a charge on each atom fails to reproduce the electrostatic potential surrounding a molecule as estimated by quantum mechanics. Molecular orbitals are not spherically symmetrical, an implicit assumption of monopole electrostatics. This perspective reviews recent evidence that requires use of multipole electrostatics and polarizability in molecular modeling.
DeVries, R. J.; Hann, D. A.; Schramm, H.L.
2015-01-01
This study evaluated the effects of environmental parameters on the probability of capturing endangered pallid sturgeon (Scaphirhynchus albus) using trotlines in the lower Mississippi River. Pallid sturgeon were sampled by trotlines year round from 2008 to 2011. A logistic regression model indicated water temperature (T; P < 0.01) and depth (D; P = 0.03) had significant effects on capture probability (Y = −1.75 − 0.06T + 0.10D). Habitat type, surface current velocity, river stage, stage change and non-sturgeon bycatch were not significant predictors (P = 0.26–0.63). Although pallid sturgeon were caught throughout the year, the model predicted that sampling should focus on times when the water temperature is less than 12°C and in deeper water to maximize capture probability; these water temperature conditions commonly occur during November to March in the lower Mississippi River. Further, the significant effect of water temperature which varies widely over time, as well as water depth indicate that any efforts to use the catch rate to infer population trends will require the consideration of temperature and depth in standardized sampling efforts or adjustment of estimates.
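Applying the reported model, the linear predictor can be converted to a capture probability with the inverse-logit transform (assuming Y is on the logit scale and depth is in the study's units); the example values below are illustrative.

    import math

    # Capture probability from the fitted logistic regression reported above.
    def capture_probability(temp_c, depth):
        y = -1.75 - 0.06 * temp_c + 0.10 * depth   # linear predictor (logit scale)
        return 1.0 / (1.0 + math.exp(-y))

    # cold, deep water versus warm, shallow water
    print(round(capture_probability(10, 15), 3))   # higher predicted capture probability
    print(round(capture_probability(25, 5), 3))    # lower predicted capture probability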
On implementing maximum economic yield in commercial fisheries
Dichmont, C. M.; Pascoe, S.; Kompas, T.; Punt, A. E.; Deng, R.
2009-01-01
Economists have long argued that a fishery that maximizes its economic potential usually will also satisfy its conservation objectives. Recently, maximum economic yield (MEY) has been identified as a primary management objective for Australian fisheries and is under consideration elsewhere. However, first attempts at estimating MEY as an actual management target for a real fishery (rather than a conceptual or theoretical exercise) have highlighted some substantial complexities generally unconsidered by fisheries economists. Here, we highlight some of the main issues encountered in our experience and their implications for estimating and transitioning to MEY. Using a bioeconomic model of an Australian fishery for which MEY is the management target, we note that unconstrained optimization may result in effort trajectories that would not be acceptable to industry or managers. Different assumptions regarding appropriate constraints result in different outcomes, each of which may be considered a valid MEY. Similarly, alternative treatments of prices and costs may result in differing estimates of MEY and their associated effort trajectories. To develop an implementable management strategy in an adaptive management framework, a set of assumptions must be agreed among scientists, economists, and industry and managers, indicating that operationalizing MEY is not simply a matter of estimating the numbers but requires strong industry commitment and involvement. PMID:20018676
Cherry, S.; White, G.C.; Keating, K.A.; Haroldson, Mark A.; Schwartz, Charles C.
2007-01-01
Current management of the grizzly bear (Ursus arctos) population in Yellowstone National Park and surrounding areas requires annual estimation of the number of adult female bears with cubs-of-the-year. We examined the performance of nine estimators of population size via simulation. Data were simulated using two methods for different combinations of population size, sample size, and coefficient of variation of individual sighting probabilities. We show that the coefficient of variation does not, by itself, adequately describe the effects of capture heterogeneity, because two different distributions of capture probabilities can have the same coefficient of variation. All estimators produced biased estimates of population size with bias decreasing as effort increased. Based on the simulation results, we recommend the Chao estimator for model Mh be used to estimate the number of female bears with cubs-of-the-year; however, the estimator of Chao and Shen may also be useful depending on the goals of the research.
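For context, a common form of the Chao estimator for model Mh uses the counts of individuals sighted exactly once (f1) and exactly twice (f2); the sketch below applies the bias-corrected version to a hypothetical set of sighting frequencies (this mirrors the general estimator, not the authors' simulation code).

    from collections import Counter

    # Chao estimator for model Mh (heterogeneous sighting probabilities),
    # bias-corrected form based on singletons (f1) and doubletons (f2).
    def chao_mh(sighting_counts):
        freq = Counter(sighting_counts)          # frequency of sighting frequencies
        s_obs = len(sighting_counts)             # number of distinct females observed
        f1, f2 = freq.get(1, 0), freq.get(2, 0)
        return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))

    # hypothetical: times each distinct female-with-cubs was sighted in a season
    sightings = [1, 1, 1, 2, 2, 3, 1, 4, 2, 1, 1, 2]
    print(round(chao_mh(sightings), 1))          # estimated number of females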
Karanth, Krithi K; Gopalaswamy, Arjun M; DeFries, Ruth; Ballal, Natasha
2012-01-01
Mitigating crop and livestock loss to wildlife and improving compensation distribution are important for conservation efforts in landscapes where people and wildlife co-occur outside protected areas. The lack of rigorously collected spatial data poses a challenge to management efforts to minimize loss and mitigate conflicts. We surveyed 735 households from 347 villages in a 5,154 km² area surrounding Kanha Tiger Reserve in India. We modeled self-reported household crop and livestock loss as a function of agricultural, demographic and environmental factors, and mitigation measures. We also modeled self-reported compensation received by households as a function of demographic factors, conflict type, reporting to authorities, and wildlife species involved. Seventy-three percent of households reported crop loss and 33% livestock loss in the previous year, but less than 8% reported human injury or death. Crop loss was associated with greater number of cropping months per year and proximity to the park. Livestock loss was associated with grazing animals inside the park and proximity to the park. Among mitigation measures, only use of protective physical structures was associated with reduced livestock loss. Compensation distribution was more likely for tiger-related incidents, and households reporting loss and located in the buffer. Average estimated probability of crop loss was 0.93 and livestock loss was 0.60 for surveyed households. Estimated crop and livestock loss and compensation distribution were higher for households located inside the buffer. Our approach models conflict data to aid managers in identifying potential conflict hotspots and influential factors, and it spatially maps the risk probability of crop and livestock loss. This approach could help focus allocation of conservation efforts and funds directed at conflict prevention and mitigation where high densities of people and wildlife co-occur.
Spatial Distribution of Hydrologic Ecosystem Service Estimates: Comparing Two Models
NASA Astrophysics Data System (ADS)
Dennedy-Frank, P. J.; Ghile, Y.; Gorelick, S.; Logsdon, R. A.; Chaubey, I.; Ziv, G.
2014-12-01
We compare estimates of the spatial distribution of water quantity provided (annual water yield) from two ecohydrologic models: the widely-used Soil and Water Assessment Tool (SWAT) and the much simpler water models from the Integrated Valuation of Ecosystem Services and Tradeoffs (InVEST) toolbox. These two models differ significantly in terms of complexity, timescale of operation, effort, and data required for calibration, and so are often used in different management contexts. We compare two study sites in the US: the Wildcat Creek Watershed (2083 km2) in Indiana, a largely agricultural watershed in a cold aseasonal climate, and the Upper Upatoi Creek Watershed (876 km2) in Georgia, a mostly forested watershed in a temperate aseasonal climate. We evaluate (1) quantitative estimates of water yield to explore how well each model represents this process, and (2) ranked estimates of water yield to indicate how useful the models are for management purposes where other social and financial factors may play significant roles. The SWAT and InVEST models provide very similar estimates of the water yield of individual subbasins in the Wildcat Creek Watershed (Pearson r = 0.92, slope = 0.89), and a similar ranking of the relative water yield of those subbasins (Spearman r = 0.86). However, the two models provide relatively different estimates of the water yield of individual subbasins in the Upper Upatoi Watershed (Pearson r = 0.25, slope = 0.14), and very different ranking of the relative water yield of those subbasins (Spearman r = -0.10). The Upper Upatoi watershed has a significant baseflow contribution due to its sandy, well-drained soils. InVEST's simple seasonality terms, which assume no change in storage over the time of the model run, may not accurately estimate water yield processes when baseflow provides such a strong contribution. Our results suggest that InVEST users take care in situations where storage changes are significant.
NASA Astrophysics Data System (ADS)
Elshall, A. S.; Ye, M.; Niu, G. Y.; Barron-Gafford, G.
2016-12-01
Bayesian multimodel inference is increasingly being used in hydrology. Estimating Bayesian model evidence (BME) is of central importance in many Bayesian multimodel analyses such as Bayesian model averaging and model selection. BME is the overall probability of the model in reproducing the data, accounting for the trade-off between the goodness-of-fit and the model complexity. Yet estimating BME is challenging, especially for high-dimensional problems with complex sampling spaces. Estimating BME using Monte Carlo numerical methods is preferred, as these methods yield higher accuracy than semi-analytical solutions (e.g. Laplace approximations, BIC, KIC, etc.). However, numerical methods are prone to numerical demons arising from underflow and round-off errors. Although few studies have alluded to this issue, to our knowledge this is the first study that illustrates these numerical demons. We show that finite-precision arithmetic can impose a threshold on likelihood values and the Metropolis acceptance ratio, which results in trimming parameter regions (when the likelihood function falls below the smallest floating-point number that a computer can represent) and corrupting the empirical measures of the random states of the MCMC sampler (when using the log-likelihood function). We consider two of the most powerful numerical estimators of BME, namely the path sampling method of thermodynamic integration (TI) and the importance sampling method of steppingstone sampling (SS). We also consider the two most widely used numerical estimators, the prior sampling arithmetic mean (AM) and the posterior sampling harmonic mean (HM). We investigate the vulnerability of these four estimators to the numerical demons. Interestingly, the most biased estimator, namely the HM, turned out to be the least vulnerable. While it is generally assumed that AM is a bias-free estimator that will always approximate the true BME by investing in computational effort, we show that arithmetic underflow can hamper AM, resulting in severe underestimation of BME. TI turned out to be the most vulnerable, resulting in BME overestimation. Finally, we show how SS can be largely invariant to rounding errors, yielding the most accurate and computationally efficient results. These results are useful for Monte Carlo simulations that estimate Bayesian model evidence.
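The underflow problem and a standard remedy can be shown in a few lines: with many observations, individual likelihoods underflow to zero in double precision, so BME estimators such as the prior-sampling arithmetic mean should be evaluated in the log domain with the log-sum-exp trick. The log-likelihood values below are synthetic.

    import numpy as np

    # Stable log-domain evaluation of the prior-sampling (arithmetic mean) BME estimate.
    def log_bme_arithmetic_mean(log_likelihoods):
        """log of mean(exp(log_likelihoods)) computed via log-sum-exp."""
        ll = np.asarray(log_likelihoods)
        m = ll.max()                                # subtract the max before exponentiating
        return m + np.log(np.mean(np.exp(ll - m)))

    # hypothetical log-likelihoods from prior samples of a large-data problem
    log_like = np.random.default_rng(3).normal(-80_000, 50, size=10_000)
    print(np.exp(log_like).mean())                  # naive estimate underflows to 0.0
    print(log_bme_arithmetic_mean(log_like))        # stable log-domain estimate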
A financial planning model for estimating hospital debt capacity.
Hopkins, D S; Heath, D; Levin, P J
1982-01-01
A computer-based financial planning model was formulated to measure the impact of a major capital improvement project on the fiscal health of Stanford University Hospital. The model had to be responsive to many variables and easy to use, so as to allow for the testing of numerous alternatives. Special efforts were made to identify the key variables that needed to be presented in the model and to include all known links between capital investment, debt, and hospital operating expenses. Growth in the number of patient days of care was singled out as a major source of uncertainty that would have profound effects on the hospital's finances. Therefore this variable was subjected to special scrutiny in terms of efforts to gauge expected demographic trends and market forces. In addition, alternative base runs of the model were made under three distinct patient-demand assumptions. Use of the model enabled planners at the Stanford University Hospital (a) to determine that a proposed modernization plan was financially feasible under a reasonable (that is, not unduly optimistic) set of assumptions and (b) to examine the major sources of risk. Other than patient demand, these sources were found to be gross revenues per patient, operating costs, and future limitations on government reimbursement programs. When the likely financial consequences of these risks were estimated, both separately and in combination, it was determined that even if two or more assumptions took a somewhat more negative turn than was expected, the hospital would be able to offset adverse consequences by a relatively minor reduction in operating costs. PMID:7111658
Flint, Alan L.; Flint, Lorraine E.
2007-01-01
A regional-scale water-balance model was used to estimate recharge and runoff potential and support U.S. Geological Survey efforts to develop a better understanding of water availability for the Basin and Range carbonate-rock aquifer system (BARCAS) study in White Pine County, Nevada, and adjacent areas in Nevada and Utah. The water-balance model, or Basin Characterization Model (BCM), was used to estimate regional ground-water recharge for the 13 hydrographic areas in the study area. The BCM calculates recharge by using a distributed-parameter, water-balance method and monthly climatic boundary conditions. The BCM requires geographic information system coverages of soil, geology, and topographic information with monthly time-varying climatic conditions of air temperature and precipitation. Potential evapotranspiration, snow accumulation, and snowmelt are distributed spatially with process models. When combined with surface properties of soil-water storage and saturated hydraulic conductivity of bedrock and alluvium, the potential water available for in-place recharge and runoff is calculated at monthly time steps on a grid scale of 866 feet (270 meters). The BCM was used with monthly climatic inputs from 1970 to 2004, and results were averaged to provide an estimate of the average annual recharge for the BARCAS study area. The model estimates 526,000 acre-feet of potential in-place recharge and approximately 398,000 acre-feet of potential runoff. Assuming 15 percent of the runoff becomes recharge, the model estimates average annual ground-water recharge for the BARCAS area of about 586,000 acre-feet. When precipitation is extrapolated to the long-term climatic record (1895-2006), average annual recharge is estimated to be 530,000 acre-feet, or about 9 percent less than the recharge estimated for 1970-2004.
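For orientation, the recharge arithmetic reported above can be reproduced directly from the quantities in the abstract; the 15 percent runoff-to-recharge fraction is the assumption stated there.

# Average annual recharge for the BARCAS area, 1970-2004 (acre-feet per year).
in_place_recharge = 526_000     # potential in-place recharge
runoff = 398_000                # potential runoff
runoff_to_recharge = 0.15       # fraction of runoff assumed to become recharge

total_recharge = in_place_recharge + runoff_to_recharge * runoff
print(f"{total_recharge:,.0f} acre-feet/yr")   # about 586,000 after rounding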
The burden of typhoid fever in low- and middle-income countries: A meta-regression approach
Warren, Joshua L.; Crawford, Forrest W.; Weinberger, Daniel M.; Kürüm, Esra; Pak, Gi Deok; Marks, Florian; Pitzer, Virginia E.
2017-01-01
Background Upcoming vaccination efforts against typhoid fever require an assessment of the baseline burden of disease in countries at risk. There are no typhoid incidence data from most low- and middle-income countries (LMICs), so model-based estimates offer insights for decision-makers in the absence of readily available data. Methods We developed a mixed-effects model fit to data from 32 population-based studies of typhoid incidence in 22 locations in 14 countries. We tested the contribution of economic and environmental indices for predicting typhoid incidence using a stochastic search variable selection algorithm. We performed out-of-sample validation to assess the predictive performance of the model. Results We estimated that 17.8 million cases of typhoid fever occur each year in LMICs (95% credible interval: 6.9–48.4 million). Central Africa was predicted to experience the highest incidence of typhoid, followed by select countries in Central, South, and Southeast Asia. Incidence typically peaked in the 2–4 year old age group. Models incorporating widely available economic and environmental indicators were found to describe incidence better than null models. Conclusions Recent estimates of typhoid burden may under-estimate the number of cases and magnitude of uncertainty in typhoid incidence. Our analysis permits prediction of overall as well as age-specific incidence of typhoid fever in LMICs, and incorporates uncertainty around the model structure and estimates of the predictors. Future studies are needed to further validate and refine model predictions and better understand year-to-year variation in cases. PMID:28241011
Kouvonen, Anne; Kivimäki, Mika; Virtanen, Marianna; Heponiemi, Tarja; Elovainio, Marko; Pentti, Jaana; Linna, Anne; Vahtera, Jussi
2006-01-01
Background In occupational life, a mismatch between high expenditure of effort and receiving few rewards may promote the co-occurrence of lifestyle risk factors; however, there is insufficient evidence to support or refute this hypothesis. The aim of this study is to examine the extent to which the dimensions of the Effort-Reward Imbalance (ERI) model – effort, rewards and ERI – are associated with the co-occurrence of lifestyle risk factors. Methods Based on data from the Finnish Public Sector Study, cross-sectional analyses were performed for 28,894 women and 7233 men. ERI was conceptualized as a ratio of effort and rewards. To control for individual differences in response styles, such as a personal disposition to answer negatively to questionnaires, occupational and organizational-level ecological ERI scores were constructed in addition to individual-level ERI scores. Risk factors included current smoking, heavy drinking, body mass index ≥25 kg/m2, and physical inactivity. Multinomial logistic regression models were used to estimate the likelihood of having one risk factor, two risk factors, and three or four risk factors. The associations between ERI and single risk factors were explored using binary logistic regression models. Results After adjustment for age, socioeconomic position, marital status, and type of job contract, women and men with high ecological ERI were 40% more likely to have simultaneously ≥3 lifestyle risk factors (vs. 0 risk factors) compared with their counterparts with low ERI. When examined separately, both low ecological effort and low ecological rewards were also associated with an elevated prevalence of risk factor co-occurrence. The results obtained with the individual-level scores were in the same direction. The associations of ecological ERI with single risk factors were generally less marked than the associations with the co-occurrence of risk factors. Conclusion This study suggests that a high ratio of occupational efforts relative to rewards may be associated with an elevated risk of having multiple lifestyle risk factors. However, an unexpected association between low effort and a higher likelihood of risk factor co-occurrence as well as the absence of data on overcommitment (and thereby a lack of full test of the ERI model) warrant caution in regard to the extent to which the entire ERI model is supported by our evidence. PMID:16464262
Multi-Detection Events, Probability Density Functions, and Reduced Location Area
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eslinger, Paul W.; Schrom, Brian T.
2016-03-01
Several efforts have been made in the Comprehensive Nuclear-Test-Ban Treaty (CTBT) community to assess the benefits of combining detections of radionuclides to improve the location estimates available from atmospheric transport modeling (ATM) backtrack calculations. We present a Bayesian estimation approach rather than a simple dilution field of regard approach to allow xenon detections and non-detections to be combined mathematically. This system represents one possible probabilistic approach to radionuclide event formation. Application of this method to a recent interesting radionuclide event shows a substantial reduction in the location uncertainty of that event.
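A minimal sketch (my own illustration, not the system described by the authors) of how detections and non-detections can be combined mathematically: each station's observation multiplies a prior probability field over candidate source cells, with per-cell detection probabilities assumed to come from ATM backtrack calculations.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 100 x 100 grid of candidate source locations, uniform prior.
prior = np.full((100, 100), 1.0 / (100 * 100))

# p_detect[k][i, j]: probability that station k detects the radionuclide if the
# source sits in cell (i, j); in practice this would come from ATM backtrack runs.
p_detect = [rng.uniform(0.0, 1.0, size=(100, 100)) for _ in range(3)]
observed = [True, False, True]          # stations 1 and 3 detected, station 2 did not

posterior = prior.copy()
for pk, detected in zip(p_detect, observed):
    likelihood = pk if detected else (1.0 - pk)   # non-detections are informative too
    posterior *= likelihood
posterior /= posterior.sum()                      # renormalize to a probability field

# The highest-posterior-density cells define the reduced location area.
print("most probable cell:", np.unravel_index(posterior.argmax(), posterior.shape))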
A Comprehensive Model of the Near-Earth Magnetic Field. Phase 3
NASA Technical Reports Server (NTRS)
Sabaka, Terence J.; Olsen, Nils; Langel, Robert A.
2000-01-01
The near-Earth magnetic field is due to sources in Earth's core, ionosphere, magnetosphere, lithosphere, and from coupling currents between ionosphere and magnetosphere and between hemispheres. Traditionally, the main field (low degree internal field) and magnetospheric field have been modeled simultaneously, and fields from other sources modeled separately. Such a scheme, however, can introduce spurious features. A new model, designated CMP3 (Comprehensive Model: Phase 3), has been derived from quiet-time Magsat and POGO satellite measurements and observatory hourly-mean and annual-mean measurements as part of an effort to coestimate fields from all of these sources. This model represents a significant advancement in the treatment of the aforementioned field sources over previous attempts, and includes an accounting for main field influences on the magnetosphere, main field and solar activity influences on the ionosphere, seasonal influences on the coupling currents, a priori characterization of ionospheric and magnetospheric influence on Earth-induced fields, and an explicit parameterization and estimation of the lithospheric field. The result of this effort is a model whose fits to the data are generally superior to previous models and whose parameter states for the various constituent sources are very reasonable.
NASA Astrophysics Data System (ADS)
Nanteza, J.; Thomas, B. F.; Mukwaya, P. I.
2017-12-01
The general lack of knowledge about the current rates of water abstraction/use is a challenge to sustainable water resources management in many countries, including Uganda. Estimates of water abstraction/use rates over Uganda, currently available from the FAO, are not disaggregated according to source, making it difficult to understand how much is taken out of individual water stores and limiting effective management. Modelling efforts have disaggregated water use rates according to source (i.e. groundwater and surface water). However, in Sub-Saharan African countries these modelled use estimates are highly uncertain given the scale limitations in representing water use (i.e. point versus regional), which affects model calibration and validation. In this study, we utilize data from the water supply atlas project over Uganda to estimate current rates of groundwater abstraction across the country based on location, well type and other relevant information. GIS techniques are employed to demarcate areas served by each water source. These areas are combined with past population distributions and average daily water needed per person to estimate water abstraction/use through time. The results indicate an increase in groundwater use, and isolate regions prone to groundwater depletion where improved management is required for sustainable groundwater use.
Sedinger, James S.; Chelgren, Nathan; Lindberg, Mark S.; Obritchkewitch, Tim; Kirk, Morgan T.; Martin, Philip D.; Anderson, Betty A.; Ward, David H.
2002-01-01
We used capture-recapture methods to estimate adult survival rates for adult female Black Brant (Branta bernicla nigricans; hereafter “brant”) from three colonies in Alaska, two on the Yukon-Kuskokwim Delta, and one on Alaska's Arctic coast. Costs of migration and reproductive effort varied among those colonies, enabling us to examine variation in survival in relation to variation in these other variables. We used the Barker model in program MARK to estimate true annual survival for brant from the three colonies. Models allowing for spatial variation in survival were among the most parsimonious models but were indistinguishable from a model with no spatial variation. Point estimates of annual survival were slightly higher for brant from the Arctic (0.90 ± 0.036) than for brant from either Tutakoke River (0.85 ± 0.004) or Kokechik Bay (0.86 ± 0.011). Thus, our survival estimates do not support a hypothesis that the cost of longer migrations or harvest experienced by brant from the Arctic reduced their annual survival relative to brant from the Yukon-Kuskokwim Delta. Spatial variation in survival provides weak support for life-history theory because brant from the region with lower reproductive investment had slightly higher survival.
Inverse Modeling of Tropospheric Methane Constrained by 13C Isotope in Methane
NASA Astrophysics Data System (ADS)
Mikaloff Fletcher, S. E.; Tans, P. P.; Bruhwiler, L. M.
2001-12-01
Understanding the budget of methane is crucial to predicting climate change and managing Earth's carbon reservoirs. Methane is responsible for approximately 15% of the anthropogenic greenhouse forcing and has a large impact on the oxidative capacity of Earth's atmosphere due to its reaction with hydroxyl radical. At present, many of the sources and sinks of methane are poorly understood, due in part to the large spatial and temporal variability of the methane flux. Model calculations of methane mixing ratios using most process-based source estimates typically over-predict the inter-hemispheric gradient of atmospheric methane. Inverse models, which estimate trace gas budgets by using observations of atmospheric mixing ratios and transport models to estimate sources and sinks, have been used to incorporate features of the atmospheric observations into methane budgets. While inverse models of methane generally tend to find a decrease in northern hemisphere sources and an increase in southern hemisphere sources relative to process-based estimates, no inverse study has definitively associated the inter-hemispheric gradient difference with a specific source process or group of processes. In this presentation, observations of isotopic ratios of 13C in methane and isotopic signatures of methane source processes are used in conjunction with an inverse model of methane to further constrain the source estimates of methane. In order to investigate the advantages of incorporating 13C, the TM3 three-dimensional transport model was used. The methane and carbon dioxide measurements used are from a cooperative international effort, the Cooperative Air Sampling Network, led by the Climate Monitoring Diagnostics Laboratory (CMDL) at the National Oceanic and Atmospheric Administration (NOAA). Experiments using model calculations based on process-based source estimates show that the inter-hemispheric gradient of δ13CH4 is not reproduced by these source estimates, showing that the addition of observations of δ13CH4 should provide unique insight into the methane problem.
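To make the isotope constraint concrete, the sketch below solves a toy two-source mass balance: total emissions and a flux-weighted δ13C signature jointly determine the split between a 13C-depleted (microbial) and a 13C-enriched (fossil/pyrogenic) source category. All numbers are illustrative assumptions, not values from the study, and a real inversion would solve for many regional sources with a transport model.

import numpy as np

# Toy isotope mass balance: partition a total methane source between two
# process categories using their d13C signatures (illustrative values only).
E_total = 550.0          # Tg CH4 / yr, assumed total source
d_required = -53.0       # per mil, flux-weighted signature needed to match observations
d_biogenic = -60.0       # per mil, microbial sources (assumed)
d_fossil = -40.0         # per mil, fossil + pyrogenic sources (assumed)

# Solve:  E_bio + E_fos = E_total
#         d_biogenic*E_bio + d_fossil*E_fos = d_required*E_total
A = np.array([[1.0, 1.0], [d_biogenic, d_fossil]])
b = np.array([E_total, d_required * E_total])
E_bio, E_fos = np.linalg.solve(A, b)
print(E_bio, E_fos)      # ~357.5 and ~192.5 Tg/yr under these assumptions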
Korner-Nievergelt, Fränzi; Brinkmann, Robert; Niermann, Ivo; Behr, Oliver
2013-01-01
Environmental impacts of wind energy facilities increasingly cause concern, a central issue being bats and birds killed by rotor blades. Two approaches have been employed to assess collision rates: carcass searches and surveys of animals prone to collisions. Carcass searches can provide an estimate for the actual number of animals being killed but they offer little information on the relation between collision rates and, for example, weather parameters due to the time of death not being precisely known. In contrast, a density index of animals exposed to collision is sufficient to analyse the parameters influencing the collision rate. However, quantification of the collision rate from animal density indices (e.g. acoustic bat activity or bird migration traffic rates) remains difficult. We combine carcass search data with animal density indices in a mixture model to investigate collision rates. In a simulation study we show that the collision rates estimated by our model were at least as precise as conventional estimates based solely on carcass search data. Furthermore, if certain conditions are met, the model can be used to predict the collision rate from density indices alone, without data from carcass searches. This can reduce the time and effort required to estimate collision rates. We applied the model to bat carcass search data obtained at 30 wind turbines in 15 wind facilities in Germany. We used acoustic bat activity and wind speed as predictors for the collision rate. The model estimates correlated well with conventional estimators. Our model can be used to predict the average collision rate. It enables an analysis of the effect of parameters such as rotor diameter or turbine type on the collision rate. The model can also be used in turbine-specific curtailment algorithms that predict the collision rate and reduce this rate with a minimal loss of energy production. PMID:23844144
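As a schematic of the idea of combining carcass counts with an activity index (this is my simplification, not the authors' published mixture model), the sketch below treats carcass counts per turbine as Poisson with mean proportional to acoustic bat activity, folds searcher efficiency and carcass persistence into a single assumed detection probability, and recovers the per-pass collision rate by maximum likelihood.

import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)

activity = rng.gamma(shape=2.0, scale=50.0, size=30)   # acoustic bat passes per turbine
p_found = 0.4                                          # assumed prob. a carcass is found
true_rate = 0.02                                       # collisions per bat pass (unknown)
carcasses = rng.poisson(true_rate * activity * p_found)

def neg_loglik(rate):
    mu = rate * activity * p_found
    return -np.sum(carcasses * np.log(mu) - mu)        # Poisson log-likelihood (constant dropped)

fit = minimize_scalar(neg_loglik, bounds=(1e-6, 1.0), method="bounded")
print("estimated collisions per bat pass:", fit.x)
print("predicted collisions per turbine:", fit.x * activity.mean())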
Rüdt, Matthias; Gillet, Florian; Heege, Stefanie; Hitzler, Julian; Kalbfuss, Bernd; Guélat, Bertrand
2015-09-25
Application of model-based design is appealing to support the development of protein chromatography in the biopharmaceutical industry. However, the required efforts for parameter estimation are frequently perceived as time-consuming and expensive. In order to speed up this work, a new parameter estimation approach for modelling ion-exchange chromatography in linear conditions was developed. It aims at reducing the time and protein demand for the model calibration. The method combines the estimation of kinetic and thermodynamic parameters based on the simultaneous variation of the gradient slope and the residence time in a set of five linear gradient elutions. The parameters are estimated from a Yamamoto plot and a gradient-adjusted Van Deemter plot. The combined approach increases the information extracted per experiment compared to the individual methods. As a proof of concept, the combined approach was successfully applied for a monoclonal antibody on a cation-exchanger and for an Fc-fusion protein on an anion-exchange resin. The individual parameter estimations for the mAb confirmed that the new approach maintained the accuracy of the usual Yamamoto and Van Deemter plots. In the second case, offline size-exclusion chromatography was performed in order to estimate the thermodynamic parameters of an impurity (high molecular weight species) simultaneously with the main product. Finally, the parameters obtained from the combined approach were used in a lumped kinetic model to simulate the chromatography runs. The simulated chromatograms obtained for a wide range of gradient lengths and residence times showed only small deviations compared to the experimental data. Copyright © 2015 Elsevier B.V. All rights reserved.
Fraga, Rafael de; Stow, Adam J; Magnusson, William E; Lima, Albertina P
2014-01-01
Studies leading to decision-making for environmental licensing often fail to provide accurate estimates of diversity. Measures of snake diversity are regularly obtained to assess development impacts in the rainforests of the Amazon Basin, but this taxonomic group may be subject to poor detection probabilities. Recently, the Brazilian government tried to standardize sampling designs by the implementation of a system (RAPELD) to quantify biological diversity using spatially-standardized sampling units. Consistency in sampling design allows the detection probabilities to be compared among taxa, and sampling effort and associated cost to be evaluated. The cost effectiveness of detecting snakes has received no attention in Amazonia. Here we tested the effects of reducing sampling effort on estimates of species densities and assemblage composition. We identified snakes in seven plot systems, each standardised with 14 plots. The 250 m long centre line of each plot followed an altitudinal contour. Surveys were repeated four times in each plot and detection probabilities were estimated for the 41 species encountered. Reducing the number of observations, or the size of the sampling modules, caused significant loss of information on species densities and local patterns of variation in assemblage composition. We estimated the cost to find a snake as $120 U.S., but general linear models indicated the possibility of identifying differences in assemblage composition for half the overall survey costs. Decisions to reduce sampling effort depend on the importance of lost information to target issues, and may not be the preferred option if there is the potential for identifying individual snake species requiring specific conservation actions. However, in most studies of human disturbance on species assemblages, it is likely to be more cost-effective to focus on other groups of organisms with higher detection probabilities.
Error analysis of leaf area estimates made from allometric regression models
NASA Technical Reports Server (NTRS)
Feiveson, A. H.; Chhikara, R. S.
1986-01-01
Biological net productivity, measured in terms of the change in biomass with time, affects global productivity and the quality of life through biochemical and hydrological cycles and by its effect on the overall energy balance. Estimating leaf area for large ecosystems is one of the more important means of monitoring this productivity. For a particular forest plot, the leaf area is often estimated by a two-stage process. In the first stage, known as dimension analysis, a small number of trees are felled so that their areas can be measured as accurately as possible. These leaf areas are then related to non-destructive, easily-measured features such as bole diameter and tree height, by using a regression model. In the second stage, the non-destructive features are measured for all or for a sample of trees in the plots and then used as input into the regression model to estimate the total leaf area. Because both stages of the estimation process are subject to error, it is difficult to evaluate the accuracy of the final plot leaf area estimates. This paper illustrates how a complete error analysis can be made, using an example from a study made on aspen trees in northern Minnesota. The study was a joint effort by NASA and the University of California at Santa Barbara known as COVER (Characterization of Vegetation with Remote Sensing).
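A compact sketch of the two-stage procedure described above (my own illustration with made-up aspen-like numbers, not the study's data or its full error analysis): an allometric log-log regression is fitted on a small destructive sample and then applied to the surveyed trees, with a simple bootstrap over the felled trees giving a rough handle on the propagated error.

import numpy as np

rng = np.random.default_rng(3)

# Stage 1: destructive sample - measured leaf area versus bole diameter (toy values).
diam_felled = np.array([8.0, 10.0, 12.0, 14.0, 16.0, 18.0, 20.0, 24.0, 27.0, 30.0])   # cm
leaf_felled = 0.9 * diam_felled**1.8 * rng.lognormal(0.0, 0.1, diam_felled.size)      # m^2, noisy "measurements"

# Fit the allometric model log(A) = a + b*log(D).
b, a = np.polyfit(np.log(diam_felled), np.log(leaf_felled), 1)

# Stage 2: apply the regression to the non-destructive measurements of all plot trees.
diam_plot = rng.uniform(8.0, 30.0, size=200)
plot_leaf_area = np.sum(np.exp(a + b * np.log(diam_plot)))

# Rough error propagation for stage 1: bootstrap the felled trees and refit.
totals = []
for _ in range(1000):
    idx = rng.integers(0, diam_felled.size, diam_felled.size)
    bb, ab = np.polyfit(np.log(diam_felled[idx]), np.log(leaf_felled[idx]), 1)
    totals.append(np.sum(np.exp(ab + bb * np.log(diam_plot))))
print(f"plot leaf area: {plot_leaf_area:.0f} m^2, bootstrap SD: {np.std(totals):.0f} m^2")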
Martyr-Koller, R.C.; Kernkamp, H.W.J.; Van Dam, Anne A.; Mick van der Wegen,; Lucas, Lisa; Knowles, N.; Jaffe, B.; Fregoso, T.A.
2017-01-01
A linked modeling approach has been undertaken to understand the impacts of climate and infrastructure on aquatic ecology and water quality in the San Francisco Bay-Delta region. The Delft3D Flexible Mesh modeling suite is used in this effort for its 3D hydrodynamics, salinity, temperature and sediment dynamics, phytoplankton and water-quality coupling infrastructure, and linkage to a habitat suitability model. The hydrodynamic model component of the suite is D-Flow FM, a new 3D unstructured finite-volume model based on the Delft3D model. In this paper, D-Flow FM is applied to the San Francisco Bay-Delta to investigate tidal, seasonal and annual dynamics of water levels, river flows and salinity under historical environmental and infrastructural conditions. The model is driven by historical winds, tides, ocean salinity, and river flows, and includes federal, state, and local freshwater withdrawals, and regional gate and barrier operations. The model is calibrated over a 9-month period, and subsequently validated for water levels, flows, and 3D salinity dynamics over a 2-year period. Model performance was quantified using several model assessment metrics and visualized through target diagrams. These metrics indicate that the model accurately estimated water levels, flows, and salinity over wide-ranging tidal and fluvial conditions, and the model can be used to investigate detailed circulation and salinity patterns throughout the Bay-Delta. The hydrodynamics produced through this effort will be used to drive affiliated sediment, phytoplankton, and contaminant hindcast efforts and habitat suitability assessments for fish and bivalves. The modeling framework applied here will serve as a baseline to ultimately shed light on potential ecosystem change over the current century.
Effects of Ensemble Configuration on Estimates of Regional Climate Uncertainties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldenson, N.; Mauger, G.; Leung, L. R.
Internal variability in the climate system can contribute substantial uncertainty in climate projections, particularly at regional scales. Internal variability can be quantified using large ensembles of simulations that are identical but for perturbed initial conditions. Here we compare methods for quantifying internal variability. Our study region spans the west coast of North America, which is strongly influenced by El Niño and other large-scale dynamics through their contribution to large-scale internal variability. Using a statistical framework to simultaneously account for multiple sources of uncertainty, we find that internal variability can be quantified consistently using a large ensemble or an ensemble of opportunity that includes small ensembles from multiple models and climate scenarios. The latter also produce estimates of uncertainty due to model differences. We conclude that projection uncertainties are best assessed using small single-model ensembles from as many model-scenario pairings as computationally feasible, which has implications for ensemble design in large modeling efforts.
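A toy numerical illustration of the comparison described above (not the study's statistical framework): internal variability estimated from one large initial-condition ensemble versus the pooled within-ensemble variance of several small single-model ensembles, which additionally yield a spread across model-scenario means.

import numpy as np

rng = np.random.default_rng(4)

# Hypothetical 30-year regional temperature trends (deg C per decade).
# Large ensemble: one model, 40 members differing only in initial conditions.
large_ensemble = rng.normal(loc=0.25, scale=0.08, size=40)

# Ensemble of opportunity: 8 model-scenario pairings with 5 members each.
model_means = rng.normal(loc=0.25, scale=0.05, size=8)
small_ensembles = np.array([rng.normal(m, 0.08, size=5) for m in model_means])

internal_large = np.var(large_ensemble, ddof=1)
# Pool within-ensemble variance across the small ensembles (internal variability).
internal_small = np.mean(np.var(small_ensembles, axis=1, ddof=1))
# Spread of the ensemble means reflects model/scenario differences.
model_uncertainty = np.var(small_ensembles.mean(axis=1), ddof=1)

print(internal_large, internal_small, model_uncertainty)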
Shape Models of Asteroids as a Missing Input for Bulk Density Determinations
NASA Astrophysics Data System (ADS)
Hanuš, Josef
2015-07-01
To determine a meaningful bulk density of an asteroid, both accurate volume and mass estimates are necessary. The volume can be computed by scaling the size of the 3D shape model to fit the disk-resolved images or stellar occultation profiles, which are available in the literature or through collaborations. This work provides a list of asteroids, for which (i) there are already mass estimates with reported uncertainties better than 20% or their mass will be most likely determined in the future from Gaia astrometric observations, and (ii) their 3D shape models are currently unknown. Additional optical lightcurves are necessary to determine the convex shape models of these asteroids. The main aim of this article is to motivate the observers to obtain lightcurves of these asteroids, and thus contribute to their shape model determinations. Moreover, a web page https://asteroid-obs.oca.eu, which maintains an up-to-date list of these objects to assure efficiency and to avoid any overlapping efforts, was created.
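The bulk-density bookkeeping that motivates the mass-plus-volume requirement can be written out directly; the sketch below uses illustrative values (not from the article) and a simple first-order error propagation to show why both mass and size uncertainties at the 10-20% level matter.

import numpy as np

# Illustrative values for a hypothetical asteroid.
mass = 2.6e18            # kg
mass_rel_err = 0.15      # 15% mass uncertainty
diameter = 110e3         # m, volume-equivalent diameter from a scaled shape model
diam_rel_err = 0.05      # 5% size uncertainty

volume = (np.pi / 6.0) * diameter**3
density = mass / volume                       # kg/m^3

# First-order propagation: the relative volume error is 3x the diameter error,
# and the relative errors of mass and volume add in quadrature.
density_rel_err = np.sqrt(mass_rel_err**2 + (3.0 * diam_rel_err)**2)
print(density, density * density_rel_err)     # ~3700 kg/m^3 with ~790 kg/m^3 uncertainty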
Reconciling fisheries catch and ocean productivity
Stock, Charles A.; Asch, Rebecca G.; Cheung, William W. L.; Dunne, John P.; Friedland, Kevin D.; Lam, Vicky W. Y.; Sarmiento, Jorge L.; Watson, Reg A.
2017-01-01
Photosynthesis fuels marine food webs, yet differences in fish catch across globally distributed marine ecosystems far exceed differences in net primary production (NPP). We consider the hypothesis that ecosystem-level variations in pelagic and benthic energy flows from phytoplankton to fish, trophic transfer efficiencies, and fishing effort can quantitatively reconcile this contrast in an energetically consistent manner. To test this hypothesis, we enlist global fish catch data that include previously neglected contributions from small-scale fisheries, a synthesis of global fishing effort, and plankton food web energy flux estimates from a prototype high-resolution global earth system model (ESM). After removing a small number of lightly fished ecosystems, stark interregional differences in fish catch per unit area can be explained (r = 0.79) with an energy-based model that (i) considers dynamic interregional differences in benthic and pelagic energy pathways connecting phytoplankton and fish, (ii) depresses trophic transfer efficiencies in the tropics and, less critically, (iii) associates elevated trophic transfer efficiencies with benthic-predominant systems. Model catch estimates are generally within a factor of 2 of values spanning two orders of magnitude. Climate change projections show that the same macroecological patterns explaining dramatic regional catch differences in the contemporary ocean amplify catch trends, producing changes that may exceed 50% in some regions by the end of the 21st century under high-emissions scenarios. Models failing to resolve these trophodynamic patterns may significantly underestimate regional fisheries catch trends and hinder adaptation to climate change. PMID:28115722
Hess, Jeremy J.; Ebi, Kristie L.; Markandya, Anil; Balbus, John M.; Wilkinson, Paul; Haines, Andy; Chalabi, Zaid
2014-01-01
Background: Policy decisions regarding climate change mitigation are increasingly incorporating the beneficial and adverse health impacts of greenhouse gas emission reduction strategies. Studies of such co-benefits and co-harms involve modeling approaches requiring a range of analytic decisions that affect the model output. Objective: Our objective was to assess analytic decisions regarding model framework, structure, choice of parameters, and handling of uncertainty when modeling health co-benefits, and to make recommendations for improvements that could increase policy uptake. Methods: We describe the assumptions and analytic decisions underlying models of mitigation co-benefits, examining their effects on modeling outputs, and consider tools for quantifying uncertainty. Discussion: There is considerable variation in approaches to valuation metrics, discounting methods, uncertainty characterization and propagation, and assessment of low-probability/high-impact events. There is also variable inclusion of adverse impacts of mitigation policies, and limited extension of modeling domains to include implementation considerations. Going forward, co-benefits modeling efforts should be carried out in collaboration with policy makers; these efforts should include the full range of positive and negative impacts and critical uncertainties, as well as a range of discount rates, and should explicitly characterize uncertainty. We make recommendations to improve the rigor and consistency of modeling of health co-benefits. Conclusion: Modeling health co-benefits requires systematic consideration of the suitability of model assumptions, of what should be included and excluded from the model framework, and how uncertainty should be treated. Increased attention to these and other analytic decisions has the potential to increase the policy relevance and application of co-benefits modeling studies, potentially helping policy makers to maximize mitigation potential while simultaneously improving health. Citation: Remais JV, Hess JJ, Ebi KL, Markandya A, Balbus JM, Wilkinson P, Haines A, Chalabi Z. 2014. Estimating the health effects of greenhouse gas mitigation strategies: addressing parametric, model, and valuation challenges. Environ Health Perspect 122:447–455; http://dx.doi.org/10.1289/ehp.1306744 PMID:24583270
NASA Astrophysics Data System (ADS)
Basith, Abdul; Prakoso, Yudhono; Kongko, Widjo
2017-07-01
A tsunami model using high-resolution geometric data is indispensable in tsunami mitigation efforts, especially in tsunami-prone areas, because the quality of the geometric data is one of the factors that determine the accuracy of numerical tsunami modelling. Sadeng Port is new infrastructure on the southern coast of Java that could potentially be hit by a massive tsunami originating from the seismic gap. This paper discusses the validation and error estimation of a tsunami model built from high-resolution geometric data at Sadeng Port. The model is validated against the wave height of the 2006 Pangandaran tsunami recorded by the Sadeng tide gauge, and will subsequently be used for numerical tsunami modelling driven by earthquake-tsunami parameters derived from the seismic gap. Validation using Student's t-test shows that the modelled and observed tsunami heights at the Sadeng tide gauge are statistically equal at the 95% confidence level; the RMSE and NRMSE are 0.428 m and 22.12%, respectively, and the difference in tsunami wave travel time is 12 minutes.
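The error metrics quoted above are standard and easy to reproduce. In the sketch below the two arrays are placeholders for paired modelled and observed water levels, NRMSE is assumed to be RMSE normalized by the observed range (one common convention), and the paired t-test mirrors the comparison of modelled and observed heights.

import numpy as np
from scipy.stats import ttest_rel

observed = np.array([0.10, 0.45, 1.20, 1.85, 1.10, 0.60])   # m, tide-gauge record (placeholder)
modelled = np.array([0.15, 0.60, 1.05, 1.60, 1.30, 0.45])   # m, simulated heights (placeholder)

rmse = np.sqrt(np.mean((modelled - observed) ** 2))
nrmse = rmse / (observed.max() - observed.min())             # normalization by the observed range
t_stat, p_value = ttest_rel(modelled, observed)              # paired Student's t-test

print(f"RMSE = {rmse:.3f} m, NRMSE = {100 * nrmse:.1f} %, p = {p_value:.2f}")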
NASA Astrophysics Data System (ADS)
Luo, Ning; Illman, Walter A.
2016-09-01
Analyses are presented of long-term hydrographs perturbed by variable pumping/injection events in a confined aquifer at a municipal water-supply well field in the Region of Waterloo, Ontario (Canada). Such records are typically not considered for aquifer test analysis. Here, the water-level variations are fingerprinted to pumping/injection rate changes using the Theis model implemented in the WELLS code coupled with PEST. Analyses of these records yield a set of transmissivity (T) and storativity (S) estimates between each monitoring and production borehole. These individual estimates are found to poorly predict water-level variations at nearby monitoring boreholes not used in the calibration effort. On the other hand, the geometric means of the individual T and S estimates are similar to those obtained from previous pumping tests conducted at the same site and adequately predict water-level variations in other boreholes. The analyses reveal that long-term municipal water-level records are amenable to analyses using a simple analytical solution to estimate aquifer parameters. However, uniform parameters estimated with analytical solutions should be considered as first rough estimates. More accurate hydraulic parameters should be obtained by calibrating a three-dimensional numerical model that rigorously captures the complexities of the site with these data.
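The Theis solution used to fingerprint the hydrographs has a simple closed form, and the geometric-mean aggregation mentioned above is a one-liner. The sketch below is illustrative only: the pumping rate and parameter values are assumed, and the superposition of multiple rate changes that a real fingerprinting analysis requires is omitted.

import numpy as np
from scipy.special import exp1    # exponential integral E1, i.e. the Theis well function W(u)

def theis_drawdown(Q, T, S, r, t):
    """Drawdown (m) at radius r (m) and time t (s) for a constant pumping rate Q (m^3/s)."""
    u = r**2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

# One pumping event, evaluated after one day at an observation borehole 50 m away.
print(theis_drawdown(Q=0.02, T=5e-3, S=2e-4, r=50.0, t=86_400.0))

# Aggregating the individual between-borehole estimates with geometric means,
# as is common for (approximately log-normal) hydraulic parameters.
T_estimates = np.array([3e-4, 6e-4, 1.2e-3, 4e-4])    # m^2/s, illustrative
S_estimates = np.array([1e-4, 3e-4, 2e-4, 5e-4])      # dimensionless, illustrative
T_geo = np.exp(np.log(T_estimates).mean())
S_geo = np.exp(np.log(S_estimates).mean())
print(T_geo, S_geo)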
Fault Detection of Rotating Machinery using the Spectral Distribution Function
NASA Technical Reports Server (NTRS)
Davis, Sanford S.
1997-01-01
The spectral distribution function is introduced to characterize the process leading to faults in rotating machinery. It is shown to be a more robust indicator than conventional power spectral density estimates, but requires only slightly more computational effort. The method is illustrated with examples from seeded gearbox transmission faults and an analytical model of a defective bearing. Procedures are suggested for implementation in realistic environments.
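One common construction of a spectral distribution function is the normalized cumulative integral of the power spectral density, which is what the sketch below computes for a toy vibration signal; the specific signal, sampling rate, and Welch settings are assumptions, not the paper's procedure.

import numpy as np
from scipy.signal import welch

fs = 10_000.0                                  # Hz, sampling rate (assumed)
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(5)
# Toy gearbox-like signal: a mesh tone plus sidebands plus broadband noise.
signal = (np.sin(2 * np.pi * 1200 * t)
          + 0.3 * np.sin(2 * np.pi * 1150 * t)
          + 0.3 * np.sin(2 * np.pi * 1250 * t)
          + 0.5 * rng.standard_normal(t.size))

freqs, psd = welch(signal, fs=fs, nperseg=4096)

# Spectral distribution function: fraction of total power below each frequency.
sdf = np.cumsum(psd) / np.sum(psd)

# A developing fault shifts power between bands; tracking the SDF at a few fixed
# frequencies over time gives a smoother indicator than raw PSD peak amplitudes.
idx = np.searchsorted(freqs, 2000.0)
print("fraction of power below 2 kHz:", sdf[idx])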
Systems Analysis Initiated for All-Electric Aircraft Propulsion
NASA Technical Reports Server (NTRS)
Kohout, Lisa L.
2003-01-01
A multidisciplinary effort is underway at the NASA Glenn Research Center to develop concepts for revolutionary, nontraditional fuel cell power and propulsion systems for aircraft applications. There is a growing interest in the use of fuel cells as a power source for electric propulsion as well as an auxiliary power unit to substantially reduce or eliminate environmentally harmful emissions. A systems analysis effort was initiated to assess potential concepts in an effort to identify those configurations with the highest payoff potential. Among the technologies under consideration are advanced proton exchange membrane (PEM) and solid oxide fuel cells, alternative fuels and fuel processing, and fuel storage. Prior to this effort, the majority of fuel cell analysis done at Glenn was done for space applications. Because of this, a new suite of models was developed. These models include the hydrogen-air PEM fuel cell; internal reforming solid oxide fuel cell; balance-of-plant components (compressor, humidifier, separator, and heat exchangers); compressed gas, cryogenic, and liquid fuel storage tanks; and gas turbine/generator models for hybrid system applications. Initial mass, volume, and performance estimates of a variety of PEM systems operating on hydrogen and reformate have been completed for a baseline general aviation aircraft. Solid oxide/turbine hybrid systems are being analyzed. In conjunction with the analysis efforts, a joint effort has been initiated with Glenn's Computer Services Division to integrate fuel cell stack and component models with the visualization environment that supports the GRUVE lab, Glenn's virtual reality facility. The objective of this work is to provide an environment to assist engineers in the integration of fuel cell propulsion systems into aircraft and provide a better understanding of the interaction between system components and the resulting effect on the overall design and performance of the aircraft. Initially, three-dimensional computer-aided design (CAD) models of representative PEM fuel cell stack and components were developed and integrated into the virtual reality environment along with an Excel-based model used to calculate fuel cell electrical performance on the basis of cell dimensions (see the figure). CAD models of a representative general aviation aircraft were also developed and added to the environment. With the use of special headgear, users will be able to virtually manipulate the fuel cell's physical characteristics and its placement within the aircraft while receiving information on the resultant fuel cell output power and performance. As the systems analysis effort progresses, we will add more component models to the GRUVE environment to help us more fully understand the effect of various system configurations on the aircraft.
A new parameterization for integrated population models to document amphibian reintroductions
Duarte, Adam; Pearl, Christopher; Adams, Michael J.; Peterson, James T.
2017-01-01
Managers are increasingly implementing reintroduction programs as part of a global effort to alleviate amphibian declines. Given uncertainty in factors affecting populations and a need to make recurring decisions to achieve objectives, adaptive management is a useful component of these efforts. A major impediment to the estimation of demographic rates often used to parameterize and refine decision-support models is that life-stage-specific monitoring data are frequently sparse for amphibians. We developed a new parameterization for integrated population models to match the ecology of amphibians and capitalize on relatively inexpensive monitoring data to document amphibian reintroductions. We evaluate the capability of this model by fitting it to Oregon spotted frog (Rana pretiosa) monitoring data collected from 2007 to 2014 following their reintroduction within the Klamath Basin, Oregon, USA. The number of egg masses encountered and the estimated adult and metamorph abundances generally increased following reintroduction. We found that survival probability from egg to metamorph ranged from 0.01 in 2008 to 0.09 in 2009 and was not related to minimum spring temperatures, metamorph survival probability ranged from 0.13 in 2010–2011 to 0.86 in 2012–2013 and was positively related to mean monthly temperatures (logit-scale slope = 2.37), adult survival probability was lower for founders (0.40) than individuals recruited after reintroduction (0.56), and the mean number of egg masses per adult female was 0.74. Our study is the first to test hypotheses concerning Oregon spotted frog egg-to-metamorph and metamorph-to-adult transition probabilities in the wild and document their response at multiple life stages following reintroduction. Furthermore, we provide an example to illustrate how the structure of our integrated population model serves as a useful foundation for amphibian decision-support models within adaptive management programs. The integration of multiple, but related, data sets has an advantage of being able to estimate complex ecological relationships across multiple life stages, offering a modeling framework that accommodates uncertainty, enforces parsimony, and ensures all model parameters can be confronted with monitoring data.
Biomechanical evaluation of nursing tasks in a hospital setting.
Jang, R; Karwowski, W; Quesada, P M; Rodrick, D; Sherehiy, B; Cronin, S N; Layer, J K
2007-11-01
A field study was conducted to investigate spinal kinematics and loading in the nursing profession using objective and subjective measurements of selected nursing tasks observed in a hospital setting. Spinal loading was estimated using trunk motion dynamics measured by the lumbar motion monitor (LMM), and lower-back compressive and shear forces were estimated using the three-dimensional (3D) Static Strength Prediction Program. Subjective measures included the rating of perceived physical effort and the perceived risk of low back pain. A multiple logistic regression model, reported in the literature for predicting low back injury based on defined risk groups, was tested. The study concluded that the major risk factors for low back injury in nurses were the weight of patients handled, trunk moment, and trunk axial rotation. Activities that required prolonged exposure to awkward postures were perceived by nurses as demanding high physical effort. The study also concluded that self-reported perceived exertion could be used as a tool to identify nursing activities with a high risk of low-back injury.
Emissions from ships in the northwestern United States.
Corbett, James J
2002-03-15
Recent inventory efforts have focused on developing nonroad inventories for emissions modeling and policy insights. Characterizing these inventories geographically and explicitly treating the uncertainties that result from limited emissions testing, incomplete activity and usage data, and other important input parameters currently pose the largest methodological challenges. This paper presents a commercial marine vessel (CMV) emissions inventory for Washington and Oregon using detailed statistics regarding fuel consumption, vessel movements, and cargo volumes for the Columbia and Snake River systems. The inventory estimates emissions for oxides of nitrogen (NOx), particulate matter (PM), and oxides of sulfur (SOx). This analysis estimates that annual NOx emissions from marine transportation in the Columbia and Snake River systems in Washington and Oregon equal 6,900 t of NOx (as NO2) per year, 2.6 times greater than previous NOx inventories for this region. Statewide CMV NOx emissions are estimated to be 9,800 t of NOx per year. By relying on a "bottom-up" fuel consumption model that includes vessel characteristics and transit information, the river system inventory may be more accurate than previous estimates. This inventory provides modelers with bounded parametric inputs for sensitivity analysis in pollution modeling. The ability to parametrically model the uncertainty in commercial marine vessel inventories also will help policy-makers determine whether better policy decisions can be enabled through further vessel testing and improved inventory resolution.
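A "bottom-up" inventory of the kind described above reduces, for each vessel class and waterway segment, to activity multiplied by an emission factor; the sketch below uses placeholder values for a single tug-class vessel rather than the paper's fleet statistics.

# Illustrative bottom-up estimate for one vessel class (all values are placeholders).
installed_power_kw = 3000.0       # main engine power
load_factor = 0.7                 # average fraction of installed power used
hours_per_year = 4000.0           # annual operating hours on the river system
sfoc_g_per_kwh = 210.0            # specific fuel oil consumption
nox_ef_g_per_kwh = 12.0           # NOx emission factor (as NO2)

energy_kwh = installed_power_kw * load_factor * hours_per_year
fuel_tonnes = energy_kwh * sfoc_g_per_kwh / 1e6
nox_tonnes = energy_kwh * nox_ef_g_per_kwh / 1e6
print(f"fuel burned: {fuel_tonnes:.0f} t/yr, NOx emitted: {nox_tonnes:.0f} t/yr")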
Barbraud, C.; Nichols, J.D.; Hines, J.E.; Hafner, H.
2003-01-01
Coloniality has mainly been studied from an evolutionary perspective, but relatively few studies have developed methods for modelling colony dynamics. Changes in number of colonies over time provide a useful tool for predicting and evaluating the responses of colonial species to management and to environmental disturbance. Probabilistic Markov process models have been recently used to estimate colony site dynamics using presence-absence data when all colonies are detected in sampling efforts. Here, we define and develop two general approaches for the modelling and analysis of colony dynamics for sampling situations in which all colonies are, and are not, detected. For both approaches, we develop a general probabilistic model for the data and then constrain model parameters based on various hypotheses about colony dynamics. We use Akaike's Information Criterion (AIC) to assess the adequacy of the constrained models. The models are parameterised with conditional probabilities of local colony site extinction and colonization. Presence-absence data arising from Pollock's robust capture-recapture design provide the basis for obtaining unbiased estimates of extinction, colonization, and detection probabilities when not all colonies are detected. This second approach should be particularly useful in situations where detection probabilities are heterogeneous among colony sites. The general methodology is illustrated using presence-absence data on two species of herons (Purple Heron, Ardea purpurea and Grey Heron, Ardea cinerea). Estimates of the extinction and colonization rates showed interspecific differences and strong temporal and spatial variations. We were also able to test specific predictions about colony dynamics based on ideas about habitat change and metapopulation dynamics. We recommend estimators based on probabilistic modelling for future work on colony dynamics. We also believe that this methodological framework has wide application to problems in animal ecology concerning metapopulation and community dynamics.
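The occupied/unoccupied colony-site process described above can be written as a two-state Markov chain; the sketch below propagates site occupancy from assumed extinction and colonization probabilities and is only a schematic of the dynamics, not the robust-design estimator that accounts for imperfect detection.

import numpy as np

epsilon = 0.15   # annual probability that an occupied colony site goes extinct (assumed)
gamma = 0.25     # annual probability that an empty site is colonized (assumed)

# Transition matrix over the states [unoccupied, occupied].
P = np.array([[1 - gamma, gamma],
              [epsilon, 1 - epsilon]])

state = np.array([0.6, 0.4])          # initial fractions of unoccupied/occupied sites
for year in range(20):
    state = state @ P

print("long-run occupancy:", state[1])            # approaches gamma / (gamma + epsilon)
print("equilibrium check:", gamma / (gamma + epsilon))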
FISM 2.0: Improved Spectral Range, Resolution, and Accuracy
NASA Technical Reports Server (NTRS)
Chamberlin, Phillip C.
2012-01-01
The Flare Irradiance Spectral Model (FISM) was first released in 2005 to provide accurate estimates of the solar VUV (0.1-190 nm) irradiance to the Space Weather community. This model was based on TIMED SEE as well as UARS and SORCE SOLSTICE measurements, and was the first model to include a 60 second temporal variation to estimate the variations due to solar flares. Along with flares, FISM also estimates the traditional solar cycle and solar rotational variations over months and decades back to 1947. This model has been highly successful in providing driving inputs to study the effect of solar irradiance variations on the Earth's ionosphere and thermosphere, lunar dust charging, as well as the Martian ionosphere. The second version of FISM, FISM2, is currently being updated to be based on the more accurate SDO/EVE data, which will provide much more accurate estimations in the 0.1-105 nm range, as well as extending the 'daily' model variation up to 300 nm based on the SOLSTICE measurements. With the spectral resolution of SDO/EVE along with SOLSTICE and the TIMED and SORCE XPS 'model' products, the entire range from 0.1-300 nm will also be available at 0.1 nm, allowing FISM2 to be improved to similar 0.1 nm spectral bins. FISM2 also will have a TSI component that will estimate the total radiated energy during flares based on the few TSI flares observed to date. Presented here will be initial results of the FISM2 modeling efforts, as well as some challenges that will need to be overcome in order for FISM2 to accurately model the solar variations on time scales of seconds to decades.
NASA Astrophysics Data System (ADS)
Coopersmith, E. J.; Cosh, M. H.
2014-12-01
NASA's SMAP satellite, launched in November of 2014, produces estimates of average volumetric soil moisture at 3, 9, and 36-kilometer scales. The calibration and validation process of these estimates requires the generation of an identically-scaled soil moisture product from existing in-situ networks. This can be achieved via the integration of NLDAS precipitation data to perform calibration of models at each in-situ gauge. In turn, these models and the gauges' volumetric estimations are used to generate soil moisture estimates at a 500m scale throughout a given test watershed by leveraging, at each location, the gauge-calibrated models deemed most appropriate in terms of proximity, calibration efficacy, soil-textural similarity, and topography. Four ARS watersheds, located in Iowa, Oklahoma, Georgia, and Arizona are employed to demonstrate the utility of this approach. The South Fork watershed in Iowa represents the simplest case - the soil textures and topography are relative constants and the variability of soil moisture is simply tied to the spatial variability of precipitation. The Little Washita watershed in Oklahoma adds soil textural variability (but remains topographically simple), while the Little River watershed in Georgia incorporates topographic classification. Finally, the Walnut Gulch watershed in Arizona adds a dense precipitation network to be employed for even finer-scale modeling estimates. Results suggest RMSE values at or below the 4% volumetric standard adopted for the SMAP mission are attainable over the desired spatial scales via this integration of modeling efforts and existing in-situ networks.
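The upscaling step described here combines gauge-calibrated estimates using weights that reflect proximity, calibration skill, and physiographic similarity, and judges the result against the 4% volumetric RMSE target. The sketch below is a simplified, hypothetical version of that weighting; the weight formula and example numbers are assumptions, not the project's actual scheme.

```python
import numpy as np

def weighted_soil_moisture(gauge_sm, dist_km, calib_skill, texture_sim):
    """Blend gauge-calibrated soil moisture into one pixel estimate.

    gauge_sm    : volumetric soil moisture (m3/m3) from each gauge model
    dist_km     : distance from each gauge to the target pixel
    calib_skill : 0-1 score of how well each gauge model calibrated
    texture_sim : 0-1 similarity of soil texture to the target pixel
    """
    w = calib_skill * texture_sim / (1.0 + dist_km)   # assumed weight form
    w = w / w.sum()
    return float(np.dot(w, gauge_sm))

def rmse(pred, obs):
    return float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(obs)) ** 2)))

# Illustrative numbers only
est = weighted_soil_moisture(
    gauge_sm=np.array([0.22, 0.27, 0.31]),
    dist_km=np.array([2.0, 8.0, 15.0]),
    calib_skill=np.array([0.9, 0.7, 0.8]),
    texture_sim=np.array([1.0, 0.6, 0.5]),
)
print(est, rmse([0.23, 0.26], [0.25, 0.24]) <= 0.04)  # 4% volumetric target
```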
NASA Astrophysics Data System (ADS)
Werren, G.; Balin, D.; Reynard, E.; Lane, S. N.
2012-04-01
Flood modelling is essential for flood hazard assessment. Modelling becomes a challenge in small, ungauged watersheds prone to flash floods, like the ones draining the town of Beni Mellal (Morocco). Four temporary streams meet in the urban area of Beni Mellal, producing sheet floods every year that are harmful to infrastructure and to people. Here, statistical analysis may not give realistic results, but the study of these repeated real flash flood events may provide a better understanding of watershed-specific hydrology. This study is part of a larger cooperation project between Switzerland and Morocco aimed at knowledge transfer in disaster risk reduction, especially through hazard mapping and land-use planning related to the implementation of hazard maps. Hydrologic and hydraulic modelling was carried out to obtain hazard maps. An important point was to find open source data and methods that could still produce a realistic model for the area concerned, in order to provide easy-to-use, cost-effective tools for risk management in developing countries like Morocco, where routine data collection is largely lacking. The data used for modelling are the Web-available TRMM 3-hour 0.25 degree rainfall data provided by the Tropical Rainfall Measuring Mission (TRMM). Hydrologic modelling for discharge estimation was undertaken using methods available in the HEC-HMS software provided by the US Army Corps of Engineers (USACE). Several transfer models were used, so as to choose the best-suited method available. As no model calibration was possible because no measured flow data were available, a one-at-a-time sensitivity analysis was performed on the chosen parameters in order to detect their influence on the results. The most important verification method, however, remained field observation, through post-flood field campaigns aimed at mapping water surfaces and depths in the flooded areas, as well as river section monitoring, where rough discharge estimates could be obtained using empirical equations. Another information source was local knowledge, as people could give a rough estimation of concentration time by describing flood evolution. Finally, hydraulic modelling of the flooded areas in the urban perimeter was performed using the capabilities of the USACE HEC-RAS software. A specific challenge at this stage was field morphology, as the flooded areas form large alluvial fans, with very different flood behaviour compared to flood plains. Model "calibration" at this stage was undertaken using the mapped water surfaces and depths. Great care was taken with field geometry design, where field observations, measured cross sections and field images were used to improve the existing DTM data. The model included protection dikes already built by local authorities in their flood-fighting effort. Because of flash-flood-specific behaviour, only maximal flooded surfaces and flow velocities were simulated, through steady flow analysis in HEC-RAS. The discharge estimates obtained for the chosen event were comparable to the 10-year return period estimates made by the watershed authorities. Times of concentration correspond to this previous estimation and to local people's descriptions. The modelled water surfaces reflect field reality. Flash-flood modelling demands extensive knowledge of the studied area in order to compensate for data scarcity. However, more precise data, like the radar rainfall estimates available in Morocco, would definitely improve outputs.
From this perspective, better data access at the local level and good use of the available methods could benefit the disaster risk reduction effort as a whole.
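Because no measured flow data were available for calibration, the study relies on a one-at-a-time sensitivity analysis of the hydrologic model parameters. The sketch below illustrates that procedure generically in Python; the model function, parameter names, and ranges are placeholders standing in for the HEC-HMS transfer-model parameters.

```python
def peak_discharge(params):
    """Stand-in for a rainfall-runoff model returning peak flow (m3/s).

    A toy relation is used here purely to demonstrate the procedure;
    it is not a HEC-HMS transfer model.
    """
    return params["runoff_coeff"] * params["rain_mm"] * params["area_km2"] / params["tc_hr"]

baseline = {"runoff_coeff": 0.4, "rain_mm": 60.0, "area_km2": 25.0, "tc_hr": 3.0}
ranges = {
    "runoff_coeff": (0.2, 0.6),
    "rain_mm": (40.0, 80.0),
    "tc_hr": (2.0, 5.0),
}

q0 = peak_discharge(baseline)
for name, (lo, hi) in ranges.items():
    effects = []
    for value in (lo, hi):
        p = dict(baseline)
        p[name] = value          # vary one parameter at a time
        effects.append(100.0 * (peak_discharge(p) - q0) / q0)
    print(f"{name:13s}: {effects[0]:+6.1f}% to {effects[1]:+6.1f}% change in peak flow")
```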
Estimating Flow-Through Balance Momentum Tares with CFD
NASA Technical Reports Server (NTRS)
Melton, John E.; James, Kevin D.; Long, Kurtis R.; Flamm, Jeffrey D.
2016-01-01
This paper describes the process used for estimating flow-through balance momentum tares. The interaction of jet engine exhausts on the Boeing ERA Hybrid Wing Body (HWB) was simulated in the NFAC 40x80 wind tunnel at NASA Ames using a pair of turbine powered simulators (TPS). High-pressure air was passed through a flow-through balance and manifold before being delivered to the TPS units. The force and moment tares that result from the internal shear and pressure distribution were estimated using CFD. Validation of the CFD simulations for these complex internal flows is a challenge, given the limited experimental data due to the complications of the internal geometry. Two CFD validation efforts are documented, and comparisons with experimental data from the final model installation are provided.
Robboy, Stanley J; Gupta, Saurabh; Crawford, James M; Cohen, Michael B; Karcher, Donald S; Leonard, Debra G B; Magnani, Barbarajean; Novis, David A; Prystowsky, Michael B; Powell, Suzanne Z; Gross, David J; Black-Schaffer, W Stephen
2015-11-01
Pathologists are physicians who make diagnoses based on interpretation of tissue and cellular specimens (surgical/cytopathology, molecular/genomic pathology, autopsy), provide medical leadership and consultation for laboratory medicine, and are integral members of their institutions' interdisciplinary patient care teams. Our objective was to develop a dynamic modeling tool to examine how individual factors and practice variables can forecast demand for pathologist services. To do so, we built and tested a computer-based software model populated with data from surveys and best estimates about current and new pathologist efforts. Most pathologists' efforts focus on anatomic (52%), laboratory (14%), and other direct services (8%) for individual patients. Population-focused services (12%) (eg, laboratory medical direction) and other professional responsibilities (14%) (eg, teaching, research, and hospital committees) consume the rest of their time. Modeling scenarios were used to assess the need to increase or decrease efforts related globally to the Affordable Care Act, and specifically, to genomic medicine, laboratory consolidation, laboratory medical direction, and new areas where pathologists' expertise can add value. Our modeling tool allows pathologists, educators, and policy experts to assess how various factors may affect demand for pathologists' services. These factors include an aging population, advances in biomedical technology, and changing roles in capitated, value-based, and team-based medical care systems. In the future, pathologists will likely have to assume new roles, develop new expertise, and become more efficient in practicing medicine to accommodate new value-based delivery models.
Hassoun, Nicole
2016-05-04
Millions of people cannot access the good quality essential medicines they need for some of the world's worst diseases, like malaria. The World Health Organization estimates that, in 2013, 198 million people became sick with malaria and 584,000 people died of the disease, while the Institute for Health Metrics and Evaluation estimates that there were 164,929,872 cases of malaria and 854,568 deaths in 2013. There are many attempts to model different aspects of the global burden of tropical diseases like malaria, but it is also important to measure success in averting malaria-related death and disability. This perspective proposes investing in a systematic effort to measure the benefits of health interventions for malaria along the lines of a model embodied in the Global Health Impact Index (global-health-impact.org). © The American Society of Tropical Medicine and Hygiene.
Genome-wide heterogeneity of nucleotide substitution model fit.
Arbiza, Leonardo; Patricio, Mateus; Dopazo, Hernán; Posada, David
2011-01-01
At a genomic scale, the patterns that have shaped molecular evolution are believed to be largely heterogeneous. Consequently, comparative analyses should use appropriate probabilistic substitution models that capture the main features under which different genomic regions have evolved. While efforts have concentrated on the development and understanding of model selection techniques, no descriptions of overall relative substitution model fit at the genome level have been reported. Here, we provide a characterization of best-fit substitution models across three genomic data sets including coding regions from mammals, vertebrates, and Drosophila (24,000 alignments). According to the Akaike Information Criterion (AIC), 82 of 88 models considered were selected as best-fit models on at least one occasion, although with very different frequencies. Most parameter estimates also varied broadly among genes. Patterns found for vertebrates and Drosophila were quite similar and often more complex than those found in mammals. Phylogenetic trees derived from models in the 95% confidence interval set showed much less variance and were significantly closer to the tree estimated under the best-fit model than trees derived from models outside this interval. Although alternative criteria selected simpler models than the AIC, they suggested similar patterns. Altogether, our results show that at a genomic scale, different gene alignments for the same set of taxa are best explained by a large variety of different substitution models and that model choice has implications on different parameter estimates including the inferred phylogenetic trees. After taking into account the differences related to sample size, our results suggest a noticeable diversity in the underlying evolutionary process. We conclude that the use of model selection techniques is important to obtain consistent phylogenetic estimates from real data at a genomic scale.
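Model choice in this study is driven by the Akaike Information Criterion, AIC = 2k - 2 ln L, computed per alignment and minimized over the candidate substitution models. The short sketch below shows that bookkeeping; the log-likelihoods and parameter counts are invented numbers, not values from the three genomic data sets.

```python
def aic(log_likelihood, n_params):
    """Akaike Information Criterion: lower is better."""
    return 2.0 * n_params - 2.0 * log_likelihood

# Hypothetical fits of three substitution models to one alignment
fits = {
    "JC69":  {"lnL": -5230.4, "k": 1},
    "HKY85": {"lnL": -5101.7, "k": 5},
    "GTR+G": {"lnL": -5089.2, "k": 10},
}

scores = {name: aic(f["lnL"], f["k"]) for name, f in fits.items()}
best = min(scores, key=scores.get)
for name, score in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{name:7s} AIC = {score:9.1f}  dAIC = {score - scores[best]:6.1f}")
print("best-fit model:", best)
```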
NASA Technical Reports Server (NTRS)
Santanello, Joseph
2011-01-01
NASA's Land Information System (LIS; lis.gsfc.nasa.gov) is a flexible land surface modeling and data assimilation framework developed over the past decade with the goal of integrating satellite- and ground-based observational data products and advanced land surface modeling techniques to produce optimal fields of land surface states and fluxes. LIS features a high performance and flexible design, and operates on an ensemble of land surface models for extension over user-specified regional or global domains. The extensible interfaces of LIS allow the incorporation of new domains, land surface models (LSMs), land surface parameters, meteorological inputs, and data assimilation and optimization algorithms. In addition, LIS has been demonstrated for parameter estimation and uncertainty estimation, and has been coupled to the Weather Research and Forecasting (WRF) mesoscale model. A visiting fellowship is currently underway to implement JULES into LIS and to undertake some fundamental science on the feedbacks between the land surface and the atmosphere. An overview of the LIS system, features, and sample results will be presented in an effort to engage the community in the potential advantages of LIS-JULES for a range of applications. Ongoing efforts to develop a framework for diagnosing land-atmosphere coupling will also be presented using the suite of LSM and PBL schemes available in LIS and WRF along with observations from the U.S. Southern Great Plains. This methodology provides a potential pathway to study factors controlling local land-atmosphere coupling (LoCo) using the LIS-WRF system, which will serve as a testbed for future experiments to evaluate coupling diagnostics within the community.
Model-based software for simulating ultrasonic pulse/echo inspections of metal components
NASA Astrophysics Data System (ADS)
Chiou, Chien-Ping; Margetan, Frank J.; Taylor, Jared L.; McKillip, Matthew; Engle, Brady J.; Roberts, Ronald A.; Barnard, Daniel J.
2017-02-01
Under the sponsorship of the National Science Foundation's Industry/University Cooperative Research Center at Iowa State University, an effort was initiated in 2015 to repackage existing research-grade software into user friendly tools for the rapid estimation of signal-to-noise ratio (S/N) for ultrasonic inspections of metals. The software combines: (1) a Python-based graphical user interface for specifying an inspection scenario and displaying results; and (2) a Fortran-based engine for computing defect signals and backscattered grain noise characteristics. The latter makes use of the Thompson-Gray Model for the response from an internal defect and the Independent Scatterer Model for backscattered grain noise. This paper provides an overview of the ongoing modeling effort with emphasis on recent developments. These include: treatment of angle-beam inspections, implementation of distance-amplitude corrections, changes in the generation of "invented" calibration signals, efforts to simulate ultrasonic C-scans, and experimental testing of model predictions. The simulation software can now treat both normal and oblique-incidence immersion inspections of curved metal components having equiaxed microstructures in which the grain size varies with depth. Both longitudinal and shear-wave inspections are treated. The model transducer can be planar, spherically focused, or bi-cylindrically focused. A calibration (or reference) signal is required and is used to deduce the measurement system efficiency function. This can be "invented" by the software using center frequency and bandwidth information specified by the user, or, alternatively, a measured calibration signal can be used. Defect types include flat-bottomed-hole reference reflectors, and spherical pores and inclusions. Simulation outputs include estimated defect signal amplitudes, root-mean-squared grain noise amplitudes, and S/N as functions of the depth of the defect within the metal component. At any particular depth, the user can view a simulated A-scan displaying the superimposed defect and grain-noise waveforms. The realistic grain noise signals used in the A-scans are generated from a set of measured "universal" noise signals whose strengths and spectral characteristics are altered to match predicted noise characteristics for the simulation at hand. We present simulation examples demonstrating recent developments, and discuss plans to improve simulator capabilities.
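The software's central output is signal-to-noise ratio versus defect depth, the peak defect signal divided by the root-mean-squared grain-noise amplitude at the same depth. A toy calculation of that ratio is sketched below; the attenuation, amplitude, and noise parameters are arbitrary stand-ins rather than outputs of the Thompson-Gray or Independent Scatterer models.

```python
import numpy as np

def snr_vs_depth(depths_mm, defect_amp0=1.0, atten_np_per_mm=0.02,
                 noise_rms0=0.05, noise_growth=0.004):
    """Toy S/N curve: attenuated defect echo over depth-dependent grain noise."""
    defect = defect_amp0 * np.exp(-2.0 * atten_np_per_mm * depths_mm)  # two-way loss
    noise_rms = noise_rms0 + noise_growth * depths_mm                   # assumed trend
    return defect / noise_rms

depths = np.linspace(5.0, 50.0, 10)
for d, snr in zip(depths, snr_vs_depth(depths)):
    print(f"depth {d:5.1f} mm  S/N = {snr:5.1f}")
```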
A small, single stage orifice pulse tube cryocooler demonstration
NASA Technical Reports Server (NTRS)
Hendricks, John B.
1990-01-01
This final report summarizes and presents the analytical and experimental progress in the present effort. The principal objective of this effort was the demonstration of a 0.25 Watt, 80 Kelvin orifice pulse tube refrigerator. The experimental apparatus is described. The design of a partially optimized pulse tube refrigerator is included. The refrigerator demonstrates an ultimate temperature of 77 K, has a projected cooling power of 0.18 Watts at 80 K, and has a measured cooling power of 1 Watt at 97 K, with an electrical efficiency of 250 Watts/Watt, much better than previous pulse tube refrigerators. A model of the pulse tube refrigerator that provides estimates of pressure ratio and mass flow within the pulse tube refrigerator, based on component physical characteristics is included. A model of a pulse tube operation based on generalized analysis which is adequate to support local optimization of existing designs is included. A model of regenerator performance based on an analogy to counterflow heat exchangers is included.
Monitoring Coastal Marshes for Persistent Flooding and Salinity Stress
NASA Technical Reports Server (NTRS)
Kalcic, Maria
2010-01-01
Our objective is to provide NASA remote sensing products that supply inundation and salinity information on an ecosystem level to support habitat switching models. The project was born out of a need by the Coastal Restoration Monitoring System (CRMS), a joint effort of the Louisiana Department of Natural Resources and the U.S. Geological Survey, for information on the persistence of flooding by storm surge and other flood waters. The results of this work support the habitat-switching modules in the Coastal Louisiana Ecosystem Assessment and Restoration (CLEAR) model, which provides scientific evaluation for restoration management. CLEAR is a collaborative effort between the Louisiana Board of Regents, the Louisiana Department of Natural Resources (LDNR), the U.S. Geological Survey (USGS), and the U.S. Army Corps of Engineers (USACE). Anticipated results will use: a) resolution-enhanced time series data combining the spatial resolution of Landsat with the temporal resolution of MODIS for inundation estimates; b) potential salinity products from radar and multispectral modeling; and c) combined inundation and salinity inputs to the habitat-switching module to produce habitat-switching maps.
Assessing Security of Supply: Three Methods Used in Finland
NASA Astrophysics Data System (ADS)
Sivonen, Hannu
Public Private Partnership (PPP) has an important role in securing supply in Finland. Three methods are used in assessing the level of security of supply. First, in national expert groups, a linear mathematical model has been used. The model is based on interdependency estimates. It ranks societal functions or its more detailed components, such as items in the food supply chain, according to the effect and risk pertinent to the interdependencies. Second, the security of supply is assessed in industrial branch committees (clusters and pools) in the form of indicators. The level of security of supply is assessed against five generic factors (dimension 1) and tens of business branch specific functions (dimension 2). Third, in two thousand individual critical companies, the maturity of operational continuity management is assessed using Capability Maturity Model (CMM) in an extranet application. The pool committees and authorities obtain an anonymous summary. The assessments are used in allocating efforts for securing supply. The efforts may be new instructions, training, exercising, and in some cases, investment and regulation.
Implications of asymptomatic carriers for infectious disease transmission and control.
Chisholm, Rebecca H; Campbell, Patricia T; Wu, Yue; Tong, Steven Y C; McVernon, Jodie; Geard, Nicholas
2018-02-01
For infectious pathogens such as Staphylococcus aureus and Streptococcus pneumoniae , some hosts may carry the pathogen and transmit it to others, yet display no symptoms themselves. These asymptomatic carriers contribute to the spread of disease but go largely undetected and can therefore undermine efforts to control transmission. Understanding the natural history of carriage and its relationship to disease is important for the design of effective interventions to control transmission. Mathematical models of infectious diseases are frequently used to inform decisions about control and should therefore accurately capture the role played by asymptomatic carriers. In practice, incorporating asymptomatic carriers into models is challenging due to the sparsity of direct evidence. This absence of data leads to uncertainty in estimates of model parameters and, more fundamentally, in the selection of an appropriate model structure. To assess the implications of this uncertainty, we systematically reviewed published models of carriage and propose a new model of disease transmission with asymptomatic carriage. Analysis of our model shows how different assumptions about the role of asymptomatic carriers can lead to different conclusions about the transmission and control of disease. Critically, selecting an inappropriate model structure, even when parameters are correctly estimated, may lead to over- or under-estimates of intervention effectiveness. Our results provide a more complete understanding of the role of asymptomatic carriers in transmission and highlight the importance of accurately incorporating carriers into models used to make decisions about disease control.
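A common way to make the role of asymptomatic carriage explicit is a compartmental transmission model in which a fraction of new infections become carriers who transmit at a reduced rate. The sketch below is one such generic structure integrated with SciPy; the compartments, parameter values, and the assumption that carriers transmit at a fraction q of the symptomatic rate are illustrative choices, not the specific model proposed in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def carrier_model(t, y, beta, q, f, gamma_c, gamma_i):
    """S, C (asymptomatic carriers), I (symptomatic), R proportions."""
    S, C, I, R = y
    force = beta * (I + q * C)          # carriers transmit at reduced rate q
    new_inf = force * S
    dS = -new_inf
    dC = f * new_inf - gamma_c * C      # fraction f of infections become carriers
    dI = (1 - f) * new_inf - gamma_i * I
    dR = gamma_c * C + gamma_i * I
    return [dS, dC, dI, dR]

params = dict(beta=0.4, q=0.5, f=0.6, gamma_c=0.1, gamma_i=0.2)  # assumed values
sol = solve_ivp(carrier_model, (0, 200), [0.99, 0.0, 0.01, 0.0],
                args=tuple(params.values()), dense_output=True)
t = np.linspace(0, 200, 5)
print(np.round(sol.sol(t).T, 3))  # columns: S, C, I, R
```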
A new software for deformation source optimization, the Bayesian Earthquake Analysis Tool (BEAT)
NASA Astrophysics Data System (ADS)
Vasyura-Bathke, H.; Dutta, R.; Jonsson, S.; Mai, P. M.
2017-12-01
Modern studies of crustal deformation and the related source estimation, including magmatic and tectonic sources, increasingly use non-linear optimization strategies to estimate geometric and/or kinematic source parameters and often consider both geodetic and seismic data jointly. Bayesian inference is increasingly being used for estimating posterior distributions of deformation source model parameters, given measured/estimated/assumed data and model uncertainties. For instance, some studies consider uncertainties of a layered medium and propagate these into source parameter uncertainties, while others use informative priors to reduce the model parameter space. In addition, innovative sampling algorithms have been developed to efficiently explore the high-dimensional parameter spaces. Compared to earlier studies, these improvements have resulted in overall more robust source model parameter estimates that include uncertainties. However, the computational burden of these methods is high and estimation codes are rarely made available along with the published results. Even if the codes are accessible, it is usually challenging to assemble them into a single optimization framework as they are typically coded in different programming languages. Therefore, further progress and future applications of these methods/codes are hampered, while reproducibility and validation of results has become essentially impossible. In the spirit of providing open-access and modular codes to facilitate progress and reproducible research in deformation source estimations, we undertook the effort of developing BEAT, a python package that comprises all the above-mentioned features in one single programming environment. The package builds on the pyrocko seismological toolbox (www.pyrocko.org), and uses the pymc3 module for Bayesian statistical model fitting. BEAT is an open-source package (https://github.com/hvasbath/beat), and we encourage and solicit contributions to the project. Here, we present our strategy for developing BEAT and show application examples, especially the effect of including the model prediction uncertainty of the velocity model in subsequent source optimizations: full moment tensor, Mogi source, and a moderate strike-slip earthquake.
Johnson, Aaron W; Duda, Kevin R; Sheridan, Thomas B; Oman, Charles M
2017-03-01
This article describes a closed-loop, integrated human-vehicle model designed to help understand the underlying cognitive processes that influenced changes in subject visual attention, mental workload, and situation awareness across control mode transitions in a simulated human-in-the-loop lunar landing experiment. Control mode transitions from autopilot to manual flight may cause total attentional demands to exceed operator capacity. Attentional resources must be reallocated and reprioritized, which can increase the average uncertainty in the operator's estimates of low-priority system states. We define this increase in uncertainty as a reduction in situation awareness. We present a model built upon the optimal control model for state estimation, the crossover model for manual control, and the SEEV (salience, effort, expectancy, value) model for visual attention. We modify the SEEV attention executive to direct visual attention based, in part, on the uncertainty in the operator's estimates of system states. The model was validated using the simulated lunar landing experimental data, demonstrating an average difference in the percentage of attention ≤3.6% for all simulator instruments. The model's predictions of mental workload and situation awareness, measured by task performance and system state uncertainty, also mimicked the experimental data. Our model supports the hypothesis that visual attention is influenced by the uncertainty in system state estimates. Conceptualizing situation awareness around the metric of system state uncertainty is a valuable way for system designers to understand and predict how reallocations in the operator's visual attention during control mode transitions can produce reallocations in situation awareness of certain states.
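The SEEV framework named here scores each instrument (area of interest) on salience, effort, expectancy, and value, and allocates visual attention in proportion to those scores; the article's modification adds state-estimate uncertainty to that allocation. The sketch below is a generic, hypothetical weighting of that kind, with invented coefficients and instrument scores, not the authors' calibrated model.

```python
def seev_attention(scores, coeff, uncertainty=None, u_weight=0.0):
    """Predicted fraction of attention per area of interest (AOI).

    scores      : dict of AOI -> (salience, effort, expectancy, value), each 0-1
    coeff       : weights (s, ef, ex, v); effort enters negatively
    uncertainty : optional dict of AOI -> state-estimate uncertainty (0-1)
    """
    s_w, ef_w, ex_w, v_w = coeff
    raw = {}
    for aoi, (s, ef, ex, v) in scores.items():
        value = s_w * s - ef_w * ef + ex_w * ex + v_w * v
        if uncertainty is not None:
            value += u_weight * uncertainty[aoi]   # attention drawn to uncertain states
        raw[aoi] = max(value, 0.0)
    total = sum(raw.values())
    return {aoi: val / total for aoi, val in raw.items()}

scores = {  # illustrative instrument scores
    "altitude_tape": (0.6, 0.2, 0.8, 0.9),
    "attitude_indicator": (0.8, 0.1, 0.9, 0.8),
    "fuel_gauge": (0.3, 0.3, 0.2, 0.5),
}
unc = {"altitude_tape": 0.4, "attitude_indicator": 0.2, "fuel_gauge": 0.7}
print(seev_attention(scores, coeff=(1.0, 0.5, 1.0, 1.0), uncertainty=unc, u_weight=0.8))
```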
Karl, J Bradley; Medders, Lorilee A; Maroney, Patrick F
2016-06-01
We examine whether the risk characterization estimated by catastrophic loss projection models is sensitive to the revelation of new information regarding risk type. We use commercial loss projection models from two widely employed modeling firms to estimate the expected hurricane losses of Florida Atlantic University's building stock, both including and excluding secondary information regarding hurricane mitigation features that influence damage vulnerability. We then compare the results of the models without and with this revealed information and find that the revelation of additional, secondary information influences modeled losses for the windstorm-exposed university building stock, primarily evidenced by meaningful percent differences in the loss exceedance output indicated after secondary modifiers are incorporated in the analysis. Secondary risk characteristics for the data set studied appear to have substantially greater impact on probable maximum loss estimates than on average annual loss estimates. While it may be intuitively expected for catastrophe models to indicate that secondary risk characteristics hold value for reducing modeled losses, the finding that the primary value of secondary risk characteristics is in reduction of losses in the "tail" (low probability, high severity) events is less intuitive, and therefore especially interesting. Further, we address the benefit-cost tradeoffs that commercial entities must consider when deciding whether to undergo the data collection necessary to include secondary information in modeling. Although we assert the long-term benefit-cost tradeoff is positive for virtually every entity, we acknowledge short-term disincentives to such an effort. © 2015 Society for Risk Analysis.
Probabilistic Mass Growth Uncertainties
NASA Technical Reports Server (NTRS)
Plumer, Eric; Elliott, Darren
2013-01-01
Mass has been widely used as a variable input parameter for Cost Estimating Relationships (CER) for space systems. As these space systems progress from early concept studies and drawing boards to the launch pad, their masses tend to grow substantially, hence adversely affecting a primary input to most modeling CERs. Modeling and predicting mass uncertainty, based on historical and analogous data, is therefore critical and is an integral part of modeling cost risk. This paper presents the results of an ongoing NASA effort to publish mass growth datasheets for adjusting single-point Technical Baseline Estimates (TBE) of masses of space instruments as well as spacecraft, for both earth orbiting and deep space missions at various stages of a project's lifecycle. This paper also discusses the long term strategy of NASA Headquarters in publishing similar results, using a variety of cost driving metrics, on an annual basis. This paper provides quantitative results that show decreasing mass growth uncertainties as mass estimate maturity increases. The analysis is based on historical data obtained from the NASA Cost Analysis Data Requirements (CADRe) database.
Wake Vortex Advisory System (WakeVAS) Evaluation of Impacts on the National Airspace System
NASA Technical Reports Server (NTRS)
Smith, Jeremy C.; Dollyhigh, Samuel M.
2005-01-01
This report is one of a series that describes an ongoing effort in high-fidelity modeling/simulation, evaluation and analysis of the benefits and performance metrics of the Wake Vortex Advisory System (WakeVAS) Concept of Operations being developed as part of the Virtual Airspace Modeling and Simulation (VAMS) project. A previous study determined the overall increases in runway arrival rates that could be achieved at 12 selected airports due to WakeVAS reduced aircraft spacing under Instrument Meteorological Conditions. This study builds on the previous work to evaluate the NAS-wide impacts of equipping various numbers of airports with WakeVAS. A queuing network model of the National Airspace System (LMINET), built by the Logistics Management Institute, McLean, VA, for NASA, was used to estimate the reduction in delay that could be achieved by using WakeVAS under non-visual meteorological conditions for the projected air traffic demand in 2010. The results from LMINET were used to estimate the total annual delay reduction that could be achieved and, from this, an estimate of the air carrier variable operating cost saving was made.
Nishiura, Hiroshi
2007-05-11
The incubation period of infectious diseases, the time from infection with a microorganism to onset of disease, is directly relevant to prevention and control. Since explicit models of the incubation period enhance our understanding of the spread of disease, previous classic studies were revisited, focusing on the modeling methods employed and paying particular attention to relatively unknown historical efforts. The earliest study on the incubation period of pandemic influenza was published in 1919, providing estimates of the incubation period of Spanish flu using the daily incidence on ships departing from several ports in Australia. Although the study explicitly dealt with an unknown time of exposure, the assumed periods of exposure, which had an equal probability of infection, were too long, and thus, likely resulted in slight underestimates of the incubation period. After the suggestion that the incubation period follows lognormal distribution, Japanese epidemiologists extended this assumption to estimates of the time of exposure during a point source outbreak. Although the reason why the incubation period of acute infectious diseases tends to reveal a right-skewed distribution has been explored several times, the validity of the lognormal assumption is yet to be fully clarified. At present, various different distributions are assumed, and the lack of validity in assuming lognormal distribution is particularly apparent in the case of slowly progressing diseases. The present paper indicates that (1) analysis using well-defined short periods of exposure with appropriate statistical methods is critical when the exact time of exposure is unknown, and (2) when assuming a specific distribution for the incubation period, comparisons using different distributions are needed in addition to estimations using different datasets, analyses of the determinants of incubation period, and an understanding of the underlying disease mechanisms.
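The lognormal assumption discussed above amounts to fitting a normal distribution to log incubation times; the median incubation period is then the exponential of the mean log time. Below is a minimal maximum-likelihood sketch of that fit on invented observation data, including the simple comparison against a gamma alternative of the kind the paper recommends.

```python
import numpy as np
from scipy import stats

# Invented incubation periods (days) from a hypothetical point-source outbreak
days = np.array([1.8, 2.1, 2.4, 2.6, 3.0, 3.2, 3.5, 4.1, 4.8, 6.0])

# Lognormal MLE: fit a normal to log(times)
mu, sigma = np.log(days).mean(), np.log(days).std(ddof=0)
print(f"median = {np.exp(mu):.2f} d, dispersion factor = {np.exp(sigma):.2f}")

# Compare against a gamma alternative using log-likelihood / AIC (2 free params each)
ln_shape, _, ln_scale = stats.lognorm.fit(days, floc=0)
g_shape, _, g_scale = stats.gamma.fit(days, floc=0)
ll_ln = stats.lognorm.logpdf(days, ln_shape, loc=0, scale=ln_scale).sum()
ll_g = stats.gamma.logpdf(days, g_shape, loc=0, scale=g_scale).sum()
print(f"AIC lognormal = {4 - 2 * ll_ln:.1f}, AIC gamma = {4 - 2 * ll_g:.1f}")
```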
Romer, Jeremy D.; Gitelman, Alix I.; Clements, Shaun; Schreck, Carl B.
2015-01-01
A number of researchers have attempted to estimate salmonid smolt survival during outmigration through an estuary. However, it is currently unclear how the design of such studies influences the accuracy and precision of survival estimates. In this simulation study we consider four patterns of smolt survival probability in the estuary, and test the performance of several different sampling strategies for estimating estuarine survival assuming perfect detection. The four survival probability patterns each incorporate a systematic component (constant, linearly increasing, increasing and then decreasing, and two pulses) and a random component to reflect daily fluctuations in survival probability. Generally, spreading sampling effort (tagging) across the season resulted in more accurate estimates of survival. All sampling designs in this simulation tended to under-estimate the variation in the survival estimates because seasonal and daily variation in survival probability are not incorporated in the estimation procedure. This under-estimation results in poorer performance of estimates from larger samples. Thus, tagging more fish may not result in better estimates of survival if important components of variation are not accounted for. The results of our simulation incorporate survival probabilities and run distribution data from previous studies to help illustrate the tradeoffs among sampling strategies in terms of the number of tags needed and distribution of tagging effort. This information will assist researchers in developing improved monitoring programs and encourage discussion regarding issues that should be addressed prior to implementation of any telemetry-based monitoring plan. We believe implementation of an effective estuary survival monitoring program will strengthen the robustness of life cycle models used in recovery plans by providing missing data on where and how much mortality occurs in the riverine and estuarine portions of smolt migration. These data could result in better informed management decisions and assist in guidance for more effective estuarine restoration projects.
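The simulation design described above combines a systematic seasonal survival pattern with daily random fluctuation, then compares tagging strategies under perfect detection. A stripped-down version is sketched below; the survival curves, season length, and tag allocations are arbitrary examples rather than the study's actual settings.

```python
import numpy as np

rng = np.random.default_rng(7)
season_days = 90

def daily_survival(pattern):
    """Daily survival probability: systematic component + random noise."""
    t = np.arange(season_days)
    if pattern == "constant":
        base = np.full(season_days, 0.7)
    elif pattern == "increasing":
        base = 0.5 + 0.4 * t / season_days
    else:
        raise ValueError(pattern)
    return np.clip(base + rng.normal(0.0, 0.05, season_days), 0.0, 1.0)

def estimate(p_daily, tag_days, tags_per_day):
    """Tag fish on given days, observe survival, return estimated mean survival."""
    survived = total = 0
    for d in tag_days:
        outcomes = rng.random(tags_per_day) < p_daily[d]
        survived += outcomes.sum()
        total += tags_per_day
    return survived / total

p = daily_survival("increasing")
spread = np.linspace(0, season_days - 1, 30, dtype=int)   # effort spread over season
clumped = np.arange(30)                                    # effort early in season
print("true mean:", round(p.mean(), 3),
      "spread:", round(estimate(p, spread, 20), 3),
      "clumped:", round(estimate(p, clumped, 20), 3))
```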
Statistical aspects of point count sampling
Barker, R.J.; Sauer, J.R.; Ralph, C.J.; Sauer, J.R.; Droege, S.
1995-01-01
The dominant feature of point counts is that they do not census birds, but instead provide incomplete counts of individuals present within a survey plot. Considering a simple model for point count sampling, we demonstrate that use of these incomplete counts can bias estimators and testing procedures, leading to inappropriate conclusions. A large portion of the variability in point counts is caused by the incomplete counting, and this within-count variation can be confounded with ecologically meaningful variation. We recommend caution in the analysis of estimates obtained from point counts. Using our model, we also consider optimal allocation of sampling effort. The critical step in the optimization process is in determining the goals of the study and the methods that will be used to meet these goals. By explicitly defining the constraints on sampling and by estimating the relationship between precision and bias of estimators and time spent counting, we can predict the optimal time at a point for each of several monitoring goals. In general, time spent at a point will differ depending on the goals of the study.
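The core point made here is that a point count is an incomplete, binomial-like sample of the birds actually present, so raw counts confound abundance with detectability. The sketch below illustrates that confounding with simulated data; the abundances and detection probabilities are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def point_counts(true_n, detect_p, n_plots=500):
    """Counts at n_plots plots with true abundance true_n and detection prob detect_p."""
    return rng.binomial(true_n, detect_p, n_plots)

# Two habitats with IDENTICAL true abundance but different detectability
open_habitat = point_counts(true_n=10, detect_p=0.6)
dense_habitat = point_counts(true_n=10, detect_p=0.3)

print("mean count, open:", open_habitat.mean())    # ~6: index suggests more birds
print("mean count, dense:", dense_habitat.mean())  # ~3: index suggests fewer birds
# Raw counts differ two-fold even though abundance is the same, so
# uncorrected counts can lead to inappropriate conclusions.
```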
Designing Control System Application Software for Change
NASA Technical Reports Server (NTRS)
Boulanger, Richard
2001-01-01
The Unified Modeling Language (UML) was used to design the Environmental Systems Test Stand (ESTS) control system software. The UML was chosen for its ability to facilitate a clear dialog between software designer and customer, from which requirements are discovered and documented in a manner that transposes directly to program objects. Applying the UML to control system software design has resulted in a baseline set of documents from which change, and the effort of that change, can be accurately measured. As the Environmental Systems Test Stand evolves, accurate estimates of the time and effort required to change the control system software will be made. Accurate quantification of the cost of software change can be made before implementation, improving schedule and budget accuracy.
NASA Astrophysics Data System (ADS)
Odman, M. T.; Hu, Y.; Russell, A. G.
2016-12-01
Prescribed burning is practiced throughout the US, and most widely in the Southeast, for the purpose of maintaining and improving the ecosystem and reducing the wildfire risk. However, prescribed burn emissions contribute significantly to the trace gas and particulate matter loads in the atmosphere. In places where air quality is already stressed by other anthropogenic emissions, prescribed burns can lead to major health and environmental problems. Air quality modeling efforts are under way to assess the impacts of prescribed burn emissions. Operational forecasts of the impacts are also emerging for use in dynamic management of air quality as well as the burns. Unfortunately, large uncertainties exist in the process of estimating prescribed burn emissions and these uncertainties limit the accuracy of the burn impact predictions. Prescribed burn emissions are estimated by using either ground-based information or satellite observations. When there is sufficient local information about the burn area, the types of fuels, their consumption amounts, and the progression of the fire, ground-based estimates are more accurate. In the absence of such information, satellites remain the only reliable source for emission estimation. To determine the level of uncertainty in prescribed burn emissions, we compared estimates derived from a burn permit database and other ground-based information to the estimates of the Biomass Burning Emissions Product derived from a constellation of NOAA and NASA satellites. Using these emissions estimates we conducted simulations with the Community Multiscale Air Quality (CMAQ) model and predicted trace gas and particulate matter concentrations throughout the Southeast for two consecutive burn seasons (2015 and 2016). In this presentation, we will compare model-predicted concentrations to measurements at monitoring stations and evaluate whether the differences are commensurate with our emission uncertainty estimates. We will also investigate whether spatial and temporal patterns in the differences reveal the sources of the uncertainty in the prescribed burn emission estimates.
The evolution of life-history variation in fishes, with particular reference to flatfishes
NASA Astrophysics Data System (ADS)
Roff, Derek A.
This paper explores four aspects of the evolution of life-history variation in fish, with particular reference to the flatfishes: 1. genetic variation and evolutionary response; 2. the size and age at first reproduction; 3. adult lifespan and variation in recruitment; 4. the relationship between reproductive effort and age. Evolutionary response may be limited by previous evolutionary pathways (phylogenetic variation) or by lack of genetic variation due to selection for a single trait. Estimates of heritability suggest, as predicted, that selection is stronger on life-history traits than morphological traits; but there is still adequate genetic variation to permit fairly rapid evolutionary changes. Several approaches to the analysis of the optimal age and size at first reproduction are discussed in the light of a general life-history model based on the assumption that natural selection maximizes r or R0. It is concluded that one of the most important areas of future research is the relationship between reproduction and mortality. Murphy's hypothesis that the reproductive lifespan should increase with variation in spawning success is shown to be incorrect for fish, at least at the level of interspecific comparison. The model of Charlesworth & León predicting the sufficient condition for reproductive effort to increase with age is tested: in 28 of 31 cases the model predicts an increase of reproductive effort with age. These results suggest that, in general, reproductive effort should increase with age in fish. This prediction is confirmed in the 15 species for which adequate data exist.
Schaefer, David R; Adams, Jimi; Haas, Steven A
2013-10-01
Adolescent smoking and friendship networks are related in many ways that can amplify smoking prevalence. Understanding and developing interventions within such a complex system requires new analytic approaches. We draw on recent advances in dynamic network modeling to develop a technique that explores the implications of various intervention strategies targeted toward micro-level processes. Our approach begins by estimating a stochastic actor-based model using data from one school in the National Longitudinal Study of Adolescent Health. The model provides estimates of several factors predicting friendship ties and smoking behavior. We then use estimated model parameters to simulate the coevolution of friendship and smoking behavior under potential intervention scenarios. Namely, we manipulate the strength of peer influence on smoking and the popularity of smokers relative to nonsmokers. We measure how these manipulations affect smoking prevalence, smoking initiation, and smoking cessation. Results indicate that both peer influence and smoking-based popularity affect smoking behavior and that their joint effects are nonlinear. This study demonstrates how a simulation-based approach can be used to explore alternative scenarios that may be achievable through intervention efforts and offers new hypotheses about the association between friendship and smoking.
NASA Astrophysics Data System (ADS)
Courchesne, Samuel
Knowledge of the dynamic characteristics of a fixed-wing UAV is necessary to design flight control laws and to build a high-quality flight simulator. The basic features of a flight mechanics model include the mass and inertia properties and the major aerodynamic terms; determining them is a complex process involving various numerical analysis techniques and experimental procedures. This thesis focuses on estimation techniques applied to the problem of estimating stability and control derivatives from flight test data provided by an experimental UAV. To achieve this objective, a modern identification methodology (Quad-M) is used to coordinate the processing tasks from multidisciplinary fields, such as parameter estimation modeling, instrumentation, the definition of flight maneuvers, and validation. The system under study is a non-linear model with six degrees of freedom and a linear aerodynamic model. Time domain techniques are used for identification of the drone. The first technique, the equation error method, is used to determine the structure of the aerodynamic model. Thereafter, the output error method and the filter error method are used to estimate the values of the aerodynamic coefficients. Matlab scripts for parameter estimation obtained from the American Institute of Aeronautics and Astronautics (AIAA) are used and modified as necessary to achieve the desired results. A substantial part of this research is devoted to the design of experiments, including the onboard data acquisition system and the definition of flight maneuvers. The flight tests were conducted under stable flight conditions and with low atmospheric disturbance. Nevertheless, the identification results showed that the filter error method is most effective for estimating the parameters of the drone, owing to the presence of process and measurement noise. The aerodynamic coefficients are validated using a numerical analysis based on the vortex method. In addition, a simulation model incorporating the estimated parameters is used for comparison with the measured state behavior. Finally, good agreement between the results is demonstrated despite a limited number of flight data. Keywords: drone, identification, estimation, nonlinear, flight test, system, aerodynamic coefficient.
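The equation error method mentioned above reduces, for a linear aerodynamic model, to an ordinary least-squares regression of measured aerodynamic coefficients on the recorded states and control inputs. The sketch below shows that regression for a hypothetical pitching-moment model; the regressor set, signal values, and derivative names are illustrative and not taken from the thesis data.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 400  # samples from a hypothetical elevator-doublet maneuver

# Assumed measured signals (nondimensional): angle of attack, pitch rate term, elevator
alpha = 0.05 * np.sin(np.linspace(0, 6 * np.pi, n))
q_hat = 0.02 * np.cos(np.linspace(0, 6 * np.pi, n))
delta_e = 0.03 * np.sign(np.sin(np.linspace(0, 3 * np.pi, n)))

# "True" derivatives used only to manufacture a measured pitching-moment coefficient
Cm0, Cm_alpha, Cm_q, Cm_de = 0.02, -0.8, -12.0, -1.1
Cm_meas = (Cm0 + Cm_alpha * alpha + Cm_q * q_hat + Cm_de * delta_e
           + rng.normal(0, 0.002, n))  # measurement noise

# Equation error = least squares on the regressor matrix [1, alpha, q_hat, delta_e]
X = np.column_stack([np.ones(n), alpha, q_hat, delta_e])
theta, *_ = np.linalg.lstsq(X, Cm_meas, rcond=None)
for name, est in zip(["Cm0", "Cm_alpha", "Cm_q", "Cm_de"], theta):
    print(f"{name:9s} = {est:8.3f}")
```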
Hisano, Mizue; Connolly, Sean R; Robbins, William D
2011-01-01
Overfishing of sharks is a global concern, with increasing numbers of species threatened by overfishing. For many sharks, both catch rates and underwater visual surveys have been criticized as indices of abundance. In this context, estimation of population trends using individual demographic rates provides an important alternative means of assessing population status. However, such estimates involve uncertainties that must be appropriately characterized to credibly and effectively inform conservation efforts and management. Incorporating uncertainties into population assessment is especially important when key demographic rates are obtained via indirect methods, as is often the case for mortality rates of marine organisms subject to fishing. Here, focusing on two reef shark species on the Great Barrier Reef, Australia, we estimated natural and total mortality rates using several indirect methods, and determined the population growth rates resulting from each. We used bootstrapping to quantify the uncertainty associated with each estimate, and to evaluate the extent of agreement between estimates. Multiple models produced highly concordant natural and total mortality rates, and associated population growth rates, once the uncertainties associated with the individual estimates were taken into account. Consensus estimates of natural and total population growth across multiple models support the hypothesis that these species are declining rapidly due to fishing, in contrast to conclusions previously drawn from catch rate trends. Moreover, quantitative projections of abundance differences on fished versus unfished reefs, based on the population growth rate estimates, are comparable to those found in previous studies using underwater visual surveys. These findings appear to justify management actions to substantially reduce the fishing mortality of reef sharks. They also highlight the potential utility of rigorously characterizing uncertainty, and applying multiple assessment methods, to obtain robust estimates of population trends in species threatened by overfishing.
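The uncertainty quantification in this study relies on bootstrapping the inputs to each indirect mortality estimator and carrying the resampled values through to population growth rate. The sketch below shows the bare mechanics of that resampling on invented data, using a made-up indirect estimator; it is not one of the specific methods applied to the reef shark data.

```python
import numpy as np

rng = np.random.default_rng(11)

# Invented maximum ages observed for a hypothetical shark sample
max_ages = np.array([14, 15, 16, 16, 17, 18, 18, 19, 20, 22])

def mortality_from_longevity(ages):
    """Placeholder indirect estimator: mortality inversely related to longevity."""
    return 4.0 / np.max(ages)          # assumed rule of thumb, for illustration only

def growth_rate(natural_mortality, recruitment=0.12):
    """Toy population growth rate from recruitment minus mortality."""
    return recruitment - natural_mortality

boot = np.array([
    growth_rate(mortality_from_longevity(rng.choice(max_ages, size=max_ages.size)))
    for _ in range(5000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"population growth rate: {boot.mean():+.3f} (95% CI {lo:+.3f} to {hi:+.3f})")
print("decline supported by >97.5% of resamples:", np.mean(boot < 0) > 0.975)
```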
Examining the utility of satellite-based wind sheltering estimates for lake hydrodynamic modeling
Van Den Hoek, Jamon; Read, Jordan S.; Winslow, Luke A.; Montesano, Paul; Markfort, Corey D.
2015-01-01
Satellite-based measurements of vegetation canopy structure have been in common use for the last decade but have never been used to estimate the canopy's impact on wind sheltering of individual lakes. Wind sheltering is caused by slower winds in the wake of topography and shoreline obstacles (e.g. forest canopy) and influences heat loss and the flux of wind-driven mixing energy into lakes, which control lake temperatures and indirectly structure lake ecosystem processes, including carbon cycling and thermal habitat partitioning. Lakeshore wind sheltering has often been parameterized by lake surface area, but such empirical relationships are only based on forested lakeshores and overlook the contributions of local land cover and terrain to wind sheltering. This study is the first to examine the utility of satellite imagery-derived broad-scale estimates of wind sheltering across a diversity of land covers. Using 30 m spatial resolution ASTER GDEM2 elevation data, the mean sheltering height, hs, being the combination of local topographic rise and canopy height above the lake surface, was calculated within 100 m-wide buffers surrounding 76,000 lakes in the U.S. state of Wisconsin. Uncertainty of GDEM2-derived hs was compared to SRTM-, high-resolution G-LiHT lidar-, and ICESat-derived estimates of hs; the respective influences of land cover type and buffer width on hs were examined; and the effect of including satellite-based hs on the accuracy of a statewide lake hydrodynamic model was assessed. Though GDEM2 hs uncertainty was comparable to or better than other satellite-based measures of hs, its higher spatial resolution and broader spatial coverage allowed more lakes to be included in modeling efforts. GDEM2 was shown to offer superior utility for estimating hs compared to other satellite-derived data, but was limited by its consistent underestimation of hs, inability to detect within-buffer hs variability, and differing accuracy across land cover types. Nonetheless, considering a GDEM2 hs-derived wind sheltering potential improved the modeled lake temperature root mean square error for non-forested lakes by 0.72 °C compared to a commonly used wind sheltering model based on lake area alone. While results from this study show promise, the limitations of near-global GDEM2 data in timeliness, temporal and spatial resolution, and vertical accuracy were apparent. As hydrodynamic modeling and high-resolution topographic mapping efforts both expand, future remote sensing-derived vegetation structure data must be improved to meet wind sheltering accuracy requirements and to expand our understanding of lake processes.
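The sheltering height hs defined above is the mean height of terrain plus canopy above the lake surface within a 100 m shoreline buffer. A minimal array-based sketch of that calculation is given below; the elevation grid, canopy heights, buffer mask, and lake level are synthetic placeholders rather than GDEM2 or lidar data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic 30 m grid around a small lake: ground elevation and canopy height (m)
ground = 300.0 + rng.normal(0.0, 1.5, (50, 50))
canopy = np.where(rng.random((50, 50)) < 0.4, rng.uniform(5, 25, (50, 50)), 0.0)
lake_level = 300.0

# Boolean mask of cells falling inside the 100 m-wide shoreline buffer
buffer_mask = np.zeros((50, 50), dtype=bool)
buffer_mask[20:30, 20:30] = True          # stand-in for a real buffer polygon

def mean_sheltering_height(ground, canopy, lake_level, mask):
    """hs = mean of (topographic rise + canopy height) above the lake surface."""
    rise = np.clip(ground + canopy - lake_level, 0.0, None)
    return float(rise[mask].mean())

print(f"hs = {mean_sheltering_height(ground, canopy, lake_level, buffer_mask):.1f} m")
```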
Atmospheric Tracer Inverse Modeling Using Markov Chain Monte Carlo (MCMC)
NASA Astrophysics Data System (ADS)
Kasibhatla, P.
2004-12-01
In recent years, there has been an increasing emphasis on the use of Bayesian statistical estimation techniques to characterize the temporal and spatial variability of atmospheric trace gas sources and sinks. The applications have been varied in terms of the particular species of interest, as well as in terms of the spatial and temporal resolution of the estimated fluxes. However, one common characteristic has been the use of relatively simple statistical models for describing the measurement and chemical transport model error statistics and prior source statistics. For example, multivariate normal probability distribution functions (pdfs) are commonly used to model these quantities and inverse source estimates are derived for fixed values of pdf parameters. While the advantage of this approach is that closed form analytical solutions for the a posteriori pdfs of interest are available, it is worth exploring Bayesian analysis approaches which allow for a more general treatment of error and prior source statistics. Here, we present an application of the Markov Chain Monte Carlo (MCMC) methodology to an atmospheric tracer inversion problem to demonstrate how more general statistical models for errors can be incorporated into the analysis in a relatively straightforward manner. The MCMC approach to Bayesian analysis, which has found wide application in a variety of fields, is a statistical simulation approach that involves computing moments of interest of the a posteriori pdf by efficiently sampling this pdf. The specific inverse problem that we focus on is the annual mean CO2 source/sink estimation problem considered by the TransCom3 project. TransCom3 was a collaborative effort involving various modeling groups and followed a common modeling and analysis protocol. As such, this problem provides a convenient case study to demonstrate the applicability of the MCMC methodology to atmospheric tracer source/sink estimation problems.
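The MCMC approach sketched in this abstract samples the posterior of source fluxes given observations, a linear transport operator, and error statistics, without requiring the closed-form Gaussian solution. Below is a bare-bones Metropolis sampler for a toy linear inversion; the transport matrix, observations, and priors are synthetic and merely stand in for the TransCom3-style CO2 problem.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy linear forward problem: obs = H @ sources + noise
n_obs, n_src = 20, 3
H = rng.uniform(0.1, 1.0, (n_obs, n_src))          # stand-in transport operator
true_sources = np.array([2.0, -1.0, 0.5])
obs = H @ true_sources + rng.normal(0.0, 0.3, n_obs)

obs_sigma, prior_mu, prior_sigma = 0.3, np.zeros(n_src), 2.0

def log_posterior(s):
    """Gaussian likelihood plus Gaussian prior (up to a constant)."""
    resid = obs - H @ s
    log_like = -0.5 * np.sum((resid / obs_sigma) ** 2)
    log_prior = -0.5 * np.sum(((s - prior_mu) / prior_sigma) ** 2)
    return log_like + log_prior

# Random-walk Metropolis
current = prior_mu.copy()
current_lp = log_posterior(current)
samples = []
for _ in range(20000):
    proposal = current + rng.normal(0.0, 0.05, n_src)
    lp = log_posterior(proposal)
    if np.log(rng.random()) < lp - current_lp:      # accept/reject step
        current, current_lp = proposal, lp
    samples.append(current.copy())

samples = np.array(samples[5000:])                   # drop burn-in
print("posterior mean:", np.round(samples.mean(axis=0), 2), "true:", true_sources)
```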
Model improvements and validation of TerraSAR-X precise orbit determination
NASA Astrophysics Data System (ADS)
Hackel, S.; Montenbruck, O.; Steigenberger, P.; Balss, U.; Gisinger, C.; Eineder, M.
2017-05-01
The radar imaging satellite mission TerraSAR-X requires precisely determined satellite orbits for validating geodetic remote sensing techniques. Since the achieved quality of the operationally derived, reduced-dynamic (RD) orbit solutions limits the capabilities of the synthetic aperture radar (SAR) validation, an effort is made to improve the estimated orbit solutions. This paper discusses the benefits of refined dynamical models on orbit accuracy as well as estimated empirical accelerations and compares different dynamic models in an RD orbit determination. Modeling aspects discussed in the paper include the use of a macro-model for drag and radiation pressure computation, the use of high-quality atmospheric density and wind models as well as the benefit of high-fidelity gravity and ocean tide models. The Sun-synchronous dusk-dawn orbit geometry of TerraSAR-X results in a particularly high correlation of solar radiation pressure modeling and estimated normal-direction positions. Furthermore, this mission offers a unique suite of independent sensors for orbit validation. Several parameters serve as quality indicators for the estimated satellite orbit solutions. These include the magnitude of the estimated empirical accelerations, satellite laser ranging (SLR) residuals, and SLR-based orbit corrections. Moreover, the radargrammetric distance measurements of the SAR instrument are selected for assessing the quality of the orbit solutions and compared to the SLR analysis. The use of high-fidelity satellite dynamics models in the RD approach is shown to clearly improve the orbit quality compared to simplified models and loosely constrained empirical accelerations. The estimated empirical accelerations are substantially reduced by 30% in the tangential direction when working with the refined dynamical models. Likewise, the SLR residuals are reduced from -3 ± 17 to 2 ± 13 mm, and the SLR-derived normal-direction position corrections are reduced from 15 to 6 mm, obtained from the 2012-2014 period. The radar range bias is reduced from -10.3 to -6.1 mm with the updated orbit solutions, which coincides with the reduced standard deviation of the SLR residuals. The improvements are mainly driven by the satellite macro-model for the purpose of solar radiation pressure modeling, improved atmospheric density models, and the use of state-of-the-art gravity field models.
Development of a Kemp’s ridley sea turtle stock assessment model
Gallaway, Benny J.; Gazey, William; Caillouet, Charles W.; Plotkin, Pamela T.; Abreu Grobois, F. Alberto; Amos, Anthony F.; Burchfield, Patrick M.; Carthy, Raymond R.; Castro Martinez, Marco A.; Cole, John G.; Coleman, Andrew T.; Cook, Melissa; DiMarco, Steven F.; Epperly, Sheryan P.; Fujiwara, Masami; Gamez, Daniel Gomez; Graham, Gary L.; Griffin, Wade L.; Illescas Martinez, Francisco; Lamont, Margaret M.; Lewison, Rebecca L.; Lohmann, Kenneth J.; Nance, James M.; Pitchford, Jonathan; Putman, Nathan F.; Raborn, Scott W.; Rester, Jeffrey K.; Rudloe, Jack J.; Sarti Martinez, Laura; Schexnayder, Mark; Schmid, Jeffrey R.; Shaver, Donna J.; Slay, Christopher; Tucker, Anton D.; Tumlin, Mandy; Wibbels, Thane; Zapata Najera, Blanca M.
2016-01-01
We developed a Kemp’s ridley (Lepidochelys kempii) stock assessment model to evaluate the relative contributions of conservation efforts and other factors toward this critically endangered species’ recovery. The Kemp’s ridley demographic model developed by the Turtle Expert Working Group (TEWG) in 1998 and 2000 and updated for the binational recovery plan in 2011 was modified for use as our base model. The TEWG model uses indices of the annual reproductive population (number of nests) and hatchling recruitment to predict future annual numbers of nests on the basis of a series of assumptions regarding age and maturity, remigration interval, sex ratios, nests per female, juvenile mortality, and a putative “turtle excluder device effect” multiplier starting in 1990. This multiplier was necessary to fit the number of nests observed in 1990 and later. We added the effects of shrimping effort directly, modified by habitat weightings, as a proxy for all sources of anthropogenic mortality. Additional data included in our model were incremental growth of Kemp’s ridleys marked and recaptured in the Gulf of Mexico, and the length frequency of stranded Kemp’s ridleys. We also added a 2010 mortality factor that was necessary to fit the number of nests for 2010 and later (2011 and 2012). Last, we estimated natural mortality empirically, using a Lorenzen mortality curve and growth estimates. Although our model generated reasonable estimates of annual total turtle deaths attributable to shrimp trawling, as well as additional deaths due to undetermined anthropogenic causes in 2010, we were unable to provide a clear explanation for the observed increase in the number of stranded Kemp’s ridleys in recent years, and subsequent disruption of the species’ exponential growth since the 2009 nesting season. Our consensus is that expanded data collection at the nesting beaches is needed and of high priority, and that 2015 be targeted for the next stock assessment to evaluate the 2010 event using more recent nesting and in-water data.
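The Lorenzen natural mortality curve referenced above makes mortality decline with body size. A minimal sketch of the length-inverse form, with placeholder parameter values rather than estimates from the assessment, is:

```python
import numpy as np

def lorenzen_m(length_cm, m_ref=0.2, l_ref=60.0):
    """Length-inverse Lorenzen natural mortality: M(L) = m_ref * l_ref / L.
    m_ref is the annual natural mortality at reference length l_ref (cm).
    Parameter values here are placeholders, not assessment estimates."""
    return m_ref * l_ref / np.asarray(length_cm, dtype=float)

# Convert a growth trajectory (mean length-at-age) into age-specific survival.
lengths = np.array([10.0, 25.0, 40.0, 55.0, 65.0])   # hypothetical lengths at ages 1-5
m_at_age = lorenzen_m(lengths)
survival = np.exp(-m_at_age)                          # annual survival fraction
print(np.round(m_at_age, 3), np.round(survival, 3))
```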
NASA Technical Reports Server (NTRS)
Murphy, P. C.
1986-01-01
An algorithm for maximum likelihood (ML) estimation is developed with an efficient method for approximating the sensitivities. The ML algorithm relies on a new optimization method referred to as a modified Newton-Raphson with estimated sensitivities (MNRES). MNRES determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. With the fitted surface, sensitivity information can be updated at each iteration with less computational effort than that required by either a finite-difference method or integration of the analytically determined sensitivity equations. MNRES eliminates the need to derive sensitivity equations for each new model, and thus provides flexibility to use model equations in any convenient format. A random search technique for determining the confidence limits of ML parameter estimates is applied to nonlinear estimation problems for airplanes. The confidence intervals obtained by the search are compared with Cramer-Rao (CR) bounds at the same confidence level. The degree of nonlinearity in the estimation problem is an important factor in the relationship between CR bounds and the error bounds determined by the search technique. Beale's measure of nonlinearity is developed in this study for airplane identification problems; it is used to empirically correct confidence levels and to predict the degree of agreement between CR bounds and search estimates.
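For context, the Cramer-Rao bounds mentioned above follow from the Fisher information built from output sensitivities and the measurement noise covariance. The sketch below illustrates that computation for an assumed exponential-decay model; it is not the MNRES algorithm itself, and the model and noise level are invented.

```python
import numpy as np

# Hedged sketch: Cramer-Rao bounds from output sensitivities.
# For measurements with covariance R and sensitivity matrix S = d(output)/d(theta),
# the Fisher information is F = S^T R^{-1} S and the CR bound on parameter i
# is sqrt([F^{-1}]_ii). The model below (exponential decay) is purely illustrative.
t = np.linspace(0.0, 5.0, 50)
theta = np.array([2.0, 0.7])                    # amplitude, decay rate

def model(theta, t):
    a, k = theta
    return a * np.exp(-k * t)

def sensitivities(theta, t, eps=1e-6):
    """Finite-difference sensitivities d(model)/d(theta), one column per parameter."""
    base = model(theta, t)
    cols = []
    for i in range(len(theta)):
        th = theta.copy()
        th[i] += eps
        cols.append((model(th, t) - base) / eps)
    return np.column_stack(cols)

sigma = 0.05                                     # assumed measurement noise std dev
S = sensitivities(theta, t)
R_inv = np.eye(len(t)) / sigma**2
F = S.T @ R_inv @ S                              # Fisher information matrix
cr_bounds = np.sqrt(np.diag(np.linalg.inv(F)))   # lower bounds on parameter std devs
print("Cramer-Rao bounds:", cr_bounds)
```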
An Estimate of Avian Mortality at Communication Towers in the United States and Canada
Longcore, Travis; Rich, Catherine; Mineau, Pierre; MacDonald, Beau; Bert, Daniel G.; Sullivan, Lauren M.; Mutrie, Erin; Gauthreaux, Sidney A.; Avery, Michael L.; Crawford, Robert L.; Manville, Albert M.; Travis, Emilie R.; Drake, David
2012-01-01
Avian mortality at communication towers in the continental United States and Canada is an issue of pressing conservation concern. Previous estimates of this mortality have been based on limited data and have not included Canada. We compiled a database of communication towers in the continental United States and Canada and estimated avian mortality by tower with a regression relating avian mortality to tower height. This equation was derived from 38 tower studies for which mortality data were available and corrected for sampling effort, search efficiency, and scavenging where appropriate. Although most studies document mortality at guyed towers with steady-burning lights, we accounted for lower mortality at towers without guy wires or steady-burning lights by adjusting estimates based on published studies. The resulting estimate of mortality at towers is 6.8 million birds per year in the United States and Canada. Bootstrapped subsampling indicated that the regression was robust to the choice of studies included and a comparison of multiple regression models showed that incorporating sampling, scavenging, and search efficiency adjustments improved model fit. Estimating total avian mortality is only a first step in developing an assessment of the biological significance of mortality at communication towers for individual species or groups of species. Nevertheless, our estimate can be used to evaluate this source of mortality, develop subsequent per-species mortality estimates, and motivate policy action. PMID:22558082
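A rough sketch of the bootstrapped-subsampling idea is given below: refit a mortality-height regression on resampled studies and summarize the spread of the resulting total-mortality estimate. The study-level data and tower inventory are synthetic, not the published dataset, and the log-log regression form is an assumption for illustration.

```python
import numpy as np

# Toy illustration of bootstrapped subsampling of tower studies:
# refit a mortality-vs-height regression on resampled studies and summarize
# the spread of the total-mortality estimate. Data below are synthetic.
rng = np.random.default_rng(7)

heights_study = rng.uniform(50, 500, size=38)                 # tower heights in 38 studies (m)
mortality_study = 0.5 * heights_study**1.2 * rng.lognormal(0, 0.3, 38)

tower_heights = rng.uniform(30, 600, size=5000)               # hypothetical tower inventory

def total_mortality(hs, ms, towers):
    """Fit a log-log regression of mortality on height, then sum predictions over towers."""
    slope, intercept = np.polyfit(np.log(hs), np.log(ms), 1)
    return np.sum(np.exp(intercept) * towers**slope)

boot = [total_mortality(heights_study[idx], mortality_study[idx], tower_heights)
        for idx in (rng.integers(0, 38, 38) for _ in range(1000))]
print("median estimate:", np.median(boot))
print("90% interval:", np.percentile(boot, [5, 95]))
```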
Black bear density in Glacier National Park, Montana
Stetz, Jeff B.; Kendall, Katherine C.; Macleod, Amy C.
2013-01-01
We report the first abundance and density estimates for American black bears (Ursus americanus) in Glacier National Park (NP), Montana, USA. We used data from 2 independent and concurrent noninvasive genetic sampling methods—hair traps and bear rubs—collected during 2004 to generate individual black bear encounter histories for use in closed population mark–recapture models. We improved the precision of our abundance estimate by using noninvasive genetic detection events to develop individual-level covariates of sampling effort within the full and one-half mean maximum distance moved (MMDM) from each bear’s estimated activity center to explain capture probability heterogeneity and inform our estimate of the effective sampling area. Models including the one-half MMDM covariate received overwhelming Akaike’s Information Criterion support, suggesting that buffering our study area by this distance would be more appropriate than no buffer or the full MMDM buffer for estimating the effectively sampled area and thereby density. Our model-averaged super-population abundance estimate was 603 (95% CI = 522–684) black bears for Glacier NP. Our black bear density estimate (11.4 bears/100 km2, 95% CI = 9.9–13.0) was consistent with published estimates for populations that are sympatric with grizzly bears (U. arctos) and without access to spawning salmonids. Published 2013. This article is a U.S. Government work and is in the public domain in the USA.
NASA Technical Reports Server (NTRS)
Scott, Elaine P.
1994-01-01
Thermal stress analyses are an important aspect in the development of aerospace vehicles at NASA-LaRC. These analyses require knowledge of the temperature distributions within the vehicle structures, which consequently necessitates accurate thermal property data. The overall goal of this ongoing research effort is to develop methodologies for the estimation of the thermal property data needed to describe the temperature responses of these complex structures. The research strategy undertaken utilizes a building block approach. The idea here is to first focus on the development of property estimation methodologies for relatively simple conditions, such as isotropic materials at constant temperatures, and then systematically modify the technique for the analysis of more and more complex systems, such as anisotropic multi-component systems. The estimation methodology utilized is a statistically based method which incorporates experimental data and a mathematical model of the system. Several aspects of this overall research effort were investigated during the time of the ASEE summer program. One important aspect involved the calibration of the estimation procedure for the estimation of the thermal properties through the thickness of a standard material. Transient experiments were conducted using a Pyrex standard at various temperatures, and then the thermal properties (thermal conductivity and volumetric heat capacity) were estimated at each temperature. Confidence regions for the estimated values were also determined. These results were then compared to documented values. Another set of experimental tests was conducted on carbon composite samples at different temperatures. Again, the thermal properties were estimated for each temperature, and the results were compared with values obtained using another technique. In both sets of experiments, a 10-15 percent offset between the estimated values and the previously determined values was found. Another effort was related to the development of the experimental techniques. Initial experiments required a resistance heater placed between two samples. The design was modified such that the heater was placed on the surface of only one sample, as would be necessary in the analysis of built-up structures. Experiments using the modified technique were conducted on the composite sample used previously at different temperatures. The results were within 5 percent of those found using two samples. Finally, an initial heat transfer analysis, including conduction, convection and radiation components, was completed on a titanium sandwich structural sample. Experiments utilizing this sample are currently being designed and will be used to first estimate the material's effective thermal conductivity and later to determine the properties associated with each individual heat transfer component.
Combining multistate capture-recapture data with tag recoveries to estimate demographic parameters
Kendall, W.L.; Conn, P.B.; Hines, J.E.
2006-01-01
Matrix population models that allow an animal to occupy more than one state over time are important tools for population and evolutionary ecologists. Definition of state can vary, including location for metapopulation models and breeding state for life history models. For populations whose members can be marked and subsequently re-encountered, multistate mark-recapture models are available to estimate the survival and transition probabilities needed to construct population models. Multistate models have proved extremely useful in this context, but they often require a substantial amount of data and restrict estimation of transition probabilities to those areas or states subjected to formal sampling effort. At the same time, for many species, there are considerable tag recovery data provided by the public that could be modeled in order to increase precision and to extend inference to a greater number of areas or states. Here we present a statistical model for combining multistate capture-recapture data (e.g., from a breeding ground study) with multistate tag recovery data (e.g., from wintering grounds). We use this method to analyze data from a study of Canada Geese (Branta canadensis) in the Atlantic Flyway of North America. Our analysis produced marginal improvement in precision, due to relatively few recoveries, but we demonstrate how precision could be further improved with increases in the probability that a retrieved tag is reported.
Nuclear thermal propulsion engine system design analysis code development
NASA Astrophysics Data System (ADS)
Pelaccio, Dennis G.; Scheil, Christine M.; Petrosky, Lyman J.; Ivanenok, Joseph F.
1992-01-01
A Nuclear Thermal Propulsion (NTP) Engine System Design Analysis Code has recently been developed to characterize key NTP engine system design features. Such a versatile, standalone NTP system performance and engine design code is required to support ongoing and future engine system and vehicle design efforts associated with proposed Space Exploration Initiative (SEI) missions of interest. Key areas of interest in the engine system modeling effort were the reactor, shielding, and inclusion of an engine multi-redundant propellant pump feed system design option. A solid-core nuclear thermal reactor and internal shielding code model was developed to estimate the reactor's thermal-hydraulic and physical parameters based on a prescribed thermal output which was integrated into a state-of-the-art engine system design model. The reactor code module has the capability to model graphite, composite, or carbide fuels. Key output from the model consists of reactor parameters such as thermal power, pressure drop, thermal profile, and heat generation in cooled structures (reflector, shield, and core supports), as well as the engine system parameters such as weight, dimensions, pressures, temperatures, mass flows, and performance. The model's overall analysis methodology and its key assumptions and capabilities are summarized in this paper.
Robertson, Dale M.; Schwarz, Gregory E.; Saad, David A.; Alexander, Richard B.
2009-01-01
Excessive loads of nutrients transported by tributary rivers have been linked to hypoxia in the Gulf of Mexico. Management efforts to reduce the hypoxic zone in the Gulf of Mexico and improve the water quality of rivers and streams could benefit from targeting nutrient reductions toward watersheds with the highest nutrient yields delivered to sensitive downstream waters. One challenge is that most conventional watershed modeling approaches (e.g., mechanistic models) used in these management decisions do not consider uncertainties in the predictions of nutrient yields and their downstream delivery. The increasing use of parameter estimation procedures to statistically estimate model coefficients, however, allows uncertainties in these predictions to be reliably estimated. Here, we use a robust bootstrapping procedure applied to the results of a previous application of the hybrid statistical/mechanistic watershed model SPARROW (Spatially Referenced Regression On Watershed attributes) to develop a statistically reliable method for identifying “high priority” areas for management, based on a probabilistic ranking of delivered nutrient yields from watersheds throughout a basin. The method is designed to be used by managers to prioritize watersheds where additional stream monitoring and evaluations of nutrient-reduction strategies could be undertaken. Our ranking procedure incorporates information on the confidence intervals of model predictions and the corresponding watershed rankings of the delivered nutrient yields. From this quantified uncertainty, we estimate the probability that individual watersheds are among a collection of watersheds that have the highest delivered nutrient yields. We illustrate the application of the procedure to 818 eight-digit Hydrologic Unit Code watersheds in the Mississippi/Atchafalaya River basin by identifying 150 watersheds having the highest delivered nutrient yields to the Gulf of Mexico. Highest delivered yields were from watersheds in the Central Mississippi, Ohio, and Lower Mississippi River basins. With 90% confidence, only a few watersheds can be reliably placed into the highest 150 category; however, many more watersheds can be removed from consideration as not belonging to the highest 150 category. Results from this ranking procedure provide robust information on watershed nutrient yields that can benefit management efforts to reduce nutrient loadings to downstream coastal waters, such as the Gulf of Mexico, or to local receiving streams and reservoirs.
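The probabilistic ranking step can be illustrated with a small sketch: given bootstrap replicates of delivered yield for each watershed, count how often each watershed falls within the top-150 set. The replicates below are synthetic stand-ins for SPARROW output, and the perturbation model is an assumption for illustration.

```python
import numpy as np

# Hedged sketch of the probabilistic ranking idea: given bootstrap replicates of
# delivered nutrient yield for each watershed, estimate the probability that a
# watershed belongs to the set with the 150 highest yields. Yields are synthetic.
rng = np.random.default_rng(3)
n_watersheds, n_boot, top_k = 818, 2000, 150

mean_yield = rng.lognormal(mean=0.0, sigma=1.0, size=n_watersheds)
# Each bootstrap replicate perturbs the yields (stand-in for SPARROW prediction error).
reps = mean_yield * rng.lognormal(mean=0.0, sigma=0.3, size=(n_boot, n_watersheds))

# Count how often each watershed ranks in the top 150 across replicates.
in_top = np.zeros(n_watersheds)
for rep in reps:
    top_idx = np.argsort(rep)[-top_k:]
    in_top[top_idx] += 1
prob_top = in_top / n_boot

print("watersheds in top 150 with >= 90% confidence:", np.sum(prob_top >= 0.9))
print("watersheds excluded with >= 90% confidence:  ", np.sum(prob_top <= 0.1))
```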
Allocating HIV prevention funds in the United States: recommendations from an optimization model.
Lasry, Arielle; Sansom, Stephanie L; Hicks, Katherine A; Uzunangelov, Vladislav
2012-01-01
The Centers for Disease Control and Prevention (CDC) had an annual budget of approximately $327 million to fund health departments and community-based organizations for core HIV testing and prevention programs domestically between 2001 and 2006. Annual HIV incidence has been relatively stable since the year 2000 and was estimated at 48,600 cases in 2006 and 48,100 in 2009. Using estimates on HIV incidence, prevalence, prevention program costs and benefits, and current spending, we created an HIV resource allocation model that can generate a mathematically optimal allocation of the Division of HIV/AIDS Prevention's extramural budget for HIV testing, and counseling and education programs. The model's data inputs and methods were reviewed by subject matter experts internal and external to the CDC via an extensive validation process. The model projects the HIV epidemic for the United States under different allocation strategies under a fixed budget. Our objective is to support national HIV prevention planning efforts and inform the decision-making process for HIV resource allocation. Model results can be summarized into three main recommendations. First, more funds should be allocated to testing and these should further target men who have sex with men and injecting drug users. Second, counseling and education interventions ought to provide a greater focus on HIV positive persons who are aware of their status. And lastly, interventions should target those at high risk for transmitting or acquiring HIV, rather than lower-risk members of the general population. The main conclusions of the HIV resource allocation model have played a role in the introduction of new programs and provide valuable guidance to target resources and improve the impact of HIV prevention efforts in the United States.
Duarte, Adam; Adams, Michael J.; Peterson, James T.
2018-01-01
Monitoring animal populations is central to wildlife and fisheries management, and the use of N-mixture models toward these efforts has markedly increased in recent years. Nevertheless, relatively little work has evaluated estimator performance when basic assumptions are violated. Moreover, diagnostics to identify when bias in parameter estimates from N-mixture models is likely are largely unexplored. We simulated count data sets using 837 combinations of detection probability, number of sample units, number of survey occasions, and type and extent of heterogeneity in abundance or detectability. We fit Poisson N-mixture models to these data, quantified the bias associated with each combination, and evaluated if the parametric bootstrap goodness-of-fit (GOF) test can be used to indicate bias in parameter estimates. We also explored if assumption violations can be diagnosed prior to fitting N-mixture models. In doing so, we propose a new model diagnostic, which we term the quasi-coefficient of variation (QCV). N-mixture models performed well when assumptions were met and detection probabilities were moderate (i.e., ≥0.3), and the performance of the estimator improved with increasing survey occasions and sample units. However, the magnitude of bias in estimated mean abundance with even slight amounts of unmodeled heterogeneity was substantial. The parametric bootstrap GOF test did not perform well as a diagnostic for bias in parameter estimates when detectability and sample sizes were low. The results indicate the QCV is useful to diagnose potential bias and that potential bias associated with unidirectional trends in abundance or detectability can be diagnosed using Poisson regression. This study represents the most thorough assessment to date of assumption violations and diagnostics when fitting N-mixture models using the most commonly implemented error distribution. Unbiased estimates of population state variables are needed to properly inform management decision making. Therefore, we also discuss alternative approaches to yield unbiased estimates of population state variables using similar data types, and we stress that there is no substitute for an effective sample design that is grounded upon well-defined management objectives.
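For reference, a minimal negative log-likelihood for the standard Poisson N-mixture model (latent abundance Poisson, counts binomial given abundance, with the sum over latent abundance truncated at an assumed upper bound) can be written as follows. This is a generic sketch with simulated data, not the authors' simulation code, and the QCV diagnostic is not reproduced here.

```python
import numpy as np
from scipy.stats import poisson, binom
from scipy.optimize import minimize

# Minimal Poisson N-mixture negative log-likelihood: latent abundance
# N_i ~ Poisson(lambda), counts y_ij ~ Binomial(N_i, p). The sum over N is
# truncated at n_max (an assumed bound), and the toy data are simulated here.
rng = np.random.default_rng(5)
n_sites, n_visits, lam_true, p_true = 100, 4, 5.0, 0.4
N = rng.poisson(lam_true, n_sites)
y = rng.binomial(N[:, None], p_true, size=(n_sites, n_visits))

def nll(params, y, n_max=60):
    log_lam, logit_p = params
    lam, p = np.exp(log_lam), 1.0 / (1.0 + np.exp(-logit_p))
    Ns = np.arange(n_max + 1)
    prior = poisson.pmf(Ns, lam)                              # P(N)
    ll = 0.0
    for yi in y:                                              # marginalize N per site
        lik_given_N = np.prod(binom.pmf(yi[:, None], Ns[None, :], p), axis=0)
        ll += np.log(np.sum(lik_given_N * prior) + 1e-300)
    return -ll

fit = minimize(nll, x0=[np.log(2.0), 0.0], args=(y,), method="Nelder-Mead")
print("lambda_hat:", np.exp(fit.x[0]), "p_hat:", 1 / (1 + np.exp(-fit.x[1])))
```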
NASA Astrophysics Data System (ADS)
Dechant, B.; Ryu, Y.; Jiang, C.; Yang, K.
2017-12-01
Solar-induced chlorophyll fluorescence (SIF) is rapidly becoming an important tool to remotely estimate terrestrial gross primary productivity (GPP) at large spatial scales. Many findings, however, are based on empirical relationships between SIF and GPP that have been found to be dependent on plant functional types. Therefore, combining model-based analysis with observations is crucial to improve our understanding of SIF-GPP relationships. So far, most model-based results were based on SCOPE, a complex ecophysiological model with explicit description of canopy layers and a large number of parameters that may not be easily obtained reliably on large scales. Here, we report on our efforts to incorporate SIF into a two-big-leaf (sun and shade) process-based model that is suitable for obtaining its inputs entirely from satellite products. We examine if the SIF-GPP relationships are consistent with the findings from SCOPE simulations and investigate if incorporation of the SIF signal into BESS can help improve GPP estimation. A case study in a rice paddy is presented.
Vaccine approaches to malaria control and elimination: Insights from mathematical models.
White, Michael T; Verity, Robert; Churcher, Thomas S; Ghani, Azra C
2015-12-22
A licensed malaria vaccine would provide a valuable new tool for malaria control and elimination efforts. Several candidate vaccines targeting different stages of the malaria parasite's lifecycle are currently under development, with one candidate, RTS,S/AS01 for the prevention of Plasmodium falciparum infection, having recently completed Phase III trials. Predicting the public health impact of a candidate malaria vaccine requires using clinical trial data to estimate the vaccine's efficacy profile--the initial efficacy following vaccination and the pattern of waning of efficacy over time. With an estimated vaccine efficacy profile, the effects of vaccination on malaria transmission can be simulated with the aid of mathematical models. Here, we provide an overview of methods for estimating the vaccine efficacy profiles of pre-erythrocytic vaccines and transmission-blocking vaccines from clinical trial data. In the case of RTS,S/AS01, model estimates from Phase II clinical trial data indicate a bi-phasic exponential profile of efficacy against infection, with efficacy waning rapidly in the first 6 months after vaccination followed by a slower rate of waning over the next 4 years. Transmission-blocking vaccines have yet to be tested in large-scale Phase II or Phase III clinical trials so we review ongoing work investigating how a clinical trial might be designed to ensure that vaccine efficacy can be estimated with sufficient statistical power. Finally, we demonstrate how parameters estimated from clinical trials can be used to predict the impact of vaccination campaigns on malaria using a mathematical model of malaria transmission. Copyright © 2015 Elsevier Ltd. All rights reserved.
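The bi-phasic exponential efficacy profile described above can be sketched as a short function; the initial efficacy, the split between fast- and slow-waning components, and the half-lives below are illustrative placeholders, not the RTS,S/AS01 estimates.

```python
import numpy as np

def efficacy(t_years, v0=0.6, rho=0.7, half_life_short=0.5, half_life_long=4.0):
    """Bi-phasic exponential efficacy profile: a fraction rho of the initial
    efficacy v0 wanes with a short half-life, the remainder with a long one.
    All parameter values are illustrative placeholders."""
    r1 = np.log(2.0) / half_life_short
    r2 = np.log(2.0) / half_life_long
    return v0 * (rho * np.exp(-r1 * t_years) + (1.0 - rho) * np.exp(-r2 * t_years))

for t in [0.0, 0.5, 1.0, 2.0, 4.0]:
    print(f"efficacy at {t:.1f} y: {efficacy(t):.2f}")
```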
NASA Astrophysics Data System (ADS)
Zhang, Kui; Liu, Qun; Kalhoro, Muhsan Ali
2015-06-01
Delay-difference models are intermediate between simple surplus-production models and complicated age-structured models. Such intermediate models are more efficient and require less data than age-structured models. In this study, a delay-difference model was applied to fit catch and catch per unit effort (CPUE) data (1975-2011) of the southern Atlantic albacore (Thunnus alalunga) stock. The proposed delay-difference model captures annual fluctuations in predicted CPUE data better than the Fox model. In a Monte Carlo simulation, white noises (CVs) were superimposed on the observed CPUE data at four levels. Relative estimate error was then calculated to compare the estimated results with the true values of parameters α and β in the Ricker stock-recruitment model and the catchability coefficient q. α is more sensitive to CV than β and q. We also calculated an 80% percentile confidence interval of the maximum sustainable yield (MSY, 21756 t to 23408 t; median 22490 t) with the delay-difference model. The yield of the southern Atlantic albacore stock in 2011 was 24122 t, and the estimated ratios of catch against MSY for the past seven years were approximately 1.0. We suggest that care should be taken to protect the albacore fishery in the southern Atlantic Ocean. The proposed delay-difference model provides a good fit to the data of the southern Atlantic albacore stock and may be a useful choice for the assessment of the regional albacore stock.
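A simplified stand-in for the population dynamics, using the Ricker stock-recruitment function and a CPUE observation model with lognormal noise at a chosen CV (mirroring the Monte Carlo test above), is sketched below. This is not the full delay-difference formulation, and every parameter value is invented.

```python
import numpy as np

# Simplified stand-in (not the full delay-difference formulation): Ricker
# recruitment R = alpha*S*exp(-beta*S) in biomass units, a basic biomass update,
# and a CPUE observation model CPUE = q*B with lognormal noise at a chosen CV,
# mirroring the Monte Carlo noise test described above. All values are invented.
rng = np.random.default_rng(11)
alpha, beta, q = 1.2, 1e-5, 2e-4          # Ricker parameters and catchability
survival, cv = 0.8, 0.2                    # annual survival fraction, CPUE noise level
n_years = 37

B = np.empty(n_years)                      # biomass (t)
B[0] = 60000.0
catch = rng.uniform(15000.0, 25000.0, n_years)
for t in range(n_years - 1):
    spawners = max(B[t] - catch[t], 1.0)
    recruits = alpha * spawners * np.exp(-beta * spawners)
    B[t + 1] = max(survival * spawners + recruits, 1.0)

cpue_true = q * B
cpue_obs = cpue_true * rng.lognormal(mean=0.0, sigma=cv, size=n_years)
rel_err = (cpue_obs - cpue_true) / cpue_true
print("mean |relative error| at CV=0.2:", round(float(np.mean(np.abs(rel_err))), 3))
```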
Estimation of Channel-Forming Discharge and Large-Event Geomorphic Response Using HEC-RAS
NASA Astrophysics Data System (ADS)
Hamilton, P.; Strom, K.; Hosseiny, S. M. H.
2015-12-01
The goal of the present work was to consider the functionality and applicability of HEC-RAS sediment transport simulations in two situations. The first was as a mode for obtaining quick estimates of the effective discharge, one measure of channel-forming discharge, and the second was as a mode to quickly estimate sediment transport and the commensurate potential erosion and deposition during large flood events. Though there are many other sediment transport and morphodynamic models available, e.g., CCHE1D, Nays2DH, we were interested in using HEC-RAS since this is the model of choice for many regulatory bodies, e.g., FEMA, cities, and counties. This makes using the sediment transport capability of HEC-RAS a natural extension of models that already otherwise exist and are well calibrated. In first looking at the utility of these models, we wanted to estimate the effective discharge of streams. Effective discharge is one way of defining the channel-forming discharge for a stream and is therefore an important parameter in natural channel design and restoration efforts. By running this range of floods, one can easily obtain an estimate for recurrence interval most responsible for moving the majority of sediment over a long time period. Results were compared to data collected within our research group on the Brazos River (TX). Effective discharge is an important estimate, particularly in understanding the equilibrium channel condition. Nevertheless, large floods are contemporaneously catastrophic and understanding their potential effects is desirable. Finally, we performed some sensitivity analysis to better understand the underlying assumptions of the various sediment transport model options and how they might affect the outcome of the aforementioned computations.
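The effective-discharge calculation itself can be sketched compactly: bin the flow record, weight each bin's frequency by a sediment-transport rating, and take the bin that moves the most sediment over the record. The flow record and rating coefficients below are assumed, not taken from the Brazos River analysis.

```python
import numpy as np

# Hedged sketch of an effective-discharge calculation: bin the daily-flow record,
# multiply each bin's frequency by a sediment-transport rating Qs = a*Q^b, and
# take the bin that moves the most sediment. Flow record and rating are invented.
rng = np.random.default_rng(2)
daily_q = rng.lognormal(mean=4.0, sigma=0.8, size=365 * 30)   # 30 years of daily flow (m^3/s)

a, b = 1e-4, 2.1                                               # assumed transport rating
bins = np.linspace(daily_q.min(), daily_q.max(), 50)
freq, edges = np.histogram(daily_q, bins=bins)
q_mid = 0.5 * (edges[:-1] + edges[1:])

sediment_per_bin = freq * a * q_mid**b                         # relative sediment load per bin
effective_q = q_mid[np.argmax(sediment_per_bin)]
print(f"effective discharge ~ {effective_q:.0f} m^3/s")
```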
NASA Astrophysics Data System (ADS)
Kim, R. S.; Durand, M. T.; Li, D.; Baldo, E.; Margulis, S. A.; Dumont, M.; Morin, S.
2017-12-01
This paper presents a newly proposed snow depth retrieval approach for mountainous deep snow using airborne multifrequency passive microwave (PM) radiance observations. In contrast to previous snow depth estimation using satellite PM radiance assimilation, the newly proposed method utilized single-flight observations and deployed snow hydrologic models. This method is promising since satellite-based retrieval methods have difficulty estimating snow depth due to their coarse resolution and computational effort. The approach consists of a particle filter using combinations of multiple PM frequencies and a multi-layer snow physical model (i.e., Crocus) to resolve melt-refreeze crusts. The method was applied over the NASA Cold Land Processes Experiment (CLPX) area in Colorado during 2002 and 2003. Results showed a significant improvement over the prior snow depth estimates and the capability to reduce the prior snow depth biases. When applying our snow depth retrieval algorithm using a combination of four PM frequencies (10.7, 18.7, 37.0 and 89.0 GHz), the RMSE values were reduced by 48% at the snow depth transect sites where forest density was less than 5%, despite deep snow conditions. This method displayed sensitivity to different combinations of frequencies, model stratigraphy (i.e., different numbers of layers in the snow physical model), and estimation methods (particle filter and Kalman filter). The prior RMSE values at the forest-covered areas were reduced by 37-42% even in the presence of forest cover.
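A minimal particle-filter update of the kind described above weights prior snow-depth particles by the likelihood of the observed brightness temperatures and then resamples. In the sketch below, a linear brightness-temperature operator stands in for a real radiative transfer model, and all numerical values are assumed.

```python
import numpy as np

# Minimal particle-filter update for snow depth: weight prior particles by the
# likelihood of observed brightness temperatures (Tb), then resample. The linear
# Tb(depth) operator below is a placeholder for a real radiative-transfer model.
rng = np.random.default_rng(4)
n_particles = 500

prior_depth = rng.normal(loc=1.5, scale=0.6, size=n_particles)   # prior ensemble (m)
prior_depth = np.clip(prior_depth, 0.05, None)

def tb_operator(depth_m, freq_slope):
    """Placeholder observation operator: Tb decreases linearly with snow depth."""
    return 270.0 - freq_slope * depth_m

freq_slopes = np.array([5.0, 10.0, 20.0, 25.0])   # one stand-in slope per PM channel
tb_obs = tb_operator(2.1, freq_slopes) + rng.normal(0, 2.0, size=4)
obs_sigma = 2.0

# Gaussian likelihood combined over channels, then normalized particle weights.
tb_pred = tb_operator(prior_depth[:, None], freq_slopes[None, :])
log_w = -0.5 * np.sum(((tb_obs - tb_pred) / obs_sigma) ** 2, axis=1)
w = np.exp(log_w - log_w.max())
w /= w.sum()

posterior_idx = rng.choice(n_particles, size=n_particles, p=w)    # resample
posterior_depth = prior_depth[posterior_idx]
print(f"prior mean {prior_depth.mean():.2f} m -> posterior mean {posterior_depth.mean():.2f} m")
```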
A Novel Uncertainty Framework for Improving Discharge Data Quality Using Hydraulic Modelling.
NASA Astrophysics Data System (ADS)
Mansanarez, V.; Westerberg, I.; Lyon, S. W.; Lam, N.
2017-12-01
Flood risk assessments rely on accurate discharge data records. Establishing a reliable stage-discharge (SD) rating curve for calculating discharge from stage at a gauging station normally takes years of data collection efforts. Estimation of high flows is particularly difficult as high flows occur rarely and are often practically difficult to gauge. Hydraulically-modelled rating curves can be derived based on as few as two concurrent stage-discharge and water-surface slope measurements at different flow conditions. This means that a reliable rating curve can, potentially, be derived much faster than a traditional rating curve based on numerous stage-discharge gaugings. We introduce an uncertainty framework using hydraulic modelling for developing SD rating curves and estimating their uncertainties. The proposed framework incorporates information from both the hydraulic configuration (bed slope, roughness, vegetation) and the information available in the stage-discharge observation data (gaugings). This method provides a direct estimation of the hydraulic configuration (slope, bed roughness and vegetation roughness). Discharge time series are estimated by propagating stage records through posterior rating curve results. We applied this novel method to two Swedish hydrometric stations, accounting for uncertainties in the gaugings for the hydraulic model. Results from these applications were compared to discharge measurements and official discharge estimations. A sensitivity analysis was performed. We focused analyses on high-flow uncertainty and the factors that could reduce this uncertainty. In particular, we investigated which data uncertainties were most important, and at what flow conditions the gaugings should preferably be taken.
Airborne measurements of isoprene and monoterpene emissions from southeastern U.S. forests.
Yu, Haofei; Guenther, Alex; Gu, Dasa; Warneke, Carsten; Geron, Chris; Goldstein, Allen; Graus, Martin; Karl, Thomas; Kaser, Lisa; Misztal, Pawel; Yuan, Bin
2017-10-01
Isoprene and monoterpene emission rates are essential inputs for atmospheric chemistry models that simulate atmospheric oxidant and particle distributions. Process studies of the biochemical and physiological mechanisms controlling these emissions are advancing our understanding and the accuracy of model predictions but efforts to quantify regional emissions have been limited by a lack of constraints on regional distributions of ecosystem emission capacities. We used an airborne wavelet-based eddy covariance measurement technique to characterize isoprene and monoterpene fluxes with high spatial resolution during the 2013 SAS (Southeast Atmosphere Study) in the southeastern United States. The fluxes measured by direct eddy covariance were comparable to emissions independently estimated using an indirect inverse modeling approach. Isoprene emission factors based on the aircraft wavelet flux estimates for high isoprene chemotypes (e.g., oaks) were similar to the MEGAN2.1 biogenic emission model estimates for landscapes dominated by oaks. Aircraft flux measurement estimates for landscapes with fewer isoprene emitting trees (e.g., pine plantations) were about a factor of two lower than MEGAN2.1 model estimates. The tendency for high isoprene emitters in these landscapes to occur in the shaded understory, where light dependent isoprene emissions are diminished, may explain the lower than expected emissions. This result demonstrates the importance of accurately representing the vertical profile of isoprene emitting biomass in biogenic emission models. Airborne measurement-based emission factors for high monoterpene chemotypes agreed with MEGAN2.1 in landscapes dominated by pine (high monoterpene chemotype) trees but were more than a factor of three higher than model estimates for landscapes dominated by oak (relatively low monoterpene emitting) trees. This result suggests that unaccounted processes, such as floral emissions or light dependent monoterpene emissions, or vegetation other than high monoterpene emitting trees may be an important source of monoterpene emissions in those landscapes and should be identified and included in biogenic emission models. Copyright © 2017 Elsevier B.V. All rights reserved.
Evaluating immunocontraception for managing suburban white-tailed deer in Irondequoit, New York
Rudolph, B.A.; Porter, W.F.; Underwood, H.B.
2000-01-01
Immunocontraception is frequently proposed as an alternative to lethal removal of females for deer management. However, little information is available for evaluating the potential of applying immunocontraceptives to free-ranging populations. Our objectives were to estimate effort required to apply porcine zona pellucida (PZP) to individual deer and assess the utility of using immunocontraception to control growth of deer populations. The study was conducted in a 43-km2 suburban community with about 400 deer. Effort per deer was measured as time required to capture and mark deer, and then to apply booster immunocontraceptive treatments by remote injection. Estimates of numbers of females to treat to control population growth were based on the generalized sustained-yield (SY) model adapted for contraception of females. The SY curve was calibrated using data on deer abundance acquired from aerial population surveys and nutritional condition of females removed by a concurrent culling program. Effort was influenced by 4 factors: deer population density, approachability of individual deer, access to private and public land, and efficacy of the contraceptive treatment. Effort and deer density were inversely related. Cumulative effort for treatment increased exponentially because some deer were more difficult to approach than others. Potential of using immunocontraception at low deer population densities (<25% ecological carrying capacity) is limited by the interaction of the proportion of breeding-age females in the population and treatment efficacy, as well as encounter rates. Immunocontraception has the best potential for holding suburban deer populations between 30 and 70% of ecological carrying capacity, but is likely to be useful only in localized populations when the number of females to be treated is small (e.g., <200 deer).
Hopf, Jess K; Jones, Geoffrey P; Williamson, David H; Connolly, Sean R
2016-06-20
Marine no-take reserves, where fishing and other extractive activities are prohibited, have well-established conservation benefits [1], yet their impacts on fisheries remains contentious [2-4]. For fishery species, reserves are often implemented alongside more conventional harvest strategies, including catch and size limits [2, 5]. However, catch and fish abundances observed post-intervention are often attributed to reserves, without explicitly estimating the potential contribution of concurrent management interventions [2, 3, 6-9]. Here we test a metapopulation model against observed fishery [10] and population [11] data for an important coral reef fishery (coral trout; Plectropomus spp.) in Australia's Great Barrier Reef Marine Park (GBRMP) to evaluate how the combined increase in reserve area [12] and reduction in fishing effort [13, 14] in 2004 influenced changes in fish stocks and the commercial fishery. We found that declines in catch, increases in catch rates, and increases in biomass since 2004 were substantially attributable to the integration of direct effort controls with the rezoning, rather than the rezoning alone. The combined management approach was estimated to have been more productive for fish and fisheries than if the rezoning had occurred alone and comparable to what would have been obtained with effort controls alone. Sensitivity analyses indicate that the direct effort controls prevented initial decreases in catch per unit effort that would have otherwise occurred with the rezoning. Our findings demonstrate that by concurrently restructuring the fishery, the conservation benefits of reserves were enhanced and the fishery cost of rezoning the reserve network was socialized, mitigating negative impacts on individual fishers. Copyright © 2016 Elsevier Ltd. All rights reserved.
Dopamine Manipulation Affects Response Vigor Independently of Opportunity Cost.
Zénon, Alexandre; Devesse, Sophie; Olivier, Etienne
2016-09-14
Dopamine is known to be involved in regulating effort investment in relation to reward, and the disruption of this mechanism is thought to be central in some pathological situations such as Parkinson's disease, addiction, and depression. According to an influential model, dopamine plays this role by encoding the opportunity cost, i.e., the average value of forfeited actions, which is an important parameter to take into account when making decisions about which action to undertake and how fast to execute it. We tested this hypothesis by asking healthy human participants to perform two effort-based decision-making tasks, following either placebo or levodopa intake in a double blind within-subject protocol. In the effort-constrained task, there was a trade-off between the amount of force exerted and the time spent in executing the task, such that investing more effort decreased the opportunity cost. In the time-constrained task, the effort duration was constant, but exerting more force allowed the subject to earn more substantial reward instead of saving time. Contrary to the model predictions, we found that levodopa caused an increase in the force exerted only in the time-constrained task, in which there was no trade-off between effort and opportunity cost. In addition, a computational model showed that dopamine manipulation left the opportunity cost factor unaffected but altered the ratio between the effort cost and reinforcement value. These findings suggest that dopamine does not represent the opportunity cost but rather modulates how much effort a given reward is worth. Dopamine has been proposed in a prevalent theory to signal the average reward rate, used to estimate the cost of investing time in an action, also referred to as opportunity cost. We contrasted the effect of dopamine manipulation in healthy participants in two tasks, in which increasing response vigor (i.e., the amount of effort invested in an action) allowed either to save time or to earn more reward. We found that levodopa-a synthetic precursor of dopamine-increases response vigor only in the latter situation, demonstrating that, rather than the opportunity cost, dopamine is involved in computing the expected value of effort. Copyright © 2016 the authors 0270-6474/16/369516-10$15.00/0.
Aircraft Turbofan Engine Health Estimation Using Constrained Kalman Filtering
NASA Technical Reports Server (NTRS)
Simon, Dan; Simon, Donald L.
2003-01-01
Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints (which may be based on physical considerations) are often neglected because they do not fit easily into the structure of the Kalman filter. This paper develops an analytic method of incorporating state variable inequality constraints in the Kalman filter. The resultant filter is a combination of a standard Kalman filter and a quadratic programming problem. The incorporation of state variable constraints increases the computational effort of the filter but significantly improves its estimation accuracy. The improvement is proven theoretically and shown via simulation results obtained from application to a turbofan engine model. This model contains 16 state variables, 12 measurements, and 8 component health parameters. It is shown that the new algorithms provide improved performance in this example over unconstrained Kalman filtering.
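The constrained estimation step can be illustrated with the standard projection of an unconstrained Kalman estimate onto linear constraints; the paper's formulation uses quadratic programming for inequality constraints, which reduces to this projection when a constraint is active. The two-state system and constraint below are invented for illustration.

```python
import numpy as np

# Hedged sketch: one Kalman update followed by projection of the estimate onto
# linear constraints D x = d, minimizing (x_c - x)' P^{-1} (x_c - x). The paper
# handles inequality constraints via quadratic programming; here an active
# inequality is treated as an equality. The 2-state system is invented.
A = np.array([[1.0, 0.1], [0.0, 1.0]])          # state transition
H = np.array([[1.0, 0.0]])                       # measure the first state only
Q = 1e-4 * np.eye(2)
R = np.array([[0.01]])

x = np.array([1.0, 0.0])                         # state estimate
P = 0.1 * np.eye(2)

# Prediction and measurement update (standard Kalman filter).
z = np.array([1.3])                              # a measurement
x = A @ x
P = A @ P @ A.T + Q
S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.inv(S)
x = x + K @ (z - H @ x)
P = (np.eye(2) - K @ H) @ P

# Constraint: suppose physics requires x[0] + x[1] = 1.2 (e.g., a health-parameter
# bound that is currently active). Project the estimate onto the constraint surface.
D = np.array([[1.0, 1.0]])
d = np.array([1.2])
gain = P @ D.T @ np.linalg.inv(D @ P @ D.T)
x_constrained = x - gain @ (D @ x - d)
print("unconstrained:", np.round(x, 4), " constrained:", np.round(x_constrained, 4))
```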
NASA Astrophysics Data System (ADS)
Palevsky, Hilary I.; Doney, Scott C.
2018-05-01
Estimated rates and efficiency of ocean carbon export flux are sensitive to differences in the depth horizons used to define export, which often vary across methodological approaches. We evaluate sinking particulate organic carbon (POC) flux rates and efficiency (e-ratios) in a global earth system model, using a range of commonly used depth horizons: the seasonal mixed layer depth, the particle compensation depth, the base of the euphotic zone, a fixed depth horizon of 100 m, and the maximum annual mixed layer depth. Within this single dynamically consistent model framework, global POC flux rates vary by 30% and global e-ratios by 21% across different depth horizon choices. Zonal variability in POC flux and e-ratio also depends on the export depth horizon due to pronounced influence of deep winter mixing in subpolar regions. Efforts to reconcile conflicting estimates of export need to account for these systematic discrepancies created by differing depth horizon choices.
Fighting With Siblings and With Peers Among Urban High School Students.
Johnson, Renee M; Duncan, Dustin T; Rothman, Emily F; Gilreath, Tamika D; Hemenway, David; Molnar, Beth E; Azrael, Deborah
2015-08-01
Understanding the determinants of fighting is important for prevention efforts. Unfortunately, there is little research on how sibling fighting is related to peer fighting. Therefore, the aim of this study was to evaluate the association between sibling fighting and peer fighting. Data are from the Boston Youth Survey 2008, a school-based sample of youth in Boston, MA. To estimate the association between sibling fighting and peer fighting, we ran four multivariate regression models and estimated adjusted prevalence ratios and 95% confidence intervals. We fit generalized estimating equation models to account for the fact that students were clustered within schools. Controlling for school clustering, race/ethnicity, sex, school failure, substance use, and caregiver aggression, youth who fought with siblings were 2.49 times more likely to have reported fighting with peers. To the extent that we can confirm that sibling violence is associated with aggressive behavior, we should incorporate it into violence prevention programming. © The Author(s) 2014.
Carbon Storage in Urban Areas in the USA
NASA Astrophysics Data System (ADS)
Churkina, G.; Brown, D.; Keoleian, G.
2007-12-01
It is widely accepted that human settlements occupy a small proportion of the landmass and therefore play a relatively small role in the dynamics of the global carbon cycle. Most modeling studies focusing on the land carbon cycle use models of varying complexity to estimate carbon fluxes through forests, grasses, and croplands, but completely omit urban areas from their scope. Here, we estimate carbon storage in urban areas within the United States, defined to encompass a range of observed settlement densities, and its changes from 1950 to 2000. We show that this storage is not negligible and has been continuously increasing. We include natural- and human-related components of urban areas in our estimates. The natural component includes carbon storage in urban soil and vegetation. The human related component encompasses carbon stored long term in buildings, furniture, cars, and waste. The study suggests that urban areas should receive continued attention in efforts to accurately account for carbon uptake and storage in terrestrial systems.
Aaron Weiskittel; Jereme Frank; James Westfall; David Walker; Phil Radtke; David Affleck; David Macfarlane
2015-01-01
Tree biomass models are widely used but differ due to variation in the quality and quantity of data used in their development. We reviewed over 250 biomass studies and categorized them by species, location, sampled diameter distribution, and sample size. Overall, less than half of the tree species in Forest Inventory and Analysis database (FIADB) are without a...
Redox flow cell development and demonstration project, calendar year 1976
NASA Technical Reports Server (NTRS)
1977-01-01
The major focus of the effort was the key technology issues that directly influence the fundamental feasibility of the overall redox concept. These issues were the development of a suitable semipermeable separator membrane for the system, the screening and study of candidate redox couples to achieve optimum cell performance, and the carrying out of systems analysis and modeling to develop system performance goals and cost estimates.
Sue Miller; Bill Elliot; Pete Robichaud; Randy Foltz; Dennis Flanagan; Erin Brooks
2014-01-01
Forest erosion can lead to topsoil loss, and also to damaging deposits of sediment in aquatic ecosystems. For this reason, forest managers must be able to estimate the erosion potential of both planned management activities and catastrophic events, in order to decide where to use limited funds to focus erosion control efforts. To meet this need, scientists from RMRS (...
Curtis A. Collins; David L. Evans; Keith L. Belli; Patrick A. Glass
2010-01-01
Hurricane Katrina's passage through south Mississippi on August 29, 2005, which damaged or destroyed thousands of hectares of forest land, was followed by massive salvage, cleanup, and assessment efforts. An initial assessment by the Mississippi Forestry Commission estimated that over $1 billion in raw wood material was downed by the storm, with county-level damage...
Estimation of mortality for stage-structured zooplankton populations: What is to be done?
NASA Astrophysics Data System (ADS)
Ohman, Mark D.
2012-05-01
Estimation of zooplankton mortality rates in field populations is a challenging task that some contend is inherently intractable. This paper examines several of the objections that are commonly raised to efforts to estimate mortality. We find that there are circumstances in the field where it is possible to sequentially sample the same population and to resolve biologically caused mortality, albeit with error. Precision can be improved with sampling directed by knowledge of the physical structure of the water column, combined with adequate sample replication. Intercalibration of sampling methods can make it possible to sample across the life history in a quantitative manner. Rates of development can be constrained by laboratory-based estimates of stage durations from temperature- and food-dependent functions, mesocosm studies of molting rates, or approximation of development rates from growth rates, combined with the vertical distributions of organisms in relation to food and temperature gradients. Careful design of field studies guided by the assumptions of specific estimation models can lead to satisfactory mortality estimates, but model uncertainty also needs to be quantified. We highlight additional issues requiring attention to further advance the field, including the need for linked cooperative studies of the rates and causes of mortality of co-occurring holozooplankton and ichthyoplankton.
Consumer phase risk assessment for Listeria monocytogenes in deli meats.
Yang, Hong; Mokhtari, Amirhossein; Jaykus, Lee-Ann; Morales, Roberta A; Cates, Sheryl C; Cowen, Peter
2006-02-01
The foodborne disease risk associated with the pathogen Listeria monocytogenes has been the subject of recent efforts in quantitative microbial risk assessment. Building upon one of these efforts undertaken jointly by the U.S. Food and Drug Administration and the U.S. Department of Agriculture (USDA), the purpose of this work was to expand on the consumer phase of the risk assessment to focus on handling practices in the home. One-dimensional Monte Carlo simulation was used to model variability in growth and cross-contamination of L. monocytogenes during food storage and preparation of deli meats. Simulations approximated that 0.3% of the servings were contaminated with >10^4 CFU/g of L. monocytogenes at the time of consumption. The estimated mean risk associated with the consumption of deli meats for the intermediate-age population was approximately 7 deaths per 10^11 servings. Food handling in homes increased the estimated mean mortality by 10^6-fold. Of all the home food-handling practices modeled, inadequate storage, particularly refrigeration temperatures, provided the greatest contribution to increased risk. The impact of cross-contamination in the home was considerably less. Adherence to USDA Food Safety and Inspection Service recommendations for consumer handling of ready-to-eat foods substantially reduces the risk of listeriosis.
A mathematical model of microalgae growth in cylindrical photobioreactor
NASA Astrophysics Data System (ADS)
Bakeri, Noorhadila Mohd; Jamaian, Siti Suhana
2017-08-01
Microalgae are unicellular organisms that exist individually or in chains or groups and can be used in many applications. Researchers have made various efforts to increase the growth rate of microalgae. Microalgae have potential as an effective tool for wastewater treatment, as well as a replacement for fossil fuels such as coal and a feedstock for biodiesel. The growth of microalgae can be estimated using the Geider model, which is based on the photosynthesis-irradiance curve (PI-curve) and was formulated for a flat-panel photobioreactor. Therefore, in this study a mathematical model for microalgae growth in a cylindrical photobioreactor is proposed based on the Geider model. Light irradiance is the crucial factor that affects the growth rate of microalgae. The absorbed photon flux is determined by calculating the average light irradiance in a cylindrical system illuminated by a unidirectional parallel flux, treating the cylinder as a collection of differential parallelepipeds. Results from this study showed that the specific growth rate of microalgae increases until a constant level is reached. Therefore, the proposed mathematical model can be used to estimate the rate of microalgae growth in a cylindrical photobioreactor.
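A numerical sketch of the light-averaging step: discretize the cylinder cross-section into thin strips parallel to the light direction (the "differential parallelepipeds"), attenuate the incident flux along each strip with Beer-Lambert, and volume-average the local irradiance; the average irradiance then feeds a Geider-type saturating growth expression. The geometry, attenuation, and growth parameters below are illustrative assumptions, not the values used in the study.

```python
# Sketch: volume-averaged irradiance in a cylinder lit by a unidirectional parallel flux,
# followed by a Geider-type saturating growth rate. All parameter values are illustrative.
import numpy as np

def average_irradiance(i0, radius, k_ext, biomass, n=2000):
    """Average irradiance over the cylinder cross-section.

    The cross-section is split into thin strips parallel to the light direction;
    along each strip the light decays as Beer-Lambert with coefficient k_ext * biomass.
    """
    x = np.linspace(-radius, radius, n)            # lateral offset of each strip
    path = 2.0 * np.sqrt(radius**2 - x**2)         # chord length through the cylinder
    k = k_ext * biomass
    # depth-averaged irradiance along a strip of length L: I0 * (1 - exp(-k L)) / (k L)
    with np.errstate(divide="ignore", invalid="ignore"):
        strip_mean = np.where(path > 0, i0 * (1 - np.exp(-k * path)) / (k * path), i0)
    # weight each strip by its chord length, i.e. a volume average
    return np.sum(strip_mean * path) / np.sum(path)

def geider_growth(i_av, mu_max=1.2, alpha=0.01):
    """Geider-type saturating growth rate (d^-1) as a function of average irradiance."""
    return mu_max * (1.0 - np.exp(-alpha * i_av / mu_max))

i_av = average_irradiance(i0=500.0, radius=0.05, k_ext=0.2, biomass=150.0)
print(f"Average irradiance: {i_av:.1f}, specific growth rate: {geider_growth(i_av):.3f} d^-1")
```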
Comparison of different models for non-invasive FFR estimation
NASA Astrophysics Data System (ADS)
Mirramezani, Mehran; Shadden, Shawn
2017-11-01
Coronary artery disease is a leading cause of death worldwide. Fractional flow reserve (FFR), derived from invasively measuring the pressure drop across a stenosis, is considered the gold standard to diagnose disease severity and need for treatment. Non-invasive estimation of FFR has gained recent attention for its potential to reduce patient risk and procedural cost versus invasive FFR measurement. Non-invasive FFR can be obtained by using image-based computational fluid dynamics to simulate blood flow and pressure in a patient-specific coronary model. However, 3D simulations require extensive effort for model construction and numerical computation, which limits their routine use. In this study we compare (ordered by increasing computational cost/complexity): reduced-order algebraic models of pressure drop across a stenosis; 1D, 2D (multiring) and 3D CFD models; as well as 3D FSI for the computation of FFR in idealized and patient-specific stenosis geometries. We demonstrate the ability of an appropriate reduced order algebraic model to closely predict FFR when compared to FFR from a full 3D simulation. This work was supported by the NIH, Grant No. R01-HL103419.
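The reduced-order algebraic end of that model spectrum can be illustrated with a Young-Tsai-type stenosis pressure-drop expression, combining a viscous term and an expansion (turbulent) loss term, from which a non-invasive FFR estimate follows as the ratio of distal to aortic pressure under hyperemic flow. The coefficient values, geometry, and flow below are illustrative assumptions, not the models compared in the study.

```python
# Sketch: algebraic (Young-Tsai-type) pressure drop across a stenosis and the implied FFR.
# Coefficients, geometry, and flow values are illustrative assumptions.
import numpy as np

def stenosis_dp(q, d0, ds, length, mu=3.5e-3, rho=1060.0, kv_coef=32.0, kt=1.52):
    """Pressure drop (Pa) across a stenosis for volumetric flow q (m^3/s).

    d0, ds : normal and stenotic diameters (m); length : stenosis length (m).
    Viscous term scales with q; expansion-loss term scales with q^2.
    """
    a0, a_s = np.pi * d0**2 / 4.0, np.pi * ds**2 / 4.0
    u0 = q / a0                                   # mean velocity in the normal segment
    la = 0.83 * length + 1.64 * ds                # effective viscous length (assumed form)
    kv = kv_coef * (la / d0) * (a0 / a_s) ** 2
    dp_viscous = kv * mu * u0 / d0
    dp_expansion = kt * 0.5 * rho * (a0 / a_s - 1.0) ** 2 * u0 * abs(u0)
    return dp_viscous + dp_expansion

p_aortic = 93.0 * 133.32                          # mean aortic pressure, mmHg -> Pa
q_hyper = 3.0e-6                                  # assumed hyperemic flow, 3 mL/s
dp = stenosis_dp(q_hyper, d0=3.0e-3, ds=1.8e-3, length=10.0e-3)
ffr = (p_aortic - dp) / p_aortic
print(f"Pressure drop: {dp/133.32:.1f} mmHg, estimated FFR: {ffr:.2f}")
```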
Empirical estimation of present-day Antarctic glacial isostatic adjustment and ice mass change
NASA Astrophysics Data System (ADS)
Gunter, B. C.; Didova, O.; Riva, R. E. M.; Ligtenberg, S. R. M.; Lenaerts, J. T. M.; King, M. A.; van den Broeke, M. R.; Urban, T.
2014-04-01
This study explores an approach that simultaneously estimates Antarctic mass balance and glacial isostatic adjustment (GIA) through the combination of satellite gravity and altimetry data sets. The results improve upon previous efforts by incorporating a firn densification model to account for firn compaction and surface processes as well as reprocessed data sets over a slightly longer period of time. A range of different Gravity Recovery and Climate Experiment (GRACE) gravity models were evaluated and a new Ice, Cloud, and Land Elevation Satellite (ICESat) surface height trend map computed using an overlapping footprint approach. When the GIA models created from the combination approach were compared to in situ GPS ground station displacements, the vertical rates estimated showed consistently better agreement than recent conventional GIA models. The new empirically derived GIA rates suggest the presence of strong uplift in the Amundsen Sea sector in West Antarctica (WA) and the Philippi/Denman sectors, as well as subsidence in large parts of East Antarctica (EA). The total GIA-related mass change estimates for the entire Antarctic ice sheet ranged from 53 to 103 Gt yr-1, depending on the GRACE solution used, with an estimated uncertainty of ±40 Gt yr-1. Over the time frame February 2003-October 2009, the corresponding ice mass change showed an average value of -100 ± 44 Gt yr-1 (EA: 5 ± 38, WA: -105 ± 22), consistent with other recent estimates in the literature, with regional mass loss mostly concentrated in WA. The refined approach presented in this study shows the contribution that such data combinations can make towards improving estimates of present-day GIA and ice mass change, particularly with respect to determining more reliable uncertainties.
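Conceptually, the combination works because altimetry senses surface-height change while gravimetry senses mass change, and ice and bedrock contribute with different effective densities; given a firn-corrected height trend, a small linear system can be solved per region. The densities, area, and trend values below are toy assumptions meant only to show the algebraic separation, not the processing chain of this study.

```python
# Sketch: per-region separation of ice-mass change and GIA from altimetry + gravimetry.
# Altimetry gives a (firn-corrected) surface-height trend; GRACE gives a mass trend.
# Densities, area, and trend values are illustrative assumptions.
import numpy as np

RHO_ICE, RHO_ROCK = 917.0, 4000.0          # kg/m^3; effective GIA density is an assumption

def separate(dh_total, dm_total, area):
    """Solve dh_ice + dh_gia = dh_total and
             rho_ice*A*dh_ice + rho_rock*A*dh_gia = dm_total."""
    a = np.array([[1.0, 1.0],
                  [RHO_ICE * area, RHO_ROCK * area]])
    b = np.array([dh_total, dm_total])
    dh_ice, dh_gia = np.linalg.solve(a, b)
    return dh_ice, dh_gia

area = 2.0e12                               # region area, m^2 (assumed)
dh_total = -0.05                            # m/yr from altimetry after firn correction (assumed)
dm_total = -60.0e12                         # kg/yr from GRACE (assumed, about -60 Gt/yr)
dh_ice, dh_gia = separate(dh_total, dm_total, area)
print(f"ice-height trend: {dh_ice*100:.1f} cm/yr, GIA uplift: {dh_gia*1000:.1f} mm/yr")
print(f"ice-mass change: {RHO_ICE*area*dh_ice/1e12:.1f} Gt/yr, "
      f"GIA mass term: {RHO_ROCK*area*dh_gia/1e12:.1f} Gt/yr")
```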
Robinson, Hugh S.; Abarca, Maria; Zeller, Katherine A.; Velasquez, Grisel; Paemelaere, Evi A. D.; Goldberg, Joshua F.; Payan, Esteban; Hoogesteijn, Rafael; Boede, Ernesto O.; Schmidt, Krzysztof; Lampo, Margarita; Viloria, Ángel L.; Carreño, Rafael; Robinson, Nathaniel; Lukacs, Paul M.; Nowak, J. Joshua; Salom-Pérez, Roberto; Castañeda, Franklin; Boron, Valeria; Quigley, Howard
2018-01-01
Broad-scale population estimates of declining species are desired for conservation efforts. However, for many secretive species including large carnivores, such estimates are often difficult. Based on published density estimates obtained through camera trapping, presence/absence data, and globally available predictive variables derived from satellite imagery, we modelled density and occurrence of a large carnivore, the jaguar, across the species’ entire range. We then combined these models in a hierarchical framework to estimate the total population. Our models indicate that potential jaguar density is best predicted by measures of primary productivity, with the highest densities in the most productive tropical habitats and a clear declining gradient with distance from the equator. Jaguar distribution, in contrast, is determined by the combined effects of human impacts and environmental factors: probability of jaguar occurrence increased with forest cover, mean temperature, and annual precipitation and declined with increases in human footprint index and human density. Probability of occurrence was also significantly higher for protected areas than outside of them. We estimated the world’s jaguar population at 173,000 (95% CI: 138,000–208,000) individuals, mostly concentrated in the Amazon Basin; elsewhere, populations tend to be small and fragmented. The high number of jaguars results from the large total area still occupied (almost 9 million km²) and low human densities (< 1 person/km²) coinciding with high primary productivity in the core area of jaguar range. Our results show the importance of protected areas for jaguar persistence. We conclude that combining modelling of density and distribution can reveal ecological patterns and processes at global scales, can provide robust estimates for use in species assessments, and can guide broad-scale conservation actions. PMID:29579129
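In skeleton form, the hierarchical combination multiplies, cell by cell, a modelled density surface by a modelled probability of occurrence and the cell area, then sums. The predictor grids and coefficients below are toy placeholders for the satellite-derived surfaces, not the fitted models of this study.

```python
# Sketch: combine a predicted density surface and an occurrence-probability surface
# into a range-wide abundance estimate. Grids and coefficients are toy placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_cells = 1000
area_km2 = np.full(n_cells, 100.0)                     # 10 km x 10 km cells

# hypothetical predictors (e.g., primary productivity, forest cover, human footprint)
productivity = rng.uniform(0, 1, n_cells)
forest_cover = rng.uniform(0, 1, n_cells)
human_footprint = rng.uniform(0, 1, n_cells)

# density model: individuals per 100 km^2 as a function of productivity (assumed form)
density_per_100km2 = np.exp(0.5 + 1.5 * productivity)

# occurrence model: logistic in forest cover and human footprint (assumed coefficients)
logit_psi = -0.5 + 2.0 * forest_cover - 3.0 * human_footprint
psi = 1.0 / (1.0 + np.exp(-logit_psi))

# expected abundance = sum over cells of density * Pr(occurrence) * area
n_expected = np.sum(density_per_100km2 * (area_km2 / 100.0) * psi)
print(f"Expected total abundance over {n_cells} cells: {n_expected:,.0f}")
```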
Estimating Water Fluxes Across the Sediment-Water Interface in the Lower Merced River, California
Zamora, Celia
2008-01-01
The lower Merced River Basin was chosen by the U.S. Geological Survey's (USGS) National Water Quality Assessment Program (NAWQA) to be included in a national study on how hydrological processes and agricultural practices interact to affect the transport and fate of agricultural chemicals. As part of this effort, surface-water–ground-water (sw–gw) interactions were studied in an instrumented 100-m reach on the lower Merced River. This study focused on estimating vertical rates of exchange across the sediment–water interface by direct measurement using seepage meters and by using temperature as a tracer coupled with numerical modeling. Temperature loggers and pressure transducers were placed in monitoring wells within the streambed and in the river to continuously monitor temperature and hydraulic head every 15 minutes from March 2004 to October 2005. One-dimensional modeling of heat and water flow was used to interpret the temperature and head observations and deduce the sw–gw fluxes using the USGS numerical model, VS2DH, which simulates variably saturated water flow and solves the energy transport equation. Results of the modeling effort indicate that the Merced River at the study reach is generally a slightly gaining stream with small head differences (cm) between the surface water and ground water, with flow reversals occurring during high streamflow events. The average vertical flux across the sediment–water interface was 0.4–2.2 cm/day, and the range of hydraulic conductivities was 1–10 m/day. Seepage meters generally failed to provide accurate data in this high-energy system because of slow seepage rates and a moving streambed resulting in scour or burial of the seepage meters. Estimates of streambed hydraulic conductivity were also made using grain-size analysis and slug tests. Estimated hydraulic conductivity for the upstream transect determined using slug tests ranged from 40 to 250 m/day, whereas the downstream transect ranged from 10 to 100 m/day. The range in variability was a result of position along each transect. A relative percent difference was used to describe the variability in estimates of hydraulic conductivity by grain-size analysis and slug test. Variability in applied methods at the upstream transect ranged from 0 to 9 percent, whereas the downstream transect showed greater variability, with a range of 80 to 133 percent.
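The heart of the temperature-as-tracer approach is a one-dimensional advection-conduction model of streambed temperature; matching its output to the logged temperatures yields the vertical flux. The explicit finite-difference sketch below shows only the forward problem and is a far simpler stand-in for VS2DH; the thermal properties and flux value are illustrative assumptions.

```python
# Sketch of the 1-D heat transport forward model used in temperature-as-tracer work:
# dT/dt = kappa_e * d2T/dz2 - v_t * dT/dz, where v_t is the thermal-front velocity
# set by the vertical water flux q. Properties and the flux value are illustrative.
import numpy as np

def simulate_bed_temperature(q_m_per_day, days=5.0, depth=1.0, nz=101):
    rho_c_w, rho_c_b = 4.18e6, 2.8e6          # volumetric heat capacities (J m^-3 K^-1), assumed
    kappa_e = 1.5 / rho_c_b                   # effective thermal diffusivity (m^2/s), assumed
    q = q_m_per_day / 86400.0                 # Darcy flux, m/s (positive downward)
    v_t = q * rho_c_w / rho_c_b               # thermal-front velocity

    dz = depth / (nz - 1)
    dt = 0.25 * dz**2 / kappa_e               # explicit stability limit with margin
    nt = int(days * 86400.0 / dt)
    temp = np.full(nz, 15.0)                  # initial bed temperature (deg C)

    for step in range(nt):
        t_hr = step * dt / 3600.0
        temp[0] = 15.0 + 5.0 * np.sin(2 * np.pi * t_hr / 24.0)   # diel stream signal at the bed top
        lap = (temp[2:] - 2 * temp[1:-1] + temp[:-2]) / dz**2
        adv = (temp[2:] - temp[:-2]) / (2 * dz)
        temp[1:-1] += dt * (kappa_e * lap - v_t * adv)
        temp[-1] = temp[-2]                    # zero-gradient bottom boundary
    return temp

temp = simulate_bed_temperature(q_m_per_day=0.02)   # roughly 2 cm/day downward flux
print(np.round(temp[::20], 2))                      # temperature profile snapshot with depth
```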
Estimate of the Reliability in Geological Forecasts for Tunnels: Toward a Structured Approach
NASA Astrophysics Data System (ADS)
Perello, Paolo
2011-11-01
In tunnelling, a reliable geological model often makes it possible to provide an effective design and to face the construction phase without unpleasant surprises. A geological model can be considered reliable when it is a valid support to correctly foresee the rock mass behaviour, therefore preventing unexpected events during the excavation. The higher the model reliability, the lower the probability of unforeseen rock mass behaviour. Unfortunately, for various reasons, geological models are affected by uncertainties and a fully reliable knowledge of the rock mass is, in most cases, impossible. Therefore, estimating the degree to which a geological model is reliable becomes a primary requirement in order to save time and money and to adopt the appropriate construction strategy. The definition of the geological model reliability is often achieved by engineering geologists through an unstructured analytical process and variable criteria. This paper focuses on geological models for projects of linear underground structures and represents an effort to analyse and include in a conceptual framework the factors influencing such models. An empirical parametric procedure is then developed with the aim of obtaining an index called "geological model rating (GMR)", which can be used to provide a more standardised definition of geological model reliability.
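The general flavour of such a parametric rating can be sketched as a weighted sum of factor scores rescaled to a fixed range. The factors, scores, and weights below are invented for illustration and are not the published GMR parameters.

```python
# Hypothetical weighted-scoring sketch of a parametric reliability rating.
# Factors, scores, and weights are invented for illustration; they are NOT the GMR scheme.
factors = {
    # factor name: (score 0-10 assigned by the geologist, relative weight)
    "quality_of_surface_mapping": (7, 0.25),
    "borehole_and_geophysics_coverage": (4, 0.30),
    "geological_complexity": (5, 0.25),        # higher score = simpler, better-understood setting
    "overburden_depth_of_tunnel": (6, 0.20),   # higher score = shallower, easier to investigate
}

total_weight = sum(weight for _, weight in factors.values())
gmr = sum(score * weight for score, weight in factors.values()) / total_weight * 10.0
print(f"Illustrative reliability rating: {gmr:.0f} / 100")
```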
Expected Navigation Flight Performance for the Magnetospheric Multiscale (MMS) Mission
NASA Technical Reports Server (NTRS)
Olson, Corwin; Wright, Cinnamon; Long, Anne
2012-01-01
The Magnetospheric Multiscale (MMS) mission consists of four formation-flying spacecraft placed in highly eccentric elliptical orbits about the Earth. The primary scientific mission objective is to study magnetic reconnection within the Earth's magnetosphere. The baseline navigation concept is the independent estimation of each spacecraft state using GPS pseudorange measurements (referenced to an onboard Ultra Stable Oscillator) and accelerometer measurements during maneuvers. State estimation for the MMS spacecraft is performed onboard each vehicle using the Goddard Enhanced Onboard Navigation System, which is embedded in the Navigator GPS receiver. This paper describes the latest efforts to characterize expected navigation flight performance using upgraded simulation models derived from recent analyses.
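The measurement side of such onboard estimation can be sketched as a pseudorange model (geometric range plus a clock-bias term, consistent with referencing to an ultra-stable oscillator) followed by a standard Kalman measurement update. The state, covariance, and GPS geometry below are toy values; this is not the GEONS implementation.

```python
# Sketch: GPS pseudorange measurement model and one Kalman measurement update for a
# spacecraft state [x, y, z, clock_bias]. Values are toy numbers, not GEONS/MMS data.
import numpy as np

def pseudorange(state, gps_pos):
    """Predicted pseudorange: geometric range plus receiver clock bias (metres)."""
    rho = np.linalg.norm(state[:3] - gps_pos)
    return rho + state[3]

def measurement_jacobian(state, gps_pos):
    los = (state[:3] - gps_pos) / np.linalg.norm(state[:3] - gps_pos)
    return np.hstack([los, 1.0])                      # d(rho)/d(position), d(rho)/d(bias)

def kalman_update(x, p, z, gps_pos, sigma=10.0):
    h = measurement_jacobian(x, gps_pos)
    y = z - pseudorange(x, gps_pos)                   # innovation
    s = h @ p @ h + sigma**2                          # innovation variance (scalar)
    k = (p @ h) / s                                   # Kalman gain
    return x + k * y, p - np.outer(k, h @ p)

x = np.array([7.0e6, 1.0e6, 2.0e6, 30.0])             # rough state guess (m, m, m, m of bias)
p = np.diag([1e6, 1e6, 1e6, 1e4])                     # prior covariance
gps = np.array([2.66e7, 0.0, 0.0])                    # one GPS satellite position (m)
z_meas = pseudorange(np.array([7.001e6, 1.0e6, 2.0e6, 25.0]), gps)  # synthetic measurement
x_new, p_new = kalman_update(x, p, z_meas, gps)
print(np.round(x_new, 1))
```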
Applications of the JARS method to study levee sites in southern Texas and southern New Mexico
Ivanov, J.; Miller, R.D.; Xia, J.; Dunbar, J.B.
2007-01-01
We apply the joint analysis of refractions with surface waves (JARS) method to several sites and compare its results to those of traditional refraction-tomography methods in an effort to find a more realistic solution to the inverse refraction-traveltime problem. The JARS method uses a reference model, derived from surface-wave shear-wave velocity estimates, as a constraint. In all cases the JARS estimates appear more realistic than those from the conventional refraction-tomography methods. As a result, we consider the JARS algorithm the preferred method for finding solutions to inverse refraction-traveltime problems. © 2007 Society of Exploration Geophysicists.
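The constraint enters in the same way a damped least-squares inversion is regularized toward a reference model: minimize ||Gm - d||² + λ||m - m_ref||², which pulls the refraction solution toward the surface-wave-derived velocities. The operator, data, and reference model below are toy stand-ins, not the JARS implementation.

```python
# Sketch: least-squares traveltime inversion damped toward a reference model,
# i.e. minimize ||G m - d||^2 + lam * ||m - m_ref||^2 (closed-form normal equations).
# G, d, and m_ref are toy stand-ins, not the JARS operator or data.
import numpy as np

rng = np.random.default_rng(42)
n_rays, n_cells = 60, 20

G = rng.uniform(0.0, 50.0, (n_rays, n_cells))              # ray path lengths per cell (m)
m_true = np.linspace(1.0 / 400.0, 1.0 / 1500.0, n_cells)   # true slowness profile (s/m)
d = G @ m_true + rng.normal(0.0, 1e-3, n_rays)             # noisy traveltimes (s)

m_ref = m_true * rng.uniform(0.8, 1.2, n_cells)            # e.g. from surface-wave Vs estimates
lam = 1e3                                                  # damping weight (assumed)

A = G.T @ G + lam * np.eye(n_cells)                        # normal equations with damping
b = G.T @ d + lam * m_ref
m_est = np.linalg.solve(A, b)
print("mean velocity error (m/s):", np.mean(np.abs(1.0 / m_est - 1.0 / m_true)).round(1))
```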
Estimating abundance without recaptures of marked pallid sturgeon in the Mississippi River.
Friedenberg, Nicholas A; Hoover, Jan Jeffrey; Boysen, Krista; Killgore, K Jack
2018-04-01
Abundance estimates are essential for assessing the viability of populations and the risks posed by alternative management actions. An effort to estimate abundance via a repeated mark-recapture experiment may fail to recapture marked individuals. We devised a method for obtaining lower bounds on abundance in the absence of recaptures for both panmictic and spatially structured populations. The method assumes few enough recaptures were expected to be missed by random chance. The upper Bayesian credible limit on expected recaptures allows probabilistic statements about the minimum number of individuals present in the population. We applied this method to data from a 12-year survey of pallid sturgeon (Scaphirhynchus albus) in the lower and middle Mississippi River (U.S.A.). None of the 241 individuals marked was recaptured in the survey. After accounting for survival and movement, our model-averaged estimate of the total abundance of pallid sturgeon ≥3 years old in the study area had a 1%, 5%, or 25% chance of being <4,600, 7,000, or 15,000, respectively. When we assumed fish were distributed in proportion to survey catch per unit effort, the farthest downstream reach in the survey hosted at least 4.5-15 fish per river kilometer (rkm), whereas the remainder of the reaches in the lower and middle Mississippi River hosted at least 2.6-8.5 fish/rkm for all model variations examined. The lower Mississippi River had an average density of pallid sturgeon ≥3 years old of at least 3.0-9.8 fish/rkm. The choice of Bayesian prior was the largest source of uncertainty we considered but did not alter the order of magnitude of lower bounds. Nil-recapture estimates of abundance are highly uncertain and require careful communication but can deliver insights from experiments that might otherwise be considered a failure. © 2017 Society for Conservation Biology.
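One way to get the flavour of the nil-recapture bound: with zero recaptures observed and a flat prior, the posterior for the expected number of recaptures is Exponential(1), so its upper credible limit is -ln(α); because the expected recaptures scale as marked × examined / N under random mixing in a closed population, that limit converts directly into a lower bound on N. The examined-catch figure below is invented, and the published model additionally handles survival, movement, and spatial structure.

```python
# Sketch: probabilistic lower bound on abundance N when none of M marked fish are
# recaptured among C examined captures. Assumes random mixing in a closed population,
# a flat prior on the expected number of recaptures, and an invented value of C;
# the published model also accounts for survival, movement, and spatial structure.
import numpy as np

M = 241          # fish marked over the survey
C = 500          # fish captured and examined for marks (hypothetical)
R = 0            # recaptures observed

# With R = 0 and a flat prior, the posterior for the expected recaptures lambda is
# Exponential(1), so the upper (1 - alpha) credible limit is -ln(alpha).
for alpha in (0.01, 0.05, 0.25):
    lam_upper = -np.log(alpha)
    # lambda = M * C / N  =>  N < M * C / lam_upper with posterior probability alpha
    n_lower = M * C / lam_upper
    print(f"P(N < {n_lower:,.0f}) ~ {alpha:.0%}")
```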
Estimating Forest Species Composition Using a Multi-Sensor Approach
NASA Astrophysics Data System (ADS)
Wolter, P. T.
2009-12-01
The magnitude, duration, and frequency of forest disturbance caused by the spruce budworm and forest tent caterpillar have increased over the last century due to a shift in forest species composition linked to historical fire suppression, forest management, and pesticide application that has fostered the increase in dominance of host tree species. Modeling approaches are currently being used to understand and forecast potential management effects in changing insect disturbance trends. However, the detailed forest composition data needed for these efforts are often lacking. Here, we used partial least squares (PLS) regression to integrate satellite sensor data from Landsat, Radarsat-1, and PALSAR, as well as pixel-wise forest structure information derived from SPOT-5 sensor data (Wolter et al. 2009), to estimate species-level forest composition of 12 species required for modeling efforts. C-band Radarsat-1 data and L-band PALSAR data were frequently among the strongest predictors of forest composition. Pixel-level forest structure data were more important for estimating conifer rather than hardwood forest composition. The coefficients of determination for species relative basal area (RBA) ranged from 0.57 (white cedar) to 0.94 (maple) with RMSE of 8.88 to 6.44 % RBA, respectively. Receiver operating characteristic (ROC) curves were used to determine the effective lower limits of usefulness of species RBA estimates, which ranged from 5.94 % (jack pine) to 39.41 % (black ash). These estimates were then used to produce a dominant forest species map for the study region with an overall accuracy of 78 %. Most notably, this approach facilitated discrimination of aspen from birch as well as spruce and fir from other conifer species, which is crucial for the study of forest tent caterpillar and spruce budworm dynamics, respectively, in the Upper Midwest. Thus, use of PLS regression as a data fusion strategy has proven to be an effective tool for regional characterization of forest composition within spatially heterogeneous forests using large-format satellite sensor data.
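A minimal version of the fusion step can be written with scikit-learn's PLSRegression, mapping stacked multi-sensor predictors onto multi-species relative basal area. The synthetic arrays below stand in for the Landsat, Radarsat-1, PALSAR, and SPOT-derived structure layers.

```python
# Sketch: partial least squares (PLS) regression fusing multi-sensor predictors to
# estimate species relative basal area (RBA). Arrays are synthetic stand-ins for the
# Landsat, Radarsat-1, PALSAR, and SPOT-derived structure layers.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(7)
n_plots, n_bands, n_species = 300, 25, 12

X = rng.normal(size=(n_plots, n_bands))                       # stacked sensor predictors
B = rng.normal(size=(n_bands, n_species)) * (rng.random((n_bands, n_species)) > 0.7)
Y = X @ B + rng.normal(scale=0.5, size=(n_plots, n_species))  # species RBA responses (synthetic)

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=8).fit(X_tr, Y_tr)
Y_hat = pls.predict(X_te)

for k in range(3):                                            # report a few species
    print(f"species {k}: R^2 = {r2_score(Y_te[:, k], Y_hat[:, k]):.2f}")
```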
Rosinska, M; Gwiazda, P; De Angelis, D; Presanis, A M
2016-04-01
HIV spread in men who have sex with men (MSM) is an increasing problem in Poland. Despite the existence of a surveillance system, there is no direct evidence to allow estimation of HIV prevalence and the proportion undiagnosed in MSM. We extracted data on HIV and the MSM population in Poland, including case-based surveillance data, diagnostic testing prevalence data and behavioural data relating to self-reported prior diagnosis, stratified by age (⩽35, >35 years) and region (Mazowieckie including the capital city of Warsaw; other regions). They were integrated into one model based on a Bayesian evidence synthesis approach. The posterior distributions for HIV prevalence and the undiagnosed fraction were estimated by Markov Chain Monte Carlo methods. To improve the model fit we repeated the analysis, introducing bias parameters to account for potential lack of representativeness in data. By placing additional constraints on bias parameters we obtained precisely identified estimates. This family of models indicates a high undiagnosed fraction [68·3%, 95% credibility interval (CrI) 53·9-76·1] and overall low prevalence (2·3%, 95% CrI 1·4-4·1) of HIV in MSM. Additional data are necessary in order to produce more robust epidemiological estimates. More effort is urgently needed to ensure timely diagnosis of HIV in Poland.
Robust detection, isolation and accommodation for sensor failures
NASA Technical Reports Server (NTRS)
Emami-Naeini, A.; Akhter, M. M.; Rock, S. M.
1986-01-01
The objective is to extend the recent advances in robust control system design of multivariable systems to sensor failure detection, isolation, and accommodation (DIA), and estimator design. This effort provides analysis tools to quantify the trade-off between performance robustness and DIA sensitivity, which are to be used to achieve higher levels of performance robustness for given levels of DIA sensitivity. An innovations-based DIA scheme is used. Estimators, which depend upon a model of the process and process inputs and outputs, are used to generate these innovations. Thresholds used to determine failure detection are computed based on bounds on modeling errors, noise properties, and the class of failures. The applicability of the newly developed tools is demonstrated on a multivariable aircraft turbojet engine example. A new concept called the threshold selector was developed. It represents a significant and innovative tool for the analysis and synthesis of DIA algorithms. The estimators were made robust by the introduction of an internal model and by frequency shaping. The internal model provides asymptotically unbiased filter estimates. The incorporation of frequency shaping of the Linear Quadratic Gaussian cost functional modifies the estimator design to make it suitable for sensor failure DIA. The results are compared with previous studies, which used thresholds that were selected empirically. Comparison of these two techniques on a nonlinear dynamic engine simulation shows improved performance of the new method compared to previous techniques.
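The detection logic in an innovations-based scheme reduces to comparing each sensor's innovation (measured minus estimated output) against a threshold sized from the expected error level; a bias failure is then flagged and the faulty channel excluded. The filter, threshold value, and injected fault below are toy choices, not the engine example of the study.

```python
# Sketch: innovations-based sensor failure detection. A simple filter tracks a sensor;
# when the innovation exceeds a threshold sized from the nominal noise level, the
# channel is flagged as failed. Signal, noise, threshold, and fault are toy choices.
import numpy as np

rng = np.random.default_rng(3)
n = 200
truth = 100.0 + 0.05 * np.arange(n)                   # slowly varying true quantity
noise_sigma = 0.5
meas = truth + rng.normal(0.0, noise_sigma, n)
meas[120:] += 4.0                                     # injected sensor bias failure at k = 120

threshold = 5.0 * noise_sigma                         # threshold from assumed noise/model-error bounds
estimate, failed_at = meas[0], None
for k in range(1, n):
    predicted = estimate                              # trivial prediction model (constant)
    innovation = meas[k] - predicted
    if abs(innovation) > threshold:
        failed_at = k                                 # detect: innovation outside the threshold
        break
    estimate += 0.2 * innovation                      # first-order filter update

print(f"Failure flagged at sample {failed_at}")
```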
Pinto-Leite, C M; Rocha, P L B
2012-12-01
Empirical studies using visual search methods to investigate spider communities were conducted with different sampling protocols, including a variety of plot sizes, sampling efforts, and diurnal periods for sampling. We sampled 11 plots ranging in size from 5 by 10 m to 5 by 60 m. In each plot, we computed the total number of species detected every 10 min during 1 hr during the daytime and during the nighttime (0630 hours to 1100 hours, both a.m. and p.m.). We measured the influence of time effort on the measurement of species richness by comparing the curves produced by sample-based rarefaction and species richness estimation (first-order jackknife). We used a general linear model with repeated measures to assess whether the phase of the day during which sampling occurred and the differences in the plot lengths influenced the number of species observed and the number of species estimated. To measure the differences in species composition between the phases of the day, we used a multiresponse permutation procedure and a graphical representation based on nonmetric multidimensional scaling. After 50 min of sampling, we noted a decreased rate of species accumulation and a tendency of the estimated richness curves to reach an asymptote. We did not detect an effect of plot size on the number of species sampled. However, differences in observed species richness and species composition were found between phases of the day. Based on these results, we propose guidelines for visual search for tropical web spiders.
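The richness estimator referenced above has a simple closed form: the first-order jackknife adds to the observed richness a term based on the number of species detected in exactly one sample. The sketch below, using a randomly generated incidence matrix in place of field data, shows that calculation together with a sample-based accumulation curve.

```python
# Sketch: first-order jackknife richness estimate and a sample-based accumulation curve
# from a samples-by-species incidence matrix (rows = 10-min sampling intervals).
# The incidence matrix here is randomly generated for illustration.
import numpy as np

rng = np.random.default_rng(11)
n_samples, n_species_pool = 6, 40
incidence = rng.random((n_samples, n_species_pool)) < rng.uniform(0.02, 0.4, n_species_pool)

detected = incidence.any(axis=0)
s_obs = detected.sum()                                  # observed species richness
q1 = (incidence.sum(axis=0) == 1).sum()                 # species found in exactly one sample
s_jack1 = s_obs + q1 * (n_samples - 1) / n_samples      # first-order jackknife estimate
print(f"S_obs = {s_obs}, Q1 = {q1}, jackknife-1 estimate = {s_jack1:.1f}")

# sample-based rarefaction: mean richness over random orderings of the samples
curves = []
for _ in range(200):
    order = rng.permutation(n_samples)
    seen = np.zeros(n_species_pool, dtype=bool)
    curve = []
    for i in order:
        seen |= incidence[i]
        curve.append(int(seen.sum()))
    curves.append(curve)
print("mean accumulation curve:", np.round(np.mean(curves, axis=0), 1))
```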
2011-01-01
Background Insecticide-treated mosquito nets (ITNs) and indoor-residual spraying have been scaled-up across sub-Saharan Africa as part of international efforts to control malaria. These interventions have the potential to significantly impact child survival. The Lives Saved Tool (LiST) was developed to provide national and regional estimates of cause-specific mortality based on the extent of intervention coverage scale-up. We compared the percent reduction in all-cause child mortality estimated by LiST against measured reductions in all-cause child mortality from studies assessing the impact of vector control interventions in Africa. Methods We performed a literature search for appropriate studies and compared reductions in all-cause child mortality estimated by LiST to 4 studies that estimated changes in all-cause child mortality following the scale-up of vector control interventions. The following key parameters measured by each study were applied to available country projections: baseline all-cause child mortality rate, proportion of mortality due to malaria, and population coverage of vector control interventions at baseline and follow-up years. Results The percent reduction in all-cause child mortality estimated by the LiST model fell within the confidence intervals around the measured mortality reductions for all 4 studies. Two of the LiST estimates overestimated the mortality reductions by 6.1 and 4.2 percentage points (33% and 35% relative to the measured estimates), while two underestimated the mortality reductions by 4.7 and 6.2 percentage points (22% and 25% relative to the measured estimates). Conclusions The LiST model did not systematically under- or overestimate the impact of ITNs on all-cause child mortality. These results show the LiST model to perform reasonably well at estimating the effect of vector control scale-up on child mortality when compared against measured data from studies across a range of malaria transmission settings. The LiST model appears to be a useful tool in estimating the potential mortality reduction achieved from scaling-up malaria control interventions. PMID:21501453
VBA: A Probabilistic Treatment of Nonlinear Models for Neurobiological and Behavioural Data
Daunizeau, Jean; Adam, Vincent; Rigoux, Lionel
2014-01-01
This work is in line with an ongoing effort toward a computational (quantitative and refutable) understanding of human neuro-cognitive processes. Many sophisticated models for behavioural and neurobiological data have flourished during the past decade. Most of these models are partly unspecified (i.e. they have unknown parameters) and nonlinear. This makes them difficult to pair with a formal statistical data analysis framework. In turn, this compromises the reproducibility of model-based empirical studies. This work presents a software toolbox that provides generic, efficient and robust probabilistic solutions to the three problems of model-based analysis of empirical data: (i) data simulation, (ii) parameter estimation/model selection, and (iii) experimental design optimization. PMID:24465198