Phillips, K A; Morrison, K R; Andersen, R; Aday, L A
1998-01-01
BACKGROUND: The behavioral model of utilization, developed by Andersen, Aday, and others, is one of the most frequently used frameworks for analyzing the factors associated with patient utilization of healthcare services. However, the use of the model for examining the context within which utilization occurs (the role of environmental and provider-related factors) has been largely neglected. OBJECTIVE: To conduct a systematic review and analysis to determine whether studies of medical care utilization that used the behavioral model during the last 20 years included environmental and provider-related variables, and which methods were used to analyze these variables. We discuss barriers to the use of these contextual variables and potential solutions. DATA SOURCES: The Social Science Citation Index and Science Citation Index. We included all articles from 1975-1995 that cited any of three key articles on the behavioral model, were empirical analyses of formal medical care utilization, and specifically stated their use of the behavioral model (n = 139). STUDY DESIGN: Systematic literature review. DATA ANALYSIS: We used a structured review process to code articles on whether they included contextual variables: (1) environmental variables (characteristics of the healthcare delivery system, external environment, and community-level enabling factors); and (2) provider-related variables (patient factors that may be influenced by providers, and provider characteristics that interact with patient characteristics to influence utilization). We also examined the methods used in studies that included contextual variables. PRINCIPAL FINDINGS: Forty-five percent of the studies included environmental variables and 51 percent included provider-related variables.
Few studies examined specific measures of the healthcare system or provider characteristics or used methods other than simple regression analysis with hierarchical entry of variables. Only 14 percent of studies analyzed the context of healthcare by including both environmental and provider-related variables as well as using relevant methods. CONCLUSIONS: By assessing whether and how contextual variables are used, we are able to highlight the contributions made by studies using these approaches, to identify variables and methods that have been relatively underused, and to suggest solutions to barriers in using contextual variables. PMID:9685123
The Diffusion Model Is Not a Deterministic Growth Model: Comment on Jones and Dzhafarov (2014)
Smith, Philip L.; Ratcliff, Roger; McKoon, Gail
2015-01-01
Jones and Dzhafarov (2014) claim that several current models of speeded decision making in cognitive tasks, including the diffusion model, can be viewed as special cases of other general models or model classes. The general models can be made to match any set of response time (RT) distribution and accuracy data exactly by a suitable choice of parameters and so are unfalsifiable. The implication of their claim is that models like the diffusion model are empirically testable only by artificially restricting them to exclude unfalsifiable instances of the general model. We show that Jones and Dzhafarov’s argument depends on enlarging the class of “diffusion” models to include models in which there is little or no diffusion. The unfalsifiable models are deterministic or near-deterministic growth models, from which the effects of within-trial variability have been removed or in which they are constrained to be negligible. These models attribute most or all of the variability in RT and accuracy to across-trial variability in the rate of evidence growth, which is permitted to be distributed arbitrarily and to vary freely across experimental conditions. In contrast, in the standard diffusion model, within-trial variability in evidence is the primary determinant of variability in RT. Across-trial variability, which determines the relative speed of correct responses and errors, is theoretically and empirically constrained. Jones and Dzhafarov’s attempt to include the diffusion model in a class of models that also includes deterministic growth models misrepresents and trivializes it and conveys a misleading picture of cognitive decision-making research. PMID:25347314
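The contrast the comment draws, within-trial diffusion noise as the primary source of RT variability versus noise-free deterministic growth, can be illustrated with a minimal random-walk simulation of a symmetric diffusion process. The parameter values below are arbitrary illustrations, not the fitted values of the standard diffusion model.

```python
import random

def diffusion_trial(drift, bound, sigma=1.0, dt=0.001, rng=random):
    """One trial of a symmetric drift-diffusion process: evidence x
    accumulates with within-trial Gaussian noise until it reaches
    +bound (a correct response) or -bound (an error).
    Returns (response_time, correct)."""
    x, t = 0.0, 0.0
    step_sd = sigma * dt ** 0.5  # diffusion noise scales with sqrt(dt)
    while abs(x) < bound:
        x += drift * dt + rng.gauss(0.0, step_sd)
        t += dt
    return t, x >= bound

rng = random.Random(1)  # fixed seed for reproducibility
trials = [diffusion_trial(drift=1.0, bound=1.0, rng=rng) for _ in range(200)]
accuracy = sum(correct for _, correct in trials) / len(trials)
mean_rt = sum(rt for rt, _ in trials) / len(trials)
```

With `sigma` driven toward zero every trial terminates at the same time with the same response, which is the deterministic-growth limit the comment argues is no longer a diffusion model in any meaningful sense.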
Fatigue and crashes: the case of freight transport in Colombia.
Torregroza-Vargas, Nathaly M; Bocarejo, Juan Pablo; Ramos-Bonilla, Juan P
2014-11-01
Truck drivers have been involved in a significant number of road fatalities in Colombia. To identify variables that could be associated with crashes involving truck drivers, a logistic regression model was constructed. The response variable was dichotomous, indicating the presence or absence of a crash during a specific trip. As independent variables, the model included information regarding a driver's work shift, with variables that could be associated with driver fatigue. The model also included potential confounders related to road conditions. With the model, it was possible to estimate the odds ratio of a crash in relation to several variables, adjusting for confounding. To collect the information about the trips included in the model, a survey among truck drivers was conducted. The results suggest strong associations, some of them statistically significant, between crashes and the number of stops made during the trip and the average duration of each stop. Survey analysis allowed us to identify practices that contribute to fatigue and unhealthy conditions on the road among professional drivers. A review of national regulations confirmed the lack of legislation on this topic. Copyright © 2014 Elsevier Ltd. All rights reserved.
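The odds ratios described above come from the standard 2×2 construction underlying logistic regression output. A minimal sketch with hypothetical trip counts (not the study's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed with crash,   b = exposed without crash,
    c = unexposed with crash, d = unexposed without crash."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # Woolf SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical counts: trips with few long stops vs. many short stops
or_, lo, hi = odds_ratio_ci(a=30, b=70, c=15, d=85)
```

If the confidence interval excludes 1, the association would be called statistically significant at the 5% level; the adjusted odds ratios in the study come from the full regression model rather than a raw 2×2 table.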
Importance of scale, land cover, and weather on the abundance of bird species in a managed forest
Grinde, Alexis R.; Niemi, Gerald J.; Sturtevant, Brian R.; Panci, Hannah; Thogmartin, Wayne E.; Wolter, Peter
2017-01-01
Climate change and habitat loss are projected to be the two greatest drivers of biodiversity loss over the coming century. While public lands have the potential to increase regional resilience of bird populations to these threats, long-term data are necessary to document species responses to changes in climate and habitat to better understand population vulnerabilities. We used generalized linear mixed models to determine the importance of stand-level characteristics, multi-scale land cover, and annual weather factors to the abundance of 61 bird species over a 20-year time frame in Chippewa National Forest, Minnesota, USA. Of the 61 species modeled, we were able to build final models with R-squared values that ranged from 26% to 69% for 37 species; the remaining 24 species models had issues with convergence or low explanatory power (R-squared < 20%). Models for the 37 species show that stand-level characteristics, land cover factors, and annual weather effects on species abundance were species-specific and varied within guilds. Forty-one percent of the final species models included stand-level characteristics, 92% included land cover variables at the 200 m scale, 51% included land cover variables at the 500 m scale, 46% included land cover variables at the 1000 m scale, and 38% included weather variables in best models. Three species models (8%) included significant weather and land cover interaction terms. Overall, models indicated that aboveground tree biomass and land cover variables drove changes in the majority of species. Of those species models including weather variables, more included annual variation in precipitation or drought than temperature. Annual weather variability was significantly more likely to impact abundance of species associated with deciduous forests and bird species that are considered climate sensitive. 
The long-term data and models we developed are particularly suited to informing science-based adaptive forest management plans that incorporate climate sensitivity, aim to conserve large areas of forest habitat, and maintain an historical mosaic of cover types for conserving a diverse and abundant avian assemblage.
A Multivariate Model of Parent-Adolescent Relationship Variables in Early Adolescence
ERIC Educational Resources Information Center
McKinney, Cliff; Renk, Kimberly
2011-01-01
Given the importance of predicting outcomes for early adolescents, this study examines a multivariate model of parent-adolescent relationship variables, including parenting, family environment, and conflict. Participants, who completed measures assessing these variables, included 710 culturally diverse 11-14-year-olds who were attending a middle…
Bayesian Semiparametric Structural Equation Models with Latent Variables
ERIC Educational Resources Information Center
Yang, Mingan; Dunson, David B.
2010-01-01
Structural equation models (SEMs) with latent variables are widely useful for sparse covariance structure modeling and for inferring relationships among latent variables. Bayesian SEMs are appealing in allowing for the incorporation of prior information and in providing exact posterior distributions of unknowns, including the latent variables. In…
Measurement Model Specification Error in LISREL Structural Equation Models.
ERIC Educational Resources Information Center
Baldwin, Beatrice; Lomax, Richard
This LISREL study examines the robustness of the maximum likelihood estimates under varying degrees of measurement model misspecification. A true model containing five latent variables (two endogenous and three exogenous) and two indicator variables per latent variable was used. Measurement model misspecification considered included errors of…
A Bayesian Semiparametric Latent Variable Model for Mixed Responses
ERIC Educational Resources Information Center
Fahrmeir, Ludwig; Raach, Alexander
2007-01-01
In this paper we introduce a latent variable model (LVM) for mixed ordinal and continuous responses, where covariate effects on the continuous latent variables are modelled through a flexible semiparametric Gaussian regression model. We extend existing LVMs with the usual linear covariate effects by including nonparametric components for nonlinear…
A Latent Variable Path Analysis Model of Secondary Physics Enrollments in New York State.
NASA Astrophysics Data System (ADS)
Sobolewski, Stanley John
The Percentage of Enrollment in Physics (PEP) at the secondary level nationally has been approximately 20% for the past few decades. For a more scientifically literate citizenry, as well as specialists to continue scientific research and development, it is desirable that more students enroll in physics. Predictor variables for physics enrollment and physics achievement that have been identified previously include a community's socioeconomic status, the availability of physics, the sex of the student, the curriculum, and teacher and student data. This study isolated and identified predictor variables for PEP of secondary schools in New York. Data gathered by the State Education Department for the 1990-1991 school year were used. The sources of these data included surveys completed by teachers and administrators on student characteristics and school facilities. A data analysis similar to that done by Bryant (1974) was conducted to determine whether the relationships between a set of predictor variables related to physics enrollment had changed in the past 20 years. Variables that were isolated included community, facilities, teacher experience, number and types of science courses, school size, and school science facilities. When these variables were isolated, latent variable path diagrams were proposed and verified with the Linear Structural Relations computer modeling program (LISREL). These diagrams differed from those developed by Bryant in that more manifest variables were used, including achievement scores in the form of Regents exam results. Two criterion variables were used: the percentage of students enrolled in physics (PEP) and the percentage of enrolled students passing the Regents physics exam (PPP). The first model treated school and community level variables as exogenous, while the second model treated only the community level variables as exogenous.
The goodness-of-fit indices were 0.77 for the first model and 0.83 for the second model. No dramatic differences were found between the relationships of predictor variables to physics enrollment in 1972 and 1991. The new models indicated that smaller school size, enrollment in previous science and math courses, and other school variables were related more to high enrollment than to achievement. Exogenous variables such as community size were related to achievement. It was shown that achievement and enrollment were related to different sets of predictor variables.
Bailey, Ryan T.; Morway, Eric D.; Niswonger, Richard G.; Gates, Timothy K.
2013-01-01
A numerical model was developed that is capable of simulating multispecies reactive solute transport in variably saturated porous media. This model consists of a modified version of the reactive transport model RT3D (Reactive Transport in 3 Dimensions) that is linked to the Unsaturated-Zone Flow (UZF1) package and MODFLOW. Referred to as UZF-RT3D, the model is tested against published analytical benchmarks as well as other published contaminant transport models, including HYDRUS-1D, VS2DT, and SUTRA, and the coupled flow and transport modeling system of CATHY and TRAN3D. Comparisons in one-dimensional, two-dimensional, and three-dimensional variably saturated systems are explored. While several test cases are included to verify the correct implementation of variably saturated transport in UZF-RT3D, other cases are included to demonstrate the usefulness of the code in terms of model run-time and handling the reaction kinetics of multiple interacting species in variably saturated subsurface systems. As UZF1 relies on a kinematic-wave approximation for unsaturated flow that neglects the diffusive terms in Richards equation, UZF-RT3D can be used for large-scale aquifer systems for which the UZF1 formulation is reasonable, that is, capillary-pressure gradients can be neglected and soil parameters can be treated as homogeneous. Decreased model run-time and the ability to include site-specific chemical species and chemical reactions make UZF-RT3D an attractive model for efficient simulation of multispecies reactive transport in variably saturated large-scale subsurface systems.
Identifying bird and reptile vulnerabilities to climate change in the southwestern United States
Hatten, James R.; Giermakowski, J. Tomasz; Holmes, Jennifer A.; Nowak, Erika M.; Johnson, Matthew J.; Ironside, Kirsten E.; van Riper, Charles; Peters, Michael; Truettner, Charles; Cole, Kenneth L.
2016-07-06
Current and future breeding ranges of 15 bird and 16 reptile species were modeled in the Southwestern United States. Rather than taking a broad-scale, vulnerability-assessment approach, we created a species distribution model (SDM) for each focal species incorporating climatic, landscape, and plant variables. Baseline climate (1940–2009) was characterized with Parameter-elevation Regressions on Independent Slopes Model (PRISM) data and future climate with global-circulation-model data under an A1B emission scenario. Climatic variables included monthly and seasonal temperature and precipitation; landscape variables included terrain ruggedness, soil type, and insolation; and plant variables included trees and shrubs commonly associated with a focal species. Not all species-distribution models contained a plant, but if they did, we included a built-in annual migration rate for more accurate plant-range projections in 2039 or 2099. We conducted a group meta-analysis to (1) determine how influential each variable class was when averaged across all species distribution models (birds or reptiles), and (2) identify the correlation among contemporary (2009) habitat fragmentation and biological attributes and future range projections (2039 or 2099). Projected changes in bird and reptile ranges varied widely among species, with one-third of the ranges predicted to expand and two-thirds predicted to contract. A group meta-analysis indicated that climatic variables were the most influential variable class when averaged across all models for both groups, followed by landscape and plant variables (birds), or plant and landscape variables (reptiles), respectively. The second part of the meta-analysis indicated that numerous contemporary habitat-fragmentation (for example, patch isolation) and biological-attribute (for example, clutch size, longevity) variables were significantly correlated with the magnitude of projected range changes for birds and reptiles. 
Patch isolation was a significant trans-specific driver of projected bird and reptile ranges, suggesting that strategic actions should focus on restoration and enhancement of habitat at local and regional scales to promote landscape connectivity and conservation of core areas.
Stochastic Time Models of Syllable Structure
Shaw, Jason A.; Gafos, Adamantios I.
2015-01-01
Drawing on phonology research within the generative linguistics tradition, stochastic methods, and notions from complex systems, we develop a modelling paradigm linking phonological structure, expressed in terms of syllables, to speech movement data acquired with 3D electromagnetic articulography and X-ray microbeam methods. The essential variable in the models is syllable structure. When mapped to discrete coordination topologies, syllabic organization imposes systematic patterns of variability on the temporal dynamics of speech articulation. We simulated these dynamics under different syllabic parses and evaluated simulations against experimental data from Arabic and English, two languages claimed to parse similar strings of segments into different syllabic structures. Model simulations replicated several key experimental results, including the fallibility of past phonetic heuristics for syllable structure, and exposed the range of conditions under which such heuristics remain valid. More importantly, the modelling approach consistently diagnosed syllable structure proving resilient to multiple sources of variability in experimental data including measurement variability, speaker variability, and contextual variability. Prospects for extensions of our modelling paradigm to acoustic data are also discussed. PMID:25996153
Variable-Speed Simulation of a Dual-Clutch Gearbox Tiltrotor Driveline
NASA Technical Reports Server (NTRS)
DeSmidt, Hans; Wang, Kon-Well; Smith, Edward C.; Lewicki, David G.
2012-01-01
This investigation explores the variable-speed operation and shift response of a prototypical two-speed dual-clutch transmission tiltrotor driveline in forward flight. Here, a Comprehensive Variable-Speed Rotorcraft Propulsion System Modeling (CVSRPM) tool developed under a NASA-funded NRA program is utilized to simulate the drive system dynamics. In this study, a sequential shifting control strategy is analyzed under a steady forward cruise condition. This investigation attempts to build upon previous variable-speed rotorcraft propulsion studies by 1) including a fully nonlinear transient gas-turbine engine model, 2) including clutch stick-slip friction effects, 3) including shaft flexibility, and 4) incorporating a basic flight dynamics model to account for interactions with the flight control system. Through exploring the interactions between the various subsystems, this analysis provides important insights into the continuing development of variable-speed rotorcraft propulsion systems.
NASA Technical Reports Server (NTRS)
Kalnay, E.; Balgovind, R.; Chao, W.; Edelmann, D.; Pfaendtner, J.; Takacs, L.; Takano, K.
1983-01-01
Volume 3 of a three-volume technical memorandum containing documentation of the GLAS fourth-order general circulation model is presented. The volume contains the CYBER 205 scalar and vector codes of the model, a list of variables, and cross references. A dictionary of FORTRAN variables used in the scalar version, and listings of the FORTRAN code compiled with the C-option, are included. Cross-reference maps of local variables are included for each subroutine.
Myszkowska, Dorota
2013-03-01
The aim of the study was to construct a model forecasting the birch pollen season characteristics in Cracow on the basis of an 18-year data series. The study was performed using the volumetric method (Lanzoni/Burkard trap). The 98/95% method was used to calculate the pollen season. Spearman's correlation test was applied to find the relationship between meteorological parameters and pollen season characteristics. To construct the predictive model, backward stepwise multiple regression analysis was used, taking into account the multi-collinearity of variables. The predictive models best fitted the pollen season start and end, especially models containing two independent variables. The peak concentration value was predicted with a higher prediction error. The accuracy of the models predicting the pollen season characteristics in 2009 was also higher in comparison with 2010. Both the multi-variable model and the one-variable model for the beginning of the pollen season included air temperature during the last 10 days of February, while the multi-variable model also included humidity at the beginning of April. The models forecasting the end of the pollen season were based on temperature in March-April, while the peak day was predicted using temperature during the last 10 days of March.
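The Spearman correlation used above to relate meteorological parameters to season characteristics is just a Pearson correlation computed on ranks (with average ranks for ties). A minimal stdlib sketch with illustrative data, not the Cracow series:

```python
import math

def ranks(xs):
    """Average ranks (1-based), assigning tied values their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1  # extend over the tie group
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)

def spearman(x, y):
    return pearson(ranks(x), ranks(y))

# Hypothetical late-February temperatures vs. season-start day of year
temps = [2.1, 3.4, 3.4, 5.0, 6.2]
onsets = [100, 96, 95, 90, 85]
rho = spearman(temps, onsets)
```

A strongly negative `rho` here mirrors the usual finding that warmer late winters bring an earlier pollen season start.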
Natural variability of marine ecosystems inferred from a coupled climate to ecosystem simulation
NASA Astrophysics Data System (ADS)
Le Mézo, Priscilla; Lefort, Stelly; Séférian, Roland; Aumont, Olivier; Maury, Olivier; Murtugudde, Raghu; Bopp, Laurent
2016-01-01
This modeling study analyzes the simulated natural variability of pelagic ecosystems in the North Atlantic and North Pacific. Our model system includes a global Earth System Model (IPSL-CM5A-LR), the biogeochemical model PISCES and the ecosystem model APECOSM that simulates upper trophic level organisms using a size-based approach and three interactive pelagic communities (epipelagic, migratory and mesopelagic). Analyzing an idealized (e.g., no anthropogenic forcing) 300-yr long pre-industrial simulation, we find that low and high frequency variability is dominant for the large and small organisms, respectively. Our model shows that the size-range exhibiting the largest variability at a given frequency, defined as the resonant range, also depends on the community. At a given frequency, the resonant range of the epipelagic community includes larger organisms than that of the migratory community and similarly, the latter includes larger organisms than the resonant range of the mesopelagic community. This study shows that the simulated temporal variability of marine pelagic organisms' abundance is not only influenced by natural climate fluctuations but also by the structure of the pelagic community. As a consequence, the size- and community-dependent response of marine ecosystems to climate variability could impact the sustainability of fisheries in a warming world.
Modeling Predictors of Duties Not Including Flying Status.
Tvaryanas, Anthony P; Griffith, Converse
2018-01-01
The purpose of this study was to reuse available datasets to conduct an analysis of potential predictors of U.S. Air Force aircrew nonavailability in terms of being in "duties not to include flying" (DNIF) status. This study was a retrospective cohort analysis of U.S. Air Force aircrew on active duty during the period from 2003-2012. Predictor variables included age, Air Force Specialty Code (AFSC), clinic location, diagnosis, gender, pay grade, and service component. The response variable was DNIF duration. Nonparametric methods were used for the exploratory analysis and parametric methods were used for model building and statistical inference. Out of a set of 783 potential predictor variables, 339 variables were identified from the nonparametric exploratory analysis for inclusion in the parametric analysis. Of these, 54 variables had significant associations with DNIF duration in the final model fitted to the validation data set. The predicted results of this model for DNIF duration had a correlation of 0.45 with the actual number of DNIF days. Predictor variables included age, 6 AFSCs, 7 clinic locations, and 40 primary diagnosis categories. Specific demographic (i.e., age), occupational (i.e., AFSC), and health (i.e., clinic location and primary diagnosis category) DNIF drivers were identified. Subsequent research should focus on the application of primary, secondary, and tertiary prevention measures to ameliorate the potential impact of these DNIF drivers where possible. Tvaryanas AP, Griffith C Jr. Modeling predictors of duties not including flying status. Aerosp Med Hum Perform. 2018; 89(1):52-57.
Ground-Based Telescope Parametric Cost Model
NASA Technical Reports Server (NTRS)
Stahl, H. Philip; Rowell, Ginger Holmes
2004-01-01
A parametric cost model for ground-based telescopes is developed using multi-variable statistical analysis. The model includes both engineering and performance parameters. While diameter continues to be the dominant cost driver, other significant factors include primary mirror radius of curvature and diffraction-limited wavelength. The model includes an explicit factor for primary mirror segmentation and/or duplication (i.e., multi-telescope phased-array systems). Additionally, single-variable models based on aperture diameter are derived. This analysis indicates that recent mirror technology advances have indeed reduced the historical telescope cost curve.
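Single-variable telescope cost models of this kind are conventionally power laws in aperture diameter, fit by ordinary least squares in log-log space. A sketch with made-up cost data; the exponent and coefficient of the actual NASA model are not reproduced here:

```python
import math

def fit_power_law(diams, costs):
    """Fit cost = a * D**b by least squares on log-transformed data."""
    xs = [math.log(d) for d in diams]
    ys = [math.log(c) for c in costs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # OLS slope in log-log space is the power-law exponent b
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - b * mx)
    return a, b

# Synthetic data generated from cost = 3 * D**2.5 (hypothetical law)
diams = [1.0, 2.0, 4.0, 8.0]
costs = [3.0 * d ** 2.5 for d in diams]
a, b = fit_power_law(diams, costs)
```

On noise-free synthetic data the fit recovers the generating coefficients exactly, which is a useful sanity check before applying the same regression to real cost records.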
Beck, J D; Weintraub, J A; Disney, J A; Graves, R C; Stamm, J W; Kaste, L M; Bohannan, H M
1992-12-01
The purpose of this analysis is to compare three different statistical models for predicting children likely to be at risk of developing dental caries over a 3-yr period. Data are based on 4117 children who participated in the University of North Carolina Caries Risk Assessment Study, a longitudinal study conducted in the Aiken, South Carolina, and Portland, Maine areas. The three models differed with respect to either the types of variables included or the definition of disease outcome. The two "Prediction" models included both risk-factor variables thought to cause dental caries and indicator variables that are associated with dental caries but are not thought to be causal for the disease. The "Etiologic" model included only etiologic factors as variables. A dichotomous outcome measure (none versus any 3-yr increment) was used in the "Any Risk Etiologic Model" and the "Any Risk Prediction Model". Another outcome, based on a gradient measure of disease, was used in the "High Risk Prediction Model". The variables that are significant in these models vary across grades and sites, but are more consistent in the Etiologic model than in the Prediction models. However, among the three sets of models, the Any Risk Prediction Models have the highest sensitivity and positive predictive values, whereas the High Risk Prediction Models have the highest specificity and negative predictive values. Considerations in determining model preference are discussed.
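The sensitivity, specificity, and predictive-value trade-offs that distinguish the Any Risk and High Risk models are all functions of a 2×2 table of predicted versus observed caries increment. A sketch with hypothetical counts, not the study's data:

```python
def screening_metrics(tp, fp, fn, tn):
    """Screening metrics from a 2x2 classification table."""
    return {
        "sensitivity": tp / (tp + fn),  # children with increment who were flagged
        "specificity": tn / (tn + fp),  # children without increment not flagged
        "ppv": tp / (tp + fp),          # flagged children who developed caries
        "npv": tn / (tn + fn),          # unflagged children who stayed caries-free
    }

# Hypothetical screening results for 200 children
m = screening_metrics(tp=40, fp=10, fn=20, tn=130)
```

A model tuned for high sensitivity (catching every at-risk child) typically sacrifices specificity, and vice versa, which is exactly the pattern the comparison above reports.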
Multilevel and Latent Variable Modeling with Composite Links and Exploded Likelihoods
ERIC Educational Resources Information Center
Rabe-Hesketh, Sophia; Skrondal, Anders
2007-01-01
Composite links and exploded likelihoods are powerful yet simple tools for specifying a wide range of latent variable models. Applications considered include survival or duration models, models for rankings, small area estimation with census information, models for ordinal responses, item response models with guessing, randomized response models,…
Modeling heart rate variability including the effect of sleep stages
NASA Astrophysics Data System (ADS)
Soliński, Mateusz; Gierałtowski, Jan; Żebrowski, Jan
2016-02-01
We propose a model for heart rate variability (HRV) of a healthy individual during sleep with the assumption that the heart rate variability is predominantly a random process. Autonomic nervous system activity has different properties during different sleep stages, and this affects many physiological systems including the cardiovascular system. Different properties of HRV can be observed during each particular sleep stage. We believe that taking into account the sleep architecture is crucial for modeling human nighttime HRV. The stochastic model of HRV introduced by Kantelhardt et al. was used as the initial starting point. We studied the statistical properties of sleep in healthy adults, analyzing 30 polysomnographic recordings, which provided realistic information about sleep architecture. Next, we generated synthetic hypnograms and included them in the modeling of nighttime RR interval series. The results of standard HRV linear analysis and of nonlinear analysis (Shannon entropy, Poincaré plots, and multiscale multifractal analysis) show that, in comparison with real data, the HRV signals obtained from our model have very similar properties, in particular including the multifractal characteristics at different time scales. The model described in this paper is discussed in the context of normal sleep. However, its construction is such that it should allow modeling of heart rate variability in sleep disorders. This possibility is briefly discussed.
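A toy version of the paper's central idea, that RR-interval statistics should switch with the hypnogram, can be sketched by drawing beats from stage-dependent distributions. The per-stage means and standard deviations below are invented placeholders, not the fitted values of the Kantelhardt-style model:

```python
import random

# Hypothetical per-stage mean RR interval (s) and SD; real values would
# be estimated from polysomnographic recordings.
STAGE_PARAMS = {
    "wake":  (0.80, 0.05),
    "rem":   (0.85, 0.06),
    "light": (0.95, 0.04),
    "deep":  (1.00, 0.02),
}

def synth_rr(hypnogram, beats_per_epoch=30, seed=42):
    """Generate an RR-interval series whose mean and variability
    switch with the sleep stage given by the hypnogram."""
    rng = random.Random(seed)
    rr = []
    for stage in hypnogram:
        mu, sd = STAGE_PARAMS[stage]
        rr.extend(rng.gauss(mu, sd) for _ in range(beats_per_epoch))
    return rr

series = synth_rr(["wake", "light", "deep", "rem"])
```

The full model additionally reproduces correlations within and across stages; this sketch only captures the stage-switching structure that the synthetic hypnograms provide.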
Wheeler, David C; Czarnota, Jenna; Jones, Resa M
2017-01-01
Socioeconomic status (SES) is often considered a risk factor for health outcomes. SES is typically measured using individual variables of educational attainment, income, housing, and employment variables or a composite of these variables. Approaches to building the composite variable include using equal weights for each variable or estimating the weights with principal components analysis or factor analysis. However, these methods do not consider the relationship between the outcome and the SES variables when constructing the index. In this project, we used weighted quantile sum (WQS) regression to estimate an area-level SES index and its effect in a model of colonoscopy screening adherence in the Minnesota-Wisconsin Metropolitan Statistical Area. We considered several specifications of the SES index including using different spatial scales (e.g., census block group-level, tract-level) for the SES variables. We found a significant positive association (odds ratio = 1.17, 95% CI: 1.15-1.19) between the SES index and colonoscopy adherence in the best fitting model. The model with the best goodness-of-fit included a multi-scale SES index with 10 variables at the block group-level and one at the tract-level, with home ownership, race, and income among the most important variables. Contrary to previous index construction, our results were not consistent with an assumption of equal importance of variables in the SES index when explaining colonoscopy screening adherence. Our approach is applicable in any study where an SES index is considered as a variable in a regression model and the weights for the SES variables are not known in advance.
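The core of a weighted quantile sum index, scoring each component variable into quantiles and combining the scores with non-negative weights that sum to one, can be sketched in a few lines. The weights here are arbitrary; in WQS regression they are estimated jointly with the outcome model rather than fixed in advance:

```python
def quantile_scores(values, q=4):
    """Map raw values to integer quantile scores 0..q-1 (quartiles by default)."""
    srt = sorted(values)
    cuts = [srt[len(srt) * k // q] for k in range(1, q)]
    return [sum(v >= c for c in cuts) for v in values]

def wqs_index(variables, weights):
    """Weighted sum of quantile-scored component variables."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    scored = [quantile_scores(v) for v in variables]
    n = len(variables[0])
    return [sum(w * s[i] for w, s in zip(weights, scored)) for i in range(n)]

# Two hypothetical area-level SES components for eight census units
income = [10, 20, 30, 40, 50, 60, 70, 80]
edu    = [80, 70, 60, 50, 40, 30, 20, 10]
index = wqs_index([income, edu], weights=[0.7, 0.3])
```

Because the components are quantile-scored before weighting, the index is robust to the components' scales and skew, which is one reason the approach is attractive for mixed SES variables.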
Characterizing Uncertainty and Variability in PBPK Models ...
Mode-of-action based risk and safety assessments can rely upon tissue dosimetry estimates in animals and humans obtained from physiologically-based pharmacokinetic (PBPK) modeling. However, risk assessment also increasingly requires characterization of uncertainty and variability; such characterization for PBPK model predictions represents a continuing challenge to both modelers and users. Current practices show significant progress in specifying deterministic biological models and the non-deterministic (often statistical) models, estimating their parameters using diverse data sets from multiple sources, and using them to make predictions and characterize uncertainty and variability. The International Workshop on Uncertainty and Variability in PBPK Models, held Oct 31-Nov 2, 2006, sought to identify the state-of-the-science in this area and recommend priorities for research and changes in practice and implementation. For the short term, these include: (1) multidisciplinary teams to integrate deterministic and non-deterministic/statistical models; (2) broader use of sensitivity analyses, including for structural and global (rather than local) parameter changes; and (3) enhanced transparency and reproducibility through more complete documentation of the model structure(s) and parameter values, the results of sensitivity and other analyses, and supporting, discrepant, or excluded data. Longer-term needs include: (1) theoretic and practical methodological impro
Selection of Variables in Cluster Analysis: An Empirical Comparison of Eight Procedures
ERIC Educational Resources Information Center
Steinley, Douglas; Brusco, Michael J.
2008-01-01
Eight different variable selection techniques for model-based and non-model-based clustering are evaluated across a wide range of cluster structures. It is shown that several methods have difficulties when non-informative variables (i.e., random noise) are included in the model. Furthermore, the distribution of the random noise greatly impacts the…
State variable modeling of the integrated engine and aircraft dynamics
NASA Astrophysics Data System (ADS)
Rotaru, Constantin; Sprinţu, Iuliana
2014-12-01
This study explores the dynamic characteristics of the combined aircraft-engine system, based on the general theory of the state variables for linear and nonlinear systems, with details leading first to the separate formulation of the longitudinal and the lateral directional state variable models, followed by the merging of the aircraft and engine models into a single state variable model. The linearized equations were expressed in a matrix form and the engine dynamics was included in terms of variation of thrust following a deflection of the throttle. The linear model of the shaft dynamics for a two-spool jet engine was derived by extending the one-spool model. The results include the discussion of the thrust effect upon the aircraft response when the thrust force associated with the engine has a sizable moment arm with respect to the aircraft center of gravity for creating a compensating moment.
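The merging step described, combining separate aircraft and engine state variable models into a single model in which thrust lags a throttle deflection, can be sketched with block matrices (all numeric entries are placeholders, not the paper's values):

```python
import numpy as np

# Hypothetical longitudinal aircraft model, x_ac = [u, w, q, theta], with a
# thrust input; all numeric entries are illustrative placeholders.
A_ac = np.array([[-0.02,  0.10,  0.0, -9.81],
                 [-0.10, -1.20, 80.0,  0.00],
                 [ 0.00, -0.05, -2.0,  0.00],
                 [ 0.00,  0.00,  1.0,  0.00]])
B_thrust = np.array([[1.0e-3], [0.0], [2.0e-5], [0.0]])  # thrust force + moment arm

# Hypothetical first-order spool dynamics: thrust T lags the throttle command.
tau = 1.5                                  # assumed engine time constant [s]
A_en = np.array([[-1.0 / tau]])
B_en = np.array([[1.0 / tau]])

# Merged state variable model with augmented state x = [x_ac; T]:
#   x_ac' = A_ac x_ac + B_thrust T,   T' = A_en T + B_en delta_throttle
A = np.block([[A_ac, B_thrust],
              [np.zeros((1, 4)), A_en]])
B = np.vstack([np.zeros((4, 1)), B_en])
print(A.shape, B.shape)
```

A two-spool engine would contribute a 2x2 `A_en` block instead, but the augmentation pattern is the same.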
Messier, Kyle P; Jackson, Laura E; White, Jennifer L; Hilborn, Elizabeth D
2015-01-01
This study assessed how landcover classification affects associations between landscape characteristics and Lyme disease rate. Landscape variables were derived from the National Land Cover Database (NLCD), including native classes (e.g., deciduous forest, developed low intensity) and aggregate classes (e.g., forest, developed). Percent of each landcover type, median income, and centroid coordinates were calculated by census tract. Regression results from individual and aggregate variable models were compared with the dispersion-parameter-based R² (Rα²) and AIC. The maximum Rα² was 0.82 and 0.83 for the best aggregate and individual model, respectively. The AICs for the best models differed by less than 0.5%. The aggregate model variables included forest, developed, agriculture, agriculture-squared, y-coordinate, y-coordinate-squared, income, and income-squared. The individual model variables included deciduous forest, deciduous forest-squared, developed low intensity, pasture, y-coordinate, y-coordinate-squared, income, and income-squared. Results indicate that regional landscape models for Lyme disease rate are robust to NLCD landcover classification resolution. Published by Elsevier Ltd.
Application of variable-gain output feedback for high-alpha control
NASA Technical Reports Server (NTRS)
Ostroff, Aaron J.
1990-01-01
A variable-gain, optimal, discrete, output feedback design approach that is applied to a nonlinear flight regime is described. The flight regime covers a wide angle-of-attack range that includes stall and post-stall. The paper includes brief descriptions of the variable-gain formulation, the discrete-control structure and flight equations used to apply the design approach, and the high-performance airplane model used in the application. Both linear and nonlinear analyses are shown for a longitudinal four-model design case with angles of attack of 5, 15, 35, and 60 deg. Linear and nonlinear simulations are compared for a single-point longitudinal design at 60 deg angle of attack. Nonlinear simulations for the four-model, multi-mode, variable-gain design include a longitudinal pitch-up and pitch-down maneuver and high angle-of-attack regulation during a lateral maneuver.
Relevance of anisotropy and spatial variability of gas diffusivity for soil-gas transport
NASA Astrophysics Data System (ADS)
Schack-Kirchner, Helmer; Kühne, Anke; Lang, Friederike
2017-04-01
Models of soil gas transport generally consider neither the direction dependence of gas diffusivity nor its small-scale variability. However, in a recent study we found evidence for anisotropy favouring vertical gas diffusion in natural soils. We hypothesize that gas transport models based on diffusivities measured with soil rings are strongly influenced by both anisotropy and spatial variability, and that the use of averaged diffusivities could be misleading. To test this, we used a two-dimensional model of soil gas transport beneath compacted wheel tracks to simulate the soil-air oxygen distribution. The model was parametrized with the central tendency and variability of data obtained from soil-ring measurements, and it includes vertical parameter variability as well as variation perpendicular to the elongated wheel track. Three parametrization types were tested: (i) averaged values for the wheel track and the undisturbed soil; (ii) randomly distributed soil cells with normally distributed variability within the strata; and (iii) randomly distributed soil cells with uniformly distributed variability within the strata. Each type was run with (j) isotropic gas diffusivity and (jj) horizontally reduced gas diffusivity (by a constant factor), yielding six models in total. As expected, the different parametrizations had an important influence on the aeration state under the wheel tracks, with the strongest oxygen depletion occurring for uniformly distributed variability combined with anisotropy towards higher vertical diffusivity. This simple simulation approach clearly showed the relevance of anisotropy and spatial variability even for identical central-tendency measures of gas diffusivity. However, it does not yet consider spatial dependency of the variability, which could aggravate these effects further.
To account for anisotropy and spatial variability in gas transport models, we recommend (a) measuring soil-gas transport parameters spatially explicitly, including in different directions, and (b) using random-field stochastic models to assess the possible effects on gas-exchange models.
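The kind of two-dimensional experiment described, solving the same diffusivity field with and without reduced horizontal diffusivity, can be sketched with a small Gauss-Seidel solver (a toy grid with an assumed compacted low-diffusivity band; all values are illustrative):

```python
import numpy as np

def steady_o2(D, fx, S=2e-4, n_sweeps=2000):
    """Steady-state d/dx(fx*D dc/dx) + d/dz(D dc/dz) = S by Gauss-Seidel.
    Top boundary fixed at c = 1 (atmosphere); other sides are no-flux.
    fx < 1 reduces horizontal diffusivity (anisotropy)."""
    nz, nx = D.shape
    c = np.ones((nz, nx))
    for _ in range(n_sweeps):
        for i in range(1, nz - 1):
            for j in range(nx):
                jl, jr = max(j - 1, 0), min(j + 1, nx - 1)
                du = 0.5 * (D[i, j] + D[i - 1, j])
                dd = 0.5 * (D[i, j] + D[i + 1, j])
                dl = fx * 0.5 * (D[i, j] + D[i, jl])
                dr = fx * 0.5 * (D[i, j] + D[i, jr])
                c[i, j] = (du * c[i - 1, j] + dd * c[i + 1, j]
                           + dl * c[i, jl] + dr * c[i, jr] - S) / (du + dd + dl + dr)
        c[-1, :] = c[-2, :]                  # no-flux bottom boundary
    return c

nz = nx = 16
D = np.full((nz, nx), 1.0)
D[:5, 5:11] = 0.05        # compacted wheel-track band near the surface

c_iso = steady_o2(D, fx=1.0)   # isotropic diffusivity
c_ani = steady_o2(D, fx=0.3)   # horizontally reduced diffusivity

# Less lateral resupply under anisotropy -> stronger O2 depletion below track.
print(round(float(c_iso[6, 8]), 3), round(float(c_ani[6, 8]), 3))
```

Random-field variability within strata would replace the uniform `D` values with per-cell draws, as in parametrization types (ii) and (iii).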
Werneke, Mark W; Edmond, Susan; Deutscher, Daniel; Ward, Jason; Grigsby, David; Young, Michelle; McGill, Troy; McClenahan, Brian; Weinberg, Jon; Davidow, Amy L
2016-09-01
Study Design Retrospective cohort. Background Patient-classification subgroupings may be important prognostic factors explaining outcomes. Objectives To determine effects of adding classification variables (McKenzie syndrome and pain patterns, including centralization and directional preference; Symptom Checklist Back Pain Prediction Model [SCL BPPM]; and the Fear-Avoidance Beliefs Questionnaire subscales of work and physical activity) to a baseline risk-adjusted model predicting functional status (FS) outcomes. Methods Consecutive patients completed a battery of questionnaires that gathered information on 11 risk-adjustment variables. Physical therapists trained in Mechanical Diagnosis and Therapy methods classified each patient by McKenzie syndromes and pain pattern. Functional status was assessed at discharge by patient-reported outcomes. Only patients with complete data were included. Risk of selection bias was assessed. Prediction of discharge FS was assessed using linear stepwise regression models, allowing 13 variables to enter the model. Significant variables were retained in subsequent models. Model power (R(2)) and beta coefficients for model variables were estimated. Results Two thousand sixty-six patients with lumbar impairments were evaluated. Of those, 994 (48%), 10 (<1%), and 601 (29%) were excluded due to incomplete psychosocial data, McKenzie classification data, and missing FS at discharge, respectively. The final sample for analyses was 723 (35%). Overall R(2) for the baseline prediction FS model was 0.40. Adding classification variables to the baseline model did not result in significant increases in R(2). McKenzie syndrome or pain pattern explained 2.8% and 3.0% of the variance, respectively. When pain pattern and SCL BPPM were added simultaneously, overall model R(2) increased to 0.44. 
Although none of these increases in R(2) were significant, some classification variables were stronger predictors compared with some other variables included in the baseline model. Conclusion The small added prognostic capabilities identified when combining McKenzie or pain-pattern classifications with the SCL BPPM classification did not significantly improve prediction of FS outcomes in this study. Additional research is warranted to investigate the importance of classification variables compared with those used in the baseline model to maximize predictive power. Level of Evidence Prognosis, level 4. J Orthop Sports Phys Ther 2016;46(9):726-741. Epub 31 Jul 2016. doi:10.2519/jospt.2016.6266.
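The incremental-R² logic used above, fit the risk-adjusted baseline, add a classification variable, and test whether R² rises significantly, can be sketched with synthetic data (sizes and coefficients are illustrative; the classification variable deliberately overlaps a baseline predictor):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300

# Baseline risk adjusters strongly predict discharge FS; the "classification"
# variable overlaps baseline predictor 0, so its unique contribution is small.
baseline = rng.normal(size=(n, 3))
classification = 0.9 * baseline[:, 0] + 0.4 * rng.normal(size=n)
fs = baseline @ np.array([2.0, 1.0, 0.5]) + 0.3 * classification \
     + 2.0 * rng.normal(size=n)

def r2(X, y):
    X1 = np.column_stack([np.ones(len(y)), X])
    resid = y - X1 @ np.linalg.lstsq(X1, y, rcond=None)[0]
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

r2_base = r2(baseline, fs)
r2_full = r2(np.column_stack([baseline, classification]), fs)

# Partial F-test for the q = 1 added predictor (5 parameters in full model).
F = (r2_full - r2_base) / 1 / ((1 - r2_full) / (n - 5))
print(round(r2_base, 3), round(r2_full, 3), round(F, 2))
```

R² can only rise when a predictor is added to a nested OLS model; the F statistic asks whether the rise exceeds what noise alone would buy.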
Hao, Yong; Sun, Xu-Dong; Yang, Qiang
2012-12-01
A variable selection strategy combined with locally linear embedding (LLE) was introduced for the analysis of complex samples by near infrared spectroscopy (NIRS). Three methods, Monte Carlo uninformative variable elimination (MCUVE), the successive projections algorithm (SPA), and MCUVE coupled with SPA, were used to eliminate redundant spectral variables. Partial least squares regression (PLSR) and LLE-PLSR were used to model the complex samples. The results showed that MCUVE can both extract effective informative variables and improve the precision of the models. Compared with PLSR models, LLE-PLSR models achieved more accurate analysis results. MCUVE combined with LLE-PLSR is an effective modeling method for quantitative NIRS analysis.
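The MCUVE idea, refit the regression on many random sample subsets and keep only variables whose coefficients are stable, can be sketched with synthetic spectra (a ridge regression vector stands in for the PLSR regression vector so the sketch stays dependency-free; sizes and the cutoff are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 80, 12

# Synthetic "spectra": only the first four channels are informative.
X = rng.normal(size=(n, p))
y = X[:, :4] @ np.array([1.5, -1.0, 0.8, 0.6]) + 0.3 * rng.normal(size=n)

def regression_vector(X, y, lam=1e-3):
    # Ridge stand-in for the PLSR regression vector (keeps the sketch small).
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# MCUVE: refit on random 70% subsets and score each variable by coefficient
# stability, |mean| / std across runs; unstable variables get eliminated.
runs = 200
coefs = np.empty((runs, p))
for r in range(runs):
    idx = rng.choice(n, size=int(0.7 * n), replace=False)
    coefs[r] = regression_vector(X[idx], y[idx])
reliability = np.abs(coefs.mean(axis=0)) / coefs.std(axis=0)

keep = reliability > np.median(reliability)   # crude cutoff for the sketch
print("kept variables:", np.flatnonzero(keep))
```

In practice the cutoff is set by adding known artificial noise variables and rejecting anything no more reliable than they are.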
Method and system to estimate variables in an integrated gasification combined cycle (IGCC) plant
Kumar, Aditya; Shi, Ruijie; Dokucu, Mustafa
2013-09-17
System and method to estimate variables in an integrated gasification combined cycle (IGCC) plant are provided. The system includes a sensor suite to measure respective plant input and output variables. An extended Kalman filter (EKF) receives sensed plant input variables and includes a dynamic model to generate a plurality of plant state estimates and a covariance matrix for the state estimates. A preemptive-constraining processor is configured to preemptively constrain the state estimates and covariance matrix to be free of constraint violations. A measurement-correction processor may be configured to correct constrained state estimates and a constrained covariance matrix based on processing of sensed plant output variables. The measurement-correction processor is coupled to update the dynamic model with corrected state estimates and a corrected covariance matrix. The updated dynamic model may be configured to estimate values for at least one plant variable not originally sensed by the sensor suite.
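One predict/constrain/correct cycle of the kind described can be sketched for a small linear system (the matrices, bounds, and measurement are illustrative placeholders, not IGCC plant values):

```python
import numpy as np

# Linear stand-in for the plant's dynamic model; state 1 must stay >= 0.
A = np.array([[1.0, 0.1], [0.0, 0.95]])   # state transition
H = np.array([[1.0, 0.0]])                # only state 0 is sensed
Q = 1e-4 * np.eye(2)                      # process noise covariance
R = np.array([[1e-2]])                    # measurement noise covariance
lo, hi = np.array([0.0, 0.0]), np.array([10.0, 5.0])  # physical bounds

x = np.array([0.2, -0.3])                 # prior estimate (state 1 infeasible)
P = 0.1 * np.eye(2)

# Predict
x = A @ x
P = A @ P @ A.T + Q

# Preemptively constrain the estimate to the feasible region
x = np.clip(x, lo, hi)

# Measurement correction from the sensed plant output
z = np.array([0.5])
S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.inv(S)
x = x + K @ (z - H @ x)
P = (np.eye(2) - K @ H) @ P

print(np.round(x, 3))
```

The patent's preemptive constraining also adjusts the covariance, not just the state, so that the correction step cannot reintroduce a violation; the clip above shows only the state half of that idea.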
Variable Density Effects in Stochastic Lagrangian Models for Turbulent Combustion
2016-07-20
The advantages of PDF methods in dealing with chemical reaction and convection are preserved irrespective of density variation. Since the density variation in a typical combustion process may be as large as a factor of seven, including variable-density effects in PDF methods is of significance. Conventionally, the strategy for modelling variable-density flows in PDF methods is similar to that used for second-moment closure models (SMCM): models are developed based on
Variable Complexity Optimization of Composite Structures
NASA Technical Reports Server (NTRS)
Haftka, Raphael T.
2002-01-01
The use of several levels of modeling in design has been dubbed variable complexity modeling. The work under the grant focused on developing variable complexity modeling strategies with emphasis on response surface techniques. Applications included design of stiffened composite plates for improved damage tolerance, the use of response surfaces for fitting weights obtained by structural optimization, and design against uncertainty using response surface techniques.
Alpha1 LASSO data bundles Lamont, OK
Gustafson, William Jr; Vogelmann, Andrew; Endo, Satoshi; Toto, Tami; Xiao, Heng; Li, Zhijin; Cheng, Xiaoping; Krishna, Bhargavi (ORCID:000000018828528X)
2016-08-03
A data bundle is a unified package consisting of LASSO LES input and output, observations, evaluation diagnostics, and model skill scores. LES input includes model configuration information and forcing data. LES output includes profile statistics and full domain fields of cloud and environmental variables. Model evaluation data consists of LES output and ARM observations co-registered on the same grid and sampling frequency. Model performance is quantified by skill scores and diagnostics in terms of cloud and environmental variables.
Structural Equations and Path Analysis for Discrete Data.
ERIC Educational Resources Information Center
Winship, Christopher; Mare, Robert D.
1983-01-01
Presented is an approach to causal models in which some or all variables are discretely measured, showing that path analytic methods permit quantification of causal relationships among variables with the same flexibility and power of interpretation as is feasible in models including only continuous variables. Examples are provided. (Author/IS)
Pérez-Hoyos, S; Sáez Zafra, M; Barceló, M A; Cambra, C; Figueiras Guzmán, A; Ordóñez, J M; Guillén Grima, F; Ocaña, R; Bellido, J; Cirera Suárez, L; López, A A; Rodríguez, V; Alcalá Nalvaiz, T; Ballester Díez, F
1999-01-01
The aim of this study is to present the protocol of analysis set out as part of the EMECAM Project, illustrating its application to the effect of pollution on mortality in the city of Valencia. The response variables considered are the daily numbers of deaths from all causes except external ones. The explanatory variables are the daily series of different pollutants (black smoke, SO2, NO2, CO, O3). Weather factors, structural factors, and weekly influenza cases are taken into account as possible confounding variables. A Poisson regression model is built for each of the four death series in two stages. In the first stage, a baseline model is fitted using the possible confounding variables. In the second stage, the pollution variables or their time lags are included, controlling residual autocorrelation by including mortality time lags. The process of fitting the baseline model is as follows: (1) include the significant sinusoidal terms up to the sixth order; (2) include the significant temperature and temperature-squared terms, with their time lags, up to the seventh order; (3) repeat this process with relative humidity; (4) add the significant terms for calendar year, daily trend, and trend squared; (5) always include the days of the week as dummy variables; (6) include holidays and the significant time lags, up to two weeks, of influenza. After reassessment of the baseline model, each of the pollutants and their time lags up to the fifth order are tested. The impact is analyzed by six-month periods, including interaction terms.
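The two-stage structure, a baseline model with seasonal terms, then pollutant terms added, can be sketched with a hand-rolled Poisson GLM on synthetic daily series (the IRLS fitter and all coefficients are illustrative; the real protocol also includes temperature, humidity, calendar, and influenza terms):

```python
import numpy as np

rng = np.random.default_rng(3)
days = np.arange(730)

# Synthetic daily series: seasonal mortality plus a small pollutant effect.
pollutant = 50 + 15 * np.sin(2 * np.pi * days / 365 + 1) + 5 * rng.normal(size=730)
mu_true = np.exp(3.0 + 0.15 * np.sin(2 * np.pi * days / 365) + 0.004 * pollutant)
deaths = rng.poisson(mu_true)

def poisson_glm(X, y, n_iter=50):
    """Poisson regression with log link, fitted by IRLS."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta = np.zeros(X1.shape[1])
    beta[0] = np.log(y.mean())                 # sane starting point
    for _ in range(n_iter):
        mu = np.exp(X1 @ beta)
        z = X1 @ beta + (y - mu) / mu          # working response
        beta = np.linalg.solve(X1.T @ (mu[:, None] * X1), X1.T @ (mu * z))
    return beta

# Stage 1 ingredient: sinusoidal seasonal terms for the baseline model.
season = np.column_stack([np.sin(2 * np.pi * days / 365),
                          np.cos(2 * np.pi * days / 365)])
# Stage 2: the pollutant series added to the baseline design.
beta = poisson_glm(np.column_stack([season, pollutant]), deaths)
rr_per_10 = np.exp(10 * beta[-1])              # relative risk per 10-unit rise
print(round(float(rr_per_10), 3))
```

With both sine and cosine terms in the design, the pollutant coefficient is identified from its non-seasonal variation, which is exactly why the protocol fits the seasonal baseline first.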
A Comparison between Multiple Regression Models and CUN-BAE Equation to Predict Body Fat in Adults
Fuster-Parra, Pilar; Bennasar-Veny, Miquel; Tauler, Pedro; Yañez, Aina; López-González, Angel A.; Aguiló, Antoni
2015-01-01
Background Because the accurate measurement of body fat (BF) is difficult, several prediction equations have been proposed. The aim of this study was to compare different multiple regression models to predict BF, including the recently reported CUN-BAE equation. Methods Multiple regression models using body mass index (BMI) and body adiposity index (BAI) as predictors of BF were compared; these models were also compared with the CUN-BAE equation. All analyses considered one sample including all participants and another including only the overweight and obese subjects. The BF reference measure was obtained using bioelectrical impedance analysis. Results The simplest models, including only BMI or BAI as independent variables, showed that BAI is a better predictor of BF. However, adding the variable sex to both models made BMI a better predictor than BAI. For both the whole group of participants and the overweight and obese group, simple models (BMI, age, and sex as variables) yielded correlations with BF similar to those of the more complex CUN-BAE (ρ = 0.87 vs. ρ = 0.86 for the whole sample and ρ = 0.88 vs. ρ = 0.89 for overweight and obese subjects, the second value in each pair being that of CUN-BAE). Conclusions There are simpler models than the CUN-BAE equation that fit BF as well as CUN-BAE does; it could therefore be considered that CUN-BAE overfits. Using a simple linear regression model, BAI as the only variable predicts BF better than BMI. However, when the sex variable is introduced, BMI becomes the indicator of choice to predict BF. PMID:25821960
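The comparison logic, rank-correlate each model's fitted BF with the reference measurement, can be sketched on synthetic data built so that sex shifts BF at a given BMI (all coefficients are invented to reproduce the qualitative pattern, not the study's estimates):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
n = 500

sex = rng.integers(0, 2, n)                 # 1 = female (illustrative coding)
bmi = rng.normal(26, 4, n)
bai = 0.6 * bmi + 8 + 3 * sex + rng.normal(0, 2, n)
bf = 1.2 * bmi + 11 * sex - 10 + rng.normal(0, 3, n)   # reference BF

def fitted(X, y):
    X1 = np.column_stack([np.ones(n), X])
    return X1 @ np.linalg.lstsq(X1, y, rcond=None)[0]

rho_bmi = spearmanr(fitted(bmi, bf), bf)[0]
rho_bai = spearmanr(fitted(bai, bf), bf)[0]
rho_bmi_sex = spearmanr(fitted(np.column_stack([bmi, sex]), bf), bf)[0]

# BAI beats BMI alone, but BMI + sex beats BAI -- the pattern reported above.
print(round(rho_bmi, 2), round(rho_bai, 2), round(rho_bmi_sex, 2))
```

BAI wins the single-variable contest here only because it already encodes part of the sex difference; once sex enters the model explicitly, BMI's tighter link to BF takes over.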
Introduction to the special section on mixture modeling in personality assessment.
Wright, Aidan G C; Hallquist, Michael N
2014-01-01
Latent variable models offer a conceptual and statistical framework for evaluating the underlying structure of psychological constructs, including personality and psychopathology. Complex structures that combine or compare categorical and dimensional latent variables can be accommodated using mixture modeling approaches, which provide a powerful framework for testing nuanced theories about psychological structure. This special series includes introductory primers on cross-sectional and longitudinal mixture modeling, in addition to empirical examples applying these techniques to real-world data collected in clinical settings. This group of articles is designed to introduce personality assessment scientists and practitioners to a general latent variable framework that we hope will stimulate new research and application of mixture models to the assessment of personality and its pathology.
BOREAS RSS-8 BIOME-BGC Model Simulations at Tower Flux Sites in 1994
NASA Technical Reports Server (NTRS)
Hall, Forrest G. (Editor); Nickeson, Jaime (Editor); Kimball, John
2000-01-01
BIOME-BGC is a general ecosystem process model designed to simulate biogeochemical and hydrologic processes across multiple scales (Running and Hunt, 1993). In this investigation, BIOME-BGC was used to estimate daily water and carbon budgets for the BOREAS tower flux sites for 1994. Carbon variables estimated by the model include gross primary production (i.e., net photosynthesis), maintenance and heterotrophic respiration, net primary production, and net ecosystem carbon exchange. Hydrologic variables estimated by the model include snowcover, evaporation, transpiration, evapotranspiration, soil moisture, and outflow. The information provided by the investigation includes input initialization and model output files for various sites in tabular ASCII format.
Empirical spatial econometric modelling of small scale neighbourhood
NASA Astrophysics Data System (ADS)
Gerkman, Linda
2012-07-01
The aim of this paper is to model small-scale neighbourhood effects in a house price model by implementing the newest methodology in spatial econometrics. A common problem when modelling house prices is that in practice it is seldom possible to obtain all the desired variables; variables capturing small-scale neighbourhood conditions are especially hard to find. If important explanatory variables are missing from the model, the omitted variables are spatially autocorrelated, and they are correlated with the explanatory variables included in the model, it can be shown that a spatial Durbin model is motivated. In an empirical application to new house price data from Helsinki, Finland, we find motivation for a spatial Durbin model, estimate it, and interpret the estimated summary measures of impacts. The analysis shows that this model structure makes it possible to capture small-scale neighbourhood effects when we know they exist but lack proper variables to measure them.
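The spatial-lag design can be sketched by building a row-standardized k-nearest-neighbour weights matrix W and adding the spatially lagged regressors WX. The sketch below stops at the SLX part fitted by OLS, since the full spatial Durbin model's endogenous ρWy term requires ML or IV estimation (all data are synthetic and coefficients invented):

```python
import numpy as np

rng = np.random.default_rng(5)
n, k = 200, 5
coords = rng.uniform(0, 10, size=(n, 2))       # hypothetical house locations

# Row-standardized k-nearest-neighbour spatial weights matrix W.
d = np.linalg.norm(coords[:, None] - coords[None, :], axis=2)
np.fill_diagonal(d, np.inf)
W = np.zeros((n, n))
for i in range(n):
    W[i, np.argsort(d[i])[:k]] = 1.0 / k

# Unobserved, spatially structured neighbourhood quality leaks into both the
# observed attribute and the price (the omitted-variable setting above).
quality = W @ rng.normal(size=n) + rng.normal(size=n)
size_m2 = 100 + 10 * rng.normal(size=n) + 5 * quality
price = 2.0 * size_m2 + 8.0 * quality + 4.0 * rng.normal(size=n)

# SLX/Durbin-style design: own regressors plus their spatial lags W @ X.
X = np.column_stack([np.ones(n), size_m2, W @ size_m2])
beta = np.linalg.lstsq(X, price, rcond=None)[0]
print(np.round(beta, 2))    # [intercept, direct effect, neighbourhood lag]
```

The WX columns are what let neighbours' observed attributes proxy for the missing neighbourhood variables.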
Suppressor Variables: The Difference between "Is" versus "Acting As"
ERIC Educational Resources Information Center
Ludlow, Larry; Klein, Kelsey
2014-01-01
Correlated predictors in regression models are a fact of life in applied social science research. The extent to which they are correlated will influence the estimates and statistics associated with the other variables they are modeled along with. These effects, for example, may include enhanced regression coefficients for the other variables--a…
Milewski, John O.; Sklar, Edward
1998-01-01
A laser welding process including: (a) using optical ray tracing to make a model of a laser beam and the geometry of a joint to be welded; (b) adjusting variables in the model to choose variables for use in making a laser weld; and (c) laser welding the joint to be welded using the chosen variables.
Nevers, M.B.; Whitman, R.L.
2008-01-01
To understand the fate and movement of Escherichia coli in beach water, numerous modeling studies have been undertaken, including mechanistic predictions of currents and plumes and empirical modeling based on hydrometeorological variables. Most approaches are limited in scope by nearshore currents or physical obstacles and data limitations; few examine the issue from a larger spatial scale. Given the similarities between variables typically included in these models, we attempted to take a broader view of E. coli fluctuations by simultaneously examining twelve beaches along 35 km of Indiana's Lake Michigan coastline that includes five point-source outfalls. The beaches had similar E. coli fluctuations, and a best-fit empirical model included two variables: wave height and an interactive term comprised of wind direction and creek turbidity. Individual beach R2 was 0.32-0.50. Data training-set results were comparable to validation results (R2 = 0.48). Amount of variation explained by the model was similar to previous reports for individual beaches. By extending the modeling approach to include more coastline distance, broader-scale spatial and temporal changes in bacteria concentrations and the influencing factors can be characterized. © 2008 American Chemical Society.
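The form of the best-fit model, wave height plus a wind-direction-by-turbidity interaction, can be sketched on synthetic beach-day records (scales and coefficients are invented, not the Indiana data):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 400

# Synthetic beach-day records (illustrative scales).
wave_height = rng.gamma(2.0, 0.4, n)                 # m
onshore_wind = np.cos(rng.uniform(0, 2 * np.pi, n))  # +1 = directly onshore
turbidity = rng.lognormal(2.0, 0.5, n)               # creek turbidity, NTU

# Interaction term: turbid creek water matters when wind pushes it ashore.
interaction = onshore_wind * turbidity

log_ecoli = (1.2 + 0.8 * wave_height + 0.05 * interaction
             + 0.6 * rng.normal(size=n))

X = np.column_stack([np.ones(n), wave_height, interaction])
beta, *_ = np.linalg.lstsq(X, log_ecoli, rcond=None)
resid = log_ecoli - X @ beta
r2 = 1 - resid @ resid / np.sum((log_ecoli - log_ecoli.mean()) ** 2)
print("R^2 =", round(float(r2), 2))
```

The interaction encodes the physical story directly: turbidity alone contributes nothing when the wind component is offshore (cosine near -1 flips its sign).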
NASA Technical Reports Server (NTRS)
Connolly, Joseph W.; Friedlander, David; Kopasakis, George
2015-01-01
This paper covers the development of an integrated nonlinear dynamic simulation for a variable cycle turbofan engine and nozzle that can be integrated with an overall vehicle Aero-Propulso-Servo-Elastic (APSE) model. A previously developed variable cycle turbofan engine model is used for this study and is enhanced here to include variable guide vanes allowing for operation across the supersonic flight regime. The primary focus of this study is to improve the fidelity of the model's thrust response by replacing the simple choked flow equation convergent-divergent nozzle model with a MacCormack method based quasi-1D model. The dynamic response of the nozzle model using the MacCormack method is verified by comparing it against a model of the nozzle using the conservation element/solution element method. A methodology is also presented for the integration of the MacCormack nozzle model with the variable cycle engine.
Influence of Body Composition on Gait Kinetics throughout Pregnancy and Postpartum Period
Branco, Marco; Santos-Rocha, Rita; Vieira, Filomena; Silva, Maria-Raquel; Aguiar, Liliana; Veloso, António P.
2016-01-01
Pregnancy leads to several changes in the body composition and morphology of women. It is not clear whether the biomechanical changes occurring in this period are due exclusively to body composition and size or to other physiological factors. The purpose was to quantify the morphology and body composition of women throughout pregnancy and in the postpartum period and to identify the contribution of these parameters to lower limb joint kinetics during gait. Eleven women were assessed longitudinally regarding anthropometric, body composition, and kinetic gait parameters. Body composition and body dimensions showed a significant increase during pregnancy and a decrease in the postpartum period. In the postpartum period, body composition was similar to the 1st trimester, except for triceps skinfold, total calf area, and body mass index, which remained higher than at the beginning of pregnancy. Regression models were developed to predict women's internal loading through anthropometric variables. Four models include variables associated with the amount of fat; four models include variables related to overall body weight; three models include fat-free mass; one model includes the shape of the trunk as a predictor variable. Changes in maternal body composition and morphology largely determine the joint kinetics of gait in pregnant women. PMID:27073713
Systems and methods for modeling and analyzing networks
Hill, Colin C; Church, Bruce W; McDonagh, Paul D; Khalil, Iya G; Neyarapally, Thomas A; Pitluk, Zachary W
2013-10-29
The systems and methods described herein utilize a probabilistic modeling framework for reverse engineering an ensemble of causal models, from data and then forward simulating the ensemble of models to analyze and predict the behavior of the network. In certain embodiments, the systems and methods described herein include data-driven techniques for developing causal models for biological networks. Causal network models include computational representations of the causal relationships between independent variables such as a compound of interest and dependent variables such as measured DNA alterations, changes in mRNA, protein, and metabolites to phenotypic readouts of efficacy and toxicity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
LaFarge, R.A.
1990-05-01
MCPRAM (Monte Carlo PReprocessor for AMEER), a computer program that uses Monte Carlo techniques to create an input file for the AMEER trajectory code, has been developed for the Sandia National Laboratories VAX and Cray computers. Users can select the number of trajectories to compute, which AMEER variables to investigate, and the type of probability distribution for each variable. Any legal AMEER input variable can be investigated anywhere in the input run stream with either a normal, uniform, or Rayleigh distribution. Users also have the option to use covariance matrices for the investigation of certain correlated variables such as booster pre-reentry errors and wind, axial force, and atmospheric models. In conjunction with MCPRAM, AMEER was modified to include the variables introduced by the covariance matrices and to include provisions for six types of fuze models. The new fuze models and the new AMEER variables are described in this report.
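The preprocessor's sampling scheme, independent draws from normal, uniform, or Rayleigh distributions plus correlated draws through a covariance matrix, can be sketched as follows (variable names and covariance values are illustrative stand-ins for AMEER inputs):

```python
import numpy as np

rng = np.random.default_rng(7)
n_traj = 1000

# Independent input variables, each with its own distribution, as in the
# preprocessor: normal, uniform, or Rayleigh (names are illustrative).
launch_mass = rng.normal(1000.0, 5.0, n_traj)          # kg
thrust_scale = rng.uniform(0.98, 1.02, n_traj)
wind_speed = rng.rayleigh(4.0, n_traj)                 # m/s

# Correlated errors drawn via the Cholesky factor of a covariance matrix
# (a 2-variable stand-in for the booster pre-reentry error covariance).
cov = np.array([[4.0, 1.5],
                [1.5, 2.0]])
L = np.linalg.cholesky(cov)
errors = rng.normal(size=(n_traj, 2)) @ L.T            # each row ~ N(0, cov)

# Each row of `runs` would become one trajectory's input case.
runs = np.column_stack([launch_mass, thrust_scale, wind_speed, errors])
print(runs.shape)
```

Multiplying standard-normal draws by the Cholesky factor is the standard way to impose a target covariance, which is presumably how a covariance-matrix option like MCPRAM's is realized.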
Kodis, Mali'o; Galante, Peter; Sterling, Eleanor J; Blair, Mary E
2018-04-26
Under the threat of ongoing and projected climate change, communities in the Pacific Islands face challenges of adapting culture and lifestyle to accommodate a changing landscape. Few models can effectively predict how biocultural livelihoods might be impacted. Here, we examine how environmental and anthropogenic factors influence an ecological niche model (ENM) for the realized niche of cultivated taro (Colocasia esculenta) in Hawaii. We created and tuned two sets of ENMs: one using only environmental variables, and one using both environmental and cultural characteristics of Hawaii. These models were projected under two different Intergovernmental Panel on Climate Change (IPCC) Representative Concentration Pathways (RCPs) for 2070. Models were selected and evaluated using average omission rate and area under the receiver operating characteristic curve (AUC). We compared optimal model predictions by comparing the percentage of taro plots predicted present and measured ENM overlap using Schoener's D statistic. The model including only environmental variables consisted of 19 Worldclim bioclimatic variables, in addition to slope, altitude, distance to perennial streams, soil evaporation, and soil moisture. The optimal model with environmental variables plus anthropogenic features also included a road density variable (which we assumed as a proxy for urbanization) and a variable indicating agricultural lands of importance to the state of Hawaii. The model including anthropogenic features performed better than the environment-only model based on omission rate, AUC, and review of spatial projections. The two models also differed in spatial projections for taro under anticipated future climate change. Our results demonstrate how ENMs including anthropogenic features can predict which areas might be best suited to plant cultivated species in the future, and how these areas could change under various climate projections. 
These predictions might inform biocultural conservation priorities and initiatives. In addition, we discuss the incongruences that arise when traditional ENM theory is applied to species whose distribution has been significantly impacted by human intervention, particularly at a fine scale relevant to biocultural conservation initiatives. © 2018 by the Ecological Society of America.
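The ENM comparison above relies on Schoener's D, which measures overlap between two suitability surfaces defined over the same grid cells. A minimal sketch of the statistic in Python (the function name and example values are illustrative, not taken from the study):

```python
def schoeners_d(suit_a, suit_b):
    """Schoener's D niche-overlap statistic between two suitability surfaces.

    Each input is a list of (unnormalized) suitability scores over the same
    grid cells. Scores are normalized to sum to 1, and
    D = 1 - 0.5 * sum(|p_i - q_i|).
    """
    total_a, total_b = sum(suit_a), sum(suit_b)
    p = [v / total_a for v in suit_a]
    q = [v / total_b for v in suit_b]
    return 1 - 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

# Identical surfaces overlap completely:
print(schoeners_d([0.2, 0.5, 0.3], [0.2, 0.5, 0.3]))  # -> 1.0
# Disjoint surfaces do not overlap at all:
print(schoeners_d([1, 0, 0], [0, 0, 1]))  # -> 0.0
```

D ranges from 0 (completely disjoint niches) to 1 (identical suitability distributions), which is what makes it a convenient summary for comparing the environment-only and environment-plus-anthropogenic projections.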
Schwartz, Jennifer; Wang, Yongfei; Qin, Li; Schwamm, Lee H; Fonarow, Gregg C; Cormier, Nicole; Dorsey, Karen; McNamara, Robert L; Suter, Lisa G; Krumholz, Harlan M; Bernheim, Susannah M
2017-11-01
The Centers for Medicare & Medicaid Services publicly reports a hospital-level stroke mortality measure that lacks stroke severity risk adjustment. Our objective was to describe novel measures of stroke mortality suitable for public reporting that incorporate stroke severity into risk adjustment. We linked data from the American Heart Association/American Stroke Association Get With The Guidelines-Stroke registry with Medicare fee-for-service claims data to develop the measures. We used logistic regression for variable selection in risk model development. We developed 3 risk-standardized mortality models for patients with acute ischemic stroke, all of which include the National Institutes of Health Stroke Scale score: one that includes other risk variables derived only from claims data (claims model); one that includes other risk variables derived from claims and clinical variables that could be obtained from electronic health record data (hybrid model); and one that includes other risk variables that could be derived only from electronic health record data (electronic health record model). The cohort used to develop and validate the risk models consisted of 188 975 hospital admissions at 1511 hospitals. The claims, hybrid, and electronic health record risk models included 20, 21, and 9 risk-adjustment variables, respectively; the C statistics were 0.81, 0.82, and 0.79, respectively (as compared with the current publicly reported model C statistic of 0.75); the risk-standardized mortality rates ranged from 10.7% to 19.0%, 10.7% to 19.1%, and 10.8% to 20.3%, respectively; the median risk-standardized mortality rate was 14.5% for all measures; and the odds of mortality for a high-mortality hospital (+1 SD) were 1.51, 1.52, and 1.52 times those for a low-mortality hospital (-1 SD), respectively. 
We developed 3 quality measures that demonstrate better discrimination than the Centers for Medicare & Medicaid Services' existing stroke mortality measure, adjust for stroke severity, and could be implemented in a variety of settings. © 2017 American Heart Association, Inc.
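The C statistics cited above quantify discrimination: the probability that a randomly chosen patient who died was assigned a higher predicted risk than a randomly chosen patient who survived. A minimal pairwise-concordance sketch (function name and example values are illustrative, not from the registry data):

```python
def c_statistic(risks, outcomes):
    """C statistic (AUC) computed by pairwise concordance.

    risks: predicted mortality risks; outcomes: 1 = died, 0 = survived.
    Over all (died, survived) pairs, counts the fraction in which the
    patient who died was assigned the higher risk (ties count 0.5).
    """
    pairs = concordant = 0.0
    for r1, y1 in zip(risks, outcomes):
        for r0, y0 in zip(risks, outcomes):
            if y1 == 1 and y0 == 0:
                pairs += 1
                if r1 > r0:
                    concordant += 1
                elif r1 == r0:
                    concordant += 0.5
    return concordant / pairs

# Perfect discrimination: every death was assigned a higher risk.
print(c_statistic([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]))  # -> 1.0
```

A value of 0.5 corresponds to chance-level discrimination, so the improvement from 0.75 (current measure) to 0.79-0.82 reported above is a meaningful gain.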
NASA Astrophysics Data System (ADS)
Häusler, K.; Hagan, M. E.; Baumgaertner, A. J. G.; Maute, A.; Lu, G.; Doornbos, E.; Bruinsma, S.; Forbes, J. M.; Gasperini, F.
2014-08-01
We report on a new source of tidal variability in the National Center for Atmospheric Research thermosphere-ionosphere-mesosphere-electrodynamics general circulation model (TIME-GCM). Lower boundary forcing of the TIME-GCM for a simulation of November-December 2009 based on 3-hourly Modern-Era Retrospective Analysis for Research and Applications (MERRA) reanalysis data includes day-to-day variations in both diurnal and semidiurnal tides of tropospheric origin. Comparison with TIME-GCM results from a heretofore standard simulation that includes climatological tropospheric tides from the Global Scale Wave Model reveals evidence of the impacts of MERRA forcing throughout the model domain, including measurable tidal variability in the TIME-GCM upper thermosphere. Additional comparisons with measurements made by the Gravity field and steady-state Ocean Circulation Explorer (GOCE) satellite show improved TIME-GCM capability to capture day-to-day variations in thermospheric density for the November-December 2009 period with the new MERRA lower boundary forcing.
Theory and design of variable conductance heat pipes
NASA Technical Reports Server (NTRS)
Marcus, B. D.
1972-01-01
A comprehensive review and analysis of all aspects of heat pipe technology pertinent to the design of self-controlled, variable conductance devices for spacecraft thermal control is presented. Subjects considered include hydrostatics, hydrodynamics, heat transfer into and out of the pipe, fluid selection, materials compatibility and variable conductance control techniques. The report includes a selected bibliography of pertinent literature, analytical formulations of various models and theories describing variable conductance heat pipe behavior, and the results of numerous experiments on the steady state and transient performance of gas controlled variable conductance heat pipes. Also included is a discussion of VCHP design techniques.
Aitkenhead, Matt J; Black, Helaina I J
2018-02-01
Using the International Centre for Research in Agroforestry-International Soil Reference and Information Centre (ICRAF-ISRIC) global soil spectroscopy database, models were developed to estimate a number of soil variables using different input data types. These input types included: (1) site data only; (2) visible-near-infrared (Vis-NIR) diffuse reflectance spectroscopy only; (3) combined site and Vis-NIR data; (4) red-green-blue (RGB) color data only; and (5) combined site and RGB color data. The models produced varying estimation accuracy, with RGB only being generally worst and spectroscopy plus site being best. However, we showed that for certain variables, estimation accuracy levels achieved with the "site plus RGB input data" were sufficiently good to provide useful estimates (r² > 0.7). These included major elements (Ca, Si, Al, Fe), organic carbon, and cation exchange capacity. Estimates for bulk density, carbon-to-nitrogen ratio (C/N), and P were moderately good, but K was not well estimated using this model type. For the "spectra plus site" model, many more variables were well estimated, including many that are important indicators for agricultural productivity and soil health. Sum of cations, electrical conductivity, Si, Ca, and Al oxides, and the C/N ratio were estimated using this approach with r² values > 0.9. This work provides a mechanism for identifying the cost-effectiveness of using different model input data, with associated costs, for estimating soil variables to required levels of accuracy.
Apparatus and method for controlling autotroph cultivation
Fuxman, Adrian M; Tixier, Sebastien; Stewart, Gregory E; Haran, Frank M; Backstrom, Johan U; Gerbrandt, Kelsey
2013-07-02
A method includes receiving at least one measurement of a dissolved carbon dioxide concentration of a mixture of fluid containing an autotrophic organism. The method also includes determining an adjustment to one or more manipulated variables using the at least one measurement. The method further includes generating one or more signals to modify the one or more manipulated variables based on the determined adjustment. The one or more manipulated variables could include a carbon dioxide flow rate, an air flow rate, a water temperature, and an agitation level for the mixture. At least one model relates the dissolved carbon dioxide concentration to one or more manipulated variables, and the adjustment could be determined by using the at least one model to drive the dissolved carbon dioxide concentration to at least one target that optimizes a goal function. The goal function could be to optimize biomass growth rate, nutrient removal, and/or lipid production.
1974-12-01
Turbofan engine performance. An AiResearch Model TFE731-2 Turbofan Engine was modified to incorporate production-type variable-geometry hardware... reliability was shown for the variable-geometry components. The TFE731, modified to include variable geometry, proved to be an inexpensive... at a Net Thrust of 3300 lbf... Variable-Cycle Engine TFE731 Exhaust-Nozzle Performance... Analytical Model Comparisons, Aerodynamic
NASA Astrophysics Data System (ADS)
Habibi, Hamed; Rahimi Nohooji, Hamed; Howard, Ian
2017-09-01
Power maximization has always been a practical consideration in wind turbines. The question of how to address optimal power capture, especially when the system dynamics are nonlinear and the actuators are subject to unknown faults, is significant. This paper studies the control methodology for variable-speed variable-pitch wind turbines including the effects of uncertain nonlinear dynamics, system fault uncertainties, and unknown external disturbances. The nonlinear model of the wind turbine is presented, and the problem of maximizing extracted energy is formulated by designing the optimal desired states. With the known system, a model-based nonlinear controller is designed; then, to handle uncertainties, the unknown nonlinearities of the wind turbine are estimated by utilizing radial basis function neural networks. The adaptive neural fault-tolerant control is designed to be passively robust to model uncertainties, disturbances including wind speed and model noise, and completely unknown actuator faults in the generator torque and pitch actuator torque. The Lyapunov direct method is employed to prove that the closed-loop system is uniformly bounded. Simulation studies are performed to verify the effectiveness of the proposed method.
Impacts of Considering Climate Variability on Investment Decisions in Ethiopia
NASA Astrophysics Data System (ADS)
Strzepek, K.; Block, P.; Rosegrant, M.; Diao, X.
2005-12-01
In Ethiopia, climate extremes, including droughts and floods, are not unusual. Monitoring the effects of these extremes, and climate variability in general, is critical for economic prediction and assessment of the country's future welfare. The focus of this study involves adding climate variability to a deterministic, mean climate-driven agro-economic model, in an attempt to understand its effects and degree of influence on general economic prediction indicators for Ethiopia. Four simulations are examined, including a baseline simulation and three investment strategies: simulations of irrigation investment, roads investment, and a combination investment of both irrigation and roads. The deterministic model is transformed into a stochastic model by dynamically adding year-to-year climate variability through climate-yield factors. Nine sets of actual, historic, variable climate data are individually assembled and implemented into the 12-year stochastic model simulation, producing an ensemble of economic prediction indicators. This ensemble allows for a probabilistic approach to planning and policy making, enabling decision makers to consider risk. The economic indicators from the deterministic and stochastic approaches, including rates of return to investments, are significantly different. The predictions of the deterministic model appreciably overestimate the future welfare of Ethiopia; the predictions of the stochastic model, utilizing actual climate data, tend to give a better semblance of what may be expected. Inclusion of climate variability is vital for proper analysis of the predictor values from this agro-economic model.
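The stochastic transformation described above, applying historical climate-yield factors to a deterministic mean-climate path to produce an ensemble of outcomes, can be sketched as follows (the function name, unit price, and discount rate are illustrative assumptions, not values from the study):

```python
def ensemble_outcomes(base_yields, climate_traces, price=1.0, rate=0.05):
    """Convert a deterministic yield path into an ensemble of outcomes.

    base_yields: mean-climate yields for each simulation year.
    climate_traces: one list of multiplicative climate-yield factors per
    historical climate trace; each trace yields one discounted-revenue total.
    """
    return [
        sum(y * f * price / (1 + rate) ** t
            for t, (y, f) in enumerate(zip(base_yields, factors)))
        for factors in climate_traces
    ]

# A mean-climate trace (all factors 1.0) reproduces the deterministic run;
# variable traces spread the ensemble around it.
print(ensemble_outcomes([100.0, 100.0], [[1.0, 1.0], [0.5, 1.2]]))
```

Summaries of the ensemble (mean, percentiles) then support the probabilistic comparison of investment strategies described in the abstract, rather than a single deterministic rate of return.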
Development of a mathematical model of the human cardiovascular system: An educational perspective
NASA Astrophysics Data System (ADS)
Johnson, Bruce Allen
A mathematical model of the human cardiovascular system will be a useful educational tool in biological sciences and bioengineering classrooms. The goal of this project is to develop a mathematical model of the human cardiovascular system that responds appropriately to variations of significant physical variables. Model development is based on standard fluid statics and dynamics principles, pressure-volume characteristics of the cardiac cycle, and compliant behavior of blood vessels. Cardiac cycle phases provide the physical and logical model structure, and Boolean algebra links model sections. The model is implemented using VisSim, a highly intuitive and easily learned block diagram modeling software package. Comparisons of model predictions of key variables to published values suggest that the model reasonably approximates expected behavior of those variables. The model responds plausibly to variations of independent variables. Projected usefulness of the model as an educational tool is threefold: independent variables which determine heart function may be easily varied to observe cause and effect; the model is used in an interactive setting; and the relationship of governing equations to model behavior is readily viewable and intuitive. Future use of this model in classrooms may give a more reasonable indication of its value as an educational tool.* *This dissertation includes a CD that is multimedia (contains text and other applications that are not available in a printed format). The CD requires the following applications: CorelPhotoHouse, CorelWordPerfect, VisSim Viewer (included on CD), Internet access.
Analysis of model development strategies: predicting ventral hernia recurrence.
Holihan, Julie L; Li, Linda T; Askenasy, Erik P; Greenberg, Jacob A; Keith, Jerrod N; Martindale, Robert G; Roth, J Scott; Liang, Mike K
2016-11-01
There have been many attempts to identify variables associated with ventral hernia recurrence; however, it is unclear which statistical modeling approach results in models with greatest internal and external validity. We aim to assess the predictive accuracy of models developed using five common variable selection strategies to determine variables associated with hernia recurrence. Two multicenter ventral hernia databases were used. Database 1 was randomly split into "development" and "internal validation" cohorts. Database 2 was designated "external validation". The dependent variable for model development was hernia recurrence. Five variable selection strategies were used: (1) "clinical"-variables considered clinically relevant, (2) "selective stepwise"-all variables with a P value <0.20 were assessed in a step-backward model, (3) "liberal stepwise"-all variables were included and step-backward regression was performed, (4) "restrictive internal resampling," and (5) "liberal internal resampling." Variables were included with P < 0.05 for the Restrictive model and P < 0.10 for the Liberal model. A time-to-event analysis using Cox regression was performed using these strategies. The predictive accuracy of the developed models was tested on the internal and external validation cohorts using Harrell's C-statistic where C > 0.70 was considered "reasonable". The recurrence rate was 32.9% (n = 173/526; median/range follow-up, 20/1-58 mo) for the development cohort, 36.0% (n = 95/264, median/range follow-up 20/1-61 mo) for the internal validation cohort, and 12.7% (n = 155/1224, median/range follow-up 9/1-50 mo) for the external validation cohort. Internal validation demonstrated reasonable predictive accuracy (C-statistics = 0.772, 0.760, 0.767, 0.757, 0.763), while on external validation, predictive accuracy dipped precipitously (C-statistic = 0.561, 0.557, 0.562, 0.553, 0.560). 
Predictive accuracy was equally adequate on internal validation among models; however, on external validation, all five models failed to demonstrate utility. Future studies should report multiple variable selection techniques and demonstrate predictive accuracy on external data sets for model validation. Copyright © 2016 Elsevier Inc. All rights reserved.
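Harrell's C-statistic used for validation above extends pairwise concordance to censored time-to-event data: only pairs in which the subject with the shorter follow-up actually had a recurrence are comparable. A minimal sketch (function name and example values are illustrative, not from the hernia databases):

```python
def harrells_c(times, events, risks):
    """Harrell's C-statistic for a time-to-event model.

    times: follow-up times; events: 1 = recurrence observed, 0 = censored;
    risks: model risk scores (higher = recurrence expected sooner).
    A pair is comparable only if the subject with the shorter time had an
    event; it is concordant when that subject also has the higher score.
    """
    comparable = concordant = 0.0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

# Perfectly ranked: the earliest recurrence has the highest risk score.
print(harrells_c([5, 10, 20], [1, 1, 0], [0.9, 0.6, 0.1]))  # -> 1.0
```

Censored subjects (event = 0) contribute only as the longer-surviving member of a pair, which is why low external event rates, like the 12.7% above, leave fewer comparable pairs.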
Forecasting high-priority infectious disease surveillance regions: a socioeconomic model.
Chan, Emily H; Scales, David A; Brewer, Timothy F; Madoff, Lawrence C; Pollack, Marjorie P; Hoen, Anne G; Choden, Tenzin; Brownstein, John S
2013-02-01
Few researchers have assessed the relationships between socioeconomic inequality and infectious disease outbreaks at the population level globally. We use a socioeconomic model to forecast national annual rates of infectious disease outbreaks. We constructed a multivariate mixed-effects Poisson model of the number of times a given country was the origin of an outbreak in a given year. The dataset included 389 outbreaks of international concern reported in the World Health Organization's Disease Outbreak News from 1996 to 2008. The initial full model included 9 socioeconomic variables related to education, poverty, population health, urbanization, health infrastructure, gender equality, communication, transportation, and democracy, and 1 composite index. Population, latitude, and elevation were included as potential confounders. The initial model was pared down to a final model by a backwards elimination procedure. The dependent and independent variables were lagged by 2 years to allow for forecasting future rates. Among the socioeconomic variables tested, the final model included child measles immunization rate and telephone line density. The Democratic Republic of Congo, China, and Brazil were predicted to be at the highest risk for outbreaks in 2010, and Colombia and Indonesia were predicted to have the highest percentage of increase in their risk compared to their average over 1996-2008. Understanding socioeconomic factors could help improve the understanding of outbreak risk. The inclusion of the measles immunization variable suggests that ensuring adequate public health capacity is fundamental. Increased vigilance and expanding public health capacity should be prioritized in the projected high-risk regions.
Selecting the process variables for filament winding
NASA Technical Reports Server (NTRS)
Calius, E.; Springer, G. S.
1986-01-01
A model is described which can be used to determine the appropriate values of the process variables for filament winding cylinders. The process variables which can be selected by the model include the winding speed, fiber tension, initial resin degree of cure, and the temperatures applied during winding, curing, and post-curing. The effects of these process variables on the properties of the cylinder during and after manufacture are illustrated by a numerical example.
NASA Astrophysics Data System (ADS)
Paiewonsky, Pablo; Elison Timm, Oliver
2018-03-01
In this paper, we present a simple dynamic global vegetation model whose primary intended use is auxiliary to the land-atmosphere coupling scheme of a climate model, particularly one of intermediate complexity. The model simulates and provides not only important ecological variables but also some hydrological and surface energy variables that are typically either simulated by land surface schemes or else used as boundary data input for these schemes. The model formulations and their derivations are presented here, in detail. The model includes some realistic and useful features for its level of complexity, including a photosynthetic dependency on light, full coupling of photosynthesis and transpiration through an interactive canopy resistance, and a soil organic carbon dependence for bare-soil albedo. We evaluate the model's performance by running it as part of a simple land surface scheme that is driven by reanalysis data. The evaluation against observational data includes net primary productivity, leaf area index, surface albedo, and diagnosed variables relevant for the closure of the hydrological cycle. In this setup, we find that the model gives an adequate to good simulation of basic large-scale ecological and hydrological variables. Of the variables analyzed in this paper, gross primary productivity is particularly well simulated. The results also reveal the current limitations of the model. The most significant deficiency is the excessive simulation of evapotranspiration in mid- to high northern latitudes during their winter to spring transition. The model has a relative advantage in situations that require some combination of computational efficiency, model transparency and tractability, and the simulation of the large-scale vegetation and land surface characteristics under non-present-day conditions.
NASA Technical Reports Server (NTRS)
Entekhabi, D.; Eagleson, P. S.
1989-01-01
Parameterizations are developed for the representation of subgrid hydrologic processes in atmospheric general circulation models. Reasonable a priori probability density functions of the spatial variability of soil moisture and of precipitation are introduced. These are used in conjunction with the deterministic equations describing basic soil moisture physics to derive expressions for the hydrologic processes that include subgrid scale variation in parameters. The major model sensitivities to soil type and to climatic forcing are explored.
Bayesian dynamic modeling of time series of dengue disease case counts.
Martínez-Bello, Daniel Adyro; López-Quílez, Antonio; Torres-Prieto, Alexander
2017-07-01
The aim of this study is to model the association between weekly time series of dengue case counts and meteorological variables, in a high-incidence city of Colombia, applying Bayesian hierarchical dynamic generalized linear models over the period January 2008 to August 2015. Additionally, we evaluate the models' short-term performance for predicting dengue cases. The methodology uses dynamic Poisson log-link models with constant or time-varying coefficients for the meteorological variables. Calendar effects were modeled using constant or first- or second-order random walk time-varying coefficients. The meteorological variables were modeled using constant coefficients and first-order random walk time-varying coefficients. We applied Markov chain Monte Carlo simulation for parameter estimation and the deviance information criterion (DIC) for model selection. We assessed the short-term predictive performance of the selected final model at several time points within the study period using the mean absolute percentage error. The best model included first-order random walk time-varying coefficients for the calendar trend and for the meteorological variables. Beyond the computational challenges, interpreting the results requires a complete analysis of the dengue time series with respect to the parameter estimates of the meteorological effects. We found small mean absolute percentage errors for one- and two-week out-of-sample predictions at most prediction points, associated with low-volatility periods in the dengue counts. We discuss the advantages and limitations of dynamic Poisson models for studying the association between time series of dengue disease and meteorological variables. 
The key conclusion of the study is that dynamic Poisson models account for the dynamic nature of the variables involved in the modeling of time series of dengue disease, producing useful models for decision-making in public health.
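The mean absolute percentage error used above to assess out-of-sample predictions is straightforward to compute; a minimal sketch (the weekly counts shown are illustrative, and observed counts are assumed nonzero):

```python
def mape(observed, predicted):
    """Mean absolute percentage error of out-of-sample predictions.

    Assumes all observed values are nonzero (division by the observation).
    """
    return 100.0 / len(observed) * sum(
        abs(o - p) / o for o, p in zip(observed, predicted))

# Weekly dengue counts vs. one-week-ahead predictions (illustrative numbers):
print(mape([40, 50, 60], [44, 45, 60]))  # about 6.67 (% error)
```

Because the error is relative to the observed count, MAPE inflates sharply in high-volatility weeks with small counts, consistent with the study's finding of small errors during low-volatility periods.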
Jaime-González, Carlos; Acebes, Pablo; Mateos, Ana; Mezquida, Eduardo T
2017-01-01
LiDAR technology has contributed substantially to strengthening our knowledge of habitat structure-wildlife relationships, though there is an evident bias towards flying vertebrates. To bridge this gap, we investigated and compared the performance of LiDAR and field data to model habitat preferences of the wood mouse (Apodemus sylvaticus) in a Mediterranean high mountain pine forest (Pinus sylvestris). We recorded nine field and 13 LiDAR variables that were summarized by means of Principal Component Analyses (PCA). We then analyzed the wood mouse's habitat preferences using three different models based on: (i) field PC predictors; (ii) LiDAR PC predictors; and (iii) both sets of predictors in a combined model, including a variance partitioning analysis. Elevation was also included as a predictor in the three models. Our results indicate that LiDAR-derived variables were better predictors than field-based variables. The model combining both data sets slightly improved the predictive power of the model. Field-derived variables indicated that the wood mouse was positively influenced by the gradient of increasing shrub cover and negatively affected by elevation. Regarding LiDAR data, two LiDAR PCs, i.e. gradients in canopy openness and complexity in forest vertical structure, positively influenced the wood mouse, although elevation interacted negatively with the complexity in vertical structure, indicating the wood mouse's preference for plots with lower elevations but with complex forest vertical structure. The combined model was similar to the LiDAR-based model and included the gradient of shrub cover measured in the field. Variance partitioning showed that LiDAR-based variables, together with elevation, were the most important predictors and that part of the variation explained by shrub cover was shared. LiDAR-derived variables were good surrogates of the environmental characteristics explaining habitat preferences of the wood mouse. 
Our LiDAR metrics represented structural features of the forest patch, such as the presence and cover of shrubs, as well as other characteristics likely including time since perturbation, food availability and predation risk. Our results suggest that LiDAR is a promising technology for further exploring habitat preferences by small mammal communities.
Army College Fund Cost-Effectiveness Study
1990-11-01
Section A.2 presents a theory of enlistment supply to provide a basis for specifying the regression model. The model is specified in Section A.3, which... Supplementary materials are included in the final four sections. Section A.6 provides annual trends in the regression model variables. Estimates of the model... millions. A.5 Estimation of a Youth Earnings Forecasting Model: Civilian pay is an important explanatory variable in the regression model. Previous
Bayesian dynamical systems modelling in the social sciences.
Ranganathan, Shyam; Spaiser, Viktoria; Mann, Richard P; Sumpter, David J T
2014-01-01
Data arising from social systems are often highly complex, involving non-linear relationships between the macro-level variables that characterize these systems. We present a method for analyzing this type of longitudinal or panel data using differential equations. We identify the best non-linear functions that capture interactions between variables, employing Bayes factors to decide how many interaction terms should be included in the model. This method punishes overly complicated models and identifies models with the most explanatory power. We illustrate our approach on the classic example of relating democracy and economic growth, identifying non-linear relationships between these two variables. We show how multiple variables and variable lags can be accounted for and provide a toolbox in R to implement our approach.
Aspects of porosity prediction using multivariate linear regression
DOE Office of Scientific and Technical Information (OSTI.GOV)
Byrnes, A.P.; Wilson, M.D.
1991-03-01
Highly accurate multiple linear regression models have been developed for sandstones of diverse compositions. Porosity reduction or enhancement processes are controlled by the fundamental variables pressure (P), temperature (T), time (t), and composition (X), where composition includes mineralogy, size, sorting, fluid composition, etc. The multiple linear regression equation, of which all linear porosity prediction models are subsets, takes the generalized form: Porosity = C0 + C1(P) + C2(T) + C3(X) + C4(t) + C5(PT) + C6(PX) + C7(Pt) + C8(TX) + C9(Tt) + C10(Xt) + C11(PTX) + C12(PXt) + C13(PTt) + C14(TXt) + C15(PTXt). The first four primary variables are often interactive, thus requiring terms involving two or more primary variables (the form shown implies interaction and not necessarily multiplication). The final terms used may also involve simple mathematical transforms such as log X, e^T, or X^2, or more complex transformations such as the Time-Temperature Index (TTI). The X term in the equation above represents a suite of compositional variables and, therefore, a fully expanded equation may include a series of terms incorporating these variables. Numerous published bivariate porosity prediction models involving P (or depth) or Tt (TTI) are effective to a degree, largely because of the high degree of colinearity between P and TTI. However, all such bivariate models ignore the unique contributions of P and Tt, as well as various X terms. These simpler models become poor predictors in regions where colinear relations change, where important variables have been ignored, or where the database does not include a sufficient range or weight distribution for the critical variables.
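The generalized linear form above is fitted by ordinary least squares once the interaction and transform terms are assembled as columns of a design matrix. A minimal sketch via the normal equations (the function name and the toy porosity surface are illustrative, not from the study):

```python
def fit_linear(X, y):
    """Ordinary least squares via the normal equations (X'X) b = X'y.

    X: rows of predictor values (include 1.0 as the first entry for the
    intercept C0, and products such as P*T for interaction terms).
    Solved by Gaussian elimination with partial pivoting.
    """
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for col in range(k):                       # forward elimination
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * k                           # back substitution
    for i in reversed(range(k)):
        coef[i] = (b[i] - sum(A[i][j] * coef[j]
                              for j in range(i + 1, k))) / A[i][i]
    return coef

# Toy surface Porosity = 30 - 2*P + 0.1*T, recovered from noise-free data:
X = [[1.0, p, t] for p in (1, 2, 3) for t in (10, 20, 30)]
y = [30 - 2 * p + 0.1 * t for _, p, t in X]
print([round(c, 6) for c in fit_linear(X, y)])  # -> [30.0, -2.0, 0.1]
```

Adding an interaction term simply means appending a column such as p * t to each row; the fitting procedure itself is unchanged, which is why the fully expanded 16-term form remains an ordinary linear regression.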
Hwang, Jeong-Hwa; Misumi, Shigeki; Curran-Everett, Douglas; Brown, Kevin K; Sahin, Hakan; Lynch, David A
2011-08-01
The aim of this study was to evaluate the prognostic implications of computed tomography (CT) and physiologic variables at baseline and on sequential evaluation in patients with fibrosing interstitial pneumonia. We identified 72 patients with fibrosing interstitial pneumonia (42 with idiopathic disease, 30 with collagen vascular disease). Pulmonary function tests and CT were performed at the time of diagnosis and again at a median follow-up of 12 months. Two chest radiologists scored the extent of specific abnormalities and overall disease on baseline and follow-up CT. Rate of survival was estimated using the Kaplan-Meier method. Three Cox proportional hazards models were constructed to evaluate the relationship between CT and physiologic variables and rate of survival: model 1 included only baseline variables, model 2 included only serial change variables, and model 3 included both baseline and serial change variables. On follow-up CT, the extent of mixed ground-glass and reticular opacities (P<0.001), pure reticular opacity (P=0.04), honeycombing (P=0.02), and overall extent of disease (P<0.001) was increased in the idiopathic group, whereas these variables remained unchanged in the collagen vascular disease group. Patients with idiopathic disease had shorter survival than those with collagen vascular disease (P=0.03). In model 1, the extent of honeycombing on baseline CT was the only independent predictor of mortality (P=0.02). In model 2, progression in honeycombing was the only predictor of mortality (P=0.005). In model 3, baseline extent of honeycombing and progression of honeycombing were the only independent predictors of mortality (P=0.001 and 0.002, respectively). Neither baseline nor serial change physiologic variables, nor the presence of collagen vascular disease, was predictive of rate of survival. 
The extent of honeycombing at baseline and its progression on follow-up CT are important determinants of rate of survival in patients with fibrosing interstitial pneumonia.
NASA Astrophysics Data System (ADS)
Wang, Lijuan; Yan, Yong; Wang, Xue; Wang, Tao
2017-03-01
Input variable selection is an essential step in the development of data-driven models for environmental, biological and industrial applications. Through input variable selection to eliminate the irrelevant or redundant variables, a suitable subset of variables is identified as the input of a model. Meanwhile, through input variable selection the complexity of the model structure is simplified and the computational efficiency is improved. This paper describes the procedures of the input variable selection for the data-driven models for the measurement of liquid mass flowrate and gas volume fraction under two-phase flow conditions using Coriolis flowmeters. Three advanced input variable selection methods, including partial mutual information (PMI), genetic algorithm-artificial neural network (GA-ANN) and tree-based iterative input selection (IIS), are applied in this study. Typical data-driven models incorporating support vector machine (SVM) are established individually based on the input candidates resulting from the selection methods. The validity of the selection outcomes is assessed through an output performance comparison of the SVM based data-driven models and sensitivity analysis. The validation and analysis results suggest that the input variables selected from the PMI algorithm provide more effective information for the models to measure liquid mass flowrate, while the IIS algorithm provides fewer but more effective variables for the models to predict gas volume fraction.
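A common building block behind filter-style input variable selection, including the PMI method named above, is the mutual information between a candidate input and the target. A plain (non-partial) plug-in estimate for discretized variables, shown as an illustrative sketch rather than the papers' actual algorithms:

```python
from collections import Counter
from math import log2

def mutual_information(x, y):
    """I(X;Y) in bits for two discrete (or binned) variable sequences.

    A simple plug-in estimate from joint and marginal frequencies; a
    filter-style selector can rank candidate inputs by this score and
    keep the most informative ones.
    """
    n = len(x)
    pxy = Counter(zip(x, y))
    px, py = Counter(x), Counter(y)
    return sum(c / n * log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

# A copied variable carries 1 bit about a fair binary target; an
# unrelated constant carries none.
target = [0, 1, 0, 1]
print(mutual_information(target, target))        # -> 1.0
print(mutual_information([7, 7, 7, 7], target))  # -> 0.0
```

Partial mutual information goes further by conditioning each candidate on the inputs already selected, so the raw score above can over-select redundant variables if used alone.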
Barton, Hugh A; Chiu, Weihsueh A; Setzer, R Woodrow; Andersen, Melvin E; Bailer, A John; Bois, Frédéric Y; Dewoskin, Robert S; Hays, Sean; Johanson, Gunnar; Jones, Nancy; Loizou, George; Macphail, Robert C; Portier, Christopher J; Spendiff, Martin; Tan, Yu-Mei
2007-10-01
Physiologically based pharmacokinetic (PBPK) models are used in mode-of-action based risk and safety assessments to estimate internal dosimetry in animals and humans. When used in risk assessment, these models can provide a basis for extrapolating between species, doses, and exposure routes or for justifying nondefault values for uncertainty factors. Characterization of uncertainty and variability is increasingly recognized as important for risk assessment; this represents a continuing challenge for both PBPK modelers and users. Current practices show significant progress in specifying deterministic biological models and nondeterministic (often statistical) models, estimating parameters using diverse data sets from multiple sources, using them to make predictions, and characterizing uncertainty and variability of model parameters and predictions. The International Workshop on Uncertainty and Variability in PBPK Models, held 31 Oct-2 Nov 2006, identified the state-of-the-science, needed changes in practice and implementation, and research priorities. For the short term, these include (1) multidisciplinary teams to integrate deterministic and nondeterministic/statistical models; (2) broader use of sensitivity analyses, including for structural and global (rather than local) parameter changes; and (3) enhanced transparency and reproducibility through improved documentation of model structure(s), parameter values, sensitivity and other analyses, and supporting, discrepant, or excluded data. 
Longer-term needs include (1) theoretical and practical methodological improvements for nondeterministic/statistical modeling; (2) better methods for evaluating alternative model structures; (3) peer-reviewed databases of parameters and covariates, and their distributions; (4) expanded coverage of PBPK models across chemicals with different properties; and (5) training and reference materials, such as case studies, bibliographies/glossaries, model repositories, and enhanced software. The multidisciplinary dialogue initiated by this Workshop will foster the collaboration, research, data collection, and training necessary to make characterizing uncertainty and variability a standard practice in PBPK modeling and risk assessment.
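As a toy illustration of propagating parameter variability through a kinetic model, consider a one-compartment bolus model with invented lognormal parameter distributions (far simpler than a real PBPK structure, but the Monte Carlo pattern is the same):

```python
import random
from math import exp

def concentration(t, dose, volume, k_elim):
    """One-compartment bolus kinetics: C(t) = (dose / V) * exp(-k * t)."""
    return dose / volume * exp(-k_elim * t)

def monte_carlo_conc(t, dose, n=1000, seed=0):
    """Sample C(t) under lognormal inter-individual variability in V and k.

    The central values (V = 42 L, k = 0.1 /h) and spreads are invented
    for illustration only.
    """
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        v = 42.0 * exp(rng.gauss(0.0, 0.2))   # volume of distribution (L)
        k = 0.1 * exp(rng.gauss(0.0, 0.3))    # elimination rate (1/h)
        samples.append(concentration(t, dose, v, k))
    return samples

samples = monte_carlo_conc(t=4.0, dose=100.0)
```

The spread of `samples` is a crude stand-in for the population variability the workshop discusses; real practice layers measured parameter distributions and covariates on a physiological compartment structure.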
Brunwasser, Steven M; Gebretsadik, Tebeb; Gold, Diane R; Turi, Kedir N; Stone, Cosby A; Datta, Soma; Gern, James E; Hartert, Tina V
2018-01-01
The International Study of Asthma and Allergies in Children (ISAAC) Wheezing Module is commonly used to characterize pediatric asthma in epidemiological studies, including nearly all airway cohorts participating in the Environmental Influences on Child Health Outcomes (ECHO) consortium. However, there is no consensus model for operationalizing wheezing severity with this instrument in explanatory research studies. Severity is typically measured using coarsely-defined categorical variables, reducing power and potentially underestimating etiological associations. More precise measurement approaches could improve testing of etiological theories of wheezing illness. We evaluated a continuous latent variable model of pediatric wheezing severity based on four ISAAC Wheezing Module items. Analyses included subgroups of children from three independent cohorts whose parents reported past wheezing: infants ages 0-2 in the INSPIRE birth cohort study (Cohort 1; n = 657), 6-7-year-old North American children from Phase One of the ISAAC study (Cohort 2; n = 2,765), and 5-6-year-old children in the EHAAS birth cohort study (Cohort 3; n = 102). Models were estimated using structural equation modeling. In all cohorts, covariance patterns implied by the latent variable model were consistent with the observed data, as indicated by non-significant χ2 goodness of fit tests (no evidence of model misspecification). Cohort 1 analyses showed that the latent factor structure was stable across time points and child sexes. In both cohorts 1 and 3, the latent wheezing severity variable was prospectively associated with wheeze-related clinical outcomes, including physician asthma diagnosis, acute corticosteroid use, and wheeze-related outpatient medical visits when adjusting for confounders. We developed an easily applicable continuous latent variable model of pediatric wheezing severity based on items from the well-validated ISAAC Wheezing Module. 
This model is prospectively associated with asthma morbidity, as demonstrated in two ECHO birth cohort studies, and provides a more statistically powerful method of testing etiologic hypotheses of childhood wheezing illness and asthma.
Variable selection in discrete survival models including heterogeneity.
Groll, Andreas; Tutz, Gerhard
2017-04-01
Several variable selection procedures are available for continuous time-to-event data. However, if time is measured in a discrete way and therefore many ties occur, models for continuous time are inadequate. We propose penalized likelihood methods that perform efficient variable selection in discrete survival modeling with explicit modeling of the heterogeneity in the population. The method is based on a combination of ridge and lasso type penalties that are tailored to the case of discrete survival. The performance is studied in simulation studies and an application to the birth of the first child.
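Discrete survival models of this kind are commonly fit by expanding each subject into one row per period at risk and applying a (here, penalized) binary regression to the expanded data. A sketch of the expansion step alone, with hypothetical records:

```python
def person_period(records):
    """Expand (subject_id, observed_time, event) into person-period rows.

    Each subject contributes one row per discrete period survived,
    with y = 1 only in the final period if the event occurred.
    """
    rows = []
    for sid, time, event in records:
        for t in range(1, time + 1):
            y = 1 if (event and t == time) else 0
            rows.append((sid, t, y))
    return rows

# subject "a" has the event in period 3; subject "b" is censored after period 2
rows = person_period([("a", 3, 1), ("b", 2, 0)])
```

The ridge/lasso penalties of the paper would then be applied to the coefficients of a logistic (or similar) model fit on `rows`.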
Hamann, Hendrik F.; Hwang, Youngdeok; van Kessel, Theodore G.; Khabibrakhmanov, Ildar K.; Muralidhar, Ramachandran
2016-10-18
A method and a system to perform multi-model blending are described. The method includes obtaining one or more sets of predictions of historical conditions, the historical conditions corresponding with a time T that is historical in reference to current time, and the one or more sets of predictions of the historical conditions being output by one or more models. The method also includes obtaining actual historical conditions, the actual historical conditions being measured conditions at the time T, assembling a training data set including designating the one or more sets of predictions of historical conditions as predictor variables and the actual historical conditions as response variables, and training a machine learning algorithm based on the training data set. The method further includes obtaining a blended model based on the machine learning algorithm.
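A minimal stand-in for the blending step: treat each model's historical predictions as predictor variables, the measured conditions as responses, and solve for combination weights in closed form (a real system would use a richer machine learning algorithm; the numbers are invented):

```python
def blend_weights(p1, p2, actual):
    """Least-squares weights (w1, w2) minimizing ||w1*p1 + w2*p2 - actual||.

    Solves the 2x2 normal equations directly (no intercept term).
    """
    a11 = sum(x * x for x in p1)
    a12 = sum(x * y for x, y in zip(p1, p2))
    a22 = sum(y * y for y in p2)
    b1 = sum(x * z for x, z in zip(p1, actual))
    b2 = sum(y * z for y, z in zip(p2, actual))
    det = a11 * a22 - a12 * a12
    return (b1 * a22 - b2 * a12) / det, (a11 * b2 - a12 * b1) / det

# model 1 runs warm, model 2 runs cold; the truth is their midpoint,
# so both weights come out 0.5
p1 = [11.0, 21.0, 31.0]
p2 = [9.0, 19.0, 29.0]
actual = [10.0, 20.0, 30.0]
w1, w2 = blend_weights(p1, p2, actual)
```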
Jackson, B Scott
2004-10-01
Many different types of integrate-and-fire models have been designed in order to explain how it is possible for a cortical neuron to integrate over many independent inputs while still producing highly variable spike trains. Within this context, the variability of spike trains has been almost exclusively measured using the coefficient of variation of interspike intervals. However, another important statistical property that has been found in cortical spike trains and is closely associated with their high firing variability is long-range dependence. We investigate the conditions, if any, under which such models produce output spike trains with both interspike-interval variability and long-range dependence similar to those that have previously been measured from actual cortical neurons. We first show analytically that a large class of high-variability integrate-and-fire models is incapable of producing such outputs based on the fact that their output spike trains are always mathematically equivalent to renewal processes. This class of models subsumes a majority of previously published models, including those that use excitation-inhibition balance, correlated inputs, partial reset, or nonlinear leakage to produce outputs with high variability. Next, we study integrate-and-fire models that have (non-Poissonian) renewal point process inputs instead of the Poisson point process inputs used in the preceding class of models. The confluence of our analytical and simulation results implies that the renewal-input model is capable of producing high variability and long-range dependence comparable to that seen in spike trains recorded from cortical neurons, but only if the interspike intervals of the inputs have infinite variance, a physiologically unrealistic condition. Finally, we suggest a new integrate-and-fire model that does not suffer any of the previously mentioned shortcomings.
By analyzing simulation results for this model, we show that it is capable of producing output spike trains with interspike-interval variability and long-range dependence that match empirical data from cortical spike trains. This model is similar to the other models in this study, except that its inputs are fractional-gaussian-noise-driven Poisson processes rather than renewal point processes. In addition to this model's success in producing realistic output spike trains, its inputs have long-range dependence similar to that found in most subcortical neurons in sensory pathways, including the inputs to cortex. Analysis of output spike trains from simulations of this model also shows that a tight balance between the amounts of excitation and inhibition at the inputs to cortical neurons is not necessary for high interspike-interval variability at their outputs. Furthermore, in our analysis of this model, we show that the superposition of many fractional-gaussian-noise-driven Poisson processes does not approximate a Poisson process, which challenges the common assumption that the total effect of a large number of inputs on a neuron is well represented by a Poisson process.
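The interspike-interval variability measure used throughout, the coefficient of variation (CV), can be reproduced with a bare-bones leaky integrate-and-fire simulation. The parameters below are invented and the input is a simple Bernoulli approximation of a Poisson train, not the fractional-gaussian-noise-driven inputs of the proposed model:

```python
import random
from statistics import mean, stdev

def lif_isis(n_steps=200000, dt=1e-4, tau=0.02, v_th=0.18,
             rate=1000.0, w=0.01, seed=1):
    """Leaky integrate-and-fire neuron; returns interspike intervals (s).

    At each step an input spike arrives with probability rate * dt
    (Bernoulli approximation of a Poisson process).
    """
    rng = random.Random(seed)
    v, last_spike, isis = 0.0, 0.0, []
    for i in range(n_steps):
        v -= dt * v / tau                # membrane leak
        if rng.random() < rate * dt:     # excitatory input spike
            v += w
        if v >= v_th:                    # threshold crossing: spike and reset
            t = i * dt
            isis.append(t - last_spike)
            last_spike, v = t, 0.0
    return isis

isis = lif_isis()
cv = stdev(isis) / mean(isis)  # coefficient of variation of the ISIs
```

The CV summarizes short-term variability only; as the abstract stresses, matching long-range dependence requires examining correlations across many ISIs, which a renewal simulation like this one cannot produce.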
Variability of the Martian thermospheric temperatures during the last 7 Martian Years
NASA Astrophysics Data System (ADS)
Gonzalez-Galindo, Francisco; Lopez-Valverde, Miguel Angel; Millour, Ehouarn; Forget, François
2014-05-01
The temperatures and densities in the Martian upper atmosphere have a significant influence over the different processes producing atmospheric escape. A good knowledge of the thermosphere and its variability is thus necessary in order to better understand and quantify the atmospheric loss to space and the evolution of the planet. Different global models have been used to study the seasonal and interannual variability of the Martian thermosphere, usually considering three solar scenarios (solar minimum, solar medium and solar maximum conditions) to take into account the solar cycle variability. However, the variability of the solar activity within the simulated period of time is not usually considered in these models. We have improved the description of the UV solar flux included in the General Circulation Model for Mars developed at the Laboratoire de Météorologie Dynamique (LMD-MGCM) in order to include its observed day-to-day variability. We have used the model to simulate the thermospheric variability during Martian Years 24 to 30, using realistic UV solar fluxes and dust opacities. The model predicts an interannual variability of the temperatures in the upper thermosphere that ranges from about 50 K during the aphelion to up to 150 K during perihelion. The seasonal variability of temperatures due to the eccentricity of the Martian orbit is modified by the variability of the solar flux within a given Martian year. The solar rotation cycle produces temperature oscillations of up to 30 K. We have also studied the response of the modeled thermosphere to the global dust storms in Martian Year 25 and Martian Year 28. The atmospheric dynamics are significantly modified by the global dust storms, which induce significant changes in the thermospheric temperatures. The response of the model to the presence of both global dust storms is in good agreement with previous modeling results (Medvedev et al., Journal of Geophysical Research, 2013).
As expected, the simulated ionosphere is also sensitive to the variability of the solar activity. Acknowledgement: Francisco González-Galindo is funded by a CSIC JAE-Doc contract financed by the European Social Fund.
NASA Astrophysics Data System (ADS)
Fernandez-del-Rincon, A.; Garcia, P.; Diez-Ibarbia, A.; de-Juan, A.; Iglesias, M.; Viadero, F.
2017-02-01
Gear transmissions remain one of the most complex mechanical systems from the point of view of noise and vibration behavior. Research on gear modeling, aimed at obtaining models capable of accurately reproducing the dynamic behavior of real gear transmissions, has spread over the last decades. Most of these models, although useful for design stages, often include simplifications that impede their application for condition monitoring purposes. Trying to fill this gap, the model presented in this paper allows us to simulate gear transmission dynamics including most of the features usually neglected by state-of-the-art models. This work presents a model capable of considering simultaneously the internal excitations due to the variable meshing stiffness (including the coupling among successive tooth pairs in contact, the non-linearity linked with the contacts between surfaces and the dissipative effects), and those excitations that are a consequence of the variable bearing compliance (including clearances or pre-loads). The model can also simulate gear dynamics in a realistic torque-dependent scenario. The proposed model combines a hybrid formulation for calculation of meshing forces with a non-linear variable compliance approach for bearings. Meshing forces are obtained by means of a double approach which combines numerical and analytical aspects. The methodology used provides a detailed description of the meshing forces, allowing their calculation even when the gear center distance is modified due to shaft and bearing flexibilities, which are unavoidable in real transmissions. On the other hand, forces at bearing level were obtained considering a variable number of supporting rolling elements, depending on the applied load and clearances. Both formulations have been developed and applied to the simulation of the vibration of a sample transmission, focusing attention on the transmitted load, friction meshing forces and bearing preloads.
A descriptivist approach to trait conceptualization and inference.
Jonas, Katherine G; Markon, Kristian E
2016-01-01
In their recent article, "How Functionalist and Process Approaches to Behavior Can Explain Trait Covariation," Wood, Gardner, and Harms (2015) underscore the need for more process-based understandings of individual differences. At the same time, the article illustrates a common error in the use and interpretation of latent variable models: namely, the misuse of models to arbitrate issues of causation and the nature of latent variables. Here, we explain how latent variables can be understood simply as parsimonious summaries of data, and how statistical inference can be based on choosing those summaries that minimize information required to represent the data using the model. Although Wood, Gardner, and Harms acknowledge this perspective, they underestimate its significance, including its importance to modeling and the conceptualization of psychological measurement. We believe this perspective has important implications for understanding individual differences in a number of domains, including current debates surrounding the role of formative versus reflective latent variables. (c) 2015 APA, all rights reserved.
Identifying populations sensitive to environmental chemicals by simulating toxicokinetic variability
We incorporate inter-individual variability, including variability across demographic subgroups, into an open-source high-throughput (HT) toxicokinetics (TK) modeling framework for use in a next-generation risk prioritization approach. Risk prioritization involves rapid triage of...
Early Clinical Manifestations Associated with Death from Visceral Leishmaniasis
de Araújo, Valdelaine Etelvina Miranda; Morais, Maria Helena Franco; Reis, Ilka Afonso; Rabello, Ana; Carneiro, Mariângela
2012-01-01
Background In Brazil, lethality from visceral leishmaniasis (VL) is high and few studies have addressed prognostic factors. This historical cohort study was designed to investigate the prognostic factors for death from VL in Belo Horizonte (Brazil). Methodology The analysis was based on data of the Reportable Disease Information System-SINAN (Brazilian Ministry of Health) relating to the clinical manifestations of the disease. During the study period (2002–2009), the SINAN changed platform from a Windows to a Net-version that differed with respect to some of the parameters collected. Multivariate logistic regression models were performed to identify variables associated with death from VL, and these were included in a prognostic score. Principal Findings Model 1 (period 2002–2009; 111 deaths from VL and 777 cured patients) included the variables present in both SINAN versions, whereas Model 2 (period 2007–2009; 49 deaths from VL and 327 cured patients) included variables common to both SINAN versions plus the additional variables included in the Net version. In Model 1, the variables significantly associated with a greater risk of death from VL were weakness (OR 2.9; 95%CI 1.3–6.4), Leishmania-HIV co-infection (OR 2.4; 95%CI 1.2–4.8) and age ≥60 years (OR 2.5; 95%CI 1.5–4.3). In Model 2, the variables were bleeding (OR 3.5; 95%CI 1.2–10.3), other associated infections (OR 3.2; 95%CI 1.3–7.8), jaundice (OR 10.1; 95%CI 3.7–27.2) and age ≥60 years (OR 3.1; 95%CI 1.4–7.1). The prognostic score was developed using the variables associated with death from VL of the latest version of the SINAN (Model 2). Its predictive performance was evaluated by sensitivity (71.4%), specificity (73.7%), positive and negative predictive values (28.9% and 94.5%) and area under the receiver operating characteristic curve (75.6%). Conclusions Knowledge regarding the factors associated with death from VL may improve clinical management of patients and contribute to lower mortality.
PMID:22347514
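The four reported performance measures follow directly from a 2x2 confusion table. The counts below are a reconstruction chosen to match the reported percentages given 49 deaths and 327 cured patients, not data taken from the paper:

```python
def prognostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from confusion counts."""
    return {
        "sensitivity": tp / (tp + fn),   # deaths correctly flagged high-risk
        "specificity": tn / (tn + fp),   # cured correctly flagged low-risk
        "ppv": tp / (tp + fp),           # of those flagged, fraction who died
        "npv": tn / (tn + fn),           # of those not flagged, fraction cured
    }

# hypothetical counts consistent with the abstract's percentages
m = prognostic_metrics(tp=35, fp=86, fn=14, tn=241)
```

With these counts, sensitivity = 35/49 ≈ 71.4% and NPV = 241/255 ≈ 94.5%, matching the reported values.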
Forecasting High-Priority Infectious Disease Surveillance Regions: A Socioeconomic Model
Chan, Emily H.; Scales, David A.; Brewer, Timothy F.; Madoff, Lawrence C.; Pollack, Marjorie P.; Hoen, Anne G.; Choden, Tenzin; Brownstein, John S.
2013-01-01
Background. Few researchers have assessed the relationships between socioeconomic inequality and infectious disease outbreaks at the population level globally. We use a socioeconomic model to forecast national annual rates of infectious disease outbreaks. Methods. We constructed a multivariate mixed-effects Poisson model of the number of times a given country was the origin of an outbreak in a given year. The dataset included 389 outbreaks of international concern reported in the World Health Organization's Disease Outbreak News from 1996 to 2008. The initial full model included 9 socioeconomic variables related to education, poverty, population health, urbanization, health infrastructure, gender equality, communication, transportation, and democracy, and 1 composite index. Population, latitude, and elevation were included as potential confounders. The initial model was pared down to a final model by a backwards elimination procedure. The dependent and independent variables were lagged by 2 years to allow for forecasting future rates. Results. Among the socioeconomic variables tested, the final model included child measles immunization rate and telephone line density. The Democratic Republic of Congo, China, and Brazil were predicted to be at the highest risk for outbreaks in 2010, and Colombia and Indonesia were predicted to have the highest percentage of increase in their risk compared to their average over 1996–2008. Conclusions. Socioeconomic factors could help improve the understanding of outbreak risk. The inclusion of the measles immunization variable suggests that ensuring adequate public health capacity is of fundamental importance. Increased vigilance and expanding public health capacity should be prioritized in the projected high-risk regions. PMID:23118271
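The backwards elimination procedure can be sketched generically; the `score` function and variable names below are invented placeholders, not the study's Poisson model or covariates:

```python
def backward_eliminate(variables, score, min_gain=0.0):
    """Generic backward elimination.

    Repeatedly drop the variable whose removal costs the least model
    quality, while that cost stays <= min_gain. `score` maps a tuple of
    variable names to a model quality (higher is better).
    """
    current = list(variables)
    while len(current) > 1:
        base = score(tuple(current))
        candidates = [(base - score(tuple(v for v in current if v != drop)), drop)
                      for drop in current]
        loss, drop = min(candidates)
        if loss > min_gain:          # every remaining variable is needed
            break
        current.remove(drop)
    return current

# toy score: only "measles" and "phones" carry signal
signal = {"measles": 0.3, "phones": 0.2, "gdp": 0.0, "latitude": 0.0}
score = lambda vs: sum(signal[v] for v in vs)
kept = backward_eliminate(["measles", "phones", "gdp", "latitude"], score)
```

In practice `score` would be a model fit criterion (e.g. penalized likelihood or AIC) rather than a lookup table.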
UAH mathematical model of the variable polarity plasma ARC welding system calculation
NASA Technical Reports Server (NTRS)
Hung, R. J.
1994-01-01
Significant advantages of Variable Polarity Plasma Arc (VPPA) welding process include faster welding, fewer repairs, less joint preparation, reduced weldment distortion, and absence of porosity. A mathematical model is presented to analyze the VPPA welding process. Results of the mathematical model were compared with the experimental observation accomplished by the GDI team.
Missing Data Treatments at the Second Level of Hierarchical Linear Models
ERIC Educational Resources Information Center
St. Clair, Suzanne W.
2011-01-01
The current study evaluated the performance of traditional versus modern MDTs in the estimation of fixed effects and variance components for data missing at the second level of a hierarchical linear model (HLM) across 24 different study conditions. Variables manipulated in the analysis included: (a) number of Level-2 variables with missing…
ERIC Educational Resources Information Center
Teo, Timothy; Milutinovic, Verica
2015-01-01
This study aims to examine the variables that influence Serbian pre-service teachers' intention to use technology to teach mathematics. Using the technology acceptance model (TAM) as the framework, we developed a research model to include subjective norm, knowledge of mathematics, and facilitating conditions as external variables to the TAM. In…
Indices and Dynamics of Global Hydroclimate Over the Past Millennium from Data Assimilation
NASA Astrophysics Data System (ADS)
Steiger, N. J.; Smerdon, J. E.
2017-12-01
Reconstructions based on data assimilation (DA) are at the forefront of model-data syntheses in that such reconstructions optimally fuse proxy data with climate models. DA-based paleoclimate reconstructions have the benefit of being physically-consistent across the reconstructed climate variables and are capable of providing dynamical information about past climate phenomena. Here we use a new implementation of DA, that includes updated proxy system models and climate model bias correction procedures, to reconstruct global hydroclimate on seasonal and annual timescales over the last millennium. This new global hydroclimate product includes reconstructions of the Palmer Drought Severity Index, the Standardized Precipitation Evapotranspiration Index, and global surface temperature along with dynamical variables including the Nino 3.4 index, the latitudinal location of the intertropical convergence zone, and an index of the Atlantic Multidecadal Oscillation. Here we present a validation of the reconstruction product and also elucidate the causes of severe drought in North America and in equatorial Africa. Specifically, we explore the connection between droughts in North America and modes of ocean variability in the Pacific and Atlantic oceans. We also link drought over equatorial Africa to shifts of the intertropical convergence zone and modes of ocean variability.
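The "optimal fusion" at the heart of DA can be illustrated in its simplest scalar form: a Kalman-style update that weights the model prior against an observation by their error variances. Actual paleoclimate DA uses ensemble covariances across many variables; the numbers below are invented:

```python
def kalman_update(x_prior, var_prior, obs, var_obs):
    """Scalar Kalman / optimal-interpolation update of a prior with one observation."""
    gain = var_prior / (var_prior + var_obs)   # weight given to the observation
    x_post = x_prior + gain * (obs - x_prior)
    var_post = (1.0 - gain) * var_prior        # uncertainty always shrinks
    return x_post, var_post

# model prior: drought-index anomaly -1.0 +/- 1.0; proxy implies -2.0 +/- 1.0
x, v = kalman_update(-1.0, 1.0, -2.0, 1.0)
```

With equal error variances the posterior lands halfway between model and proxy, with half the prior variance; proxy system models supply the mapping from climate state to proxy observation.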
Liang, Shih-Hsiung; Walther, Bruno Andreas; Shieh, Bao-Sen
2017-01-01
Biological invasions have become a major threat to biodiversity, and identifying determinants underlying success at different stages of the invasion process is essential for both prevention management and testing ecological theories. To investigate variables associated with different stages of the invasion process in a local region such as Taiwan, potential problems using traditional parametric analyses include too many variables of different data types (nominal, ordinal, and interval) and a relatively small data set with too many missing values. We therefore used five decision tree models instead and compared their performance. Our dataset contains 283 exotic bird species which were transported to Taiwan; of these 283 species, 95 species escaped to the field successfully (introduction success); of these 95 introduced species, 36 species reproduced in the field of Taiwan successfully (establishment success). For each species, we collected 22 variables associated with human selectivity and species traits which may determine success during the introduction stage and establishment stage. For each decision tree model, we performed three variable treatments: (I) including all 22 variables, (II) excluding nominal variables, and (III) excluding nominal variables and replacing ordinal values with binary ones. Five performance measures were used to compare models, namely, area under the receiver operating characteristic curve (AUROC), specificity, precision, recall, and accuracy. The gradient boosting models performed best overall among the five decision tree models for both introduction and establishment success and across variable treatments. The most important variables for predicting introduction success were the bird family, the number of invaded countries, and variables associated with environmental adaptation, whereas the most important variables for predicting establishment success were the number of invaded countries and variables associated with reproduction. 
Our final optimal models achieved relatively high performance values, and we discuss differences in performance with regard to sample size and variable treatments. Our results showed that, for both the establishment model and introduction model, the number of invaded countries was the most important or second most important determinant, respectively. Therefore, we suggest that future success for introduction and establishment of exotic birds may be gauged by simply looking at previous success in invading other countries. Finally, we found that species traits related to reproduction were more important in establishment models than in introduction models; importantly, these determinants were not averaged but either minimum or maximum values of species traits. Therefore, we suggest that in addition to averaged values, reproductive potential represented by minimum and maximum values of species traits should be considered in invasion studies.
Liang, Shih-Hsiung; Walther, Bruno Andreas
2017-01-01
Background Biological invasions have become a major threat to biodiversity, and identifying determinants underlying success at different stages of the invasion process is essential for both prevention management and testing ecological theories. To investigate variables associated with different stages of the invasion process in a local region such as Taiwan, potential problems using traditional parametric analyses include too many variables of different data types (nominal, ordinal, and interval) and a relatively small data set with too many missing values. Methods We therefore used five decision tree models instead and compared their performance. Our dataset contains 283 exotic bird species which were transported to Taiwan; of these 283 species, 95 species escaped to the field successfully (introduction success); of these 95 introduced species, 36 species reproduced in the field of Taiwan successfully (establishment success). For each species, we collected 22 variables associated with human selectivity and species traits which may determine success during the introduction stage and establishment stage. For each decision tree model, we performed three variable treatments: (I) including all 22 variables, (II) excluding nominal variables, and (III) excluding nominal variables and replacing ordinal values with binary ones. Five performance measures were used to compare models, namely, area under the receiver operating characteristic curve (AUROC), specificity, precision, recall, and accuracy. Results The gradient boosting models performed best overall among the five decision tree models for both introduction and establishment success and across variable treatments. 
The most important variables for predicting introduction success were the bird family, the number of invaded countries, and variables associated with environmental adaptation, whereas the most important variables for predicting establishment success were the number of invaded countries and variables associated with reproduction. Discussion Our final optimal models achieved relatively high performance values, and we discuss differences in performance with regard to sample size and variable treatments. Our results showed that, for both the establishment model and introduction model, the number of invaded countries was the most important or second most important determinant, respectively. Therefore, we suggest that future success for introduction and establishment of exotic birds may be gauged by simply looking at previous success in invading other countries. Finally, we found that species traits related to reproduction were more important in establishment models than in introduction models; importantly, these determinants were not averaged but either minimum or maximum values of species traits. Therefore, we suggest that in addition to averaged values, reproductive potential represented by minimum and maximum values of species traits should be considered in invasion studies. PMID:28316893
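AUROC, the headline performance measure in both records above, equals the probability that a randomly chosen success outscores a randomly chosen failure (the Mann-Whitney statistic); a small sketch with invented scores:

```python
def auroc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney statistic:
    the probability a positive outscores a negative (ties count 1/2)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# invented model scores; 1 = established successfully, 0 = not
a = auroc([0.9, 0.8, 0.4, 0.3, 0.2], [1, 1, 0, 1, 0])
```

Unlike accuracy, this measure is insensitive to the threshold chosen and to the strong class imbalance (36 established out of 95 introduced) in the dataset.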
NASA Astrophysics Data System (ADS)
Ingram, G. Walter; Alvarez-Berastegui, Diego; Reglero, Patricia; Balbín, Rosa; García, Alberto; Alemany, Francisco
2017-06-01
Fishery independent indices of bluefin tuna larvae in the Western Mediterranean Sea are presented utilizing ichthyoplankton survey data collected from 2001 through 2005 and 2012 through 2013. Indices were developed using larval catch rates collected using two different types of bongo sampling, by first standardizing catch rates by gear/fishing-style and then employing a delta-lognormal modeling approach. The delta-lognormal models were developed three ways: 1) a basic larval index including the following covariates: time of day, a systematic geographic area variable, month and year; 2) a standard environmental larval index including the following covariates: mean water temperature over the mixed layer depth, mean salinity over the mixed layer depth, geostrophic velocity, time of day, a systematic geographic area variable, month and year; and 3) a habitat-adjusted larval index including the following covariates: a potential habitat variable, time of day, a systematic geographic area variable, month and year. Results indicated that all three model-types had similar precision in index values. However, the habitat-adjusted larval index demonstrated a high correlation with estimates of spawning stock biomass from the previous stock assessment model, and, therefore, is recommended as a tuning index in future stock assessment models.
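The delta-lognormal approach factors the expected catch rate into the probability of a nonzero catch times the mean of the positive catches modeled on a log scale. A covariate-free sketch (the actual indices standardize both parts on the listed covariates such as time of day, area, month and year):

```python
from math import exp, log
from statistics import mean, variance

def delta_lognormal_index(catch_rates):
    """Unstandardized delta-lognormal mean: P(nonzero) times the lognormal
    mean exp(mu + s2/2) of the positive catch rates (no covariates)."""
    positives = [c for c in catch_rates if c > 0]
    p = len(positives) / len(catch_rates)
    logs = [log(c) for c in positives]
    mu, s2 = mean(logs), variance(logs)   # sample mean/variance of log catches
    return p * exp(mu + s2 / 2.0)

# toy larval catch rates for one stratum (zeros are empty bongo tows)
idx = delta_lognormal_index([0.0, 0.0, 1.0, 2.0, 4.0])
```

Splitting out the zeros this way avoids the undefined log of zero while still letting the frequent empty tows pull the index down.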
Modeling longitudinal data, I: principles of multivariate analysis.
Ravani, Pietro; Barrett, Brendan; Parfrey, Patrick
2009-01-01
Statistical models are used to study the relationship between exposure and disease while accounting for the potential impact of other factors on outcomes. This adjustment is useful to obtain unbiased estimates of true effects or to predict future outcomes. Statistical models include a systematic component and an error component. The systematic component explains the variability of the response variable as a function of the predictors and is summarized in the effect estimates (model coefficients). The error component of the model represents the variability in the data unexplained by the model and is used to build measures of precision around the point estimates (confidence intervals).
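In the simplest case, simple linear regression, the two components can be made concrete: the fitted line is the systematic component and the residual standard error summarizes the error component (toy numbers, closed-form fit):

```python
from math import sqrt
from statistics import mean

def simple_ols(xs, ys):
    """Fit y = b0 + b1*x; return coefficients and residual standard error."""
    xbar, ybar = mean(xs), mean(ys)
    sxx = sum((x - xbar) ** 2 for x in xs)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    b1 = sxy / sxx                      # systematic component: slope
    b0 = ybar - b1 * xbar               # systematic component: intercept
    resid = [y - (b0 + b1 * x) for x, y in zip(xs, ys)]
    rse = sqrt(sum(r * r for r in resid) / (len(xs) - 2))  # error component
    return b0, b1, rse

b0, b1, rse = simple_ols([1, 2, 3, 4], [2.1, 3.9, 6.1, 7.9])
```

Confidence intervals for `b1` are then built from `rse` scaled by `sqrt(sxx)`, which is exactly the "measures of precision around the point estimates" the abstract describes.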
Comparison of three-dimensional multi-segmental foot models used in clinical gait laboratories.
Nicholson, Kristen; Church, Chris; Takata, Colton; Niiler, Tim; Chen, Brian Po-Jung; Lennon, Nancy; Sees, Julie P; Henley, John; Miller, Freeman
2018-05-16
Many skin-mounted three-dimensional multi-segmented foot models are currently in use for gait analysis. Evidence regarding the repeatability of models, including between trials and between assessors, is mixed, and there are no between-model comparisons of kinematic results. This study explores differences in kinematics and repeatability between five three-dimensional multi-segmented foot models. The five models include duPont, Heidelberg, Oxford Child, Leardini, and Utah. Hind foot, forefoot, and hallux angles were calculated with each model for ten individuals. Two physical therapists applied markers three times to each individual to assess within- and between-therapist variability. Standard deviations were used to evaluate marker placement variability. Locally weighted regression smoothing with alpha-adjusted serial T tests analysis was used to assess kinematic similarities. All five models had similar variability; however, the Leardini model showed high standard deviations in plantarflexion/dorsiflexion angles. P-value curves for the gait cycle were used to assess kinematic similarities. The duPont and Oxford models had the most similar kinematics. All models demonstrated similar marker placement variability. Lower variability was noted in the sagittal and coronal planes compared to rotation in the transverse plane, suggesting a higher minimal detectable change when clinically considering rotation and a need for additional research. Between the five models, the duPont and Oxford shared the most kinematic similarities. While patterns of movement were very similar between all models, offsets were often present and need to be considered when evaluating published data. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Lawrence, D. M.; Fisher, R.; Koven, C.; Oleson, K. W.; Swenson, S. C.; Hoffman, F. M.; Randerson, J. T.; Collier, N.; Mu, M.
2017-12-01
The International Land Model Benchmarking (ILAMB) project is a model-data intercomparison and integration project designed to assess and help improve land models. The current package includes assessment of more than 25 land variables across more than 60 global, regional, and site-level (e.g., FLUXNET) datasets. ILAMB employs a broad range of metrics including RMSE, mean error, spatial distributions, interannual variability, and functional relationships. Here, we apply ILAMB to assess several generations of the Community Land Model (CLM4, CLM4.5, and CLM5). Encouragingly, CLM5, which is the result of model development over the last several years by more than 50 researchers from 15 different institutions, shows broad improvements across many ILAMB metrics including LAI, GPP, vegetation carbon stocks, and the historical net ecosystem carbon balance, among others. We will also show that considerable uncertainty arises from the historical climate forcing data used (GSWP3v1 and CRUNCEPv7); for many variables, ILAMB score variations due to forcing data can be as large as those due to model structural differences. Strengths and weaknesses and persistent biases across model generations will also be presented.
Wang, Zhu; Shuangge, Ma; Wang, Ching-Yun
2017-01-01
In health services and outcomes research, count outcomes are frequently encountered and often have a large proportion of zeros. The zero-inflated negative binomial (ZINB) regression model has important applications for this type of data. With many possible candidate risk factors, this paper proposes new variable selection methods for the ZINB model. We consider the maximum likelihood function plus a penalty, including the least absolute shrinkage and selection operator (LASSO), smoothly clipped absolute deviation (SCAD) and minimax concave penalty (MCP). An EM (expectation-maximization) algorithm is proposed for estimating the model parameters and conducting variable selection simultaneously. This algorithm consists of estimating penalized weighted negative binomial models and penalized logistic models via the coordinate descent algorithm. Furthermore, statistical properties including the standard error formulae are provided. A simulation study shows that the new algorithm not only gives more accurate or at least comparable estimation, but is also more robust than traditional stepwise variable selection. The proposed methods are applied to analyze the health care demand in Germany using an open-source R package mpath. PMID:26059498
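The penalized estimation step at the heart of this algorithm can be sketched with plain LASSO coordinate descent on a linear model; this is not the full ZINB EM algorithm (which alternates penalized negative binomial and logistic fits), only the soft-thresholding update that such inner fits rely on, shown here with simulated data:

```python
import numpy as np

def soft_threshold(z, lam):
    """LASSO soft-thresholding operator used inside coordinate descent."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate descent for least squares with an L1 penalty."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]           # partial residual
            beta[j] = soft_threshold(X[:, j] @ r / n, lam) / (X[:, j] @ X[:, j] / n)
    return beta

rng = np.random.default_rng(1)
n, p = 100, 10
X = rng.normal(size=(n, p))
y = 3.0 * X[:, 0] + rng.normal(scale=0.5, size=n)          # only one real predictor
beta = lasso_cd(X, y, lam=0.5)                             # noise coefficients shrink to zero
```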
Qiu, Menglong; Wang, Qi; Li, Fangbai; Chen, Junjian; Yang, Guoyi; Liu, Liming
2016-01-01
A customized logistic-based cellular automata (CA) model was developed to simulate changes in heavy metal contamination (HMC) in farmland soils of Dongguan, a manufacturing center in Southern China, and to discover the relationship between HMC and related explanatory variables (continuous and categorical). The model was calibrated through the simulation and validation of HMC in 2012. Thereafter, the model was implemented for the scenario simulation of development alternatives for HMC in 2022. The HMC in 2002 and 2012 was determined through soil tests and cokriging. Continuous variables were divided into two groups by odds ratios. Positive variables (odds ratios >1) included the Nemerow synthetic pollution index in 2002, linear drainage density, distance from the city center, distance from the railway, slope, and secondary industrial output per unit of land. Negative variables (odds ratios <1) included elevation, distance from the road, distance from the key polluting enterprises, distance from the town center, soil pH, and distance from bodies of water. Categorical variables, including soil type, parent material type, organic content grade, and land use type, also significantly influenced HMC according to Wald statistics. The relative operating characteristic and kappa coefficients were 0.91 and 0.64, respectively, which proved the validity and accuracy of the model. The scenario simulation shows that the government should not only implement stricter environmental regulation but also strengthen the remediation of the current polluted area to effectively mitigate HMC.
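The screening of continuous drivers by odds ratio can be sketched with a logistic regression fitted by gradient ascent; the predictor names below are illustrative stand-ins, not the Dongguan data:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
slope = rng.normal(size=n)               # illustrative positive driver
elevation = rng.normal(size=n)           # illustrative negative driver
logit = 0.8 * slope - 1.2 * elevation    # true log-odds of contamination change
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

X = np.column_stack([np.ones(n), slope, elevation])
beta = np.zeros(3)
for _ in range(3000):                    # gradient ascent on the mean log-likelihood
    p = 1 / (1 + np.exp(-X @ beta))
    beta += 1.0 * X.T @ (y - p) / n

odds_ratios = np.exp(beta[1:])           # >1 marks a positive variable, <1 a negative one
```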
Røislien, Jo; Søvik, Signe; Eken, Torsten
2018-01-01
Trauma is a leading global cause of death, and predicting the burden of trauma admissions is vital for good planning of trauma care. Seasonality in trauma admissions has been found in several studies. Seasonal fluctuations in daylight hours, temperature and weather affect social and cultural practices but also individual neuroendocrine rhythms that may ultimately modify behaviour and potentially predispose to trauma. The aim of the present study was to explore to what extent the observed seasonality in daily trauma admissions could be explained by changes in daylight and weather variables throughout the year. This was a retrospective registry study of trauma admissions in the 10-year period 2001-2010 at Oslo University Hospital, Ullevål, Norway, where the amount of daylight varies from less than 6 hours to almost 19 hours per day throughout the year. The daily number of admissions was analysed by fitting non-linear Poisson time series regression models, simultaneously adjusting for several layers of temporal patterns, including a non-linear long-term trend and both seasonal and weekly cyclic effects. Five daylight and weather variables were explored, including hours of daylight and amount of precipitation. Models were compared using Akaike's Information Criterion (AIC). A regression model including daylight and weather variables significantly outperformed a traditional seasonality model in terms of AIC. A cyclic week effect was significant in all models. Daylight and weather variables are better predictors of seasonality in daily trauma admissions than mere information on day of year.
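A seasonality model of this general kind can be sketched as a Poisson GLM with harmonic season terms and weekday dummies, fitted by iteratively reweighted least squares; the counts below are simulated, not the Oslo registry data:

```python
import numpy as np

rng = np.random.default_rng(3)
days = np.arange(3652)                                    # ten years of daily data
doy = days % 365.25
season = np.column_stack([np.sin(2 * np.pi * doy / 365.25),
                          np.cos(2 * np.pi * doy / 365.25)])
week = ((days % 7)[:, None] == np.arange(1, 7)).astype(float)  # weekday dummies
X = np.column_stack([np.ones(len(days)), season, week])

true_beta = np.array([1.5, 0.2, -0.1, 0.05, 0.0, -0.05, 0.0, 0.1, 0.15])
y = rng.poisson(np.exp(X @ true_beta))                    # simulated admission counts

# Iteratively reweighted least squares (Newton) for the log-link Poisson GLM
beta = np.zeros(X.shape[1])
for _ in range(25):
    mu = np.exp(X @ beta)
    z = X @ beta + (y - mu) / mu                          # working response
    beta = np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (mu * z))
```

The recovered coefficients on the sine/cosine pair capture the seasonal cycle, and the weekday dummies capture the cyclic week effect.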
Mathematical modeling to predict residential solid waste generation.
Benítez, Sara Ojeda; Lozano-Olvera, Gabriela; Morelos, Raúl Adalberto; Vega, Carolina Armijo de
2008-01-01
One of the challenges faced by waste management authorities is determining the amount of waste generated by households in order to establish waste management systems, charge rates compatible with the principle applied worldwide, and design a fair payment system for households according to the amount of residential solid waste (RSW) they generate. The goal of this research was to establish mathematical models that correlate per capita RSW generation with the following variables: education, income per household, and number of residents. This work was based on data from a three-stage study on the generation, quantification and composition of residential waste in a Mexican city. In order to define prediction models, five variables were identified and included in the model. For each waste sampling stage a different mathematical model was developed, in order to find the model showing the best linear relation for predicting residential solid waste generation. Models were then established to explore combinations of the included variables, and those showing the highest R(2) were selected. The tests applied were for normality, multicollinearity and heteroskedasticity. Another model, formulated with four variables, was generated and the Durbin-Watson test was applied to it. Finally, a general mathematical model is proposed to predict residential waste generation, which accounts for 51% of the total.
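The diagnostics mentioned (linear fit quality and the Durbin-Watson statistic for residual autocorrelation) can be sketched as follows; the predictors and coefficients are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 150
education = rng.normal(12.0, 3.0, n)             # invented household predictors
income = rng.normal(500.0, 120.0, n)
residents = rng.integers(1, 7, n)
waste = (0.2 + 0.01 * education + 0.0005 * income
         + 0.05 * residents + rng.normal(0.0, 0.1, n))   # per capita RSW

X = np.column_stack([np.ones(n), education, income, residents])
beta, *_ = np.linalg.lstsq(X, waste, rcond=None)
resid = waste - X @ beta

r2 = 1.0 - resid @ resid / np.sum((waste - waste.mean()) ** 2)
dw = np.sum(np.diff(resid) ** 2) / (resid @ resid)   # ~2 indicates no autocorrelation
```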
NASA Technical Reports Server (NTRS)
Fortenbaugh, R. L.
1980-01-01
Equations incorporated in a VATOL six degree of freedom off-line digital simulation program and data for the Vought SF-121 VATOL aircraft concept which served as the baseline for the development of this program are presented. The equations and data are intended to facilitate the development of a piloted VATOL simulation. The equation presentation format is to state the equations which define a particular model segment. Listings of constants required to quantify the model segment, input variables required to exercise the model segment, and output variables required by other model segments are included. In several instances a series of input or output variables are followed by a section number in parentheses which identifies the model segment of origination or termination of those variables.
NASA Astrophysics Data System (ADS)
Abdussalam, Auwal; Monaghan, Andrew; Dukic, Vanja; Hayden, Mary; Hopson, Thomas; Leckebusch, Gregor
2013-04-01
Northwest Nigeria is a region with high risk of bacterial meningitis. Since the first documented epidemic of meningitis in Nigeria in 1905, the disease has been endemic in the northern part of the country, with epidemics occurring regularly. In this study we examine the influence of climate on the interannual variability of meningitis incidence and epidemics. Monthly aggregate counts of clinically confirmed hospital-reported cases of meningitis were collected in northwest Nigeria for the 22-year period spanning 1990-2011. Several generalized linear statistical models were fit to the monthly meningitis counts, including generalized additive models. Explanatory variables included monthly records of temperature, humidity, rainfall, wind speed, sunshine and dustiness from the weather stations nearest to the hospitals, and a time series of polysaccharide vaccination efficacy. The effects of other confounding factors, i.e., mainly non-climatic factors for which records were not available, were estimated as a smooth, monthly-varying function of time in the generalized additive models. Results reveal that the most important explanatory climatic variables are mean maximum monthly temperature, relative humidity and dustiness. Accounting for confounding factors (e.g., social processes) in the generalized additive models explains more of the year-to-year variation of meningococcal disease than generalized linear models that do not account for such factors. Promising results from several models that included only explanatory variables preceding the meningitis case data by 1 month suggest there may be potential for prediction of meningitis in northwest Nigeria to aid decision makers on this time scale.
Mbuthia, Jackson M; Rewe, Thomas O; Kahi, Alexander K
2015-02-01
A deterministic bio-economic model was developed and applied to evaluate biological and economic variables that characterize smallholder pig production systems in Kenya. Two pig production systems were considered, namely semi-intensive (SI) and extensive (EX). The input variables were categorized into biological variables (including production and functional traits), nutritional variables, management variables and economic variables. The model factored in the various sow physiological phases, including gestation, farrowing, lactation, growth and development. The model was developed to evaluate a farrow-to-finish operation, but the results were customized to account for a farrow-to-weaner operation for a comparative analysis. The operations were defined as semi-intensive farrow to finish (SIFF), semi-intensive farrow to weaner (SIFW), extensive farrow to finish (EXFF) and extensive farrow to weaner (EXFW). In SI, profits were highest at KES 74,268.20 per sow per year for SIFF against KES 4026.12 for SIFW. The corresponding profits for EX were KES 925.25 and KES 626.73. Feed costs contributed the major part of the total costs, accounting for 67.0, 50.7, 60.5 and 44.5 % in the SIFF, SIFW, EXFF and EXFW operations, respectively. The bio-economic model developed could be extended, with modifications, for use in deriving economic values for breeding goal traits for pigs under smallholder production systems in other parts of the tropics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, Ming; Deng, Yi
2015-02-06
El Niño-Southern Oscillation (ENSO) and Annular Modes (AMs) represent respectively the most important modes of low frequency variability in the tropical and extratropical circulations. The future projection of the ENSO and AM variability, however, remains highly uncertain with the state-of-the-art coupled general circulation models. A comprehensive understanding of the factors responsible for the inter-model discrepancies in projecting future changes in the ENSO and AM variability, in terms of multiple feedback processes involved, has yet to be achieved. The proposed research aims to identify sources of such uncertainty and establish a set of process-resolving quantitative evaluations of the existing predictions of the future ENSO and AM variability. The proposed process-resolving evaluations are based on a feedback analysis method formulated in Lu and Cai (2009), which is capable of partitioning 3D temperature anomalies/perturbations into components linked to 1) radiation-related thermodynamic processes such as cloud and water vapor feedbacks, 2) local dynamical processes including convection and turbulent/diffusive energy transfer and 3) non-local dynamical processes such as the horizontal energy transport in the oceans and atmosphere. Taking advantage of the high-resolution, multi-model ensemble products from the Coupled Model Intercomparison Project Phase 5 (CMIP5) soon to be available at the Lawrence Livermore National Lab, we will conduct a process-resolving decomposition of the global three-dimensional (3D) temperature (including SST) response to the ENSO and AM variability in the preindustrial, historical and future climate simulated by these models.
Specific research tasks include 1) identifying the model-observation discrepancies in the global temperature response to ENSO and AM variability and attributing such discrepancies to specific feedback processes, 2) delineating the influence of anthropogenic radiative forcing on the key feedback processes operating on ENSO and AM variability and quantifying their relative contributions to the changes in the temperature anomalies associated with different phases of ENSO and AMs, and 3) investigating the linkages between model feedback processes that lead to inter-model differences in time-mean temperature projection and model feedback processes that cause inter-model differences in the simulated ENSO and AM temperature response. Through a thorough model-observation and inter-model comparison of the multiple energetic processes associated with ENSO and AM variability, the proposed research serves to identify key uncertainties in model representation of ENSO and AM variability, and investigate how the model uncertainty in predicting time-mean response is related to the uncertainty in predicting response of the low-frequency modes. The proposal is thus a direct response to the first topical area of the solicitation: Interaction of Climate Change and Low Frequency Modes of Natural Climate Variability. It ultimately supports the accomplishment of the BER climate science activity Long Term Measure (LTM): "Deliver improved scientific data and models about the potential response of the Earth's climate and terrestrial biosphere to increased greenhouse gas levels for policy makers to determine safe levels of greenhouse gases in the atmosphere."
Jiang, Hui; Zhang, Hang; Chen, Quansheng; Mei, Congli; Liu, Guohai
2015-01-01
The use of wavelength variable selection before partial least squares discriminant analysis (PLS-DA) for qualitative identification of solid state fermentation degree by FT-NIR spectroscopy was investigated in this study. Two wavelength variable selection methods, competitive adaptive reweighted sampling (CARS) and stability competitive adaptive reweighted sampling (SCARS), were employed to select the important wavelengths. PLS-DA was applied to calibrate identification models using the wavelength variables selected by CARS and SCARS. Experimental results showed that the numbers of wavelength variables selected by CARS and SCARS were 58 and 47, respectively, out of the 1557 original wavelength variables. Compared with full-spectrum PLS-DA, both wavelength variable selection methods enhanced the performance of the identification models. Meanwhile, compared with the CARS-PLS-DA model, the SCARS-PLS-DA model achieved better results, with an identification rate of 91.43% in the validation process. The overall results demonstrate that a PLS-DA model constructed using wavelength variables chosen by a proper selection method enables more accurate identification of solid state fermentation degree. Copyright © 2015 Elsevier B.V. All rights reserved.
Climatological Modeling of Monthly Air Temperature and Precipitation in Egypt through GIS Techniques
NASA Astrophysics Data System (ADS)
El Kenawy, A.
2009-09-01
This paper describes a method for modeling and mapping four climatic variables (maximum temperature, minimum temperature, mean temperature and total precipitation) in Egypt using a multiple regression approach implemented in a GIS environment. In this model, a set of variables including latitude, longitude, elevation within a distance of 5, 10 and 15 km, slope, aspect, distance to the Mediterranean Sea, distance to the Red Sea, distance to the Nile, the ratio between land and water masses within a radius of 5, 10 and 15 km, the Normalized Difference Vegetation Index (NDVI), the Normalized Difference Water Index (NDWI), the Normalized Difference Temperature Index (NDTI) and reflectance are included as independent variables. These variables were integrated as raster layers in MiraMon software at a spatial resolution of 1 km. The climatic variables were treated as dependent variables and averaged from 39 quality-controlled and homogenized series distributed across the entire country over the period 1957-2006. For each climatic variable, digital and objective maps were obtained using the multiple regression coefficients at monthly, seasonal and annual timescales. The accuracy of these maps was assessed through cross-validation between predicted and observed values using a set of statistics including the coefficient of determination (R2), root mean square error (RMSE), mean absolute error (MAE), mean bias error (MBE) and Willmott's D statistic. These maps are valuable in terms of spatial resolution as well as the number of observatories involved in the analysis.
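The cross-validation statistics listed can be sketched in a few lines; the observed/predicted values below are simulated, with a deliberate warm bias so the MBE comes out positive:

```python
import numpy as np

def validation_stats(obs, pred):
    """Cross-validation statistics of the kind used to assess climate surfaces."""
    err = pred - obs
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    d_denom = np.sum((np.abs(pred - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    return {
        "R2": 1.0 - ss_res / ss_tot,            # coefficient of determination
        "RMSE": np.sqrt(np.mean(err ** 2)),     # root mean square error
        "MAE": np.mean(np.abs(err)),            # mean absolute error
        "MBE": np.mean(err),                    # mean bias error (signed)
        "D": 1.0 - ss_res / d_denom,            # Willmott's index of agreement
    }

rng = np.random.default_rng(8)
obs = rng.normal(20.0, 5.0, 300)                # e.g. observed monthly Tmax (deg C)
pred = obs + rng.normal(0.2, 1.0, 300)          # predictions with a slight warm bias
stats = validation_stats(obs, pred)
```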
Ip, Ryan H L; Li, W K; Leung, Kenneth M Y
2013-09-15
Large-scale environmental remediation projects applied to sea water typically involve large amounts of capital investment. Rigorous effectiveness evaluations of such projects are therefore necessary and essential for policy review and future planning. This study investigates the effectiveness of environmental remediation using three different Seemingly Unrelated Regression (SUR) time series models with intervention effects: Model (1) assumes no correlation within or across variables; Model (2) assumes no correlation across variables but allows correlations within each variable across different sites; and Model (3) allows all possible correlations among variables (i.e., an unrestricted model). The results suggested that the unrestricted SUR model is the most reliable one, consistently having the smallest variation in the estimated model parameters. We discuss our results with reference to marine water quality management in Hong Kong while bringing managerial issues into consideration. Copyright © 2013 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Dardis, Christina M.; Kelley, Erika L.; Edwards, Katie M.; Gidycz, Christine A.
2013-01-01
Objective: This study assessed abused and nonabused women's perceptions of Investment Model (IM) variables (i.e., relationship investment, satisfaction, commitment, quality of alternatives) utilizing a mixed-methods design. Participants: Participants included 102 college women, approximately half of whom were in abusive dating relationships.…
Child-related cognitions and affective functioning of physically abusive and comparison parents.
Haskett, Mary E; Smith Scott, Susan; Grant, Raven; Ward, Caryn Sabourin; Robinson, Canby
2003-06-01
The goal of this research was to utilize the cognitive behavioral model of abusive parenting to select and examine risk factors to illuminate the unique and combined influences of social cognitive and affective variables in predicting abuse group membership. Participants included physically abusive parents (n=56) and a closely-matched group of comparison parents (n=62). Social cognitive risk variables measured were (a) parent's expectations for children's abilities and maturity, (b) parental attributions of intentionality of child misbehavior, and (c) parents' perceptions of their children's adjustment. Affective risk variables included (a) psychopathology and (b) parenting stress. A series of logistic regression models were constructed to test the individual, combined, and interactive effects of risk variables on abuse group membership. The full set of five risk variables was predictive of abuse status; however, not all variables were predictive when considered individually and interactions did not contribute significantly to prediction. A risk composite score computed for each parent based on the five risk variables significantly predicted abuse status. Wide individual differences in risk across the five variables were apparent within the sample of abusive parents. Findings were generally consistent with a cognitive behavioral model of abuse, with cognitive variables being more salient in predicting abuse status than affective factors. Results point to the importance of considering diversity in characteristics of abusive parents.
Extended-Range Prediction with Low-Dimensional, Stochastic-Dynamic Models: A Data-driven Approach
2012-09-30
…characterization of extratropical storms and extremes and link these to LFV modes (Mingfang Ting, Yochanan Kushnir, Andrew W. Robertson) … simulating and predicting a wide range of climate phenomena including ENSO, tropical Atlantic sea surface temperatures (SSTs), storm track variability … into empirical prediction models. Use observations to improve low-order dynamical MJO models (Adam Sobel, Daehyun Kim). Extratropical variability …
Comparison of yellow poplar growth models on the basis of derived growth analysis variables
Keith F. Jensen; Daniel A. Yaussy
1986-01-01
Quadratic and cubic polynomials, and Gompertz and Richards asymptotic models were fitted to yellow poplar growth data. These data included height, leaf area, leaf weight and new shoot height for 23 weeks. Seven growth analysis variables were estimated from each function. The Gompertz and Richards models fitted the data best and provided the most accurate derived...
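A Gompertz curve and the kind of derived growth-analysis variables estimated from such functions can be sketched as follows (the parameter values are hypothetical, not the yellow poplar fits):

```python
import numpy as np

def gompertz(t, A, b, k):
    """Gompertz growth curve: asymptote A, displacement b, rate k."""
    return A * np.exp(-b * np.exp(-k * t))

t = np.linspace(0.0, 23.0, 231)                 # 23 weeks of growth
h = gompertz(t, A=300.0, b=4.0, k=0.3)          # hypothetical height curve (cm)

# Derived growth-analysis variables of the kind estimated from each fitted function
agr = np.gradient(h, t)                         # absolute growth rate
rgr = agr / h                                   # relative growth rate
t_inflect = np.log(4.0) / 0.3                   # Gompertz inflection at t = ln(b)/k
```

The absolute growth rate peaks at the inflection point, while the relative growth rate declines monotonically, which is why asymptotic models such as Gompertz and Richards tend to fit sigmoid growth data better than polynomials.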
Song, Li-Yu
2017-04-01
This study examined a comprehensive set of potential correlates of recovery based on the Unity Model of Recovery. Thirty-two community psychiatric rehabilitation centers in Taiwan agreed to participate in this study. A sample of 592 participants was administered the questionnaires. Five groups of independent variables were included in the model: socio-demographic variables, illness variables, resilience, informal support, and formal support. The results of regression analysis provided support for the validity of the Unity Model of Recovery. The independent variables explained 53.5% of the variance in recovery for the full sample, and 55.5% for the subsample of consumers who had ever been employed. The significance of the three cornerstones (resilience, family support, and symptoms) for recovery was confirmed. Other critical support variables, including the extent of rehabilitation service use, professional relationship, and professional support, were also found to be significant factors. Among all the significant correlates, resilience, family support, and extent of rehabilitation service use ranked in the top three. The findings shed light on paths to recovery. Implications for psychiatric services are discussed. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
Research on Zheng Classification Fusing Pulse Parameters in Coronary Heart Disease
Guo, Rui; Wang, Yi-Qin; Xu, Jin; Yan, Hai-Xia; Yan, Jian-Jun; Li, Fu-Feng; Xu, Zhao-Xia; Xu, Wen-Jie
2013-01-01
This study was conducted to illustrate that nonlinear dynamic variables of Traditional Chinese Medicine (TCM) pulse can improve the performances of TCM Zheng classification models. Pulse recordings of 334 coronary heart disease (CHD) patients and 117 normal subjects were collected in this study. Recurrence quantification analysis (RQA) was employed to acquire nonlinear dynamic variables of pulse. TCM Zheng models in CHD were constructed, and predictions using a novel multilabel learning algorithm based on different datasets were carried out. Datasets were designed as follows: dataset1, TCM inquiry information including inspection information; dataset2, time-domain variables of pulse and dataset1; dataset3, RQA variables of pulse and dataset1; and dataset4, major principal components of RQA variables and dataset1. The performances of the different models for Zheng differentiation were compared. The model for Zheng differentiation based on RQA variables integrated with inquiry information had the best performance, whereas that based only on inquiry had the worst performance. Meanwhile, the model based on time-domain variables of pulse integrated with inquiry fell between the above two. This result showed that RQA variables of pulse can be used to construct models of TCM Zheng and improve the performance of Zheng differentiation models. PMID:23737839
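A basic RQA variable, the recurrence rate of a time-delay embedding, can be sketched as follows; the embedding dimension, delay, and threshold here are arbitrary illustrative choices, not the study's settings:

```python
import numpy as np

def recurrence_rate(signal, dim=3, delay=2, eps=0.5):
    """Recurrence rate of a time-delay embedding (a basic RQA variable)."""
    m = len(signal) - (dim - 1) * delay
    emb = np.column_stack([signal[i * delay : i * delay + m] for i in range(dim)])
    dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    return (dists < eps).mean()                   # fraction of recurrent point pairs

t = np.linspace(0.0, 20.0 * np.pi, 500)
rng = np.random.default_rng(9)
rr_periodic = recurrence_rate(np.sin(t))          # regular signal: highly recurrent
rr_noise = recurrence_rate(rng.normal(size=500))  # irregular signal: weakly recurrent
```

A regular, nearly periodic signal revisits the same region of state space far more often than noise, which is what makes recurrence-based variables informative about pulse dynamics.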
Van Holsbeke, C; Ameye, L; Testa, A C; Mascilini, F; Lindqvist, P; Fischerova, D; Frühauf, F; Fransis, S; de Jonge, E; Timmerman, D; Epstein, E
2014-05-01
To develop and validate strategies, using new ultrasound-based mathematical models, for the prediction of high-risk endometrial cancer and compare them with strategies using previously developed models or the use of preoperative grading only. Women with endometrial cancer were prospectively examined using two-dimensional (2D) and three-dimensional (3D) gray-scale and color Doppler ultrasound imaging. More than 25 ultrasound, demographic and histological variables were analyzed. Two logistic regression models were developed: one 'objective' model using mainly objective variables; and one 'subjective' model including subjective variables (i.e. subjective impression of myometrial and cervical invasion, preoperative grade and demographic variables). The following strategies were validated: a one-step strategy using only preoperative grading and two-step strategies using preoperative grading as the first step and one of the new models, subjective assessment or previously developed models as a second step. One hundred and twenty-five patients were included in the development set and 211 were included in the validation set. The 'objective' model retained preoperative grade and minimal tumor-free myometrium as variables. The 'subjective' model retained preoperative grade and subjective assessment of myometrial invasion. On external validation, the performance of the new models was similar to that on the development set. Sensitivity for the two-step strategy with the 'objective' model was 78% (95% CI, 69-84%) at a cut-off of 0.50, 82% (95% CI, 74-88%) for the strategy with the 'subjective' model and 83% (95% CI, 75-88%) for that with subjective assessment. Specificity was 68% (95% CI, 58-77%), 72% (95% CI, 62-80%) and 71% (95% CI, 61-79%), respectively. The two-step strategies detected up to twice as many high-risk cases as preoperative grading only. The new models had a significantly higher sensitivity than did previously developed models, at the same specificity.
Two-step strategies with 'new' ultrasound-based models predict high-risk endometrial cancers with good accuracy, and do so better than previously developed models. Copyright © 2013 ISUOG. Published by John Wiley & Sons Ltd.
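The two-step logic (preoperative grading first, then a model probability at a 0.50 cut-off) can be sketched with simulated labels and scores; the prevalence and score distributions below are invented, not the study cohort:

```python
import numpy as np

rng = np.random.default_rng(10)
n = 300
high_risk = rng.random(n) < 0.4                 # invented prevalence of high-risk cancer

# Step 1: preoperative grade flags some cases; step 2: a model probability adds more
grade_flag = np.where(high_risk, rng.random(n) < 0.6, rng.random(n) < 0.1)
model_prob = np.clip(rng.normal(np.where(high_risk, 0.65, 0.35), 0.15), 0.0, 1.0)

one_step = grade_flag                           # grading only
two_step = grade_flag | (model_prob >= 0.50)    # grading, then model at cut-off 0.50

def sens_spec(pred, truth):
    sens = (pred & truth).sum() / truth.sum()
    spec = (~pred & ~truth).sum() / (~truth).sum()
    return sens, spec

sens1, spec1 = sens_spec(one_step, high_risk)
sens2, spec2 = sens_spec(two_step, high_risk)
```

Because the second step can only add predicted positives, sensitivity rises while specificity can only fall, mirroring the trade-off reported in the abstract.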
NASA Astrophysics Data System (ADS)
Ransom, K.; Nolan, B. T.; Faunt, C. C.; Bell, A.; Gronberg, J.; Traum, J.; Wheeler, D. C.; Rosecrans, C.; Belitz, K.; Eberts, S.; Harter, T.
2016-12-01
A hybrid, non-linear, machine learning statistical model was developed within a statistical learning framework to predict nitrate contamination of groundwater to depths of approximately 500 m below ground surface in the Central Valley, California. A database of 213 predictor variables representing well characteristics, historical and current field- and county-scale nitrogen mass balance, historical and current land use, oxidation/reduction conditions, groundwater flow, climate, soil characteristics, depth to groundwater, and groundwater age was assigned to over 6,000 private supply and public supply wells previously measured for nitrate and located throughout the study area. The machine learning method, gradient boosting machine (GBM), was used to screen predictor variables and rank them in order of importance in relation to the groundwater nitrate measurements. The top five most important predictor variables included oxidation/reduction characteristics, historical field-scale nitrogen mass balance, climate, and depth to 60-year-old water. Twenty-two variables were selected for the final model, and final model errors for log-transformed hold-out data were an R squared of 0.45 and a root mean square error (RMSE) of 1.124. Modeled mean groundwater age was tested separately for error improvement; when included, it decreased model RMSE by 0.5% compared to the same model without age and by 0.20% compared to the model with all 213 variables. 1D and 2D partial plots were examined to determine how variables behave individually and interact in the model. Some variables behaved as expected: log nitrate decreased with increasing probability of anoxic conditions and depth to 60-year-old water; generally decreased with increasing natural land use surrounding wells and increasing mean groundwater age; and generally increased with increasing minimum depth to the high water table and with increasing base flow index value.
Other variables exhibited much more erratic or noisy behavior in the model, making them more difficult to interpret but highlighting the usefulness of the non-linear machine learning method. 2D interaction plots show that the probability of anoxic groundwater conditions largely controls estimated nitrate concentrations relative to the other predictors.
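GBM-style variable screening can be sketched with a toy boosting-of-stumps loop that credits each predictor with the squared-error reduction of its splits; the data are synthetic, not the Central Valley database:

```python
import numpy as np

def fit_stump(X, r):
    """Best single-split regression stump on residuals r (squared-error criterion)."""
    best = (np.inf, 0, 0.0, r.mean(), r.mean())
    for j in range(X.shape[1]):
        for t in np.quantile(X[:, j], [0.25, 0.5, 0.75]):
            left = X[:, j] <= t
            if left.all() or not left.any():
                continue
            lv, rv = r[left].mean(), r[~left].mean()
            sse = ((r[left] - lv) ** 2).sum() + ((r[~left] - rv) ** 2).sum()
            if sse < best[0]:
                best = (sse, j, t, lv, rv)
    return best[1:]

rng = np.random.default_rng(5)
n = 400
X = rng.normal(size=(n, 5))
y = 2.0 * X[:, 0] + np.sin(3.0 * X[:, 1]) + rng.normal(scale=0.3, size=n)

pred = np.zeros(n)
importance = np.zeros(5)
for _ in range(100):                       # boosting rounds with shrinkage 0.1
    resid = y - pred
    j, t, lv, rv = fit_stump(X, resid)
    before = (resid ** 2).sum()
    pred += 0.1 * np.where(X[:, j] <= t, lv, rv)
    importance[j] += before - ((y - pred) ** 2).sum()   # credit SSE reduction to j

ranking = np.argsort(importance)[::-1]     # most influential predictors first
```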
Bootstrap investigation of the stability of a Cox regression model.
Altman, D G; Andersen, P K
1989-07-01
We describe a bootstrap investigation of the stability of a Cox proportional hazards regression model resulting from the analysis of a clinical trial of azathioprine versus placebo in patients with primary biliary cirrhosis. We have considered stability to refer both to the choice of variables included in the model and, more importantly, to the predictive ability of the model. In stepwise Cox regression analyses of 100 bootstrap samples using 17 candidate variables, the most frequently selected variables were those selected in the original analysis, and no other important variable was identified. Thus there was no reason to doubt the model obtained in the original analysis. For each patient in the trial, bootstrap confidence intervals were constructed for the estimated probability of surviving two years. It is shown graphically that these intervals are markedly wider than those obtained from the original model.
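The percentile-bootstrap idea behind those confidence intervals can be illustrated without the Cox machinery. A minimal sketch using a simple binary two-year survival indicator in place of the fitted model's survival probabilities; the data are hypothetical, not the trial's.

```python
import random

random.seed(42)

# Hypothetical outcomes: 1 = alive at two years, 0 = not (not the trial data).
survived = [1] * 70 + [0] * 30          # observed 2-year survival = 0.70

def bootstrap_ci(data, stat, b=2000, alpha=0.05):
    """Percentile bootstrap: resample with replacement, recompute the
    statistic, take empirical quantiles of the replicates."""
    reps = sorted(
        stat([data[random.randrange(len(data))] for _ in range(len(data))])
        for _ in range(b)
    )
    return reps[int(alpha / 2 * b)], reps[int((1 - alpha / 2) * b) - 1]

mean = lambda xs: sum(xs) / len(xs)
lo, hi = bootstrap_ci(survived, mean)
print(round(lo, 2), round(hi, 2))  # roughly (0.61, 0.79)
```

In the paper the resampled statistic is the Cox-model survival estimate for each patient rather than a raw proportion, but the resampling loop is the same.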
A gentle introduction to quantile regression for ecologists
Cade, B.S.; Noon, B.R.
2003-01-01
Quantile regression is a way to estimate the conditional quantiles of a response variable distribution in the linear model, providing a more complete view of possible causal relationships between variables in ecological processes. Typically, not all the factors that affect ecological processes are measured and included in the statistical models used to investigate relationships between variables associated with those processes. As a consequence, there may be a weak or no predictive relationship between the mean of the response variable (y) distribution and the measured predictive factors (X). Yet there may be stronger, useful predictive relationships with other parts of the response variable distribution. This primer relates quantile regression estimates to prediction intervals in parametric error distribution regression models (e.g., least squares), and discusses the ordering characteristics, interval nature, sampling variation, weighting, and interpretation of the estimates for homogeneous and heterogeneous regression models.
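The ordering property the primer builds on — the τ-th quantile minimizes expected "check" (pinball) loss — can be verified numerically. A stdlib-only sketch; the sample and search grid are illustrative.

```python
import random

random.seed(1)

tau = 0.9
y = sorted(random.gauss(0.0, 1.0) for _ in range(1001))

def pinball(q, ys, tau):
    """Mean check-function loss: tau*(y-q) above q, (1-tau)*(q-y) below."""
    return sum(tau * (v - q) if v >= q else (1 - tau) * (q - v) for v in ys) / len(ys)

# Grid search: the loss minimizer should sit at the empirical 0.9 quantile.
candidates = [i / 100.0 for i in range(-300, 301)]
best = min(candidates, key=lambda q: pinball(q, y, tau))
empirical = y[int(tau * (len(y) - 1))]    # 0.9 quantile of the sorted sample
print(round(best, 2), round(empirical, 2))
```

Quantile regression generalizes this by minimizing the same loss over linear functions of X instead of a single constant.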
Vizcaino, Pilar; Lavalle, Carlo
2018-05-04
A new Land Use Regression model was built to develop pan-European 100 m resolution maps of NO2 concentrations. The model was built using NO2 concentrations from routine monitoring stations available in the Airbase database as the dependent variable. Predictor variables included land use, road traffic proxies, population density, climatic and topographical variables, and distance to sea. In order to capture international and interregional disparities not accounted for with the mentioned predictor variables, additional proxies of NO2 concentrations, like levels of activity intensity and NOx emissions for specific sectors, were also included. The model was built using Random Forest techniques. Model performance was relatively good given the EU-wide scale (R² = 0.53). Output predictions of annual average concentrations of NO2 were in line with other existing models in terms of spatial distribution and values of concentration. The model was validated for year 2015, comparing model predictions derived from updated values of independent variables with concentrations in monitoring stations for that year. The algorithm was then used to model future concentrations up to the year 2030, considering different emission scenarios as well as changes in land use, population distribution and economic factors assuming the most likely socio-economic trends. Levels of exposure were derived from maps of concentration. The model proved to be a useful tool for the ex-ante evaluation of specific air pollution mitigation measures, and more broadly, for impact assessment of EU policies on territorial development. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
Estimation of Particulate Mass and Manganese Exposure Levels among Welders
Hobson, Angela; Seixas, Noah; Sterling, David; Racette, Brad A.
2011-01-01
Background: Welders are frequently exposed to manganese (Mn), which may increase the risk of neurological impairment. Historical exposure estimates for welding-exposed workers are needed for epidemiological studies evaluating the relationship between welding and neurological or other health outcomes. The objective of this study was to develop and validate a multivariate model to estimate quantitative levels of welding fume exposures based on welding particulate mass and Mn concentrations reported in the published literature. Methods: Articles that described welding particulate and Mn exposures during field welding activities were identified through a comprehensive literature search. Summary measures of exposure and related determinants such as year of sampling, welding process performed, type of ventilation used, degree of enclosure, base metal, and location of sampling filter were extracted from each article. The natural log of the reported arithmetic mean exposure level was used as the dependent variable in model building, while the independent variables included the exposure determinants. Cross-validation was performed to aid in model selection and to evaluate the generalizability of the models. Results: A total of 33 particulate and 27 Mn means were included in the regression analysis. The final model explained 76% of the variability in the mean exposures and included welding process and degree of enclosure as predictors. There was very little change in the explained variability and root mean squared error between the final model and its cross-validation model, indicating the final model is robust given the available data. 
Conclusions: This model may be improved with more detailed exposure determinants; however, the relatively large amount of variance explained by the final model, along with the positive generalizability results of the cross-validation, increases the confidence that the estimates derived from this model can be used for estimating welder exposures in the absence of individual measurement data. PMID:20870928
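The cross-validation used above for model selection can be sketched for the simplest case: a one-predictor linear model refit leave-one-out. The data below are invented for illustration, not the reported exposure means.

```python
# Invented exposure-determinant data: one predictor x, response y roughly 2x.
data = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 8.1),
        (5, 9.8), (6, 12.3), (7, 13.9), (8, 16.2)]

def fit(points):
    """Ordinary least squares for y = a + b*x (closed form)."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

# Leave-one-out CV: refit without each point, predict it, pool the errors.
sq_err = 0.0
for i, (x, y) in enumerate(data):
    a, b = fit(data[:i] + data[i + 1:])
    sq_err += (y - (a + b * x)) ** 2
rmse_cv = (sq_err / len(data)) ** 0.5
print(round(rmse_cv, 3))
```

Comparing this held-out RMSE to the in-sample RMSE is the stability check the abstract describes: a small gap suggests the model generalizes.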
NASA Astrophysics Data System (ADS)
Fouad, Geoffrey; Skupin, André; Hope, Allen
2016-04-01
The flow duration curve (FDC) is one of the most widely used tools to quantify streamflow. Its percentile flows are often required for water resource applications, but these values must be predicted for ungauged basins with insufficient or no streamflow data. Regional regression is a commonly used approach for predicting percentile flows that involves identifying hydrologic regions and calibrating regression models to each region. The independent variables used to describe the physiographic and climatic setting of the basins are a critical component of regional regression, yet few studies have investigated their effect on resulting predictions. In this study, the complexity of the independent variables needed for regional regression is investigated. Different levels of variable complexity are applied for a regional regression consisting of 918 basins in the US. Both the hydrologic regions and regression models are determined according to the different sets of variables, and the accuracy of resulting predictions is assessed. The different sets of variables include (1) a simple set of three variables strongly tied to the FDC (mean annual precipitation, potential evapotranspiration, and baseflow index), (2) a traditional set of variables describing the average physiographic and climatic conditions of the basins, and (3) a more complex set of variables extending the traditional variables to include statistics describing the distribution of physiographic data and temporal components of climatic data. The latter set of variables is not typically used in regional regression, and is evaluated for its potential to predict percentile flows. The simplest set of only three variables performed similarly to the other more complex sets of variables. Traditional variables used to describe climate, topography, and soil offered little more to the predictions, and the experimental set of variables describing the distribution of basin data in more detail did not improve predictions. 
These results are largely reflective of cross-correlation existing in hydrologic datasets, and highlight the limited predictive power of many traditionally used variables for regional regression. A parsimonious approach including fewer variables chosen based on their connection to streamflow may be more efficient than a data mining approach including many different variables. Future regional regression studies may benefit from having a hydrologic rationale for including different variables and attempting to create new variables related to streamflow.
Qiao, Yuanhua; West, Harry H; Mannan, M Sam; Johnson, David W; Cornwell, John B
2006-03-17
Liquefied natural gas (LNG) release, spread, evaporation, and dispersion processes are illustrated using the Federal Energy Regulatory Commission models in this paper. The spillage consequences are dependent upon the tank conditions, release scenarios, and the environmental conditions. The effects of the contributing variables, including the tank configuration, breach hole size, ullage pressure, wind speed and stability class, and surface roughness, on the consequence of LNG spillage onto water are evaluated using the models. The sensitivities of the consequences to those variables are discussed.
J Jeuck; F. Cubbage; R. Abt; R. Bardon; J. McCarter; J. Coulston; M. Renkow
2014-01-01
We conducted a meta-analysis on 64 econometric models from 47 studies predicting forestland conversion to agriculture (F2A), forestland to development (F2D), forestland to non-forested (F2NF) and undeveloped (including forestland) to developed (U2D) land. Over 250 independent econometric variables were identified from 21 F2A models, 21 F2D models, 12 F2NF models, and...
Effect of climate variables on cocoa black pod incidence in Sabah using ARIMAX model
NASA Astrophysics Data System (ADS)
Ling Sheng Chang, Albert; Ramba, Haya; Mohd. Jaaffar, Ahmad Kamil; Kim Phin, Chong; Chong Mun, Ho
2016-06-01
Cocoa black pod disease is one of the major diseases affecting cocoa production in Malaysia and around the world. Studies have shown that climate variables influence black pod disease incidence, and it is important to quantify the variation in black pod incidence due to the effect of climate variables. Time series analysis, especially the autoregressive integrated moving average (ARIMA) model, has been widely used in economic studies and can be used to quantify the effect of climate variables on black pod incidence and to forecast the right time to control it. However, the ARIMA model does not capture some turning points in cocoa black pod incidence. To improve forecasting performance, explanatory variables such as climate variables can be included in the ARIMA model, yielding an ARIMAX model. This paper therefore studies the effect of climate variables on cocoa black pod disease incidence using an ARIMAX model. The findings showed that an ARIMAX model using MA(1) and relative humidity at a lag of 7 days, RH(t-7), gave a better R² value than an ARIMA model using MA(1), and could be used to forecast black pod incidence to help farmers time fungicide spraying and cultural practices to control the disease.
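The ARIMAX idea — adding an exogenous lagged climate term to an autoregressive model — can be sketched with a least-squares ARX fit. A stdlib sketch on simulated data; the coefficients, lag, and "humidity" series are invented, not the paper's fitted model.

```python
import random

random.seed(3)

# Simulate an ARX series: y_t = 0.6*y_{t-1} + 0.8*x_{t-7} + noise, where x is
# an exogenous "relative humidity" proxy (all values invented for illustration).
T = 500
x = [random.random() for _ in range(T)]
y = [0.0] * T
for t in range(7, T):
    y[t] = 0.6 * y[t - 1] + 0.8 * x[t - 7] + random.gauss(0.0, 0.05)

# Least-squares fit of (phi, beta) from the 2x2 normal equations.
rows = [(y[t - 1], x[t - 7], y[t]) for t in range(8, T)]
suu = sum(u * u for u, _, _ in rows)
svv = sum(v * v for _, v, _ in rows)
suv = sum(u * v for u, v, _ in rows)
suy = sum(u * w for u, _, w in rows)
svy = sum(v * w for _, v, w in rows)
det = suu * svv - suv * suv
phi = (suy * svv - svy * suv) / det
beta = (svy * suu - suy * suv) / det
print(round(phi, 2), round(beta, 2))  # close to the true 0.6 and 0.8
```

A full ARIMAX adds differencing and moving-average terms, but the exogenous lag enters the regression exactly as the x_{t-7} column does here.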
Bayesian dynamic modeling of time series of dengue disease case counts
López-Quílez, Antonio; Torres-Prieto, Alexander
2017-01-01
The aim of this study is to model the association between weekly time series of dengue case counts and meteorological variables in a high-incidence city of Colombia, applying Bayesian hierarchical dynamic generalized linear models over the period January 2008 to August 2015. Additionally, we evaluate the models' short-term performance for predicting dengue cases. The methodology uses dynamic Poisson log-link models including constant or time-varying coefficients for the meteorological variables. Calendar effects were modeled using constant or first- or second-order random-walk time-varying coefficients. The meteorological variables were modeled using constant coefficients and first-order random-walk time-varying coefficients. We applied Markov chain Monte Carlo simulation for parameter estimation and the deviance information criterion (DIC) for model selection. We assessed the short-term predictive performance of the selected final model at several time points within the study period using the mean absolute percentage error. The results showed that the best model included first-order random-walk time-varying coefficients for both the calendar trend and the meteorological variables. Besides the computational challenges, interpreting the results requires a complete analysis of the dengue time series with respect to the parameter estimates of the meteorological effects. We found small mean absolute percentage errors for one- or two-week out-of-sample predictions at most prediction points, associated with low-volatility periods in the dengue counts. We discuss the advantages and limitations of dynamic Poisson models for studying the association between time series of dengue disease and meteorological variables. 
The key conclusion of the study is that dynamic Poisson models account for the dynamic nature of the variables involved in the modeling of time series of dengue disease, producing useful models for decision-making in public health. PMID:28671941
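The mean absolute percentage error used above to score the short-term predictions is simple to compute; a small sketch with invented counts:

```python
def mape(actual, pred):
    """Mean absolute percentage error over nonzero actual counts."""
    pairs = [(a, p) for a, p in zip(actual, pred) if a != 0]
    return 100.0 * sum(abs(a - p) / a for a, p in pairs) / len(pairs)

print(round(mape([100, 120, 80], [110, 115, 90]), 2))  # -> 8.89
```

Because MAPE divides by the observed count, it is undefined for zero-count weeks, which is why low-incidence periods need care when using it to score count models.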
Beauregard, Frieda; de Blois, Sylvie
2014-01-01
Both climatic and edaphic conditions determine plant distribution; however, many species distribution models do not include edaphic variables, especially over large geographical extents. Using an exceptional database of vegetation plots (n = 4839) covering an extent of ∼55,000 km², we tested whether the inclusion of fine-scale edaphic variables would improve model predictions of plant distribution compared to models using only climate predictors. We also tested how well these edaphic variables could predict distribution on their own, to evaluate the assumption that at large extents, distribution is governed largely by climate. We also hypothesized that the relative contribution of edaphic and climatic data would vary among species depending on their growth forms and biogeographical attributes within the study area. We modelled 128 native plant species from diverse taxa using four statistical model types and three sets of abiotic predictors: climate, edaphic, and edaphic-climate. Model predictive accuracy and variable importance were compared among these models and for species' characteristics describing growth form, range boundaries within the study area, and prevalence. For many species both the climate-only and edaphic-only models performed well; however, the edaphic-climate models generally performed best. The three sets of predictors differed in the spatial information provided about habitat suitability, with climate models able to distinguish range edges, but edaphic models better able to distinguish within-range variation. Model predictive accuracy was generally lower for species without a range boundary within the study area and for common species, but these effects were buffered by including both edaphic and climatic predictors. The relative importance of edaphic and climatic variables varied with growth form, with trees being more related to climate whereas lower growth forms were more related to edaphic conditions. 
Our study identifies the potential for non-climate aspects of the environment to pose a constraint to range expansion under climate change. PMID:24658097
Lachmann, Bernd; Sariyska, Rayna; Kannen, Christopher; Błaszkiewicz, Konrad; Trendafilov, Boris; Andone, Ionut; Eibes, Mark; Markowetz, Alexander; Li, Mei; Kendrick, Keith M.
2017-01-01
Virtually everybody would agree that life satisfaction is of immense importance in everyday life. Thus, it is not surprising that a considerable amount of research using many different methodological approaches has investigated what the best predictors of life satisfaction are. In the present study, we have focused on several key potential influences on life satisfaction including bottom-up and top-down models, cross-cultural effects, and demographic variables. In four independent (large scale) surveys with sample sizes ranging from N = 488 to 40,297, we examined the associations between life satisfaction and various related variables. Our findings demonstrate that prediction of overall life satisfaction works best when including information about specific life satisfaction variables. From this perspective, satisfaction with leisure showed the highest impact on overall life satisfaction in our European samples. Personality was also robustly associated with life satisfaction, but only when life satisfaction variables were not included in the regression model. These findings could be replicated in all four independent samples, but it was also demonstrated that the relevance of life satisfaction variables changed under the influence of cross-cultural effects. PMID:29295529
NASA Astrophysics Data System (ADS)
De Lucia, Frank C., Jr.; Gottfried, Jennifer L.
2011-02-01
Using a series of thirteen organic materials that includes novel high-nitrogen energetic materials, conventional organic military explosives, and benign organic materials, we have demonstrated the importance of variable selection for maximizing residue discrimination with partial least squares discriminant analysis (PLS-DA). We built several PLS-DA models using different variable sets based on laser-induced breakdown spectroscopy (LIBS) spectra of the organic residues on an aluminum substrate under an argon atmosphere. The model classification results for each sample are presented and the influence of the variables on these results is discussed. We found that using the whole spectra as the data input for the PLS-DA model gave the best results. However, variables due to the surrounding atmosphere and the substrate contribute to discrimination when the whole spectra are used, indicating this may not be the most robust model. Further iterative testing with additional validation data sets is necessary to determine the most robust model.
Eluru, Naveen; Chakour, Vincent; Chamberlain, Morgan; Miranda-Moreno, Luis F
2013-10-01
Vehicle operating speed measured on roadways is a critical component for a host of analyses in the transportation field, including transportation safety, traffic flow modeling, roadway geometric design, vehicle emissions modeling, and road user route decisions. The current research effort contributes methodologically and substantively to the literature on examining vehicle speed on urban roads. In terms of methodology, we formulate a new econometric model framework for examining speed profiles. The proposed model is an ordered response formulation of a fractional split model. The ordered nature of the speed variable allows us to propose an ordered variant of the fractional split model in the literature. The proposed formulation allows us to model the proportion of vehicles traveling in each speed interval for the entire segment of roadway. We extend the model to allow the influence of exogenous variables to vary across the population. Further, we develop a panel mixed version of the fractional split model to account for the influence of site-specific unobserved effects. The paper contributes substantively by estimating the proposed model using a unique dataset from Montreal consisting of weekly speed data (collected in hourly intervals) for about 50 local roads and 70 arterial roads. We estimate separate models for local roads and arterial roads. The model estimation exercise considers a whole host of variables including geometric design attributes, roadway attributes, traffic characteristics and environmental factors. The model results highlight the role of various street characteristics including number of lanes, presence of parking, presence of sidewalks, vertical grade, and bicycle route on vehicle speed proportions. The results also highlight the presence of site-specific unobserved effects influencing the speed distribution. The parameters from the modeling exercise are validated using a hold-out sample not considered for model estimation. 
The results indicate that the proposed panel mixed ordered probit fractional split model offers promise for modeling such proportional ordinal variables. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Deser, C.
2017-12-01
Natural climate variability occurs over a wide range of time and space scales as a result of processes intrinsic to the atmosphere, the ocean, and their coupled interactions. Such internally generated climate fluctuations pose significant challenges for the identification of externally forced climate signals such as those driven by volcanic eruptions or anthropogenic increases in greenhouse gases. This challenge is exacerbated for regional climate responses evaluated from short (< 50 years) data records. The limited duration of the observations also places strong constraints on how well the spatial and temporal characteristics of natural climate variability are known, especially on multi-decadal time scales. The observational constraints, in turn, pose challenges for evaluation of climate models, including their representation of internal variability and assessing the accuracy of their responses to natural and anthropogenic radiative forcings. A promising new approach to climate model assessment is the advent of large (10-100 member) "initial-condition" ensembles of climate change simulations with individual models. Such ensembles allow for accurate determination, and straightforward separation, of externally forced climate signals and internal climate variability on regional scales. The range of climate trajectories in a given model ensemble results from the fact that each simulation represents a particular sequence of internal variability superimposed upon a common forced response. This makes clear that nature's single realization is only one of many that could have unfolded. This perspective leads to a rethinking of approaches to climate model evaluation that incorporate observational uncertainty due to limited sampling of internal variability. Illustrative examples across a range of well-known climate phenomena including ENSO, volcanic eruptions, and anthropogenic climate change will be discussed.
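The core idea above — averaging an initial-condition ensemble separates the common forced signal from member-specific internal variability — can be sketched with a toy AR(1) ensemble. The trend, noise level, and ensemble size are illustrative assumptions, not values from any model.

```python
import random

random.seed(5)

# Each ensemble member = common forced trend + independent internal variability.
years = list(range(50))
trend = [0.02 * t for t in years]             # forced signal (invented rate)

def member():
    """One simulated realization: AR(1) noise on top of the forced response."""
    v, out = 0.0, []
    for t in years:
        v = 0.7 * v + random.gauss(0.0, 0.1)  # internal variability
        out.append(trend[t] + v)
    return out

ensemble = [member() for _ in range(40)]

# Averaging across members cancels internal variability, leaving the forced signal.
ens_mean = [sum(m[t] for m in ensemble) / len(ensemble) for t in years]
err = max(abs(ens_mean[t] - trend[t]) for t in years)
print(round(err, 3))  # much smaller than a single member's departures
```

Any single member, by contrast, can wander far from the forced trend for decades, which is the "nature's single realization" point the abstract makes.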
Hood, Donald C; Anderson, Susan C; Wall, Michael; Raza, Ali S; Kardon, Randy H
2009-09-01
Retinal nerve fiber layer (RNFL) thickness and visual field loss data from patients with glaucoma were analyzed in the context of a model, to better understand individual variation in structure versus function. Optical coherence tomography (OCT) RNFL thickness and standard automated perimetry (SAP) visual field loss were measured in the arcuate regions of one eye of 140 patients with glaucoma and 82 normal control subjects. An estimate of within-individual (measurement) error was obtained by repeat measures made on different days within a short period in 34 patients and 22 control subjects. A linear model, previously shown to describe the general characteristics of the structure-function data, was extended to predict the variability in the data. For normal control subjects, between-individual error (individual differences) accounted for 87% and 71% of the total variance in OCT and SAP measures, respectively. SAP within-individual error increased and then decreased with increasing SAP loss, whereas OCT error remained constant. The linear model with variability (LMV) described much of the variability in the data. However, 12.5% of the patients' points fell outside the 95% boundary. An examination of these points revealed factors that can contribute to the overall variability in the data, including epiretinal membranes, edema, individual variation in field-to-disc mapping, and the location of blood vessels and the degree to which they are included by the RNFL algorithm. The model and the partitioning of within- versus between-individual variability helped elucidate the factors contributing to the considerable variability in the structure-versus-function data.
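The partition of within- versus between-individual variance from repeat measurements can be sketched with synthetic data; the standard deviations below are invented, not the OCT/SAP estimates.

```python
import random

random.seed(7)

# Synthetic repeat measurements: each subject has a true value (between-
# individual sd 3.0) measured twice with error (within-individual sd 1.0).
sd_b, sd_w = 3.0, 1.0
true_vals = [random.gauss(0.0, sd_b) for _ in range(200)]
visits = [[t + random.gauss(0.0, sd_w) for _ in range(2)] for t in true_vals]

# Within-individual variance from repeat differences: var(v1 - v2) = 2*sd_w^2.
diffs = [v[0] - v[1] for v in visits]
md = sum(diffs) / len(diffs)
var_within = sum((d - md) ** 2 for d in diffs) / (len(diffs) - 1) / 2.0

# Total variance of single measurements; between-individual = total - within.
firsts = [v[0] for v in visits]
mf = sum(firsts) / len(firsts)
var_total = sum((x - mf) ** 2 for x in firsts) / (len(firsts) - 1)
share_between = (var_total - var_within) / var_total
print(round(share_between, 2))  # near 9 / (9 + 1) = 0.9
```

The study's 87% and 71% between-individual shares for OCT and SAP are exactly this kind of decomposition, estimated from the repeat visits.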
Hahn, Ezra; Jiang, Haiyan; Ng, Angela; Bashir, Shaheena; Ahmed, Sameera; Tsang, Richard; Sun, Alexander; Gospodarowicz, Mary; Hodgson, David
2017-08-01
Mediastinal radiation therapy (RT) for Hodgkin lymphoma (HL) is associated with late cardiotoxicity, but there are limited data to indicate which dosimetric parameters are most valuable for predicting this risk. This study investigated which whole heart dosimetric measurements provide the most information regarding late cardiotoxicity, and whether coronary artery dosimetry was more predictive of this outcome than whole heart dosimetry. A random sample of 125 HL patients treated with mediastinal RT was selected, and 3-dimensional cardiac dose-volume data were generated from historical plans using validated methods. Cardiac events were determined by linking patients to population-based datasets of inpatient and same-day hospitalizations and same-day procedures. Variables collected for the whole heart and 3 coronary arteries included the following: Dmean, Dmax, Dmin, dose homogeneity, V5, V10, V20, and V30. Multivariable competing risk regression models were generated for the whole heart and coronary arteries. There were 44 cardiac events documented, of which 70% were ischemic. The best multivariable model included the following covariates: whole heart Dmean (hazard ratio [HR] 1.09, P=.0083), dose homogeneity (HR 0.94, P=.0034), male sex (HR 2.31, P=.014), and age (HR 1.03, P=.0049). When any adverse cardiac event was the outcome, models using coronary artery variables did not perform better than models using whole heart variables. However, in a subanalysis of ischemic cardiac events only, the model using coronary artery variables was superior to the whole heart model and included the following covariates: age (HR 1.05, P<.001), volume of left anterior descending artery receiving 5 Gy (HR 0.98, P=.003), and volume of left circumflex artery receiving 20 Gy (HR 1.03, P<.001). In addition to higher mean heart dose, increasing inhomogeneity in cardiac dose was associated with a greater risk of late cardiac effects. 
When all types of cardiotoxicity were evaluated, the whole heart variable model outperformed the coronary artery models. However, when events were limited to ischemic cardiotoxicity, the coronary artery-based model was superior. Copyright © 2017 Elsevier Inc. All rights reserved.
González Costa, J J; Reigosa, M J; Matías, J M; Covelo, E F
2017-09-01
The aim of this study was to model the sorption and retention of Cd, Cu, Ni, Pb and Zn in soils. To that end, the sorption and retention of these metals were studied and the soil characterization was performed separately. Multiple stepwise regression was used to produce multivariate models with linear techniques and with support vector machines, all of which included 15 explanatory variables characterizing soils. When the R-squared values are represented, two different groups are noticed. Cr, Cu and Pb sorption and retention show a higher R-squared, the most explanatory variables being humified organic matter, Al oxides and, in some cases, cation-exchange capacity (CEC). The other group of metals (Cd, Ni and Zn) shows a lower R-squared, and clays are the most explanatory variables, including the percentages of vermiculite and silt. In some cases, quartz, plagioclase or hematite percentages also show some explanatory capacity. Support vector machine (SVM) regression shows that the different models are not as regular as in multiple regression in terms of the number of variables, the regression for nickel adsorption being the one with the highest number of variables in its optimal model. On the other hand, there are cases where the most explanatory variables are the same for two metals, as happens with Cd and Cr adsorption. A similar adsorption mechanism is thus postulated. These patterns of the introduction of variables into the model allow us to create explainability sequences. Those most similar to the selectivity sequences obtained by Covelo (2005) are Mn oxides in multiple regression and cation-exchange capacity in SVM. Among all the variables, the only one that is explanatory for all the metals after applying the maximum parsimony principle is the percentage of sand in the retention process. 
In the competitive model arising from the aforementioned sequences, the most intense competitiveness for the adsorption and retention of different metals appears between Cr and Cd, Cu and Zn in multiple regression; and between Cr and Cd in SVM regression. Copyright © 2017 Elsevier B.V. All rights reserved.
Conjoint Analysis: A Study of the Effects of Using Person Variables.
ERIC Educational Resources Information Center
Fraas, John W.; Newman, Isadore
Three statistical techniques--conjoint analysis, a multiple linear regression model, and a multiple linear regression model with a surrogate person variable--were used to estimate the relative importance of five university attributes for students in the process of selecting a college. The five attributes include: availability and variety of…
VS2DRTI: Simulating Heat and Reactive Solute Transport in Variably Saturated Porous Media.
Healy, Richard W; Haile, Sosina S; Parkhurst, David L; Charlton, Scott R
2018-01-29
Variably saturated groundwater flow, heat transport, and solute transport are important processes in environmental phenomena, such as the natural evolution of water chemistry of aquifers and streams, the storage of radioactive waste in a geologic repository, the contamination of water resources from acid-rock drainage, and the geologic sequestration of carbon dioxide. Up to now, our ability to simulate these processes simultaneously with fully coupled reactive transport models has been limited to complex and often difficult-to-use models. To address the need for a simple and easy-to-use model, the VS2DRTI software package has been developed for simulating water flow, heat transport, and reactive solute transport through variably saturated porous media. The underlying numerical model, VS2DRT, was created by coupling the flow and transport capabilities of the VS2DT and VS2DH models with the equilibrium and kinetic reaction capabilities of PhreeqcRM. Flow capabilities include two-dimensional, constant-density, variably saturated flow; transport capabilities include both heat and multicomponent solute transport; and the reaction capabilities are a complete implementation of geochemical reactions of PHREEQC. The graphical user interface includes a preprocessor for building simulations and a postprocessor for visual display of simulation results. To demonstrate the simulation of multiple processes, the model is applied to a hypothetical example of injection of heated waste water to an aquifer with temperature-dependent cation exchange. VS2DRTI is freely available public domain software. © 2018, National Ground Water Association.
Variable selection based cotton bollworm odor spectroscopic detection
NASA Astrophysics Data System (ADS)
Lü, Chengxu; Gai, Shasha; Luo, Min; Zhao, Bo
2016-10-01
Aiming at rapid, automatic pest detection for efficient, targeted pesticide application, and to overcome the problem that reflectance spectral signals are masked and attenuated by the solid plant, the feasibility of near-infrared spectroscopy (NIRS) detection of cotton bollworm odor is studied. Three cotton bollworm odor samples and 3 blank air samples were prepared. Different concentrations of cotton bollworm odor were prepared by mixing the above gas samples, resulting in a calibration group of 62 samples and a validation group of 31 samples. The spectral collection system includes a light source, optical fiber, sample chamber, and spectrometer. Spectra were pretreated by baseline correction, modeled with partial least squares (PLS), and optimized by genetic algorithm (GA) and competitive adaptive reweighted sampling (CARS). Minor count differences are found among spectra of different cotton bollworm odor concentrations. The physical basis is that insects volatilize specific odors, including pheromones and allelochemicals, which are used for intra-specific and inter-specific communication and can be detected by NIR spectroscopy. A PLS model on all variables was built, presenting an RMSEV of 14 and RV2 of 0.89. Twenty-eight sensitive variables were selected by GA, giving a model with an RMSEV of 14 and RV2 of 0.90. By comparison, 8 sensitive variables were selected by CARS, giving an RMSEV of 13 and RV2 of 0.92. The CARS model employs only 1.5% of the variables while presenting a smaller error than the all-variable model. The odor-based NIR technique shows potential for cotton bollworm detection.
Wang, Zhu; Ma, Shuangge; Wang, Ching-Yun
2015-09-01
In health services and outcomes research, count outcomes are frequently encountered and often have a large proportion of zeros. The zero-inflated negative binomial (ZINB) regression model has important applications for this type of data. With many possible candidate risk factors, this paper proposes new variable selection methods for the ZINB model. We consider the maximum likelihood function plus a penalty, including the least absolute shrinkage and selection operator (LASSO), smoothly clipped absolute deviation (SCAD), and minimax concave penalty (MCP). An EM (expectation-maximization) algorithm is proposed for estimating the model parameters and conducting variable selection simultaneously. This algorithm consists of estimating penalized weighted negative binomial models and penalized logistic models via the coordinate descent algorithm. Furthermore, statistical properties, including the standard error formulae, are provided. A simulation study shows that the new algorithm not only has more accurate, or at least comparable, estimation, but is also more robust than traditional stepwise variable selection. The proposed methods are applied to analyze health care demand in Germany using the open-source R package mpath. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
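The coordinate-descent engine inside such penalized fits reduces, in the LASSO case, to a soft-thresholding update. The sketch below shows that update on a plain linear model; the ZINB/EM machinery of the paper is not reproduced, and the data are synthetic.

```python
# Minimal coordinate-descent LASSO for a plain linear model, illustrating
# the soft-thresholding update that also drives the penalized weighted
# regressions inside the paper's EM algorithm.
import numpy as np

def soft_threshold(z, gamma):
    """LASSO soft-thresholding operator S(z, gamma)."""
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Minimize (1/2)||y - Xb||^2 + n*lam*||b||_1 by coordinate descent."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual excluding coordinate j
            r = y - X @ beta + X[:, j] * beta[j]
            beta[j] = soft_threshold(X[:, j] @ r, n * lam) / col_sq[j]
    return beta

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.5, size=200)
beta = lasso_cd(X, y, lam=0.1)
print(np.round(beta, 2))   # irrelevant coordinates shrink to (near) zero
```

SCAD and MCP replace the soft-thresholding rule with their own thresholding functions, but the cycle over coordinates is the same.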
Seabloom, William; Seabloom, Mary E; Seabloom, Eric; Barron, Robert; Hendrickson, Sharon
2003-08-01
The study determines the effectiveness of a sexuality-positive adolescent sexual offender treatment program and examines subsequent criminal recidivism in the three outcome groups (completed, withdrawn, referred). The sample consists of 122 adolescent males and their families (491 individuals). Of the demographic variables, only living situation was significant, such that patients living with parents were more likely to graduate. None of the behavioral variables were found to be significant. Of the treatment variables, length of time in the program and participation in the Family Journey Seminar were included in the final model. When they were included in the model, no other treatment variables were significantly related to the probability of graduation. There were no arrests or convictions for sex-related crimes in the population of participants who successfully completed the program. This group was also less likely than the other groups to be arrested (p = 0.014) or convicted (p = 0.004) across all crime categories.
Factors associated with mouth breathing in children with developmental disabilities.
de Castilho, Lia Silva; Abreu, Mauro Henrique Nogueira Guimarães; de Oliveira, Renata Batista; Souza E Silva, Maria Elisa; Resende, Vera Lúcia Silva
2016-01-01
To investigate the prevalence of and factors associated with mouth breathing among patients with developmental disabilities at a dental service. We analyzed 408 dental records. Mouth breathing was reported by the patients' parents and assessed by direct observation. Other variables were as follows: history of asthma, bronchitis, palate shape, pacifier use, thumb sucking, nail biting, use of medications, gastroesophageal reflux, bruxism, gender, age, and diagnosis of the patient. Statistical analysis included descriptive analysis with ratio calculation and multiple logistic regression. Variables with p < 0.25 were included in the model to estimate the adjusted OR (95% CI), calculated by the forward stepwise method. Variables with p < 0.05 were kept in the model. Being male (p = 0.016) and use of centrally acting drugs (p = 0.001) were the variables that remained in the model. Among patients with developmental disabilities, boys and psychotropic drug users had a greater chance of being mouth breathers. © 2016 Special Care Dentistry Association and Wiley Periodicals, Inc.
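The liberal p < 0.25 screening step described above can be sketched with univariate chi-square tests. Everything here is fabricated for illustration: the sample, the effect size, and the candidate variable names are stand-ins, not the study's records.

```python
# Hedged sketch of a p < 0.25 univariate screening step before stepwise
# logistic regression, using chi-square tests on 2x2 tables. The data
# and variable names are simulated, not the study's dental records.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(2)
n = 408                                   # mimic the 408 analyzed records
male = rng.integers(0, 2, n)
# Assumed effect: higher mouth-breathing probability for boys.
mouth_breathing = (rng.random(n) < np.where(male == 1, 0.65, 0.35)).astype(int)

candidates = {"male": male, "bruxism": rng.integers(0, 2, n)}
screened = []
for name, var in candidates.items():
    table = np.array([[np.sum((var == a) & (mouth_breathing == b))
                       for b in (0, 1)] for a in (0, 1)])
    p = chi2_contingency(table)[1]
    if p < 0.25:                          # liberal screening threshold
        screened.append(name)
print(screened)
```

Variables surviving the screen would then enter a forward stepwise logistic fit, with the stricter p < 0.05 rule deciding what stays in the final model.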
Stephens, Christine; Noone, Jack; Alpass, Fiona
2014-01-01
This study tested the effects of social network engagement and social support on the health of older people moving into retirement, using a model which includes social context variables. A prospective survey of a New Zealand population sample aged 54-70 at baseline (N = 2,282) was used to assess the effects on mental and physical health across time. A structural equation model assessed pathways from the social context variables through network engagement to social support and then to mental and physical health 2 years later. The proposed model of effects on mental health was supported when gender, economic living standards, and ethnicity were included along with the direct effects of these variables on social support. These findings confirm the importance of taking social context variables into account when considering social support networks. Social engagement appears to be an important aspect of social network functioning which could be investigated further.
Random parameter models for accident prediction on two-lane undivided highways in India.
Dinu, R R; Veeraragavan, A
2011-02-01
Generalized linear modeling (GLM), with the assumption of Poisson or negative binomial error structure, has been widely employed in road accident modeling. A number of explanatory variables related to traffic, road geometry, and environment that contribute to accident occurrence have been identified and accident prediction models have been proposed. The accident prediction models reported in literature largely employ the fixed parameter modeling approach, where the magnitude of influence of an explanatory variable is considered to be fixed for any observation in the population. Similar models have been proposed for Indian highways too, which include additional variables representing traffic composition. The mixed traffic on Indian highways exhibits considerable internal variability, ranging from differences in vehicle types to variability in driver behavior. This could result in variability in the effect of explanatory variables on accidents across locations. Random parameter models, which can capture some of this variability, are expected to be more appropriate for the Indian situation. The present study is an attempt to employ random parameter modeling for accident prediction on two-lane undivided rural highways in India. Three years of accident history, from nearly 200 km of highway segments, is used to calibrate and validate the models. The results of the analysis suggest that the model coefficients for traffic volume, proportion of cars, motorized two-wheelers and trucks in traffic, driveway density, and horizontal and vertical curvatures are randomly distributed across locations. The paper concludes with a discussion of the modeling results and the limitations of the present study. Copyright © 2010 Elsevier Ltd. All rights reserved.
A satellite observation test bed for cloud parameterization development
NASA Astrophysics Data System (ADS)
Lebsock, M. D.; Suselj, K.
2015-12-01
We present an observational test-bed of cloud and precipitation properties derived from CloudSat, CALIPSO, and the A-Train. The focus of the test-bed is on marine boundary layer clouds, including stratocumulus and cumulus and the transition between these cloud regimes. Test-bed properties include the cloud cover and three-dimensional cloud fraction, along with the cloud water path, precipitation water content, and associated radiative fluxes. We also include the subgrid-scale distribution of cloud, precipitation, and radiative quantities, which must be diagnosed by a model parameterization. The test-bed further includes meteorological variables from the Modern Era Retrospective-analysis for Research and Applications (MERRA). MERRA variables provide the initialization and forcing datasets to run a parameterization in Single Column Model (SCM) mode. We show comparisons of an Eddy-Diffusivity/Mass-Flux (EDMF) parameterization coupled to microphysics and macrophysics packages run in SCM mode with observed clouds. Comparisons are performed regionally in areas of climatological subsidence, as well as stratified by dynamical and thermodynamical variables. Comparisons demonstrate the ability of the EDMF model to capture the observed transitions between subtropical stratocumulus and cumulus cloud regimes.
[Associated factors in newborns with intrauterine growth retardation].
Thompson-Chagoyán, Oscar C; Vega-Franco, Leopoldo
2008-01-01
To identify the risk factors implicated in the intrauterine growth retardation (IUGR) of neonates born in a social security institution. Case-control study of 376 neonates: 188 with IUGR (weight < 10th percentile) and 188 without IUGR. At birth, information on 30 risk variables for IUGR was obtained from the mothers. Risk analysis and logistic regression (stepwise) were used. Odds ratios were significant for 12 of the variables. The model obtained by stepwise regression included: weight gain during pregnancy, prenatal care attendance, toxemia, chocolate ingestion, father's weight, and the household environment. Most of the variables included in the model reflect socioeconomic disadvantages related to the risk of IUGR in this population.
NASA Technical Reports Server (NTRS)
Wetzel, Peter J.; Chang, Jy-Tai
1988-01-01
Observations of surface heterogeneity of soil moisture from scales of meters to hundreds of kilometers are discussed, and a relationship between grid element size and soil moisture variability is presented. An evapotranspiration model is presented which accounts for the variability of soil moisture, standing surface water, and vegetation internal and stomatal resistance to moisture flow from the soil. The mean values and standard deviations of these parameters are required as input to the model. Tests of this model against field observations are reported, and extensive sensitivity tests are presented which explore the importance of including subgrid-scale variability in an evapotranspiration model.
Gartner, J.E.; Cannon, S.H.; Santi, P.M.; deWolfe, V.G.
2008-01-01
Recently burned basins frequently produce debris flows in response to moderate-to-severe rainfall. Post-fire hazard assessments of debris flows are most useful when they predict the volume of material that may flow out of a burned basin. This study develops a set of empirically-based models that predict potential volumes of wildfire-related debris flows in different regions and geologic settings. The models were developed using data from 53 recently burned basins in Colorado, Utah and California. The volumes of debris flows in these basins were determined by either measuring the volume of material eroded from the channels, or by estimating the amount of material removed from debris retention basins. For each basin, independent variables thought to affect the volume of the debris flow were determined. These variables include measures of basin morphology, basin areas burned at different severities, soil material properties, rock type, and rainfall amounts and intensities for storms triggering debris flows. Using these data, multiple regression analyses were used to create separate predictive models for volumes of debris flows generated by burned basins in six separate regions or settings, including the western U.S., southern California, the Rocky Mountain region, and basins underlain by sedimentary, metamorphic and granitic rocks. An evaluation of these models indicated that the best model (the Western U.S. model) explains 83% of the variability in the volumes of the debris flows, and includes variables that describe the basin area with slopes greater than or equal to 30%, the basin area burned at moderate and high severity, and total storm rainfall. This model was independently validated by comparing volumes of debris flows reported in the literature, to volumes estimated using the model. Eighty-seven percent of the reported volumes were within two residual standard errors of the volumes predicted using the model. 
This model is an improvement over previous models in that it includes a measure of burn severity and an estimate of modeling errors. The application of this model, in conjunction with models for the probability of debris flows, will enable more complete and rapid assessments of debris flow hazards following wildfire.
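A multiple linear regression of the kind used for these volume models can be sketched in a few lines. The variables below mirror the Western U.S. model's predictor types (steep-slope area, moderate/high burn-severity area, storm rainfall), but the coefficients and data are invented, not the paper's fitted model.

```python
# Illustrative multiple linear regression in the spirit of the debris-flow
# volume models: ln(volume) regressed on steep-slope area, burned area,
# and storm rainfall. All numbers here are synthetic assumptions.
import numpy as np

rng = np.random.default_rng(3)
n = 53                                   # number of burned basins in the study
steep_area = rng.uniform(0.1, 10.0, n)   # km^2 with slope >= 30%
burned_area = rng.uniform(0.1, 8.0, n)   # km^2 burned at moderate/high severity
rainfall = rng.uniform(5.0, 60.0, n)     # total storm rainfall, mm

# Assumed "true" relationship used to generate the synthetic responses.
ln_volume = (4.0 + 0.4 * steep_area + 0.3 * burned_area
             + 0.02 * rainfall + rng.normal(0, 0.3, n))

X = np.column_stack([np.ones(n), steep_area, burned_area, rainfall])
coef, *_ = np.linalg.lstsq(X, ln_volume, rcond=None)
resid = ln_volume - X @ coef
r2 = 1 - resid.var() / ln_volume.var()
print(np.round(coef, 2), round(r2, 2))
```

Validation in the paper's sense would compare predictions against basins held out of the fit, flagging those beyond two residual standard errors.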
Assessment of mid-latitude atmospheric variability in CMIP5 models using a process oriented-metric
NASA Astrophysics Data System (ADS)
Di Biagio, Valeria; Calmanti, Sandro; Dell'Aquila, Alessandro; Ruti, Paolo
2013-04-01
We compare, for the period 1962-2000, an estimate of the Northern Hemisphere mid-latitude winter atmospheric variability according to several global climate models included in the fifth phase of the Coupled Model Intercomparison Project (CMIP5) with the results of models from the previous CMIP3 and with the NCEP-NCAR reanalysis. We use the space-time Hayashi spectra of the 500 hPa geopotential height fields to characterize the variability of atmospheric circulation regimes, and we introduce an ad hoc integral measure of the variability observed in the Northern Hemisphere on different spectral sub-domains. The overall performance of each model is evaluated by considering the total wave variability as a global scalar measure of the statistical properties of different types of atmospheric disturbances. The variability associated with eastward-propagating baroclinic waves and with planetary waves is instead used to describe the performance of each model in terms of specific physical processes. We find that the two model ensembles (CMIP3 and CMIP5) do not show substantial differences in the description of Northern Hemisphere winter mid-latitude atmospheric variability, although some CMIP5 models display performance superior to their previous versions implemented in CMIP3. Preliminary results for the 21st century RCP 4.5 scenario will also be discussed for the CMIP5 models.
Yao, Zheng-Yang; Liu, Jian-Jun
2014-01-01
Four common greening shrub species (i.e., Ligustrum quihoui, Buxus bodinieri, Berberis xinganensis, and Buxus megistophylla) in Xi'an City were selected to develop the highest-correlation, best-fit estimation models for organ (branch, leaf, and root) and total biomass against different independent variables. The results indicated that the optimal organ and total biomass models of the four shrubs were power functional models (CAR models), except for the leaf biomass model of B. megistophylla, which was a logarithmic functional model (VAR model). The independent variables included basal diameter, crown diameter, crown diameter multiplied by height, canopy area, and canopy volume. B. megistophylla differed significantly from the other three shrub species in independent variable selection: its models relied on basal diameter, whereas the others relied on crown-related factors.
Stratospheric temperatures and tracer transport in a nudged 4-year middle atmosphere GCM simulation
NASA Astrophysics Data System (ADS)
van Aalst, M. K.; Lelieveld, J.; Steil, B.; Brühl, C.; Jöckel, P.; Giorgetta, M. A.; Roelofs, G.-J.
2005-02-01
We have performed a 4-year simulation with the Middle Atmosphere General Circulation Model MAECHAM5/MESSy, while slightly nudging the model's meteorology in the free troposphere (below 113 hPa) towards ECMWF analyses. We show that the nudging technique, which leaves the middle atmosphere almost entirely free, enables comparisons with synoptic observations. The model successfully reproduces many specific features of the interannual variability, including details of the Antarctic vortex structure. In the Arctic, the model captures general features of the interannual variability, but falls short in reproducing the timing of sudden stratospheric warmings. A detailed comparison of the nudged model simulations with ECMWF data shows that the model simulates realistic stratospheric temperature distributions and variabilities, including the temperature minima in the Antarctic vortex. Some small (a few K) model biases were also identified, including a summer cold bias at both poles, and a general cold bias in the lower stratosphere, most pronounced in midlatitudes. A comparison of tracer distributions with HALOE observations shows that the model successfully reproduces specific aspects of the instantaneous circulation. The main tracer transport deficiencies occur in the polar lowermost stratosphere. These are related to the tropopause altitude as well as the tracer advection scheme and model resolution. The additional nudging of equatorial zonal winds, forcing the quasi-biennial oscillation, significantly improves stratospheric temperatures and tracer distributions.
Werbart, Andrzej; Andersson, Håkan; Sandell, Rolf
2014-01-01
To explore the association between the stability or instability of services' organizational structure and patient- and therapist-initiated discontinuation of therapy in routine mental health care. Three groups, comprising altogether 750 cases in routine mental health care at eight different clinics, were included: cases with patient-initiated discontinuation, cases with therapist-initiated discontinuation, and patients remaining in treatment. Multilevel multinomial regression was used to estimate three models: an initial, unconditional intercept-only model; a second including patient variables; and a final model with significant patient and therapist variables, including the organizational stability of the therapists' clinic. High between-therapist variability was noted. Odds ratios and significance tests indicated a strong association of organizational instability with patient-initiated premature termination in particular. The question of how organizational factors influence treatment results needs further research. Future studies have to be designed in ways that permit clinically meaningful subdivision of the patients' and the therapists' decisions for premature termination.
A Model for the Correlates of Students' Creative Thinking
ERIC Educational Resources Information Center
Sarsani, Mahender Reddy
2007-01-01
The present study aimed to explore the relationships between organisational or school variables, students' personal background variables, and cognitive and motivational variables. The sample for the survey included 373 students drawn from nine Government schools in Andhra Pradesh, India. Students' creative thinking abilities were measured by…
A critical re-evaluation of the regression model specification in the US D1 EQ-5D value function
Rand-Hendriksen, Kim; Augestad, Liv A; Dahl, Fredrik A
2012-01-13
Background The EQ-5D is a generic health-related quality of life instrument (five dimensions with three levels, 243 health states), used extensively in cost-utility/cost-effectiveness analyses. EQ-5D health states are assigned values on a scale anchored in perfect health (1) and death (0). The dominant procedure for defining values for EQ-5D health states involves regression modeling. These regression models have typically included a constant term, interpreted as the utility loss associated with any movement away from perfect health. The authors of the United States EQ-5D valuation study replaced this constant with a variable, D1, which corresponds to the number of impaired dimensions beyond the first. The aim of this study was to illustrate how the use of the D1 variable in place of a constant is problematic. Methods We compared the original D1 regression model with a mathematically equivalent model with a constant term. Comparisons included implications for the magnitude and statistical significance of the coefficients, multicollinearity (variance inflation factors, or VIFs), number of calculation steps needed to determine tariff values, and consequences for tariff interpretation. Results Using the D1 variable in place of a constant shifted all dummy variable coefficients away from zero by the value of the constant, greatly increased the multicollinearity of the model (maximum VIF of 113.2 vs. 21.2), and increased the mean number of calculation steps required to determine health state values. Discussion Using the D1 variable in place of a constant constitutes an unnecessary complication of the model, obscures the fact that at least two of the main effect dummy variables are statistically nonsignificant, and complicates and biases interpretation of the tariff algorithm. PMID:22244261
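The VIF diagnostic used in the comparison above can be computed from first principles. The sketch below uses a small synthetic design matrix, not the EQ-5D dummy-variable design, and the near-collinear column is an illustrative assumption.

```python
# Variance inflation factors from first principles:
# VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing predictor j
# on the remaining predictors (with an intercept).
import numpy as np

def vif(X):
    """Return one VIF per column of X (columns are predictors)."""
    n, p = X.shape
    out = np.empty(p)
    for j in range(p):
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
        resid = X[:, j] - others @ beta
        r2 = 1 - resid.var() / X[:, j].var()
        out[j] = 1.0 / (1.0 - r2)
    return out

rng = np.random.default_rng(4)
x1 = rng.normal(size=300)
x2 = rng.normal(size=300)
x3 = x1 + 0.1 * rng.normal(size=300)   # nearly collinear with x1
v = vif(np.column_stack([x1, x2, x3]))
print(np.round(v, 1))                  # x1 and x3 inflate; x2 stays near 1
```

A maximum VIF jumping from 21.2 to 113.2, as reported above, is exactly this kind of inflation: one specification choice makes predictors far more linearly dependent on each other.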
Gustafson, William Jr; Vogelmann, Andrew; Endo, Satoshi; Toto, Tami; Xiao, Heng; Li, Zhijin; Cheng, Xiaoping; Kim, Jinwon; Krishna, Bhargavi
2015-08-31
The Alpha 2 release is the second release from the LASSO Pilot Phase, building upon the Alpha 1 release. Alpha 2 contains additional diagnostics in the data bundles and focuses on cases from spring-summer 2016. A data bundle is a unified package consisting of LASSO LES input and output, observations, evaluation diagnostics, and model skill scores. LES inputs include model configuration information and forcing data. LES outputs include profile statistics and full-domain fields of cloud and environmental variables. Model evaluation data consist of LES output and ARM observations co-registered on the same grid and sampling frequency. Model performance is quantified by skill scores and diagnostics in terms of cloud and environmental variables.
How much rainfall sustained a Green Sahara during the mid-Holocene?
NASA Astrophysics Data System (ADS)
Hopcroft, Peter; Valdes, Paul; Harper, Anna
2016-04-01
The present-day Sahara desert has periodically transformed to an area of lakes and vegetation during the Quaternary in response to orbitally-induced changes in the monsoon circulation. Coupled atmosphere-ocean general circulation model simulations of the mid-Holocene generally underestimate the required monsoon shift, casting doubt on the fidelity of these models. However, the climatic regime that characterised this period remains unclear. To address this, we applied an ensemble of dynamic vegetation model simulations using two different models: JULES (Joint UK Land Environment Simulator), a comprehensive land surface model, and LPJ (Lund-Potsdam-Jena model), a widely used dynamic vegetation model. The simulations are forced with a number of idealized climate scenarios, in which an observational climatology is progressively altered with imposed anomalies of precipitation and other related variables, including cloud cover and humidity. The applied anomalies are based on an ensemble of general circulation model simulations, and include seasonal variations but are spatially uniform across the region. When perturbing precipitation alone, a significant increase of at least 700 mm/year is required to produce model simulations with non-negligible vegetation coverage in the Sahara region. Changes in related variables, including cloud cover, surface radiation fluxes, and humidity, are found to be important in the models, as they modify the water balance and so affect plant growth. Including anomalies in all of these variables together reduces the precipitation change required for a Green Sahara compared with the case of increasing precipitation alone. We assess whether the precipitation changes implied by these vegetation model simulations are consistent with reconstructions for the mid-Holocene from pollen samples. Further, Earth System models predict precipitation increases that are significantly smaller than those inferred from these vegetation model simulations.
Understanding this difference presents an ongoing challenge.
NASA Technical Reports Server (NTRS)
Hastings, L. J.; Hedayat, A.; Brown, T. M.
2004-01-01
A unique foam/multilayer insulation (MLI) combination concept for orbital cryogenic storage was experimentally evaluated using a large-scale hydrogen tank. The foam substrate insulates during ground-hold periods and enables a gaseous nitrogen purge as opposed to helium. The MLI, designed for an on-orbit storage period of 45 days, includes several unique features, including a variable layer density and larger but fewer perforations for venting during ascent to orbit. Test results with liquid hydrogen indicated that the MLI weight or tank heat leak is reduced by about half in comparison with standard MLI. The focus of this effort is on analytical modeling of the variable-density MLI (VD-MLI) on-orbit performance. The foam/VD-MLI model is considered to have five segments. The first segment represents the optional foam layer. The second, third, and fourth segments represent three different MLI layer densities. The last segment is an environmental boundary, or shroud, that surrounds the last MLI layer. Two approaches are considered: a variable-density MLI modeled layer by layer, and a semiempirical model or "modified Lockheed equation." Results from the two models were very comparable and were within 5-8 percent of the measured data at the 300 K boundary condition.
Prediction of Incident Diabetes in the Jackson Heart Study Using High-Dimensional Machine Learning
Casanova, Ramon; Saldana, Santiago; Simpson, Sean L.; Lacy, Mary E.; Subauste, Angela R.; Blackshear, Chad; Wagenknecht, Lynne; Bertoni, Alain G.
2016-01-01
Statistical models to predict incident diabetes are often based on limited variables. Here we pursued two main goals: 1) investigate the relative performance of a machine learning method such as Random Forests (RF) for detecting incident diabetes in a high-dimensional setting defined by a large set of observational data, and 2) uncover potential predictors of diabetes. The Jackson Heart Study collected data at baseline and in two follow-up visits from 5,301 African Americans. We excluded those with baseline diabetes and no follow-up, leaving 3,633 individuals for analyses. Over a mean 8-year follow-up, 584 participants developed diabetes. The full RF model evaluated 93 variables including demographic, anthropometric, blood biomarker, medical history, and echocardiogram data. We also used RF metrics of variable importance to rank variables according to their contribution to diabetes prediction. We implemented other models based on logistic regression and RF where features were preselected. The RF full model performance was similar (AUC = 0.82) to those more parsimonious models. The top-ranked variables according to RF included hemoglobin A1C, fasting plasma glucose, waist circumference, adiponectin, c-reactive protein, triglycerides, leptin, left ventricular mass, high-density lipoprotein cholesterol, and aldosterone. This work shows the potential of RF for incident diabetes prediction while dealing with high-dimensional data. PMID:27727289
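The Random Forest workflow above, fit then rank by variable importance, can be sketched in a few lines. The data are simulated stand-ins for a high-dimensional cohort table; the "informative" features are assumptions for illustration, not Jackson Heart Study variables.

```python
# Minimal sketch of the study's approach: a Random Forest classifier on
# a high-dimensional table, ranked by impurity-based variable importance.
# Data and feature roles are simulated assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(5)
n, p = 1000, 93                                  # mimic 93 candidate variables
X = rng.normal(size=(n, p))
# Let features 0 and 1 (think: HbA1c, fasting glucose) drive the outcome.
logits = 1.5 * X[:, 0] + 1.0 * X[:, 1]
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
# Rank variables by their contribution to prediction, as in the study.
ranking = np.argsort(rf.feature_importances_)[::-1]
print("top 5 features:", ranking[:5])
```

Impurity-based importances are biased toward high-cardinality features, so permutation importance is a common cross-check when the ranking itself is a finding.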
NASA Astrophysics Data System (ADS)
WU, Chunhung
2015-04-01
The research built the original logistic regression landslide susceptibility model (abbreviated as or-LRLSM) and landslide ratio-based ogistic regression landslide susceptibility model (abbreviated as lr-LRLSM), compared the performance and explained the error source of two models. The research assumes that the performance of the logistic regression model can be better if the distribution of landslide ratio and weighted value of each variable is similar. Landslide ratio is the ratio of landslide area to total area in the specific area and an useful index to evaluate the seriousness of landslide disaster in Taiwan. The research adopted the landside inventory induced by 2009 Typhoon Morakot in the Chishan watershed, which was the most serious disaster event in the last decade, in Taiwan. The research adopted the 20 m grid as the basic unit in building the LRLSM, and six variables, including elevation, slope, aspect, geological formation, accumulated rainfall, and bank erosion, were included in the two models. The six variables were divided as continuous variables, including elevation, slope, and accumulated rainfall, and categorical variables, including aspect, geological formation and bank erosion in building the or-LRLSM, while all variables, which were classified based on landslide ratio, were categorical variables in building the lr-LRLSM. Because the count of whole basic unit in the Chishan watershed was too much to calculate by using commercial software, the research took random sampling instead of the whole basic units. The research adopted equal proportions of landslide unit and not landslide unit in logistic regression analysis. The research took 10 times random sampling and selected the group with the best Cox & Snell R2 value and Nagelkerker R2 value as the database for the following analysis. 
Based on the best result from the 10 random sampling groups, the or-LRLSM (lr-LRLSM) is significant at the 1% level with Cox & Snell R2 = 0.190 (0.196) and Nagelkerke R2 = 0.253 (0.260). A unit with a landslide susceptibility value > 0.5 (≤ 0.5) is classified as a predicted landslide (non-landslide) unit. The AUC, i.e. the area under the relative operating characteristic curve, of the or-LRLSM in the Chishan watershed is 0.72, while that of the lr-LRLSM is 0.77. Furthermore, the average correct ratio of the lr-LRLSM (73.3%) is better than that of the or-LRLSM (68.3%). The research analyzed the error sources of the two models in detail. For continuous variables, the landslide ratio-based classification used in building the lr-LRLSM makes the distribution of the weighted value more similar to the distribution of landslide ratio over the range of the continuous variable than the classification used in building the or-LRLSM. For categorical variables, using the landslide ratio-based classification in building the lr-LRLSM amounts to gathering parameters with approximately equal landslide ratios together. The mean correct ratio for continuous (categorical) variables with the lr-LRLSM is better than that with the or-LRLSM by 0.6~2.6% (1.7%~6.0%). Building the landslide susceptibility model using landslide ratio-based classification is practical and performs better than the original logistic regression.
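The landslide-ratio-based reclassification at the heart of the lr-LRLSM can be illustrated in a few lines: bin a continuous variable, compute the landslide ratio per bin, and feed the bins to a logistic regression as categories. The elevation values, bin edges, and landslide probabilities below are invented for illustration, not taken from the Chishan data.

```python
# Minimal sketch of landslide-ratio-based reclassification (synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import OneHotEncoder

rng = np.random.default_rng(1)
elev = rng.uniform(0, 2000, size=2000)            # synthetic "elevation" per grid unit
# Synthetic truth: landslide probability rises with elevation.
y = (rng.uniform(size=2000) < elev / 4000).astype(int)

bins = np.digitize(elev, [500, 1000, 1500])       # 4 elevation classes
# Landslide ratio per class -- the index used to regroup variables in the lr-LRLSM.
ratio = {b: y[bins == b].mean() for b in np.unique(bins)}

# Treat the binned variable as categorical, as in the lr-LRLSM.
X = OneHotEncoder().fit_transform(bins.reshape(-1, 1)).toarray()
model = LogisticRegression().fit(X, y)
```

Classes with similar landslide ratios could then be merged before refitting, which is the regrouping step the abstract credits for the improved correct ratio.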
McGill, M J; Hart, W D; McKay, J A; Spinhirne, J D
1999-10-20
Previous modeling of the performance of spaceborne direct-detection Doppler lidar systems assumed extremely idealized atmospheric models. Here we develop a technique for modeling the performance of these systems in a more realistic atmosphere, based on actual airborne lidar observations. The resulting atmospheric model contains cloud and aerosol variability that is absent in other simulations of spaceborne Doppler lidar instruments. To produce a realistic simulation of daytime performance, we include solar radiance values that are based on actual measurements and are allowed to vary as the viewing scene changes. Simulations are performed for two types of direct-detection Doppler lidar system: the double-edge and the multichannel techniques. Both systems were optimized to measure winds from Rayleigh backscatter at 355 nm. Simulations show that the measurement uncertainty during daytime is degraded by only approximately 10-20% compared with nighttime performance, provided that a proper solar filter is included in the instrument design.
Dynamic rupture modeling with laboratory-derived constitutive relations
Okubo, P.G.
1989-01-01
A laboratory-derived state variable friction constitutive relation is used in the numerical simulation of the dynamic growth of an in-plane or mode II shear crack. According to this formulation, originally presented by J.H. Dieterich, frictional resistance varies with the logarithm of the slip rate and with the logarithm of the frictional state variable as identified by A.L. Ruina. Under conditions of steady sliding, the state variable is proportional to (slip rate)^-1. Following suddenly introduced increases in slip rate, the rate and state dependencies combine to produce behavior which resembles slip weakening. When rupture nucleation is artificially forced at fixed rupture velocity, rupture models calculated with the state variable friction in a uniformly distributed initial stress field closely resemble earlier rupture models calculated with a slip weakening fault constitutive relation. Model calculations suggest that dynamic rupture following a state variable friction relation is similar to that following a simpler fault slip weakening law. However, when modeling the full cycle of fault motions, rate-dependent frictional responses included in the state variable formulation are important at low slip rates associated with rupture nucleation. -from Author
Varga, Leah M.; Surratt, Hilary L.
2014-01-01
Background Patterns of social and structural factors experienced by vulnerable populations may negatively affect willingness and ability to seek out health care services, and ultimately, their health. Methods The outcome variable was utilization of health care services in the previous 12 months. Using Andersen’s Behavioral Model for Vulnerable Populations, we examined self-reported data on utilization of health care services among a sample of 546 Black, street-based female sex workers in Miami, Florida. To evaluate the impact of each domain of the model on predicting health care utilization, domains were included in the logistic regression analysis by blocks using the traditional variables first and then adding the vulnerable domain variables. Findings The most consistent variables predicting health care utilization were having a regular source of care and self-rated health. The model that included only enabling variables was the most efficient model in predicting health care utilization. Conclusions Any type of resource, link, or connection to or with an institution, or any consistent point of care contributes significantly to health care utilization behaviors. A consistent and reliable source for health care may increase health care utilization and subsequently decrease health disparities among vulnerable and marginalized populations, as well as contribute to public health efforts that encourage preventive health. PMID:24657047
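The block-entry strategy described here, traditional Andersen domains entered first and vulnerable-domain variables added second, can be sketched with two logistic fits compared by a likelihood-ratio-style statistic. All variable names and effect sizes are hypothetical; only the sample size of 546 comes from the abstract.

```python
# Sketch of block (hierarchical) entry of predictor domains (synthetic data;
# variable names hypothetical, n = 546 from the abstract).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

rng = np.random.default_rng(2)
n = 546
traditional = rng.normal(size=(n, 3))    # e.g. age, regular source of care, self-rated health
vulnerable = rng.normal(size=(n, 2))     # e.g. housing instability, exposure variables

logit = 0.8 * traditional[:, 2] + 0.6 * vulnerable[:, 0]
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

# Block 1: traditional domains only. Block 2: add the vulnerable domains.
block1 = LogisticRegression().fit(traditional, y)
X_full = np.hstack([traditional, vulnerable])
block2 = LogisticRegression().fit(X_full, y)

ll1 = -log_loss(y, block1.predict_proba(traditional), normalize=False)
ll2 = -log_loss(y, block2.predict_proba(X_full), normalize=False)
improvement = 2 * (ll2 - ll1)            # likelihood-ratio-style gain from block 2
```

A positive gain indicates the added domain improves fit, which is how the contribution of each block is usually judged in this design.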
Compensation for Lithography Induced Process Variations during Physical Design
NASA Astrophysics Data System (ADS)
Chin, Eric Yiow-Bing
This dissertation addresses the challenge of designing robust integrated circuits in the deep sub-micron regime in the presence of lithography process variability. By extending and combining existing process and circuit analysis techniques, flexible software frameworks are developed to provide detailed studies of circuit performance in the presence of lithography variations such as focus and exposure. Applications of these software frameworks to selected circuits demonstrate the electrical impact of these variations and provide insight into variability aware compact models that capture the process dependent circuit behavior. These variability aware timing models abstract lithography variability from the process level to the circuit level and are used to estimate path level circuit performance with high accuracy and very little runtime overhead. The Interconnect Variability Characterization (IVC) framework maps lithography induced geometrical variations at the interconnect level to electrical delay variations. This framework is applied to one dimensional repeater circuits patterned with both 90nm single patterning and 32nm double patterning technologies, under the presence of focus, exposure, and overlay variability. Studies indicate that single and double patterning layouts generally exhibit small delay variations (between 1% and 3%) due to the self-compensating RC effects associated with dense layouts, with larger variations arising from overlay errors in layouts without self-compensating RC effects. The delay response of each double patterned interconnect structure is fit with a second order polynomial model in focus, exposure, and misalignment parameters with 12 coefficients and residuals of less than 0.1ps. The IVC framework is also applied to a repeater circuit with cascaded interconnect structures to emulate more complex layout scenarios, and it is observed that the variations on each segment average out to reduce the overall delay variation. 
The Standard Cell Variability Characterization (SCVC) framework advances existing layout-level lithography aware circuit analysis by extending it to cell-level applications utilizing a physically accurate approach that integrates process simulation, compact transistor models, and circuit simulation to characterize electrical cell behavior. This framework is applied to combinational and sequential cells in the Nangate 45nm Open Cell Library, and the timing response of these cells to lithography focus and exposure variations demonstrate Bossung like behavior. This behavior permits the process parameter dependent response to be captured in a nine term variability aware compact model based on Bossung fitting equations. For a two input NAND gate, the variability aware compact model captures the simulated response to an accuracy of 0.3%. The SCVC framework is also applied to investigate advanced process effects including misalignment and layout proximity. The abstraction of process variability from the layout level to the cell level opens up an entire new realm of circuit analysis and optimization and provides a foundation for path level variability analysis without the computationally expensive costs associated with joint process and circuit simulation. The SCVC framework is used with slight modification to illustrate the speedup and accuracy tradeoffs of using compact models. With variability aware compact models, the process dependent performance of a three stage logic circuit can be estimated to an accuracy of 0.7% with a speedup of over 50,000. Path level variability analysis also provides an accurate estimate (within 1%) of ring oscillator period in well under a second. Another significant advantage of variability aware compact models is that they can be easily incorporated into existing design methodologies for design optimization. This is demonstrated by applying cell swapping on a logic circuit to reduce the overall delay variability along a circuit path. 
By including these variability aware compact models in cell characterization libraries, design metrics such as circuit timing, power, area, and delay variability can be quickly assessed to optimize for the correct balance of all design metrics, including delay variability. Deterministic lithography variations can be easily captured using the variability aware compact models described in this dissertation. However, another prominent source of variability is random dopant fluctuations, which affect transistor threshold voltage and in turn circuit performance. The SCVC framework is utilized to investigate the interactions between deterministic lithography variations and random dopant fluctuations. Monte Carlo studies show that the output delay distribution in the presence of random dopant fluctuations is dependent on lithography focus and exposure conditions, with a 3.6 ps change in standard deviation across the focus exposure process window. This indicates that the electrical impact of random variations is dependent on systematic lithography variations, and this dependency should be included for precise analysis.
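The Monte Carlo interaction study described above can be caricatured with a toy delay model in which the lithography corner scales the circuit's sensitivity to random dopant-induced threshold shifts. The delay model and every number below are invented for illustration; they are not the dissertation's SCVC results.

```python
# Toy Monte Carlo: delay spread from random dopant fluctuations evaluated at
# two lithography (focus/exposure) corners. Entirely hypothetical delay model.
import numpy as np

rng = np.random.default_rng(7)

def gate_delay(vth_shift, litho_factor):
    # Invented model: the lithography condition scales sensitivity to Vth shifts.
    return 20.0 + litho_factor * 40.0 * vth_shift   # picoseconds

vth = rng.normal(scale=0.03, size=10_000)           # random dopant Vth shifts (V)
sigma_nominal = gate_delay(vth, 1.0).std()          # spread at the nominal corner
sigma_defocus = gate_delay(vth, 1.5).std()          # spread at a more sensitive corner
delta_sigma = sigma_defocus - sigma_nominal         # change in sigma across the window
```

The point mirrors the abstract's finding: the standard deviation of the delay distribution is itself a function of the systematic lithography condition.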
Developing a Model for Forecasting Road Traffic Accident (RTA) Fatalities in Yemen
NASA Astrophysics Data System (ADS)
Karim, Fareed M. A.; Abdo Saleh, Ali; Taijoobux, Aref; Ševrović, Marko
2017-12-01
The aim of this paper is to develop a model for forecasting RTA fatalities in Yemen. Yearly fatalities were modeled as the dependent variable, while the candidate independent variables included population, number of vehicles, GNP, GDP, and real GDP per capita. All of these variables were found to be highly correlated with fatalities (correlation coefficient r ≈ 0.9); to avoid multicollinearity in the model, the single variable with the highest r value (real GDP per capita) was selected. A simple regression model was developed; the fit was very good (R2 = 0.916), but the residuals were serially correlated. The Prais-Winsten procedure was used to overcome this violation of the regression assumptions. Data for the 20-year period 1991-2010 were analyzed to build the model, which was validated using data for the years 2011-2013; the historical fit for the period 1991-2011 was very good, and the validation for 2011-2013 also proved accurate.
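The Prais-Winsten procedure mentioned above quasi-differences the series with an estimated AR(1) coefficient while retaining the first observation (unlike Cochrane-Orcutt, which drops it). A minimal sketch on a synthetic series standing in for the Yemen data:

```python
# Hedged sketch of a one-pass Prais-Winsten correction for AR(1) residuals
# in a simple regression; the series below are synthetic, not the Yemen data.
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(20)
x = 100 + 5 * t + rng.normal(size=20)      # stand-in for real GDP per capita
e = np.zeros(20)
for i in range(1, 20):                      # AR(1) errors with rho = 0.6
    e[i] = 0.6 * e[i - 1] + rng.normal()
y = 2.0 + 0.05 * x + e

def ols(xcol, yvec):
    A = np.column_stack([np.ones_like(yvec), xcol])
    return np.linalg.lstsq(A, yvec, rcond=None)[0]

b = ols(x, y)
resid = y - b[0] - b[1] * x
rho = np.sum(resid[1:] * resid[:-1]) / np.sum(resid[:-1] ** 2)

# Quasi-difference, keeping the first observation scaled by sqrt(1 - rho^2).
ys = np.empty(20); xs = np.empty(20); c = np.empty(20)
ys[0] = np.sqrt(1 - rho**2) * y[0]
xs[0] = np.sqrt(1 - rho**2) * x[0]
c[0] = np.sqrt(1 - rho**2)
ys[1:] = y[1:] - rho * y[:-1]
xs[1:] = x[1:] - rho * x[:-1]
c[1:] = 1 - rho
beta = np.linalg.lstsq(np.column_stack([c, xs]), ys, rcond=None)[0]
```

In practice the rho estimate and refit are iterated to convergence; one pass is shown here for brevity.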
Effects of Ensemble Configuration on Estimates of Regional Climate Uncertainties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldenson, N.; Mauger, G.; Leung, L. R.
Internal variability in the climate system can contribute substantial uncertainty in climate projections, particularly at regional scales. Internal variability can be quantified using large ensembles of simulations that are identical but for perturbed initial conditions. Here we compare methods for quantifying internal variability. Our study region spans the west coast of North America, which is strongly influenced by El Niño and other large-scale dynamics through their contribution to large-scale internal variability. Using a statistical framework to simultaneously account for multiple sources of uncertainty, we find that internal variability can be quantified consistently using a large ensemble or an ensemble of opportunity that includes small ensembles from multiple models and climate scenarios. The latter also produces estimates of uncertainty due to model differences. We conclude that projection uncertainties are best assessed using small single-model ensembles from as many model-scenario pairings as computationally feasible, which has implications for ensemble design in large modeling efforts.
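In the simplest case, quantifying internal variability from an initial-condition ensemble reduces to the spread across members that share a model and forcing. A toy sketch with synthetic "regional temperature" members standing in for real simulations:

```python
# Toy estimate of internal variability from a perturbed-initial-condition
# ensemble; the forced signal and member spread are invented numbers.
import numpy as np

rng = np.random.default_rng(6)
forced_signal = 1.5                                        # common forced response (°C)
members = forced_signal + rng.normal(scale=0.4, size=40)   # 40-member ensemble

internal_var = members.var(ddof=1)   # spread across members = internal variability
ensemble_mean = members.mean()       # averaging members isolates the forced response
```

An ensemble of opportunity would repeat this across model-scenario pairings, adding a between-model variance term on top of the within-ensemble spread.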
A comparative modeling analysis of multiscale temporal variability of rainfall in Australia
NASA Astrophysics Data System (ADS)
Samuel, Jos M.; Sivapalan, Murugesu
2008-07-01
The effects of long-term natural climate variability and human-induced climate change on rainfall variability have become the focus of much concern and recent research efforts. In this paper, we present the results of a comparative analysis of observed multiscale temporal variability of rainfall in the Perth, Newcastle, and Darwin regions of Australia. This empirical and stochastic modeling analysis explores multiscale rainfall variability, i.e., ranging from short to long term, including within-storm patterns, and intra-annual, interannual, and interdecadal variabilities, using data taken from each of these regions. The analyses investigated how storm durations, interstorm periods, and average storm rainfall intensities differ for different climate states and demonstrated significant differences in this regard between the three selected regions. In Perth, the average storm intensity is stronger during La Niña years than during El Niño years, whereas in Newcastle and Darwin storm duration is longer during La Niña years. Increase of either storm duration or average storm intensity is the cause of higher average annual rainfall during La Niña years as compared to El Niño years. On the other hand, within-storm variability does not differ significantly between different ENSO states in all three locations. In the case of long-term rainfall variability, the statistical analyses indicated that in Newcastle the long-term rainfall pattern reflects the variability of the Interdecadal Pacific Oscillation (IPO) index, whereas in Perth and Darwin the long-term variability exhibits a step change in average annual rainfall (up in Darwin and down in Perth) which occurred around 1970. The step changes in Perth and Darwin and the switch in IPO states in Newcastle manifested differently in the three study regions in terms of changes in the annual number of rainy days or the average daily rainfall intensity or both. 
On the basis of these empirical data analyses, a stochastic rainfall time series model was developed that incorporates the entire range of multiscale variabilities observed in each region, including within-storm, intra-annual, interannual, and interdecadal variability. Such ability to characterize, model, and synthetically generate realistic time series of rainfall intensities is essential for addressing many hydrological problems, including estimation of flood and drought frequencies, pesticide risk assessment, and landslide frequencies.
Pradip Saud; Thomas B. Lynch; Duncan S. Wilson; John Stewart; James M. Guldin; Bob Heinemann; Randy Holeman; Dennis Wilson; Keith Anderson
2015-01-01
An individual-tree basal area growth model previously developed for even-aged naturally occurring shortleaf pine trees (Pinus echinata Mill.) in western Arkansas and southeastern Oklahoma did not include weather variables. Individual-tree growth and yield modeling of shortleaf pine has been carried out using the remeasurements of over 200 plots...
ERIC Educational Resources Information Center
Mohr, Jonathan J.; Fassinger, Ruth E.
2003-01-01
A model linking attachment variables with self-acceptance and self-disclosure of sexual orientation was tested using data from 489 lesbian, gay, and bisexual (LGB) adults. The model included the following 4 domains of variables: (a) representations of childhood attachment experiences with parents, (b) perceptions of parental support for sexual…
ERIC Educational Resources Information Center
von Eye, Alexander; Mun, Eun Young; Bogat, G. Anne
2008-01-01
This article reviews the premises of configural frequency analysis (CFA), including methods of choosing significance tests and base models, as well as protecting [alpha], and discusses why CFA is a useful approach when conducting longitudinal person-oriented research. CFA operates at the manifest variable level. Longitudinal CFA seeks to identify…
Esperón-Rodríguez, Manuel; Baumgartner, John B.; Beaumont, Linda J.
2017-01-01
Background Shrubs play a key role in biogeochemical cycles, prevent soil and water erosion, provide forage for livestock, and are a source of food, wood and non-wood products. However, despite their ecological and societal importance, the influence of different environmental variables on shrub distributions remains unclear. We evaluated the influence of climate and soil characteristics, and whether including soil variables improved the performance of a species distribution model (SDM), Maxent. Methods This study assessed variation in predictions of environmental suitability for 29 Australian shrub species (representing dominant members of six shrubland classes) due to the use of alternative sets of predictor variables. Models were calibrated with (1) climate variables only, (2) climate and soil variables, and (3) soil variables only. Results The predictive power of SDMs differed substantially across species, but generally models calibrated with both climate and soil data performed better than those calibrated only with climate variables. Models calibrated solely with soil variables were the least accurate. We found regional differences in potential shrub species richness across Australia due to the use of different sets of variables. Conclusions Our study provides evidence that predicted patterns of species richness may be sensitive to the choice of predictor set when multiple, plausible alternatives exist, and demonstrates the importance of considering soil properties when modeling availability of habitat for plants. PMID:28652933
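Comparing predictor sets as in this study can be sketched with any presence/absence classifier scored by cross-validated AUC. Here a plain logistic model stands in for Maxent, and the climate and soil covariates are synthetic, not the Australian shrub records.

```python
# Sketch of predictor-set comparison (climate-only vs. climate + soil) for a
# presence/absence model; synthetic data, logistic stand-in for Maxent.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n = 1500
climate = rng.normal(size=(n, 2))      # e.g. temperature, precipitation (invented)
soil = rng.normal(size=(n, 1))         # e.g. soil pH (invented)

logit = 1.2 * climate[:, 0] + 1.0 * soil[:, 0]
presence = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

auc_climate = cross_val_score(LogisticRegression(), climate, presence,
                              cv=5, scoring="roc_auc").mean()
auc_both = cross_val_score(LogisticRegression(), np.hstack([climate, soil]),
                           presence, cv=5, scoring="roc_auc").mean()
```

When soil genuinely shapes the occurrence pattern, the climate + soil model should score higher, which is the pattern the study reports for most species.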
Electromagnetic interference modeling and suppression techniques in variable-frequency drive systems
NASA Astrophysics Data System (ADS)
Yang, Le; Wang, Shuo; Feng, Jianghua
2017-11-01
Electromagnetic interference (EMI) causes electromechanical damage to the motors and degrades the reliability of variable-frequency drive (VFD) systems. Unlike the fundamental-frequency components in motor drive systems, high-frequency EMI noise, coupled through the parasitic parameters of the system, is difficult to analyze and reduce. In this article, EMI modeling techniques for different functional units in a VFD system, including induction motors, motor bearings, and rectifier-inverters, are reviewed and evaluated in terms of applied frequency range, model parameterization, and model accuracy. The EMI models for the motors are categorized based on modeling techniques and model topologies. Motor bearing and shaft models are also reviewed, and techniques used to eliminate bearing currents are evaluated. Modeling techniques for conventional rectifier-inverter systems are also summarized. EMI noise suppression techniques, including passive filters, Wheatstone bridge balance, active filters, and optimized modulation, are reviewed and compared based on the VFD system models.
Computer program for design analysis of radial-inflow turbines
NASA Technical Reports Server (NTRS)
Glassman, A. J.
1976-01-01
A computer program written in FORTRAN that may be used for the design analysis of radial-inflow turbines was documented. The following information is included: loss model (estimation of losses), the analysis equations, a description of the input and output data, the FORTRAN program listing and list of variables, and sample cases. The input design requirements include the power, mass flow rate, inlet temperature and pressure, and rotational speed. The program output data includes various diameters, efficiencies, temperatures, pressures, velocities, and flow angles for the appropriate calculation stations. The design variables include the stator-exit angle, rotor radius ratios, and rotor-exit tangential velocity distribution. The losses are determined by an internal loss model.
The NASA Marshall Space Flight Center Earth Global Reference Atmospheric Model-2010 Version
NASA Technical Reports Server (NTRS)
Leslie, F. W.; Justus, C. G.
2011-01-01
Reference or standard atmospheric models have long been used for design and mission planning of various aerospace systems. The NASA Marshall Space Flight Center Global Reference Atmospheric Model was developed in response to the need for a design reference atmosphere that provides complete global geographical variability and complete altitude coverage (surface to orbital altitudes), as well as complete seasonal and monthly variability of the thermodynamic variables and wind components. In addition to providing the geographical, height, and monthly variation of the mean atmospheric state, it includes the ability to simulate spatial and temporal perturbations.
Networks for image acquisition, processing and display
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.
1990-01-01
The human visual system comprises layers of networks which sample, process, and code images. Understanding these networks is a valuable means of understanding human vision and of designing autonomous vision systems based on network processing. Ames Research Center has an ongoing program to develop computational models of such networks. The models predict human performance in detection of targets and in discrimination of displayed information. In addition, the models are artificial vision systems sharing properties with biological vision that has been tuned by evolution for high performance. Properties include variable density sampling, noise immunity, multi-resolution coding, and fault-tolerance. The research stresses analysis of noise in visual networks, including sampling, photon, and processing unit noises. Specific accomplishments include: models of sampling array growth with variable density and irregularity comparable to that of the retinal cone mosaic; noise models of networks with signal-dependent and independent noise; models of network connection development for preserving spatial registration and interpolation; multi-resolution encoding models based on hexagonal arrays (HOP transform); and mathematical procedures for simplifying analysis of large networks.
Predictors of persistent pain after total knee arthroplasty: a systematic review and meta-analysis.
Lewis, G N; Rice, D A; McNair, P J; Kluger, M
2015-04-01
Several studies have identified clinical, psychosocial, patient characteristic, and perioperative variables that are associated with persistent postsurgical pain; however, the relative effect of these variables has yet to be quantified. The aim of the study was to provide a systematic review and meta-analysis of predictor variables associated with persistent pain after total knee arthroplasty (TKA). Included studies were required to measure predictor variables prior to or at the time of surgery, include a pain outcome measure at least 3 months post-TKA, and include a statistical analysis of the effect of the predictor variable(s) on the outcome measure. Counts were undertaken of the number of times each predictor was analysed and the number of times it was found to have a significant relationship with persistent pain. Separate meta-analyses were performed to determine the effect size of each predictor on persistent pain. Outcomes from studies implementing uni- and multivariable statistical models were analysed separately. Thirty-two studies involving almost 30 000 patients were included in the review. Preoperative pain was the predictor that most commonly demonstrated a significant relationship with persistent pain across uni- and multivariable analyses. In the meta-analyses of data from univariate models, the largest effect sizes were found for: other pain sites, catastrophizing, and depression. For data from multivariate models, significant effects were evident for: catastrophizing, preoperative pain, mental health, and comorbidities. Catastrophizing, mental health, preoperative knee pain, and pain at other sites are the strongest independent predictors of persistent pain after TKA. © The Author 2014. Published by Oxford University Press on behalf of the British Journal of Anaesthesia. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
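Random-effects pooling of predictor effect sizes, as in the meta-analyses above, is commonly done with the DerSimonian-Laird estimator. The effect sizes and variances below are invented for illustration; they are not the TKA study results.

```python
# Minimal DerSimonian-Laird random-effects pooling (hypothetical effect sizes).
import numpy as np

yi = np.array([0.45, 0.30, 0.60, 0.25, 0.50])   # invented log odds ratios per study
vi = np.array([0.02, 0.04, 0.03, 0.05, 0.02])   # invented within-study variances

w = 1 / vi                                       # fixed-effect (inverse-variance) weights
mu_fe = np.sum(w * yi) / np.sum(w)
Q = np.sum(w * (yi - mu_fe) ** 2)                # Cochran's heterogeneity statistic
df = len(yi) - 1
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - df) / C)                    # between-study variance estimate

w_re = 1 / (vi + tau2)                           # random-effects weights
mu_re = np.sum(w_re * yi) / np.sum(w_re)         # pooled random-effects estimate
se_re = np.sqrt(1 / np.sum(w_re))                # its standard error
```

Pooling is done separately per predictor and per model type (uni- vs multivariable), matching the review's split analyses.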
NASA Technical Reports Server (NTRS)
Branscome, Lee E.; Bleck, Rainer; Obrien, Enda
1990-01-01
The project objectives are to develop process models to investigate the interaction of planetary and synoptic-scale waves including the effects of latent heat release (precipitation), nonlinear dynamics, physical and boundary-layer processes, and large-scale topography; to determine the importance of latent heat release for temporal variability and time-mean behavior of planetary and synoptic-scale waves; to compare the model results with available observations of planetary and synoptic wave variability; and to assess the implications of the results for monitoring precipitation in oceanic-storm tracks by satellite observing systems. Researchers have utilized two different models for this project: a two-level quasi-geostrophic model to study intraseasonal variability, anomalous circulations and the seasonal cycle, and a 10-level, multi-wave primitive equation model to validate the two-level Q-G model and examine effects of convection, surface processes, and spherical geometry. It explicitly resolves several planetary and synoptic waves and includes specific humidity (as a predicted variable), moist convection, and large-scale precipitation. In the past year researchers have concentrated on experiments with the multi-level primitive equation model. The dynamical part of that model is similar to the spectral model used by the National Meteorological Center for medium-range forecasts. The model includes parameterizations of large-scale condensation and moist convection. To test the validity of results regarding the influence of convective precipitation, researchers can use either one of two different convective schemes in the model, a Kuo convective scheme or a modified Arakawa-Schubert scheme which includes downdrafts. By choosing one or the other scheme, they can evaluate the impact of the convective parameterization on the circulation. In the past year researchers performed a variety of initial-value experiments with the primitive-equation model. 
Using initial conditions typical of climatological winter conditions, they examined the behavior of synoptic and planetary waves growing in moist and dry environments. Surface conditions were representative of a zonally averaged ocean. They found that moist convection associated with baroclinic wave development was confined to the subtropics.
Bio-inspired online variable recruitment control of fluidic artificial muscles
NASA Astrophysics Data System (ADS)
Jenkins, Tyler E.; Chapman, Edward M.; Bryant, Matthew
2016-12-01
This paper details the creation of a hybrid variable recruitment control scheme for fluidic artificial muscle (FAM) actuators with an emphasis on maximizing system efficiency and switching control performance. Variable recruitment is the process of altering a system’s active number of actuators, allowing operation in distinct force regimes. Previously, FAM variable recruitment was only quantified with offline, manual valve switching; this study addresses the creation and characterization of novel, on-line FAM switching control algorithms. The bio-inspired algorithms are implemented in conjunction with a PID and model-based controller, and applied to a simulated plant model. Variable recruitment transition effects and chatter rejection are explored via a sensitivity analysis, allowing a system designer to weigh tradeoffs in actuator modeling, algorithm choice, and necessary hardware. Variable recruitment is further developed through simulation of a robotic arm tracking a variety of spline position inputs, requiring several levels of actuator recruitment. Switching controller performance is quantified and compared with baseline systems lacking variable recruitment. The work extends current variable recruitment knowledge by creating novel online variable recruitment control schemes, and exploring how online actuator recruitment affects system efficiency and control performance. Key topics associated with implementing a variable recruitment scheme, including the effects of modeling inaccuracies, hardware considerations, and switching transition concerns are also addressed.
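The switching logic behind variable recruitment can be sketched as a threshold rule with hysteresis to reject chatter near regime boundaries. The force thresholds and per-actuator capacity below are hypothetical, not taken from the paper's FAM hardware.

```python
# Sketch of threshold-based online recruitment with hysteresis
# (hypothetical thresholds; not the paper's controller).
def recruit(force_cmd, active, max_force_each=100.0, hysteresis=10.0):
    """Return the number of active actuators for a commanded force."""
    upper = active * max_force_each                 # ceiling of current regime
    lower = (active - 1) * max_force_each - hysteresis  # floor, offset to avoid chatter
    if force_cmd > upper:
        return active + 1          # recruit an additional actuator
    if active > 1 and force_cmd < lower:
        return active - 1          # derecruit to stay in an efficient regime
    return active
```

A commanded force hovering near a boundary stays in its current regime until it crosses the hysteresis band, which is the chatter-rejection behavior the sensitivity analysis examines.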
Real-time predictive seasonal influenza model in Catalonia, Spain
Basile, Luca; Oviedo de la Fuente, Manuel; Torner, Nuria; Martínez, Ana; Jané, Mireia
2018-01-01
Influenza surveillance is critical to monitoring the situation during epidemic seasons and predictive mathematic models may aid the early detection of epidemic patterns. The objective of this study was to design a real-time spatial predictive model of ILI (Influenza Like Illness) incidence rate in Catalonia using one- and two-week forecasts. The available data sources used to select explanatory variables to include in the model were the statutory reporting disease system and the sentinel surveillance system in Catalonia for influenza incidence rates, the official climate service in Catalonia for meteorological data, laboratory data and Google Flu Trend. Time series for every explanatory variable with data from the last 4 seasons (from 2010–2011 to 2013–2014) was created. A pilot test was conducted during the 2014–2015 season to select the explanatory variables to be included in the model and the type of model to be applied. During the 2015–2016 season a real-time model was applied weekly, obtaining the intensity level and predicted incidence rates with 95% confidence levels one and two weeks away for each health region. At the end of the season, the confidence interval success rate (CISR) and intensity level success rate (ILSR) were analysed. For the 2015–2016 season a CISR of 85.3% at one week and 87.1% at two weeks and an ILSR of 82.9% and 82% were observed, respectively. The model described is a useful tool although it is hard to evaluate due to uncertainty. The accuracy of prediction at one and two weeks was above 80% globally, but was lower during the peak epidemic period. In order to improve the predictive power, new explanatory variables should be included. PMID:29513710
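The confidence interval success rate (CISR) used to evaluate the forecasts is simply the share of weeks whose observed incidence fell inside the predicted 95% interval. The weekly numbers below are illustrative, not the Catalan surveillance data.

```python
# Sketch of the CISR calculation on invented weekly ILI rates and intervals.
observed = [12.0, 15.5, 30.2, 44.0, 38.5, 20.1]   # observed rates per week
lower    = [10.0, 13.0, 25.0, 35.0, 40.0, 18.0]   # predicted 95% CI lower bounds
upper    = [14.0, 18.0, 33.0, 47.0, 50.0, 24.0]   # predicted 95% CI upper bounds

hits = sum(lo <= obs <= up for obs, lo, up in zip(observed, lower, upper))
cisr = 100 * hits / len(observed)                  # percent of in-interval weeks
```

In this toy series one week (38.5 against a 40.0-50.0 interval) misses, illustrating how accuracy degrades around the epidemic peak when intervals lag the rise.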
Year-class formation of upper St. Lawrence River northern pike
Smith, B.M.; Farrell, J.M.; Underwood, H.B.; Smith, S.J.
2007-01-01
Variables associated with year-class formation in upper St. Lawrence River northern pike Esox lucius were examined to explore population trends. A partial least-squares (PLS) regression model (PLS 1) was used to relate a year-class strength index (YCSI; 1974-1997) to explanatory variables associated with spawning and nursery areas (seasonal water level and temperature and their variability, number of ice days, and last day of ice presence). A second model (PLS 2) incorporated four additional ecological variables: potential predators (abundance of double-crested cormorants Phalacrocorax auritus and yellow perch Perca flavescens), female northern pike biomass (as a measure of stock-recruitment effects), and total phosphorus (productivity). Trends in adult northern pike catch revealed a decline (1981-2005), and year-class strength was positively related to catch per unit effort (CPUE; R2 = 0.58). The YCSI exceeded the 23-year mean in only 2 of the last 10 years. Cyclic patterns in the YCSI time series (along with strong year-classes every 4-6 years) were apparent, as was a dampening of amplitude beginning around 1990. The PLS 1 model explained over 50% of variation in both explanatory variables and the dependent variable, YCSI first-order moving-average residuals. Variables retained (N = 10; Wold's statistic ≥ 0.8) included negative YCSI associations with high summer water levels, high variability in spring and fall water levels, and variability in fall water temperature. The YCSI exhibited positive associations with high spring, summer, and fall water temperature, variability in spring temperature, and high winter and spring water level. The PLS 2 model led to positive YCSI associations with phosphorus and yellow perch CPUE and a negative correlation with double-crested cormorant abundance. 
Environmental variables (water level and temperature) are hypothesized to regulate northern pike YCSI cycles, and dampening in YCSI magnitude may be related to a combination of factors, including wetland habitat changes, reduced nutrient loading, and increased predation by double-crested cormorants. © Copyright by the American Fisheries Society 2007.
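The reported relation between year-class strength and adult catch (R2 = 0.58) is a squared Pearson correlation; a minimal sketch with invented YCSI and CPUE series, not the study's data:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented year-class strength index and catch-per-unit-effort values
ycsi = [1.2, 0.8, 1.5, 0.6, 1.1]
cpue = [10.0, 7.5, 13.0, 6.0, 9.5]
r = pearson_r(ycsi, cpue)
print(round(r ** 2, 3))  # r**2 ≈ 0.989 for this fabricated series
```

The PLS models in the study go further by handling many correlated predictors at once, but the squared correlation above is the quantity behind the reported R2.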
Friedel, Michael J.
2001-01-01
This report describes a model for simulating transient, variably saturated, coupled water-heat-solute transport in heterogeneous, anisotropic, two-dimensional ground-water systems with variable fluid density (VST2D). VST2D was developed to help understand the effects of natural and anthropogenic factors on quantity and quality of variably saturated ground-water systems. The model solves simultaneously for one or more dependent variables (pressure, temperature, and concentration) at nodes in a horizontal or vertical mesh using a quasi-linearized general minimum residual method. This approach enhances computational speed beyond the speed of a sequential approach. Heterogeneous and anisotropic conditions are implemented locally using individual element property descriptions. This implementation allows local principal directions to differ among elements and from the global solution domain coordinates. Boundary conditions can include time-varying pressure head (or moisture content), heat, and/or concentration; fluxes distributed along domain boundaries and/or at internal node points; and/or convective moisture, heat, and solute fluxes along the domain boundaries; and/or unit hydraulic gradient along domain boundaries. Other model features include temperature- and concentration-dependent density (liquid and vapor) and viscosity, sorption and/or decay of a solute, and capability to determine moisture content beyond residual to zero. These features are described in the documentation together with development of the governing equations, application of the finite-element formulation (using the Galerkin approach), solution procedure, mass and energy balance considerations, input requirements, and output options.
The VST2D model was verified, and results included solutions for problems of water transport under isohaline and isothermal conditions, heat transport under isobaric and isohaline conditions, solute transport under isobaric and isothermal conditions, and coupled water-heat-solute transport. The first three problems considered in model verification were compared to either analytical or numerical solutions, whereas the coupled problem was compared to measured laboratory results for which no known analytic solutions or numerical models are available. The test results indicate the model is accurate and applicable for a wide range of conditions, including when water (liquid and vapor), heat (sensible and latent), and solute are coupled in ground-water systems. The cumulative residual errors for the coupled problem tested were less than 10⁻⁸ cubic centimeter per cubic centimeter, 10⁻⁵ moles per kilogram, and 10² calories per cubic meter for liquid water content, solute concentration, and heat content, respectively. This model should be useful to hydrologists, engineers, and researchers interested in studying coupled processes associated with variably saturated transport in ground-water systems.
Statistical and Biophysical Models for Predicting Total and Outdoor Water Use in Los Angeles
NASA Astrophysics Data System (ADS)
Mini, C.; Hogue, T. S.; Pincetl, S.
2012-04-01
Modeling water demand is a complex exercise in the choice of the functional form, techniques and variables to integrate in the model. The goal of the current research is to identify the determinants that control total and outdoor residential water use in semi-arid cities and to utilize that information in the development of statistical and biophysical models that can forecast spatial and temporal urban water use. The City of Los Angeles is unique in its highly diverse socio-demographic, economic and cultural characteristics across neighborhoods, which introduces significant challenges in modeling water use. Increasing climate variability also contributes to uncertainties in water use predictions in urban areas. Monthly individual water use records were acquired from the Los Angeles Department of Water and Power (LADWP) for the 2000 to 2010 period. Study predictors of residential water use include socio-demographic, economic, climate and landscaping variables at the zip code level collected from the US Census database. Climate variables are estimated from ground-based observations and calculated at the centroid of each zip code by an inverse-distance weighting method. Remotely-sensed products of vegetation biomass and landscape land cover are also utilized. Two linear regression models were developed based on the panel data and variables described: a pooled-OLS regression model and a linear mixed effects model. Both models show income per capita and the percentage of landscape areas in each zip code as being statistically significant predictors. The pooled-OLS model tends to over-estimate higher water use zip codes and both models provide similar RMSE values. Outdoor water use was estimated at the census tract level as the residual between total water use and indoor use. This residual is being compared with the output from a biophysical model including tree and grass cover areas, climate variables and estimates of evapotranspiration at very high spatial resolution.
A genetic algorithm based model (Shuffled Complex Evolution-UA; SCE-UA) is also being developed to provide estimates of prediction and parameter uncertainties and to compare against the linear regression models. Ultimately, models will be selected to undertake predictions for a range of climate change and landscape scenarios. Finally, project results will contribute to a better understanding of water demand to help predict future water use and implement targeted landscaping conservation programs to maintain sustainable water needs for a growing population under uncertain climate variability.
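The step that brings station climate records to each zip-code centroid, inverse-distance weighting, can be sketched as follows; the station coordinates and temperatures are invented:

```python
def idw(target, stations, power=2):
    """Inverse-distance-weighted estimate at `target` from
    (x, y, value) station tuples."""
    num = den = 0.0
    for x, y, value in stations:
        d2 = (x - target[0]) ** 2 + (y - target[1]) ** 2
        if d2 == 0:
            return value  # target sits exactly on a station
        w = 1.0 / d2 ** (power / 2)
        num += w * value
        den += w
    return num / den

# Temperature (°C) at three hypothetical stations around a zip-code centroid
stations = [(0.0, 0.0, 20.0), (2.0, 0.0, 24.0), (0.0, 2.0, 22.0)]
print(round(idw((1.0, 1.0), stations), 2))  # → 22.0 (all stations equidistant)
```

With the default power of 2, nearer stations dominate the estimate; the centroid above is equidistant from all three stations, so the result is their plain average.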
Hannan, Edward L; Samadashvili, Zaza; Cozzens, Kimberly; Jacobs, Alice K; Venditti, Ferdinand J; Holmes, David R; Berger, Peter B; Stamato, Nicholas J; Hughes, Suzanne; Walford, Gary
2016-05-01
Hospitals' risk-standardized mortality rates and outlier status (significantly higher/lower rates) are reported by the Centers for Medicare and Medicaid Services (CMS) for acute myocardial infarction (AMI) patients using Medicare claims data. New York now has AMI claims data with blood pressure and heart rate added. The objective of this study was to see whether the appended database yields different hospital assessments than standard claims data. New York State clinically appended claims data for AMI were used to create 2 different risk models based on CMS methods: 1 with and 1 without the added clinical data. Model discrimination was compared, and differences between the models in hospital outlier status and tertile status were examined. Mean arterial pressure and heart rate were both significant predictors of mortality in the clinically appended model. The C statistic for the model with the clinical variables added was significantly higher (0.803 vs. 0.773, P<0.001). The model without clinical variables identified 10 low outliers and all of them were percutaneous coronary intervention hospitals. When clinical variables were included in the model, only 6 of those 10 hospitals were low outliers, but there were 2 new low outliers. The model without clinical variables had only 3 high outliers, and the model with clinical variables included identified 2 new high outliers. Appending even a small number of clinical data elements to administrative data resulted in a difference in the assessment of hospital mortality outliers for AMI. The strategy of adding limited but important clinical data elements to administrative datasets should be considered when evaluating hospital quality for procedures and other medical conditions.
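The C statistic used to compare the two risk models is the probability that a randomly chosen death receives a higher predicted risk than a randomly chosen survivor; a minimal pairwise-concordance sketch with invented risks and outcomes:

```python
def c_statistic(scores, outcomes):
    """Concordance: fraction of (event, non-event) pairs in which the
    event gets the higher predicted risk; ties count half."""
    events = [s for s, y in zip(scores, outcomes) if y == 1]
    nonevents = [s for s, y in zip(scores, outcomes) if y == 0]
    concordant = 0.0
    for e in events:
        for n in nonevents:
            if e > n:
                concordant += 1.0
            elif e == n:
                concordant += 0.5
    return concordant / (len(events) * len(nonevents))

# Hypothetical predicted mortality risks and observed outcomes (1 = died)
risks = [0.02, 0.10, 0.30, 0.05, 0.40]
died  = [0,    0,    1,    0,    1]
print(c_statistic(risks, died))  # → 1.0: both deaths outrank every survivor
```

A higher C statistic for the clinically appended model (0.803 vs. 0.773 in the study) means it orders patients by risk more accurately.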
2006-09-30
disturbances from the lower atmosphere and ocean affect the upper atmosphere and how this variability interacts with the variability generated by solar and...represents "general circulation model." Both models include self-consistent ionospheric electrodynamics, that is, a calculation of the electric fields...and currents generated by the ionospheric dynamo, and consideration of their effects on the neutral dynamics. The TIE-GCM is used for studies that
Multivariable Parametric Cost Model for Ground Optical Telescope Assembly
NASA Technical Reports Server (NTRS)
Stahl, H. Philip; Rowell, Ginger Holmes; Reese, Gayle; Byberg, Alicia
2004-01-01
A parametric cost model for ground-based telescopes is developed using multi-variable statistical analysis of both engineering and performance parameters. While diameter continues to be the dominant cost driver, diffraction limited wavelength is found to be a secondary driver. Other parameters such as radius of curvature were examined. The model includes an explicit factor for primary mirror segmentation and/or duplication (i.e. multi-telescope phased-array systems). Additionally, single variable models based on aperture diameter were derived.
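Single-variable cost models of the kind derived here are commonly power laws in aperture diameter fit in log-log space; a sketch on synthetic (diameter, cost) pairs generated from an assumed cost = 2·D^2.5 law, not the paper's data:

```python
import math

def fit_power_law(diameters, costs):
    """Fit cost = a * D**b by linear least squares on log-transformed data."""
    xs = [math.log(d) for d in diameters]
    ys = [math.log(c) for c in costs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - b * mx)
    return a, b

# Synthetic apertures (m) and costs, generated from cost = 2 * D**2.5
ds = [1.0, 2.0, 4.0, 8.0]
cs = [2.0 * d ** 2.5 for d in ds]
a, b = fit_power_law(ds, cs)
print(round(a, 3), round(b, 3))  # recovers a ≈ 2, b ≈ 2.5
```

The multivariable model in the paper adds further terms (e.g. diffraction-limited wavelength and a segmentation factor), but each enters the same way: as another column in a log-space regression.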
Leveraging organismal biology to forecast the effects of climate change.
Buckley, Lauren B; Cannistra, Anthony F; John, Aji
2018-04-26
Despite the pressing need for accurate forecasts of ecological and evolutionary responses to environmental change, commonly used modelling approaches exhibit mixed performance because they omit many important aspects of how organisms respond to spatially and temporally variable environments. Integrating models based on organismal phenotypes at the physiological, performance and fitness levels can improve model performance. We summarize current limitations of environmental data and models and discuss potential remedies. The paper reviews emerging techniques for sensing environments at fine spatial and temporal scales, accounting for environmental extremes, and capturing how organisms experience the environment. Intertidal mussel data illustrate biologically important aspects of environmental variability. We then discuss key challenges in translating environmental conditions into organismal performance, including accounting for the varied timescales of physiological processes, for responses to environmental fluctuations including the onset of stress and other thresholds, and for how environmental sensitivities vary across lifecycles. We call for the creation of phenotypic databases to parameterize forecasting models and advocate for improved sharing of model code and data for model testing. We conclude with challenges in organismal biology that must be solved to improve forecasts over the next decade. Keywords: acclimation, biophysical models, ecological forecasting, extremes, microclimate, spatial and temporal variability.
Dalvand, Sahar; Koohpayehzadeh, Jalil; Karimlou, Masoud; Asgari, Fereshteh; Rafei, Ali; Seifi, Behjat; Niksima, Seyed Hassan; Bakhshi, Enayatollah
2015-01-01
Because the use of BMI (Body Mass Index) alone as a measure of adiposity has been criticized, in the present study our aim was to fit a latent variable model to simultaneously examine the factors that affect waist circumference (continuous outcome) and obesity (binary outcome) among Iranian adults. Data included 18,990 Iranian individuals aged 20-65 years, drawn from the third National Survey of Noncommunicable Diseases Risk Factors in Iran. Using a latent variable model, we estimated the relation of two correlated responses (waist circumference and obesity) with independent variables including age, gender, PR (Place of Residence), PA (physical activity), smoking status, SBP (Systolic Blood Pressure), DBP (Diastolic Blood Pressure), CHOL (cholesterol), FBG (Fasting Blood Glucose), diabetes, and FHD (family history of diabetes). All variables were related to both obesity and waist circumference (WC). Older age, female sex, being an urban resident, physical inactivity, nonsmoking, hypertension, hypercholesterolemia, hyperglycemia, diabetes, and having a family history of diabetes were significant risk factors that increased WC and obesity. Findings from this study of Iranian adult settings offer more insights into factors associated with high WC and high prevalence of obesity in this population.
Predictors of adjustment and growth in women with recurrent ovarian cancer.
Ponto, Julie Ann; Ellington, Lee; Mellon, Suzanne; Beck, Susan L
2010-05-01
To analyze predictors of adjustment and growth in women who had experienced recurrent ovarian cancer using components of the Resiliency Model of Family Stress, Adjustment, and Adaptation as a conceptual framework. Cross-sectional. Participants were recruited from national cancer advocacy groups. 60 married or partnered women with recurrent ovarian cancer. Participants completed an online or paper survey. Independent variables included demographic and illness variables and meaning of illness. Outcome variables were psychological adjustment and post-traumatic growth. A model of five predictor variables (younger age, fewer years in the relationship, poorer performance status, greater symptom distress, and more negative meaning) accounted for 64% of the variance in adjustment but did not predict post-traumatic growth. This study supports the use of a model of adjustment that includes demographic, illness, and appraisal variables for women with recurrent ovarian cancer. Symptom distress and poorer performance status were the most significant predictors of adjustment. Younger age and fewer years in the relationship also predicted poorer adjustment. Nurses have the knowledge and skills to influence the predictors of adjustment to recurrent ovarian cancer, particularly symptom distress and poor performance status. Nurses who recognize the predictors of poorer adjustment can anticipate problems and intervene to improve adjustment for women.
Miñano Pérez, Pablo; Castejón Costa, Juan-Luis; Gilar Corbí, Raquel
2012-03-01
As a result of studies examining factors involved in the learning process, various structural models have been developed to explain the direct and indirect effects that occur among the variables in these models. The objective was to evaluate a structural model of cognitive and motivational variables predicting academic achievement, including general intelligence, academic self-concept, goal orientations, effort and learning strategies. The sample comprised 341 Spanish students in the first year of compulsory secondary education. Different tests and questionnaires were used to evaluate each variable, and Structural Equation Modelling (SEM) was applied to test the relationships of the initial model. The model proposed had a satisfactory fit, and all the hypothesised relationships were significant. General intelligence was the variable most able to explain academic achievement. Also important was the direct influence of academic self-concept on achievement, goal orientations and effort, as well as the mediating ability of effort and learning strategies between academic goals and final achievement.
General phase spaces: from discrete variables to rotor and continuum limits
NASA Astrophysics Data System (ADS)
Albert, Victor V.; Pascazio, Saverio; Devoret, Michel H.
2017-12-01
We provide a basic introduction to discrete-variable, rotor, and continuous-variable quantum phase spaces, explaining how the latter two can be understood as limiting cases of the first. We extend the limit-taking procedures used to travel between phase spaces to a general class of Hamiltonians (including many local stabilizer codes) and provide six examples: the Harper equation, the Baxter parafermionic spin chain, the Rabi model, the Kitaev toric code, the Haah cubic code (which we generalize to qudits), and the Kitaev honeycomb model. We obtain continuous-variable generalizations of all models, some of which are novel. The Baxter model is mapped to a chain of coupled oscillators and the Rabi model to the optomechanical radiation pressure Hamiltonian. The procedures also yield rotor versions of all models, five of which are novel many-body extensions of the almost Mathieu equation. The toric and cubic codes are mapped to lattice models of rotors, with the toric code case related to U(1) lattice gauge theory.
Peer Educators and Close Friends as Predictors of Male College Students' Willingness to Prevent Rape
ERIC Educational Resources Information Center
Stein, Jerrold L.
2007-01-01
Astin's (1977, 1991, 1993) input-environment-outcome (I-E-O) model provided a conceptual framework for this study which measured 156 male college students' willingness to prevent rape (outcome variable). Predictor variables included personal attitudes (input variable), perceptions of close friends' attitudes toward rape and rape prevention…
Dissociation Predicts Later Attention Problems in Sexually Abused Children
ERIC Educational Resources Information Center
Kaplow, Julie B.; Hall, Erin; Koenen, Karestan C.; Dodge, Kenneth A.; Amaya-Jackson, Lisa
2008-01-01
Objective: The goals of this research are to develop and test a prospective model of attention problems in sexually abused children that includes fixed variables (e.g., gender), trauma, and disclosure-related pathways. Methods: At Time 1, fixed variables, trauma variables, and stress reactions upon disclosure were assessed in 156 children aged…
Development and Testing of a Coupled Ocean-atmosphere Mesoscale Ensemble Prediction System
2011-06-28
wind, temperature, and moisture variables, while the oceanographic ET is derived from ocean current, temperature, and salinity variables. Estimates of...uncertainty in the model. Rigorously accurate ensemble methods for describing the distribution of future states given past information include particle
NASA Technical Reports Server (NTRS)
Callis, S. L.; Sakamoto, C.
1984-01-01
A model based on multiple regression was developed to estimate soybean yields for the country of Argentina. A meteorological data set was obtained for the country by averaging data for stations within the soybean growing area. Predictor variables for the model were derived from monthly total precipitation and monthly average temperature. A trend variable was included for the years 1969 to 1978 since an increasing trend in yields due to technology was observed between these years.
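A regression with a technology trend plus a weather-derived predictor, as described above, can be sketched via the normal equations; the yields, trend, and rainfall values below are invented, not the Argentina data:

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

def ols_trend(yields, trend, rain):
    """Normal equations for yield = b0 + b1*trend + b2*rain."""
    rows = [(1.0, t, p) for t, p in zip(trend, rain)]
    A = [[sum(ri[i] * ri[j] for ri in rows) for j in range(3)] for i in range(3)]
    b = [sum(ri[i] * y for ri, y in zip(rows, yields)) for i in range(3)]
    return solve3(A, b)

# Invented data: yields rise 0.5 per year with technology plus 0.01 per mm rain
trend = [0, 1, 2, 3, 4, 5]
rain  = [400, 500, 450, 600, 380, 520]
ys    = [10 + 0.5 * t + 0.01 * p for t, p in zip(trend, rain)]
b0, b1, b2 = ols_trend(ys, trend, rain)
print(round(b0, 2), round(b1, 2), round(b2, 3))  # ≈ 10.0 0.5 0.01
```

Because the synthetic yields are noiseless, the fit recovers the generating coefficients exactly; with real data the trend term absorbs the technology signal so the weather coefficients are not biased by it.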
Effects of ice shelf basal melt variability on evolution of Thwaites Glacier
NASA Astrophysics Data System (ADS)
Hoffman, M. J.; Fyke, J. G.; Price, S. F.; Asay-Davis, X.; Perego, M.
2017-12-01
Theory, modeling, and observations indicate that marine ice sheets on a retrograde bed, including Thwaites Glacier, Antarctica, are only conditionally stable. Previous modeling studies have shown that rapid, unstable retreat can occur when steady ice-shelf basal melting causes the grounding line to retreat past restraining bedrock bumps. Here we explore the initiation and evolution of unstable retreat of Thwaites Glacier when the ice-shelf basal melt forcing includes temporal variability mimicking realistic climate variability. We use the three-dimensional, higher-order Model for Prediction Across Scales-Land Ice (MPAS-LI), forced with an ice-shelf basal melt parameterization derived from previous coupled ice sheet/ocean simulations. We add sinusoidal temporal variability to the melt parameterization that represents shoaling and deepening of Circumpolar Deep Water. We perform an ensemble of 250-year simulations with different values for the amplitude, period, and phase of the variability. Preliminary results suggest that, overall, variability leads to slower grounding line retreat and less mass loss than steady simulations. Short-period (2 yr) variability leads to results similar to steady forcing, whereas decadal variability can result in up to one-third less mass loss. Differences in phase lead to a large range in mass loss/grounding line retreat, but it is always less than the steady forcing. The timing of ungrounding from each restraining bedrock bump, which is strongly affected by the melt variability, is the rate-limiting factor, and variability-driven delays in ungrounding at each bump accumulate. Grounding line retreat in the regions between bedrock bumps is relatively unaffected by ice shelf melt variability. While the results are sensitive to the form of the melt parameterization and its variability, we conclude that decadal-period ice shelf melt variability could potentially delay marine ice sheet instability by up to many decades.
However, it does not alter the eventual mass loss and sea level rise at centennial scales. The potential differences are significant enough to highlight the need for further observations to constrain the amplitude and period of the modes of climate and ocean variability relevant to Antarctic ice shelf melting.
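The sinusoidal variability added to the melt parameterization can be written down directly; the mean melt, amplitude, and period below are placeholders, not values from the simulations:

```python
import math

def melt_forcing(t_years, mean_melt, amplitude, period, phase):
    """Basal melt rate (m/yr) with sinusoidal variability standing in for
    shoaling and deepening of Circumpolar Deep Water."""
    return mean_melt + amplitude * math.sin(2 * math.pi * t_years / period + phase)

# One hypothetical ensemble member: decadal-period variability around 30 m/yr
series = [melt_forcing(t, 30.0, 10.0, 10.0, 0.0) for t in range(250)]
print(min(series) >= 20.0 and max(series) <= 40.0)  # True: bounded by mean ± amplitude
print(round(series[0], 1), round(max(series), 1))
```

An ensemble like the one described amounts to sweeping the amplitude, period, and phase arguments and running the ice-sheet model under each resulting forcing series.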
Linville, John W; Schumann, Douglas; Aston, Christopher; Defibaugh-Chavez, Stephanie; Seebohm, Scott; Touhey, Lucy
2016-12-01
A Six Sigma fishbone-analysis approach was used to develop a machine learning model in SAS, Version 9.4, by using stepwise linear regression. The model evaluated the effect of a wide variety of variables, including slaughter establishment operational measures, normal (30-year average) weather, and extreme weather events on the rate of Salmonella-positive carcasses in young chicken slaughter establishments. Food Safety and Inspection Service (FSIS) verification carcass sampling data, as well as corresponding data from the National Oceanic and Atmospheric Administration and the Federal Emergency Management Agency, from September 2011 through April 2015, were included in the model. The results of the modeling show that in addition to basic establishment operations, normal weather patterns, differences from normal, and disaster events, including time-lag weather and disaster variables, played a role in explaining the Salmonella percent positive, which varied by slaughter volume quartile. Findings show that weather and disaster events should be considered as explanatory variables when assessing pathogen-related prevalence analysis or research and slaughter operational controls. The apparent significance of time-lag weather variables suggested that at least some of the impact on Salmonella rates occurred after the weather events, which may offer opportunities for FSIS or the poultry industry to implement interventions to mitigate those effects.
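The stepwise selection idea can be sketched as greedy forward selection on residuals, a simplification of the stepwise regression run in SAS; the predictors and positive-rate values below are fabricated:

```python
def simple_fit(x, y):
    """Intercept and slope of the least-squares line y ≈ a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

def forward_select(y, candidates, n_steps=2):
    """Greedy forward selection: repeatedly add the variable whose simple
    fit to the current residuals most reduces the residual sum of squares
    (a simplification of full stepwise regression)."""
    resid = y[:]
    chosen = []
    for _ in range(n_steps):
        best = None
        for name, x in candidates.items():
            if name in chosen:
                continue
            a, b = simple_fit(x, resid)
            rss = sum((r - (a + b * xi)) ** 2 for xi, r in zip(x, resid))
            if best is None or rss < best[0]:
                best = (rss, name, a, b, x)
        rss, name, a, b, x = best
        chosen.append(name)
        resid = [r - (a + b * xi) for xi, r in zip(x, resid)]
    return chosen

# Fabricated weekly Salmonella-positive rates driven mainly by temperature
temp   = [10.0, 15.0, 20.0, 25.0, 30.0, 35.0]
volume = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0]
rate   = [0.02 + 0.004 * t for t in temp]
print(forward_select(rate, {"temperature": temp, "volume": volume}, n_steps=1))
```

Here the fabricated rates are a pure function of temperature, so the first selection step picks "temperature"; the study's model similarly ranked weather, time-lag, and operational variables by their incremental explanatory power.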
Uncertainty and variability in computational and mathematical models of cardiac physiology.
Mirams, Gary R; Pathmanathan, Pras; Gray, Richard A; Challenor, Peter; Clayton, Richard H
2016-12-01
Mathematical and computational models of cardiac physiology have been an integral component of cardiac electrophysiology since its inception, and are collectively known as the Cardiac Physiome. We identify and classify the numerous sources of variability and uncertainty in model formulation, parameters and other inputs that arise from both natural variation in experimental data and lack of knowledge. The impact of uncertainty on the outputs of Cardiac Physiome models is not well understood, and this limits their utility as clinical tools. We argue that incorporating variability and uncertainty should be a high priority for the future of the Cardiac Physiome. We suggest investigating the adoption of approaches developed in other areas of science and engineering while recognising unique challenges for the Cardiac Physiome; it is likely that novel methods will be necessary that require engagement with the mathematics and statistics community. The Cardiac Physiome effort is one of the most mature and successful applications of mathematical and computational modelling for describing and advancing the understanding of physiology. After five decades of development, physiological cardiac models are poised to realise the promise of translational research via clinical applications such as drug development and patient-specific approaches as well as ablation, cardiac resynchronisation and contractility modulation therapies. For models to be included as a vital component of the decision process in safety-critical applications, rigorous assessment of model credibility will be required. This White Paper describes one aspect of this process by identifying and classifying sources of variability and uncertainty in models as well as their implications for the application and development of cardiac models. 
We stress the need to understand and quantify the sources of variability and uncertainty in model inputs, and the impact of model structure and complexity and their consequences for predictive model outputs. We propose that the future of the Cardiac Physiome should include a probabilistic approach to quantify the relationship of variability and uncertainty of model inputs and outputs. © 2016 The Authors. The Journal of Physiology published by John Wiley & Sons Ltd on behalf of The Physiological Society.
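One route to the quantification this White Paper calls for is forward Monte Carlo propagation of input variability to model outputs; a toy sketch in which the "model" is an invented stand-in, not a Cardiac Physiome model:

```python
import random
import statistics

def toy_model(conductance, gain):
    """Invented stand-in for a cardiac model output (e.g. an
    action-potential duration in ms) as a function of two parameters."""
    return 300.0 / conductance + 10.0 * gain

random.seed(42)  # reproducible sampling
# Draw uncertain inputs: conductance ~ N(1.0, 0.1), gain ~ N(2.0, 0.1)
outputs = [toy_model(random.gauss(1.0, 0.1), random.gauss(2.0, 0.1))
           for _ in range(10_000)]
mean = statistics.fmean(outputs)
spread = statistics.stdev(outputs)
print(round(mean, 1), round(spread, 1))
```

Even this toy shows the two points the paper stresses: input variability induces an output distribution rather than a single value, and nonlinearity shifts the output mean away from the value at the mean inputs.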
Zimmerman, Tammy M.
2006-01-01
The Lake Erie shoreline in Pennsylvania spans nearly 40 miles and is a valuable recreational resource for Erie County. Nearly 7 miles of the Lake Erie shoreline lies within Presque Isle State Park in Erie, Pa. Concentrations of Escherichia coli (E. coli) bacteria at permitted Presque Isle beaches occasionally exceed the single-sample bathing-water standard, resulting in unsafe swimming conditions and closure of the beaches. E. coli concentrations and other water-quality and environmental data collected at Presque Isle Beach 2 during the 2004 and 2005 recreational seasons were used to develop models using tobit regression analyses to predict E. coli concentrations. All variables statistically related to E. coli concentrations were included in the initial regression analyses, and after several iterations, only those explanatory variables that made the models significantly better at predicting E. coli concentrations were included in the final models. Regression models were developed using data from 2004, 2005, and the combined 2-year dataset. Variables in the 2004 model and the combined 2004-2005 model were log10 turbidity, rain weight, wave height (calculated), and wind direction. Variables in the 2005 model were log10 turbidity and wind direction. Explanatory variables not included in the final models were water temperature, streamflow, wind speed, and current speed; model results indicated these variables did not meet significance criteria at the 95-percent confidence level (probabilities were greater than 0.05). The predicted E. coli concentrations produced by the models were used to develop probabilities that concentrations would exceed the single-sample bathing-water standard for E. coli of 235 colonies per 100 milliliters. 
Analysis of the exceedence probabilities helped determine a threshold probability for each model, chosen such that the correct number of exceedences and nonexceedences was maximized and the number of false positives and false negatives was minimized. Future samples with computed exceedence probabilities higher than the selected threshold probability, as determined by the model, will likely exceed the E. coli standard and a beach advisory or closing may need to be issued; computed exceedence probabilities lower than the threshold probability will likely indicate the standard will not be exceeded. Additional data collected each year can be used to test and possibly improve the model. This study will aid beach managers in more rapidly determining when waters are not safe for recreational use and, subsequently, when to issue beach advisories or closings.
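Turning a model's predicted log10 concentration into an exceedence probability against the 235 colonies per 100 milliliters standard uses the normal CDF; the predicted mean and residual standard deviation below are invented:

```python
import math

def normal_cdf(z):
    """Standard normal cumulative distribution function via math.erf."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def exceedance_probability(pred_log10, sigma, standard=235.0):
    """P(E. coli concentration > standard) when the model predicts
    log10 concentration with residual standard deviation sigma."""
    z = (math.log10(standard) - pred_log10) / sigma
    return 1.0 - normal_cdf(z)

# Invented model output: predicted log10(E. coli) = 2.1, residual sd = 0.4
p = exceedance_probability(2.1, 0.4)
print(round(p, 3))  # → 0.249
# Compare against a threshold probability chosen to balance false
# positives and false negatives, e.g. post an advisory when p exceeds it
print(p > 0.3)
```

The threshold-probability tuning described in the abstract amounts to sweeping the cutoff applied to p and counting correct exceedences, correct nonexceedences, false positives, and false negatives at each candidate value.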
Suzuki, Etsuji; Yamamoto, Eiji; Takao, Soshi; Kawachi, Ichiro; Subramanian, S. V.
2012-01-01
Background: Multilevel analyses are ideally suited to assess the effects of ecological (higher level) and individual (lower level) exposure variables simultaneously. In applying such analyses to measures of ecologies in epidemiological studies, individual variables are usually aggregated into the higher level unit. Typically, the aggregated measure includes responses of every individual belonging to that group (i.e. it constitutes a self-included measure). More recently, researchers have developed an aggregate measure which excludes the response of the individual to whom the aggregate measure is linked (i.e. a self-excluded measure). In this study, we clarify the substantive and technical properties of these two measures when they are used as exposures in multilevel models. Methods: Although the differences between the two aggregated measures are mathematically subtle, distinguishing between them is important in terms of the specific scientific questions to be addressed. We then show how these measures can be used in two distinct types of multilevel models—self-included model and self-excluded model—and interpret the parameters in each model by imposing hypothetical interventions. The concept is tested on empirical data of workplace social capital and employees' systolic blood pressure. Results: Researchers assume group-level interventions when using a self-included model, and individual-level interventions when using a self-excluded model. Analytical re-parameterizations of these two models highlight their differences in parameter interpretation. Cluster-mean centered self-included models enable researchers to decompose the collective effect into its within- and between-group components. The benefit of the cluster-mean centering procedure is further discussed in terms of hypothetical interventions.
Conclusions: When investigating the potential roles of aggregated variables, researchers should carefully explore which type of model—self-included or self-excluded—is suitable for a given situation, particularly when group sizes are relatively small. PMID:23251609
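The cluster-mean centering described in the Results splits each individual's exposure into a between-group mean and a within-group deviation, and a self-excluded aggregate simply drops the indexed person's own response; a minimal sketch with invented workplace scores:

```python
def cluster_mean_center(groups):
    """Split each value into its group mean and within-group deviation."""
    out = {}
    for gid, values in groups.items():
        mean = sum(values) / len(values)
        out[gid] = [(mean, v - mean) for v in values]
    return out

# Invented social-capital scores in two small workplaces
workplaces = {"A": [3.0, 4.0, 5.0], "B": [1.0, 2.0]}
decomposed = cluster_mean_center(workplaces)
print(decomposed["A"])  # → [(4.0, -1.0), (4.0, 0.0), (4.0, 1.0)]

def self_excluded_mean(values, i):
    """Group aggregate that leaves out individual i's own response."""
    others = values[:i] + values[i + 1:]
    return sum(others) / len(others)

print(self_excluded_mean([3.0, 4.0, 5.0], 0))  # → 4.5, mean of the other two
```

In a multilevel model, the first tuple element (the group mean) carries the between-group effect and the second (the deviation) the within-group effect; the self-excluded mean differs from the self-included one most when groups are small, which is why the paper flags small group sizes.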
Seasonal precipitation forecasting for the Melbourne region using a Self-Organizing Maps approach
NASA Astrophysics Data System (ADS)
Pidoto, Ross; Wallner, Markus; Haberlandt, Uwe
2017-04-01
The Melbourne region experiences highly variable inter-annual rainfall. For close to a decade during the 2000s, below-average rainfall seriously affected the environment, water supplies and agriculture. A seasonal rainfall forecasting model for the Melbourne region based on the novel approach of a Self-Organizing Map has been developed and tested for its prediction performance. Predictor variables at varying lead times were first assessed for inclusion within the model by calculating their importance via Random Forests. Predictor variables tested included the climate indices SOI, DMI and N3.4, in addition to gridded global sea surface temperature data. Five forecasting models were developed: an annual model and four seasonal models, each individually optimized for performance through Pearson's correlation r and the Nash-Sutcliffe Efficiency. The annual model showed a prediction performance of r = 0.54 and NSE = 0.14. The best seasonal model was for spring, with r = 0.61 and NSE = 0.31. Autumn was the worst-performing seasonal model. The sea surface temperature data contributed fewer predictor variables than the climate indices. Most predictor variables were selected at the minimum lead time; however, some predictors were found at lead times of up to a year.
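The two skill scores used to optimize these models, Pearson's r and the Nash-Sutcliffe efficiency, can be sketched as follows (NSE shown; the rainfall values are invented):

```python
def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 minus the ratio of forecast error
    variance to the variance of observations around their mean.
    1 is a perfect forecast; 0 is no better than the observed mean."""
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - num / den

# Invented observed vs forecast spring rainfall totals (mm)
obs = [120.0, 80.0, 150.0, 60.0, 100.0]
fc  = [110.0, 90.0, 140.0, 75.0, 95.0]
print(round(nse(obs, fc), 2))  # → 0.89
```

An NSE of 0.31 for the spring model therefore means modest but real skill over simply forecasting the climatological mean, while the near-zero annual NSE (0.14) indicates little gain over climatology.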
Regression Analysis of Stage Variability for West-Central Florida Lakes
Sacks, Laura A.; Ellison, Donald L.; Swancar, Amy
2008-01-01
The variability in a lake's stage depends upon many factors, including surface-water flows, meteorological conditions, and hydrogeologic characteristics near the lake. An understanding of the factors controlling lake-stage variability for a population of lakes may be helpful to water managers who set regulatory levels for lakes. The goal of this study is to determine whether lake-stage variability can be predicted using multiple linear regression and readily available lake and basin characteristics defined for each lake. Regressions were evaluated for a recent 10-year period (1996-2005) and for a historical 10-year period (1954-63). Ground-water pumping is considered to have affected stage at many of the 98 lakes included in the recent period analysis, and not to have affected stage at the 20 lakes included in the historical period analysis. For the recent period, regression models had coefficients of determination (R2) values ranging from 0.60 to 0.74, and up to five explanatory variables. Standard errors ranged from 21 to 37 percent of the average stage variability. Net leakage was the most important explanatory variable in regressions describing the full range and low range in stage variability for the recent period. The most important explanatory variable in the model predicting the high range in stage variability was the height over median lake stage at which surface-water outflow would occur. Other explanatory variables in final regression models for the recent period included the range in annual rainfall for the period and several variables related to local and regional hydrogeology: (1) ground-water pumping within 1 mile of each lake, (2) the amount of ground-water inflow (by category), (3) the head gradient between the lake and the Upper Floridan aquifer, and (4) the thickness of the intermediate confining unit. 
Many of the variables in final regression models are related to hydrogeologic characteristics, underscoring the importance of ground-water exchange in controlling the stage of karst lakes in Florida. Regression equations were used to predict lake-stage variability for the recent period for 12 additional lakes, and the median difference between predicted and observed values ranged from 11 to 23 percent. Coefficients of determination for the historical period were considerably lower (maximum R2 of 0.28) than for the recent period. Reasons for these low R2 values are probably related to the small number of lakes (20) with stage data for an equivalent time period that were unaffected by ground-water pumping, the similarity of many of the lake types (large surface-water drainage lakes), and the greater uncertainty in defining historical basin characteristics. The lack of lake-stage data unaffected by ground-water pumping and the poor regression results obtained for that group of lakes limit the ability to predict natural lake-stage variability using this method in west-central Florida.
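The kind of multiple linear regression described above is easy to sketch. The snippet below fits stage variability against three hypothetical basin characteristics (synthetic data standing in for the report's 98-lake data set) and reports the coefficient of determination:

```python
import numpy as np

# Synthetic stand-ins for the report's explanatory variables; the variable
# names echo the abstract but the values are invented.
rng = np.random.default_rng(0)
n = 98
net_leakage = rng.normal(0.0, 1.0, n)
annual_rain_range = rng.normal(0.0, 1.0, n)
pumping_within_1mi = rng.normal(0.0, 1.0, n)
stage_var = (2.0 + 1.5 * net_leakage - 0.8 * annual_rain_range
             + 0.5 * pumping_within_1mi + rng.normal(0.0, 0.3, n))

# Ordinary least squares fit, then R^2 from residual and total sums of squares
X = np.column_stack([np.ones(n), net_leakage, annual_rain_range, pumping_within_1mi])
beta, *_ = np.linalg.lstsq(X, stage_var, rcond=None)
fitted = X @ beta
r2 = 1.0 - np.sum((stage_var - fitted) ** 2) / np.sum((stage_var - stage_var.mean()) ** 2)
print(round(r2, 2))
```

In the report itself, of course, variable selection and diagnostics matter as much as the fit; this only illustrates the mechanics.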
Karstoft, Karen-Inge; Vedtofte, Mia S.; Nielsen, Anni B.S.; Osler, Merete; Mortensen, Erik L.; Christensen, Gunhild T.; Andersen, Søren B.
2017-01-01
Background Studies of the association between pre-deployment cognitive ability and post-deployment post-traumatic stress disorder (PTSD) have shown mixed results. Aims To study the influence of pre-deployment cognitive ability on PTSD symptoms 6–8 months post-deployment in a large population while controlling for pre-deployment education and deployment-related variables. Method Study linking prospective pre-deployment conscription board data with post-deployment self-reported data in 9695 Danish Army personnel deployed to different war zones in 1997–2013. The association between pre-deployment cognitive ability and post-deployment PTSD was investigated using repeated-measure logistic regression models. Two models with cognitive ability score as the main exposure variable were created (model 1 and model 2). Model 1 was only adjusted for pre-deployment variables, while model 2 was adjusted for both pre-deployment and deployment-related variables. Results When including only variables recorded pre-deployment (cognitive ability score and educational level) and gender (model 1), all variables predicted post-deployment PTSD. When deployment-related variables were added (model 2), this was no longer the case for cognitive ability score. However, when educational level was removed from the model adjusted for deployment-related variables, the association between cognitive ability and post-deployment PTSD became significant. Conclusions Pre-deployment lower cognitive ability did not predict post-deployment PTSD independently of educational level after adjustment for deployment-related variables. Declaration of interest None. Copyright and usage © The Royal College of Psychiatrists 2017. This is an open access article distributed under the terms of the Creative Commons Non-Commercial, No Derivatives (CC BY-NC-ND) license. PMID:29163983
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jackman, C.H.; Douglass, A.R.; Chandra, S.; Stolarski, R.S.
1991-03-20
Eight years of NMC (National Meteorological Center) temperature and SBUV (solar backscattered ultraviolet) ozone data were used to calculate the monthly mean heating rates and residual circulation for use in a two-dimensional photochemical model in order to examine the interannual variability of modeled ozone. Fairly good correlations were found in the interannual behavior of modeled and measured SBUV ozone in the upper stratosphere at middle to low latitudes, where temperature-dependent photochemistry is thought to dominate ozone behavior. The calculated total ozone is found to be more sensitive to the interannual residual circulation changes than to the interannual temperature changes. The magnitude of the modeled ozone variability is similar to the observed variability, but the observed and modeled year-to-year deviations are mostly uncorrelated. The large component of the observed total ozone variability at low latitudes due to the quasi-biennial oscillation (QBO) is not seen in the modeled total ozone, as only a small QBO signal is present in the heating rates, temperatures, and monthly mean residual circulation. Large interannual changes in tropospheric dynamics are believed to influence the interannual variability in the total ozone, especially at middle and high latitudes. Since these tropospheric changes and most of the QBO forcing are not included in the model formulation, it is not surprising that the interannual variability in total ozone is not well represented in the model computations.
Project EASE: a study to test a psychosocial model of epilepsy medication management.
DiIorio, Colleen; Shafer, Patricia Osborne; Letz, Richard; Henry, Thomas R; Schomer, Donald L; Yeager, Kate
2004-12-01
The purpose of this study was to test a psychosocial model of medication self-management among people with epilepsy. This model was based primarily on social cognitive theory and included personal (self-efficacy, outcome expectations, goals, stigma, and depressive symptoms), social (social support), and provider (patient satisfaction and desire for control) variables. Participants for the study were enrolled at research sites in Atlanta, Georgia, and Boston, Massachusetts, and completed computer-based assessments that included measures of the study variables listed above. The mean age of the 317 participants was 43.3 years; about 50% were female, and 81% were white. Self-efficacy and patient satisfaction explained the most variance in medication management. Social support was related to self-efficacy; stigma to self-efficacy and depressive symptoms; and self-efficacy to outcome expectations and depressive symptoms. Findings reinforce that medication-taking behavior is affected by a complex set of interactions among psychosocial variables.
A multivariate model of parent-adolescent relationship variables in early adolescence.
McKinney, Cliff; Renk, Kimberly
2011-08-01
Given the importance of predicting outcomes for early adolescents, this study examines a multivariate model of parent-adolescent relationship variables, including parenting, family environment, and conflict. Participants, who completed measures assessing these variables, included 710 culturally diverse 11-14-year-olds who were attending a middle school in a Southeastern state. The parents of a subset of these adolescents (i.e., 487 mother-father pairs) participated in this study as well. Correlational analyses indicate that authoritative and authoritarian parenting, family cohesion and adaptability, and conflict are significant predictors of early adolescents' internalizing and externalizing problems. Structural equation modeling analyses indicate that fathers' parenting may not predict directly externalizing problems in male and female adolescents but instead may act through conflict. More direct relationships exist when examining mothers' parenting. The impact of parenting, family environment, and conflict on early adolescents' internalizing and externalizing problems and the importance of both gender and cross-informant ratings are emphasized.
Identification of Dynamic Simulation Models for Variable Speed Pumped Storage Power Plants
NASA Astrophysics Data System (ADS)
Moreira, C.; Fulgêncio, N.; Silva, B.; Nicolet, C.; Béguin, A.
2017-04-01
This paper addresses the identification of reduced-order models for variable speed pump-turbine plants, including the representation of the dynamic behaviour of the main components: hydraulic system, turbine governors, electromechanical equipment and power converters. A methodology for the identification of appropriate reduced-order models for both turbine and pump operating modes is presented and discussed. The methodological approach consists of three main steps: 1) detailed pumped-storage power plant modelling in SIMSEN; 2) identification of reduced-order models; and 3) specification of test conditions for performance evaluation.
Andrew J. Hansen; Linda Bowers Phillips; Curtis H. Flather; Jim Robinson-Cox
2011-01-01
We evaluated the leading hypotheses on biophysical factors affecting species richness for Breeding Bird Survey routes from areas with little influence of human activities. We then derived a best model based on information theory, and used this model to extrapolate SK across North America based on the biophysical predictor variables. The predictor variables included the...
Inventory implications of using sampling variances in estimation of growth model coefficients
Albert R. Stage; William R. Wykoff
2000-01-01
Variables based on stand densities or stocking have sampling errors that depend on the relation of tree size to plot size and on the spatial structure of the population. Ignoring the sampling errors of such variables, which include most measures of competition used in both distance-dependent and distance-independent growth models, can bias the predictions obtained from...
Mathematical Model Of Variable-Polarity Plasma Arc Welding
NASA Technical Reports Server (NTRS)
Hung, R. J.
1996-01-01
Mathematical model of variable-polarity plasma arc (VPPA) welding process developed for use in predicting characteristics of welds and thus serving as a guide for selection of process parameters. Parameters include welding electric currents in, and durations of, straight and reverse polarities; rates of flow of plasma and shielding gases; and sizes and relative positions of welding electrode, welding orifice, and workpiece.
ERIC Educational Resources Information Center
Spada, Marcantonio M.; Moneta, Giovanni B.
2014-01-01
The objective of this study was to verify the structure of a model of how surface approach to studying is influenced by the trait variables of motivation and metacognition and the state variables of avoidance coping and evaluation anxiety. We extended the model to include: (1) the investigation of the relative contribution of the five…
ERIC Educational Resources Information Center
Kirikkanat, Berke; Soyer, Makbule Kali
2018-01-01
The major purpose of this study was to create a path analysis model of academic success in a group of university students, which included the variables of academic confidence and psychological capital with a mediator variable--academic coping. 400 undergraduates from Marmara University and Istanbul Commerce University who were in sophomore, junior…
ERIC Educational Resources Information Center
Crow, Wendell C.
This paper suggests ways in which manifest, physical attributes of graphic elements can be described and measured. It also proposes a preliminary conceptual model that accounts for the readily apparent, measurable variables in a visual message. The graphic elements that are described include format, typeface, and photographs/artwork. The…
ERIC Educational Resources Information Center
Simon, Katherine; Barakat, Lamia P.; Patterson, Chavis A.; Dampier, Carlton
2009-01-01
Sickle cell disease (SCD) complications place patients at risk for poor psychosocial adaptation, including depression and anxiety symptoms. This study aimed to test a mediator model based on the Risk and Resistance model to explore the role of intrapersonal characteristics and stress processing variables in psychosocial functioning. Participants…
Triviño, Maria; Thuiller, Wilfried; Cabeza, Mar; Hickler, Thomas; Araújo, Miguel B.
2011-01-01
Although climate is known to be one of the key factors determining animal species distributions, projections of global change impacts on those distributions often rely on bioclimatic envelope models alone. Vegetation structure and landscape configuration are also key determinants of distributions, but they are rarely considered in such assessments. We explore the consequences of using simulated vegetation structure and composition, as well as its associated landscape configuration, in models projecting global change effects on Iberian bird species distributions. Both present-day and future distributions were modelled for 168 bird species using two ensemble forecasting methods: Random Forests (RF) and Boosted Regression Trees (BRT). For each species, several models were created, differing in the predictor variables used (climate, vegetation, and landscape configuration). Discrimination ability of each model in the present day was then tested with four commonly used evaluation methods (AUC, TSS, specificity and sensitivity). The different sets of predictor variables yielded similar spatial patterns for well-modelled species, but the future projections diverged for poorly-modelled species. Models using all predictor variables were not significantly better than models fitted with climate variables alone for ca. 50% of the cases. Moreover, models fitted with climate data were always better than models fitted with landscape configuration variables, and vegetation variables were found to correlate with bird species distributions in 26–40% of the cases with BRT, and in 1–18% of the cases with RF. We conclude that improvements from including vegetation and landscape configuration variables, in comparison with climate-only variables, might not always be as great as expected for future projections of Iberian bird species. PMID:22216263
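Of the four evaluation methods named above, the true skill statistic (TSS) is the least standard; it is simply sensitivity plus specificity minus one. A minimal sketch with a hypothetical confusion matrix for one species' present-day model:

```python
def true_skill_statistic(tp, fp, fn, tn):
    """TSS = sensitivity + specificity - 1; ranges from -1 to +1,
    with 0 meaning no better than random."""
    sensitivity = tp / (tp + fn)   # proportion of presences correctly predicted
    specificity = tn / (tn + fp)   # proportion of absences correctly predicted
    return sensitivity + specificity - 1.0

# Invented counts: 80 true presences, 10 false presences,
# 20 missed presences, 90 true absences
print(round(true_skill_statistic(tp=80, fp=10, fn=20, tn=90), 2))
```

Unlike overall accuracy, TSS is insensitive to species prevalence, which is one reason it is popular for evaluating species distribution models.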
The Impact of ARM on Climate Modeling. Chapter 26
NASA Technical Reports Server (NTRS)
Randall, David A.; Del Genio, Anthony D.; Donner, Leo J.; Collins, William D.; Klein, Stephen A.
2016-01-01
Climate models are among humanity's most ambitious and elaborate creations. They are designed to simulate the interactions of the atmosphere, ocean, land surface, and cryosphere on time scales far beyond the limits of deterministic predictability, and including the effects of time-dependent external forcings. The processes involved include radiative transfer, fluid dynamics, microphysics, and some aspects of geochemistry, biology, and ecology. The models explicitly simulate processes on spatial scales ranging from the circumference of the Earth down to one hundred kilometers or smaller, and implicitly include the effects of processes on even smaller scales down to a micron or so. The atmospheric component of a climate model can be called an atmospheric general circulation model (AGCM). In an AGCM, calculations are done on a three-dimensional grid, which in some of today's climate models consists of several million grid cells. For each grid cell, about a dozen variables are time-stepped as the model integrates forward from its initial conditions. These so-called prognostic variables have special importance because they are the only things that a model remembers from one time step to the next; everything else is recreated on each time step by starting from the prognostic variables and the boundary conditions. The prognostic variables typically include information about the mass of dry air, the temperature, the wind components, water vapor, various condensed-water species, and at least a few chemical species such as ozone. A good way to understand how climate models work is to consider the lengthy and complex process used to develop one. Let's imagine that a new AGCM is to be created, starting from a blank piece of paper. The model may be intended for a particular class of applications, e.g., high-resolution simulations on time scales of a few decades.
Before a single line of code is written, the conceptual foundation of the model must be designed through a creative envisioning that starts from the intended application and is based on current understanding of how the atmosphere works and the inventory of mathematical methods available.
Bastistella, Luciane; Rousset, Patrick; Aviz, Antonio; Caldeira-Pires, Armando; Humbert, Gilles; Nogueira, Manoel
2018-02-09
New experimental techniques, as well as modern variants on known methods, have recently been employed to investigate the fundamental reactions underlying the oxidation of biochar. The purpose of this paper was to experimentally and statistically study how the relative humidity of air, mass, and particle size of four biochars influenced the adsorption of water and the increase in temperature. A random factorial design was employed using the intuitive statistical software Xlstat. A simple linear regression model and an analysis of variance with a pairwise comparison were performed. The experimental study was carried out on the wood of Quercus pubescens, Cyclobalanopsis glauca, Trigonostemon huangmosun, and Bambusa vulgaris, and involved five relative humidity conditions (22, 43, 75, 84, and 90%), two sample masses (0.1 and 1 g), and two particle sizes (powder and piece). Two response variables, water adsorption and temperature increase, were analyzed and discussed. The temperature did not increase linearly with the adsorption of water. Temperature was modeled by nine explanatory variables, while water adsorption was modeled by eight. Five variables, including factors and their interactions, were found to be common to the two models. Sample mass and relative humidity influenced the two qualitative variables, while particle size and biochar type only influenced the temperature.
NASA Astrophysics Data System (ADS)
Hobbs, J.; Turmon, M.; David, C. H.; Reager, J. T., II; Famiglietti, J. S.
2017-12-01
NASA's Western States Water Mission (WSWM) combines remote sensing of the terrestrial water cycle with hydrological models to provide high-resolution state estimates for multiple variables. The effort includes both land surface and river routing models that are subject to several sources of uncertainty, including errors in the model forcing and model structural uncertainty. Computational and storage constraints prohibit extensive ensemble simulations, so this work outlines efficient but flexible approaches for estimating and reporting uncertainty. Calibrated by remote sensing and in situ data where available, we illustrate the application of these techniques in producing state estimates with associated uncertainties at kilometer-scale resolution for key variables such as soil moisture, groundwater, and streamflow.
Integrated research in constitutive modelling at elevated temperatures, part 1
NASA Technical Reports Server (NTRS)
Haisler, W. E.; Allen, D. H.
1986-01-01
Topics covered include: numerical integration techniques; thermodynamics and internal state variables; experimental lab development; comparison of models at room temperature; comparison of models at elevated temperature; and integrated software development.
NASA Technical Reports Server (NTRS)
DeSmidt, Hans A.; Smith, Edward C.; Bill, Robert C.; Wang, Kon-Well
2013-01-01
This project develops comprehensive modeling and simulation tools for analysis of variable rotor speed helicopter propulsion system dynamics. The Comprehensive Variable-Speed Rotorcraft Propulsion Modeling (CVSRPM) tool developed in this research is used to investigate coupled rotor/engine/fuel control/gearbox/shaft/clutch/flight control system dynamic interactions for several variable rotor speed mission scenarios. In this investigation, a prototypical two-speed Dual-Clutch Transmission (DCT) is proposed and designed to achieve 50 percent rotor speed variation. The comprehensive modeling tool developed in this study is utilized to analyze the two-speed shift response of both a conventional single rotor helicopter and a tiltrotor drive system. In the tiltrotor system, both a Parallel Shift Control (PSC) strategy and a Sequential Shift Control (SSC) strategy for constant and variable forward speed mission profiles are analyzed. Under the PSC strategy, selecting clutch shift-rate results in a design tradeoff between transient engine surge margins and clutch frictional power dissipation. In the case of SSC, clutch power dissipation is drastically reduced in exchange for the necessity to disengage one engine at a time which requires a multi-DCT drive system topology. In addition to comprehensive simulations, several sections are dedicated to detailed analysis of driveline subsystem components under variable speed operation. In particular an aeroelastic simulation of a stiff in-plane rotor using nonlinear quasi-steady blade element theory was conducted to investigate variable speed rotor dynamics. It was found that 2/rev and 4/rev flap and lag vibrations were significant during resonance crossings with 4/rev lagwise loads being directly transferred into drive-system torque disturbances. To capture the clutch engagement dynamics, a nonlinear stick-slip clutch torque model is developed. 
Also, a transient gas-turbine engine model based on first principles mean-line compressor and turbine approximations is developed. Finally an analysis of high frequency gear dynamics including the effect of tooth mesh stiffness variation under variable speed operation is conducted including experimental validation. Through exploring the interactions between the various subsystems, this investigation provides important insights into the continuing development of variable-speed rotorcraft propulsion systems.
Kontis, A.L.
2001-01-01
The Variable-Recharge Package is a computerized method designed for use with the U.S. Geological Survey three-dimensional finite-difference ground-water flow model (MODFLOW-88) to simulate areal recharge to an aquifer. It is suitable for simulations of aquifers in which the relation between ground-water levels and land surface can affect the amount and distribution of recharge. The method is based on the premise that recharge to an aquifer cannot occur where the water level is at or above land surface. Consequently, recharge will vary spatially in simulations in which the Variable-Recharge Package is applied, if the water levels are sufficiently high. The input data required by the program for each model cell that can potentially receive recharge includes the average land-surface elevation and a quantity termed "water available for recharge," which is equal to precipitation minus evapotranspiration. The Variable-Recharge Package also can be used to simulate recharge to a valley-fill aquifer in which the valley fill and the adjoining uplands are explicitly simulated. Valley-fill aquifers, which are the most common type of aquifer in the glaciated northeastern United States, receive much of their recharge from upland sources as channeled and(or) unchanneled surface runoff and as lateral ground-water flow. Surface runoff in the uplands is generated in the model when the applied water available for recharge is rejected because simulated water levels are at or above land surface. The surface runoff can be distributed to other parts of the model by (1) applying the amount of the surface runoff that flows to upland streams (channeled runoff) to explicitly simulated streams that flow onto the valley floor, and(or) (2) applying the amount that flows downslope toward the valley-fill aquifer (unchanneled runoff) to specified model cells, typically those near the valley wall.
An example model of an idealized valley-fill aquifer is presented to demonstrate application of the method and the type of information that can be derived from its use. Documentation of the Variable-Recharge Package is provided in the appendixes and includes listings of model code and of program variables. Comment statements in the program listings provide a narrative of the code. Input-data instructions and printed model output for the package are included.
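The package's central premise (recharge cannot occur where the simulated water level is at or above land surface; the rejected water becomes surface runoff) can be illustrated in a few lines. This is a toy sketch; real MODFLOW cells carry layers, units, and far more state:

```python
def apply_variable_recharge(cells, water_available):
    """Apply the water available for recharge only to cells whose simulated
    head is below land surface; reject it as surface runoff elsewhere.
    Simplified sketch of the Variable-Recharge Package's premise."""
    recharge, runoff = {}, 0.0
    for name, cell in cells.items():
        if cell["head"] >= cell["land_surface"]:
            recharge[name] = 0.0
            runoff += water_available     # rejected recharge becomes runoff
        else:
            recharge[name] = water_available
    return recharge, runoff

# Hypothetical cells: elevations and heads in consistent length units
cells = {
    "upland":    {"land_surface": 120.0, "head": 95.0},
    "hillslope": {"land_surface": 100.0, "head": 100.0},  # head at land surface
    "valley":    {"land_surface": 80.0,  "head": 70.0},
}
recharge, runoff = apply_variable_recharge(cells, water_available=0.004)
print(recharge, runoff)
```

In the full package, the rejected runoff would then be routed to simulated streams (channeled) or to cells near the valley wall (unchanneled), as described above.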
NASA Astrophysics Data System (ADS)
Yahya, K.; Wang, K.; Campbell, P.; Glotfelty, T.; He, J.; Zhang, Y.
2015-08-01
The Weather Research and Forecasting model with Chemistry (WRF/Chem) v3.6.1 with the Carbon Bond 2005 (CB05) gas-phase mechanism is evaluated for its first decadal application during 2001-2010 using the Representative Concentration Pathway (RCP 8.5) emissions to assess its capability and appropriateness for long-term climatological simulations. The initial and boundary conditions are downscaled from the modified Community Earth System Model/Community Atmosphere Model (CESM/CAM5) v1.2.2. The meteorological initial and boundary conditions are bias-corrected using the National Centers for Environmental Prediction's Final (FNL) Operational Global Analysis data. Climatological evaluations are carried out for meteorological, chemical, and aerosol-cloud-radiation variables against data from surface networks and satellite retrievals. The model performs very well for the 2 m temperature (T2) for the 10 year period with only a small cold bias of -0.3 °C. Biases in other meteorological variables including relative humidity at 2 m, wind speed at 10 m, and precipitation tend to be site- and season-specific; however, with the exception of T2, consistent annual biases exist for most of the years from 2001 to 2010. Ozone mixing ratios are slightly overpredicted at urban locations but underpredicted at rural locations. PM2.5 concentrations are slightly overpredicted at rural sites, but slightly underpredicted at urban/suburban sites. In general, the model performs relatively well for chemical and meteorological variables, and not as well for aerosol-cloud-radiation variables. Cloud-aerosol variables including aerosol optical depth, cloud water path, cloud optical thickness, and cloud droplet number concentration are generally underpredicted on average across the continental US.
Overpredictions of several cloud variables over the eastern US result in underpredictions of radiation variables and overpredictions of shortwave and longwave cloud forcing, which are important climate variables. While the current performance is deemed acceptable, refinements to the bias-correction method for CESM downscaling and to the model parameterizations of cloud dynamics and thermodynamics, as well as of aerosol-cloud interactions, can potentially improve model performance for long-term climate simulations.
Using decision trees to understand structure in missing data
Tierney, Nicholas J; Harden, Fiona A; Harden, Maurice J; Mengersen, Kerrie L
2015-01-01
Objectives Demonstrate the application of decision trees—classification and regression trees (CARTs), and their cousins, boosted regression trees (BRTs)—to understand structure in missing data. Setting Data taken from employees at 3 different industrial sites in Australia. Participants 7915 observations were included. Materials and methods The approach was evaluated using an occupational health data set comprising results of questionnaires, medical tests and environmental monitoring. Statistical methods included standard statistical tests and the ‘rpart’ and ‘gbm’ packages for CART and BRT analyses, respectively, from the statistical software ‘R’. A simulation study was conducted to explore the capability of decision tree models in describing data with missingness artificially introduced. Results CART and BRT models were effective in highlighting a missingness structure in the data, related to the type of data (medical or environmental), the site in which it was collected, the number of visits, and the presence of extreme values. The simulation study revealed that CART models were able to identify variables and values responsible for inducing missingness. There was greater variation in variable importance for unstructured as compared to structured missingness. Discussion Both CART and BRT models were effective in describing structural missingness in data. CART models may be preferred over BRT models for exploratory analysis of missing data, and selecting variables important for predicting missingness. BRT models can show how values of other variables influence missingness, which may prove useful for researchers. Conclusions Researchers are encouraged to use CART and BRT models to explore and understand missing data. PMID:26124509
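The core idea, fitting a tree to a missingness indicator so that splits reveal which variables drive missingness, can be sketched without any tree library. The exhaustive search below mimics the root split a CART model would make; the data and column names are invented:

```python
def gini(p):
    """Gini impurity of a binary proportion."""
    return 2.0 * p * (1.0 - p)

def best_missingness_split(rows, target_col):
    """Return the (variable, threshold) whose single split best separates rows
    where `target_col` is missing (None) from rows where it is observed,
    i.e. the root split a CART model would make on the missingness indicator."""
    miss = [1 if r[target_col] is None else 0 for r in rows]
    n = len(rows)
    parent = gini(sum(miss) / n)
    best_var, best_thr, best_gain = None, None, 0.0
    for col in rows[0]:
        if col == target_col:
            continue
        for thr in sorted({r[col] for r in rows})[:-1]:
            left  = [m for r, m in zip(rows, miss) if r[col] <= thr]
            right = [m for r, m in zip(rows, miss) if r[col] >  thr]
            child = (len(left) * gini(sum(left) / len(left)) +
                     len(right) * gini(sum(right) / len(right))) / n
            if parent - child > best_gain:
                best_var, best_thr, best_gain = col, thr, parent - child
    return best_var, best_thr

# Toy data: the 'site' variable induces missingness in a medical test result
rows = [{"site": s, "visits": v, "medical": (None if s >= 3 else 1.0)}
        for s in (1, 2, 3) for v in (1, 2, 3, 4)]
print(best_missingness_split(rows, "medical"))
```

A real CART run (e.g. via R's 'rpart', as in the study) would recurse on each side of this split and prune; the principle is the same.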
Kulinkina, Alexandra V; Walz, Yvonne; Koch, Magaly; Biritwum, Nana-Kwadwo; Utzinger, Jürg; Naumova, Elena N
2018-06-04
Schistosomiasis is a water-related neglected tropical disease. In many endemic low- and middle-income countries, insufficient surveillance and reporting lead to poor characterization of the demographic and geographic distribution of schistosomiasis cases. Hence, modeling is relied upon to predict areas of high transmission and to inform control strategies. We hypothesized that utilizing remotely sensed (RS) environmental data in combination with water, sanitation, and hygiene (WASH) variables could improve on the current predictive modeling approaches. Schistosoma haematobium prevalence data, collected from 73 rural Ghanaian schools, were used in a random forest model to investigate the predictive capacity of 15 environmental variables derived from RS data (Landsat 8, Sentinel-2, and Global Digital Elevation Model) with fine spatial resolution (10-30 m). Five methods of variable extraction were tested to determine the spatial linkage between school-based prevalence and the environmental conditions of potential transmission sites, including applying the models to known human water contact locations. Lastly, measures of local water access and groundwater quality were incorporated into RS-based models to assess the relative importance of environmental and WASH variables. Predictive models based on environmental characterization of specific locations where people contact surface water bodies offered some improvement as compared to the traditional approach based on environmental characterization of locations where prevalence is measured. A water index (MNDWI) and topographic variables (elevation and slope) were important environmental risk factors, while overall, groundwater iron concentration predominated in the combined model that included WASH variables. The study helps to understand localized drivers of schistosomiasis transmission. 
Specifically, unsatisfactory water quality in boreholes perpetuates reliance on surface water bodies, indirectly increasing schistosomiasis risk and resulting in rapid reinfection (up to 40% prevalence six months following preventive chemotherapy). Considering WASH-related risk factors in schistosomiasis prediction can help shift the focus of control strategies from treating symptoms to reducing exposure.
Azmal, Mohammad; Sari, Ali Akbari; Foroushani, Abbas Rahimi; Ahmadi, Batoul
2016-06-01
Patient and public involvement is engaging patients, providers, community representatives, and the public in healthcare planning and decision-making. The purpose of this study was to develop a model for the application of patient and public involvement in decision making in the Iranian healthcare system. A mixed qualitative-quantitative approach was used to develop a conceptual model. Thirty-three key informants were purposively recruited in the qualitative stage, and 420 people (patients and their companions) were included in a protocol study that was implemented in five steps: 1) Identifying antecedents, consequences, and variables associated with the patient and the public's involvement in healthcare decision making through a comprehensive literature review; 2) Determining the main variables in the context of Iran's health system using conceptual framework analysis; 3) Prioritizing and weighting variables by Shannon entropy; 4) Designing and validating a tool for patient and public involvement in healthcare decision making; and 5) Providing a conceptual model of patient and public involvement in planning and developing healthcare using structural equation modeling. We used various software programs, including SPSS (17), Max QDA (10), EXCEL, and LISREL. Content analysis, Shannon entropy, and descriptive and analytic statistics were used to analyze the data. In this study, seven antecedent variables, five dimensions of involvement, and six consequences were identified. These variables were used to design a valid tool. A logical model was derived that explained the logical relationships between antecedent and consequent variables and the dimensions of patient and public involvement as well. Given the specific context of the political, social, and innovative environments in Iran, it was necessary to design a model that would be compatible with these features.
Such involvement can improve the quality of care, promote patient and public satisfaction with healthcare, and legitimize decisions as representative of the people served. This model can provide a practical guide for managers and policy makers to involve people in making the decisions that influence their lives.
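Step 3 of the protocol, prioritizing and weighting variables by Shannon entropy, can be sketched as follows. This is a minimal illustration with invented rating data, not the study's survey responses; the recipe shown (normalize each variable's column, compute scaled entropy, weight by the degree of diversification) is the standard entropy-weighting procedure.

```python
import numpy as np

# Hypothetical survey matrix: 4 respondents (rows) rating 3 candidate
# variables (columns) on a 1-5 scale; values are illustrative only.
ratings = np.array([
    [5, 3, 1],
    [4, 3, 2],
    [5, 2, 1],
    [4, 4, 2],
], dtype=float)

# Normalise each column so its entries sum to 1 (a probability profile).
p = ratings / ratings.sum(axis=0)

# Shannon entropy per variable, scaled by 1/ln(m) so it lies in [0, 1].
m = ratings.shape[0]
entropy = -(p * np.log(p)).sum(axis=0) / np.log(m)

# Degree of diversification: lower entropy means responses differ more,
# so the variable carries more discriminating information.
diversification = 1.0 - entropy
weights = diversification / diversification.sum()
```

Variables with near-uniform response profiles end up with weights near zero, while variables that split opinion receive most of the weight.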
Warner, Kelly L.; Arnold, Terri L.
2010-01-01
Nitrate in private wells in the glacial aquifer system is a concern for an estimated 17 million people using private wells because of the proximity of many private wells to nitrogen sources. Yet, less than 5 percent of private wells sampled in this study contained nitrate in concentrations that exceeded the U.S. Environmental Protection Agency (USEPA) Maximum Contaminant Level (MCL) of 10 mg/L (milligrams per liter) as N (nitrogen). However, this small group with nitrate concentrations above the USEPA MCL includes some of the highest nitrate concentrations detected in groundwater from private wells (77 mg/L). Median nitrate concentration measured in groundwater from private wells in the glacial aquifer system (0.11 mg/L as N) is lower than that in water from other unconsolidated aquifers and is not strongly related to surface sources of nitrate. Background concentration of nitrate is less than 1 mg/L as N. Although overall nitrate concentration in private wells was low relative to the MCL, concentrations were highly variable over short distances and at various depths below land surface. Groundwater from wells in the glacial aquifer system at all depths was a mixture of old and young water. Oxidation and reduction potential changes with depth and groundwater age were important influences on nitrate concentrations in private wells. A series of 10 logistic regression models was developed to estimate the probability of nitrate concentration above various thresholds. The threshold concentration (1 to 10 mg/L) affected the number of variables in the model. Fewer explanatory variables are needed to predict nitrate at higher threshold concentrations. The variables that were identified as significant predictors for nitrate concentration above 4 mg/L as N included well characteristics such as open-interval diameter, open-interval length, and depth to top of open interval. 
Environmental variables in the models were mean percent silt in soil, soil type, and mean depth to saturated soil. The 10-year mean (1992-2001) application rate of nitrogen fertilizer applied to farms was included as the potential source variable. A linear regression model also was developed to predict mean nitrate concentrations in well networks. The model is based on network averages because nitrate concentrations are highly variable over short distances. Using values for each of the predictor variables averaged by network (network mean value) from the logistic regression models, the linear regression model developed in this study predicted the mean nitrate concentration in well networks with a 95 percent confidence in predictions.
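The threshold-exceedance models described above are logistic regressions of the form P(nitrate > t | predictors). A minimal sketch, using invented predictor names and synthetic data rather than the study's well-construction, soil, and fertilizer variables:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data; names and coefficients below are invented.
n = 400
depth = rng.normal(size=n)   # standardized depth to top of open interval
silt = rng.normal(size=n)    # standardized mean percent silt in soil
X = np.column_stack([np.ones(n), depth, silt])

# Simulate exceedance of one threshold with known coefficients.
logit_p = -1.0 + 1.2 * depth - 0.8 * silt
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit_p))).astype(float)

def fit_logistic(X, y, iters=25):
    """Logistic regression by Newton-Raphson (iteratively reweighted LS)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        eta = np.clip(X @ beta, -30, 30)
        p = 1.0 / (1.0 + np.exp(-eta))
        W = p * (1.0 - p)
        beta = beta + np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (y - p))
    return beta

beta = fit_logistic(X, y)   # signs should recover the simulated effects
```

The study fit one such model per threshold (1 to 10 mg/L), which is how the number of significant predictors can shrink as the threshold rises.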
Woelmer, Whitney; Kao, Yu-Chun; Bunnell, David B.; Deines, Andrew M.; Bennion, David; Rogers, Mark W.; Brooks, Colin N.; Sayers, Michael J.; Banach, David M.; Grimm, Amanda G.; Shuchman, Robert A.
2016-01-01
Prediction of primary production of lentic water bodies (i.e., lakes and reservoirs) is valuable to researchers and resource managers alike, but is very rarely done at the global scale. With the development of remote sensing technologies, it is now feasible to gather large amounts of data across the world, including understudied and remote regions. To determine which factors were most important in explaining the variation of chlorophyll a (Chl-a), an indicator of primary production in water bodies, at global and regional scales, we first developed a geospatial database of 227 water bodies and watersheds with corresponding Chl-a, nutrient, hydrogeomorphic, and climate data. Then we used a generalized additive modeling approach and developed model selection criteria to select models that most parsimoniously related Chl-a to predictor variables for all 227 water bodies and for 51 lakes in the Laurentian Great Lakes region in the data set. Our best global model contained two hydrogeomorphic variables (water body surface area and the ratio of watershed to water body surface area) and a climate variable (average temperature in the warmest quarter) and explained ~30% of variation in Chl-a. Our regional model contained one hydrogeomorphic variable (flow accumulation) and the same climate variable, but explained substantially more variation (58%). Our results indicate that a regional approach to watershed modeling may be more informative for predicting Chl-a, and that nearly a third of global variability in Chl-a may be explained using hydrogeomorphic and climate variables.
NASA Astrophysics Data System (ADS)
Achutarao, K. M.; Singh, R.
2017-12-01
There are various sources of uncertainty in model projections of future climate change. These include differences in the formulation of climate models, internal variability, and differences in scenarios. Internal variability in a climate system represents the unforced change due to the chaotic nature of the climate system and is considered irreducible (Deser et al., 2012). Internal variability becomes important at regional scales, where it can dominate forced changes; it therefore needs to be carefully assessed in future projections. In this study we segregate the role of internal variability in future temperature and precipitation projections over the Indian region. We make use of the Coupled Model Intercomparison Project phase 5 (CMIP5; Taylor et al., 2012) database containing climate model simulations carried out by various modeling centers around the world. While the CMIP5 experimental protocol recommended producing numerous ensemble members, only a handful of the modeling groups provided multiple realizations. Having a small number of realizations is a limitation in quantifying internal variability. We therefore exploit the Community Earth System Model Large Ensemble (CESM-LE; Kay et al., 2014) dataset, which contains a 40-member ensemble of a single model, CESM1 (CAM5), to explore the role of internal variability in future projections. Surface air temperature and precipitation change projections over regional and sub-regional scales are analyzed under the IPCC emission scenario RCP8.5 for different seasons and homogeneous climatic zones over India. We analyze the spread in projections due to internal variability in the CESM-LE and CMIP5 datasets over these regions.
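The large-ensemble logic can be illustrated compactly: with many realizations of one model under one scenario, the ensemble mean estimates the forced response and the across-member spread estimates internal variability. A toy sketch with synthetic numbers (not CESM-LE output; the trend and noise level are invented):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-in for a 40-member single-model large ensemble: each member
# shares the same forced warming trend but has its own internal noise.
years = np.arange(2006, 2101)
forced = 0.04 * (years - years[0])               # deg C per year, illustrative
members = forced + rng.normal(0.0, 0.25, size=(40, years.size))

# Ensemble mean estimates the forced response; the across-member spread
# estimates internal variability (irreducible, per Deser et al.).
forced_estimate = members.mean(axis=0)
internal_sd = members.std(axis=0, ddof=1)
```

With only a handful of realizations, `internal_sd` would be too noisy to trust, which is the limitation of the CMIP5 archive that CESM-LE addresses.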
Properties of added variable plots in Cox's regression model.
Lindkvist, M
2000-03-01
The added variable plot is useful for examining the effect of a covariate in regression models. The plot provides information regarding the inclusion of a covariate, and is useful in identifying influential observations on the parameter estimates. Hall et al. (1996) proposed a plot for Cox's proportional hazards model derived by regarding the Cox model as a generalized linear model. This paper proves and discusses properties of this plot. These properties make the plot a valuable tool in model evaluation. Quantities considered include parameter estimates, residuals, leverage, case influence measures and correspondence to previously proposed residuals and diagnostics.
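For intuition, the classical linear-model construction of an added variable plot is easy to reproduce: regress both the response and the candidate covariate on the remaining covariates, then plot one set of residuals against the other; the slope through the origin equals the covariate's coefficient in the full model. A sketch with synthetic data (the paper's Cox-model version generalizes this construction):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: two existing covariates plus one candidate covariate
# whose full-model coefficient is 1.5.
n = 200
x_other = rng.normal(size=(n, 2))
x_new = 0.5 * x_other[:, 0] + rng.normal(size=n)
y = 1.0 + x_other @ np.array([2.0, -1.0]) + 1.5 * x_new + rng.normal(size=n)

def residuals(target, Z):
    """Residuals from regressing `target` on Z plus an intercept."""
    A = np.column_stack([np.ones(len(Z)), Z])
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    return target - A @ coef

# Added variable plot coordinates: e_y versus e_x.
e_y = residuals(y, x_other)
e_x = residuals(x_new, x_other)
slope = (e_x @ e_y) / (e_x @ e_x)   # should recover ~1.5
```

Points far from the fitted line with large `e_x` are exactly the influential observations the plot is designed to expose.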
Modeling heart rate variability by stochastic feedback
NASA Technical Reports Server (NTRS)
Amaral, L. A.; Goldberger, A. L.; Stanley, H. E.
1999-01-01
We consider the question of how the cardiac rhythm spontaneously self-regulates and propose a new mechanism as a possible answer. We model the neuroautonomic regulation of the heart rate as a stochastic feedback system and find that the model successfully accounts for key characteristics of cardiac variability, including the 1/f power spectrum, the functional form and scaling of the distribution of variations of the interbeat intervals, and the correlations in the Fourier phases which indicate nonlinear dynamics.
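A heavily simplified caricature of such a stochastic feedback system (not the authors' full model, which involves multiple interacting inputs) is an interbeat-interval random walk nudged back toward a preferred level, mimicking counter-regulation of the heart rate; all parameter values below are invented:

```python
import numpy as np

rng = np.random.default_rng(7)

n_beats = 5000
target = 0.8            # preferred interbeat interval, seconds
k = 0.05                # feedback strength per beat
sigma = 0.02            # stochastic input per beat

ibi = np.empty(n_beats)
ibi[0] = target
for i in range(1, n_beats):
    feedback = -k * (ibi[i - 1] - target)        # pull toward the target
    ibi[i] = ibi[i - 1] + feedback + sigma * rng.normal()
```

The series wanders (the stochastic part) but stays near 0.8 s (the feedback part); the full model's point is that the right balance of these ingredients reproduces the 1/f spectrum and scaling of real interbeat intervals.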
NASA Technical Reports Server (NTRS)
Callis, S. L.; Sakamoto, C.
1984-01-01
Five models based on multiple regression were developed to estimate wheat yields for the five wheat-growing provinces of Argentina. Meteorological data sets were obtained for each province by averaging data for stations within each province. Predictor variables for the models were derived from monthly total precipitation, average monthly mean temperature, and average monthly maximum temperature. Buenos Aires was the only province for which a trend variable was included, because of an increasing trend in yield due to technology from 1950 to 1963.
NASA Technical Reports Server (NTRS)
Callis, S. L.; Sakamoto, C.
1984-01-01
A model based on multiple regression was developed to estimate corn yields for the country of Argentina. A meteorological data set was obtained for the country by averaging data for stations within the corn-growing area. Predictor variables for the model were derived from monthly total precipitation, average monthly mean temperature, and average monthly maximum temperature. A trend variable was included for the years 1965 to 1980 since an increasing trend in yields due to technology was observed between these years.
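Both yield models above are ordinary multiple regressions on weather-derived predictors plus a linear technology trend. A sketch with synthetic numbers standing in for the Argentine data (predictor names and coefficients are invented):

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative analogue of the yield models: yield regressed on
# weather-derived predictors plus a linear technology trend.
years = np.arange(1965, 1981)
precip = rng.normal(100.0, 20.0, size=years.size)   # monthly total, mm
tmax = rng.normal(28.0, 2.0, size=years.size)       # avg monthly max, deg C
trend = years - years[0]                            # technology trend
yields = (2000 + 3.0 * precip - 25.0 * tmax + 40.0 * trend
          + rng.normal(0.0, 30.0, size=years.size))

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(years.size), precip, tmax, trend])
coef, *_ = np.linalg.lstsq(X, yields, rcond=None)
```

Omitting the trend column when technology is improving would fold the technology gain into the weather coefficients, which is why the trend variable is included only where such a trend was observed.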
Dengue: recent past and future threats
Rogers, David J.
2015-01-01
This article explores four key questions about statistical models developed to describe the recent past and future of vector-borne diseases, with special emphasis on dengue: (1) How many variables should be used to make predictions about the future of vector-borne diseases? (2) Is the spatial resolution of a climate dataset an important determinant of model accuracy? (3) Does inclusion of the future distributions of vectors affect predictions of the futures of the diseases they transmit? (4) Which are the key predictor variables involved in determining the distributions of vector-borne diseases in the present and future? Examples are given of dengue models using one, five or 10 meteorological variables and at spatial resolutions from one-sixth to two degrees. Model accuracy is improved with a greater number of descriptor variables, but is surprisingly unaffected by the spatial resolution of the data. Dengue models with a reduced set of climate variables derived from the HadCM3 global circulation model predictions for the 1980s are improved when risk maps for dengue's two main vectors (Aedes aegypti and Aedes albopictus) are also included as predictor variables; disease and vector models are projected into the future using the global circulation model predictions for the 2020s, 2040s and 2080s. The Garthwaite–Koch corr-max transformation is presented as a novel way of showing the relative contribution of each of the input predictor variables to the map predictions. PMID:25688021
Post-processing method for wind speed ensemble forecast using wind speed and direction
NASA Astrophysics Data System (ADS)
Sofie Eide, Siri; Bjørnar Bremnes, John; Steinsland, Ingelin
2017-04-01
Statistical methods are widely applied to enhance the quality of both deterministic and ensemble NWP forecasts. In many situations, such as wind speed forecasting, most of the predictive information is contained in one variable in the NWP models. However, in statistical calibration of deterministic forecasts it is often seen that including more variables can further improve forecast skill. For ensembles this is rarely taken advantage of, mainly because it is generally not straightforward to include multiple variables. In this study, it is demonstrated how multiple variables can be included in Bayesian model averaging (BMA) by using a flexible regression method for estimating the conditional means. The method is applied to wind speed forecasting at 204 Norwegian stations based on wind speed and direction forecasts from the ECMWF ensemble system. At about 85% of the sites the ensemble forecasts were improved in terms of CRPS by adding wind direction as a predictor compared to only using wind speed. On average the improvements were about 5%, mainly for moderate to strong wind situations. For weak wind speeds adding wind direction had a more or less neutral impact.
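The core idea of letting a second variable into the conditional means can be sketched with ordinary least squares, encoding the circular wind direction through sine and cosine terms (one simple encoding; the paper uses a more flexible regression inside BMA, so this is only the flavor of the approach, on simulated data):

```python
import numpy as np

rng = np.random.default_rng(11)

n = 1000
fc_speed = rng.gamma(2.0, 3.0, size=n)             # forecast speed, m/s
fc_dir = rng.uniform(0.0, 2 * np.pi, size=n)       # forecast direction, rad

# Pretend the observed speed depends on forecast speed plus a
# direction-dependent exposure effect (invented relationship).
obs = 0.9 * fc_speed + 1.0 * np.sin(fc_dir) + rng.normal(0.0, 1.0, n)

# Conditional mean with direction harmonics...
X = np.column_stack([np.ones(n), fc_speed, np.sin(fc_dir), np.cos(fc_dir)])
coef, *_ = np.linalg.lstsq(X, obs, rcond=None)

# ...versus the speed-only baseline.
X0 = X[:, :2]
coef0, *_ = np.linalg.lstsq(X0, obs, rcond=None)
rmse_with = np.sqrt(np.mean((obs - X @ coef) ** 2))
rmse_without = np.sqrt(np.mean((obs - X0 @ coef0) ** 2))
```

Whenever direction genuinely modulates the speed error (terrain exposure, channelling), the harmonic terms soak up systematic error the speed-only calibration cannot.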
Follow-Up Care for Older Women With Breast Cancer
1998-08-01
and node status (positive/negative); and breast cancer treatments received. For the breast cancer treatments variables, we used two different ...interview. Independent Variables. We constructed five different measures of comorbidity. The first was a self-reported measure of cardiopulmonary... Candidate variables for our multivariate models included: baseline measures of the relevant outcome, age, stage, comorbidity, primary tumor therapy
Varga, Leah M; Surratt, Hilary L
2014-01-01
Patterns of social and structural factors experienced by vulnerable populations may negatively affect willingness and ability to seek out health care services, and ultimately, their health. The outcome variable was utilization of health care services in the previous 12 months. Using Andersen's Behavioral Model for Vulnerable Populations, we examined self-reported data on utilization of health care services among a sample of 546 Black, street-based, female sex workers in Miami, Florida. To evaluate the impact of each domain of the model on predicting health care utilization, domains were included in the logistic regression analysis by blocks using the traditional variables first and then adding the vulnerable domain variables. The most consistent variables predicting health care utilization were having a regular source of care and self-rated health. The model that included only enabling variables was the most efficient model in predicting health care utilization. Any type of resource, link, or connection to or with an institution, or any consistent point of care, contributes significantly to health care utilization behaviors. A consistent and reliable source for health care may increase health care utilization and subsequently decrease health disparities among vulnerable and marginalized populations, as well as contribute to public health efforts that encourage preventive health. Copyright © 2014 Jacobs Institute of Women's Health. Published by Elsevier Inc. All rights reserved.
Ranucci, Marco; Castelvecchio, Serenella; Menicanti, Lorenzo; Frigiola, Alessandro; Pelissero, Gabriele
2010-03-01
The European system for cardiac operative risk evaluation (EuroSCORE) is currently used in many institutions and is considered a reference tool in many countries. We hypothesised that the EuroSCORE includes too many variables, derived from limited patient series, and we tested alternative models using a limited number of variables. A total of 11,150 adult patients undergoing cardiac operations at our institution (2001-2007) were retrospectively analysed. The 17 risk factors composing the EuroSCORE were separately analysed and ranked for accuracy of prediction of hospital mortality. Seventeen models were created by progressively including one factor at a time. The models were compared for accuracy with a receiver operating characteristics (ROC) analysis and area under the curve (AUC) evaluation. Calibration was tested with Hosmer-Lemeshow statistics. Clinical performance was assessed by comparing the predicted with the observed mortality rates. The best accuracy (AUC 0.76) was obtained using a model including only age, left ventricular ejection fraction, serum creatinine, emergency operation and non-isolated coronary operation. The EuroSCORE AUC (0.75) was not significantly different. Calibration and clinical performance were better in the five-factor model than in the EuroSCORE. Only in high-risk patients were 12 factors needed to achieve good performance. Including many factors in multivariable logistic models increases the risk of overfitting, multicollinearity and human error. A five-factor model offers the same level of accuracy but demonstrated better calibration and clinical performance. Models with a limited number of factors may work better than complex models when applied to a limited number of patients. Copyright (c) 2009 European Association for Cardio-Thoracic Surgery. Published by Elsevier B.V. All rights reserved.
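The model-comparison step, ranking nested models by AUC, can be reproduced in miniature. The sketch below uses synthetic mortality data in which two strong factors carry most of the signal, so adding many weak factors barely moves the AUC (the abstract's point); the AUC is computed from the Mann-Whitney statistic:

```python
import numpy as np

rng = np.random.default_rng(5)

def auc(score, label):
    """Area under the ROC curve via the Mann-Whitney U statistic."""
    order = np.argsort(score)
    rank = np.empty(len(score))
    rank[order] = np.arange(1, len(score) + 1)
    pos = label == 1
    n1, n0 = pos.sum(), (~pos).sum()
    return (rank[pos].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

# Synthetic illustration: mortality driven mainly by two strong factors,
# with twelve additional factors of trivial effect size.
n = 4000
strong = rng.normal(size=(n, 2))
weak = rng.normal(size=(n, 12))
lin = strong @ np.array([1.0, 0.8]) + weak @ np.full(12, 0.05) - 3.0
death = (rng.random(n) < 1.0 / (1.0 + np.exp(-lin))).astype(int)

auc_small = auc(strong @ np.array([1.0, 0.8]), death)   # 2-factor score
auc_full = auc(lin, death)                              # all 14 factors
```

The two AUCs come out nearly identical, mirroring the abstract's 0.76 vs 0.75 finding; the extra factors mostly add overfitting risk, not discrimination.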
Identifying Nonprovider Factors Affecting Pediatric Emergency Medicine Provider Efficiency.
Saleh, Fareed; Breslin, Kristen; Mullan, Paul C; Tillett, Zachary; Chamberlain, James M
2017-10-31
The aim of this study was to create a multivariable model of standardized relative value units per hour by adjusting for nonprovider factors that influence efficiency. We obtained productivity data based on billing records measured in emergency relative value units for (1) both evaluation and management of visits and (2) procedures for 16 pediatric emergency medicine providers with more than 750 hours worked per year. Eligible shifts were in an urban, academic pediatric emergency department (ED) with 2 sites: a tertiary care main campus and a satellite community site. We used multivariable linear regression to adjust for the impact of shift and pediatric ED characteristics on individual-provider efficiency and then removed variables from the model with minimal effect on productivity. There were 2998 eligible shifts for the 16 providers during a 3-year period. The resulting model included 4 variables when looking at both ED sites combined. These variables include the following: (1) number of procedures billed by provider, (2) season of the year, (3) shift start time, and (4) day of week. Results were improved when we separately modeled each ED location. A 3-variable model using procedures billed by provider, shift start time, and season explained 23% of the variation in provider efficiency at the academic ED site. A 3-variable model using procedures billed by provider, patient arrivals per hour, and shift start time explained 45% of the variation in provider efficiency at the satellite ED site. Several nonprovider factors affect provider efficiency. These factors should be considered when designing productivity-based incentives.
Strand, Matthew; Sillau, Stefan; Grunwald, Gary K; Rabinovitch, Nathan
2014-02-10
Regression calibration provides a way to obtain unbiased estimators of fixed effects in regression models when one or more predictors are measured with error. Recent development of measurement error methods has focused on models that include interaction terms between measured-with-error predictors, and separately, methods for estimation in models that account for correlated data. In this work, we derive explicit and novel forms of regression calibration estimators and associated asymptotic variances for longitudinal models that include interaction terms, when data from instrumental and unbiased surrogate variables are available but not the actual predictors of interest. The longitudinal data are fit using linear mixed models that contain random intercepts and account for serial correlation and unequally spaced observations. The motivating application involves a longitudinal study of exposure to two pollutants (predictors) - outdoor fine particulate matter and cigarette smoke - and their association in interactive form with levels of a biomarker of inflammation, leukotriene E4 (LTE4, outcome) in asthmatic children. Because the exposure concentrations could not be directly observed, we used measurements from a fixed outdoor monitor and urinary cotinine concentrations as instrumental variables, and we used concentrations of fine ambient particulate matter and cigarette smoke measured with error by personal monitors as unbiased surrogate variables. We applied the derived regression calibration methods to estimate coefficients of the unobserved predictors and their interaction, allowing for direct comparison of toxicity of the different pollutants. We used simulations to verify accuracy of inferential methods based on asymptotic theory. Copyright © 2013 John Wiley & Sons, Ltd.
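A stripped-down, cross-sectional version of regression calibration with an instrumental variable (omitting the paper's interactions, mixed effects, and serial correlation) looks like this; all data are simulated and the variable roles mirror the application (instrument ~ fixed-site monitor, surrogate ~ personal monitor):

```python
import numpy as np

rng = np.random.default_rng(9)

# The true exposure x is never observed; w is an unbiased surrogate
# (x plus independent noise) and t is an instrument correlated with x.
n = 2000
x = rng.normal(size=n)                  # true exposure (latent)
t = x + rng.normal(0.0, 1.0, n)         # instrumental variable
w = x + rng.normal(0.0, 1.0, n)         # unbiased surrogate
y = 2.0 + 1.0 * x + rng.normal(0.0, 1.0, n)   # outcome; true slope = 1

# Naive regression of y on the surrogate attenuates the slope toward 0.
naive = np.polyfit(w, y, 1)[0]

# Regression calibration: estimate E[x | t] by regressing the unbiased
# surrogate w on the instrument t, then regress y on that estimate.
a1, a0 = np.polyfit(t, w, 1)
x_hat = a0 + a1 * t
calibrated = np.polyfit(x_hat, y, 1)[0]
```

Here `naive` lands near 0.5 (the attenuation factor var(x)/var(w)), while `calibrated` recovers the true slope of 1, which is the whole point of the method.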
Prediction of performance on the RCMP physical ability requirement evaluation.
Stanish, H I; Wood, T M; Campagna, P
1999-08-01
The Royal Canadian Mounted Police use the Physical Ability Requirement Evaluation (PARE) for screening applicants. The purposes of this investigation were to identify those field tests of physical fitness that were associated with PARE performance and determine which most accurately classified successful and unsuccessful PARE performers. The participants were 27 female and 21 male volunteers. Testing included measures of aerobic power, anaerobic power, agility, muscular strength, muscular endurance, and body composition. Multiple regression analysis revealed a three-variable model for males (70-lb bench press, standing long jump, and agility) explaining 79% of the variability in PARE time, whereas a one-variable model (agility) explained 43% of the variability for females. Analysis of the classification accuracy of the males' data was prohibited because 91% of the males passed the PARE. Classification accuracy of the females' data, using logistic regression, produced a two-variable model (agility, 1.5-mile endurance run) with 93% overall classification accuracy.
Development of a working Hovercraft model
NASA Astrophysics Data System (ADS)
Noor, S. H. Mohamed; Syam, K.; Jaafar, A. A.; Mohamad Sharif, M. F.; Ghazali, M. R.; Ibrahim, W. I.; Atan, M. F.
2016-02-01
This paper presents the development process used to fabricate a working hovercraft model. The purpose of this study is to design and investigate a fully functional hovercraft, based on the studies that had been done. Different designs of the hovercraft model were made and tested, but only one of the models is presented in this paper. The weight, thrust, lift and drag force of the model were measured, and the electrical and mechanical parts are also presented. The processing unit of this model is an Arduino Uno, with a PSP2 (PlayStation 2) used as the controller. Since the prototype should function on all kinds of terrain, the model was also tested in different floor conditions, including water, grass, cement and tile. The speed of the model was measured in every case as the response variable, with current (I) as the manipulated variable and voltage (V) as the constant variable.
METEOROLOGICAL AND TRANSPORT MODELING
Advanced air quality simulation models, such as CMAQ, as well as other transport and dispersion models, require accurate and detailed meteorology fields. These meteorology fields include primary 3-dimensional dynamical and thermodynamical variables (e.g., winds, temperature, mo...
Current Status and Challenges of Atmospheric Data Assimilation
NASA Astrophysics Data System (ADS)
Atlas, R. M.; Gelaro, R.
2016-12-01
The issues of modern atmospheric data assimilation are fairly simple to comprehend but difficult to address, involving the combination of literally billions of model variables and tens of millions of observations daily. In addition to traditional meteorological variables such as wind, temperature, pressure, and humidity, model state vectors are being expanded to include explicit representation of precipitation, clouds, aerosols and atmospheric trace gases. At the same time, model resolutions are approaching single-kilometer scales globally and new observation types have error characteristics that are increasingly non-Gaussian. This talk describes the current status and challenges of atmospheric data assimilation, including an overview of current methodologies, the difficulty of estimating error statistics, and progress toward coupled earth system analyses.
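At its core, each assimilation cycle performs a variance-weighted blend of the model background and the observations. The scalar case makes the arithmetic transparent (real systems do this for billions of variables at once, with full covariance matrices; all numbers below are invented):

```python
# One-variable illustration of the analysis step of data assimilation.
xb = 285.0          # background (model) temperature, K
sigma_b2 = 1.0      # background error variance
yo = 287.0          # observation, K
sigma_o2 = 0.5      # observation error variance

K = sigma_b2 / (sigma_b2 + sigma_o2)        # Kalman gain (scalar case)
xa = xb + K * (yo - xb)                     # analysis: blend of xb and yo
sigma_a2 = (1.0 - K) * sigma_b2             # analysis error variance
```

The analysis sits between background and observation, closer to whichever is more trusted, and its error variance is smaller than either input's; estimating the two error variances well is exactly the "difficulty of estimating error statistics" the talk highlights.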
NASA Astrophysics Data System (ADS)
Logan, J. A.; Megretskaia, I.; Liu, J.; Rodriguez, J. M.; Strahan, S. E.; Damon, M.; Steenrod, S. D.
2012-12-01
Simulations of atmospheric composition in the recent past (hindcasts) are a valuable tool for determining the causes of interannual variability (IAV) and trends in tropospheric ozone, including factors such as anthropogenic emissions, biomass burning, stratospheric input, and variability in meteorology. We will review the ozone data sets (balloon, satellite, and surface) that are the most reliable for evaluating hindcasts, and demonstrate their application with the GMI model. The GMI model is driven by the GEOS-5/MERRA reanalysis and includes both stratospheric and tropospheric chemistry. Preliminary analysis of a simulation for 1990-2010 using constant fossil fuel emissions is promising. The model reproduces the recent IAV in ozone in the lowermost stratosphere seen in MLS and sonde data, as well as the IAV seen in sonde data in the lower stratosphere since 1995, and captures much of the IAV and short-term trends in surface ozone at remote sites, showing the influence of variability in dynamics. There was considerable IAV in ozone in the lowermost stratosphere in the Aura period, but almost none at European alpine sites in winter/spring, when ozone at 150 hPa has been shown to be correlated with that at 700 hPa in earlier years. The model matches the IAV in alpine ozone in Europe in July-September, including the high values in heat-waves, showing the role of variability in meteorology. A focus on IAV in each season is essential. The model matches IAV in MLS in the upper troposphere, TES tropical ozone, and the tropospheric ozone column (OMI/MLS) best in tropical regions controlled by ENSO-related changes in dynamics. This study, combined with sensitivity simulations with changes to emissions, and simulations with passive tracers (see Abstract by Rodriguez et al., Session A76), lays the foundations for assessment of the mechanisms that have influenced tropospheric ozone in the past two decades.
ERIC Educational Resources Information Center
von Davier, Matthias
2016-01-01
This report presents results on a parallel implementation of the expectation-maximization (EM) algorithm for multidimensional latent variable models. The developments presented here are based on code that parallelizes both the E step and the M step of the parallel-E parallel-M algorithm. Examples presented in this report include item response…
Structural Equation Modeling: A Framework for Ocular and Other Medical Sciences Research
Christ, Sharon L.; Lee, David J.; Lam, Byron L.; Diane, Zheng D.
2017-01-01
Structural equation modeling (SEM) is a modeling framework that encompasses many types of statistical models and can accommodate a variety of estimation and testing methods. SEM has been used primarily in social sciences but is increasingly used in epidemiology, public health, and the medical sciences. SEM provides many advantages for the analysis of survey and clinical data, including the ability to model latent constructs that may not be directly observable. Another major feature is simultaneous estimation of parameters in systems of equations that may include mediated relationships, correlated dependent variables, and in some instances feedback relationships. SEM allows for the specification of theoretically holistic models because multiple and varied relationships may be estimated together in the same model. SEM has recently expanded by adding generalized linear modeling capabilities that include the simultaneous estimation of parameters of different functional form for outcomes with different distributions in the same model. Therefore, mortality modeling and other relevant health outcomes may be evaluated. Random effects estimation using latent variables has been advanced in the SEM literature and software. In addition, SEM software has increased estimation options. Therefore, modern SEM is quite general and includes model types frequently used by health researchers, including generalized linear modeling, mixed effects linear modeling, and population average modeling. This article does not present any new information. It is meant as an introduction to SEM and its uses in ocular and other health research. PMID:24467557
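One ingredient SEM generalizes, estimation of a system of equations with mediated relationships, can be previewed equation-by-equation with ordinary least squares (a real SEM fit estimates the system jointly and can include latent variables, which this sketch omits; data and path coefficients below are invented):

```python
import numpy as np

rng = np.random.default_rng(21)

# Toy path model: exposure -> mediator -> outcome, plus a direct
# exposure -> outcome path.
n = 3000
exposure = rng.normal(size=n)
mediator = 0.6 * exposure + rng.normal(size=n)
outcome = 0.5 * mediator + 0.3 * exposure + rng.normal(size=n)

def ols_slopes(y, cols):
    """Slopes (excluding intercept) from an OLS fit of y on `cols`."""
    X = np.column_stack([np.ones(len(y))] + cols)
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

a = ols_slopes(mediator, [exposure])[0]           # exposure -> mediator
b, c = ols_slopes(outcome, [mediator, exposure])  # mediator ->, direct ->
indirect_effect = a * b                           # mediated path, ~0.3
```

SEM packages report `a`, `b`, `c`, and the product `a*b` (with proper standard errors) from one simultaneous fit, which is the "systems of equations" advantage the abstract describes.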
Factors associated with early cyclicity in postpartum dairy cows.
Vercouteren, M M A A; Bittar, J H J; Pinedo, P J; Risco, C A; Santos, J E P; Vieira-Neto, A; Galvão, K N
2015-01-01
The objective of this study was to evaluate factors associated with resumption of ovarian cyclicity within 21 days in milk (DIM) in dairy cows. Cows (n=768) from 2 herds in north Florida had their ovaries scanned at 17±3, 21±3, and 24±3 DIM. Cows that had a corpus luteum ≥20 mm at 17±3 or at 21±3 DIM, or that had a corpus luteum <20 mm in 2 consecutive examinations, were determined to be cyclic by 21±3 DIM. The following information was collected for up to 14 DIM: calving season, parity, calving problems, metabolic problems, metritis, mastitis, digestive problems, lameness, body weight loss, dry period length, and average daily milk yield. Body condition was scored at 17±3 DIM. Multivariable mixed logistic regression analysis was performed using the GLIMMIX procedure of SAS. Variables with P≤0.2 were considered in each model. Herd was included as a random variable. Three models were constructed: model 1 included all cows, model 2 included only cows from dairy 1 that had daily body weights available, and model 3 included only multiparous cows with a previous dry period length recorded. In model 1, variables associated with greater cyclicity by 21±3 DIM were calving in the summer and fall rather than in the winter or spring, being multiparous rather than primiparous, and not having metabolic or digestive problems. In model 2, variables associated with greater cyclicity by 21±3 DIM were calving in the summer and fall, not having metritis or digestive problems, and not losing >28 kg of body weight within 14 DIM. In model 3, variables associated with greater cyclicity by 21±3 DIM were absence of metabolic problems and a dry period ≤76 d. In summary, cyclicity by 21±3 DIM was negatively associated with calving in winter or spring, primiparity, metritis, metabolic or digestive problems, loss of >28 kg of body weight, and a dry period >76 d.
Strategies preventing an extended dry period and loss of body weight, together with reductions in the incidence of metritis and of metabolic and digestive problems, should improve early cyclicity postpartum. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Riis, R G C; Gudbergsen, H; Simonsen, O; Henriksen, M; Al-Mashkur, N; Eld, M; Petersen, K K; Kubassova, O; Bay Jensen, A C; Damm, J; Bliddal, H; Arendt-Nielsen, L; Boesen, M
2017-02-01
To investigate the association between magnetic resonance imaging (MRI), macroscopic and histological assessments of synovitis in end-stage knee osteoarthritis (KOA). Synovitis of end-stage osteoarthritic knees was assessed using non-contrast-enhanced MRI, contrast-enhanced MRI (CE-MRI) and dynamic contrast-enhanced (DCE) MRI prior to total knee replacement (TKR) and correlated with microscopic and macroscopic assessments of synovitis obtained intraoperatively. Multiple bivariate correlations were used with a pre-specified threshold of 0.70 for significance. Also, multiple regression analyses with different subsets of MRI variables as explanatory variables and the histology score as outcome variable were performed with the intention of finding the MRI variables that best explain the variance in histological synovitis (i.e., highest R²). A stepped approach was taken, starting with basic characteristics and non-CE MRI variables (model 1), after which CE-MRI variables were added (model 2), with the final model also including DCE-MRI variables (model 3). 39 patients (56.4% women, mean age 68 years, Kellgren-Lawrence (KL) grade 4) had complete MRI and histological data. Only the DCE-MRI variable MExNvoxel (a surrogate of the volume and degree of synovitis) and the macroscopic score showed correlations above the pre-specified threshold for acceptance with histological inflammation. The maximum R² obtained in model 1 was R² = 0.39. In model 2, where the CE-MRI variables were added, the highest was R² = 0.52. In model 3, a four-variable model consisting of gender, one CE-MRI and two DCE-MRI variables yielded R² = 0.71. DCE-MRI is correlated with histological synovitis in end-stage KOA, and the combination of CE- and DCE-MRI may be a useful, non-invasive tool in characterising synovitis in KOA. Copyright © 2016 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.
Toma, Luiza; Mathijs, Erik
2007-04-01
This paper aims to identify the factors underlying farmers' propensity to participate in organic farming programmes in a Romanian rural region that confronts non-point source pollution. For this, we employ structural equation modelling with latent variables using a specific data set collected through an agri-environmental farm survey in 2001. The model includes one 'behavioural intention' latent variable ('propensity to participate in organic farming programmes') and five 'attitude' and 'socio-economic' latent variables ('socio-demographic characteristics', 'economic characteristics', 'agri-environmental information access', 'environmental risk perception' and 'general environmental concern'). The results indicate that, overall, the model has an adequate fit to the data. All loadings are statistically significant, supporting the theoretical basis for assignment of indicators for each latent variable. The significance tests for the structural model parameters show 'environmental risk perception' as the strongest determinant of farmers' propensity to participate in organic farming programmes.
Using a Bayesian network to predict barrier island geomorphologic characteristics
Gutierrez, Ben; Plant, Nathaniel G.; Thieler, E. Robert; Turecek, Aaron
2015-01-01
Quantifying geomorphic variability of coastal environments is important for understanding and describing the vulnerability of coastal topography, infrastructure, and ecosystems to future storms and sea level rise. Here we use a Bayesian network (BN) to test the importance of multiple interactions between barrier island geomorphic variables. This approach models complex interactions and handles uncertainty, which is intrinsic to future sea level rise, storminess, or anthropogenic processes (e.g., beach nourishment and other forms of coastal management). The BN was developed and tested at Assateague Island, Maryland/Virginia, USA, a barrier island with sufficient geomorphic and temporal variability to evaluate our approach. We tested the ability to predict dune height, beach width, and beach height variables using inputs that included longer-term, larger-scale, or external variables (historical shoreline change rates, distances to inlets, barrier width, mean barrier elevation, and anthropogenic modification). Data sets from three different years spanning nearly a decade sampled substantial temporal variability and serve as a proxy for analysis of future conditions. We show that distinct geomorphic conditions are associated with different long-term shoreline change rates and that the most skillful predictions of dune height, beach width, and beach height depend on including multiple input variables simultaneously. The predictive relationships are robust to variations in the amount of input data and to variations in model complexity. The resulting model can be used to evaluate scenarios related to coastal management plans and/or future scenarios where shoreline change rates may differ from those observed historically.
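A Bayesian network of the kind described can be illustrated with a deliberately tiny discrete example. The variables and probability tables below are invented stand-ins, not values estimated at Assateague Island; inference is done by direct enumeration:

```python
# Minimal two-parent discrete Bayesian network with inference by enumeration.
# All probabilities are illustrative placeholders.
P_S = {"erosional": 0.6, "accretional": 0.4}  # shoreline-change regime
P_W = {"narrow": 0.5, "wide": 0.5}            # barrier width class
# P(dune height is high | S, W)
P_D = {
    ("erosional", "narrow"): 0.1,
    ("erosional", "wide"): 0.3,
    ("accretional", "narrow"): 0.4,
    ("accretional", "wide"): 0.8,
}

def p_dune_high(evidence=None):
    """P(D = high | evidence), summing over any unobserved parents."""
    evidence = evidence or {}
    num = den = 0.0
    for s, ps in P_S.items():
        if "S" in evidence and evidence["S"] != s:
            continue
        for w, pw in P_W.items():
            if "W" in evidence and evidence["W"] != w:
                continue
            joint = ps * pw
            num += joint * P_D[(s, w)]
            den += joint
    return num / den

print(round(p_dune_high(), 3))                    # prior probability
print(round(p_dune_high({"S": "erosional"}), 3))  # conditioned on erosion
```

Conditioning on an erosional shoreline lowers the probability of a high dune, showing how the network propagates evidence between geomorphic variables.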
An Investigation of the Connection between Outdoor Orientation and Thriving
ERIC Educational Resources Information Center
Rude, Wally James; Bobilya, Andrew J.; Bell, Brent J.
2017-01-01
This study explored the contribution of outdoor orientation experiences to student thriving. Participants included 295 first-year college students from three institutions across North America. A thriving model was tested using structural equation modeling and included the following variables: outdoor orientation, thriving, involvement,…
Fine-scale habitat modeling of a top marine predator: do prey data improve predictive capacity?
Torres, Leigh G; Read, Andrew J; Halpin, Patrick
2008-10-01
Predators and prey assort themselves relative to each other, the availability of resources and refuges, and the temporal and spatial scale of their interaction. Predictive models of predator distributions often rely on these relationships by incorporating data on environmental variability and prey availability to determine predator habitat selection patterns. This approach to predictive modeling holds true in marine systems where observations of predators are logistically difficult, emphasizing the need for accurate models. In this paper, we ask whether including prey distribution data in fine-scale predictive models of bottlenose dolphin (Tursiops truncatus) habitat selection in Florida Bay, Florida, U.S.A., improves predictive capacity. Environmental characteristics are often used as predictor variables in habitat models of top marine predators with the assumption that they act as proxies of prey distribution. We examine the validity of this assumption by comparing the response of dolphin distribution and fish catch rates to the same environmental variables. Next, the predictive capacities of four models, with and without prey distribution data, are tested to determine whether dolphin habitat selection can be predicted without recourse to describing the distribution of their prey. The final analysis determines the accuracy of predictive maps of dolphin distribution produced by modeling areas of high fish catch based on significant environmental characteristics. We use spatial analysis and independent data sets to train and test the models. Our results indicate that, due to high habitat heterogeneity and the spatial variability of prey patches, fine-scale models of dolphin habitat selection in coastal habitats will be more successful if environmental variables are used as predictor variables of predator distributions rather than relying on prey data as explanatory variables. 
However, predictive modeling of prey distribution as the response variable based on environmental variability did produce high predictive performance of dolphin habitat selection, particularly foraging habitat.
On the Asymptotic Relative Efficiency of Planned Missingness Designs.
Rhemtulla, Mijke; Savalei, Victoria; Little, Todd D
2016-03-01
In planned missingness (PM) designs, certain data are set a priori to be missing. PM designs can increase validity and reduce cost; however, little is known about the loss of efficiency that accompanies these designs. The present paper compares PM designs to reduced-N (RN) designs that have the same total number of data points concentrated in fewer participants. In 4 studies, we consider models for both observed and latent variables, designs that do or do not include an "X set" of variables with complete data, and a full range of between- and within-set correlation values. All results are obtained using asymptotic relative efficiency formulas, and thus no data are generated; this novel approach allows us to examine whether PM designs have theoretical advantages over RN designs, removing the impact of sampling error. Our primary findings are that (a) in manifest variable regression models, estimates of regression coefficients have much lower relative efficiency in PM designs as compared to RN designs, (b) the relative efficiency of factor correlation or latent regression coefficient estimates is maximized when the indicators of each latent variable come from different sets, and (c) the addition of an X set improves efficiency in manifest variable regression models only for the parameters that directly involve the X-set variables, but it substantially improves the efficiency of most parameters in latent variable models. We conclude that PM designs can be beneficial when the model of interest is a latent variable model; recommendations are made for how to optimize such a design.
Improved estimation of PM2.5 using Lagrangian satellite-measured aerosol optical depth
NASA Astrophysics Data System (ADS)
Olivas Saunders, Rolando
Suspended particulate matter (aerosols) with aerodynamic diameters less than 2.5 μm (PM2.5) has negative effects on human health, plays an important role in climate change and also causes the corrosion of structures by acid deposition. Accurate estimates of PM2.5 concentrations are thus relevant in air quality, epidemiology, cloud microphysics and climate forcing studies. Aerosol optical depth (AOD) retrieved by the Moderate Resolution Imaging Spectroradiometer (MODIS) satellite instrument has been used as an empirical predictor to estimate ground-level concentrations of PM2.5. These estimates usually have large uncertainties and errors. The main objective of this work is to assess the value of using upwind (Lagrangian) MODIS AOD as predictors in empirical models of PM2.5. The upwind locations of the Lagrangian AOD were estimated using modeled backward air trajectories. Since the specification of an arrival elevation is somewhat arbitrary, trajectories were calculated to arrive at four different elevations at ten measurement sites within the continental United States. A systematic examination revealed trajectory model calculations to be sensitive to starting elevation. With a 500 m difference in starting elevation, the 48-hr mean horizontal separation of trajectory endpoints was 326 km. When the difference in starting elevation was doubled and tripled to 1000 m and 1500 m, the mean horizontal separation of trajectory endpoints approximately doubled and tripled to 627 km and 886 km, respectively. A seasonal dependence of this sensitivity was also found: the smallest mean horizontal separation of trajectory endpoints was exhibited during the summer and the largest separations during the winter. A daily average AOD product was generated and coupled to the trajectory model in order to determine AOD values upwind of the measurement sites during the period 2003-2007.
Empirical models that included in situ AOD and upwind AOD as predictors of PM2.5 were generated by multivariate linear regressions using the least squares method. The multivariate models showed improved performance over the single-variable regression (PM2.5 and in situ AOD) models. The statistical significance of the improvement of the multivariate models over the single-variable regression models was tested using the extra sum of squares principle. In many cases, even when the R-squared was high for the multivariate models, the improvement over the single-variable models was not statistically significant. The R-squared of these multivariate models varied with the seasons, with the best performance occurring during the summer months. A set of seasonal categorical variables was included in the regressions to exploit this variability. The multivariate regression models that included these categorical seasonal variables performed better than the models that did not account for seasonal variability. Furthermore, 71% of these regressions exhibited improvement over the single-variable models that was statistically significant at a 95% confidence level.
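The extra-sum-of-squares comparison between a single-variable AOD model and a multivariate model can be sketched as follows, using synthetic data in place of the MODIS and PM2.5 records:

```python
import numpy as np

def rss(X, y):
    """Residual sum of squares and parameter count of an OLS fit."""
    Xd = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    r = y - Xd @ beta
    return float(r @ r), Xd.shape[1]

def extra_ss_F(X_small, X_full, y):
    """F statistic for H0: the extra columns in X_full add nothing."""
    rss0, p0 = rss(X_small, y)
    rss1, p1 = rss(X_full, y)
    df_num, df_den = p1 - p0, len(y) - p1
    return ((rss0 - rss1) / df_num) / (rss1 / df_den), (df_num, df_den)

rng = np.random.default_rng(1)
n = 200
aod_insitu = rng.normal(size=n)
aod_upwind = 0.5 * aod_insitu + rng.normal(size=n)  # correlated upwind AOD
pm25 = 2.0 + 1.0 * aod_insitu + 0.8 * aod_upwind + rng.normal(size=n)

F, dof = extra_ss_F(aod_insitu[:, None],
                    np.column_stack([aod_insitu, aod_upwind]), pm25)
# F is compared against the F(1, 197) critical value (about 3.89 at the 5% level)
print(F > 3.89)
```

With real data one would also report the p-value from the F distribution; the threshold here is the tabulated 5% critical value for these degrees of freedom.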
Svenning, J.-C.; Engelbrecht, B.M.J.; Kinner, D.A.; Kursar, T.A.; Stallard, R.F.; Wright, S.J.
2006-01-01
We used regression models and information-theoretic model selection to assess the relative importance of environment, local dispersal and historical contingency as controls of the distributions of 26 common plant species in tropical forest on Barro Colorado Island (BCI), Panama. We censused eighty-eight 0.09-ha plots scattered across the landscape. Environmental control, local dispersal and historical contingency were represented by environmental variables (soil moisture, slope, soil type, distance to shore, old-forest presence), a spatial autoregressive parameter (ρ), and four spatial trend variables, respectively. We built regression models, representing all combinations of the three hypotheses, for each species. The probability that the best model included the environmental variables, spatial trend variables and ρ averaged 33%, 64% and 50% across the study species, respectively. The environmental variables, spatial trend variables, ρ, and a simple intercept model received the strongest support for 4, 15, 5 and 2 species, respectively. Comparing the model results to information on species traits showed that species with strong spatial trends produced few and heavy diaspores, while species with strong soil moisture relationships were particularly drought-sensitive. In conclusion, history and local dispersal appeared to be the dominant controls of the distributions of common plant species on BCI. Copyright © 2006 Cambridge University Press.
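Information-theoretic model selection of the sort used here typically ranks candidate models by AIC and converts AIC differences into Akaike weights (relative model probabilities). A minimal sketch with invented RSS values, not the BCI census data:

```python
import math

def aic(rss, n, k):
    """Gaussian AIC from residual sum of squares, n observations, k parameters."""
    return n * math.log(rss / n) + 2 * k

def akaike_weights(aics):
    """Relative model probabilities from a list of AIC values."""
    best = min(aics)
    rel = [math.exp(-0.5 * (a - best)) for a in aics]
    total = sum(rel)
    return [r / total for r in rel]

n = 88  # number of plots in the study; the RSS values below are invented
models = {"environment": (42.0, 6), "trend": (39.5, 5), "combined": (37.0, 10)}
aics = {name: aic(rss, n, k) for name, (rss, k) in models.items()}
w = akaike_weights(list(aics.values()))
for name, wi in zip(models, w):
    print(name, round(wi, 3))
```

The weights sum to one, so the "probability that the best model included" a given hypothesis can be read directly off them.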
Smyczynska, Joanna; Hilczer, Maciej; Smyczynska, Urszula; Stawerska, Renata; Tadeusiewicz, Ryszard; Lewinski, Andrzej
2015-01-01
Multiple linear regression (MLR) models are the leading method for predicting the effectiveness of growth hormone (GH) therapy. To the best of our knowledge, we are the first to apply artificial neural networks (ANN) to this problem. ANN do not require assuming the functions linking independent and dependent variables. The aim of this study is to compare ANN and MLR models of GH therapy effectiveness. The analysis comprised data from 245 GH-deficient children (170 boys) treated with GH up to final height (FH). Independent variables included: patients' height, pre-treatment height velocity, chronological age, bone age, gender, pubertal status, parental heights, GH peak in 2 stimulation tests, and IGF-I concentration. The output variable was FH. For the testing dataset, the MLR model predicted FH SDS with an average error (RMSE) of 0.64 SD, explaining 34.3% of its variability; the ANN model derived on the same pre-processed data predicted FH SDS with an RMSE of 0.60 SD, explaining 42.0% of its variability; the ANN model derived on raw data predicted FH with an RMSE of 3.9 cm (0.63 SD), explaining 78.7% of its variability. ANN seem to be a valuable tool for predicting GH treatment effectiveness, especially since they can be applied to raw clinical data.
Saraswat, Prabhav; MacWilliams, Bruce A; Davis, Roy B
2012-04-01
Several multi-segment foot models to measure the motion of intrinsic joints of the foot have been reported. Use of these models in clinical decision making is limited due to lack of rigorous validation, including inter-clinician and inter-laboratory variability measures. A model with thoroughly quantified variability may significantly improve confidence in the results of such foot models. This study proposes a new clinical foot model with the underlying strategy of using separate anatomic and technical marker configurations and coordinate systems. Anatomical landmark and coordinate system identification is determined during a static subject calibration. Technical markers are located at optimal sites for dynamic motion tracking. The model is comprised of the tibia and three foot segments (hindfoot, forefoot and hallux), and inter-segmental joint angles are computed in three planes. Data collection was carried out on pediatric subjects at two sites (Site 1: 10 subjects by two clinicians; Site 2: 5 subjects by one clinician). A plaster mold method was used to quantify static intra-clinician and inter-clinician marker placement variability by allowing direct comparisons of marker data between sessions for each subject. Intra-clinician and inter-clinician joint angle variability were less than 4°. For dynamic walking kinematics, intra-clinician, inter-clinician and inter-laboratory variability were less than 6° for the ankle and forefoot, but slightly higher for the hallux. Inter-trial variability accounted for 2-4° of the total dynamic variability. Results indicate the proposed foot model reduces the effects of marker placement variability on computed foot kinematics during walking compared to similar measures in previous models. Copyright © 2011 Elsevier B.V. All rights reserved.
The origin of Total Solar Irradiance variability on timescales less than a day
NASA Astrophysics Data System (ADS)
Shapiro, Alexander; Krivova, Natalie; Schmutz, Werner; Solanki, Sami K.; Leng Yeo, Kok; Cameron, Robert; Beeck, Benjamin
2016-07-01
Total Solar Irradiance (TSI) varies on timescales from minutes to decades. It is generally accepted that variability on timescales of a day and longer is dominated by solar surface magnetic fields. For shorter timescales, several additional sources of variability have been proposed, including convection and oscillations. However, available simplified and highly parameterised models could not accurately explain the observed variability in high-cadence TSI records. We employed high-cadence solar imagery from the Helioseismic and Magnetic Imager onboard the Solar Dynamics Observatory and the SATIRE (Spectral And Total Irradiance Reconstruction) model of solar irradiance variability to recreate the magnetic component of TSI variability. Recent 3D simulations of solar near-surface convection with the MURaM code were used to calculate the TSI variability caused by convection. This allowed us to determine the threshold timescale separating TSI variability caused by the magnetic field from that caused by convection. Our model successfully replicates the TSI measurements by the PICARD/PREMOS radiometer, which span the period July 2010 to February 2014 at 2-minute cadence. Hence, we demonstrate that solar magnetism and convection can account for TSI variability at all timescales at which it has ever been measured (apart from the 5-minute component from p-modes).
Bennett, S.N.; Olson, J.R.; Kershner, J.L.; Corbett, P.
2010-01-01
Hybridization and introgression between introduced and native salmonids threaten the continued persistence of many inland cutthroat trout species. Environmental models have been developed to predict the spread of introgression, but few studies have assessed the role of propagule pressure. We used an extensive set of fish stocking records and geographic information system (GIS) data to produce a spatially explicit index of potential propagule pressure exerted by introduced rainbow trout in the Upper Kootenay River, British Columbia, Canada. We then used logistic regression and the information-theoretic approach to test the ability of a set of environmental and spatial variables to predict the level of introgression between native westslope cutthroat trout and introduced rainbow trout. Introgression was assessed using between four and seven co-dominant, diagnostic nuclear markers at 45 sites in 31 different streams. The best model for predicting introgression included our GIS propagule pressure index and an environmental variable that accounted for the biogeoclimatic zone of the site (r2 = 0.62). This model was 1.4 times more likely to explain introgression than the next-best model, which consisted of only the propagule pressure index variable. We created a composite model based on the model-averaged results of the seven top models that included environmental, spatial, and propagule pressure variables. The propagule pressure index had the highest importance weight (0.995) of all variables tested and was negatively related to sites with no introgression. This study used an index of propagule pressure and demonstrated that propagule pressure had the greatest influence on the level of introgression between a native and introduced trout in a human-induced hybrid zone. ?? 2010 by the Ecological Society of America.
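The model-averaged importance weight of a variable, as reported above for the propagule pressure index, is the sum of Akaike weights over all candidate models that contain the variable. A sketch with hypothetical AIC values (not those from the study):

```python
import math

# Hypothetical AIC values for a small candidate set; P = propagule-pressure
# index, E = environmental (biogeoclimatic) variable, S = spatial variable.
model_aic = {("P", "E"): 100.0, ("P",): 100.7, ("P", "S"): 101.9,
             ("E",): 108.0, ("S",): 109.5, (): 112.0}

best = min(model_aic.values())
rel = {m: math.exp(-0.5 * (a - best)) for m, a in model_aic.items()}
total = sum(rel.values())
weights = {m: r / total for m, r in rel.items()}

def importance(var):
    """Sum of Akaike weights over all models that include `var`."""
    return sum(w for m, w in weights.items() if var in m)

for v in ("P", "E", "S"):
    print(v, round(importance(v), 3))
```

With these invented numbers the top two models have a weight ratio of about 1.4, illustrating the kind of evidence ratio reported in the abstract; a variable appearing in all of the well-supported models accumulates an importance weight near 1.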
The development of a model to predict BW gain of growing cattle fed grass silage-based diets.
Huuskonen, A; Huhtanen, P
2015-08-01
The objective of this meta-analysis was to develop and validate empirical equations predicting BW gain (BWG) and carcass traits of growing cattle from intake and diet composition variables. The modelling was based on treatment mean data from feeding trials in growing cattle, in which the nutrient supply was manipulated by wide ranges of forage and concentrate factors. The final dataset comprised 527 diets in 116 studies. The diets were mainly based on grass silage or grass silage partly or completely replaced by whole-crop silages, hay or straw. The concentrate feeds consisted of cereal grains, fibrous by-products and protein supplements. Mixed model regression analysis with a random study effect was used to develop prediction equations for BWG and carcass traits. The best-fit models included linear and quadratic effects of metabolisable energy (ME) intake per metabolic BW (BW0.75), linear effects of BW0.75, and dietary concentrations of NDF, fat and feed metabolisable protein (MP) as significant variables. Although diet variables had significant effects on BWG, their contribution to improving the model predictions compared with ME intake models was small. Feed MP rather than total MP was included in the final model, since it is less correlated with dietary ME concentration than total MP. None of the quadratic terms of feed variables was significant (P>0.10) when included in the final models. Further, additional feed variables (e.g. silage fermentation products, forage digestibility) did not have significant effects on BWG. For carcass traits, increased ME intake (ME/BW0.75) improved both dressing proportion and carcass conformation score and increased (P<0.01) carcass fat score, whereas diet composition had no (P>0.10) effect on dressing proportion or carcass conformation score. The current study demonstrated that ME intake per BW0.75 was clearly the most important variable explaining the BWG response in growing cattle.
The effect of increased ME supply displayed diminishing responses that could be associated with increased energy concentration of BWG, reduced diet metabolisability (proportion of ME of gross energy) and/or decreased efficiency of ME utilisation for growth with increased intake. Negative effects of increased dietary NDF concentration on BWG were smaller compared to responses that energy evaluation systems predict for energy retention. The present results showed only marginal effects of protein supply on BWG in growing cattle.
A global distributed basin morphometric dataset
NASA Astrophysics Data System (ADS)
Shen, Xinyi; Anagnostou, Emmanouil N.; Mei, Yiwen; Hong, Yang
2017-01-01
Basin morphometry is vital information for relating storms to hydrologic hazards, such as landslides and floods. In this paper we present the first comprehensive global dataset of distributed basin morphometry at 30 arc seconds resolution. The dataset includes nine prime morphometric variables; in addition, we present formulas for generating twenty-one additional morphometric variables based on combinations of the prime variables. The dataset can aid different applications, including studies of land-atmosphere interaction and modelling of floods and droughts for sustainable water management. The validity of the dataset has been confirmed by successfully reproducing Hack's law.
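Hack's law, used above to validate the dataset, relates mainstream length L to basin area A as a power law, L = c·A^h, with an exponent h typically near 0.6. A sketch of the log-log regression check on synthetic basins (the dataset itself is not reproduced here):

```python
import numpy as np

# Synthetic basins standing in for the global dataset: areas drawn
# log-uniformly, lengths following Hack's law with multiplicative scatter.
rng = np.random.default_rng(42)
area = 10 ** rng.uniform(1, 5, size=500)  # km^2
length = 1.4 * area ** 0.6 * np.exp(rng.normal(scale=0.1, size=500))  # km

# Recover exponent h and coefficient c by least squares in log-log space.
h, log_c = np.polyfit(np.log(area), np.log(length), 1)
print(round(h, 2), round(float(np.exp(log_c)), 2))
```

A dataset that reproduces the expected exponent in such a fit is behaving consistently with Hack's law, which is the spirit of the validation mentioned above.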
Fan, Shu-Xiang; Huang, Wen-Qian; Li, Jiang-Bo; Guo, Zhi-Ming; Zhao, Chun-Jiang
2014-10-01
In order to detect the soluble solids content (SSC) of apple conveniently and rapidly, a ring fiber probe and a portable spectrometer were applied to obtain the spectra of apples. Different wavelength variable selection methods, including uninformative variable elimination (UVE), competitive adaptive reweighted sampling (CARS) and genetic algorithm (GA), were proposed to select effective wavelength variables of the NIR spectra for SSC in apple based on PLS. The backward interval LS-SVM (BiLS-SVM) and GA were used to select effective wavelength variables based on LS-SVM. Selected wavelength variables and the full wavelength range were set as input variables of the PLS model and LS-SVM model, respectively. The results indicated that the PLS model built using GA-CARS on 50 characteristic variables, selected from the full spectrum of 1512 wavelengths, achieved the optimal performance. The correlation coefficient (Rp) and root mean square error of prediction (RMSEP) for the prediction set were 0.962 and 0.403 °Brix, respectively, for SSC. The proposed GA-CARS method could effectively simplify the portable detection model of SSC in apple based on near-infrared spectroscopy and enhance the predictive precision. The study can provide a reference for the development of a portable apple soluble solids content spectrometer.
Remote-sensing based approach to forecast habitat quality under climate change scenarios
Requena-Mullor, Juan M.; López, Enrique; Castro, Antonio J.; Alcaraz-Segura, Domingo; Castro, Hermelindo; Reyes, Andrés; Cabello, Javier
2017-01-01
As climate change is expected to have a significant impact on species distributions, there is an urgent challenge to provide reliable information to guide conservation biodiversity policies. In addressing this challenge, we propose a remote sensing-based approach to forecast the future habitat quality for European badger, a species not abundant and at risk of local extinction in the arid environments of southeastern Spain, by incorporating environmental variables related with the ecosystem functioning and correlated with climate and land use. Using ensemble prediction methods, we designed global spatial distribution models for the distribution range of badger using presence-only data and climate variables. Then, we constructed regional models for an arid region in the southeast Spain using EVI (Enhanced Vegetation Index) derived variables and weighting the pseudo-absences with the global model projections applied to this region. Finally, we forecast the badger potential spatial distribution in the time period 2071-2099 based on IPCC scenarios incorporating the uncertainty derived from the predicted values of EVI-derived variables. By including remotely sensed descriptors of the temporal dynamics and spatial patterns of ecosystem functioning into spatial distribution models, results suggest that future forecast is less favorable for European badgers than not including them. In addition, change in spatial pattern of habitat suitability may become higher than when forecasts are based just on climate variables. Since the validity of future forecast only based on climate variables is currently questioned, conservation policies supported by such information could have a biased vision and overestimate or underestimate the potential changes in species distribution derived from climate change. 
The incorporation of ecosystem functional attributes derived from remote sensing in the modeling of future forecast may contribute to the improvement of the detection of ecological responses under climate change scenarios. PMID:28257501
An Economic Model of U.S. Airline Operating Expenses
NASA Technical Reports Server (NTRS)
Harris, Franklin D.
2005-01-01
This report presents a new economic model of operating expenses for 67 airlines. The model is based on data that the airlines reported to the United States Department of Transportation in 1999. The model incorporates expense-estimating equations that capture direct and indirect expenses of both passenger and cargo airlines. The variables and business factors included in the equations are detailed enough to calculate expenses at the flight equipment reporting level. Total operating expenses for a given airline are then obtained by summation over all aircraft operated by the airline. The model's accuracy is demonstrated by correlation with the DOT Form 41 data from which it was derived. Passenger airlines are more accurately modeled than cargo airlines. An appendix presents a concise summary of the expense estimating equations with explanatory notes. The equations include many operational and aircraft variables, which accommodate any changes that airline and aircraft manufacturers might make to lower expenses in the future. In 1999, total operating expenses of the 67 airlines included in this study amounted to slightly over $100.5 billion. The economic model reported herein estimates $109.3 billion.
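The report's structure of per-fleet expense equations summed over an airline can be sketched as below. The equation form and every coefficient here are hypothetical placeholders, not the Form 41-derived equations of the model itself:

```python
# Sketch: direct operating expense built from per-fleet expense equations,
# then summed over all fleets an airline operates. All rates are invented.
def fleet_expense(block_hours, fuel_gallons, fuel_price, crew_cost_per_hr):
    """Direct operating expense for one fleet type, in dollars per year."""
    fuel = fuel_gallons * fuel_price
    crew = block_hours * crew_cost_per_hr
    maintenance = 650.0 * block_hours  # placeholder maintenance rate, $/block-hour
    return fuel + crew + maintenance

fleets = [
    # (block hours, fuel gallons, fuel $/gal, crew $/block-hour)
    (120_000, 9.0e7, 0.55, 900.0),   # hypothetical narrow-body fleet
    (45_000, 7.5e7, 0.55, 1400.0),   # hypothetical wide-body fleet
]
total = sum(fleet_expense(*f) for f in fleets)
print(f"direct operating expense: ${total / 1e9:.2f} billion")
```

The actual model adds many more operational and aircraft variables per equation, plus indirect expense terms, but the summation-over-fleets architecture is the same.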
Preliminary assessment of factors influencing riverine fish communities in Massachusetts.
Armstrong, David S.; Richards, Todd A.; Brandt, Sara L.
2010-01-01
The U.S. Geological Survey, in cooperation with the Massachusetts Department of Conservation and Recreation (MDCR), Massachusetts Department of Environmental Protection (MDEP), and the Massachusetts Department of Fish and Game (MDFG), conducted a preliminary investigation of fish communities in small- to medium-sized Massachusetts streams. The objective of this investigation was to determine relations between fish-community characteristics and anthropogenic alteration, including flow alteration and impervious cover, relative to the effect of physical basin and land-cover (environmental) characteristics. Fish data were obtained for 756 fish-sampling sites from the Massachusetts Division of Fisheries and Wildlife fish-community database. A review of the literature was used to select a set of fish metrics responsive to flow alteration. Fish metrics tested include two fish-community metrics (fluvial-fish relative abundance and fluvial-fish species richness), and five indicator species metrics (relative abundance of brook trout, blacknose dace, fallfish, white sucker, and redfin pickerel). Streamflows were simulated for each fish-sampling site using the Sustainable Yield Estimator application (SYE). Daily streamflows and the SYE water-use database were used to determine a set of indicators of flow alteration, including percent alteration of August median flow, water-use intensity, and withdrawal and return-flow fraction. The contributing areas to the fish-sampling sites were delineated and used with a Geographic Information System (GIS) to determine a set of environmental characteristics, including elevation, basin slope, percent sand and gravel, percent wetland, and percent open water, and a set of anthropogenic-alteration variables, including impervious cover and dam density. 
Two analytical techniques, quantile regression and generalized linear modeling, were applied to determine the association between fish-response variables and the selected environmental and anthropogenic explanatory variables. Quantile regression indicated that flow alteration and impervious cover were negatively associated with both fluvial-fish relative abundance and fluvial-fish species richness. Three generalized linear models (GLMs) were developed to quantify the response of fish communities to multiple environmental and anthropogenic variables. Flow-alteration variables are statistically significant for the fluvial-fish relative-abundance model. Impervious cover is statistically significant for the fluvial-fish relative-abundance, fluvial-fish species richness, and brook trout relative-abundance models. The variables in the equations were demonstrated to be significant, and the variability explained by the models, as measured by the correlation between observed and predicted values, ranges from 39 to 65 percent. The GLM models indicated that, keeping all other variables the same, a one-unit (1 percent) increase in the percent depletion or percent surcharging of August median flow would result in a 0.4-percent decrease in the relative abundance (in counts per hour) of fluvial fish and that the relative abundance of fluvial fish was expected to be about 55 percent lower in net-depleted streams than in net-surcharged streams. The GLM models also indicated that a unit increase in impervious cover resulted in a 5.5-percent decrease in the relative abundance of fluvial fish and a 2.5-percent decrease in fluvial-fish species richness.
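The quoted percent changes correspond to coefficients of a GLM with a log link, where a one-unit increase in a predictor multiplies the expected response by exp(β). A sketch that back-solves illustrative coefficients from the percentages reported above (the coefficients themselves are not given in the abstract):

```python
import math

def pct_change(beta):
    """Percent change in the expected response per one-unit predictor increase
    under a log-link GLM."""
    return (math.exp(beta) - 1.0) * 100.0

# Coefficients back-solved from the reported effects, for illustration only:
b_flow = math.log(1 - 0.004)        # ~0.4% drop per 1% flow alteration
b_impervious = math.log(1 - 0.055)  # ~5.5% drop per 1% impervious cover

print(round(pct_change(b_flow), 1))
print(round(pct_change(b_impervious), 1))
```

This multiplicative reading is why the effects compound: ten units of impervious cover do not subtract 55%, but multiply abundance by 0.945 ten times over.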
Biosphere model simulations of interannual variability in terrestrial 13C/12C exchange
NASA Astrophysics Data System (ADS)
van der Velde, I. R.; Miller, J. B.; Schaefer, K.; Masarie, K. A.; Denning, S.; White, J. W. C.; Tans, P. P.; Krol, M. C.; Peters, W.
2013-09-01
Previous studies suggest that a large part of the variability in the atmospheric ratio of 13CO2/12CO2 originates from carbon exchange with the terrestrial biosphere rather than with the oceans. Since this variability is used to quantitatively partition the total carbon sink, we here investigate the contribution of interannual variability (IAV) in biospheric exchange to the observed atmospheric 13C variations. We use the Simple Biosphere - Carnegie-Ames-Stanford Approach (SiBCASA) biogeochemical model, including a detailed isotopic fractionation scheme, separate 12C and 13C biogeochemical pools, and satellite-observed fire disturbances. This model of 12CO2 and 13CO2 thus also produces return fluxes of 13CO2 from its differently aged pools, contributing to the so-called disequilibrium flux. Our simulated terrestrial 13C budget closely resembles previously published model results for plant discrimination and disequilibrium fluxes and similarly suggests that variations in C3 discrimination and year-to-year variations in C3 and C4 productivity are the main drivers of their IAV. But the year-to-year variability in the isotopic disequilibrium flux is much lower (1σ = ±1.5 PgC ‰ yr-1) than required (±12.5 PgC ‰ yr-1) to match atmospheric observations, under the common assumption of low variability in net ocean CO2 fluxes. This contrasts with earlier published results. It is currently unclear how to increase IAV in these drivers, suggesting that SiBCASA still misses processes that enhance variability in plant discrimination and relative C3/C4 productivity. Alternatively, 13C budget terms other than terrestrial disequilibrium fluxes, possibly including the atmospheric growth rate, must have significantly different IAV in order to close the atmospheric 13C budget on a year-to-year basis.
A novel model incorporating two variability sources for describing motor evoked potentials
Goetz, Stefan M.; Luber, Bruce; Lisanby, Sarah H.; Peterchev, Angel V.
2014-01-01
Objective Motor evoked potentials (MEPs) play a pivotal role in transcranial magnetic stimulation (TMS), e.g., for determining the motor threshold and probing cortical excitability. Sampled across the range of stimulation strengths, MEPs outline an input–output (IO) curve, which is often used to characterize the corticospinal tract. More detailed understanding of the signal generation and variability of MEPs would provide insight into the underlying physiology and aid correct statistical treatment of MEP data. Methods A novel regression model is tested using measured IO data of twelve subjects. The model splits MEP variability into two independent contributions, acting on both sides of a strong sigmoidal nonlinearity that represents neural recruitment. Traditional sigmoidal regression with a single variability source after the nonlinearity is used for comparison. Results The distribution of MEP amplitudes varied across different stimulation strengths, violating statistical assumptions in traditional regression models. In contrast to the conventional regression model, the dual variability source model better described the IO characteristics including phenomena such as changing distribution spread and skewness along the IO curve. Conclusions MEP variability is best described by two sources that most likely separate variability in the initial excitation process from effects occurring later on. The new model enables more accurate and sensitive estimation of the IO curve characteristics, enhancing its power as a detection tool, and may apply to other brain stimulation modalities. Furthermore, it extracts new information from the IO data concerning the neural variability—information that has previously been treated as noise. PMID:24794287
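The dual-source idea described above can be sketched in a few lines (all curve parameters and noise levels below are hypothetical, chosen only for illustration): additive excitability noise enters before the sigmoidal recruitment curve, multiplicative noise after it, so the spread and skewness of simulated MEP amplitudes change along the IO curve.

```python
import math
import random

random.seed(0)

def sigmoid_io(x, low=0.01, high=5.0, x50=50.0, k=0.1):
    # Sigmoidal recruitment curve: MEP amplitude (mV, illustrative units)
    # as a function of stimulation strength (percent of maximum output).
    return low + (high - low) / (1.0 + math.exp(-k * (x - x50)))

def mep_sample(x, sd_in=4.0, sd_out=0.3):
    # Input-side variability: additive noise on effective stimulation strength.
    # Output-side variability: multiplicative (log-normal) scaling of amplitude.
    return sigmoid_io(x + random.gauss(0.0, sd_in)) * math.exp(random.gauss(0.0, sd_out))

low_x = [mep_sample(35.0) for _ in range(2000)]   # near the foot of the curve
mid_x = [mep_sample(50.0) for _ in range(2000)]   # on the steep section
print(round(sum(low_x) / len(low_x), 2), round(sum(mid_x) / len(mid_x), 2))
```

A single noise source after the sigmoid cannot reproduce this position-dependent change of the amplitude distribution, which is the phenomenon the dual-source regression model is built to capture.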
Self-dual form of Ruijsenaars-Schneider models and ILW equation with discrete Laplacian
NASA Astrophysics Data System (ADS)
Zabrodin, A.; Zotov, A.
2018-02-01
We discuss a self-dual form of the Bäcklund transformations for the continuous (in the time variable) glN Ruijsenaars-Schneider model. It is based on first-order equations in N + M complex variables, which include N positions of particles and M dual variables. The latter satisfy the equations of motion of the glM Ruijsenaars-Schneider model. In the elliptic case M = N holds, while for the rational and trigonometric models M is not necessarily equal to N. Our consideration is similar to the previously obtained results for the Calogero-Moser models, which are recovered in the non-relativistic limit. We also show that the self-dual description of the Ruijsenaars-Schneider models can be derived from the complexified intermediate long wave equation with discrete Laplacian by means of a simple pole ansatz, just as the Calogero-Moser models arise from the ordinary intermediate long wave and Benjamin-Ono equations.
Estimating the Effects of Pre-College Education on College Performance
2013-05-10
background information from many variables into a single measure of the expected likelihood of a person receiving treatment. This leads into a discussion of...but do not directly affect outcome variables like academic order of merit, graduation rates, or academic grades. Our model had to not only include the...both indicator variables for whether the individual’s parents ever served in any of the armed forces. High School Quality Measure is a variable
Further improvements of a new model for turbulent convection in stars
NASA Technical Reports Server (NTRS)
Canuto, V. M.; Mazzitelli, I.
1992-01-01
The effects of including a variable molecular weight and of using the newest opacities of Rogers and Iglesias (1991) as inputs to a recent model by Canuto and Mazzitelli (1991) for stellar turbulent convection are studied. Solar evolutionary tracks are used to conclude that the original model for turbulence with mixing length Lambda = z, Giuli's variable Q unequal to 1, and the new opacities yields a fit to solar T(eff) within 0.5 percent. A formulation of Lambda is proposed that extends the purely nonlocal Lambda = z expression to include local effects. A new expression for Lambda is obtained which generalizes both the mixing length theory (MLT) phenomenological expression for Lambda and the model Lambda = z. It is argued that the MLT should now be abandoned.
Zhang, Peng; Parenteau, Chantal; Wang, Lu; Holcombe, Sven; Kohoyda-Inglis, Carla; Sullivan, June; Wang, Stewart
2013-11-01
This study resulted in a model-averaging methodology that predicts crash injury risk using vehicle, demographic, and morphomic variables and assesses the importance of individual predictors. The effectiveness of this methodology was illustrated through analysis of occupant chest injuries in frontal vehicle crashes. The crash data were obtained from the International Center for Automotive Medicine (ICAM) database for calendar year 1996 to 2012. The morphomic data are quantitative measurements of variations in human body 3-dimensional anatomy. Morphomics are obtained from imaging records. In this study, morphomics were obtained from chest, abdomen, and spine CT using novel patented algorithms. A NASS-trained crash investigator with over thirty years of experience collected the in-depth crash data. There were 226 cases available with occupants involved in frontal crashes and morphomic measurements. Only cases with complete recorded data were retained for statistical analysis. Logistic regression models were fitted using all possible configurations of vehicle, demographic, and morphomic variables. Different models were ranked by the Akaike Information Criteria (AIC). An averaged logistic regression model approach was used due to the limited sample size relative to the number of variables. This approach is helpful when addressing variable selection, building prediction models, and assessing the importance of individual variables. The final predictive results were developed using this approach, based on the top 100 models in the AIC ranking. Model-averaging minimized model uncertainty, decreased the overall prediction variance, and provided an approach to evaluating the importance of individual variables. There were 17 variables investigated: four vehicle, four demographic, and nine morphomic. More than 130,000 logistic models were investigated in total. The models were characterized into four scenarios to assess individual variable contribution to injury risk. 
Scenario 1 used vehicle variables; Scenario 2, vehicle and demographic variables; Scenario 3, vehicle and morphomic variables; and Scenario 4 used all variables. AIC was used to rank the models and to address over-fitting. In each scenario, the results based on the top three models and the averages of the top 100 models were presented. The AIC and the area under the receiver operating characteristic curve (AUC) were reported in each model. The models were re-fitted after removing each variable one at a time. The increases of AIC and the decreases of AUC were then assessed to measure the contribution and importance of the individual variables in each model. The importance of the individual variables was also determined by their weighted frequencies of appearance in the top 100 selected models. Overall, the AUC was 0.58 in Scenario 1, 0.78 in Scenario 2, 0.76 in Scenario 3 and 0.82 in Scenario 4. The results showed that morphomic variables are as accurate at predicting injury risk as demographic variables. The results of this study emphasize the importance of including morphomic variables when assessing injury risk. The results also highlight the need for morphomic data in the development of human mathematical models when assessing restraint performance in frontal crashes, since morphomic variables are more "tangible" measurements compared to demographic variables such as age and gender. Copyright © 2013 Elsevier Ltd. All rights reserved.
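The AIC-based averaging step described above can be sketched directly (a minimal illustration: the AIC values and per-model predicted probabilities below are hypothetical, and the study's exact weighting of its top 100 models may differ from these standard Akaike weights):

```python
import math

def akaike_weights(aics):
    # Akaike weight of model i: w_i proportional to exp(-0.5 * (AIC_i - AIC_min)),
    # normalized so the weights sum to 1.
    amin = min(aics)
    rel = [math.exp(-0.5 * (a - amin)) for a in aics]
    s = sum(rel)
    return [r / s for r in rel]

def averaged_risk(aics, risks):
    # Model-averaged injury-risk prediction: weight each model's predicted
    # probability by its Akaike weight.
    return sum(w * r for w, r in zip(akaike_weights(aics), risks))

# Hypothetical AIC values and per-model predicted injury probabilities.
aics = [210.0, 212.0, 215.0]
risks = [0.30, 0.35, 0.25]
avg = averaged_risk(aics, risks)
print(round(avg, 3))
```

Averaging in this way downweights, rather than discards, near-competitive models, which is what reduces model-selection uncertainty and prediction variance when the sample is small relative to the number of candidate variables.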
Seasonally adjusted birth frequencies follow the Poisson distribution.
Barra, Mathias; Lindstrøm, Jonas C; Adams, Samantha S; Augestad, Liv A
2015-12-15
Variations in birth frequencies have an impact on activity planning in maternity wards. Previous studies of this phenomenon have commonly included elective births. A Danish study of spontaneous births found that birth frequencies were well modelled by a Poisson process. Somewhat unexpectedly, there were also weekly variations in the frequency of spontaneous births. Another study claimed that birth frequencies follow the Benford distribution. Our objective was to test these results. We analysed 50,017 spontaneous births at Akershus University Hospital in the period 1999-2014. To investigate the Poisson distribution of these births, we plotted their variance over a sliding average. We specified various Poisson regression models, with the number of births on a given day as the outcome variable. The explanatory variables included various combinations of years, months, days of the week and the digit sum of the date. The relationship between the variance and the average fits well with an underlying Poisson process. A Benford distribution was disproved by a goodness-of-fit test (p < 0.01). The fundamental model with year and month as explanatory variables is significantly improved (p < 0.001) by adding day of the week as an explanatory variable. Altogether 7.5% more children are born on Tuesdays than on Sundays. The digit sum of the date is not significant as an explanatory variable (p = 0.23), nor does it increase the explained variance. INTERPRETATION: Spontaneous births are well modelled by a time-dependent Poisson process when monthly and day-of-the-week variation is included. The frequency is highest in summer, peaking in June and July; Friday and Tuesday stand out as particularly busy days; and the activity level is at its lowest during weekends.
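The Poisson property being tested (variance tracking the mean) can be sketched with simulated daily counts. The daily mean below is derived from the abstract's 50,017 births over 1999-2014; the sampler and sample size are illustrative, and the day-of-week effect would enter a Poisson regression as a multiplicative rate factor (e.g. Tuesday ≈ 1.075 × Sunday).

```python
import math
import random

random.seed(1)

def poisson(lam):
    # Knuth's multiplicative method for Poisson sampling (fine for small lam).
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

# Mean daily births implied by the abstract: 50,017 births over 16 years.
base = 50017 / (16 * 365.25)

counts = [poisson(base) for _ in range(5000)]
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / (len(counts) - 1)
# For a Poisson process the index of dispersion var/mean is close to 1.
print(round(var / mean, 2))
```

Elective births would violate this check: scheduling concentrates them on weekdays and inflates the variance beyond the mean, which is why the study restricts itself to spontaneous births.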
Up-scaling of multi-variable flood loss models from objects to land use units at the meso-scale
NASA Astrophysics Data System (ADS)
Kreibich, Heidi; Schröter, Kai; Merz, Bruno
2016-05-01
Flood risk management increasingly relies on risk analyses, including loss modelling. Most flood loss models applied in standard practice describe complex damaging processes with simple approaches such as stage-damage functions. Novel multi-variable models significantly improve loss estimation on the micro-scale and may also be advantageous for large-scale applications. However, additional input parameters introduce additional uncertainty, all the more so in up-scaling procedures for meso-scale applications, where the parameters must be estimated on a regional, area-wide basis. To learn more about the challenges associated with up-scaling multi-variable flood loss models, the following approach is applied: single- and multi-variable micro-scale flood loss models are up-scaled and applied on the meso-scale, namely on the basis of ATKIS land-use units. Application and validation are undertaken in 19 municipalities that were affected during the 2002 flood of the River Mulde in Saxony, Germany, by comparison with official loss data provided by the Saxon Relief Bank (SAB). In this meso-scale, case-study-based model validation, most multi-variable models show smaller errors than the uni-variable stage-damage functions. The results demonstrate the suitability of the up-scaling approach and, in accordance with micro-scale validation studies, show that multi-variable models improve flood loss modelling on the meso-scale as well. However, uncertainties remain high, stressing the importance of uncertainty quantification. Thus, the development of probabilistic loss models that inherently provide uncertainty information, like the BT-FLEMO model used in this study, is the way forward.
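The contrast between a uni-variable stage-damage function and a multi-variable loss model can be sketched as follows (all coefficients and predictors here are hypothetical illustrations, not those of FLEMO or BT-FLEMO):

```python
def stage_damage(depth_m):
    # Uni-variable stage-damage curve (hypothetical coefficients):
    # relative building loss as a saturating function of water depth.
    return min(1.0, 0.25 * depth_m ** 0.5) if depth_m > 0 else 0.0

def multi_variable_loss(depth_m, duration_d, precaution=0.0):
    # Sketch of a multi-variable model: depth plus inundation duration (days),
    # damped by a precaution score in [0, 1]. Coefficients are illustrative.
    base = stage_damage(depth_m)
    return min(1.0, base * (1.0 + 0.05 * duration_d) * (1.0 - 0.3 * precaution))

print(round(stage_damage(2.0), 3), round(multi_variable_loss(2.0, 4.0, 0.5), 3))
```

The up-scaling problem in the abstract is then visible in the signatures alone: the uni-variable model needs only a depth estimate per land-use unit, while the multi-variable model needs every additional predictor estimated area-wide, each bringing its own uncertainty.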
Factors Determining Success in Youth Judokas
Krstulović, Saša; Caput, Petra Đapić
2017-01-01
Abstract The aim of this study was to compare two models of determining factors for success in judo. The first model (Model A) included testing motor abilities of high-level Croatian judokas in the cadet age category. The sample in Model A consisted of 71 male and female judokas aged 16 ± 0.6 years who were divided into four subsamples according to sex and weight category. The second model (Model B) consisted of interviewing 40 top-level judo experts on the importance of motor abilities for cadets’ success in judo. According to Model A, the variables with the greatest impact on the criterion variable of success in males and females of the heavier weight categories were those assessing maximum strength, coordination and jumping ability. In the lighter male weight categories, the variable assessing agility had the highest correlation with the criterion variable of success, whereas in the lighter female weight categories the variable assessing muscular endurance had the greatest impact on success. In Model B, specific endurance was crucial for success in judo, while flexibility was the least important, regardless of sex and weight category. Spearman’s rank correlation coefficients showed that there were no significant correlations between the results obtained in Models A and B for any of the observed subsamples. Although no significant correlations between the factors for success obtained through Models A and B were found, common determinants of success, regardless of the applied model, were identified. PMID:28469759
NASA Astrophysics Data System (ADS)
Tian, Ran; Dai, Xiaoye; Wang, Dabiao; Shi, Lin
2018-06-01
In order to improve the prediction performance of the numerical simulations for heat transfer of supercritical pressure fluids, a variable turbulent Prandtl number (Prt) model for vertical upward flow at supercritical pressures was developed in this study. The effects of Prt on the numerical simulation were analyzed, especially for the heat transfer deterioration conditions. Based on the analyses, the turbulent Prandtl number was modeled as a function of the turbulent viscosity ratio and molecular Prandtl number. The model was evaluated using experimental heat transfer data of CO2, water and Freon. The wall temperatures, including the heat transfer deterioration cases, were more accurately predicted by this model than by traditional numerical calculations with a constant Prt. By analyzing the predicted results with and without the variable Prt model, it was found that the predicted velocity distribution and turbulent mixing characteristics with the variable Prt model are quite different from that predicted by a constant Prt. When heat transfer deterioration occurs, the radial velocity profile deviates from the log-law profile and the restrained turbulent mixing then leads to the deteriorated heat transfer.
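The abstract names the dependencies of the variable Prt closure (turbulent viscosity ratio and molecular Prandtl number) but not its functional form. The blend below is purely hypothetical, shown only to illustrate the general shape such a closure could take: recovering the molecular Pr where turbulence is weak and relaxing toward a conventional constant in the fully turbulent core.

```python
def prt_variable(visc_ratio, pr_mol, prt_inf=0.85):
    # Hypothetical variable turbulent Prandtl number (NOT the model from the
    # study): a smooth blend between the molecular Pr (visc_ratio -> 0, near
    # the wall) and a conventional constant Prt (large visc_ratio, core flow).
    # visc_ratio is the turbulent-to-molecular viscosity ratio mu_t / mu.
    w = 1.0 / (1.0 + visc_ratio)
    return w * pr_mol + (1.0 - w) * prt_inf

# Near-wall supercritical CO2 can have a large molecular Pr; the closure
# then decays toward 0.85 with increasing turbulent viscosity ratio.
print(round(prt_variable(0.0, 2.0), 3), round(prt_variable(10.0, 2.0), 3))
```

Whatever the exact form, the practical point of the abstract stands: a constant Prt misrepresents turbulent heat flux exactly where heat transfer deterioration occurs, so making Prt a function of local turbulence quantities changes the predicted velocity and temperature fields there.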
A Protective Factors Model for Alcohol Abuse and Suicide Prevention among Alaska Native Youth
Allen, James; Mohatt, Gerald V.; Fok, Carlotta Ching Ting; Henry, David; Burkett, Rebekah
2014-01-01
This study provides an empirical test of a culturally grounded theoretical model for prevention of alcohol abuse and suicide risk with Alaska Native youth, using a promising set of culturally appropriate measures for the study of the process of change and outcome. This model is derived from qualitative work that generated a heuristic model of protective factors from alcohol (Allen et al., 2006; Mohatt, Hazel et al., 2004; Mohatt, Rasmus et al., 2004). Participants included 413 rural Alaska Native youth ages 12-18 who assisted in testing a predictive model of Reasons for Life and Reflective Processes about alcohol abuse consequences as co-occurring outcomes. Specific individual, family, peer, and community level protective factor variables predicted these outcomes. Results suggest prominent roles for these predictor variables as intermediate prevention strategy target variables in a theoretical model for a multilevel intervention. The model guides understanding of underlying change processes in an intervention to increase the ultimate outcome variables of Reasons for Life and Reflective Processes regarding the consequences of alcohol abuse. PMID:24952249
Response of benthic algae to environmental gradients in an agriculturally dominated landscape
Munn, M.D.; Black, R.W.; Gruber, S.J.
2002-01-01
Benthic algal communities were assessed in an agriculturally dominated landscape in the Central Columbia Plateau, Washington, to determine which environmental variables best explained species distributions, and whether algae species optima models were useful in predicting specific water-quality parameters. Land uses in the study area included forest, range, urban, and agriculture. Most of the streams in this region can be characterized as open-channel systems influenced by intensive dryland (nonirrigated) and irrigated agriculture. Algal communities in forested streams were dominated by blue-green algae, with communities in urban and range streams dominated by diatoms. The predominance of either blue-greens or diatoms in agricultural streams varied greatly depending on the specific site. Canonical correspondence analysis (CCA) indicated a strong gradient effect of several key environmental variables on benthic algal community composition. Conductivity and % agriculture were the dominant explanatory variables when all sites (n = 24) were included in the CCA; water velocity replaced conductivity when the CCA included only agricultural and urban sites. Other significant explanatory variables included dissolved inorganic nitrogen (DIN), orthophosphate (OP), discharge, and precipitation. Regression and calibration models accurately predicted conductivity based on benthic algal communities, with OP having slightly lower predictability. The model for DIN was poor, and therefore may be less useful in this system. Thirty-four algal taxa were identified as potential indicators of conductivity and nutrient conditions, with most indicators being diatoms except for the blue-greens Anabaena sp. and Lyngbya sp.
NASA Technical Reports Server (NTRS)
Pap, Judit M. (Editor); Froehlich, Claus (Editor); Hudson, Hugh S. (Editor); Tobiska, W. Kent (Editor)
1994-01-01
Variations in solar and stellar irradiances have long been of interest. An International Astronomical Union (IAU) colloquium reviewed such relevant subjects as observations, theoretical interpretations, and empirical and physical models, with a special emphasis on climatic impact of solar irradiance variability. Specific topics discussed included: (1) General Reviews on Observations of Solar and Stellar Irradiance Variability; (2) Observational Programs for Solar and Stellar Irradiance Variability; (3) Variability of Solar and Stellar Irradiance Related to the Network, Active Regions (Sunspots and Plages), and Large-Scale Magnetic Structures; (4) Empirical Models of Solar Total and Spectral Irradiance Variability; (5) Solar and Stellar Oscillations, Irradiance Variations and their Interpretations; and (6) The Response of the Earth's Atmosphere to Solar Irradiance Variations and Sun-Climate Connections.
NASA Astrophysics Data System (ADS)
Tesemma, Z. K.; Wei, Y.; Peel, M. C.; Western, A. W.
2014-09-01
This study assessed the effect of using observed monthly leaf area index (LAI) on hydrologic model performance and the simulation of streamflow during drought using the variable infiltration capacity (VIC) hydrological model in the Goulburn-Broken catchment of Australia, which has heterogeneous vegetation, soil and climate zones. VIC was calibrated with both observed monthly LAI and long-term mean monthly LAI, which were derived from the Global Land Surface Satellite (GLASS) observed monthly LAI dataset covering the period from 1982 to 2012. The model performance under wet and dry climates for the two different LAI inputs was assessed using three criteria, the classical Nash-Sutcliffe efficiency, the logarithm transformed flow Nash-Sutcliffe efficiency and the percentage bias. Finally, the percentage deviation of the simulated monthly streamflow using the observed monthly LAI from simulated streamflow using long-term mean monthly LAI was computed. The VIC model predicted monthly streamflow in the selected sub-catchments with model efficiencies ranging from 61.5 to 95.9% during calibration (1982-1997) and 59 to 92.4% during validation (1998-2012). Our results suggest systematic improvements from 4 to 25% in the Nash-Sutcliffe efficiency in pasture dominated catchments when the VIC model was calibrated with the observed monthly LAI instead of the long-term mean monthly LAI. There was limited systematic improvement in tree dominated catchments. The results also suggest that the model overestimation or underestimation of streamflow during wet and dry periods can be reduced to some extent by including the year-to-year variability of LAI in the model, thus reflecting the responses of vegetation to fluctuations in climate and other factors. Hence, the year-to-year variability in LAI should not be neglected; rather it should be included in model calibration as well as simulation of monthly water balance.
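The three performance criteria named above are standard and can be sketched directly (a minimal implementation; variable names are illustrative, and sign conventions for percentage bias vary between studies):

```python
import math

def nse(obs, sim):
    # Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the model is no
    # better than predicting the mean of the observations.
    mo = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mo) ** 2 for o in obs)
    return 1.0 - num / den

def log_nse(obs, sim):
    # NSE on log-transformed flows, which weights low-flow (drought) periods
    # more heavily than the classical NSE does.
    return nse([math.log(o) for o in obs], [math.log(s) for s in sim])

def pbias(obs, sim):
    # Percentage bias; with this sign convention, positive means the model
    # over-predicts total flow.
    return 100.0 * (sum(sim) - sum(obs)) / sum(obs)
```

Using both the classical and the log-transformed NSE, as this study does, is what makes the calibration sensitive to drought-period performance rather than only to high-flow events.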
NASA Astrophysics Data System (ADS)
Tesemma, Z. K.; Wei, Y.; Peel, M. C.; Western, A. W.
2015-09-01
This study assessed the effect of using observed monthly leaf area index (LAI) on hydrological model performance and the simulation of runoff using the Variable Infiltration Capacity (VIC) hydrological model in the Goulburn-Broken catchment of Australia, which has heterogeneous vegetation, soil and climate zones. VIC was calibrated with both observed monthly LAI and long-term mean monthly LAI, which were derived from the Global Land Surface Satellite (GLASS) leaf area index dataset covering the period from 1982 to 2012. The model performance under wet and dry climates for the two different LAI inputs was assessed using three criteria, the classical Nash-Sutcliffe efficiency, the logarithm transformed flow Nash-Sutcliffe efficiency and the percentage bias. Finally, the deviation of the simulated monthly runoff using the observed monthly LAI from simulated runoff using long-term mean monthly LAI was computed. The VIC model predicted monthly runoff in the selected sub-catchments with model efficiencies ranging from 61.5% to 95.9% during calibration (1982-1997) and 59% to 92.4% during validation (1998-2012). Our results suggest systematic improvements, from 4% to 25% in Nash-Sutcliffe efficiency, in sparsely forested sub-catchments when the VIC model was calibrated with observed monthly LAI instead of long-term mean monthly LAI. There was limited systematic improvement in tree dominated sub-catchments. The results also suggest that the model overestimation or underestimation of runoff during wet and dry periods can be reduced to 25 mm and 35 mm respectively by including the year-to-year variability of LAI in the model, thus reflecting the responses of vegetation to fluctuations in climate and other factors. Hence, the year-to-year variability in LAI should not be neglected; rather it should be included in model calibration as well as simulation of monthly water balance.
Akter, Rokeya; Hu, Wenbiao; Naish, Suchithra; Banu, Shahera; Tong, Shilu
2017-06-01
To assess the epidemiological evidence on the joint effects of climate variability and socioecological factors on dengue transmission. Following PRISMA guidelines, a detailed literature search was conducted in PubMed, Web of Science and Scopus. Peer-reviewed, freely available and full-text articles, considering both climate and socioecological factors in relation to dengue, published in English from January 1993 to October 2015, were included in this review. Twenty studies met the inclusion criteria and assessed the impact of both climatic and socioecological factors on dengue dynamics. Among those, four studies further investigated the relative importance of climate variability and socioecological factors on dengue transmission. A few studies also developed predictive models including both climatic and socioecological factors. Due to insufficient data, methodological issues and the contextual variability of the studies, it is hard to draw conclusions on the joint effects of climate variability and socioecological factors on dengue transmission. Future research should take into account socioecological factors in combination with climate variables for a better understanding of the complex nature of dengue transmission, as well as for improving the predictive capability of dengue forecasting models, to develop effective and reliable early warning systems. © 2017 John Wiley & Sons Ltd.
Variability and Predictability of Land-Atmosphere Interactions: Observational and Modeling Studies
NASA Technical Reports Server (NTRS)
Roads, John; Oglesby, Robert; Marshall, Susan; Robertson, Franklin R.
2002-01-01
The overall goal of this project is to increase our understanding of seasonal to interannual variability and predictability of atmosphere-land interactions. The project objectives are to: 1. Document the low frequency variability in land surface features and associated water and energy cycles from general circulation models (GCMs), observations and reanalysis products. 2. Determine what relatively wet and dry years have in common on a region-by-region basis and then examine the physical mechanisms that may account for a significant portion of the variability. 3. Develop GCM experiments to examine the hypothesis that better knowledge of the land surface enhances long range predictability. This investigation is aimed at evaluating and predicting seasonal to interannual variability for selected regions emphasizing the role of land-atmosphere interactions. Of particular interest are the relationships between large, regional and local scales and how they interact to account for seasonal and interannual variability, including extreme events such as droughts and floods. North and South America, including the Global Energy and Water Cycle Experiment Continental International Project (GEWEX GCIP), MacKenzie, and LBA basins, are currently being emphasized. We plan to ultimately generalize and synthesize to other land regions across the globe, especially those pertinent to other GEWEX projects.
Acuña, Gonzalo; Ramirez, Cristian; Curilem, Millaray
2014-01-01
The lack of sensors for some relevant state variables in fermentation processes can be compensated for by developing appropriate software sensors. In this work, NARX-ANN, NARMAX-ANN, NARX-SVM and NARMAX-SVM models are compared when acting as software sensors of biomass concentration for a solid substrate cultivation (SSC) process. Results show that NARMAX-SVM outperforms the other models, with an SMAPE index under 9 for 20% amplitude noise. In addition, NARMAX models perform better than NARX models under the same noise conditions because of their better predictive capabilities, as they include prediction errors as inputs. In the case of perturbation of the initial conditions of the autoregressive variable, NARX models exhibited better convergence capabilities. This work also confirms that a difficult-to-measure variable, like biomass concentration, can be estimated on-line from easy-to-measure variables like CO₂ and O₂ using an adequate software sensor based on computational intelligence techniques.
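The structural difference the abstract attributes the performance gap to is the regressor vector: NARMAX models feed past prediction errors back as inputs, NARX models do not. A minimal sketch (series values and lag orders are hypothetical):

```python
def narx_regressors(y, u, t, ny=2, nu=2):
    # NARX regressor vector at time t: past outputs y and past inputs u only.
    return y[t - ny:t] + u[t - nu:t]

def narmax_regressors(y, u, e, t, ny=2, nu=2, ne=2):
    # NARMAX additionally feeds back past one-step prediction errors e,
    # which is what improves robustness to measurement noise.
    return y[t - ny:t] + u[t - nu:t] + e[t - ne:t]

# Hypothetical series: biomass estimates y, a measured gas signal u,
# and past prediction errors e.
y = [0.1, 0.2, 0.35, 0.5]
u = [1.0, 1.1, 1.3, 1.2]
e = [0.0, 0.01, -0.02, 0.01]
print(narx_regressors(y, u, 3))          # [0.2, 0.35, 1.1, 1.3]
print(narmax_regressors(y, u, e, 3))     # [0.2, 0.35, 1.1, 1.3, 0.01, -0.02]
```

The same structure also explains the convergence result: the error terms give NARMAX extra state that must settle before its predictions do, so NARX recovers faster from perturbed initial conditions of the autoregressive variable.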
Abstract: Inference and Interval Estimation for Indirect Effects With Latent Variable Models.
Falk, Carl F; Biesanz, Jeremy C
2011-11-30
Models specifying indirect effects (or mediation) and structural equation modeling are both popular in the social sciences. Yet relatively little research has compared methods that test for indirect effects among latent variables and provided precise estimates of the effectiveness of different methods. This simulation study provides an extensive comparison of methods for constructing confidence intervals and for making inferences about indirect effects with latent variables. We compared the percentile (PC) bootstrap, bias-corrected (BC) bootstrap, bias-corrected accelerated (BCa) bootstrap, likelihood-based confidence intervals (Neale & Miller, 1997), partial posterior predictive (Biesanz, Falk, and Savalei, 2010), and joint significance tests based on Wald tests or likelihood ratio tests. All models included three reflective latent variables representing the independent, dependent, and mediating variables. The design included the following fully crossed conditions: (a) sample size: 100, 200, and 500; (b) number of indicators per latent variable: 3 versus 5; (c) reliability per set of indicators: .7 versus .9; and (d) 16 different path combinations for the indirect effect (α = 0, .14, .39, or .59; and β = 0, .14, .39, or .59). Simulations were performed using a WestGrid cluster of 1680 3.06 GHz Intel Xeon processors running R and OpenMx. Results based on 1,000 replications per cell and 2,000 resamples per bootstrap method indicated that the BC and BCa bootstrap methods have inflated Type I error rates. Likelihood-based confidence intervals and the PC bootstrap emerged as methods that adequately control Type I error and have good coverage rates.
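The percentile bootstrap favored above can be sketched for the observed-variable case (a simplification: the study works with latent variables fitted in OpenMx, whereas this sketch uses simple OLS on simulated observed variables with paths a = b = 0.5, so the true indirect effect is 0.25):

```python
import random

random.seed(2)

def slope(x, y):
    # OLS slope of y on x.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)

def resid(x, z):
    # Residuals of z after regressing z on x (with intercept).
    b = slope(x, z)
    mx, mz = sum(x) / len(x), sum(z) / len(z)
    return [zi - (mz + b * (xi - mx)) for xi, zi in zip(x, z)]

def indirect_effect(x, m, y):
    # a-path: slope of M on X; b-path: coefficient of M in Y ~ X + M,
    # obtained via Frisch-Waugh residualization.
    return slope(x, m) * slope(resid(x, m), resid(x, y))

def percentile_ci(x, m, y, B=1000, alpha=0.05):
    # Percentile (PC) bootstrap confidence interval for the indirect effect a*b.
    n = len(x)
    boots = []
    for _ in range(B):
        idx = [random.randrange(n) for _ in range(n)]
        boots.append(indirect_effect([x[i] for i in idx],
                                     [m[i] for i in idx],
                                     [y[i] for i in idx]))
    boots.sort()
    return boots[int(B * alpha / 2)], boots[int(B * (1 - alpha / 2)) - 1]

# Simulated mediation data: X -> M -> Y with a = b = 0.5.
n = 200
x = [random.gauss(0, 1) for _ in range(n)]
m = [0.5 * xi + random.gauss(0, 1) for xi in x]
y = [0.5 * mi + random.gauss(0, 1) for mi in m]
lo, hi = percentile_ci(x, m, y)
print(round(lo, 3), round(hi, 3))
```

The inference is simply whether the interval excludes zero; the BC and BCa variants shift these percentile endpoints, which is the adjustment the simulation found to inflate Type I error.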
Modeling annual mallard production in the prairie-parkland region
Miller, M.W.
2000-01-01
Biologists have proposed several environmental factors that might influence production of mallards (Anas platyrhynchos) nesting in the prairie-parkland region of the United States and Canada. These factors include precipitation, cold spring temperatures, wetland abundance, and upland breeding habitat. I used long-term historical data sets of climate, wetland numbers, agricultural land use, and size of breeding mallard populations in multiple regression analyses to model annual indices of mallard production. Models were constructed at 2 scales: a continental scale that encompassed most of the mid-continental breeding range of mallards and a stratum-level scale that included 23 portions of that same breeding range. The production index at the continental scale was the estimated age ratio of mid-continental mallards in early fall; at the stratum scale my production index was the estimated number of broods of all duck species within an aerial survey stratum. Size of breeding mallard populations in May, and pond numbers in May and July, best modeled production at the continental scale. Variables that best modeled production at the stratum scale differed by region. Crop variables tended to appear more in models for western Canadian strata; pond variables predominated in models for United States strata; and spring temperature and pond variables dominated models for eastern Canadian strata. An index of cold spring temperatures appeared in 4 of 6 models for aspen parkland strata, and in only 1 of 11 models for strata dominated by prairie. Stratum-level models suggest that regional factors influencing mallard production are not evident at a larger scale. Testing these potential factors in a manipulative fashion would improve our understanding of mallard population dynamics, improving our ability to manage the mid-continental mallard population.
Ackerman, Daniel J.; Rattray, Gordon W.; Rousseau, Joseph P.; Davis, Linda C.; Orr, Brennon R.
2006-01-01
Ground-water flow in the west-central part of the eastern Snake River Plain aquifer is described in a conceptual model that will be used in numerical simulations to evaluate contaminant transport at the Idaho National Laboratory (INL) and vicinity. The model encompasses an area of 1,940 square miles (mi2) and includes most of the 890 mi2 of the INL. A 50-year history of waste disposal associated with research activities at the INL has resulted in measurable concentrations of waste contaminants in the aquifer. A thorough understanding of the fate and movement of these contaminants in the subsurface is needed by the U.S. Department of Energy to minimize the effect that contaminated ground water may have on the region and to plan effectively for remediation. Three hydrogeologic units were used to represent the complex stratigraphy of the aquifer in the model area. Collectively, these hydrogeologic units include at least 65 basalt-flow groups, 5 andesite-flow groups, and 61 sedimentary interbeds. Three rhyolite domes in the model area extend deep enough to penetrate the aquifer. The rhyolite domes are represented in the conceptual model as low permeability, vertical pluglike masses, and are not included as part of the three primary hydrogeologic units. Broad differences in lithology and large variations in hydraulic properties allowed the heterogeneous, anisotropic basalt-flow groups, andesite-flow groups, and sedimentary interbeds to be grouped into three hydrogeologic units that are conceptually homogeneous and anisotropic. Younger rocks, primarily thin, densely fractured basalt, compose hydrogeologic unit 1; younger rocks, primarily massive, less densely fractured basalt, compose hydrogeologic unit 2; and intermediate-age rocks, primarily slightly-to-moderately altered, fractured basalt, compose hydrogeologic unit 3.
Differences in hydraulic properties among adjacent hydrogeologic units result in much of the large-scale heterogeneity and anisotropy of the aquifer in the model area, and differences in horizontal and vertical hydraulic conductivity in individual hydrogeologic units result in much of the small-scale heterogeneity and anisotropy of the aquifer in the model area. The inferred three-dimensional geometry of the aquifer in the model area is very irregular. Its thickness generally increases from north to south and from west to east and is greatest south of the INL. The interpreted distribution of older rocks that underlie the aquifer indicates large changes in saturated thickness across the model area. The boundaries of the model include physical and artificial boundaries, and ground-water flows across the boundaries may be temporally constant or variable and spatially uniform or nonuniform. Physical boundaries include the water-table boundary, base of the aquifer, and northwest mountain-front boundary. Artificial boundaries include the northeast boundary, southeast-flowline boundary, and southwest boundary. Water flows into the model area as (1) underflow (1,225 cubic feet per second (ft3/s)) from the regional aquifer (northeast boundary-constant and nonuniform), (2) underflow (695 ft3/s) from the tributary valleys and mountain fronts (northwest boundary-constant and nonuniform), (3) precipitation recharge (70 ft3/s) (constant and uniform), streamflow-infiltration recharge (95 ft3/s) (variable and nonuniform), wastewater return flows (6 ft3/s) (variable and nonuniform), and irrigation-infiltration recharge (24 ft3/s) (variable and nonuniform) across the water table (water-table boundary-variable and nonuniform), and (4) upward flow across the base of the aquifer (44 ft3/s) (uniform and constant). The southeast-flowline boundary is represented as a no-flow boundary. 
Water flows out of the model area as underflow (2,037 ft3/s) to the regional aquifer (southwest boundary-variable and nonuniform) and as ground-water withdrawals (45 ft3/s) (water table boundary-variable and nonuniform). Ground-water flow i
From the Last Interglacial to the Anthropocene: Modelling a Complete Glacial Cycle (PalMod)
NASA Astrophysics Data System (ADS)
Brücher, Tim; Latif, Mojib
2017-04-01
We will give a short overview and update on the current status of the national climate modelling initiative PalMod (Paleo Modelling, www.palmod.de). PalMod focuses on the understanding of the climate system dynamics and its variability during the last glacial cycle. The initiative is funded by the German Federal Ministry of Education and Research (BMBF) and its specific topics are: (i) to identify and quantify the relative contributions of the fundamental processes which determined the Earth's climate trajectory and variability during the last glacial cycle, (ii) to simulate with comprehensive Earth System Models (ESMs) the climate from the peak of the last interglacial - the Eemian warm period - up to the present, including the changes in the spectrum of variability, and (iii) to assess possible future climate trajectories beyond this century during the next millennia with sophisticated ESMs tested in this way. The research is intended to be conducted over a period of 10 years, but with shorter funding cycles. PalMod kicked off in February 2016. The first phase focuses on the last deglaciation (approximately the last 23,000 years). From the ESM perspective PalMod pushes forward model development by coupling ESMs with dynamical ice sheet models. Computer scientists work on speeding up climate models using different concepts (such as parallelisation in time), and one working group is dedicated to performing a comprehensive data synthesis to validate model performance. The envisioned approach is innovative in three respects. First, the consortium aims at simulating a full glacial cycle in transient mode and with comprehensive ESMs which allow full interactions between the physical and biogeochemical components of the Earth system, including ice sheets.
Second, we shall address climate variability during the last glacial cycle on a large range of time scales, from interannual to multi-millennial, and attempt to quantify the relative contributions of external forcing and processes internal to the Earth system to climate variability at different time scales. Third, in order to achieve a higher level of understanding of natural climate variability at time scales of millennia, its governing processes and implications for the future climate, we bring together three different research communities: the Earth system modeling community, the proxy data community and the computational science community. The consortium consists of 18 partners including all major modelling centers within Germany. The funding comprises approximately 65 PostDoc positions and more than 120 scientists are involved. PalMod is coordinated at the Helmholtz Centre for Ocean Research Kiel (GEOMAR).
Jensen, Jacob S; Egebo, Max; Meyer, Anne S
2008-05-28
Accomplishment of fast tannin measurements is receiving increased interest as tannins are important for the mouthfeel and color properties of red wines. Fourier transform mid-infrared spectroscopy allows fast measurement of different wine components, but quantification of tannins is difficult due to interferences from spectral responses of other wine components. Four different variable selection tools were investigated for the identification of the most important spectral regions which would allow quantification of tannins from the spectra using partial least-squares regression. The study included the development of a new variable selection tool, iterative backward elimination of changeable size intervals PLS. The spectral regions identified by the different variable selection methods were not identical, but all included two regions (1485-1425 and 1060-995 cm(-1)), which therefore were concluded to be particularly important for tannin quantification. The spectral regions identified from the variable selection methods were used to develop calibration models. All four variable selection methods identified regions that allowed an improved quantitative prediction of tannins (RMSEP = 69-79 mg of CE/L; r = 0.93-0.94) as compared to a calibration model developed using all variables (RMSEP = 115 mg of CE/L; r = 0.87). Only minor differences in the performance of the variable selection methods were observed.
The Effects of Local Economic Conditions on Navy Enlistments.
1980-03-18
Standard Metropolitan Statistical Area (SMSA) as the basic economic unit, cross-sectional regression models were constructed for enlistment rate, recruiter...to eligible population suggesting that a cheaper alternative to raising mili- tary wages would be to increase the number of recruiters. Arima (1978...is faced with a number of cri- teria that must be satisfied by an acceptable test variable. As with other variables included in the model , economic
NASA Astrophysics Data System (ADS)
Kefauver, Shawn C.; Peñuelas, Josep; Ustin, Susan L.
2012-12-01
The impacts of tropospheric ozone on conifer health in the Sierra Nevada of California, USA, and the Pyrenees of Catalonia, Spain, were measured using field assessments and GIS variables of landscape gradients related to plant water relations, stomatal conductance and hence to ozone uptake. Measurements related to ozone injury included visible chlorotic mottling, needle retention, needle length, and crown depth, which together compose the Ozone Injury Index (OII). The OII values observed in Catalonia were similar to those in California, but OII alone correlated poorly to ambient ozone in all sites. Combining ambient ozone with GIS variables related to landscape variability of plant hydrological status, derived from stepwise regressions, produced models with R2 = 0.35, p = 0.016 in Catalonia, R2 = 0.36, p < 0.001 in Yosemite and R2 = 0.33, p = 0.007 in Sequoia/Kings Canyon National Parks in California. Individual OII components in Catalonia were modeled with improved success compared to the original full OII, in particular visible chlorotic mottling (R2 = 0.60, p < 0.001). The results show that ozone is negatively impacting forest health in California and Catalonia and also that modeling ozone injury improves by including GIS variables related to plant water relations.
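A stepwise regression combining ambient ozone with GIS-derived covariates can be sketched as forward selection on R² gain; the predictors and the stopping threshold here are illustrative, not those of the study:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 60
# hypothetical predictors: ambient ozone plus GIS proxies of plant water status
preds = {
    "ozone":        rng.normal(size=n),
    "topo_wetness": rng.normal(size=n),
    "south_aspect": rng.normal(size=n),
    "elevation":    rng.normal(size=n),
}
# synthetic injury index driven by ozone and topographic wetness
oii = 0.5 * preds["ozone"] + 0.4 * preds["topo_wetness"] + rng.normal(0, 0.7, n)

def r2(cols):
    """R-squared of an OLS fit of the injury index on the chosen predictors."""
    X = np.column_stack([np.ones(n)] + [preds[c] for c in cols])
    beta, *_ = np.linalg.lstsq(X, oii, rcond=None)
    resid = oii - X @ beta
    return 1 - resid.var() / oii.var()

selected, remaining = [], list(preds)
while remaining:
    gains = {c: r2(selected + [c]) - r2(selected) for c in remaining}
    best = max(gains, key=gains.get)
    if gains[best] < 0.02:      # stop when the improvement is negligible
        break
    selected.append(best)
    remaining.remove(best)
```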
NASA Astrophysics Data System (ADS)
Minaya, Veronica; Corzo, Gerald; van der Kwast, Johannes; Galarraga, Remigio; Mynett, Arthur
2014-05-01
Simulations of carbon cycling are prone to uncertainties from different sources, which in general are related to input data, parameters, and the model's own representational capacity. The gross carbon uptake in the cycle is represented by the gross primary production (GPP), which deals with the spatio-temporal variability of the precipitation and the soil moisture dynamics. This variability, associated with uncertainty in the parameters, can be modelled by multivariate probabilistic distributions. Our study presents a novel methodology that uses multivariate Copula analysis to assess the GPP. Multi-species and elevation variables are included in a first scenario of the analysis. Hydro-meteorological conditions that might generate a change in the next 50 or more years are included in a second scenario of this analysis. The biogeochemical model BIOME-BGC was applied in the Ecuadorian Andean region at elevations greater than 4000 masl with the presence of typical páramo vegetation. The change of GPP over time is crucial for climate scenarios of carbon cycling in this type of ecosystem. The results help to improve our understanding of ecosystem function and clarify its dynamics and its relationship with changing climate variables. Keywords: multivariate analysis, Copula, BIOME-BGC, NPP, páramos
A discriminant function model for admission at undergraduate university level
NASA Astrophysics Data System (ADS)
Ali, Hamdi F.; Charbaji, Abdulrazzak; Hajj, Nada Kassim
1992-09-01
The study is aimed at predicting objective criteria based on a statistically tested model for admitting undergraduate students to Beirut University College. The University is faced with a dual problem of having to select only a fraction of an increasing number of applicants, and of trying to minimize the number of students placed on academic probation (currently 36 percent of new admissions). Out of 659 new students, a sample of 272 students (45 percent) were selected; these were all the students on the Dean's list and on academic probation. With academic performance as the dependent variable, the model included ten independent variables and their interactions. These variables included the type of high school, the language of instruction in high school, recommendations, sex, academic average in high school, score on the English Entrance Examination, the major in high school, and whether the major was originally applied for by the student. Discriminant analysis was used to evaluate the relative weight of the independent variables, and from the analysis three equations were developed, one for each academic division in the College. The predictive power of these equations was tested by using them to classify students not in the selected sample into successful and unsuccessful ones. Applicability of the model to other institutions of higher learning is discussed.
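The classification step, building a discriminant function from admission predictors and applying it to students outside the sample, might look like this with scikit-learn on synthetic data (the predictors, scales, and cutoffs are invented, not the College's):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)
n = 272
# hypothetical predictors: high-school average and English entrance exam score
hs_avg  = rng.normal(75, 8, n)
english = rng.normal(60, 10, n)
# synthetic outcome: 1 = Dean's list, 0 = academic probation
latent = 0.08 * hs_avg + 0.05 * english + rng.normal(0, 1.0, n)
success = (latent > np.median(latent)).astype(int)

X = np.column_stack([hs_avg, english])
lda = LinearDiscriminantAnalysis().fit(X, success)
train_acc = lda.score(X, success)

# classify a held-out synthetic batch ("students not in the selected sample")
new = np.column_stack([rng.normal(75, 8, 50), rng.normal(60, 10, 50)])
labels = lda.predict(new)
```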
NASA Technical Reports Server (NTRS)
Reichle, Rolf H.; De Lannoy, Gabrielle J. M.; Forman, Barton A.; Draper, Clara S.; Liu, Qing
2013-01-01
A land data assimilation system (LDAS) can merge satellite observations (or retrievals) of land surface hydrological conditions, including soil moisture, snow, and terrestrial water storage (TWS), into a numerical model of land surface processes. In theory, the output from such a system is superior to estimates based on the observations or the model alone, thereby enhancing our ability to understand, monitor, and predict key elements of the terrestrial water cycle. In practice, however, satellite observations do not correspond directly to the water cycle variables of interest. The present paper addresses various aspects of this seeming mismatch using examples drawn from recent research with the ensemble-based NASA GEOS-5 LDAS. These aspects include (1) the assimilation of coarse-scale observations into higher-resolution land surface models, (2) the partitioning of satellite observations (such as TWS retrievals) into their constituent water cycle components, (3) the forward modeling of microwave brightness temperatures over land for radiance-based soil moisture and snow assimilation, and (4) the selection of the most relevant types of observations for the analysis of a specific water cycle variable that is not observed (such as root zone soil moisture). The solution to these challenges involves the careful construction of an observation operator that maps from the land surface model variables of interest to the space of the assimilated observations.
Samad, Manar D; Ulloa, Alvaro; Wehner, Gregory J; Jing, Linyuan; Hartzel, Dustin; Good, Christopher W; Williams, Brent A; Haggerty, Christopher M; Fornwalt, Brandon K
2018-06-09
The goal of this study was to use machine learning to more accurately predict survival after echocardiography. Predicting patient outcomes (e.g., survival) following echocardiography is primarily based on ejection fraction (EF) and comorbidities. However, there may be significant predictive information within additional echocardiography-derived measurements combined with clinical electronic health record data. Mortality was studied in 171,510 unselected patients who underwent 331,317 echocardiograms in a large regional health system. We investigated the predictive performance of nonlinear machine learning models compared with that of linear logistic regression models using 3 different inputs: 1) clinical variables, including 90 cardiovascular-relevant International Classification of Diseases, Tenth Revision, codes, and age, sex, height, weight, heart rate, blood pressures, low-density lipoprotein, high-density lipoprotein, and smoking; 2) clinical variables plus physician-reported EF; and 3) clinical variables and EF, plus 57 additional echocardiographic measurements. Missing data were imputed using the multivariate imputation by chained equations (MICE) algorithm. We compared models against each other and against baseline clinical scoring systems using the mean area under the curve (AUC) over 10 cross-validation folds and across 10 survival durations (6 to 60 months). Machine learning models achieved significantly higher prediction accuracy (all AUC >0.82) over common clinical risk scores (AUC = 0.61 to 0.79), with the nonlinear random forest models outperforming logistic regression (p < 0.01). The random forest model including all echocardiographic measurements yielded the highest prediction accuracy (p < 0.01 across all models and survival durations). Only 10 variables were needed to achieve 96% of the maximum prediction accuracy, with 6 of these variables being derived from echocardiography.
Tricuspid regurgitation velocity was more predictive of survival than LVEF. In a subset of studies with complete data for the top 10 variables, multivariate imputation by chained equations yielded slightly reduced predictive accuracies (difference in AUC of 0.003) compared with the original data. Machine learning can fully utilize large combinations of disparate input variables to predict survival after echocardiography with superior accuracy. Copyright © 2018 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
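The core comparison, a nonlinear random forest versus logistic regression scored by AUC, can be sketched on simulated data with a deliberately nonlinear tricuspid-velocity effect; the variables, thresholds, and coefficients are illustrative only:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 4000
age = rng.normal(65, 12, n)
ef  = rng.normal(55, 10, n)          # ejection fraction (%)
trv = rng.normal(2.4, 0.4, n)        # tricuspid regurgitation velocity (m/s)
# synthetic mortality risk with a nonlinear (threshold) TRV effect
logit = -4 + 0.05 * age - 0.03 * ef + 3.0 * np.maximum(trv - 2.8, 0)
died = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([age, ef, trv])
Xtr, Xte, ytr, yte = train_test_split(X, died, test_size=0.3, random_state=0)

lr = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
auc_lr = roc_auc_score(yte, lr.predict_proba(Xte)[:, 1])
auc_rf = roc_auc_score(yte, rf.predict_proba(Xte)[:, 1])
```

The forest can exploit the threshold in TRV that a linear-in-inputs logistic model can only approximate, which is the kind of nonlinearity the study credits for the random forest's advantage.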
Kamigaki, Taro; Chaw, Liling; Tan, Alvin G; Tamaki, Raita; Alday, Portia P; Javier, Jenaline B; Olveda, Remigio M; Oshitani, Hitoshi; Tallo, Veronica L
2016-01-01
The seasonality of influenza and respiratory syncytial virus (RSV) is well known, and many analyses have been conducted in temperate countries; however, this is still not well understood in tropical countries. Previous studies suggest that climate factors are involved in the seasonality of these viruses. However, the extent of the effect of each climate variable is yet to be defined. We investigated the pattern of seasonality and the effect of climate variables on influenza and RSV at three sites of different latitudes: the Eastern Visayas region and Baguio City in the Philippines, and Okinawa Prefecture in Japan. Wavelet analysis and the dynamic linear regression model were applied. Climate variables used in the analysis included mean temperature, relative and specific humidity, precipitation, and number of rainy days. The Akaike Information Criterion estimated in each model was used to test the improvement of fit in comparison with the baseline model. At all three study sites, annual seasonal peaks were observed in influenza A and RSV; peaks were unclear for influenza B. Ranges of climate variables at the two Philippine sites were narrower and mean variables were significantly different among the three sites. Whereas all climate variables except the number of rainy days improved model fit relative to the local trend model, their contributions were modest. Mean temperature and specific humidity were positively associated with influenza and RSV at the Philippine sites and negatively associated with influenza A in Okinawa. Precipitation also improved model fit for influenza and RSV at both Philippine sites, except for the influenza A model in the Eastern Visayas. Annual seasonal peaks were observed for influenza A and RSV but were less clear for influenza B at all three study sites. Including additional data from subsequent years would help to ascertain these findings.
Annual amplitude and variation in climate variables are more important than their absolute values for determining their effect on the seasonality of influenza and RSV.
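Testing whether a climate covariate improves fit over a baseline trend model via AIC, as done here with dynamic linear models, can be illustrated with a plain Gaussian regression analogue on synthetic weekly counts (all numbers are invented):

```python
import numpy as np

def aic_linear(X, y):
    """AIC of a Gaussian linear model fitted by least squares."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return n * np.log(rss / n) + 2 * (k + 1)   # +1 for the error variance

rng = np.random.default_rng(4)
n_weeks = 156
t = np.arange(n_weeks)
seasonal = np.sin(2 * np.pi * t / 52)
temperature = 27 + 2 * seasonal + rng.normal(0, 0.5, n_weeks)
# synthetic weekly case counts whose seasonality tracks temperature
cases = 50 + 8 * temperature + rng.normal(0, 5, n_weeks)

X_base = np.column_stack([np.ones(n_weeks), t])              # trend-only model
X_clim = np.column_stack([np.ones(n_weeks), t, temperature]) # + climate term
aic_base = aic_linear(X_base, cases)
aic_clim = aic_linear(X_clim, cases)
```

A lower AIC for the climate model indicates that the covariate improves fit enough to justify the extra parameter, which is exactly the comparison the study makes against its local trend model.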
Newtonian nudging for a Richards equation-based distributed hydrological model
NASA Astrophysics Data System (ADS)
Paniconi, Claudio; Marrocu, Marino; Putti, Mario; Verbunt, Mark
The objective of data assimilation is to provide physically consistent estimates of spatially distributed environmental variables. In this study a relatively simple data assimilation method has been implemented in a relatively complex hydrological model. The data assimilation technique is Newtonian relaxation or nudging, in which model variables are driven towards observations by a forcing term added to the model equations. The forcing term is proportional to the difference between simulation and observation (relaxation component) and contains four-dimensional weighting functions that can incorporate prior knowledge about the spatial and temporal variability and characteristic scales of the state variable(s) being assimilated. The numerical model couples a three-dimensional finite element Richards equation solver for variably saturated porous media and a finite difference diffusion wave approximation based on digital elevation data for surface water dynamics. We describe the implementation of the data assimilation algorithm for the coupled model and report on the numerical and hydrological performance of the resulting assimilation scheme. Nudging is shown to be successful in improving the hydrological simulation results, and it introduces little computational cost, in terms of CPU and other numerical aspects of the model's behavior, in some cases even improving numerical performance compared to model runs without nudging. We also examine the sensitivity of the model to nudging term parameters including the spatio-temporal influence coefficients in the weighting functions. Overall the nudging algorithm is quite flexible, for instance in dealing with concurrent observation datasets, gridded or scattered data, and different state variables, and the implementation presented here can be readily extended to any of these features not already incorporated. 
Moreover the nudging code and tests can serve as a basis for implementation of more sophisticated data assimilation techniques in a Richards equation-based hydrological model.
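The nudging term itself is simple to demonstrate: a relaxation forcing G·(x_obs − x) added to the model equation inside a temporal weighting window. This toy scalar ODE (not the coupled Richards-equation model) shows the idea:

```python
import numpy as np

def simulate(x0, nudge=0.0, obs=None, dt=0.01, T=10.0):
    """Forward-Euler run of dx/dt = -0.5*x + sin(t), with an optional
    Newtonian-relaxation (nudging) term G*(x_obs - x) that pulls the
    state toward each observation inside a fixed temporal window."""
    steps = int(T / dt)
    x = np.empty(steps)
    x[0] = x0
    for k in range(1, steps):
        t = k * dt
        tend = -0.5 * x[k - 1] + np.sin(t)
        if obs:
            for t_obs, x_obs in obs:
                if abs(t - t_obs) < 0.5:          # temporal weighting window
                    tend += nudge * (x_obs - x[k - 1])
        x[k] = x[k - 1] + dt * tend
    return x

truth = simulate(x0=0.0)                           # reference trajectory
obs = [(t, truth[int(t / 0.01)]) for t in (2.0, 4.0, 6.0, 8.0)]
free   = simulate(x0=2.0)                          # biased initial condition
nudged = simulate(x0=2.0, nudge=5.0, obs=obs)
rmse_free   = np.sqrt(np.mean((free - truth) ** 2))
rmse_nudged = np.sqrt(np.mean((nudged - truth) ** 2))
```

The relaxation term costs one extra multiply-add per step, which reflects the paper's observation that nudging adds little computational overhead.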
Evaluation of internal noise methods for Hotelling observers
NASA Astrophysics Data System (ADS)
Zhang, Yani; Pham, Binh T.; Eckstein, Miguel P.
2005-04-01
Including internal noise in computer model observers to degrade model observer performance to human levels is a common method to allow quantitative comparisons of human and model performance. In this paper, we studied two different types of methods for injecting internal noise into Hotelling model observers. The first method adds internal noise to the output of the individual channels: a) independent non-uniform channel noise, b) independent uniform channel noise. The second method adds internal noise to the decision variable arising from the combination of channel responses: a) internal noise standard deviation proportional to the decision variable's standard deviation due to the external noise, b) internal noise standard deviation proportional to the decision variable's variance caused by the external noise. We tested the square window Hotelling observer (HO), channelized Hotelling observer (CHO), and Laguerre-Gauss Hotelling observer (LGHO). The studied task was detection of a filling defect of varying size/shape in one of four simulated arterial segment locations with real x-ray angiography backgrounds. Results show that the internal noise method that leads to the best prediction of human performance differs across the studied model observers. The CHO model best predicts human observer performance with the channel internal noise. The HO and LGHO best predict human observer performance with the decision variable internal noise. These results might help explain why previous studies have found different results on the ability of each Hotelling model to predict human performance. Finally, the present results might guide researchers in the choice of method for including internal noise in their Hotelling models.
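The second family of methods, internal noise added to the decision variable with standard deviation proportional to that induced by external noise, can be sketched directly; with proportionality constant k = 1 the detectability index d′ should drop by a factor of √2 (the setup below is a generic signal-detection sketch, not the angiography task):

```python
import numpy as np

rng = np.random.default_rng(5)
n_trials = 20000
# decision variable of a model observer under external noise:
# mean 1 on signal trials, mean 0 on noise trials, unit external-noise SD
dv_signal = rng.normal(1.0, 1.0, n_trials)
dv_noise  = rng.normal(0.0, 1.0, n_trials)

def dprime(s, n):
    """Detectability index from signal- and noise-trial decision variables."""
    return (s.mean() - n.mean()) / np.sqrt(0.5 * (s.var() + n.var()))

# decision-variable internal noise: SD proportional to the external-noise SD
k = 1.0                                    # proportionality constant
add_internal = lambda dv: dv + rng.normal(0.0, k * dv_noise.std(), dv.shape)

d_without = dprime(dv_signal, dv_noise)
d_with    = dprime(add_internal(dv_signal), add_internal(dv_noise))
```

Tuning k until the degraded d′ matches the human observer's is the calibration step such studies use before comparing models.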
NASA Technical Reports Server (NTRS)
Collatz, G. James; Kawa, R.
2007-01-01
Progress in better determining CO2 sources and sinks will almost certainly rely on utilization of more extensive and intensive CO2 and related observations including those from satellite remote sensing. Use of advanced data requires improved modeling and analysis capability. Under NASA Carbon Cycle Science support we seek to develop and integrate improved formulations for 1) atmospheric transport, 2) terrestrial uptake and release, 3) biomass and 4) fossil fuel burning, and 5) observational data analysis including inverse calculations. The transport modeling is based on meteorological data assimilation analysis from the Goddard Modeling and Assimilation Office. Use of assimilated met data enables model comparison to CO2 and other observations across a wide range of scales of variability. In this presentation we focus on the short end of the temporal variability spectrum: hourly to synoptic to seasonal. Using CO2 fluxes at varying temporal resolution from the SIB 2 and CASA biosphere models, we examine the model's ability to simulate CO2 variability in comparison to observations at different times, locations, and altitudes. We find that the model can resolve much of the variability in the observations, although there are limits imposed by vertical resolution of boundary layer processes. The influence of key process representations is inferred. The high degree of fidelity in these simulations leads us to anticipate incorporation of realtime, highly resolved observations into a multiscale carbon cycle analysis system that will begin to bridge the gap between top-down and bottom-up flux estimation, which is a primary focus of NACP.
Alcohol-impaired driving: average quantity consumed and frequency of drinking do matter.
Birdsall, William C; Reed, Beth Glover; Huq, Syeda S; Wheeler, Laura; Rush, Sarah
2012-01-01
The objective of this article is to estimate and validate a logistic model of alcohol-impaired driving using previously ignored alcohol consumption behaviors, other risky behaviors, and demographic characteristics as independent variables. The determinants of impaired driving are estimated using the US Centers for Disease Control and Prevention's (CDC) Behavioral Risk Factor Surveillance System (BRFSS) surveys. Variables used in a logistic model to explain alcohol-impaired driving are not only standard sociodemographic variables and bingeing but also frequency of drinking and average quantity consumed, as well as other risky behaviors. We use interactions to understand how being female and being young affect impaired driving. Having estimated our model using the 1997 survey, we validated our model using the BRFSS data for 1999. Drinking 9 or more times in the past month doubled the odds of impaired driving. The greater the average consumption of alcohol per session, the greater the odds of driving impaired, especially for persons in the highest quartile of alcohol consumed. Bingeing has the greatest effect on impaired driving. Seat belt use is the one risky behavior found to be related to such driving. Sociodemographic effects are consistent with earlier research. Being young (18-30) interacts with two of the alcohol consumption variables and being a woman interacts with always wearing a seat belt. Our model was robust in the validation analysis. All 3 dimensions of drinking behavior are important determinants of alcohol-impaired driving, including frequency and average quantity consumed. Including these factors in regressions improves the estimates of the effects of all variables.
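A logistic model with interaction terms of the kind described (e.g., being young interacting with frequent drinking) can be sketched as follows; the predictors, prevalence rates, and odds ratios are simulated, not BRFSS estimates:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
n = 5000
young = rng.random(n) < 0.3            # aged 18-30
binge = rng.random(n) < 0.2            # any binge drinking last month
freq9 = rng.random(n) < 0.25           # drank 9+ times last month
# synthetic outcome: bingeing matters most; frequent drinking matters
# more for young respondents (the interaction term)
logit = -3 + 1.4 * binge + 0.7 * freq9 + 0.3 * young + 0.8 * (young & freq9)
drove_impaired = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([binge, freq9, young, young & freq9]).astype(float)
model = LogisticRegression(max_iter=1000).fit(X, drove_impaired)
odds_ratios = np.exp(model.coef_[0])   # per-variable odds ratios
```

Exponentiated coefficients give the odds ratios reported in such analyses; the interaction column lets the frequency effect differ for young drivers without fitting separate models.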
The Climate Variability & Predictability (CVP) Program at NOAA - Recent Program Advancements
NASA Astrophysics Data System (ADS)
Lucas, S. E.; Todd, J. F.
2015-12-01
The Climate Variability & Predictability (CVP) Program supports research aimed at providing process-level understanding of the climate system through observation, modeling, analysis, and field studies. This vital knowledge is needed to improve climate models and predictions so that scientists can better anticipate the impacts of future climate variability and change. To achieve its mission, the CVP Program supports research carried out at NOAA and other federal laboratories, NOAA Cooperative Institutes, and academic institutions. The Program also coordinates its sponsored projects with major national and international scientific bodies including the World Climate Research Programme (WCRP), the International and U.S. Climate Variability and Predictability (CLIVAR/US CLIVAR) Program, and the U.S. Global Change Research Program (USGCRP). The CVP program sits within NOAA's Climate Program Office (http://cpo.noaa.gov/CVP). The CVP Program currently supports multiple projects in areas that are aimed at improved representation of physical processes in global models. Some of the topics that are currently funded include: i) Improved Understanding of Intraseasonal Tropical Variability - DYNAMO field campaign and post-field projects, and the new climate model improvement teams focused on MJO processes; ii) Climate Process Teams (CPTs, co-funded with NSF) with projects focused on Cloud macrophysical parameterization and its application to aerosol indirect effects, and Internal-Wave Driven Mixing in Global Ocean Models; iii) Improved Understanding of Tropical Pacific Processes, Biases, and Climatology; iv) Understanding Arctic Sea Ice Mechanisms and Predictability; v) AMOC Mechanisms and Decadal Predictability. Recent results from CVP-funded projects will be summarized. Additional information can be found at http://cpo.noaa.gov/CVP.
Ito, Yukiko; Hattori, Reiko; Mase, Hiroki; Watanabe, Masako; Shiotani, Itaru
2008-12-01
Pollen information is indispensable for allergic individuals and clinicians. This study aimed to develop forecasting models for the total annual count of airborne pollen grains based on data monitored over the last 20 years at the Mie Chuo Medical Center, Tsu, Mie, Japan. Airborne pollen grains were collected using a Durham sampler. Total annual pollen count and pollen count from October to December (OD pollen count) of the previous year were transformed to logarithms. Regression analysis of the total pollen count was performed using variables such as the OD pollen count and the maximum temperature for mid-July of the previous year. Time series analysis revealed an alternate rhythm of the series of total pollen count. The alternate rhythm consisted of a cyclic alternation of an "on" year (high pollen count) and an "off" year (low pollen count). This rhythm was used as a dummy variable in regression equations. Of the three models involving the OD pollen count, a multiple regression equation that included the alternate rhythm variable and the interaction of this rhythm with OD pollen count showed a high coefficient of determination (0.844). Of the three models involving the maximum temperature for mid-July, those including the alternate rhythm variable and the interaction of this rhythm with maximum temperature had the highest coefficient of determination (0.925). An alternate pollen dispersal rhythm represented by a dummy variable in the multiple regression analysis plays a key role in improving forecasting models for the total annual sugi pollen count.
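The role of the dummy "alternate rhythm" variable and its interaction can be reproduced in miniature with ordinary multiple regression on synthetic log counts (the coefficients are invented, not the study's):

```python
import numpy as np

rng = np.random.default_rng(7)
years = 20
on_year = np.arange(years) % 2           # dummy: 1 = "on" (high-count) year
log_od = rng.normal(3.0, 0.3, years)     # log10 Oct-Dec pollen, previous year
# synthetic log10 total annual count: a level shift in "on" years plus an
# interaction of the rhythm dummy with the autumn pollen count
log_total = (1.0 + 0.6 * on_year + 0.5 * log_od
             + 0.4 * on_year * log_od + rng.normal(0, 0.1, years))

X = np.column_stack([np.ones(years), on_year, log_od, on_year * log_od])
beta, *_ = np.linalg.lstsq(X, log_total, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((log_total - pred) ** 2) / np.sum((log_total - log_total.mean()) ** 2)
```

Encoding the biennial cycle as a dummy plus an interaction lets a single equation carry different slopes for "on" and "off" years, which is how the study's best model reaches its high coefficient of determination.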
Decision tree analysis of factors influencing rainfall-related building damage
NASA Astrophysics Data System (ADS)
Spekkers, M. H.; Kok, M.; Clemens, F. H. L. R.; ten Veldhuis, J. A. E.
2014-04-01
Flood damage prediction models are essential building blocks in flood risk assessments. Little research has been dedicated so far to damage of small-scale urban floods caused by heavy rainfall, while there is a need for reliable damage models for this flood type among insurers and water authorities. The aim of this paper is to investigate a wide range of damage-influencing factors and their relationships with rainfall-related damage, using decision tree analysis. For this, district-aggregated claim data from private property insurance companies in the Netherlands were analysed, for the period of 1998-2011. The databases include claims of water-related damage, for example, damages related to rainwater intrusion through roofs and pluvial flood water entering buildings at ground floor. Response variables being modelled are average claim size and claim frequency, per district per day. The set of predictors includes rainfall-related variables derived from weather radar images, topographic variables from a digital terrain model, building-related variables and socioeconomic indicators of households. Analyses were made separately for property and content damage claim data. Results of decision tree analysis show that claim frequency is most strongly associated with maximum hourly rainfall intensity, followed by real estate value, ground floor area, household income, season (property data only), building age (property data only), ownership structure (content data only) and fraction of low-rise buildings (content data only). It was not possible to develop statistically acceptable trees for average claim size, which suggests that variability in average claim size is related to explanatory variables that cannot be defined at the district scale. Cross-validation results show that decision trees were able to predict 22-26% of variance in claim frequency, which is considerably better compared to results from global multiple regression models (11-18% of variance explained).
Still, a large part of the variance in claim frequency is left unexplained, which is likely to be caused by variations in data at subdistrict scale and missing explanatory variables.
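As an editorial illustration of the method described in the record above (the data values here are invented, not taken from the study), the core step of a regression tree is to choose the split point on a predictor that most reduces the variance of the response, sketched here for claim frequency against rainfall intensity:

```python
# Illustrative sketch, not code from the paper: find the single split on one
# predictor that minimizes the total within-node sum of squared errors (SSE),
# which is the variance-reduction step that CART repeats recursively.
def sse(ys):
    if not ys:
        return 0.0
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys)

def best_split(xs, ys):
    pairs = sorted(zip(xs, ys))
    best = (None, sse(ys))  # (threshold, total SSE after splitting)
    for i in range(1, len(pairs)):
        if pairs[i - 1][0] == pairs[i][0]:
            continue  # cannot split between equal predictor values
        thr = (pairs[i - 1][0] + pairs[i][0]) / 2
        left = [y for x, y in pairs[:i]]
        right = [y for x, y in pairs[i:]]
        total = sse(left) + sse(right)
        if total < best[1]:
            best = (thr, total)
    return best

# Hypothetical data: claim frequency jumps once hourly rainfall exceeds ~20 mm.
rain = [5, 8, 12, 18, 22, 25, 30]
freq = [0, 1, 0, 1, 7, 8, 9]
threshold, _ = best_split(rain, freq)
```

A full tree applies this step recursively on each resulting subset; cross-validated variance explained, as reported in the abstract, would then be computed on held-out districts and days.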
Effects of additional data on Bayesian clustering.
Yamazaki, Keisuke
2017-10-01
Hierarchical probabilistic models, such as mixture models, are used for cluster analysis. These models have two types of variables: observable and latent. In cluster analysis, the latent variable is estimated, and it is expected that additional information will improve the accuracy of the estimation of the latent variable. Many proposed learning methods are able to use additional data; these include semi-supervised learning and transfer learning. However, from a statistical point of view, a complex probabilistic model that encompasses both the initial and additional data might be less accurate due to having a higher-dimensional parameter. The present paper presents a theoretical analysis of the accuracy of such a model and clarifies which factor has the greatest effect on its accuracy, the advantages of obtaining additional data, and the disadvantages of increasing the complexity. Copyright © 2017 Elsevier Ltd. All rights reserved.
Differentiating between precursor and control variables when analyzing reasoned action theories.
Hennessy, Michael; Bleakley, Amy; Fishbein, Martin; Brown, Larry; Diclemente, Ralph; Romer, Daniel; Valois, Robert; Vanable, Peter A; Carey, Michael P; Salazar, Laura
2010-02-01
This paper highlights the distinction between precursor and control variables in the context of reasoned action theory. Here the theory is combined with structural equation modeling to demonstrate how age and past sexual behavior should be situated in a reasoned action analysis. A two wave longitudinal survey sample of African-American adolescents is analyzed where the target behavior is having vaginal sex. Results differ when age and past behavior are used as control variables and when they are correctly used as precursors. Because control variables do not appear in any form of reasoned action theory, this approach to including background variables is not correct when analyzing data sets based on the theoretical axioms of the Theory of Reasoned Action, the Theory of Planned Behavior, or the Integrative Model.
Differentiating Between Precursor and Control Variables When Analyzing Reasoned Action Theories
Hennessy, Michael; Bleakley, Amy; Fishbein, Martin; Brown, Larry; DiClemente, Ralph; Romer, Daniel; Valois, Robert; Vanable, Peter A.; Carey, Michael P.; Salazar, Laura
2010-01-01
This paper highlights the distinction between precursor and control variables in the context of reasoned action theory. Here the theory is combined with structural equation modeling to demonstrate how age and past sexual behavior should be situated in a reasoned action analysis. A two wave longitudinal survey sample of African-American adolescents is analyzed where the target behavior is having vaginal sex. Results differ when age and past behavior are used as control variables and when they are correctly used as precursors. Because control variables do not appear in any form of reasoned action theory, this approach to including background variables is not correct when analyzing data sets based on the theoretical axioms of the Theory of Reasoned Action, the Theory of Planned Behavior, or the Integrative Model PMID:19370408
Advanced statistics: linear regression, part I: simple linear regression.
Marill, Keith A
2004-01-01
Simple linear regression is a mathematical technique used to model the relationship between a single independent predictor variable and a single dependent outcome variable. In this, the first of a two-part series exploring concepts in linear regression analysis, the four fundamental assumptions and the mechanics of simple linear regression are reviewed. The most common technique used to derive the regression line, the method of least squares, is described. The reader will be acquainted with other important concepts in simple linear regression, including: variable transformations, dummy variables, relationship to inference testing, and leverage. Simplified clinical examples with small datasets and graphic models are used to illustrate the points. This will provide a foundation for the second article in this series: a discussion of multiple linear regression, in which there are multiple predictor variables.
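To illustrate the method of least squares discussed in this record (a generic sketch, not code from the article), the closed-form fit of a simple linear regression can be written in a few lines:

```python
# Illustrative sketch: ordinary least squares for y = a + b*x via the
# closed-form solution (slope = covariance(x, y) / variance(x)).
def fit_simple_ols(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x  # line passes through the means
    return intercept, slope

# Example: points lying exactly on y = 2 + 3x recover those coefficients.
a, b = fit_simple_ols([0, 1, 2, 3], [2, 5, 8, 11])
```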
VALIDITY OF A TWO-DIMENSIONAL MODEL FOR VARIABLE-DENSITY HYDRODYNAMIC CIRCULATION
A three-dimensional model of temperatures and currents has been formulated to assist in the analysis and interpretation of the dynamics of stratified lakes. In this model, nonlinear eddy coefficients for viscosity and conductivities are included. A two-dimensional model (one vert...
Impact of Satellite Remote Sensing Data on Simulations of ...
We estimated surface salinity flux and solar penetration from satellite data, and performed model simulations to examine the impact of including the satellite estimates on temperature, salinity, and dissolved oxygen distributions on the Louisiana continental shelf (LCS) near the annual hypoxic zone. Rainfall data from the Tropical Rainfall Measurement Mission (TRMM) were used for the salinity flux, and the diffuse attenuation coefficient (Kd) from Moderate Resolution Imaging Spectroradiometer (MODIS) were used for solar penetration. Improvements in the model results in comparison with in situ observations occurred when the two types of satellite data were included. Without inclusion of the satellite-derived surface salinity flux, realistic monthly variability in the model salinity fields was observed, but important inter-annual variability wasmissed. Without inclusion of the satellite-derived light attenuation, model bottom water temperatures were too high nearshore due to excessive penetration of solar irradiance. In general, these salinity and temperature errors led to model stratification that was too weak, and the model failed to capture observed spatial and temporal variability in water-column vertical stratification. Inclusion of the satellite data improved temperature and salinity predictions and the vertical stratification was strengthened, which improved prediction of bottom-water dissolved oxygen. The model-predicted area of bottom-water hypoxia on the
Mani, Ashutosh; Rao, Marepalli; James, Kelley; Bhattacharya, Amit
2015-01-01
The purpose of this study was to explore data-driven models, based on decision trees, to develop practical and easy to use predictive models for early identification of firefighters who are likely to cross the threshold of hyperthermia during live-fire training. Predictive models were created for three consecutive live-fire training scenarios. The final predicted outcome was a categorical variable: will a firefighter cross the upper threshold of hyperthermia - Yes/No. Two tiers of models were built, one with and one without taking into account the outcome (whether a firefighter crossed hyperthermia or not) from the previous training scenario. First tier of models included age, baseline heart rate and core body temperature, body mass index, and duration of training scenario as predictors. The second tier of models included the outcome of the previous scenario in the prediction space, in addition to all the predictors from the first tier of models. Classification and regression trees were used independently for prediction. The response variable for the regression tree was the quantitative variable: core body temperature at the end of each scenario. The predicted quantitative variable from regression trees was compared to the upper threshold of hyperthermia (38°C) to predict whether a firefighter would enter hyperthermia. The performance of classification and regression tree models was satisfactory for the second (success rate = 79%) and third (success rate = 89%) training scenarios but not for the first (success rate = 43%). Data-driven models based on decision trees can be a useful tool for predicting physiological response without modeling the underlying physiological systems. Early prediction of heat stress coupled with proactive interventions, such as pre-cooling, can help reduce heat stress in firefighters.
Wirth, Christian; Schumacher, Jens; Schulze, Ernst-Detlef
2004-02-01
To facilitate future carbon and nutrient inventories, we used mixed-effect linear models to develop new generic biomass functions for Norway spruce (Picea abies (L.) Karst.) in Central Europe. We present both the functions and their respective variance-covariance matrices and illustrate their application for biomass prediction and uncertainty estimation for Norway spruce trees ranging widely in size, age, competitive status and site. We collected biomass data for 688 trees sampled in 102 stands by 19 authors. The total number of trees in the "base" model data sets containing the predictor variables diameter at breast height (D), height (H), age (A), site index (SI) and site elevation (HSL) varied according to compartment (roots: n = 114, stem: n = 235, dry branches: n = 207, live branches: n = 429 and needles: n = 551). "Core" data sets with about 40% fewer trees could be extracted containing the additional predictor variables crown length and social class. A set of 43 candidate models representing combinations of lnD, lnH, lnA, SI and HSL, including second-order polynomials and interactions, was established. The categorical variable "author" subsuming mainly methodological differences was included as a random effect in a mixed linear model. The Akaike Information Criterion was used for model selection. The best models for stem, root and branch biomass contained only combinations of D, H and A as predictors. More complex models that included site-related variables resulted for needle biomass. Adding crown length as a predictor for needles, branches and roots reduced both the bias and the confidence interval of predictions substantially. Applying the best models to a test data set of 17 stands ranging in age from 16 to 172 years produced realistic allocation patterns at the tree and stand levels. 
The 95% confidence intervals (% of mean prediction) were highest for crown compartments (approximately +/- 12%) and lowest for stem biomass (approximately +/- 5%), and within each compartment, they were highest for the youngest and oldest stands, respectively.
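A hedged sketch of the kind of log-log allometric biomass function described above; the coefficients below are invented for illustration and are not the paper's fitted values:

```python
import math

# Illustrative sketch with made-up coefficients: a biomass function of the
# form ln(B) = b0 + b1*ln(D) + b2*ln(H), with D in cm and H in m.
def stem_biomass_kg(d_cm, h_m, b0=-3.0, b1=2.0, b2=1.0):
    # Back-transform from the log scale; in practice a correction factor
    # (e.g. exp(s^2 / 2)) is applied to reduce back-transformation bias.
    return math.exp(b0 + b1 * math.log(d_cm) + b2 * math.log(h_m))

mass = stem_biomass_kg(30.0, 25.0)  # a mid-sized tree, hypothetical inputs
```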
NASA Astrophysics Data System (ADS)
Gómez-Aguilar, J. F.
2018-03-01
In this paper, we analyze an alcoholism model which involves the impact of Twitter via Liouville-Caputo and Atangana-Baleanu-Caputo fractional derivatives with constant- and variable-order. Two fractional mathematical models are considered, with and without delay. Special solutions using an iterative scheme via Laplace and Sumudu transform were obtained. We studied the uniqueness and existence of the solutions employing the fixed point postulate. The generalized model with variable-order was solved numerically via the Adams method and the Adams-Bashforth-Moulton scheme. Stability and convergence of the numerical solutions were presented in detail. Numerical examples of the approximate solutions are provided to show that the numerical methods are computationally efficient. Therefore, by including both fractional derivatives and finite time delays, we believe that we have established a more complete and more realistic model of alcoholism and of the spread of drinking.
Peng, Yong; Peng, Shuangling; Wang, Xinghua; Tan, Shiyang
2018-06-01
This study aims to identify the effects of characteristics of vehicle, roadway, driver, and environment on fatality of drivers in vehicle-fixed object accidents on expressways in the Changsha-Zhuzhou-Xiangtan district of Hunan province in China by developing multinomial logistic regression models. For this purpose, 121 vehicle-fixed object accidents from 2011-2017 are included in the modeling process. First, descriptive statistical analysis is made to understand the main characteristics of the vehicle-fixed object crashes. Then, 19 explanatory variables are selected, and correlation analysis of each pair of variables is conducted to choose the variables to be included. Finally, five multinomial logistic regression models including different independent variables are compared, and the model with the best fitting and prediction capability is chosen as the final model. The results showed that turning to avoid fixed objects raised the probability that drivers would die. About 64% of the drivers who died in these accidents were found to have been ejected from the car, and 50% of those had not been wearing a seatbelt before the fatal accident. Drivers are more likely to die when they encounter bad weather on the expressway. Drivers with less than 10 years of driving experience are more likely to die in these accidents. Fatigue or distracted driving is also a significant factor in driver fatality. Findings from this research provide an insight into reducing fatality of drivers in vehicle-fixed object accidents.
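As an editorial aside, not part of the record above: in a multinomial logistic model, each non-reference outcome has a linear score and class probabilities come from the softmax over scores, with the reference category scored at zero. The coefficients and predictor meanings below are hypothetical:

```python
import math

# Illustrative sketch (made-up coefficients): probabilities for K outcomes
# from K-1 sets of coefficients, the reference outcome having score 0.
def multinomial_probs(x, coef_by_outcome):
    scores = [0.0] + [b0 + sum(b * xi for b, xi in zip(bs, x))
                      for b0, bs in coef_by_outcome]
    exps = [math.exp(s) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

# Two hypothetical predictors; outcomes: [no injury (reference), injury,
# fatality]. Coefficient values are invented for illustration only.
probs = multinomial_probs([1.0, 0.5], [(0.2, [0.5, -0.3]),
                                       (-1.0, [1.2, -0.8])])
```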
Kinetics of phase transformation in glass forming systems
NASA Technical Reports Server (NTRS)
Ray, Chandra S.
1994-01-01
The objectives of this research were to (1) develop computer models for realistic simulations of nucleation and crystal growth in glasses, which would also have the flexibility to accommodate the different variables related to sample characteristics and experimental conditions, and (2) design and perform nucleation and crystallization experiments using calorimetric measurements, such as differential scanning calorimetry (DSC) and differential thermal analysis (DTA), to verify these models. The variables related to sample characteristics mentioned in (1) above include size of the glass particles, nucleating agents, and the relative concentration of the surface and internal nuclei. A change in any of these variables changes the mode of the transformation (crystallization) kinetics. A variation in experimental conditions includes isothermal and nonisothermal DSC/DTA measurements. This research would lead to the development of improved, more realistic methods for analysis of the DSC/DTA peak profiles to determine the kinetic parameters for nucleation and crystal growth, as well as to assess the relative merits and demerits of the thermoanalytical models presently used to study the phase transformation in glasses.
Advanced statistics: linear regression, part II: multiple linear regression.
Marill, Keith A
2004-01-01
The applications of simple linear regression in medical research are limited, because in most situations, there are multiple relevant predictor variables. Univariate statistical techniques such as simple linear regression use a single predictor variable, and they often may be mathematically correct but clinically misleading. Multiple linear regression is a mathematical technique used to model the relationship between multiple independent predictor variables and a single dependent outcome variable. It is used in medical research to model observational data, as well as in diagnostic and therapeutic studies in which the outcome is dependent on more than one factor. Although the technique generally is limited to data that can be expressed with a linear function, it benefits from a well-developed mathematical framework that yields unique solutions and exact confidence intervals for regression coefficients. Building on Part I of this series, this article acquaints the reader with some of the important concepts in multiple regression analysis. These include multicollinearity, interaction effects, and an expansion of the discussion of inference testing, leverage, and variable transformations to multivariate models. Examples from the first article in this series are expanded on using a primarily graphic, rather than mathematical, approach. The importance of the relationships among the predictor variables and the dependence of the multivariate model coefficients on the choice of these variables are stressed. Finally, concepts in regression model building are discussed.
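As a generic illustration of multiple linear regression (not the article's code), the coefficients can be obtained by solving the normal equations (X'X)b = X'y; the small solver below uses plain Python with Gaussian elimination:

```python
# Illustrative sketch: ordinary least squares with multiple predictors.
def solve(A, b):
    # Gaussian elimination with partial pivoting on the augmented matrix.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_multiple_ols(X, y):
    # Prepend an intercept column, then solve (X'X) beta = X'y.
    Xd = [[1.0] + list(row) for row in X]
    p = len(Xd[0])
    XtX = [[sum(Xd[i][a] * Xd[i][b] for i in range(len(Xd)))
            for b in range(p)] for a in range(p)]
    Xty = [sum(Xd[i][a] * y[i] for i in range(len(Xd))) for a in range(p)]
    return solve(XtX, Xty)

# Data generated exactly from y = 1 + 2*x1 - 0.5*x2, so OLS recovers it.
X = [[0, 0], [1, 0], [0, 1], [1, 1], [2, 3]]
y = [1 + 2 * x1 - 0.5 * x2 for x1, x2 in X]
beta = fit_multiple_ols(X, y)
```

In practice a library routine (e.g. a least-squares solver) would be used; the normal-equations route is shown only because it mirrors the textbook derivation.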
The CESM Large Ensemble Project: Inspiring New Ideas and Understanding
NASA Astrophysics Data System (ADS)
Kay, J. E.; Deser, C.
2016-12-01
While internal climate variability is known to affect climate projections, its influence is often underappreciated and confused with model error. Why? In general, modeling centers contribute a small number of realizations to international climate model assessments [e.g., phase 5 of the Coupled Model Intercomparison Project (CMIP5)]. As a result, model error and internal climate variability are difficult, and at times impossible, to disentangle. In response, the Community Earth System Model (CESM) community designed the CESM Large Ensemble (CESM-LE) with the explicit goal of enabling assessment of climate change in the presence of internal climate variability. All CESM-LE simulations use a single CMIP5 model (CESM with the Community Atmosphere Model, version 5). The core simulations replay the twentieth through twenty-first centuries (1920-2100) 40+ times under historical and representative concentration pathway 8.5 external forcing with small initial condition differences. Two companion 2000+-yr-long preindustrial control simulations (fully coupled, prognostic atmosphere and land only) allow assessment of internal climate variability in the absence of climate change. Comprehensive outputs, including many daily fields, are available as single-variable time series on the Earth System Grid for anyone to use. Examples of scientists and stakeholders that are using the CESM-LE outputs to help interpret the observational record, to understand projection spread, and to plan for a range of possible futures influenced by both internal climate variability and forced climate change will be highlighted in the presentation.
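The large-ensemble logic can be sketched with synthetic data (an editorial illustration, not CESM output): members share one model and forcing but differ in initial conditions, so the ensemble mean estimates the forced response and the across-member spread estimates internal variability:

```python
import random
import statistics

# Illustrative sketch: 40 synthetic "members" sharing a common forced trend
# plus member-specific noise standing in for internal variability.
random.seed(0)
n_members, n_years = 40, 30
trend = [0.02 * year for year in range(n_years)]  # assumed forced warming
members = [[t + random.gauss(0.0, 0.1) for t in trend]
           for _ in range(n_members)]

# Ensemble mean ~ forced response; across-member spread ~ internal variability.
forced = [statistics.mean(m[y] for m in members) for y in range(n_years)]
internal = [statistics.stdev(m[y] for m in members) for y in range(n_years)]
```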
A Physically Based Correlation of Irradiation-Induced Transition Temperature Shifts for RPV Steels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eason, Ernest D.; Odette, George Robert; Nanstad, Randy K
2007-11-01
The reactor pressure vessels (RPVs) of commercial nuclear power plants are subject to embrittlement due to exposure to high-energy neutrons from the core, which causes changes in material toughness properties that increase with radiation exposure and are affected by many variables. Irradiation embrittlement of RPV beltline materials is currently evaluated using Regulatory Guide 1.99 Revision 2 (RG1.99/2), which presents methods for estimating the shift in Charpy transition temperature at 30 ft-lb (TTS) and the drop in Charpy upper shelf energy (ΔUSE). The purpose of the work reported here is to improve on the TTS correlation model in RG1.99/2 using the broader database now available and current understanding of embrittlement mechanisms. The USE database and models have not been updated since the publication of NUREG/CR-6551 and, therefore, are not discussed in this report. The revised embrittlement shift model is calibrated and validated on a substantially larger, better-balanced database compared to prior models, including over five times the amount of data used to develop RG1.99/2. It also contains about 27% more data than the most recent update to the surveillance shift database, in 2000. The key areas expanded in the current database relative to the database available in 2000 are low-flux, low-copper, and long-time, high-fluence exposures, all areas that were previously relatively sparse. All old and new surveillance data were reviewed for completeness, duplicates, and discrepancies in cooperation with the American Society for Testing and Materials (ASTM) Subcommittee E10.02 on Radiation Effects in Structural Materials. In the present modeling effort, a 10% random sample of data was reserved from the fitting process, and most aspects of the model were validated with that sample as well as other data not used in calibration. The model is a hybrid, incorporating both physically motivated features and empirical calibration to the U.S. 
power reactor surveillance data. It contains two terms, corresponding to the best-understood radiation damage features, matrix damage and copper-rich precipitates, although the empirical calibration will ensure that all other damage processes that are occurring are also reflected in those terms. Effects of material chemical composition, product form, and radiation exposure are incorporated, such that all effects are supported by findings of statistical significance, physical understanding, or comparison with independent data from controlled experiments, such as the Irradiation Variable (IVAR) Program. In most variable effects, the model is supported by two or three of these different forms of evidence. The key variable trends, such as the neutron fluence dependence and copper-nickel dependence in the new TTS model, are much improved over RG1.99/2 and are well supported by independent data and the current understanding of embrittlement mechanisms. The new model includes the variables copper, nickel, and fluence that are in RG1.99/2, but also includes effects of irradiation temperature, neutron flux, phosphorus, and manganese. The calibrated model is a good fit, with no significant residual error trends in any of the variables used in the model or several additional variables and variable interactions that were investigated. The report includes a chapter summarizing the current understanding of embrittlement mechanisms and one comparing the IVAR database with the TTS model predictions. Generally good agreement is found in that quantitative comparison, providing independent confirmation of the predictive capability of the TTS model. The key new insight in the TTS modeling effort, that flux effects are evident in both low (or no) copper and higher copper materials, is supported by the IVAR data. The slightly simplified version of the TTS model presented in Section 7.3 of this report is recommended for applications.
Holsclaw, Tracy; Hallgren, Kevin A; Steyvers, Mark; Smyth, Padhraic; Atkins, David C
2015-12-01
Behavioral coding is increasingly used for studying mechanisms of change in psychosocial treatments for substance use disorders (SUDs). However, behavioral coding data typically include features that can be problematic in regression analyses, including measurement error in independent variables, non-normal distributions of count outcome variables, and conflation of predictor and outcome variables with third variables, such as session length. Methodological research in econometrics has shown that these issues can lead to biased parameter estimates, inaccurate standard errors, and increased Type I and Type II error rates, yet these statistical issues are not widely known within SUD treatment research, or more generally, within psychotherapy coding research. Using minimally technical language intended for a broad audience of SUD treatment researchers, the present paper illustrates the nature in which these data issues are problematic. We draw on real-world data and simulation-based examples to illustrate how these data features can bias estimation of parameters and interpretation of models. A weighted negative binomial regression is introduced as an alternative to ordinary linear regression that appropriately addresses the data characteristics common to SUD treatment behavioral coding data. We conclude by demonstrating how to use and interpret these models with data from a study of motivational interviewing. SPSS and R syntax for weighted negative binomial regression models is included in online supplemental materials. (c) 2016 APA, all rights reserved.
Holsclaw, Tracy; Hallgren, Kevin A.; Steyvers, Mark; Smyth, Padhraic; Atkins, David C.
2015-01-01
Behavioral coding is increasingly used for studying mechanisms of change in psychosocial treatments for substance use disorders (SUDs). However, behavioral coding data typically include features that can be problematic in regression analyses, including measurement error in independent variables, non-normal distributions of count outcome variables, and conflation of predictor and outcome variables with third variables, such as session length. Methodological research in econometrics has shown that these issues can lead to biased parameter estimates, inaccurate standard errors, and increased type-I and type-II error rates, yet these statistical issues are not widely known within SUD treatment research, or more generally, within psychotherapy coding research. Using minimally-technical language intended for a broad audience of SUD treatment researchers, the present paper illustrates the nature in which these data issues are problematic. We draw on real-world data and simulation-based examples to illustrate how these data features can bias estimation of parameters and interpretation of models. A weighted negative binomial regression is introduced as an alternative to ordinary linear regression that appropriately addresses the data characteristics common to SUD treatment behavioral coding data. We conclude by demonstrating how to use and interpret these models with data from a study of motivational interviewing. SPSS and R syntax for weighted negative binomial regression models is included in supplementary materials. PMID:26098126
Szymańska, Agnieszka; Dobrenko, Kamila; Grzesiuk, Lidia
2017-08-29
The study concerns the relationship between three groups of variables presenting the patient's perspective: (1) "patient's characteristics" before psychotherapy, including "expectations of the therapy"; (2) "experience in the therapy", including the "psychotherapeutic relationship"; and (3) "assessment of the direct effectiveness of the psychotherapy". Data from the literature are the basis for predicting relationships between all of these variables. Measurement of the variables was conducted using a follow-up survey. The survey was sent to a total of 1,210 former patients of the Academic Center for Psychotherapy (AOP), in which the therapy is conducted mainly with the students and employees of the University of Warsaw. Responses were received from 276 people. Of the respondents, 55% were women and 45% were men, under 30 years of age. The analyses were performed using structural equations. Two models emerged from an analysis of the relationship between the three above-mentioned groups of variables. One concerns the relationship between (1) the patient's characteristics, (2) the course of psychotherapy, in which, from the perspective of the patient, there is a good relationship with the psychotherapist, and (3) effective psychotherapy. The second model refers to (2) the patient's experience of a poor psychotherapeutic relationship and (3) ineffective psychotherapy. The patient's expectations of the psychotherapy (especially "the expectation of support") proved to be important moderating variables in the models, among the characteristics of the patient. The mathematical model also revealed a strong correlation between the variables measuring "the relationship with the psychotherapist" and "therapeutic interventions".
Pereira, S; Lavado, N; Nogueira, L; Lopez, M; Abreu, J; Silva, H
2014-10-01
Orthodontic-induced external apical root resorption (EARR) is a complex phenotype determined by poorly defined mechanical and patient intrinsic factors. The aim of this work was to construct a multifactorial integrative model, including clinical and genetic susceptibility factors, to analyze the risk of developing this common orthodontic complication. This retrospective study included 195 orthodontic patients. Using a multiple-linear regression model, where the dependent variable was the maximum % of root resorption (%EARRmax) for each patient, we assessed the contribution of nine clinical variables and four polymorphisms of genes involved in bone and tooth root remodeling (rs1718119 from P2RX7, rs1143634 from IL1B, rs3102735 from TNFRSF11B, encoding OPG, and rs1805034 from TNFRSF11A, encoding RANK). Clinical and genetic variables explained 30% of %EARRmax variability. The variables with the most significant unique contribution to the model were: gender (P < 0.05), treatment duration (P < 0.001), premolar extractions (P < 0.01), Hyrax appliance (P < 0.001) and GG genotype of rs1718119 from P2RX7 gene (P < 0.01). Age, overjet, tongue thrust, skeletal class II and the other polymorphisms made minor contributions. This study highlights the P2RX7 gene as a possible factor of susceptibility to EARR. A more extensive genetic profile may improve this model. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Vital signs monitoring to detect patient deterioration: An integrative literature review.
Mok, Wen Qi; Wang, Wenru; Liaw, Sok Ying
2015-05-01
Vital signs monitoring is an important nursing assessment. Yet, nurses seem to do it as part of a routine, often overlooking its significance in detecting patient deterioration. An integrative literature review was conducted to explore factors surrounding ward nursing practice of vital signs monitoring in detecting and reporting deterioration. Twenty papers were included. The structural component of the Nursing Role Effectiveness Model framework, which comprises patient, nurse and organizational variables, was used to synthesize the review. Patient variables include signs of deterioration displayed by patients, including physical cues and abnormal vital signs. Nursing variables include clinical knowledge, roles and responsibilities, and reporting of deteriorating vital signs. Organizational variables include heavy workload, technology, and observation chart design. This review has highlighted current nursing practice in vital signs monitoring. A myriad of factors were found to surround ward practice of vital signs monitoring in detecting and reporting deterioration. © 2015 Wiley Publishing Asia Pty Ltd.
Ng, Kar Yong; Awang, Norhashidah
2018-01-06
Frequent haze occurrences in Malaysia have made the management of PM10 (particulate matter with aerodynamic diameter less than 10 μm) pollution a critical task. This requires knowledge of the factors associated with PM10 variation and good forecasts of PM10 concentrations. Hence, this paper demonstrates the prediction of 1-day-ahead daily average PM10 concentrations based on predictor variables including meteorological parameters and gaseous pollutants. Three different models were built: a multiple linear regression (MLR) model with lagged predictor variables (MLR1), an MLR model with lagged predictor variables and PM10 concentrations (MLR2), and a regression with time series error (RTSE) model. The findings revealed that humidity, temperature, wind speed, wind direction, carbon monoxide and ozone were the main factors explaining the PM10 variation in Peninsular Malaysia. Comparison among the three models showed that the MLR2 model was on a par with the RTSE model in terms of forecasting accuracy, while the MLR1 model was the worst.
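As an editorial illustration of the lagged-predictor setup used above (variable names and values are invented): day t's PM10 is regressed on day t-1 covariates, so the design matrix is built by shifting each predictor series by the lag:

```python
# Illustrative sketch: build rows of 1-day-lagged predictors for a
# next-day forecast model. Covariate names here are assumptions.
def make_lagged_rows(series_by_name, target, lag=1):
    rows, ys = [], []
    for t in range(lag, len(target)):
        rows.append({name: s[t - lag] for name, s in series_by_name.items()})
        ys.append(target[t])
    return rows, ys

pm10 = [50, 55, 60, 58, 62]
covs = {"wind_speed": [2.0, 2.5, 1.0, 1.5, 3.0],
        "temperature": [29, 30, 31, 30, 28]}
X, y = make_lagged_rows(covs, pm10, lag=1)
```

For MLR2, the lagged PM10 series itself would simply be added to `covs` as another predictor.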
NASA Technical Reports Server (NTRS)
Roberts, J. Brent; Robertson, Franklin R.; Funk, Chris
2014-01-01
Providing advance warning of East African rainfall variations is a particular focus of several groups including those participating in the Famine Early Warning Systems Network. Both seasonal and long-term model projections of climate variability are being used to examine the societal impacts of hydrometeorological variability on seasonal to interannual and longer time scales. The NASA / USAID SERVIR project, which leverages satellite and modeling-based resources for environmental decision making in developing nations, is focusing on the evaluation of both seasonal and climate model projections to develop downscaled scenarios for use in impact modeling. The utility of these projections is reliant on the ability of current models to capture the embedded relationships between East African rainfall and evolving forcing within the coupled ocean-atmosphere-land climate system. Previous studies have posited relationships between variations in El Niño, the Walker circulation, Pacific decadal variability (PDV), and anthropogenic forcing. This study applies machine learning methods (e.g. clustering, probabilistic graphical models, nonlinear PCA) to observational datasets in an attempt to expose the importance of local and remote forcing mechanisms of East African rainfall variability. The ability of the NASA Goddard Earth Observing System (GEOS5) coupled model to capture the associated relationships will be evaluated using Coupled Model Intercomparison Project Phase 5 (CMIP5) simulations.
NASA Astrophysics Data System (ADS)
Madonna, Erica; Ginsbourger, David; Martius, Olivia
2018-05-01
In Switzerland, hail regularly causes substantial damage to agriculture, cars and infrastructure; however, little is known about its long-term variability. To study this variability, the monthly number of days with hail in northern Switzerland is modeled in a regression framework using large-scale predictors derived from ERA-Interim reanalysis. The model is developed and verified using radar-based hail observations for the extended summer season (April-September) in the period 2002-2014. The seasonality of hail is explicitly modeled with a categorical predictor (month), and monthly anomalies of several large-scale predictors are used to capture the year-to-year variability. Several regression models are applied and their performance tested with respect to standard scores and cross-validation. The chosen model includes four predictors: the monthly anomaly of the two-meter temperature, the monthly anomaly of the logarithm of the convective available potential energy (CAPE), the monthly anomaly of the wind shear, and the month. This model captures the intra-annual variability well and slightly underestimates the inter-annual variability. The regression model is applied to the reanalysis data back to 1980. The resulting hail-day time series shows an increase in the number of hail days per month, which is (in the model) related to an increase in temperature and CAPE. The trend corresponds to approximately 0.5 days per month per decade. The results of the regression model have been compared to two independent data sets. All data sets agree on the sign of the trend, but the trend is weaker in the other data sets.
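In sketch form, such a model combines a one-hot month term with the anomaly predictors; here as a plain least-squares fit on monthly counts (the study's actual specification, predictors and fitting details may differ):

```python
import numpy as np

def hail_design_matrix(month, anomalies):
    """Design matrix with a categorical month term plus large-scale
    anomaly predictors (e.g. 2 m temperature, log-CAPE, wind shear).

    month     : (n,) integer month labels, e.g. 4..9 for Apr-Sep
    anomalies : (n, k) monthly anomalies of the predictors
    """
    levels = np.unique(month)
    dummies = (month[:, None] == levels[None, :]).astype(float)  # one-hot
    return np.column_stack([dummies, anomalies])

def fit_hail_model(month, anomalies, counts):
    """Least-squares fit of monthly hail-day counts."""
    X = hail_design_matrix(month, anomalies)
    coef, *_ = np.linalg.lstsq(X, counts, rcond=None)
    return coef
```

The month dummies absorb the seasonal cycle, so the anomaly coefficients carry the year-to-year signal.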
SPSS macros to compare any two fitted values from a regression model.
Weaver, Bruce; Dubois, Sacha
2012-12-01
In regression models with first-order terms only, the coefficient for a given variable is typically interpreted as the change in the fitted value of Y for a one-unit increase in that variable, with all other variables held constant. Therefore, each regression coefficient represents the difference between two fitted values of Y. But the coefficients represent only a fraction of the possible fitted-value comparisons that might be of interest to researchers. For many fitted-value comparisons that are not captured by any of the regression coefficients, common statistical software packages do not provide the standard errors needed to compute confidence intervals or carry out statistical tests, particularly in more complex models that include interactions, polynomial terms, or regression splines. We describe two SPSS macros that implement a matrix algebra method for comparing any two fitted values from a regression model. The !OLScomp and !MLEcomp macros are for use with models fitted via ordinary least squares and maximum likelihood estimation, respectively. The output from the macros includes the standard error of the difference between the two fitted values, a 95% confidence interval for the difference, and a corresponding statistical test with its p-value.
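Outside SPSS, the same matrix-algebra method is short to write down. For an OLS model with design matrix X, the variance of the difference between the fitted values at two predictor profiles x1 and x2 is (x1 - x2)' Cov(b) (x1 - x2); a sketch (not the macros themselves):

```python
import numpy as np

def compare_fitted_values(X, y, x1, x2):
    """Difference between two fitted values from an OLS model and its
    standard error, via Var(x1'b - x2'b) = (x1-x2)' Cov(b) (x1-x2).

    X : (n, k) design matrix (including the intercept column)
    x1, x2 : predictor profiles at which to compare fitted values
    """
    XtX_inv = np.linalg.inv(X.T @ X)
    b = XtX_inv @ X.T @ y                 # OLS coefficients
    resid = y - X @ b
    df = len(y) - X.shape[1]
    sigma2 = resid @ resid / df           # residual variance
    d = np.asarray(x1) - np.asarray(x2)   # contrast vector
    diff = d @ b
    se = np.sqrt(d @ XtX_inv @ d * sigma2)
    return diff, se
```

A 95% confidence interval is then diff ± t_{0.975, df} * se, which is exactly what the macros report.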
Downscaled and debiased climate simulations for North America from 21,000 years ago to 2100AD
Lorenz, David J.; Nieto-Lugilde, Diego; Blois, Jessica L.; Fitzpatrick, Matthew C.; Williams, John W.
2016-01-01
Increasingly, ecological modellers are integrating paleodata with future projections to understand climate-driven biodiversity dynamics from the past through the current century. Climate simulations from earth system models are necessary to this effort, but must be debiased and downscaled before they can be used by ecological models. Downscaling methods and observational baselines vary among researchers, which produces confounding biases among downscaled climate simulations. We present unified datasets of debiased and downscaled climate simulations for North America from 21 ka BP to 2100AD, at 0.5° spatial resolution. Temporal resolution is decadal averages of monthly data until 1950AD, average climates for 1950–2005 AD, and monthly data from 2010 to 2100AD, with decadal averages also provided. This downscaling includes two transient paleoclimatic simulations and 12 climate models for the IPCC AR5 (CMIP5) historical (1850–2005), RCP4.5, and RCP8.5 21st-century scenarios. Climate variables include primary variables and derived bioclimatic variables. These datasets provide a common set of climate simulations suitable for seamlessly modelling the effects of past and future climate change on species distributions and diversity. PMID:27377537
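As a toy illustration of the two operations these datasets bundle, debiasing and downscaling, here is a delta-change correction plus a crude nearest-neighbour grid refinement (the authors' actual methodology is far more careful about observational baselines and interpolation):

```python
import numpy as np

def delta_debias(model_hist, model_fut, obs_clim):
    """Delta-change bias correction: add the model-simulated change
    to an observational baseline climatology.
    All arrays share the same (e.g. month x lat x lon) shape."""
    return obs_clim + (model_fut - model_hist)

def crude_downscale(coarse, factor):
    """Nearest-neighbour refinement of a coarse grid; a stand-in for
    real statistical downscaling to 0.5 degree resolution."""
    return np.kron(coarse, np.ones((factor, factor)))
```

Using one shared observational baseline for all models is what removes the confounding biases the abstract describes.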
Montoya, Isaac D; Bell, David C
2006-11-01
This article examines the effect of target, perceiver, and relationship characteristics on the perceiver's assessment that the target may be HIV seropositive (HIV+). A sample of 267 persons was recruited from low income, high drug use neighborhoods. Respondents (perceivers) were asked to name people (targets) with whom they had a social, drug sharing, or sexual relationship. Perceivers described 1,640 such relationships. Perceivers were asked about the targets' age, gender, and race/ethnicity, whether the targets were good-looking, their level of trust with the target, and how long they had known them. Perceivers were then asked to evaluate the chances that the target mentioned was HIV+. Two regression models were estimated on the 1,640 relationships mentioned. Model 1 included variables reflecting only target characteristics as independent variables. Model 2 included variables reflecting target characteristics as well as variables reflecting perceivers and perceiver-target relationship characteristics. The results showed that targets that were female, younger, and good-looking were perceived as being less likely to be HIV+. However, when accounting for perceiver and relationship effects, some of the target characteristic effects disappeared. Copyright 2006 APA, all rights reserved.
Error propagation of partial least squares for parameters optimization in NIR modeling.
Du, Chenzhao; Dai, Shengyun; Qiao, Yanjiang; Wu, Zhisheng
2018-03-05
A novel methodology is proposed to determine the error propagation of partial least squares (PLS) for parameter optimization in near-infrared (NIR) modeling. The parameters include spectral pretreatment, latent variables and variable selection. In this paper, an open-source dataset (corn) and a complicated dataset (Gardenia) were used to establish PLS models under different modeling parameters. The error propagation of the modeling parameters for water quantity in corn and geniposide quantity in Gardenia was presented in terms of both type I and type II errors. For example, when the variable importance in projection (VIP), interval partial least squares (iPLS) and backward interval partial least squares (BiPLS) variable selection algorithms were used for geniposide in Gardenia, compared with synergy interval partial least squares (SiPLS), the error weight varied from 5% to 65%, 55% and 15%, respectively. The results demonstrated how and to what extent the different modeling parameters affect the error propagation of PLS in NIR modeling: the larger the error weight, the worse the model. Finally, our trials established a robust process for developing PLS models for corn and Gardenia under the optimal modeling parameters. Furthermore, this could provide significant guidance for the selection of modeling parameters for other multivariate calibration models. Copyright © 2017. Published by Elsevier B.V.
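The number of latent variables is one of the three parameters whose error propagation the paper tracks. To make concrete what that parameter controls, here is a minimal PLS1 (single-response) fit via the classic NIPALS recursion (the generic algorithm, not the paper's software):

```python
import numpy as np

def pls1_fit(X, y, n_components):
    """Minimal PLS1: extract n_components latent variables and
    return the equivalent regression vector plus centering terms."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    W, P, q = [], [], []
    Xr, yr = Xc.copy(), yc.copy()
    for _ in range(n_components):
        w = Xr.T @ yr                   # weight: covariance direction
        w = w / np.linalg.norm(w)
        t = Xr @ w                      # latent-variable scores
        tt = t @ t
        p = Xr.T @ t / tt               # X loadings
        qk = (yr @ t) / tt              # y loading
        Xr = Xr - np.outer(t, p)        # deflate
        yr = yr - qk * t
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)  # regression vector
    return B, X.mean(axis=0), y.mean()

def pls1_predict(model, X):
    B, x_mean, y_mean = model
    return (X - x_mean) @ B + y_mean
```

Varying `n_components` and watching held-out prediction errors is the simplest way to see how this parameter propagates error into the final model.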
Survival of white-tailed deer neonates in Minnesota and South Dakota
Grovenburg, T.W.; Swanson, C.C.; Jacques, C.N.; Klaver, R.W.; Brinkman, T.J.; Burris, B.M.; Deperno, C.S.; Jenks, J.A.
2011-01-01
Understanding the influence of intrinsic (e.g., age, birth mass, and sex) and habitat factors on survival of neonate white-tailed deer improves understanding of population ecology. During 2002–2004, we captured and radiocollared 78 neonates in eastern South Dakota and southwestern Minnesota, of which 16 died before 1 September. Predation accounted for 80% of mortality; the remaining 20% was attributed to starvation. Canids (coyotes [Canis latrans], domestic dogs) accounted for 100% of predation on neonates. We used known fate analysis in Program MARK to estimate survival rates and investigate the influence of intrinsic and habitat variables on survival. We developed 2 a priori model sets, including intrinsic variables (model set 1) and habitat variables (model set 2; forested cover, wetlands, grasslands, and croplands). For model set 1, model {Sage-interval} had the lowest AICc (Akaike's information criterion for small sample size) value, indicating that age at mortality (3-stage age-interval: 0–2 weeks, 2–8 weeks, and >8 weeks) best explained survival. Model set 2 indicated that habitat variables did not further influence survival in the study area; β-estimates and 95% confidence intervals for habitat variables in competing models encompassed zero; thus, we excluded these models from consideration. Overall survival rate using model {Sage-interval} was 0.87 (95% CI = 0.83–0.91); 61% of mortalities occurred at 0–2 weeks of age, 26% at 2–8 weeks of age, and 13% at >8 weeks of age. Our results indicate that variables influencing survival may be area specific. Region-specific data are needed to determine influences of intrinsic and habitat variables on neonate survival before wildlife managers can determine which habitat management activities influence neonate populations.
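Model selection here rests on AICc. The computation is one line; the log-likelihoods below are invented for illustration (the study's values come from known-fate models in Program MARK):

```python
def aicc(log_lik, k, n):
    """Akaike information criterion corrected for small samples."""
    return -2.0 * log_lik + 2.0 * k + 2.0 * k * (k + 1) / (n - k - 1)

# rank candidate survival models by AICc (illustrative numbers)
n = 78  # radiocollared neonates
candidates = {"S(age-interval)": (-50.2, 3), "S(constant)": (-55.9, 1)}
ranked = sorted(candidates, key=lambda m: aicc(*candidates[m], n))
```

The lowest-AICc model wins, exactly as {Sage-interval} did in model set 1.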
Predictive power of the GRACE score in a population with diabetes.
Baeza-Román, Anna; de Miguel-Balsa, Eva; Latour-Pérez, Jaime; Carrillo-López, Andrés
2017-12-01
Current clinical practice guidelines recommend risk stratification in patients with acute coronary syndrome (ACS) upon admission to hospital. Diabetes mellitus (DM) is widely recognized as an independent predictor of mortality in these patients, although it is not included in the GRACE risk score. The objective of this study is to validate the GRACE risk score in a contemporary population and particularly in the subgroup of patients with diabetes, and to test the effects of including the DM variable in the model. Retrospective cohort study in patients included in the ARIAM-SEMICYUC registry, with a diagnosis of ACS and with available in-hospital mortality data. We tested the predictive power of the GRACE score, calculating the area under the ROC curve. We assessed the calibration of the score and the predictive ability based on type of ACS and the presence of DM. Finally, we evaluated the effect of including the DM variable in the model by calculating the net reclassification improvement. The GRACE score shows good predictive power for hospital mortality in the study population, with a moderate degree of calibration and no significant differences based on ACS type or the presence of DM. Including DM as a variable did not add any predictive value to the GRACE model. The GRACE score has an appropriate predictive power, with good calibration and clinical applicability in the subgroup of diabetic patients. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
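The net reclassification improvement used in the last step asks whether adding DM moves predicted risks in the right direction. A category-free NRI can be sketched as follows (an illustrative implementation, not the study's code):

```python
def net_reclassification_improvement(p_old, p_new, event):
    """Category-free NRI: among events, the net proportion whose
    predicted risk rises under the new model; among non-events, the
    net proportion whose risk falls. Their sum is the NRI."""
    up_e = down_e = up_ne = down_ne = 0
    for po, pn, ev in zip(p_old, p_new, event):
        if pn > po:
            up_e += ev
            up_ne += 1 - ev
        elif pn < po:
            down_e += ev
            down_ne += 1 - ev
    n_e = sum(event)
    n_ne = len(event) - n_e
    return (up_e - down_e) / n_e + (down_ne - up_ne) / n_ne
```

A value near zero, as reported here for the DM variable, means the added predictor does not usefully reclassify patients.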
ERIC Educational Resources Information Center
Kurt, Adile Askim; Emiroglu, Bülent Gürsel
2018-01-01
The objective of the present study was to examine students' online information searching strategies, their cognitive absorption levels and the information pollution levels on the Internet based on different variables and to determine the correlation between these variables. The study was designed with the survey model, the study group included 198…
Variability of annoyance response due to aircraft noise
NASA Technical Reports Server (NTRS)
Dempsey, T. K.; Cawthorn, J. M.
1979-01-01
An investigation was conducted to study the variability in the response of subjects participating in noise experiments. This paper presents a description of a model developed to include this variability, which incorporates an aircraft-noise adaptation level, or an annoyance calibration, for each individual. The results indicate that the use of an aircraft-noise adaptation level improved the prediction accuracy of annoyance responses (and simultaneously reduced response variation).
Flexible Control of Safety Margins for Action Based on Environmental Variability.
Hadjiosif, Alkis M; Smith, Maurice A
2015-06-17
To reduce the risk of slip, grip force (GF) control includes a safety margin above the force level ordinarily sufficient for the expected load force (LF) dynamics. The current view is that this safety margin is based on the expected LF dynamics, amounting to a static safety factor like that often used in engineering design. More efficient control could be achieved, however, if the motor system reduces the safety margin when LF variability is low and increases it when this variability is high. Here we show that this is indeed the case by demonstrating that the human motor system sizes the GF safety margin in proportion to an internal estimate of LF variability to maintain a fixed statistical confidence against slip. In contrast to current models of GF control that neglect the variability of LF dynamics, we demonstrate that GF is threefold more sensitive to the SD than to the expected value of LF dynamics, in line with the maintenance of a 3-sigma confidence level. We then show that a computational model of GF control that includes a variability-driven safety margin predicts highly asymmetric GF adaptation between increases versus decreases in load. We find clear experimental evidence for this asymmetry and show that it explains previously reported differences in how rapidly GFs and manipulatory forces adapt. This model further predicts bizarre nonmonotonic shapes for GF learning curves, which are faithfully borne out in our experimental data. Our findings establish a new role for environmental variability in the control of action. Copyright © 2015 the authors.
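The paper's central claim, a safety margin proportional to an internal estimate of LF variability, can be caricatured in a few lines. The friction ratio and the use of a recent-history SD are assumptions of this sketch, not details taken from the study:

```python
import numpy as np

def grip_force_command(lf_history, slip_ratio=0.5, n_sigma=3.0):
    """Variability-driven grip force: expected load force plus a
    margin of n_sigma standard deviations of recent load forces,
    giving a fixed statistical confidence (e.g. 3-sigma) against slip."""
    lf = np.asarray(lf_history, dtype=float)
    expected = lf.mean()
    margin = n_sigma * lf.std(ddof=1)     # variability-driven margin
    # the minimum slip-preventing GF scales with LF via the friction ratio
    return (expected + margin) / slip_ratio
```

A static safety factor would ignore the `margin` term entirely; the model's asymmetric adaptation predictions come from the margin growing quickly when variability rises and shrinking slowly when it falls.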
A model of the human supervisor
NASA Technical Reports Server (NTRS)
Kok, J. J.; Vanwijk, R. A.
1977-01-01
A general model of the human supervisor's behavior is given. Submechanisms of the model include the observer/reconstructor, the decision maker, and the controller. A set of hypotheses is postulated for the relations between the task variables and the parameters of the different submechanisms of the model. Verification of the model hypotheses is considered using variations in the task variables. An approach is suggested for the identification of the model parameters which makes use of a multidimensional error criterion. Each of the elements of this multidimensional criterion corresponds to a certain aspect of the supervisor's behavior, and is directly related to a particular part of the model and its parameters. This approach offers good possibilities for an efficient parameter adjustment procedure.
The unusual suspect: Land use is a key predictor of biodiversity patterns in the Iberian Peninsula
NASA Astrophysics Data System (ADS)
Martins, Inês Santos; Proença, Vânia; Pereira, Henrique Miguel
2014-11-01
Although land use change is a key driver of biodiversity change, related variables such as habitat area and habitat heterogeneity are seldom considered in modeling approaches at larger extents. To address this knowledge gap we tested the contribution of land use related variables to models describing richness patterns of amphibians, reptiles and passerines in the Iberian Peninsula. We analyzed the relationship between species richness and habitat heterogeneity at two spatial resolutions (i.e., 10 km × 10 km and 50 km × 50 km). Using both ordinary least square and simultaneous autoregressive models, we assessed the relative importance of land use variables, climate variables and topographic variables. We also compared the species-area relationship with a multi-habitat model, the countryside species-area relationship, to assess the role of the area of different types of habitats on species diversity across scales. The association between habitat heterogeneity and species richness varied with the taxa and spatial resolution. A positive relationship was detected for all taxa at a grain size of 10 km × 10 km, but only passerines responded at a grain size of 50 km × 50 km. Species richness patterns were well described by abiotic predictors, but habitat predictors also explained a considerable portion of the variation. Moreover, species richness patterns were better described by a multi-habitat species-area model, incorporating land use variables, than by the classic power model, which includes area as the single explanatory variable. Our results suggest that the role of land use in shaping species richness patterns goes beyond the local scale and persists at larger spatial scales. These findings underscore the need to integrate land use variables into models designed to assess species richness response to large-scale environmental changes.
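The model comparison at the end contrasts the classic power-law species-area relationship with the countryside SAR, in which each habitat's area is weighted by the taxon's affinity for it. In sketch form (parameter values illustrative):

```python
def classic_sar(area, c, z):
    """Classic power-law species-area relationship: S = c * A**z."""
    return c * area ** z

def countryside_sar(habitat_areas, affinities, c, z):
    """Countryside SAR: richness responds to the affinity-weighted
    sum of habitat areas rather than to total area alone."""
    effective_area = sum(h * a for h, a in zip(habitat_areas, affinities))
    return c * effective_area ** z
```

With all affinities equal to one, the countryside model collapses to the classic model on total area; unequal affinities are what let land use composition, not just extent, shape predicted richness.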
Assessing medication effects in the MTA study using neuropsychological outcomes.
Epstein, Jeffery N; Conners, C Keith; Hervey, Aaron S; Tonev, Simon T; Arnold, L Eugene; Abikoff, Howard B; Elliott, Glen; Greenhill, Laurence L; Hechtman, Lily; Hoagwood, Kimberly; Hinshaw, Stephen P; Hoza, Betsy; Jensen, Peter S; March, John S; Newcorn, Jeffrey H; Pelham, William E; Severe, Joanne B; Swanson, James M; Wells, Karen; Vitiello, Benedetto; Wigal, Timothy
2006-05-01
While studies have increasingly investigated deficits in reaction time (RT) and RT variability in children with attention deficit/hyperactivity disorder (ADHD), few studies have examined the effects of stimulant medication on these important neuropsychological outcome measures. 316 children who participated in the Multimodal Treatment Study of Children with ADHD (MTA) completed the Conners' Continuous Performance Test (CPT) at the 24-month assessment point. Outcome measures included standard CPT outcomes (e.g., errors of commission, mean hit reaction time (RT)) and RT indicators derived from an Ex-Gaussian distributional model (i.e., mu, sigma, and tau). Analyses revealed significant effects of medication across all neuropsychological outcome measures. Results on the Ex-Gaussian outcome measures revealed that stimulant medication slows RT and reduces RT variability. This demonstrates the importance of including analytic strategies that can accurately model the actual distributional pattern, including the positive skew. Further, the results of the study relate to several theoretical models of ADHD.
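The Ex-Gaussian model decomposes each RT distribution into a Gaussian part (mu, sigma) and an exponential tail (tau) that carries the positive skew. Distributional fits are usually done by maximum likelihood; a cruder method-of-moments sketch conveys the idea:

```python
import numpy as np

def exgauss_moments(rt):
    """Method-of-moments estimates of the ex-Gaussian parameters
    (mu, sigma, tau) from a positively skewed sample of reaction times."""
    rt = np.asarray(rt, dtype=float)
    m, s = rt.mean(), rt.std(ddof=1)
    skew = ((rt - m) ** 3).mean() / s ** 3
    skew = max(skew, 1e-12)              # method assumes positive skew
    tau = s * (skew / 2.0) ** (1.0 / 3.0)   # exponential-tail parameter
    sigma2 = max(s ** 2 - tau ** 2, 0.0)
    return m - tau, sigma2 ** 0.5, tau
```

Reporting mu, sigma and tau separately is what lets a study distinguish overall slowing (mu) from increased variability and lapses (sigma, tau).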
Use of Prolonged Travel to Improve Pediatric Risk-Adjustment Models
Lorch, Scott A; Silber, Jeffrey H; Even-Shoshan, Orit; Millman, Andrea
2009-01-01
Objective: To determine whether travel variables could explain previously reported differences in lengths of stay (LOS), readmission, or death at children's hospitals versus other hospital types. Data Source: Hospital discharge data from Pennsylvania between 1996 and 1998. Study Design: A population cohort of children aged 1–17 years with one of 19 common pediatric conditions was created (N=51,855). Regression models were constructed to determine differences in LOS, readmission, or death between children's hospitals and other types of hospitals after adding five types of additional illness severity variables to a traditional risk-adjustment model. Principal Findings: With the traditional risk-adjustment model, children traveling longer to children's or rural hospitals had longer adjusted LOS and higher readmission rates. Inclusion of either a geocoded travel time variable or a nongeocoded travel distance variable provided the largest reduction in adjusted LOS, adjusted readmission rates, and adjusted mortality rates for children's hospitals and rural hospitals compared with other types of hospitals. Conclusions: Adding a travel variable to traditional severity adjustment models may improve the assessment of an individual hospital's pediatric care by reducing systematic differences between different types of hospitals. PMID:19207591
Manual for a workstation-based generic flight simulation program (LaRCsim), version 1.4
NASA Technical Reports Server (NTRS)
Jackson, E. Bruce
1995-01-01
LaRCsim is a set of ANSI C routines that implement a full set of equations of motion for a rigid-body aircraft in atmospheric and low-earth orbital flight, suitable for pilot-in-the-loop simulations on a workstation-class computer. All six rigid-body degrees of freedom are modeled. The modules provided include calculations of the typical aircraft rigid-body simulation variables, earth geodesy, gravity and atmospheric models, and support for several data recording options. Features and limitations of the current version include English units of measure; a 1962 atmosphere model in cubic-spline lookup form, ranging from sea level to 75,000 feet; and a rotating, oblate spheroidal earth model, with aircraft C.G. coordinates in both geocentric and geodetic axes. Angular integrations are done using quaternion state variables. Vehicle X-Z symmetry is assumed.
NASA Technical Reports Server (NTRS)
Furukawa, S.
1975-01-01
The current applications of simulation models to clinical research described include tilt-model simulation of orthostatic intolerance with hemorrhage and modeling of the long-term circulation. Current capabilities include: (1) simulation of analogous pathological states and the effects of abnormal environmental stressors by manipulating system variables and changing inputs in various sequences; (2) simulation of the time courses of responses of controlled variables to the altered inputs and their relationships; (3) simulation of physiological responses to treatment, such as isotonic saline transfusion; (4) simulation of the effectiveness of a treatment as well as the effects of complications superimposed on an existing pathological state; and (5) comparison of the effectiveness of various treatments/countermeasures for a given pathological state. The feasibility of applying simulation models to diagnostic and therapeutic research problems is assessed.
Age at menarche in urban Argentinian girls: association with biological and socioeconomic factors.
Orden, Alicia B; Vericat, Agustina; Apezteguía, Maria C
2011-01-01
Age at menarche is regarded as a sensitive indicator of the physical, biological, and psychosocial environment. The aim of this study was to determine the age at menarche and its association with biological and socioeconomic factors in girls from Santa Rosa (La Pampa, Argentina). An observational cross-sectional study was carried out on 1,221 schoolgirls aged 9-15 years. Menarche data were obtained by the status-quo method. Height, sitting height, weight, arm circumference, and tricipital and subscapular skinfolds were measured. We also calculated body mass index, measures of body composition and proportions, and fat distribution. To assess socioeconomic factors, parents completed a self-administered questionnaire about their occupation and education, family size, household, and other family characteristics. The median age at menarche, estimated by the logit method, was 12.84 years (95% CI: 12.71, 12.97). Compared with their premenarcheal age peers, postmenarcheal girls had greater anthropometric dimensions through age 12. After this age, only height was higher in the latter group. Data were processed by fitting two logistic regressions, both including age. The first model included anthropometric variables and birth weight, while the second model included the socioeconomic variables. The significant variables derived from each model were incorporated into a new regression: height, sitting height ratio (first model), and maternal education (second model). These three variables remained significantly associated with menarche. The results suggest a relationship between linear growth and menarche and agree with those found in other populations where the advancement of menarche is associated with improved living conditions. In relatively uniform urban contexts, maternal education may be a good proxy for the standard of living.
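The logit (status-quo) method fits P(postmenarcheal | age) and reads off the median as the age where the fitted probability crosses 0.5, i.e. -b0/b1. A bare-bones Newton-Raphson version (the study presumably used standard statistical software):

```python
import numpy as np

def logit_median_age(age, status, n_iter=25):
    """Status-quo estimate of the median age at menarche.

    age    : (n,) ages of the girls surveyed
    status : (n,) 1 if postmenarcheal, 0 otherwise
    Fits logit P = b0 + b1*age by Newton-Raphson and returns -b0/b1,
    the age at which the fitted probability equals 0.5."""
    X = np.column_stack([np.ones(len(age)), age])
    b = np.zeros(2)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ b)))
        grad = X.T @ (status - p)                # score vector
        H = X.T @ (X * (p * (1 - p))[:, None])   # information matrix
        b += np.linalg.solve(H, grad)
    return -b[0] / b[1]
```

Confidence limits for the median follow from the delta method applied to -b0/b1, which is how intervals like (12.71, 12.97) are usually obtained.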
Dick, Thomas E.; Molkov, Yaroslav I.; Nieman, Gary; Hsieh, Yee-Hsee; Jacono, Frank J.; Doyle, John; Scheff, Jeremy D.; Calvano, Steve E.; Androulakis, Ioannis P.; An, Gary; Vodovotz, Yoram
2012-01-01
Acute inflammation leads to organ failure by engaging catastrophic feedback loops in which stressed tissue evokes an inflammatory response and, in turn, inflammation damages tissue. Manifestations of this maladaptive inflammatory response include cardio-respiratory dysfunction that may be reflected in reduced heart rate and ventilatory pattern variabilities. We have developed signal-processing algorithms that quantify non-linear deterministic characteristics of variability in biologic signals. Now, coalescing under the aegis of the NIH Computational Biology Program and the Society for Complexity in Acute Illness, two research teams performed iterative experiments and computational modeling on inflammation and cardio-pulmonary dysfunction in sepsis as well as on neural control of respiration and ventilatory pattern variability. These teams, with additional collaborators, have recently formed a multi-institutional, interdisciplinary consortium, whose goal is to delineate the fundamental interrelationship between the inflammatory response and physiologic variability. Multi-scale mathematical modeling and complementary physiological experiments will provide insight into autonomic neural mechanisms that may modulate the inflammatory response to sepsis and simultaneously reduce heart rate and ventilatory pattern variabilities associated with sepsis. This approach integrates computational models of neural control of breathing and cardio-respiratory coupling with models that combine inflammation, cardiovascular function, and heart rate variability. The resulting integrated model will provide mechanistic explanations for the phenomena of respiratory sinus-arrhythmia and cardio-ventilatory coupling observed under normal conditions, and the loss of these properties during sepsis. This approach holds the potential of modeling cross-scale physiological interactions to improve both basic knowledge and clinical management of acute inflammatory diseases such as sepsis and trauma. 
PMID:22783197
Kafle, Gopi Krishna; Chen, Lide
2016-02-01
There is a lack of literature reporting the methane potential of several livestock manures under the same anaerobic digestion conditions (same inoculum, temperature, time, and size of digester). To the best of our knowledge, no previous study has reported biochemical methane potential (BMP) prediction models developed and evaluated solely using the test results of at least five different livestock manures. The goal of this study was to evaluate the BMP of five different livestock manures (dairy manure (DM), horse manure (HM), goat manure (GM), chicken manure (CM) and swine manure (SM)) and to predict the BMP using different statistical models. Nutrients of the digested manures were also monitored. The BMP tests were conducted under mesophilic temperatures with a manure loading factor of 3.5 g volatile solids (VS)/L and a feed-to-inoculum ratio (F/I) of 0.5. Single-variable and multiple-variable regression models were developed using manure total carbohydrate (TC), crude protein (CP), total fat (TF), lignin (LIG) and acid detergent fiber (ADF), and measured BMP data. Three different kinetic models (the first-order kinetic model, the modified Gompertz model and the Chen and Hashimoto model) were evaluated for BMP prediction. The BMPs of DM, HM, GM, CM and SM were measured to be 204, 155, 159, 259, and 323 mL/g VS, respectively, and the VS removals were calculated to be 58.6%, 52.9%, 46.4%, 81.4%, and 81.4%, respectively. The technical digestion time (T80-90, the time required to produce 80-90% of total biogas production) for DM, HM, GM, CM and SM was calculated to be in the ranges of 19-28, 27-37, 31-44, 13-18, and 12-17 days, respectively. The effluents from the HM digesters showed the lowest nitrogen, phosphorus and potassium concentrations. The effluents from the CM digesters showed the highest nitrogen and phosphorus concentrations, and digested SM showed the highest potassium concentration.
Based on the results of the regression analysis, the model using LIG performed best among the single-variable models for BMP prediction (R² = 0.851, p = 0.026); the model including TC and TF gave the best BMP prediction among the two-variable models (R² = 0.913, p = 0.068-0.075); and the model including CP, LIG and ADF performed best among the three-variable models (R² = 0.999, p = 0.009-0.017). Among the three kinetic models used, the first-order kinetic model fitted the measured BMP data best (R² = 0.996-0.998, rRMSE = 0.171-0.381), and deviations between measured BMPs and those predicted by the first-order kinetic model were less than 3.0%. Published by Elsevier Ltd.
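As an illustration of the first-order kinetic model named in the abstract, the sketch below fits B(t) = B0·(1 − exp(−k·t)) to a cumulative methane curve by scanning the rate constant and solving the ultimate BMP in closed form. This is a minimal numpy-only illustration, not the authors' code; the synthetic data and variable names are assumptions (the 204 mL/g VS value is borrowed from the dairy-manure result only as a plausible magnitude).

```python
import numpy as np

def fit_first_order(t, y, k_grid=None):
    """Fit B(t) = B0 * (1 - exp(-k t)) by scanning k and solving B0 in closed form."""
    if k_grid is None:
        k_grid = np.linspace(0.01, 1.0, 500)
    best = (np.inf, None, None)
    for k in k_grid:
        f = 1.0 - np.exp(-k * t)
        b0 = np.dot(y, f) / np.dot(f, f)    # least-squares B0 for this fixed k
        sse = np.sum((y - b0 * f) ** 2)
        if sse < best[0]:
            best = (sse, b0, k)
    return best[1], best[2]

# Synthetic cumulative methane curve: ultimate BMP 204 mL/g VS, rate 0.1 per day
t = np.arange(0.0, 40.0, 1.0)
y = 204.0 * (1 - np.exp(-0.1 * t))
b0, k = fit_first_order(t, y)
```

Scanning k while solving B0 analytically keeps the fit a one-dimensional search, which is robust when only a short digestion-time series is available.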
Brandstätter, Christian; Laner, David; Prantl, Roman; Fellner, Johann
2014-12-01
Municipal solid waste landfills pose a threat to the environment and human health, especially old landfills that lack facilities for the collection and treatment of landfill gas and leachate. Consequently, missing information about emission flows prevents site-specific environmental risk assessments. To close this gap, combining waste sampling and analysis with statistical modeling is one option for estimating present and future emission potentials. Optimizing the tradeoff between investigation costs and reliable results requires knowing both the number of samples to be taken and the variables to be analyzed. This article aims to identify the optimal number of waste samples and variables needed to predict a larger set of variables. We therefore introduce a multivariate linear regression model and test its applicability in two case studies. Landfill A was used to set up and calibrate the model based on 50 waste samples and twelve variables. The calibrated model was then applied to Landfill B, comprising 36 waste samples and the same twelve variables, four of which served as predictors. The case study results are twofold: first, the twelve variables can be predicted reliably and accurately from the four predictor variables (LOI, EC, pH and Cl). Second, for Landfill B, only ten full measurements would be needed for a reliable prediction of most response variables. The four predictor variables entail comparably low analytical costs relative to the full set of measurements. This cost reduction could be used to increase the number of samples, yielding an improved understanding of the spatial heterogeneity of waste in landfills. In conclusion, future application of the developed model could improve the reliability of predicted emission potentials. The model could become a standard screening tool for old landfills if its applicability and reliability are confirmed in additional case studies. Copyright © 2014 Elsevier Ltd.
All rights reserved.
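The calibrate-then-transfer scheme described above can be sketched as a single multivariate least-squares fit: one coefficient matrix maps the four predictors to all response variables at once, and the fitted matrix is then applied at a second site. This is a minimal illustration under stated assumptions; the synthetic data, the number of responses (8), and all variable names are placeholders, not the paper's measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical calibration site ("Landfill A"): 50 samples, 4 predictors
# (stand-ins for LOI, EC, pH, Cl) and 8 response variables per sample.
X = rng.normal(size=(50, 4))
X1 = np.hstack([np.ones((50, 1)), X])          # add an intercept column
true_B = rng.normal(size=(5, 8))               # unknown coefficient matrix
Y = X1 @ true_B + 0.01 * rng.normal(size=(50, 8))

# Calibrate: one lstsq call fits the coefficients for all responses jointly.
B_hat, *_ = np.linalg.lstsq(X1, Y, rcond=None)

# Transfer: predict every response at a second site ("Landfill B") from the
# four cheap predictor measurements alone.
X_new = np.hstack([np.ones((36, 1)), rng.normal(size=(36, 4))])
Y_pred = X_new @ B_hat
```

The joint fit is equivalent to running twelve (here eight) separate regressions, but it makes the screening-tool usage explicit: only the predictor columns are measured at the new site.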
A 4-cylinder Stirling engine computer program with dynamic energy equations
NASA Technical Reports Server (NTRS)
Daniele, C. J.; Lorenzo, C. F.
1983-01-01
A computer program for simulating the steady state and transient performance of a four cylinder Stirling engine is presented. The thermodynamic model includes both continuity and energy equations and linear momentum terms (flow resistance). Each working space between the pistons is broken into seven control volumes. Drive dynamics and vehicle load effects are included. The model contains 70 state variables. Also included in the model are piston rod seal leakage effects. The computer program includes a model of a hydrogen supply system, from which hydrogen may be added to the system to accelerate the engine. Flow charts are provided.
Behavioral Correlates of System Operational Readiness (SOR): Summary of Workshop Proceedings.
1983-10-01
This report describes a 2-day conference called to explore the methodology required to develop a behavioral model of system operational readiness (SOR). Participants discussed (1) the behavioral variables that should be included in the model, (2) the system-level measures that should be included, (3) ... The report also notes the use of ARIMA models (Box & Jenkins, 1976) for interrupted time series analysis.
Data-driven non-Markovian closure models
NASA Astrophysics Data System (ADS)
Kondrashov, Dmitri; Chekroun, Mickaël D.; Ghil, Michael
2015-03-01
This paper has two interrelated foci: (i) obtaining stable and efficient data-driven closure models by using a multivariate time series of partial observations from a large-dimensional system; and (ii) comparing these closure models with the optimal closures predicted by the Mori-Zwanzig (MZ) formalism of statistical physics. Multilayer stochastic models (MSMs) are introduced as both a generalization and a time-continuous limit of existing multilevel, regression-based approaches to closure in a data-driven setting; these approaches include empirical model reduction (EMR), as well as more recent multi-layer modeling. It is shown that the multilayer structure of MSMs can provide a natural Markov approximation to the generalized Langevin equation (GLE) of the MZ formalism. A simple correlation-based stopping criterion for an EMR-MSM model is derived to assess how well it approximates the GLE solution. Sufficient conditions are derived on the structure of the nonlinear cross-interactions between the constitutive layers of a given MSM to guarantee the existence of a global random attractor. This existence ensures that no blow-up can occur for a broad class of MSM applications, a class that includes non-polynomial predictors and nonlinearities that do not necessarily preserve quadratic energy invariants. The EMR-MSM methodology is first applied to a conceptual, nonlinear, stochastic climate model of coupled slow and fast variables, in which only slow variables are observed. It is shown that the resulting closure model with energy-conserving nonlinearities efficiently captures the main statistical features of the slow variables, even when there is no formal scale separation and the fast variables are quite energetic. Second, an MSM is shown to successfully reproduce the statistics of a partially observed, generalized Lotka-Volterra model of population dynamics in its chaotic regime. 
The challenges here include the rarity of strange attractors in the model's parameter space and the existence of multiple attractor basins with fractal boundaries. The positivity constraint on the solutions' components replaces here the quadratic-energy-preserving constraint of fluid-flow problems and it successfully prevents blow-up.
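The multilevel, regression-based closure idea behind EMR and MSMs can be sketched in miniature: fit a main-level regression for the observed variable, then add a further level that regresses the main-level residual on the state and on itself, supplying the memory that a one-level Markov closure misses. The toy series, coefficients, and layer structure below are illustrative assumptions, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "observed" series: an AR(2) process, so a one-lag main level is incomplete.
n = 5000
x = np.zeros(n)
for t in range(2, n):
    x[t] = 0.5 * x[t - 1] - 0.3 * x[t - 2] + rng.normal(scale=0.1)

# Level 0: regress x[t+1] on x[t] only; the residual r carries unresolved memory.
a0 = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])
r = x[1:] - a0 * x[:-1]

# Level 1: regress the residual on (state, previous residual).  The extra layer
# is what makes the closure non-Markovian in x alone, mimicking the GLE memory term.
Z = np.column_stack([x[:-2], r[:-1]])
b, *_ = np.linalg.lstsq(Z, r[1:], rcond=None)
r1 = r[1:] - Z @ b
```

Each added level should leave less structured residual variance; in a full MSM the last-level residual is modeled as white noise, closing the system stochastically.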
Person Re-Identification via Distance Metric Learning With Latent Variables.
Sun, Chong; Wang, Dong; Lu, Huchuan
2017-01-01
In this paper, we propose an effective person re-identification method with latent variables, which represents a pedestrian as the mixture of a holistic model and a number of flexible models. Three types of latent variables are introduced to model uncertain factors in the re-identification problem, including vertical misalignments, horizontal misalignments and leg posture variations. The distance between two pedestrians can be determined by minimizing a given distance function with respect to the latent variables, and then used to conduct the re-identification task. In addition, we develop a latent metric learning method for learning an effective metric matrix, which can be solved in an iterative manner: once the latent information is specified, the metric matrix can be obtained from standard metric learning methods; with the computed metric matrix, the latent variables can be determined by searching the state space exhaustively. Finally, extensive experiments are conducted on seven databases to evaluate the proposed method. The experimental results demonstrate that our method achieves better performance than other competing algorithms.
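The "minimize over latent variables, then measure" step can be sketched as follows: the distance between two pedestrians is the minimum Mahalanobis-type distance over a discrete set of candidate alignments (latent states), found by exhaustive search. The feature dimension, the identity metric, and the three-shift setup are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def latent_distance(x, y_variants, M):
    """Minimum over latent states (candidate alignments of the second image)
    of the Mahalanobis distance (x - y)^T M (x - y)."""
    dists = [float((x - y) @ M @ (x - y)) for y in y_variants]
    return min(dists), int(np.argmin(dists))

rng = np.random.default_rng(2)
d = 16
M = np.eye(d)                       # a learned metric matrix would replace this
x = rng.normal(size=d)
# Hypothetical features of the same person under 3 vertical shifts; index 1 is
# the correct alignment (smallest perturbation relative to x).
y_variants = [x + rng.normal(scale=s, size=d) for s in (2.0, 0.1, 2.0)]
dist, best = latent_distance(x, y_variants, M)
```

In the alternating scheme of the abstract, this exhaustive latent search and the metric update would be repeated until convergence.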
Ozatac, Nesrin; Gokmenoglu, Korhan K; Taspinar, Nigar
2017-07-01
This study investigates the environmental Kuznets curve (EKC) hypothesis for the case of Turkey from 1960 to 2013 by considering energy consumption, trade, urbanization, and financial development variables. Although previous literature examines various aspects of the EKC hypothesis for the case of Turkey, our model augments the basic model with several covariates to develop a better understanding of the relationship among the variables and to avoid omitted variable bias. The results of the bounds test and the error correction model under the autoregressive distributed lag mechanism suggest long-run relationships among the variables as well as evidence of the EKC and the scale effect in Turkey. A conditional Granger causality test reveals that there are causal relationships among the variables. Our findings have policy implications, including the imposition of a "polluter pays" mechanism, such as the implementation of a carbon tax or pollution trading, to raise the urban population's awareness of the importance of adopting renewable energy and to support clean, environmentally friendly technology.
NASA Technical Reports Server (NTRS)
Sidik, S. M.
1972-01-01
A sequential adaptive experimental design procedure for a related problem is studied. It is assumed that a finite set of potential linear models relating certain controlled variables to an observed variable is postulated, and that exactly one of these models is correct. The problem is to sequentially design most informative experiments so that the correct model equation can be determined with as little experimentation as possible. Discussion includes: structure of the linear models; prerequisite distribution theory; entropy functions and the Kullback-Leibler information function; the sequential decision procedure; and computer simulation results. An example of application is given.
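The entropy-based design criterion described above can be made concrete for the simplest case: with two rival linear models and Gaussian noise of common variance, the Kullback-Leibler divergence between the two predictive distributions at a design point x reduces to (μ1(x) − μ2(x))² / (2σ²), so the most informative next experiment is the controlled-variable setting that maximizes the squared difference of the model predictions. The model coefficients and design range below are illustrative assumptions.

```python
import numpy as np

# Two rival linear models of the observable: y = a + b*x + noise, noise ~ N(0, s2)
model1 = (0.0, 1.0)      # (intercept, slope)
model2 = (0.5, 0.8)
sigma2 = 0.25

def kl_between_predictions(x, m1, m2, s2):
    """KL( N(mu1, s2) || N(mu2, s2) ) = (mu1 - mu2)^2 / (2 s2)."""
    mu1 = m1[0] + m1[1] * x
    mu2 = m2[0] + m2[1] * x
    return (mu1 - mu2) ** 2 / (2.0 * s2)

# Candidate settings of the controlled variable: pick the most discriminating one.
candidates = np.linspace(-5.0, 5.0, 101)
gains = kl_between_predictions(candidates, model1, model2, sigma2)
best_x = candidates[np.argmax(gains)]
```

Here the prediction gap μ1 − μ2 = −0.5 + 0.2x grows fastest in magnitude at the lower end of the range, so the procedure selects x = −5, where one further observation discriminates the models most sharply.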
NASA Astrophysics Data System (ADS)
Fahimi, Farzad; Yaseen, Zaher Mundher; El-shafie, Ahmed
2017-05-01
Since the middle of the twentieth century, artificial intelligence (AI) models have been used widely in engineering and science problems. Water resource variable modeling and prediction are among the most challenging issues in water engineering. The artificial neural network (ANN) is a common approach used to tackle this problem with viable and efficient models. Numerous ANN models have been successfully developed to achieve more accurate results. In the current review, different ANN models in water resource applications and hydrological variable predictions are reviewed and outlined. In addition, recent hybrid models and their structures, input preprocessing, and optimization techniques are discussed and the results are compared with similar previous studies. Moreover, to achieve a comprehensive view of the literature, many articles that applied ANN models together with other techniques are included. Consequently, the coupling procedure, model evaluation, and performance comparison of hybrid models with conventional ANN models are assessed, as well as the taxonomy and structures of hybrid ANN models. Finally, current challenges and recommendations for future research are indicated and new hybrid approaches are proposed.
Abdollahi, Yadollah; Sairi, Nor Asrina; Said, Suhana Binti Mohd; Abouzari-lotf, Ebrahim; Zakaria, Azmi; Sabri, Mohd Faizul Bin Mohd; Islam, Aminul; Alias, Yatimah
2015-11-05
It is believed that 80% of industrial carbon dioxide can be controlled by separation and storage technologies that use blended ionic liquid absorbers. Among the blended absorbers, the mixture of water, N-methyldiethanolamine (MDEA) and guanidinium trifluoromethane sulfonate (gua) has shown superior stripping qualities. However, the blended solution exhibits high viscosity, which raises the cost of the separation process. In this work, the fabrication of the blend was scheduled, that is, the process was arranged, controlled and optimized. The blend's components and the operating temperature were modeled and optimized as effective input variables to minimize viscosity as the final output, using a back-propagation artificial neural network (ANN). The modeling was carried out with four mathematical algorithms, each with its own experimental design, to obtain the optimum topology using the root mean squared error (RMSE), R-squared (R²) and absolute average deviation (AAD). The final model (QP-4-8-1), with the minimum RMSE and AAD and the highest R², was selected to guide the fabrication of the blended solution. The model was then applied to obtain the optimum initial levels of the input variables: temperature 303-323 K, x[gua] 0-0.033, x[MDEA] 0.3-0.4, and x[H2O] 0.7-1.0. Moreover, the model yielded the relative importance order of the variables, x[gua] > temperature > x[MDEA] > x[H2O]; hence none of the variables was negligible in the fabrication. Furthermore, the model predicted the optimum points of the variables for minimizing the viscosity, and these were validated by further experiments. The validated results confirmed the schedulability of the model. Accordingly, the ANN succeeds in modeling the initial components of blended solutions used as CO2-capture absorbers in separation technologies, in a form that can be scaled up to industrial level. Copyright © 2015 Elsevier B.V. All rights reserved.
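The three model-selection statistics named in the abstract (RMSE, R² and AAD) have standard definitions that can be computed as below. The example measured/predicted values are invented for illustration only; they are not the paper's viscosity data.

```python
import numpy as np

def rmse(y, yhat):
    """Root mean squared error."""
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

def r_squared(y, yhat):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return float(1.0 - ss_res / ss_tot)

def aad(y, yhat):
    """Absolute average deviation, as a fraction of the measured values."""
    return float(np.mean(np.abs((y - yhat) / y)))

# Example: hypothetical measured viscosities vs. predictions from one candidate topology
y = np.array([10.0, 12.0, 15.0, 20.0])
yhat = np.array([10.5, 11.5, 15.5, 19.0])
scores = (rmse(y, yhat), r_squared(y, yhat), aad(y, yhat))
```

Topology selection as described in the abstract amounts to computing this triple for each candidate network and keeping the one with minimum RMSE and AAD and maximum R².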
A site specific model and analysis of the neutral somatic mutation rate in whole-genome cancer data.
Bertl, Johanna; Guo, Qianyun; Juul, Malene; Besenbacher, Søren; Nielsen, Morten Muhlig; Hornshøj, Henrik; Pedersen, Jakob Skou; Hobolth, Asger
2018-04-19
Detailed modelling of the neutral mutational process in cancer cells is crucial for identifying driver mutations and understanding the mutational mechanisms that act during cancer development. The neutral mutational process is very complex: whole-genome analyses have revealed that the mutation rate differs between cancer types, between patients and along the genome depending on the genetic and epigenetic context. Therefore, methods that predict the number of different types of mutations in regions or specific genomic elements must consider local genomic explanatory variables. A major drawback of most methods is the need to average the explanatory variables across the entire region or genomic element. This procedure is particularly problematic if the explanatory variable varies dramatically in the element under consideration. To take into account the fine scale of the explanatory variables, we model the probabilities of different types of mutations for each position in the genome by multinomial logistic regression. We analyse 505 cancer genomes from 14 different cancer types and compare the performance in predicting mutation rate for both regional based models and site-specific models. We show that for 1000 randomly selected genomic positions, the site-specific model predicts the mutation rate much better than regional based models. We use a forward selection procedure to identify the most important explanatory variables. The procedure identifies site-specific conservation (phyloP), replication timing, and expression level as the best predictors for the mutation rate. Finally, our model confirms and quantifies certain well-known mutational signatures. We find that our site-specific multinomial regression model outperforms the regional based models. The possibility of including genomic variables on different scales and patient specific variables makes it a versatile framework for studying different mutational mechanisms. 
Our model can serve as the neutral null model for the mutational process; regions that deviate from the null model are candidates for elements that drive cancer development.
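The core of the site-specific model, multinomial logistic regression over per-position explanatory variables, can be sketched as plain softmax regression trained by gradient descent. Everything below is synthetic: the three features are stand-ins for covariates like phyloP, replication timing and expression, and the three classes are stand-ins for mutation types; this is not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(3)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)    # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Synthetic data: 300 genomic positions, 3 context features, 3 mutation-type classes.
n, d, k = 300, 3, 3
W_true = rng.normal(size=(d, k)) * 2.0
X = rng.normal(size=(n, d))
y = np.array([rng.choice(k, p=p) for p in softmax(X @ W_true)])
Y = np.eye(k)[y]                            # one-hot labels

# Multinomial logistic regression fit by batch gradient descent on the
# cross-entropy loss (the gradient is X^T (P - Y) / n).
W = np.zeros((d, k))
for _ in range(500):
    P = softmax(X @ W)
    W -= 0.1 * (X.T @ (P - Y)) / n

acc = float(np.mean(softmax(X @ W).argmax(axis=1) == y))
```

The fitted W plays the role of the per-covariate effect sizes; in the paper's setting the model additionally emits a full probability vector per position, which is what makes position-level (rather than region-averaged) prediction possible.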
Relating Factor Models for Longitudinal Data to Quasi-Simplex and NARMA Models
ERIC Educational Resources Information Center
Rovine, Michael J.; Molenaar, Peter C. M.
2005-01-01
In this article we show the one-factor model can be rewritten as a quasi-simplex model. Using this result along with addition theorems from time series analysis, we describe a common general model, the nonstationary autoregressive moving average (NARMA) model, that includes as a special case, any latent variable model with continuous indicators…
Forward Modeling of Oxygen Isotope Variability in Tropical Andean Ice Cores
NASA Astrophysics Data System (ADS)
Vuille, M. F.; Hurley, J. V.; Hardy, D. R.
2016-12-01
Ice core records from the tropical Andes serve as important archives of past tropical Pacific SST variability and changes in monsoon intensity upstream over the Amazon basin. Yet the interpretation of the oxygen isotopic signal in these ice cores remains controversial. Based on 10 years of continuous on-site glaciologic, meteorologic and isotopic measurements at the summit of the world's largest tropical ice cap, Quelccaya, in southern Peru, we developed a process-based physical forward model (proxy system model), capable of simulating intraseasonal, seasonal and interannual variability in delta-18O as observed in snow pits and short cores. Our results highlight the importance of taking into account post-depositional effects (sublimation and isotopic enrichment) to properly simulate the seasonal cycle. Intraseasonal variability is underestimated in our model unless the effects of cold air incursions, triggering significant monsoonal snowfall and more negative delta-18O values, are included. A number of sensitivity tests highlight the influence of changing boundary conditions on the final snow isotopic profile. Such tests also show that our model provides much more realistic data than applying direct model output of precipitation delta-18O from isotope-enabled climate models (SWING ensemble). The forward model was calibrated with and run under present-day conditions, but it can also be driven with past climate forcings to reconstruct paleo-monsoon variability and investigate the influence of changes in radiative forcings (solar, volcanic) on delta-18O variability in Andean snow. The model is transferable and may be used to render a paleoclimatic context at other ice core locations.
Viscoelasticity, postseismic slip, fault interactions, and the recurrence of large earthquakes
Michael, A.J.
2005-01-01
The Brownian Passage Time (BPT) model for earthquake recurrence is modified to include transient deformation due to either viscoelasticity or deep post seismic slip. Both of these processes act to increase the rate of loading on the seismogenic fault for some time after a large event. To approximate these effects, a decaying exponential term is added to the BPT model's uniform loading term. The resulting interevent time distributions remain approximately lognormal, but the balance between the level of noise (e.g., unknown fault interactions) and the coefficient of variability of the interevent time distribution changes depending on the shape of the loading function. For a given level of noise in the loading process, transient deformation has the effect of increasing the coefficient of variability of earthquake interevent times. Conversely, the level of noise needed to achieve a given level of variability is reduced when transient deformation is included. Using less noise would then increase the effect of known fault interactions modeled as stress or strain steps because they would be larger with respect to the noise. If we only seek to estimate the shape of the interevent time distribution from observed earthquake occurrences, then the use of a transient deformation model will not dramatically change the results of a probability study because a similar shaped distribution can be achieved with either uniform or transient loading functions. However, if the goal is to estimate earthquake probabilities based on our increasing understanding of the seismogenic process, including earthquake interactions, then including transient deformation is important to obtain accurate results. For example, a loading curve based on the 1906 earthquake, paleoseismic observations of prior events, and observations of recent deformation in the San Francisco Bay region produces a 40% greater variability in earthquake recurrence than a uniform loading model with the same noise level.
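The modification described above, adding a decaying exponential to the BPT model's uniform loading term, can be illustrated with a toy Brownian relaxation oscillator: the state rises with the loading function plus Brownian noise and an event occurs when it reaches a failure threshold. All parameter values below are arbitrary illustrative assumptions, not a calibration to the San Francisco Bay region.

```python
import numpy as np

rng = np.random.default_rng(4)

def interevent_times(n_events, rate=1.0, noise=0.3, amp=0.0, tau=0.2, dt=1e-3):
    """Simulate interevent times of a Brownian relaxation oscillator whose
    loading rate is `rate` plus an optional post-event transient term whose
    integral over the cycle is `amp` (decaying with time constant `tau`);
    an event occurs when the state reaches 1, then the state resets."""
    times = []
    for _ in range(n_events):
        x, t = 0.0, 0.0
        while x < 1.0:
            load = rate + (amp / tau) * np.exp(-t / tau)
            x += load * dt + noise * np.sqrt(dt) * rng.normal()
            t += dt
        times.append(t)
    return np.array(times)

def cv(a):
    """Coefficient of variation of the interevent-time sample."""
    return float(a.std() / a.mean())

uniform = interevent_times(200)                    # uniform loading only
transient = interevent_times(200, amp=0.2, tau=0.2)  # with post-event transient
```

Comparing cv(uniform) and cv(transient) for a fixed noise level is the kind of experiment the abstract describes: the transient term accelerates reloading just after an event, shortening the mean recurrence interval and reshaping the interevent-time distribution.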
Use of LANDSAT images of vegetation cover to estimate effective hydraulic properties of soils
NASA Technical Reports Server (NTRS)
Eagleson, Peter S.; Jasinski, Michael F.
1988-01-01
This work focuses on the characterization of natural, spatially variable, semivegetated landscapes using a linear, stochastic, canopy-soil reflectance model. A first application of the model was the investigation of the effects of subpixel and regional variability of scenes on the shape and structure of red-infrared scattergrams. Additionally, the model was used to investigate the inverse problem, the estimation of subpixel vegetation cover, given only the scattergrams of simulated satellite scale multispectral scenes. The major aspects of that work, including recent field investigations, are summarized.
Clark, Renee M; Besterfield-Sacre, Mary E
2009-03-01
We take a novel approach to analyzing hazardous materials transportation risk in this research. Previous studies analyzed this risk from an operations research (OR) or quantitative risk assessment (QRA) perspective by minimizing or calculating risk along a transport route. Further, even though the majority of incidents occur when containers are unloaded, the research has not focused on transportation-related activities, including container loading and unloading. In this work, we developed a decision model of a hazardous materials release during unloading using actual data and an exploratory data modeling approach. Previous studies have had a theoretical perspective in terms of identifying and advancing the key variables related to this risk, and there has not been a focus on probability and statistics-based approaches for doing this. Our decision model empirically identifies the critical variables using an exploratory methodology for a large, highly categorical database involving latent class analysis (LCA), loglinear modeling, and Bayesian networking. Our model identified the most influential variables and countermeasures for two consequences of a hazmat incident, dollar loss and release quantity, and is one of the first models to do this. The most influential variables were found to be related to the failure of the container. In addition to analyzing hazmat risk, our methodology can be used to develop data-driven models for strategic decision making in other domains involving risk.
Black-white preterm birth disparity: a marker of inequality
Purpose. The racial disparity in preterm birth (PTB) is a persistent feature of perinatal epidemiology, inconsistently modeled in the literature. Rather than include race as an explanatory variable, or employ race-stratified models, we sought to directly model the PTB disparity ...
Modeling the survival kinetics of Salmonella in tree nuts for use in risk assessment.
Santillana Farakos, Sofia M; Pouillot, Régis; Anderson, Nathan; Johnson, Rhoma; Son, Insook; Van Doren, Jane
2016-06-16
Salmonella has been shown to survive in tree nuts over long periods of time. This survival capacity and its variability are key elements for risk assessment of Salmonella in tree nuts. The aim of this study was to develop a mathematical model to predict survival of Salmonella in tree nuts at ambient storage temperatures that considers variability and uncertainty separately and can easily be incorporated into a risk assessment model. Data on Salmonella survival on raw almonds, pecans, pistachios and walnuts were collected from the peer reviewed literature. The Weibull model was chosen as the baseline model and various fixed effect and mixed effect models were fit to the data. The best model identified through statistical analysis testing was then used to develop a hierarchical Bayesian model. Salmonella in tree nuts showed slow declines at temperatures ranging from 21°C to 24°C. A high degree of variability in survival was observed across tree nut studies reported in the literature. Statistical analysis results indicated that the best applicable model was a mixed effect model that included a fixed and random variation of δ per tree nut (which is the time it takes for the first log10 reduction) and a fixed variation of ρ per tree nut (parameter which defines the shape of the curve). Higher estimated survival rates (δ) were obtained for Salmonella on pistachios, followed in decreasing order by pecans, almonds and walnuts. The posterior distributions obtained from Bayesian inference were used to estimate the variability in the log10 decrease levels in survival for each tree nut, and the uncertainty of these estimates. These modeled uncertainty and variability distributions of the estimates can be used to obtain a complete exposure assessment of Salmonella in tree nuts when including time-temperature parameters for storage and consumption data. 
The statistical approach presented in this study may be applied to any studies that aim to develop predictive models to be implemented in a probabilistic exposure assessment or a quantitative microbial risk assessment. Published by Elsevier B.V.
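The Weibull survival model used above, log10(N(t)/N0) = −(t/δ)^ρ with δ the time to the first log10 reduction and ρ the shape parameter, can be fitted by a simple linearization: log(−log10 reduction) = ρ·log t − ρ·log δ. The sketch below uses synthetic data with invented parameter values; it is an illustration of the model form, not the study's hierarchical Bayesian fit.

```python
import numpy as np

def fit_weibull_survival(t, log10_reduction):
    """Fit log10(N(t)/N0) = -(t/delta)^rho via the linearization
    log(-y) = rho*log(t) - rho*log(delta); returns (delta, rho)."""
    lx = np.log(t)
    ly = np.log(-log10_reduction)
    rho, intercept = np.polyfit(lx, ly, 1)
    delta = np.exp(-intercept / rho)
    return float(delta), float(rho)

# Synthetic survival curve: delta = 30 days, rho = 0.8 (hypothetical values)
t = np.array([7.0, 14.0, 30.0, 60.0, 120.0, 240.0])
y = -(t / 30.0) ** 0.8
delta, rho = fit_weibull_survival(t, y)
```

Note that y(δ) = −1 by construction, matching the abstract's definition of δ as the time of the first log10 reduction; ρ < 1 gives the upward-concave "tailing" curves typical of slow survival declines at ambient temperature.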
Rule, Michael E.; Vargas-Irwin, Carlos; Donoghue, John P.; Truccolo, Wilson
2015-01-01
Understanding the sources of variability in single-neuron spiking responses is an important open problem for the theory of neural coding. This variability is thought to result primarily from spontaneous collective dynamics in neuronal networks. Here, we investigate how well collective dynamics reflected in motor cortex local field potentials (LFPs) can account for spiking variability during motor behavior. Neural activity was recorded via microelectrode arrays implanted in ventral and dorsal premotor and primary motor cortices of non-human primates performing naturalistic 3-D reaching and grasping actions. Point process models were used to quantify how well LFP features accounted for spiking variability not explained by the measured 3-D reach and grasp kinematics. LFP features included the instantaneous magnitude, phase and analytic-signal components of narrow band-pass filtered (δ,θ,α,β) LFPs, and analytic signal and amplitude envelope features in higher-frequency bands. Multiband LFP features predicted single-neuron spiking (1ms resolution) with substantial accuracy as assessed via ROC analysis. Notably, however, models including both LFP and kinematics features displayed marginal improvement over kinematics-only models. Furthermore, the small predictive information added by LFP features to kinematic models was redundant to information available in fast-timescale (<100 ms) spiking history. Overall, information in multiband LFP features, although predictive of single-neuron spiking during movement execution, was redundant to information available in movement parameters and spiking history. Our findings suggest that, during movement execution, collective dynamics reflected in motor cortex LFPs primarily relate to sensorimotor processes directly controlling movement output, adding little explanatory power to variability not accounted by movement parameters. PMID:26157365
NASA Astrophysics Data System (ADS)
Hanan, Lu; Qiushi, Li; Shaobin, Li
2016-12-01
This paper presents an integrated optimization design method in which uniform design, response surface methodology and a genetic algorithm are used in combination. In detail, uniform design is used to select the experimental sampling points in the experimental domain, and the system performance at these points is evaluated by means of computational fluid dynamics to construct a database. After that, response surface methodology is employed to generate a surrogate mathematical model relating the optimization objective to the design variables. Subsequently, a genetic algorithm is applied to the surrogate model to acquire the optimal solution subject to the constraints. The method has been applied to the optimization design of an axisymmetric diverging duct, dealing with three design variables including one qualitative variable and two quantitative variables. The modeling and optimization design method performs well in improving the duct aerodynamic performance and can also be applied to wider fields of mechanical design; it can be seen as a useful tool for engineering designers, reducing design time and computational cost.
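The sample-surrogate-search pipeline described above can be sketched in miniature: fit a quadratic response surface to sampled design points, then minimize the surrogate with a small genetic algorithm (truncation selection plus Gaussian mutation). The two-variable quadratic objective is a cheap stand-in for the CFD evaluations, and all parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def features(X):
    """Quadratic response-surface basis in two variables: 1, x1, x2, x1^2, x2^2, x1*x2."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1**2, x2**2, x1 * x2])

def expensive_objective(X):
    """Stand-in for the CFD performance evaluation (minimum at (1, -0.5))."""
    return (X[:, 0] - 1.0) ** 2 + (X[:, 1] + 0.5) ** 2

# Step 1: sample the design domain and build the database.
X_samples = rng.uniform(-2, 2, size=(30, 2))
y_samples = expensive_objective(X_samples)

# Step 2: response surface methodology -- least-squares fit of the surrogate.
coef, *_ = np.linalg.lstsq(features(X_samples), y_samples, rcond=None)

def surrogate(X):
    return features(X) @ coef

# Step 3: minimal genetic algorithm on the cheap surrogate.
pop = rng.uniform(-2, 2, size=(40, 2))
for _ in range(60):
    fit = surrogate(pop)
    parents = pop[np.argsort(fit)[:20]]                    # keep the best half
    children = parents + rng.normal(scale=0.1, size=parents.shape)
    pop = np.vstack([parents, children])
best = pop[np.argmin(surrogate(pop))]
```

Because the elite half survives unchanged each generation, the best surrogate value decreases monotonically; the expensive objective is only ever evaluated at the 30 design points.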
China's Air Quality and Respiratory Disease Mortality Based on the Spatial Panel Model.
Cao, Qilong; Liang, Ying; Niu, Xueting
2017-09-18
Background: Air pollution has become an important factor restricting China's economic development and has brought with it a series of social problems, including the impact of air pollution on residents' health, a topical issue in China. Methods: Taking this spatial imbalance into account, the paper is based on a spatial panel data model of PM2.5. Respiratory disease mortality in 31 Chinese provinces from 2004 to 2008 is taken as the main variable to study the spatial effect and the impact of air quality on respiratory disease mortality on a large scale. Results: A spatial correlation was found between the mortality of respiratory diseases across Chinese provinces. This spatial correlation can be explained by the spatial effect of PM2.5 pollution when other variables are controlled. Conclusions: Compared with the traditional non-spatial model, the spatial model better describes the spatial relationship between variables, ensuring the conclusions are sound and allowing the spatial effect between variables to be measured.
A variable-gain output feedback control design approach
NASA Technical Reports Server (NTRS)
Haylo, Nesim
1989-01-01
A multi-model design technique to find a variable-gain control law defined over the whole operating range is proposed. The design is formulated as an optimal control problem which minimizes a cost function weighing the performance at many operating points. The solution is obtained by embedding into the Multi-Configuration Control (MCC) problem, a multi-model robust control design technique. In contrast to conventional gain scheduling which uses a curve fit of single model designs, the optimal variable-gain control law stabilizes the plant at every operating point included in the design. An iterative algorithm to compute the optimal control gains is presented. The methodology has been successfully applied to reconfigurable aircraft flight control and to nonlinear flight control systems.
Vector wind profile gust model
NASA Technical Reports Server (NTRS)
Adelfang, S. I.
1981-01-01
To enable development of a vector wind gust model suitable for orbital flight test operations and trade studies, hypotheses concerning the distributions of gust component variables were verified. Methods are presented for verifying the hypotheses that observed gust variables, including gust component magnitude, gust length, u range, and L range, are gamma distributed. Observed gust modulus is shown to be drawn from a bivariate gamma distribution that can be approximated with a Weibull distribution. Zonal and meridional gust components are bivariate gamma distributed. An analytical method for testing for bivariate gamma distributed variables is presented. Two distributions for gust modulus are described and the results of extensive hypothesis testing of one of the distributions are presented. The validity of the gamma distribution for representation of gust component variables is established.
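A first step in checking whether a gust variable is plausibly gamma distributed is to estimate the gamma parameters, for instance by the method of moments: shape k = mean²/variance and scale θ = variance/mean. The sketch below applies this to a synthetic gust-magnitude sample with invented parameters; it illustrates the estimation step only, not the report's hypothesis-testing procedure.

```python
import numpy as np

rng = np.random.default_rng(6)

def gamma_moments_fit(x):
    """Method-of-moments estimates for a gamma distribution:
    shape k = mean^2 / var, scale theta = var / mean."""
    m, v = x.mean(), x.var()
    return m * m / v, v / m

# Synthetic gust-magnitude sample: gamma with shape 2 and scale 3 (hypothetical).
sample = rng.gamma(shape=2.0, scale=3.0, size=20000)
k_hat, theta_hat = gamma_moments_fit(sample)
```

With the parameters in hand, a goodness-of-fit test (e.g. chi-square or Kolmogorov-Smirnov against the fitted gamma CDF) would complete the hypothesis verification described in the abstract.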
Common quandaries and their practical solutions in Bayesian network modeling
Bruce G. Marcot
2017-01-01
Use and popularity of Bayesian network (BN) modeling has greatly expanded in recent years, but many common problems remain. Here, I summarize key problems in BN model construction and interpretation,along with suggested practical solutions. Problems in BN model construction include parameterizing probability values, variable definition, complex network structures,...
A Cognitive Diagnosis Model for Cognitively Based Multiple-Choice Options
ERIC Educational Resources Information Center
de la Torre, Jimmy
2009-01-01
Cognitive or skills diagnosis models are discrete latent variable models developed specifically for the purpose of identifying the presence or absence of multiple fine-grained skills. However, applications of these models typically involve dichotomous or dichotomized data, including data from multiple-choice (MC) assessments that are scored as…
Random vectors and spatial analysis by geostatistics for geotechnical applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Young, D.S.
1987-08-01
Geostatistics is extended to the spatial analysis of vector variables by defining the estimation variance and vector variogram in terms of the magnitude of difference vectors. Many random variables in geotechnology are vectors rather than scalars, and their structural analysis requires interpolation of sampled variables to construct and characterize structural models. A better local estimator results in higher-quality input models, and geostatistics can provide such estimators: kriging estimators. The efficiency of geostatistics for vector variables is demonstrated in a case study of rock joint orientations in geological formations. The positive cross-validation encourages application of geostatistics to spatial analysis of random vectors in geoscience as well as various geotechnical fields, including optimum site characterization, rock mechanics for mining and civil structures, cavability analysis of block caving, petroleum engineering, and hydrologic and hydraulic modeling.
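The core idea of a vector variogram built from magnitudes of difference vectors can be sketched as follows. The locations, orientation field, and lag bins are invented for illustration and are not the paper's data:

```python
# Hypothetical sketch: an empirical variogram for a vector-valued variable,
# using squared magnitudes of difference vectors between pairs of samples.
import numpy as np

def vector_variogram(coords, vectors, lag_edges):
    """Empirical semivariance: gamma(h) = 0.5 * mean |v_i - v_j|^2 over pairs at lag h."""
    n = len(coords)
    dists, sqdiffs = [], []
    for i in range(n):
        for j in range(i + 1, n):
            dists.append(np.linalg.norm(coords[i] - coords[j]))
            sqdiffs.append(np.sum((vectors[i] - vectors[j]) ** 2))
    dists, sqdiffs = np.array(dists), np.array(sqdiffs)
    gamma = []
    for lo, hi in zip(lag_edges[:-1], lag_edges[1:]):
        mask = (dists >= lo) & (dists < hi)
        gamma.append(0.5 * sqdiffs[mask].mean() if mask.any() else np.nan)
    return np.array(gamma)

rng = np.random.default_rng(1)
coords = rng.uniform(0, 10, size=(50, 2))                 # sample locations
angles = 0.3 * coords[:, 0] + rng.normal(0, 0.1, 50)      # spatially trending orientations
vectors = np.column_stack([np.cos(angles), np.sin(angles)])  # unit orientation vectors
gamma = vector_variogram(coords, vectors, lag_edges=np.array([0.0, 2.0, 4.0, 8.0]))
print(gamma)
```

For spatially structured orientations, the semivariance grows with lag distance, which is what a kriging model of joint orientations would exploit.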
Continuous-variable protocol for oblivious transfer in the noisy-storage model.
Furrer, Fabian; Gehring, Tobias; Schaffner, Christian; Pacher, Christoph; Schnabel, Roman; Wehner, Stephanie
2018-04-13
Cryptographic protocols are the backbone of our information society. This includes two-party protocols which offer protection against distrustful players. Such protocols can be built from a basic primitive called oblivious transfer. We present and experimentally demonstrate here a quantum protocol for oblivious transfer for optical continuous-variable systems, and prove its security in the noisy-storage model. This model allows us to establish security by sending more quantum signals than an attacker can reliably store during the protocol. The security proof is based on uncertainty relations which we derive for continuous-variable systems, that differ from the ones used in quantum key distribution. We experimentally demonstrate in a proof-of-principle experiment the proposed oblivious transfer protocol for various channel losses by using entangled two-mode squeezed states measured with balanced homodyne detection. Our work enables the implementation of arbitrary two-party quantum cryptographic protocols with continuous-variable communication systems.
An uncertainty analysis of the flood-stage upstream from a bridge.
Sowiński, M
2006-01-01
The paper begins with the formulation of the problem in the form of a general performance function. Next, the Latin hypercube sampling (LHS) technique, a modified version of the Monte Carlo method, is briefly described. The uncertainty analysis of the flood stage upstream from a bridge starts with a description of the hydraulic model. The model concept is based on the HEC-RAS model developed for subcritical flow under a bridge without piers, in which the energy equation is applied. The next section characterizes the basic variables, including a specification of their statistics (means and variances). Next, the problem of correlated variables is discussed and assumptions concerning correlation among basic variables are formulated. The analysis of results is based on LHS ranking lists obtained from the computer package UNCSAM. Results for two examples are given: one for independent and the other for correlated variables.
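The LHS step above can be sketched in a few lines. The two input variables, their distributions, and the toy performance function are assumptions for illustration, not the paper's basic variables; SciPy's `scipy.stats.qmc` module is assumed available:

```python
# Minimal sketch of Latin hypercube sampling for uncertainty propagation:
# stratified uniform samples are mapped through inverse CDFs of the assumed
# input distributions, then pushed through a toy performance function.
import numpy as np
from scipy.stats import qmc, norm

sampler = qmc.LatinHypercube(d=2, seed=42)
u = sampler.random(n=100)  # 100 stratified samples on the unit square

# Hypothetical basic variables: channel roughness and discharge.
roughness = norm.ppf(u[:, 0], loc=0.035, scale=0.005)   # Manning's n
discharge = norm.ppf(u[:, 1], loc=150.0, scale=20.0)    # m^3/s

# Toy performance function: stage rises with roughness and discharge.
stage = 2.0 + 40.0 * roughness + 0.004 * discharge
print(f"stage mean={stage.mean():.3f}, std={stage.std():.3f}")
```

Because each marginal is stratified into 100 equal-probability bins, LHS estimates of the output mean converge faster than plain Monte Carlo for the same sample size.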
Measuring individual differences in responses to date-rape vignettes using latent variable models.
Tuliao, Antover P; Hoffman, Lesa; McChargue, Dennis E
2017-01-01
Vignette methodology can be a flexible and powerful way to examine individual differences in response to dangerous real-life scenarios. However, most studies underutilize the usefulness of such methodology by analyzing only one outcome, which limits the ability to track event-related changes (e.g., vacillation in risk perception). The current study was designed to illustrate the dynamic influence of risk perception on exit point from a date-rape vignette. Our primary goal was to provide an illustrative example of how to use latent variable models for vignette methodology, including latent growth curve modeling with piecewise slopes, as well as latent variable measurement models. Through the combination of a step-by-step exposition in this text and corresponding model syntax available electronically, we detail an alternative statistical "blueprint" to enhance future violence research efforts using vignette methodology. Aggr. Behav. 43:60-73, 2017. © 2016 Wiley Periodicals, Inc.
NASA Technical Reports Server (NTRS)
Abbas, Khaled A.; Fattah, Nabil Abdel; Reda, Hala R.
2003-01-01
This research is concerned with developing passenger demand models for international aviation from/to Egypt. In this context, the aviation sector in Egypt is represented by its biggest and main airport, Cairo airport, as well as by the main Egyptian international air carrier, Egyptair. The developed models use two variables to represent aviation demand: the total number of international flights originating from and attracted to Cairo airport, and the total number of passengers using Egyptair international flights originating from and attracted to Cairo airport. These demand variables were related, using different functional forms, to several explanatory variables including population, GDP, and number of foreign tourists. Finally, two models were selected based on their logical acceptability, best fit, and statistical significance. To demonstrate the usefulness of the developed models, they were used to forecast future demand patterns.
Applications of MIDAS regression in analysing trends in water quality
NASA Astrophysics Data System (ADS)
Penev, Spiridon; Leonte, Daniela; Lazarov, Zdravetz; Mann, Rob A.
2014-04-01
We discuss novel statistical methods in analysing trends in water quality. Such analysis uses complex data sets of different classes of variables, including water quality, hydrological and meteorological. We analyse the effect of rainfall and flow on trends in water quality utilising a flexible model called Mixed Data Sampling (MIDAS). This model arises because of the mixed frequency in the data collection. Typically, water quality variables are sampled fortnightly, whereas the rain data is sampled daily. The advantage of using MIDAS regression is in the flexible and parsimonious modelling of the influence of the rain and flow on trends in water quality variables. We discuss the model and its implementation on a data set from the Shoalhaven Supply System and Catchments in the state of New South Wales, Australia. Information criteria indicate that MIDAS modelling improves upon simplistic approaches that do not utilise the mixed data sampling nature of the data.
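The mixed-frequency regression idea above can be sketched with an exponential Almon lag polynomial, the standard parsimonious weighting used in MIDAS models. The variable names, data, and fixed weight parameters below are hypothetical, not the Shoalhaven data:

```python
# Illustrative sketch of MIDAS-style regression: a fortnightly water-quality
# variable is regressed on 14 daily rainfall lags compressed into a single
# regressor by normalized exponential Almon weights.
import numpy as np

def exp_almon_weights(theta1, theta2, n_lags):
    """Normalized exponential Almon weights over n_lags high-frequency lags."""
    k = np.arange(1, n_lags + 1)
    w = np.exp(theta1 * k + theta2 * k**2)
    return w / w.sum()

rng = np.random.default_rng(7)
n_obs, n_lags = 200, 14
daily_rain = rng.exponential(5.0, size=(n_obs, n_lags))  # 14 daily lags per fortnight

true_w = exp_almon_weights(-0.1, -0.01, n_lags)  # recent rain weighted most
turbidity = 1.0 + 2.5 * daily_rain @ true_w + rng.normal(0, 0.1, n_obs)

# With the weight parameters fixed (in practice estimated by nonlinear least
# squares), the model is linear in the intercept and slope.
X = np.column_stack([np.ones(n_obs), daily_rain @ true_w])
b0, b1 = np.linalg.lstsq(X, turbidity, rcond=None)[0]
print(f"intercept={b0:.2f}, rain effect={b1:.2f}")  # true values 1.0 and 2.5
```

The parsimony comes from describing all 14 lag coefficients with just two shape parameters, which is why MIDAS suits sparsely sampled response variables.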
Variable Renewable Energy in Long-Term Planning Models: A Multi-Model Perspective
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cole, Wesley; Frew, Bethany; Mai, Trieu
Long-term capacity expansion models of the U.S. electricity sector have long been used to inform electric sector stakeholders and decision-makers. With the recent surge in variable renewable energy (VRE) generators, primarily wind and solar photovoltaics, the need to appropriately represent VRE generators in these long-term models has increased. VRE generators are especially difficult to represent for a variety of reasons, including their variability, uncertainty, and spatial diversity. This report summarizes the analyses and model experiments that were conducted as part of two workshops on modeling VRE for national-scale capacity expansion models. It discusses the various methods for treating VRE among four modeling teams from the Electric Power Research Institute (EPRI), the U.S. Energy Information Administration (EIA), the U.S. Environmental Protection Agency (EPA), and the National Renewable Energy Laboratory (NREL). The report reviews the findings from the two workshops and emphasizes the areas where there is still need for additional research and development on analysis tools to incorporate VRE into long-term planning and decision-making. This research is intended to inform the energy modeling community on the modeling of variable renewable resources, and is not intended to advocate for or against any particular energy technologies, resources, or policies.
Undergraduate Nurse Variables that Predict Academic Achievement and Clinical Competence in Nursing
ERIC Educational Resources Information Center
Blackman, Ian; Hall, Margaret; Darmawan, I Gusti Ngurah.
2007-01-01
A hypothetical model was formulated to explore factors that influenced academic and clinical achievement for undergraduate nursing students. Sixteen latent variables were considered including the students' background, gender, type of first language, age, their previous successes with their undergraduate nursing studies and status given for…
NASA Astrophysics Data System (ADS)
Lopez, Jon; Moreno, Gala; Lennert-Cody, Cleridy; Maunder, Mark; Sancristobal, Igor; Caballero, Ainhoa; Dagorn, Laurent
2017-06-01
Understanding the relationship between environmental variables and pelagic species concentrations and dynamics is helpful to improve fishery management, especially in a changing environment. Drifting fish aggregating device (DFAD)-associated tuna and non-tuna biomass data from the fishers' echo-sounder buoys operating in the Atlantic Ocean have been modelled as functions of oceanographic (Sea Surface Temperature, Chlorophyll-a, Salinity, Sea Level Anomaly, Thermocline depth and gradient, Geostrophic current, Total Current, Depth) and DFAD variables (DFAD speed, bearing and soak time) using Generalized Additive Mixed Models (GAMMs). Biological interaction (presence of non-tuna species at DFADs) was also included in the tuna model, and found to be significant at this time scale. All variables were included in the analyses but only some of them were highly significant, and variable significance differed among fish groups. In general, most of the fish biomass distribution was explained by the ocean productivity and DFAD-variables. Indeed, this study revealed different environmental preferences for tunas and non-tuna species and suggested the existence of active habitat selection. This improved assessment of environmental and DFAD effects on tuna and non-tuna catchability in the purse seine tuna fishery will contribute to transfer of better scientific advice to regional tuna commissions for the management and conservation of exploited resources.
Effort reward imbalance is associated with vagal withdrawal in Danish public sector employees.
Eller, Nanna Hurwitz; Blønd, Morten; Nielsen, Martin; Kristiansen, Jesper; Netterstrøm, Bo
2011-09-01
The current study analyzed the relationship between psychosocial work environment assessed by the Effort Reward Imbalance Model (ERI-model) and heart rate variability (HRV) measured at baseline and again, two years later, as this relationship is scarcely covered by the literature. Measurements of HRV during seated rest were obtained from 231 public sector employees. The associations between the ERI-model, and HRV were examined using a series of mixed effects models. The dependent variables were the logarithmically transformed levels of HRV-measures. Gender and year of measurement were included as factors, whereas age, and time of measurement were included as covariates. Subject was included as a random effect. Effort and effort reward imbalance were positively associated with heart rate and the ratio between low frequency (LF) and high frequency power (HF) and negatively associated with total power (TP) and HF. Reward was positively associated with TP. Adverse psychosocial work environment according to the ERI-model was associated with HRV, especially in the form of vagal withdrawal and most pronounced in women. Copyright © 2011 Elsevier B.V. All rights reserved.
Schmitt, Neal; Golubovich, Juliya; Leong, Frederick T L
2011-12-01
The impact of measurement invariance and the provision for partial invariance in confirmatory factor analytic models on factor intercorrelations, latent mean differences, and estimates of relations with external variables is investigated for measures of two sets of widely assessed constructs: Big Five personality and the six Holland interests (RIASEC). In comparing models that include provisions for partial invariance with models that do not, the results indicate quite small differences in parameter estimates involving the relations between factors, one relatively large standardized mean difference in factors between the subgroups compared, and relatively small differences in the regression coefficients when the factors are used to predict external variables. The results provide support for the use of partially invariant models, but there does not seem to be a great deal of difference between structural coefficients when the measurement model does or does not include separate estimates of subgroup parameters that differ across subgroups. Future research should include simulations in which the impact of various factors related to invariance is estimated.
Stage-by-Stage and Parallel Flow Path Compressor Modeling for a Variable Cycle Engine
NASA Technical Reports Server (NTRS)
Kopasakis, George; Connolly, Joseph W.; Cheng, Larry
2015-01-01
This paper covers the development of stage-by-stage and parallel flow path compressor modeling approaches for a Variable Cycle Engine. The stage-by-stage compressor modeling approach is an extension of a technique for lumped volume dynamics and performance characteristic modeling. It was developed to improve the accuracy of axial compressor dynamics over lumped volume dynamics modeling. The stage-by-stage compressor model presented here is formulated into a parallel flow path model that includes both axial and rotational dynamics. This is done to enable the study of compressor and propulsion system dynamic performance under flow distortion conditions. The approaches utilized here are generic and should be applicable for the modeling of any axial flow compressor design.
Rotary ultrasonic machining of CFRP: a mechanistic predictive model for cutting force.
Cong, W L; Pei, Z J; Sun, X; Zhang, C L
2014-02-01
Cutting force is one of the most important output variables in rotary ultrasonic machining (RUM) of carbon fiber reinforced plastic (CFRP) composites. Many experimental investigations on cutting force in RUM of CFRP have been reported. However, in the literature, there are no cutting force models for RUM of CFRP. This paper develops a mechanistic predictive model for cutting force in RUM of CFRP. The material removal mechanism of CFRP in RUM has been analyzed first. The model is based on the assumption that brittle fracture is the dominant mode of material removal. CFRP micromechanical analysis has been conducted to represent CFRP as an equivalent homogeneous material to obtain the mechanical properties of CFRP from its components. Based on this model, relationships between input variables (including ultrasonic vibration amplitude, tool rotation speed, feedrate, abrasive size, and abrasive concentration) and cutting force can be predicted. The relationships between input variables and important intermediate variables (indentation depth, effective contact time, and maximum impact force of single abrasive grain) have been investigated to explain predicted trends of cutting force. Experiments are conducted to verify the model, and experimental results agree well with predicted trends from this model. Copyright © 2013 Elsevier B.V. All rights reserved.
A protective factors model for alcohol abuse and suicide prevention among Alaska Native youth.
Allen, James; Mohatt, Gerald V; Fok, Carlotta Ching Ting; Henry, David; Burkett, Rebekah
2014-09-01
This study provides an empirical test of a culturally grounded theoretical model for prevention of alcohol abuse and suicide risk with Alaska Native youth, using a promising set of culturally appropriate measures for the study of the process of change and outcome. This model is derived from qualitative work that generated a heuristic model of protective factors from alcohol (Allen et al. in J Prev Interv Commun 32:41-59, 2006; Mohatt et al. in Am J Commun Psychol 33:263-273, 2004a; Harm Reduct 1, 2004b). Participants included 413 rural Alaska Native youth ages 12-18 who assisted in testing a predictive model of Reasons for Life and Reflective Processes about alcohol abuse consequences as co-occurring outcomes. Specific individual, family, peer, and community level protective factor variables predicted these outcomes. Results suggest prominent roles for these predictor variables as intermediate prevention strategy target variables in a theoretical model for a multilevel intervention. The model guides understanding of underlying change processes in an intervention to increase the ultimate outcome variables of Reasons for Life and Reflective Processes regarding the consequences of alcohol abuse.
Variables affecting the academic and social integration of nursing students.
Zeitlin-Ophir, Iris; Melitz, Osnat; Miller, Rina; Podoshin, Pia; Mesh, Gustavo
2004-07-01
This study attempted to analyze the variables that influence the academic integration of nursing students. The theoretical model presented by Leigler was adapted to the existing conditions in a school of nursing in northern Israel. The independent variables included the student's background; amount of support received in the course of studies; extent of outside family and social commitments; satisfaction with the school's facilities and services; and level of social integration. The dependent variable was the student's level of academic integration. The findings substantiated four central hypotheses, with the study model explaining approximately 45% of the variance in the dependent variable. Academic integration is influenced by a number of variables, the most prominent of which is the social integration of the student with colleagues and educational staff. Among the background variables, country of origin was found to be significant to both social and academic integration for two main groups in the sample: Israeli-born students (both Jewish and Arab) and immigrant students.
NASA Astrophysics Data System (ADS)
Barker, J. Burdette
Spatially informed irrigation management may improve the optimal use of water resources. Sub-field scale water balance modeling and measurement were studied in the context of irrigation management. A spatial remote-sensing-based evapotranspiration and soil water balance model was modified and validated for use in real-time irrigation management. The modeled ET compared well with eddy covariance data from eastern Nebraska. Placement and quantity of sub-field scale soil water content measurement locations was also studied. Variance reduction factor and temporal stability were used to analyze soil water content data from an eastern Nebraska field. No consistent predictor of soil water temporal stability patterns was identified. At least three monitoring locations were needed per irrigation management zone to adequately quantify the mean soil water content. The remote-sensing-based water balance model was used to manage irrigation in a field experiment. The research included an eastern Nebraska field in 2015 and 2016 and a western Nebraska field in 2016 for a total of 210 plot-years. The response of maize and soybean to irrigation using variations of the model were compared with responses from treatments using soil water content measurement and a rainfed treatment. The remote-sensing-based treatment prescribed more irrigation than the other treatments in all cases. Excessive modeled soil evaporation and insufficient drainage times were suspected causes of the model drift. Modifying evaporation and drainage reduced modeled soil water depletion error. None of the included response variables were significantly different between treatments in western Nebraska. In eastern Nebraska, treatment differences for maize and soybean included evapotranspiration and a combined variable including evapotranspiration and deep percolation. Both variables were greatest for the remote-sensing model when differences were found to be statistically significant. 
Differences in maize yield in 2015 were attributed to random error. Soybean yield was lowest for the remote-sensing-based treatment and greatest for rainfed, possibly because of overwatering and lodging. The model performed well considering that it did not include soil water content measurements during the season. Future work should improve the soil evaporation and drainage formulations, because of excessive precipitation and include aerial remote sensing imagery and soil water content measurement as model inputs.
Servo Controlled Variable Pressure Modification to Space Shuttle Hydraulic Pump
NASA Technical Reports Server (NTRS)
Kouns, H. H.
1983-01-01
Engineering drawings show modifications made to convert the constant-pressure control of the model AP27V-7 hydraulic pump to an electrically controlled variable-pressure-setting compensator. A hanger position indicator was included for continuously monitoring hanger angle. A simplex servo driver was furnished for controlling the pressure-setting servovalve. Calibration of the rotary variable displacement transducer is described, as well as pump performance and response characteristics.
Reconstructing Mammalian Sleep Dynamics with Data Assimilation
Sedigh-Sarvestani, Madineh; Schiff, Steven J.; Gluckman, Bruce J.
2012-01-01
Data assimilation is a valuable tool in the study of any complex system, where measurements are incomplete, uncertain, or both. It enables the user to take advantage of all available information including experimental measurements and short-term model forecasts of a system. Although data assimilation has been used to study other biological systems, the study of the sleep-wake regulatory network has yet to benefit from this toolset. We present a data assimilation framework based on the unscented Kalman filter (UKF) for combining sparse measurements together with a relatively high-dimensional nonlinear computational model to estimate the state of a model of the sleep-wake regulatory system. We demonstrate with simulation studies that a few noisy variables can be used to accurately reconstruct the remaining hidden variables. We introduce a metric for ranking relative partial observability of computational models, within the UKF framework, that allows us to choose the optimal variables for measurement and also provides a methodology for optimizing framework parameters such as UKF covariance inflation. In addition, we demonstrate a parameter estimation method that allows us to track non-stationary model parameters and accommodate slow dynamics not included in the UKF filter model. Finally, we show that we can even use observed discretized sleep-state, which is not one of the model variables, to reconstruct model state and estimate unknown parameters. Sleep is implicated in many neurological disorders from epilepsy to schizophrenia, but simultaneous observation of the many brain components that regulate this behavior is difficult. We anticipate that this data assimilation framework will enable better understanding of the detailed interactions governing sleep and wake behavior and provide for better, more targeted, therapies. PMID:23209396
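At the heart of the UKF used above is the unscented transform, which propagates a mean and covariance through a function via deterministically chosen sigma points. The sketch below is a generic textbook formulation (not the authors' code), verified against a linear map, for which the transform is exact:

```python
# Sketch of the unscented transform: 2n+1 sigma points carry the mean and
# covariance of an n-dimensional state through an arbitrary function f.
import numpy as np

def unscented_transform(mean, cov, f, alpha=1e-1, beta=2.0, kappa=0.0):
    n = len(mean)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)
    # Sigma points: the mean, plus symmetric spreads along sqrt-covariance columns.
    sigma = np.vstack([mean, mean + S.T, mean - S.T])
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1 - alpha**2 + beta)
    y = np.array([f(s) for s in sigma])
    y_mean = wm @ y
    d = y - y_mean
    y_cov = (wc[:, None] * d).T @ d
    return y_mean, y_cov

# For a linear map y = A x the transform recovers A @ m and A @ P @ A.T exactly.
m = np.array([1.0, 2.0])
P = np.array([[0.5, 0.1], [0.1, 0.3]])
A = np.array([[2.0, 0.0], [1.0, 1.0]])
ym, yc = unscented_transform(m, P, lambda x: A @ x)
print(ym, yc)
```

In a full UKF, this transform is applied twice per step (through the process model and the observation model), which is how sparse, noisy measurements can reconstruct the hidden variables of the sleep-wake model.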
Coupling of snow and permafrost processes using the Basic Modeling Interface (BMI)
NASA Astrophysics Data System (ADS)
Wang, K.; Overeem, I.; Jafarov, E. E.; Piper, M.; Stewart, S.; Clow, G. D.; Schaefer, K. M.
2017-12-01
We developed a permafrost modeling tool by implementing the Kudryavtsev empirical permafrost active-layer-depth model (the so-called "Ku" component). The model is specifically set up with a Basic Model Interface (BMI), which enhances its potential coupling to other earth-surface process model components. The model is accessible through the Web Modeling Tool in the Community Surface Dynamics Modeling System (CSDMS). The Kudryavtsev model has been applied across all of Alaska to model permafrost distribution at high spatial resolution, and model predictions have been verified against Circumpolar Active Layer Monitoring (CALM) in-situ observations. The Ku component uses monthly meteorological forcing, including air temperature, snow depth, and snow density, and predicts active layer thickness (ALT) and temperature at the top of permafrost (TTOP), which are important factors in snow-hydrological processes. BMI provides an easy approach to coupling models with each other. Here, we present a case of coupling the Ku component to snow process components, including the Snow-Degree-Day (SDD) and Snow-Energy-Balance (SEB) methods, which are existing components in the hydrological model TOPOFLOW. The workflow is: (1) get variables from the meteorology component, set their values in the snow process component, and advance the snow process component; (2) get variables from the meteorology and snow components, provide them to the Ku component, and advance it; (3) get variables from the snow process component, set their values in the meteorology component, and advance the meteorology component. The next phase is to couple the permafrost component with the fully BMI-compliant TOPOFLOW hydrological model, which could provide a useful tool for investigating permafrost hydrological effects.
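The get/set/advance workflow above can be sketched with toy components exposing BMI-style methods. The classes, variable names, and physics below are simplified stand-ins, not CSDMS or TOPOFLOW code:

```python
# Toy sketch of BMI-style coupling: components exchange variables through
# get_value/set_value and are advanced one step at a time with update().
class ToyMeteorology:
    def __init__(self):
        self.state = {"air_temperature": -2.0, "snowfall": 2.0}
    def get_value(self, name):
        return self.state[name]
    def set_value(self, name, value):
        self.state[name] = value
    def update(self):
        self.state["air_temperature"] += 0.5  # simple warming trend

class ToySnow:
    def __init__(self):
        self.state = {"air_temperature": 0.0, "snowfall": 0.0, "snow_depth": 0.0}
    def get_value(self, name):
        return self.state[name]
    def set_value(self, name, value):
        self.state[name] = value
    def update(self):
        melt = max(0.0, 0.3 * self.state["air_temperature"])  # degree-day melt
        self.state["snow_depth"] = max(
            0.0, self.state["snow_depth"] + self.state["snowfall"] - melt)

met, snow = ToyMeteorology(), ToySnow()
for _ in range(12):
    # (1) pass forcing from meteorology to the snow component, then advance it
    snow.set_value("air_temperature", met.get_value("air_temperature"))
    snow.set_value("snowfall", met.get_value("snowfall"))
    snow.update()
    # (2) a permafrost component such as "Ku" would read snow_depth here
    # (3) advance the meteorology component
    met.update()
print(snow.get_value("snow_depth"))
```

The value of the BMI contract is that each component only needs these few methods, so the same loop works regardless of the physics inside each box.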
Moderation analysis using a two-level regression model.
Yuan, Ke-Hai; Cheng, Ying; Maxwell, Scott
2014-10-01
Moderation analysis is widely used in social and behavioral research. The most commonly used model for moderation analysis is moderated multiple regression (MMR) in which the explanatory variables of the regression model include product terms, and the model is typically estimated by least squares (LS). This paper argues for a two-level regression model in which the regression coefficients of a criterion variable on predictors are further regressed on moderator variables. An algorithm for estimating the parameters of the two-level model by normal-distribution-based maximum likelihood (NML) is developed. Formulas for the standard errors (SEs) of the parameter estimates are provided and studied. Results indicate that, when heteroscedasticity exists, NML with the two-level model gives more efficient and more accurate parameter estimates than the LS analysis of the MMR model. When error variances are homoscedastic, NML with the two-level model leads to essentially the same results as LS with the MMR model. Most importantly, the two-level regression model permits estimating the percentage of variance of each regression coefficient that is due to moderator variables. When applied to data from General Social Surveys 1991, NML with the two-level model identified a significant moderation effect of race on the regression of job prestige on years of education while LS with the MMR model did not. An R package is also developed and documented to facilitate the application of the two-level model.
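The MMR baseline discussed above can be sketched as an ordinary least-squares regression with a product term. The simulated data and variable names are illustrative (echoing the education/job-prestige example), and the paper's two-level NML estimator is not shown:

```python
# Minimal sketch of moderated multiple regression (MMR): the moderation
# effect is the coefficient on the predictor-by-moderator product term.
import numpy as np

rng = np.random.default_rng(3)
n = 500
education = rng.normal(13, 2, n)               # predictor (years)
race = rng.integers(0, 2, n).astype(float)     # moderator (binary, illustrative)

# Simulate a criterion whose slope on education depends on the moderator.
prestige = (5 + 2.0 * education + 1.0 * race
            + 0.8 * education * race + rng.normal(0, 2, n))

X = np.column_stack([np.ones(n), education, race, education * race])
coef, *_ = np.linalg.lstsq(X, prestige, rcond=None)
b0, b_edu, b_race, b_interaction = coef
print(f"moderation effect estimate: {b_interaction:.2f}")  # true value 0.8
```

The two-level model in the abstract reframes this same product-term structure as a regression of the education slope on the moderator, which is what lets it quantify how much slope variance the moderator explains.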
Huang, Xiaobi; Elliott, Michael R.; Harlow, Siobán D.
2013-01-01
As women approach menopause, the patterns of their menstrual cycle lengths change. To study these changes, we need to jointly model both the mean and variability of cycle length. Our proposed model incorporates separate mean and variance change points for each woman and a hierarchical model to link them together, along with regression components to include predictors of menopausal onset such as age at menarche and parity. Additional complexity arises from the fact that the calendar data have substantial missingness due to hormone use, surgery, and failure to report. We integrate multiple imputation and time-to-event modeling in a Bayesian estimation framework to deal with the different forms of missingness. Posterior predictive model checks are applied to evaluate the model fit. Our method successfully models patterns of women’s menstrual cycle trajectories throughout their late reproductive life and identifies change points for mean and variability of segment length, providing insight into the menopausal process. More generally, our model points the way toward increasing use of joint mean-variance models to predict health outcomes and better understand disease processes. PMID:24729638
Roelen, Corné A M; Stapelfeldt, Christina M; Heymans, Martijn W; van Rhenen, Willem; Labriola, Merete; Nielsen, Claus V; Bültmann, Ute; Jensen, Chris
2015-06-01
To validate Dutch prognostic models including age, self-rated health, and prior sickness absence (SA) for their ability to predict high SA in Danish eldercare. The added value of work environment variables to the models' risk discrimination was also investigated. 2,562 municipal eldercare workers (95% women) participated in the Working in Eldercare Survey. Predictor variables were measured by questionnaire at baseline in 2005. Prognostic models were validated for predictions of high (≥30) SA days and high (≥3) SA episodes retrieved from employer records during 1-year follow-up. The accuracy of predictions was assessed by calibration graphs, and the ability of the models to discriminate between high- and low-risk workers was investigated by ROC analysis. The added value of work environment variables was measured with the Integrated Discrimination Improvement (IDI). 1,930 workers had complete data for analysis. The models underestimated the risk of high SA in eldercare workers, and the SA episodes model had to be re-calibrated to the Danish data. Discrimination was practically useful for the re-calibrated SA episodes model, but not for the SA days model. Physical workload improved the SA days model (IDI = 0.40; 95% CI 0.19-0.60) and psychosocial work factors, particularly the quality of leadership (IDI = 0.70; 95% CI 0.53-0.86), improved the SA episodes model. The prognostic model predicting high SA days showed poor performance even after physical workload was added. The prognostic model predicting high SA episodes could be used to identify high-risk workers, especially when psychosocial work factors are added as predictor variables.
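The IDI statistic used above has a simple closed form: the gain in mean predicted-risk separation between cases and non-cases when moving from the base model to the extended model. The sketch below uses the generic formula with made-up risks, not the study's data:

```python
# Sketch of the Integrated Discrimination Improvement (IDI):
# IDI = (discrimination slope of extended model) - (slope of base model),
# where a model's slope is mean risk among cases minus mean risk among non-cases.
import numpy as np

def idi(p_base, p_ext, outcome):
    outcome = np.asarray(outcome, dtype=bool)
    slope_ext = p_ext[outcome].mean() - p_ext[~outcome].mean()
    slope_base = p_base[outcome].mean() - p_base[~outcome].mean()
    return slope_ext - slope_base

# Toy example: the extended model separates cases from non-cases better.
outcome = np.array([1, 1, 1, 0, 0, 0])
p_base = np.array([0.6, 0.5, 0.4, 0.4, 0.3, 0.2])   # base model risks
p_ext = np.array([0.8, 0.7, 0.6, 0.3, 0.2, 0.1])    # extended model risks
print(round(idi(p_base, p_ext, outcome), 2))  # prints 0.3
```

A positive IDI, as for the leadership variable in the abstract, means the added predictors push cases' risks up and non-cases' risks down on average.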
NASA Technical Reports Server (NTRS)
Merrill, W. C.
1978-01-01
The Routh approximation technique for reducing the complexity of system models was applied in the frequency domain to a 16th-order state variable model of the F100 engine and to a 43rd-order transfer function model of a launch vehicle boost pump pressure regulator. The results motivate extending the frequency domain formulation of the Routh method to the time domain in order to handle the state variable formulation directly. The time domain formulation was derived and a characterization that specifies all possible Routh similarity transformations was given. The characterization was computed by solving two eigenvalue-eigenvector problems. The application of the time domain Routh technique to the state variable engine model is described, and some results are given. Additional computational problems are discussed, including an optimization procedure that can improve the approximation accuracy by taking advantage of the transformation characterization.
The Potential for Predicting Precipitation on Seasonal-to-Interannual Timescales
NASA Technical Reports Server (NTRS)
Koster, R. D.
1999-01-01
The ability to predict precipitation several months in advance would have a significant impact on water resource management. This talk provides an overview of a project aimed at developing this prediction capability. NASA's Seasonal-to-Interannual Prediction Project (NSIPP) will generate seasonal-to-interannual sea surface temperature predictions through detailed ocean circulation modeling and will then translate these SST forecasts into forecasts of continental precipitation through the application of an atmospheric general circulation model and a "SVAT"-type land surface model. As part of the process, ocean variables (e.g., height) and land variables (e.g., soil moisture) will be updated regularly via data assimilation. The overview will include a discussion of the variability inherent in such a modeling system and will provide some quantitative estimates of the absolute upper limits of seasonal-to-interannual precipitation predictability.
Factors associated with fear of falling in people with Parkinson’s disease
2014-01-01
Background This study aimed to comprehensively investigate potential contributing factors to fear of falling (FOF) among people with idiopathic Parkinson’s disease (PD). Methods The study included 104 people with PD. Mean (SD) age and PD-duration were 68 (9.4) and 5 (4.2) years, respectively, and the participants’ PD-symptoms were relatively mild. FOF (the dependent variable) was investigated with the Swedish version of the Falls Efficacy Scale, i.e. FES(S). The first multiple linear regression model replicated a previous study and independent variables targeted: walking difficulties in daily life; freezing of gait; dyskinesia; fatigue; need of help in daily activities; age; PD-duration; history of falls/near falls and pain. Model II also included the following clinically assessed variables: motor symptoms, cognitive functions, gait speed, dual-task difficulties and functional balance performance as well as reactive postural responses. Results Both regression models showed that the strongest contributing factor to FOF was walking difficulties, explaining 60% and 64% of the variance in FOF-scores, respectively. Other significant independent variables in both models were needing help from others in daily activities and fatigue. Functional balance was the only clinical variable contributing additional significant information to model I, increasing the explained variance from 66% to 73%. Conclusions The results imply that one should primarily target walking difficulties in daily life in order to reduce FOF in people mildly affected by PD. This finding applies even when considering a broad variety of aspects not previously considered in PD-studies targeting FOF. Functional balance performance, dependence in daily activities, and fatigue were also independently associated with FOF, but to a lesser extent. Longitudinal studies are warranted to gain an increased understanding of predictors of FOF in PD and who is at risk of developing FOF. PMID:24456482
VARIABLE SELECTION FOR REGRESSION MODELS WITH MISSING DATA
Garcia, Ramon I.; Ibrahim, Joseph G.; Zhu, Hongtu
2009-01-01
We consider the variable selection problem for a class of statistical models with missing data, including missing covariate and/or response data. We investigate the smoothly clipped absolute deviation penalty (SCAD) and adaptive LASSO and propose a unified model selection and estimation procedure for use in the presence of missing data. We develop a computationally attractive algorithm for simultaneously optimizing the penalized likelihood function and estimating the penalty parameters. In particular, we propose to use a model selection criterion, called the ICQ statistic, for selecting the penalty parameters. We show that the variable selection procedure based on ICQ automatically and consistently selects the important covariates and leads to efficient estimates with oracle properties. The methodology is very general and can be applied to numerous situations involving missing data, from covariates missing at random in arbitrary regression models to nonignorably missing longitudinal responses and/or covariates. Simulations are given to demonstrate the methodology and examine the finite sample performance of the variable selection procedures. Melanoma data from a cancer clinical trial are presented to illustrate the proposed methodology. PMID:20336190
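A minimal complete-data sketch of the adaptive-LASSO step may help fix ideas; the missing-data EM machinery and ICQ-based penalty selection described above are omitted, and the synthetic data, penalty level, and threshold are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.normal(size=(n, p))
beta = np.array([3.0, 1.5, 0.0, 0.0, 2.0, 0.0, 0.0, 0.0, 0.0, 0.0])
y = X @ beta + rng.normal(size=n)

# Step 1: initial root-n-consistent estimate (OLS here).
b_init = LinearRegression().fit(X, y).coef_

# Step 2: adaptive weights w_j = 1/|b_init_j|; rescaling column j of X
# by 1/w_j turns a standard Lasso into the adaptive Lasso.
w = 1.0 / np.abs(b_init)
X_tilde = X / w                     # broadcasts over columns
lasso = Lasso(alpha=0.1).fit(X_tilde, y)
b_adaptive = lasso.coef_ / w        # transform back to the original scale

# Columns 0, 1 and 4 carry the true signal and should be retained.
selected = np.flatnonzero(np.abs(b_adaptive) > 1e-8)
print("selected covariates:", selected)
```

The oracle property cited in the abstract refers to exactly this behavior: with well-chosen weights, the zero coefficients are estimated as exactly zero while the nonzero ones are estimated as efficiently as if the true model were known.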
NASA Astrophysics Data System (ADS)
Hassanzadeh, S.; Hosseinibalam, F.; Omidvari, M.
2008-04-01
Data of seven meteorological variables (relative humidity, wet temperature, dry temperature, maximum temperature, minimum temperature, ground temperature and sun radiation time) and ozone values have been used for statistical analysis. Meteorological variables and ozone values were analyzed using both multiple linear regression and principal component methods. Data for the period 1999-2004 are analyzed jointly using both methods. For all periods, temperature-dependent variables were highly correlated with each other, but all were negatively correlated with relative humidity. Multiple regression analysis was used to fit the ozone values using the meteorological variables as predictors. A variable selection method based on high loadings of varimax-rotated principal components was used to obtain subsets of the predictor variables to be included in the linear regression model of the ozone values. In 1999, 2001 and 2002, ozone was only weakly influenced by the meteorological variables. The model also indicated that the ozone values for the year 2000 were not influenced predominantly by the meteorological variables, which points to variation in sun radiation. This could be due to other factors that were not explicitly considered in this study.
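The loading-based selection step can be sketched as follows; for brevity a plain PCA stands in for the varimax rotation used in the study, and the synthetic "meteorological" variables and their relation to ozone are assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 500
# Two latent drivers stand in for "temperature" and "humidity" factors.
t = rng.normal(size=n)
h = rng.normal(size=n)
X = np.column_stack([
    t + 0.1 * rng.normal(size=n),   # dry temperature
    t + 0.1 * rng.normal(size=n),   # max temperature
    t + 0.1 * rng.normal(size=n),   # min temperature
    h + 0.1 * rng.normal(size=n),   # relative humidity
    h + 0.1 * rng.normal(size=n),   # wet temperature
    rng.normal(size=n),             # sun radiation time (independent here)
    rng.normal(size=n),             # ground temperature (independent here)
])
ozone = 2.0 * t - 1.0 * h + 0.5 * rng.normal(size=n)

Xs = StandardScaler().fit_transform(X)
pca = PCA(n_components=2).fit(Xs)

# Keep, for each retained component, the variable with the highest
# absolute loading: a crude stand-in for varimax-based selection.
subset = sorted({int(np.argmax(np.abs(c))) for c in pca.components_})
model = LinearRegression().fit(Xs[:, subset], ozone)
print("selected:", subset, "R^2:", round(model.score(Xs[:, subset], ozone), 2))
```

The point of the selection is to pick one representative per correlated block (the temperature cluster, the humidity cluster), which keeps the regression interpretable despite strong collinearity.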
Are Subject-Specific Musculoskeletal Models Robust to the Uncertainties in Parameter Identification?
Valente, Giordano; Pitto, Lorenzo; Testi, Debora; Seth, Ajay; Delp, Scott L.; Stagni, Rita; Viceconti, Marco; Taddei, Fulvia
2014-01-01
Subject-specific musculoskeletal modeling can be applied to study musculoskeletal disorders, allowing inclusion of personalized anatomy and properties. Independent of the tools used for model creation, there are unavoidable uncertainties associated with parameter identification, whose effect on model predictions is still not fully understood. The aim of the present study was to analyze the sensitivity of subject-specific model predictions (i.e., joint angles, joint moments, muscle and joint contact forces) during walking to the uncertainties in the identification of body landmark positions, maximum muscle tension and musculotendon geometry. To this aim, we created an MRI-based musculoskeletal model of the lower limbs, defined as a 7-segment, 10-degree-of-freedom articulated linkage, actuated by 84 musculotendon units. We then performed a Monte-Carlo probabilistic analysis perturbing model parameters according to their uncertainty, and solving a typical inverse dynamics and static optimization problem using 500 models that included the different sets of perturbed variable values. Model creation and gait simulations were performed by using freely available software that we developed to standardize the process of model creation, integrate with OpenSim and create probabilistic simulations of movement. The uncertainties in input variables had a moderate effect on model predictions, as muscle and joint contact forces showed maximum standard deviation of 0.3 times body-weight and maximum range of 2.1 times body-weight. In addition, the output variables significantly correlated with few input variables (up to 7 out of 312) across the gait cycle, including the geometry definition of larger muscles and the maximum muscle tension in limited gait portions. Although we found subject-specific models not markedly sensitive to parameter identification, researchers should be aware of the model precision in relation to the intended application. 
In fact, force predictions could be affected by an uncertainty of the same order of magnitude as the predicted value, although this condition has a low probability of occurring. PMID:25390896
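The Monte-Carlo perturbation scheme described above can be sketched with a toy stand-in for the musculoskeletal simulation; the model function, nominal values, and uncertainty magnitudes below are illustrative assumptions, not the study's:

```python
import numpy as np

rng = np.random.default_rng(42)

def joint_force(params):
    """Toy stand-in for a musculoskeletal simulation: joint contact
    force (in body-weights) as a function of three model parameters."""
    landmark_offset, max_tension, moment_arm = params
    return 3.0 + 0.8 * max_tension / moment_arm + 0.2 * landmark_offset

nominal = np.array([0.0, 1.0, 1.0])   # nominal parameter values
sigma = np.array([0.5, 0.1, 0.05])    # assumed identification uncertainty

# 500 perturbed models, as in the study.
samples = nominal + sigma * rng.normal(size=(500, 3))
outputs = np.array([joint_force(p) for p in samples])

print("output SD:", round(float(outputs.std()), 3))
# Rank inputs by |correlation| with the output, as a simple
# stand-in for the study's input-output correlation analysis.
corr = [abs(np.corrcoef(samples[:, j], outputs)[0, 1]) for j in range(3)]
print("correlations:", np.round(corr, 2))
```

The output standard deviation plays the role of the study's "maximum standard deviation of 0.3 times body-weight", and the correlations identify which uncertain inputs drive it.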
Human Language Technology: Opportunities and Challenges
2005-01-01
because of the connections to and reliance on signal processing. Audio diarization critically includes indexing of speakers [12], since speaker ...to reduce inter-speaker variability in training. Standard techniques include vocal-tract length normalization, adaptation of acoustic models using...maximum likelihood linear regression (MLLR), and speaker-adaptive training based on MLLR. The acoustic models are mixtures of Gaussians, typically with
Strudwick, Gillian
2015-05-01
The benefits of healthcare technologies can only be attained if nurses accept and intend to fully use them. One of the most common models utilized to understand user acceptance of technology is the Technology Acceptance Model. This model and modified versions of it have only recently been applied in the healthcare literature among nurse participants. An integrative literature review was conducted on this topic. Ovid/MEDLINE, PubMed, Google Scholar, and CINAHL were searched yielding a total of 982 references. Upon eliminating duplicates and applying the inclusion and exclusion criteria, the review included a total of four dissertations, three symposium proceedings, and 13 peer-reviewed journal articles. These documents were appraised and reviewed. The results show that a modified Technology Acceptance Model with added variables could provide a better explanation of nurses' acceptance of healthcare technology. These added variables to modified versions of the Technology Acceptance Model are discussed, and the studies' methodologies are critiqued. Limitations of the studies included in the integrative review are also examined.
A Geostatistical Scaling Approach for the Generation of Non Gaussian Random Variables and Increments
NASA Astrophysics Data System (ADS)
Guadagnini, Alberto; Neuman, Shlomo P.; Riva, Monica; Panzeri, Marco
2016-04-01
We address manifestations of non-Gaussian statistical scaling displayed by many variables, Y, and their (spatial or temporal) increments. Evidence of such behavior includes symmetry of increment distributions at all separation distances (or lags) with sharp peaks and heavy tails which tend to decay asymptotically as lag increases. Variables reported to exhibit such distributions include quantities of direct relevance to hydrogeological sciences, e.g. porosity, log permeability, electrical resistivity, soil and sediment texture, sediment transport rate, rainfall, measured and simulated turbulent fluid velocity, and others. No model known to us captures all of the documented statistical scaling behaviors in a unique and consistent manner. We recently proposed a generalized sub-Gaussian model (GSG) which reconciles within a unique theoretical framework the probability distributions of a target variable and its increments. We presented an algorithm to generate unconditional random realizations of statistically isotropic or anisotropic GSG functions and illustrated it in two dimensions. In this context, we demonstrated the feasibility of estimating all key parameters of a GSG model underlying a single realization of Y by analyzing jointly spatial moments of Y data and corresponding increments. Here, we extend our GSG model to account for noisy measurements of Y at a discrete set of points in space (or time), present an algorithm to generate conditional realizations of the corresponding isotropic or anisotropic random fields, and explore them on one- and two-dimensional synthetic test cases.
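A simplified, pointwise version of the generic sub-Gaussian construction (a Gaussian variable modulated by a random subordinator) illustrates the sharp-peaked, heavy-tailed behavior noted in the abstract; the lognormal subordinator is an assumption, and this is not the authors' full GSG field algorithm:

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(9)
n = 200_000

# Sub-Gaussian construction: Y = sqrt(U) * G, a Gaussian variable G
# whose scale is randomized by a positive subordinator U.
G = rng.normal(size=n)
U = rng.lognormal(mean=0.0, sigma=0.5, size=n)
Y = np.sqrt(U) * G

# Excess kurtosis is 0 for a Gaussian and positive for heavy tails.
print("excess kurtosis, Gaussian:", round(float(kurtosis(G)), 2))
print("excess kurtosis, sub-Gaussian:", round(float(kurtosis(Y)), 2))
```

In the field setting the subordinator is shared (or spatially correlated) across points rather than drawn independently per point, which is what produces the lag-dependent tail decay of increments described above.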
[Probabilistic models of mortality for patients hospitalized in conventional units].
Rué, M; Roqué, M; Solà, J; Macià, M
2001-09-29
We have developed a tool to measure disease severity of patients hospitalized in conventional units in order to evaluate and compare the effectiveness and quality of health care in our setting. A total of 2,274 adult patients admitted consecutively to inpatient units from the Medicine, Surgery and Orthopaedic Surgery, and Trauma Departments of the Corporació Sanitària Parc Taulí of Sabadell, Spain, between November 1, 1997 and September 30, 1998 were included. The following variables were collected: demographic data, previous health state, substance abuse, comorbidity prior to admission, characteristics of the admission, clinical parameters within the first 24 hours of admission, laboratory results and data from the Basic Minimum Data Set of hospital discharges. Multiple logistic regression analysis was used to develop mortality probability models during the hospital stay. The mortality probability model at admission (MPMHOS-0) contained 7 variables associated with mortality during hospital stay: age, urgent admission, chronic cardiac insufficiency, chronic respiratory insufficiency, chronic liver disease, neoplasm, and dementia syndrome. The mortality probability model at 24-48 hours from admission (MPMHOS-24) contained 9 variables: those included in the MPMHOS-0 plus two statistically significant laboratory variables: hemoglobin and creatinine. Severity measures, in particular those presented in this study, can be helpful for the interpretation of hospital mortality rates and can guide mortality or quality committees at the time of investigating health care-related problems.
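The model-building step can be sketched with synthetic data and a reduced set of MPMHOS-0-style variables; the coefficients, prevalences, and example patient below are assumptions for illustration only:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 2274  # matches the cohort size in the study
age = rng.normal(65, 15, n)
urgent = rng.integers(0, 2, n)
neoplasm = rng.integers(0, 2, n)

# Synthetic outcome generated from an assumed true logistic model.
logit = -8.0 + 0.07 * age + 0.9 * urgent + 1.2 * neoplasm
death = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([age, urgent, neoplasm])
model = LogisticRegression(max_iter=1000).fit(X, death)

# Predicted in-hospital mortality probability for an 80-year-old
# urgent admission with a neoplasm (hypothetical patient):
p = model.predict_proba([[80, 1, 1]])[0, 1]
print("predicted mortality probability:", round(float(p), 2))
```

The full MPMHOS-24 model would simply append the laboratory variables (hemoglobin, creatinine) as further columns of the design matrix.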
The method of belief scales as a means for dealing with uncertainty in tough regulatory decisions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pilch, Martin M.
Modeling and simulation is playing an increasing role in supporting tough regulatory decisions, which are typically characterized by variabilities and uncertainties in the scenarios, input conditions, failure criteria, model parameters, and even model form. Variability exists when there is a statistically significant database that is fully relevant to the application. Uncertainty, on the other hand, is characterized by some degree of ignorance. A simple algebraic problem was used to illustrate how various risk methodologies address variability and uncertainty in a regulatory context. These traditional risk methodologies include probabilistic methods (including frequentist and Bayesian perspectives) and second-order methods where variabilities and uncertainties are treated separately. Representing uncertainties with (subjective) probability distributions and using probabilistic methods to propagate subjective distributions can lead to results that are not logically consistent with available knowledge and that may not be conservative. The Method of Belief Scales (MBS) is developed as a means to logically aggregate uncertain input information and to propagate that information through the model to a set of results that are scrutable, easily interpretable by the nonexpert, and logically consistent with the available input information. The MBS, particularly in conjunction with sensitivity analyses, has the potential to be more computationally efficient than other risk methodologies. The regulatory language must be tailored to the specific risk methodology if ambiguity and conflict are to be avoided.
Load estimator (LOADEST): a FORTRAN program for estimating constituent loads in streams and rivers
Runkel, Robert L.; Crawford, Charles G.; Cohn, Timothy A.
2004-01-01
LOAD ESTimator (LOADEST) is a FORTRAN program for estimating constituent loads in streams and rivers. Given a time series of streamflow, additional data variables, and constituent concentration, LOADEST assists the user in developing a regression model for the estimation of constituent load (calibration). Explanatory variables within the regression model include various functions of streamflow, decimal time, and additional user-specified data variables. The formulated regression model then is used to estimate loads over a user-specified time interval (estimation). Mean load estimates, standard errors, and 95 percent confidence intervals are developed on a monthly and(or) seasonal basis. The calibration and estimation procedures within LOADEST are based on three statistical estimation methods. The first two methods, Adjusted Maximum Likelihood Estimation (AMLE) and Maximum Likelihood Estimation (MLE), are appropriate when the calibration model errors (residuals) are normally distributed. Of the two, AMLE is the method of choice when the calibration data set (time series of streamflow, additional data variables, and concentration) contains censored data. The third method, Least Absolute Deviation (LAD), is an alternative to maximum likelihood estimation when the residuals are not normally distributed. LOADEST output includes diagnostic tests and warnings to assist the user in determining the appropriate estimation method and in interpreting the estimated loads. This report describes the development and application of LOADEST. Sections of the report describe estimation theory, input/output specifications, sample applications, and installation instructions.
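LOADEST's most general predefined calibration model regresses the log of load on centered log streamflow, seasonal sine/cosine terms, and decimal time. A sketch using ordinary least squares (the AMLE/MLE bias corrections and censored-data handling are omitted, and the synthetic record is an assumption) might look like:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300
dtime = np.sort(rng.uniform(0, 5, n))   # decimal time, years
lnQ = rng.normal(3.0, 0.7, n)           # log streamflow

# Synthetic "true" log load following the same functional form.
ln_load = 1.0 + 0.9 * lnQ + 0.3 * np.sin(2 * np.pi * dtime) \
          + rng.normal(0, 0.2, n)

# Center explanatory variables, as LOADEST does, to reduce collinearity.
lnQc = lnQ - lnQ.mean()
tc = dtime - dtime.mean()

# Design matrix of the 7-parameter model:
# ln(L) = a0 + a1 lnQ + a2 lnQ^2 + a3 sin(2 pi t) + a4 cos(2 pi t) + a5 t + a6 t^2
A = np.column_stack([np.ones(n), lnQc, lnQc**2,
                     np.sin(2 * np.pi * dtime), np.cos(2 * np.pi * dtime),
                     tc, tc**2])
coef, *_ = np.linalg.lstsq(A, ln_load, rcond=None)
print("flow coefficient:", round(float(coef[1]), 2))
print("seasonal sine coefficient:", round(float(coef[3]), 2))
```

Exponentiating fitted ln(L) values naively underestimates the mean load because of the retransformation bias, which is exactly why LOADEST's AMLE/MLE estimators include bias-correction terms.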
Datamining approaches for modeling tumor control probability.
Naqa, Issam El; Deasy, Joseph O; Mu, Yi; Huang, Ellen; Hope, Andrew J; Lindsay, Patricia E; Apte, Aditya; Alaly, James; Bradley, Jeffrey D
2010-11-01
Tumor control probability (TCP) to radiotherapy is determined by complex interactions between tumor biology, tumor microenvironment, radiation dosimetry, and patient-related variables. The complexity of these heterogeneous variable interactions constitutes a challenge for building predictive models for routine clinical practice. We describe a datamining framework that can unravel the higher order relationships among dosimetric dose-volume prognostic variables, interrogate various radiobiological processes, and generalize to unseen data when applied prospectively. Several datamining approaches are discussed that include dose-volume metrics, equivalent uniform dose, mechanistic Poisson model, and model building methods using statistical regression and machine learning techniques. Institutional datasets of non-small cell lung cancer (NSCLC) patients are used to demonstrate these methods. The performance of the different methods was evaluated using bivariate Spearman rank correlations (rs). Over-fitting was controlled via resampling methods. Using a dataset of 56 patients with primary NSCLC tumors and 23 candidate variables, we estimated GTV volume and V75 to be the best model parameters for predicting TCP using statistical resampling and a logistic model. Using these variables, the support vector machine (SVM) kernel method provided superior performance for TCP prediction with an rs=0.68 on leave-one-out testing compared to logistic regression (rs=0.4), Poisson-based TCP (rs=0.33), and cell kill equivalent uniform dose model (rs=0.17). The prediction of treatment response can be improved by utilizing datamining approaches, which are able to unravel important non-linear complex interactions among model variables and have the capacity to predict on unseen data for prospective clinical applications.
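The leave-one-out comparison of an RBF-kernel SVM against logistic regression, scored by Spearman rank correlation, can be sketched as follows; the two-variable synthetic dataset with a nonlinear interaction is an assumption standing in for GTV volume and V75:

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.model_selection import LeaveOneOut
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)
n = 56                                  # cohort size from the abstract
gtv = rng.normal(size=n)                # stand-ins for GTV volume and V75
v75 = rng.normal(size=n)
X = np.column_stack([gtv, v75])
# Synthetic tumour-control outcome with a nonlinear interaction that a
# linear model cannot capture.
control = (gtv * v75 + 0.5 * rng.normal(size=n)) > 0

def loo_spearman(model):
    """Leave-one-out predicted probabilities scored by Spearman rank
    correlation with the observed outcome, mirroring the rs metric."""
    preds = np.empty(n)
    for train, test in LeaveOneOut().split(X):
        fitted = model.fit(X[train], control[train])
        preds[test] = fitted.predict_proba(X[test])[0, 1]
    return spearmanr(preds, control).correlation

svm = make_pipeline(StandardScaler(),
                    SVC(kernel="rbf", probability=True, random_state=0))
logit = make_pipeline(StandardScaler(), LogisticRegression())
svm_rs = loo_spearman(svm)
logit_rs = loo_spearman(logit)
print("SVM rs:", round(svm_rs, 2), " logistic rs:", round(logit_rs, 2))
```

Because the synthetic outcome depends on the product of the two variables, the kernel method outperforms the linear logistic model here, echoing the rs=0.68 versus rs=0.4 gap reported above.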
NASA Astrophysics Data System (ADS)
Badawi, Ahmed M.; Weiss, Elisabeth; Sleeman, William C., IV; Hugo, Geoffrey D.
2012-01-01
The purpose of this study is to develop and evaluate a lung tumour interfraction geometric variability classification scheme as a means to guide adaptive radiotherapy and improve measurement of treatment response. Principal component analysis (PCA) was used to generate statistical shape models of the gross tumour volume (GTV) for 12 patients with weekly breath hold CT scans. Each eigenmode of the PCA model was classified as ‘trending’ or ‘non-trending’ depending on whether its contribution to the overall GTV variability included a time trend over the treatment course. Trending eigenmodes were used to reconstruct the original semi-automatically delineated GTVs into a reduced model containing only time trends. Reduced models were compared to the original GTVs by analyzing the reconstruction error in GTV volume and position. Both retrospective (all weekly images) and prospective (only the first four weekly images) models were evaluated. The average volume difference from the original GTV was 4.3% ± 2.4% for the trending model. The positional variability of the GTV over the treatment course, as measured by the standard deviation of the GTV centroid, was 1.9 ± 1.4 mm for the original GTVs, which was reduced to 1.2 ± 0.6 mm for the trending-only model. In 3/13 cases, the dominant eigenmode changed class between the prospective and retrospective models. The trending-only model preserved GTV volume and shape relative to the original GTVs, while reducing spurious positional variability. The classification scheme appears feasible for separating types of geometric variability by time trend.
NASA Astrophysics Data System (ADS)
Frederiksen, Carsten; Grainger, Simon; Zheng, Xiaogu; Sisson, Janice
2013-04-01
ENSO variability is an important driver of the Southern Hemisphere (SH) atmospheric circulation. Understanding the observed and projected changes in ENSO variability is therefore important to understanding changes in Australian surface climate. Using a recently developed methodology (Zheng et al., 2009), the coherent patterns, or modes, of ENSO-related variability in the SH atmospheric circulation can be separated from modes that are related to intraseasonal variability or to changes in radiative forcings. Under this methodology, the seasonal mean SH 500 hPa geopotential height is considered to consist of three components. These are: (1) an intraseasonal component related to internal dynamics on intraseasonal time scales; (2) a slow-internal component related to internal dynamics on slowly varying (interannual or longer) time scales, including ENSO; and (3) a slow-external component related to external (i.e. radiative) forcings. Empirical Orthogonal Functions (EOFs) are used to represent the modes of variability of the interannual covariance of the three components. An assessment is first made of the modes in models from the Coupled Model Intercomparison Project Phase 5 (CMIP5) dataset for the SH summer and winter seasons in the 20th century. In reanalysis data, two EOFs of the slow component (which includes the slow-internal and slow-external components) have been found to be related to ENSO variability (Frederiksen and Zheng, 2007). In SH summer, the CMIP5 models reproduce the leading ENSO mode very well in terms of the structure of the EOF, the associated SST pattern, and the associated variance. There is substantial improvement in this mode when compared with the CMIP3 models shown in Grainger et al. (2012). However, the second ENSO mode in SH summer has a poorly reproduced EOF structure in the CMIP5 models, and the associated variance is generally underestimated.
In SH winter, the performance of the CMIP5 models in reproducing the structure and variance is similar for both ENSO modes, with the associated variance being generally underestimated. Projected changes in the modes in the 21st century are then investigated using ensembles of CMIP5 models that reproduce well the 20th century slow modes. The slow-internal and slow-external components are examined separately, allowing the projected changes in the response to ENSO variability to be separated from the response to changes in greenhouse gas concentrations. By using several ensembles, the model-dependency of the projected changes in the ENSO-related slow-internal modes is examined. Frederiksen, C. S., and X. Zheng, 2007: Variability of seasonal-mean fields arising from intraseasonal variability. Part 3: Application to SH winter and summer circulations. Climate Dyn., 28, 849-866. Grainger, S., C. S. Frederiksen, and X. Zheng, 2012: Modes of interannual variability of Southern Hemisphere atmospheric circulation in CMIP3 models: Assessment and Projections. Climate Dyn., in press. Zheng, X., D. M. Straus, C. S. Frederiksen, and S. Grainger, 2009: Potentially predictable patterns of extratropical tropospheric circulation in an ensemble of climate simulations with the COLA AGCM. Quart. J. Roy. Meteor. Soc., 135, 1816-1829.
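The EOF decomposition underlying such mode assessments can be sketched on a toy anomaly field; the grid size, sample length, and ENSO-like signal below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(11)
n_years, n_grid = 40, 300   # e.g. 40 seasonal means on a small grid

# Toy "500 hPa height" anomalies: one dominant spatial pattern whose
# amplitude varies from year to year (an ENSO-like slow mode) plus noise.
pattern = rng.normal(size=n_grid)
amplitude = rng.normal(size=n_years)
field = np.outer(amplitude, pattern) + 0.5 * rng.normal(size=(n_years, n_grid))

anom = field - field.mean(axis=0)        # interannual anomalies
U, S, Vt = np.linalg.svd(anom, full_matrices=False)
explained = S**2 / np.sum(S**2)
eof1, pc1 = Vt[0], U[:, 0] * S[0]        # leading EOF and its time series
print("variance explained by EOF1:", round(float(explained[0]), 2))
```

Comparing models then amounts to comparing the EOF spatial structures, their associated variances (the `explained` fractions), and the regression of the PC time series onto SST, which is the basis of the assessment described above.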
A further assessment of vegetation feedback on decadal Sahel rainfall variability
NASA Astrophysics Data System (ADS)
Kucharski, Fred; Zeng, Ning; Kalnay, Eugenia
2013-03-01
The effect of vegetation feedback on decadal-scale Sahel rainfall variability is analyzed using an ensemble of climate model simulations in which the atmospheric general circulation model ICTPAGCM ("SPEEDY") is coupled to the dynamic vegetation model VEGAS to represent feedbacks from surface albedo change and evapotranspiration, forced externally by observed sea surface temperature (SST) changes. In the control experiment, where the full vegetation feedback is included, the ensemble is consistent with the observed decadal rainfall variability, with a forced component equal to 60% of the observed variability. In a sensitivity experiment where climatological vegetation cover and albedo are prescribed from the control experiment, the ensemble of simulations is not consistent with the observations because of strongly reduced amplitude of decadal rainfall variability, and the forced component drops to 35% of the observed variability. The decadal rainfall variability is driven by SST forcing, but significantly enhanced by land-surface feedbacks. Both local evaporation and moisture flux convergence changes are important for the total rainfall response. Also the internal decadal variability across the ensemble members (not SST-forced) is much stronger in the control experiment compared with the one where vegetation cover and albedo are prescribed. It is further shown that this positive vegetation feedback is physically related to the albedo feedback, supporting the Charney hypothesis.
Predator Persistence through Variability of Resource Productivity in Tritrophic Systems.
Soudijn, Floor H; de Roos, André M
2017-12-01
The trophic structure of species communities depends on the energy transfer between trophic levels. Primary productivity varies strongly through time, challenging the persistence of species at higher trophic levels. Yet resource variability has mostly been studied in systems with only one or two trophic levels. We test the effect of variability in resource productivity in a tritrophic model system including a resource, a size-structured consumer, and a size-specific predator. The model complies with fundamental principles of mass conservation and the body-size dependence of individual-level energetics and predator-prey interactions. Surprisingly, we find that resource variability may promote predator persistence. The positive effect of variability on the predator arises through periods with starvation mortality of juvenile prey, which reduces the intraspecific competition in the prey population. With increasing variability in productivity and starvation mortality in the juvenile prey, the prey availability increases in the size range preferred by the predator. The positive effect of prey mortality on the trophic transfer efficiency depends on the biologically realistic consideration of body size-dependent and food-dependent functions for growth and reproduction in our model. Our findings show that variability may promote the trophic transfer efficiency, indicating that environmental variability may sustain species at higher trophic levels in natural ecosystems.
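A heavily simplified, unstructured food-chain sketch of the setup (logistic resource with a time-varying carrying capacity and Holling type II links) illustrates only the periodic forcing of productivity, not the paper's size-structured starvation mechanism; all parameters are assumptions:

```python
import numpy as np

def derivs(state, t, amp):
    """Resource-consumer-predator chain with periodically varying
    resource productivity (via the carrying capacity K)."""
    R, C, P = state
    K = 1.0 * (1 + amp * np.sin(2 * np.pi * t / 50.0))
    dR = R * (1 - R / max(K, 1e-6)) - 0.5 * R * C / (1 + R)
    dC = 0.4 * R * C / (1 + R) - 0.3 * C * P / (1 + C) - 0.1 * C
    dP = 0.24 * C * P / (1 + C) - 0.08 * P
    return np.array([dR, dC, dP])

def simulate(amp, tmax=500.0, dt=0.02):
    """Forward-Euler integration; returns the final (R, C, P) state."""
    state = np.array([0.5, 0.3, 0.1])
    for i in range(int(tmax / dt)):
        state = np.clip(state + dt * derivs(state, i * dt, amp), 0.0, None)
    return state

print("constant productivity, final state:", np.round(simulate(0.0), 3))
print("variable productivity, final state:", np.round(simulate(0.8), 3))
```

In the paper's model the positive effect of variability enters through size-dependent starvation of juvenile prey, which this unstructured sketch cannot reproduce; it only shows how fluctuating productivity propagates up a chain.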
Newtonian Nudging For A Richards Equation-based Distributed Hydrological Model
NASA Astrophysics Data System (ADS)
Paniconi, C.; Marrocu, M.; Putti, M.; Verbunt, M.
In this study a relatively simple data assimilation method has been implemented in a relatively complex hydrological model. The data assimilation technique is Newtonian relaxation or nudging, in which model variables are driven towards observations by a forcing term added to the model equations. The forcing term is proportional to the difference between simulation and observation (relaxation component) and contains four-dimensional weighting functions that can incorporate prior knowledge about the spatial and temporal variability and characteristic scales of the state variable(s) being assimilated. The numerical model couples a three-dimensional finite element Richards equation solver for variably saturated porous media and a finite difference diffusion wave approximation based on digital elevation data for surface water dynamics. We describe the implementation of the data assimilation algorithm for the coupled model and report on the numerical and hydrological performance of the resulting assimilation scheme. Nudging is shown to be successful in improving the hydrological simulation results, and it introduces little computational cost, in terms of CPU and other numerical aspects of the model's behavior, in some cases even improving numerical performance compared to model runs without nudging. We also examine the sensitivity of the model to nudging term parameters including the spatio-temporal influence coefficients in the weighting functions. Overall the nudging algorithm is quite flexible, for instance in dealing with concurrent observation datasets, gridded or scattered data, and different state variables, and the implementation presented here can be readily extended to any features not already incorporated. Moreover the nudging code and tests can serve as a basis for implementation of more sophisticated data assimilation techniques in a Richards equation-based hydrological model.
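The nudging idea, a forcing term proportional to the observation-minus-model difference, can be sketched on a scalar toy model; the gain, observation interval, and model bias below are assumptions, not the study's Richards-equation setting:

```python
import numpy as np

# Truth and model obey the same relaxation dynamics but the model has
# a biased forcing; observations of the truth arrive every 10 steps.
def truth_step(x, dt): return x + dt * (-0.1 * x + 1.0)
def model_step(x, dt): return x + dt * (-0.1 * x + 0.8)   # biased forcing

dt, n = 0.1, 1000
G = 2.0                         # nudging (relaxation) coefficient
x_true = x_free = x_nudged = 2.0
err_free = err_nudged = 0.0
for i in range(n):
    x_true = truth_step(x_true, dt)
    x_free = model_step(x_free, dt)
    x_nudged = model_step(x_nudged, dt)
    if i % 10 == 0:             # an observation arrives (perfect obs here)
        obs = x_true
        # forcing term proportional to (observation - model state)
        x_nudged += dt * G * (obs - x_nudged)
    err_free += abs(x_free - x_true)
    err_nudged += abs(x_nudged - x_true)

print("mean |error| without nudging:", round(err_free / n, 3))
print("mean |error| with nudging:", round(err_nudged / n, 3))
```

In the full scheme the scalar gain G is replaced by the four-dimensional weighting functions mentioned above, which spread each observation's influence over neighbouring grid nodes and times.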
NASA Astrophysics Data System (ADS)
Danabasoglu, Gokhan; Yeager, Steve G.; Kim, Who M.; Behrens, Erik; Bentsen, Mats; Bi, Daohua; Biastoch, Arne; Bleck, Rainer; Böning, Claus; Bozec, Alexandra; Canuto, Vittorio M.; Cassou, Christophe; Chassignet, Eric; Coward, Andrew C.; Danilov, Sergey; Diansky, Nikolay; Drange, Helge; Farneti, Riccardo; Fernandez, Elodie; Fogli, Pier Giuseppe; Forget, Gael; Fujii, Yosuke; Griffies, Stephen M.; Gusev, Anatoly; Heimbach, Patrick; Howard, Armando; Ilicak, Mehmet; Jung, Thomas; Karspeck, Alicia R.; Kelley, Maxwell; Large, William G.; Leboissetier, Anthony; Lu, Jianhua; Madec, Gurvan; Marsland, Simon J.; Masina, Simona; Navarra, Antonio; Nurser, A. J. George; Pirani, Anna; Romanou, Anastasia; Salas y Mélia, David; Samuels, Bonita L.; Scheinert, Markus; Sidorenko, Dmitry; Sun, Shan; Treguier, Anne-Marie; Tsujino, Hiroyuki; Uotila, Petteri; Valcke, Sophie; Voldoire, Aurore; Wang, Qiang; Yashayaev, Igor
2016-01-01
Simulated inter-annual to decadal variability and trends in the North Atlantic for the 1958-2007 period from twenty global ocean - sea-ice coupled models are presented. These simulations are performed as contributions to the second phase of the Coordinated Ocean-ice Reference Experiments (CORE-II). The study is Part II of our companion paper (Danabasoglu et al., 2014) which documented the mean states in the North Atlantic from the same models. A major focus of the present study is the representation of Atlantic meridional overturning circulation (AMOC) variability in the participating models. Relationships between AMOC variability and those of some other related variables, such as subpolar mixed layer depths, the North Atlantic Oscillation (NAO), and the Labrador Sea upper-ocean hydrographic properties, are also investigated. In general, AMOC variability shows three distinct stages. During the first stage that lasts until the mid- to late-1970s, AMOC is relatively steady, remaining lower than its long-term (1958-2007) mean. Thereafter, AMOC intensifies with maximum transports achieved in the mid- to late-1990s. This enhancement is then followed by a weakening trend until the end of our integration period. This sequence of low frequency AMOC variability is consistent with previous studies. Regarding strengthening of AMOC between about the mid-1970s and the mid-1990s, our results support a previously identified variability mechanism where AMOC intensification is connected to increased deep water formation in the subpolar North Atlantic, driven by NAO-related surface fluxes. The simulations tend to show general agreement in their temporal representations of, for example, AMOC, sea surface temperature (SST), and subpolar mixed layer depth variabilities. In particular, the observed variability of the North Atlantic SSTs is captured well by all models. 
These findings indicate that simulated variability and trends are primarily dictated by the atmospheric datasets which include the influence of ocean dynamics from nature superimposed onto anthropogenic effects. Despite these general agreements, there are many differences among the model solutions, particularly in the spatial structures of variability patterns. For example, the location of the maximum AMOC variability differs among the models between Northern and Southern Hemispheres.
NASA Technical Reports Server (NTRS)
Danabasoglu, Gokhan; Yeager, Steve G.; Kim, Who M.; Behrens, Erik; Bentsen, Mats; Bi, Daohua; Biastoch, Arne; Bleck, Rainer; Boening, Claus; Bozec, Alexandra;
2015-01-01
Simulated inter-annual to decadal variability and trends in the North Atlantic for the 1958-2007 period from twenty global coupled ocean-sea-ice models are presented. These simulations were performed as contributions to the second phase of the Coordinated Ocean-ice Reference Experiments (CORE-II). The study is Part II of a series; the companion paper (Danabasoglu et al., 2014) documented the mean states in the North Atlantic from the same models. A major focus of the present study is the representation of Atlantic meridional overturning circulation (AMOC) variability in the participating models. Relationships between AMOC variability and that of related variables, such as subpolar mixed layer depths, the North Atlantic Oscillation (NAO), and Labrador Sea upper-ocean hydrographic properties, are also investigated. In general, AMOC variability shows three distinct stages. During the first stage, which lasts until the mid- to late 1970s, AMOC is relatively steady, remaining below its long-term (1958-2007) mean. Thereafter, AMOC intensifies, with maximum transports achieved in the mid- to late 1990s. This enhancement is followed by a weakening trend until the end of the integration period. This sequence of low-frequency AMOC variability is consistent with previous studies. Regarding the strengthening of AMOC between roughly the mid-1970s and the mid-1990s, our results support a previously identified variability mechanism in which AMOC intensification is connected to increased deep water formation in the subpolar North Atlantic, driven by NAO-related surface fluxes. The simulations show general agreement in their temporal representations of, for example, AMOC, sea surface temperature (SST), and subpolar mixed layer depth variability. In particular, the observed variability of North Atlantic SSTs is captured well by all models. These findings indicate that simulated variability and trends are primarily dictated by the atmospheric forcing datasets, which include the influence of the real ocean's dynamics superimposed on anthropogenic effects. Despite this general agreement, there are many differences among the model solutions, particularly in the spatial structures of the variability patterns. For example, the location of maximum AMOC variability differs among the models, falling in the Northern Hemisphere in some and the Southern Hemisphere in others.
Modelling alpha-diversities of coastal lagoon fish assemblages from the Mediterranean Sea
NASA Astrophysics Data System (ADS)
Riera, R.; Tuset, V. M.; Betancur-R, R.; Lombarte, A.; Marcos, C.; Pérez-Ruzafa, A.
2018-07-01
Coastal lagoons are marine ecosystems found worldwide with high ecological value; however, they are increasingly deteriorating as a result of anthropogenic activity. Their conservation requires a better understanding of the biodiversity factors that may help identify priority areas. The present study focuses on 37 Mediterranean coastal lagoons and uses predictive modelling based on Generalized Linear Model (GLM) analysis to investigate which variables (geomorphological, environmental, trophic, or biogeographic) may predict variations in alpha-diversity, here comprising taxonomic diversity, average taxonomic distinctness, and phylogenetic and functional diversity. Two GLMs were built per index, depending on the variables available for each lagoon: model 1 used all 37 lagoons, while model 2 used only 23. All alpha-diversity indices showed variability between lagoons associated with the exogenous factors considered. The biogeographic region strongly conditioned most of the models, being the first variable introduced into them. Salinity and chlorophyll a concentration played a secondary role in models 1 and 2, respectively. In general, the highest values of alpha-diversity were found in the northwestern Mediterranean (Balearic Sea, Alborán Sea, and Gulf of Lion); these lagoons might therefore be considered "hotspots" at the Mediterranean scale and should have special status for their protection.
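The GLM approach described can be sketched with a Poisson GLM for species richness on synthetic data. The predictor names (salinity, chlorophyll a, region dummies) echo the abstract, but the values, functional form, and model settings are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 37  # number of lagoons in the study

# Hypothetical predictors: salinity, chlorophyll-a, and a biogeographic
# region indicator (illustrative, not the authors' variables).
salinity = rng.uniform(5, 45, n)
chla = rng.lognormal(1.0, 0.5, n)
region = rng.integers(0, 3, n)  # 3 dummy biogeographic regions

X = np.column_stack([salinity, chla, region == 1, region == 2]).astype(float)
X = StandardScaler().fit_transform(X)

# Species richness as a Poisson-distributed response with an assumed signal.
lam = np.exp(2.5 + 0.3 * X[:, 0] - 0.2 * X[:, 1])
richness = rng.poisson(lam)

model = PoissonRegressor(alpha=1e-3).fit(X, richness)
print(model.coef_)  # effect of each standardized predictor on log-richness
```

A stepwise variant, as in the study, would add or drop predictors based on a deviance criterion rather than fitting all at once.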
Siderius, Christian; Biemans, Hester; van Walsum, Paul E. V.; van Ierland, Ekko C.; Kabat, Pavel; Hellegers, Petra J. G. J.
2016-01-01
One of the main manifestations of climate change will be increased rainfall variability. How to deal with this in agriculture will be a major societal challenge. In this paper we explore flexibility in land use, through deliberate seasonal adjustments in cropped area, as a specific strategy for coping with rainfall variability. Such adjustments are not incorporated in the hydro-meteorological crop models commonly used for food security analyses. Our paper contributes to the literature by making a comprehensive model assessment of inter-annual variability in crop production, including both variations in crop yield and in cropped area. The Ganges basin is used as a case study. First, we assessed the contribution of cropped area variability to overall variability in rice and wheat production by applying hierarchical partitioning to time series of agricultural statistics. We then introduced cropped area as an endogenous decision variable in a hydro-economic optimization model (WaterWise), coupled to a hydrology-vegetation model (LPJmL), and analyzed to what extent its performance in estimating inter-annual variability in crop production improved. From the statistics, we found that in the period 1999–2009 seasonal adjustments in cropped area explain almost 50% of the variability in wheat production and 40% of the variability in rice production in the Indian part of the Ganges basin. Our improved model reproduced the observed variability well at different spatial aggregation levels, especially for wheat. The value of flexibility, i.e. the foregone costs of choosing not to crop in years when water is scarce, was quantified at 4% of the gross margin of wheat in the Indian part of the Ganges basin and as high as 34% in the drought-prone state of Rajasthan. We argue that flexibility in land use is an important strategy for coping with rainfall variability in water-stressed regions. PMID:26934389
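The contribution of cropped-area variability to production variability can be illustrated with a first-order variance decomposition of production = area × yield. This is a toy sketch on synthetic series, not the hierarchical partitioning actually used in the study, and all numbers are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
years = 11  # 1999-2009, as in the study

# Synthetic series: cropped area (Mha) and yield (t/ha); values illustrative.
area = 25 + rng.normal(0, 2.0, years)
yld = 2.8 + rng.normal(0, 0.15, years)
production = area * yld

# First-order (delta-method) variance decomposition of P = A * Y:
#   var(P) ~ Ybar^2 var(A) + Abar^2 var(Y) + 2 Abar Ybar cov(A, Y)
A_bar, Y_bar = area.mean(), yld.mean()
var_terms = {
    "area": Y_bar**2 * area.var(ddof=1),
    "yield": A_bar**2 * yld.var(ddof=1),
    "covariance": 2 * A_bar * Y_bar * np.cov(area, yld)[0, 1],
}
total = sum(var_terms.values())
shares = {k: v / total for k, v in var_terms.items()}
print(shares)
print(f"first-order total {total:.1f} vs sample var {production.var(ddof=1):.1f}")
```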
Morais, Sérgio Alberto; Delerue-Matos, Cristina; Gabarrell, Xavier
2014-08-15
In this study, the concentration probability distributions of 82 pharmaceutical compounds detected in the effluents of 179 European wastewater treatment plants were computed and inserted into a multimedia fate model. The comparative ecotoxicological impact of the direct emission of these compounds from wastewater treatment plants on freshwater ecosystems, based on a potentially affected fraction (PAF) of species approach, was assessed in order to rank compounds by priority. As many pharmaceuticals are acids or bases, the multimedia fate model incorporates regressions to estimate pH-dependent fate parameters. An uncertainty analysis was performed by means of Monte Carlo simulation, which included the uncertainty of fate and ecotoxicity model input variables as well as the spatial variability of landscape characteristics at the European continental scale. Several pharmaceutical compounds were identified as being of greatest concern, including 7 analgesics/anti-inflammatories, 3 β-blockers, 3 psychiatric drugs, and 1 from each of 6 other therapeutic classes. The fate and impact modelling relied extensively on estimated data, given that most of these compounds have little or no experimental fate or ecotoxicity data available, as well as limited reported occurrence in effluents. The contribution of estimated model input variables to the variance of the freshwater ecotoxicity impact, together with the lack of experimental abiotic degradation data for most compounds, helped to establish priorities for further testing. Generally, the effluent concentration and the ecotoxicity effect factor were the model input variables with the most significant effect on the uncertainty of the output results. Copyright © 2014. Published by Elsevier B.V.
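The Monte Carlo uncertainty analysis described can be sketched as follows. The input distributions, the toy steady-state fate relation, and all parameter values here are assumptions for illustration, not the study's fitted model:

```python
import numpy as np

rng = np.random.default_rng(2)
n_draws = 10_000

# Illustrative input distributions (not the study's fitted ones):
# effluent concentration (ug/L), half-life (days), and an ecotoxicity
# effect factor (PAF per ug/L), all lognormal.
conc = rng.lognormal(mean=np.log(0.1), sigma=1.0, size=n_draws)
half_life = rng.lognormal(mean=np.log(10.0), sigma=0.7, size=n_draws)
effect_factor = rng.lognormal(mean=np.log(0.05), sigma=1.2, size=n_draws)

# Toy steady-state fate: freshwater impact scales with emission
# concentration and persistence (first-order removal).
residence_time = 30.0  # days, assumed
fraction_remaining = half_life / (half_life + residence_time)
impact = conc * fraction_remaining * effect_factor

p5, p50, p95 = np.percentile(impact, [5, 50, 95])
print(f"median impact {p50:.2e}, 90% interval [{p5:.2e}, {p95:.2e}]")
```

Ranking compounds then amounts to repeating this propagation per compound and comparing the resulting impact distributions.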
Trends and Variability of Global Fire Emissions Due To Historical Anthropogenic Activities
NASA Astrophysics Data System (ADS)
Ward, Daniel S.; Shevliakova, Elena; Malyshev, Sergey; Rabin, Sam
2018-01-01
Globally, fires are a major source of carbon from the terrestrial biosphere to the atmosphere, occurring on a seasonal cycle and with substantial interannual variability. To understand past trends and variability in sources and sinks of terrestrial carbon, we need quantitative estimates of global fire distributions. Here we introduce an updated version of the Fire Including Natural and Agricultural Lands model, version 2 (FINAL.2), modified to include multiday burning and enhanced fire spread rate in forest crowns. We demonstrate that the improved model reproduces the interannual variability and spatial distribution of fire emissions reported in present-day remotely sensed inventories. We use FINAL.2 to simulate historical (post-1700) fires and attribute past fire trends and variability to individual drivers: land use and land cover change, population growth, and lightning variability. Global fire emissions of carbon increased by about 10% between 1700 and 1900, reaching a maximum of 3.4 Pg C yr-1 in the 1910s, followed by a decrease to about 5% below year-1700 levels by 2010. The decrease in emissions from the 1910s to the present day is driven mainly by land use change, with a smaller contribution from increased fire suppression due to a growing human population, and is largest in Sub-Saharan Africa and South Asia. The interannual variability of global fire emissions is similar in the present day to that in the early historical period, but present-day wildfires would be more variable in the absence of land use change.
Shi, Yuan; Lau, Kevin Ka-Lun; Ng, Edward
2017-08-01
Urban air quality is an important determinant of the quality of urban life. Land use regression (LUR) modelling of air quality is essential for health impact assessment but is more challenging in a mountainous, high-density urban scenario due to the complexities of the urban environment. In this study, a total of 21 LUR models are developed for seven air pollutants (the gaseous pollutants CO, NO2, NOx, O3, and SO2, and the particulate pollutants PM2.5 and PM10) with reference to three different time periods (summertime, wintertime, and the annual average of 5-year long-term hourly data from the local air quality monitoring network) in Hong Kong. Under this mountainous high-density urban scenario, we improved the traditional LUR modelling method by incorporating wind availability information, derived from surface geomorphometric analysis, into LUR modelling. As a result, 269 independent variables were examined to develop the LUR models using the "ADDRESS" independent variable selection method and stepwise multiple linear regression (MLR). Cross-validation was performed for each resultant model. The results show that wind-related variables are included as statistically significant independent variables in most of the resultant models. Compared with the traditional method, a maximum increase of 20% was achieved in the prediction performance for the annual-averaged NO2 concentration level by incorporating wind-related variables into LUR model development. Copyright © 2017 Elsevier Inc. All rights reserved.
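The stepwise MLR with cross-validation described above can be sketched as a forward-selection loop scored by cross-validated R². The data and the roles of the variables are synthetic assumptions, and the "ADDRESS" pre-selection method itself is not reproduced here:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n, p = 200, 12  # sites x candidate predictors (269 in the actual study)

X = rng.normal(size=(n, p))
# Assume NO2 depends on predictors 0, 3, and 7 ("wind-related" in spirit).
y = 40 + 5 * X[:, 0] - 3 * X[:, 3] + 2 * X[:, 7] + rng.normal(0, 1, n)

selected, remaining = [], list(range(p))
best_score = -np.inf
while remaining:
    # Try adding each remaining variable; keep the best if it improves CV R2.
    scores = {
        j: cross_val_score(
            LinearRegression(), X[:, selected + [j]], y, cv=5, scoring="r2"
        ).mean()
        for j in remaining
    }
    j_best = max(scores, key=scores.get)
    if scores[j_best] <= best_score + 1e-4:
        break  # no meaningful improvement: stop
    best_score = scores[j_best]
    selected.append(j_best)
    remaining.remove(j_best)

print(sorted(selected), round(best_score, 3))
```

Classical stepwise MLR uses p-value entry/removal thresholds instead of CV R², but the loop structure is the same.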
Estimating water temperatures in small streams in western Oregon using neural network models
Risley, John C.; Roehl, Edwin A.; Conrads, Paul
2003-01-01
Artificial neural network models were developed to estimate water temperatures in small streams using data collected at 148 sites throughout western Oregon from June to September 1999. The sites were located on 1st-, 2nd-, or 3rd-order streams having undisturbed or minimally disturbed conditions. Data collected at each site for model development included continuous hourly water temperature and a description of riparian habitat. Additional data pertaining to the landscape characteristics of the basins upstream of the sites were assembled using geographic information system (GIS) techniques. Hourly meteorological time series collected at 25 locations within the study region also were assembled. Clustering analysis was used to partition 142 of the sites into 3 groups, and separate models were developed for each group. The riparian habitat, basin characteristic, and meteorological time series data served as independent variables, and the water temperature time series as dependent variables, for the models. Approximately one-third of the data vectors were used for model training, and the remaining two-thirds were used for model testing. Critical input variables included riparian shade, site elevation, and percentage of forested area of the basin. The coefficient of determination and root mean square error for the models ranged from 0.88 to 0.99 and from 0.05 to 0.59 °C, respectively. The models also were tested and validated using temperature time series, habitat, and basin landscape data from the 6 sites that were separate from the 142 sites used to develop the models. The models are capable of estimating water temperatures at locations along 1st-, 2nd-, and 3rd-order streams in western Oregon. The model user must assemble riparian habitat and basin landscape characteristics data for a site of interest. These data, in addition to meteorological data, are model inputs. Output from the models includes simulated hourly water temperatures for the June to September period.
Adjustments can be made to the shade input data to simulate the effects of minimum or maximum shade on water temperatures.
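The overall workflow (cluster sites, then train one neural network per group) can be sketched as follows. The site descriptors and the synthetic temperature relation are illustrative assumptions, not the study's data or network architecture:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
n_sites = 150

# Illustrative site descriptors: riparian shade (%), elevation (m),
# forested fraction of the basin (names are assumptions).
shade = rng.uniform(0, 100, n_sites)
elevation = rng.uniform(50, 1500, n_sites)
forest = rng.uniform(0, 1, n_sites)
X = StandardScaler().fit_transform(np.column_stack([shade, elevation, forest]))

# Synthetic summer mean water temperature: cooler with shade and elevation.
temp = 22 - 4 * X[:, 0] - 3 * X[:, 1] + rng.normal(0, 0.5, n_sites)

# Partition sites into 3 groups (as clustering did in the study),
# then train one small network per group.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
models = {}
for g in range(3):
    mask = labels == g
    models[g] = MLPRegressor(
        hidden_layer_sizes=(8,), solver="lbfgs", max_iter=2000, random_state=0
    ).fit(X[mask], temp[mask])
    print(f"group {g}: n={mask.sum()}, "
          f"R2={models[g].score(X[mask], temp[mask]):.2f}")
```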
Aircraft Anomaly Detection Using Performance Models Trained on Fleet Data
NASA Technical Reports Server (NTRS)
Gorinevsky, Dimitry; Matthews, Bryan L.; Martin, Rodney
2012-01-01
This paper describes an application of a data mining technology called Distributed Fleet Monitoring (DFM) to Flight Operational Quality Assurance (FOQA) data collected from a fleet of commercial aircraft. DFM transforms the data into aircraft performance models, flight-to-flight trends, and individual flight anomalies by fitting a multi-level regression model to the data. The model represents aircraft flight performance and takes into account flight-to-flight and vehicle-to-vehicle variability as fixed effects. The regression parameters include aerodynamic coefficients and other aircraft performance parameters that are usually identified by aircraft manufacturers in flight tests. Using DFM, a multi-terabyte FOQA data set with half a million flights was processed in a few hours. The anomalies found include wrong values of computed variables (e.g., aircraft weight), sensor failures and biases, and failures, biases, and trends in flight actuators. These anomalies were missed by the airline's existing monitoring of FOQA data for exceedances.
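The multi-level idea, estimating a performance parameter per flight and judging it against the fleet distribution, can be sketched as follows. This is a toy robust-outlier version with an injected fault, not the actual DFM algorithm:

```python
import numpy as np

rng = np.random.default_rng(5)
n_flights, n_samples = 200, 60

# Toy performance relation per flight: y = a + b * x + noise, where b is a
# performance parameter nominally shared across the fleet. One flight is
# seeded with a biased parameter to emulate a sensor/actuator fault
# (the flight index is arbitrary).
x = rng.uniform(0, 1, (n_flights, n_samples))
b_true = np.full(n_flights, 2.0)
b_true[17] = 3.5  # injected anomaly
y = 1.0 + b_true[:, None] * x + rng.normal(0, 0.05, (n_flights, n_samples))

# Level 1: least-squares slope per flight.
slopes = np.array([np.polyfit(x[i], y[i], 1)[0] for i in range(n_flights)])

# Level 2: fleet distribution of slopes; flag flights more than
# 4 robust standard deviations (MAD-based) from the fleet median.
med = np.median(slopes)
mad = np.median(np.abs(slopes - med)) * 1.4826
anomalies = np.where(np.abs(slopes - med) > 4 * mad)[0]
print(anomalies)  # flights whose fitted parameter is off-fleet
```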
Mars Global Reference Atmospheric Model (Mars-GRAM 3.34): Programmer's Guide
NASA Technical Reports Server (NTRS)
Justus, C. G.; James, Bonnie F.; Johnson, Dale L.
1996-01-01
This is a programmer's guide for the Mars Global Reference Atmospheric Model (Mars-GRAM 3.34). Included are a brief history and review of the model since its origin in 1988 and a technical discussion of recent additions and modifications. Examples of how to run both the interactive and batch (subroutine) forms are presented. Instructions are provided on how to customize output of the model for various parameters of the Mars atmosphere. Detailed descriptions are given of the main driver programs, subroutines, and associated computational methods. Lists and descriptions include input, output, and local variables in the programs. These descriptions give a summary of program steps and a 'map' of calling relationships among the subroutines. Definitions are provided for the variables passed between subroutines through common lists. Explanations are provided for all diagnostic and progress messages generated during execution of the program. A brief outline of future plans for Mars-GRAM is also presented.
NASA Astrophysics Data System (ADS)
Zakaria, Dzaki; Lubis, Sandro W.; Setiawan, Sonni
2018-05-01
The tropical weather system is controlled by periodic atmospheric disturbances ranging from daily to subseasonal time scales. Among the most prominent atmospheric disturbances in the tropics are convectively coupled equatorial waves (CCEWs). CCEWs are excited by latent heating due to large-scale convective systems and have a significant influence on the weather system. They include the atmospheric equatorial Kelvin wave, the mixed Rossby-gravity (MRG) wave, the equatorial Rossby (ER) wave, and the tropical depression-type (TD-type) wave. In this study, we evaluate the seasonal variability of CCEW activity in nine high-top CMIP5 models, including its spatial distribution in the troposphere. Our results indicate that the seasonal variability of Kelvin waves is well represented in MPI-ESM-LR and MPI-ESM-MR, with maximum activity occurring during boreal spring. The seasonal variability of MRG waves is well represented in CanESM2, HadGEM2-CC, IPSL-CM5A-LR, and IPSL-CM5A-MR, with maximum activity observed during boreal summer. ER waves are well captured by IPSL-CM5A-LR and IPSL-CM5A-MR and maximize during boreal fall, while TD-type waves, with maximum activity during boreal summer, are well represented in CanESM2, HadGEM2-CC, IPSL-CM5A-LR, and IPSL-CM5A-MR. Our results indicate that the skill of CMIP5 models in representing the seasonal variability of CCEWs depends strongly on the convective parameterization and the spatial and vertical resolution used by each model.
Liang, Ying; Lu, Peiyi
2014-02-08
Background: Life satisfaction research in China is still developing and requires new perspectives for enrichment. In China, occupational mobility has accompanied economic liberalization and the emergence of occupational stratification. On the whole, however, occupational mobility has rarely been used as an independent variable; health status is usually treated as the observed or dependent variable in studies of the phenomenon and its influencing factors. A research gap therefore remains. Methods: The data used in this study were obtained from the China Health and Nutrition Survey (CHNS), which covered nine provinces in China and was conducted from 1989 to 2009. Each survey wave involved approximately 4,400 families (about 19,000 individuals) and some community-level data. Results: First, we built a 5 × 5 social mobility table and calculated the life satisfaction of Chinese residents of different occupations in each cell of the table. Second, gender, age, marital status, education level, annual income, hukou, health status, and occupational mobility were used as independent variables. Lastly, we used logistic diagonal mobility models to analyze the relationship between life satisfaction and these variables. Model 1 was the basic model, which consisted of the standard model and controlled variables and excluded drift variables. Model 2 was the total model, which consisted of all variables of interest in this study. Model 3 was the screening model, which excluded the drift-effect indices that were insignificant in Model 2. Conclusion: From the analysis of the controlled variables, health conditions and the direction and distance of occupational mobility significantly affected the life satisfaction of Chinese residents of different occupations. (1) Regarding health status, respondents who had not been sick or injured reported better life satisfaction than those who had. (2) Regarding the direction of occupational mobility, the coefficients of occupational mobility in the models are less than 0, meaning that upward mobility negatively affects life satisfaction. (3) Regarding distance, in Models 2 and 3 a greater mobility distance indicates better life satisfaction. PMID:24506976
Jones, Mirkka M; Tuomisto, Hanna; Borcard, Daniel; Legendre, Pierre; Clark, David B; Olivas, Paulo C
2008-03-01
The degree to which variation in plant community composition (beta-diversity) is predictable from environmental variation, relative to other spatial processes, is of considerable current interest. We addressed this question in Costa Rican rain forest pteridophytes (1,045 plots, 127 species). We also tested the effect of data quality on the results, which has largely been overlooked in earlier studies. To do so, we compared two alternative spatial models [polynomial vs. principal coordinates of neighbour matrices (PCNM)] and ten alternative environmental models (all available environmental variables vs. four subsets, and including their polynomials vs. not). Of the environmental data types, soil chemistry contributed most to explaining pteridophyte community variation, followed in decreasing order of contribution by topography, soil type and forest structure. Environmentally explained variation increased moderately when polynomials of the environmental variables were included. Spatially explained variation increased substantially when the multi-scale PCNM spatial model was used instead of the traditional, broad-scale polynomial spatial model. The best model combination (PCNM spatial model and full environmental model including polynomials) explained 32% of pteridophyte community variation, after correcting for the number of sampling sites and explanatory variables. Overall evidence for environmental control of beta-diversity was strong, and the main floristic gradients detected were correlated with environmental variation at all scales encompassed by the study (c. 100-2,000 m). Depending on model choice, however, total explained variation differed more than fourfold, and the apparent relative importance of space and environment could be reversed. Therefore, we advocate a broader recognition of the impacts that data quality has on analysis results. 
A general understanding of the relative contributions of spatial and environmental processes to species distributions and beta-diversity requires that methodological artefacts are separated from real ecological differences.
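The space-versus-environment comparison above rests on classic variation partitioning. A minimal sketch with one synthetic environmental and one spatial predictor follows; the actual study used PCNM spatial variables and multivariate community data, so this only illustrates the arithmetic of the partition:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(6)
n = 300

# Synthetic gradients: a spatial coordinate and an environmental variable
# that is itself spatially structured (hence shared variation).
space = rng.uniform(0, 1, (n, 1))
env = 0.7 * space + rng.normal(0, 0.1, (n, 1))
abundance = 2 * env[:, 0] + space[:, 0] + rng.normal(0, 0.3, n)

r2_env = LinearRegression().fit(env, abundance).score(env, abundance)
r2_space = LinearRegression().fit(space, abundance).score(space, abundance)
both = np.hstack([env, space])
r2_both = LinearRegression().fit(both, abundance).score(both, abundance)

pure_env = r2_both - r2_space        # [a] environment alone
pure_space = r2_both - r2_env        # [b] space alone
shared = r2_env + r2_space - r2_both  # [c] spatially structured environment
print(f"[a]={pure_env:.2f} [b]={pure_space:.2f} [c]={shared:.2f}")
```

In practice the R² values are adjusted for the number of sites and explanatory variables, as the abstract notes, before the fractions are compared.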
Bolandzadeh, Niousha; Kording, Konrad; Salowitz, Nicole; Davis, Jennifer C; Hsu, Liang; Chan, Alison; Sharma, Devika; Blohm, Gunnar; Liu-Ambrose, Teresa
2015-01-01
Current research suggests that the neuropathology of dementia, including brain changes leading to memory impairment and cognitive decline, is evident years before the onset of the disease. Older adults with cognitive decline have reduced functional independence and quality of life and are at greater risk of developing dementia. Identifying biomarkers that can be easily assessed in the clinical setting and that predict cognitive decline is therefore important: early recognition of cognitive decline could promote timely implementation of preventive strategies. We included 89 community-dwelling adults aged 70 years and older in our study and collected 32 measures of physical function, health status, and cognitive function at baseline. We utilized an L1-L2 regularized regression model (elastic net) to identify which of the 32 baseline measures were strongly predictive of cognitive function after one year. We built three linear regression models: (1) based on baseline cognitive function, (2) based on the variables consistently selected in every cross-validation loop, and (3) a full model based on all 32 variables. Each of these models was carefully tested with nested cross-validation. Our model with the six variables consistently selected in every cross-validation loop had a mean squared prediction error of 7.47, smaller than that of the full model (115.33) and of the model based on baseline cognitive function (7.98). Our model explained 47% of the variance in cognitive function after one year. In short, we built a parsimonious model based on a selected set of six physical function and health status measures strongly predictive of cognitive function after one year. In addition to reducing model complexity, the model with the top variables improved the mean prediction error and R-squared. These six physical function and health status measures can be easily implemented in a clinical setting.
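The elastic-net variable selection described can be sketched as follows. The sample sizes mirror the abstract (89 participants, 32 measures), but the data, the choice of informative variables, and the simple train/test split (rather than full nested cross-validation) are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
n, p = 89, 32  # participants x baseline measures, as in the study

X = rng.normal(size=(n, p))
# Assume follow-up cognition is driven by a handful of baseline measures.
y = 2 * X[:, 0] + 1.5 * X[:, 5] - X[:, 9] + rng.normal(0, 1, n)

X = StandardScaler().fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Elastic net: the L1 part drops weak predictors, the L2 part stabilizes
# the rest; the penalty strength is chosen by internal cross-validation.
enet = ElasticNetCV(l1_ratio=0.5, cv=5, random_state=0).fit(X_tr, y_tr)
kept = np.flatnonzero(enet.coef_)
print(f"{kept.size} of {p} variables kept; "
      f"test R2 = {enet.score(X_te, y_te):.2f}")
```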
NASA Astrophysics Data System (ADS)
Brown, S. M.; Behn, M. D.; Grove, T. L.
2017-12-01
We present results of a combined petrologic, geochemical (major and trace element), and geodynamical forward model for mantle melting and subsequent melt modification. The model advances that of Behn & Grove (2015) and is calibrated using experimental petrology. Our model allows for melting in the plagioclase, spinel, and garnet fields with a flexible retained melt fraction (from pure batch to pure fractional), tracks residual mantle composition, and includes melting with water, variable melt productivity, and mantle mode calculations. This approach is valuable for understanding oceanic crustal accretion, which involves mantle melting and melt modification by migration and aggregation. These igneous processes result in mid-ocean ridge basalts that vary in composition at the local (segment) and global scale. The important variables are geophysical and geochemical and include mantle composition, potential temperature, mantle flow, and spreading rate. Accordingly, our model allows us to systematically quantify the importance of each of these external variables. In addition to discriminating melt generation effects, we are able to separate the effects of different melt modification processes (inefficient pooling, melt-rock reaction, and fractional crystallization) in generating both local, segment-scale and global-scale compositional variability. We quantify the influence of a specific igneous process on the generation of oceanic crust as a function of variations in the external variables. We also find that it is unlikely that garnet lherzolite melting produces a signature in either major or trace element compositions formed from aggregated melts, because when melting does occur in the garnet field at high mantle temperature, it contributes a relatively small, uniform fraction (< 10%) of the pooled melt compositions at all spreading rates.
Additionally, while increasing water content and/or temperature promote garnet melting, they also increase melt extent, pushing the pooled composition to lower Sm/Yb and higher Lu/Hf.
Not Noisy, Just Wrong: The Role of Suboptimal Inference in Behavioral Variability
Beck, Jeffrey M.; Ma, Wei Ji; Pitkow, Xaq; Latham, Peter E.; Pouget, Alexandre
2015-01-01
Behavior varies from trial to trial even when the stimulus is maintained as constant as possible. In many models, this variability is attributed to noise in the brain. Here, we propose that there is another major source of variability: suboptimal inference. Importantly, we argue that in most tasks of interest, and particularly complex ones, suboptimal inference is likely to be the dominant component of behavioral variability. This perspective explains a variety of intriguing observations, including why variability appears to be larger on the sensory than on the motor side, and why our sensors are sometimes surprisingly unreliable. PMID:22500627
Drivers of Variability in Public-Supply Water Use Across the Contiguous United States
NASA Astrophysics Data System (ADS)
Worland, Scott C.; Steinschneider, Scott; Hornberger, George M.
2018-03-01
This study explores the relationship between municipal water use and an array of climate, economic, behavioral, and policy variables across the contiguous U.S. The relationship is explored using Bayesian hierarchical regression models for over 2,500 counties, 18 covariates, and three higher-level grouping variables. Additionally, a second analysis is included for 83 cities where water price and water conservation policy information are available. A hierarchical model using the nine climate regions (as defined by the National Oceanic and Atmospheric Administration) as the higher-level groups results in the best out-of-sample performance, as estimated by the Widely Applicable Information Criterion (WAIC), compared to counties grouped by urban continuum classification or primary economic activity. The regression coefficients indicate that the controls on water use are not uniform across the nation: e.g., counties in the Northeast and Northwest climate regions are more sensitive to social variables, whereas counties in the Southwest and East North Central climate regions are more sensitive to environmental variables. For the national city-level model, it appears that arid cities with a high cost of living and relatively low water bills sell more water per customer, but as with the county-level model, the effect of each variable depends heavily on where a city is located.
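The intuition behind the hierarchical (partial-pooling) model can be sketched with a simple empirical-Bayes shrinkage of region means. Unlike the full Bayesian model, the within-region and between-region variances are assumed known here, and all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(8)
n_regions = 9  # NOAA climate regions, as in the study

# Synthetic county-level water use (gallons/person/day) per region.
region_means = rng.normal(120, 15, n_regions)      # true region effects
counties_per_region = rng.integers(5, 40, n_regions)
data = [
    rng.normal(region_means[r], 25, counties_per_region[r])
    for r in range(n_regions)
]

# Partial pooling: shrink each region's sample mean toward the grand mean,
# more strongly for regions with few counties (classic empirical Bayes).
grand = np.concatenate(data).mean()
sigma2_within = 25.0**2   # assumed known for this sketch
tau2_between = 15.0**2
shrunk = np.array([
    (len(d) / sigma2_within * d.mean() + 1 / tau2_between * grand)
    / (len(d) / sigma2_within + 1 / tau2_between)
    for d in data
])
raw = np.array([d.mean() for d in data])
print(shrunk.round(1))  # every estimate lies between its raw mean and grand
```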
Reid, Colleen E; Jerrett, Michael; Petersen, Maya L; Pfister, Gabriele G; Morefield, Philip E; Tager, Ira B; Raffuse, Sean M; Balmes, John R
2015-03-17
Estimating population exposure to particulate matter during wildfires can be difficult because of insufficient monitoring data to capture the spatiotemporal variability of smoke plumes. Chemical transport models (CTMs) and satellite retrievals provide spatiotemporal data that may be useful in predicting PM2.5 during wildfires. We estimated PM2.5 concentrations during the 2008 northern California wildfires using 10-fold cross-validation (CV) to select an optimal prediction model from a set of 11 statistical algorithms and 29 predictor variables. The variables included CTM output, three measures of satellite aerosol optical depth, distance to the nearest fires, meteorological data, and land use, traffic, spatial location, and temporal characteristics. The generalized boosting model (GBM) with 29 predictor variables had the lowest CV root mean squared error and a CV-R2 of 0.803. The most important predictor variable was the Geostationary Operational Environmental Satellite Aerosol/Smoke Product (GASP) Aerosol Optical Depth (AOD), followed by the CTM output and distance to the nearest fire cluster. Parsimonious models with various combinations of fewer variables also predicted PM2.5 well. Using machine learning algorithms to combine spatiotemporal data from satellites and CTMs can reliably predict PM2.5 concentrations during a major wildfire event.
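The winning GBM-with-cross-validation setup can be sketched as follows. The three stand-in predictors (satellite AOD, CTM output, distance to the nearest fire cluster) and the synthetic PM2.5 relation are assumptions, not the study's 29 variables:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(9)
n = 500

# Illustrative predictors standing in for AOD, CTM output, and distance
# to the nearest fire cluster (names and functional form are assumptions).
aod = rng.lognormal(0, 0.5, n)
ctm_pm = rng.lognormal(2, 0.4, n)
dist_fire = rng.uniform(1, 300, n)
X = np.column_stack([aod, ctm_pm, dist_fire])

# Assumed relation: PM2.5 rises with AOD and CTM output, falls with distance.
y = 5 + 20 * aod + 0.5 * ctm_pm + 300 / (10 + dist_fire) + rng.normal(0, 3, n)

gbm = GradientBoostingRegressor(random_state=0)
cv_r2 = cross_val_score(gbm, X, y, cv=10, scoring="r2").mean()
gbm.fit(X, y)
print(f"10-fold CV R2 = {cv_r2:.2f}")
print(dict(zip(["AOD", "CTM", "dist_fire"], gbm.feature_importances_.round(2))))
```

Model selection in the study amounted to repeating the cross-validation step for 11 such algorithms and keeping the one with the lowest CV error.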
Cerda, Gamal; Pérez, Carlos; Navarro, José I; Aguilar, Manuel; Casas, José A; Aragón, Estíbaliz
2015-01-01
This study tested a structural model of cognitive-emotional explanatory variables to explain performance in mathematics. The predictor variables assessed were related to students' level of development of early mathematical competencies (EMCs), specifically, relational and numerical competencies, predisposition toward mathematics, and the level of logical intelligence in a population of primary school Chilean students (n = 634). This longitudinal study also included the academic performance of the students during a period of 4 years as a variable. The sampled students were initially assessed by means of an Early Numeracy Test, and, subsequently, they were administered a Likert-type scale to measure their predisposition toward mathematics (EPMAT) and a basic test of logical intelligence. The results of these tests were used to analyse the interaction of all the aforementioned variables by means of a structural equations model. This combined interaction model was able to predict 64.3% of the variability of observed performance. Preschool students' performance in EMCs was a strong predictor for achievement in mathematics for students between 8 and 11 years of age. Therefore, this paper highlights the importance of EMCs and the modulating role of predisposition toward mathematics. Also, this paper discusses the educational role of these findings, as well as possible ways to improve negative predispositions toward mathematical tasks in the school domain.
NASA Astrophysics Data System (ADS)
Hofer, Marlis; Mölg, Thomas; Marzeion, Ben; Kaser, Georg
2010-05-01
Recently initiated observation networks in the Cordillera Blanca provide temporally high-resolution, yet short-term atmospheric data. The aim of this study is to extend the existing time series into the past. We present an empirical-statistical downscaling (ESD) model that links 6-hourly NCEP/NCAR reanalysis data to the local target variables, measured at the tropical glacier Artesonraju (Northern Cordillera Blanca). The approach is distinctive in the context of ESD for two reasons. First, the observational time series for model calibration are short (only about two years). Second, unlike most ESD studies in climate research, we focus on variables at a high temporal resolution (i.e., six-hourly values). Our target variables are two important drivers in the surface energy balance of tropical glaciers: air temperature and specific humidity. The selection of predictor fields from the reanalysis data is based on regression analyses and climatologic considerations. The ESD modelling procedure includes combined empirical orthogonal function and multiple regression analyses. Principal component screening is based on cross-validation using the Akaike Information Criterion as model selection criterion. Double cross-validation is applied for model evaluation. Potential autocorrelation in the time series is accounted for by defining the block length in the resampling procedure. Apart from the selection of predictor fields, the modelling procedure is automated and does not include subjective choices. We assess the ESD model sensitivity to the predictor choice by using both single- and mixed-field predictors of the variables air temperature (1000 hPa), specific humidity (1000 hPa), and zonal wind speed (500 hPa). The chosen downscaling domain ranges from 80 to 50 degrees west and from 0 to 20 degrees south. Statistical transfer functions are derived individually for different months and times of day (month/hour-models).
The forecast skill of the month/hour-models largely depends on month and time of day, ranging from 0 to 0.8, but the mixed-field predictors generally perform better than the single-field predictors. At all time scales, the ESD model shows added value against two simple reference models: (i) the direct use of reanalysis grid point values, and (ii) mean diurnal and seasonal cycles over the calibration period. The ESD model forecast for 1960 to 2008 clearly reflects interannual variability related to the El Niño/Southern Oscillation, but is sensitive to the chosen predictor type. So far, we have not assessed the performance of NCEP/NCAR reanalysis data against other reanalysis products. The developed ESD model is computationally cheap and applicable wherever measurements are available for model calibration.
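The core of the ESD procedure — EOF analysis of a reanalysis predictor field followed by multiple regression of a local target on the leading principal components — can be sketched with synthetic data. A generic illustration, not the authors' code; the field dimensions, number of retained PCs, and synthetic target are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
T, G = 400, 50                       # time steps x grid points of a predictor field
field = rng.normal(size=(T, G))      # stand-in reanalysis anomalies
field -= field.mean(axis=0)          # remove the time mean before EOF analysis

# EOF analysis via SVD of the (time x space) anomaly matrix
U, s, Vt = np.linalg.svd(field, full_matrices=False)
pcs = U[:, :3] * s[:3]               # leading principal-component time series

# synthetic local target driven by the leading PCs plus observation noise
target = pcs @ np.array([0.8, -0.5, 0.3]) + 0.1 * rng.normal(size=T)

# multiple regression of the target on the screened PCs (the transfer function)
A = np.c_[np.ones(T), pcs]
beta, *_ = np.linalg.lstsq(A, target, rcond=None)
pred = A @ beta
skill = np.corrcoef(pred, target)[0, 1] ** 2   # explained variance of the fit
```

In the actual model, the number of PCs retained would be chosen by cross-validation with AIC, and separate transfer functions fit per month and time of day.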
Developing and validating a predictive model for stroke progression.
Craig, L E; Wu, O; Gilmour, H; Barber, M; Langhorne, P
2011-01-01
Progression is believed to be a common and important complication in acute stroke, and has been associated with increased mortality and morbidity. Reliable identification of predictors of early neurological deterioration could potentially benefit routine clinical care. The aim of this study was to identify predictors of early stroke progression using two independent patient cohorts. Two patient cohorts were used for this study - the first cohort formed the training data set, which included consecutive patients admitted to an urban teaching hospital between 2000 and 2002, and the second cohort formed the test data set, which included patients admitted to the same hospital between 2003 and 2004. A standard definition of stroke progression was used. The first cohort (n = 863) was used to develop the model. Variables that were statistically significant (p < 0.1) on univariate analysis were included in the multivariate model. Logistic regression with backward stepwise elimination was employed, dropping the least significant variables (p > 0.1) in turn. The second cohort (n = 216) was used to test the performance of the model. The performance of the predictive model was assessed in terms of both calibration and discrimination. Multiple imputation methods were used for dealing with the missing values. Variables shown to be significant predictors of stroke progression were conscious level, history of coronary heart disease, presence of hyperosmolarity, CT lesion, living alone on admission, Oxfordshire Community Stroke Project classification, presence of pyrexia and smoking status. The model appears to have reasonable discriminative properties [the median area under the receiver-operating characteristic curve was 0.72 (range 0.72-0.73)] and to fit well with the observed data, as indicated by the high goodness-of-fit p value [the median p value from the Hosmer-Lemeshow test was 0.90 (range 0.50-0.92)].
The predictive model developed in this study contains variables that can be easily collected in practice, thereby increasing its usability in clinical practice. Using this analysis approach, the discrimination and calibration of the predictive model appear sufficiently high to provide accurate predictions. This study also offers some discussion around the validation of predictive models for wider use in clinical practice.
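Discrimination of the kind reported above is summarized by the area under the ROC curve, which equals the probability that a randomly chosen progressing patient receives a higher predicted risk than a randomly chosen non-progressing one. A small self-contained sketch computing AUC via the Mann-Whitney rank statistic; this is a generic illustration, not code from the study:

```python
import numpy as np

def roc_auc(y_true, scores):
    """AUC as the Mann-Whitney U statistic (midranks handle ties).
    y_true: 0/1 outcomes; scores: predicted risks."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, dtype=float)
    order = scores.argsort()
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    for v in np.unique(scores):          # assign midranks to tied scores
        mask = scores == v
        ranks[mask] = ranks[mask].mean()
    n_pos = y_true.sum()
    n_neg = len(y_true) - n_pos
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

An AUC of 0.72, as reported for the stroke-progression model, means the model ranks a true progressor above a non-progressor about 72% of the time.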
Hydrologic Remote Sensing and Land Surface Data Assimilation.
Moradkhani, Hamid
2008-05-06
Accurate, reliable and skillful forecasting of key environmental variables such as soil moisture and snow is of paramount importance due to their strong influence on many water resources applications, including flood control, agricultural production and effective water resources management, as well as on the behavior of the climate system. Soil moisture is a key state variable in land surface-atmosphere interactions affecting surface energy fluxes, runoff and the radiation balance. Snow processes also have a large influence on land-atmosphere energy exchanges due to snow's high albedo, low thermal conductivity and considerable spatial and temporal variability, resulting in dramatic changes in surface and ground temperatures. Measurement of these two variables is possible through a variety of methods using ground-based and remote sensing procedures. Remote sensing, however, holds great promise for soil moisture and snow measurements, which have considerable spatial and temporal variability. Merging these measurements with hydrologic model outputs in a systematic and effective way results in an improvement of land surface model prediction. Data assimilation provides a mechanism to combine these two sources of estimation. Much success has been attained in recent years in using data from passive microwave sensors and assimilating them into the models. This paper provides an overview of the remote sensing measurement techniques for soil moisture and snow data and describes the advances in data assimilation techniques through ensemble filtering, mainly the Ensemble Kalman filter (EnKF) and Particle filter (PF), for improving the model prediction and reducing the uncertainties involved in the prediction process.
It is believed that the PF provides a complete representation of the probability distribution of the state variables of interest (according to the sequential Bayes law) and could be a strong alternative to the EnKF, which is subject to limitations including its linear updating rule and its assumption of a jointly normal distribution of errors in the state variables and observations.
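The EnKF analysis step mentioned above can be sketched compactly. Below is a stochastic (perturbed-observation) variant in numpy, with the state-observation covariances estimated from the ensemble; shapes, names, and the 1-D soil-moisture-like example are assumptions for illustration, not an operational assimilation code:

```python
import numpy as np

def enkf_update(ensemble, obs, obs_var, H, rng):
    """One stochastic EnKF analysis step.
    ensemble: (N, n) state ensemble; obs: (m,) observation vector;
    obs_var: scalar observation-error variance; H: (m, n) observation operator."""
    N = ensemble.shape[0]
    Hx = ensemble @ H.T                          # (N, m) predicted observations
    A = ensemble - ensemble.mean(axis=0)         # state anomalies
    HA = Hx - Hx.mean(axis=0)                    # observation-space anomalies
    P_hh = HA.T @ HA / (N - 1) + obs_var * np.eye(len(obs))
    P_xh = A.T @ HA / (N - 1)
    K = P_xh @ np.linalg.inv(P_hh)               # ensemble Kalman gain
    perturbed = obs + rng.normal(scale=np.sqrt(obs_var), size=(N, len(obs)))
    return ensemble + (perturbed - Hx) @ K.T     # linear update toward the obs

# usage: assimilate a single remote-sensing-like observation of one state
rng = np.random.default_rng(0)
prior = rng.normal(0.2, 0.05, size=(500, 1))     # soil-moisture-like prior ensemble
posterior = enkf_update(prior, np.array([0.35]), 0.0025, np.eye(1), rng)
```

The linear update visible in the last line of `enkf_update` is exactly the limitation the abstract notes; a particle filter replaces it with importance weighting of the ensemble members.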
The Built Environment and Active Travel: Evidence from Nanjing, China.
Feng, Jianxi
2016-03-08
An established relationship exists between the built environment and active travel. Nevertheless, the literature examining the impacts of different components of the built environment is limited. In addition, most existing studies are based on data from cities in the U.S. and Western Europe. The situation in Chinese cities remains largely unknown. Based on data from Nanjing, China, this study explicitly examines the influences of two components of the built environment--the neighborhood form and street form--on residents' active travel. Binary logistic regression analyses examined the effects of the neighborhood form and street form on subsistence, maintenance and discretionary travel, respectively. For each travel purpose, three models are explored: a model with only socio-demographics, a model with variables of the neighborhood form and a complete model with all variables. The model fit indicator, Nagelkerke's ρ², increased by 0.024 when neighborhood form variables are included and increased by 0.070 when street form variables are taken into account. A similar situation can be found in the models of maintenance activities and discretionary activities. Regarding specific variables, very limited significant impacts of the neighborhood form variables are observed, while almost all of the characteristics of the street form show significant influences on active transport. In Nanjing, street form factors have a more profound influence on active travel than neighborhood form factors. The focal point of the land use regulations and policy of local governments should shift from the neighborhood form to the street form to maximize the effects of policy interventions.
Use of Physiologically Based Pharmacokinetic (PBPK) Models ...
EPA announced the availability of the final report, Use of Physiologically Based Pharmacokinetic (PBPK) Models to Quantify the Impact of Human Age and Interindividual Differences in Physiology and Biochemistry Pertinent to Risk Final Report for Cooperative Agreement. This report describes and demonstrates techniques necessary to extrapolate and incorporate in vitro derived metabolic rate constants in PBPK models. It also includes two case study examples designed to demonstrate the applicability of such data for health risk assessment and addresses the quantification, extrapolation and interpretation of advanced biochemical information on human interindividual variability of chemical metabolism for risk assessment application. It comprises five chapters; topics and results covered in the first four chapters have been published in the peer-reviewed scientific literature. Topics covered include: Data Quality Objectives; Experimental Framework; Required Data; and two example case studies that develop and incorporate in vitro metabolic rate constants in PBPK models designed to quantify human interindividual variability to better direct the choice of uncertainty factors for health risk assessment. This report is intended to serve as a reference document for risk assessors to use when quantifying, extrapolating, and interpreting advanced biochemical information about human interindividual variability of chemical metabolism.
Potential distribution of the viral haemorrhagic septicaemia virus in the Great Lakes region
Escobar, Luis E.; Kurath, Gael; Escobar-Dodero, Joaquim; Craft, Meggan E.; Phelps, Nicholas B.D.
2017-01-01
Viral haemorrhagic septicaemia virus (VHSV) genotype IVb has been responsible for large-scale fish mortality events in the Great Lakes of North America. Anticipating the areas of potential VHSV occurrence is key to designing epidemiological surveillance and disease prevention strategies in the Great Lakes basin. We explored the environmental features that could shape the distribution of VHSV, based on remote sensing and climate data via ecological niche modelling. Variables included temperature measured during the day and night, precipitation, vegetation, bathymetry, solar radiation and topographic wetness. VHSV occurrences were obtained from available reports of virus confirmation in laboratory facilities. We fit a Maxent model using VHSV-IVb reports and environmental variables under different parameterizations to identify the best model to determine potential VHSV occurrence based on environmental suitability. VHSV reports were generated from both passive and active surveillance. VHSV occurrences were most abundant near shore sites. We were, however, able to capture the environmental signature of VHSV based on the environmental variables employed in our model, allowing us to identify patterns of VHSV potential occurrence. Our findings suggest that VHSV is not at an ecological equilibrium and more areas could be affected, including areas not in close geographic proximity to past VHSV reports.
Development of LACIE CCEA-1 weather/wheat yield models. [regression analysis
NASA Technical Reports Server (NTRS)
Strommen, N. D.; Sakamoto, C. M.; Leduc, S. K.; Umberger, D. E. (Principal Investigator)
1979-01-01
The advantages and disadvantages of the causal (phenological, dynamic, physiological), statistical regression, and analog approaches to modeling for grain yield are examined. Given LACIE's primary goal of estimating wheat production for the large areas of eight major wheat-growing regions, the statistical regression approach of correlating historical yield and climate data offered the Center for Climatic and Environmental Assessment the greatest potential return within the constraints of time and data sources. The basic equation for the first generation wheat-yield model is given. Topics discussed include truncation, trend variable, selection of weather variables, episodic events, strata selection, operational data flow, weighting, and model results.
Faulhammer, E; Llusa, M; Wahl, P R; Paudel, A; Lawrence, S; Biserni, S; Calzolari, V; Khinast, J G
2016-01-01
The objectives of this study were to develop a predictive statistical model for low-fill-weight capsule filling of inhalation products with dosator nozzles via the quality by design (QbD) approach and, based on that, to create refined models that include quadratic terms for significant parameters. Various controllable process parameters and uncontrolled material attributes of 12 powders were initially screened using a linear model with partial least squares (PLS) regression to determine their effect on the critical quality attributes (CQAs; fill weight and weight variability). After identifying critical material attributes (CMAs) and critical process parameters (CPPs) that influenced the CQAs, model refinement was performed to study whether interactions or quadratic terms influence the model. Based on the assessment of the effects of the CPPs and CMAs on fill weight and weight variability for low-fill-weight inhalation products, we developed an excellent linear predictive model for fill weight (R² = 0.96, Q² = 0.96 for powders with good flow properties and R² = 0.94, Q² = 0.93 for cohesive powders) and a model that provides a good approximation of the fill weight variability for each powder group. We validated the model, established a design space for the performance of different types of inhalation-grade lactose on low-fill-weight capsule filling and successfully used the CMAs and CPPs to predict the fill weight of powders that were not included in the development set.
NASA Technical Reports Server (NTRS)
Hill, Emma M.; Ponte, Rui M.; Davis, James L.
2007-01-01
Comparison of monthly mean tide-gauge time series to corresponding model time series based on a static inverted barometer (IB) for pressure-driven fluctuations and an ocean general circulation model (OM) reveals that the combined model successfully reproduces seasonal and interannual changes in relative sea level at many stations. Removal of the OM and IB from the tide-gauge record produces residual time series with a mean global variance reduction of 53%. The OM is mis-scaled for certain regions, and 68% of the residual time series contain a significant seasonal variability after removal of the OM and IB from the tide-gauge data. Including OM admittance parameters and seasonal coefficients in a regression model for each station, with IB also removed, produces residual time series with a mean global variance reduction of 71%. Examination of the regional improvement in variance caused by scaling the OM, including seasonal terms, or both, indicates weakness in the model at predicting sea-level variation for constricted ocean regions. The model is particularly effective at reproducing sea-level variation for stations in North America, Europe, and Japan. The RMS residual for many stations in these areas is 25-35 mm. The production of "cleaner" tide-gauge time series, with oceanographic variability removed, is important for future analysis of nonsecular and regionally differing sea-level variations. Understanding the ocean model's strengths and weaknesses will allow for future improvements of the model.
A Multivariate Model of Physics Problem Solving
ERIC Educational Resources Information Center
Taasoobshirazi, Gita; Farley, John
2013-01-01
A model of expertise in physics problem solving was tested on undergraduate science, physics, and engineering majors enrolled in an introductory-level physics course. Structural equation modeling was used to test hypothesized relationships among variables linked to expertise in physics problem solving including motivation, metacognitive planning,…
NASA Astrophysics Data System (ADS)
Nie, W.; Zaitchik, B. F.; Kumar, S.; Rodell, M.
2017-12-01
Advanced Land Surface Models (LSM) offer a powerful tool for studying and monitoring hydrological variability. Highly managed systems, however, present a challenge for these models, which typically have simplified or incomplete representations of human water use, if the process is represented at all. GRACE, meanwhile, detects the total change in water storage, including change due to human activities, but does not resolve the source of these changes. Here we examine recent groundwater declines in the US High Plains Aquifer (HPA), a region that is heavily utilized for irrigation and that is also affected by episodic drought. To understand observed decline in groundwater (well observation) and terrestrial water storage (GRACE) during a recent multi-year drought, we modify the Noah-MP LSM to include a groundwater pumping irrigation scheme. To account for seasonal and interannual variability in active irrigated area we apply a monthly time-varying greenness vegetation fraction (GVF) dataset to the model. A set of five experiments were performed to study the impact of irrigation with groundwater withdrawal on the simulated hydrological cycle of the HPA and to assess the importance of time-varying GVF when simulating drought conditions. The results show that including the groundwater pumping irrigation scheme in Noah-MP improves model agreement with GRACE mascon solutions for TWS and well observations of groundwater anomaly in the southern HPA, including Texas and Kansas, and that accounting for time-varying GVF is important for model realism under drought. Results for the HPA in Nebraska are mixed, likely due to misrepresentation of the recharge process. 
This presentation will highlight the value of the GRACE constraint for model development, present estimates of the relative contribution of climate variability and irrigation to declining TWS in the HPA under drought, and identify opportunities to integrate GRACE-FO with models for water resource monitoring in heavily irrigated regions.
ERIC Educational Resources Information Center
McGrath, Lauren M.; Pennington, Bruce F.; Shanahan, Michelle A.; Santerre-Lemmon, Laura E.; Barnard, Holly D.; Willcutt, Erik G.; DeFries, John C.; Olson, Richard K.
2011-01-01
Background: This study tests a multiple cognitive deficit model of reading disability (RD), attention-deficit/hyperactivity disorder (ADHD), and their comorbidity. Methods: A structural equation model (SEM) of multiple cognitive risk factors and symptom outcome variables was constructed. The model included phonological awareness as a unique…
Modeling Achievement in Mathematics: The Role of Learner and Learning Environment Characteristics
ERIC Educational Resources Information Center
Nasser-Abu Alhija, Fadia; Amasha, Marcel
2012-01-01
This study examined a structural model of mathematics achievement among Druze 8th graders in Israel. The model integrates 2 psychosocial theories: goal theory and social learning theory. Variables in the model included gender, father's and mother's education, classroom mastery and performance goal orientation, mathematics self-efficacy and…
NASA Technical Reports Server (NTRS)
Stouffer, D. C.; Sheh, M. Y.
1988-01-01
A micromechanical model based on crystallographic slip theory was formulated for nickel-base single crystal superalloys. The current equations include both drag stress and back stress state variables to model the local inelastic flow. Specially designed experiments have been conducted to evaluate the effect of back stress in single crystals. The results showed that (1) the back stress is orientation dependent; and (2) the back stress state variable in the inelastic flow equation is necessary for predicting anelastic behavior of the material. The model also demonstrated improved fatigue predictive capability. Model predictions and experimental data are presented for the single crystal superalloy Rene N4 at 982 °C.
Laine, Christopher M.; Valero-Cuevas, Francisco J.
2018-01-01
Involuntary force variability below 15 Hz arises from, and is influenced by, many factors including descending neural drive, proprioceptive feedback, and mechanical properties of muscles and tendons. However, their potential interactions that give rise to the well-structured spectrum of involuntary force variability are not well understood due to a lack of experimental techniques. Here, we investigated the generation, modulation, and interactions among different sources of force variability using a physiologically-grounded closed-loop simulation of an afferented muscle model. The closed-loop simulation included a musculotendon model, muscle spindle, Golgi tendon organ (GTO), and a tracking controller which enabled target-guided force tracking. We demonstrate that closed-loop control of an afferented musculotendon suffices to replicate and explain surprisingly many cardinal features of involuntary force variability. Specifically, we present 1) a potential origin of low-frequency force variability associated with co-modulation of motor unit firing rates (i.e.,‘common drive’), 2) an in-depth characterization of how proprioceptive feedback pathways suffice to generate 5-12 Hz physiological tremor, and 3) evidence that modulation of those feedback pathways (i.e., presynaptic inhibition of Ia and Ib afferents, and spindle sensitivity via fusimotor drive) influence the full spectrum of force variability. These results highlight the previously underestimated importance of closed-loop neuromechanical interactions in explaining involuntary force variability during voluntary ‘isometric’ force control. Furthermore, these results provide the basis for a unifying theory that relates spinal circuitry to various manifestations of altered involuntary force variability in fatigue, aging and neurological disease. PMID:29309405
Park, Jangwoon; Ebert, Sheila M; Reed, Matthew P; Hallman, Jason J
2016-03-01
Previously published statistical models of driving posture have been effective for vehicle design but have not taken into account the effects of age. The present study developed new statistical models for predicting driving posture. Driving postures of 90 U.S. drivers with a wide range of age and body size were measured in a laboratory mockup under nine package conditions. Posture-prediction models for female and male drivers were separately developed by employing a stepwise regression technique using age, body dimensions, vehicle package conditions, and two-way interactions, among other variables. Driving posture was significantly associated with age, and the effects of other variables depended on age. A set of posture-prediction models is presented for women and men. The results are compared with a previously developed model. The present study is the first study of driver posture to include a large cohort of older drivers and the first to report a significant effect of age. The posture-prediction models can be used to position computational human models or crash-test dummies for vehicle design and assessment. © 2015, Human Factors and Ergonomics Society.
Modeling Source Water TOC Using Hydroclimate Variables and Local Polynomial Regression.
Samson, Carleigh C; Rajagopalan, Balaji; Summers, R Scott
2016-04-19
To control disinfection byproduct (DBP) formation in drinking water, an understanding of the source water total organic carbon (TOC) concentration variability can be critical. Previously, TOC concentrations in water treatment plant source waters have been modeled using streamflow data. However, the lack of streamflow data or unimpaired flow scenarios makes it difficult to model TOC. In addition, TOC variability under climate change further exacerbates the problem. Here we propose a modeling approach based on local polynomial regression that uses climate (e.g., temperature) and land surface (e.g., soil moisture) variables as predictors of TOC concentration, obviating the need for streamflow. The local polynomial approach has the ability to capture non-Gaussian and nonlinear features that might be present in the relationships. The utility of the methodology is demonstrated using source water quality and climate data in three case study locations with surface source waters, including river and reservoir sources. The models show good predictive skill in general at these locations, with lower skill at locations with the most anthropogenic influences in their streams. Source water TOC predictive models can provide water treatment utilities with important information for making treatment decisions for DBP regulation compliance under future climate scenarios.
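The building block of local polynomial regression is a weighted low-degree fit evaluated at each query point, which is what lets the method track nonlinear, non-Gaussian relationships. A minimal numpy sketch of a degree-1 (local linear) fit with a tricube kernel; the bandwidth, names, and test function are assumptions for illustration, not the study's implementation:

```python
import numpy as np

def local_linear(x, y, x0, bandwidth):
    """Locally weighted degree-1 polynomial fit evaluated at x0.
    Points beyond `bandwidth` from x0 receive zero weight (tricube kernel)."""
    w = np.clip(1.0 - (np.abs(x - x0) / bandwidth) ** 3, 0.0, None) ** 3
    X = np.c_[np.ones_like(x), x - x0]        # centered design: intercept = f(x0)
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta[0]                            # fitted value at the query point

# usage: recover a nonlinear curve pointwise from noisy-free samples
x = np.linspace(0.0, 2.0, 201)
y = x ** 2
est = local_linear(x, y, 1.0, 0.3)            # should be close to 1.0
```

A TOC model of the kind described would apply such a fit in the space of the climate and land-surface predictors, with higher polynomial degree or dimension as needed.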
Weaker soil carbon-climate feedbacks resulting from microbial and abiotic interactions
NASA Astrophysics Data System (ADS)
Tang, Jinyun; Riley, William J.
2015-01-01
The large uncertainty in soil carbon-climate feedback predictions has been attributed to the incorrect parameterization of decomposition temperature sensitivity (Q10) and microbial carbon use efficiency. Empirical experiments have found that these parameters vary spatiotemporally, but such variability is not included in current ecosystem models. Here we use a thermodynamically based decomposition model to test the hypothesis that this observed variability arises from interactions between temperature, microbial biogeochemistry, and mineral surface sorptive reactions. We show that because mineral surfaces interact with substrates, enzymes and microbes, both Q10 and microbial carbon use efficiency are hysteretic (so that neither can be represented by a single static function) and the conventional labile and recalcitrant substrate characterization with static temperature sensitivity is flawed. In a 4-K temperature perturbation experiment, our fully dynamic model predicted more variable but weaker soil carbon-climate feedbacks than did the static Q10 and static carbon use efficiency model when forced with yearly, daily and hourly variable temperatures. These results imply that current Earth system models probably overestimate the response of soil carbon stocks to global warming. Future ecosystem models should therefore consider the dynamic interactions between sorptive mineral surfaces, substrates and microbial processes.
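For reference, the static-Q10 parameterization the study argues against treats temperature sensitivity as a fixed exponential scaling: the decomposition rate multiplies by Q10 for every 10 K of warming. A textbook sketch of that formula (generic, not the authors' dynamic model; the reference temperature and Q10 value are illustrative defaults):

```python
def q10_rate(rate_ref, T, T_ref=25.0, q10=2.0):
    """Static-Q10 scaling of a decomposition rate.
    rate_ref: rate at T_ref (°C); T: temperature (°C); q10: multiplier per 10 K."""
    return rate_ref * q10 ** ((T - T_ref) / 10.0)
```

The study's point is that when sorption and microbial dynamics interact, the effective `q10` is itself state-dependent and hysteretic, so no single static value of this form captures the feedback.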
A respiratory alert model for the Shenandoah Valley, Virginia, USA
NASA Astrophysics Data System (ADS)
Hondula, David M.; Davis, Robert E.; Knight, David B.; Sitka, Luke J.; Enfield, Kyle; Gawtry, Stephen B.; Stenger, Phillip J.; Deaton, Michael L.; Normile, Caroline P.; Lee, Temple R.
2013-01-01
Respiratory morbidity (particularly COPD and asthma) can be influenced by short-term weather fluctuations that affect air quality and lung function. We developed a model to evaluate meteorological conditions associated with respiratory hospital admissions in the Shenandoah Valley of Virginia, USA. We generated ensembles of classification trees based on six years of respiratory-related hospital admissions (64,620 cases) and a suite of 83 potential environmental predictor variables. As our goal was to identify short-term weather linkages to high admission periods, the dependent variable was formulated as a binary classification of five-day moving average respiratory admission departures from the seasonal mean value. Accounting for seasonality removed the long-term apparent inverse relationship between temperature and admissions. We generated eight total models specific to the northern and southern portions of the valley for each season. All eight models demonstrate predictive skill (mean odds ratio = 3.635) when evaluated using a randomization procedure. The predictor variables selected by the ensembling algorithm vary across models, and both meteorological and air quality variables are included. In general, the models indicate complex linkages between respiratory health and environmental conditions that may be difficult to identify using more traditional approaches.
The Connection between Role Model Relationships and Self-Direction in Developmental Students
ERIC Educational Resources Information Center
Di Tommaso, Kathrynn
2010-01-01
This paper presents a qualitative study that used classroom observations, faculty interviews, and student interviews to investigate the meaning and importance of seven non-cognitive variables to a cohort of developmental writing students at an urban community college. The variables studied included finances, study management, college surroundings,…
Effects of Locus of Control, Academic Self-Efficacy, and Tutoring on Academic Performance
ERIC Educational Resources Information Center
Drago, Anthony; Rheinheimer, David C.; Detweiler, Thomas N.
2018-01-01
This study investigated the connection between locus of control (LOC), academic self-efficacy (ASE), and academic performance, and whether these variables are affected by tutoring. Additional variables of interest, including gender, students' Pell Grant status, ethnicity, and class size, were also considered for the research models. The population…
An Effect Size for Regression Predictors in Meta-Analysis
ERIC Educational Resources Information Center
Aloe, Ariel M.; Becker, Betsy Jane
2012-01-01
A new effect size representing the predictive power of an independent variable from a multiple regression model is presented. The index, denoted r_sp, is the semipartial correlation of the predictor with the outcome of interest. This effect size can be computed when multiple predictor variables are included in the regression model…
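The semipartial correlation r_sp can be computed by residualizing the predictor on the remaining covariates and correlating the residual with the outcome. A minimal sketch, assuming one outcome y, a focal predictor x, and one covariate z (variable names are illustrative, and this is the standard definition rather than the paper's meta-analytic estimator):

```python
def pearson(a, b):
    """Plain Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    va = sum((u - ma) ** 2 for u in a)
    vb = sum((v - mb) ** 2 for v in b)
    return cov / (va * vb) ** 0.5

def semipartial_r(y, x, z):
    """Correlate y with the part of x not shared with z:
    residualize x on z by simple least squares, then take Pearson r."""
    n = len(x)
    mx, mz = sum(x) / n, sum(z) / n
    slope = (sum((u - mz) * (v - mx) for u, v in zip(z, x))
             / sum((u - mz) ** 2 for u in z))
    resid = [v - (mx + slope * (u - mz)) for u, v in zip(z, x)]
    return pearson(y, resid)
```

When x is uncorrelated with z, r_sp reduces to the ordinary correlation of y with x, which makes the index easy to sanity-check.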
Development and Testing of an Initial Model of Curricular Leadership Culture in Middle Schools
ERIC Educational Resources Information Center
Adams, Jerry
2007-01-01
Effective school studies, for the most part, have focused on different individual school-level independent variables influencing student achievement and have largely neglected examining contextual variables within the school or school community that may evolve as a result of responding to statewide accountability pressures, including examining how…
Military Enlistments: What Can We Learn from Geographic Variation? Technical Report 620.
ERIC Educational Resources Information Center
Brown, Charles
Some economic variables were examined that affect enlistment decisions and therefore affect the continued success of the All-Volunteer Force. The study used a multiple regression, pooled cross-section/time-series model over the 1975-1982 period, including pay, unemployment, educational benefits, and recruiting resources as independent variables.…
ERIC Educational Resources Information Center
Ding, Lin
2014-01-01
This study seeks to test the causal influences of reasoning skills and epistemologies on student conceptual learning in physics. A causal model, integrating multiple variables that were investigated separately in the prior literature, is proposed and tested through path analysis. These variables include student preinstructional reasoning skills…
NASA Technical Reports Server (NTRS)
Boyce, Lola; Bast, Callie C.
1992-01-01
The research included ongoing development of methodology that provides probabilistic lifetime strength of aerospace materials via computational simulation. A probabilistic material strength degradation model, in the form of a randomized multifactor interaction equation, is postulated for strength degradation of structural components of aerospace propulsion systems subjected to a number of effects or primitive variables. These primitive variables may include high temperature, fatigue, or creep. In most cases, strength is reduced as a result of the action of a variable. This multifactor interaction strength degradation equation has been randomized and is included in the computer program PROMISS. Also included in the research is the development of methodology to calibrate the above-described constitutive equation using actual experimental materials data together with linear regression of that data, thereby predicting values for the empirical material constants for each effect or primitive variable. This regression methodology is included in the computer program PROMISC. Actual experimental materials data were obtained from the open literature for materials typically of interest to those studying aerospace propulsion system components. Material data for Inconel 718 were analyzed using the developed methodology.
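The multifactor interaction form referred to above can be sketched as a product of per-effect terms. The commonly published shape is S/S0 = Π ((A_u − A)/(A_u − A_ref))^q for each primitive variable; the exact PROMISS formulation may differ, so treat the names and term shape below as assumptions:

```python
def strength_ratio(effects):
    """Multifactor-interaction sketch of strength degradation.

    Each effect (e.g., temperature, fatigue cycles) is a tuple
    (current, reference, ultimate, exponent); the strength ratio falls
    from 1 at the reference value toward 0 as the variable approaches
    its ultimate value, and the per-effect factors multiply.
    """
    ratio = 1.0
    for current, reference, ultimate, exponent in effects:
        ratio *= ((ultimate - current) / (ultimate - reference)) ** exponent
    return ratio

# one hypothetical thermal effect: reference 25 units, ultimate 1000 units
partial = strength_ratio([(500.0, 25.0, 1000.0, 0.5)])
```

Randomizing the exponents and reference/ultimate values (as PROMISS does) turns this deterministic ratio into a probabilistic strength distribution.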
Holistic uncertainty analysis in river basin modeling for climate vulnerability assessment
NASA Astrophysics Data System (ADS)
Taner, M. U.; Wi, S.; Brown, C.
2017-12-01
The challenges posed by an uncertain future climate are a prominent concern for water resources managers. A number of frameworks exist for assessing the impacts of climate-related uncertainty, including internal climate variability and anthropogenic climate change, such as scenario-based and vulnerability-based approaches. While in many cases climate uncertainty may be dominant, other factors such as the future evolution of the river basin, hydrologic response, and reservoir operations are potentially significant sources of uncertainty. While uncertainty associated with modeling hydrologic response has received attention, very little has focused on the range of uncertainty and possible effects of the water resources infrastructure and management. This work presents a holistic framework that allows analysis of climate, hydrologic, and water management uncertainty in water resources systems analysis, with the aid of a water system model designed to integrate component models for hydrologic processes and water management activities. The uncertainties explored include those associated with climate variability and change, hydrologic model parameters, and water system operation rules. A Bayesian framework is used to quantify and model the uncertainties at each modeling step in an integrated fashion, including prior and likelihood information about model parameters. The framework is demonstrated in a case study for the St. Croix Basin, located on the border of the United States and Canada.
Using the entire history in the analysis of nested case cohort samples.
Rivera, C L; Lumley, T
2016-08-15
Countermatching designs can provide more efficient estimates than simple matching or case-cohort designs in certain situations, such as when good surrogate variables for an exposure of interest are available. We extend pseudolikelihood estimation for the Cox model under countermatching designs to models where time-varying covariates are considered. We also implement pseudolikelihood with calibrated weights to improve efficiency in nested case-control designs in the presence of time-varying variables. A simulation study is carried out, which considers four different scenarios: a binary time-dependent variable, a continuous time-dependent variable, and each of these with interactions included. Simulation results show that pseudolikelihood with calibrated weights under countermatching offers large gains in efficiency compared to case-cohort sampling. Pseudolikelihood with calibrated weights yielded more efficient estimators than standard pseudolikelihood estimators, and estimators were more efficient under countermatching than under case-cohort sampling for the situations considered. The methods are illustrated using the Colorado Plateau uranium miners cohort. Furthermore, we present a general method to generate survival times with time-varying covariates. Copyright © 2016 John Wiley & Sons, Ltd.
Fourier analysis of blazar variability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Finke, Justin D.; Becker, Peter A., E-mail: justin.finke@nrl.navy.mil
Blazars display strong variability on multiple timescales and in multiple radiation bands. Their variability is often characterized by power spectral densities (PSDs) and time lags plotted as functions of the Fourier frequency. We develop a new theoretical model based on the analysis of the electron transport (continuity) equation, carried out in the Fourier domain. The continuity equation includes electron cooling and escape, and a derivation of the emission properties includes light travel time effects associated with a radiating blob in a relativistic jet. The model successfully reproduces the general shapes of the observed PSDs and predicts specific PSD and time lag behaviors associated with variability in the synchrotron, synchrotron self-Compton, and external Compton emission components, from submillimeter to γ-rays. We discuss applications to BL Lacertae objects and to flat-spectrum radio quasars (FSRQs), where there are hints that some of the predicted features have already been observed. We also find that FSRQs should have steeper γ-ray PSD power-law indices than BL Lac objects at Fourier frequencies ≲ 10⁻⁴ Hz, in qualitative agreement with previously reported observations by the Fermi Large Area Telescope.
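The PSDs referred to above are estimated from light curves by a discrete periodogram. A minimal stdlib-only sketch (a direct DFT, not the study's method; a real analysis would use an FFT and proper normalization/binning):

```python
import cmath
import math

def periodogram(x):
    """Periodogram of an evenly sampled light curve: power at the
    Fourier frequencies k/N for k = 1 .. N//2 (mean removed)."""
    n = len(x)
    mean = sum(x) / n
    power = []
    for k in range(1, n // 2 + 1):
        s = sum((x[j] - mean) * cmath.exp(-2j * math.pi * k * j / n)
                for j in range(n))
        power.append(abs(s) ** 2 / n)
    return power

# a pure oscillation at 3 cycles per record length peaks at k = 3
signal = [math.cos(2 * math.pi * 3 * j / 32) for j in range(32)]
psd = periodogram(signal)
```

Fitting a power law to such a periodogram (in log-log space) gives the PSD power-law indices the abstract compares between FSRQs and BL Lac objects.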
'Constraint consistency' at all orders in cosmological perturbation theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nandi, Debottam; Shankaranarayanan, S., E-mail: debottam@iisertvm.ac.in, E-mail: shanki@iisertvm.ac.in
2015-08-01
We study the equivalence of two approaches to cosmological perturbation theory at all orders (order-by-order Einstein's equations and the reduced action) for different models of inflation. We point out a crucial consistency check, which we refer to as the 'constraint consistency' condition, that needs to be satisfied in order for the two approaches to lead to an identical single-variable equation of motion. The method we propose here is quick and efficient for checking the consistency of any model, including modified gravity models. Our analysis points out an important feature that is crucial for inflationary model building, i.e., all 'constraint'-inconsistent models have higher-order Ostrogradsky's instabilities, but the reverse is not true. In other words, a model can have a constrained Lapse function and Shift vector and still suffer from Ostrogradsky's instabilities. We also obtain the single-variable equation for a non-canonical scalar field in the limit of power-law inflation for the second-order perturbed variables.
A diagnostic model for chronic hypersensitivity pneumonitis
Johannson, Kerri A; Elicker, Brett M; Vittinghoff, Eric; Assayag, Deborah; de Boer, Kaïssa; Golden, Jeffrey A; Jones, Kirk D; King, Talmadge E; Koth, Laura L; Lee, Joyce S; Ley, Brett; Wolters, Paul J; Collard, Harold R
2017-01-01
The objective of this study was to develop a diagnostic model that allows for a highly specific diagnosis of chronic hypersensitivity pneumonitis using clinical and radiological variables alone. Chronic hypersensitivity pneumonitis and other interstitial lung disease cases were retrospectively identified from a longitudinal database. High-resolution CT scans were blindly scored for radiographic features (eg, ground-glass opacity, mosaic perfusion) as well as the radiologist’s diagnostic impression. Candidate models were developed then evaluated using clinical and radiographic variables and assessed by the cross-validated C-statistic. Forty-four chronic hypersensitivity pneumonitis and eighty other interstitial lung disease cases were identified. Two models were selected based on their statistical performance, clinical applicability and face validity. Key model variables included age, down feather and/or bird exposure, radiographic presence of ground-glass opacity and mosaic perfusion and moderate or high confidence in the radiographic impression of chronic hypersensitivity pneumonitis. Models were internally validated with good performance, and cut-off values were established that resulted in high specificity for a diagnosis of chronic hypersensitivity pneumonitis. PMID:27245779
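The abstract above reports cut-off values chosen so the model achieves high specificity. The underlying calculation (scan candidate cutoffs on the model's risk scores and keep the smallest one meeting a specificity target) can be sketched as follows; function names, the score scale, and the target value are illustrative assumptions:

```python
def specificity(scores, labels, cutoff):
    """Fraction of true negatives (label 0) scored below the cutoff."""
    negatives = [s for s, l in zip(scores, labels) if l == 0]
    return sum(1 for s in negatives if s < cutoff) / len(negatives)

def cutoff_for_specificity(scores, labels, target=0.90):
    """Smallest observed score usable as a cutoff that meets the
    target specificity; None if no candidate reaches it."""
    for c in sorted(set(scores)):
        if specificity(scores, labels, c) >= target:
            return c
    return None

# hypothetical model probabilities and disease status (1 = chronic HP)
scores = [0.1, 0.2, 0.3, 0.8, 0.9]
labels = [0, 0, 0, 1, 1]
chosen = cutoff_for_specificity(scores, labels, target=0.90)
```

In practice the cutoff would be selected on cross-validated predictions, consistent with the cross-validated C-statistic the study reports.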
Impact of Forecast and Model Error Correlations In 4dvar Data Assimilation
NASA Astrophysics Data System (ADS)
Zupanski, M.; Zupanski, D.; Vukicevic, T.; Greenwald, T.; Eis, K.; Vonder Haar, T.
A weak-constraint 4DVAR data assimilation system has been developed at the Cooperative Institute for Research in the Atmosphere (CIRA), Colorado State University. It is based on the NCEP ETA 4DVAR system, and it is fully parallel (MPI coding). CIRA's 4DVAR system is aimed at satellite data assimilation research, with current focus on assimilation of cloudy radiances and microwave satellite measurements. The most important improvement over the previous 4DVAR system is the degree of generality introduced into the new algorithm, both for applications with different NWP models (e.g., RAMS, WRF, ETA, etc.) and for the choice of control variable. In current applications, the non-hydrostatic RAMS model and its adjoint are used, including all microphysical processes. The control variable includes potential temperature, velocity potential and stream function, vertical velocity, and seven mixing ratios with respect to all water phases. Since the statistics of the microphysical components of the control variable are not well known, special attention will be paid to the impact of the forecast and model (prior) error correlations on the 4DVAR analysis. In particular, the sensitivity of the analysis with respect to decorrelation length will be examined. The prior error covariances are modelled using the compactly-supported, space-limited correlations developed at NASA DAO.
Norris, Peter M; da Silva, Arlindo M
2016-07-01
A method is presented to constrain a statistical model of sub-gridcolumn moisture variability using high-resolution satellite cloud data. The method can be used for large-scale model parameter estimation or cloud data assimilation. The gridcolumn model includes assumed probability density function (PDF) intra-layer horizontal variability and a copula-based inter-layer correlation model. The observables used in the current study are Moderate Resolution Imaging Spectroradiometer (MODIS) cloud-top pressure, brightness temperature and cloud optical thickness, but the method should be extensible to direct cloudy radiance assimilation for a small number of channels. The algorithm is a form of Bayesian inference with a Markov chain Monte Carlo (MCMC) approach to characterizing the posterior distribution. This approach is especially useful in cases where the background state is clear but cloudy observations exist. In traditional linearized data assimilation methods, a subsaturated background cannot produce clouds via any infinitesimal equilibrium perturbation, but the Monte Carlo approach is not gradient-based and allows jumps into regions of non-zero cloud probability. The current study uses a skewed-triangle distribution for layer moisture. The article also includes a discussion of the Metropolis and multiple-try Metropolis versions of MCMC.
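The MCMC machinery mentioned above can be illustrated with a minimal random-walk Metropolis sampler. This sketch shows the basic single-try acceptance rule (the multiple-try variant the article discusses proposes several candidates per step); the target, step size, and seed are illustrative, not from the study:

```python
import math
import random

def metropolis(log_target, x0, steps, step_size=1.0, seed=0):
    """Random-walk Metropolis: propose x + N(0, step_size), accept with
    probability min(1, target(proposal) / target(current))."""
    rng = random.Random(seed)
    x, lp = x0, log_target(x0)
    samples = []
    for _ in range(steps):
        prop = x + rng.gauss(0.0, step_size)
        lp_prop = log_target(prop)
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

# sample a standard normal; the chain starts away from the mode and walks in
samples = metropolis(lambda t: -0.5 * t * t, x0=3.0, steps=5000)
```

Because acceptance depends only on a density ratio, the sampler can jump into regions of non-zero cloud probability even when the starting (background) state would give a gradient-based method nothing to work with, which is the point made in the abstract.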
Lalande, Laure; Bourguignon, Laurent; Carlier, Chloé; Ducher, Michel
2013-06-01
Falls in geriatric patients are associated with substantial morbidity, mortality, and high healthcare costs. Because of the large number of variables related to the risk of falling, identifying patients at risk is a difficult challenge. The aim of this work was to validate a tool that detects patients at high risk of falling using only bibliographic knowledge. Thirty articles corresponding to 160 studies were used to model fall risk. A retrospective case-control cohort including 288 patients (88 ± 7 years) and a prospective cohort including 106 patients (89 ± 6 years) from two geriatric hospitals were used to validate the performance of our model. We identified 26 variables associated with an increased risk of falling; these variables were split into illnesses, medications, and environment. The combination of the three associated scores gives a global fall score. The sensitivity and specificity were 31.4% and 81.6% in the retrospective cohort and 38.5% and 90% in the prospective cohort, respectively. The performance of the model is similar to results observed with existing prediction tools that rely on model adjustment to data from numerous cohort studies. This work demonstrates that knowledge from the literature can be synthesized with Bayesian networks.
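The reported sensitivity and specificity follow from a standard confusion-matrix calculation on predicted versus observed fall status. A minimal sketch (variable names and the toy data are illustrative, not the study's cohorts):

```python
def sensitivity_specificity(predicted, actual):
    """predicted/actual are 1 for 'faller', 0 for 'non-faller'.
    Returns (sensitivity, specificity)."""
    tp = sum(1 for p, a in zip(predicted, actual) if p == 1 and a == 1)
    tn = sum(1 for p, a in zip(predicted, actual) if p == 0 and a == 0)
    pos = sum(actual)
    neg = len(actual) - pos
    return tp / pos, tn / neg

# toy validation set: 2 true fallers, 3 non-fallers
sens, spec = sensitivity_specificity([1, 0, 1, 0, 0], [1, 1, 0, 0, 0])
```

The asymmetry reported in the abstract (low sensitivity, high specificity) is typical of screening scores thresholded conservatively.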
Norris, Peter M.; da Silva, Arlindo M.
2018-01-01
A method is presented to constrain a statistical model of sub-gridcolumn moisture variability using high-resolution satellite cloud data. The method can be used for large-scale model parameter estimation or cloud data assimilation. The gridcolumn model includes assumed probability density function (PDF) intra-layer horizontal variability and a copula-based inter-layer correlation model. The observables used in the current study are Moderate Resolution Imaging Spectroradiometer (MODIS) cloud-top pressure, brightness temperature and cloud optical thickness, but the method should be extensible to direct cloudy radiance assimilation for a small number of channels. The algorithm is a form of Bayesian inference with a Markov chain Monte Carlo (MCMC) approach to characterizing the posterior distribution. This approach is especially useful in cases where the background state is clear but cloudy observations exist. In traditional linearized data assimilation methods, a subsaturated background cannot produce clouds via any infinitesimal equilibrium perturbation, but the Monte Carlo approach is not gradient-based and allows jumps into regions of non-zero cloud probability. The current study uses a skewed-triangle distribution for layer moisture. The article also includes a discussion of the Metropolis and multiple-try Metropolis versions of MCMC. PMID:29618847
Comparing mechanistic and empirical approaches to modeling the thermal niche of almond
NASA Astrophysics Data System (ADS)
Parker, Lauren E.; Abatzoglou, John T.
2017-09-01
Delineating locations that are thermally viable for cultivating high-value crops can help to guide land use planning, agronomics, and water management. Three modeling approaches were used to identify the potential distribution and key thermal constraints on almond cultivation across the southwestern United States (US), including two empirical species distribution models (SDMs)—one using commonly used bioclimatic variables (traditional SDM) and the other using more physiologically relevant climate variables (nontraditional SDM)—and a mechanistic model (MM) developed using published thermal limitations from field studies. While models showed comparable results over the majority of the domain, including over existing croplands with high almond density, the MM suggested the greatest potential for the geographic expansion of almond cultivation, with frost susceptibility and insufficient heat accumulation being the primary thermal constraints in the southwestern US. The traditional SDM over-predicted almond suitability in locations shown by the MM to be limited by frost, whereas the nontraditional SDM showed greater agreement with the MM in these locations, indicating that incorporating physiologically relevant variables in SDMs can improve predictions. Finally, opportunities for geographic expansion of almond cultivation under current climatic conditions in the region may be limited, suggesting that increasing production may rely on agronomical advances and densifying current almond plantations in existing locations.
Modeling, Simulation, and Forecasting of Subseasonal Variability
NASA Technical Reports Server (NTRS)
Waliser, Duane; Schubert, Siegfried; Kumar, Arun; Weickmann, Klaus; Dole, Randall
2003-01-01
A planning workshop on "Modeling, Simulation and Forecasting of Subseasonal Variability" was held in June 2003. This workshop was the first of a number of meetings planned to follow the NASA-sponsored workshop entitled "Prospects For Improved Forecasts Of Weather And Short-Term Climate Variability On Sub-Seasonal Time Scales" that was held April 2002. The 2002 workshop highlighted a number of key sources of unrealized predictability on subseasonal time scales including tropical heating, soil wetness, the Madden Julian Oscillation (MJO) [a.k.a. Intraseasonal Oscillation (ISO)], the Arctic Oscillation (AO) and the Pacific/North American (PNA) pattern. The overarching objective of the 2003 follow-up workshop was to proceed with a number of recommendations made from the 2002 workshop, as well as to set an agenda and collate efforts in the areas of modeling, simulation and forecasting intraseasonal and short-term climate variability. More specifically, the aims of the 2003 workshop were to: 1) develop a baseline of the "state of the art" in subseasonal prediction capabilities, 2) implement a program to carry out experimental subseasonal forecasts, and 3) develop strategies for tapping the above sources of predictability by focusing research, model development, and the development/acquisition of new observations on the subseasonal problem. The workshop was held over two days and was attended by over 80 scientists, modelers, forecasters and agency personnel. The agenda of the workshop focused on issues related to the MJO and tropical-extratropical interactions as they relate to the subseasonal simulation and prediction problem.
This included the development of plans for a coordinated set of GCM hindcast experiments to assess current model subseasonal prediction capabilities and shortcomings, an emphasis on developing a strategy to rectify shortcomings associated with tropical intraseasonal variability, namely diabatic processes, and continuing the implementation of an experimental forecast and model development program that focuses on one of the key sources of untapped predictability, namely the MJO. The tangible outcomes of the meeting included: 1) the development of a recommended framework for a set of multi-year ensembles of 45-day hindcasts to be carried out by a number of GCMs so that they can be analyzed in regards to their representations of subseasonal variability, predictability and forecast skill, 2) an assessment of the present status of GCM representations of the MJO and recommendations for future steps to take in order to remedy the remaining shortcomings in these representations, and 3) a final implementation plan for a multi-institute/multi-nation Experimental MJO Prediction Program.
A biodynamic feedthrough model based on neuromuscular principles.
Venrooij, Joost; Abbink, David A; Mulder, Mark; van Paassen, Marinus M; Mulder, Max; van der Helm, Frans C T; Bulthoff, Heinrich H
2014-07-01
A biodynamic feedthrough (BDFT) model is proposed that describes how vehicle accelerations feed through the human body, causing involuntary limb motions and thus involuntary control inputs. BDFT dynamics strongly depend on limb dynamics, which can vary between persons (between-subject variability) but also within one person over time, e.g., due to the control task performed (within-subject variability). The proposed BDFT model is based on physical neuromuscular principles and is derived from an established admittance model describing limb dynamics, which was extended to include control device dynamics and account for acceleration effects. The resulting BDFT model serves primarily to increase understanding of the relationship between neuromuscular admittance and biodynamic feedthrough. An added advantage of the proposed model is that its parameters can be estimated using a two-stage approach, making the parameter estimation more robust, as the procedure is largely based on the well-documented procedure required for the admittance model. To estimate the parameter values of the BDFT model, data are used from an experiment in which both neuromuscular admittance and biodynamic feedthrough were measured. The quality of the BDFT model is evaluated in the frequency and time domains. Results provide strong evidence that the BDFT model and the method of parameter estimation put forward in this paper allow for accurate BDFT modeling across different subjects (accounting for between-subject variability) and across control tasks (accounting for within-subject variability).
Climate models predict increasing temperature variability in poor countries.
Bathiany, Sebastian; Dakos, Vasilis; Scheffer, Marten; Lenton, Timothy M
2018-05-01
Extreme events such as heat waves are among the most challenging aspects of climate change for societies. We show that climate models consistently project increases in temperature variability in tropical countries over the coming decades, with the Amazon as a particular hotspot of concern. During the season with maximum insolation, temperature variability increases by ~15% per degree of global warming in Amazonia and Southern Africa and by up to 10% °C−1 in the Sahel, India, and Southeast Asia. Mechanisms include drying soils and shifts in atmospheric structure. Outside the tropics, temperature variability is projected to decrease on average because of a reduced meridional temperature gradient and sea-ice loss. The countries that have contributed least to climate change, and are most vulnerable to extreme events, are projected to experience the strongest increase in variability. These changes would therefore amplify the inequality associated with the impacts of a changing climate.
Climate models predict increasing temperature variability in poor countries
Dakos, Vasilis; Scheffer, Marten
2018-01-01
Extreme events such as heat waves are among the most challenging aspects of climate change for societies. We show that climate models consistently project increases in temperature variability in tropical countries over the coming decades, with the Amazon as a particular hotspot of concern. During the season with maximum insolation, temperature variability increases by ~15% per degree of global warming in Amazonia and Southern Africa and by up to 10%°C−1 in the Sahel, India, and Southeast Asia. Mechanisms include drying soils and shifts in atmospheric structure. Outside the tropics, temperature variability is projected to decrease on average because of a reduced meridional temperature gradient and sea-ice loss. The countries that have contributed least to climate change, and are most vulnerable to extreme events, are projected to experience the strongest increase in variability. These changes would therefore amplify the inequality associated with the impacts of a changing climate. PMID:29732409
Reporting the accuracy of biochemical measurements for epidemiologic and nutrition studies.
McShane, L M; Clark, L C; Combs, G F; Turnbull, B W
1991-06-01
Procedures for reporting and monitoring the accuracy of biochemical measurements are presented. They are proposed as standard reporting procedures for laboratory assays for epidemiologic and clinical-nutrition studies. The recommended procedures require identification and estimation of all major sources of variability and explanations of the laboratory quality control procedures employed. Variance-components techniques are used to model the total variability and calculate a maximum percent error that provides an easily understandable measure of laboratory precision accounting for all sources of variability. This avoids ambiguities encountered when reporting an SD that may take into account only a few of the potential sources of variability. Other proposed uses of the total-variability model include estimating the precision of laboratory methods for various replication schemes and developing effective quality-control checking schemes. These procedures are demonstrated with an example analysis of alpha-tocopherol in human plasma using high-performance liquid chromatography.
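The variance-components idea above rests on the fact that independent sources of variability add on the variance scale. A minimal sketch follows; note that the precise definition of "maximum percent error" in the paper is not given here, so the z-multiple-of-total-SD form below is an assumed, illustrative definition:

```python
def total_sd(variance_components):
    """Independent variance components add; total SD is the root of the sum."""
    return sum(variance_components) ** 0.5

def max_percent_error(variance_components, assay_mean, z=1.96):
    """Illustrative 'maximum percent error': a z-multiple of the total SD
    expressed as a percentage of the assay mean (assumed definition)."""
    return 100.0 * z * total_sd(variance_components) / assay_mean

# hypothetical between-run and within-run variance components for an assay
mpe = max_percent_error([4.0, 5.0], assay_mean=100.0)
```

Reporting each component separately, as the paper recommends, lets a reader recompute precision for any replication scheme (e.g., averaging duplicates halves the within-run component).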
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Yuping; Zheng, Qipeng P.; Wang, Jianhui
2014-11-01
This paper presents a two-stage stochastic unit commitment (UC) model, which integrates non-generation resources such as demand response (DR) and energy storage (ES) while including risk constraints to balance cost and system reliability under the fluctuation of variable generation such as wind and solar power. The paper uses conditional value-at-risk (CVaR) measures to model risks associated with the decisions in a stochastic environment. In contrast to chance-constrained models requiring extra binary variables, risk constraints based on CVaR involve only linear constraints and continuous variables, making them more computationally attractive. The proposed models with risk constraints are able to avoid over-conservative solutions but still ensure system reliability, represented by loss of loads. Numerical experiments are conducted to study the effects of non-generation resources on generator schedules and the difference in total expected generation costs with risk consideration. Sensitivity analysis based on reliability parameters is also performed to test the decision preferences of confidence levels and load-shedding loss allowances on generation cost reduction.
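For a discrete scenario set, the CVaR measure referred to above is the expected loss in the worst (1 − α) tail; inside an optimizer it becomes the linear constraints η + (1/(1−α)) Σ p_s [loss_s − η]₊. A sketch that evaluates the same quantity directly (outside any optimizer; names are illustrative):

```python
def cvar(losses, probs, alpha=0.90):
    """Conditional value-at-risk of a discrete loss distribution:
    the probability-weighted average loss in the worst (1 - alpha) tail."""
    pairs = sorted(zip(losses, probs))
    # value-at-risk: the alpha-quantile of the sorted losses
    cum, var = 0.0, pairs[-1][0]
    for loss, p in pairs:
        cum += p
        if cum >= alpha:
            var = loss
            break
    tail = sum(p * max(loss - var, 0.0) for loss, p in pairs)
    return var + tail / (1.0 - alpha)

# two scenarios: normal operation (loss 0) and a 10% chance of load shedding
risk = cvar([0.0, 10.0], [0.9, 0.1], alpha=0.90)
```

Because the expression is piecewise-linear in η and the losses, it drops into a mixed-integer UC model without adding binary variables, which is the computational advantage the abstract highlights.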
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stetzel, KD; Aldrich, LL; Trimboli, MS
2015-03-15
This paper addresses the problem of estimating the present value of electrochemical internal variables in a lithium-ion cell in real time, using readily available measurements of cell voltage, current, and temperature. The variables that can be estimated include any desired set of reaction fluxes and solid and electrolyte potentials and concentrations at any set of one-dimensional spatial locations, in addition to more standard quantities such as state of charge. The method uses an extended Kalman filter along with a one-dimensional physics-based reduced-order model of cell dynamics. Simulations show excellent and robust predictions having dependable error bounds for most internal variables. (C) 2014 Elsevier B.V. All rights reserved.
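The estimator above wraps an extended Kalman filter around a physics-based reduced-order cell model. The predict/update cycle it relies on can be sketched in its simplest scalar, linear form (all names, dynamics, and noise values are illustrative placeholders for the paper's nonlinear battery model):

```python
def kalman_step(x, P, u, z, a=1.0, b=0.0, h=1.0, q=1e-5, r=1e-2):
    """One predict/update cycle of a scalar Kalman filter.
    x: state estimate (e.g., state of charge), P: its variance,
    u: input (e.g., applied current), z: measurement (e.g., a voltage
    surrogate), a/b/h: state, input, and output gains, q/r: noise variances."""
    # predict through the state model
    x_pred = a * x + b * u
    P_pred = a * P * a + q
    # update against the measurement
    K = P_pred * h / (h * P_pred * h + r)
    x_new = x_pred + K * (z - h * x_pred)
    P_new = (1.0 - K * h) * P_pred
    return x_new, P_new
```

In the extended version, a and h become Jacobians of the reduced-order model evaluated at the current estimate, and P supplies the "dependable error bounds" the abstract mentions.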
Latent variable model for suicide risk in relation to social capital and socio-economic status.
Congdon, Peter
2012-08-01
There is little evidence on the association between suicide outcomes (ideation, attempts, self-harm) and social capital. This paper investigates such associations using a structural equation model based on health survey data, and allowing for both individual and contextual risk factors. Social capital and other major risk factors for suicide, namely socioeconomic status and social isolation, are modelled as latent variables that are proxied (or measured) by observed indicators or question responses for survey subjects. These latent scales predict suicide risk in the structural component of the model. Also relevant to explaining suicide risk are contextual variables, such as area deprivation and region of residence, as well as the subject's demographic status. The analysis is based on the 2007 Adult Psychiatric Morbidity Survey and includes 7,403 English subjects. A Bayesian modelling strategy is used. Models with and without social capital as a predictor of suicide risk are applied. A benefit to statistical fit is demonstrated when social capital is added as a predictor. Social capital varies significantly by geographic context variables (neighbourhood deprivation, region), and this impacts on the direct effects of these contextual variables on suicide risk. In particular, area deprivation is not confirmed as a distinct significant influence. The model develops a suicidality risk score incorporating social capital, and the success of this risk score in predicting actual suicide events is demonstrated. Social capital as reflected in neighbourhood perceptions is a significant factor affecting risks of different types of self-harm and may mediate the effects of other contextual variables such as area deprivation.
NASA Astrophysics Data System (ADS)
You, Bei; Bursa, Michal; Życki, Piotr T.
2018-05-01
We develop a Monte Carlo code to compute the Compton-scattered X-ray flux arising from a hot inner flow that undergoes Lense–Thirring precession. The hot flow intercepts seed photons from an outer truncated thin disk. A fraction of the Comptonized photons will illuminate the disk, and the reflected/reprocessed photons will contribute to the observed spectrum. The total spectrum, including disk thermal emission, hot flow Comptonization, and disk reflection, is modeled within the framework of general relativity, taking light bending and gravitational redshift into account. The simulations are performed in the context of the Lense–Thirring precession model for the low-frequency quasi-periodic oscillations, so the inner flow is assumed to precess, leading to periodic modulation of the emitted radiation. In this work, we concentrate on the energy-dependent X-ray variability of the model and, in particular, on the evolution of the variability during the spectral transition from hard to soft state, which is implemented by the decrease of the truncation radius of the outer disk toward the innermost stable circular orbit. In the hard state, where the Comptonizing flow is geometrically thick, the Comptonization is weakly variable, with a fractional variability amplitude of ≤10%. In the soft state, where the Comptonizing flow is cooled down and thus becomes geometrically thin, the Comptonization is highly variable, with a fractional variability that increases with photon energy. The fractional variability of the reflection increases with energy, and the reflection emission for low spin is counterintuitively more variable than that for high spin.
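The fractional variability amplitude quoted above (≤10% in the hard state) is simply the root-mean-square of the light curve normalized by its mean, which is straightforward to compute. The sinusoidal "light curve" below is a toy stand-in for the periodic modulation produced by precession; the mean level and amplitude are illustrative.

```python
import math

# Fractional variability amplitude F_var = sigma / mean of a light curve.
def fractional_rms(flux):
    """Root-mean-square variability normalised by the mean flux."""
    n = len(flux)
    mean = sum(flux) / n
    var = sum((f - mean) ** 2 for f in flux) / n
    return math.sqrt(var) / mean

# Toy periodic modulation: mean level 100, 5% peak amplitude, 4 full cycles.
curve = [100.0 + 5.0 * math.sin(2 * math.pi * i / 50) for i in range(200)]
fvar = fractional_rms(curve)   # for a sinusoid, amplitude/(mean*sqrt(2))
```

For a pure sinusoid the rms is the peak amplitude divided by the square root of two, so this 5% modulation yields a fractional variability of about 3.5%.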
Continuous-variable gate decomposition for the Bose-Hubbard model
NASA Astrophysics Data System (ADS)
Kalajdzievski, Timjan; Weedbrook, Christian; Rebentrost, Patrick
2018-06-01
In this work, we decompose the time evolution of the Bose-Hubbard model into a sequence of logic gates that can be implemented on a continuous-variable photonic quantum computer. We examine the structure of the circuit that represents this time evolution for one-dimensional and two-dimensional lattices. The elementary gates needed for the implementation are counted as a function of lattice size. We also include the contribution of the leading dipole interaction term which may be added to the Hamiltonian and its corresponding circuit.
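The counting exercise can be sketched for a first-order Trotterization of a one-dimensional chain. The per-term gate costs below are placeholders (one two-mode gate per hopping term, one single-mode Kerr-like gate per on-site interaction term), not the paper's actual continuous-variable decomposition; the point is only that the count scales linearly with lattice size and with the number of Trotter steps.

```python
# Rough elementary-gate count for first-order Trotterized Bose-Hubbard
# evolution on a 1D chain. Per-term costs are hypothetical placeholders:
# we charge one two-mode gate per hopping term and one single-mode gate
# per on-site U n(n-1) interaction term.

def gates_per_trotter_step_1d(n_sites):
    hopping_terms = n_sites - 1          # nearest-neighbour pairs on a chain
    onsite_terms = n_sites               # one interaction term per site
    return hopping_terms + onsite_terms

def total_gates(n_sites, t, dt):
    steps = int(round(t / dt))           # number of Trotter steps e^{-iH dt}
    return steps * gates_per_trotter_step_1d(n_sites)

# Gate count grows linearly with lattice size and with evolution time:
counts = [gates_per_trotter_step_1d(n) for n in (2, 4, 8)]   # [3, 7, 15]
```

A two-dimensional lattice changes only the hopping-term count (one per lattice edge), so the same scaffolding applies with a different `hopping_terms` formula.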
Application of classification-tree methods to identify nitrate sources in ground water
Spruill, T.B.; Showers, W.J.; Howe, S.S.
2002-01-01
A study was conducted to determine if nitrate sources in ground water (fertilizer on crops, fertilizer on golf courses, irrigation spray from hog (Sus scrofa) wastes, and leachate from poultry litter and septic systems) could be classified with 80% or greater success. Two statistical classification-tree models were devised from 48 water samples containing nitrate from five source categories. Model 1 was constructed by evaluating 32 variables and selecting four primary predictor variables (δ15N, nitrate to ammonia ratio, sodium to potassium ratio, and zinc) to identify nitrate sources. A δ15N value of nitrate plus potassium 18.2 indicated inorganic or soil organic N. A nitrate to ammonia ratio 575 indicated nitrate from golf courses. A sodium to potassium ratio 3.2 indicated spray or poultry wastes. A value for zinc 2.8 indicated poultry wastes. Model 2 was devised by using all variables except δ15N. This model also included four variables (sodium plus potassium, nitrate to ammonia ratio, calcium to magnesium ratio, and sodium to potassium ratio) to distinguish categories. Both models were able to distinguish all five source categories with better than 80% overall success and with 71 to 100% success in individual categories using the learning samples. Seventeen water samples that were not used in model development were tested using Model 2 for three categories, and all were correctly classified. Classification-tree models show great potential in identifying sources of contamination and variables important in the source-identification process.
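A fitted classification tree of this kind amounts to a cascade of threshold tests, which is easy to express directly. In the sketch below the split order follows Model 1 (δ15N first, then the nitrate:ammonia, sodium:potassium, and zinc splits), but the threshold directions are guesses, since the comparison operators did not survive in the abstract text; treat it as a structural illustration only.

```python
# Decision cascade in the spirit of the abstract's Model 1. Split order
# follows the abstract; thresholds and their directions are HYPOTHETICAL
# placeholders, not the study's fitted rules.

def classify_nitrate_source(d15n, no3_nh3, na_k, zinc):
    if d15n > 18.2:                 # hypothetical split on delta-15N
        return "inorganic or soil organic N"
    if no3_nh3 > 575:               # hypothetical split on nitrate:ammonia
        return "golf course fertilizer"
    if na_k > 3.2:                  # hypothetical split on sodium:potassium
        return "swine spray or poultry waste"
    if zinc > 2.8:                  # hypothetical split on zinc
        return "poultry waste"
    return "crop fertilizer"

label = classify_nitrate_source(d15n=20.0, no3_nh3=10.0, na_k=1.0, zinc=0.5)
```

Each sample falls through at most four tests, which is why such trees are cheap to apply and easy to interpret once fitted.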
Model evaluation using a community benchmarking system for land surface models
NASA Astrophysics Data System (ADS)
Mu, M.; Hoffman, F. M.; Lawrence, D. M.; Riley, W. J.; Keppel-Aleks, G.; Kluzek, E. B.; Koven, C. D.; Randerson, J. T.
2014-12-01
Evaluation of atmosphere, ocean, sea ice, and land surface models is an important step in identifying deficiencies in Earth system models and developing improved estimates of future change. For the land surface and carbon cycle, the design of an open-source system has been an important objective of the International Land Model Benchmarking (ILAMB) project. Here we evaluated CMIP5 and CLM models using a benchmarking system that enables users to specify models, data sets, and scoring systems so that results can be tailored to specific model intercomparison projects. Our scoring system used information from four different aspects of global datasets, including climatological mean spatial patterns, seasonal cycle dynamics, interannual variability, and long-term trends. Variable-to-variable comparisons enable investigation of the mechanistic underpinnings of model behavior, and allow for some control of biases in model drivers. Graphics modules allow users to evaluate model performance at local, regional, and global scales. Use of modular structures makes it relatively easy for users to add new variables, diagnostic metrics, benchmarking datasets, or model simulations. Diagnostic results are automatically organized into HTML files, so users can conveniently share results with colleagues. We used this system to evaluate atmospheric carbon dioxide, burned area, global biomass and soil carbon stocks, net ecosystem exchange, gross primary production, ecosystem respiration, terrestrial water storage, evapotranspiration, and surface radiation from CMIP5 historical and ESM historical simulations. We found that the multi-model mean often performed better than many of the individual models for most variables. We plan to publicly release a stable version of the software during fall of 2014 that has land surface, carbon cycle, hydrology, radiation and energy cycle components.
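A scoring system of the kind described can be sketched as follows. The four aspects match the abstract (mean state, seasonal cycle, interannual variability, long-term trend), but the error-to-score mapping and the example numbers are invented placeholders rather than ILAMB's actual formulation.

```python
import math

# Sketch of a benchmark score blending several aspects of model-data
# agreement. The exponential error-to-score map and the example values
# are illustrative placeholders, not ILAMB's published methodology.

def aspect_score(model_value, obs_value):
    """Map a relative error onto (0, 1]; 1 means perfect agreement."""
    rel_err = abs(model_value - obs_value) / (abs(obs_value) + 1e-12)
    return math.exp(-rel_err)

def overall_score(model, obs, weights=None):
    aspects = ["mean_state", "seasonal_cycle", "interannual_var", "trend"]
    weights = weights or {a: 1.0 for a in aspects}
    total_w = sum(weights.values())
    return sum(weights[a] * aspect_score(model[a], obs[a])
               for a in aspects) / total_w

# Hypothetical summary statistics for one variable (e.g. GPP):
model = {"mean_state": 2.1, "seasonal_cycle": 0.9,
         "interannual_var": 0.30, "trend": 0.012}
obs   = {"mean_state": 2.0, "seasonal_cycle": 1.0,
         "interannual_var": 0.25, "trend": 0.010}
score = overall_score(model, obs)   # a single number between 0 and 1
```

Because users can swap in their own `weights` and benchmark `obs` dictionaries, the same scaffolding supports the kind of tailoring to specific intercomparison projects that the abstract describes.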
NASA Astrophysics Data System (ADS)
Ichii, K.; Suzuki, T.; Kato, T.; Ito, A.; Hajima, T.; Ueyama, M.; Sasai, T.; Hirata, R.; Saigusa, N.; Ohtani, Y.; Takagi, K.
2010-07-01
Terrestrial biosphere models show large differences when simulating carbon and water cycles, and reducing these differences is a priority for developing more accurate estimates of the condition of terrestrial ecosystems and future climate change. To reduce uncertainties and improve the understanding of their carbon budgets, we investigated the utility of eddy flux datasets for improving model simulations and reducing variability among multi-model outputs of terrestrial biosphere models in Japan. Using 9 terrestrial biosphere models (Support Vector Machine-based regressions, TOPS, CASA, VISIT, Biome-BGC, DAYCENT, SEIB, LPJ, and TRIFFID), we conducted two simulations: (1) point simulations at four eddy flux sites in Japan and (2) spatial simulations for Japan with a default model (based on original settings) and a modified model (based on model parameter tuning using eddy flux data). Generally, models using default settings showed large deviations of model outputs from observations, with large model-by-model variability. However, after we calibrated the model parameters using eddy flux data (GPP, RE and NEP), most models successfully simulated seasonal variations in the carbon cycle, with less variability among models. We also found that interannual variations in the carbon cycle are mostly consistent among models and observations. Spatial analysis also showed a large reduction in the variability among model outputs. This study demonstrated that careful validation and calibration of models with available eddy flux data reduced model-by-model differences. Nevertheless, site history, analysis of changes in model structure, and a more objective model-calibration procedure should be included in further analyses.
Choi, Jean H; Chung, Kyong-Mee; Park, Keeho
2013-10-01
The present study aimed to examine whether demographic as well as psychosocial variables related to the five stages of change of the Transtheoretical Model can predict non-clinical adults' cancer-preventive and health-promoting behaviors. This study focused specifically on cancer, one of the major chronic diseases and a serious threat to national health. A total of 1530 adults participated in the study and completed questionnaires. Collected data were analyzed using multinomial logistic regression. The significant predictors of later stages varied among the types of health-promoting behaviors. Certain cancer-preventive health-promoting behaviors, such as a well-balanced diet and exercise, were significantly associated with psychosocial variables including cancer prevention-related self-efficacy, personality traits, psychosocial stress, and social support. On the other hand, smoking cessation and moderation in or abstinence from drinking were more likely to be predicted by demographic variables including sex and age. The present study found that, in addition to self-efficacy (a relatively well-studied psychological variable), other personality traits and psychological factors including introversion, neuroticism, psychosocial stress, and social support also significantly predicted later stages of change with respect to cancer-preventive health-promoting behaviors. The implications of this study are also discussed. Copyright © 2013 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Tengattini, Alessandro; Das, Arghya; Nguyen, Giang D.; Viggiani, Gioacchino; Hall, Stephen A.; Einav, Itai
2014-10-01
This is the first of two papers introducing a novel thermomechanical continuum constitutive model for cemented granular materials. Here, we establish the theoretical foundations of the model, and highlight its novelties. At the limit of no cement, the model is fully consistent with the original Breakage Mechanics model. An essential ingredient of the model is the use of measurable and micro-mechanics based internal variables, describing the evolution of the dominant inelastic processes. This imposes a link between the macroscopic mechanical behavior and the statistically averaged evolution of the microstructure. As a consequence this model requires only a few physically identifiable parameters, including those of the original breakage model and new ones describing the cement: its volume fraction, its critical damage energy and bulk stiffness, and the cohesion.
Sources and Impacts of Modeled and Observed Low-Frequency Climate Variability
NASA Astrophysics Data System (ADS)
Parsons, Luke Alexander
Here we analyze climate variability using instrumental, paleoclimate (proxy), and the latest climate model data to understand more about the sources and impacts of low-frequency climate variability. Understanding the drivers of climate variability at interannual to century timescales is important for studies of climate change, including analyses of detection and attribution of climate change impacts. Additionally, correctly modeling the sources and impacts of variability is key to the simulation of abrupt change (Alley et al., 2003) and extended drought (Seager et al., 2005; Pelletier and Turcotte, 1997; Ault et al., 2014). In Appendix A, we employ an Earth system model (GFDL-ESM2M) simulation to study the impacts of a weakening of the Atlantic meridional overturning circulation (AMOC) on the climate of the American Tropics. The AMOC drives some degree of local and global internal low-frequency climate variability (Manabe and Stouffer, 1995; Thornalley et al., 2009) and helps control the position of the tropical rainfall belt (Zhang and Delworth, 2005). We find that a major weakening of the AMOC can cause large-scale temperature, precipitation, and carbon storage changes in Central and South America. Our results suggest that possible future changes in AMOC strength alone will not be sufficient to drive a large-scale dieback of the Amazonian forest, but this key natural ecosystem is sensitive to dry-season length and timing of rainfall (Parsons et al., 2014). In Appendix B, we compare a paleoclimate record of precipitation variability in the Peruvian Amazon to climate model precipitation variability. The paleoclimate (Lake Limon) record indicates that precipitation variability in western Amazonia is 'red' (i.e., increasing variability with timescale). By contrast, most state-of-the-art climate models indicate precipitation variability in this region is nearly 'white' (i.e., equal variability across timescales). 
This paleo-model disagreement in the overall structure of the variance spectrum has important consequences for the probability of multi-year drought. Our lake record suggests there is a significant background threat of multi-year, and even decade-length, drought in western Amazonia, whereas climate model simulations indicate most droughts likely last no longer than one to three years. These findings suggest climate models may underestimate the future risk of extended drought in this important region. In Appendix C, we expand our analysis of climate variability beyond South America. We use observations, well-constrained tropical paleoclimate, and Earth system model data to examine the overall shape of the climate spectrum across interannual to century frequencies. We find a general agreement among observations and models that temperature variability increases with timescale across most of the globe outside the tropics. However, as compared to paleoclimate records, climate models generate too little low-frequency variability in the tropics (e.g., Laepple and Huybers, 2014). When we compare the shape of the simulated climate spectrum to the spectrum of a simple autoregressive process, we find much of the modeled surface temperature variability in the tropics could be explained by ocean smoothing of weather noise. Importantly, modeled precipitation tends to be similar to white noise across much of the globe. By contrast, paleoclimate records of various types from around the globe indicate that both temperature and precipitation variability should experience much more low-frequency variability than a simple autoregressive or white-noise process. In summary, state-of-the-art climate models generate some degree of dynamically driven low-frequency climate variability, especially at high latitudes. However, the latest climate models, observations, and paleoclimate data provide us with drastically different pictures of the background climate system and its associated risks. 
This research has important consequences for improving how we simulate climate extremes as we enter a warmer (and often drier) world in the coming centuries; if climate models underestimate low-frequency variability, we will underestimate the risk of future abrupt change and extreme events, such as megadroughts.
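The "red" versus "white" distinction that runs through this abstract can be made concrete with a toy comparison. The sketch below (all parameters illustrative, not fitted to any proxy record) contrasts an AR(1) "red" process with white noise of matched single-step variance: averaged over ten-step blocks, the red process retains far more variance, i.e., more power at low frequencies, which is exactly why a red climate spectrum implies a higher risk of multi-year drought.

```python
import random
import statistics

# Red (AR(1)) versus white variability with matched marginal variance.
# phi and the block length are illustrative choices only.

random.seed(42)

def ar1_series(n, phi):
    """AR(1) with unit innovations, rescaled to ~unit marginal variance."""
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + random.gauss(0.0, 1.0)
        out.append(x * (1 - phi ** 2) ** 0.5)
    return out

def variance_of_block_means(series, block):
    """Variance of non-overlapping block averages (low-frequency power)."""
    means = [statistics.fmean(series[i:i + block])
             for i in range(0, len(series) - block + 1, block)]
    return statistics.pvariance(means)

n, block = 20000, 10                      # e.g. 10-"year" averages
red = ar1_series(n, phi=0.8)
white = [random.gauss(0.0, 1.0) for _ in range(n)]

red_low = variance_of_block_means(red, block)
white_low = variance_of_block_means(white, block)   # ~1/block for white noise
# red_low is several times white_low: persistence inflates decadal variance.
```

If a model produces near-white precipitation while the real system is red, it will understate `red_low`-type quantities, and hence the likelihood of decade-length drought, even when its year-to-year variance is correct.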
A methodology for long range prediction of air transportation
NASA Technical Reports Server (NTRS)
Ayati, M. B.; English, J. M.
1980-01-01
The paper describes a methodology for long-term projection of aircraft fuel requirements. A new conceptualization of the social and economic factors shaping the future aviation industry is presented, which provides an estimate of predicted fuel usage; it includes air traffic forecasts and lead times for producing new engines and aircraft types. An air transportation model is then developed in terms of an abstracted set of variables that represent the entire aircraft industry at a macroscale. The model was evaluated by testing its required output variables against a model based on historical data from past decades.
Starspot detection and properties
NASA Astrophysics Data System (ADS)
Savanov, I. S.
2013-07-01
I review the currently available techniques for starspot detection, including one-dimensional spot modelling of photometric light curves. Special attention is paid to the modelling of photospheric activity based on the high-precision light curves obtained with the space missions MOST, CoRoT, and Kepler. Physical spot parameters (temperature, sizes, and variability time scales, including short-term activity cycles) are discussed.
Diagnostic Studies With GLA Fields
NASA Technical Reports Server (NTRS)
Salstein, David A.
1997-01-01
Assessments of the NASA Goddard Earth Observing System-1 Data Assimilation System (GEOS-1 DAS) regarding heating rates, energetics and angular momentum quantities were made. These diagnostics can be viewed as measures of climate variability. Comparisons with the NOAA/NCEP reanalysis system of momentum and energetics diagnostics are included. Water vapor and angular momentum are diagnosed in many models, including those of NASA, as part of the Atmospheric Model Intercomparison Project. Relevant preprints are included herein.
NASA Astrophysics Data System (ADS)
Yahya, Khairunnisa; Wang, Kai; Campbell, Patrick; Glotfelty, Timothy; He, Jian; Zhang, Yang
2016-02-01
The Weather Research and Forecasting model with Chemistry (WRF/Chem) v3.6.1 with the Carbon Bond 2005 (CB05) gas-phase mechanism is evaluated for its first decadal application during 2001-2010 using the Representative Concentration Pathway 8.5 (RCP 8.5) emissions to assess its capability and appropriateness for long-term climatological simulations. The initial and boundary conditions are downscaled from the modified Community Earth System Model/Community Atmosphere Model (CESM/CAM5) v1.2.2. The meteorological initial and boundary conditions are bias-corrected using the National Centers for Environmental Prediction's Final (FNL) Operational Global Analysis data. Climatological evaluations are carried out for meteorological, chemical, and aerosol-cloud-radiation variables against data from surface networks and satellite retrievals. The model performs very well for the 2 m temperature (T2) for the 10-year period, with only a small cold bias of -0.3 °C. Biases in other meteorological variables, including relative humidity at 2 m, wind speed at 10 m, and precipitation, tend to be site- and season-specific; however, with the exception of T2, consistent annual biases exist for most of the years from 2001 to 2010. Ozone mixing ratios are slightly overpredicted at urban and suburban locations with a normalized mean bias (NMB) of 9.7 % but underpredicted at rural locations with an NMB of -8.8 %. PM2.5 concentrations are moderately overpredicted with an NMB of 23.3 % at rural sites but slightly underpredicted with an NMB of -10.8 % at urban/suburban sites. In general, the model performs relatively well for chemical and meteorological variables, and not as well for aerosol-cloud-radiation variables. Cloud-aerosol variables, including aerosol optical depth, cloud water path, cloud optical thickness, and cloud droplet number concentration, are generally underpredicted on average across the continental US. 
Overpredictions of several cloud variables over the eastern US result in underpredictions of radiation variables (such as net shortwave radiation, GSW, with a mean bias, MB, of -5.7 W m-2) and overpredictions of shortwave and longwave cloud forcing (MBs of ~7 to 8 W m-2), which are important climate variables. While the current performance is deemed to be acceptable, improvements to the bias-correction method for CESM downscaling and the model parameterizations of cloud dynamics and thermodynamics, as well as aerosol-cloud interactions, can potentially improve model performance for long-term climate simulations.
The Influence of Sexual Identity on Higher Education Outcomes
ERIC Educational Resources Information Center
Sorgen, Carl H., IV.
2011-01-01
This research empirically explores how sexual identity influences higher education outcomes for lesbian, gay, bisexual, and queer (LGBQ) college students. A path model was constructed with structural equation modeling using responses from 1,125 non-heterosexual college students. The model includes four psychological variables (level of sexual…
A Structural Equation Model of Expertise in College Physics
ERIC Educational Resources Information Center
Taasoobshirazi, Gita; Carr, Martha
2009-01-01
A model of expertise in physics was tested on a sample of 374 college students in 2 different level physics courses. Structural equation modeling was used to test hypothesized relationships among variables linked to expert performance in physics including strategy use, pictorial representation, categorization skills, and motivation, and these…
A Structural Equation Model of Conceptual Change in Physics
ERIC Educational Resources Information Center
Taasoobshirazi, Gita; Sinatra, Gale M.
2011-01-01
A model of conceptual change in physics was tested on introductory-level, college physics students. Structural equation modeling was used to test hypothesized relationships among variables linked to conceptual change in physics including an approach goal orientation, need for cognition, motivation, and course grade. Conceptual change in physics…
Prediction of BP reactivity to talking using hybrid soft computing approaches.
Kaur, Gurmanik; Arora, Ajat Shatru; Jain, Vijender Kumar
2014-01-01
High blood pressure (BP) is associated with an increased risk of cardiovascular diseases. Therefore, optimal precision in the measurement of BP is appropriate in clinical and research studies. In this work, anthropometric characteristics including age, height, weight, body mass index (BMI), and arm circumference (AC) were used as independent predictor variables for the prediction of BP reactivity to talking. Principal component analysis (PCA) was fused with an artificial neural network (ANN), an adaptive neurofuzzy inference system (ANFIS), and a least square-support vector machine (LS-SVM) model to remove the multicollinearity effect among the anthropometric predictor variables. The statistical tests in terms of the coefficient of determination (R²), root mean square error (RMSE), and mean absolute percentage error (MAPE) revealed that the PCA-based LS-SVM (PCA-LS-SVM) model produced a more efficient prediction of BP reactivity than the other models. This assessment demonstrates the importance and advantages offered by PCA-fused prediction models for the prediction of biological variables.
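The decorrelation step that PCA contributes can be shown on its own (the predictive stage, an ANN/ANFIS/LS-SVM in the abstract, is omitted here). For two standardized predictors the principal components reduce to the normalized sum and difference, which are uncorrelated by construction; the anthropometric values below are synthetic.

```python
import math
import random
import statistics

# PCA as multicollinearity removal for two correlated predictors.
# Synthetic data: weight and BMI are built to be strongly collinear,
# loosely mimicking anthropometric predictors; values are invented.

random.seed(1)

def standardize(xs):
    mu, sd = statistics.fmean(xs), statistics.pstdev(xs)
    return [(x - mu) / sd for x in xs]

def correlation(a, b):
    n = len(a)
    return sum(x * y for x, y in zip(standardize(a), standardize(b))) / n

weight = [random.gauss(70, 10) for _ in range(500)]
bmi = [0.35 * w + random.gauss(0, 1) for w in weight]

# For two standardized variables the principal axes are the normalized
# sum ("shared size" component) and difference (residual contrast).
zw, zb = standardize(weight), standardize(bmi)
pc1 = [(a + b) / math.sqrt(2) for a, b in zip(zw, zb)]
pc2 = [(a - b) / math.sqrt(2) for a, b in zip(zw, zb)]

r_before = correlation(weight, bmi)   # close to 1: strong collinearity
r_after = correlation(pc1, pc2)       # ~0: components are uncorrelated
```

Feeding `pc1`/`pc2` (rather than the raw collinear predictors) into any downstream regressor is the essence of the PCA-fused models the abstract compares.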
Effects of in-sewer processes: a stochastic model approach.
Vollertsen, J; Nielsen, A H; Yang, W; Hvitved-Jacobsen, T
2005-01-01
Transformations of organic matter, nitrogen and sulfur in sewers can be simulated taking into account the relevant transformation and transport processes. One objective of such simulation is the assessment and management of hydrogen sulfide formation and corrosion. Sulfide is formed in the biofilms and sediments of the water phase, but corrosion occurs on the moist surfaces of the sewer gas phase. Consequently, both phases and the transport of volatile substances between these phases must be included. Furthermore, wastewater composition and transformations in sewers are complex and subject to high, natural variability. This paper presents the latest developments of the WATS model concept, allowing integrated aerobic, anoxic and anaerobic simulation of the water phase and of gas phase processes. The resulting model is complex and with high parameter variability. An example applying stochastic modeling shows how this complexity and variability can be taken into account.
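The stochastic-modeling step can be sketched generically. The code below is not the WATS model: the single first-order sulfide-formation expression and every parameter range are invented placeholders, but it shows the pattern of propagating parameter variability through a process model by Monte Carlo sampling, yielding a distribution rather than a point value for the corrosion-relevant outcome.

```python
import random
import statistics

# Monte Carlo propagation of parameter variability through a (heavily
# simplified) sulfide build-up expression. Rate form and parameter ranges
# are hypothetical; WATS resolves many more coupled processes.

random.seed(7)

def sulfide_after(residence_h, rate):
    """Toy zeroth-order sulfide accumulation: gS/m3 after residence time."""
    return rate * residence_h

def monte_carlo(n=5000):
    outcomes = []
    for _ in range(n):
        rate = random.uniform(0.1, 0.5)      # formation rate, gS/m3/h (assumed)
        residence = random.gauss(4.0, 1.0)   # hours in the sewer (assumed)
        outcomes.append(sulfide_after(max(residence, 0.0), rate))
    return outcomes

s = monte_carlo()
mean_s = statistics.fmean(s)          # expected sulfide level
p95 = sorted(s)[int(0.95 * len(s))]   # upper tail relevant for corrosion risk
```

The management value lies in the tail: a 95th-percentile sulfide level, not the mean, is what drives decisions about gas-phase corrosion risk.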
NASA Astrophysics Data System (ADS)
Henry, Christine; Kramb, Victoria; Welter, John T.; Wertz, John N.; Lindgren, Eric A.; Aldrin, John C.; Zainey, David
2018-04-01
Advances in NDE method development are greatly improved through model-guided experimentation. In the case of ultrasonic inspections, models which provide insight into complex mode conversion processes and sound propagation paths are essential for understanding the experimental data and inverting the experimental data into relevant information. However, models must also be verified using experimental data obtained under well-documented and understood conditions. Ideally, researchers would utilize the model simulations and experimental approach to efficiently converge on the optimal solution. However, variability in experimental parameters introduce extraneous signals that are difficult to differentiate from the anticipated response. This paper discusses the results of an ultrasonic experiment designed to evaluate the effect of controllable variables on the anticipated signal, and the effect of unaccounted for experimental variables on the uncertainty in those results. Controlled experimental parameters include the transducer frequency, incidence beam angle and focal depth.
Spatial generalised linear mixed models based on distances.
Melo, Oscar O; Mateu, Jorge; Melo, Carlos E
2016-10-01
Risk models derived from environmental data have been widely shown to be effective in delineating geographical areas of risk because they are intuitively easy to understand. We present a new method based on distances, which allows the modelling of continuous and non-continuous random variables through distance-based spatial generalised linear mixed models. The parameters are estimated using Markov chain Monte Carlo maximum likelihood, which is a feasible and useful technique. The proposed method depends on a detrending step built from continuous or categorical explanatory variables, or a mixture of them, by using an appropriate Euclidean distance. The method is illustrated through the analysis of the variation in the prevalence of Loa loa among a sample of village residents in Cameroon, where the explanatory variables included elevation, together with the maximum normalised-difference vegetation index and the standard deviation of the normalised-difference vegetation index calculated from repeated satellite scans over time. © The Author(s) 2013.
Curran, Christopher A.; Eng, Ken; Konrad, Christopher P.
2012-01-01
Regional low-flow regression models for estimating Q7,10 at ungaged stream sites are developed from the records of daily discharge at 65 continuous gaging stations (including 22 discontinued gaging stations) for the purpose of evaluating explanatory variables. By incorporating the base-flow recession time constant τ as an explanatory variable in the regression model, the root-mean square error for estimating Q7,10 at ungaged sites can be lowered to 72 percent (for known values of τ), which is 42 percent less than if only basin area and mean annual precipitation are used as explanatory variables. If partial-record sites are included in the regression data set, τ must be estimated from pairs of discharge measurements made during continuous periods of declining low flows. Eight measurement pairs are optimal for estimating τ at partial-record sites, and result in a lowering of the root-mean square error by 25 percent. A low-flow survey strategy that includes paired measurements at partial-record sites requires additional effort and planning beyond a standard strategy, but could be used to enhance regional estimates of τ and potentially reduce the error of regional regression models for estimating low-flow characteristics at ungaged sites.
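The pairing idea behind estimating τ can be written down directly. Assuming the standard exponential recession Q(t) = Q0·e^(−t/τ), each pair of discharge measurements taken Δt apart during a continuous decline yields one estimate τ = Δt / ln(Q1/Q2). The discharge values below are hypothetical; the median across roughly eight pairs mirrors the sampling effort the abstract found optimal.

```python
import math
import statistics

# Base-flow recession time constant from paired low-flow measurements,
# assuming exponential recession Q(t) = Q0 * exp(-t/tau).

def tau_from_pair(q1, q2, dt_days):
    """One tau estimate from a (Q1, Q2) pair measured dt_days apart."""
    return dt_days / math.log(q1 / q2)

# Hypothetical measurement pairs (discharge in cfs, interval in days),
# each taken during a continuous period of declining low flow:
pairs = [(12.0, 9.5, 7), (8.8, 7.1, 6), (6.9, 5.2, 8), (10.4, 8.0, 7),
         (5.5, 4.4, 6), (9.9, 7.6, 7), (7.3, 5.6, 7), (11.2, 8.9, 6)]

estimates = [tau_from_pair(q1, q2, dt) for q1, q2, dt in pairs]
tau_hat = statistics.median(estimates)   # robust summary across the 8 pairs
```

The resulting site-level `tau_hat` is what would then enter the regional regression as an explanatory variable alongside basin area and mean annual precipitation.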
Hooper, Stephen R.; Woolley, Donald P.; Shenk, Chad E.
2010-01-01
Objective To examine the relationships of demographic, maltreatment, neurostructural and neuropsychological measures with total posttraumatic stress disorder (PTSD) symptoms. Methods Participants included 216 children with maltreatment histories (N = 49), maltreatment and PTSD (N = 49), or no maltreatment (N = 118). Participants received diagnostic interviews, brain imaging, and neuropsychological evaluations. Results We examined a hierarchical regression model comprised of independent variables including demographics, trauma and maltreatment-related variables, and hippocampal volumes and neuropsychological measures to model PTSD symptoms. Important independent contributors to this model were SES, and General Maltreatment and Sexual Abuse Factors. Although hippocampal volumes were not significant, Visual Memory was a significant contributor to this model. Conclusions Similar to adult PTSD, pediatric PTSD symptoms are associated with lower Visual Memory performance. It is an important correlate of PTSD beyond established predictors of PTSD symptoms. These results support models of developmental traumatology and suggest that treatments which enhance visual memory may decrease symptoms of PTSD. PMID:20008084
2014-01-01
Background Plasmodium falciparum transmission has decreased significantly in Zambia in the last decade. The malaria transmission is influenced by environmental variables. Incorporation of environmental variables in models of malaria transmission likely improves model fit and predicts probable trends in malaria disease. This work is based on the hypothesis that remotely-sensed environmental factors, including nocturnal dew point, are associated with malaria transmission and sustain foci of transmission during the low transmission season in the Southern Province of Zambia. Methods Thirty-eight rural health centres in Southern Province, Zambia were divided into three zones based on transmission patterns. Correlations between weekly malaria cases and remotely-sensed nocturnal dew point, nocturnal land surface temperature as well as vegetation indices and rainfall were evaluated in time-series analyses from 2012 week 19 to 2013 week 36. Zonal as well as clinic-based, multivariate, autoregressive, integrated, moving average (ARIMAX) models implementing environmental variables were developed to model transmission in 2011 week 19 to 2012 week 18 and forecast transmission in 2013 week 37 to week 41. Results During the dry, low transmission season significantly higher vegetation indices, nocturnal land surface temperature and nocturnal dew point were associated with the areas of higher transmission. Environmental variables improved ARIMAX models. Dew point and normalized differentiated vegetation index were significant predictors and improved all zonal transmission models. In the high-transmission zone, this was also seen for land surface temperature. Clinic models were improved by adding dew point and land surface temperature as well as normalized differentiated vegetation index. The mean average error of prediction for ARIMAX models ranged from 0.7 to 33.5%. 
Forecasts of malaria incidence were valid for three out of five rural health centres; however, with poor results at the zonal level. Conclusions In this study, the fit of ARIMAX models improves when environmental variables are included. There is a significant association of remotely-sensed nocturnal dew point with malaria transmission. Interestingly, dew point might be one of the factors sustaining malaria transmission in areas of general aridity during the dry season. PMID:24927747
Nygren, David; Stoyanov, Cristina; Lewold, Clemens; Månsson, Fredrik; Miller, John; Kamanga, Aniset; Shiff, Clive J
2014-06-13
Plasmodium falciparum transmission has decreased significantly in Zambia in the last decade. The malaria transmission is influenced by environmental variables. Incorporation of environmental variables in models of malaria transmission likely improves model fit and predicts probable trends in malaria disease. This work is based on the hypothesis that remotely-sensed environmental factors, including nocturnal dew point, are associated with malaria transmission and sustain foci of transmission during the low transmission season in the Southern Province of Zambia. Thirty-eight rural health centres in Southern Province, Zambia were divided into three zones based on transmission patterns. Correlations between weekly malaria cases and remotely-sensed nocturnal dew point, nocturnal land surface temperature as well as vegetation indices and rainfall were evaluated in time-series analyses from 2012 week 19 to 2013 week 36. Zonal as well as clinic-based, multivariate, autoregressive, integrated, moving average (ARIMAX) models implementing environmental variables were developed to model transmission in 2011 week 19 to 2012 week 18 and forecast transmission in 2013 week 37 to week 41. During the dry, low transmission season significantly higher vegetation indices, nocturnal land surface temperature and nocturnal dew point were associated with the areas of higher transmission. Environmental variables improved ARIMAX models. Dew point and normalized differentiated vegetation index were significant predictors and improved all zonal transmission models. In the high-transmission zone, this was also seen for land surface temperature. Clinic models were improved by adding dew point and land surface temperature as well as normalized differentiated vegetation index. The mean average error of prediction for ARIMAX models ranged from 0.7 to 33.5%. Forecasts of malaria incidence were valid for three out of five rural health centres; however, with poor results at the zonal level. 
In this study, the fit of ARIMAX models improves when environmental variables are included. There is a significant association of remotely-sensed nocturnal dew point with malaria transmission. Interestingly, dew point might be one of the factors sustaining malaria transmission in areas of general aridity during the dry season.
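The core modelling idea above (an autoregressive model of weekly cases with environmental covariates as exogenous regressors) can be sketched in a minimal form. This is an illustrative ARX(1) fit on synthetic data, not the study's ARIMAX models or the Zambian records; all series, coefficients, and noise levels are invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic weekly series: cases driven by their own lag plus seasonal
# dew point and NDVI signals (hypothetical values, not real data).
n = 120
dew = 10 + 3 * np.sin(2 * np.pi * np.arange(n) / 52) + rng.normal(0, 0.3, n)
ndvi = 0.4 + 0.2 * np.sin(2 * np.pi * (np.arange(n) - 8) / 52) + rng.normal(0, 0.02, n)
cases = np.zeros(n)
for t in range(1, n):
    cases[t] = 0.6 * cases[t - 1] + 1.5 * dew[t] + 20 * ndvi[t] + rng.normal(0, 1)

# Fit an ARX(1) model by least squares: cases_t ~ cases_{t-1} + dew_t + ndvi_t
X = np.column_stack([np.ones(n - 1), cases[:-1], dew[1:], ndvi[1:]])
y = cases[1:]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# One-step-ahead fitted values and mean absolute percentage error,
# analogous to the prediction-error summary reported in the abstract.
pred = X @ beta
mape = np.mean(np.abs((y - pred) / y)) * 100
print(f"AR coefficient: {beta[1]:.2f}, dew-point effect: {beta[2]:.2f}")
print(f"in-sample MAPE: {mape:.1f}%")
```

A full ARIMAX treatment would add differencing and moving-average terms (e.g. via a time-series library), but the exogenous-regressor structure is the same.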
Statistical Analysis of Large Simulated Yield Datasets for Studying Climate Effects
NASA Technical Reports Server (NTRS)
Makowski, David; Asseng, Senthold; Ewert, Frank; Bassu, Simona; Durand, Jean-Louis; Martre, Pierre; Adam, Myriam; Aggarwal, Pramod K.; Angulo, Carlos; Baron, Chritian;
2015-01-01
Many studies have been carried out during the last decade to examine the effect of climate change on crop yields and other key crop characteristics. In these studies, one or several crop models were used to simulate crop growth and development under different climate scenarios corresponding to different projections of atmospheric CO2 concentration, temperature, and rainfall changes (Semenov et al., 1996; Tubiello and Ewert, 2002; White et al., 2011). The Agricultural Model Intercomparison and Improvement Project (AgMIP; Rosenzweig et al., 2013) builds on these studies with the goal of using an ensemble of multiple crop models to assess the effects of climate change scenarios for several crops in contrasting environments. These studies generate large datasets, including thousands of simulated crop yield values: series of yields obtained by combining several crop models with different climate scenarios, each defined by several climatic variables (temperature, CO2, rainfall, etc.). Such datasets potentially provide useful information on the possible effects of different climate change scenarios on crop yields. However, they can be difficult to analyze and to summarize in a useful way because of their structural complexity; simulated yield data can differ among contrasting climate scenarios, sites, and crop models. Another issue is that it is not straightforward to extrapolate the results to alternative climate change scenarios not initially included in the simulation protocols. Additional dynamic crop model simulations for new climate change scenarios are an option, but this approach is costly, especially when a large number of crop models are used to generate the simulated data, as in AgMIP.
Statistical models have been used to analyze responses of measured yield data to climate variables in past studies (Lobell et al., 2011), but using a statistical model to analyze yields simulated by complex process-based crop models is a rather new idea. We demonstrate here that statistical methods can play an important role in analyzing simulated yield datasets obtained from ensembles of process-based crop models. Formal statistical analysis is helpful for estimating the effects of different climatic variables on yield and for describing the between-model variability of these effects.
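The kind of summary described above can be illustrated with a toy ensemble: fit a simple statistical response of yield to a climate variable separately for each crop model, then report the ensemble-mean effect and its between-model spread. All numbers below (5 models, temperature scenarios, slopes) are invented for the sketch; AgMIP datasets are far larger and higher-dimensional.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ensemble: 5 crop models, yields simulated at several
# temperature-change scenarios (dT in degrees C).
dT = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
n_models = 5
true_slopes = rng.normal(-0.4, 0.1, n_models)   # t/ha per degree C, model-specific
yields = 8.0 + true_slopes[:, None] * dT[None, :] \
         + rng.normal(0, 0.1, (n_models, len(dT)))

# Fit yield ~ a + b * dT for each crop model, then summarize the
# ensemble-mean temperature effect and its between-model variability.
slopes = []
for m in range(n_models):
    b, a = np.polyfit(dT, yields[m], 1)   # returns [slope, intercept]
    slopes.append(b)
slopes = np.array(slopes)

print(f"mean temperature effect: {slopes.mean():.2f} t/ha per degree C")
print(f"between-model s.d.:      {slopes.std(ddof=1):.2f}")
```

A mixed-effects model would estimate the same two quantities jointly; per-model fits are the simplest version of the idea.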
Takemura, Naohiro; Fukui, Takao; Inui, Toshio
2015-01-01
In human reach-to-grasp movement, visual occlusion of a target object leads to a larger peak grip aperture compared to conditions where online vision is available. However, no previous computational and neural network models for reach-to-grasp movement explain the mechanism of this effect. We simulated the effect of online vision on the reach-to-grasp movement by proposing a computational control model based on the hypothesis that the grip aperture is controlled to compensate for both motor variability and sensory uncertainty. In this model, the aperture is formed to achieve a target aperture size that is sufficiently large to accommodate the actual target; it also includes a margin to ensure proper grasping despite sensory and motor variability. To this end, the model considers: (i) the variability of the grip aperture, which is predicted by the Kalman filter, and (ii) the uncertainty of the object size, which is affected by visual noise. Using this model, we simulated experiments in which the effect of the duration of visual occlusion was investigated. The simulation replicated the experimental result wherein the peak grip aperture increased when the target object was occluded, especially in the early phase of the movement. Both predicted motor variability and sensory uncertainty play important roles in the online visuomotor process responsible for grip aperture control. PMID:26696874
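The control principle described above (aperture = estimated object size plus a safety margin scaled by combined sensory and motor variability, with a Kalman filter tracking the size estimate) can be sketched with a scalar filter. Every parameter value here is a hypothetical stand-in, not taken from the paper; the point is only that withholding visual updates lets the estimate variance grow, which inflates the peak aperture.

```python
import numpy as np

def peak_aperture(occluded, n_steps=50, obj_size=5.0,
                  process_var=0.01, meas_var=0.25, motor_var=0.04, k=2.0):
    """Scalar Kalman filter over object-size estimates; the aperture adds a
    margin of k standard deviations of combined sensory + motor variability
    (all parameter values illustrative)."""
    rng = np.random.default_rng(2)
    x_hat, p = obj_size, 1.0          # initial estimate and its variance
    apertures = []
    for _ in range(n_steps):
        p = p + process_var           # predict: uncertainty grows
        if not occluded:
            # Update with a noisy visual measurement of object size.
            z = obj_size + rng.normal(0, np.sqrt(meas_var))
            gain = p / (p + meas_var)
            x_hat = x_hat + gain * (z - x_hat)
            p = (1 - gain) * p
        apertures.append(x_hat + k * np.sqrt(p + motor_var))
    return max(apertures)

open_loop = peak_aperture(occluded=True)
closed_loop = peak_aperture(occluded=False)
print(f"peak aperture, occluded:      {open_loop:.2f}")
print(f"peak aperture, online vision: {closed_loop:.2f}")
```

With occlusion the variance is never reduced by measurements, so the peak aperture is larger, mirroring the experimental effect the model replicates.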
Modeled summer background concentrations of nutrients and ...
We used regression models to predict background concentrations of four water quality indicators: total nitrogen (N), total phosphorus (P), chloride, and total suspended solids (TSS), in the mid-continent (USA) great rivers: the Upper Mississippi, the Lower Missouri, and the Ohio. From best-model linear regressions of water quality indicators against land use and other stressor variables, we determined the concentration of each indicator when the land use and stressor variables were all set to zero (the y-intercept). Except for total P on the Upper Mississippi River and chloride on the Ohio River, we were able to predict background concentrations from significant regression models. In every model with more than one predictor variable, the model included at least one variable representing agricultural land use and one representing development. Predicted background concentration of total N was the same on the Upper Mississippi and Lower Missouri rivers (350 ug l-1), which was much lower than a published eutrophication threshold and percentile-based thresholds (25th percentile of concentration at all sites in the population) but was similar to a threshold derived from the response of sestonic chlorophyll a to great river total N concentration. Background concentration of total P on the Lower Missouri (53 ug l-1) was also lower than published and percentile-based thresholds. Background TSS concentration was higher on the Lower Missouri (30 mg l-1) than on the other rivers.
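The intercept-as-background idea above is straightforward to demonstrate: regress the indicator on the land-use/stressor variables and read off the fitted value when all predictors are zero. The site data below are synthetic and the coefficients invented; only the method (background = y-intercept of the best-model regression) follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical site data: total N concentration (ug/l) modelled from
# agricultural land use (%) and developed land use (%).
n_sites = 80
agri = rng.uniform(0, 80, n_sites)
devel = rng.uniform(0, 30, n_sites)
total_n = 350 + 12 * agri + 25 * devel + rng.normal(0, 50, n_sites)

# Linear regression; the intercept is the predicted concentration when
# all land-use/stressor variables are zero, i.e. the estimated
# background concentration.
X = np.column_stack([np.ones(n_sites), agri, devel])
coef, *_ = np.linalg.lstsq(X, total_n, rcond=None)
background = coef[0]
print(f"estimated background total N: {background:.0f} ug/l")
```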
Origins of extrinsic variability in eukaryotic gene expression
NASA Astrophysics Data System (ADS)
Volfson, Dmitri; Marciniak, Jennifer; Blake, William J.; Ostroff, Natalie; Tsimring, Lev S.; Hasty, Jeff
2006-02-01
Variable gene expression within a clonal population of cells has been implicated in a number of important processes including mutation and evolution, determination of cell fates and the development of genetic disease. Recent studies have demonstrated that a significant component of expression variability arises from extrinsic factors thought to influence multiple genes simultaneously, yet the biological origins of this extrinsic variability have received little attention. Here we combine computational modelling with fluorescence data generated from multiple promoter-gene inserts in Saccharomyces cerevisiae to identify two major sources of extrinsic variability. One unavoidable source arising from the coupling of gene expression with population dynamics leads to a ubiquitous lower limit for expression variability. A second source, which is modelled as originating from a common upstream transcription factor, exemplifies how regulatory networks can convert noise in upstream regulator expression into extrinsic noise at the output of a target gene. Our results highlight the importance of the interplay of gene regulatory networks with population heterogeneity for understanding the origins of cellular diversity.
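The intrinsic/extrinsic split described above is commonly quantified with a two-reporter decomposition: an extrinsic factor scales both reporters in a cell, while intrinsic noise hits each independently, so extrinsic noise appears as the covariance between reporters and intrinsic noise as half their mean squared difference. The simulation below is a generic illustration of that decomposition with invented parameter values, not the paper's yeast model.

```python
import numpy as np

rng = np.random.default_rng(4)

# Each "cell" expresses two identical reporters. An extrinsic factor E
# (e.g. a shared upstream regulator, or cell size/age) scales both;
# intrinsic noise is independent per reporter. Values are illustrative.
n_cells = 100_000
extrinsic = rng.lognormal(mean=0.0, sigma=0.3, size=n_cells)
r1 = extrinsic * rng.lognormal(0.0, 0.1, n_cells)
r2 = extrinsic * rng.lognormal(0.0, 0.1, n_cells)

mean_sq = r1.mean() * r2.mean()
# Extrinsic noise^2: normalized covariance between the two reporters.
eta_ext2 = (np.mean(r1 * r2) - mean_sq) / mean_sq
# Intrinsic noise^2: half the normalized mean squared difference.
eta_int2 = np.mean((r1 - r2) ** 2) / (2 * mean_sq)
print(f"extrinsic noise^2: {eta_ext2:.3f}")
print(f"intrinsic noise^2: {eta_int2:.3f}")
```

With the shared factor dominating, the extrinsic component comes out much larger than the intrinsic one, which is the regime the abstract describes.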
Hall, David C; Le, Quynh B
2017-06-01
More than 70 million Vietnamese rely on small-scale farming for some form of household income. Water on many of those farms is contaminated with waste, including animal manure, partly due to non-sustainable waste management. This increases the risk of water-related zoonotic disease transmission. The purpose of this research was to examine the impact of various demographic and management factors on the likelihood of finding Escherichia coli in drinking water sourced from wells and rainwater on farms in Vietnam. A Bayesian Belief Network (BBN) was designed to describe associations between various deterministic and probabilistic variables gathered from 600 small-scale integrated (SSI) farmers in Vietnam. The variables relate to the E. coli content of drinking water sourced on-farm from wells and rainwater and stored in large on-farm vessels, including concrete water tanks. The BBN was developed using the Netica software tool; the model was calibrated and goodness of fit was examined using concordance of predictability. Sensitivity analysis of the model revealed that choice variables, including engagement in mitigation of water contamination and livestock management activities, were particularly likely to influence endpoint values, reflecting the highly variable and impactful nature of preferences, attitudes and beliefs relating to mitigation strategies. Quantitative variables, including numbers of livestock (particularly chickens) and income, also had a high impact. The highest concordance (62%) was achieved with the BBN reported in this paper. This BBN model of SSI farming in Vietnam is helpful in understanding the complexity of small-scale agriculture and how various factors work in concert to influence contamination of on-farm drinking water as indicated by the presence of E. coli.
The model will also be useful for identifying and estimating the impact of policy options, such as improved delivery of clean-water management training for rural areas, particularly where such analysis is combined with other analytical and policy tools. With appropriate knowledge translation, the model results will be particularly useful in helping SSI farmers understand their options for engaging in public health mitigation strategies addressing clean water that do not significantly disrupt their agriculture-based livelihoods. © The Author 2017. Published by Oxford University Press on behalf of Royal Society of Tropical Medicine and Hygiene. All rights reserved.
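The abstract's BBN reasoning (how management choices and livestock density jointly shift the probability of contamination, and what a diagnostic query looks like) can be sketched with a tiny hand-rolled two-parent network. The structure and all probabilities below are invented for illustration; the study's actual network was built and calibrated in Netica with many more variables.

```python
from itertools import product

# Toy Bayesian network (illustrative probabilities, not the study's
# Netica model): mitigation practice and livestock density jointly
# influence E. coli presence in stored water.
p_mitigation = {"yes": 0.4, "no": 0.6}
p_livestock = {"high": 0.5, "low": 0.5}
p_ecoli = {  # P(E. coli present | mitigation, livestock)
    ("yes", "high"): 0.30, ("yes", "low"): 0.10,
    ("no", "high"): 0.70, ("no", "low"): 0.35,
}

# Marginal probability of contamination by enumerating the joint.
p_contam = sum(p_mitigation[m] * p_livestock[l] * p_ecoli[(m, l)]
               for m, l in product(p_mitigation, p_livestock))

# Posterior P(no mitigation | E. coli present) via Bayes' rule --
# the kind of diagnostic query a BBN supports.
p_no_given_contam = sum(p_mitigation["no"] * p_livestock[l] * p_ecoli[("no", l)]
                        for l in p_livestock) / p_contam
print(f"P(E. coli present) = {p_contam:.3f}")
print(f"P(no mitigation | E. coli present) = {p_no_given_contam:.3f}")
```

Sensitivity analysis in this setting amounts to perturbing the conditional probability tables and observing how much the endpoint marginal moves.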
Individual Differences in Well-Being in Older Breast Cancer Survivors
Perkins, Elizabeth A.; Small, Brent J.; Balducci, Lodovico; Extermann, Martine; Robb, Claire; Haley, William E.
2007-01-01
Older women who survive breast cancer may differ significantly in their long-term well-being. Using a risk and protective factors model, we studied predictors of well-being in 127 women aged 70 and above with a history of at least one year's survival of breast cancer. Mean post-cancer survivorship was 5.1 years. Using life satisfaction, depression, and general health perceptions as outcome variables, we assessed whether demographic variables, cancer-related variables, health status, and psychosocial resources predicted variability in well-being using correlational and hierarchical regression analyses. Higher age predicted increased depression but was not associated with life satisfaction or general health perceptions. Cancer-related variables, including duration of survival and type of cancer treatment, were not significantly associated with survivors' well-being. Poorer health status was associated with poorer well-being on all three dependent variables. After controlling for demographics, cancer-related variables, and health status, higher levels of psychosocial resources, including optimism, mastery, spirituality, and social support, predicted better outcomes on all three dependent variables. While many older women survive breast cancer without severe sequelae, there is considerable variability in their well-being after survivorship. Successful interventions with older breast cancer survivors might include greater attention not only to cancer-specific concerns but also to geriatric syndromes and functional impairment, and to enhancement of protective psychosocial resources. PMID:17240157
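Hierarchical (blockwise) regression of the kind used above enters predictor blocks in a fixed order and attributes to each block the increment in R^2 over the earlier blocks. The sketch below reproduces that logic on synthetic data with the study's sample size; the block names, effect sizes, and single outcome variable are stand-ins, not the survivor dataset.

```python
import numpy as np

rng = np.random.default_rng(5)

def r_squared(X, y):
    """R^2 from an OLS fit with an intercept column."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

# Synthetic stand-in for the survivor data (n = 127): well-being driven
# mostly by health status and psychosocial resources.
n = 127
age = rng.normal(75, 4, n)
health = rng.normal(0, 1, n)
psychosocial = rng.normal(0, 1, n)
wellbeing = (-0.05 * (age - 75) + 0.5 * health + 0.6 * psychosocial
             + rng.normal(0, 1, n))

# Blockwise entry: each block's delta-R^2 is the variance it explains
# over and above the blocks entered before it.
blocks = {
    "demographics": age.reshape(-1, 1),
    "+ health status": np.column_stack([age, health]),
    "+ psychosocial": np.column_stack([age, health, psychosocial]),
}
prev = 0.0
for name, X in blocks.items():
    r2 = r_squared(X, wellbeing)
    print(f"{name:16s} R^2 = {r2:.3f}  (delta = {r2 - prev:.3f})")
    prev = r2
```

Because the models are nested, R^2 can only increase with each block; the interesting quantity is how large each increment is after controlling for the earlier blocks.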