Sample records for model complex variable

  1. Variable Complexity Optimization of Composite Structures

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.

    2002-01-01

    The use of several levels of modeling in design has been dubbed variable complexity modeling. The work under the grant focused on developing variable complexity modeling strategies with emphasis on response surface techniques. Applications included design of stiffened composite plates for improved damage tolerance, the use of response surfaces for fitting weights obtained by structural optimization, and design against uncertainty using response surface techniques.
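
    The response surface idea mentioned above can be illustrated with a minimal sketch: fit a low-order polynomial surrogate to weights returned by an expensive analysis and then query the surrogate cheaply. The weight function and the two design variables below are invented stand-ins for a structural optimizer, not part of the original work.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(0)

    def expensive_weight(x):
        # Placeholder for a costly structural optimization returning minimum weight
        # for design variables x = (skin thickness, stiffener spacing); illustrative only.
        t, s = x
        return 100 + 5 * (t - 2.0) ** 2 + 3 * (s - 1.5) ** 2 + 0.5 * t * s

    # Sample the design space at a handful of "high fidelity" points
    X = rng.uniform([1.0, 0.5], [3.0, 2.5], size=(30, 2))
    y = np.array([expensive_weight(x) for x in X])

    # Quadratic response surface surrogate
    surrogate = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
    surrogate.fit(X, y)

    # Cheap surrogate evaluations replace further calls to the expensive analysis
    X_new = rng.uniform([1.0, 0.5], [3.0, 2.5], size=(5, 2))
    print(surrogate.predict(X_new))
    ```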

  2. Modeling Psychological Attributes in Psychology – An Epistemological Discussion: Network Analysis vs. Latent Variables

    PubMed Central

    Guyon, Hervé; Falissard, Bruno; Kop, Jean-Luc

    2017-01-01

    Network Analysis is considered a new method that challenges Latent Variable models in inferring psychological attributes. With Network Analysis, psychological attributes are derived from a complex system of components without the need to call on any latent variables. But the ontological status of psychological attributes is not adequately defined with Network Analysis, because a psychological attribute is both a complex system and a property emerging from this complex system. The aim of this article is to reappraise the legitimacy of latent variable models by engaging in an ontological and epistemological discussion on psychological attributes. Psychological attributes relate to the mental equilibrium of individuals embedded in their social interactions, as robust attractors within complex dynamic processes with emergent properties, distinct from physical entities located in precise areas of the brain. Latent variables thus possess legitimacy, because the emergent properties can be conceptualized and analyzed on the sole basis of their manifestations, without exploring the upstream complex system. However, in opposition to the usual Latent Variable models, this article is in favor of the integration of a dynamic system of manifestations. Latent Variable models and Network Analysis thus appear as complementary approaches. New approaches combining Latent Network Models and Network Residuals are certainly a promising new way to infer psychological attributes, placing psychological attributes in an inter-subjective dynamic approach. Pragmatism-realism appears as the epistemological framework required if we are to use latent variables as representations of psychological attributes. PMID:28572780

  3. [Variable selection methods combined with local linear embedding theory used for optimization of near infrared spectral quantitative models].

    PubMed

    Hao, Yong; Sun, Xu-Dong; Yang, Qiang

    2012-12-01

    A variable selection strategy combined with local linear embedding (LLE) was introduced for the analysis of complex samples by near infrared spectroscopy (NIRS). Three methods, Monte Carlo uninformative variable elimination (MCUVE), the successive projections algorithm (SPA), and MCUVE combined with SPA, were used for eliminating redundant spectral variables. Partial least squares regression (PLSR) and LLE-PLSR were used for modeling complex samples. The results showed that MCUVE can both extract effective informative variables and improve the precision of models. Compared with PLSR models, LLE-PLSR models achieve more accurate analysis results. MCUVE combined with LLE-PLSR is an effective modeling method for NIRS quantitative analysis.
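
    A minimal sketch of the general MCUVE-plus-PLSR idea: fit PLS models on many random subsamples, rank wavelengths by the stability of their regression coefficients, and refit on the retained variables. The synthetic "spectra", number of components, subsample fraction, and retention threshold are assumptions for illustration, not the settings of the paper.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(1)

    # Synthetic "spectra": 200 wavelengths, only the first 20 carry signal (illustrative).
    n_samples, n_vars, n_informative = 100, 200, 20
    X = rng.normal(size=(n_samples, n_vars))
    beta = np.zeros(n_vars)
    beta[:n_informative] = rng.uniform(0.5, 1.5, n_informative)
    y = X @ beta + rng.normal(scale=0.5, size=n_samples)

    # MCUVE-style screening: fit PLS on many random subsamples and keep variables
    # whose coefficient mean/std ("stability") is large. Thresholds are arbitrary here.
    n_runs = 200
    coefs = np.zeros((n_runs, n_vars))
    for i in range(n_runs):
        idx = rng.choice(n_samples, size=int(0.8 * n_samples), replace=False)
        pls = PLSRegression(n_components=5).fit(X[idx], y[idx])
        coefs[i] = pls.coef_.ravel()
    stability = np.abs(coefs.mean(axis=0)) / coefs.std(axis=0)
    selected = np.argsort(stability)[-30:]          # retain the 30 most stable variables

    # Final PLS model on the reduced variable set
    final = PLSRegression(n_components=5).fit(X[:, selected], y)
    print("selected variables:", np.sort(selected)[:10], "...")
    print("R^2 on training data:", final.score(X[:, selected], y))
    ```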

  4. Regional-scale brine migration along vertical pathways due to CO2 injection - Part 2: A simulated case study in the North German Basin

    NASA Astrophysics Data System (ADS)

    Kissinger, Alexander; Noack, Vera; Knopf, Stefan; Konrad, Wilfried; Scheer, Dirk; Class, Holger

    2017-06-01

    Saltwater intrusion into potential drinking water aquifers due to the injection of CO2 into deep saline aquifers is one of the hazards associated with the geological storage of CO2. Thus, in a site-specific risk assessment, models for predicting the fate of the displaced brine are required. Practical simulation of brine displacement involves decisions regarding the complexity of the model. The choice of an appropriate level of model complexity depends on multiple criteria: the target variable of interest, the relevant physical processes, the computational demand, the availability of data, and the data uncertainty. In this study, we set up a regional-scale geological model for a realistic (but not real) onshore site in the North German Basin with characteristic geological features for that region. A major aim of this work is to identify the relevant parameters controlling saltwater intrusion in a complex structural setting and to test the applicability of different model simplifications. The model that is used to identify relevant parameters fully couples flow in shallow freshwater aquifers and deep saline aquifers. This model also includes variable-density transport of salt and realistically incorporates surface boundary conditions with groundwater recharge. The complexity of this model is then reduced in several steps, by neglecting physical processes (two-phase flow near the injection well, variable-density flow) and by simplifying the complex geometry of the geological model. The results indicate that the initial salt distribution prior to the injection of CO2 is one of the key parameters controlling shallow aquifer salinization. However, determining the initial salt distribution involves large uncertainties in the regional-scale hydrogeological parameterization and requires complex and computationally demanding models (regional-scale variable-density salt transport). In order to evaluate strategies for minimizing leakage into shallow aquifers, other target variables can be considered, such as the volumetric leakage rate into shallow aquifers or the pressure buildup in the injection horizon. Our results show that simplified models, which neglect variable-density salt transport, can reach an acceptable agreement with more complex models.

  5. The QSAR study of flavonoid-metal complexes scavenging rad OH free radical

    NASA Astrophysics Data System (ADS)

    Wang, Bo-chu; Qian, Jun-zhen; Fan, Ying; Tan, Jun

    2014-10-01

    Flavonoid-metal complexes have antioxidant activities. However, the quantitative structure-activity relationship (QSAR) between flavonoid-metal complexes and their antioxidant activities has still not been tackled. On the basis of 21 structures of flavonoid-metal complexes and their antioxidant activities for scavenging the rad OH free radical, we optimised their structures using the Gaussian 03 software package and subsequently calculated and chose 18 quantum chemistry descriptors such as dipole, charge and energy. We then chose the quantum chemistry descriptors most important to the IC50 of flavonoid-metal complexes for scavenging the rad OH free radical through stepwise linear regression, and we obtained 4 new variables through principal component analysis. Finally, we built the QSAR models based on those important quantum chemistry descriptors and the 4 new variables as the independent variables and the IC50 as the dependent variable using an Artificial Neural Network (ANN), and we validated the two models using experimental data. These results show that the two models in this paper are reliable and predictive.

  6. An outline of graphical Markov models in dentistry.

    PubMed

    Helfenstein, U; Steiner, M; Menghini, G

    1999-12-01

    In the usual multiple regression model there is one response variable and one block of several explanatory variables. In contrast, in reality there may be a block of several possibly interacting response variables one would like to explain. In addition, the explanatory variables may split into a sequence of several blocks, each block containing several interacting variables. The variables in the second block are explained by those in the first block; the variables in the third block by those in the first and the second block etc. During recent years methods have been developed allowing analysis of problems where the data set has the above complex structure. The models involved are called graphical models or graphical Markov models. The main result of an analysis is a picture, a conditional independence graph with precise statistical meaning, consisting of circles representing variables and lines or arrows representing significant conditional associations. The absence of a line between two circles signifies that the corresponding two variables are independent conditional on the presence of other variables in the model. An example from epidemiology is presented in order to demonstrate application and use of the models. The data set in the example has a complex structure consisting of successive blocks: the variable in the first block is year of investigation; the variables in the second block are age and gender; the variables in the third block are indices of calculus, gingivitis and mutans streptococci and the final response variables in the fourth block are different indices of caries. Since the statistical methods may not be easily accessible to dentists, this article presents them in an introductory form. Graphical models may be of great value to dentists in allowing analysis and visualisation of complex structured multivariate data sets consisting of a sequence of blocks of interacting variables and, in particular, several possibly interacting responses in the final block.

  7. RMS Spectral Modelling - a powerful tool to probe the origin of variability in Active Galactic Nuclei

    NASA Astrophysics Data System (ADS)

    Mallick, Labani; Dewangan, Gulab chand; Misra, Ranjeev

    2016-07-01

    The broadband energy spectra of Active Galactic Nuclei (AGN) are very complex in nature with the contribution from many ingredients: accretion disk, corona, jets, broad-line region (BLR), narrow-line region (NLR) and Compton-thick absorbing cloud or TORUS. The complexity of the broadband AGN spectra gives rise to mean spectral model degeneracy, e.g., there are competing models for the broad feature near 5-7 keV in terms of blurred reflection and complex absorption. In order to overcome the energy spectral model degeneracy, the most reliable approach is to study the RMS variability spectrum, which connects the energy spectrum with temporal variability. The origin of variability could be pivoting of the primary continuum, reflection and/or absorption. The study of RMS (Root Mean Square) spectra would help us to connect the energy spectra with the variability. In this work, we study the energy-dependent variability of AGN by developing a theoretical RMS spectral model in ISIS (Interactive Spectral Interpretation System) for different input energy spectra. In this talk, I would like to present results of RMS spectral modelling for a few radio-loud and radio-quiet AGN observed by XMM-Newton, Suzaku, NuSTAR and ASTROSAT and will probe the dichotomy between these two classes of AGN.

  8. Multilevel Model Prediction

    ERIC Educational Resources Information Center

    Frees, Edward W.; Kim, Jee-Seon

    2006-01-01

    Multilevel models are proven tools in social research for modeling complex, hierarchical systems. In multilevel modeling, statistical inference is based largely on quantification of random variables. This paper distinguishes among three types of random variables in multilevel modeling--model disturbances, random coefficients, and future response…

  9. Spatio-temporal error growth in the multi-scale Lorenz'96 model

    NASA Astrophysics Data System (ADS)

    Herrera, S.; Fernández, J.; Rodríguez, M. A.; Gutiérrez, J. M.

    2010-07-01

    The influence of multiple spatio-temporal scales on the error growth and predictability of atmospheric flows is analyzed throughout the paper. To this aim, we consider the two-scale Lorenz'96 model and study the interplay of the slow and fast variables on the error growth dynamics. It is shown that when the coupling between slow and fast variables is weak the slow variables dominate the evolution of fluctuations whereas in the case of strong coupling the fast variables impose a non-trivial complex error growth pattern on the slow variables with two different regimes, before and after saturation of fast variables. This complex behavior is analyzed using the recently introduced Mean-Variance Logarithmic (MVL) diagram.
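
    The two-scale Lorenz'96 equations referenced above are standard; a minimal integration sketch follows. The parameter values (K, J, F, h, c, b), the step size, and the perturbation used to watch error growth are conventional choices for illustration, not necessarily those used in the paper.

    ```python
    import numpy as np

    # Two-scale Lorenz'96: K slow variables X_k, each coupled to J fast variables Y_{j,k}.
    K, J = 8, 32
    F, h, c, b = 10.0, 1.0, 10.0, 10.0   # conventional parameter choices (assumed)

    def tendencies(X, Y):
        dX = (np.roll(X, 1) * (np.roll(X, -1) - np.roll(X, 2))
              - X + F - (h * c / b) * Y.reshape(K, J).sum(axis=1))
        dY = (c * b * np.roll(Y, -1) * (np.roll(Y, 1) - np.roll(Y, -2))
              - c * Y + (h * c / b) * np.repeat(X, J))
        return dX, dY

    def rk4_step(X, Y, dt):
        k1x, k1y = tendencies(X, Y)
        k2x, k2y = tendencies(X + 0.5 * dt * k1x, Y + 0.5 * dt * k1y)
        k3x, k3y = tendencies(X + 0.5 * dt * k2x, Y + 0.5 * dt * k2y)
        k4x, k4y = tendencies(X + dt * k3x, Y + dt * k3y)
        return (X + dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6,
                Y + dt * (k1y + 2 * k2y + 2 * k3y + k4y) / 6)

    # Integrate a control run and a slightly perturbed run to watch error growth
    X = F * np.ones(K); X[0] += 0.01
    Y = 0.1 * np.random.default_rng(0).standard_normal(K * J)
    Xp = X + 1e-8 * np.random.default_rng(1).standard_normal(K)
    Yp = Y.copy()
    dt = 0.001
    for step in range(5000):
        X, Y = rk4_step(X, Y, dt)
        Xp, Yp = rk4_step(Xp, Yp, dt)
        if step % 1000 == 0:
            print(f"t={step*dt:5.2f}  slow-variable error={np.abs(X - Xp).mean():.3e}")
    ```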

  10. Surface complexation modeling

    USDA-ARS?s Scientific Manuscript database

    Adsorption-desorption reactions are important processes that affect the transport of contaminants in the environment. Surface complexation models are chemical models that can account for the effects of variable chemical conditions, such as pH, on adsorption reactions. These models define specific ...

  11. Effects of in-sewer processes: a stochastic model approach.

    PubMed

    Vollertsen, J; Nielsen, A H; Yang, W; Hvitved-Jacobsen, T

    2005-01-01

    Transformations of organic matter, nitrogen and sulfur in sewers can be simulated taking into account the relevant transformation and transport processes. One objective of such simulation is the assessment and management of hydrogen sulfide formation and corrosion. Sulfide is formed in the biofilms and sediments of the water phase, but corrosion occurs on the moist surfaces of the sewer gas phase. Consequently, both phases and the transport of volatile substances between these phases must be included. Furthermore, wastewater composition and transformations in sewers are complex and subject to high, natural variability. This paper presents the latest developments of the WATS model concept, allowing integrated aerobic, anoxic and anaerobic simulation of the water phase and of gas phase processes. The resulting model is complex and with high parameter variability. An example applying stochastic modeling shows how this complexity and variability can be taken into account.

  12. Multi-level emulation of complex climate model responses to boundary forcing data

    NASA Astrophysics Data System (ADS)

    Tran, Giang T.; Oliver, Kevin I. C.; Holden, Philip B.; Edwards, Neil R.; Sóbester, András; Challenor, Peter

    2018-04-01

    Climate model components involve both high-dimensional input and output fields. It is desirable to efficiently generate spatio-temporal outputs of these models for applications in integrated assessment modelling or to assess the statistical relationship between such sets of inputs and outputs, for example, uncertainty analysis. However, the need for efficiency often compromises the fidelity of output through the use of low complexity models. Here, we develop a technique which combines statistical emulation with a dimensionality reduction technique to emulate a wide range of outputs from an atmospheric general circulation model, PLASIM, as functions of the boundary forcing prescribed by the ocean component of a lower complexity climate model, GENIE-1. Although accurate and detailed spatial information on atmospheric variables such as precipitation and wind speed is well beyond the capability of GENIE-1's energy-moisture balance model of the atmosphere, this study demonstrates that the output of this model is useful in predicting PLASIM's spatio-temporal fields through multi-level emulation. Meaningful information from the fast model, GENIE-1 was extracted by utilising the correlation between variables of the same type in the two models and between variables of different types in PLASIM. We present here the construction and validation of several PLASIM variable emulators and discuss their potential use in developing a hybrid model with statistical components.
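
    The general multi-level emulation recipe (reduce the high-complexity model's output field to a few principal components, then regress those components on output from the fast model) can be sketched as follows. All arrays are synthetic stand-ins for GENIE-1 forcing and PLASIM fields, and the linear regressor is a placeholder for whatever emulator is preferred.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(2)

    # Toy stand-ins: "fast model" boundary forcing (10 summary values per run) and
    # "complex model" output (a 500-cell field), for 40 paired training runs.
    n_runs, n_forcing, n_grid = 40, 10, 500
    forcing = rng.normal(size=(n_runs, n_forcing))
    weights = rng.normal(size=(n_forcing, n_grid))
    output = forcing @ weights + 0.1 * rng.normal(size=(n_runs, n_grid))

    # Step 1: reduce the high-dimensional output field to a few principal components.
    pca = PCA(n_components=5).fit(output)
    scores = pca.transform(output)

    # Step 2: emulate each retained component score as a function of the fast-model forcing.
    emulator = LinearRegression().fit(forcing, scores)

    # Step 3: predict the full field for a new forcing by inverting the PCA truncation.
    new_forcing = rng.normal(size=(1, n_forcing))
    predicted_field = pca.inverse_transform(emulator.predict(new_forcing))
    print(predicted_field.shape)   # (1, 500)
    ```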

  13. Datamining approaches for modeling tumor control probability.

    PubMed

    Naqa, Issam El; Deasy, Joseph O; Mu, Yi; Huang, Ellen; Hope, Andrew J; Lindsay, Patricia E; Apte, Aditya; Alaly, James; Bradley, Jeffrey D

    2010-11-01

    Tumor control probability (TCP) to radiotherapy is determined by complex interactions between tumor biology, tumor microenvironment, radiation dosimetry, and patient-related variables. The complexity of these heterogeneous variable interactions constitutes a challenge for building predictive models for routine clinical practice. We describe a datamining framework that can unravel the higher order relationships among dosimetric dose-volume prognostic variables, interrogate various radiobiological processes, and generalize to unseen data when applied prospectively. Several datamining approaches are discussed that include dose-volume metrics, equivalent uniform dose, the mechanistic Poisson model, and model building methods using statistical regression and machine learning techniques. Institutional datasets of non-small cell lung cancer (NSCLC) patients are used to demonstrate these methods. The performance of the different methods was evaluated using bivariate Spearman rank correlations (rs). Over-fitting was controlled via resampling methods. Using a dataset of 56 patients with primary NSCLC tumors and 23 candidate variables, we estimated GTV volume and V75 to be the best model parameters for predicting TCP using statistical resampling and a logistic model. Using these variables, the support vector machine (SVM) kernel method provided superior performance for TCP prediction with an rs=0.68 on leave-one-out testing compared to logistic regression (rs=0.4), Poisson-based TCP (rs=0.33), and cell kill equivalent uniform dose model (rs=0.17). The prediction of treatment response can be improved by utilizing datamining approaches, which are able to unravel important non-linear complex interactions among model variables and have the capacity to predict on unseen data for prospective clinical applications.
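
    A minimal sketch of the evaluation protocol described above: leave-one-out predictions scored with Spearman's rank correlation, comparing an RBF-kernel SVM with logistic regression. The two predictors and the cohort below are synthetic, not the NSCLC data.

    ```python
    import numpy as np
    from scipy.stats import spearmanr
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import LeaveOneOut
    from sklearn.svm import SVC

    rng = np.random.default_rng(3)

    # Synthetic cohort: two candidate predictors (standing in for GTV volume and V75)
    # and a binary tumor-control outcome.
    n = 56
    X = rng.normal(size=(n, 2))
    p = 1 / (1 + np.exp(-(1.5 * X[:, 0] - 1.0 * X[:, 1])))
    y = rng.binomial(1, p)

    def loo_predictions(model):
        preds = np.empty(n)
        for train, test in LeaveOneOut().split(X):
            fitted = model.fit(X[train], y[train])
            preds[test] = fitted.predict_proba(X[test])[:, 1]
        return preds

    for name, model in [("logistic regression", LogisticRegression()),
                        ("SVM (RBF kernel)", SVC(kernel="rbf", probability=True))]:
        rs, _ = spearmanr(loo_predictions(model), y)
        print(f"{name}: leave-one-out Spearman rs = {rs:.2f}")
    ```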

  14. Predictive model of complexity in early palliative care: a cohort of advanced cancer patients (PALCOM study).

    PubMed

    Tuca, Albert; Gómez-Martínez, Mónica; Prat, Aleix

    2018-01-01

    The model of early palliative care (PC) integrated in oncology is based on shared care from the diagnosis to the end of life and is mainly focused on patients with greater complexity. However, there are no definitions or tools to evaluate PC complexity. The objectives of the study were to identify the factors influencing level determination of complexity, propose predictive models, and build a complexity scale of PC. We performed a prospective, observational, multicenter study in a cohort of advanced cancer patients with an estimated prognosis ≤ 6 months. An ad hoc structured evaluation including socio-demographic and clinical data, symptom burden, functional and cognitive status, psychosocial problems, and existential-ethic dilemmas was recorded systematically. According to this multidimensional evaluation, investigators classified patients as high, medium, or low palliative complexity, associated with the need for basic or specialized PC. Logistic regression was used to identify the variables influencing determination of level of PC complexity and explore predictive models. We included 324 patients; 41% were classified as having high PC complexity and 42.9% as medium, both levels being associated with specialized PC. Variables influencing determination of PC complexity were as follows: high symptom burden (OR 3.19 95%CI: 1.72-6.17), difficult pain (OR 2.81 95%CI:1.64-4.9), functional status (OR 0.99 95%CI:0.98-0.9), and social-ethical existential risk factors (OR 3.11 95%CI:1.73-5.77). Logistic analysis of the variables allowed the construction of a complexity model and structured scales (PALCOM 1 and 2) with high predictive value (AUC ROC 76%). This study provides a new model and tools to assess complexity in palliative care, which may be very useful for managing referral to specialized PC services and agreeing on the intensity of their intervention in a model of early-shared care integrated in oncology.

  15. Variable speed limit strategies analysis with mesoscopic traffic flow model based on complex networks

    NASA Astrophysics Data System (ADS)

    Li, Shu-Bin; Cao, Dan-Ni; Dang, Wen-Xiu; Zhang, Lin

    As a new cross-discipline, complexity science has penetrated every field of the economy and society. With the arrival of big data, research in complexity science has reached a new peak. In recent years, it has offered a new perspective on traffic control through complex network theory. The interaction of various kinds of information in a traffic system forms a huge complex system. A new mesoscopic traffic flow model incorporating variable speed limits (VSL) is developed, and a simulation process based on complex network theory combined with the proposed model is designed. This paper studies the effect of VSL on dynamic traffic flow and then analyzes the optimal VSL control strategy in different network topologies. The conclusions of this research are useful for putting forward reasonable transportation plans and developing effective traffic management and control measures for traffic management departments.

  16. Multi-region statistical shape model for cochlear implantation

    NASA Astrophysics Data System (ADS)

    Romera, Jordi; Kjer, H. Martin; Piella, Gemma; Ceresa, Mario; González Ballester, Miguel A.

    2016-03-01

    Statistical shape models are commonly used to analyze the variability between similar anatomical structures and their use is established as a tool for analysis and segmentation of medical images. However, using a global model to capture the variability of complex structures is not enough to achieve the best results. The complexity of a proper global model increases even more when the amount of data available is limited to a small number of datasets. Typically, the anatomical variability between structures is associated with the variability of their physiological regions. In this paper, a complete pipeline is proposed for building a multi-region statistical shape model to study the entire variability from locally identified physiological regions of the inner ear. The proposed model, which is based on an extension of the Point Distribution Model (PDM), is built for a training set of 17 high-resolution images (24.5 μm voxels) of the inner ear. The model is evaluated according to its generalization ability and specificity. The results are compared with those of a global model built directly using the standard PDM approach. The evaluation results suggest that better accuracy can be achieved using regional modeling of the inner ear.
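
    The core of a standard Point Distribution Model is alignment of corresponding landmarks followed by PCA on the stacked coordinates; a minimal sketch with synthetic, pre-aligned landmarks follows (a real pipeline would run Procrustes alignment first, and the multi-region extension of the paper is not reproduced here). The landmark counts and variance threshold are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(4)

    # Toy training set: 17 shapes, each with 50 3-D landmarks in correspondence.
    n_shapes, n_landmarks = 17, 50
    mean_shape = rng.normal(size=(n_landmarks, 3))
    shapes = mean_shape + 0.05 * rng.normal(size=(n_shapes, n_landmarks, 3))

    # Standard PDM: flatten each shape to a vector and run PCA.
    X = shapes.reshape(n_shapes, -1)
    pdm = PCA(n_components=0.95)        # keep modes explaining 95% of the shape variance
    b = pdm.fit_transform(X)            # per-shape mode coefficients
    print("modes kept:", pdm.n_components_, "| coefficients per shape:", b.shape[1])

    # New shapes follow  x = mean + sum_i b_i * P_i, with |b_i| typically <= 3*sqrt(eigenvalue_i).
    b1 = 2.0 * np.sqrt(pdm.explained_variance_[0])
    synthetic = (pdm.mean_ + b1 * pdm.components_[0]).reshape(n_landmarks, 3)
    print("synthetic shape landmarks:", synthetic.shape)
    ```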

  17. Are Model Transferability And Complexity Antithetical? Insights From Validation of a Variable-Complexity Empirical Snow Model in Space and Time

    NASA Astrophysics Data System (ADS)

    Lute, A. C.; Luce, Charles H.

    2017-11-01

    The related challenges of predictions in ungauged basins and predictions in ungauged climates point to the need to develop environmental models that are transferable across both space and time. Hydrologic modeling has historically focused on modeling one or only a few basins using highly parameterized conceptual or physically based models. However, model parameters and structures have been shown to change significantly when calibrated to new basins or time periods, suggesting that model complexity and model transferability may be antithetical. Empirical space-for-time models provide a framework within which to assess model transferability and any tradeoff with model complexity. Using 497 SNOTEL sites in the western U.S., we develop space-for-time models of April 1 SWE and Snow Residence Time based on mean winter temperature and cumulative winter precipitation. The transferability of the models to new conditions (in both space and time) is assessed using non-random cross-validation tests with consideration of the influence of model complexity on transferability. As others have noted, the algorithmic empirical models transfer best when minimal extrapolation in input variables is required. Temporal split-sample validations use pseudoreplicated samples, resulting in the selection of overly complex models, which has implications for the design of hydrologic model validation tests. Finally, we show that low- to moderate-complexity models transfer most successfully to new conditions in space and time, providing empirical confirmation of the parsimony principle.
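
    A minimal sketch of a space-for-time regression of April 1 SWE on mean winter temperature and cumulative winter precipitation at increasing polynomial complexity, with a held-out subset of sites standing in for transfer to new conditions. The data are synthetic, not SNOTEL records, and the degrees tested are arbitrary.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(5)

    # Synthetic station records: mean winter temperature (degC), winter precipitation (mm).
    n = 497
    temp = rng.uniform(-10, 5, n)
    precip = rng.uniform(200, 2000, n)
    swe = np.clip(0.7 * precip / (1 + np.exp(0.8 * temp)), 0, None) + rng.normal(0, 50, n)
    X = np.column_stack([temp, precip])

    # Hold out a random subset of stations (rows) to mimic transfer to new sites.
    X_tr, X_te, y_tr, y_te = train_test_split(X, swe, test_size=0.3, random_state=0)

    for degree in (1, 2, 3, 5):
        model = make_pipeline(PolynomialFeatures(degree), LinearRegression()).fit(X_tr, y_tr)
        print(f"degree {degree}: train R^2={model.score(X_tr, y_tr):.2f}  "
              f"holdout R^2={model.score(X_te, y_te):.2f}")
    ```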

  18. Specifying and Refining a Complex Measurement Model.

    ERIC Educational Resources Information Center

    Levy, Roy; Mislevy, Robert J.

    This paper aims to describe a Bayesian approach to modeling and estimating cognitive models both in terms of statistical machinery and actual instrument development. Such a method taps the knowledge of experts to provide initial estimates for the probabilistic relationships among the variables in a multivariate latent variable model and refines…

  19. A consensus for the development of a vector model to assess clinical complexity.

    PubMed

    Corazza, Gino Roberto; Klersy, Catherine; Formagnana, Pietro; Lenti, Marco Vincenzo; Padula, Donatella

    2017-12-01

    The progressive rise in multimorbidity has made management of complex patients one of the most topical and challenging issues in medicine, both in clinical practice and for healthcare organizations. To make this easier, a score of clinical complexity (CC) would be useful. A vector model to evaluate biological and extra-biological (socio-economic, cultural, behavioural, environmental) domains of CC was proposed a few years ago. However, given that the variables that grade each domain had never been defined, this model has never been used in clinical practice. To overcome these limits, a consensus meeting was organised to grade each domain of CC, and to establish the hierarchy of the domains. A one-day consensus meeting consisting of a multi-professional panel of 25 people was held at our Hospital. In a preliminary phase, the proponents selected seven variables as qualifiers for each of the five above-mentioned domains. In the course of the meeting, the panel voted for five variables considered to be the most representative for each domain. Consensus was established with 2/3 agreement, and all variables were dichotomised. Finally, the various domains were parametrized and ranked within a feasible vector model. A Clinical Complexity Index was set up using the chosen variables. All the domains were graphically represented through a vector model: the biological domain was chosen as the most significant (highest slope), followed by the behavioural and socio-economic domains (intermediate slope), and lastly by the cultural and environmental ones (lowest slope). A feasible and comprehensive tool to evaluate CC in clinical practice is proposed herein.

  20. Assessment of the Suitability of High Resolution Numerical Weather Model Outputs for Hydrological Modelling in Mountainous Cold Regions

    NASA Astrophysics Data System (ADS)

    Rasouli, K.; Pomeroy, J. W.; Hayashi, M.; Fang, X.; Gutmann, E. D.; Li, Y.

    2017-12-01

    The hydrology of mountainous cold regions has a large spatial variability that is driven both by climate variability and near-surface process variability associated with complex terrain and patterns of vegetation, soils, and hydrogeology. There is a need to downscale large-scale atmospheric circulations towards the fine scales that cold regions hydrological processes operate at to assess their spatial variability in complex terrain and quantify uncertainties by comparison to field observations. In this research, three high resolution numerical weather prediction models, namely, the Intermediate Complexity Atmosphere Research (ICAR), Weather Research and Forecasting (WRF), and Global Environmental Multiscale (GEM) models are used to represent spatial and temporal patterns of atmospheric conditions appropriate for hydrological modelling. An area covering high mountains and foothills of the Canadian Rockies was selected to assess and compare high resolution ICAR (1 km × 1 km), WRF (4 km × 4 km), and GEM (2.5 km × 2.5 km) model outputs with station-based meteorological measurements. ICAR with very low computational cost was run with different initial and boundary conditions and with finer spatial resolution, which allowed an assessment of modelling uncertainty and scaling that was difficult with WRF. Results show that ICAR, when compared with WRF and GEM, performs very well in precipitation and air temperature modelling in the Canadian Rockies, while all three models show a fair performance in simulating wind and humidity fields. Representation of local-scale atmospheric dynamics leading to realistic fields of temperature and precipitation by ICAR, WRF, and GEM makes these models suitable for high resolution cold regions hydrological predictions in complex terrain, which is a key factor in estimating water security in western Canada.

  1. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology

    EPA Science Inventory

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, e...
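
    One common variable-selection recipe for random forest models is to rank predictors by permutation importance on held-out data and refit on the retained set; a minimal sketch follows. The dataset, threshold, and hyperparameters are assumptions for illustration and are not the procedures evaluated in the EPA study.

    ```python
    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = make_regression(n_samples=300, n_features=40, n_informative=6,
                           noise=10.0, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
    imp = permutation_importance(rf, X_te, y_te, n_repeats=20, random_state=0)

    # Keep variables whose importance exceeds an (arbitrary) threshold and refit.
    selected = np.where(imp.importances_mean > 0.01)[0]
    rf_sel = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr[:, selected], y_tr)
    print("variables kept:", selected)
    print("full-model R^2:", round(rf.score(X_te, y_te), 3),
          "| reduced-model R^2:", round(rf_sel.score(X_te[:, selected], y_te), 3))
    ```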

  2. Healthcare tariffs for specialist inpatient neurorehabilitation services: rationale and development of a UK casemix and costing methodology.

    PubMed

    Turner-Stokes, Lynne; Sutch, Stephen; Dredge, Robert

    2012-03-01

    To describe the rationale and development of a casemix model and costing methodology for tariff development for specialist neurorehabilitation services in the UK. Patients with complex needs incur higher treatment costs. Fair payment should be weighted in proportion to costs of providing treatment, and should allow for variation over time. CASEMIX MODEL AND BAND-WEIGHTING: Case complexity is measured by the Rehabilitation Complexity Scale (RCS). Cases are divided into five bands of complexity, based on the total RCS score. The principal determinant of costs in rehabilitation is staff time. Total staff hours/week (estimated from the Northwick Park Nursing and Therapy Dependency Scales) are analysed within each complexity band, through cross-sectional analysis of parallel ratings. A 'band-weighting' factor is derived from the relative proportions of staff time within each of the five bands. Total unit treatment costs are obtained from retrospective analysis of provider hospitals' budget and accounting statements. Mean bed-day costs (total unit cost/occupied bed days) are divided broadly into 'variable' and 'non-variable' components. In the weighted costing model, the band-weighting factor is applied to the variable portion of the bed-day cost to derive a banded cost, and thence a set of cost-multipliers. Preliminary data from one unit are presented to illustrate how this weighted costing model will be applied to derive a multilevel banded payment model, based on serial complexity ratings, to allow for change over time.
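
    The band-weighting arithmetic described above can be sketched numerically: apply a band-weighting factor only to the variable share of the mean bed-day cost to obtain banded costs and cost multipliers. Every figure below is invented for illustration; none come from the article.

    ```python
    # Illustrative numbers only; none come from the article.
    bed_day_cost = 600.0          # mean unit cost per occupied bed day
    variable_share = 0.60         # fraction of the bed-day cost treated as "variable"
    fixed_part = bed_day_cost * (1 - variable_share)
    variable_part = bed_day_cost * variable_share

    # Band-weighting factors (in the article, derived from relative staff hours per band).
    band_weights = {1: 0.6, 2: 0.8, 3: 1.0, 4: 1.3, 5: 1.7}

    banded_costs = {band: fixed_part + w * variable_part for band, w in band_weights.items()}
    multipliers = {band: cost / bed_day_cost for band, cost in banded_costs.items()}

    for band in sorted(banded_costs):
        print(f"band {band}: cost per bed day = {banded_costs[band]:7.2f}  "
              f"multiplier = {multipliers[band]:.2f}")
    ```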

  3. Effects of additional data on Bayesian clustering.

    PubMed

    Yamazaki, Keisuke

    2017-10-01

    Hierarchical probabilistic models, such as mixture models, are used for cluster analysis. These models have two types of variables: observable and latent. In cluster analysis, the latent variable is estimated, and it is expected that additional information will improve the accuracy of the estimation of the latent variable. Many proposed learning methods are able to use additional data; these include semi-supervised learning and transfer learning. However, from a statistical point of view, a complex probabilistic model that encompasses both the initial and additional data might be less accurate due to having a higher-dimensional parameter. The present paper presents a theoretical analysis of the accuracy of such a model and clarifies which factor has the greatest effect on its accuracy, the advantages of obtaining additional data, and the disadvantages of increasing the complexity. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Diversified models for portfolio selection based on uncertain semivariance

    NASA Astrophysics Data System (ADS)

    Chen, Lin; Peng, Jin; Zhang, Bo; Rosyida, Isnaini

    2017-02-01

    Since the financial markets are complex, sometimes the future security returns are represented mainly based on experts' estimations due to lack of historical data. This paper proposes a semivariance method for diversified portfolio selection, in which the security returns are given subject to experts' estimations and depicted as uncertain variables. In the paper, three properties of the semivariance of uncertain variables are verified. Based on the concept of semivariance of uncertain variables, two types of mean-semivariance diversified models for uncertain portfolio selection are proposed. Since the models are complex, a hybrid intelligent algorithm based on the 99-method and a genetic algorithm is designed to solve the models. In this hybrid intelligent algorithm, the 99-method is applied to compute the expected value and semivariance of uncertain variables, and the genetic algorithm is employed to seek the best allocation plan for portfolio selection. Finally, several numerical examples are presented to illustrate the modelling idea and the effectiveness of the algorithm.

  5. An example of complex modelling in dentistry using Markov chain Monte Carlo (MCMC) simulation.

    PubMed

    Helfenstein, Ulrich; Menghini, Giorgio; Steiner, Marcel; Murati, Francesca

    2002-09-01

    In the usual regression setting one regression line is computed for a whole data set. In a more complex situation, each person may be observed for example at several points in time and thus a regression line might be calculated for each person. Additional complexities, such as various forms of errors in covariables may make a straightforward statistical evaluation difficult or even impossible. During recent years methods have been developed allowing convenient analysis of problems where the data and the corresponding models show these and many other forms of complexity. The methodology makes use of a Bayesian approach and Markov chain Monte Carlo (MCMC) simulations. The methods allow the construction of increasingly elaborate models by building them up from local sub-models. The essential structure of the models can be represented visually by directed acyclic graphs (DAG). This attractive property allows communication and discussion of the essential structure and the substantial meaning of a complex model without needing algebra. After presentation of the statistical methods an example from dentistry is presented in order to demonstrate their application and use. The dataset of the example had a complex structure; each of a set of children was followed up over several years. The number of new fillings in permanent teeth had been recorded at several ages. The dependent variables were markedly different from the normal distribution and could not be transformed to normality. In addition, explanatory variables were assumed to be measured with different forms of error. Illustration of how the corresponding models can be estimated conveniently via MCMC simulation, in particular, 'Gibbs sampling', using the freely available software BUGS is presented. In addition, how the measurement error may influence the estimates of the corresponding coefficients is explored. It is demonstrated that the effect of the independent variable on the dependent variable may be markedly underestimated if the measurement error is not taken into account ('regression dilution bias'). Markov chain Monte Carlo methods may be of great value to dentists in allowing analysis of data sets which exhibit a wide range of different forms of complexity.
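
    The "regression dilution bias" mentioned at the end of the abstract is easy to reproduce with a few lines of simulation: adding measurement error of variance sigma_e^2 to a covariate of variance sigma_x^2 attenuates the fitted slope by roughly sigma_x^2 / (sigma_x^2 + sigma_e^2). The sketch below uses ordinary least squares on synthetic data rather than the Bayesian MCMC treatment of the article.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    n, true_slope = 10_000, 2.0
    x = rng.normal(0.0, 1.0, n)                   # true covariate, variance 1
    y = true_slope * x + rng.normal(0.0, 1.0, n)  # outcome

    for err_sd in (0.0, 0.5, 1.0):
        x_obs = x + rng.normal(0.0, err_sd, n)    # covariate measured with error
        slope = np.polyfit(x_obs, y, 1)[0]
        attenuation = 1.0 / (1.0 + err_sd**2)     # expected factor var(x)/(var(x)+var(e))
        print(f"error sd={err_sd:.1f}: fitted slope={slope:.2f} "
              f"(expected about {true_slope * attenuation:.2f})")
    ```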

  6. The Aristotle Comprehensive Complexity score predicts mortality and morbidity after congenital heart surgery.

    PubMed

    Bojan, Mirela; Gerelli, Sébastien; Gioanni, Simone; Pouard, Philippe; Vouhé, Pascal

    2011-04-01

    The Aristotle Comprehensive Complexity (ACC) score has been proposed for complexity adjustment in the analysis of outcome after congenital heart surgery. The score is the sum of the Aristotle Basic Complexity score, largely used but poorly related to mortality and morbidity, and of the Comprehensive Complexity items accounting for comorbidities and procedure-specific and anatomic variability. This study aims to demonstrate the ability of the ACC score to predict 30-day mortality and morbidity assessed by the length of the intensive care unit (ICU) stay. We retrospectively enrolled patients undergoing congenital heart surgery in our institution. We modeled the ACC score as a continuous variable, mortality as a binary variable, and length of ICU stay as a censored variable. For each mortality and morbidity model we performed internal validation by bootstrapping and assessed overall performance by R(2), calibration by the calibration slope, and discrimination by the c index. Among all 1,454 patients enrolled, 30-day mortality rate was 3.4% and median length of ICU stay was 3 days. The ACC score strongly related to mortality, but related to length of ICU stay only during the first postoperative week. For the mortality model, R(2) = 0.24, calibration slope = 0.98, c index = 0.86, and 95% confidence interval was 0.82 to 0.91. For the morbidity model, R(2) = 0.094, calibration slope = 0.94, c index = 0.64, and 95% confidence interval was 0.62 to 0.66. The ACC score predicts 30-day mortality and length of ICU stay during the first postoperative week. The score is an adequate tool for complexity adjustment in the analysis of outcome after congenital heart surgery. Copyright © 2011 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.

  7. Evaluation of alternative model selection criteria in the analysis of unimodal response curves using CART

    USGS Publications Warehouse

    Ribic, C.A.; Miller, T.W.

    1998-01-01

    We investigated CART performance with a unimodal response curve for one continuous response and four continuous explanatory variables, where two variables were important (i.e., directly related to the response) and the other two were not. We explored performance under three relationship strengths and two explanatory variable conditions: equal importance and one variable four times as important as the other. We compared CART variable selection performance using three tree-selection rules ('minimum risk', 'minimum risk complexity', 'one standard error') to stepwise polynomial ordinary least squares (OLS) under four sample size conditions. The one-standard-error and minimum-risk-complexity methods performed about as well as stepwise OLS with large sample sizes when the relationship was strong. With weaker relationships, equally important explanatory variables and larger sample sizes, the one-standard-error and minimum-risk-complexity rules performed better than stepwise OLS. With weaker relationships and explanatory variables of unequal importance, tree-structured methods did not perform as well as stepwise OLS. Comparing performance within tree-structured methods, the one-standard-error rule was more likely to choose the correct model than were the other tree-selection rules with a strong relationship and equally important explanatory variables, as well as 1) with weaker relationships and equally important explanatory variables, and 2) under all relationship strengths when explanatory variables were of unequal importance and sample sizes were lower.
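
    A sketch of one way to implement "minimum risk" and "one standard error" tree selection using cost-complexity pruning and cross-validation, on a synthetic unimodal response with two informative and two noise variables. The fold count, sample size, and response surface are illustrative choices, not those of the study.

    ```python
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(7)

    # Unimodal response of two important variables (x1, x2); x3 and x4 are noise.
    n = 400
    X = rng.uniform(size=(n, 4))
    y = np.exp(-((X[:, 0] - 0.5) ** 2 + (X[:, 1] - 0.5) ** 2) / 0.05) + rng.normal(0, 0.1, n)

    # Candidate subtrees along the cost-complexity pruning path.
    path = DecisionTreeRegressor(random_state=0).cost_complexity_pruning_path(X, y)
    alphas = path.ccp_alphas[:-1]      # drop the trivial root-only tree

    cv_means, cv_ses = [], []
    for a in alphas:
        scores = -cross_val_score(DecisionTreeRegressor(ccp_alpha=a, random_state=0),
                                  X, y, cv=10, scoring="neg_mean_squared_error")
        cv_means.append(scores.mean())
        cv_ses.append(scores.std(ddof=1) / np.sqrt(len(scores)))
    cv_means, cv_ses = np.array(cv_means), np.array(cv_ses)

    best = cv_means.argmin()                                               # "minimum risk" choice
    one_se = np.where(cv_means <= cv_means[best] + cv_ses[best])[0].max()  # simplest tree within 1 SE
    print(f"minimum-risk alpha={alphas[best]:.4f}, one-SE alpha={alphas[one_se]:.4f}")
    ```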

  8. Modeling the development of written language

    PubMed Central

    Puranik, Cynthia S.; Foorman, Barbara; Foster, Elizabeth; Wilson, Laura Gehron; Tschinkel, Erika; Kantor, Patricia Thatcher

    2011-01-01

    Alternative models of the structure of individual and developmental differences of written composition and handwriting fluency were tested using confirmatory factor analysis of writing samples provided by first- and fourth-grade students. For both groups, a five-factor model provided the best fit to the data. Four of the factors represented aspects of written composition: macro-organization (use of top sentence and number and ordering of ideas), productivity (number and diversity of words used), complexity (mean length of T-unit and syntactic density), and spelling and punctuation. The fifth factor represented handwriting fluency. Handwriting fluency was correlated with written composition factors at both grades. The magnitude of developmental differences between first grade and fourth grade expressed as effect sizes varied for variables representing the five constructs: large effect sizes were found for productivity and handwriting fluency variables; moderate effect sizes were found for complexity and macro-organization variables; and minimal effect sizes were found for spelling and punctuation variables. PMID:22228924

  9. Independent variable complexity for regional regression of the flow duration curve in ungauged basins

    NASA Astrophysics Data System (ADS)

    Fouad, Geoffrey; Skupin, André; Hope, Allen

    2016-04-01

    The flow duration curve (FDC) is one of the most widely used tools to quantify streamflow. Its percentile flows are often required for water resource applications, but these values must be predicted for ungauged basins with insufficient or no streamflow data. Regional regression is a commonly used approach for predicting percentile flows that involves identifying hydrologic regions and calibrating regression models to each region. The independent variables used to describe the physiographic and climatic setting of the basins are a critical component of regional regression, yet few studies have investigated their effect on resulting predictions. In this study, the complexity of the independent variables needed for regional regression is investigated. Different levels of variable complexity are applied for a regional regression consisting of 918 basins in the US. Both the hydrologic regions and regression models are determined according to the different sets of variables, and the accuracy of resulting predictions is assessed. The different sets of variables include (1) a simple set of three variables strongly tied to the FDC (mean annual precipitation, potential evapotranspiration, and baseflow index), (2) a traditional set of variables describing the average physiographic and climatic conditions of the basins, and (3) a more complex set of variables extending the traditional variables to include statistics describing the distribution of physiographic data and temporal components of climatic data. The latter set of variables is not typically used in regional regression, and is evaluated for its potential to predict percentile flows. The simplest set of only three variables performed similarly to the other more complex sets of variables. Traditional variables used to describe climate, topography, and soil offered little more to the predictions, and the experimental set of variables describing the distribution of basin data in more detail did not improve predictions. These results are largely reflective of cross-correlation existing in hydrologic datasets, and highlight the limited predictive power of many traditionally used variables for regional regression. A parsimonious approach including fewer variables chosen based on their connection to streamflow may be more efficient than a data mining approach including many different variables. Future regional regression studies may benefit from having a hydrologic rationale for including different variables and attempting to create new variables related to streamflow.
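
    A minimal sketch of the comparison made above: regional regression of a percentile flow using a small, physically motivated variable set versus a larger, cross-correlated one, scored on held-out basins. The synthetic data are constructed so that the three simple variables carry the signal, which is an assumption for illustration rather than a result.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(8)

    # Synthetic basins: three "simple" variables tied to the FDC plus correlated extras.
    n = 918
    precip = rng.gamma(5, 200, n)                 # mean annual precipitation
    pet = rng.gamma(5, 150, n)                    # potential evapotranspiration
    bfi = rng.beta(2, 2, n)                       # baseflow index
    extras = np.column_stack([precip * rng.normal(1, 0.1, n) for _ in range(10)])

    q50 = 0.4 * precip - 0.2 * pet + 300 * bfi + rng.normal(0, 50, n)   # a percentile flow

    simple = np.column_stack([precip, pet, bfi])
    larger = np.hstack([simple, extras])

    for name, X in [("3 simple variables", simple), ("13 variables", larger)]:
        r2 = cross_val_score(LinearRegression(), X, q50, cv=10, scoring="r2")
        print(f"{name}: mean holdout R^2 = {r2.mean():.3f}")
    ```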

  10. The Effect of Visual Information on the Manual Approach and Landing

    NASA Technical Reports Server (NTRS)

    Wewerinke, P. H.

    1982-01-01

    The effect of visual information in combination with basic display information on the approach performance. A pre-experimental model analysis was performed in terms of the optimal control model. The resulting aircraft approach performance predictions were compared with the results of a moving base simulator program. The results illustrate that the model provides a meaningful description of the visual (scene) perception process involved in the complex (multi-variable, time varying) manual approach task with a useful predictive capability. The theoretical framework was shown to allow a straight-forward investigation of the complex interaction of a variety of task variables.

  11. Impact of gastrectomy procedural complexity on surgical outcomes and hospital comparisons.

    PubMed

    Mohanty, Sanjay; Paruch, Jennifer; Bilimoria, Karl Y; Cohen, Mark; Strong, Vivian E; Weber, Sharon M

    2015-08-01

    Most risk adjustment approaches adjust for patient comorbidities and the primary procedure. However, procedures done at the same time as the index case may increase operative risk and merit inclusion in adjustment models for fair hospital comparisons. Our objectives were to evaluate the impact of surgical complexity on postoperative outcomes and hospital comparisons in gastric cancer surgery. Patients who underwent gastric resection for cancer were identified from a large clinical dataset. Procedure complexity was characterized using secondary procedure CPT codes and work relative value units (RVUs). Regression models were developed to evaluate the association between complexity variables and outcomes. The impact of complexity adjustment on model performance and hospital comparisons was examined. Among 3,467 patients who underwent gastrectomy for adenocarcinoma, 2,171 operations were distal and 1,296 total. A secondary procedure was reported for 33% of distal gastrectomies and 59% of total gastrectomies. Six of 10 secondary procedures were associated with adverse outcomes. For example, patients who underwent a synchronous bowel resection had a higher risk of mortality (odds ratio [OR], 2.14; 95% CI, 1.07-4.29) and reoperation (OR, 2.09; 95% CI, 1.26-3.47). Model performance was slightly better for nearly all outcomes with complexity adjustment (mortality c-statistics: standard model, 0.853; secondary procedure model, 0.858; RVU model, 0.855). Hospital ranking did not change substantially after complexity adjustment. Surgical complexity variables are associated with adverse outcomes in gastrectomy, but complexity adjustment does not affect hospital rankings appreciably. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. An Improved Estimation Using Polya-Gamma Augmentation for Bayesian Structural Equation Models with Dichotomous Variables

    ERIC Educational Resources Information Center

    Kim, Seohyun; Lu, Zhenqiu; Cohen, Allan S.

    2018-01-01

    Bayesian algorithms have been used successfully in the social and behavioral sciences to analyze dichotomous data particularly with complex structural equation models. In this study, we investigate the use of the Polya-Gamma data augmentation method with Gibbs sampling to improve estimation of structural equation models with dichotomous variables.…

  13. Use of complex hydraulic variables to predict the distribution and density of unionids in a side channel of the Upper Mississippi River

    USGS Publications Warehouse

    Steuer, J.J.; Newton, T.J.; Zigler, S.J.

    2008-01-01

    Previous attempts to predict the importance of abiotic and biotic factors to unionids in large rivers have been largely unsuccessful. Many simple physical habitat descriptors (e.g., current velocity, substrate particle size, and water depth) have limited ability to predict unionid density. However, more recent studies have found that complex hydraulic variables (e.g., shear velocity, boundary shear stress, and Reynolds number) may be more useful predictors of unionid density. We performed a retrospective analysis with unionid density, current velocity, and substrate particle size data from 1987 to 1988 in a 6-km reach of the Upper Mississippi River near Prairie du Chien, Wisconsin. We used these data to model simple and complex hydraulic variables under low and high flow conditions. We then used classification and regression tree analysis to examine the relationships between hydraulic variables and unionid density. We found that boundary Reynolds number, Froude number, boundary shear stress, and grain size were the best predictors of density. Models with complex hydraulic variables were a substantial improvement over previously published discriminant models and correctly classified 65-88% of the observations for the total mussel fauna and six species. These data suggest that unionid beds may be constrained by threshold limits at both ends of the flow regime. Under low flow, mussels may require a minimum hydraulic variable (Re*, Fr) to transport nutrients, oxygen, and waste products. Under high flow, areas with relatively low boundary shear stress may provide a hydraulic refuge for mussels. Data on hydraulic preferences and identification of other conditions that constitute unionid habitat are needed to help restore and enhance habitats for unionids in rivers. © 2008 Springer Science+Business Media B.V.
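
    One common formulation of the complex hydraulic variables named above, for uniform open-channel flow, is sketched below: boundary shear stress tau = rho*g*h*S, shear velocity u* = sqrt(tau/rho), boundary (grain) Reynolds number Re* = u*d/nu, and Froude number Fr = U/sqrt(g*h). Other definitions exist (e.g. log-law shear velocity), and the input values are purely illustrative.

    ```python
    import math

    RHO = 1000.0      # water density, kg/m^3
    G = 9.81          # gravitational acceleration, m/s^2
    NU = 1.0e-6       # kinematic viscosity of water, m^2/s

    def hydraulic_variables(velocity, depth, slope, grain_size):
        """Complex hydraulic variables for uniform open-channel flow (one common formulation)."""
        tau = RHO * G * depth * slope              # boundary shear stress, Pa
        u_star = math.sqrt(tau / RHO)              # shear velocity, m/s
        re_star = u_star * grain_size / NU         # boundary (grain) Reynolds number
        froude = velocity / math.sqrt(G * depth)   # Froude number
        return {"tau": tau, "u_star": u_star, "Re*": re_star, "Fr": froude}

    # Illustrative low-flow and high-flow conditions at a hypothetical mussel bed
    print(hydraulic_variables(velocity=0.2, depth=1.5, slope=1e-4, grain_size=0.002))
    print(hydraulic_variables(velocity=0.8, depth=3.0, slope=1e-4, grain_size=0.002))
    ```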

  14. Quantifying networks complexity from information geometry viewpoint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Felice, Domenico, E-mail: domenico.felice@unicam.it; Mancini, Stefano; INFN-Sezione di Perugia, Via A. Pascoli, I-06123 Perugia

    We consider a Gaussian statistical model whose parameter space is given by the variances of random variables. Underlying this model we identify networks by interpreting random variables as sitting on vertices and their correlations as weighted edges among vertices. We then associate to the parameter space a statistical manifold endowed with a Riemannian metric structure (that of Fisher-Rao). Going on, in analogy with the microcanonical definition of entropy in Statistical Mechanics, we introduce an entropic measure of networks complexity. We prove that it is invariant under networks isomorphism. Above all, considering networks as simplicial complexes, we evaluate this entropy on simplexes and find that it monotonically increases with their dimension.

  15. Teacher Stress: Complex Model Building with LISREL. Pedagogical Reports, No. 16.

    ERIC Educational Resources Information Center

    Tellenback, Sten

    This paper presents a complex causal model of teacher stress based on data received from the responses of 1,466 teachers from Malmo, Sweden to a questionnaire. Also presented is a method for treating the model variables as higher-order factors or higher-order theoretical constructs. The paper's introduction presents a brief review of teacher…

  16. Inversion of the anomalous diffraction approximation for variable complex index of refraction near unity. [numerical tests for water-haze aerosol model

    NASA Technical Reports Server (NTRS)

    Smith, C. B.

    1982-01-01

    The Fymat analytic inversion method for retrieving a particle-area distribution function from anomalous diffraction multispectral extinction data and total area is generalized to the case of a variable complex refractive index m(λ) near unity depending on spectral wavelength λ. Inversion tests are presented for a water-haze aerosol model. An upper phase-shift limit of 5π/2 retrieved an accurate peak area distribution profile. Analytical corrections using both the total number and area improved the inversion.
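
    The forward model underlying such an inversion is the van de Hulst anomalous diffraction extinction efficiency for a non-absorbing sphere with index near unity, Q(rho) = 2 - (4/rho)·sin(rho) + (4/rho^2)·(1 - cos(rho)), with phase-shift parameter rho = 2x(m - 1). The sketch below only evaluates this forward kernel over an assumed wavelength grid, size distribution, and refractive index; the Fymat inversion itself is not reproduced.

    ```python
    import numpy as np

    def adt_extinction_efficiency(radius, wavelength, m_real):
        """van de Hulst anomalous diffraction Q_ext for a non-absorbing sphere, m close to 1."""
        x = 2 * np.pi * radius / wavelength           # size parameter
        rho = 2 * x * (m_real - 1.0)                  # phase-shift parameter
        rho = np.where(rho == 0, 1e-12, rho)          # guard against division by zero
        return 2 - (4 / rho) * np.sin(rho) + (4 / rho**2) * (1 - np.cos(rho))

    # Multispectral extinction of a toy haze: assumed area distribution of droplet radii.
    wavelengths = np.linspace(0.4e-6, 2.0e-6, 20)     # m
    radii = np.linspace(0.05e-6, 2.0e-6, 200)         # m
    area_dist = np.exp(-0.5 * (np.log(radii / 0.3e-6) / 0.5) ** 2)   # relative area distribution
    m_of_lambda = np.full_like(wavelengths, 1.33)     # wavelength-dependent index (constant here)

    dr = radii[1] - radii[0]
    extinction = np.array([
        np.sum(adt_extinction_efficiency(radii, lam, m) * area_dist) * dr
        for lam, m in zip(wavelengths, m_of_lambda)
    ])
    print(extinction.round(3))
    ```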

  17. Variable Complexity Structural Optimization of Shells

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.; Venkataraman, Satchi

    1999-01-01

    Structural designers today face both opportunities and challenges in a vast array of available analysis and optimization programs. Some programs, such as NASTRAN, are very general, permitting the designer to model any structure, to any degree of accuracy, but often at a higher computational cost. Additionally, such general procedures often do not allow easy implementation of all constraints of interest to the designer. Other programs, based on algebraic expressions used by designers one generation ago, have limited applicability for general structures with modern materials. However, when applicable, they provide easy understanding of design decision trade-offs. Finally, designers can also use specialized programs suitable for designing efficiently a subset of structural problems. For example, PASCO and PANDA2 are panel design codes, which calculate response and estimate failure much more efficiently than general-purpose codes, but are narrowly applicable in terms of geometry and loading. Therefore, the problem of optimizing structures based on simultaneous use of several models and computer programs is a subject of considerable interest. The problem of using several levels of models in optimization has been dubbed variable complexity modeling. Work under NASA grant NAG1-2110 has been concerned with the development of variable complexity modeling strategies with special emphasis on response surface techniques. In addition, several modeling issues for the design of shells of revolution were studied.

  18. Variable Complexity Structural Optimization of Shells

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.; Venkataraman, Satchi

    1998-01-01

    Structural designers today face both opportunities and challenges in a vast array of available analysis and optimization programs. Some programs, such as NASTRAN, are very general, permitting the designer to model any structure, to any degree of accuracy, but often at a higher computational cost. Additionally, such general procedures often do not allow easy implementation of all constraints of interest to the designer. Other programs, based on algebraic expressions used by designers one generation ago, have limited applicability for general structures with modern materials. However, when applicable, they provide easy understanding of design decision trade-offs. Finally, designers can also use specialized programs suitable for designing efficiently a subset of structural problems. For example, PASCO and PANDA2 are panel design codes, which calculate response and estimate failure much more efficiently than general-purpose codes, but are narrowly applicable in terms of geometry and loading. Therefore, the problem of optimizing structures based on simultaneous use of several models and computer programs is a subject of considerable interest. The problem of using several levels of models in optimization has been dubbed variable complexity modeling. Work under NASA grant NAG1-1808 has been concerned with the development of variable complexity modeling strategies with special emphasis on response surface techniques. In addition, several modeling issues for the design of shells of revolution were studied.

  19. Large scale landslide susceptibility assessment using the statistical methods of logistic regression and BSA - study case: the sub-basin of the small Niraj (Transylvania Depression, Romania)

    NASA Astrophysics Data System (ADS)

    Roşca, S.; Bilaşco, Ş.; Petrea, D.; Fodorean, I.; Vescan, I.; Filip, S.; Măguţ, F.-L.

    2015-11-01

    The existence of a large number of GIS models for the identification of landslide occurrence probability makes the selection of a specific one difficult. The present study focuses on the application of two quantitative models: the logistic and the BSA models. The comparative analysis of the results aims at identifying the most suitable model. The territory corresponding to the Niraj Mic Basin (87 km²) is an area characterised by a wide variety of landforms with diverse morphometric, morphographical and geological characteristics, as well as by a high complexity of land use types where active landslides exist. For this reason it serves as the test area for applying the two models and for the comparison of the results. The large complexity of input variables is illustrated by 16 factors which were represented as 72 dummy variables, analysed on the basis of their importance within the model structures. The testing of the statistical significance corresponding to each variable reduced the number of dummy variables to 12, which were considered significant for the test area within the logistic model, whereas for the BSA model all the variables were employed. The predictability of the models was tested through the area under the ROC curve, which indicated good accuracy (AUROC = 0.86 for the testing area) and predictability of the logistic model (AUROC = 0.63 for the validation area).

  20. Sampling and modeling riparian forest structure and riparian microclimate

    Treesearch

    Bianca N.I. Eskelson; Paul D. Anderson; Hailemariam Temesgen

    2013-01-01

    Riparian areas are extremely variable and dynamic, and represent some of the most complex terrestrial ecosystems in the world. The high variability within and among riparian areas poses challenges in developing efficient sampling and modeling approaches that accurately quantify riparian forest structure and riparian microclimate. Data from eight stream reaches that are...

  1. The GISS global climate-middle atmosphere model. II - Model variability due to interactions between planetary waves, the mean circulation and gravity wave drag

    NASA Technical Reports Server (NTRS)

    Rind, D.; Suozzo, R.; Balachandran, N. K.

    1988-01-01

    The variability which arises in the GISS Global Climate-Middle Atmosphere Model on two time scales is reviewed: interannual standard deviations, derived from the five-year control run, and intraseasonal variability as exemplified by stratospheric warmings. The model's extratropical variability for both mean fields and eddy statistics appears reasonable when compared with observations, while the tropical wind variability near the stratopause may be excessive, possibly due to inertial oscillations. Both wave 1 and wave 2 warmings develop, with connections to tropospheric forcing. Variability on both time scales results from a complex set of interactions among planetary waves, the mean circulation, and gravity wave drag. Specific examples of these interactions are presented, which imply that variability in gravity wave forcing and drag may be an important component of the variability of the middle atmosphere.

  2. A stochastic-geometric model of soil variation in Pleistocene patterned ground

    NASA Astrophysics Data System (ADS)

    Lark, Murray; Meerschman, Eef; Van Meirvenne, Marc

    2013-04-01

    In this paper we examine the spatial variability of soil in parent material with complex spatial structure which arises from complex non-linear geomorphic processes. We show that this variability can be better-modelled by a stochastic-geometric model than by a standard Gaussian random field. The benefits of the new model are seen in the reproduction of features of the target variable which influence processes like water movement and pollutant dispersal. Complex non-linear processes in the soil give rise to properties with non-Gaussian distributions. Even under a transformation to approximate marginal normality, such variables may have a more complex spatial structure than the Gaussian random field model of geostatistics can accommodate. In particular the extent to which extreme values of the variable are connected in spatially coherent regions may be misrepresented. As a result, for example, geostatistical simulation generally fails to reproduce the pathways for preferential flow in an environment where coarse infill of former fluvial channels or coarse alluvium of braided streams creates pathways for rapid movement of water. Multiple point geostatistics has been developed to deal with this problem. Multiple point methods proceed by sampling from a set of training images which can be assumed to reproduce the non-Gaussian behaviour of the target variable. The challenge is to identify appropriate sources of such images. In this paper we consider a mode of soil variation in which the soil varies continuously, exhibiting short-range lateral trends induced by local effects of the factors of soil formation which vary across the region of interest in an unpredictable way. The trends in soil variation are therefore only apparent locally, and the soil variation at regional scale appears random. We propose a stochastic-geometric model for this mode of soil variation called the Continuous Local Trend (CLT) model. We consider a case study of soil formed in relict patterned ground with pronounced lateral textural variations arising from the presence of infilled ice-wedges of Pleistocene origin. We show how knowledge of the pedogenetic processes in this environment, along with some simple descriptive statistics, can be used to select and fit a CLT model for the apparent electrical conductivity (ECa) of the soil. We use the model to simulate realizations of the CLT process, and compare these with realizations of a fitted Gaussian random field. We show how statistics that summarize the spatial coherence of regions with small values of ECa, which are expected to have coarse texture and so larger saturated hydraulic conductivity, are better reproduced by the CLT model than by the Gaussian random field. This suggests that the CLT model could be used to generate an unlimited supply of training images to allow multiple point geostatistical simulation or prediction of this or similar variables.

  3. Bayesian dynamical systems modelling in the social sciences.

    PubMed

    Ranganathan, Shyam; Spaiser, Viktoria; Mann, Richard P; Sumpter, David J T

    2014-01-01

    Data arising from social systems are often highly complex, involving non-linear relationships between the macro-level variables that characterize these systems. We present a method for analyzing this type of longitudinal or panel data using differential equations. We identify the best non-linear functions that capture interactions between variables, employing Bayes factors to decide how many interaction terms should be included in the model. This method punishes overly complicated models and identifies models with the most explanatory power. We illustrate our approach on the classic example of relating democracy and economic growth, identifying non-linear relationships between these two variables. We show how multiple variables and variable lags can be accounted for and provide a toolbox in R to implement our approach.
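
    As a minimal illustration of the idea (not the authors' R toolbox), the sketch below takes finite-difference estimates of dx/dt from panel data, regresses them on candidate polynomial interaction terms, and scores each candidate model with BIC, a large-sample proxy for the Bayes factor that likewise penalizes overly complicated models; all function and variable names are illustrative.

      import numpy as np
      from itertools import combinations_with_replacement

      def candidate_terms(X, degree=2):
          """Build polynomial terms (1, x_i, x_i*x_j, ...) from the state variables in X (n_obs x n_vars)."""
          n, d = X.shape
          cols, names = [np.ones(n)], ["1"]
          for deg in range(1, degree + 1):
              for combo in combinations_with_replacement(range(d), deg):
                  cols.append(np.prod(X[:, combo], axis=1))
                  names.append("*".join(f"x{i}" for i in combo))
          return np.column_stack(cols), names

      def bic_score(Phi, dxdt):
          """Least-squares fit of dx/dt on the columns of Phi, scored by BIC (lower is better)."""
          coef, *_ = np.linalg.lstsq(Phi, dxdt, rcond=None)
          resid = dxdt - Phi @ coef
          n, k = Phi.shape
          sigma2 = np.mean(resid ** 2)
          loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
          return k * np.log(n) - 2 * loglik, coef

    Candidate models built from different subsets of the columns of Phi can then be compared by their BIC values, mimicking the Bayes-factor comparison described in the abstract.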

  4. A Marked Poisson Process Driven Latent Shape Model for 3D Segmentation of Reflectance Confocal Microscopy Image Stacks of Human Skin.

    PubMed

    Ghanta, Sindhu; Jordan, Michael I; Kose, Kivanc; Brooks, Dana H; Rajadhyaksha, Milind; Dy, Jennifer G

    2017-01-01

    Segmenting objects of interest from 3D data sets is a common problem encountered in biological data. Small field of view and intrinsic biological variability combined with optically subtle changes of intensity, resolution, and low contrast in images make the task of segmentation difficult, especially for microscopy of unstained living or freshly excised thick tissues. Incorporating shape information in addition to the appearance of the object of interest can often help improve segmentation performance. However, the shapes of objects in tissue can be highly variable and the design of a flexible shape model that encompasses these variations is challenging. To address such complex segmentation problems, we propose a unified probabilistic framework that can incorporate the uncertainty associated with complex shapes, variable appearance, and unknown locations. The driving application that inspired the development of this framework is a biologically important segmentation problem: the task of automatically detecting and segmenting the dermal-epidermal junction (DEJ) in 3D reflectance confocal microscopy (RCM) images of human skin. RCM imaging allows noninvasive observation of cellular, nuclear, and morphological detail. The DEJ is an important morphological feature as it is where disorder, disease, and cancer usually start. Detecting the DEJ is challenging, because it is a 2D surface in a 3D volume which has a strong but highly variable number of irregularly spaced and variably shaped "peaks and valleys." In addition, RCM imaging resolution, contrast, and intensity vary with depth. Thus, a prior model needs to incorporate the intrinsic structure while allowing variability in essentially all its parameters. We propose a model which can incorporate objects of interest with complex shapes and variable appearance in an unsupervised setting by utilizing domain knowledge to build appropriate priors of the model. Our novel strategy to model this structure combines a spatial Poisson process with shape priors and performs inference using Gibbs sampling. Experimental results show that the proposed unsupervised model is able to automatically detect the DEJ with physiologically relevant accuracy in the range 10-20 μm.

  5. A Marked Poisson Process Driven Latent Shape Model for 3D Segmentation of Reflectance Confocal Microscopy Image Stacks of Human Skin

    PubMed Central

    Ghanta, Sindhu; Jordan, Michael I.; Kose, Kivanc; Brooks, Dana H.; Rajadhyaksha, Milind; Dy, Jennifer G.

    2016-01-01

    Segmenting objects of interest from 3D datasets is a common problem encountered in biological data. Small field of view and intrinsic biological variability combined with optically subtle changes of intensity, resolution and low contrast in images make the task of segmentation difficult, especially for microscopy of unstained living or freshly excised thick tissues. Incorporating shape information in addition to the appearance of the object of interest can often help improve segmentation performance. However, the shapes of objects in tissue can be highly variable and the design of a flexible shape model that encompasses these variations is challenging. To address such complex segmentation problems, we propose a unified probabilistic framework that can incorporate the uncertainty associated with complex shapes, variable appearance and unknown locations. The driving application which inspired the development of this framework is a biologically important segmentation problem: the task of automatically detecting and segmenting the dermal-epidermal junction (DEJ) in 3D reflectance confocal microscopy (RCM) images of human skin. RCM imaging allows noninvasive observation of cellular, nuclear and morphological detail. The DEJ is an important morphological feature as it is where disorder, disease and cancer usually start. Detecting the DEJ is challenging because it is a 2D surface in a 3D volume which has a strong but highly variable number of irregularly spaced and variably shaped “peaks and valleys”. In addition, RCM imaging resolution, contrast and intensity vary with depth. Thus, a prior model needs to incorporate the intrinsic structure while allowing variability in essentially all its parameters. We propose a model which can incorporate objects of interest with complex shapes and variable appearance in an unsupervised setting by utilizing domain knowledge to build appropriate priors of the model. Our novel strategy to model this structure combines a spatial Poisson process with shape priors and performs inference using Gibbs sampling. Experimental results show that the proposed unsupervised model is able to automatically detect the DEJ with physiologically relevant accuracy in the range 10-20 µm. PMID:27723590

  6. Effects of organizational complexity and resources on construction site risk.

    PubMed

    Forteza, Francisco J; Carretero-Gómez, Jose M; Sesé, Albert

    2017-09-01

    Our research is aimed at studying the relationship between risk level and organizational complexity and resources on construction sites. Our general hypothesis is that site complexity increases risk, whereas more resources in the organizational structure decrease risk. A Structural Equation Model (SEM) approach was adopted to validate our theoretical model. To develop our study, 957 building sites in Spain were visited and assessed in 2003-2009. All needed data were obtained using a specific tool developed by the authors to assess site risk, structure and resources (Construction Sites Risk Assessment Tool, or CONSRAT). This tool operationalizes the variables to fit our model, specifically via a site risk index (SRI) and 10 organizational variables. Our random sample is composed largely of small building sites with generally high levels of risk, moderate complexity, and low resources on site. The model obtained adequate fit, and results showed empirical evidence that the factors of complexity and resources can be considered predictors of site risk level. Consequently, these results can help companies, construction managers and regulators to identify which organizational aspects should be improved to prevent risks on sites and, consequently, accidents. Copyright © 2017 National Safety Council and Elsevier Ltd. All rights reserved.

  7. Circularly-symmetric complex normal ratio distribution for scalar transmissibility functions. Part I: Fundamentals

    NASA Astrophysics Data System (ADS)

    Yan, Wang-Ji; Ren, Wei-Xin

    2016-12-01

    Recent advances in signal processing and structural dynamics have spurred the adoption of transmissibility functions in academia and industry alike. Due to the inherent randomness of measurement and variability of environmental conditions, uncertainty impacts its applications. This study is focused on statistical inference for raw scalar transmissibility functions modeled as complex ratio random variables. The goal is achieved through companion papers. This paper (Part I) is dedicated to dealing with a formal mathematical proof. New theorems on multivariate circularly-symmetric complex normal ratio distribution are proved on the basis of principle of probabilistic transformation of continuous random vectors. The closed-form distributional formulas for multivariate ratios of correlated circularly-symmetric complex normal random variables are analytically derived. Afterwards, several properties are deduced as corollaries and lemmas to the new theorems. Monte Carlo simulation (MCS) is utilized to verify the accuracy of some representative cases. This work lays the mathematical groundwork to find probabilistic models for raw scalar transmissibility functions, which are to be expounded in detail in Part II of this study.
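
    A minimal Monte Carlo sketch of the setting described above (illustrative only, with an assumed 2x2 Hermitian covariance): draw correlated circularly-symmetric complex normal pairs and form their ratio, giving samples of a raw scalar transmissibility whose empirical statistics could be checked against the closed-form ratio distribution derived in the paper.

      import numpy as np

      rng = np.random.default_rng(0)

      def circular_complex_normal(cov, size):
          """Zero-mean circularly-symmetric complex normal samples with covariance matrix cov."""
          L = np.linalg.cholesky(cov)
          d = cov.shape[0]
          z = (rng.standard_normal((size, d)) + 1j * rng.standard_normal((size, d))) / np.sqrt(2)
          return z @ L.T  # rows x satisfy E[x x^H] = L L^H = cov

      cov = np.array([[2.0, 0.8 + 0.3j],
                      [0.8 - 0.3j, 1.0]])  # assumed Hermitian positive-definite covariance
      samples = circular_complex_normal(cov, 100_000)
      T = samples[:, 0] / samples[:, 1]    # ratio random variable (raw scalar transmissibility)
      print(np.median(np.abs(T)), np.mean(np.angle(T)))

    The median of the magnitude is reported rather than the mean because ratio distributions of this kind can be heavy-tailed.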

  8. Using a Bayesian network to clarify areas requiring research in a host-pathogen system.

    PubMed

    Bower, D S; Mengersen, K; Alford, R A; Schwarzkopf, L

    2017-12-01

    Bayesian network analyses can be used to interactively change the strength of effect of variables in a model to explore complex relationships in new ways. In doing so, they allow one to identify influential nodes that are not well studied empirically so that future research can be prioritized. We identified relationships in host and pathogen biology to examine disease-driven declines of amphibians associated with amphibian chytrid fungus (Batrachochytrium dendrobatidis). We constructed a Bayesian network consisting of behavioral, genetic, physiological, and environmental variables that influence disease and used them to predict host population trends. We varied the impacts of specific variables in the model to reveal factors with the most influence on host population trend. The behavior of the nodes (the way in which the variables probabilistically responded to changes in states of the parents, which are the nodes or variables that directly influenced them in the graphical model) was consistent with published results. The frog population had a 49% probability of decline when all states were set at their original values, and this probability increased when body temperatures were cold, the immune system was not suppressing infection, and the ambient environment was conducive to growth of B. dendrobatidis. These findings suggest the construction of our model reflected the complex relationships characteristic of host-pathogen interactions. Changes to climatic variables alone did not strongly influence the probability of population decline, which suggests that climate interacts with other factors such as the capacity of the frog immune system to suppress disease. Changes to the adaptive immune system and disease reservoirs had a large effect on the population trend, but there was little empirical information available for model construction. Our model inputs can be used as a base to examine other systems, and our results show that such analyses are useful tools for reviewing existing literature, identifying links poorly supported by evidence, and understanding complexities in emerging infectious-disease systems. © 2017 Society for Conservation Biology.

  9. Variable complexity online sequential extreme learning machine, with applications to streamflow prediction

    NASA Astrophysics Data System (ADS)

    Lima, Aranildo R.; Hsieh, William W.; Cannon, Alex J.

    2017-12-01

    In situations where new data arrive continually, online learning algorithms are computationally much less costly than batch learning ones in maintaining the model up-to-date. The extreme learning machine (ELM), a single hidden layer artificial neural network with random weights in the hidden layer, is solved by linear least squares, and has an online learning version, the online sequential ELM (OSELM). As more data become available during online learning, information on the longer time scale becomes available, so ideally the model complexity should be allowed to change, but the number of hidden nodes (HN) remains fixed in OSELM. A variable complexity VC-OSELM algorithm is proposed to dynamically add or remove HN in the OSELM, allowing the model complexity to vary automatically as online learning proceeds. The performance of VC-OSELM was compared with OSELM in daily streamflow predictions at two hydrological stations in British Columbia, Canada, with VC-OSELM significantly outperforming OSELM in mean absolute error, root mean squared error and Nash-Sutcliffe efficiency at both stations.
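
    A minimal sketch of the underlying OSELM machinery (a fixed random hidden layer plus a recursive least-squares update of the output weights as new data chunks arrive); the class and parameter names are illustrative, and the dynamic addition and removal of hidden nodes that defines VC-OSELM is not shown.

      import numpy as np

      class OSELM:
          def __init__(self, n_inputs, n_hidden, rng=None):
              rng = np.random.default_rng(rng)
              self.W = rng.standard_normal((n_inputs, n_hidden))  # fixed random input weights
              self.b = rng.standard_normal(n_hidden)              # fixed random biases
              self.beta = None                                    # output weights (learned)
              self.P = None                                       # inverse covariance for the RLS update

          def _hidden(self, X):
              return np.tanh(X @ self.W + self.b)                 # hidden-layer activations

          def fit_initial(self, X0, y0):
              H0 = self._hidden(X0)
              self.P = np.linalg.inv(H0.T @ H0 + 1e-6 * np.eye(H0.shape[1]))
              self.beta = self.P @ H0.T @ y0

          def update(self, X, y):
              # Recursive least-squares update applied to each new chunk during online learning.
              H = self._hidden(X)
              K = np.linalg.inv(np.eye(H.shape[0]) + H @ self.P @ H.T)
              self.P = self.P - self.P @ H.T @ K @ H @ self.P
              self.beta = self.beta + self.P @ H.T @ (y - H @ self.beta)

          def predict(self, X):
              return self._hidden(X) @ self.beta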

  10. Teacher Efficacy in Student Engagement, Instructional Management, Student Stressors, and Burnout: A Theoretical Model Using In-Class Variables to Predict Teachers' Intent-to-Leave

    ERIC Educational Resources Information Center

    Martin, Nancy K.; Sass, Daniel A.; Schmitt, Thomas A.

    2012-01-01

    The models presented here posit a complex relationship between efficacy in student engagement and intent-to-leave that is mediated by in-class variables of instructional management, student behavior stressors, aspects of burnout, and job satisfaction. Using data collected from 631 teachers, analyses provided support for the two models that…

  11. Bridging gaps: On the performance of airborne LiDAR to model wood mouse-habitat structure relationships in pine forests.

    PubMed

    Jaime-González, Carlos; Acebes, Pablo; Mateos, Ana; Mezquida, Eduardo T

    2017-01-01

    LiDAR technology has firmly contributed to strengthening the knowledge of habitat structure-wildlife relationships, though there is an evident bias towards flying vertebrates. To bridge this gap, we investigated and compared the performance of LiDAR and field data to model habitat preferences of the wood mouse (Apodemus sylvaticus) in a Mediterranean high mountain pine forest (Pinus sylvestris). We recorded nine field and 13 LiDAR variables that were summarized by means of Principal Component Analyses (PCA). We then analyzed the wood mouse's habitat preferences using three different models based on (i) field PC predictors, (ii) LiDAR PC predictors, and (iii) both sets of predictors in a combined model, including a variance partitioning analysis. Elevation was also included as a predictor in the three models. Our results indicate that LiDAR-derived variables were better predictors than field-based variables. The model combining both data sets slightly improved the predictive power. Field-derived variables indicated that the wood mouse was positively influenced by the gradient of increasing shrub cover and negatively affected by elevation. Regarding LiDAR data, two LiDAR PCs, i.e. gradients in canopy openness and complexity in forest vertical structure, positively influenced the wood mouse, although elevation interacted negatively with the complexity in vertical structure, indicating the wood mouse's preference for plots at lower elevations but with complex forest vertical structure. The combined model was similar to the LiDAR-based model and included the gradient of shrub cover measured in the field. Variance partitioning showed that LiDAR-based variables, together with elevation, were the most important predictors and that part of the variation explained by shrub cover was shared. LiDAR-derived variables were good surrogates of the environmental characteristics explaining habitat preferences of the wood mouse. Our LiDAR metrics represented structural features of the forest patch, such as the presence and cover of shrubs, as well as other characteristics likely including time since perturbation, food availability and predation risk. Our results suggest that LiDAR is a promising technology for further exploring habitat preferences of small mammal communities.

  12. Atmospheric icing of structures: Observations and simulations

    NASA Astrophysics Data System (ADS)

    Ágústsson, H.; Elíasson, Á. J.; Thorsteins, E.; Rögnvaldsson, Ó.; Ólafsson, H.

    2012-04-01

    This study compares observed icing in a test span in complex orography at Hallormsstaðaháls (575 m) in East-Iceland with parameterized icing based on an icing model and dynamically downscaled weather at high horizontal resolution. Four icing events have been selected from an extensive dataset of observed atmospheric icing in Iceland. A total of 86 test-spans have been erected since 1972 at 56 locations in complex terrain with more than 1000 icing events documented. The events used here have peak observed ice load between 4 and 36 kg/m. Most of the ice accretion is in-cloud icing but it may partly be mixed with freezing drizzle and wet snow icing. The calculation of atmospheric icing is made in two steps. First the atmospheric data is created by dynamically downscaling the ECMWF-analysis to high resolution using the non-hydrostatic mesoscale Advanced Research WRF-model. The horizontal resolution of 9, 3, 1 and 0.33 km is necessary to allow the atmospheric model to reproduce correctly local weather in the complex terrain of Iceland. Secondly, the Makkonen-model is used to calculate the ice accretion rate on the conductors based on the simulated temperature, wind, cloud and precipitation variables from the atmospheric data. In general, the atmospheric model correctly simulates the atmospheric variables and icing calculations based on the atmospheric variables correctly identify the observed icing events, but underestimate the load due to too slow ice accretion. This is most obvious when the temperature is slightly below 0°C and the observed icing is most intense. The model results improve significantly when additional observations of weather from an upstream weather station are used to nudge the atmospheric model. However, the large variability in the simulated atmospheric variables results in high temporal and spatial variability in the calculated ice accretion. Furthermore, there is high sensitivity of the icing model to the droplet size and the possibility that some of the icing may be due to freezing drizzle or wet snow instead of in-cloud icing of super-cooled droplets. In addition, the icing model (Makkonen) may not be accurate for the highest icing loads observed.
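
    For orientation, the ice accretion rate in a Makkonen-type model follows the widely cited form dM/dt = a1*a2*a3*w*v*A, where a1, a2 and a3 are the collision, sticking and accretion efficiencies, w the liquid water content, v the wind speed and A the cross-sectional area of the conductor; the sketch and the numbers below are purely illustrative.

      def icing_rate(a1, a2, a3, w, v, A):
          """Ice accretion rate in kg/s for w in kg/m^3, v in m/s and A in m^2."""
          return a1 * a2 * a3 * w * v * A

      # Example: 0.3 g/m^3 liquid water content, 10 m/s wind, a 3 cm conductor per metre of span.
      rate = icing_rate(a1=0.7, a2=1.0, a3=1.0, w=0.3e-3, v=10.0, A=0.03 * 1.0)
      print(rate * 3600, "kg of ice per hour per metre of span")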

  13. Assessing multiscale complexity of short heart rate variability series through a model-based linear approach

    NASA Astrophysics Data System (ADS)

    Porta, Alberto; Bari, Vlasta; Ranuzzi, Giovanni; De Maria, Beatrice; Baselli, Giuseppe

    2017-09-01

    We propose a multiscale complexity (MSC) method assessing irregularity in assigned frequency bands and being appropriate for analyzing the short time series. It is grounded on the identification of the coefficients of an autoregressive model, on the computation of the mean position of the poles generating the components of the power spectral density in an assigned frequency band, and on the assessment of its distance from the unit circle in the complex plane. The MSC method was tested on simulations and applied to the short heart period (HP) variability series recorded during graded head-up tilt in 17 subjects (age from 21 to 54 years, median = 28 years, 7 females) and during paced breathing protocols in 19 subjects (age from 27 to 35 years, median = 31 years, 11 females) to assess the contribution of time scales typical of the cardiac autonomic control, namely in low frequency (LF, from 0.04 to 0.15 Hz) and high frequency (HF, from 0.15 to 0.5 Hz) bands to the complexity of the cardiac regulation. The proposed MSC technique was compared to a traditional model-free multiscale method grounded on information theory, i.e., multiscale entropy (MSE). The approach suggests that the reduction of HP variability complexity observed during graded head-up tilt is due to a regularization of the HP fluctuations in LF band via a possible intervention of sympathetic control and the decrement of HP variability complexity observed during slow breathing is the result of the regularization of the HP variations in both LF and HF bands, thus implying the action of physiological mechanisms working at time scales even different from that of respiration. MSE did not distinguish experimental conditions at time scales larger than 1. Over a short time series MSC allows a more insightful association between cardiac control complexity and physiological mechanisms modulating cardiac rhythm compared to a more traditional tool such as MSE.
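
    The sketch below illustrates the pole-based ingredient of the method under simple assumptions (a least-squares AR fit and illustrative band limits): fit an AR model to a short series, locate the poles of its transfer function, keep the poles whose frequencies fall in the assigned band, and report their mean distance from the unit circle, with values closer to one indicating more regular oscillation in that band.

      import numpy as np

      def ar_fit(x, order):
          """Least-squares estimate of AR coefficients a_1..a_p in x[t] = sum_k a_k x[t-k] + e[t]."""
          X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
          y = x[order:]
          a, *_ = np.linalg.lstsq(X, y, rcond=None)
          return a

      def band_pole_radius(x, fs, band, order=10):
          a = ar_fit(np.asarray(x, float) - np.mean(x), order)
          poles = np.roots(np.concatenate(([1.0], -a)))       # roots of z^p - a1 z^(p-1) - ... - ap
          freqs = np.abs(np.angle(poles)) * fs / (2 * np.pi)   # pole frequencies in Hz
          in_band = poles[(freqs >= band[0]) & (freqs < band[1])]
          return np.mean(np.abs(in_band)) if in_band.size else np.nan

      # e.g. band_pole_radius(hp, fs=1.0, band=(0.04, 0.15)) for the LF band of an HP series
      # evenly resampled at 1 Hz (the sampling rate and band limits here are illustrative).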

  14. Selection of relevant input variables in storm water quality modeling by multiobjective evolutionary polynomial regression paradigm

    NASA Astrophysics Data System (ADS)

    Creaco, E.; Berardi, L.; Sun, Siao; Giustolisi, O.; Savic, D.

    2016-04-01

    The growing availability of field data, from information and communication technologies (ICTs) in "smart" urban infrastructures, allows data modeling to understand complex phenomena and to support management decisions. Among the analyzed phenomena, those related to storm water quality modeling have recently been gaining interest in the scientific literature. Nonetheless, the large amount of available data poses the problem of selecting relevant variables to describe a phenomenon and enable robust data modeling. This paper presents a procedure for the selection of relevant input variables using the multiobjective evolutionary polynomial regression (EPR-MOGA) paradigm. The procedure is based on scrutinizing the explanatory variables that appear inside the set of EPR-MOGA symbolic model expressions of increasing complexity and goodness of fit to target output. The strategy also enables the selection to be validated by engineering judgement. In such context, the multiple case study extension of EPR-MOGA, called MCS-EPR-MOGA, is adopted. The application of the proposed procedure to modeling storm water quality parameters in two French catchments shows that it was able to significantly reduce the number of explanatory variables for successive analyses. Finally, the EPR-MOGA models obtained after the input selection are compared with those obtained by using the same technique without benefitting from input selection and with those obtained in previous works where other data-modeling techniques were used on the same data. The comparison highlights the effectiveness of both EPR-MOGA and the input selection procedure.

  15. Snow model analysis.

    DOT National Transportation Integrated Search

    2014-01-01

    This study developed a new snow model and a database which warehouses geometric, weather and traffic data on New Jersey highways. The complexity of the model development lies in considering variable road width, different spreading/plowing pattern...

  16. Modeling silviculture after natural disturbance to sustain biodiversity in the longleaf pine (Pinus palustris) ecosystem : balancing complexity and implementation

    Treesearch

    Brian J. Palik; Robert J. Mitchell; J. Kevin Hiers

    2002-01-01

    Modeling silviculture after natural disturbance to maintain biodiversity is a popular concept, yet its application remains elusive. We discuss difficulties inherent to this idea, and suggest approaches to facilitate implementation, using longleaf pine (Pinus palustris) as an example. Natural disturbance regimes are spatially and temporally variable. Variability...

  17. Comparing and improving proper orthogonal decomposition (POD) to reduce the complexity of groundwater models

    NASA Astrophysics Data System (ADS)

    Gosses, Moritz; Nowak, Wolfgang; Wöhling, Thomas

    2017-04-01

    Physically-based modeling is a wide-spread tool in understanding and management of natural systems. With the high complexity of many such models and the huge amount of model runs necessary for parameter estimation and uncertainty analysis, overall run times can be prohibitively long even on modern computer systems. An encouraging strategy to tackle this problem is model reduction. In this contribution, we compare different proper orthogonal decomposition (POD, Siade et al. (2010)) methods and their potential applications to groundwater models. The POD method performs a singular value decomposition on system states as simulated by the complex (e.g., PDE-based) groundwater model taken at several time-steps, so-called snapshots. The singular vectors with the highest information content resulting from this decomposition are then used as a basis for projection of the system of model equations onto a subspace of much lower dimensionality than the original complex model, thereby greatly reducing complexity and accelerating run times. In its original form, this method is only applicable to linear problems. Many real-world groundwater models are non-linear, though. These non-linearities are introduced either through model structure (unconfined aquifers) or boundary conditions (certain Cauchy boundaries, like rivers with variable connection to the groundwater table). To date, applications of POD have focused on groundwater models simulating pumping tests in confined aquifers with constant head boundaries. In contrast, POD model reduction either greatly loses accuracy or does not significantly reduce model run time if the above-mentioned non-linearities are introduced. We have also found that variable Dirichlet boundaries are problematic for POD model reduction. An extension to the POD method, called POD-DEIM, has been developed for non-linear groundwater models by Stanko et al. (2016). This method uses spatial interpolation points to build the equation system in the reduced model space, thereby allowing the recalculation of system matrices at every time-step necessary for non-linear models while retaining the speed of the reduced model. This makes POD-DEIM applicable to groundwater models simulating unconfined aquifers. However, in our analysis, the method struggled to reproduce variable river boundaries accurately and gave no advantage for variable Dirichlet boundaries compared to the original POD method. We have developed another extension for POD that aims to address these remaining problems by performing a second POD operation on the model matrix on the left-hand side of the equation. The method aims to at least reproduce the accuracy of the other methods where they are applicable while outperforming them for setups with changing river boundaries or variable Dirichlet boundaries. We compared the new extension with original POD and POD-DEIM for different combinations of model structures and boundary conditions. The new method shows the potential of POD extensions for applications to non-linear groundwater systems and complex boundary conditions that go beyond the current, relatively limited range of applications. References: Siade, A. J., Putti, M., and Yeh, W. W.-G. (2010). Snapshot selection for groundwater model reduction using proper orthogonal decomposition. Water Resour. Res., 46(8):W08539. Stanko, Z. P., Boyce, S. E., and Yeh, W. W.-G. (2016). Nonlinear model reduction of unconfined groundwater flow using POD and DEIM. Advances in Water Resources, 97:130-143.
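
    A minimal sketch of the snapshot-POD building blocks (singular value decomposition of a snapshot matrix and Galerkin projection of a linear system), not of the new extension described above; the one-dimensional operator standing in for a groundwater flow matrix and all names are illustrative.

      import numpy as np

      def pod_basis(S, r):
          """Return the r leading left singular vectors of the snapshot matrix S (n_nodes x n_snapshots)."""
          U, s, _ = np.linalg.svd(S, full_matrices=False)
          return U[:, :r]

      def reduced_solve(A, b, Phi):
          """Galerkin projection: solve (Phi^T A Phi) y = Phi^T b and lift the solution back to full space."""
          Ar = Phi.T @ A @ Phi
          br = Phi.T @ b
          return Phi @ np.linalg.solve(Ar, br)

      # Illustrative use with a simple symmetric operator standing in for a groundwater flow matrix.
      rng = np.random.default_rng(1)
      n = 200
      A = 4 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
      snapshots = np.column_stack([np.linalg.solve(A, rng.random(n)) for _ in range(20)])
      Phi = pod_basis(snapshots, r=8)
      b = rng.random(n)
      print(np.linalg.norm(reduced_solve(A, b, Phi) - np.linalg.solve(A, b)))  # reduction error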

  18. A canonical neural mechanism for behavioral variability

    NASA Astrophysics Data System (ADS)

    Darshan, Ran; Wood, William E.; Peters, Susan; Leblois, Arthur; Hansel, David

    2017-05-01

    The ability to generate variable movements is essential for learning and adjusting complex behaviours. This variability has been linked to the temporal irregularity of neuronal activity in the central nervous system. However, how neuronal irregularity actually translates into behavioural variability is unclear. Here we combine modelling, electrophysiological and behavioural studies to address this issue. We demonstrate that a model circuit comprising topographically organized and strongly recurrent neural networks can autonomously generate irregular motor behaviours. Simultaneous recordings of neurons in singing finches reveal that neural correlations increase across the circuit driving song variability, in agreement with the model predictions. Analysing behavioural data, we find remarkable similarities in the babbling statistics of 5-6-month-old human infants and juveniles from three songbird species and show that our model naturally accounts for these `universal' statistics.

  19. Predicting Individual Tree and Shrub Species Distributions with Empirically Derived Microclimate Surfaces in a Complex Mountain Ecosystem in Northern Idaho, USA

    NASA Astrophysics Data System (ADS)

    Holden, Z.; Cushman, S.; Evans, J.; Littell, J. S.

    2009-12-01

    The resolution of current climate interpolation models limits our ability to adequately account for temperature variability in complex mountainous terrain. We empirically derive 30 meter resolution models of June-October day and nighttime temperature and April nighttime Vapor Pressure Deficit (VPD) using hourly data from 53 Hobo dataloggers stratified by topographic setting in mixed conifer forests near Bonners Ferry, ID. 66% of the variability in average June-October daytime temperature is explained by 3 variables (elevation, relative slope position and topographic roughness) derived from 30 meter digital elevation models. 69% of the variability in nighttime temperatures among stations is explained by elevation, relative slope position and topographic dissection (450 meter window). 54% of variability in April nighttime VPD is explained by elevation, soil wetness and the NDVIc derived from Landsat. We extract temperature and VPD predictions at 411 intensified Forest Inventory and Analysis (FIA) plots. We use these variables with soil wetness and solar radiation indices derived from a 30 meter DEM to predict the presence and absence of 10 common forest tree species and 25 shrub species. Classification accuracies range from 87% for Pinus ponderosa to > 97% for most other tree species. Shrub model accuracies are also high, with greater than 90% accuracy for the majority of species. Species distribution models based on the physical variables that drive species occurrence, rather than their topographic surrogates, will eventually allow us to predict potential future distributions of these species with warming climate at fine spatial scales.

  20. High dimensional model representation method for fuzzy structural dynamics

    NASA Astrophysics Data System (ADS)

    Adhikari, S.; Chowdhury, R.; Friswell, M. I.

    2011-03-01

    Uncertainty propagation in multi-parameter complex structures poses significant computational challenges. This paper investigates the possibility of using the High Dimensional Model Representation (HDMR) approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of HDMR is proposed for fuzzy finite element analysis of linear dynamical systems. The HDMR expansion is an efficient formulation for high-dimensional mapping in complex systems if the higher order variable correlations are weak, thereby permitting the input-output relationship behavior to be captured by low-order terms. The computational effort to determine the expansion functions using the α-cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is first illustrated for multi-parameter nonlinear mathematical test functions with fuzzy variables. The method is then integrated with a commercial finite element software (ADINA). Modal analysis of a simplified aircraft wing with fuzzy parameters has been used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations. It is shown that using the proposed HDMR approach, the number of finite element function calls can be reduced without significantly compromising the accuracy.
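
    For reference, the HDMR expansion underlying this approach expresses the response as a hierarchy of component functions,

      f(x_1, \dots, x_n) = f_0 + \sum_{i=1}^{n} f_i(x_i) + \sum_{1 \le i < j \le n} f_{ij}(x_i, x_j) + \cdots + f_{12\cdots n}(x_1, \dots, x_n),

    where f_0 is the mean response, the f_i capture independent first-order effects and the f_{ij} capture residual pairwise cooperative effects; truncating the series after the low-order terms, as assumed above, is what keeps the number of required function evaluations polynomial rather than exponential in the number of variables.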

  1. A generalized linear integrate-and-fire neural model produces diverse spiking behaviors.

    PubMed

    Mihalaş, Stefan; Niebur, Ernst

    2009-03-01

    For simulations of neural networks, there is a trade-off between the size of the network that can be simulated and the complexity of the model used for individual neurons. In this study, we describe a generalization of the leaky integrate-and-fire model that produces a wide variety of spiking behaviors while still being analytically solvable between firings. For different parameter values, the model produces spiking or bursting, tonic, phasic or adapting responses, depolarizing or hyperpolarizing after potentials and so forth. The model consists of a diagonalizable set of linear differential equations describing the time evolution of membrane potential, a variable threshold, and an arbitrary number of firing-induced currents. Each of these variables is modified by an update rule when the potential reaches threshold. The variables used are intuitive and have biological significance. The model's rich behavior does not come from the differential equations, which are linear, but rather from complex update rules. This single-neuron model can be implemented using algorithms similar to the standard integrate-and-fire model. It is a natural match with event-driven algorithms for which the firing times are obtained as a solution of a polynomial equation.
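
    A minimal sketch in the spirit of the model class described above: linear subthreshold dynamics for the membrane potential, a moving threshold and one firing-induced current, with update rules applied at threshold crossings. The parameter values and reset rules below are illustrative rather than the published ones, and simple Euler stepping replaces the event-driven solution.

      import numpy as np

      def simulate_glif(T=0.5, dt=1e-4, I_ext=3e-10):
          C, G, E_L = 100e-12, 5e-9, -70e-3   # capacitance (F), leak conductance (S), leak reversal (V)
          a, b, Th_inf = 1.0, 10.0, -50e-3    # threshold coupling (1/s), threshold decay (1/s), resting threshold (V)
          k1, R1, A1 = 200.0, 0.0, 5e-12      # induced-current decay (1/s) and its spike update constants
          V_reset, Th_reset = -70e-3, -50e-3

          V, Th, I1 = E_L, Th_inf, 0.0
          spikes, trace = [], []
          for step in range(int(T / dt)):
              dV = (I_ext + I1 - G * (V - E_L)) / C
              dTh = a * (V - E_L) - b * (Th - Th_inf)
              V, Th = V + dt * dV, Th + dt * dTh
              I1 *= np.exp(-k1 * dt)          # firing-induced current decays between spikes
              if V >= Th:                     # threshold crossing: apply the update rules
                  spikes.append(step * dt)
                  V, Th, I1 = V_reset, max(Th_reset, Th), R1 * I1 + A1
              trace.append(V)
          return np.array(spikes), np.array(trace)

    As the abstract notes, the richness of the behavior comes from the update rules rather than the linear dynamics; changing the reset constants and induced-current updates switches the response between tonic, adapting and bursting patterns.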

  2. Controls on the spatial variability of key soil properties: comparing field data with a mechanistic soilscape evolution model

    NASA Astrophysics Data System (ADS)

    Vanwalleghem, T.; Román, A.; Giraldez, J. V.

    2016-12-01

    There is a need to better understand the processes influencing soil formation and the resulting distribution of soil properties. Soil properties can exhibit strong spatial variation, even at the small catchment scale. Especially soil carbon pools in semi-arid, mountainous areas are highly uncertain because bulk density and stoniness are very heterogeneous and rarely measured explicitly. In this study, we explore the spatial variability in key soil properties (soil carbon stocks, stoniness, bulk density and soil depth) as a function of processes shaping the critical zone (weathering, erosion, soil water fluxes and vegetation patterns). We also compare the potential of a geostatistical versus a mechanistic soil formation model (MILESD) for predicting these key soil properties. Soil core samples were collected from 67 locations at 6 depths. Total soil organic carbon stocks were 4.38 kg m-2. Solar radiation proved to be the key variable controlling soil carbon distribution. Stone content was mostly controlled by slope, indicating the importance of erosion. Spatial distribution of bulk density was found to be highly random. Finally, total carbon stocks were predicted using a random forest model whose main covariates were solar radiation and NDVI. The model predicts carbon stocks that are twice as high on north-facing as on south-facing slopes. However, validation showed that these covariates only explained 25% of the variation in the dataset. Apparently, present-day landscape and vegetation properties are not sufficient to fully explain variability in the soil carbon stocks in this complex terrain under natural vegetation. This is attributed to a high spatial variability in bulk density and stoniness, key variables controlling carbon stocks. Similar results were obtained with the mechanistic soil formation model MILESD, suggesting that more complex models might be needed to further explore this high spatial variability.

  3. Stochastic Time Models of Syllable Structure

    PubMed Central

    Shaw, Jason A.; Gafos, Adamantios I.

    2015-01-01

    Drawing on phonology research within the generative linguistics tradition, stochastic methods, and notions from complex systems, we develop a modelling paradigm linking phonological structure, expressed in terms of syllables, to speech movement data acquired with 3D electromagnetic articulography and X-ray microbeam methods. The essential variable in the models is syllable structure. When mapped to discrete coordination topologies, syllabic organization imposes systematic patterns of variability on the temporal dynamics of speech articulation. We simulated these dynamics under different syllabic parses and evaluated simulations against experimental data from Arabic and English, two languages claimed to parse similar strings of segments into different syllabic structures. Model simulations replicated several key experimental results, including the fallibility of past phonetic heuristics for syllable structure, and exposed the range of conditions under which such heuristics remain valid. More importantly, the modelling approach consistently diagnosed syllable structure proving resilient to multiple sources of variability in experimental data including measurement variability, speaker variability, and contextual variability. Prospects for extensions of our modelling paradigm to acoustic data are also discussed. PMID:25996153

  4. The Connection between the Complexity of Perception of an Event and Judging Decisions in a Complex Situation

    ERIC Educational Resources Information Center

    Rauchberger, Nirit; Kaniel, Shlomo; Gross, Zehavit

    2017-01-01

    This study examines the process of judging complex real-life events in Israel: the disengagement from Gush Katif, Rabin's assassination and the Second Lebanon War. The process of judging is based on Weiner's attribution model, (Weiner, 2000, 2006); however, due to the complexity of the events studied, variables were added to characterize the…

  5. Probing AGN Accretion Physics through AGN Variability: Insights from Kepler

    NASA Astrophysics Data System (ADS)

    Kasliwal, Vishal Pramod

    Active Galactic Nuclei (AGN) exhibit large luminosity variations over the entire electromagnetic spectrum on timescales ranging from hours to years. The variations in luminosity are devoid of any periodic character and appear stochastic. While complex correlations exist between the variability observed in different parts of the electromagnetic spectrum, no frequency band appears to be completely dominant, suggesting that the physical processes producing the variability are exceedingly rich and complex. In the absence of a clear theoretical explanation of the variability, phenomenological models are used to study AGN variability. The stochastic behavior of AGN variability makes formulating such models difficult and connecting them to the underlying physics exceedingly hard. We study AGN light curves serendipitously observed by the NASA Kepler planet-finding mission. Compared to previous ground-based observations, Kepler offers higher precision and a smaller sampling interval, resulting in potentially higher quality light curves. Using structure functions, we demonstrate that (1) the simplest statistical model of AGN variability, the damped random walk (DRW), is insufficient to characterize the observed behavior of AGN light curves; and (2) variability begins to occur in AGN on time-scales as short as hours. Of the 20 light curves studied by us, only 3-8 may be consistent with the DRW. The structure functions of the AGN in our sample exhibit complex behavior with pronounced dips on time-scales of 10-100 d, suggesting that AGN variability can be very complex and merits further analysis. We examine the accuracy of the Kepler pipeline-generated light curves and find that the publicly available light curves may require re-processing to reduce contamination from field sources. We show that while the re-processing changes the exact PSD power law slopes inferred by us, it is unlikely to change the conclusion of our structure function study: Kepler AGN light curves indicate that the DRW is insufficient to characterize AGN variability. We provide a new approach to probing accretion physics with variability by decomposing observed light curves into a set of impulses that drive diffusive processes using C-ARMA models. Applying our approach to Kepler data, we demonstrate how the time-scales reported in the literature can be interpreted in the context of the growth and decay time-scales for flux perturbations, and tentatively identify the flux perturbation driving process with accretion disk turbulence on length-scales much longer than the characteristic eddy size. Our analysis technique is applicable to (1) studying the connection between AGN sub-type and variability properties; (2) probing the origins of variability by studying the multi-wavelength behavior of AGN; (3) testing numerical simulations of accretion flows with the goal of creating a library of the variability properties of different accretion mechanisms; (4) hunting for changes in the behavior of the accretion flow by block-analyzing observed light curves; and (5) constraining the sampling requirements of future surveys of AGN variability.
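
    A minimal sketch of the damped random walk (an Ornstein-Uhlenbeck process) that the study tests against Kepler light curves; tau is the damping time-scale, sigma the asymptotic rms amplitude, and the cadence and parameter values below are illustrative.

      import numpy as np

      def simulate_drw(n, dt, tau, sigma, mean=0.0, rng=None):
          """Exact discrete-time simulation of a damped random walk sampled every dt (same units as tau)."""
          rng = np.random.default_rng(rng)
          x = np.empty(n)
          x[0] = mean + sigma * rng.standard_normal()           # draw from the stationary distribution
          decay = np.exp(-dt / tau)
          innov_sd = sigma * np.sqrt(1.0 - decay ** 2)
          for i in range(1, n):
              x[i] = mean + decay * (x[i - 1] - mean) + innov_sd * rng.standard_normal()
          return x

      # One year of 30-minute cadence, a 100-day damping time-scale, and 2% rms flux variations.
      lc = simulate_drw(n=17520, dt=0.5 / 24, tau=100.0, sigma=0.02)

    Structure functions computed from such synthetic light curves provide a baseline against which the dips and short-time-scale behavior reported above can be contrasted.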

  6. Predicting Deforestation Patterns in Loreto, Peru from 2000-2010 Using a Nested GLM Approach

    NASA Astrophysics Data System (ADS)

    Vijay, V.; Jenkins, C.; Finer, M.; Pimm, S.

    2013-12-01

    Loreto is the largest province in Peru, covering about 370,000 km2. Because of its remote location in the Amazonian rainforest, it is also one of the most sparsely populated. Though a majority of the region remains covered by forest, deforestation is being driven by human encroachment through industrial activities and the spread of colonization and agriculture. The importance of accurate predictive modeling of deforestation has spawned an extensive body of literature on the topic. We present a nested GLM approach based on predictions of deforestation from 2000-2010 and using variables representing the expected drivers of deforestation. Models were constructed using 2000 to 2005 changes and tested against data for 2005 to 2010. The most complex model, which included transportation variables (roads and navigable rivers), spatial contagion processes, population centers and industrial activities, performed better in predicting the 2005 to 2010 changes (75.8% accurate) than did a simpler model using only transportation variables (69.2% accurate). Finally we contrast the GLM approach with a more complex spatially articulated model.
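
    A minimal sketch of the nested-GLM comparison under assumed inputs: train_df and test_df are hypothetical pandas DataFrames of grid cells with a binary deforestation response and candidate driver columns (all column names below are placeholders), and the simpler transportation-only model is scored against the fuller model on the held-out period.

      import numpy as np
      import statsmodels.api as sm

      def fit_and_score(train, test, predictors, response="deforested"):
          """Fit a binomial GLM on the training period and return it with its hold-out accuracy."""
          X_tr = sm.add_constant(train[predictors])
          X_te = sm.add_constant(test[predictors], has_constant="add")
          model = sm.GLM(train[response], X_tr, family=sm.families.Binomial()).fit()
          accuracy = np.mean((model.predict(X_te) > 0.5) == test[response])
          return model, accuracy

      simple_vars = ["dist_road", "dist_river"]                                   # transportation only
      complex_vars = simple_vars + ["dist_deforested", "dist_town", "oil_or_mining"]  # adds contagion, population, industry
      # _, acc_simple = fit_and_score(train_df, test_df, simple_vars)
      # _, acc_complex = fit_and_score(train_df, test_df, complex_vars)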

  7. In Silico, Experimental, Mechanistic Model for Extended-Release Felodipine Disposition Exhibiting Complex Absorption and a Highly Variable Food Interaction

    PubMed Central

    Kim, Sean H. J.; Jackson, Andre J.; Hunt, C. Anthony

    2014-01-01

    The objective of this study was to develop and explore new, in silico experimental methods for deciphering complex, highly variable absorption and food interaction pharmacokinetics observed for a modified-release drug product. Toward that aim, we constructed an executable software analog of study participants to whom product was administered orally. The analog is an object- and agent-oriented, discrete event system, which consists of grid spaces and event mechanisms that map abstractly to different physiological features and processes. Analog mechanisms were made sufficiently complicated to achieve prespecified similarity criteria. An equation-based gastrointestinal transit model with nonlinear mixed effects analysis provided a standard for comparison. Subject-specific parameterizations enabled each executed analog’s plasma profile to mimic features of the corresponding six individual pairs of subject plasma profiles. All achieved prespecified, quantitative similarity criteria, and outperformed the gastrointestinal transit model estimations. We observed important subject-specific interactions within the simulation and mechanistic differences between the two models. We hypothesize that mechanisms, events, and their causes occurring during simulations had counterparts within the food interaction study: they are working, evolvable, concrete theories of dynamic interactions occurring within individual subjects. The approach presented provides new, experimental strategies for unraveling the mechanistic basis of complex pharmacological interactions and observed variability. PMID:25268237

  8. POWER ANALYSIS FOR COMPLEX MEDIATIONAL DESIGNS USING MONTE CARLO METHODS

    PubMed Central

    Thoemmes, Felix; MacKinnon, David P.; Reiser, Mark R.

    2013-01-01

    Applied researchers often include mediation effects in applications of advanced methods such as latent variable models and linear growth curve models. Guidance on how to estimate statistical power to detect mediation for these models has not yet been addressed in the literature. We describe a general framework for power analyses for complex mediational models. The approach is based on the well known technique of generating a large number of samples in a Monte Carlo study, and estimating power as the percentage of cases in which an estimate of interest is significantly different from zero. Examples of power calculation for commonly used mediational models are provided. Power analyses for the single mediator, multiple mediators, three-path mediation, mediation with latent variables, moderated mediation, and mediation in longitudinal designs are described. Annotated sample syntax for Mplus is appended and tabled values of required sample sizes are shown for some models. PMID:23935262
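
    A minimal sketch of the Monte Carlo power idea for the simplest case, a single-mediator model X -> M -> Y (the latent-variable and longitudinal variants described above would be fitted with an SEM package such as Mplus instead); the path coefficients and sample size are illustrative, and the indirect effect is tested here with a Sobel z statistic for simplicity.

      import numpy as np
      import statsmodels.api as sm
      from scipy.stats import norm

      def mediation_power(n, a, b, c_prime=0.0, reps=1000, alpha=0.05, seed=0):
          """Proportion of replications in which the indirect effect a*b is significant (Sobel z test)."""
          rng = np.random.default_rng(seed)
          z_crit, hits = norm.ppf(1 - alpha / 2), 0
          for _ in range(reps):
              x = rng.standard_normal(n)
              m = a * x + rng.standard_normal(n)
              y = b * m + c_prime * x + rng.standard_normal(n)
              fit_a = sm.OLS(m, sm.add_constant(x)).fit()
              fit_b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit()
              a_hat, b_hat = fit_a.params[1], fit_b.params[1]
              se = np.sqrt(a_hat**2 * fit_b.bse[1]**2 + b_hat**2 * fit_a.bse[1]**2)
              hits += abs(a_hat * b_hat) / se > z_crit
          return hits / reps

      # e.g. mediation_power(n=200, a=0.3, b=0.3) estimates power for two medium-small paths.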

  9. Applications of Black Scholes Complexity Concepts to Combat Modelling

    DTIC Science & Technology

    2009-03-01

    Lauren, G C McIntosh, N D Perry and J Moffat, Chaos 17, 2007. Lanchester Models of Warfare, Volumes 1 and 2, J G Taylor, Operations Research Society. Notation: transformation matrix A; Lanchester Equation solution parameter bi; dependent model variables b(x,t); variable variance rate B; Lanchester Equation solution ... distribution. The similarity between this equation and the Lanchester Equations (equation 1) is clear. This suggests an obvious solution to the question of

  10. A Generalized Linear Integrate-and-Fire Neural Model Produces Diverse Spiking Behaviors

    PubMed Central

    Mihalaş, Ştefan; Niebur, Ernst

    2010-01-01

    For simulations of neural networks, there is a trade-off between the size of the network that can be simulated and the complexity of the model used for individual neurons. In this study, we describe a generalization of the leaky integrate-and-fire model that produces a wide variety of spiking behaviors while still being analytically solvable between firings. For different parameter values, the model produces spiking or bursting, tonic, phasic or adapting responses, depolarizing or hyperpolarizing after potentials and so forth. The model consists of a diagonalizable set of linear differential equations describing the time evolution of membrane potential, a variable threshold, and an arbitrary number of firing-induced currents. Each of these variables is modified by an update rule when the potential reaches threshold. The variables used are intuitive and have biological significance. The model’s rich behavior does not come from the differential equations, which are linear, but rather from complex update rules. This single-neuron model can be implemented using algorithms similar to the standard integrate-and-fire model. It is a natural match with event-driven algorithms for which the firing times are obtained as a solution of a polynomial equation. PMID:18928368

  11. Shame, Dissociation, and Complex PTSD Symptoms in Traumatized Psychiatric and Control Groups: Direct and Indirect Associations With Relationship Distress.

    PubMed

    Dorahy, Martin J; Corry, Mary; Black, Rebecca; Matheson, Laura; Coles, Holly; Curran, David; Seager, Lenaire; Middleton, Warwick; Dyer, Kevin F W

    2017-04-01

    Elevated shame and dissociation are common in dissociative identity disorder (DID) and chronic posttraumatic stress disorder (PTSD) and are part of the constellation of symptoms defined as complex PTSD. Previous work examined the relationship between shame, dissociation, and complex PTSD and whether they are associated with intimate relationship anxiety, relationship depression, and fear of relationships. This study investigated these variables in traumatized clinical samples and a nonclinical community group. Participants were drawn from the DID (n = 20), conflict-related chronic PTSD (n = 65), and nonclinical (n = 125) populations and completed questionnaires assessing the variables of interest. A model examining the direct impact of shame and dissociation on relationship functioning, and their indirect effect via complex PTSD symptoms, was tested through path analysis. The DID sample reported significantly higher dissociation, shame, complex PTSD symptom severity, relationship anxiety, relationship depression, and fear of relationships than the other two samples. Support was found for the proposed model, with shame directly affecting relationship anxiety and fear of relationships, and pathological dissociation directly affecting relationship anxiety and relationship depression. The indirect effect of shame and dissociation via complex PTSD symptom severity was evident on all relationship variables. Shame and pathological dissociation are important for not only the effect they have on the development of other complex PTSD symptoms, but also their direct and indirect effects on distress associated with relationships. © 2016 Wiley Periodicals, Inc.

  12. Health behavior change models for HIV prevention and AIDS care: practical recommendations for a multi-level approach.

    PubMed

    Kaufman, Michelle R; Cornish, Flora; Zimmerman, Rick S; Johnson, Blair T

    2014-08-15

    Despite increasing recent emphasis on the social and structural determinants of HIV-related behavior, empirical research and interventions lag behind, partly because of the complexity of social-structural approaches. This article provides a comprehensive and practical review of the diverse literature on multi-level approaches to HIV-related behavior change in the interest of contributing to the ongoing shift to more holistic theory, research, and practice. It has the following specific aims: (1) to provide a comprehensive list of relevant variables/factors related to behavior change at all points on the individual-structural spectrum, (2) to map out and compare the characteristics of important recent multi-level models, (3) to reflect on the challenges of operating with such complex theoretical tools, and (4) to identify next steps and make actionable recommendations. Using a multi-level approach implies incorporating increasing numbers of variables and increasingly context-specific mechanisms, overall producing greater intricacies. We conclude with recommendations on how best to respond to this complexity, which include: using formative research and interdisciplinary collaboration to select the most appropriate levels and variables in a given context; measuring social and institutional variables at the appropriate level to ensure meaningful assessments of multiple levels are made; and conceptualizing intervention and research with reference to theoretical models and mechanisms to facilitate transferability, sustainability, and scalability.

  13. Validation workflow for a clinical Bayesian network model in multidisciplinary decision making in head and neck oncology treatment.

    PubMed

    Cypko, Mario A; Stoehr, Matthaeus; Kozniewski, Marcin; Druzdzel, Marek J; Dietz, Andreas; Berliner, Leonard; Lemke, Heinz U

    2017-11-01

    Oncological treatment is becoming increasingly complex and, therefore, decision making in multidisciplinary teams is becoming the key activity in clinical pathways. The increased complexity is related to the number and variability of possible treatment decisions that may be relevant to a patient. In this paper, we describe validation of a multidisciplinary cancer treatment decision model in the clinical domain of head and neck oncology. Probabilistic graphical models and corresponding inference algorithms, in the form of Bayesian networks, can support complex decision-making processes by providing mathematically reproducible and transparent advice. The quality of BN-based advice depends on the quality of the model. Therefore, it is vital to validate the model before it is applied in practice. For an example BN subnetwork of laryngeal cancer with 303 variables, we evaluated 66 patient records. To validate the model on this dataset, a validation workflow was applied in combination with quantitative and qualitative analyses. In the subsequent analyses, we observed four sources of imprecise predictions: incorrect data, incomplete patient data, outvoting relevant observations, and an incorrect model. Finally, the four problems were solved by modifying the data and the model. The presented validation effort is related to the model complexity. For simpler models, the validation workflow is the same, although it may require fewer validation methods. The validation success is related to the model's well-founded knowledge base. The remaining laryngeal cancer model may disclose additional sources of imprecise predictions.

  14. Model building strategy for logistic regression: purposeful selection.

    PubMed

    Zhang, Zhongheng

    2016-03-01

    Logistic regression is one of the most commonly used models to account for confounders in the medical literature. The article introduces how to perform the purposeful selection model-building strategy with R. I stress the use of the likelihood ratio test to see whether deleting a variable has a significant impact on model fit. A deleted variable should also be checked for whether it is an important adjustment for the remaining covariates. Interactions should be checked to disentangle complex relationships between covariates and their synergistic effect on the response variable. The model should be checked for goodness of fit (GOF), in other words, how well the fitted model reflects the real data. The Hosmer-Lemeshow GOF test is the most widely used for logistic regression models.
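
    The likelihood ratio test at the heart of purposeful selection compares the log-likelihoods of nested models. A minimal Python sketch follows (the article itself works in R; the simulated data and variable names here are illustrative only):

```python
# Likelihood ratio test for a candidate variable in a logistic model.
# Simulated data; variable names are illustrative.
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(0)
n = 500
age = rng.normal(60, 10, n)
biomarker = rng.normal(0, 1, n)
logit = -3 + 0.05 * age + 0.8 * biomarker
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_full = sm.add_constant(np.column_stack([age, biomarker]))
X_reduced = sm.add_constant(age)

full = sm.Logit(y, X_full).fit(disp=False)
reduced = sm.Logit(y, X_reduced).fit(disp=False)

# LR statistic: twice the difference in log-likelihoods; df = parameters dropped
lr_stat = 2 * (full.llf - reduced.llf)
p_value = chi2.sf(lr_stat, df=1)
print(f"LR = {lr_stat:.2f}, p = {p_value:.4f}")  # small p -> keep the variable
```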

  15. Deconstructing the core dynamics from a complex time-lagged regulatory biological circuit.

    PubMed

    Eriksson, O; Brinne, B; Zhou, Y; Björkegren, J; Tegnér, J

    2009-03-01

    Complex regulatory dynamics are ubiquitous in molecular networks composed of genes and proteins. Recent progress in computational biology and its application to molecular data generate a growing number of complex networks. Yet, it has been difficult to understand the governing principles of these networks beyond graphical analysis or extensive numerical simulations. Here the authors exploit several simplifying biological circumstances which enable direct detection of the underlying dynamical regularities driving periodic oscillations in a dynamical nonlinear computational model of a protein-protein network. System analysis is performed using the cell cycle, a mathematically well-described complex regulatory circuit driven by external signals. By introducing an explicit time delay and using a 'tearing-and-zooming' approach, the authors reduce the system to a piecewise linear system with two variables that capture the dynamics of this complex network. A key step in the analysis is the identification of functional subsystems by identifying the relations between state variables within the model. These functional subsystems are referred to as dynamical modules operating as sensitive switches in the original complex model. By using reduced mathematical representations of the subsystems, the authors derive explicit conditions on how the cell cycle dynamics depend on system parameters and can, for the first time, analyse and prove global conditions for system stability. The approach, which includes utilising biological simplifying conditions, identification of dynamical modules, and mathematical reduction of the model complexity, may be applicable to other well-characterised biological regulatory circuits. [Includes supplementary material].

  16. A canonical neural mechanism for behavioral variability

    PubMed Central

    Darshan, Ran; Wood, William E.; Peters, Susan; Leblois, Arthur; Hansel, David

    2017-01-01

    The ability to generate variable movements is essential for learning and adjusting complex behaviours. This variability has been linked to the temporal irregularity of neuronal activity in the central nervous system. However, how neuronal irregularity actually translates into behavioural variability is unclear. Here we combine modelling, electrophysiological and behavioural studies to address this issue. We demonstrate that a model circuit comprising topographically organized and strongly recurrent neural networks can autonomously generate irregular motor behaviours. Simultaneous recordings of neurons in singing finches reveal that neural correlations increase across the circuit driving song variability, in agreement with the model predictions. Analysing behavioural data, we find remarkable similarities in the babbling statistics of 5–6-month-old human infants and juveniles from three songbird species and show that our model naturally accounts for these ‘universal' statistics. PMID:28530225

  17. Symbol-and-Arrow Diagrams in Teaching Pharmacokinetics.

    ERIC Educational Resources Information Center

    Hayton, William L.

    1990-01-01

    Symbol-and-arrow diagrams are helpful adjuncts to equations derived from pharmacokinetic models. Both show relationships among dependent and independent variables. Diagrams show only qualitative relationships, but clearly show which variables are dependent and which are independent, helping students understand complex but important functional…

  18. Description and validation of the Simple, Efficient, Dynamic, Global, Ecological Simulator (SEDGES v.1.0)

    NASA Astrophysics Data System (ADS)

    Paiewonsky, Pablo; Elison Timm, Oliver

    2018-03-01

    In this paper, we present a simple dynamic global vegetation model whose primary intended use is auxiliary to the land-atmosphere coupling scheme of a climate model, particularly one of intermediate complexity. The model simulates and provides important ecological-only variables but also some hydrological and surface energy variables that are typically either simulated by land surface schemes or else used as boundary data input for these schemes. The model formulations and their derivations are presented here, in detail. The model includes some realistic and useful features for its level of complexity, including a photosynthetic dependency on light, full coupling of photosynthesis and transpiration through an interactive canopy resistance, and a soil organic carbon dependence for bare-soil albedo. We evaluate the model's performance by running it as part of a simple land surface scheme that is driven by reanalysis data. The evaluation against observational data includes net primary productivity, leaf area index, surface albedo, and diagnosed variables relevant for the closure of the hydrological cycle. In this setup, we find that the model gives an adequate to good simulation of basic large-scale ecological and hydrological variables. Of the variables analyzed in this paper, gross primary productivity is particularly well simulated. The results also reveal the current limitations of the model. The most significant deficiency is the excessive simulation of evapotranspiration in mid- to high northern latitudes during their winter to spring transition. The model has a relative advantage in situations that require some combination of computational efficiency, model transparency and tractability, and the simulation of the large-scale vegetation and land surface characteristics under non-present-day conditions.

  19. Input variable selection for data-driven models of Coriolis flowmeters for two-phase flow measurement

    NASA Astrophysics Data System (ADS)

    Wang, Lijuan; Yan, Yong; Wang, Xue; Wang, Tao

    2017-03-01

    Input variable selection is an essential step in the development of data-driven models for environmental, biological and industrial applications. Through input variable selection, irrelevant or redundant variables are eliminated and a suitable subset of variables is identified as the input to a model. Meanwhile, input variable selection simplifies the model structure and improves computational efficiency. This paper describes the procedures of input variable selection for data-driven models for the measurement of liquid mass flowrate and gas volume fraction under two-phase flow conditions using Coriolis flowmeters. Three advanced input variable selection methods, including partial mutual information (PMI), genetic algorithm-artificial neural network (GA-ANN) and tree-based iterative input selection (IIS), are applied in this study. Typical data-driven models incorporating support vector machine (SVM) are established individually based on the input candidates resulting from the selection methods. The validity of the selection outcomes is assessed through an output performance comparison of the SVM-based data-driven models and sensitivity analysis. The validation and analysis results suggest that the input variables selected from the PMI algorithm provide more effective information for the models to measure liquid mass flowrate, while the IIS algorithm provides fewer but more effective variables for the models to predict gas volume fraction.
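
    A hedged sketch of the general idea, using a mutual-information filter as a simple stand-in for PMI and feeding the retained inputs to an SVM regressor; the flowmeter variable names and the synthetic data are assumptions, not the paper's dataset:

```python
# Mutual-information-based input selection followed by an SVM regression model.
# Feature names and data are hypothetical stand-ins for flowmeter variables.
import numpy as np
from sklearn.feature_selection import mutual_info_regression
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 300
X = rng.normal(size=(n, 8))
feature_names = ["observed_density", "damping", "drive_gain", "tube_temp",
                 "apparent_flow", "pressure", "phase_shift", "noise_proxy"]
# Synthetic target: liquid mass flowrate driven by a few of the inputs
y = 2.0 * X[:, 4] - 1.5 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.2, n)

# Rank candidate inputs by estimated mutual information with the target
mi = mutual_info_regression(X, y, random_state=0)
ranked = sorted(zip(feature_names, mi), key=lambda kv: -kv[1])
selected = [name for name, score in ranked[:3]]
cols = [feature_names.index(name) for name in selected]

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
score = cross_val_score(model, X[:, cols], y, cv=5, scoring="r2").mean()
print("selected:", selected, "cv R2: %.3f" % score)
```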

  20. Investigating the Complexity of NGC 2992 with HETG

    NASA Astrophysics Data System (ADS)

    Canizares, Claude

    2009-09-01

    NGC 2992 is a nearby (z = 0.00771) Seyfert galaxy with a variable 1.5-2 classification. Over the past 30 years, the 2-10 keV continuum flux has varied by a factor of ~20. This was accompanied by complex variability in the multi-component Fe K line emission, which may indicate violent flaring activity in the innermost regions of the accretion disk. By observing NGC 2992 with the HETG, we will obtain the best constraint to date on the FWHM of the narrow, distant-matter Fe K line emission, along with precision measurement of its centroid energy, thereby enabling more accurate modeling of the variable broad component. We will also test models of the soft excess through measurement of narrow absorption lines attributable to a warm absorber and narrow emission lines arising from photoexcitation.

  1. Comparison of Model and Observations of Middle Atmospheric HOx Response to Solar 27-day Cycles: Quantifying Model Uncertainties due to Photochemistry

    NASA Astrophysics Data System (ADS)

    Wang, S.; Li, K. F.; Shia, R. L.; Yung, Y. L.; Sander, S. P.

    2016-12-01

    HO2 and OH (together known as odd hydrogen, HOx) play an important role in middle atmospheric chemistry, in particular in O3 destruction through catalytic HOx reaction cycles. Due to their photochemical production and short chemical lifetimes, HOx species respond rapidly to solar UV irradiance changes during solar cycles, resulting in variability in the corresponding O3 chemistry. Observational evidence for both OH and HO2 variability due to solar cycles has been reported. However, puzzling discrepancies remain. In particular, the large discrepancy between model and observations of the solar 11-year cycle signal in OH, and the significantly different model results obtained when adopting different solar spectral irradiance (SSI) [Wang et al., 2013], suggest that both uncertainties in SSI variability and uncertainties in our current understanding of HOx-O3 chemistry could contribute to the discrepancy. Since the short-term SSI variability (e.g. changes during solar 27-day cycles) has little uncertainty, investigating 27-day solar cycle signals in HOx allows us to simplify the complex problem and to focus on the uncertainties in chemistry alone. We use the Caltech-JPL photochemical model to simulate observed HOx variability during 27-day cycles. The comparison between Aura Microwave Limb Sounder (MLS) observations and our model results (using standard chemistry and "adjusted chemistry", respectively) will be discussed. A better understanding of uncertainties in chemistry will eventually help us separate the contribution of chemistry from contributions of SSI uncertainties to the complex discrepancy between model and observations of OH responses to solar 11-year cycles.

  2. Components of a Model for Forecasting Future Status of Selected Social Indicators. Department of Education Project on Social Indicators. Technical Report No. 3.

    ERIC Educational Resources Information Center

    Collazo, Andres; And Others

    Since a great number of variables influence future educational outcomes, forecasting possible trends is a complex task. One such model, the cross-impact matrix, has been developed. The use of this matrix in forecasting future values of social indicators of educational outcomes is described. Variables associated with educational outcomes are used…

  3. Cost drivers and resource allocation in military health care systems.

    PubMed

    Fulton, Larry; Lasdon, Leon S; McDaniel, Reuben R

    2007-03-01

    This study illustrates the feasibility of incorporating technical efficiency considerations in the funding of military hospitals and identifies the primary drivers for hospital costs. Secondary data collected for 24 U.S.-based Army hospitals and medical centers for the years 2001 to 2003 are the basis for this analysis. Technical efficiency was measured by using data envelopment analysis; subsequently, efficiency estimates were included in logarithmic-linear cost models that specified cost as a function of volume, complexity, efficiency, time, and facility type. These logarithmic-linear models were compared against stochastic frontier analysis models. A parsimonious, three-variable, logarithmic-linear model composed of volume, complexity, and efficiency variables exhibited a strong linear relationship with observed costs (R² = 0.98). This model also proved reliable in forecasting (R² = 0.96). Based on our analysis, as much as $120 million might be reallocated to improve the United States-based Army hospital performance evaluated in this study.
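
    For readers unfamiliar with logarithmic-linear cost models, the sketch below fits one of the same general form, log(cost) ~ log(volume) + complexity + efficiency; the simulated data and coefficients are illustrative, not the Army hospital data:

```python
# Illustrative log-linear cost model; data and coefficients are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 72  # e.g., 24 facilities x 3 years (hypothetical layout)
volume = rng.lognormal(mean=8, sigma=0.5, size=n)
complexity = rng.normal(1.0, 0.2, n)      # case-mix index proxy
efficiency = rng.uniform(0.6, 1.0, n)     # DEA efficiency score proxy
log_cost = (2.0 + 0.9 * np.log(volume) + 1.2 * complexity
            - 0.8 * efficiency + rng.normal(0, 0.1, n))

X = sm.add_constant(np.column_stack([np.log(volume), complexity, efficiency]))
fit = sm.OLS(log_cost, X).fit()
print(fit.rsquared)   # analogous to the strong linear relationship noted above
print(fit.params)     # elasticity of volume, linear effects of the other terms
```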

  4. Addition of equilibrium air to an upwind Navier-Stokes code and other first steps toward a more generalized flow solver

    NASA Technical Reports Server (NTRS)

    Rosen, Bruce S.

    1991-01-01

    An upwind three-dimensional volume Navier-Stokes code is modified to facilitate modeling of complex geometries and flow fields represented by proposed National Aerospace Plane concepts. Code enhancements include an equilibrium air model, a generalized equilibrium gas model and several schemes to simplify treatment of complex geometric configurations. The code is also restructured for inclusion of an arbitrary number of independent and dependent variables. This latter capability is intended for eventual use to incorporate nonequilibrium/chemistry gas models, more sophisticated turbulence and transition models, or other physical phenomena which will require inclusion of additional variables and/or governing equations. Comparisons of computed results with experimental data and results obtained using other methods are presented for code validation purposes. Good correlation is obtained for all of the test cases considered, indicating the success of the current effort.

  5. Assessing the accuracy and stability of variable selection ...

    EPA Pesticide Factsheets

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables are used, or stepwise procedures are employed which iteratively add/remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating dataset consists of the good/poor condition of n=1365 stream survey sites from the 2008/2009 National Rivers and Stream Assessment, and a large set (p=212) of landscape features from the StreamCat dataset. Two types of RF models are compared: a full variable set model with all 212 predictors, and a reduced variable set model selected using a backwards elimination approach. We assess model accuracy using RF's internal out-of-bag estimate, and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors, and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substanti
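
    A minimal sketch of a backwards-elimination loop around a random forest, tracking the out-of-bag score as low-importance predictors are dropped; the synthetic dataset stands in for the stream condition and StreamCat data:

```python
# Backwards elimination for a random forest, monitored by out-of-bag accuracy.
# Synthetic classification data; not the NRSA/StreamCat dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=50, n_informative=8,
                           random_state=0)
features = list(range(X.shape[1]))
history = []

while len(features) > 5:
    rf = RandomForestClassifier(n_estimators=300, oob_score=True,
                                random_state=0, n_jobs=-1)
    rf.fit(X[:, features], y)
    history.append((len(features), rf.oob_score_))
    # Drop the predictor with the smallest importance in the current model
    drop = features[int(np.argmin(rf.feature_importances_))]
    features.remove(drop)

for n_vars, oob in history:
    print(f"{n_vars:3d} predictors -> OOB accuracy {oob:.3f}")
```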

  6. Virtual Levels and Role Models: N-Level Structural Equations Model of Reciprocal Ratings Data.

    PubMed

    Mehta, Paras D

    2018-01-01

    A general latent variable modeling framework called n-Level Structural Equations Modeling (NL-SEM) for dependent data-structures is introduced. NL-SEM is applicable to a wide range of complex multilevel data-structures (e.g., cross-classified, switching membership, etc.). Reciprocal dyadic ratings obtained in round-robin design involve complex set of dependencies that cannot be modeled within Multilevel Modeling (MLM) or Structural Equations Modeling (SEM) frameworks. The Social Relations Model (SRM) for round robin data is used as an example to illustrate key aspects of the NL-SEM framework. NL-SEM introduces novel constructs such as 'virtual levels' that allows a natural specification of latent variable SRMs. An empirical application of an explanatory SRM for personality using xxM, a software package implementing NL-SEM is presented. Results show that person perceptions are an integral aspect of personality. Methodological implications of NL-SEM for the analyses of an emerging class of contextual- and relational-SEMs are discussed.

  7. Predicting fire behavior in palmetto-gallberry fuel complexes

    Treesearch

    W A. Hough; F. A. Albini

    1978-01-01

    Rate of spread, fireline intensity, and flame length can be predicted with reasonable accuracy for backfires and low-intensity head fires in the palmetto-gallberry fuel complex of the South. This fuel complex was characterized and variables were adjusted for use in Rothermel's (1972) spread model. Age of rough, height of understory, percent of area covered by...

  8. Solution Strategies and Achievement in Dutch Complex Arithmetic: Latent Variable Modeling of Change

    ERIC Educational Resources Information Center

    Hickendorff, Marian; Heiser, Willem J.; van Putten, Cornelis M.; Verhelst, Norman D.

    2009-01-01

    In the Netherlands, national assessments at the end of primary school (Grade 6) show a decline of achievement on problems of complex or written arithmetic over the last two decades. The present study aims at contributing to an explanation of the large achievement decrease on complex division, by investigating the strategies students used in…

  9. Correlated receptor transport processes buffer single-cell heterogeneity

    PubMed Central

    Kallenberger, Stefan M.; Unger, Anne L.; Legewie, Stefan; Lymperopoulos, Konstantinos; Eils, Roland

    2017-01-01

    Cells typically vary in their response to extracellular ligands. Receptor transport processes modulate ligand-receptor induced signal transduction and impact the variability in cellular responses. Here, we quantitatively characterized cellular variability in erythropoietin receptor (EpoR) trafficking at the single-cell level based on live-cell imaging and mathematical modeling. Using ensembles of single-cell mathematical models reduced parameter uncertainties and showed that rapid EpoR turnover, transport of internalized EpoR back to the plasma membrane, and degradation of Epo-EpoR complexes were essential for receptor trafficking. EpoR trafficking dynamics in adherent H838 lung cancer cells closely resembled the dynamics previously characterized by mathematical modeling in suspension cells, indicating that dynamic properties of the EpoR system are widely conserved. Receptor transport processes differed by one order of magnitude between individual cells. However, the concentration of activated Epo-EpoR complexes was less variable due to the correlated kinetics of opposing transport processes acting as a buffering system. PMID:28945754

  10. Effects of Topography-driven Micro-climatology on Evaporation

    NASA Astrophysics Data System (ADS)

    Adams, D. D.; Boll, J.; Wagenbrenner, N. S.

    2017-12-01

    The effects of spatial-temporal variation of climatic conditions on evaporation in micro-climates are not well defined. Current spatially-based remote sensing and modeling for evaporation is limited for high resolutions and complex topographies. We investigated the effect of topography-driven micro-climatology on evaporation supported by field measurements and modeling. Fourteen anemometers and thermometers were installed in intersecting transects over the complex topography of the Cook Agronomy Farm, Pullman, WA. WindNinja was used to create 2-D vector maps based on recorded observations for wind. Spatial analysis of vector maps using ArcGIS was performed for analysis of wind patterns and variation. Based on field measurements, wind speed and direction show consequential variability based on hill-slope location in this complex topography. Wind speed and wind direction varied up to threefold and more than 45 degrees, respectively for a given time interval. The use of existing wind models enables prediction of wind variability over the landscape and subsequently topography-driven evaporation patterns relative to wind. The magnitude of the spatial-temporal variability of wind therefore resulted in variable evaporation rates over the landscape. These variations may contribute to uneven crop development patterns observed during the late growth stages of the agricultural crops at the study location. Use of hill-slope location indexes and appropriate methods for estimating actual evaporation support development of methodologies to better define topography-driven heterogeneity in evaporation. The cumulative effects of spatially-variable climatic factors on evaporation are important to quantify the localized water balance and inform precision farming practices.

  11. Observing and modeling dynamics in terrestrial gross primary productivity and phenology from remote sensing: An assessment using in-situ measurements

    NASA Astrophysics Data System (ADS)

    Verma, Manish K.

    Terrestrial gross primary productivity (GPP) is the largest and most variable component of the carbon cycle and is strongly influenced by phenology. Realistic characterization of spatio-temporal variation in GPP and phenology is therefore crucial for understanding dynamics in the global carbon cycle. In the last two decades, remote sensing has become a widely used tool for this purpose. However, no study has comprehensively examined how well remote sensing models capture spatio-temporal patterns in GPP, and validation of remote sensing-based phenology models is limited. Using in-situ data from 144 eddy covariance towers located in all major biomes, I assessed the ability of 10 remote sensing-based methods to capture spatio-temporal variation in GPP at annual and seasonal scales. The models are based on different hypotheses regarding ecophysiological controls on GPP and span a range of structural and computational complexity. The results lead to four main conclusions: (i) at the annual time scale, models were more successful at capturing spatial variability than temporal variability; (ii) at the seasonal scale, models were more successful in capturing average seasonal variability than interannual variability; (iii) simpler models performed as well as or better than complex models; and (iv) the models that were best at explaining seasonal variability in GPP were different from those that were best able to explain variability in annual-scale GPP. Seasonal phenology of vegetation follows bounded growth and decay and is widely modeled using growth functions. However, the specific form of the growth function affects how phenological dynamics are represented in ecosystem and remote sensing-based models. To examine this, four different growth functions (the logistic, Gompertz, Mirror-Gompertz and Richards functions) were assessed using remotely sensed and in-situ data collected at several deciduous forest sites. All of the growth functions provided good statistical representations of in-situ and remote sensing time series. However, the Richards function captured observed asymmetric dynamics that were not captured by the other functions. The timing of key phenophase transitions derived using the Richards function therefore agreed best with observations. This suggests that ecosystem models and remote-sensing algorithms would benefit from using the Richards function to represent phenological dynamics.
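
    The growth functions named above can be written and fitted in a few lines; the parameterizations below (including a generalized-logistic form of the Richards function) are common choices rather than the exact forms used in the thesis, and the "observations" are synthetic:

```python
# Fitting logistic, Gompertz, and Richards growth functions to a synthetic
# green-up time series. Parameterizations are common choices, not necessarily
# the exact forms used in the thesis.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, A, k, t0):
    return A / (1.0 + np.exp(-k * (t - t0)))

def gompertz(t, A, b, k):
    return A * np.exp(-b * np.exp(-k * t))

def richards(t, A, k, t0, nu):
    # Generalized logistic; nu controls the asymmetry of the green-up curve
    return A / (1.0 + nu * np.exp(-k * (t - t0))) ** (1.0 / nu)

doy = np.arange(1, 181, 8.0)                       # day of year, 8-day steps
truth = richards(doy, 0.8, 0.08, 120.0, 0.4)
obs = truth + np.random.default_rng(3).normal(0, 0.02, doy.size)

fits = {
    "logistic": curve_fit(logistic, doy, obs, p0=[0.8, 0.1, 120.0],
                          bounds=([0, 0, 0], [2, 1, 365]))[0],
    "gompertz": curve_fit(gompertz, doy, obs, p0=[0.8, 1.0e4, 0.08],
                          bounds=([0, 1, 0], [2, 1e6, 1]))[0],
    "richards": curve_fit(richards, doy, obs, p0=[0.8, 0.1, 120.0, 1.0],
                          bounds=([0, 0, 0, 0.01], [2, 1, 365, 5]))[0],
}

funcs = {"logistic": logistic, "gompertz": gompertz, "richards": richards}
for name, params in fits.items():
    rmse = np.sqrt(np.mean((funcs[name](doy, *params) - obs) ** 2))
    print(f"{name:9s} RMSE = {rmse:.4f}")
```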

  12. Short-term to seasonal variability in factors driving primary productivity in a shallow estuary: Implications for modeling production

    NASA Astrophysics Data System (ADS)

    Canion, Andy; MacIntyre, Hugh L.; Phipps, Scott

    2013-10-01

    The inputs of primary productivity models may be highly variable on short timescales (hourly to daily) in turbid estuaries, but modeling of productivity in these environments is often implemented with data collected over longer timescales. Daily, seasonal, and spatial variability in primary productivity model parameters: chlorophyll a concentration (Chla), the downwelling light attenuation coefficient (kd), and photosynthesis-irradiance response parameters (Pmchl, αChl) were characterized in Weeks Bay, a nitrogen-impacted shallow estuary in the northern Gulf of Mexico. Variability in primary productivity model parameters in response to environmental forcing, nutrients, and microalgal taxonomic marker pigments were analysed in monthly and short-term datasets. Microalgal biomass (as Chla) was strongly related to total phosphorus concentration on seasonal scales. Hourly data support wind-driven resuspension as a major source of short-term variability in Chla and light attenuation (kd). The empirical relationship between areal primary productivity and a combined variable of biomass and light attenuation showed that variability in the photosynthesis-irradiance response contributed little to the overall variability in primary productivity, and Chla alone could account for 53-86% of the variability in primary productivity. Efforts to model productivity in similar shallow systems with highly variable microalgal biomass may benefit the most by investing resources in improving spatial and temporal resolution of chlorophyll a measurements before increasing the complexity of models used in productivity modeling.

  13. Comparison between splines and fractional polynomials for multivariable model building with continuous covariates: a simulation study with continuous response.

    PubMed

    Binder, Harald; Sauerbrei, Willi; Royston, Patrick

    2013-06-15

    In observational studies, many continuous or categorical covariates may be related to an outcome. Various spline-based procedures or the multivariable fractional polynomial (MFP) procedure can be used to identify important variables and functional forms for continuous covariates. This is the main aim of an explanatory model, as opposed to a model only for prediction. The type of analysis often guides the complexity of the final model. Spline-based procedures and MFP have tuning parameters for choosing the required complexity. To compare model selection approaches, we perform a simulation study in the linear regression context based on a data structure intended to reflect realistic biomedical data. We vary the sample size, variance explained and complexity parameters for model selection. We consider 15 variables. A sample size of 200 (1000) and R² = 0.2 (0.8) is the scenario with the smallest (largest) amount of information. For assessing performance, we consider prediction error, correct and incorrect inclusion of covariates, qualitative measures for judging selected functional forms and further novel criteria. From limited information, a suitable explanatory model cannot be obtained. Prediction performance from all types of models is similar. With a medium amount of information, MFP performs better than splines on several criteria. MFP better recovers simpler functions, whereas splines better recover more complex functions. For a large amount of information and no local structure, MFP and the spline procedures often select similar explanatory models. Copyright © 2012 John Wiley & Sons, Ltd.

  14. COMPLEX VARIABLE BOUNDARY ELEMENT METHOD: APPLICATIONS.

    USGS Publications Warehouse

    Hromadka, T.V.; Yen, C.C.; Guymon, G.L.

    1985-01-01

    The complex variable boundary element method (CVBEM) is used to approximate several potential problems where analytical solutions are known. A modeling result produced from the CVBEM is a measure of relative error in matching the known boundary condition values of the problem. A CVBEM error-reduction algorithm is used to reduce the relative error of the approximation by adding nodal points in boundary regions where error is large. From the test problems, overall error is reduced significantly by utilizing the adaptive integration algorithm.

  15. The extraction of simple relationships in growth factor-specific multiple-input and multiple-output systems in cell-fate decisions by backward elimination PLS regression.

    PubMed

    Akimoto, Yuki; Yugi, Katsuyuki; Uda, Shinsuke; Kudo, Takamasa; Komori, Yasunori; Kubota, Hiroyuki; Kuroda, Shinya

    2013-01-01

    Cells use common signaling molecules for the selective control of downstream gene expression and cell-fate decisions. The relationship between signaling molecules and downstream gene expression and cellular phenotypes is a multiple-input and multiple-output (MIMO) system and is difficult to understand due to its complexity. For example, it has been reported that, in PC12 cells, different types of growth factors activate MAP kinases (MAPKs) including ERK, JNK, and p38, and CREB, for selective protein expression of immediate early genes (IEGs) such as c-FOS, c-JUN, EGR1, JUNB, and FOSB, leading to cell differentiation, proliferation and cell death; however, how multiple-inputs such as MAPKs and CREB regulate multiple-outputs such as expression of the IEGs and cellular phenotypes remains unclear. To address this issue, we employed a statistical method called partial least squares (PLS) regression, which involves a reduction of the dimensionality of the inputs and outputs into latent variables and a linear regression between these latent variables. We measured 1,200 data points for MAPKs and CREB as the inputs and 1,900 data points for IEGs and cellular phenotypes as the outputs, and we constructed the PLS model from these data. The PLS model highlighted the complexity of the MIMO system and growth factor-specific input-output relationships of cell-fate decisions in PC12 cells. Furthermore, to reduce the complexity, we applied a backward elimination method to the PLS regression, in which 60 input variables were reduced to 5 variables, including the phosphorylation of ERK at 10 min, CREB at 5 min and 60 min, AKT at 5 min and JNK at 30 min. The simple PLS model with only 5 input variables demonstrated a predictive ability comparable to that of the full PLS model. The 5 input variables effectively extracted the growth factor-specific simple relationships within the MIMO system in cell-fate decisions in PC12 cells.
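
    A minimal sketch of PLS regression combined with a crude backward-elimination loop over input variables, in the spirit of the approach described above; the inputs and outputs are synthetic stand-ins for the signaling and gene-expression measurements:

```python
# PLS regression with a simple backward-elimination loop over input variables.
# Synthetic multi-input, multi-output data; not the PC12 measurements.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n, p_in, p_out = 120, 20, 5
X = rng.normal(size=(n, p_in))              # e.g., phosphorylation time points
W = np.zeros((p_in, p_out))
W[:4] = rng.normal(size=(4, p_out))         # only a few inputs truly matter
Y = X @ W + rng.normal(0, 0.5, size=(n, p_out))

def cv_r2(columns):
    pls = PLSRegression(n_components=2)
    # Default regressor score is R^2, averaged over the output variables
    return cross_val_score(pls, X[:, columns], Y, cv=5).mean()

columns = list(range(p_in))
while len(columns) > 5:
    # Try removing each remaining variable; keep the removal that hurts least
    scores = [(cv_r2([c for c in columns if c != j]), j) for j in columns]
    best_score, drop = max(scores)
    columns.remove(drop)

print("retained inputs:", columns, "cv R2: %.3f" % cv_r2(columns))
```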

  16. Automated reverse engineering of nonlinear dynamical systems

    PubMed Central

    Bongard, Josh; Lipson, Hod

    2007-01-01

    Complex nonlinear dynamics arise in many fields of science and engineering, but uncovering the underlying differential equations directly from observations poses a challenging task. The ability to symbolically model complex networked systems is key to understanding them, an open problem in many disciplines. Here we introduce for the first time a method that can automatically generate symbolic equations for a nonlinear coupled dynamical system directly from time series data. This method is applicable to any system that can be described using sets of ordinary nonlinear differential equations, and assumes that the (possibly noisy) time series of all variables are observable. Previous automated symbolic modeling approaches of coupled physical systems produced linear models or required a nonlinear model to be provided manually. The advance presented here is made possible by allowing the method to model each (possibly coupled) variable separately, intelligently perturbing and destabilizing the system to extract its less observable characteristics, and automatically simplifying the equations during modeling. We demonstrate this method on four simulated and two real systems spanning mechanics, ecology, and systems biology. Unlike numerical models, symbolic models have explanatory value, suggesting that automated “reverse engineering” approaches for model-free symbolic nonlinear system identification may play an increasing role in our ability to understand progressively more complex systems in the future. PMID:17553966

  17. Automated reverse engineering of nonlinear dynamical systems.

    PubMed

    Bongard, Josh; Lipson, Hod

    2007-06-12

    Complex nonlinear dynamics arise in many fields of science and engineering, but uncovering the underlying differential equations directly from observations poses a challenging task. The ability to symbolically model complex networked systems is key to understanding them, an open problem in many disciplines. Here we introduce for the first time a method that can automatically generate symbolic equations for a nonlinear coupled dynamical system directly from time series data. This method is applicable to any system that can be described using sets of ordinary nonlinear differential equations, and assumes that the (possibly noisy) time series of all variables are observable. Previous automated symbolic modeling approaches of coupled physical systems produced linear models or required a nonlinear model to be provided manually. The advance presented here is made possible by allowing the method to model each (possibly coupled) variable separately, intelligently perturbing and destabilizing the system to extract its less observable characteristics, and automatically simplifying the equations during modeling. We demonstrate this method on four simulated and two real systems spanning mechanics, ecology, and systems biology. Unlike numerical models, symbolic models have explanatory value, suggesting that automated "reverse engineering" approaches for model-free symbolic nonlinear system identification may play an increasing role in our ability to understand progressively more complex systems in the future.

  18. Numerical Modeling in Geodynamics: Success, Failure and Perspective

    NASA Astrophysics Data System (ADS)

    Ismail-Zadeh, A.

    2005-12-01

    A real success in numerical modeling of the dynamics of the Earth can be achieved only by multidisciplinary research teams of experts in geodynamics, applied and pure mathematics, and computer science. Success in numerical modeling is based on the following basic, but simple, rules. (i) People need simplicity most, but they understand intricacies best (B. Pasternak, writer). Start from a simple numerical model, which describes basic physical laws by a set of mathematical equations, and then move to a complex model. Never start from a complex model, because you cannot understand the contribution of each term of the equations to the modeled geophysical phenomenon. (ii) Study the numerical methods behind your computer code. Otherwise it becomes difficult to distinguish true from erroneous solutions to the geodynamic problem, especially when the problem is complex enough. (iii) Test your model against analytical and asymptotic solutions and against simple 2D and 3D model examples. Develop benchmark analyses of different numerical codes and compare numerical results with laboratory experiments. Remember that the numerical tool you employ is not perfect, and there are small bugs in every computer code. Therefore testing is the most important part of your numerical modeling. (iv) Prove (if possible) or learn relevant statements concerning the existence, uniqueness and stability of the solution to the mathematical and discrete problems. Otherwise you can solve an improperly posed problem, and the results of the modeling will be far from the true solution of your model problem. (v) Try to analyze numerical models of a geological phenomenon using as few tuning model variables as possible. Two tuning variables already give enough freedom to constrain your model reasonably well with respect to observations. Data fitting is sometimes quite attractive and can take you far from the principal aim of your numerical modeling: to understand geophysical phenomena. (vi) If the number of tuning model variables is greater than two, carefully test the effect of each of the variables on the modeled phenomenon. Remember: With four exponents I can fit an elephant (E. Fermi, physicist). (vii) Make your numerical model as accurate as possible, but never make great accuracy an aim in itself: Undue precision of computations is the first symptom of mathematical illiteracy (N. Krylov, mathematician). How complex should a numerical model be? A model that images every detail of reality is as useful as a map of scale 1:1 (J. Robinson, economist). This message is quite important for geoscientists who study numerical models of complex geodynamical processes. I believe that geoscientists will never create a model of the real dynamics of the Earth, but we should try to model the dynamics in such a way as to simulate basic geophysical processes and phenomena. Does a particular model have predictive power? Each numerical model has predictive power, otherwise the model is useless. The predictability of the model varies with its complexity. Remember that a solution to the numerical model is an approximate solution to the equations, which have been chosen in the belief that they describe the dynamic processes of the Earth. Hence a numerical model predicts the dynamics of the Earth only as well as the mathematical equations describe those dynamics. What methodological advances are still needed for testable geodynamic modeling? Inverse (time-reverse) numerical modeling and data assimilation are new methodologies in geodynamics. Inverse modeling allows geodynamic models to be tested forward in time using initial conditions restored from present-day observations instead of unknown conditions.

  19. ALGORITHM TO REDUCE APPROXIMATION ERROR FROM THE COMPLEX-VARIABLE BOUNDARY-ELEMENT METHOD APPLIED TO SOIL FREEZING.

    USGS Publications Warehouse

    Hromadka, T.V.; Guymon, G.L.

    1985-01-01

    An algorithm is presented for the numerical solution of the Laplace equation boundary-value problem, which is assumed to apply to soil freezing or thawing. The Laplace equation is numerically approximated by the complex-variable boundary-element method. The algorithm aids in reducing integrated relative error by providing a true measure of modeling error along the solution domain boundary. This measure of error can be used to select locations for adding, removing, or relocating nodal points on the boundary or to provide bounds for the integrated relative error of unknown nodal variable values along the boundary.

  20. Detection of time delays and directional interactions based on time series from complex dynamical systems

    NASA Astrophysics Data System (ADS)

    Ma, Huanfei; Leng, Siyang; Tao, Chenyang; Ying, Xiong; Kurths, Jürgen; Lai, Ying-Cheng; Lin, Wei

    2017-07-01

    Data-based and model-free accurate identification of intrinsic time delays and directional interactions is an extremely challenging problem in complex dynamical systems and their networks reconstruction. A model-free method with new scores is proposed to be generally capable of detecting single, multiple, and distributed time delays. The method is applicable not only to mutually interacting dynamical variables but also to self-interacting variables in a time-delayed feedback loop. Validation of the method is carried out using physical, biological, and ecological models and real data sets. Especially, applying the method to air pollution data and hospital admission records of cardiovascular diseases in Hong Kong reveals the major air pollutants as a cause of the diseases and, more importantly, it uncovers a hidden time delay (about 30-40 days) in the causal influence that previous studies failed to detect. The proposed method is expected to be universally applicable to ascertaining and quantifying subtle interactions (e.g., causation) in complex systems arising from a broad range of disciplines.
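
    The score-based detector proposed in the paper is not reproduced here; as a far simpler stand-in that conveys the task, the sketch below estimates a single delay between two coupled series by scanning lagged correlations on synthetic data:

```python
# Simple lagged-correlation delay estimate (a stand-in, not the paper's method).
import numpy as np

rng = np.random.default_rng(5)
n, true_delay = 2000, 37                    # e.g., a ~37-step (hypothetical) lag
driver = rng.normal(size=n)                 # driver signal
response = np.empty(n)
response[:true_delay] = rng.normal(size=true_delay)
response[true_delay:] = 0.8 * driver[:-true_delay] + rng.normal(0, 1, n - true_delay)

def delayed_correlation(x, y, lag):
    """Correlation between x(t) and y(t + lag)."""
    return np.corrcoef(x[:-lag], y[lag:])[0, 1]

lags = np.arange(1, 100)
corrs = np.array([delayed_correlation(driver, response, k) for k in lags])
print("estimated delay:", lags[np.argmax(np.abs(corrs))])
```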

  1. Evaluating soil carbon in global climate models: benchmarking, future projections, and model drivers

    NASA Astrophysics Data System (ADS)

    Todd-Brown, K. E.; Randerson, J. T.; Post, W. M.; Allison, S. D.

    2012-12-01

    The carbon cycle plays a critical role in how the climate responds to anthropogenic carbon dioxide. To evaluate how well Earth system models (ESMs) from the Climate Model Intercomparison Project (CMIP5) represent the carbon cycle, we examined predictions of current soil carbon stocks from the historical simulation. We compared the soil and litter carbon pools from 17 ESMs with data on soil carbon stocks from the Harmonized World Soil Database (HWSD). We also examined soil carbon predictions for 2100 from 16 ESMs from the rcp85 (highest radiative forcing) simulation to investigate the effects of climate change on soil carbon stocks. In both analyses, we used a reduced complexity model to separate the effects of variation in model drivers from the effects of model parameters on soil carbon predictions. Drivers included NPP, soil temperature, and soil moisture, and the reduced complexity model represented one pool of soil carbon as a function of these drivers. The ESMs predicted global soil carbon totals of 500 to 2980 Pg-C, compared to 1260 Pg-C in the HWSD. This 5-fold variation in predicted soil stocks was a consequence of a 3.4-fold variation in NPP inputs and 3.8-fold variability in mean global turnover times. None of the ESMs correlated well with the global distribution of soil carbon in the HWSD (Pearson's correlation <0.40, RMSE 9-22 kg m⁻²). On a biome level there was a broad range of agreement between the ESMs and the HWSD. Some models predicted HWSD biome totals well (R² = 0.91) while others did not (R² = 0.23). All of the ESM terrestrial decomposition models are structurally similar, with outputs that were well described by a reduced complexity model that included NPP and soil temperature (R² of 0.73-0.93). However, MPI-ESM-LR outputs showed only a moderate fit to this model (R² = 0.51), and CanESM2 outputs were better described by a reduced model that included soil moisture (R² = 0.74). We also found a broad range in soil carbon responses to climate change predicted by the ESMs, with changes of -480 to 230 Pg-C from 2005-2100. All models that reported NPP and heterotrophic respiration showed increases in both of these processes over the simulated period. In two of the models, soils switched from a global sink for carbon to a net source. Of the remaining models, half predicted that soils were a sink for carbon throughout the time period and the other half predicted that soils were a carbon source. Heterotrophic respiration in most of the models from 2005-2100 was well explained by a reduced complexity model dependent on soil carbon, soil temperature, and soil moisture (R² values >0.74). However, MPI-ESM (R² = 0.45) showed only a moderate fit to this model. Our analysis shows that soil carbon predictions from ESMs are highly variable, with much of this variability due to model parameterization and variations in driving variables. Furthermore, our reduced complexity models show that most variation in ESM outputs can be explained by a simple one-pool model with a small number of drivers and parameters. Therefore, agreement between soil carbon predictions across models could improve substantially by reconciling differences in driving variables and the parameters that link soil carbon with environmental drivers. However, it is unclear if this model agreement would reflect what is truly happening in the Earth system.
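
    A reduced-complexity, one-pool soil carbon model of the kind used in this analysis can be written as dC/dt = NPP - k(T, W) * C; the sketch below uses a Q10 temperature response and a simple moisture scalar with illustrative parameter values:

```python
# Generic one-pool soil carbon model: dC/dt = NPP - k(T, W) * C.
# Parameter values and driver trajectories are illustrative only.
import numpy as np

def run_one_pool(npp, soil_temp, soil_moist, k_ref=0.02, q10=1.5,
                 t_ref=15.0, c0=10.0, dt=1.0):
    """Integrate soil carbon C (kg C m-2) forward with time step dt (years).

    npp, soil_temp, soil_moist : arrays of annual driver values
    k_ref : base turnover rate (1/yr) at reference temperature t_ref (deg C)
    """
    c = c0
    out = []
    for p, t, w in zip(npp, soil_temp, soil_moist):
        k = k_ref * q10 ** ((t - t_ref) / 10.0) * np.clip(w, 0.05, 1.0)
        c = c + dt * (p - k * c)
        out.append(c)
    return np.array(out)

years = 95  # e.g., 2005-2100
npp = np.linspace(0.45, 0.55, years)          # kg C m-2 yr-1, rising NPP
soil_temp = np.linspace(14.0, 18.0, years)    # warming soil
soil_moist = np.full(years, 0.7)              # constant moisture scalar

carbon = run_one_pool(npp, soil_temp, soil_moist)
print("change over the run: %+.2f kg C m-2" % (carbon[-1] - carbon[0]))
```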

  2. Water Quality Variable Estimation using Partial Least Squares Regression and Multi-Scale Remote Sensing.

    NASA Astrophysics Data System (ADS)

    Peterson, K. T.; Wulamu, A.

    2017-12-01

    Water, essential to all living organisms, is one of the Earth's most precious resources. Remote sensing offers an ideal approach to monitoring water quality compared with traditional in-situ techniques, which are highly time and resource consuming. Using a multi-scale approach, data from handheld spectroscopy, UAS-based hyperspectral imaging, and satellite multispectral images were collected in coordination with in-situ water quality samples for two midwestern watersheds. The remote sensing data were modeled and correlated to the in-situ water quality variables, including chlorophyll content (Chl), turbidity, and total dissolved solids (TDS), using Normalized Difference Spectral Indices (NDSI) and Partial Least Squares Regression (PLSR). The results of the study supported the original hypothesis that correlating water quality variables with remotely sensed data benefits greatly from the use of more complex modeling and regression techniques such as PLSR. The PLSR analysis yielded much higher R² values for all variables than NDSI. The combination of NDSI and PLSR analysis also identified key wavelengths that aligned with previous studies' findings. This research demonstrates the advantages of, and the future for, complex modeling and machine learning techniques to improve water quality variable estimation from spectral data.
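
    An NDSI band-pair search is straightforward to sketch: compute (R_i - R_j)/(R_i + R_j) for every band pair and rank pairs by correlation with an in-situ variable; the spectra and chlorophyll values below are simulated:

```python
# NDSI band-pair search against an in-situ water quality variable.
# Reflectance spectra and chlorophyll values are simulated, not field data.
import numpy as np

rng = np.random.default_rng(6)
n_samples, n_bands = 40, 60
wavelengths = np.linspace(400, 900, n_bands)           # nm
reflectance = rng.uniform(0.02, 0.2, size=(n_samples, n_bands))
chl = rng.uniform(2, 60, n_samples)                    # ug/L, in-situ Chl-a
# Imprint a weak Chl signal near 700 nm so one band pair stands out
reflectance[:, 36] += 0.002 * chl

best = (0.0, None)
for i in range(n_bands):
    for j in range(i + 1, n_bands):
        ndsi = (reflectance[:, i] - reflectance[:, j]) / (
            reflectance[:, i] + reflectance[:, j])
        r2 = np.corrcoef(ndsi, chl)[0, 1] ** 2
        if r2 > best[0]:
            best = (r2, (wavelengths[i], wavelengths[j]))

print("best NDSI pair (nm): %.0f / %.0f, R2 = %.2f"
      % (best[1][0], best[1][1], best[0]))
```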

  3. Family Environment and Cognitive Development: Twelve Analytic Models

    ERIC Educational Resources Information Center

    Walberg, Herbert J.; Marjoribanks, Kevin

    1976-01-01

    The review indicates that refined measures of the family environment and the use of complex statistical models increase the understanding of the relationships between socioeconomic status, sibling variables, family environment, and cognitive development. (RC)

  4. Effects of head-down bed rest on complex heart rate variability: Response to LBNP testing

    NASA Technical Reports Server (NTRS)

    Goldberger, Ary L.; Mietus, Joseph E.; Rigney, David R.; Wood, Margie L.; Fortney, Suzanne M.

    1994-01-01

    Head-down bed rest is used to model physiological changes during spaceflight. We postulated that bed rest would decrease the degree of complex physiological heart rate variability. We analyzed continuous heart rate data from digitized Holter recordings in eight healthy female volunteers (age 28-34 yr) who underwent a 13-day 6 deg head-down bed rest study with serial lower body negative pressure (LBNP) trials. Heart rate variability was measured on 4-min data sets using conventional time and frequency domain measures as well as with a new measure of signal 'complexity' (approximate entropy). Data were obtained pre-bed rest (control), during bed rest (day 4 and day 9 or 11), and 2 days post-bed rest (recovery). Tolerance to LBNP was significantly reduced on both bed rest days vs. pre-bed rest. Heart rate variability was assessed at peak LBNP. Heart rate approximate entropy was significantly decreased at day 4 and day 9 or 11, returning toward normal during recovery. Heart rate standard deviation and the ratio of high- to low-power frequency did not change significantly. We conclude that short-term bed rest is associated with a decrease in the complex variability of heart rate during LBNP testing in healthy young adult women. Measurement of heart rate complexity, using a method derived from nonlinear dynamics ('chaos theory'), may provide a sensitive marker of this loss of physiological variability, complementing conventional time and frequency domain statistical measures.
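
    The "complexity" measure referred to above, approximate entropy, can be sketched in plain NumPy; m = 2 and r = 0.2 * SD follow common conventions, and the signals below are simulated rather than RR-interval data:

```python
# Approximate entropy (ApEn) of a time series, with self-matches included
# as in the standard definition. Signals are simulated for illustration.
import numpy as np

def approximate_entropy(x, m=2, r=None):
    x = np.asarray(x, dtype=float)
    n = len(x)
    if r is None:
        r = 0.2 * x.std()

    def phi(m):
        # Embed the series in m dimensions
        emb = np.array([x[i:i + m] for i in range(n - m + 1)])
        # Chebyshev distance between all template pairs
        dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        c = (dist <= r).mean(axis=1)       # fraction of matches per template
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

rng = np.random.default_rng(7)
regular = np.sin(np.linspace(0, 40 * np.pi, 1000))     # highly regular signal
irregular = rng.normal(size=1000)                      # irregular signal
print("ApEn regular   %.3f" % approximate_entropy(regular))
print("ApEn irregular %.3f" % approximate_entropy(irregular))
```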

  5. Quantification for complex assessment: uncertainty estimation in final year project thesis assessment

    NASA Astrophysics Data System (ADS)

    Kim, Ho Sung

    2013-12-01

    A quantitative method for estimating an expected uncertainty (reliability and validity) in assessment results arising from the relativity between four variables, viz examiner's expertise, examinee's expertise achieved, assessment task difficulty and examinee's performance, was developed for the complex assessment applicable to final year project thesis assessment including peer assessment. A guide map can be generated by the method for finding expected uncertainties prior to the assessment implementation with a given set of variables. It employs a scale for visualisation of expertise levels, derivation of which is based on quantified clarities of mental images for levels of the examiner's expertise and the examinee's expertise achieved. To identify the relevant expertise areas that depend on the complexity in assessment format, a graphical continuum model was developed. The continuum model consists of assessment task, assessment standards and criterion for the transition towards the complex assessment owing to the relativity between implicitness and explicitness and is capable of identifying areas of expertise required for scale development.

  6. Flexible and structured survival model for a simultaneous estimation of non-linear and non-proportional effects and complex interactions between continuous variables: Performance of this multidimensional penalized spline approach in net survival trend analysis.

    PubMed

    Remontet, Laurent; Uhry, Zoé; Bossard, Nadine; Iwaz, Jean; Belot, Aurélien; Danieli, Coraline; Charvat, Hadrien; Roche, Laurent

    2018-01-01

    Cancer survival trend analyses are essential to describe accurately the way medical practices impact patients' survival according to the year of diagnosis. To this end, survival models should be able to account simultaneously for non-linear and non-proportional effects and for complex interactions between continuous variables. However, in the statistical literature, there is no consensus yet on how to build such models that should be flexible but still provide smooth estimates of survival. In this article, we tackle this challenge by smoothing the complex hypersurface (time since diagnosis, age at diagnosis, year of diagnosis, and mortality hazard) using a multidimensional penalized spline built from the tensor product of the marginal bases of time, age, and year. Considering this penalized survival model as a Poisson model, we assess the performance of this approach in estimating the net survival with a comprehensive simulation study that reflects simple and complex realistic survival trends. The bias was generally small and the root mean squared error was good and often similar to that of the true model that generated the data. This parametric approach offers many advantages and interesting prospects (such as forecasting) that make it an attractive and efficient tool for survival trend analyses.

  7. Wuestite (Fe(1-x)O) - A review of its defect structure and physical properties

    NASA Technical Reports Server (NTRS)

    Hazen, R. M.; Jeanloz, R.

    1984-01-01

    Such complexities of the Wustite structure as nonstoichiometry, ferric iron variable site distribution, long and short range ordering, and exsolution, yield complex physical properties. Magnesiowustite, a phase which has been suggested to occur in the earth's lower mantle, is also expected to exhibit many of these complexities. Geophysical models including the properties of (Mg, Fe)O should accordingly take into account the uncertainties associated with the synthesis and measurement of iron-rich oxides. Given the variability of the Fe(1-x)O structure, it is important that future researchers define the structural state and extent of exsolution of their samples.

  8. INTERDISCIPLINARY PHYSICS AND RELATED AREAS OF SCIENCE AND TECHNOLOGY: Transition Features from Simplicity-Universality to Complexity-Diversification Under UHNTF

    NASA Astrophysics Data System (ADS)

    Fang, Jin-Qing; Li, Yong

    2010-02-01

    A large unified hybrid network model with variable speed growth (LUHNM-VSG) is proposed as the third model of the unified hybrid network theoretical framework (UHNTF). A hybrid growth ratio vg of the deterministic linking number to the random linking number and a variable speed growth index α are introduced. The main effects of vg and α on the topological transition features of the LUHNM-VSG are revealed. For comparison with the other models, we construct a network complexity pyramid with seven levels, in which simplicity-universality increases but complexity-diversity decreases from the bottom level-1 to the top level-7 of the pyramid. The transition relations between the levels depend on the matching of the four hybrid ratios (dr, fd, gr, vg). Thus most network models can be investigated in a unified way via these four hybrid ratios. The LUHNM-VSG, as level-1 of the pyramid, comes much closer to describing real-world networks and has potential applications.

  9. Comparative study of the Aristotle Comprehensive Complexity and the Risk Adjustment in Congenital Heart Surgery scores.

    PubMed

    Bojan, Mirela; Gerelli, Sébastien; Gioanni, Simone; Pouard, Philippe; Vouhé, Pascal

    2011-09-01

    The Aristotle Comprehensive Complexity (ACC) and the Risk Adjustment in Congenital Heart Surgery (RACHS-1) scores have been proposed for complexity adjustment in the analysis of outcome after congenital heart surgery. Previous studies found RACHS-1 to be a better predictor of outcome than the Aristotle Basic Complexity score. We compared the ability to predict operative mortality and morbidity between ACC, the latest update of the Aristotle method and RACHS-1. Morbidity was assessed by length of intensive care unit stay. We retrospectively enrolled patients undergoing congenital heart surgery. We modeled each score as a continuous variable, mortality as a binary variable, and length of stay as a censored variable. We compared performance between mortality and morbidity models using likelihood ratio tests for nested models and paired concordance statistics. Among all 1,384 patients enrolled, 30-day mortality rate was 3.5% and median length of intensive care unit stay was 3 days. Both scores strongly related to mortality, but ACC made better prediction than RACHS-1; c-indexes 0.87 (0.84, 0.91) vs 0.75 (0.65, 0.82). Both scores related to overall length of stay only during the first postoperative week, but ACC made better predictions than RACHS-1; U statistic=0.22, p<0.001. No significant difference was noted after adjusting RACHS-1 models on age, prematurity, and major extracardiac abnormalities. The ACC was a better predictor of operative mortality and length of intensive care unit stay than RACHS-1. In order to achieve similar performance, regression models including RACHS-1 need to be further adjusted on age, prematurity, and major extracardiac abnormalities. Copyright © 2011 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.

  10. Modeling Effects of Temperature, Soil, Moisture, Nutrition and Variety As Determinants of Severity of Pythium Damping-Off and Root Disease in Subterranean Clover

    PubMed Central

    You, Ming P.; Rensing, Kelly; Renton, Michael; Barbetti, Martin J.

    2017-01-01

    Subterranean clover (Trifolium subterraneum) is a critical pasture legume in Mediterranean regions of southern Australia and elsewhere, including Mediterranean-type climatic regions in Africa, Asia, Australia, Europe, North America, and South America. Pythium damping-off and root disease caused by Pythium irregulare is a significant threat to subterranean clover in Australia and a study was conducted to define how environmental factors (viz. temperature, soil type, moisture and nutrition) as well as variety, influence the extent of damping-off and root disease as well as subterranean clover productivity under challenge by this pathogen. Relationships were statistically modeled using linear and generalized linear models and boosted regression trees. Modeling found complex relationships between explanatory variables and the extent of Pythium damping-off and root rot. Linear modeling identified high-level (4 or 5-way) significant interactions for each dependent variable (dry shoot and root weight, emergence, tap and lateral root disease index). Furthermore, all explanatory variables (temperature, soil, moisture, nutrition, variety) were found significant as part of some interaction within these models. A significant five-way interaction between all explanatory variables was found for both dry shoot and root dry weights, and a four way interaction between temperature, soil, moisture, and nutrition was found for both tap and lateral root disease index. A second approach to modeling using boosted regression trees provided support for and helped clarify the complex nature of the relationships found in linear models. All explanatory variables showed at least 5% relative influence on each of the five dependent variables. All models indicated differences due to soil type, with the sand-based soil having either higher weights, greater emergence, or lower disease indices; while lowest weights and less emergence, as well as higher disease indices, were found for loam soil and low temperature. There was more severe tap and lateral root rot disease in higher moisture situations. PMID:29184544

  11. Using a Bayesian network to predict barrier island geomorphologic characteristics

    USGS Publications Warehouse

    Gutierrez, Ben; Plant, Nathaniel G.; Thieler, E. Robert; Turecek, Aaron

    2015-01-01

    Quantifying geomorphic variability of coastal environments is important for understanding and describing the vulnerability of coastal topography, infrastructure, and ecosystems to future storms and sea level rise. Here we use a Bayesian network (BN) to test the importance of multiple interactions between barrier island geomorphic variables. This approach models complex interactions and handles uncertainty, which is intrinsic to future sea level rise, storminess, or anthropogenic processes (e.g., beach nourishment and other forms of coastal management). The BN was developed and tested at Assateague Island, Maryland/Virginia, USA, a barrier island with sufficient geomorphic and temporal variability to evaluate our approach. We tested the ability to predict dune height, beach width, and beach height variables using inputs that included longer-term, larger-scale, or external variables (historical shoreline change rates, distances to inlets, barrier width, mean barrier elevation, and anthropogenic modification). Data sets from three different years spanning nearly a decade sampled substantial temporal variability and serve as a proxy for analysis of future conditions. We show that distinct geomorphic conditions are associated with different long-term shoreline change rates and that the most skillful predictions of dune height, beach width, and beach height depend on including multiple input variables simultaneously. The predictive relationships are robust to variations in the amount of input data and to variations in model complexity. The resulting model can be used to evaluate scenarios related to coastal management plans and/or future scenarios where shoreline change rates may differ from those observed historically.

  12. Exploring uncertainty and model predictive performance concepts via a modular snowmelt-runoff modeling framework

    Treesearch

    Tyler Jon Smith; Lucy Amanda Marshall

    2010-01-01

    Model selection is an extremely important aspect of many hydrologic modeling studies because of the complexity, variability, and uncertainty that surrounds the current understanding of watershed-scale systems. However, development and implementation of a complete precipitation-runoff modeling framework, from model selection to calibration and uncertainty analysis, are...

  13. Introduction to the special section on mixture modeling in personality assessment.

    PubMed

    Wright, Aidan G C; Hallquist, Michael N

    2014-01-01

    Latent variable models offer a conceptual and statistical framework for evaluating the underlying structure of psychological constructs, including personality and psychopathology. Complex structures that combine or compare categorical and dimensional latent variables can be accommodated using mixture modeling approaches, which provide a powerful framework for testing nuanced theories about psychological structure. This special series includes introductory primers on cross-sectional and longitudinal mixture modeling, in addition to empirical examples applying these techniques to real-world data collected in clinical settings. This group of articles is designed to introduce personality assessment scientists and practitioners to a general latent variable framework that we hope will stimulate new research and application of mixture models to the assessment of personality and its pathology.

  14. A data-driven approach to identify controls on global fire activity from satellite and climate observations (SOFIA V1)

    NASA Astrophysics Data System (ADS)

    Forkel, Matthias; Dorigo, Wouter; Lasslop, Gitta; Teubner, Irene; Chuvieco, Emilio; Thonicke, Kirsten

    2017-12-01

    Vegetation fires affect human infrastructures, ecosystems, global vegetation distribution, and atmospheric composition. However, the climatic, environmental, and socioeconomic factors that control global fire activity in vegetation are only poorly understood and are represented in global process-oriented vegetation-fire models with varying complexity and formulation. Data-driven model approaches such as machine learning algorithms have successfully been used to identify and better understand controlling factors for fire activity. However, such machine learning models cannot be easily adapted or even implemented within process-oriented global vegetation-fire models. To overcome this gap between machine learning-based approaches and process-oriented global fire models, we introduce here a new flexible data-driven fire modelling approach (Satellite Observations to predict FIre Activity, SOFIA approach version 1). SOFIA models use several predictor variables and functional relationships to estimate burned area and can be easily adapted within more complex process-oriented vegetation-fire models. We created an ensemble of SOFIA models to test the importance of several predictor variables. SOFIA models achieve the highest performance in predicting burned area if they account for a direct restriction of fire activity under wet conditions and if they include a land cover-dependent restriction or allowance of fire activity by vegetation density and biomass. The use of vegetation optical depth data from microwave satellite observations, a proxy for vegetation biomass and water content, yields higher model performance than commonly used vegetation variables from optical sensors. We further analyse spatial patterns of the sensitivity between anthropogenic, climate, and vegetation predictor variables and burned area. We finally discuss how multiple observational datasets on climate, hydrological, vegetation, and socioeconomic variables, together with data-driven modelling and model-data integration approaches, can guide the future development of global process-oriented vegetation-fire models.

  15. Differences in aquatic habitat quality as an impact of one- and two-dimensional hydrodynamic model simulated flow variables

    NASA Astrophysics Data System (ADS)

    Benjankar, R. M.; Sohrabi, M.; Tonina, D.; McKean, J. A.

    2013-12-01

    Aquatic habitat models utilize flow variables which may be predicted with one-dimensional (1D) or two-dimensional (2D) hydrodynamic models to simulate aquatic habitat quality. Studies focusing on the effects of hydrodynamic model dimensionality on predicted aquatic habitat quality are limited. Here we present the analysis of the impact of flow variables predicted with 1D and 2D hydrodynamic models on simulated spatial distribution of habitat quality and Weighted Usable Area (WUA) for fall-spawning Chinook salmon. Our study focuses on three river systems located in central Idaho (USA), which are a straight and pool-riffle reach (South Fork Boise River), small pool-riffle sinuous streams in a large meadow (Bear Valley Creek) and a steep-confined plane-bed stream with occasional deep forced pools (Deadwood River). We consider low and high flows in simple and complex morphologic reaches. Results show that 1D and 2D modeling approaches have effects on both the spatial distribution of the habitat and WUA for both discharge scenarios, but we did not find noticeable differences between complex and simple reaches. In general, the differences in WUA were small, but depended on stream type. Nevertheless, spatially distributed habitat quality difference is considerable in all streams. The steep-confined plane bed stream had larger differences between aquatic habitat quality defined with 1D and 2D flow models compared to results for streams with well defined macro-topographies, such as pool-riffle bed forms. KEY WORDS: one- and two-dimensional hydrodynamic models, habitat modeling, weighted usable area (WUA), hydraulic habitat suitability, high and low discharges, simple and complex reaches

  16. Extended q-Gaussian and q-exponential distributions from gamma random variables

    NASA Astrophysics Data System (ADS)

    Budini, Adrián A.

    2015-05-01

    The family of q-Gaussian and q-exponential probability densities fit the statistical behavior of diverse complex self-similar nonequilibrium systems. These distributions, independently of the underlying dynamics, can rigorously be obtained by maximizing Tsallis "nonextensive" entropy under appropriate constraints, as well as from superstatistical models. In this paper we provide an alternative and complementary scheme for deriving these objects. We show that q-Gaussian and q-exponential random variables can always be expressed as a function of two statistically independent gamma random variables with the same scale parameter. Their shape index determines the complexity q parameter. This result also allows us to define an extended family of asymmetric q-Gaussian and modified q-exponential densities, which reduce to the standard ones when the shape parameters are the same. Furthermore, we demonstrate that a simple change of variables always allows relating any of these distributions with a beta stochastic variable. The extended distributions are applied in the statistical description of different complex dynamics such as log-return signals in financial markets and motion of point defects in a fluid flow.
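
    As a sketch of the superstatistical route mentioned above (an assumption: this is the standard construction, not necessarily the paper's exact gamma-variable transform), an exponential variable whose rate is itself gamma-distributed has a Lomax (Pareto II) marginal, i.e. a q-exponential with q > 1 whose complexity parameter is fixed by the gamma shape index.

    ```python
    # Exponential samples with a gamma-distributed rate -> Lomax / q-exponential marginal.
    import numpy as np

    rng = np.random.default_rng(2)
    shape, scale = 3.0, 1.0                  # gamma parameters of the fluctuating rate
    rates = rng.gamma(shape, scale, size=100_000)
    x = rng.exponential(1.0 / rates)         # conditionally exponential samples

    # Lomax tail exponent -(shape + 1) matched to -1/(q - 1) (one common identification)
    q = (shape + 2.0) / (shape + 1.0)
    theory_mean = 1.0 / (scale * (shape - 1.0))   # valid for shape > 1
    print(f"q = {q:.3f}; sample mean = {x.mean():.3f} (theory: {theory_mean:.3f})")
    ```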

  17. Common quandaries and their practical solutions in Bayesian network modeling

    Treesearch

    Bruce G. Marcot

    2017-01-01

    Use and popularity of Bayesian network (BN) modeling have greatly expanded in recent years, but many common problems remain. Here, I summarize key problems in BN model construction and interpretation, along with suggested practical solutions. Problems in BN model construction include parameterizing probability values, variable definition, complex network structures,...

  18. Modeling the Components of an Economy as a Complex Adaptive System

    DTIC Science & Technology

    principles of constrained optimization and fails to see economic variables as part of an interconnected network. While tools for forecasting economic...data sets such as the stock market. This research portrays the stock market as one component of a networked system of economic variables, with the

  19. Modeling Coast Redwood Variable Retention Management Regimes

    Treesearch

    John-Pascal Berrill; Kevin O'Hara

    2007-01-01

    Variable retention is a flexible silvicultural system that provides forest managers with an alternative to clearcutting. While much of the standing volume is removed in one harvesting operation, residual stems are retained to provide structural complexity and wildlife habitat functions, or to accrue volume before removal during subsequent stand entries. The residual...

  20. PBSM3D: A finite volume, scalar-transport blowing snow model for use with variable resolution meshes

    NASA Astrophysics Data System (ADS)

    Marsh, C.; Wayand, N. E.; Pomeroy, J. W.; Wheater, H. S.; Spiteri, R. J.

    2017-12-01

    Blowing snow redistribution results in heterogeneous snowcovers that are ubiquitous in cold, windswept environments. Capturing this spatial and temporal variability is important for melt and runoff simulations. Point scale blowing snow transport models are difficult to apply in fully distributed hydrological models due to landscape heterogeneity and complex wind fields. Many existing distributed snow transport models have empirical wind flow and/or simplified wind direction algorithms that perform poorly in calculating snow redistribution where there are divergent wind flows, sharp topography, and over large spatial extents. Herein, a steady-state scalar transport model is discretized using the finite volume method (FVM), using parameterizations from the Prairie Blowing Snow Model (PBSM). PBSM has been applied in hydrological response units and grids to prairie, arctic, glacier, and alpine terrain and shows a good capability to represent snow redistribution over complex terrain. The FVM discretization takes advantage of the variable resolution mesh in the Canadian Hydrological Model (CHM) to ensure efficient calculations over small and large spatial extents. Variable resolution unstructured meshes preserve surface heterogeneity but result in fewer computational elements versus high-resolution structured (raster) grids. Snowpack, soil moisture, and streamflow observations were used to evaluate CHM-modelled outputs in a sub-arctic and an alpine basin. Newly developed remotely sensed snowcover indices allowed for validation over large basins. CHM simulations of snow hydrology were improved by inclusion of the blowing snow model. The results demonstrate the key role of snow transport processes in creating pre-melt snowcover heterogeneity and therefore governing post-melt soil moisture and runoff generation dynamics.

  1. Developing, delivering and evaluating primary mental health care: the co-production of a new complex intervention.

    PubMed

    Reeve, Joanne; Cooper, Lucy; Harrington, Sean; Rosbottom, Peter; Watkins, Jane

    2016-09-06

    Health services face the challenges created by complex problems, and so need complex intervention solutions. However, they also experience ongoing difficulties in translating findings from research in this area into quality-improvement changes on the ground. BounceBack was a service development innovation project that sought to examine this issue through the implementation and evaluation, in a primary care setting, of a novel complex intervention. The project was a collaboration between a local mental health charity, an academic unit, and GP practices. The aim was to translate the charity's model of care into practice-based evidence describing delivery and impact. Normalisation Process Theory (NPT) was used to support the implementation of the new model of primary mental health care into six GP practices. An integrated process evaluation assessed the process and impact of care. Implementation quickly stalled as we identified problems with the described model of care when applied in a changing and variable primary care context. The team therefore switched to using the NPT framework to support the systematic identification and modification of the components of the complex intervention, including the core components that made it distinct (the consultation approach) and the variable components (organisational issues) that made it work in practice. The extra work significantly reduced the time available for outcome evaluation. However, findings demonstrated moderately successful implementation of the model and a suggestion of hypothesised changes in outcomes. The BounceBack project demonstrates the development of a complex intervention from practice. It highlights the use of Normalisation Process Theory to support development, and not just implementation, of a complex intervention; and describes the use of the research process in the generation of practice-based evidence. Implications for future translational complex intervention research supporting practice change through scholarship are discussed.

  2. A network-based approach for semi-quantitative knowledge mining and its application to yield variability

    NASA Astrophysics Data System (ADS)

    Schauberger, Bernhard; Rolinski, Susanne; Müller, Christoph

    2016-12-01

    Variability of crop yields is detrimental for food security. Under climate change its amplitude is likely to increase, thus it is essential to understand the underlying causes and mechanisms. Crop models are the primary tool to project future changes in crop yields under climate change. A systematic overview of drivers and mechanisms of crop yield variability (YV) can thus inform crop model development and facilitate improved understanding of climate change impacts on crop yields. Yet there is a vast body of literature on crop physiology and YV, which makes a prioritization of mechanisms for implementation in models challenging. Therefore this paper takes on a novel approach to systematically mine and organize existing knowledge from the literature. The aim is to identify important mechanisms lacking in models, which can help to set priorities in model improvement. We structure knowledge from the literature in a semi-quantitative network. This network consists of complex interactions between growing conditions, plant physiology and crop yield. We utilize the resulting network structure to assign relative importance to causes of YV and related plant physiological processes. As expected, our findings confirm existing knowledge, in particular on the dominant role of temperature and precipitation, but also highlight other important drivers of YV. More importantly, our method allows for identifying the relevant physiological processes that transmit variability in growing conditions to variability in yield. We can identify explicit targets for the improvement of crop models. The network can additionally guide model development by outlining complex interactions between processes and by easily retrieving quantitative information for each of the 350 interactions. We show the validity of our network method as a structured, consistent and scalable dictionary of literature. The method can easily be applied to many other research fields.

  3. Statistical and Biophysical Models for Predicting Total and Outdoor Water Use in Los Angeles

    NASA Astrophysics Data System (ADS)

    Mini, C.; Hogue, T. S.; Pincetl, S.

    2012-04-01

    Modeling water demand is a complex exercise in the choice of the functional form, techniques, and variables to integrate in the model. The goal of the current research is to identify the determinants that control total and outdoor residential water use in semi-arid cities and to utilize that information in the development of statistical and biophysical models that can forecast spatial and temporal urban water use. The City of Los Angeles is unique in its highly diverse socio-demographic, economic, and cultural characteristics across neighborhoods, which introduces significant challenges in modeling water use. Increasing climate variability also contributes to uncertainties in water use predictions in urban areas. Monthly individual water use records were acquired from the Los Angeles Department of Water and Power (LADWP) for the 2000 to 2010 period. Study predictors of residential water use include socio-demographic, economic, climate, and landscaping variables at the zip code level collected from the US Census database. Climate variables are estimated from ground-based observations and calculated at the centroid of each zip code by the inverse-distance weighting method. Remotely-sensed products of vegetation biomass and landscape land cover are also utilized. Two linear regression models were developed based on the panel data and variables described: a pooled-OLS regression model and a linear mixed effects model. Both models show income per capita and the percentage of landscaped area in each zip code as statistically significant predictors. The pooled-OLS model tends to over-estimate higher water use zip codes, and both models provide similar RMSE values. Outdoor water use was estimated at the census tract level as the residual between total water use and indoor use. This residual is being compared with the output from a biophysical model including tree and grass cover areas, climate variables, and estimates of evapotranspiration at very high spatial resolution. A genetic algorithm based model (Shuffled Complex Evolution-UA; SCE-UA) is also being developed to provide estimates of the prediction and parameter uncertainties and to compare against the linear regression models. Ultimately, models will be selected to undertake predictions for a range of climate change and landscape scenarios. Finally, project results will contribute to a better understanding of water demand to help predict future water use and implement targeted landscaping conservation programs to maintain sustainable water needs for a growing population under uncertain climate variability.
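
    A hedged sketch of the two model forms named above, pooled OLS and a linear mixed-effects model with a random intercept per zip code, using statsmodels. The variable names, units, and simulated panel data are assumptions, not the LADWP records.

    ```python
    # Illustrative panel-data sketch: pooled OLS vs. random-intercept mixed model.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(3)
    n_zip, n_month = 50, 24
    df = pd.DataFrame({
        "zipcode":       np.repeat(np.arange(n_zip), n_month),
        "income":        np.repeat(rng.normal(40, 10, n_zip), n_month),    # per-capita income (k$)
        "pct_landscape": np.repeat(rng.uniform(10, 60, n_zip), n_month),   # % landscaped area
        "temp":          np.tile(rng.normal(20, 5, n_month), n_zip),       # monthly mean temp (C)
    })
    zip_effect = np.repeat(rng.normal(0, 5, n_zip), n_month)               # zip-level heterogeneity
    df["water_use"] = (50 + 0.8 * df["income"] + 1.2 * df["pct_landscape"]
                       + 2.0 * df["temp"] + zip_effect + rng.normal(0, 10, len(df)))

    pooled = smf.ols("water_use ~ income + pct_landscape + temp", data=df).fit()
    mixed = smf.mixedlm("water_use ~ income + pct_landscape + temp",
                        data=df, groups=df["zipcode"]).fit()
    print(pooled.params)
    print(mixed.params)
    ```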

  4. Initial fractal exponent of heart-rate variability is associated with success of early resuscitation in patients with severe sepsis or septic shock: a prospective cohort study

    PubMed Central

    Brown, Samuel M.; Tate, Quinn; Jones, Jason P.; Knox, Daniel; Kuttler, Kathryn G.; Lanspa, Michael; Rondina, Matthew T.; Grissom, Colin K.; Behera, Subhasis; Mathews, V.J.; Morris, Alan

    2013-01-01

    Introduction: Heart-rate variability reflects autonomic nervous system tone as well as the overall health of the baroreflex system. We hypothesized that loss of complexity in heart-rate variability upon ICU admission would be associated with unsuccessful early resuscitation of sepsis. Methods: We prospectively enrolled patients admitted to ICUs with severe sepsis or septic shock from 2009 to 2011. We studied 30 minutes of EKG, sampled at 500 Hz, at ICU admission and calculated heart-rate complexity via detrended fluctuation analysis. Primary outcome was vasopressor independence at 24 hours after ICU admission. Secondary outcome was 28-day mortality. Results: We studied 48 patients, of whom 60% were vasopressor independent at 24 hours. Five (10%) died within 28 days. The ratio of fractal alpha parameters was associated with both vasopressor independence and 28-day mortality (p=0.04) after controlling for mean heart rate. In the optimal model, SOFA score and the long-term fractal alpha parameter were associated with vasopressor independence. Conclusions: Loss of complexity in heart rate variability is associated with worse outcome early in severe sepsis and septic shock. Further work should evaluate whether complexity of heart rate variability (HRV) could guide treatment in sepsis. PMID:23958243
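
    Below is a generic sketch of detrended fluctuation analysis (DFA), the method named in the abstract: integrate the mean-removed series, detrend it within windows of increasing size, and read the scaling exponent alpha from the log-log slope of the fluctuation function. The synthetic RR-interval series and window sizes are assumptions, not the study's data or code.

    ```python
    # Minimal DFA implementation with local linear detrending.
    import numpy as np

    def dfa_alpha(x, scales):
        y = np.cumsum(x - np.mean(x))                # integrated, mean-removed series
        fluct = []
        for n in scales:
            n_seg = len(y) // n
            f2 = []
            for i in range(n_seg):
                seg = y[i * n:(i + 1) * n]
                t = np.arange(n)
                trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear detrend
                f2.append(np.mean((seg - trend) ** 2))
            fluct.append(np.sqrt(np.mean(f2)))
        slope, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
        return slope

    rr = np.random.default_rng(4).normal(0.8, 0.05, 4000)   # synthetic RR intervals (s)
    print("alpha =", round(dfa_alpha(rr, scales=[4, 8, 16, 32, 64, 128]), 2))
    ```

    For uncorrelated noise like this synthetic series, alpha is close to 0.5; healthy heart-rate series typically show alpha nearer to 1.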

  5. Computer modeling and simulation of human movement. Applications in sport and rehabilitation.

    PubMed

    Neptune, R R

    2000-05-01

    Computer modeling and simulation of human movement plays an increasingly important role in sport and rehabilitation, with applications ranging from sport equipment design to understanding pathologic gait. The complex dynamic interactions within the musculoskeletal and neuromuscular systems make analyzing human movement with existing experimental techniques difficult but computer modeling and simulation allows for the identification of these complex interactions and causal relationships between input and output variables. This article provides an overview of computer modeling and simulation and presents an example application in the field of rehabilitation.

  6. Modelling Pseudocalanus elongatus stage-structured population dynamics embedded in a water column ecosystem model for the northern North Sea

    NASA Astrophysics Data System (ADS)

    Moll, Andreas; Stegert, Christoph

    2007-01-01

    This paper outlines an approach to couple a structured zooplankton population model, with state variables for eggs, nauplii, two copepodite stages, and adults, adapted to Pseudocalanus elongatus, into the complex marine ecosystem model ECOHAM2, which has 13 state variables resolving the carbon and nitrogen cycles. Different temperature and food scenarios derived from laboratory culture studies were examined to improve the process parameterisation for copepod stage-dependent development processes. To study annual cycles under realistic weather and hydrographic conditions, the coupled ecosystem-zooplankton model is applied to a water column in the northern North Sea. The main ecosystem state variables were validated against observed monthly mean values. Then vertical profiles of selected state variables were compared to the physical forcing to study the differences between treating zooplankton as one biomass state variable or partitioning it into five population state variables. Simulated generation times are more affected by temperature than by food conditions except during the spring phytoplankton bloom. Up to six generations within the annual cycle can be discerned in the simulation.

  7. Utility of computer simulations in landscape genetics

    Treesearch

    Bryan K. Epperson; Brad H. McRae; Kim Scribner; Samuel A. Cushman; Michael S. Rosenberg; Marie-Josee Fortin; Patrick M. A. James; Melanie Murphy; Stephanie Manel; Pierre Legendre; Mark R. T. Dale

    2010-01-01

    Population genetics theory is primarily based on mathematical models in which spatial complexity and temporal variability are largely ignored. In contrast, the field of landscape genetics expressly focuses on how population genetic processes are affected by complex spatial and temporal environmental heterogeneity. It is spatially explicit and relates patterns to...

  8. Environmental Uncertainty and Communication Network Complexity: A Cross-System, Cross-Cultural Test.

    ERIC Educational Resources Information Center

    Danowski, James

    An infographic model is proposed to account for the operation of systems within their information environments. Infographics is a communication paradigm used to indicate the clustering of information processing variables in communication systems. Four propositions concerning environmental uncertainty and internal communication network complexity,…

  9. A framework for parametric modeling of ankle ligaments to determine the in situ response under gross foot motion.

    PubMed

    Nie, Bingbing; Panzer, Matthew Brian; Mane, Adwait; Mait, Alexander Ritz; Donlon, John-Paul; Forman, Jason Lee; Kent, Richard Wesley

    2016-09-01

    Ligament sprains account for a majority of injuries to the foot and ankle complex, but ligament properties have not been understood well due to the difficulties in replicating the complex geometry, in situ stress state, and non-uniformity of the strain. For a full investigation of the injury mechanism, it is essential to build up a foot and ankle model validated at the level of bony kinematics and ligament properties. This study developed a framework to parameterize the ligament response for determining the in situ stress state and heterogeneous force-elongation characteristics using a finite element ankle model. Nine major ankle ligaments and the interosseous membrane were modeled as discrete elements corresponding functionally to the ligamentous microstructure of collagen fibers and having parameterized toe region and stiffness at the fiber level. The range of the design variables in the ligament model was determined from existing experimental data. Sensitivity of the bony kinematics to each variable was investigated by design of experiment. The results highlighted the critical role of the length of the toe region of the ligamentous fibers on the bony kinematics with the cumulative influence of more than 95%, while the fiber stiffness was statistically insignificant with an influence of less than 1% under the given variable range and loading conditions. With the flexibility of variable adjustment and high computational efficiency, the presented ankle model was generic in nature so as to maximize its applicability to capture the individual ligament behaviors in future studies.

  10. Aspect-Oriented Model-Driven Software Product Line Engineering

    NASA Astrophysics Data System (ADS)

    Groher, Iris; Voelter, Markus

    Software product line engineering aims to reduce development time, effort, cost, and complexity by taking advantage of the commonality within a portfolio of similar products. The effectiveness of a software product line approach directly depends on how well feature variability within the portfolio is implemented and managed throughout the development lifecycle, from early analysis through maintenance and evolution. This article presents an approach that facilitates variability implementation, management, and tracing by integrating model-driven and aspect-oriented software development. Features are separated in models and composed using aspect-oriented composition techniques at the model level. Model transformations support the transition from problem-space to solution-space models. Aspect-oriented techniques enable the explicit expression and modularization of variability at the model, template, and code levels. The presented concepts are illustrated with a case study of a home automation system.

  11. Complex mean circulation over the inner shelf south of Martha's Vineyard revealed by observations and a high-resolution model

    USGS Publications Warehouse

    Ganju, Neil K.; Lentz, Steven J.; Kirincich, Anthony R.; Farrar, J. Thomas

    2011-01-01

    Inner-shelf circulation is governed by the interaction between tides, baroclinic forcing, winds, waves, and frictional losses; the mean circulation ultimately governs exchange between the coast and ocean. In some cases, oscillatory tidal currents interact with bathymetric features to generate a tidally rectified flow. Recent observational and modeling efforts in an overlapping domain centered on the Martha's Vineyard Coastal Observatory (MVCO) provided an opportunity to investigate the spatial and temporal complexity of circulation on the inner shelf. ADCP and surface radar observations revealed a mean circulation pattern that was highly variable in the alongshore and cross-shore directions. Nested modeling incrementally improved representation of the mean circulation as grid resolution increased and indicated tidal rectification as the generation mechanism of a counter-clockwise gyre near the MVCO. The loss of model skill with decreasing resolution is attributed to insufficient representation of the bathymetric gradients (Δh/h), which is important for representing nonlinear interactions between currents and bathymetry. The modeled momentum balance was characterized by large spatial variability of the pressure gradient and horizontal advection terms over short distances, suggesting that observed inner-shelf momentum balances may be confounded. Given the available observational and modeling data, this work defines the spatially variable mean circulation and its formation mechanism—tidal rectification—and illustrates the importance of model resolution for resolving circulation and constituent exchange near the coast. The results of this study have implications for future observational and modeling studies near the MVCO and other inner-shelf locations with alongshore bathymetric variability.

  12. Variability in perceived satisfaction of reservoir management objectives

    USGS Publications Warehouse

    Owen, W.J.; Gates, T.K.; Flug, M.

    1997-01-01

    Fuzzy set theory provides a useful model to address imprecision in interpreting linguistically described objectives for reservoir management. Fuzzy membership functions can be used to represent degrees of objective satisfaction for different values of management variables. However, lack of background information, differing experiences and qualifications, and complex interactions of influencing factors can contribute to significant variability among membership functions derived from surveys of multiple experts. In the present study, probabilistic membership functions are used to model variability in experts' perceptions of satisfaction of objectives for hydropower generation, fish habitat, kayaking, rafting, and scenery preservation on the Green River through operations of Flaming Gorge Dam. Degree of variability in experts' perceptions differed among objectives but resulted in substantial uncertainty in estimation of optimal reservoir releases.
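
    A minimal sketch of a fuzzy membership function of the kind described above: a trapezoidal function mapping a management variable (here a hypothetical reservoir release) to a degree of objective satisfaction between 0 and 1. The breakpoints and units are illustrative assumptions, not elicited expert values.

    ```python
    # Trapezoidal fuzzy membership function for "satisfactory release for rafting".
    import numpy as np

    def trapezoid(x, a, b, c, d):
        """Membership rises from a to b, is fully satisfied on [b, c], falls to zero at d."""
        return np.clip(np.minimum((x - a) / (b - a), (d - x) / (d - c)), 0.0, 1.0)

    release = np.linspace(0, 5000, 11)                       # candidate releases (cfs, hypothetical)
    satisfaction = trapezoid(release, 800, 1500, 2500, 4000)
    for r, s in zip(release, satisfaction):
        print(f"release {r:6.0f} cfs -> satisfaction {s:.2f}")
    ```

    In the probabilistic-membership setting described in the abstract, the breakpoints themselves would vary across experts, producing a distribution of such curves rather than a single one.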

  13. Contemporary Model Fidelity over the Maritime Continent: Examination of the Diurnal Cycle, Synoptic, Intraseasonal and Seasonal Variability

    NASA Astrophysics Data System (ADS)

    Baranowski, D.; Waliser, D. E.; Jiang, X.

    2016-12-01

    One of the key challenges in subseasonal weather forecasting is the fidelity in representing the propagation of the Madden-Julian Oscillation (MJO) across the Maritime Continent (MC). In reality both propagating and non-propagating MJO events are observed, but in numerical forecasts the latter group largely dominates. For this study, comprehensive model performances are evaluated using metrics that utilize the mean precipitation pattern and the amplitude and phase of the diurnal cycle, with a particular focus on the linkage between a model's local MC variability and its fidelity in representing propagation of the MJO and equatorial Kelvin waves across the MC. Subseasonal to seasonal variability of mean precipitation and its diurnal cycle in 20-year-long climate simulations from over 20 general circulation models (GCMs) is examined to benchmark model performance. Our results show that many models struggle to represent the precipitation pattern over complex Maritime Continent terrain. Many models show negative biases of mean precipitation and of the amplitude of its diurnal cycle; these biases are often larger over land than over ocean. Furthermore, only a handful of models realistically represent the spatial variability of the phase of the diurnal cycle of precipitation. Models tend to correctly simulate the timing of the diurnal maximum of precipitation over ocean in the local solar morning, but fail to capture the influence of the land, with the maximum of precipitation there occurring, unrealistically, at the same time as over the ocean. The day-to-day and seasonal variability of the mean precipitation follows observed patterns, but is often unrealistic for the diurnal cycle amplitude. The intraseasonal variability of the amplitude of the diurnal cycle of precipitation is mainly driven by a model's ability (or lack thereof) to produce an eastward-propagating MJO-like signal. Our results show that many models tend to decrease the apparent air-sea contrast in the mean precipitation and diurnal cycle of precipitation patterns over the Maritime Continent. As a result, the complexity of those patterns is heavily smoothed, to such an extent in some models that the Maritime Continent's features and imprint are almost unrecognizable relative to the eastern Indian Ocean or Western Pacific.

  14. An efficient deterministic-probabilistic approach to modeling regional groundwater flow: 2. Application to Owens Valley, California

    USGS Publications Warehouse

    Guymon, Gary L.; Yen, Chung-Cheng

    1990-01-01

    The applicability of a deterministic-probabilistic model for predicting water tables in southern Owens Valley, California, is evaluated. The model is based on a two-layer deterministic model that is cascaded with a two-point probability model. To reduce the potentially large number of uncertain variables in the deterministic model, lumping of uncertain variables was evaluated by sensitivity analysis to reduce the total number of uncertain variables to three variables: hydraulic conductivity, storage coefficient or specific yield, and source-sink function. Results demonstrate that lumping of uncertain parameters reduces computational effort while providing sufficient precision for the case studied. Simulated spatial coefficients of variation for water table temporal position are small in most of the basin, which suggests that deterministic models can predict water tables in these areas with good precision. However, in several important areas where pumping occurs or the geology is complex, the simulated spatial coefficients of variation are overestimated by the two-point probability method.
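
    The sketch below illustrates a generic two-point (Rosenblueth-type) probability estimate with three uncertain inputs, the general idea behind the cascaded two-point probability model described above; it is not the study's scheme. The deterministic groundwater model is replaced by a stand-in function, and all parameter values are hypothetical.

    ```python
    # Two-point estimate: evaluate the model at mean +/- one standard deviation
    # of each uncertain input (2^3 combinations) and summarize the output spread.
    import itertools
    import numpy as np

    def head_model(K, S, Q):
        """Stand-in for the deterministic water-table model (hypothetical)."""
        return 100.0 - 5.0 * np.log(K) + 2.0 / S - 0.01 * Q

    means = {"K": 10.0, "S": 0.15, "Q": 500.0}   # conductivity, storage, source-sink
    stds  = {"K": 3.0,  "S": 0.03, "Q": 100.0}

    samples = []
    for signs in itertools.product([-1, 1], repeat=3):
        K, S, Q = (means[v] + s * stds[v] for v, s in zip(("K", "S", "Q"), signs))
        samples.append(head_model(K, S, Q))

    mean_head = np.mean(samples)                 # equal weights for symmetric inputs
    cv = np.std(samples) / abs(mean_head)        # coefficient-of-variation analogue
    print(f"estimated mean head = {mean_head:.2f}, coefficient of variation = {cv:.3f}")
    ```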

  15. An efficient deterministic-probabilistic approach to modeling regional groundwater flow: 2. Application to Owens Valley, California

    NASA Astrophysics Data System (ADS)

    Guymon, Gary L.; Yen, Chung-Cheng

    1990-07-01

    The applicability of a deterministic-probabilistic model for predicting water tables in southern Owens Valley, California, is evaluated. The model is based on a two-layer deterministic model that is cascaded with a two-point probability model. To reduce the potentially large number of uncertain variables in the deterministic model, lumping of uncertain variables was evaluated by sensitivity analysis to reduce the total number of uncertain variables to three variables: hydraulic conductivity, storage coefficient or specific yield, and source-sink function. Results demonstrate that lumping of uncertain parameters reduces computational effort while providing sufficient precision for the case studied. Simulated spatial coefficients of variation for water table temporal position are small in most of the basin, which suggests that deterministic models can predict water tables in these areas with good precision. However, in several important areas where pumping occurs or the geology is complex, the simulated spatial coefficients of variation are overestimated by the two-point probability method.

  16. Robust peptidoglycan growth by dynamic and variable multi-protein complexes.

    PubMed

    Pazos, Manuel; Peters, Katharina; Vollmer, Waldemar

    2017-04-01

    In Gram-negative bacteria such as Escherichia coli the peptidoglycan sacculus resides in the periplasm, a compartment that experiences changes in pH value, osmolality, ion strength and other parameters depending on the cell's environment. Hence, the cell needs robust peptidoglycan growth mechanisms to grow and divide under different conditions. Here we propose a model according to which the cell achieves robust peptidoglycan growth by employing dynamic multi-protein complexes, which assemble with variable composition from freely diffusing sets of peptidoglycan synthases, hydrolases and their regulators, whereby the composition of the active complexes depends on the cell cycle state - cell elongation or division - and the periplasmic growth conditions. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. The trajectory of life. Decreasing physiological network complexity through changing fractal patterns

    PubMed Central

    Sturmberg, Joachim P.; Bennett, Jeanette M.; Picard, Martin; Seely, Andrew J. E.

    2015-01-01

    In this position paper, we submit a synthesis of theoretical models based on physiology, non-equilibrium thermodynamics, and non-linear time-series analysis. Based on an understanding of the human organism as a system of interconnected complex adaptive systems, we seek to examine the relationship between health, complexity, variability, and entropy production, as it might be useful to help understand aging, and improve care for patients. We observe the trajectory of life is characterized by the growth, plateauing and subsequent loss of adaptive function of organ systems, associated with loss of functioning and coordination of systems. Understanding development and aging requires the examination of interdependence among these organ systems. Increasing evidence suggests network interconnectedness and complexity can be captured/measured/associated with the degree and complexity of healthy biologic rhythm variability (e.g., heart and respiratory rate variability). We review physiological mechanisms linking the omics, arousal/stress systems, immune function, and mitochondrial bioenergetics, highlighting their interdependence in normal physiological function and aging. We argue that aging, known to be characterized by a loss of variability, is manifested at multiple scales, within functional units at the small scale, and reflected by diagnostic features at the larger scale. While still controversial and under investigation, it appears conceivable that the integrity of whole body complexity may be, at least partially, reflected in the degree and variability of intrinsic biologic rhythms, which we believe are related to overall system complexity that may be a defining feature of health and its loss through aging. Harnessing this information for the development of therapeutic and preventative strategies may hold an opportunity to significantly improve the health of our patients across the trajectory of life. PMID:26082722

  18. Single stage queueing/manufacturing system model that involves emission variable

    NASA Astrophysics Data System (ADS)

    Murdapa, P. S.; Pujawan, I. N.; Karningsih, P. D.; Nasution, A. H.

    2018-04-01

    Queueing commonly occurs in every industry. The basic model of queueing theory provides a foundation for modeling a manufacturing system. Nowadays, carbon emission is an important and unavoidable issue because of its large impact on the environment. However, existing queueing models applied to the analysis of single-stage manufacturing systems have not taken carbon emissions into consideration. If such models are applied in a manufacturing context, they may lead to improper decisions. By taking emission variables into account in queueing models, not only does the model become more comprehensive, but it also creates awareness of the issue among the many parties involved in the system. This paper discusses a single-stage M/M/1 queueing model that involves an emission variable, which will hopefully serve as a starting point for more complex models. Its main objective is to determine how carbon emissions can fit into basic queueing theory. It turns out that including emission variables modifies the traditional single-stage queueing model into a calculation model for the production lot quantity allowed per period.
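
    For reference, the sketch below computes the standard M/M/1 steady-state quantities and attaches a hypothetical emission term; the emission coupling and all parameter values are illustrative assumptions, not the paper's formulation.

    ```python
    # Baseline M/M/1 performance measures plus an assumed utilization-weighted emission rate.
    lam, mu = 8.0, 10.0            # arrival and service rates (jobs per hour)
    rho = lam / mu                 # server utilization (must be < 1)
    L  = rho / (1 - rho)           # mean number in system
    W  = 1 / (mu - lam)            # mean time in system
    Lq = rho**2 / (1 - rho)        # mean number in queue
    Wq = rho / (mu - lam)          # mean waiting time in queue

    # Hypothetical emission model: e_busy kgCO2/h while processing, e_idle kgCO2/h while idle.
    e_busy, e_idle = 2.5, 0.4
    emissions_per_hour = rho * e_busy + (1 - rho) * e_idle

    print(f"rho={rho:.2f}, L={L:.2f}, W={W:.2f} h, Lq={Lq:.2f}, Wq={Wq:.2f} h")
    print(f"expected emissions: {emissions_per_hour:.2f} kgCO2 per hour")
    ```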

  19. Prediction of enteric methane production, yield, and intensity in dairy cattle using an intercontinental database.

    PubMed

    Niu, Mutian; Kebreab, Ermias; Hristov, Alexander N; Oh, Joonpyo; Arndt, Claudia; Bannink, André; Bayat, Ali R; Brito, André F; Boland, Tommy; Casper, David; Crompton, Les A; Dijkstra, Jan; Eugène, Maguy A; Garnsworthy, Phil C; Haque, Md Najmul; Hellwing, Anne L F; Huhtanen, Pekka; Kreuzer, Michael; Kuhla, Bjoern; Lund, Peter; Madsen, Jørgen; Martin, Cécile; McClelland, Shelby C; McGee, Mark; Moate, Peter J; Muetzel, Stefan; Muñoz, Camila; O'Kiely, Padraig; Peiren, Nico; Reynolds, Christopher K; Schwarm, Angela; Shingfield, Kevin J; Storlien, Tonje M; Weisbjerg, Martin R; Yáñez-Ruiz, David R; Yu, Zhongtang

    2018-02-16

    Enteric methane (CH4) production from cattle contributes to global greenhouse gas emissions. Measurement of enteric CH4 is complex, expensive, and impractical at large scales; therefore, models are commonly used to predict CH4 production. However, building robust prediction models requires extensive data from animals under different management systems worldwide. The objectives of this study were to (1) collate a global database of enteric CH4 production from individual lactating dairy cattle; (2) determine the availability of key variables for predicting enteric CH4 production (g/day per cow), yield [g/kg dry matter intake (DMI)], and intensity (g/kg energy-corrected milk) and their respective relationships; (3) develop intercontinental and regional models and cross-validate their performance; and (4) assess the trade-off between availability of on-farm inputs and CH4 prediction accuracy. The intercontinental database covered Europe (EU), the United States (US), and Australia (AU). A sequential approach was taken by incrementally adding key variables to develop models with increasing complexity. Methane emissions were predicted by fitting linear mixed models. Within model categories, an intercontinental model with the most available independent variables performed best, with root mean square prediction error (RMSPE) as a percentage of mean observed value of 16.6%, 14.7%, and 19.8% for the intercontinental, EU, and United States regions, respectively. Less complex models requiring only DMI had predictive ability comparable to complex models. Enteric CH4 production, yield, and intensity prediction models developed on an intercontinental basis had similar performance across regions; however, intercepts and slopes were different, with implications for prediction. Revised CH4 emission conversion factors for specific regions are required to improve CH4 production estimates in national inventories. In conclusion, information on DMI is required for good prediction, and other factors, such as dietary neutral detergent fiber (NDF) concentration, improve the prediction. For enteric CH4 yield and intensity prediction, information on milk yield and composition is required for better estimation. © 2018 John Wiley & Sons Ltd.

  20. Integral projection models for finite populations in a stochastic environment.

    PubMed

    Vindenes, Yngvild; Engen, Steinar; Saether, Bernt-Erik

    2011-05-01

    Continuous types of population structure occur when continuous variables such as body size or habitat quality affect the vital parameters of individuals. These structures can give rise to complex population dynamics and interact with environmental conditions. Here we present a model for continuously structured populations with finite size, including both demographic and environmental stochasticity in the dynamics. Using recent methods developed for discrete age-structured models we derive the demographic and environmental variance of the population growth as functions of a continuous state variable. These two parameters, together with the expected population growth rate, are used to define a one-dimensional diffusion approximation of the population dynamics. Thus, a substantial reduction in complexity is achieved as the dynamics of the complex structured model can be described by only three population parameters. We provide methods for numerical calculation of the model parameters and demonstrate the accuracy of the diffusion approximation by computer simulation of specific examples. The general modeling framework makes it possible to analyze and predict future dynamics and extinction risk of populations with various types of structure, and to explore consequences of changes in demography caused by, e.g., climate change or different management decisions. Our results are especially relevant for small populations that are often of conservation concern.
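
    The sketch below simulates the one-dimensional diffusion approximation described above, with the conditional variance of the population change given by the demographic and environmental variance components, Var(dN | N) = (s2e N^2 + s2d N) dt. All parameter values are hypothetical.

    ```python
    # Euler-type simulation of the diffusion approximation for population size N.
    import numpy as np

    rng = np.random.default_rng(5)
    r, s2e, s2d = 0.02, 0.01, 0.5    # growth rate, environmental and demographic variance
    N, dt, T = 50.0, 0.1, 100.0
    t = 0.0
    while t < T and N > 1.0:          # stop at a quasi-extinction threshold of N = 1
        var = s2e * N**2 + s2d * N
        N += r * N * dt + rng.normal(0.0, np.sqrt(var * dt))
        t += dt
    print(f"population size after {t:.0f} time units: {N:.1f}")
    ```

    Repeating such trajectories many times gives Monte Carlo estimates of extinction risk that can be compared with analytical results from the diffusion.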

  1. Pedological memory in forest soil development

    Treesearch

    Jonathan D. Phillips; Daniel A. Marion

    2004-01-01

    Individual trees may have significant impacts on soil morphology. If these impacts are non-random such that some microsites are repeatedly preferentially affected by trees, complex local spatial variability of soils would result. A model of self-reinforcing pedologic influences of trees (SRPIT) is proposed to explain patterns of soil variability in the Ouachita...

  2. Simulating tracer transport in variably saturated soils and shallow groundwater

    USDA-ARS?s Scientific Manuscript database

    The objective of this study was to develop a realistic model to simulate the complex processes of flow and tracer transport in variably saturated soils and to compare simulation results with the detailed monitoring observations. The USDA-ARS OPE3 field site was selected for the case study due to ava...

  3. A Short Note on Estimating the Testlet Model with Different Estimators in Mplus

    ERIC Educational Resources Information Center

    Luo, Yong

    2018-01-01

    Mplus is a powerful latent variable modeling software program that has become an increasingly popular choice for fitting complex item response theory models. In this short note, we demonstrate that the two-parameter logistic testlet model can be estimated as a constrained bifactor model in Mplus with three estimators encompassing limited- and…

  4. The theory and method of variable frequency directional seismic wave under the complex geologic conditions

    NASA Astrophysics Data System (ADS)

    Jiang, T.; Yue, Y.

    2017-12-01

    It is well known that mono-frequency directional seismic wave technology can concentrate seismic waves into a beam. However, little work has been done on the method and effect of variable frequency directional seismic waves under complex geological conditions. We studied variable frequency directional wave theory in several aspects. First, we studied the relation between the directional parameters and the direction of the main beam. Second, we analyzed the parameters that significantly affect the width of the main beam, such as vibrator spacing, dominant wavelet frequency, and number of vibrators. In addition, we studied the characteristics of variable frequency directional seismic waves in typical velocity models. To examine the propagation characteristics of directional seismic waves, we designed appropriate parameters according to the character of the directional parameters, which makes it possible to enhance the energy in the main beam direction. Directional seismic waves were further examined from the viewpoint of the power spectrum. The results indicate that the energy intensity in the main beam direction increased 2 to 6 times for a multi-ore-body velocity model, showing that variable frequency directional seismic technology provides an effective way to strengthen target signals under complex geological conditions. For a concave interface model, we introduced a more elaborate directional seismic technology that supports multiple main beams to obtain high-quality data. Finally, we applied the 9-element variable frequency directional seismic wave technology to process raw data acquired in an oil-shale exploration area. The results show that the depth of exploration increased 4 times with the directional seismic wave method. Based on the above analysis, we conclude that variable frequency directional seismic wave technology can improve target signals under different geologic conditions and increase exploration depth at little cost. Because hydraulic vibrators are inconvenient in areas with complicated surface conditions, we suggest that the combination of a high-frequency portable vibrator with the variable frequency directional seismic wave method is an alternative technology for increasing the depth of exploration or prospecting.

  5. Task Models in the Digital Ocean

    ERIC Educational Resources Information Center

    DiCerbo, Kristen E.

    2014-01-01

    The Task Model is a description of each task in a workflow. It defines attributes associated with that task. The creation of task models becomes increasingly important as the assessment tasks become more complex. Explicitly delineating the impact of task variables on the ability to collect evidence and make inferences demands thoughtfulness from…

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia-Fernandez, Ignacio; Pla-Castells, Marta; Martinez-Dura, Rafael J.

    A model of a cable and pulleys is presented that can be used in Real Time Computer Graphics applications. The model is formulated by the coupling of a damped spring and a variable coefficient wave equation, and can be integrated in more complex mechanical models of lift systems, such as cranes, elevators, etc. with a high degree of interactivity.

  7. Developing a theoretical framework for complex community-based interventions.

    PubMed

    Angeles, Ricardo N; Dolovich, Lisa; Kaczorowski, Janusz; Thabane, Lehana

    2014-01-01

    Applying existing theories to research, in the form of a theoretical framework, is necessary to advance knowledge from what is already known toward the next steps to be taken. This article proposes a guide on how to develop a theoretical framework for complex community-based interventions using the Cardiovascular Health Awareness Program as an example. Developing a theoretical framework starts with identifying the intervention's essential elements. Subsequent steps include the following: (a) identifying and defining the different variables (independent, dependent, mediating/intervening, moderating, and control); (b) postulating mechanisms how the independent variables will lead to the dependent variables; (c) identifying existing theoretical models supporting the theoretical framework under development; (d) scripting the theoretical framework into a figure or sets of statements as a series of hypotheses, if/then logic statements, or a visual model; (e) content and face validation of the theoretical framework; and (f) revising the theoretical framework. In our example, we combined the "diffusion of innovation theory" and the "health belief model" to develop our framework. Using the Cardiovascular Health Awareness Program as the model, we demonstrated a stepwise process of developing a theoretical framework. The challenges encountered are described, and an overview of the strategies employed to overcome these challenges is presented.

  8. Sensitivity Analysis of Multidisciplinary Rotorcraft Simulations

    NASA Technical Reports Server (NTRS)

    Wang, Li; Diskin, Boris; Biedron, Robert T.; Nielsen, Eric J.; Bauchau, Olivier A.

    2017-01-01

    A multidisciplinary sensitivity analysis of rotorcraft simulations involving tightly coupled high-fidelity computational fluid dynamics and comprehensive analysis solvers is presented and evaluated. An unstructured sensitivity-enabled Navier-Stokes solver, FUN3D, and a nonlinear flexible multibody dynamics solver, DYMORE, are coupled to predict the aerodynamic loads and structural responses of helicopter rotor blades. A discretely-consistent adjoint-based sensitivity analysis available in FUN3D provides sensitivities arising from unsteady turbulent flows and unstructured dynamic overset meshes, while a complex-variable approach is used to compute DYMORE structural sensitivities with respect to aerodynamic loads. The multidisciplinary sensitivity analysis is conducted through integrating the sensitivity components from each discipline of the coupled system. Numerical results verify accuracy of the FUN3D/DYMORE system by conducting simulations for a benchmark rotorcraft test model and comparing solutions with established analyses and experimental data. Complex-variable implementation of sensitivity analysis of DYMORE and the coupled FUN3D/DYMORE system is verified by comparing with real-valued analysis and sensitivities. Correctness of adjoint formulations for FUN3D/DYMORE interfaces is verified by comparing adjoint-based and complex-variable sensitivities. Finally, sensitivities of the lift and drag functions obtained by complex-variable FUN3D/DYMORE simulations are compared with sensitivities computed by the multidisciplinary sensitivity analysis, which couples adjoint-based flow and grid sensitivities of FUN3D and FUN3D/DYMORE interfaces with complex-variable sensitivities of DYMORE structural responses.
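
    The complex-variable approach named above is the complex-step derivative: perturbing an input by an imaginary step i*h gives df/dx ~ Im(f(x + i*h))/h with no subtractive cancellation, so h can be made tiny. The sketch below demonstrates the idea on a stand-in smooth function, not on the structural solver itself.

    ```python
    # Complex-step derivative vs. a central finite difference on a test function.
    import numpy as np

    def blade_response(x):
        """Stand-in smooth response function of a design variable x (hypothetical)."""
        return np.exp(x) / np.sqrt(np.sin(x) ** 3 + np.cos(x) ** 3)

    x0, h = 1.5, 1e-30
    cs = np.imag(blade_response(x0 + 1j * h)) / h                          # complex-step derivative
    fd = (blade_response(x0 + 1e-6) - blade_response(x0 - 1e-6)) / 2e-6    # central difference
    print(f"complex-step: {cs:.12f}   central difference: {fd:.12f}")
    ```

    The complex-step result is accurate to machine precision, which is why it is attractive for verifying adjoint-based sensitivities as described in the abstract.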

  9. Rebuilding DEMATEL threshold value: an example of a food and beverage information system.

    PubMed

    Hsieh, Yi-Fang; Lee, Yu-Cheng; Lin, Shao-Bin

    2016-01-01

    This study demonstrates how a decision-making trial and evaluation laboratory (DEMATEL) threshold value can be quickly and reasonably determined in the process of combining DEMATEL and decomposed theory of planned behavior (DTPB) models. Models are combined to identify the key factors of a complex problem. This paper presents a case study of a food and beverage information system as an example. The analysis of the example indicates that, given direct and indirect relationships among variables, if a traditional DTPB model only simulates the effects of the variables without considering that the variables will affect the original cause-and-effect relationships among the variables, then the original DTPB model variables cannot represent a complete relationship. For the food and beverage example, a DEMATEL method was employed to reconstruct a DTPB model and, more importantly, to calculate reasonable DEMATEL threshold value for determining additional relationships of variables in the original DTPB model. This study is method-oriented, and the depth of investigation into any individual case is limited. Therefore, the methods proposed in various fields of study should ideally be used to identify deeper and more practical implications.
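
    As a reference point, the sketch below shows the core DEMATEL computation in a generic formulation (an assumption; the paper's specific threshold-rebuilding procedure is not reproduced): normalize the direct-relation matrix, compute the total-relation matrix T = N (I - N)^-1, and derive a candidate threshold from the entries of T. The influence scores are hypothetical.

    ```python
    # Generic DEMATEL: total-relation matrix and a simple mean-based threshold.
    import numpy as np

    A = np.array([[0, 3, 2, 1],      # hypothetical direct-influence scores (0-4)
                  [1, 0, 2, 3],
                  [1, 2, 0, 2],
                  [2, 1, 1, 0]], dtype=float)

    s = max(A.sum(axis=1).max(), A.sum(axis=0).max())
    N = A / s                                          # normalized direct-relation matrix
    T = N @ np.linalg.inv(np.eye(len(A)) - N)          # total-relation matrix
    threshold = T.mean()                               # candidate threshold (mean of T)
    print("total-relation matrix:\n", np.round(T, 3))
    print("threshold =", round(threshold, 3))
    print("significant links:\n", (T > threshold).astype(int))
    ```

    Using the mean of T (or mean plus one standard deviation) is a common starting point; the study's contribution is how to pick this value reasonably when DEMATEL is combined with a DTPB model.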

  10. Phytoplankton dynamics of a subtropical reservoir controlled by the complex interplay among hydrological, abiotic, and biotic variables.

    PubMed

    Kuo, Yi-Ming; Wu, Jiunn-Tzong

    2016-12-01

    This study was conducted to identify the key factors related to the spatiotemporal variations in phytoplankton abundance in a subtropical reservoir from 2006 to 2010 and to assist in developing strategies for water quality management. Dynamic factor analysis (DFA), a dimension-reduction technique, was used to identify interactions between explanatory variables (i.e., environmental variables) and abundance (biovolume) of predominant phytoplankton classes. The optimal DFA model significantly described the dynamic changes in abundances of predominant phytoplankton groups (including dinoflagellates, diatoms, and green algae) at five monitoring sites. Water temperature, electrical conductivity, water level, nutrients (total phosphorus, NO 3 -N, and NH 3 -N), macro-zooplankton, and zooplankton were the key factors affecting the dynamics of aforementioned phytoplankton. Therefore, transformations of nutrients and reactions between water quality variables and aforementioned processes altered by hydrological conditions may also control the abundance dynamics of phytoplankton, which may represent common trends in the DFA model. The meandering shape of Shihmen Reservoir and its surrounding rivers caused a complex interplay between hydrological conditions and abiotic and biotic variables, resulting in phytoplankton abundance that could not be estimated using certain variables. Additional water quality and hydrological variables at surrounding rivers and monitoring plans should be executed a few days before and after reservoir operations and heavy storm, which would assist in developing site-specific preventive strategies to control phytoplankton abundance.

  11. A non-linear data mining parameter selection algorithm for continuous variables

    PubMed Central

    Razavi, Marianne; Brady, Sean

    2017-01-01

    In this article, we propose a new data mining algorithm, by which one can both capture the non-linearity in data and also find the best subset model. To produce an enhanced subset of the original variables, a preferred selection method should have the potential of adding a supplementary level of regression analysis that would capture complex relationships in the data via mathematical transformation of the predictors and exploration of synergistic effects of combined variables. The method that we present here has the potential to produce an optimal subset of variables, rendering the overall process of model selection more efficient. This algorithm introduces interpretable parameters by transforming the original inputs and also provides a faithful fit to the data. The core objective of this paper is to introduce a new estimation technique for the classical least squares regression framework. This new automatic variable transformation and model selection method could offer an optimal and stable model that minimizes the mean square error and variability, while combining all-possible-subsets selection methodology with the inclusion of variable transformations and interactions. Moreover, this method controls multicollinearity, leading to an optimal set of explanatory variables. PMID:29131829

  12. Phase-field modeling of fracture in variably saturated porous media

    NASA Astrophysics Data System (ADS)

    Cajuhi, T.; Sanavia, L.; De Lorenzis, L.

    2018-03-01

    We propose a mechanical and computational model to describe the coupled problem of poromechanics and cracking in variably saturated porous media. A classical poromechanical formulation is adopted and coupled with a phase-field formulation for the fracture problem. The latter has the advantage of being able to reproduce arbitrarily complex crack paths without introducing discontinuities on a fixed mesh. The obtained simulation results show good qualitative agreement with desiccation experiments on soils from the literature.

  13. Use of neural networks to model complex immunogenetic associations of disease: human leukocyte antigen impact on the progression of human immunodeficiency virus infection.

    PubMed

    Ioannidis, J P; McQueen, P G; Goedert, J J; Kaslow, R A

    1998-03-01

    Complex immunogenetic associations of disease involving a large number of gene products are difficult to evaluate with traditional statistical methods and may require complex modeling. The authors evaluated the performance of feed-forward backpropagation neural networks in predicting rapid progression to acquired immunodeficiency syndrome (AIDS) for patients with human immunodeficiency virus (HIV) infection on the basis of major histocompatibility complex variables. Networks were trained on data from patients from the Multicenter AIDS Cohort Study (n = 139) and then validated on patients from the DC Gay cohort (n = 102). The outcome of interest was rapid disease progression, defined as progression to AIDS in <6 years from seroconversion. Human leukocyte antigen (HLA) variables were selected as network inputs with multivariate regression and a previously described algorithm selecting markers with extreme point estimates for progression risk. Network performance was compared with that of logistic regression. Networks with 15 HLA inputs and a single hidden layer of five nodes achieved a sensitivity of 87.5% and specificity of 95.6% in the training set, vs. 77.0% and 76.9%, respectively, achieved by logistic regression. When validated on the DC Gay cohort, networks averaged a sensitivity of 59.1% and specificity of 74.3%, vs. 53.1% and 61.4%, respectively, for logistic regression. Neural networks offer further support to the notion that HIV disease progression may be dependent on complex interactions between different class I and class II alleles and transporters associated with antigen processing variants. The effect in the current models is of moderate magnitude, and more data as well as other host and pathogen variables may need to be considered to improve the performance of the models. Artificial intelligence methods may complement linear statistical methods for evaluating immunogenetic associations of disease.
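
    As a minimal sketch of the network architecture described (a single hidden layer of five nodes for a binary outcome), the following uses scikit-learn with random placeholder data in place of the cohort's HLA inputs; it is illustrative only.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.metrics import confusion_matrix

    rng = np.random.default_rng(0)

    # Placeholder data: 139 "patients" x 15 binary HLA marker inputs, binary outcome.
    X_train = rng.integers(0, 2, size=(139, 15)).astype(float)
    y_train = rng.integers(0, 2, size=139)

    # Feed-forward network with one hidden layer of five nodes.
    net = MLPClassifier(hidden_layer_sizes=(5,), activation="logistic",
                        max_iter=2000, random_state=0)
    net.fit(X_train, y_train)

    pred = net.predict(X_train)
    tn, fp, fn, tp = confusion_matrix(y_train, pred, labels=[0, 1]).ravel()
    print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
    ```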

  14. Some comparisons of complexity in dictionary-based and linear computational models.

    PubMed

    Gnecco, Giorgio; Kůrková, Věra; Sanguineti, Marcello

    2011-03-01

    Neural networks provide a more flexible approximation of functions than traditional linear regression. In the latter, one can only adjust the coefficients in linear combinations of fixed sets of functions, such as orthogonal polynomials or Hermite functions, while for neural networks, one may also adjust the parameters of the functions which are being combined. However, some useful properties of linear approximators (such as uniqueness, homogeneity, and continuity of best approximation operators) are not satisfied by neural networks. Moreover, optimization of parameters in neural networks becomes more difficult than in linear regression. Experimental results suggest that these drawbacks of neural networks are offset by substantially lower model complexity, allowing accurate approximation even in high-dimensional cases. We give some theoretical results comparing requirements on model complexity for two types of approximators, the traditional linear ones and so-called variable-basis types, which include neural networks, radial, and kernel models. We compare upper bounds on worst-case errors in variable-basis approximation with lower bounds on such errors for any linear approximator. Using methods from nonlinear approximation and integral representations tailored to computational units, we describe some cases where neural networks outperform any linear approximator. Copyright © 2010 Elsevier Ltd. All rights reserved.

  15. [Predicting individual risk of high healthcare cost to identify complex chronic patients].

    PubMed

    Coderch, Jordi; Sánchez-Pérez, Inma; Ibern, Pere; Carreras, Marc; Pérez-Berruezo, Xavier; Inoriza, José M

    2014-01-01

    To develop a predictive model for the risk of high consumption of healthcare resources, and assess the ability of the model to identify complex chronic patients. A cross-sectional study was performed within a healthcare management organization by using individual data from 2 consecutive years (88,795 people). The dependent variable consisted of healthcare costs above the 95th percentile (P95), including all services provided by the organization and pharmaceutical consumption outside of the institution. The predictive variables were age, sex, morbidity (based on clinical risk groups, CRG), and selected data from previous utilization (use of hospitalization, use of high-cost drugs in ambulatory care, pharmaceutical expenditure). A univariate descriptive analysis was performed. We constructed a logistic regression model with a 95% confidence level and analyzed sensitivity, specificity, positive predictive values (PPV), and the area under the ROC curve (AUC). Individuals incurring costs >P95 accumulated 44% of total healthcare costs and were concentrated in ACRG3 (aggregated CRG level 3) categories related to multiple chronic diseases. All variables were statistically significant except for sex. The model had a sensitivity of 48.4% (CI: 46.9%-49.8%), specificity of 97.2% (CI: 97.0%-97.3%), PPV of 46.5% (CI: 45.0%-47.9%), and an AUC of 0.897 (CI: 0.892-0.902). High consumption of healthcare resources is associated with complex chronic morbidity. A model based on age, morbidity, and prior utilization is able to predict high-cost risk and identify a target population requiring proactive care. Copyright © 2013 SESPAS. Published by Elsevier España. All rights reserved.
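
    A minimal sketch of the modeling step described (logistic regression for costs above the 95th percentile, reported with sensitivity, specificity, PPV, and AUC) is given below; the predictors and data are synthetic placeholders, not the organization's records.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score, confusion_matrix

    rng = np.random.default_rng(1)
    n = 5000

    # Synthetic placeholders for age, prior hospitalization, and prior pharmacy spend.
    age = rng.uniform(0, 95, n)
    prior_hosp = rng.integers(0, 2, n).astype(float)
    prior_pharm = rng.gamma(2.0, 300.0, n)
    X = np.column_stack([age, prior_hosp, prior_pharm])

    # Synthetic cost; the top 5% defines the "high cost" outcome.
    cost = 50 * age + 4000 * prior_hosp + 2 * prior_pharm + rng.gamma(2.0, 800.0, n)
    y = (cost > np.quantile(cost, 0.95)).astype(int)

    model = LogisticRegression(max_iter=1000).fit(X, y)
    prob = model.predict_proba(X)[:, 1]
    pred = (prob >= 0.5).astype(int)

    tn, fp, fn, tp = confusion_matrix(y, pred, labels=[0, 1]).ravel()
    print("sensitivity:", tp / (tp + fn))
    print("specificity:", tn / (tn + fp))
    print("PPV:", tp / (tp + fp) if (tp + fp) else float("nan"))
    print("AUC:", roc_auc_score(y, prob))
    ```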

  16. Risk models for post-endoscopic retrograde cholangiopancreatography pancreatitis (PEP): smoking and chronic liver disease are predictors of protection against PEP.

    PubMed

    DiMagno, Matthew J; Spaete, Joshua P; Ballard, Darren D; Wamsteker, Erik-Jan; Saini, Sameer D

    2013-08-01

    We investigated which variables were independently associated with protection against or development of post-endoscopic retrograde cholangiopancreatography (ERCP) pancreatitis (PEP) and with the severity of PEP. Subsequently, we derived predictive risk models for PEP. In a case-control design, 6505 patients had 8264 ERCPs, 211 patients had PEP, and 22 patients had severe PEP. We randomly selected 348 non-PEP controls. We examined 7 established and 9 investigational variables. In univariate analysis, 7 variables predicted PEP: younger age, female sex, suspected sphincter of Oddi dysfunction (SOD), pancreatic sphincterotomy, moderate-difficult cannulation (MDC), pancreatic stent placement, and lower Charlson score. Protective variables were current smoking, former drinking, diabetes, and chronic liver disease (CLD, biliary/transplant complications). Multivariate analysis identified 7 independent variables for PEP: 3 protective (current smoking, CLD-biliary, CLD-transplant/hepatectomy complications) and 4 predictive (younger age, suspected SOD, pancreatic sphincterotomy, MDC). Pre- and post-ERCP risk models of 7 variables have a C-statistic of 0.74. Removing age (the seventh variable) did not significantly affect the predictive value (C-statistic of 0.73) and reduced model complexity. Severity of PEP did not associate with any variables by multivariate analysis. By using the newly identified protective variables together with 3 predictive variables, we derived 2 risk models with a higher predictive value for PEP compared with prior studies.

  17. Theorems and application of local activity of CNN with five state variables and one port.

    PubMed

    Xiong, Gang; Dong, Xisong; Xie, Li; Yang, Thomas

    2012-01-01

    Coupled nonlinear dynamical systems have been widely studied recently. However, the dynamical properties of these systems are difficult to deal with. The local activity of the cellular neural network (CNN) has provided a powerful tool for studying the emergence of complex patterns in a homogeneous lattice, which is composed of coupled cells. In this paper, the analytical criteria for local activity in reaction-diffusion CNN with five state variables and one port are presented; they consist of four theorems involving a series of inequalities in the CNN parameters. These theorems can be used for calculating the bifurcation diagram to determine or analyze the emergence of complex dynamic patterns, such as chaos. As a case study, a reaction-diffusion CNN of a hepatitis B virus (HBV) mutation-selection model is analyzed and simulated, and the bifurcation diagram is calculated. Using the diagram, numerical simulations of this CNN model provide reasonable explanations of complex mutant phenomena during therapy. Therefore, it is demonstrated that the local activity of CNN provides a practical tool for studying the complex dynamics of some coupled nonlinear systems.

  18. A Spatially Continuous Model of Carbohydrate Digestion and Transport Processes in the Colon

    PubMed Central

    Moorthy, Arun S.; Brooks, Stephen P. J.; Kalmokoff, Martin; Eberl, Hermann J.

    2015-01-01

    A spatially continuous mathematical model of transport processes, anaerobic digestion, and microbial complexity as would be expected in the human colon is presented. The model is a system of first-order partial differential equations with a context-determined number of dependent variables and stiff, non-linear source terms. Numerical simulation of the model is used to elucidate information about the colon-microbiota complex. It is found that the composition of materials at the outflow of the model does not describe well the composition of material at other model locations, and inferences using outflow data vary according to the model reactor representation. Additionally, increased microbial complexity allows the total microbial community to withstand major system perturbations in diet and community structure. However, the distribution of strains and functional groups within the microbial community can be modified depending on perturbation length and microbial kinetic parameters. Preliminary model extensions and potential investigative opportunities using the computational model are discussed. PMID:26680208

  19. Self-dual form of Ruijsenaars-Schneider models and ILW equation with discrete Laplacian

    NASA Astrophysics Data System (ADS)

    Zabrodin, A.; Zotov, A.

    2018-02-01

    We discuss a self-dual form of the Bäcklund transformations for the continuous (in the time variable) glN Ruijsenaars-Schneider model. It is based on first-order equations in N + M complex variables, which include N positions of particles and M dual variables. The latter satisfy the equations of motion of the glM Ruijsenaars-Schneider model. In the elliptic case M = N holds, while for the rational and trigonometric models M is not necessarily equal to N. Our consideration is similar to the previously obtained results for the Calogero-Moser models, which are recovered in the non-relativistic limit. We also show that the self-dual description of the Ruijsenaars-Schneider models can be derived from the complexified intermediate long wave equation with discrete Laplacian by means of the simple pole ansatz, in the same way as the Calogero-Moser models arise from the ordinary intermediate long wave and Benjamin-Ono equations.

  20. A simple, dynamic, hydrological model of a mesotidal salt marsh

    EPA Science Inventory

    Salt marsh hydrology presents many difficulties from a modeling standpoint: the bi-directional flows of tidal waters, variable water densities due to mixing of fresh and salt water, significant influences from vegetation, and complex stream morphologies. Because of these difficu...

  1. Design and performance of optimal detectors for guided wave structural health monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dib, G.; Udpa, L.

    2016-01-01

    Ultrasonic guided wave measurements in a long-term structural health monitoring system are affected by measurement noise, environmental conditions, transducer aging and malfunction. This results in measurement variability which affects detection performance, especially in complex structures where baseline data comparison is required. This paper derives the optimal detector structure, within the framework of detection theory, where a guided wave signal at the sensor is represented by a single feature value that can be used for comparison with a threshold. Three different types of detectors are derived depending on the underlying structure's complexity: (i) Simple structures where defect reflections can be identified without the need for baseline data; (ii) Simple structures that require baseline data due to overlap of defect scatter with scatter from structural features; (iii) Complex structures with dense structural features that require baseline data. The detectors are derived by modeling the effects of variabilities and uncertainties as random processes. Analytical solutions for the performance of the detectors in terms of the probability of detection and false alarm are derived. A finite element model is used to generate guided wave signals, and the performance results of a Monte-Carlo simulation are compared with the theoretical performance. Initial results demonstrate that the problems of signal complexity and environmental variability can in fact be exploited to improve detection performance.
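
    The trade-off underlying such detectors can be illustrated with the textbook single-feature case: a Gaussian noise-only feature versus a mean-shifted feature when a defect is present. The sketch below is not the paper's derived detector, only the generic threshold/false-alarm/detection relationship, with an assumed 3-sigma signal level.

    ```python
    from scipy.stats import norm

    def threshold_for_pfa(pfa, noise_sigma=1.0):
        """Decision threshold giving the requested false-alarm probability
        for a zero-mean Gaussian noise feature (one-sided test)."""
        return noise_sigma * norm.isf(pfa)

    def prob_detection(threshold, signal_mean, noise_sigma=1.0):
        """Probability that a Gaussian feature with mean signal_mean
        (defect present) exceeds the threshold."""
        return norm.sf((threshold - signal_mean) / noise_sigma)

    for pfa in (1e-1, 1e-2, 1e-3):
        thr = threshold_for_pfa(pfa)
        pd = prob_detection(thr, signal_mean=3.0)   # hypothetical 3-sigma defect signal
        print(f"Pfa={pfa:.0e}  threshold={thr:.2f}  Pd={pd:.3f}")
    ```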

  2. A new fractional operator of variable order: Application in the description of anomalous diffusion

    NASA Astrophysics Data System (ADS)

    Yang, Xiao-Jun; Machado, J. A. Tenreiro

    2017-09-01

    In this paper, a new fractional operator of variable order with the use of a monotonically increasing function is proposed in the sense of the Caputo type. The properties in terms of the Laplace and Fourier transforms are analyzed, and the results for the anomalous diffusion equations of variable order are discussed. The new formulation is efficient in modeling a class of concentrations in the complex transport process.
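
    For orientation, a widely used Caputo-type variable-order derivative taken with respect to a monotonically increasing function g(t) can be written as below for 0 < alpha(t) < 1; the operator introduced in the paper may differ in its details.

    ```latex
    % Caputo-type variable-order derivative with respect to a monotone increasing g(t)
    % (a generic form for 0 < alpha(t) < 1; the paper's operator may differ in detail):
    \left({}^{C}D^{\alpha(t)}_{g} f\right)(t)
      = \frac{1}{\Gamma\bigl(1-\alpha(t)\bigr)}
        \int_{0}^{t} \bigl(g(t)-g(\tau)\bigr)^{-\alpha(t)} f'(\tau)\,\mathrm{d}\tau .
    ```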

  3. Can Process Understanding Help Elucidate The Structure Of The Critical Zone? Comparing Process-Based Soil Formation Models With Digital Soil Mapping.

    NASA Astrophysics Data System (ADS)

    Vanwalleghem, T.; Román, A.; Peña, A.; Laguna, A.; Giráldez, J. V.

    2017-12-01

    There is a need for a better understanding of the processes influencing soil formation and the resulting distribution of soil properties in the critical zone. Soil properties can exhibit strong spatial variation, even at the small catchment scale. Soil carbon pools in semi-arid, mountainous areas are especially uncertain because bulk density and stoniness are very heterogeneous and rarely measured explicitly. In this study, we explore the spatial variability in key soil properties (soil carbon stocks, stoniness, bulk density and soil depth) as a function of processes shaping the critical zone (weathering, erosion, soil water fluxes and vegetation patterns). We also compare the potential of traditional digital soil mapping versus a mechanistic soil formation model (MILESD) for predicting these key soil properties. Soil core samples were collected from 67 locations at 6 depths. Total soil organic carbon stocks were 4.38 kg m-2. Solar radiation proved to be the key variable controlling soil carbon distribution. Stone content was mostly controlled by slope, indicating the importance of erosion. The spatial distribution of bulk density was found to be highly random. Finally, total carbon stocks were predicted using a random forest model whose main covariates were solar radiation and NDVI. The model predicts carbon stocks that are twice as high on north-facing versus south-facing slopes. However, validation showed that these covariates only explained 25% of the variation in the dataset. Apparently, present-day landscape and vegetation properties are not sufficient to fully explain variability in the soil carbon stocks in this complex terrain under natural vegetation. This is attributed to a high spatial variability in bulk density and stoniness, key variables controlling carbon stocks. Similar results were obtained with the mechanistic soil formation model MILESD, suggesting that more complex models might be needed to further explore this high spatial variability.

  4. Spatiotemporal Variability of Turbulence Kinetic Energy Budgets in the Convective Boundary Layer over Both Simple and Complex Terrain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rai, Raj K.; Berg, Larry K.; Pekour, Mikhail

    The assumption of sub-grid scale (SGS) horizontal homogeneity within a model grid cell, which forms the basis of SGS turbulence closures used by mesoscale models, becomes increasingly tenuous as grid spacing is reduced to a few kilometers or less, such as in many emerging high-resolution applications. Herein, we use the turbulence kinetic energy (TKE) budget equation to study the spatio-temporal variability in two types of terrain—complex (Columbia Basin Wind Energy Study [CBWES] site, north-eastern Oregon) and flat (Scaled Wind Farm Technologies [SWiFT] site, west Texas) using the Weather Research and Forecasting (WRF) model. In each case six nested domains (three domains each for mesoscale and large-eddy simulation [LES]) are used to downscale the horizontal grid spacing from 10 km to 10 m within the WRF model framework. The model output was used to calculate the values of the TKE budget terms in vertical and horizontal planes as well as the averages of grid cells contained in the four quadrants (a quarter area) of the LES domain. The budget terms calculated along the planes and the mean profiles of the budget terms show larger spatial variability at the CBWES site than at the SWiFT site. The contribution of the horizontal derivatives of the shear production term to the total shear production was found to be 45% and 15% at the CBWES and SWiFT sites, respectively, indicating that the horizontal derivatives applied in the budget equation should not be ignored in mesoscale model parameterizations, especially for complex terrain at scales below 10 km.
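
    For reference, the horizontally homogeneous form of the TKE budget is sketched below; the point of the study is precisely that the horizontal-derivative contributions dropped from this simplified form can be significant over complex terrain.

    ```latex
    % Horizontally homogeneous TKE budget (simplified reference form):
    %   shear production + buoyancy production - turbulent transport - pressure transport - dissipation
    \frac{\partial \bar{e}}{\partial t}
      = -\,\overline{u'w'}\,\frac{\partial \bar{U}}{\partial z}
        -\,\overline{v'w'}\,\frac{\partial \bar{V}}{\partial z}
        + \frac{g}{\bar{\theta}_v}\,\overline{w'\theta_v'}
        - \frac{\partial \overline{w'e}}{\partial z}
        - \frac{1}{\bar{\rho}}\,\frac{\partial \overline{w'p'}}{\partial z}
        - \varepsilon .
    ```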

  5. A meteorological distribution system for high-resolution terrestrial modeling (MicroMet)

    Treesearch

    Glen E. Liston; Kelly Elder

    2006-01-01

    An intermediate-complexity, quasi-physically based, meteorological model (MicroMet) has been developed to produce high-resolution (e.g., 30-m to 1-km horizontal grid increment) atmospheric forcings required to run spatially distributed terrestrial models over a wide variety of landscapes. The following eight variables, required to run most terrestrial models, are...

  6. Support for Simulation-Based Learning; The Effects of Model Progression and Assignments on Learning about Oscillatory Motion.

    ERIC Educational Resources Information Center

    Swaak, Janine; And Others

    In this study, learners worked with a simulation of harmonic oscillation. Two supportive measures were introduced: model progression and assignments. In model progression, the model underlying the simulation is not offered in its full complexity from the start, but variables are gradually introduced. Assignments are small exercises that help the…

  7. Model development and validation of geometrically complex eddy current coils using finite element methods

    NASA Astrophysics Data System (ADS)

    Brown, Alexander; Eviston, Connor

    2017-02-01

    Multiple FEM models of complex eddy current coil geometries were created and validated to calculate the change of impedance due to the presence of a notch. Capable, realistic simulations of eddy current inspections are required for model-assisted probability of detection (MAPOD) studies, inversion algorithms, experimental verification, and tailored probe design for NDE applications. An FEM solver was chosen to model complex real-world situations including varying probe dimensions and orientations along with complex probe geometries. This will also enable creation of a probe model library database with variable parameters. Verification and validation were performed using other commercially available eddy current modeling software as well as experimentally collected benchmark data. Data analysis and comparison showed that the created models were able to correctly model the probe and conductor interactions and accurately calculate the change in impedance of several experimental scenarios with acceptable error. The promising results of the models enabled the start of an eddy current probe model library to give experimenters easy access to powerful parameter-based eddy current models for alternate project applications.

  8. Evaluating measurement models in clinical research: covariance structure analysis of latent variable models of self-conception.

    PubMed

    Hoyle, R H

    1991-02-01

    Indirect measures of psychological constructs are vital to clinical research. On occasion, however, the meaning of indirect measures of psychological constructs is obfuscated by statistical procedures that do not account for the complex relations between items and latent variables and among latent variables. Covariance structure analysis (CSA) is a statistical procedure for testing hypotheses about the relations among items that indirectly measure a psychological construct and relations among psychological constructs. This article introduces clinical researchers to the strengths and limitations of CSA as a statistical procedure for conceiving and testing structural hypotheses that are not tested adequately with other statistical procedures. The article is organized around two empirical examples that illustrate the use of CSA for evaluating measurement models with correlated error terms, higher-order factors, and measured and latent variables.

  9. Nonparametric Bayesian Multiple Imputation for Incomplete Categorical Variables in Large-Scale Assessment Surveys

    ERIC Educational Resources Information Center

    Si, Yajuan; Reiter, Jerome P.

    2013-01-01

    In many surveys, the data comprise a large number of categorical variables that suffer from item nonresponse. Standard methods for multiple imputation, like log-linear models or sequential regression imputation, can fail to capture complex dependencies and can be difficult to implement effectively in high dimensions. We present a fully Bayesian,…

  10. Comparison of Metabolomics Approaches for Evaluating the Variability of Complex Botanical Preparations: Green Tea (Camellia sinensis) as a Case Study.

    PubMed

    Kellogg, Joshua J; Graf, Tyler N; Paine, Mary F; McCune, Jeannine S; Kvalheim, Olav M; Oberlies, Nicholas H; Cech, Nadja B

    2017-05-26

    A challenge that must be addressed when conducting studies with complex natural products is how to evaluate their complexity and variability. Traditional methods of quantifying a single or a small range of metabolites may not capture the full chemical complexity of multiple samples. Different metabolomics approaches were evaluated to discern how they facilitated comparison of the chemical composition of commercial green tea [Camellia sinensis (L.) Kuntze] products, with the goal of capturing the variability of commercially used products and selecting representative products for in vitro or clinical evaluation. Three metabolomic-related methods (untargeted ultraperformance liquid chromatography-mass spectrometry [UPLC-MS], targeted UPLC-MS, and untargeted, quantitative 1H NMR) were employed to characterize 34 commercially available green tea samples. Of these methods, untargeted UPLC-MS was most effective at discriminating between green tea, green tea supplement, and non-green-tea products. A method using reproduced correlation coefficients calculated from principal component analysis models was developed to quantitatively compare differences among samples. The obtained results demonstrated the utility of metabolomics employing UPLC-MS data for evaluating similarities and differences between complex botanical products.
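
    As one building block of such comparisons, the sketch below scores samples on the first principal components of an autoscaled feature matrix (e.g., binned UPLC-MS features). The data are random placeholders, and the reproduced-correlation-coefficient step developed in the paper is not reproduced here.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(7)

    # Placeholder feature matrix: 34 samples x 500 binned UPLC-MS features.
    X = rng.lognormal(mean=0.0, sigma=1.0, size=(34, 500))

    # Autoscale features, then project samples onto the first two principal components.
    Xs = StandardScaler().fit_transform(X)
    pca = PCA(n_components=2)
    scores = pca.fit_transform(Xs)

    print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 3))
    print("scores of first sample:", np.round(scores[0], 3))
    ```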

  11. Comparison of Metabolomics Approaches for Evaluating the Variability of Complex Botanical Preparations: Green Tea (Camellia sinensis) as a Case Study

    PubMed Central

    2017-01-01

    A challenge that must be addressed when conducting studies with complex natural products is how to evaluate their complexity and variability. Traditional methods of quantifying a single or a small range of metabolites may not capture the full chemical complexity of multiple samples. Different metabolomics approaches were evaluated to discern how they facilitated comparison of the chemical composition of commercial green tea [Camellia sinensis (L.) Kuntze] products, with the goal of capturing the variability of commercially used products and selecting representative products for in vitro or clinical evaluation. Three metabolomic-related methods—untargeted ultraperformance liquid chromatography–mass spectrometry (UPLC-MS), targeted UPLC-MS, and untargeted, quantitative 1H NMR—were employed to characterize 34 commercially available green tea samples. Of these methods, untargeted UPLC-MS was most effective at discriminating between green tea, green tea supplement, and non-green-tea products. A method using reproduced correlation coefficients calculated from principal component analysis models was developed to quantitatively compare differences among samples. The obtained results demonstrated the utility of metabolomics employing UPLC-MS data for evaluating similarities and differences between complex botanical products. PMID:28453261

  12. Lithologic Effects on Landscape Response to Base Level Changes: A Modeling Study in the Context of the Eastern Jura Mountains, Switzerland

    NASA Astrophysics Data System (ADS)

    Yanites, Brian J.; Becker, Jens K.; Madritsch, Herfried; Schnellmann, Michael; Ehlers, Todd A.

    2017-11-01

    Landscape evolution is a product of the forces that drive geomorphic processes (e.g., tectonics and climate) and the resistance to those processes. The underlying lithology and structural setting in many landscapes set the resistance to erosion. This study uses a modified version of the Channel-Hillslope Integrated Landscape Development (CHILD) landscape evolution model to determine the effect of a spatially and temporally changing erodibility in a terrain with a complex base level history. Specifically, our focus is to quantify how the effects of variable lithology influence transient base level signals. We set up a series of numerical landscape evolution models with increasing levels of complexity based on the lithologic variability and base level history of the Jura Mountains of northern Switzerland. The models are consistent with lithology (and therewith erodibility) playing an important role in the transient evolution of the landscape. The results show that the erosion rate history at a location depends on the rock uplift and base level history, the range of erodibilities of the different lithologies, and the history of the surface geology downstream from the analyzed location. Near the model boundary, the history of erosion is dominated by the base level history. The transient wave of incision, however, is quite variable in the different model runs and depends on the geometric structure of lithology used. It is thus important to constrain the spatiotemporal erodibility patterns downstream of any given point of interest to understand the evolution of a landscape subject to variable base level in a quantitative framework.
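
    Landscape evolution models of this kind commonly represent fluvial incision with a detachment-limited stream-power rule in which lithology enters through the erodibility coefficient; a generic form is sketched below (the modified CHILD formulation used in the study may differ in detail).

    ```latex
    % Generic detachment-limited stream-power rule; lithology enters through K(x, z):
    \frac{\partial z}{\partial t} = U(x,t) - K(x,z)\,A^{m} S^{n},
    ```

    where z is elevation, U rock uplift relative to base level, A drainage area, S channel slope, and K the lithology-dependent erodibility.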

  13. Attributing runoff changes to climate variability and human activities: uncertainty analysis using four monthly water balance models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Shuai; Xiong, Lihua; Li, Hong-Yi

    2015-05-26

    Hydrological simulations to delineate the impacts of climate variability and human activities are subject to uncertainties related to both the parameters and the structure of the hydrological models. To analyze the impact of these uncertainties on model performance and to yield more reliable simulation results, a global calibration and multimodel combination method that integrates the Shuffled Complex Evolution Metropolis (SCEM) algorithm and Bayesian Model Averaging (BMA) of four monthly water balance models was proposed. The method was applied to the Weihe River Basin (WRB), the largest tributary of the Yellow River, to determine the contribution of climate variability and human activities to runoff changes. The change point, which was used to determine the baseline period (1956-1990) and the human-impacted period (1991-2009), was derived using both a cumulative curve and Pettitt's test. Results show that the combination method based on SCEM provides more skillful deterministic predictions than the best calibrated individual model, resulting in the smallest uncertainty interval of runoff changes attributed to climate variability and human activities. This combination methodology provides a practical and flexible tool for the attribution of runoff changes to climate variability and human activities by hydrological models.
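
    A minimal numerical sketch of the model-combination idea is given below: several model simulations are merged into a weighted ensemble, with weights derived here from a simple Gaussian-likelihood score as a stand-in for the full SCEM-calibrated BMA weights used in the study. All data are synthetic placeholders.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Placeholder observed monthly runoff and four competing model simulations.
    obs = rng.gamma(3.0, 20.0, size=120)
    sims = np.stack([obs + rng.normal(0, 5 + 3 * k, size=120) for k in range(4)])

    # Simple Gaussian-likelihood weights (a stand-in for full BMA weight estimation).
    sigma2 = ((sims - obs) ** 2).mean(axis=1)
    loglik = (-0.5 * len(obs) * np.log(2 * np.pi * sigma2)
              - 0.5 * ((sims - obs) ** 2 / sigma2[:, None]).sum(axis=1))
    weights = np.exp(loglik - loglik.max())
    weights /= weights.sum()

    # The BMA mean prediction is the weight-averaged ensemble.
    bma_mean = (weights[:, None] * sims).sum(axis=0)
    rmse = float(np.sqrt(((bma_mean - obs) ** 2).mean()))
    print("weights:", np.round(weights, 3))
    print("RMSE of BMA mean:", round(rmse, 2))
    ```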

  14. Disturbance History,Spatial Variability, and Patterns of Biodiversity

    NASA Astrophysics Data System (ADS)

    Bendix, J.; Wiley, J. J.; Commons, M.

    2012-12-01

    The intermediate disturbance hypothesis predicts that species diversity will be maximized in environments experiencing intermediate-intensity disturbance, after an intermediate timespan. Because many landscapes comprise mosaics with complex disturbance histories, the theory implies that each patch in those mosaics should have a distinct level of diversity reflecting the combined impact of the magnitude of disturbance and the time since it occurred. We modeled the changing patterns of species richness across a landscape experiencing varied scenarios of simulated disturbance. Model outputs show that individual landscape patches have highly variable species richness through time, with the details reflecting the timing, intensity, and sequence of their disturbance history. When the results are mapped across the landscape, the resulting temporal and spatial complexity illustrates both the contingent nature of diversity and the danger of generalizing about the impacts of disturbance.

  15. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology.

    PubMed

    Fox, Eric W; Hill, Ryan A; Leibowitz, Scott G; Olsen, Anthony R; Thornbrugh, Darren J; Weber, Marc H

    2017-07-01

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological data sets, there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables is used or stepwise procedures are employed which iteratively remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating data set consists of the good/poor condition of n = 1365 stream survey sites from the 2008/2009 National Rivers and Stream Assessment, and a large set (p = 212) of landscape features from the StreamCat data set as potential predictors. We compare two types of RF models: a full variable set model with all 212 predictors and a reduced variable set model selected using a backward elimination approach. We assess model accuracy using RF's internal out-of-bag estimate, and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substantial improvement in cross-validated accuracy as a result of variable reduction. Moreover, the backward elimination procedure tended to select too few variables and exhibited numerous issues such as upwardly biased out-of-bag accuracy estimates and instabilities in the spatial predictions. We use simulations to further support and generalize results from the analysis of real data. A main purpose of this work is to elucidate issues of model selection bias and instability to ecologists interested in using RF to develop predictive models with large environmental data sets.
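
    A minimal sketch of backward elimination guided by a random forest's out-of-bag (OOB) accuracy is shown below; the data are synthetic placeholders rather than the NRSA/StreamCat variables, and the stopping rule is arbitrary.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    # Synthetic stand-in: 1365 "sites", 30 candidate predictors, binary condition class.
    X, y = make_classification(n_samples=1365, n_features=30, n_informative=8, random_state=0)
    features = list(range(X.shape[1]))

    def fit_rf(cols):
        rf = RandomForestClassifier(n_estimators=200, oob_score=True,
                                    random_state=0, n_jobs=-1)
        rf.fit(X[:, cols], y)
        return rf.oob_score_, rf.feature_importances_

    # Backward elimination: repeatedly drop the least important predictor,
    # tracking how the OOB accuracy responds.
    while len(features) > 5:
        acc, importances = fit_rf(features)
        print(f"{len(features):3d} predictors  OOB accuracy = {acc:.3f}")
        features.pop(int(np.argmin(importances)))
    ```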

  16. Power Analysis for Complex Mediational Designs Using Monte Carlo Methods

    ERIC Educational Resources Information Center

    Thoemmes, Felix; MacKinnon, David P.; Reiser, Mark R.

    2010-01-01

    Applied researchers often include mediation effects in applications of advanced methods such as latent variable models and linear growth curve models. Guidance on how to estimate statistical power to detect mediation for these models has not yet been addressed in the literature. We describe a general framework for power analyses for complex…
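
    A minimal Monte Carlo power sketch for the simplest case, a single-mediator model (X -> M -> Y) tested with a Sobel-type z statistic, is given below; the path coefficients are illustrative, and the general framework described in the paper covers more complex designs.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def mediation_power(n, a=0.3, b=0.3, c_prime=0.1, reps=1000):
        """Monte Carlo power for the indirect effect a*b (Sobel-type z test, alpha = 0.05)."""
        hits = 0
        for _ in range(reps):
            x = rng.normal(size=n)
            m = a * x + rng.normal(size=n)
            y = c_prime * x + b * m + rng.normal(size=n)

            # Path a: regress M on X.
            Xa = np.column_stack([np.ones(n), x])
            beta_a, res_a = np.linalg.lstsq(Xa, m, rcond=None)[:2]
            a_hat = beta_a[1]
            se_a = np.sqrt(res_a[0] / (n - 2) / np.sum((x - x.mean()) ** 2))

            # Path b: regress Y on M, controlling for X.
            Xb = np.column_stack([np.ones(n), x, m])
            beta_b, res_b, *_ = np.linalg.lstsq(Xb, y, rcond=None)
            cov_b = res_b[0] / (n - 3) * np.linalg.inv(Xb.T @ Xb)
            b_hat, se_b = beta_b[2], np.sqrt(cov_b[2, 2])

            z = (a_hat * b_hat) / np.sqrt(a_hat**2 * se_b**2 + b_hat**2 * se_a**2)
            hits += abs(z) > 1.96
        return hits / reps

    print("power at n = 100:", mediation_power(100))
    ```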

  17. Proportional Reasoning of Preservice Elementary Education Majors: An Epistemic Model of the Proportional Reasoning Construct.

    ERIC Educational Resources Information Center

    Fleener, M. Jayne

    Current research and learning theory suggest that a hierarchy of proportional reasoning exists that can be tested. Using G. Vergnaud's four complexity variables (structure, content, numerical characteristics, and presentation) and T. E. Kieren's model of rational number knowledge building, an epistemic model of proportional reasoning was…

  18. Probabilistic modeling of anatomical variability using a low dimensional parameterization of diffeomorphisms.

    PubMed

    Zhang, Miaomiao; Wells, William M; Golland, Polina

    2017-10-01

    We present an efficient probabilistic model of anatomical variability in a linear space of initial velocities of diffeomorphic transformations and demonstrate its benefits in clinical studies of brain anatomy. To overcome the computational challenges of high-dimensional deformation-based descriptors, we develop a latent variable model for principal geodesic analysis (PGA) based on a low-dimensional shape descriptor that effectively captures the intrinsic variability in a population. We define a novel shape prior that explicitly represents principal modes as a multivariate complex Gaussian distribution on the initial velocities in a bandlimited space. We demonstrate the performance of our model on a set of 3D brain MRI scans from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Our model yields a more compact representation of group variation at substantially lower computational cost than state-of-the-art methods such as tangent space PCA (TPCA) and probabilistic principal geodesic analysis (PPGA) that operate in the high-dimensional image space. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Job Loss: An Individual Level Review and Model.

    ERIC Educational Resources Information Center

    DeFrank, Richard S.; Ivancevich, John M.

    1986-01-01

    Reviews behavioral, medical, and social science literature to illustrate the complexity and multidisciplinary nature of the job loss experience and provides a conceptual model to examine individual responses to job loss. Emphasizes the importance of including organizational-relevant variables in individual level conceptualizations and proposed…

  20. Best geoscience approach to complex systems in environment

    NASA Astrophysics Data System (ADS)

    Mezemate, Yacine; Tchiguirinskaia, Ioulia; Schertzer, Daniel

    2017-04-01

    The environment is a social issue that continues to grow in importance. Its complexity, both cross-disciplinary and multi-scale, has given rise to a large number of scientific and technological obstacles that complex systems approaches can address. Significant challenges must be met to achieve an understanding of environmental complex systems. Their study should proceed in several steps in which the use of data and models is crucial: exploration, observation and basic data acquisition; identification of correlations, patterns, and mechanisms; modelling; model validation, implementation and prediction; and construction of a theory. Since e-learning has become a powerful tool for knowledge and best-practice sharing, we use it to teach environmental complexity and systems. In this presentation we promote an e-learning course intended for a broad audience (undergraduates, graduates, PhD students and young scientists) which gathers and puts in coherence different pedagogical materials on complex systems and environmental studies. The course describes complex processes using numerous illustrations, examples and tests that make the learning process easy to enjoy. For the sake of simplicity, the course is divided into different modules, and at the end of each module a set of exercises and program codes is proposed for best practice. The graphical user interface (GUI), constructed using the open-source Opale Scenari, offers simple navigation through the different modules. The course treats the complex systems that can be found in the environment and their observables; we particularly highlight the extreme variability of these observables over a wide range of scales. Using the multifractal formalism in different applications (turbulence, precipitation, hydrology), we demonstrate how such extreme variability of geophysical/biological fields should be used in solving everyday (geo-)environmental challenges.

  1. Not Noisy, Just Wrong: The Role of Suboptimal Inference in Behavioral Variability

    PubMed Central

    Beck, Jeffrey M.; Ma, Wei Ji; Pitkow, Xaq; Latham, Peter E.; Pouget, Alexandre

    2015-01-01

    Behavior varies from trial to trial even when the stimulus is maintained as constant as possible. In many models, this variability is attributed to noise in the brain. Here, we propose that there is another major source of variability: suboptimal inference. Importantly, we argue that in most tasks of interest, and particularly complex ones, suboptimal inference is likely to be the dominant component of behavioral variability. This perspective explains a variety of intriguing observations, including why variability appears to be larger on the sensory than on the motor side, and why our sensors are sometimes surprisingly unreliable. PMID:22500627

  2. Randomization-Based Inference about Latent Variables from Complex Samples: The Case of Two-Stage Sampling

    ERIC Educational Resources Information Center

    Li, Tiandong

    2012-01-01

    In large-scale assessments, such as the National Assessment of Educational Progress (NAEP), plausible values based on Multiple Imputations (MI) have been used to estimate population characteristics for latent constructs under complex sample designs. Mislevy (1991) derived a closed-form analytic solution for a fixed-effect model in creating…

  3. Plenary Speech: Researching Complex Dynamic Systems--"Retrodictive Qualitative Modelling" in the Language Classroom

    ERIC Educational Resources Information Center

    Dörnyei, Zoltán

    2014-01-01

    While approaching second language acquisition from a complex dynamic systems perspective makes a lot of intuitive sense, it is difficult for a number of reasons to operationalise such a dynamic approach in research terms. For example, the most common research paradigms in the social sciences tend to examine variables in relative isolation rather…

  4. VARIABLE BOUND-SITE CHARGING CONTRIBUTIONS TO SURFACE COMPLEXATION MASS ACTION EXPRESSIONS

    EPA Science Inventory

    One and two pK models of surface complexation reactions between reactive surface sites (>SOH) and the proton (H+) use mass action expressions of the form $K_a = \left\{[\mathrm{{>}SOH}_{n-1}^{\,z-1}]\,\gamma_{\mathrm{{>}SOH}(n-1)}\,a_{\mathrm{H^+}}\,\exp(-\Delta z\,e\Psi/kT)\right\}/\left\{[\mathrm{{>}SOH}_{n}^{\,z}]\,\gamma_{\mathrm{{>}SOH}(n)}\right\}$, where Ka = the acidity constant, [ ] = reactive species concentrati...

  5. Bim Automation: Advanced Modeling Generative Process for Complex Structures

    NASA Astrophysics Data System (ADS)

    Banfi, F.; Fai, S.; Brumana, R.

    2017-08-01

    The new paradigm of the complexity of modern and historic structures, which are characterised by complex forms and morphological and typological variables, is one of the greatest challenges for building information modelling (BIM). Generation of complex parametric models needs new scientific knowledge concerning new digital technologies. These elements are helpful to store a vast quantity of information during the life cycle of buildings (LCB). The latest developments of parametric applications do not provide advanced tools, resulting in time-consuming work for the generation of models. This paper presents a method capable of processing and creating complex parametric Building Information Models (BIM) with Non-Uniform Rational Basis Splines (NURBS) with multiple levels of detail (Mixed and ReverseLoD) based on accurate 3D photogrammetric and laser scanning surveys. Complex 3D elements are converted into parametric BIM software and finite element applications (BIM to FEA) using specific exchange formats and new modelling tools. The proposed approach has been applied to different case studies: the BIM of a modern structure for the courtyard of West Block on Parliament Hill in Ottawa (Ontario) and the BIM of Masegra Castel in Sondrio (Italy), encouraging the dissemination and interaction of scientific results without losing information during the generative process.

  6. Representing general theoretical concepts in structural equation models: The role of composite variables

    USGS Publications Warehouse

    Grace, J.B.; Bollen, K.A.

    2008-01-01

    Structural equation modeling (SEM) holds the promise of providing natural scientists the capacity to evaluate complex multivariate hypotheses about ecological systems. Building on its predecessors, path analysis and factor analysis, SEM allows for the incorporation of both observed and unobserved (latent) variables into theoretically-based probabilistic models. In this paper we discuss the interface between theory and data in SEM and the use of an additional variable type, the composite. In simple terms, composite variables specify the influences of collections of other variables and can be helpful in modeling heterogeneous concepts of the sort commonly of interest to ecologists. While long recognized as a potentially important element of SEM, composite variables have received very limited use, in part because of a lack of theoretical consideration, but also because of difficulties that arise in parameter estimation when using conventional solution procedures. In this paper we present a framework for discussing composites and demonstrate how the use of partially-reduced-form models can help to overcome some of the parameter estimation and evaluation problems associated with models containing composites. Diagnostic procedures for evaluating the most appropriate and effective use of composites are illustrated with an example from the ecological literature. It is argued that an ability to incorporate composite variables into structural equation models may be particularly valuable in the study of natural systems, where concepts are frequently multifaceted and the influence of suites of variables are often of interest. © Springer Science+Business Media, LLC 2007.

  7. The effect of muscle fatigue and low back pain on lumbar movement variability and complexity.

    PubMed

    Bauer, C M; Rast, F M; Ernst, M J; Meichtry, A; Kool, J; Rissanen, S M; Suni, J H; Kankaanpää, M

    2017-04-01

    Changes in movement variability and complexity may reflect an adaptation strategy to fatigue. One unresolved question is whether this adaptation is hampered by the presence of low back pain (LBP). This study investigated if changes in movement variability and complexity after fatigue are influenced by the presence of LBP. It is hypothesised that pain-free people and people suffering from LBP differ in their response to fatigue. The effect of an isometric endurance test on lumbar movement was tested in 27 pain-free participants and 59 participants suffering from LBP. Movement variability and complexity were quantified with %determinism and sample entropy of lumbar angular displacement and velocity. Generalized linear models were fitted for each outcome. Bayesian estimation of the group-fatigue effect with 95% highest posterior density intervals (95%HPDI) was performed. After fatiguing, %determinism decreased and sample entropy increased in the pain-free group, compared to the LBP group. The corresponding group-fatigue effects were 3.7 (95%HPDI: 2.3-7.1) and -1.4 (95%HPDI: -2.7 to -0.1). These effects manifested in angular velocity, but not in angular displacement. The effects indicate that pain-free participants showed more complex and less predictable lumbar movement with a lower degree of structure in its variability following fatigue, while participants suffering from LBP did not. These may be physiological responses to avoid overload of fatigued tissue or to increase endurance, or a consequence of reduced movement control caused by fatigue. Copyright © 2017 Elsevier Ltd. All rights reserved.
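
    For reference, a small implementation of sample entropy for a univariate series (e.g., lumbar angular velocity) is sketched below; the embedding dimension and tolerance are the usual defaults, not necessarily those used in the study, and %determinism (from recurrence quantification) is not shown.

    ```python
    import numpy as np

    def sample_entropy(x, m=2, r_factor=0.2):
        """Sample entropy of a 1-D series: -ln(A/B), where B counts template matches
        of length m and A matches of length m+1, within tolerance r = r_factor * SD."""
        x = np.asarray(x, dtype=float)
        r = r_factor * x.std()
        n = len(x)

        def count_matches(length):
            templates = np.array([x[i:i + length] for i in range(n - m)])
            # Chebyshev distance between all template pairs (self-matches excluded).
            d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
            return ((d <= r).sum() - len(templates)) / 2

        B = count_matches(m)
        A = count_matches(m + 1)
        return -np.log(A / B) if A > 0 and B > 0 else np.inf

    # Hypothetical example: a noisier movement signal yields higher sample entropy.
    t = np.linspace(0, 20 * np.pi, 1000)
    smooth = np.sin(t)
    noisy = np.sin(t) + 0.5 * np.random.default_rng(0).normal(size=t.size)
    print("SampEn smooth:", round(float(sample_entropy(smooth)), 3))
    print("SampEn noisy: ", round(float(sample_entropy(noisy)), 3))
    ```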

  8. Multilayer Joint Gait-Pose Manifolds for Human Gait Motion Modeling.

    PubMed

    Ding, Meng; Fan, Guolian

    2015-11-01

    We present new multilayer joint gait-pose manifolds (multilayer JGPMs) for complex human gait motion modeling, where three latent variables are defined jointly in a low-dimensional manifold to represent a variety of body configurations. Specifically, the pose variable (along the pose manifold) denotes a specific stage in a walking cycle; the gait variable (along the gait manifold) represents different walking styles; and the linear scale variable characterizes the maximum stride in a walking cycle. We discuss two kinds of topological priors for coupling the pose and gait manifolds, i.e., cylindrical and toroidal, to examine their effectiveness and suitability for motion modeling. We resort to a topologically-constrained Gaussian process (GP) latent variable model to learn the multilayer JGPMs where two new techniques are introduced to facilitate model learning under limited training data. First is training data diversification that creates a set of simulated motion data with different strides. Second is the topology-aware local learning to speed up model learning by taking advantage of the local topological structure. The experimental results on the Carnegie Mellon University motion capture data demonstrate the advantages of our proposed multilayer models over several existing GP-based motion models in terms of the overall performance of human gait motion modeling.

  9. Force control compensation method with variable load stiffness and damping of the hydraulic drive unit force control system

    NASA Astrophysics Data System (ADS)

    Kong, Xiangdong; Ba, Kaixian; Yu, Bin; Cao, Yuan; Zhu, Qixin; Zhao, Hualong

    2016-05-01

    Each joint of a hydraulic drive quadruped robot is driven by a hydraulic drive unit (HDU), and the contact between the robot foot end and the ground is complex and variable, which inevitably increases the difficulty of force control. In recent years, although many scholars have researched control methods such as disturbance rejection control, parameter self-adaptive control, and impedance control to improve the force control performance of the HDU, the robustness of the force control still needs improvement. Therefore, how to simulate the complex and variable load characteristics of the environment structure, and how to ensure that the HDU retains excellent force control performance under such load characteristics, are the key issues addressed in this paper. The mathematical model of the HDU force control system is established by a mechanism modeling method, and the theoretical models of a novel force control compensation method and of a load characteristics simulation method under different environment structures are derived, considering the dynamic characteristics of the load stiffness and the load damping under different environment structures. Then, the simulated effects of variable load stiffness and load damping under step and sinusoidal load forces are analyzed experimentally on the HDU force control performance test platform, which provides the foundation for the force control compensation experiments. In addition, optimized PID control parameters are designed so that the HDU has good force control performance with suitable load stiffness and load damping; with these parameters, the force control compensation method is introduced, and the robustness of the force control system is comparatively analyzed by experiment for several constant load characteristics and for variable load characteristics. The results indicate that, if the load characteristics are known, the force control compensation method presented in this paper compensates effectively for load characteristic variations, i.e., it decreases the effects of these variations on the force control performance and enhances the robustness of the force control system with constant PID parameters, so that a complex online PID parameter tuning method need not be adopted. This research provides a theoretical and experimental foundation for a highly robust force control method for the joints of quadruped robots.
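
    The compensation idea can be caricatured as a fixed-gain PID force loop augmented with a feedforward term built from the known (or estimated) load stiffness and damping, so that the PID gains need not be retuned when the load changes. The sketch below is illustrative only: the plant is absent, and the gains and compensation scaling are arbitrary placeholders, not the HDU model from the paper.

    ```python
    # Illustrative only: fixed-gain PID force step with a load-characteristic feedforward term.

    def pid_force_step(force_ref, force_meas, state, dt,
                       kp=2.0, ki=5.0, kd=0.01,
                       load_stiffness=1e5, load_damping=200.0,
                       position=0.0, velocity=0.0, comp_gain=1e-5):
        """One control step; state carries the integral and previous error."""
        error = force_ref - force_meas
        state["integral"] += error * dt
        derivative = (error - state["prev_error"]) / dt
        state["prev_error"] = error

        pid_out = kp * error + ki * state["integral"] + kd * derivative

        # Feedforward: counteract the force expected from the assumed load
        # stiffness and damping so the PID only handles the residual error.
        feedforward = comp_gain * (load_stiffness * position + load_damping * velocity)
        return pid_out + feedforward

    state = {"integral": 0.0, "prev_error": 0.0}
    u = pid_force_step(1000.0, 950.0, state, dt=1e-3, position=0.002, velocity=0.05)
    print(u)
    ```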

  10. Uranium plume persistence impacted by hydrologic and geochemical heterogeneity in the groundwater and river water interaction zone of Hanford site

    NASA Astrophysics Data System (ADS)

    Chen, X.; Zachara, J. M.; Vermeul, V. R.; Freshley, M.; Hammond, G. E.

    2015-12-01

    The behavior of a persistent uranium plume in an extended groundwater- river water (GW-SW) interaction zone at the DOE Hanford site is dominantly controlled by river stage fluctuations in the adjacent Columbia River. The plume behavior is further complicated by substantial heterogeneity in physical and geochemical properties of the host aquifer sediments. Multi-scale field and laboratory experiments and reactive transport modeling were integrated to understand the complex plume behavior influenced by highly variable hydrologic and geochemical conditions in time and space. In this presentation we (1) describe multiple data sets from field-scale uranium adsorption and desorption experiments performed at our experimental well-field, (2) develop a reactive transport model that incorporates hydrologic and geochemical heterogeneities characterized from multi-scale and multi-type datasets and a surface complexation reaction network based on laboratory studies, and (3) compare the modeling and observation results to provide insights on how to refine the conceptual model and reduce prediction uncertainties. The experimental results revealed significant spatial variability in uranium adsorption/desorption behavior, while modeling demonstrated that ambient hydrologic and geochemical conditions and heterogeneities in sediment physical and chemical properties both contributed to complex plume behavior and its persistence. Our analysis provides important insights into the characterization, understanding, modeling, and remediation of groundwater contaminant plumes influenced by surface water and groundwater interactions.

  11. Supporting Fisheries Management by Means of Complex Models: Can We Point out Isles of Robustness in a Sea of Uncertainty?

    PubMed Central

    Gasche, Loïc; Mahévas, Stéphanie; Marchal, Paul

    2013-01-01

    Ecosystems are usually complex, nonlinear and strongly influenced by poorly known environmental variables. Among these systems, marine ecosystems have high uncertainties: marine populations in general are known to exhibit large levels of natural variability and the intensity of fishing efforts can change rapidly. These uncertainties are a source of risks that threaten the sustainability of both fish populations and fishing fleets targeting them. Appropriate management measures have to be found in order to reduce these risks and decrease sensitivity to uncertainties. Methods have been developed within decision theory that aim at allowing decision making under severe uncertainty. One of these methods is the information-gap decision theory. The info-gap method has started to permeate ecological modelling, with recent applications to conservation. However, these practical applications have so far been restricted to simple models with analytical solutions. Here we implement a deterministic approach based on decision theory in a complex model of the Eastern English Channel. Using the ISIS-Fish modelling platform, we model populations of sole and plaice in this area. We test a wide range of values for ecosystem, fleet and management parameters. From these simulations, we identify management rules controlling fish harvesting that allow reaching management goals recommended by ICES (International Council for the Exploration of the Sea) working groups while providing the highest robustness to uncertainties on ecosystem parameters. PMID:24204873

  13. A Hardware Model Validation Tool for Use in Complex Space Systems

    NASA Technical Reports Server (NTRS)

    Davies, Misty Dawn; Gundy-Burlet, Karen L.; Limes, Gregory L.

    2010-01-01

    One of the many technological hurdles that must be overcome in future missions is the challenge of validating as-built systems against the models used for design. We propose a technique composed of intelligent parameter exploration in concert with automated failure analysis as a scalable method for the validation of complex space systems. The technique is impervious to discontinuities and linear dependencies in the data, and can handle dimensionalities consisting of hundreds of variables over tens of thousands of experiments.

  14. Assimilating multi-source uncertainties of a parsimonious conceptual hydrological model using hierarchical Bayesian modeling

    Treesearch

    Wei Wu; James Clark; James Vose

    2010-01-01

    Hierarchical Bayesian (HB) modeling allows for multiple sources of uncertainty by factoring complex relationships into conditional distributions that can be used to draw inference and make predictions. We applied an HB model to estimate the parameters and state variables of a parsimonious hydrological model – GR4J – by coherently assimilating the uncertainties from the...

  15. Ocean Hydrodynamics Numerical Model in Curvilinear Coordinates for Simulating Circulation of the Global Ocean and its Separate Basins.

    NASA Astrophysics Data System (ADS)

    Gusev, Anatoly; Diansky, Nikolay; Zalesny, Vladimir

    2010-05-01

    An original program complex is presented for the ocean circulation sigma-model developed at the Institute of Numerical Mathematics (INM), Russian Academy of Sciences (RAS). The complex can be used in various curvilinear orthogonal coordinate systems. In addition to the ocean circulation model, it contains a sea-ice dynamics and thermodynamics model, as well as an original system for implementing atmospheric forcing on the basis of either prescribed meteorological data or atmospheric model results. The complex can serve as the oceanic block of an Earth climate model and can also be used to solve scientific and practical problems concerning the World Ocean and its individual oceans and seas. It runs efficiently on parallel shared-memory computational systems and on contemporary personal computers. On the basis of the proposed complex, an ocean general circulation model (OGCM) was developed. The model is formulated in a curvilinear orthogonal coordinate system obtained by a conformal transformation of the standard geographical grid, which places the coordinate singularities outside the integration domain. The horizontal resolution of the OGCM is 1 degree in longitude and 0.5 degree in latitude, with 40 non-uniform sigma-levels in depth. The model was integrated for 100 years, starting from the Levitus January climatology, using a realistic atmospheric annual cycle calculated from the CORE datasets. The experiments showed that the model adequately reproduces the basic characteristics of large-scale World Ocean dynamics, in good agreement with both observational data and results from the best climatic OGCMs. This OGCM is used as the oceanic component of the new version of the climate system model (CSM) developed at INM RAS, which is now ready for new numerical experiments on climate change modelling under IPCC (Intergovernmental Panel on Climate Change) scenarios within the scope of CMIP-5 (Coupled Model Intercomparison Project). On the basis of the proposed complex, a Pacific Ocean eddy-resolving circulation model was also realized. The integration domain covers the Pacific from the Equator to the Bering Strait. The model horizontal resolution is 0.125 degree, with 20 non-uniform sigma-levels in depth. The model adequately reproduces the large-scale circulation structure and its variability (Kuroshio meandering, ocean synoptic eddies, frontal zones, etc.) and captures the high variability of the Kuroshio. The distribution of a contaminant, assumed to be discharged near Petropavlovsk-Kamchatsky, was also simulated. The results show the structure of the contaminant distribution and improve understanding of the processes forming hydrological fields in the North-West Pacific.

  16. A practical approach to Sasang constitutional diagnosis using vocal features

    PubMed Central

    2013-01-01

    Background Sasang constitutional medicine (SCM) is a type of tailored medicine that divides human beings into four Sasang constitutional (SC) types. Diagnosis of SC types is crucial to proper treatment in SCM. Voice characteristics have been used as an essential clue for diagnosing SC types. In the past, many studies tried to extract quantitative vocal features to make diagnosis models; however, these studies were flawed by limited data collected from one or a few sites, long recording time, and low accuracy. We propose a practical diagnosis model having only a few variables, which decreases model complexity. This, in turn, makes our model appropriate for clinical applications. Methods A total of 2,341 participants' voice recordings were used in building an SC classification model and in testing the generalization ability of the model. Although the voice data consisted of five vowels and two repeated sentences per participant, we used only the sentence part for our study. A total of 21 features were extracted, and an advanced feature selection method—the least absolute shrinkage and selection operator (LASSO)—was applied to reduce the number of variables for classifier learning. An SC classification model was developed using multinomial logistic regression via LASSO. Results We compared the proposed classification model to the previous study, which used both sentences and five vowels from the same patient group. The classification accuracies for the test set were 47.9% and 40.4% for males and females, respectively. Our results showed that the proposed method was superior to the previous study in that it required shorter voice recordings, was more applicable to practical use, and had better generalization performance. Conclusions We proposed a practical SC classification method and showed that our model with fewer variables outperformed the model with many variables in the generalization test. We attempted to reduce the number of variables in two ways: 1) the initial number of candidate features was decreased by considering shorter voice recordings, and 2) LASSO was introduced to reduce model complexity. The proposed method is suitable for an actual clinical environment. Moreover, we expect it to yield more stable results because of the model's simplicity. PMID:24200041
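
    As an illustration of the feature-selection step described above, the hedged sketch below fits an L1-penalized (LASSO-style) multinomial logistic regression with scikit-learn and reports which features keep non-zero coefficients. The data, feature count, and class labels are synthetic placeholders, not the study's dataset.

```python
# Illustrative sketch (not the authors' code): L1-penalized multinomial logistic
# regression as an analogue of "multinomial logistic regression via LASSO".
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(2341, 21))          # placeholder for 21 vocal features
y = rng.integers(0, 4, size=2341)        # placeholder for the SC-type labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_tr)

# The L1 penalty drives many coefficients to exactly zero, reducing model complexity.
clf = LogisticRegression(penalty="l1", solver="saga", C=0.5, max_iter=5000)
clf.fit(scaler.transform(X_tr), y_tr)

selected = np.flatnonzero(np.any(clf.coef_ != 0.0, axis=0))
print("features retained:", selected)
print("test accuracy:", clf.score(scaler.transform(X_te), y_te))
```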

  17. Range expansion through fragmented landscapes under a variable climate

    PubMed Central

    Bennie, Jonathan; Hodgson, Jenny A; Lawson, Callum R; Holloway, Crispin TR; Roy, David B; Brereton, Tom; Thomas, Chris D; Wilson, Robert J

    2013-01-01

    Ecological responses to climate change may depend on complex patterns of variability in weather and local microclimate that overlay global increases in mean temperature. Here, we show that high-resolution temporal and spatial variability in temperature drives the dynamics of range expansion for an exemplar species, the butterfly Hesperia comma. Using fine-resolution (5 m) models of vegetation surface microclimate, we estimate the thermal suitability of 906 habitat patches at the species' range margin for 27 years. Population and metapopulation models that incorporate this dynamic microclimate surface improve predictions of observed annual changes to population density and patch occupancy dynamics during the species' range expansion from 1982 to 2009. Our findings reveal how fine-scale, short-term environmental variability drives rates and patterns of range expansion through spatially localised, intermittent episodes of expansion and contraction. Incorporating dynamic microclimates can thus improve models of species range shifts at spatial and temporal scales relevant to conservation interventions. PMID:23701124

  18. Polarization and long-term variability of Sgr A* X-ray echo

    NASA Astrophysics Data System (ADS)

    Churazov, E.; Khabibullin, I.; Ponti, G.; Sunyaev, R.

    2017-06-01

    We use a model of the molecular gas distribution within ˜100 pc from the centre of the Milky Way (Kruijssen, Dale & Longmore) to simulate time evolution and polarization properties of the reflected X-ray emission, associated with the past outbursts from Sgr A*. While this model is too simple to describe the complexity of the true gas distribution, it illustrates the importance and power of long-term observations of the reflected emission. We show that the variable part of X-ray emission observed by Chandra and XMM-Newton from prominent molecular clouds is well described by a pure reflection model, providing strong support of the reflection scenario. While the identification of Sgr A* as a primary source for this reflected emission is already a very appealing hypothesis, a decisive test of this model can be provided by future X-ray polarimetric observations, which will allow placing constraints on the location of the primary source. In addition, X-ray polarimeters (like, e.g. XIPE) have sufficient sensitivity to constrain the line-of-sight positions of molecular complexes, removing major uncertainty in the model.

  19. OLYMPEX Data Workshop: GPM View

    NASA Technical Reports Server (NTRS)

    Petersen, W.

    2017-01-01

    OLYMPEX Primary Objectives: Datasets to enable: (1) Direct validation over complex terrain at multiple scales, liquid and frozen precip types, (a) Do we capture terrain and synoptic regime transitions, orographic enhancements/structure, full range of precipitation intensity (e.g., very light to heavy) and types, spatial variability? (b) How well can we estimate space/time-accumulated precipitation over terrain (liquid + frozen)? (2) Physical validation of algorithms in mid-latitude cold season frontal systems over ocean and complex terrain, (a) What are the column properties of frozen, melting, liquid hydrometeors-their relative contributions to estimated surface precipitation, transition under the influence of terrain gradients, and systematic variability as a function of synoptic regime? (3) Integrated hydrologic validation in complex terrain, (a) Can satellite estimates be combined with modeling over complex topography to drive improved products (assimilation, downscaling) [Level IV products] (b) What are capabilities and limitations for use of satellite-based precipitation estimates in stream/river flow forecasting?

  20. Applications of MIDAS regression in analysing trends in water quality

    NASA Astrophysics Data System (ADS)

    Penev, Spiridon; Leonte, Daniela; Lazarov, Zdravetz; Mann, Rob A.

    2014-04-01

    We discuss novel statistical methods in analysing trends in water quality. Such analysis uses complex data sets of different classes of variables, including water quality, hydrological and meteorological. We analyse the effect of rainfall and flow on trends in water quality utilising a flexible model called Mixed Data Sampling (MIDAS). This model arises because of the mixed frequency in the data collection. Typically, water quality variables are sampled fortnightly, whereas the rain data is sampled daily. The advantage of using MIDAS regression is in the flexible and parsimonious modelling of the influence of the rain and flow on trends in water quality variables. We discuss the model and its implementation on a data set from the Shoalhaven Supply System and Catchments in the state of New South Wales, Australia. Information criteria indicate that MIDAS modelling improves upon simplistic approaches that do not utilise the mixed data sampling nature of the data.
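
    The sketch below illustrates the MIDAS idea under stated assumptions: daily rainfall lags are collapsed onto each fortnightly water-quality observation with exponential Almon weights, and the weight parameters are estimated together with the regression coefficients by nonlinear least squares. All arrays are synthetic stand-ins, and the two-parameter Almon weighting is one common MIDAS choice rather than the exact specification used by the authors.

```python
# Minimal MIDAS-style sketch (illustrative, not the authors' implementation).
import numpy as np
from scipy.optimize import least_squares

def almon_weights(theta1, theta2, n_lags):
    """Exponential Almon lag weights, normalized to sum to one."""
    k = np.arange(n_lags)
    e = theta1 * k + theta2 * k**2
    w = np.exp(e - e.max())                  # subtract max for numerical stability
    return w / w.sum()

def residuals(params, y, rain_lags):
    beta0, beta1, theta1, theta2 = params
    w = almon_weights(theta1, theta2, rain_lags.shape[1])
    return y - (beta0 + beta1 * rain_lags @ w)

rng = np.random.default_rng(1)
n_obs, n_lags = 120, 14                                   # fortnightly samples, 14 daily lags
rain_lags = rng.gamma(2.0, 3.0, size=(n_obs, n_lags))     # hypothetical daily rainfall lags
true_w = almon_weights(0.2, -0.05, n_lags)
y = 5.0 + 0.8 * rain_lags @ true_w + rng.normal(0, 0.3, n_obs)   # synthetic water-quality series

fit = least_squares(residuals, x0=[0.0, 0.1, 0.0, -0.01], args=(y, rain_lags))
print("estimated [beta0, beta1, theta1, theta2]:", fit.x)
```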

  1. Monte Carlo sensitivity analysis of land surface parameters using the Variable Infiltration Capacity model

    NASA Astrophysics Data System (ADS)

    Demaria, Eleonora M.; Nijssen, Bart; Wagener, Thorsten

    2007-06-01

    Current land surface models use increasingly complex descriptions of the processes that they represent. Increase in complexity is accompanied by an increase in the number of model parameters, many of which cannot be measured directly at large spatial scales. A Monte Carlo framework was used to evaluate the sensitivity and identifiability of ten parameters controlling surface and subsurface runoff generation in the Variable Infiltration Capacity model (VIC). Using the Monte Carlo Analysis Toolbox (MCAT), parameter sensitivities were studied for four U.S. watersheds along a hydroclimatic gradient, based on a 20-year data set developed for the Model Parameter Estimation Experiment (MOPEX). Results showed that simulated streamflows are sensitive to three parameters when evaluated with different objective functions. Sensitivity of the infiltration parameter (b) and the drainage parameter (exp) were strongly related to the hydroclimatic gradient. The placement of vegetation roots played an important role in the sensitivity of model simulations to the thickness of the second soil layer (thick2). Overparameterization was found in the base flow formulation indicating that a simplified version could be implemented. Parameter sensitivity was more strongly dictated by climatic gradients than by changes in soil properties. Results showed how a complex model can be reduced to a more parsimonious form, leading to a more identifiable model with an increased chance of successful regionalization to ungauged basins. Although parameter sensitivities are strictly valid for VIC, this model is representative of a wider class of macroscale hydrological models. Consequently, the results and methodology will have applicability to other hydrological models.
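
    A minimal Monte Carlo sensitivity sketch in the spirit of the MCAT workflow is given below: parameters are sampled uniformly within bounds, a toy runoff model (a stand-in for VIC) is evaluated against synthetic observations, and a rank correlation between each parameter and the RMSE objective summarizes sensitivity. Parameter names echo those in the abstract, but the model itself is hypothetical.

```python
# Illustrative Monte Carlo sensitivity sketch (not VIC itself).
import numpy as np
from scipy.stats import spearmanr

def toy_runoff_model(params, forcing):
    b, exp_drain, thick2 = params            # stand-ins for VIC's b, exp, thick2
    return forcing**b * np.exp(-exp_drain) / thick2

rng = np.random.default_rng(42)
forcing = rng.gamma(2.0, 5.0, size=365)
obs = toy_runoff_model([0.3, 1.5, 1.0], forcing) + rng.normal(0, 0.1, 365)

bounds = {"b": (0.01, 0.9), "exp": (1.0, 30.0), "thick2": (0.1, 2.0)}
n_runs = 5000
samples = np.column_stack([rng.uniform(lo, hi, n_runs) for lo, hi in bounds.values()])
rmse = np.array([np.sqrt(np.mean((toy_runoff_model(p, forcing) - obs) ** 2))
                 for p in samples])

# A strong monotone relation between a parameter and the objective marks it as sensitive.
for name, col in zip(bounds, samples.T):
    rho, _ = spearmanr(col, rmse)
    print(f"{name}: |rank correlation with RMSE| = {abs(rho):.2f}")
```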

  2. Applied Routh approximation

    NASA Technical Reports Server (NTRS)

    Merrill, W. C.

    1978-01-01

    The Routh approximation technique for reducing the complexity of system models was applied in the frequency domain to a 16th-order state-variable model of the F100 engine and to a 43rd-order transfer-function model of a launch vehicle boost pump pressure regulator. The results motivate extending the frequency-domain formulation of the Routh method to the time domain in order to handle the state-variable formulation directly. The time-domain formulation was derived and a characterization that specifies all possible Routh similarity transformations was given. The characterization was computed by solving two eigenvalue-eigenvector problems. The application of the time-domain Routh technique to the state-variable engine model is described, and some results are given. Additional computational problems are discussed, including an optimization procedure that can improve the approximation accuracy by taking advantage of the transformation characterization.

  3. Evaluating a process-based model for use in streambank stabilization and stream restoration: insights on the bank stability and toe erosion model (BSTEM)

    USDA-ARS?s Scientific Manuscript database

    Streambank retreat is a complex cyclical process involving subaerial processes, fluvial erosion, seepage erosion, and geotechnical failures and is driven by several soil properties that themselves are temporally and spatially variable. Therefore, it can be extremely challenging to predict and model ...

  4. System Thinking and Feeding Relations: Learning with a Live Ecosystem Model

    ERIC Educational Resources Information Center

    Eilam, Billie

    2012-01-01

    Considering well-documented difficulties in mastering ecology concepts and system thinking, the aim of the study was to examine 9th graders' understanding of the complex, multilevel, systemic construct of feeding relations, nested within a larger system of a live model. Fifty students interacted with the model and manipulated a variable within it…

  5. Modelling the vertical distribution of canopy fuel load using national forest inventory and low-density airborne laser scanning data.

    PubMed

    González-Ferreiro, Eduardo; Arellano-Pérez, Stéfano; Castedo-Dorado, Fernando; Hevia, Andrea; Vega, José Antonio; Vega-Nieva, Daniel; Álvarez-González, Juan Gabriel; Ruiz-González, Ana Daría

    2017-01-01

    The fuel complex variables canopy bulk density and canopy base height are often used to predict crown fire initiation and spread. Direct measurement of these variables is impractical, and they are usually estimated indirectly by modelling. Recent advances in predicting crown fire behaviour require accurate estimates of the complete vertical distribution of canopy fuels. The objectives of the present study were to model the vertical profile of available canopy fuel in pine stands by using data from the Spanish national forest inventory plus low-density airborne laser scanning (ALS) metrics. In a first step, the vertical distribution of the canopy fuel load was modelled using the Weibull probability density function. In a second step, two different systems of models were fitted to estimate the canopy variables defining the vertical distributions; the first system related these variables to stand variables obtained in a field inventory, and the second system related the canopy variables to airborne laser scanning metrics. The models of each system were fitted simultaneously to compensate the effects of the inherent cross-model correlation between the canopy variables. Heteroscedasticity was also analyzed, but no correction in the fitting process was necessary. The estimated canopy fuel load profiles from field variables explained 84% and 86% of the variation in canopy fuel load for maritime pine and radiata pine respectively; whereas the estimated canopy fuel load profiles from ALS metrics explained 52% and 49% of the variation for the same species. The proposed models can be used to assess the effectiveness of different forest management alternatives for reducing crown fire hazard.
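
    The first modelling step described above, representing the vertical profile of canopy fuel load with a Weibull probability density function, can be sketched as follows; the height grid and fuel densities are synthetic stand-ins, and the fit uses ordinary nonlinear least squares rather than the authors' simultaneous fitting procedure.

```python
# Hedged sketch: fitting a Weibull density to a hypothetical vertical fuel load profile.
import numpy as np
from scipy.optimize import curve_fit

def weibull_pdf(h, shape, scale):
    return (shape / scale) * (h / scale) ** (shape - 1.0) * np.exp(-(h / scale) ** shape)

heights = np.linspace(0.5, 20.0, 40)                    # height above ground (m)
true = weibull_pdf(heights, 2.5, 9.0)
fuel_density = true + np.random.default_rng(3).normal(0, 0.002, heights.size)
fuel_density = np.clip(fuel_density, 0, None)
fuel_density /= np.trapz(fuel_density, heights)         # normalize so the profile integrates to 1

(shape, scale), _ = curve_fit(weibull_pdf, heights, fuel_density, p0=[2.0, 10.0])
print(f"Weibull shape={shape:.2f}, scale={scale:.2f} m")
```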

  6. High resolution modelling of soil moisture patterns with TerrSysMP: A comparison with sensor network data

    NASA Astrophysics Data System (ADS)

    Gebler, S.; Hendricks Franssen, H.-J.; Kollet, S. J.; Qu, W.; Vereecken, H.

    2017-04-01

    The prediction of the spatial and temporal variability of land surface states and fluxes with land surface models at high spatial resolution is still a challenge. This study compares simulation results using TerrSysMP including a 3D variably saturated groundwater flow model (ParFlow) coupled to the Community Land Model (CLM) of a 38 ha managed grassland head-water catchment in the Eifel (Germany), with soil water content (SWC) measurements from a wireless sensor network, actual evapotranspiration recorded by lysimeters and eddy covariance stations and discharge observations. TerrSysMP was discretized with a 10 × 10 m lateral resolution, variable vertical resolution (0.025-0.575 m), and the following parameterization strategies of the subsurface soil hydraulic parameters: (i) completely homogeneous, (ii) homogeneous parameters for different soil horizons, (iii) different parameters for each soil unit and soil horizon and (iv) heterogeneous stochastic realizations. Hydraulic conductivity and Mualem-Van Genuchten parameters in these simulations were sampled from probability density functions, constructed from either (i) soil texture measurements and Rosetta pedotransfer functions (ROS), or (ii) estimated soil hydraulic parameters by 1D inverse modelling using shuffle complex evolution (SCE). The results indicate that the spatial variability of SWC at the scale of a small headwater catchment is dominated by topography and spatially heterogeneous soil hydraulic parameters. The spatial variability of the soil water content thereby increases as a function of heterogeneity of soil hydraulic parameters. For lower levels of complexity, spatial variability of the SWC was underrepresented in particular for the ROS-simulations. Whereas all model simulations were able to reproduce the seasonal evapotranspiration variability, the poor discharge simulations with high model bias are likely related to short-term ET dynamics and the lack of information about bedrock characteristics and an on-site drainage system in the uncalibrated model. In general, simulation performance was better for the SCE setups. The SCE-simulations had a higher inverse air entry parameter resulting in SWC dynamics in better correspondence with data than the ROS simulations during dry periods. This illustrates that small scale measurements of soil hydraulic parameters cannot be transferred to the larger scale and that interpolated 1D inverse parameter estimates result in an acceptable performance for the catchment.

  7. Transdimensional Seismic Tomography

    NASA Astrophysics Data System (ADS)

    Bodin, T.; Sambridge, M.

    2009-12-01

    In seismic imaging the degree of model complexity is usually determined by manually tuning damping parameters within a fixed parameterization chosen in advance. Here we present an alternative methodology for seismic travel time tomography where the model complexity is controlled automatically by the data. In particular we use a variable parametrization consisting of Voronoi cells with mobile geometry, shape and number, all treated as unknowns in the inversion. The reversible jump algorithm is used to sample the transdimensional model space within a Bayesian framework which avoids global damping procedures and the need to tune regularisation parameters. The method is an ensemble inference approach, as many potential solutions are generated with variable numbers of cells. Information is extracted from the ensemble as a whole by performing Monte Carlo integration to produce the expected Earth model. The ensemble of models can also be used to produce velocity uncertainty estimates and experiments with synthetic data suggest they represent actual uncertainty surprisingly well. In a transdimensional approach, the level of data uncertainty directly determines the model complexity needed to satisfy the data. Intriguingly, the Bayesian formulation can be extended to the case where data uncertainty is also uncertain. Experiments show that it is possible to recover data noise estimate while at the same time controlling model complexity in an automated fashion. The method is tested on synthetic data in a 2-D application and compared with a more standard matrix based inversion scheme. The method has also been applied to real data obtained from cross correlation of ambient noise where little is known about the size of the errors associated with the travel times. As an example, a tomographic image of Rayleigh wave group velocity for the Australian continent is constructed for 5s data together with uncertainty estimates.
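
    The Voronoi-cell parameterization at the heart of this approach can be illustrated with a few lines of code: a model is a set of mobile nuclei carrying velocity values, and evaluating the model anywhere reduces to a nearest-nucleus lookup. The sketch below (synthetic nuclei, no inversion) shows only that evaluation step, not the reversible jump sampler itself.

```python
# Sketch of a Voronoi-cell velocity model evaluated by nearest-neighbour lookup.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(7)
n_cells = 25                                               # the number of cells is itself an unknown
nuclei = rng.uniform(0.0, 100.0, size=(n_cells, 2))        # cell nucleus positions (km)
velocity = rng.uniform(2.5, 4.0, size=n_cells)             # group velocity per cell (km/s)

tree = cKDTree(nuclei)

def velocity_at(points):
    """Evaluate the Voronoi model at arbitrary (x, y) points."""
    _, idx = tree.query(points)
    return velocity[idx]

grid_x, grid_y = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
field = velocity_at(np.column_stack([grid_x.ravel(), grid_y.ravel()])).reshape(grid_x.shape)
print("velocity field shape:", field.shape, "mean:", field.mean().round(2))
```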

  8. A delay differential model of ENSO variability: parametric instability and the distribution of extremes

    NASA Astrophysics Data System (ADS)

    Zaliapin, I.; Ghil, M.; Thompson, S.

    2007-12-01

    We consider a Delay Differential Equation (DDE) model for El Niño-Southern Oscillation (ENSO) variability. The model combines two key mechanisms that participate in ENSO dynamics: delayed negative feedback and seasonal forcing. Descriptive and metric stability analyses of the model are performed in the complete 3D space of its physically relevant parameters. Existence of two regimes --- stable and unstable --- is reported. The domains of the regimes are separated by a sharp neutral curve in the parameter space. The detailed structure of the neutral curve becomes very complicated (possibly fractal), and individual trajectories within the unstable region become highly complex (possibly chaotic), as the atmosphere-ocean coupling increases. In the unstable regime, spontaneous transitions in the mean "temperature" (i.e., thermocline depth), period, and extreme annual values occur for purely periodic, seasonal forcing. This indicates (via the continuous dependence theorem) the existence of numerous unstable solutions responsible for the complex dynamics of the system. In the stable regime, only periodic solutions are found. Our results illustrate the roles of the distinct parameters of ENSO variability, such as the strength of seasonal forcing vs. atmosphere-ocean coupling and the propagation period of oceanic waves across the Tropical Pacific. The model reproduces, among other phenomena, the Devil's bleachers (caused by period locking) documented in other ENSO models, such as nonlinear PDEs and GCMs, as well as in certain observations. We expect such behavior in much more detailed and realistic models, where it is harder to describe its causes as completely.
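
    A toy version of such a DDE, combining delayed negative feedback with periodic seasonal forcing, can be integrated with a simple history buffer as sketched below; the functional form and parameter values are illustrative assumptions rather than the authors' exact model.

```python
# Toy DDE of the form dh/dt = -a*tanh(kappa*h(t - tau)) + b*cos(2*pi*t),
# integrated with fixed-step Euler and a lag buffer. Parameters are illustrative only.
import numpy as np

def integrate_enso_dde(a=1.0, kappa=10.0, b=1.0, tau=0.5, dt=0.001, t_max=20.0):
    n_steps = int(t_max / dt)
    lag_steps = int(tau / dt)
    h = np.zeros(n_steps + 1)
    h[0] = 0.1                                    # constant history assumed equal to h[0]
    for i in range(n_steps):
        h_delayed = h[i - lag_steps] if i >= lag_steps else h[0]
        t = i * dt
        h[i + 1] = h[i] + dt * (-a * np.tanh(kappa * h_delayed) + b * np.cos(2 * np.pi * t))
    return np.arange(n_steps + 1) * dt, h

t, h = integrate_enso_dde()
print("mean thermocline anomaly:", h.mean().round(3), "max:", h.max().round(3))
```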

  9. Forecasting daily source air quality using multivariate statistical analysis and radial basis function networks.

    PubMed

    Sun, Gang; Hoff, Steven J; Zelle, Brian C; Nelson, Minda A

    2008-12-01

    It is vital to forecast gas and particulate matter concentrations and emission rates (GPCER) from livestock production facilities to assess the impact of airborne pollutants on human health, the ecological environment, and global warming. Modeling source air quality is a complex process because of abundant nonlinear interactions between GPCER and other factors. The objective of this study was to introduce statistical methods and a radial basis function (RBF) neural network to predict daily source air quality in Iowa swine deep-pit finishing buildings. The results show that four variables (outdoor and indoor temperature, animal units, and ventilation rates) were identified as relatively important model inputs using statistical methods. It can be further demonstrated that only two factors, the environment factor and the animal factor, were capable of explaining more than 94% of the total variability after performing principal component analysis. Introducing fewer, uncorrelated variables to the neural network reduces model structure complexity, minimizes computational cost, and helps avoid model overfitting. The RBF network predictions were in good agreement with the actual measurements, with values of the correlation coefficient between 0.741 and 0.995 and very low values of systemic performance indexes for all the models. These good results indicated that the RBF network could be trained to model these highly nonlinear relationships. Thus, RBF neural network technology combined with multivariate statistical methods is a promising tool for air pollutant emissions modeling.
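
    The modelling chain described above can be sketched as follows: principal component analysis compresses the correlated inputs, k-means picks radial basis function centres, and the output weights of a minimal RBF network are obtained by linear least squares. All data are synthetic stand-ins for the swine-building measurements.

```python
# Minimal RBF-network sketch (illustrative): PCA + k-means centres + linear output weights.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                 # outdoor/indoor temperature, animal units, ventilation
y = np.sin(X[:, 0]) + 0.5 * X[:, 2] + rng.normal(0, 0.1, 500)   # synthetic GPCER target

Z = PCA(n_components=2).fit_transform(X)      # two factors ("environment" and "animal")
centres = KMeans(n_clusters=10, n_init=10, random_state=0).fit(Z).cluster_centers_
width = np.mean(np.linalg.norm(Z[:, None, :] - centres[None], axis=2))

def activations(Z):
    """Gaussian RBF activations of each sample at each centre."""
    d2 = np.sum((Z[:, None, :] - centres[None]) ** 2, axis=2)
    return np.exp(-d2 / (2 * width**2))

Phi = np.column_stack([activations(Z), np.ones(len(Z))])   # add a bias column
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)                # linear least squares for output weights
pred = Phi @ w
print("correlation coefficient:", np.corrcoef(pred, y)[0, 1].round(3))
```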

  10. [Comparison of predictive models for the selection of high-complexity patients].

    PubMed

    Estupiñán-Ramírez, Marcos; Tristancho-Ajamil, Rita; Company-Sancho, María Consuelo; Sánchez-Janáriz, Hilda

    2017-08-18

    To compare the concordance of complexity weights between Clinical Risk Groups (CRG) and Adjusted Morbidity Groups (AMG), to determine which one is the best predictor of patient admission, and to optimise the method used to select the 0.5% of patients of highest complexity to be included in an intervention protocol. Cross-sectional analytical study in 18 Canary Island health areas in which 385,049 citizens were enrolled, using sociodemographic variables from health cards; diagnoses and use of healthcare resources obtained from primary health care electronic records (PCHR) and the basic minimum set of hospital data; functional status recorded in the PCHR; and drugs prescribed through the electronic prescription system. The correlation between stratifiers was estimated from these data. The ability of each stratifier to predict patient admissions was evaluated and prediction optimisation models were constructed. Concordance between the stratifiers' complexity weights was strong (rho = 0.735) and the correlation between categories of complexity was moderate (weighted kappa = 0.515). The AMG complexity weight predicted patient admission better than the CRG weight (AUC: 0.696 [0.695-0.697] versus 0.692 [0.691-0.693]). Other predictive variables were added to the AMG weight; the best AUC (0.708 [0.707-0.708]) was obtained by the model composed of AMG, sex, age, the Pfeiffer and Barthel scales, re-admissions, and the number of prescribed therapeutic groups. Strong concordance was found between stratifiers, with higher predictive capacity for admission from AMG, which can be increased by adding other dimensions. Copyright © 2017 SESPAS. Published by Elsevier España, S.L.U. All rights reserved.
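
    The admission-prediction comparison can be illustrated with a hedged sketch: logistic models built from each stratifier's complexity weight (and an extended model with an extra covariate) are compared by the area under the ROC curve. Column names and the synthetic data below are hypothetical, not the Canary Islands records.

```python
# Illustrative AUC comparison of admission-prediction models (synthetic data).
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 20000
df = pd.DataFrame({
    "amg_weight": rng.gamma(2.0, 1.0, n),      # hypothetical AMG complexity weight
    "crg_weight": rng.gamma(2.0, 1.0, n),      # hypothetical CRG complexity weight
    "age": rng.integers(18, 95, n),
})
logit = -4 + 0.9 * df["amg_weight"] + 0.02 * df["age"]
df["admitted"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

train, test = train_test_split(df, test_size=0.3, random_state=0)
for features in (["crg_weight"], ["amg_weight"], ["amg_weight", "age"]):
    model = LogisticRegression().fit(train[features], train["admitted"])
    auc = roc_auc_score(test["admitted"], model.predict_proba(test[features])[:, 1])
    print(features, "AUC =", round(auc, 3))
```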

  11. Matlab Geochemistry: An open source geochemistry solver based on MRST

    NASA Astrophysics Data System (ADS)

    McNeece, C. J.; Raynaud, X.; Nilsen, H.; Hesse, M. A.

    2017-12-01

    The study of geological systems often requires the solution of complex geochemical relations. To address this need we present an open source geochemical solver based on the Matlab Reservoir Simulation Toolbox (MRST) developed by SINTEF. The implementation supports non-isothermal multicomponent aqueous complexation, surface complexation, ion exchange, and dissolution/precipitation reactions. The suite of tools available in MRST allows for rapid model development, in particular the incorporation of geochemical calculations into transport simulations with multiple phases, complex domain geometry, and geomechanics. Different numerical schemes and additional physics can be easily incorporated into the existing tools through the object-oriented framework employed by MRST. The solver leverages the automatic differentiation tools available in MRST to solve arbitrarily complex geochemical systems with any choice of species or element concentration as input. Four mathematical approaches make the solver quite robust: 1) the choice of chemical elements as the basis components makes all entries in the composition matrix positive, thus preserving convexity; 2) a log variable transformation is used which transfers the nonlinearity to the convex composition matrix; 3) a priori bounds on variables are calculated from the structure of the problem, constraining Newton's path; and 4) an initial guess is calculated implicitly by sequentially adding model complexity. As a benchmark we compare the model to experimental and semi-analytic solutions of the coupled salinity-acidity transport system. Together with the reservoir simulation capabilities of MRST, the solver offers a promising tool for geochemical simulations in reservoir domains for applications in fields ranging from enhanced oil recovery to radionuclide storage.
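
    The log-variable transformation mentioned in point 2 can be illustrated on a deliberately tiny system: the speciation of a single weak acid is solved with the unknowns expressed as log10 concentrations, which keeps all concentrations positive during the Newton-type iteration. This is a toy stand-in for the MRST-based solver, not its implementation.

```python
# Tiny speciation example: weak acid HA (Ka = 1e-4.76, total 1e-3 mol/L), solved on
# log10 concentrations so all species remain positive. Illustrative only.
import numpy as np
from scipy.optimize import fsolve

K_a, C_total, K_w = 10**-4.76, 1e-3, 1e-14

def equations(logc):
    h, oh, ha, a = 10.0**logc                               # [H+], [OH-], [HA], [A-]
    return [logc[0] + logc[3] - logc[2] - np.log10(K_a),    # mass action, log form
            logc[0] + logc[1] - np.log10(K_w),              # water autoprotolysis, log form
            (ha + a - C_total) / C_total,                   # mole balance on A (scaled)
            (h - oh - a) / C_total]                         # charge balance (scaled)

logc0 = np.log10([1e-4, 1e-10, 1e-3, 1e-4])                 # initial guess in log space
logc = fsolve(equations, logc0)
print("pH =", round(-logc[0], 2))
```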

  12. An Application of Latent Variable Structural Equation Modeling for Experimental Research in Educational Technology

    ERIC Educational Resources Information Center

    Lee, Hyeon Woo

    2011-01-01

    As the technology-enriched learning environments and theoretical constructs involved in instructional design become more sophisticated and complex, a need arises for equally sophisticated analytic methods to research these environments, theories, and models. Thus, this paper illustrates a comprehensive approach for analyzing data arising from…

  13. MULTIVARIATE STATISTICAL MODELS FOR EFFECTS OF PM AND COPOLLUTANTS IN A DAILY TIME SERIES EPIDEMIOLOGY STUDY

    EPA Science Inventory

    Most analyses of daily time series epidemiology data relate mortality or morbidity counts to PM and other air pollutants by means of single-outcome regression models using multiple predictors, without taking into account the complex statistical structure of the predictor variable...

  14. "No Soy de Aqui ni Soy de Alla": Transgenerational Cultural Identity Formation

    ERIC Educational Resources Information Center

    Cardona, Jose Ruben Parra; Busby, Dean M.; Wampler, Richard S.

    2004-01-01

    The transgenerational cultural identity model offers a detailed understanding of the immigration experience by challenging agendas of assimilation and by expanding on existing theories of cultural identity. Based on this model, immigration is a complex phenomenon influenced by many variables such as sociopsychological dimensions, family,…

  15. Strategic by Design: Iterative Approaches to Educational Planning

    ERIC Educational Resources Information Center

    Chance, Shannon

    2010-01-01

    Linear planning and decision-making models assume a level of predictability that is uncommon today. Such models inadequately address the complex variables found in higher education. When academic organizations adopt pared-down business strategies, they restrict their own vision. They fail to harness emerging opportunities or learn from their own…

  16. Modeling Noisy Data with Differential Equations Using Observed and Expected Matrices

    ERIC Educational Resources Information Center

    Deboeck, Pascal R.; Boker, Steven M.

    2010-01-01

    Complex intraindividual variability observed in psychology may be well described using differential equations. It is difficult, however, to apply differential equation models in psychological contexts, as time series are frequently short, poorly sampled, and have large proportions of measurement and dynamic error. Furthermore, current methods for…

  17. Equation-free and variable free modeling for complex/multiscale systems. Coarse-grained computation in science and engineering using fine-grained models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kevrekidis, Ioannis G.

    The work explored the linking of modern, still-developing machine learning techniques (manifold learning, and in particular diffusion maps) with traditional PDE modeling/discretization/scientific computation techniques via the equation-free methodology developed by the PI. The result (in addition to several PhD degrees, two of them by CSGF Fellows) was a sequence of strong developments - in part on the algorithmic side, linking data mining with scientific computing, and in part on applications, ranging from PDE discretizations to molecular dynamics and complex network dynamics.

  18. Complexity and Hopf Bifurcation Analysis on a Kind of Fractional-Order IS-LM Macroeconomic System

    NASA Astrophysics Data System (ADS)

    Ma, Junhai; Ren, Wenbo

    On the basis of our previous research, we deepen and extend a macroeconomic IS-LM model using fractional-order calculus, which better reflects the memory characteristics of economic variables. We also focus on the influence of the variables on the real system and improve the analytical capabilities of traditional economic models to suit the actual macroeconomic environment. The conditions for Hopf bifurcation in fractional-order system models are briefly demonstrated, and the fractional order at which Hopf bifurcation occurs is calculated, showing the inherently complex dynamic characteristics of the system. With numerical simulation, bifurcation, strange attractors, limit cycles, waveforms, and other complex dynamic characteristics are given, and the order condition is obtained with respect to time. We find that the system order has an important influence on the running state of the system. The system exhibits periodic motion when the order meets the conditions of Hopf bifurcation; the fractional-order system gradually stabilizes with changes in the order and parameters, while the corresponding integer-order system diverges. This study has certain significance for policy-making on macroeconomic regulation and control.

  19. “Skill of Generalized Additive Model to Detect PM2.5 Health ...

    EPA Pesticide Factsheets

    Summary. Measures of health outcomes are collinear with meteorology and air quality, making analysis of connections between human health and air quality difficult. The purpose of this analysis was to determine time scales and periods shared by the variables of interest (and by implication scales and periods that are not shared). Hospital admissions, meteorology (temperature and relative humidity), and air quality (PM2.5 and daily maximum ozone) for New York City during the period 2000-2006 were decomposed into temporal scales ranging from 2 days to greater than two years using a complex wavelet transform. Health effects were modeled as functions of the wavelet components of meteorology and air quality using the generalized additive model (GAM) framework. This simulation study showed that GAM is extremely successful at extracting and estimating a health effect embedded in a dataset. It also shows that, if the objective in mind is to estimate the health signal but not to fully explain this signal, a simple GAM model with a single confounder (calendar time) whose smooth representation includes a sufficient number of constraints is as good as a more complex model. Introduction. In the context of wavelet regression, confounding occurs when two or more independent variables interact with the dependent variable at the same frequency. Confounding also acts on a variety of time scales, changing the PM2.5 coefficient (magnitude and sign) and its significance...
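
    A rough sketch of the wavelet-regression idea is given below: a synthetic PM2.5 series is decomposed into temporal scales with a stationary wavelet transform (PyWavelets), and daily admission counts are regressed on the scale components with a Poisson GLM, using simple sinusoidal calendar-time terms as a stand-in for the GAM smooth of calendar time. All data and the exact decomposition are illustrative assumptions.

```python
# Illustrative wavelet-regression sketch (synthetic data, not the EPA analysis).
import numpy as np
import pywt
import statsmodels.api as sm

rng = np.random.default_rng(11)
n_days = 1024
t = np.arange(n_days)
pm25 = 10 + 3 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 2, n_days)   # synthetic PM2.5
admissions = rng.poisson(np.exp(1.5 + 0.02 * pm25))                       # synthetic daily counts

# Multilevel stationary wavelet transform: one detail component per temporal scale.
level = 5
coeffs = pywt.swt(pm25, "db4", level=level)
scales = np.column_stack([detail for _, detail in coeffs])

# Sinusoidal calendar-time terms stand in for the GAM smooth of calendar time.
season = np.column_stack([np.sin(2 * np.pi * t / 365), np.cos(2 * np.pi * t / 365)])
X = sm.add_constant(np.column_stack([scales, season]))
fit = sm.GLM(admissions, X, family=sm.families.Poisson()).fit()
print(fit.params[1:level + 1].round(4))   # PM2.5 effect at each temporal scale
```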

  20. Mediation and moderation of treatment effects in randomised controlled trials of complex interventions.

    PubMed

    Emsley, Richard; Dunn, Graham; White, Ian R

    2010-06-01

    Complex intervention trials should be able to answer both pragmatic and explanatory questions in order to test the theories motivating the intervention and help understand the underlying nature of the clinical problem being tested. Key to this is the estimation of direct effects of treatment and indirect effects acting through intermediate variables which are measured post-randomisation. Using psychological treatment trials as an example of complex interventions, we review statistical methods which crucially evaluate both direct and indirect effects in the presence of hidden confounding between mediator and outcome. We review the historical literature on mediation and moderation of treatment effects. We introduce two methods from within the existing causal inference literature, principal stratification and structural mean models, and demonstrate how these can be applied in a mediation context before discussing approaches and assumptions necessary for attaining identifiability of key parameters of the basic causal model. Assuming that there is modification by baseline covariates of the effect of treatment (i.e. randomisation) on the mediator (i.e. covariate by treatment interactions), but no direct effect on the outcome of these treatment by covariate interactions leads to the use of instrumental variable methods. We describe how moderation can occur through post-randomisation variables, and extend the principal stratification approach to multiple group methods with explanatory models nested within the principal strata. We illustrate the new methodology with motivating examples of randomised trials from the mental health literature.
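
    The instrumental-variable logic described above can be sketched with synthetic data: randomisation and a treatment-by-baseline-covariate interaction (assumed to affect the mediator but to have no direct effect on the outcome) identify the mediator's effect, estimated here by a manual two-stage least squares. This is an illustration of the identification strategy, not the authors' software.

```python
# Manual two-stage least squares with a treatment-by-covariate interaction as the
# excluded instrument. Synthetic data; standard errors from stage 2 are not corrected.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2024)
n = 2000
baseline = rng.normal(size=n)                       # baseline covariate (effect modifier)
treat = rng.integers(0, 2, n)                       # randomised treatment
u = rng.normal(size=n)                              # hidden confounder of mediator and outcome
mediator = 0.5 * treat + 0.6 * treat * baseline + 0.8 * u + rng.normal(0, 1, n)
outcome = 0.7 * mediator + 0.3 * treat - 0.8 * u + rng.normal(0, 1, n)   # true indirect path 0.7

# Stage 1: predict the mediator from instruments (treat, treat*baseline) and baseline.
Z = sm.add_constant(np.column_stack([treat, treat * baseline, baseline]))
m_hat = sm.OLS(mediator, Z).fit().fittedvalues

# Stage 2: regress the outcome on the predicted mediator, treatment, and baseline.
X2 = sm.add_constant(np.column_stack([m_hat, treat, baseline]))
stage2 = sm.OLS(outcome, X2).fit()
print("estimated effect of mediator on outcome:", stage2.params[1].round(2))
```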

  1. Evaluation of Deep Learning Models for Predicting CO2 Flux

    NASA Astrophysics Data System (ADS)

    Halem, M.; Nguyen, P.; Frankel, D.

    2017-12-01

    Artificial neural networks have been employed to calculate surface flux measurements from station data because they are able to fit highly nonlinear relations between input and output variables without knowing the detailed relationships between the variables. However, the accuracy of neural-network estimates of CO2 flux from observations of CO2 and other atmospheric variables is influenced by the architecture of the neural model, the availability of data, and the complexity of interactions between physical variables such as wind and temperature and indirect variables such as latent and sensible heat. We evaluate two deep learning models, feed-forward and recurrent neural networks, to learn how each responds to the physical measurements and to the time dependency of measurements of CO2 concentration, humidity, pressure, temperature, wind speed, etc., for predicting the CO2 flux. In this paper, we focus on a) building neural network models for estimating CO2 flux based on DOE Atmospheric Radiation Measurement tower data; b) evaluating the impact of the choice of surface variables and model hyper-parameters on the accuracy of surface flux predictions; c) assessing the applicability of the neural network models to estimating CO2 flux using OCO-2 satellite data; and d) studying the efficiency of GPU acceleration of neural network training using IBM Power AI deep learning software and packages on an IBM Minsky system.
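
    A hedged sketch of the two architectures being compared is given below using Keras: a feed-forward network that sees only the most recent observation of each driver, and an LSTM that sees a 24-step history. Array shapes, variable counts, and the synthetic target are assumptions, not the ARM tower or OCO-2 data.

```python
# Illustrative comparison of a feed-forward and a recurrent (LSTM) network (synthetic data).
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(0)
n, lookback, n_vars = 5000, 24, 6           # samples, hourly lookback window, drivers
X_seq = rng.normal(size=(n, lookback, n_vars)).astype("float32")
y = X_seq[:, -1, 0] * 0.5 + X_seq.mean(axis=(1, 2)) + rng.normal(0, 0.1, n).astype("float32")

# Feed-forward model uses only the most recent observation of each driver.
ff = keras.Sequential([keras.Input(shape=(n_vars,)),
                       keras.layers.Dense(64, activation="relu"),
                       keras.layers.Dense(1)])
ff.compile(optimizer="adam", loss="mse")
ff.fit(X_seq[:, -1, :], y, epochs=5, batch_size=64, verbose=0)

# Recurrent model sees the full 24-step history, capturing time dependence.
rnn = keras.Sequential([keras.Input(shape=(lookback, n_vars)),
                        keras.layers.LSTM(32),
                        keras.layers.Dense(1)])
rnn.compile(optimizer="adam", loss="mse")
rnn.fit(X_seq, y, epochs=5, batch_size=64, verbose=0)

print("feed-forward MSE:", ff.evaluate(X_seq[:, -1, :], y, verbose=0))
print("recurrent MSE:   ", rnn.evaluate(X_seq, y, verbose=0))
```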

  2. Surface complexation modeling of Cu(II) adsorption on mixtures of hydrous ferric oxide and kaolinite

    PubMed Central

    Lund, Tracy J; Koretsky, Carla M; Landry, Christopher J; Schaller, Melinda S; Das, Soumya

    2008-01-01

    Background The application of surface complexation models (SCMs) to natural sediments and soils is hindered by a lack of consistent models and data for large suites of metals and minerals of interest. Furthermore, the surface complexation approach has mostly been developed and tested for single solid systems. Few studies have extended the SCM approach to systems containing multiple solids. Results Cu adsorption was measured on pure hydrous ferric oxide (HFO), pure kaolinite (from two sources) and in systems containing mixtures of HFO and kaolinite over a wide range of pH, ionic strength, sorbate/sorbent ratios and, for the mixed solid systems, using a range of kaolinite/HFO ratios. Cu adsorption data measured for the HFO and kaolinite systems was used to derive diffuse layer surface complexation models (DLMs) describing Cu adsorption. Cu adsorption on HFO is reasonably well described using a 1-site or 2-site DLM. Adsorption of Cu on kaolinite could be described using a simple 1-site DLM with formation of a monodentate Cu complex on a variable charge surface site. However, for consistency with models derived for weaker sorbing cations, a 2-site DLM with a variable charge and a permanent charge site was also developed. Conclusion Component additivity predictions of speciation in mixed mineral systems based on DLM parameters derived for the pure mineral systems were in good agreement with measured data. Discrepancies between the model predictions and measured data were similar to those observed for the calibrated pure mineral systems. The results suggest that quantifying specific interactions between HFO and kaolinite in speciation models may not be necessary. However, before the component additivity approach can be applied to natural sediments and soils, the effects of aging must be further studied and methods must be developed to estimate reactive surface areas of solid constituents in natural samples. PMID:18783619

  3. A system of three-dimensional complex variables

    NASA Technical Reports Server (NTRS)

    Martin, E. Dale

    1986-01-01

    Some results of a new theory of multidimensional complex variables are reported, including analytic functions of a three-dimensional (3-D) complex variable. Three-dimensional complex numbers are defined, including vector properties and rules of multiplication. The necessary conditions for a function of a 3-D variable to be analytic are given and shown to be analogous to the 2-D Cauchy-Riemann equations. A simple example also demonstrates the analogy between the newly defined 3-D complex velocity and 3-D complex potential and the corresponding ordinary complex velocity and complex potential in two dimensions.

  4. Cognitive predictors of adaptive functioning in children with symptomatic epilepsy.

    PubMed

    Kerr, Elizabeth N; Fayed, Nora

    2017-10-01

    The current study sought to understand the contribution of the attention and working memory challenges experienced by children with active epilepsy without an intellectual disability to adaptive functioning (AF), while taking into account intellectual ability, co-occurring brain-based psychosocial diagnoses, and epilepsy-related variables. The relationship of attention and working memory with AF was examined in 76 children with active epilepsy with intellectual ability above the 2nd percentile recruited from a tertiary care center. AF was measured using the Scales of Independent Behavior-Revised (SIB-R) and compared with norm-referenced data. Standardized clinical assessments of attention span, sustained attention, and basic and more complex working memory were administered to the children. Commonality analysis was used to investigate the importance of the variables with respect to the prediction of AF and to construct parsimonious models to elucidate the factors most important in explaining AF. Seventy-one percent of parents reported that their child experienced mild to severe difficulties with overall AF. Similar proportions of children displayed limitations in domain-specific areas of AF (Motor, Social/Communication, Personal Living, and Community Living). The reduced models for Broad and domain-specific AF produced a maximum of seven predictor variables, with little loss in overall explained variance compared to the full models. Intellectual ability was a powerful predictor of Broad and domain-specific AF. Complex working memory was the only other cognitive predictor retained in each of the parsimonious models of AF. Sustained attention and complex working memory explained a large amount of the total variance in Motor AF. Children with a previously diagnosed comorbidity displayed lower Social/Communication, Personal Living, and Broad AF than those without a diagnosis. At least one epilepsy-related variable appeared in each of the reduced models, with age of seizure onset and seizure type (generalized or partial) being the main predictors. Intellectual ability was the most powerful predictor of AF in children with epilepsy whose intellectual functioning was above the 2nd percentile. Co-occurring brain-based cognitive and psychosocial issues experienced by children living with epilepsy, particularly complex working memory and diagnosed comorbidities, contribute to AF and may be amenable to intervention. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.
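
    For two predictors, the commonality analysis used above reduces to combining the R-squared values of the full and reduced regressions, as in the sketch below; the variables and data are synthetic stand-ins for intellectual ability, complex working memory, and adaptive functioning.

```python
# Minimal commonality-analysis sketch for two predictors (synthetic data).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(9)
n = 76
iq = rng.normal(size=n)                               # intellectual ability
wm = 0.6 * iq + rng.normal(0, 0.8, n)                 # complex working memory (correlated with IQ)
af = 0.7 * iq + 0.3 * wm + rng.normal(0, 0.6, n)      # adaptive functioning score

def r2(cols):
    X = np.column_stack(cols)
    return LinearRegression().fit(X, af).score(X, af)

r2_full, r2_iq, r2_wm = r2([iq, wm]), r2([iq]), r2([wm])
unique_iq = r2_full - r2_wm          # variance explained only by intellectual ability
unique_wm = r2_full - r2_iq          # variance explained only by working memory
common = r2_full - unique_iq - unique_wm
print(f"unique IQ={unique_iq:.2f}, unique WM={unique_wm:.2f}, common={common:.2f}")
```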

  5. The Interface Between Theory and Data in Structural Equation Models

    USGS Publications Warehouse

    Grace, James B.; Bollen, Kenneth A.

    2006-01-01

    Structural equation modeling (SEM) holds the promise of providing natural scientists the capacity to evaluate complex multivariate hypotheses about ecological systems. Building on its predecessors, path analysis and factor analysis, SEM allows for the incorporation of both observed and unobserved (latent) variables into theoretically based probabilistic models. In this paper we discuss the interface between theory and data in SEM and the use of an additional variable type, the composite, for representing general concepts. In simple terms, composite variables specify the influences of collections of other variables and can be helpful in modeling general relationships of the sort commonly of interest to ecologists. While long recognized as a potentially important element of SEM, composite variables have received very limited use, in part because of a lack of theoretical consideration, but also because of difficulties that arise in parameter estimation when using conventional solution procedures. In this paper we present a framework for discussing composites and demonstrate how the use of partially reduced form models can help to overcome some of the parameter estimation and evaluation problems associated with models containing composites. Diagnostic procedures for evaluating the most appropriate and effective use of composites are illustrated with an example from the ecological literature. It is argued that an ability to incorporate composite variables into structural equation models may be particularly valuable in the study of natural systems, where concepts are frequently multifaceted and the influences of suites of variables are often of interest.

  6. A Nonlinear Model for Gene-Based Gene-Environment Interaction.

    PubMed

    Sa, Jian; Liu, Xu; He, Tao; Liu, Guifen; Cui, Yuehua

    2016-06-04

    A vast amount of literature has confirmed the role of gene-environment (G×E) interaction in the etiology of complex human diseases. Traditional methods are predominantly focused on the analysis of interaction between a single nucleotide polymorphism (SNP) and an environmental variable. Given that genes are the functional units, it is crucial to understand how gene effects (rather than single SNP effects) are influenced by an environmental variable to affect disease risk. Motivated by the increasing awareness of the power of gene-based association analysis over single variant based approach, in this work, we proposed a sparse principle component regression (sPCR) model to understand the gene-based G×E interaction effect on complex disease. We first extracted the sparse principal components for SNPs in a gene, then the effect of each principal component was modeled by a varying-coefficient (VC) model. The model can jointly model variants in a gene in which their effects are nonlinearly influenced by an environmental variable. In addition, the varying-coefficient sPCR (VC-sPCR) model has nice interpretation property since the sparsity on the principal component loadings can tell the relative importance of the corresponding SNPs in each component. We applied our method to a human birth weight dataset in Thai population. We analyzed 12,005 genes across 22 chromosomes and found one significant interaction effect using the Bonferroni correction method and one suggestive interaction. The model performance was further evaluated through simulation studies. Our model provides a system approach to evaluate gene-based G×E interaction.

  7. The Canadian Hydrological Model (CHM): A multi-scale, variable-complexity hydrological model for cold regions

    NASA Astrophysics Data System (ADS)

    Marsh, C.; Pomeroy, J. W.; Wheater, H. S.

    2016-12-01

    There is a need for hydrological land surface schemes that can link to atmospheric models, provide hydrological prediction at multiple scales and guide the development of multiple objective water predictive systems. Distributed raster-based models suffer from an overrepresentation of topography, leading to wasted computational effort that increases uncertainty due to greater numbers of parameters and initial conditions. The Canadian Hydrological Model (CHM) is a modular, multiphysics, spatially distributed modelling framework designed for representing hydrological processes, including those that operate in cold-regions. Unstructured meshes permit variable spatial resolution, allowing coarse resolutions at low spatial variability and fine resolutions as required. Model uncertainty is reduced by lessening the necessary computational elements relative to high-resolution rasters. CHM uses a novel multi-objective approach for unstructured triangular mesh generation that fulfills hydrologically important constraints (e.g., basin boundaries, water bodies, soil classification, land cover, elevation, and slope/aspect). This provides an efficient spatial representation of parameters and initial conditions, as well as well-formed and well-graded triangles that are suitable for numerical discretization. CHM uses high-quality open source libraries and high performance computing paradigms to provide a framework that allows for integrating current state-of-the-art process algorithms. The impact of changes to model structure, including individual algorithms, parameters, initial conditions, driving meteorology, and spatial/temporal discretization can be easily tested. Initial testing of CHM compared spatial scales and model complexity for a spring melt period at a sub-arctic mountain basin. The meshing algorithm reduced the total number of computational elements and preserved the spatial heterogeneity of predictions.

  8. Utilizing multiple scale models to improve predictions of extra-axial hemorrhage in the immature piglet.

    PubMed

    Scott, Gregory G; Margulies, Susan S; Coats, Brittany

    2016-10-01

    Traumatic brain injury (TBI) is a leading cause of death and disability in the USA. To help understand and better predict TBI, researchers have developed complex finite element (FE) models of the head which incorporate many biological structures such as scalp, skull, meninges, brain (with gray/white matter differentiation), and vasculature. However, most models drastically simplify the membranes and substructures between the pia and arachnoid membranes. We hypothesize that substructures in the pia-arachnoid complex (PAC) contribute substantially to brain deformation following head rotation, and that when included in FE models accuracy of extra-axial hemorrhage prediction improves. To test these hypotheses, microscale FE models of the PAC were developed to span the variability of PAC substructure anatomy and regional density. The constitutive response of these models were then integrated into an existing macroscale FE model of the immature piglet brain to identify changes in cortical stress distribution and predictions of extra-axial hemorrhage (EAH). Incorporating regional variability of PAC substructures substantially altered the distribution of principal stress on the cortical surface of the brain compared to a uniform representation of the PAC. Simulations of 24 non-impact rapid head rotations in an immature piglet animal model resulted in improved accuracy of EAH prediction (to 94 % sensitivity, 100 % specificity), as well as a high accuracy in regional hemorrhage prediction (to 82-100 % sensitivity, 100 % specificity). We conclude that including a biofidelic PAC substructure variability in FE models of the head is essential for improved predictions of hemorrhage at the brain/skull interface.

  9. Parameter optimization, sensitivity, and uncertainty analysis of an ecosystem model at a forest flux tower site in the United States

    USGS Publications Warehouse

    Wu, Yiping; Liu, Shuguang; Huang, Zhihong; Yan, Wende

    2014-01-01

    Ecosystem models are useful tools for understanding ecological processes and for sustainable management of resources. In the biogeochemical field, numerical models have been widely used for investigating carbon dynamics under global changes from site to regional and global scales. However, it is still challenging to optimize parameters and estimate parameterization uncertainty for complex process-based models, such as the Erosion Deposition Carbon Model (EDCM), a modified version of CENTURY, that consider carbon, water, and nutrient cycles of ecosystems. This study was designed to conduct parameter identifiability, optimization, sensitivity, and uncertainty analysis of EDCM using our developed EDCM-Auto, which incorporates a comprehensive R package, the Flexible Modeling Framework (FME), and the Shuffled Complex Evolution (SCE) algorithm. Using a forest flux tower site as a case study, we implemented a comprehensive modeling analysis involving nine parameters and four target variables (carbon and water fluxes) with their corresponding measurements based on the eddy covariance technique. The local sensitivity analysis shows that the plant production-related parameters (e.g., PPDF1 and PRDX) are most sensitive to the model cost function. Both SCE and FME performed well and comparably in deriving the optimal parameter set with satisfactory simulations of the target variables. Global sensitivity and uncertainty analysis indicate that the parameter uncertainty and the resulting output uncertainty can be quantified, and that the magnitude of parameter-uncertainty effects depends on the variables and seasons considered. This study also demonstrates that using cutting-edge R functions such as FME can be feasible and attractive for conducting comprehensive parameter analysis for ecosystem modeling.
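
    A minimal sketch of the calibration step is shown below, with scipy's differential_evolution standing in for the SCE algorithm used in EDCM-Auto: a two-parameter toy flux model is fitted to synthetic observations by minimizing RMSE. The parameter names echo PPDF1 and PRDX, but the model is hypothetical.

```python
# Hedged calibration sketch: a global evolutionary optimizer fits a toy flux model.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(14)
temperature = rng.uniform(5, 30, 365)                        # daily driver

def toy_flux(params, temp):
    ppdf1, prdx = params                                     # stand-ins for EDCM parameters
    return prdx * np.exp(-((temp - ppdf1) / 8.0) ** 2)       # simple temperature response curve

observed = toy_flux([22.0, 6.0], temperature) + rng.normal(0, 0.3, 365)

def cost(params):
    return np.sqrt(np.mean((toy_flux(params, temperature) - observed) ** 2))

result = differential_evolution(cost, bounds=[(5, 35), (1, 12)], seed=1, tol=1e-8)
print("optimal parameters:", result.x.round(2), "RMSE:", result.fun.round(3))
```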

  10. A Mixed Model for Real-Time, Interactive Simulation of a Cable Passing Through Several Pulleys

    NASA Astrophysics Data System (ADS)

    García-Fernández, Ignacio; Pla-Castells, Marta; Martínez-Durá, Rafael J.

    2007-09-01

    A model of a cable and pulleys is presented that can be used in Real Time Computer Graphics applications. The model is formulated by the coupling of a damped spring and a variable coefficient wave equation, and can be integrated in more complex mechanical models of lift systems, such as cranes, elevators, etc. with a high degree of interactivity.

  11. Designing eHealth Applications to Reduce Cognitive Effort for Persons With Severe Mental Illness: Page Complexity, Navigation Simplicity, and Comprehensibility

    PubMed Central

    Spring, Michael R; Hanusa, Barbara H; Eack, Shaun M; Haas, Gretchen L

    2017-01-01

    Background: eHealth technologies offer great potential for improving the use and effectiveness of treatments for those with severe mental illness (SMI), including schizophrenia and schizoaffective disorder. This potential can be muted by poor design. There is limited research on designing eHealth technologies for those with SMI, others with cognitive impairments, and those who are not technology savvy. We previously tested a design model, the Flat Explicit Design Model (FEDM), to create eHealth interventions for individuals with SMI. Subsequently, we developed the design concept of page complexity, defined via the design variables we created (distinct topic areas, distinct navigation areas, and the number of columns used to organize contents) together with the variables of text reading level, text reading ease (a variable newly added to the FEDM), and the number of hyperlinks and number of words on a page. Objective: The objective of our study was to report the influence that the 19 variables of the FEDM have on the ability of individuals with SMI to use a website, ratings of a website’s ease of use, and performance on a novel usability task we created termed content disclosure (a measure of the influence of a homepage’s design on the understanding users gain of a website). Finally, we assessed the performance of 3 groups or dimensions we developed that organize the 19 variables of the FEDM, termed page complexity, navigational simplicity, and comprehensibility. Methods: We measured 4 website usability outcomes: ability to find information, time to find information, ease of use, and a user’s ability to accurately judge a website’s contents. A total of 38 persons with SMI (chart diagnosis of schizophrenia or schizoaffective disorder) and 5 mental health websites were used to evaluate the importance of the new design concepts, as well as the other variables in the FEDM. Results: We found that 11 of the FEDM’s 19 variables were significantly associated with all 4 usability outcomes. Most other variables were significantly related to 2 or 3 of these usability outcomes. With the 5 tested websites, 7 of the 19 variables of the FEDM overlapped with other variables, resulting in 12 distinct variable groups. The 3 design dimensions had acceptable coefficient alphas. Both navigational simplicity and comprehensibility were significantly related to correctly identifying whether information was available on a website. Page complexity and navigational simplicity were significantly associated with the ability and time to find information and ease-of-use ratings. Conclusions: The 19 variables and 3 dimensions (page complexity, navigational simplicity, and comprehensibility) of the FEDM offer evidence-based design guidance intended to reduce the cognitive effort required to effectively use eHealth applications, particularly for persons with SMI, and potentially others, including those with cognitive impairments and limited skills or experience with technology. The new variables we examined (topic areas, navigational areas, columns) offer additional and very simple ways to improve simplicity. PMID:28057610

  12. Modeling Prairie Pothole Lakes: Linking Satellite Observation and Calibration (Invited)

    NASA Astrophysics Data System (ADS)

    Schwartz, F. W.; Liu, G.; Zhang, B.; Yu, Z.

    2009-12-01

    This paper examines the response of a complex lake wetland system to variations in climate. The focus is on the lakes and wetlands of the Missouri Coteau, which is part of the larger Prairie Pothole Region of the Central Plains of North America. Information on lake size was enumerated from satellite images and yielded power law relationships for different hydrological conditions. More traditional lake-stage data were made available to us from the USGS Cottonwood Lake Study Site in North Dakota. A Probabilistic Hydrologic Model (PHM) was developed to simulate lake complexes comprised of tens of thousands or more individual closed-basin lakes and wetlands. What is new about this model is a calibration scheme that utilizes remotely sensed data on lake area as well as stage data for individual lakes. Some ¼ million individual data points are used within a Genetic Algorithm to calibrate the model by comparing the simulated results with observed lake area-frequency power law relationships derived from Landsat images and water depths from seven individual lakes and wetlands. The simulated lake behaviors show good agreement with the observations under average, dry, and wet climatic conditions. The calibrated model is used to examine the impact of climate variability on a large lake complex in North Dakota, in particular during the “Dust Bowl Drought” of the 1930s. This most famous drought of the 20th century devastated the agricultural economy of the Great Plains, with health and social impacts lingering for years afterwards. Interestingly, the drought of the 1930s is unremarkable in relation to others of greater intensity and frequency before AD 1200 in the Great Plains. Major droughts and deluges can create marked variability of the power law function (e.g., up to one and a half orders of magnitude of variability from the extreme Dust Bowl Drought to the extreme 1993-2001 deluge). This new probabilistic modeling approach provides a novel tool to examine the behavior of a complex of closed-basin lakes that vary in scale from the footprint of a small house to that of a small city.

  13. Variable thickness transient ground-water flow model. Volume 3. Program listings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reisenauer, A.E.

    1979-12-01

    The Assessment of Effectiveness of Geologic Isolation Systems (AEGIS) Program is developing and applying the methodology for assessing the far-field, long-term post-closure safety of deep geologic nuclear waste repositories. AEGIS is being performed by Pacific Northwest Laboratory (PNL) under contract with the Office of Nuclear Waste Isolation (ONWI) for the Department of Energy (DOE). One task within AEGIS is the development of methodology for analysis of the consequences (water pathway) from loss of repository containment as defined by various release scenarios. Analysis of the long-term, far-field consequences of release scenarios requires the application of numerical codes which simulate the hydrologic systems, model the transport of released radionuclides through the hydrologic systems to the biosphere, and, where applicable, assess the radiological dose to humans. Hydrologic and transport models are available at several levels of complexity or sophistication. Model selection and use are determined by the quantity and quality of input data. Model development under AEGIS and related programs provides three levels of hydrologic models, two levels of transport models, and one level of dose models (with several separate models). This is the third of three volumes of the description of the VTT (Variable Thickness Transient) Groundwater Hydrologic Model, a second-level (intermediate complexity) two-dimensional saturated groundwater flow model.

  14. The IRMIS object model and services API.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saunders, C.; Dohan, D. A.; Arnold, N. D.

    2005-01-01

    The relational model developed for the Integrated Relational Model of Installed Systems (IRMIS) toolkit has been successfully used to capture the Advanced Photon Source (APS) control system software (EPICS process variables and their definitions). The relational tables are populated by a crawler script that parses each Input/Output Controller (IOC) start-up file when an IOC reboot is detected. User interaction is provided by a Java Swing application that acts as a desktop for viewing the process variable information. Mapping between the display objects and the relational tables was carried out with the Hibernate Object Relational Modeling (ORM) framework. Work is well underway at the APS to extend the relational modeling to include control system hardware. For this work, due in part to the complex user interaction required, the primary application development environment has shifted from the relational database view to the object oriented (Java) perspective. With this approach, the business logic is executed in Java rather than in SQL stored procedures. This paper describes the object model used to represent control system software, hardware, and interconnects in IRMIS. We also describe the services API used to encapsulate the required behaviors for creating and maintaining the complex data. In addition to the core schema and object model, many important concepts in IRMIS are captured by the services API. IRMIS is an ambitious collaborative effort for defining and developing a relational database and associated applications to comprehensively document the large and complex EPICS-based control systems of today's accelerators. The documentation effort includes process variables, control system hardware, and interconnections. The approach could also be used to document all components of the accelerator, including mechanical, vacuum, power supplies, etc. One key aspect of IRMIS is that it is a documentation framework, not a design and development tool. We do not generate EPICS control system configurations from IRMIS, and hence do not impose any additional requirements on EPICS developers.

  15. The Role of Model Complexity in Determining Patterns of Chlorophyll Variability in the Coastal Northwest North Atlantic

    NASA Astrophysics Data System (ADS)

    Kuhn, A. M.; Fennel, K.; Bianucci, L.

    2016-02-01

    A key feature of the North Atlantic Ocean's biological dynamics is the annual phytoplankton spring bloom. In the region comprising the continental shelf and adjacent deep ocean of the northwest North Atlantic, we identified two patterns of bloom development: 1) locations with cold temperatures and deep winter mixed layers, where the spring bloom peaks around April and the annual chlorophyll cycle has a large amplitude, and 2) locations with warmer temperatures and shallow winter mixed layers, where the spring bloom peaks earlier in the year, sometimes indiscernible from the fall bloom. These patterns result from a combination of limiting environmental factors and interactions among planktonic groups with different optimal requirements. Simple models that represent the ecosystem with a single phytoplankton (P) and a single zooplankton (Z) group are challenged to reproduce these ecological interactions. Here we investigate the effect that added complexity has on the simulated spatio-temporal chlorophyll patterns. We compare two ecosystem models, one that contains one P and one Z group, and one with two P and three Z groups. We consider three types of changes in complexity: 1) added dependencies among variables (e.g., temperature-dependent rates), 2) modified structural pathways, and 3) added pathways. Subsets of the most sensitive parameters are optimized in each model to replicate observations in the region. For computational efficiency, the parameter optimization is performed using 1D surrogates of a 3D model. We evaluate how model complexity affects model skill, and whether the optimized parameter sets found for each model modify the interpretation of ecosystem functioning. Spatial differences in the parameter sets that best represent different areas hint at the existence of different ecological communities or at physical-biological interactions that are not represented in the simplest model. Our methodology emphasizes the combined use of observations, 1D models to help identify patterns, and 3D models able to simulate the environment more realistically, as a means to acquire predictive understanding of the ocean's ecology.
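
    For readers unfamiliar with the model classes being compared, a minimal one-phytoplankton, one-zooplankton (1P1Z) nutrient cycle can be written down and integrated in a few lines. The Python sketch below uses assumed parameter values and functional forms; it is not the study's configuration.

      # A minimal sketch (assumed formulation, not the authors' models): a one-phytoplankton,
      # one-zooplankton (1P1Z) nutrient cycle integrated with scipy's ODE solver.
      import numpy as np
      from scipy.integrate import solve_ivp

      def npz(t, y, mu=1.0, k_n=0.3, graze=0.6, k_p=0.5, mort=0.1):
          """Nutrient (N), phytoplankton (P), zooplankton (Z) in arbitrary units."""
          N, P, Z = y
          uptake = mu * N / (k_n + N) * P            # Michaelis-Menten nutrient uptake
          grazing = graze * P / (k_p + P) * Z        # Holling type-II grazing
          dN = -uptake + mort * Z                    # remineralization closes the loop
          dP = uptake - grazing
          dZ = grazing - mort * Z
          return [dN, dP, dZ]

      sol = solve_ivp(npz, (0.0, 120.0), [1.0, 0.1, 0.05], dense_output=True)
      print("final N, P, Z:", sol.y[:, -1])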

  16. A visual-environment simulator with variable contrast

    NASA Astrophysics Data System (ADS)

    Gusarova, N. F.; Demin, A. V.; Polshchikov, G. V.

    1987-01-01

    A visual-environment simulator is proposed in which the image contrast can be varied continuously up to the reversal of the image. Contrast variability can be achieved by using two independently adjustable light sources to simultaneously illuminate the carrier of visual information (e.g., a slide or a cinematographic film). It is shown that such a scheme makes it possible to adequately model a complex visual environment.

  17. Dynamical minimalism: why less is more in psychology.

    PubMed

    Nowak, Andrzej

    2004-01-01

    The principle of parsimony, embraced in all areas of science, states that simple explanations are preferable to complex explanations in theory construction. Parsimony, however, can necessitate a trade-off with depth and richness in understanding. The approach of dynamical minimalism avoids this trade-off. The goal of this approach is to identify the simplest mechanisms and fewest variables capable of producing the phenomenon in question. A dynamical model in which change is produced by simple rules repetitively interacting with each other can exhibit unexpected and complex properties. It is thus possible to explain complex psychological and social phenomena with very simple models if these models are dynamic. In dynamical minimalist theories, then, the principle of parsimony can be followed without sacrificing depth in understanding. Computer simulations have proven especially useful for investigating the emergent properties of simple models.

  18. An ecohydrologic model for a shallow groundwater urban environment.

    PubMed

    Arden, Sam; Ma, Xin Cissy; Brown, Mark

    2014-01-01

    The urban environment is a patchwork of natural and artificial surfaces that results in complex interactions with and impacts to natural hydrologic cycles. Evapotranspiration is a major hydrologic flow that is often altered through urbanization, although the mechanisms of change are sometimes difficult to tease out due to difficulty in effectively simulating soil-plant-atmosphere interactions. This paper introduces a simplified yet realistic model that is a combination of existing surface runoff and ecohydrology models designed to increase the quantitative understanding of complex urban hydrologic processes. Results demonstrate that the model is capable of simulating the long-term variability of major hydrologic fluxes as a function of impervious surface, temperature, water table elevation, canopy interception, soil characteristics, precipitation and complex mechanisms of plant water uptake. These understandings have potential implications for holistic urban water system management.

  19. State estimation and prediction using clustered particle filters.

    PubMed

    Lee, Yoonsang; Majda, Andrew J

    2016-12-20

    Particle filtering is an essential tool to improve uncertain model predictions by incorporating noisy observational data from complex systems including non-Gaussian features. A class of particle filters, clustered particle filters, is introduced for high-dimensional nonlinear systems, which uses relatively few particles compared with the standard particle filter. The clustered particle filter captures non-Gaussian features of the true signal, which are typical in complex nonlinear dynamical systems such as geophysical systems. The method is also robust in the difficult regime of high-quality sparse and infrequent observations. The key features of the clustered particle filtering are coarse-grained localization through the clustering of the state variables and particle adjustment to stabilize the method; each observation affects only neighbor state variables through clustering and particles are adjusted to prevent particle collapse due to high-quality observations. The clustered particle filter is tested for the 40-dimensional Lorenz 96 model with several dynamical regimes including strongly non-Gaussian statistics. The clustered particle filter shows robust skill in both achieving accurate filter results and capturing non-Gaussian statistics of the true signal. It is further extended to multiscale data assimilation, which provides the large-scale estimation by combining a cheap reduced-order forecast model and mixed observations of the large- and small-scale variables. This approach enables the use of a larger number of particles due to the computational savings in the forecast model. The multiscale clustered particle filter is tested for one-dimensional dispersive wave turbulence using a forecast model with model errors.
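
    The clustered filter builds on the standard weighting-and-resampling machinery of particle filtering. The following Python sketch shows that baseline machinery on a scalar toy problem (assumed noise levels and random-walk dynamics); it is not the clustered, localized algorithm itself.

      # A minimal sketch (not the clustered algorithm): a bootstrap particle filter on a
      # scalar random-walk state with noisy observations, showing the weighting/resampling
      # step that the clustered filter localizes over neighbourhoods of state variables.
      import numpy as np

      rng = np.random.default_rng(1)
      n_steps, n_particles = 50, 500
      obs_noise, proc_noise = 0.5, 0.3

      # Simulate a true trajectory and observations.
      truth = np.cumsum(rng.normal(0.0, proc_noise, n_steps))
      obs = truth + rng.normal(0.0, obs_noise, n_steps)

      particles = rng.normal(0.0, 1.0, n_particles)
      estimates = []
      for y in obs:
          particles += rng.normal(0.0, proc_noise, n_particles)          # forecast step
          weights = np.exp(-0.5 * ((y - particles) / obs_noise) ** 2)    # likelihood weighting
          weights /= weights.sum()
          idx = rng.choice(n_particles, size=n_particles, p=weights)     # resampling
          particles = particles[idx]
          estimates.append(particles.mean())

      rmse = np.sqrt(np.mean((np.array(estimates) - truth) ** 2))
      print("filter RMSE:", round(float(rmse), 3))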

  20. State estimation and prediction using clustered particle filters

    PubMed Central

    Lee, Yoonsang; Majda, Andrew J.

    2016-01-01

    Particle filtering is an essential tool to improve uncertain model predictions by incorporating noisy observational data from complex systems including non-Gaussian features. A class of particle filters, clustered particle filters, is introduced for high-dimensional nonlinear systems, which uses relatively few particles compared with the standard particle filter. The clustered particle filter captures non-Gaussian features of the true signal, which are typical in complex nonlinear dynamical systems such as geophysical systems. The method is also robust in the difficult regime of high-quality sparse and infrequent observations. The key features of the clustered particle filtering are coarse-grained localization through the clustering of the state variables and particle adjustment to stabilize the method; each observation affects only neighbor state variables through clustering and particles are adjusted to prevent particle collapse due to high-quality observations. The clustered particle filter is tested for the 40-dimensional Lorenz 96 model with several dynamical regimes including strongly non-Gaussian statistics. The clustered particle filter shows robust skill in both achieving accurate filter results and capturing non-Gaussian statistics of the true signal. It is further extended to multiscale data assimilation, which provides the large-scale estimation by combining a cheap reduced-order forecast model and mixed observations of the large- and small-scale variables. This approach enables the use of a larger number of particles due to the computational savings in the forecast model. The multiscale clustered particle filter is tested for one-dimensional dispersive wave turbulence using a forecast model with model errors. PMID:27930332

  1. Modeling Menstrual Cycle Length and Variability at the Approach of Menopause Using Hierarchical Change Point Models

    PubMed Central

    Huang, Xiaobi; Elliott, Michael R.; Harlow, Siobán D.

    2013-01-01

    As women approach menopause, the patterns of their menstrual cycle lengths change. To study these changes, we need to jointly model both the mean and the variability of cycle length. Our proposed model incorporates separate mean and variance change points for each woman and a hierarchical model to link them together, along with regression components to include predictors of menopausal onset such as age at menarche and parity. Additional complexity arises from the fact that the calendar data have substantial missingness due to hormone use, surgery, and failure to report. We integrate multiple imputation and time-to-event modeling in a Bayesian estimation framework to deal with the different forms of missingness. Posterior predictive model checks are applied to evaluate the model fit. Our method successfully models patterns of women’s menstrual cycle trajectories throughout their late reproductive life and identifies change points for the mean and variability of segment length, providing insight into the menopausal process. More generally, our model points the way toward increasing use of joint mean-variance models to predict health outcomes and better understand disease processes. PMID:24729638

  2. Associations between complex OHC mixtures and thyroid and cortisol hormone levels in East Greenland polar bears

    PubMed Central

    TØ, Bechshøft; Sonne, C; Dietz, R; Born, EW; Muir, DCG; Letcher, RJ; Novak, MA; Henchey, E; Meyer, JS; Jenssen, BM; Villanger, GD

    2012-01-01

    The multivariate relationship between hair cortisol, whole blood thyroid hormones, and the complex mixtures of organohalogen contaminant (OHC) levels measured in subcutaneous adipose of 23 East Greenland polar bears (eight males and 15 females, all sampled between the years 1999 and 2001) was analyzed using projection to latent structure (PLS) regression modeling. In the resulting PLS model, the most important variables with a negative influence on cortisol levels were BDE-99 in particular, but also CB-180, CB-201, BDE-153, and CB-170/190. The most important variables with a positive influence on cortisol were CB-66/95, α-HCH, TT3, as well as heptachlor epoxide, dieldrin, BDE-47, and p,p′-DDD. Although statistical modeling does not necessarily fully explain biological cause-effect relationships, these relationships indicate that (1) the hypothalamic-pituitary-adrenal (HPA) axis in East Greenland polar bears is likely to be affected by OHC contaminants and (2) the association between OHCs and cortisol may be linked with the hypothalamus-pituitary-thyroid (HPT) axis. PMID:22575327

  3. Complexity in Soil Systems: What Does It Mean and How Should We Proceed?

    NASA Astrophysics Data System (ADS)

    Faybishenko, B.; Molz, F. J.; Brodie, E.; Hubbard, S. S.

    2015-12-01

    The complex soil systems approach is needed fundamentally for the development of integrated, interdisciplinary methods to measure and quantify the physical, chemical and biological processes taking place in soil, and to determine the role of fine-scale heterogeneities. This presentation is aimed at a review of the concepts and observations concerning complexity and complex systems theory, including terminology, emergent complexity and simplicity, self-organization and a general approach to the study of complex systems using the Weaver (1948) concept of "organized complexity." These concepts are used to provide understanding of complex soil systems, and to develop experimental and mathematical approaches to soil microbiological processes. The results of numerical simulations, observations and experiments are presented that indicate the presence of deterministic chaotic dynamics in soil microbial systems. So what are the implications for the scientists who wish to develop mathematical models in the area of organized complexity or to perform experiments to help clarify an aspect of an organized complex system? The modelers have to deal with coupled systems having at least three dependent variables, and they have to forgo making linear approximations to nonlinear phenomena. The analogous rule for experimentalists is that they need to perform experiments that involve measurement of at least three interacting entities (variables depending on time, space, and each other). These entities could be microbes in soil penetrated by roots. If a process being studied in a soil affects the soil properties, like biofilm formation, then this effect has to be measured and included. The mathematical implications of this viewpoint are examined, and results of numerical solutions to a system of equations demonstrating deterministic chaotic behavior are also discussed using time series and the 3D strange attractors.
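
    The "at least three dependent variables" rule above can be illustrated with the classic Lorenz system, a minimal set of three coupled nonlinear equations exhibiting deterministic chaos and a strange attractor. The Python sketch below is generic, not a calibrated soil microbial model.

      # A minimal sketch (the classic Lorenz system, not a soil model): three coupled
      # nonlinear variables integrated in time to illustrate deterministic chaos and
      # sensitive dependence on initial conditions.
      import numpy as np
      from scipy.integrate import solve_ivp

      def lorenz(t, state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
          x, y, z = state
          return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

      # Two trajectories from nearly identical initial conditions.
      t_span, t_eval = (0.0, 40.0), np.linspace(0.0, 40.0, 4000)
      a = solve_ivp(lorenz, t_span, [1.0, 1.0, 1.0], t_eval=t_eval)
      b = solve_ivp(lorenz, t_span, [1.0, 1.0, 1.0 + 1e-8], t_eval=t_eval)

      # The tiny initial perturbation grows by many orders of magnitude.
      separation = np.linalg.norm(a.y - b.y, axis=0)
      print("initial separation:", separation[0], " final separation:", round(float(separation[-1]), 2))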

  4. T-MATS Toolbox for the Modeling and Analysis of Thermodynamic Systems

    NASA Technical Reports Server (NTRS)

    Chapman, Jeffryes W.

    2014-01-01

    The Toolbox for the Modeling and Analysis of Thermodynamic Systems (T-MATS) is a MATLAB/Simulink (The MathWorks, Inc.) plug-in for creating and simulating thermodynamic systems and controls. The package contains generic parameterized components that can be combined with a variable input iterative solver and optimization algorithm to create complex system models, such as gas turbines.

  5. The concept and use of elasticity in population viability models [Exercise 13

    Treesearch

    Carolyn Hull Sieg; Rudy M. King; Fred Van Dyke

    2003-01-01

    As you have seen in exercise 12, plants, such as the western prairie fringed orchid, typically have distinct life stages and complex life cycles that require the matrix analyses associated with a stage-based population model. Some statistics that can be generated from such matrix analyses can be very informative in determining which variables in the model have the...
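
    For context, elasticities in a stage-based matrix model are the proportional sensitivities of the population growth rate (the dominant eigenvalue) to the matrix entries. The Python sketch below computes them for an invented three-stage matrix; it does not use the orchid data referenced in the exercise.

      # A minimal sketch (illustrative matrix, not the orchid data): elasticity of the
      # population growth rate to each element of a stage-based projection matrix.
      import numpy as np

      # Hypothetical 3-stage projection matrix: seedling, juvenile, flowering adult.
      A = np.array([[0.00, 0.00, 4.50],    # fecundity of adults
                    [0.30, 0.40, 0.00],    # survival/transition rates
                    [0.00, 0.25, 0.85]])

      eigvals, right = np.linalg.eig(A)
      lead = np.argmax(eigvals.real)
      lam = eigvals.real[lead]
      w = np.abs(right[:, lead].real)            # stable stage distribution (right eigenvector)

      eigvals_T, left = np.linalg.eig(A.T)
      lead_T = np.argmax(eigvals_T.real)
      v = np.abs(left[:, lead_T].real)           # reproductive values (left eigenvector)

      sensitivity = np.outer(v, w) / (v @ w)     # d(lambda)/d(a_ij)
      elasticity = (A / lam) * sensitivity       # proportional contributions; entries sum to 1
      print("dominant eigenvalue (lambda):", round(float(lam), 3))
      print("elasticity matrix:\n", np.round(elasticity, 3))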

  6. A mathematical function to evaluate surgical complexity of cleft lip and palate.

    PubMed

    Ortiz-Posadas, M R; Vega-Alvarado, L; Toni, B

    2009-06-01

    The objective of this work is to show the modeling of a similarity function adapted to the medical environment using the logical-combinatorial approach of pattern recognition theory, and its application comparing the condition of patients with congenital malformations in the lip and/or palate, which are called cleft-primary palate and/or cleft-secondary palate, respectively. The similarity function is defined by the comparison criteria determined for each variable, taking into account their type (qualitative or quantitative), their domain and their initial space representation. In all, we defined 18 variables, with their domains and six different comparison criteria (fuzzy and absolute difference type). The model includes, further, the importance of every variable as well as a weight which reflects the surgical complexity of the cleft. Likewise, the usefulness of this function is shown by calculating the similarity among three patients. This work was developed jointly with the Cleft Palate Team at the Reconstructive Surgery Service of the Pediatric Hospital of Tacubaya, which belongs to the Health Institute of the Federal District in Mexico City.

  7. Up-scaling of multi-variable flood loss models from objects to land use units at the meso-scale

    NASA Astrophysics Data System (ADS)

    Kreibich, Heidi; Schröter, Kai; Merz, Bruno

    2016-05-01

    Flood risk management increasingly relies on risk analyses, including loss modelling. Most of the flood loss models usually applied in standard practice have in common that complex damaging processes are described by simple approaches like stage-damage functions. Novel multi-variable models significantly improve loss estimation on the micro-scale and may also be advantageous for large-scale applications. However, more input parameters also reveal additional uncertainty, even more in upscaling procedures for meso-scale applications, where the parameters need to be estimated on a regional area-wide basis. To gain more knowledge about challenges associated with the up-scaling of multi-variable flood loss models the following approach is applied: Single- and multi-variable micro-scale flood loss models are up-scaled and applied on the meso-scale, namely on basis of ATKIS land-use units. Application and validation is undertaken in 19 municipalities, which were affected during the 2002 flood by the River Mulde in Saxony, Germany by comparison to official loss data provided by the Saxon Relief Bank (SAB).In the meso-scale case study based model validation, most multi-variable models show smaller errors than the uni-variable stage-damage functions. The results show the suitability of the up-scaling approach, and, in accordance with micro-scale validation studies, that multi-variable models are an improvement in flood loss modelling also on the meso-scale. However, uncertainties remain high, stressing the importance of uncertainty quantification. Thus, the development of probabilistic loss models, like BT-FLEMO used in this study, which inherently provide uncertainty information are the way forward.

  8. Continuous-time discrete-space models for animal movement

    USGS Publications Warehouse

    Hanks, Ephraim M.; Hooten, Mevin B.; Alldredge, Mat W.

    2015-01-01

    The processes influencing animal movement and resource selection are complex and varied. Past efforts to model behavioral changes over time used Bayesian statistical models with variable parameter space, such as reversible-jump Markov chain Monte Carlo approaches, which are computationally demanding and inaccessible to many practitioners. We present a continuous-time discrete-space (CTDS) model of animal movement that can be fit using standard generalized linear modeling (GLM) methods. This CTDS approach allows for the joint modeling of location-based as well as directional drivers of movement. Changing behavior over time is modeled using a varying-coefficient framework which maintains the computational simplicity of a GLM approach, and variable selection is accomplished using a group lasso penalty. We apply our approach to a study of two mountain lions (Puma concolor) in Colorado, USA.

  9. Influential input classification in probabilistic multimedia models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maddalena, Randy L.; McKone, Thomas E.; Hsieh, Dennis P.H.

    1999-05-01

    Monte Carlo analysis is a statistical simulation method that is often used to assess and quantify the outcome variance in complex environmental fate and effects models. Total outcome variance of these models is a function of (1) the uncertainty and/or variability associated with each model input and (2) the sensitivity of the model outcome to changes in the inputs. To propagate variance through a model using Monte Carlo techniques, each variable must be assigned a probability distribution. The validity of these distributions directly influences the accuracy and reliability of the model outcome. To efficiently allocate resources for constructing distributions, one should first identify the most influential set of variables in the model. Although existing sensitivity and uncertainty analysis methods can provide a relative ranking of the importance of model inputs, they fail to identify the minimum set of stochastic inputs necessary to sufficiently characterize the outcome variance. In this paper, we describe and demonstrate a novel sensitivity/uncertainty analysis method for assessing the importance of each variable in a multimedia environmental fate model. Our analyses show that for a given scenario, a relatively small number of input variables influence the central tendency of the model and an even smaller set determines the shape of the outcome distribution. For each input, the level of influence depends on the scenario under consideration. This information is useful for developing site specific models and improving our understanding of the processes that have the greatest influence on the variance in outcomes from multimedia models.
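
    A simple way to screen for influential inputs, in the spirit of the approach described above (though not the authors' algorithm), is to draw Monte Carlo samples of each input and rank inputs by the rank correlation between input and outcome. The toy multimedia-style model and its distributions in this Python sketch are assumptions.

      # A minimal sketch (toy model, not the paper's multimedia model): Monte Carlo screening
      # of influential inputs via Spearman rank correlation between each input and the outcome.
      import numpy as np
      from scipy.stats import spearmanr

      rng = np.random.default_rng(2)
      n = 5000

      # Hypothetical inputs: emission rate, partition coefficient, degradation half-life, intake rate.
      inputs = {
          "emission":  rng.lognormal(mean=0.0, sigma=0.5, size=n),
          "partition": rng.lognormal(mean=1.0, sigma=0.8, size=n),
          "half_life": rng.lognormal(mean=2.0, sigma=0.3, size=n),
          "intake":    rng.normal(loc=1.0, scale=0.05, size=n),
      }

      # Toy exposure outcome: multiplicative in the first three, nearly insensitive to intake noise.
      outcome = inputs["emission"] * inputs["partition"] * inputs["half_life"] * inputs["intake"]

      for name, values in inputs.items():
          rho, _ = spearmanr(values, outcome)
          print(f"{name:10s} rank correlation with outcome: {rho:+.2f}")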

  10. Complexity reduction of biochemical rate expressions.

    PubMed

    Schmidt, Henning; Madsen, Mads F; Danø, Sune; Cedersund, Gunnar

    2008-03-15

    The current trend in dynamical modelling of biochemical systems is to construct more and more mechanistically detailed and thus complex models. The complexity is reflected in the number of dynamic state variables and parameters, as well as in the complexity of the kinetic rate expressions. However, a greater level of complexity, or level of detail, does not necessarily imply better models, or a better understanding of the underlying processes. Data often does not contain enough information to discriminate between different model hypotheses, and such overparameterization makes it hard to establish the validity of the various parts of the model. Consequently, there is an increasing demand for model reduction methods. We present a new reduction method that reduces complex rational rate expressions, such as those often used to describe enzymatic reactions. The method is a novel term-based identifiability analysis, which is easy to use and allows for user-specified reductions of individual rate expressions in complete models. The method is one of the first methods to meet the classical engineering objective of improved parameter identifiability without losing the systems biology demand of preserved biochemical interpretation. The method has been implemented in the Systems Biology Toolbox 2 for MATLAB, which is freely available from http://www.sbtoolbox2.org. The Supplementary Material contains scripts that show how to use it by applying the method to the example models, discussed in this article.

  11. Probabilistic Analysis Techniques Applied to Complex Spacecraft Power System Modeling

    NASA Technical Reports Server (NTRS)

    Hojnicki, Jeffrey S.; Rusick, Jeffrey J.

    2005-01-01

    Electric power system performance predictions are critical to spacecraft, such as the International Space Station (ISS), to ensure that sufficient power is available to support all the spacecraft's power needs. In the case of the ISS power system, analyses to date have been deterministic, meaning that each analysis produces a single-valued result for power capability because of the complexity and large size of the model. As a result, the deterministic ISS analyses did not account for the sensitivity of the power capability to uncertainties in model input variables. Over the last 10 years, the NASA Glenn Research Center has developed advanced, computationally fast, probabilistic analysis techniques and successfully applied them to large (thousands of nodes) complex structural analysis models. These same techniques were recently applied to large, complex ISS power system models. This new application enables probabilistic power analyses that account for input uncertainties and produce results that include variations caused by these uncertainties. Specifically, N&R Engineering, under contract to NASA, integrated these advanced probabilistic techniques with Glenn's internationally recognized ISS power system model, System Power Analysis for Capability Evaluation (SPACE).

  12. Optimization of an angle-beam ultrasonic approach for characterization of impact damage in composites

    NASA Astrophysics Data System (ADS)

    Henry, Christine; Kramb, Victoria; Welter, John T.; Wertz, John N.; Lindgren, Eric A.; Aldrin, John C.; Zainey, David

    2018-04-01

    NDE method development benefits greatly from model-guided experimentation. In the case of ultrasonic inspections, models which provide insight into complex mode conversion processes and sound propagation paths are essential for understanding the experimental data and for inverting the experimental data into relevant information. However, models must also be verified using experimental data obtained under well-documented and understood conditions. Ideally, researchers would utilize the model simulations and the experimental approach to efficiently converge on the optimal solution. However, variability in experimental parameters introduces extraneous signals that are difficult to differentiate from the anticipated response. This paper discusses the results of an ultrasonic experiment designed to evaluate the effect of controllable variables on the anticipated signal, and the effect of unaccounted-for experimental variables on the uncertainty in those results. Controlled experimental parameters include the transducer frequency, incidence beam angle and focal depth.

  13. Alpine Ecohydrology Across Scales: Propagating Fine-scale Heterogeneity to the Catchment and Beyond

    NASA Astrophysics Data System (ADS)

    Mastrotheodoros, T.; Pappas, C.; Molnar, P.; Burlando, P.; Hadjidoukas, P.; Fatichi, S.

    2017-12-01

    In mountainous ecosystems, complex topography and landscape heterogeneity govern ecohydrological states and fluxes. Here, we investigate topographic controls on water, energy and carbon fluxes across different climatic regimes and vegetation types representative of the European Alps. We use an ecohydrological model to perform fine-scale numerical experiments on a synthetic domain that comprises a symmetric mountain with eight catchments draining along the cardinal and intercardinal directions. Distributed meteorological model input variables are generated using observations from Switzerland. The model computes the incoming solar radiation based on the local topography. We implement a multivariate statistical framework to disentangle the impact of landscape heterogeneity (i.e., elevation, aspect, flow contributing area, vegetation type) on the simulated water, carbon, and energy dynamics. This allows us to identify the sensitivities of several ecohydrological variables (including leaf area index, evapotranspiration, snow-cover and net primary productivity) to topographic and meteorological inputs at different spatial and temporal scales. We also use an alpine catchment as a real case study to investigate how the natural variability of soil and land cover affects the idealized relationships that arise from the synthetic domain. In accordance with previous studies, our analysis shows a complex pattern of vegetation response to radiation. We also find different patterns of ecosystem sensitivity to topography-driven heterogeneity depending on the hydrological regime (i.e., wet vs. dry conditions). Our results suggest that topography-driven variability in ecohydrological variables (e.g. transpiration) at the fine spatial scale can exceed 50%, but it is substantially reduced (~5%) when integrated at the catchment scale.

  14. Close-range laser scanning in forests: towards physically based semantics across scales.

    PubMed

    Morsdorf, F; Kükenbrink, D; Schneider, F D; Abegg, M; Schaepman, M E

    2018-04-06

    Laser scanning with its unique measurement concept holds the potential to revolutionize the way we assess and quantify three-dimensional vegetation structure. Modern laser systems used at close range, be it on terrestrial, mobile or unmanned aerial platforms, provide dense and accurate three-dimensional data whose information just waits to be harvested. However, the transformation of such data to information is not as straightforward as for airborne and space-borne approaches, where typically empirical models are built using ground truth of target variables. Simpler variables, such as diameter at breast height, can be readily derived and validated. More complex variables, e.g. leaf area index, need a thorough understanding and consideration of the physical particularities of the measurement process and semantic labelling of the point cloud. Quantified structural models provide a framework for such labelling by deriving stem and branch architecture, a basis for many of the more complex structural variables. The physical information of the laser scanning process is still underused and we show how it could play a vital role in conjunction with three-dimensional radiative transfer models to shape the information retrieval methods of the future. Using such a combined forward and physically based approach will make methods robust and transferable. In addition, it avoids replacing observer bias from field inventories with instrument bias from different laser instruments. Still, an intensive dialogue with the users of the derived information is mandatory to potentially re-design structural concepts and variables so that they profit most of the rich data that close-range laser scanning provides.

  15. Probabilistic graphs as a conceptual and computational tool in hydrology and water management

    NASA Astrophysics Data System (ADS)

    Schoups, Gerrit

    2014-05-01

    Originally developed in the fields of machine learning and artificial intelligence, probabilistic graphs constitute a general framework for modeling complex systems in the presence of uncertainty. The framework consists of three components: 1. Representation of the model as a graph (or network), with nodes depicting random variables in the model (e.g. parameters, states, etc), which are joined together by factors. Factors are local probabilistic or deterministic relations between subsets of variables, which, when multiplied together, yield the joint distribution over all variables. 2. Consistent use of probability theory for quantifying uncertainty, relying on basic rules of probability for assimilating data into the model and expressing unknown variables as a function of observations (via the posterior distribution). 3. Efficient, distributed approximation of the posterior distribution using general-purpose algorithms that exploit model structure encoded in the graph. These attributes make probabilistic graphs potentially useful as a conceptual and computational tool in hydrology and water management (and beyond). Conceptually, they can provide a common framework for existing and new probabilistic modeling approaches (e.g. by drawing inspiration from other fields of application), while computationally they can make probabilistic inference feasible in larger hydrological models. The presentation explores, via examples, some of these benefits.
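
    The three components listed above can be made concrete with a minimal two-node example: local factors multiplied into a joint distribution, then conditioned on an observation to give a posterior. The hydrological labels in this Python sketch are hypothetical.

      # A minimal sketch (hypothetical two-node example): the joint distribution as a product
      # of local factors, and a posterior obtained by conditioning on an observation, as in a
      # probabilistic graphical model.
      import numpy as np

      # Factor 1: prior over a hidden "wet/dry" catchment state.
      p_state = np.array([0.3, 0.7])                 # P(wet), P(dry)

      # Factor 2: likelihood of observing high/low streamflow given the state.
      p_obs_given_state = np.array([[0.8, 0.2],      # P(obs | wet)
                                    [0.1, 0.9]])     # P(obs | dry)

      # Joint distribution = product of the factors.
      joint = p_state[:, None] * p_obs_given_state   # shape (state, obs)

      # Condition on observing "high" streamflow (index 0) and normalize -> posterior over state.
      posterior = joint[:, 0] / joint[:, 0].sum()
      print("P(wet | high flow) =", round(float(posterior[0]), 3))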

  16. Artificial neural network model for ozone concentration estimation and Monte Carlo analysis

    NASA Astrophysics Data System (ADS)

    Gao, Meng; Yin, Liting; Ning, Jicai

    2018-07-01

    Air pollution in the urban atmosphere directly affects public health; therefore, it is essential to predict air pollutant concentrations. Air quality is a complex function of emissions, meteorology and topography, and artificial neural networks (ANNs) provide a sound framework for relating these variables. In this study, we investigated the feasibility of using an ANN model with meteorological parameters as input variables to predict the ozone concentration in the urban area of Jinan, a metropolis in Northern China. We first found that the architecture of the network of neurons had little effect on the predicting capability of the ANN model. A parsimonious ANN model with 6 routinely monitored meteorological parameters and one temporal covariate (the category of day, i.e. working day, legal holiday and regular weekend) as input variables was identified, where the 7 input variables were selected following a forward selection procedure. Compared with the benchmark ANN model with 9 meteorological and photochemical parameters as input variables, the predicting capability of the parsimonious ANN model was acceptable. Its predicting capability was also verified in terms of the warning success ratio during pollution episodes. Finally, uncertainty and sensitivity analyses were also performed based on Monte Carlo simulations (MCS). It was concluded that the ANN could properly predict the ambient ozone level. Maximum temperature, atmospheric pressure, sunshine duration and maximum wind speed were identified as the predominant input variables significantly influencing the prediction of ambient ozone concentrations.
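
    A minimal version of such an ANN can be set up with a generic multilayer perceptron. The Python sketch below trains on synthetic meteorological data with an assumed ozone response; its inputs, coefficients, and skill have no connection to the Jinan dataset.

      # A minimal sketch (synthetic data, not the Jinan monitoring record): a feed-forward ANN
      # relating routine meteorological variables to an ozone concentration.
      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(3)
      n = 2000
      temperature = rng.normal(25.0, 8.0, n)     # daily maximum temperature (degC)
      pressure = rng.normal(1010.0, 6.0, n)      # atmospheric pressure (hPa)
      sunshine = rng.uniform(0.0, 12.0, n)       # sunshine duration (h)
      wind = rng.gamma(2.0, 1.5, n)              # maximum wind speed (m/s)

      # Hypothetical ozone response: rises with temperature and sunshine, falls with wind.
      ozone = 2.0 * temperature + 4.0 * sunshine - 3.0 * wind + rng.normal(0.0, 10.0, n)

      X = np.column_stack([temperature, pressure, sunshine, wind])
      X_train, X_test, y_train, y_test = train_test_split(X, ozone, random_state=0)

      model = make_pipeline(StandardScaler(),
                            MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0))
      model.fit(X_train, y_train)
      print("R^2 on held-out days:", round(model.score(X_test, y_test), 3))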

  17. Learning abstract visual concepts via probabilistic program induction in a Language of Thought.

    PubMed

    Overlan, Matthew C; Jacobs, Robert A; Piantadosi, Steven T

    2017-11-01

    The ability to learn abstract concepts is a powerful component of human cognition. It has been argued that variable binding is the key element enabling this ability, but the computational aspects of variable binding remain poorly understood. Here, we address this shortcoming by formalizing the Hierarchical Language of Thought (HLOT) model of rule learning. Given a set of data items, the model uses Bayesian inference to infer a probability distribution over stochastic programs that implement variable binding. Because the model makes use of symbolic variables as well as Bayesian inference and programs with stochastic primitives, it combines many of the advantages of both symbolic and statistical approaches to cognitive modeling. To evaluate the model, we conducted an experiment in which human subjects viewed training items and then judged which test items belong to the same concept as the training items. We found that the HLOT model provides a close match to human generalization patterns, significantly outperforming two variants of the Generalized Context Model, one variant based on string similarity and the other based on visual similarity using features from a deep convolutional neural network. Additional results suggest that variable binding happens automatically, implying that binding operations do not add complexity to people's hypothesized rules. Overall, this work demonstrates that a cognitive model combining symbolic variables with Bayesian inference and stochastic program primitives provides a new perspective for understanding people's patterns of generalization. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Soft Mixer Assignment in a Hierarchical Generative Model of Natural Scene Statistics

    PubMed Central

    Schwartz, Odelia; Sejnowski, Terrence J.; Dayan, Peter

    2010-01-01

    Gaussian scale mixture models offer a top-down description of signal generation that captures key bottom-up statistical characteristics of filter responses to images. However, the pattern of dependence among the filters for this class of models is prespecified. We propose a novel extension to the gaussian scale mixture model that learns the pattern of dependence from observed inputs and thereby induces a hierarchical representation of these inputs. Specifically, we propose that inputs are generated by gaussian variables (modeling local filter structure), multiplied by a mixer variable that is assigned probabilistically to each input from a set of possible mixers. We demonstrate inference of both components of the generative model, for synthesized data and for different classes of natural images, such as a generic ensemble and faces. For natural images, the mixer variable assignments show invariances resembling those of complex cells in visual cortex; the statistics of the gaussian components of the model are in accord with the outputs of divisive normalization models. We also show how our model helps interrelate a wide range of models of image statistics and cortical processing. PMID:16999575

  19. Factor complexity of crash occurrence: An empirical demonstration using boosted regression trees.

    PubMed

    Chung, Yi-Shih

    2013-12-01

    Factor complexity is a characteristic of traffic crashes. This paper proposes a novel method, namely boosted regression trees (BRT), to investigate the complex and nonlinear relationships in high-variance traffic crash data. The Taiwanese 2004-2005 single-vehicle motorcycle crash data are used to demonstrate the utility of BRT. Traditional logistic regression and classification and regression tree (CART) models are also used to compare their estimation results and external validities. Both the in-sample cross-validation and out-of-sample validation results show that an increase in tree complexity provides improved, although declining, classification performance, indicating a limited factor complexity of single-vehicle motorcycle crashes. The effects of crucial variables including geographical, time, and sociodemographic factors explain some fatal crashes. Relatively unique fatal crashes are better approximated by interactive terms, especially combinations of behavioral factors. BRT models generally provide improved transferability than conventional logistic regression and CART models. This study also discusses the implications of the results for devising safety policies. Copyright © 2012 Elsevier Ltd. All rights reserved.
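
    The boosted-regression-tree idea can be sketched with a generic gradient-boosting classifier on synthetic crash-like records containing an assumed night-time/speeding interaction. The variables, effect sizes, and results below are illustrative only, not the Taiwanese data.

      # A minimal sketch (synthetic records, not the Taiwanese crash data): a boosted-tree
      # classifier for fatal vs. non-fatal outcomes, with relative variable importances.
      import numpy as np
      from sklearn.ensemble import GradientBoostingClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(4)
      n = 3000
      night = rng.integers(0, 2, n)              # time factor: night-time indicator
      rural = rng.integers(0, 2, n)              # geographical factor: rural road indicator
      age = rng.normal(35.0, 12.0, n)            # rider age
      speeding = rng.integers(0, 2, n)           # behavioural factor

      # Hypothetical fatality risk with an interaction between night-time and speeding.
      logit = (-3.0 + 0.8 * night + 0.6 * rural + 0.02 * age
               + 1.2 * speeding + 0.9 * night * speeding)
      fatal = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

      X = np.column_stack([night, rural, age, speeding])
      X_train, X_test, y_train, y_test = train_test_split(X, fatal, random_state=0)

      brt = GradientBoostingClassifier(n_estimators=300, max_depth=3, learning_rate=0.05)
      brt.fit(X_train, y_train)
      print("held-out accuracy:", round(brt.score(X_test, y_test), 3))
      print("variable importances:", np.round(brt.feature_importances_, 3))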

  20. Scale dependency of American marten (Martes americana) habitat relations [Chapter 12

    Treesearch

    Andrew J. Shirk; Tzeidle N. Wasserman; Samuel A. Cushman; Martin G. Raphael

    2012-01-01

    Animals select habitat resources at multiple spatial scales; therefore, explicit attention to scale-dependency when modeling habitat relations is critical to understanding how organisms select habitat in complex landscapes. Models that evaluate habitat variables calculated at a single spatial scale (e.g., patch, home range) fail to account for the effects of...

  1. Where in the Marsh is the Water (and When)?: Measuring and modeling salt marsh hydrology for ecological and biogeochemical applications

    EPA Science Inventory

    Salt marsh hydrology presents many difficulties from a measurement and modeling standpoint: the bi-directional flows of tidal waters, variable water densities due to mixing of fresh and salt water, significant influences from vegetation, and complex stream morphologies. Because o...

  2. Real-world hydrologic assessment of a fully-distributed hydrological model in a parallel computing environment

    NASA Astrophysics Data System (ADS)

    Vivoni, Enrique R.; Mascaro, Giuseppe; Mniszewski, Susan; Fasel, Patricia; Springer, Everett P.; Ivanov, Valeriy Y.; Bras, Rafael L.

    2011-10-01

    A major challenge in the use of fully-distributed hydrologic models has been the lack of computational capabilities for high-resolution, long-term simulations in large river basins. In this study, we present the parallel model implementation and real-world hydrologic assessment of the Triangulated Irregular Network (TIN)-based Real-time Integrated Basin Simulator (tRIBS). Our parallelization approach is based on the decomposition of a complex watershed using the channel network as a directed graph. The resulting sub-basin partitioning divides effort among processors and handles hydrologic exchanges across boundaries. Through numerical experiments in a set of nested basins, we quantify parallel performance relative to serial runs for a range of processors, simulation complexities and lengths, and sub-basin partitioning methods, while accounting for inter-run variability on a parallel computing system. In contrast to serial simulations, the parallel model speed-up depends on the variability of hydrologic processes. Load balancing significantly improves parallel speed-up, with proportionally faster runs as simulation complexity (domain resolution and channel network extent) increases. The best strategy for large river basins is to combine a balanced partitioning with an extended channel network, with potential savings through a lower TIN resolution. Based on these advances, a wider range of applications for fully-distributed hydrologic models is now possible. This is illustrated through a set of ensemble forecasts that account for precipitation uncertainty derived from a statistical downscaling model.

  3. Integrated geostatistics for modeling fluid contacts and shales in Prudhoe Bay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perez, G.; Chopra, A.K.; Severson, C.D.

    1997-12-01

    Geostatistics techniques are being used increasingly to model reservoir heterogeneity at a wide range of scales. A variety of techniques is now available with differing underlying assumptions, complexity, and applications. This paper introduces a novel method of geostatistics to model dynamic gas-oil contacts and shales in the Prudhoe Bay reservoir. The method integrates reservoir description and surveillance data within the same geostatistical framework. Surveillance logs and shale data are transformed to indicator variables. These variables are used to evaluate vertical and horizontal spatial correlation and cross-correlation of gas and shale at different times and to develop variogram models. Conditional simulation techniques are used to generate multiple three-dimensional (3D) descriptions of gas and shales that provide a measure of uncertainty. These techniques capture the complex 3D distribution of gas-oil contacts through time. The authors compare results of the geostatistical method with conventional techniques as well as with infill wells drilled after the study. Predicted gas-oil contacts and shale distributions are in close agreement with gas-oil contacts observed at infill wells.

  4. Designing management strategies for carbon dioxide storage and utilization under uncertainty using inexact modelling

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong

    2017-06-01

    Effective application of carbon capture, utilization and storage (CCUS) systems could help to alleviate the influence of climate change by reducing carbon dioxide (CO2) emissions. The research objective of this study is to develop an equilibrium chance-constrained programming model with bi-random variables (ECCP model) for supporting the CCUS management system under random circumstances. The major advantage of the ECCP model is that it tackles random variables as bi-random variables with a normal distribution, where the mean values follow a normal distribution. This could avoid irrational assumptions and oversimplifications in the process of parameter design and enrich the theory of stochastic optimization. The ECCP model is solved by an equilibrium chance-constrained programming algorithm, which provides convenience for decision makers to rank the solution set using the natural order of real numbers. The ECCP model is applied to a CCUS management problem, and the solutions could be useful in helping managers to design and generate rational CO2-allocation patterns under complexities and uncertainties.

  5. The spatial and temporal variability of groundwater recharge in a forested basin in northern Wisconsin

    USGS Publications Warehouse

    Dripps, W.R.; Bradbury, K.R.

    2010-01-01

    Recharge varies spatially and temporally as it depends on a wide variety of factors (e.g. vegetation, precipitation, climate, topography, geology, and soil type), making it one of the most difficult, complex, and uncertain hydrologic parameters to quantify. Despite its inherent variability, groundwater modellers, planners, and policy makers often ignore recharge variability and assume a single average recharge value for an entire watershed. Relatively few attempts have been made to quantify or incorporate spatial and temporal recharge variability into water resource planning or groundwater modelling efforts. In this study, a simple, daily soil-water balance model was developed and used to estimate the spatial and temporal distribution of groundwater recharge of the Trout Lake basin of northern Wisconsin for 1996-2000 as a means to quantify recharge variability. For the 5 years of study, annual recharge varied spatially by as much as 18 cm across the basin; vegetation was the predominant control on this variability. Recharge also varied temporally with a threefold annual difference over the 5-year period. Intra-annually, recharge was limited to a few isolated events each year and exhibited a distinct seasonal pattern. The results suggest that ignoring recharge variability may not only be inappropriate, but also, depending on the application, may invalidate model results and predictions for regional and local water budget calculations, water resource management, nutrient cycling, and contaminant transport studies. Recharge is spatially and temporally variable, and should be modelled as such. Copyright © 2009 John Wiley & Sons, Ltd.
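
    A daily soil-water balance of the kind mentioned can be reduced to a single-bucket sketch in which recharge is the drainage of storage above field capacity. The precipitation, potential ET, and storage parameters in this Python example are invented, not those of the Trout Lake basin model.

      # A minimal sketch (single hypothetical soil column, not the Trout Lake model): a daily
      # soil-water balance in which recharge occurs only when storage exceeds field capacity.
      import numpy as np

      rng = np.random.default_rng(5)
      n_days = 365
      precip = rng.gamma(0.4, 6.0, n_days)                      # daily precipitation (mm), skewed
      pet = 2.0 + 1.5 * np.sin(2 * np.pi * (np.arange(n_days) - 80) / 365.0)  # potential ET (mm)

      field_capacity = 120.0                                    # mm of plant-available storage
      storage = 60.0
      recharge = np.zeros(n_days)

      for day in range(n_days):
          storage += precip[day]
          et = min(pet[day], storage)                           # actual ET limited by available water
          storage -= et
          if storage > field_capacity:                          # excess drains below the root zone
              recharge[day] = storage - field_capacity
              storage = field_capacity

      print("annual recharge (mm):", round(float(recharge.sum()), 1))
      print("days with recharge:", int((recharge > 0).sum()))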

  6. Modelling the vertical distribution of canopy fuel load using national forest inventory and low-density airborne laser scanning data

    PubMed Central

    Castedo-Dorado, Fernando; Hevia, Andrea; Vega, José Antonio; Vega-Nieva, Daniel; Ruiz-González, Ana Daría

    2017-01-01

    The fuel complex variables canopy bulk density and canopy base height are often used to predict crown fire initiation and spread. Direct measurement of these variables is impractical, and they are usually estimated indirectly by modelling. Recent advances in predicting crown fire behaviour require accurate estimates of the complete vertical distribution of canopy fuels. The objectives of the present study were to model the vertical profile of available canopy fuel in pine stands by using data from the Spanish national forest inventory plus low-density airborne laser scanning (ALS) metrics. In a first step, the vertical distribution of the canopy fuel load was modelled using the Weibull probability density function. In a second step, two different systems of models were fitted to estimate the canopy variables defining the vertical distributions; the first system related these variables to stand variables obtained in a field inventory, and the second system related the canopy variables to airborne laser scanning metrics. The models of each system were fitted simultaneously to compensate the effects of the inherent cross-model correlation between the canopy variables. Heteroscedasticity was also analyzed, but no correction in the fitting process was necessary. The estimated canopy fuel load profiles from field variables explained 84% and 86% of the variation in canopy fuel load for maritime pine and radiata pine respectively; whereas the estimated canopy fuel load profiles from ALS metrics explained 52% and 49% of the variation for the same species. The proposed models can be used to assess the effectiveness of different forest management alternatives for reducing crown fire hazard. PMID:28448524
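
    The first modelling step described above (fitting a Weibull probability density function to the vertical fuel profile) can be sketched generically: the Python example below fits a three-parameter Weibull to synthetic fuel-height data and reads off the fuel fraction in a height band. Parameter values and data are assumptions, not the Spanish NFI/ALS models.

      # A minimal sketch (synthetic canopy data, not the Spanish NFI plots): describe the
      # vertical distribution of canopy fuel load with a Weibull probability density function.
      import numpy as np
      from scipy.stats import weibull_min

      rng = np.random.default_rng(6)

      # Synthetic heights (m above ground) at which fuel mass is located in a stand.
      fuel_heights = rng.weibull(2.2, 5000) * 6.0 + 4.0   # crowns roughly between 4 and ~16 m

      # Fit a 3-parameter Weibull (shape, location as a canopy base height proxy, scale).
      shape, loc, scale = weibull_min.fit(fuel_heights)
      print(f"shape={shape:.2f}, location (canopy base proxy)={loc:.2f} m, scale={scale:.2f} m")

      # Fraction of the canopy fuel load found between 6 and 10 m, from the fitted pdf.
      frac = weibull_min.cdf(10.0, shape, loc, scale) - weibull_min.cdf(6.0, shape, loc, scale)
      print("fraction of fuel load between 6 and 10 m:", round(float(frac), 3))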

  7. Comparing flood loss models of different complexity

    NASA Astrophysics Data System (ADS)

    Schröter, Kai; Kreibich, Heidi; Vogel, Kristin; Riggelsen, Carsten; Scherbaum, Frank; Merz, Bruno

    2013-04-01

    Any deliberation on flood risk requires the consideration of potential flood losses. In particular, reliable flood loss models are needed to evaluate the cost-effectiveness of mitigation measures, to assess vulnerability, and for comparative risk analysis and financial appraisal during and after floods. In recent years, considerable improvements have been made both in the data basis and in the methodological approaches used for the development of flood loss models. Despite this, flood loss models remain an important source of uncertainty. Likewise, the temporal and spatial transferability of flood loss models is still limited. This contribution investigates the predictive capability of different flood loss models in a split-sample, cross-regional validation approach. For this purpose, flood loss models of different complexity, i.e. based on different numbers of explanatory variables, are learned from a set of damage records obtained from a survey after the Elbe flood in 2002. The validation of model predictions is carried out for different flood events in the Elbe and Danube river basins in 2002, 2005 and 2006, for which damage records are available from surveys after the flood events. The models investigated are a stage-damage model, the rule-based model FLEMOps+r, and novel approaches derived using the data-mining techniques of regression trees and Bayesian networks. The Bayesian network approach to flood loss modelling provides attractive additional information concerning the probability distribution of both model predictions and explanatory variables.
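
    The contrast between a single-variable stage-damage curve and a multi-variable, data-driven model can be sketched as below. The damage records, feature names, and coefficients are invented for illustration; this is neither the FLEMOps+r model nor the authors' dataset, and the regression tree merely stands in for the general data-mining approach mentioned above.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(0)

    # Hypothetical damage records: water depth (m), building area (m^2), relative loss (0..1).
    depth = rng.uniform(0.1, 3.0, 500)
    area = rng.uniform(50, 400, 500)
    rel_loss = np.clip(0.2 * depth + 0.0005 * (400 - area) + rng.normal(0, 0.05, 500), 0, 1)

    # Simple stage-damage model: relative loss as a polynomial function of water depth only.
    stage_damage = np.poly1d(np.polyfit(depth, rel_loss, deg=2))

    # More complex model: regression tree using several explanatory variables.
    X = np.column_stack([depth, area])
    tree = DecisionTreeRegressor(max_depth=4).fit(X, rel_loss)

    # Predictions for a new (hypothetical) flooded building, 1.5 m water depth, 120 m^2.
    print("stage-damage estimate:   ", round(float(stage_damage(1.5)), 3))
    print("regression-tree estimate:", round(float(tree.predict([[1.5, 120.0]])[0]), 3))
    ```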

  8. Research on the Complexity of Dual-Channel Supply Chain Model in Competitive Retailing Service Market

    NASA Astrophysics Data System (ADS)

    Ma, Junhai; Li, Ting; Ren, Wenbo

    2017-06-01

    This paper examines the optimal decisions of a dual-channel game model considering the inputs of retailing service. We analyze how the adjustment speed of service inputs affects system complexity and market performance, and explore the stability of the equilibrium points by parameter basin diagrams. Chaos control is realized by the variable feedback method. The numerical simulation shows that complex behavior, such as period-doubling bifurcation and chaos, can drive the system into instability. We measure the performance of the model in different periods by analyzing the variation of the average profit index. The theoretical results show that the percentage share of the demand and the cross-service coefficients have an important influence on the stability of the system and its feasible basin of attraction.
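
    The role of the adjustment speed can be illustrated with a toy one-dimensional bounded-rationality map, in which a player updates its service input in the direction of the marginal profit. This sketch is purely illustrative and is not the paper's dual-channel model; the profit function, parameters, and adjustment rule are assumptions chosen so that increasing the adjustment speed reproduces the period-doubling route to chaos described above.

    ```python
    import numpy as np

    # Toy gradient-adjustment map: s_{t+1} = s_t + v * s_t * d(pi)/ds, with
    # marginal profit d(pi)/ds = a - 2*b*s. It is conjugate to the logistic map
    # with r = 1 + 4*v, so raising the adjustment speed v triggers period doubling.
    a, b = 4.0, 1.0

    def step(s, v):
        return s + v * s * (a - 2.0 * b * s)

    for v in (0.30, 0.55, 0.62, 0.70):        # r = 2.2, 3.2, 3.48, 3.8
        s = 0.5
        for _ in range(2000):                 # discard the transient
            s = step(s, v)
        attractor = set()
        for _ in range(64):                   # sample the long-run attractor
            s = step(s, v)
            attractor.add(round(s, 3))
        print(f"adjustment speed v={v:.2f}: {len(attractor)} distinct long-run state(s)")
    ```

    With these illustrative values the map settles on a fixed point, a period-2 orbit, a period-4 orbit, and finally a chaotic attractor as v increases.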

  9. Foundational Principles for Large-Scale Inference: Illustrations Through Correlation Mining.

    PubMed

    Hero, Alfred O; Rajaratnam, Bala

    2016-01-01

    When can reliable inference be drawn in the "Big Data" context? This paper presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large-scale inference. In large-scale data applications like genomics, connectomics, and eco-informatics the dataset is often variable-rich but sample-starved: a regime where the number n of acquired samples (statistical replicates) is far fewer than the number p of observed variables (genes, neurons, voxels, or chemical constituents). Much of the recent work has focused on understanding the computational complexity of proposed methods for "Big Data". Sample complexity however has received relatively less attention, especially in the setting when the sample size n is fixed, and the dimension p grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime where both variable dimension and sample size go to infinity at comparable rates; 3) the purely high dimensional asymptotic regime where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche but only the latter regime applies to exascale data dimension. We illustrate this high dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that are of interest. Correlation mining arises in numerous applications and subsumes the regression context as a special case. We demonstrate various regimes of correlation mining based on the unifying perspective of high dimensional learning rates and sample complexity for different structured covariance models and different inference tasks.
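
    The sample-starved regime can be made concrete with a small numerical sketch: a fixed sample size n, a large number of variables p, and a permutation null that shows how large purely spurious correlations become when p grows. The data, the single planted correlated pair, and the screening rule are illustrative assumptions, not the authors' framework.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, p = 40, 1000                      # sample-starved regime: n << p

    # Hypothetical data: independent variables except for one strongly correlated pair.
    X = rng.standard_normal((n, p))
    X[:, 1] = 0.95 * X[:, 0] + 0.3 * rng.standard_normal(n)

    # Sample correlation matrix and its off-diagonal entries.
    R = np.corrcoef(X, rowvar=False)
    off_diag = np.abs(R[np.triu_indices(p, k=1)])

    # Permutation null: shuffle each column independently to destroy dependence, and
    # record the largest spurious correlation. With p large and n fixed, this null
    # maximum is itself large, which is the hazard of the purely high dimensional regime.
    Xp = np.column_stack([rng.permutation(X[:, j]) for j in range(p)])
    null_max = np.abs(np.corrcoef(Xp, rowvar=False)[np.triu_indices(p, k=1)]).max()

    print(f"largest observed |correlation|:            {off_diag.max():.2f}")
    print(f"largest |correlation| under permutation null: {null_max:.2f}")
    print(f"pairs exceeding the permutation-null maximum: {(off_diag > null_max).sum()}")
    ```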

  10. Discrete-Choice Modeling Of Non-Working Women’s Trip-Chaining Activity Based

    NASA Astrophysics Data System (ADS)

    Hayati, Amelia; Pradono; Purboyo, Heru; Maryati, Sri

    2018-05-01

    The urban development of technology and economics is now changing the lifestyles of urban societies, and with it their travel demand and movement needs. Nowadays, urban women, especially in Bandung, West Java, have a high and increasing demand for daily travel. They have easy access to personal modes of transportation and the freedom to go anywhere to meet their personal and family needs. This also applies to non-working women, or housewives, in the city of Bandung. More than 50% of women's mobility takes place outside the home, in the form of trip-chaining from leaving to returning home in one day, reflecting their complex activities in meeting the needs of family and home care. By contrast, less than 60% of men's mobility is outdoors, and it is typically a simple trip chain or a single trip. Trip-chaining thus differs significantly between non-working women and working men. This illustrates the mobility patterns of mothers and fathers in a family, viewed with an activity-based approach, for the same purpose, i.e. family welfare. This study explains how complex the trip-chaining of non-working urban women (housewives) is, using an activity-based approach to activities performed outdoors over a week. Socio-economic and household demographic variables serve as the basis for measuring the independent variables affecting family welfare, as do the type, time and duration of activities performed by unemployed housewives. This study aims to examine the interrelationships between activity variables, especially the time of activity and travel, and socio-economic household variables that can generate the complexity of women's daily travel. Discrete choice modeling, as developed by Ben-Akiva, Chandra Bhat and others, is used in this study to illustrate the relationship between activity and socio-economic demographic variables, based on primary survey data in Bandung, West Java, for 466 unemployed housewives. The results of the regression, using a Seemingly Unrelated Regression approach, show the interrelationships between all variables, including the complexity of the trip chaining of housewives based on their daily activities. The types of mandatory and discretionary activities, and the duration of activities performed during the dismissal in the series of trip chains conducted, are intended for the fulfillment of the welfare of all family members.

  11. Newtonian nudging for a Richards equation-based distributed hydrological model

    NASA Astrophysics Data System (ADS)

    Paniconi, Claudio; Marrocu, Marino; Putti, Mario; Verbunt, Mark

    The objective of data assimilation is to provide physically consistent estimates of spatially distributed environmental variables. In this study a relatively simple data assimilation method has been implemented in a relatively complex hydrological model. The data assimilation technique is Newtonian relaxation or nudging, in which model variables are driven towards observations by a forcing term added to the model equations. The forcing term is proportional to the difference between simulation and observation (relaxation component) and contains four-dimensional weighting functions that can incorporate prior knowledge about the spatial and temporal variability and characteristic scales of the state variable(s) being assimilated. The numerical model couples a three-dimensional finite element Richards equation solver for variably saturated porous media and a finite difference diffusion wave approximation based on digital elevation data for surface water dynamics. We describe the implementation of the data assimilation algorithm for the coupled model and report on the numerical and hydrological performance of the resulting assimilation scheme. Nudging is shown to be successful in improving the hydrological simulation results, and it introduces little computational cost, in terms of CPU and other numerical aspects of the model's behavior, in some cases even improving numerical performance compared to model runs without nudging. We also examine the sensitivity of the model to nudging term parameters including the spatio-temporal influence coefficients in the weighting functions. Overall the nudging algorithm is quite flexible, for instance in dealing with concurrent observation datasets, gridded or scattered data, and different state variables, and the implementation presented here can be readily extended to any of these features not already incorporated. Moreover the nudging code and tests can serve as a basis for implementation of more sophisticated data assimilation techniques in a Richards equation-based hydrological model.
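
    The nudging idea, adding a forcing term proportional to the observation-minus-model difference and weighted in time, can be sketched for a single state variable. The toy model below (a linear reservoir), the synthetic observations, and the exponential temporal weighting are illustrative assumptions; they are not the coupled Richards equation / diffusion-wave model described above.

    ```python
    import numpy as np

    def nudged_run(nudge=True, dt=0.1, t_end=50.0, G=0.5, tau=2.0):
        """Toy linear-reservoir model h' = r - k*h with an optional Newtonian nudging term."""
        r, k = 1.0, 0.2                       # recharge and recession coefficients (illustrative)
        t_obs = np.arange(5.0, t_end, 5.0)    # sparse synthetic observation times
        h_obs = 6.0                           # "observed" state (constant, for simplicity)

        h, t = 0.0, 0.0
        while t < t_end:
            dhdt = r - k * h                  # physical model dynamics
            if nudge:
                # Temporal weighting: observations influence the model only near t_obs.
                w = np.exp(-np.abs(t - t_obs) / tau).max()
                dhdt += G * w * (h_obs - h)   # relaxation of the model towards the observation
            h += dt * dhdt                    # explicit Euler step
            t += dt
        return h

    print("final state without nudging:", round(nudged_run(nudge=False), 2))
    print("final state with nudging   :", round(nudged_run(nudge=True), 2))
    ```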

  12. Atmospheric studies in complex terrain: a planning guide for future studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Orgill, M.M.

    The objective of this study is to assist the US Department of Energy in conducting its atmospheric studies in complex terrain (ASCOT) by defining various complex terrain research systems and relating these options to specific landform sites. This includes: (1) reviewing past meteorological and diffusion research on complex terrain; (2) relating specific terrain-induced airflow phenomena to specific landforms and time and space scales; (3) evaluating the technical difficulty of modeling and measuring terrain-induced airflow phenomena; and (4) evolving several research options and proposing candidate sites for continuing and expanding field and modeling work. To evolve research options using variable candidate sites, four areas were considered: site selection, terrain uniqueness and quantification, definition of research problems, and research plans. 36 references, 111 figures, 20 tables.

  13. Multifractal cross-correlation effects in two-variable time series of complex network vertex observables

    NASA Astrophysics Data System (ADS)

    Oświęcimka, Paweł; Livi, Lorenzo; Drożdż, Stanisław

    2016-10-01

    We investigate the scaling of the cross-correlations calculated for two-variable time series containing vertex properties in the context of complex networks. Time series of such observables are obtained by means of stationary, unbiased random walks. We consider three vertex properties that provide, respectively, short-, medium-, and long-range information regarding the topological role of vertices in a given network. In order to reveal the relation between these quantities, we applied the multifractal cross-correlation analysis technique, which provides information about the nonlinear effects in coupling of time series. We show that the considered network models are characterized by unique multifractal properties of the cross-correlation. In particular, it is possible to distinguish between Erdös-Rényi, Barabási-Albert, and Watts-Strogatz networks on the basis of fractal cross-correlation. Moreover, the analysis of protein contact networks reveals characteristics shared with both scale-free and small-world models.

  14. "American" or "Multiethnic"? Family Ethnic Identity Among Transracial Adoptive Families, Ethnic-Racial Socialization, and Children's Self-Perception.

    PubMed

    Pinderhughes, Ellen E; Zhang, Xian; Agerbak, Susanne

    2015-12-01

    Drawing on a model of ethnic-racial socialization (E-RS; Pinderhughes, 2013), this study examined hypothesized relations among parents' role variables (family ethnic identity and acknowledgment of cultural and racial differences), cultural socialization (CS) behaviors, and children's self-perceptions (ethnic self-label and feelings about self-label). The sample comprised 44 U.S.-based parents and their daughters ages 6 to 9 who were adopted from China. Correlation analyses revealed that parents' role variables and CS behaviors were related, and children's ethnic self-label was related to family ethnic identity and CS behaviors. Qualitative analyses point to complexities in children's ethnic identity and between family and children's ethnic identities. Together, these findings provide support for the theoretical model and suggest that although ethnic identity among international transracial adoptees (ITRAs) has similarities to that of nonadopted ethnic minority children, their internal experiences are more complex. © 2015 Wiley Periodicals, Inc.

  15. Variable Melt Production Rate of the Kerguelen HotSpot Due To Long-Term Plume-Ridge Interaction

    NASA Astrophysics Data System (ADS)

    Bredow, Eva; Steinberger, Bernhard

    2018-01-01

    For at least 120 Myr, the Kerguelen plume has distributed enormous amounts of magmatic rocks over various igneous provinces between India, Australia, and Antarctica. Previous attempts to reconstruct the complex history of this plume have revealed several characteristics that are inconsistent with properties typically associated with plumes. To explore the geodynamic behavior of the Kerguelen hotspot, and in particular address these inconsistencies, we set up a regional viscous flow model with the mantle convection code ASPECT. Our model features complex time-dependent boundary conditions in order to explicitly simulate the surrounding conditions of the Kerguelen plume. We show that a constant plume influx can result in a variable magma production rate if the plume interacts with nearby spreading ridges and that a dismembered plume, multiple plumes, or solitary waves in the plume conduit are not required to explain the fluctuating magma output and other unusual characteristics attributed to the Kerguelen hotspot.

  16. Complexity of Multi-Dimensional Spontaneous EEG Decreases during Propofol Induced General Anaesthesia

    PubMed Central

    Schartner, Michael; Seth, Anil; Noirhomme, Quentin; Boly, Melanie; Bruno, Marie-Aurelie; Laureys, Steven; Barrett, Adam

    2015-01-01

    Emerging neural theories of consciousness suggest a correlation between a specific type of neural dynamical complexity and the level of consciousness: When awake and aware, causal interactions between brain regions are both integrated (all regions are to a certain extent connected) and differentiated (there is inhomogeneity and variety in the interactions). In support of this, recent work by Casali et al (2013) has shown that Lempel-Ziv complexity correlates strongly with conscious level, when computed on the EEG response to transcranial magnetic stimulation. Here we investigated complexity of spontaneous high-density EEG data during propofol-induced general anaesthesia. We consider three distinct measures: (i) Lempel-Ziv complexity, which is derived from how compressible the data are; (ii) amplitude coalition entropy, which measures the variability in the constitution of the set of active channels; and (iii) the novel synchrony coalition entropy (SCE), which measures the variability in the constitution of the set of synchronous channels. After some simulations on Kuramoto oscillator models which demonstrate that these measures capture distinct ‘flavours’ of complexity, we show that there is a robustly measurable decrease in the complexity of spontaneous EEG during general anaesthesia. PMID:26252378
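
    Lempel-Ziv complexity of a binarized signal can be computed by counting the phrases of the LZ76 parsing. The sketch below binarizes each signal by thresholding at its mean and compares a regular and an irregular signal; the paper additionally normalizes against shuffled surrogates, a step omitted here, and the signals are synthetic rather than EEG data.

    ```python
    import numpy as np

    def lz76_complexity(binary):
        """Number of phrases in the Lempel-Ziv (LZ76) parsing of a binary sequence."""
        s = ''.join('1' if b else '0' for b in binary)
        n, i, phrases = len(s), 0, 0
        while i < n:
            j = i + 1
            # Extend the current phrase while it can still be copied from earlier material.
            while j <= n and s[i:j] in s[:j - 1]:
                j += 1
            phrases += 1
            i = j
        return phrases

    rng = np.random.default_rng(0)
    t = np.arange(2000)
    regular = np.sin(2 * np.pi * t / 100)      # highly predictable signal -> low complexity
    noisy = rng.standard_normal(2000)          # maximally irregular signal -> high complexity

    for name, x in [("regular", regular), ("noisy", noisy)]:
        binarized = x > x.mean()               # threshold each channel at its mean
        print(name, lz76_complexity(binarized))
    ```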

  17. Surgical Risk Preoperative Assessment System (SURPAS): II. Parsimonious Risk Models for Postoperative Adverse Outcomes Addressing Need for Laboratory Variables and Surgeon Specialty-specific Models.

    PubMed

    Meguid, Robert A; Bronsert, Michael R; Juarez-Colunga, Elizabeth; Hammermeister, Karl E; Henderson, William G

    2016-07-01

    To develop parsimonious prediction models for postoperative mortality, overall morbidity, and 6 complication clusters applicable to a broad range of surgical operations in adult patients. Quantitative risk assessment tools are not routinely used for preoperative patient assessment, shared decision making, informed consent, and preoperative patient optimization, likely due in part to the burden of data collection and the complexity of incorporation into routine surgical practice. Multivariable forward selection stepwise logistic regression analyses were used to develop predictive models for 30-day mortality, overall morbidity, and 6 postoperative complication clusters, using 40 preoperative variables from 2,275,240 surgical cases in the American College of Surgeons National Surgical Quality Improvement Program data set, 2005 to 2012. For the mortality and overall morbidity outcomes, prediction models were compared with and without preoperative laboratory variables, and generic models (based on all of the data from 9 surgical specialties) were compared with specialty-specific models. In each model, the cumulative c-index was used to examine the contribution of each added predictor variable. C-indexes, Hosmer-Lemeshow analyses, and Brier scores were used to compare discrimination and calibration between models. For the mortality and overall morbidity outcomes, the prediction models without the preoperative laboratory variables performed as well as the models with the laboratory variables, and the generic models performed as well as the specialty-specific models. The c-indexes were 0.938 for mortality, 0.810 for overall morbidity, and for the 6 complication clusters ranged from 0.757 for infectious to 0.897 for pulmonary complications. Across the 8 prediction models, the first 7 to 11 variables entered accounted for at least 99% of the c-index of the full model (using up to 28 nonlaboratory predictor variables). Our results suggest that it will be possible to develop parsimonious models to predict 8 important postoperative outcomes for a broad surgical population, without the need for surgeon specialty-specific models or inclusion of laboratory variables.
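
    The discrimination and calibration metrics named above (c-index, Brier score) can be computed directly from predicted probabilities; for a binary outcome the c-index equals the area under the ROC curve. The sketch below uses a hypothetical logistic model on synthetic data, not the NSQIP dataset or the SURPAS predictor variables.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score, brier_score_loss

    rng = np.random.default_rng(0)

    # Hypothetical preoperative predictors and a binary 30-day mortality outcome.
    n = 5000
    X = rng.standard_normal((n, 5))
    logit = -3.0 + 1.2 * X[:, 0] + 0.8 * X[:, 1]
    y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

    # Fit on one half, evaluate on the other half.
    model = LogisticRegression().fit(X[:2500], y[:2500])
    p = model.predict_proba(X[2500:])[:, 1]

    print("c-index (AUC):", round(roc_auc_score(y[2500:], p), 3))
    print("Brier score:  ", round(brier_score_loss(y[2500:], p), 3))
    ```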

  18. A review on reflective remote sensing and data assimilation techniques for enhanced agroecosystem modeling

    NASA Astrophysics Data System (ADS)

    Dorigo, W. A.; Zurita-Milla, R.; de Wit, A. J. W.; Brazile, J.; Singh, R.; Schaepman, M. E.

    2007-05-01

    During the last 50 years, the management of agroecosystems has been undergoing major changes to meet the growing demand for food, timber, fibre and fuel. As a result of this intensified use, the ecological status of many agroecosystems has severely deteriorated. Modeling the behavior of agroecosystems is therefore of great help, since it allows the definition of management strategies that maximize (crop) production while minimizing the environmental impacts. Remote sensing can support such modeling by offering information on the spatial and temporal variation of important canopy state variables which would be very difficult to obtain otherwise. In this paper, we present an overview of different methods that can be used to derive biophysical and biochemical canopy state variables from optical remote sensing data in the VNIR-SWIR regions. The overview is based on an extensive literature review in which both statistical-empirical and physically based methods are discussed. Subsequently, the prevailing techniques of assimilating remote sensing data into agroecosystem models are outlined. The increasing complexity of data assimilation methods and of models describing agroecosystem functioning has significantly increased computational demands. For this reason, we include a short section on the potential of parallel processing to deal with the complex and computationally intensive algorithms described in the preceding sections. The studied literature reveals that many valuable techniques have been developed both for the retrieval of canopy state variables from reflective remote sensing data and for assimilating the retrieved variables into agroecosystem models. However, for agroecosystem modeling and remote sensing data assimilation to be commonly employed on a global operational basis, emphasis will have to be put on bridging the mismatch between data availability and accuracy on the one hand, and model and user requirements on the other. This could be achieved by integrating imagery with different spatial, temporal, spectral, and angular resolutions, and by fusing optical data with data of different origin, such as LIDAR and radar/microwave.

  19. Structural identifiability of cyclic graphical models of biological networks with latent variables.

    PubMed

    Wang, Yulin; Lu, Na; Miao, Hongyu

    2016-06-13

    Graphical models have long been used to describe biological networks for a variety of important tasks such as the determination of key biological parameters, and the structure of a graphical model ultimately determines whether such unknown parameters can be unambiguously obtained from experimental observations (i.e., the identifiability problem). Limited by resources or technical capacities, complex biological networks are usually only partially observed in experiments, which introduces latent variables into the corresponding graphical models. A number of previous studies have tackled the parameter identifiability problem for graphical models such as linear structural equation models (SEMs) with or without latent variables. However, the limited resolution and efficiency of existing approaches necessarily calls for further development of novel structural identifiability analysis algorithms. An efficient structural identifiability analysis algorithm is developed in this study for a broad range of network structures. The proposed method adopts Wright's path coefficient method to generate identifiability equations in the form of symbolic polynomials, and then converts these symbolic equations to binary matrices (called identifiability matrices). Several matrix operations are introduced for identifiability matrix reduction while maintaining system equivalency. Based on the reduced identifiability matrices, the structural identifiability of each parameter is determined. A number of benchmark models are used to verify the validity of the proposed approach. Finally, the network module for influenza A virus replication is employed as a real example to illustrate the application of the proposed approach in practice. The proposed approach can deal with cyclic networks with latent variables. The key advantage is that it intentionally avoids symbolic computation and is thus highly efficient. Also, this method is capable of determining the identifiability of each single parameter and thus offers higher resolution than many existing approaches. Overall, this study provides a basis for systematic examination and refinement of graphical models of biological networks from the identifiability point of view, and it has significant potential to be extended to more complex network structures or high-dimensional systems.

  20. [Optimal extraction of effective constituents from Aralia elata by central composite design and response surface methodology].

    PubMed

    Lv, Shao-Wa; Liu, Dong; Hu, Pan-Pan; Ye, Xu-Yan; Xiao, Hong-Bin; Kuang, Hai-Xue

    2010-03-01

    To optimize the process of extracting effective constituents from Aralia elata by response surface methodology. The independent variables were ethanol concentration, reflux time and solvent fold; the dependent variable was the extraction rate of total saponins from Aralia elata. Linear or non-linear mathematical models were used to estimate the relationship between the independent and dependent variables. Response surface methodology was used to optimize the extraction process. The prediction was verified by comparing observed and predicted values. The regression coefficient of the fitted second-order (binomial) model was as high as 0.9617, and the optimum extraction conditions were 70% ethanol, 2.5 hours of reflux, 20-fold solvent and 3 extractions. The bias between observed and predicted values was -2.41%. This shows that the optimized model is highly predictive.
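
    Response surface methodology of this kind typically fits a second-order polynomial to design points from a central composite design and then locates the optimum of the fitted surface. The sketch below uses fabricated factor levels and responses chosen to peak near the centre point; none of the numbers are the study's measurements, and the workflow is a generic RSM illustration rather than the authors' procedure.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.preprocessing import PolynomialFeatures
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)

    # Fabricated central-composite-style design: ethanol (%), reflux time (h), solvent fold.
    X = np.array([[50, 1.5, 10], [90, 1.5, 10], [50, 3.5, 10], [90, 3.5, 10],
                  [50, 1.5, 30], [90, 1.5, 30], [50, 3.5, 30], [90, 3.5, 30],
                  [70, 2.5, 20], [70, 2.5, 20], [40, 2.5, 20], [100, 2.5, 20],
                  [70, 1.0, 20], [70, 4.0, 20], [70, 2.5, 5],  [70, 2.5, 35]], float)
    # Fabricated responses (total saponin extraction rate, %), peaking near the centre point.
    y = (90 - 0.01 * (X[:, 0] - 70) ** 2 - 4 * (X[:, 1] - 2.5) ** 2
         - 0.02 * (X[:, 2] - 20) ** 2 + rng.normal(0, 0.5, len(X)))

    # Second-order (quadratic) response surface model.
    poly = PolynomialFeatures(degree=2, include_bias=False)
    model = LinearRegression().fit(poly.fit_transform(X), y)
    print("R^2 of the quadratic fit:", round(model.score(poly.transform(X), y), 4))

    # Locate the predicted optimum within the experimental region (minimize the negative yield).
    neg_yield = lambda x: -model.predict(poly.transform(x.reshape(1, -1)))[0]
    res = minimize(neg_yield, x0=[70, 2.5, 20], bounds=[(40, 100), (1.0, 4.0), (5, 35)])
    print("predicted optimum (ethanol %, reflux h, solvent fold):", np.round(res.x, 2))
    ```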

  1. TopoSCALE v.1.0: downscaling gridded climate data in complex terrain

    NASA Astrophysics Data System (ADS)

    Fiddes, J.; Gruber, S.

    2014-02-01

    Simulation of land surface processes is problematic in heterogeneous terrain due to the high resolution required of model grids to capture strong lateral variability caused by, for example, topography, and due to the lack of accurate meteorological forcing data at the site or scale at which it is required. Gridded data products produced by atmospheric models can fill this gap; however, they are often not at an appropriate spatial resolution to drive land-surface simulations. In this study we describe a method that uses the well-resolved description of the atmospheric column provided by climate models, together with high-resolution digital elevation models (DEMs), to downscale coarse-grid climate variables to a fine-scale subgrid. The main aim of this approach is to provide high-resolution driving data for a land-surface model (LSM). The method makes use of an interpolation of pressure-level data according to the topographic height of the subgrid. An elevation and topography correction is used to downscale short-wave radiation. Long-wave radiation is downscaled by deriving a cloud component of all-sky emissivity at grid level and using downscaled temperature and relative humidity fields to describe variability with elevation. Precipitation is downscaled with a simple non-linear lapse rate and optionally disaggregated using a climatology approach. We test the method against unscaled grid-level data and a set of reference methods, using a large evaluation dataset (up to 210 stations per variable) in the Swiss Alps. We demonstrate that the method can be used to derive meteorological inputs in complex terrain, with the most significant improvements (with respect to reference methods) seen in variables derived from pressure levels: air temperature, relative humidity, wind speed and incoming long-wave radiation. This method may be of use in improving inputs to numerical simulations in heterogeneous and/or remote terrain, especially when statistical methods are not possible due to a lack of observations (e.g. remote areas or future periods).
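
    The core of the pressure-level downscaling is interpolating the coarse-grid atmospheric column to the elevation of each fine-scale DEM cell. A minimal sketch for air temperature is given below; the pressure-level heights, temperatures, and DEM elevations are hypothetical values for a single grid column, and this is not ERA-type data or the TopoSCALE code itself.

    ```python
    import numpy as np

    # Hypothetical atmospheric column from one coarse-grid reanalysis cell:
    # geopotential height (m) and air temperature (K) on pressure levels.
    level_height = np.array([ 500., 1500., 3000., 4500., 6000.])
    level_temp   = np.array([288.0, 281.5, 272.0, 262.5, 253.0])

    # Fine-scale DEM elevations (m) of the subgrid cells within this coarse cell.
    dem_elevation = np.array([420., 950., 1370., 2240., 2880.])

    # Downscaled temperature: interpolate the column profile to each subgrid elevation.
    # np.interp requires increasing x values (true for level_height here) and clamps
    # elevations below the lowest pressure level to the lowest-level value.
    t_downscaled = np.interp(dem_elevation, level_height, level_temp)

    for z, t in zip(dem_elevation, t_downscaled):
        print(f"elevation {z:6.0f} m -> air temperature {t:6.2f} K")
    ```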

  2. Linking Inflammation, Cardiorespiratory Variability, and Neural Control in Acute Inflammation via Computational Modeling

    PubMed Central

    Dick, Thomas E.; Molkov, Yaroslav I.; Nieman, Gary; Hsieh, Yee-Hsee; Jacono, Frank J.; Doyle, John; Scheff, Jeremy D.; Calvano, Steve E.; Androulakis, Ioannis P.; An, Gary; Vodovotz, Yoram

    2012-01-01

    Acute inflammation leads to organ failure by engaging catastrophic feedback loops in which stressed tissue evokes an inflammatory response and, in turn, inflammation damages tissue. Manifestations of this maladaptive inflammatory response include cardio-respiratory dysfunction that may be reflected in reduced heart rate and ventilatory pattern variabilities. We have developed signal-processing algorithms that quantify non-linear deterministic characteristics of variability in biologic signals. Now, coalescing under the aegis of the NIH Computational Biology Program and the Society for Complexity in Acute Illness, two research teams performed iterative experiments and computational modeling on inflammation and cardio-pulmonary dysfunction in sepsis as well as on neural control of respiration and ventilatory pattern variability. These teams, with additional collaborators, have recently formed a multi-institutional, interdisciplinary consortium, whose goal is to delineate the fundamental interrelationship between the inflammatory response and physiologic variability. Multi-scale mathematical modeling and complementary physiological experiments will provide insight into autonomic neural mechanisms that may modulate the inflammatory response to sepsis and simultaneously reduce heart rate and ventilatory pattern variabilities associated with sepsis. This approach integrates computational models of neural control of breathing and cardio-respiratory coupling with models that combine inflammation, cardiovascular function, and heart rate variability. The resulting integrated model will provide mechanistic explanations for the phenomena of respiratory sinus-arrhythmia and cardio-ventilatory coupling observed under normal conditions, and the loss of these properties during sepsis. This approach holds the potential of modeling cross-scale physiological interactions to improve both basic knowledge and clinical management of acute inflammatory diseases such as sepsis and trauma. PMID:22783197

  3. Linking Inflammation, Cardiorespiratory Variability, and Neural Control in Acute Inflammation via Computational Modeling.

    PubMed

    Dick, Thomas E; Molkov, Yaroslav I; Nieman, Gary; Hsieh, Yee-Hsee; Jacono, Frank J; Doyle, John; Scheff, Jeremy D; Calvano, Steve E; Androulakis, Ioannis P; An, Gary; Vodovotz, Yoram

    2012-01-01

    Acute inflammation leads to organ failure by engaging catastrophic feedback loops in which stressed tissue evokes an inflammatory response and, in turn, inflammation damages tissue. Manifestations of this maladaptive inflammatory response include cardio-respiratory dysfunction that may be reflected in reduced heart rate and ventilatory pattern variabilities. We have developed signal-processing algorithms that quantify non-linear deterministic characteristics of variability in biologic signals. Now, coalescing under the aegis of the NIH Computational Biology Program and the Society for Complexity in Acute Illness, two research teams performed iterative experiments and computational modeling on inflammation and cardio-pulmonary dysfunction in sepsis as well as on neural control of respiration and ventilatory pattern variability. These teams, with additional collaborators, have recently formed a multi-institutional, interdisciplinary consortium, whose goal is to delineate the fundamental interrelationship between the inflammatory response and physiologic variability. Multi-scale mathematical modeling and complementary physiological experiments will provide insight into autonomic neural mechanisms that may modulate the inflammatory response to sepsis and simultaneously reduce heart rate and ventilatory pattern variabilities associated with sepsis. This approach integrates computational models of neural control of breathing and cardio-respiratory coupling with models that combine inflammation, cardiovascular function, and heart rate variability. The resulting integrated model will provide mechanistic explanations for the phenomena of respiratory sinus-arrhythmia and cardio-ventilatory coupling observed under normal conditions, and the loss of these properties during sepsis. This approach holds the potential of modeling cross-scale physiological interactions to improve both basic knowledge and clinical management of acute inflammatory diseases such as sepsis and trauma.

  4. Shuttle Debris Impact Tool Assessment Using the Modern Design of Experiments

    NASA Technical Reports Server (NTRS)

    DeLoach, R.; Rayos, E. M.; Campbell, C. H.; Rickman, S. L.

    2006-01-01

    Computational tools have been developed to estimate thermal and mechanical reentry loads experienced by the Space Shuttle Orbiter as the result of cavities in the Thermal Protection System (TPS). Such cavities can be caused by impact from ice or insulating foam debris shed from the External Tank (ET) on liftoff. The reentry loads depend on cavity geometry and certain Shuttle state variables, among other factors. Certain simplifying assumptions have been made in the tool development about the cavity geometry variables. For example, the cavities are all modeled as shoeboxes , with rectangular cross-sections and planar walls. So an actual cavity is typically approximated with an idealized cavity described in terms of its length, width, and depth, as well as its entry angle, exit angle, and side angles (assumed to be the same for both sides). As part of a comprehensive assessment of the uncertainty in reentry loads estimated by the debris impact assessment tools, an effort has been initiated to quantify the component of the uncertainty that is due to imperfect geometry specifications for the debris impact cavities. The approach is to compute predicted loads for a set of geometry factor combinations sufficient to develop polynomial approximations to the complex, nonparametric underlying computational models. Such polynomial models are continuous and feature estimable, continuous derivatives, conditions that facilitate the propagation of independent variable errors. As an additional benefit, once the polynomial models have been developed, they require fewer computational resources to execute than the underlying finite element and computational fluid dynamics codes, and can generate reentry loads estimates in significantly less time. This provides a practical screening capability, in which a large number of debris impact cavities can be quickly classified either as harmless, or subject to additional analysis with the more comprehensive underlying computational tools. The polynomial models also provide useful insights into the sensitivity of reentry loads to various cavity geometry variables, and reveal complex interactions among those variables that indicate how the sensitivity of one variable depends on the level of one or more other variables. For example, the effect of cavity length on certain reentry loads depends on the depth of the cavity. Such interactions are clearly displayed in the polynomial response models.

  5. Complex analyses on clinical information systems using restricted natural language querying to resolve time-event dependencies.

    PubMed

    Safari, Leila; Patrick, Jon D

    2018-06-01

    This paper reports on a generic framework that gives clinicians the ability to conduct complex analyses on elaborate research topics using cascaded queries to resolve internal time-event dependencies in the research questions, as an extension to the proposed Clinical Data Analytics Language (CliniDAL). A cascaded query model is proposed to resolve internal time-event dependencies in the queries, which can have up to five levels of criteria, starting with a query to define the subjects to be admitted into a study, followed by a query to define the time span of the experiment. Three more cascaded queries can be required to define control groups, control variables and output variables, which all together simulate a real scientific experiment. Depending on the complexity of the research question, the cascaded query model has the flexibility to merge some lower-level queries for simple research questions or to add a nested query at each level to compose more complex queries. Three different scenarios (one of which contains two studies) are described and used to evaluate the proposed solution. CliniDAL's complex analyses solution enables complex queries with time-event dependencies to be answered in at most a few hours, a task that would take many days if performed manually. An evaluation of the results of the research studies, based on a comparison between the CliniDAL and SQL solutions, reveals the high usability and efficiency of CliniDAL's solution. Copyright © 2018 Elsevier Inc. All rights reserved.

  6. Adaptive Variability in Skilled Human Movements

    NASA Astrophysics Data System (ADS)

    Kudo, Kazutoshi; Ohtsuki, Tatsuyuki

    Human movements are produced in variable external/internal environments. Because of this variability, the same motor command can result in quite different movement patterns. Therefore, to produce skilled movements humans must coordinate the variability, not try to exclude it. In addition, because human movements are produced in redundant and complex systems, a combination of variability should be observed in different anatomical/physiological levels. In this paper, we introduce our research about human movement variability that shows remarkable coordination among components, and between organism and environment. We also introduce nonlinear dynamical models that can describe a variety of movements as a self-organization of a dynamical system, because the dynamical systems approach is a major candidate to understand the principle underlying organization of varying systems with huge degrees-of-freedom.

  7. The Mathematics of Psychotherapy: A Nonlinear Model of Change Dynamics.

    PubMed

    Schiepek, Gunter; Aas, Benjamin; Viol, Kathrin

    2016-07-01

    Psychotherapy is a dynamic process produced by a complex system of interacting variables. Even though there are qualitative models of such systems, the link between structure and function, between network and network dynamics, is still missing. The aim of this study is to establish these links. The proposed model is composed of five state variables (P: problem severity, S: success and therapeutic progress, M: motivation to change, E: emotions, I: insight and new perspectives) interconnected by 16 functions. The shape of each function is modified by four parameters (a: capability to form a trustful working alliance, c: mentalization and emotion regulation, r: behavioral resources and skills, m: self-efficacy and reward expectation). Psychologically, the parameters play the role of competencies or traits, which translate into the concept of control parameters in synergetics. The qualitative model was transferred into five coupled, deterministic, nonlinear difference equations generating the dynamics of each variable as a function of the other variables. The mathematical model is able to reproduce important features of psychotherapy processes. Examples of parameter-dependent bifurcation diagrams are given. Beyond the illustrated similarities between simulated and empirical dynamics, the model has to be further developed, systematically tested through simulated experiments, and compared to empirical data.
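
    The general form of such a model, coupled deterministic nonlinear difference equations whose couplings depend on a control parameter, can be illustrated with a toy two-variable system. The coupling functions, parameter, and state variables below are invented for illustration only; they are not the authors' five-variable model or its 16 functions.

    ```python
    import numpy as np

    def simulate(c, steps=200):
        """Toy two-variable nonlinear difference-equation system.

        P: problem severity, S: therapeutic progress; the control parameter c
        (loosely, a competence/trait parameter) shapes the coupling functions.
        """
        P, S = 0.8, 0.1
        traj = []
        for _ in range(steps):
            P_next = P + 0.3 * np.tanh(P) - c * S * P        # progress reduces problems
            S_next = S + 0.2 * c * np.tanh(P) - 0.1 * S      # problems motivate progress
            P, S = np.clip(P_next, 0, 2), np.clip(S_next, 0, 2)
            traj.append((P, S))
        return np.array(traj)

    for c in (0.1, 0.5, 0.9):
        P_final, S_final = simulate(c)[-1]
        print(f"c={c:.1f}: final problem severity {P_final:.2f}, final progress {S_final:.2f}")
    ```

    In this toy system a higher value of the control parameter shifts the long-run state from persistently high problem severity to a low-severity equilibrium, mimicking the kind of parameter-dependent qualitative change the abstract describes.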

  8. A respiratory alert model for the Shenandoah Valley, Virginia, USA

    NASA Astrophysics Data System (ADS)

    Hondula, David M.; Davis, Robert E.; Knight, David B.; Sitka, Luke J.; Enfield, Kyle; Gawtry, Stephen B.; Stenger, Phillip J.; Deaton, Michael L.; Normile, Caroline P.; Lee, Temple R.

    2013-01-01

    Respiratory morbidity (particularly COPD and asthma) can be influenced by short-term weather fluctuations that affect air quality and lung function. We developed a model to evaluate meteorological conditions associated with respiratory hospital admissions in the Shenandoah Valley of Virginia, USA. We generated ensembles of classification trees based on six years of respiratory-related hospital admissions (64,620 cases) and a suite of 83 potential environmental predictor variables. As our goal was to identify short-term weather linkages to high admission periods, the dependent variable was formulated as a binary classification of five-day moving average respiratory admission departures from the seasonal mean value. Accounting for seasonality removed the long-term apparent inverse relationship between temperature and admissions. We generated eight total models specific to the northern and southern portions of the valley for each season. All eight models demonstrate predictive skill (mean odds ratio = 3.635) when evaluated using a randomization procedure. The predictor variables selected by the ensembling algorithm vary across models, and both meteorological and air quality variables are included. In general, the models indicate complex linkages between respiratory health and environmental conditions that may be difficult to identify using more traditional approaches.

  9. The risk-adjusted vision beyond casemix (DRG) funding in Australia. International lessons in high complexity and capitation.

    PubMed

    Antioch, Kathryn M; Walsh, Michael K

    2004-06-01

    Hospitals throughout the world using funding based on diagnosis-related groups (DRG) have incurred substantial budgetary deficits, despite high efficiency. We identify the limitations of DRG funding that lack risk (severity) adjustment for State-wide referral services. Methods to risk adjust DRGs are instructive. The average price in casemix funding in the Australian State of Victoria is policy based, not benchmarked. Average cost weights are too low for high-complexity DRGs relating to State-wide referral services such as heart and lung transplantation and trauma. Risk-adjusted specified grants (RASG) are required for five high-complexity respiratory, cardiology and stroke DRGs incurring annual deficits of $3.6 million due to high casemix complexity and government under-funding despite high efficiency. Five stepwise linear regressions for each DRG excluded non-significant variables and assessed heteroskedasticity and multicollinearity. Cost per patient was the dependent variable. Significant independent variables were age, length-of-stay outliers, number of disease types, diagnoses, procedures and emergency status. Diagnosis and procedure severity markers were identified. The methodology and the work of the State-wide Risk Adjustment Working Group can facilitate risk adjustment of DRGs State-wide and for Treasury negotiations for expenditure growth. The Alfred Hospital previously negotiated RASG of $14 million over 5 years for three trauma and chronic DRGs. Some chronic diseases require risk-adjusted capitation funding models for Australian Health Maintenance Organizations as an alternative to casemix funding. The use of Diagnostic Cost Groups can facilitate State and Federal government reform via new population-based risk adjusted funding models that measure health need.

  10. Exploring biological, chemical and geomorphological patterns in fluvial ecosystems with Structural Equation Modelling

    NASA Astrophysics Data System (ADS)

    Bizzi, S.; Surridge, B.; Lerner, D. N.

    2009-04-01

    River ecosystems represent complex networks of interacting biological, chemical and geomorphological processes. These processes generate spatial and temporal patterns in biological, chemical and geomorphological variables, and a growing number of these variables are now being used to characterise the status of rivers. However, integrated analyses of these biological-chemical-geomorphological networks have rarely been undertaken, and as a result our knowledge of the underlying processes and how they generate the resulting patterns remains weak. The apparent complexity of the networks involved, and the lack of coherent datasets, represent two key challenges to such analyses. In this paper we describe the application of a novel technique, Structural Equation Modelling (SEM), to the investigation of biological, chemical and geomorphological data collected from rivers across England and Wales. The SEM approach is a multivariate statistical technique enabling simultaneous examination of direct and indirect relationships across a network of variables. Further, SEM allows a-priori conceptual or theoretical models to be tested against available data. This is a significant departure from the solely exploratory analyses which characterise other multivariate techniques. We took biological, chemical and river habitat survey data collected by the Environment Agency for 400 sites in rivers spread across England and Wales, and created a single, coherent dataset suitable for SEM analyses. Biological data cover benthic macroinvertebrates, chemical data relate to a range of standard parameters (e.g. BOD, dissolved oxygen and phosphate concentration), and geomorphological data cover factors such as river typology, substrate material and degree of physical modification. We developed a number of a-priori conceptual models, reflecting current research questions or existing knowledge, and tested the ability of these conceptual models to explain the variance and covariance within the dataset. The conceptual models we developed were able to explain correctly the variance and covariance shown by the datasets, proving to be a relevant representation of the processes involved. The models explained 65% of the variance in indices describing benthic macroinvertebrate communities. Dissolved oxygen was of primary importance, but geomorphological factors, including river habitat type and degree of habitat degradation, also had significant explanatory power. The addition of spatial variables, such as latitude or longitude, did not provide additional explanatory power. This suggests that the variables already included in the models effectively represented the eco-regions across which our data were distributed. The models produced new insights into the relative importance of chemical and geomorphological factors for river macroinvertebrate communities. The SEM technique proved a powerful tool for exploring complex biological-chemical-geomorphological networks, for example able to deal with the co-correlations that are common in rivers due to multiple feedback mechanisms.

  11. On the X-ray spectra of luminous, inhomogeneous accretion flows

    NASA Astrophysics Data System (ADS)

    Merloni, A.; Malzac, J.; Fabian, A. C.; Ross, R. R.

    2006-08-01

    We discuss the expected X-ray spectral and variability properties of black hole accretion discs at high luminosity, under the hypothesis that radiation-pressure-dominated discs are subject to violent clumping instabilities and, as a result, have a highly inhomogeneous two-phase structure. After deriving the full accretion disc solutions explicitly in terms of the parameters of the model, we study their radiative properties both with a simple two-zone model, treatable analytically, and with radiative transfer simulations which account simultaneously for energy balance and Comptonization in the hot phase, together with reflection, reprocessing, ionization and thermal balance in the cold phase. We show that, if not only the density, but also the heating rate within these flows is inhomogeneous, then complex reflection-dominated spectra can be obtained for a high enough covering fraction of the cold phase. In general, large reflection components in the observed X-ray spectra should be associated with strong soft excesses, resulting from the combined emission of ionized atomic emission lines. The variability properties of such systems are such that, even when contributing to a large fraction of the hard X-ray spectrum, the reflection component is less variable than the power-law-like emission originating from the hot Comptonizing phase, in agreement with what is observed in many Narrow Line Seyfert 1 galaxies and bright Seyfert 1. Our model falls within the family of those trying to explain the complex X-ray spectra of bright AGN with ionized reflection, but presents an alternative, specific, physically motivated, geometrical set-up for the complex multiphase structure of the inner regions of near-Eddington accretion flows.

  12. Selection of optimal complexity for ENSO-EMR model by minimum description length principle

    NASA Astrophysics Data System (ADS)

    Loskutov, E. M.; Mukhin, D.; Mukhina, A.; Gavrilov, A.; Kondrashov, D. A.; Feigin, A. M.

    2012-12-01

    One of the main problems arising in modeling data taken from a natural system is finding a phase space suitable for construction of the evolution operator model. Since we usually deal with strongly high-dimensional behavior, we are forced to construct a model working in some projection of the system's phase space corresponding to the time scales of interest. Selection of the optimal projection is a non-trivial problem, since there are many ways to reconstruct phase variables from a given time series, especially in the case of a spatio-temporal data field. Actually, finding the optimal projection is a significant part of model selection because, on the one hand, the transformation of data to some phase variable vector can be considered a required component of the model. On the other hand, such an optimization of the phase space makes sense only in relation to the parametrization of the model we use, i.e. the representation of the evolution operator, so we should find an optimal structure of the model together with the phase variable vector. In this paper we propose to use the principle of minimum description length (Molkov et al., 2009) to select models of optimal complexity. The proposed method is applied to optimization of the Empirical Model Reduction (EMR) of the ENSO phenomenon (Kravtsov et al., 2005; Kondrashov et al., 2005). This model operates within a subset of leading EOFs constructed from the spatio-temporal field of SST in the Equatorial Pacific, and has the form of multi-level stochastic differential equations (SDEs) with polynomial parameterization of the right-hand side. Optimal values for the number of EOFs, the order of the polynomial and the number of levels are estimated from the Equatorial Pacific SST dataset. References: Ya. Molkov, D. Mukhin, E. Loskutov, G. Fidelin and A. Feigin, Using the minimum description length principle for global reconstruction of dynamic systems from noisy time series, Phys. Rev. E, Vol. 80, P. 046207, 2009. Kravtsov S., Kondrashov D., Ghil M., 2005: Multilevel regression modeling of nonlinear processes: Derivation and applications to climatic variability. J. Climate, 18 (21): 4404-4424. D. Kondrashov, S. Kravtsov, A. W. Robertson and M. Ghil, 2005: A hierarchy of data-based ENSO models. J. Climate, 18, 4425-4444.

  13. Modeling the Spatial Dynamics of International Tuna Fleets

    PubMed Central

    2016-01-01

    We developed an iterative sequential random utility model to investigate the social and environmental determinants of the spatiotemporal decision process of tuna purse-seine fishery fishing effort in the eastern Pacific Ocean. Operations of the fishing gear mark checkpoints in a continuous complex decision-making process. Individual fisher behavior is modeled by identifying diversified choices over decision-space for an entire fishing trip, which allows inclusion of prior and current vessel locations and conditions among the explanatory variables. Among these factors are vessel capacity; departure and arrival port; duration of the fishing trip; daily and cumulative distance travelled, which provides a proxy for operation costs; expected revenue; oceanographic conditions; and tons of fish on board. The model uses a two-step decision process to capture the probability of a vessel choosing a specific fishing region for the first set and the probability of switching to (or staying in) a specific region to fish before returning to its landing port. The model provides a means to anticipate the success of marine resource management, and it can be used to evaluate fleet diversity in fisher behavior, the impact of climate variability, and the stability and resilience of complex coupled human and natural systems. PMID:27537545

  14. Convenient QSAR model for predicting the complexation of structurally diverse compounds with beta-cyclodextrins.

    PubMed

    Pérez-Garrido, Alfonso; Morales Helguera, Aliuska; Abellán Guillén, Adela; Cordeiro, M Natália D S; Garrido Escudero, Amalio

    2009-01-15

    This paper reports a QSAR study for predicting the complexation of a large and heterogeneous variety of substances (233 organic compounds) with beta-cyclodextrins (beta-CDs). Several different theoretical molecular descriptors, calculated solely from the molecular structure of the compounds under investigation, and an efficient variable selection procedure, the Genetic Algorithm, led to models with satisfactory global accuracy and predictivity. The best final QSAR model is based on topological descriptors while still offering a reasonable interpretation. This QSAR model was able to explain ca. 84% of the variance in the experimental activity, and displayed very good internal cross-validation statistics and predictivity on external data. It shows that the driving forces for CD complexation are mainly hydrophobic and steric (van der Waals) interactions. Thus, the results of our study provide a valuable tool for future screening and priority testing of beta-CD guest molecules.

  15. A 1.26 μW Cytomimetic IC Emulating Complex Nonlinear Mammalian Cell Cycle Dynamics: Synthesis, Simulation and Proof-of-Concept Measured Results.

    PubMed

    Houssein, Alexandros; Papadimitriou, Konstantinos I; Drakakis, Emmanuel M

    2015-08-01

    Cytomimetic circuits represent a novel, ultra low-power, continuous-time, continuous-value class of circuits, capable of mapping on silicon cellular and molecular dynamics modelled by means of nonlinear ordinary differential equations (ODEs). Such monolithic circuits are in principle able to emulate on chip, single or multiple cell operations in a highly parallel fashion. Cytomimetic topologies can be synthesized by adopting the Nonlinear Bernoulli Cell Formalism (NBCF), a mathematical framework that exploits the striking similarities between the equations describing weakly-inverted Metal-Oxide Semiconductor (MOS) devices and coupled nonlinear ODEs, typically appearing in models of naturally encountered biochemical systems. The NBCF maps biological state variables onto strictly positive subthreshold MOS circuit currents. This paper presents the synthesis, the simulation and proof-of-concept chip results corresponding to the emulation of a complex cellular network mechanism, the skeleton model for the network of Cyclin-dependent Kinases (CdKs) driving the mammalian cell cycle. This five variable nonlinear biological model, when appropriate model parameter values are assigned, can exhibit multiple oscillatory behaviors, varying from simple periodic oscillations, to complex oscillations such as quasi-periodicity and chaos. The validity of our approach is verified by simulated results with realistic process parameters from the commercially available AMS 0.35 μm technology and by chip measurements. The fabricated chip occupies an area of 2.27 mm2 and consumes a power of 1.26 μW from a power supply of 3 V. The presented cytomimetic topology follows closely the behavior of its biological counterpart, exhibiting similar time-dependent solutions of the Cdk complexes, the transcription factors and the proteins.

  16. Encoding dependence in Bayesian causal networks

    USDA-ARS?s Scientific Manuscript database

    Bayesian networks (BNs) represent complex, uncertain spatio-temporal dynamics by propagation of conditional probabilities between identifiable states with a testable causal interaction model. Typically, they assume random variables are discrete in time and space with a static network structure that ...

  17. New developments in UTMOST : application to electronic stability control.

    DOT National Transportation Integrated Search

    2009-10-01

    The Unified Tool for Mapping Opportunities for Safety Technology (UTMOST) : is a model of crash data that incorporates the complex relationships among different : vehicle and driver variables. It is designed to visualize the effect of multiple safety...

  18. Solving the Inverse-Square Problem with Complex Variables

    ERIC Educational Resources Information Center

    Gauthier, N.

    2005-01-01

    The equation of motion for a mass that moves under the influence of a central, inverse-square force is formulated and solved as a problem in complex variables. To find the solution, the constancy of angular momentum is first established using complex variables. Next, the complex position coordinate and complex velocity of the particle are assumed…
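
    The complex-variable setup places the particle at z = r e^{iθ} in the orbital plane; a brief sketch of the angular-momentum step is given below. This is a standard derivation consistent with the abstract, not a reproduction of the paper's full solution.

    ```latex
    % Complex-variable formulation of the inverse-square problem (standard derivation).
    \[
      z(t) = r(t)\, e^{i\theta(t)}, \qquad
      \dot z = \bigl(\dot r + i r \dot\theta\bigr) e^{i\theta}, \qquad
      m\ddot z = -\frac{k}{r^{2}}\, e^{i\theta}.
    \]
    % Angular momentum in complex form, and its constancy under the central force:
    \[
      L = m\,\operatorname{Im}\!\bigl(\bar z\,\dot z\bigr) = m r^{2}\dot\theta, \qquad
      \dot L = m\,\operatorname{Im}\!\bigl(\bar z\,\ddot z\bigr)
             = \operatorname{Im}\!\Bigl(-\frac{k}{r^{2}}\, r\, e^{-i\theta} e^{i\theta}\Bigr) = 0 .
    \]
    ```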

  19. TUMOR HAPLOTYPE ASSEMBLY ALGORITHMS FOR CANCER GENOMICS

    PubMed Central

    AGUIAR, DEREK; WONG, WENDY S.W.; ISTRAIL, SORIN

    2014-01-01

    The growing availability of inexpensive high-throughput sequence data is enabling researchers to sequence tumor populations within a single individual at high coverage. But, cancer genome sequence evolution and mutational phenomena like driver mutations and gene fusions are difficult to investigate without first reconstructing tumor haplotype sequences. Haplotype assembly of single individual tumor populations is an exceedingly difficult task complicated by tumor haplotype heterogeneity, tumor or normal cell sequence contamination, polyploidy, and complex patterns of variation. While computational and experimental haplotype phasing of diploid genomes has seen much progress in recent years, haplotype assembly in cancer genomes remains uncharted territory. In this work, we describe HapCompass-Tumor a computational modeling and algorithmic framework for haplotype assembly of copy number variable cancer genomes containing haplotypes at different frequencies and complex variation. We extend our polyploid haplotype assembly model and present novel algorithms for (1) complex variations, including copy number changes, as varying numbers of disjoint paths in an associated graph, (2) variable haplotype frequencies and contamination, and (3) computation of tumor haplotypes using simple cycles of the compass graph which constrain the space of haplotype assembly solutions. The model and algorithm are implemented in the software package HapCompass-Tumor which is available for download from http://www.brown.edu/Research/Istrail_Lab/. PMID:24297529

  20. Empirical modeling ENSO dynamics with complex-valued artificial neural networks

    NASA Astrophysics Data System (ADS)

    Seleznev, Aleksei; Gavrilov, Andrey; Mukhin, Dmitry

    2016-04-01

    The main difficulty in empirically reconstructing distributed dynamical systems (e.g. regional climate systems, such as the El Niño-Southern Oscillation, ENSO) is the huge amount of observational data comprising time-varying spatial fields of several variables. An efficient reduction of the system's dimensionality is therefore essential for inferring an evolution operator (EO) for a low-dimensional subsystem that determines the key properties of the observed dynamics. In this work, for efficient reduction of observational data sets we use complex-valued (Hilbert) empirical orthogonal functions which, unlike traditional empirical orthogonal functions, are by their nature appropriate for describing propagating structures. For the approximation of the EO, a universal model in the form of a complex-valued artificial neural network is suggested. The effectiveness of this approach is demonstrated by predicting both the Jin-Neelin-Ghil ENSO model [1] behavior and real ENSO variability from sea surface temperature anomaly data [2]. The study is supported by the Government of the Russian Federation (agreement #14.Z50.31.0033 with the Institute of Applied Physics of RAS). 1. Jin, F.-F., J. D. Neelin, and M. Ghil, 1996: El Niño/Southern Oscillation and the annual cycle: subharmonic frequency locking and aperiodicity. Physica D, 98, 442-465. 2. http://iridl.ldeo.columbia.edu/SOURCES/.KAPLAN/.EXTENDED/.v2/.ssta/
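
    A minimal sketch of the dimensionality-reduction step described here, using Hilbert (complex) empirical orthogonal functions; the data array and the number of retained modes are placeholders, not the study's inputs, and the complex-valued neural network for the evolution operator is not reproduced.

        # Hedged sketch: Hilbert (complex) EOFs for reducing a space-time anomaly field.
        import numpy as np
        from scipy.signal import hilbert

        rng = np.random.default_rng(0)
        X = rng.standard_normal((500, 200))       # placeholder for (time, space) SST anomalies

        Xa = hilbert(X, axis=0)                   # analytic signal adds a 90-degree-shifted part
        U, s, Vh = np.linalg.svd(Xa, full_matrices=False)
        k = 5                                     # retain a few leading complex modes
        pcs = U[:, :k] * s[:k]                    # complex principal components (time series)
        patterns = Vh[:k]                         # complex spatial patterns (amplitude and phase)

        # relative reconstruction error of the truncated expansion
        X_rec = (pcs @ patterns).real
        print(np.linalg.norm(X - X_rec) / np.linalg.norm(X))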

  1. Modeling spatial and temporal dynamics of wind flow and potential fire behavior following a mountain pine beetle outbreak in a lodgepole pine forest

    Treesearch

    Chad M. Hoffman; Rodman Linn; Russell Parsons; Carolyn Sieg; Judith Winterkamp

    2015-01-01

    Patches of live, dead, and dying trees resulting from bark beetle-caused mortality alter spatial and temporal variability in the canopy and surface fuel complex through changes in the foliar moisture content of attacked trees and through the redistribution of canopy fuels. The resulting heterogeneous fuels complexes alter within-canopy wind flow, wind fluctuations, and...

  2. Geo-Semantic Framework for Integrating Long-Tail Data and Model Resources for Advancing Earth System Science

    NASA Astrophysics Data System (ADS)

    Elag, M.; Kumar, P.

    2014-12-01

    Often, scientists and small research groups collect data that address targeted issues and have limited geographic or temporal range. A large number of such collections together constitute a large database that is of immense value to Earth Science studies. The complexity of integrating these data includes heterogeneity in dimensions, coordinate systems, scales, variables, providers, users and contexts. They have been defined as long-tail data. Similarly, we use "long-tail models" to characterize a heterogeneous collection of models and/or modules developed for targeted problems by individuals and small groups, which together provide a large valuable collection. The complexity of integrating across these models includes differing variable names and units for the same concept, model runs at different time steps and spatial resolutions, use of differing naming and reference conventions, etc. The ability to "integrate long-tail models and data" will provide an opportunity for the interoperability and reusability of communities' resources, where not only can models be combined in a workflow, but each model will be able to discover and (re)use data in an application-specific context of space, time and questions. This capability is essential to represent, understand, predict, and manage heterogeneous and interconnected processes and activities by harnessing the complex, heterogeneous, and extensive set of distributed resources. Because of the staggering production rate of long-tail models and data resulting from advances in computational, sensing, and information technologies, an important challenge arises: how can geoinformatics bring together these resources seamlessly, given the inherent complexity among model and data resources that span various domains? We will present a semantic-based framework to support the integration of "long-tail" models and data. This builds on existing technologies including: (i) SEAD (Sustainable Environmental Actionable Data), which supports curation and preservation of long-tail data during its life-cycle; (ii) BrownDog, which enhances the machine interpretability of large unstructured and uncurated data; and (iii) CSDMS (Community Surface Dynamics Modeling System), which "componentizes" models by providing a plug-and-play environment for model integration.

  3. Field measurements and modeling to resolve m2 to km2 CH4 emissions for a complex urban source: An Indiana landfill study

    USDA-ARS's Scientific Manuscript database

    Large uncertainties for landfill CH4 emissions due to spatial and temporal variabilities remain unresolved by short-term field campaigns and historic GHG inventory models. Using four field methods (aircraft-based mass balance, tracer correlation, vertical radial plume mapping, and static chambers) ...

  4. Simulating initial attack with two fire containment models

    Treesearch

    Romain M. Mees

    1985-01-01

    Given a variable rate of fireline construction and an elliptical fire growth model, two methods for estimating the required number of resources, time to containment, and the resulting fire area were compared. Five examples illustrate some of the computational differences between the simple and the complex methods. The equations for the two methods can be used and...

  5. Resampling and Distribution of the Product Methods for Testing Indirect Effects in Complex Models

    ERIC Educational Resources Information Center

    Williams, Jason; MacKinnon, David P.

    2008-01-01

    Recent advances in testing mediation have found that certain resampling methods and tests based on the mathematical distribution of 2 normal random variables substantially outperform the traditional "z" test. However, these studies have primarily focused only on models with a single mediator and 2 component paths. To address this limitation, a…

  6. High-resolution numerical approximation of traffic flow problems with variable lanes and free-flow velocities.

    PubMed

    Zhang, Peng; Liu, Ru-Xun; Wong, S C

    2005-05-01

    This paper develops macroscopic traffic flow models for a highway section with variable lanes and free-flow velocities, which involve spatially varying flux functions. To address this complex physical property, we develop a Riemann solver that derives the exact flux values at the interface of the Riemann problem. Based on this solver, we formulate Godunov-type numerical schemes to solve the traffic flow models. Numerical examples that simulate the traffic flow around a bottleneck arising from a drop in traffic capacity on the highway section are given to illustrate the efficiency of these schemes.
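
    For orientation, a minimal Godunov-type update for the scalar LWR traffic model with a cell-dependent capacity drop (a bottleneck) is sketched below. It uses the standard supply/demand interface flux as a simplification and is not the exact Riemann solver developed in the paper; all parameter values are illustrative.

        # Hedged sketch: supply/demand (Godunov-type) update for LWR traffic flow
        import numpy as np

        nx, L, T = 200, 10.0, 0.05
        dx = L / nx
        x = (np.arange(nx) + 0.5) * dx

        vf = np.full(nx, 30.0)                               # free-flow speed per cell
        rho_max = np.where((x > 4) & (x < 6), 0.1, 0.2)      # capacity drop = bottleneck

        def flux(rho, vf, rmax):
            return vf * rho * (1.0 - rho / rmax)

        rho = np.full(nx, 0.08)                              # initial density
        dt = 0.4 * dx / vf.max()                             # CFL-limited time step
        t = 0.0
        while t < T:
            rho_c = 0.5 * rho_max                            # critical density of each cell
            demand = flux(np.minimum(rho, rho_c), vf, rho_max)
            supply = flux(np.maximum(rho, rho_c), vf, rho_max)
            F = np.minimum(demand[:-1], supply[1:])          # interface flux between cells
            F = np.concatenate(([demand[0]], F, [demand[-1]]))   # simple open boundaries
            rho = rho - dt / dx * (F[1:] - F[:-1])
            t += dt
        print(rho.min(), rho.max())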

  7. Interesting examples of supervised continuous variable systems

    NASA Technical Reports Server (NTRS)

    Chase, Christopher; Serrano, Joe; Ramadge, Peter

    1990-01-01

    The authors analyze two simple deterministic flow models for multiple buffer servers, which are examples of the supervision of continuous variable systems by a discrete controller. These systems exhibit what may be regarded as the two extremes of closed-loop behavioral complexity: one is eventually periodic, the other is chaotic. The first example exhibits chaotic behavior that can be characterized statistically. The dual system, the switched server system, exhibits very predictable behavior, which is modeled by a finite state automaton. This research has application to multimodal discrete-time systems where the controller can choose from a set of transition maps to implement.

  8. Feynman propagator for spin foam quantum gravity.

    PubMed

    Oriti, Daniele

    2005-03-25

    We link the notion of causality with the orientation of the spin foam 2-complex. We show that all current spin foam models are orientation independent. Using the technology of evolution kernels for quantum fields on Lie groups, we construct a generalized version of spin foam models, introducing an extra proper time variable. We prove that different ranges of integration for this variable lead to different classes of spin foam models: the usual ones, interpreted as the quantum gravity analogue of the Hadamard function of quantum field theory (QFT) or as inner products between quantum gravity states; and a new class of causal models, the quantum gravity analogue of the Feynman propagator in QFT, a nontrivial function of the orientation data, implying a notion of "timeless ordering".

  9. A New Integrated Weighted Model in SNOW-V10: Verification of Categorical Variables

    NASA Astrophysics Data System (ADS)

    Huang, Laura X.; Isaac, George A.; Sheng, Grant

    2014-01-01

    This paper presents verification results for nowcasts of seven categorical variables from an integrated weighted model (INTW) and the underlying numerical weather prediction (NWP) models. Nowcasting, or short-range forecasting (0-6 h), over complex terrain with sufficient accuracy is highly desirable but a very challenging task. A weighting, evaluation, bias correction and integration system (WEBIS) for generating nowcasts by integrating NWP forecasts and high-frequency observations was used during the Vancouver 2010 Olympic and Paralympic Winter Games as part of the Science of Nowcasting Olympic Weather for Vancouver 2010 (SNOW-V10) project. Forecast data from the Canadian high-resolution deterministic NWP system with three nested grids (at 15-, 2.5- and 1-km horizontal grid spacing) were selected as background gridded data for generating the integrated nowcasts. The seven forecast variables (temperature, relative humidity, wind speed, wind gust, visibility, ceiling and precipitation rate) are treated as categorical variables for verifying the integrated weighted forecasts. By analyzing the verification of forecasts from INTW and the NWP models at 15 sites, the integrated weighted model was found to produce more accurate forecasts for the seven selected variables, regardless of location. This is based on the multi-categorical Heidke skill scores for the test period 12 February to 21 March 2010.
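
    A short sketch of the multi-categorical Heidke skill score used for this kind of verification; the contingency table below is invented for illustration and is not taken from the study.

        # Hedged sketch: multi-categorical Heidke skill score from a forecast/observation table
        import numpy as np

        def heidke_skill_score(table):
            """table[i, j] = count of cases with forecast category i and observed category j."""
            p = table / table.sum()
            accuracy = np.trace(p)                              # proportion correct
            expected = np.sum(p.sum(axis=1) * p.sum(axis=0))    # chance agreement
            return (accuracy - expected) / (1.0 - expected)

        # toy example with 3 categories (e.g. light / moderate / heavy precipitation rate)
        counts = np.array([[50, 10,  2],
                           [ 8, 30,  6],
                           [ 1,  5, 20]])
        print(heidke_skill_score(counts))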

  10. Expanding the occupational health methodology: A concatenated artificial neural network approach to model the burnout process in Chinese nurses.

    PubMed

    Ladstätter, Felix; Garrosa, Eva; Moreno-Jiménez, Bernardo; Ponsoda, Vicente; Reales Aviles, José Manuel; Dai, Junming

    2016-01-01

    Artificial neural networks are sophisticated modelling and prediction tools capable of extracting complex, non-linear relationships between predictor (input) and predicted (output) variables. This study explores this capacity by modelling non-linearities in the hardiness-modulated burnout process with a neural network. Specifically, two multi-layer feed-forward artificial neural networks are concatenated in an attempt to model the composite non-linear burnout process. Sensitivity analysis, a Monte Carlo-based global simulation technique, is then utilised to examine the first-order effects of the predictor variables on the burnout sub-dimensions and consequences. Results show that (1) this concatenated artificial neural network approach is feasible for modelling the burnout process, (2) sensitivity analysis is a fruitful method for studying the relative importance of predictor variables and (3) the relationships among the variables involved in the development of burnout and its consequences are non-linear to different degrees. Many relationships among variables (e.g., stressors and strains) are not linear, yet researchers use linear methods such as Pearson correlation or linear regression to analyse these relationships. Artificial neural network analysis is an innovative method for analysing non-linear relationships and, in combination with sensitivity analysis, is superior to linear methods.

  11. Comparison modeling for alpine vegetation distribution in an arid area.

    PubMed

    Zhou, Jihua; Lai, Liming; Guan, Tianyu; Cai, Wetao; Gao, Nannan; Zhang, Xiaolong; Yang, Dawen; Cong, Zhentao; Zheng, Yuanrun

    2016-07-01

    Mapping and modeling vegetation distribution are fundamental topics in vegetation ecology. With the rise of powerful new statistical techniques and GIS tools, the development of predictive vegetation distribution models has increased rapidly. However, modeling alpine vegetation with high accuracy in arid areas is still a challenge because of the complexity and heterogeneity of the environment. Here, we used a set of 70 variables from ASTER GDEM, WorldClim, and Landsat-8 OLI (land surface albedo and spectral vegetation indices) data with decision tree (DT), maximum likelihood classification (MLC), and random forest (RF) models to discriminate the eight vegetation groups and 19 vegetation formations in the upper reaches of the Heihe River Basin in the Qilian Mountains, northwest China. The combination of variables clearly discriminated vegetation groups but failed to discriminate vegetation formations. Different variable combinations performed differently in each type of model, but the most consistently important parameter in alpine vegetation modeling was elevation. The best RF model was more accurate for vegetation modeling compared with the DT and MLC models for this alpine region, with an overall accuracy of 75 % and a kappa coefficient of 0.64 verified against field point data and an overall accuracy of 65 % and a kappa of 0.52 verified against vegetation map data. The accuracy of regional vegetation modeling differed depending on the variable combinations and models, resulting in different classifications for specific vegetation groups.
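
    A generic sketch of the accuracy and kappa evaluation reported above, using a random forest classifier; the predictor matrix and labels are random placeholders, not the ASTER/WorldClim/Landsat variables used in the study.

        # Hedged sketch: random forest classification with overall accuracy and Cohen's kappa
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import accuracy_score, cohen_kappa_score

        rng = np.random.default_rng(1)
        X = rng.standard_normal((600, 70))        # 70 terrain/climate/spectral predictors (toy)
        y = rng.integers(0, 8, 600)               # 8 vegetation groups (random placeholder labels)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
        pred = rf.predict(X_te)
        print(accuracy_score(y_te, pred), cohen_kappa_score(y_te, pred))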

  12. Tree-based modeling of complex interactions of phosphorus loadings and environmental factors.

    PubMed

    Grunwald, S; Daroub, S H; Lang, T A; Diaz, O A

    2009-06-01

    Phosphorus (P) enrichment has been observed in the historic oligotrophic Greater Everglades in Florida, mainly due to P influx from upstream, agriculturally dominated, low-relief drainage basins of the Everglades Agricultural Area (EAA). Our specific objectives were to: (1) investigate relationships between various environmental factors and P loads in 10 farm basins within the EAA, (2) identify those environmental factors that impart major effects on P loads using three different tree-based modeling approaches, and (3) evaluate predictive models to assess P loads. We assembled thirteen environmental variable sets for all 10 sub-basins characterizing water level management, cropping practices, soils, hydrology, and farm-specific properties. Drainage flow and P concentrations were measured at each sub-basin outlet from 1992 to 2002 and aggregated to derive monthly P loads. We used three different tree-based models, including single regression trees (ST) and committee trees in Bagging (CTb) and ARCing (CTa) modes, with ten-fold cross-validation to test prediction performance. The monthly P loads (MPL) during the monitoring period showed a maximum of 2528 kg (mean: 103 kg) and maximum monthly unit area P loads (UAL) of 4.88 kg P ha⁻¹ (mean: 0.16 kg P ha⁻¹). Our results suggest that hydrologic/water management properties are the major controlling variables for predicting MPL and UAL in the EAA. Tree-based modeling was successful in identifying relationships between P loads and environmental predictor variables on 10 farms in the EAA, indicated by high R² (>0.80) and low prediction errors. Committee trees in ARCing mode generated the best performing models to predict P loads and P loads per unit area. Tree-based models had the ability to analyze complex, non-linear relationships between P loads and multiple variables describing hydrologic/water management, cropping practices, soil and farm-specific properties within the EAA.
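
    A minimal sketch of committee trees in bagging mode evaluated with ten-fold cross-validation; the data are synthetic stand-ins for the thirteen environmental variable sets, and the ARCing (boosting) variant is not shown.

        # Hedged sketch: bagged regression trees with ten-fold cross-validation
        import numpy as np
        from sklearn.ensemble import BaggingRegressor
        from sklearn.tree import DecisionTreeRegressor
        from sklearn.model_selection import cross_val_score, KFold

        rng = np.random.default_rng(2)
        X = rng.standard_normal((400, 13))          # 13 environmental predictors (placeholder)
        y = 2 * X[:, 0] + np.abs(X[:, 1]) + rng.normal(0, 0.5, 400)   # synthetic monthly P load

        committee = BaggingRegressor(DecisionTreeRegressor(), n_estimators=100, random_state=0)
        cv = KFold(n_splits=10, shuffle=True, random_state=0)
        print(cross_val_score(committee, X, y, cv=cv, scoring="r2").mean())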

  13. Evaluating the effects of variable water chemistry on bacterial transport during infiltration.

    PubMed

    Zhang, Haibo; Nordin, Nahjan Amer; Olson, Mira S

    2013-07-01

    Bacterial infiltration through the subsurface has been studied experimentally under different conditions of interest and is dependent on a variety of physical, chemical and biological factors. However, most bacterial transport studies fail to adequately represent the complex processes occurring in natural systems. Bacteria are frequently detected in stormwater runoff and may present a risk of microbial contamination during stormwater recharge into groundwater. Mixing of stormwater runoff with groundwater during infiltration results in changes in local solution chemistry, which may lead to changes in both bacterial and collector surface properties and subsequent bacterial attachment rates. This study focuses on quantifying changes in bacterial transport behavior under variable solution chemistry, and on comparing the influences of chemical variability and physical variability on bacterial attachment rates. The bacterial attachment rate at the soil-water interface was predicted analytically using a combined rate equation, which varies temporally and spatially with respect to changes in solution chemistry. A two-phase Monte Carlo analysis was conducted and an overall input-output correlation coefficient was calculated to quantitatively describe the importance of physicochemical variation in the estimates of attachment rate. Among physical variables, soil particle size has the highest correlation coefficient, followed by porosity of the soil media, bacterial size and flow velocity. Among chemical variables, ionic strength has the highest correlation coefficient. A semi-reactive microbial transport model was developed within HP1 (HYDRUS1D-PHREEQC) and applied to column transport experiments with constant and variable solution chemistries. Bacterial attachment rates varied from 9.10×10⁻³ min⁻¹ to 3.71×10⁻³ min⁻¹ due to mixing of synthetic stormwater (SSW) with artificial groundwater (AGW), while bacterial attachment remained constant at 9.10×10⁻³ min⁻¹ in a constant solution chemistry (AGW only). The model matched observed bacterial breakthrough curves well. Although limitations exist in the application of a semi-reactive microbial transport model, this method represents one step towards a more realistic model of bacterial transport in complex microbial-water-soil systems. Copyright © 2013 Elsevier B.V. All rights reserved.
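
    The Monte Carlo correlation-coefficient idea can be illustrated with a toy attachment-rate expression; the distributions and the rate formula below are placeholders and not the study's combined rate equation.

        # Hedged sketch: Monte Carlo sampling of inputs and input-output correlation coefficients
        import numpy as np

        rng = np.random.default_rng(9)
        n = 10000
        inputs = {
            "grain_size":     rng.lognormal(mean=np.log(3e-4), sigma=0.3, size=n),   # m
            "porosity":       rng.uniform(0.3, 0.45, n),
            "cell_size":      rng.normal(1e-6, 1e-7, n),                             # m
            "velocity":       rng.lognormal(mean=np.log(1e-5), sigma=0.4, size=n),   # m/s
            "ionic_strength": rng.uniform(1e-3, 1e-1, n),                            # M
        }

        # toy attachment-rate expression (illustrative only)
        k_att = (inputs["velocity"] * np.sqrt(inputs["ionic_strength"])
                 / (inputs["grain_size"] * inputs["porosity"]))

        for name, x in inputs.items():
            r = np.corrcoef(x, k_att)[0, 1]       # simple importance measure per input
            print(f"{name:15s} r = {r:+.2f}")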

  14. Incorporating wind availability into land use regression modelling of air quality in mountainous high-density urban environment.

    PubMed

    Shi, Yuan; Lau, Kevin Ka-Lun; Ng, Edward

    2017-08-01

    Urban air quality is an important determinant of the quality of urban life. Land use regression (LUR) modelling of air quality is essential for conducting health impact assessments, but is more challenging in a mountainous, high-density urban scenario because of the complexities of the urban environment. In this study, a total of 21 LUR models are developed for seven air pollutants (the gaseous pollutants CO, NO2, NOx, O3 and SO2, and the particulate pollutants PM2.5 and PM10) with reference to three different time periods (summertime, wintertime and the annual average of 5-year long-term hourly monitoring data from the local air quality monitoring network) in Hong Kong. For this mountainous high-density urban scenario, we improved the traditional LUR modelling method by incorporating wind availability information into LUR modelling based on surface geomorphometric analysis. As a result, 269 independent variables were examined to develop the LUR models using the "ADDRESS" independent variable selection method and stepwise multiple linear regression (MLR). Cross-validation was performed for each resultant model. The results show that wind-related variables are included in most of the resultant models as statistically significant independent variables. Compared with the traditional method, a maximum increase of 20% was achieved in the prediction performance of the annual averaged NO2 concentration by incorporating wind-related variables into LUR model development. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. The Naïve Overfitting Index Selection (NOIS): A new method to optimize model complexity for hyperspectral data

    NASA Astrophysics Data System (ADS)

    Rocha, Alby D.; Groen, Thomas A.; Skidmore, Andrew K.; Darvishzadeh, Roshanak; Willemen, Louise

    2017-11-01

    The growing number of narrow spectral bands in hyperspectral remote sensing improves the capacity to describe and predict biological processes in ecosystems. But it also poses a challenge for fitting empirical models based on such high-dimensional data, which often contain correlated and noisy predictors. As the sample sizes available to train and validate empirical models do not seem to be increasing at the same rate, overfitting has become a serious concern. Overly complex models lead to overfitting by capturing more than the underlying relationship and by fitting random noise in the data. Many regression techniques claim to overcome these problems by using different strategies to constrain complexity, such as limiting the number of terms in the model, creating latent variables or shrinking parameter coefficients. This paper proposes a new method, named Naïve Overfitting Index Selection (NOIS), which uses artificially generated spectra to quantify relative model overfitting and to select an optimal model complexity supported by the data. The robustness of this new method is assessed by comparing it to traditional model selection based on cross-validation. The optimal model complexity is determined for seven different regression techniques, such as partial least squares regression, support vector machine, artificial neural network and tree-based regressions, using five hyperspectral datasets. The NOIS method selects less complex models, which achieve accuracies similar to the cross-validation method. The NOIS method reduces the chance of overfitting, thereby avoiding models that give accurate predictions valid only for the data used and that are too complex to support inferences about the underlying process.
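
    A rough sketch of the idea as we read it: compare the training fit obtained on the real spectra with the fit obtained on artificially generated, pure-noise spectra of the same size, for increasing model complexity. This is our loose interpretation using partial least squares, not the authors' exact NOIS procedure; all data are synthetic.

        # Hedged sketch: overfitting check with artificial spectra and PLS regression
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(3)
        n, p = 60, 200                               # few samples, many spectral bands
        X_real = rng.standard_normal((n, p))
        y = X_real[:, :5].sum(axis=1) + rng.normal(0, 0.5, n)

        for n_comp in (2, 5, 10, 20):
            noise_scores = []
            for _ in range(20):                      # artificial spectra carrying no real signal
                X_noise = rng.standard_normal((n, p))
                noise_scores.append(PLSRegression(n_components=n_comp).fit(X_noise, y).score(X_noise, y))
            real_score = PLSRegression(n_components=n_comp).fit(X_real, y).score(X_real, y)
            print(n_comp, real_score, np.mean(noise_scores))   # high fit on noise = overfitting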

  16. Numerical modelling of biomass combustion: Solid conversion processes in a fixed bed furnace

    NASA Astrophysics Data System (ADS)

    Karim, Md. Rezwanul; Naser, Jamal

    2017-06-01

    Increasing demand for energy and rising concerns over global warming have encouraged the use of renewable energy sources to support the sustainable development of the world. Biomass is a renewable energy source that has become an important fuel for producing thermal energy or electricity. It is an eco-friendly source of energy as it reduces carbon dioxide emissions. Combustion of solid biomass is a complex phenomenon due to the large variety of fuels and physical structures. Among various systems, fixed bed combustion is the most commonly used technique for thermal conversion of solid biomass. But inadequate knowledge of the complex solid conversion processes has limited the development of such combustion systems. Numerical modelling of this combustion system has some advantages over experimental analysis. Many important system parameters (e.g. temperature, density, solid fraction) can be estimated inside the entire domain under different working conditions. In this work, a complete numerical model is used for the solid conversion processes of biomass combustion in a fixed bed furnace. The combustion system is divided into solid and gas phases. The model includes several sub-models to characterize the solid phase of the combustion with several variables. User-defined subroutines are used to introduce solid-phase variables into the commercial CFD code. The gas phase of combustion is resolved using the built-in module of the CFD code. The heat transfer model is modified to predict the temperatures of the solid and gas phases, with a special radiation heat transfer solution accounting for the high absorptivity of the medium. Considering all solid conversion processes, the solid-phase variables are evaluated. Results obtained are discussed with reference to an experimental burner.

  17. Stability of uncertain impulsive complex-variable chaotic systems with time-varying delays.

    PubMed

    Zheng, Song

    2015-09-01

    In this paper, the robust exponential stabilization of uncertain impulsive complex-variable chaotic delayed systems is considered with parameter perturbations and delayed impulses. It is assumed that the considered complex-variable chaotic systems have bounded parametric uncertainties together with state variables on the impulses related to the time-varying delays. Based on the theories of adaptive control and impulsive control, some less conservative and easily verified stability criteria are established for a class of complex-variable chaotic delayed systems with delayed impulses. Some numerical simulations are given to validate the effectiveness of the proposed criteria of impulsive stabilization for uncertain complex-variable chaotic delayed systems. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  18. Central Arctic Crustal Modeling Constrained by Potential Field data and recent ECS Seismic Data

    NASA Astrophysics Data System (ADS)

    Evangelatos, John; Oakey, Gordon; Saltus, Rick

    2017-04-01

    2-D gravity and magnetic models have been generated for several transects across the Alpha-Mendeleev ridge complex to study the regional variability of the crustal structure and identify large-scale lateral changes. The geometry and density parameters for the models have been constrained using recently acquired seismic reflection and refraction data collected jointly by Canada and the United States as part of their collaborative Arctic ECS programs. A total of fifteen models have been generated perpendicular to the ridge complex, typically 50 to 150 km apart. A minimalist approach to modeling involved maintaining a simple, laterally continuous density structure for the crust while varying the model geometry to fit the observed gravity field. This approach is justified because low-amplitude residual Bouguer anomalies suggest a relatively homogeneous density structure within the ridge complex. These models have provided a new measure of the regional variability in crustal thickness. Typically, models with thinner crust correspond with deeper bathymetric depths of the ridge, which is consistent with regional isostatic equilibrium. Complex "chaotic" magnetic anomalies are associated with the Alpha-Mendeleev ridge complex, which extends beneath the surrounding sedimentary basins. Pseudogravity inversion (magnetic potential) of the magnetic field provides a quantifiable areal extent of ~1.3 × 10⁶ km². Forward modeling confirms that the magnetic anomalies are not solely the result of magnetized bathymetric highs, but are caused to a great extent by mid- and lower-crustal sources. The magnetization of the crust inferred from modeling is significantly higher than available lab measurements of onshore volcanic rocks. Although the 2-D models cannot uniquely identify whether the crustal protolith was continental or oceanic, a significant content of high-density and highly magnetic (ultramafic) material is required. Based on the crustal thickness estimates from our regional 2-D gravity models and the two possible protoliths, we determine volumetric estimates of the volcanic component of ~6 × 10⁶ km³ for the mid- and upper crust and between 10 × 10⁶ and 14 × 10⁶ km³ within the lower crust, for a total of at least ~16 × 10⁶ km³. This exceeds any estimates for the onshore circum-Arctic HALIP by more than an order of magnitude.

  19. Heart-Rate Variability-More than Heart Beats?

    PubMed

    Ernst, Gernot

    2017-01-01

    Heart-rate variability (HRV) is frequently introduced as mirroring imbalances within the autonomic nervous system. Many investigations are based on the paradigm that increased sympathetic tone is associated with decreased parasympathetic tone and vice versa. But HRV is probably more than an indicator of probable disturbances in the autonomic system. Some perturbations trigger not reciprocal, but parallel changes of vagal and sympathetic nerve activity. HRV has also been considered a surrogate parameter of the complex interaction between brain and cardiovascular system. Systems biology is an inter-disciplinary field of study focusing on complex interactions within biological systems like the cardiovascular system, with the help of computational models and time series analysis, among others. Time series are considered surrogates of the particular system, reflecting robustness or fragility. Increased variability is usually seen as associated with a good health condition, whereas lowered variability might signify pathological changes. This might explain why lower HRV parameters were related to decreased life expectancy in several studies. Newer integrating theories have been proposed. According to them, HRV reflects as much the state of the heart as the state of the brain. The polyvagal theory suggests that the physiological state dictates the range of behavior and psychological experience. Stressful events perpetuate the rhythms of autonomic states, and subsequently, behaviors. Reduced variability will, according to this theory, not only be a surrogate but represent a fundamental homeostatic mechanism in a pathological state. The neurovisceral integration model proposes that cardiac vagal tone, described in HRV among others as the HF-index, can mirror the functional balance of the neural networks implicated in emotion-cognition interactions. Both recent models represent a more holistic approach to understanding the significance of HRV.

  20. A site specific model and analysis of the neutral somatic mutation rate in whole-genome cancer data.

    PubMed

    Bertl, Johanna; Guo, Qianyun; Juul, Malene; Besenbacher, Søren; Nielsen, Morten Muhlig; Hornshøj, Henrik; Pedersen, Jakob Skou; Hobolth, Asger

    2018-04-19

    Detailed modelling of the neutral mutational process in cancer cells is crucial for identifying driver mutations and understanding the mutational mechanisms that act during cancer development. The neutral mutational process is very complex: whole-genome analyses have revealed that the mutation rate differs between cancer types, between patients and along the genome depending on the genetic and epigenetic context. Therefore, methods that predict the number of different types of mutations in regions or specific genomic elements must consider local genomic explanatory variables. A major drawback of most methods is the need to average the explanatory variables across the entire region or genomic element. This procedure is particularly problematic if the explanatory variable varies dramatically in the element under consideration. To take into account the fine scale of the explanatory variables, we model the probabilities of different types of mutations for each position in the genome by multinomial logistic regression. We analyse 505 cancer genomes from 14 different cancer types and compare the performance in predicting mutation rate for both regional based models and site-specific models. We show that for 1000 randomly selected genomic positions, the site-specific model predicts the mutation rate much better than regional based models. We use a forward selection procedure to identify the most important explanatory variables. The procedure identifies site-specific conservation (phyloP), replication timing, and expression level as the best predictors for the mutation rate. Finally, our model confirms and quantifies certain well-known mutational signatures. We find that our site-specific multinomial regression model outperforms the regional based models. The possibility of including genomic variables on different scales and patient specific variables makes it a versatile framework for studying different mutational mechanisms. Our model can serve as the neutral null model for the mutational process; regions that deviate from the null model are candidates for elements that drive cancer development.
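
    A minimal sketch of a per-site multinomial logistic regression of mutation type on local genomic covariates; the covariate names and labels below are placeholders for illustration and are not the study's actual inputs or fitted model.

        # Hedged sketch: multinomial logistic regression of per-site mutation type
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(4)
        n = 5000
        X = np.column_stack([rng.normal(size=n),       # conservation score (phyloP-like)
                             rng.normal(size=n),       # replication timing
                             rng.normal(size=n)])      # expression level
        y = rng.integers(0, 4, n)                      # 0 = no mutation, 1-3 = mutation types (toy)

        # the default lbfgs solver fits a multinomial model for multi-class targets
        model = LogisticRegression(max_iter=1000).fit(X, y)
        print(model.predict_proba(X[:3]))              # per-site probabilities of each mutation type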

  1. Social vulnerability from a social ecology perspective: a cohort study of older adults from the National Population Health Survey of Canada

    PubMed Central

    2014-01-01

    Background: Numerous social factors, generally studied in isolation, have been associated with older adults' health. Even so, older people's social circumstances are complex, and an approach which embraces this complexity is desirable. Here we investigate many social factors in relation to one another and to survival among older adults, using a social ecology perspective to measure social vulnerability among older adults. Methods: 2740 adults aged 65 and older were followed for ten years in the Canadian National Population Health Survey (NPHS). Twenty-three individual-level social variables were drawn from the 1994 NPHS and five Enumeration Area (EA)-level variables were abstracted from the 1996 Canadian Census using postal code linkage. Principal Component Analysis (PCA) was used to identify dimensions of social vulnerability. All social variables were summed to create a social vulnerability index, which was studied in relation to ten-year mortality. Results: The PCA was limited by the low variance (47%) explained by the emergent factors. Seven dimensions of social vulnerability emerged in the most robust, yet limited, model: social support, engagement, living situation, self-esteem, sense of control, relations with others and contextual socio-economic status. These dimensions showed complex inter-relationships and were situated within a social ecology framework, considering spheres of influence from the individual through to group, neighbourhood and broader societal levels. Adjusting for age, sex, and frailty, increasing social vulnerability measured using the cumulative social vulnerability index was associated with increased risk of mortality over ten years in a Cox regression model (HR 1.04, 95% CI: 1.01-1.07, p = 0.01). Conclusions: Social vulnerability has an important independent influence on older adults' health, though relationships between contributing variables are complex and do not lend themselves well to fragmentation into a small number of discrete factors. A social ecology perspective provides a candidate framework for further study of social vulnerability among older adults. PMID:25129548
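
    A minimal sketch of relating a summed vulnerability index to survival with a Cox proportional hazards model, here using the lifelines package on a synthetic data frame; the variable names and values are placeholders, not the NPHS data.

        # Hedged sketch: cumulative index plus Cox regression with lifelines
        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter

        rng = np.random.default_rng(10)
        n = 2740
        df = pd.DataFrame({
            "svi":     rng.integers(0, 24, n),     # summed social vulnerability items (toy)
            "age":     rng.normal(75, 6, n),
            "sex":     rng.integers(0, 2, n),
            "frailty": rng.normal(0, 1, n),
            "time":    rng.uniform(0.1, 10.0, n),  # years of follow-up
            "death":   rng.integers(0, 2, n),      # event indicator
        })

        cph = CoxPHFitter()
        cph.fit(df, duration_col="time", event_col="death")
        print(cph.hazard_ratios_["svi"])           # hazard ratio per index point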

  2. Centromere synteny among Brachypodium, wheat, and rice

    USDA-ARS's Scientific Manuscript database

    Rice, wheat and Brachypodium are plant genetic models with variable genome complexity and basic chromosome numbers, representing two subfamilies of the Poaceae. Centromeres are prominent chromosome landmarks, but their fate during this convoluted chromosome evolution has been more difficult to deter...

  3. A novel framework to simulating non-stationary, non-linear, non-Normal hydrological time series using Markov Switching Autoregressive Models

    NASA Astrophysics Data System (ADS)

    Birkel, C.; Paroli, R.; Spezia, L.; Tetzlaff, D.; Soulsby, C.

    2012-12-01

    In this paper we present a novel model framework using the class of Markov Switching Autoregressive Models (MSARMs) to examine catchments as complex stochastic systems that exhibit non-stationary, non-linear and non-Normal rainfall-runoff and solute dynamics. MSARMs are pairs of stochastic processes, one observed and one unobserved, or hidden. We model the unobserved process as a finite-state Markov chain and assume that the observed process, given the hidden Markov chain, is conditionally autoregressive, which means that the current observation depends on its recent past (system memory). The model is fully embedded in a Bayesian analysis based on Markov Chain Monte Carlo (MCMC) algorithms for model selection and uncertainty assessment, in which the autoregressive order and the dimension of the hidden Markov chain state-space are essentially self-selected. The hidden states of the Markov chain represent unobserved levels of variability in the observed process that may result from complex interactions of hydroclimatic variability on the one hand and catchment characteristics affecting water and solute storage on the other. To deal with non-stationarity, additional meteorological and hydrological time series along with a periodic component can be included in the MSARMs as covariates. This extension allows identification of potential underlying drivers of temporal rainfall-runoff and solute dynamics. We applied the MSARM framework to streamflow and conservative tracer (deuterium and oxygen-18) time series from an intensively monitored 2.3 km² experimental catchment in eastern Scotland. Statistical time series analysis, in the form of MSARMs, suggested that the streamflow and isotope tracer time series are not controlled by simple linear rules. MSARMs showed that the dependence of current observations on past inputs, which transport models often capture in the form of long-tailed travel time and residence time distributions, can be efficiently explained by non-stationarity of the system input (climatic variability) and/or the complexity of catchment storage characteristics. The statistical model is also capable of reproducing short-term (event) and longer-term (inter-event), wet and dry dynamical "hydrological states". These reflect the non-linear transport mechanisms of flow pathways induced by transient climatic and hydrological variables and modified by catchment characteristics. We conclude that MSARMs are a powerful tool for analyzing the temporal dynamics of hydrological data, allowing for explicit integration of non-stationary, non-linear and non-Normal characteristics.
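
    For intuition, a minimal simulation of the kind of observed/hidden pair an MSARM describes: a two-state hidden Markov chain switches the coefficient and noise level of an AR(1) observation process. The regimes and parameter values are illustrative, not fitted to the Scottish catchment data, and the Bayesian/MCMC inference step is not shown.

        # Hedged sketch: simulating a two-state Markov switching AR(1) process
        import numpy as np

        rng = np.random.default_rng(5)
        P = np.array([[0.95, 0.05],        # transition matrix of the hidden Markov chain
                      [0.10, 0.90]])
        phi = [0.2, 0.85]                  # AR(1) coefficient in each hidden state ("dry"/"wet")
        sigma = [0.3, 1.0]                 # noise level in each hidden state

        n, s = 1000, 0
        y = np.zeros(n)
        states = np.zeros(n, dtype=int)
        for t in range(1, n):
            s = rng.choice(2, p=P[s])                           # hidden regime switch
            y[t] = phi[s] * y[t - 1] + rng.normal(0, sigma[s])  # conditionally autoregressive obs.
            states[t] = s
        print(states.mean(), y.std())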

  4. Forecasting seasonal hydrologic response in major river basins

    NASA Astrophysics Data System (ADS)

    Bhuiyan, A. M.

    2014-05-01

    Seasonal precipitation variation due to natural climate variation influences stream flow and the apparent frequency and severity of extreme hydrological conditions such as floods and droughts. To study hydrologic response and understand the occurrence of extreme hydrological events, the relevant forcing variables must be identified. This study attempts to assess and quantify the historical occurrence and context of extreme hydrologic flow events and the relation to relevant climate variables. Once identified, the flow data and climate variables are evaluated to identify the primary relationship indicators of hydrologic extreme event occurrence. Existing studies focus on developing basin-scale forecasting techniques based on climate anomalies in El Niño/La Niña episodes linked to global climate. Building on earlier work, the goal of this research is to quantify variations in historical river flows at the seasonal temporal scale and at regional to continental spatial scales. The work identifies and quantifies runoff variability of major river basins and correlates flow with environmental forcing variables such as El Niño, La Niña, and the sunspot cycle. These variables are expected to be the primary external natural indicators of inter-annual and inter-seasonal patterns of regional precipitation and river flow. Relations between continental-scale hydrologic flows and external climate variables are evaluated through direct correlations in a seasonal context with environmental phenomena such as sunspot numbers (SSN), the Southern Oscillation Index (SOI), and the Pacific Decadal Oscillation (PDO). Methods including stochastic time series analysis and artificial neural networks are developed to represent the seasonal variability evident in the historical records of river flows. River flows are categorized into low, average and high flow levels to evaluate and simulate flow variations under associated climate variable variations. Results demonstrated that no single method is best suited to represent scenarios leading to extreme flow conditions. For selected flow scenarios, the persistence model performance may be comparable to more complex multivariate approaches, and complex methods did not always improve flow estimation. Overall model performance indicates that including river flows and forcing variables on average improves extreme event forecasting skill. As a means to further refine the flow estimation, an ensemble forecast method is implemented to provide a likelihood-based indication of expected river flow magnitude and variability. Results indicate that seasonal flow variations are well captured in the ensemble range; therefore the ensemble approach can often prove efficient in estimating extreme river flow conditions. The discriminant prediction approach, a probabilistic measure for forecasting streamflow, is also adopted to assess model performance. Results show the efficiency of the method in terms of representing uncertainties in the forecasts.

  5. Rule-Mining for the Early Prediction of Chronic Kidney Disease Based on Metabolomics and Multi-Source Data

    PubMed Central

    Luck, Margaux; Bertho, Gildas; Bateson, Mathilde; Karras, Alexandre; Yartseva, Anastasia; Thervet, Eric

    2016-01-01

    1H Nuclear Magnetic Resonance (NMR)-based metabolic profiling is very promising for the diagnosis of the stages of chronic kidney disease (CKD). Because of the high dimension of NMR spectra datasets and the complex mixture of metabolites in biological samples, the identification of discriminant biomarkers of a disease is challenging. None of the widely used chemometric methods in NMR metabolomics performs a local exhaustive exploration of the data. We developed a descriptive and easily understandable approach that searches for discriminant local phenomena using an original exhaustive rule-mining algorithm in order to predict two groups of patients: 1) patients having low to mild CKD stages with no renal failure and 2) patients having moderate to established CKD stages with renal failure. Our predictive algorithm explores the m-dimensional variable space to capture the local overdensities of the two groups of patients in the form of easily interpretable rules. Afterwards, an L2-penalized logistic regression on the discriminant rules was used to build predictive models of the CKD stages. We explored a complex multi-source dataset that included the clinical, demographic, clinical chemistry, renal pathology and urine metabolomic data of a cohort of 110 patients. Given this multi-source dataset and the complex nature of metabolomic data, we analyzed 1- and 2-dimensional rules in order to integrate the information carried by the interactions between the variables. The results indicated that our local algorithm is a valuable analytical method for the precise characterization of multivariate CKD stage profiles and is as efficient as the classical global model using chi-squared variable selection, with approximately 70% correct classification. The resulting predictive models predominantly identify urinary metabolites (such as 3-hydroxyisovalerate, carnitine, citrate, dimethylsulfone, creatinine and N-methylnicotinamide) as relevant variables, indicating that CKD significantly affects the urinary metabolome. In addition, knowledge of the urinary metabolite concentrations alone classifies the CKD stage of the patients correctly. PMID:27861591
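
    A minimal sketch of the second stage described above: once rules have been mined, each rule becomes a binary feature (1 = the rule fires for that patient) and an L2-penalised logistic regression is fitted on those features. The rule matrix and labels below are random placeholders, not the cohort data.

        # Hedged sketch: L2-penalised logistic regression on binary rule features
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(6)
        n_patients, n_rules = 110, 40
        R = rng.integers(0, 2, (n_patients, n_rules))   # rule-firing indicator matrix (toy)
        y = rng.integers(0, 2, n_patients)              # 0 = no renal failure, 1 = renal failure (toy)

        clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)
        print(cross_val_score(clf, R, y, cv=5, scoring="accuracy").mean())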

  6. Design of experiments for identification of complex biochemical systems with applications to mitochondrial bioenergetics.

    PubMed

    Vinnakota, Kalyan C; Beard, Daniel A; Dash, Ranjan K

    2009-01-01

    Identification of a complex biochemical system model requires appropriate experimental data. Models constructed on the basis of data from the literature often contain parameters that are not identifiable with high sensitivity and therefore require additional experimental data to identify those parameters. Here we report the application of a local sensitivity analysis to design experiments that will improve the identifiability of previously unidentifiable model parameters in a model of mitochondrial oxidative phosphorylation and the tricarboxylic acid cycle. Experiments were designed based on measurable biochemical reactants in a dilute suspension of purified cardiac mitochondria with experimentally feasible perturbations to this system. Experimental perturbations and variables yielding the largest number of parameters above a 5% sensitivity level are presented and discussed.
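
    A minimal sketch of local sensitivity analysis by central finite differences, the kind of calculation used to rank which observables make poorly identified parameters visible; the two-parameter model below is a toy stand-in, not the mitochondrial model.

        # Hedged sketch: normalised local sensitivities via central finite differences
        import numpy as np

        def model(params, t):
            """Toy observable (e.g. a metabolite concentration) as a function of time."""
            k1, k2 = params
            return k1 * np.exp(-k2 * t)

        t = np.linspace(0, 10, 50)
        p0 = np.array([2.0, 0.3])

        def sensitivity(i, rel_step=0.05):
            dp = np.zeros_like(p0)
            dp[i] = rel_step * p0[i]
            # relative sensitivity of the output trajectory to parameter i
            return (model(p0 + dp, t) - model(p0 - dp, t)) / (2 * dp[i]) * p0[i]

        S = np.array([sensitivity(i) for i in range(len(p0))])
        print(np.abs(S).mean(axis=1))      # average sensitivity magnitude per parameter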

  7. Variability And Uncertainty Analysis Of Contaminant Transport Model Using Fuzzy Latin Hypercube Sampling Technique

    NASA Astrophysics Data System (ADS)

    Kumar, V.; Nayagum, D.; Thornton, S.; Banwart, S.; Schuhmacher, M.; Lerner, D.

    2006-12-01

    Characterization of uncertainty associated with groundwater quality models is often of critical importance, as for example in cases where environmental models are employed in risk assessment. Insufficient data, inherent variability and estimation errors of environmental model parameters introduce uncertainty into model predictions. However, uncertainty analysis using conventional methods such as standard Monte Carlo sampling (MCS) may not be efficient, or even suitable, for complex, computationally demanding models involving parametric variability and uncertainty of different natures. General MCS, or variants such as Latin Hypercube Sampling (LHS), treat variability and uncertainty as a single random entity, and the generated samples are treated as crisp, with vagueness assumed to be randomness. Also, when the models are used as purely predictive tools, uncertainty and variability lead to the need for assessment of the plausible range of model outputs. An improved systematic variability and uncertainty analysis can provide insight into the level of confidence in model estimates, and can aid in assessing how various possible model estimates should be weighed. The present study aims to introduce Fuzzy Latin Hypercube Sampling (FLHS), a hybrid approach incorporating cognitive and noncognitive uncertainties. Noncognitive uncertainty (e.g., physical randomness and statistical uncertainty due to limited information) can be described by its own probability density function (PDF), whereas cognitive uncertainty (e.g., estimation error) can be described by a membership function for its fuzziness and confidence intervals given by α-cuts. An important property of this theory is its ability to merge the inexact data generated by the LHS approach to increase the quality of information. The FLHS technique ensures that the entire range of each variable is sampled with proper incorporation of uncertainty and variability. A fuzzified statistical summary of the model results will produce indices of sensitivity and uncertainty that relate the effects of heterogeneity and uncertainty of input variables to model predictions. The feasibility of the method is demonstrated by assessing the propagation of parameter uncertainty in estimating the contamination level of a drinking water supply well due to transport of dissolved phenolics from a contaminated site in the UK.
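
    A minimal sketch of plain Latin Hypercube Sampling of two uncertain parameters and propagation through a toy model; the fuzzy extension (membership functions, α-cuts) described above is not reproduced, and the distributions and model are placeholders.

        # Hedged sketch: Latin Hypercube Sampling with scipy and propagation through a toy model
        import numpy as np
        from scipy.stats import qmc, norm

        sampler = qmc.LatinHypercube(d=2, seed=0)
        u = sampler.random(128)                            # stratified samples on the unit square

        # map to parameter distributions (illustrative): log-K ~ Normal, porosity ~ Uniform
        log_K = norm(loc=-4.0, scale=0.5).ppf(u[:, 0])
        porosity = qmc.scale(u[:, 1:2], [0.25], [0.40]).ravel()

        def toy_model(log_K, porosity):                    # placeholder for the transport model
            return np.exp(log_K) / porosity

        out = toy_model(log_K, porosity)
        print(out.mean(), out.std())                       # plausible range of model outputs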

  8. Creation of Synthetic Surface Temperature and Precipitation Ensembles Through A Computationally Efficient, Mixed Method Approach

    NASA Astrophysics Data System (ADS)

    Hartin, C.; Lynch, C.; Kravitz, B.; Link, R. P.; Bond-Lamberty, B. P.

    2017-12-01

    Typically, uncertainty quantification of internal variability relies on large ensembles of climate model runs under multiple forcing scenarios or perturbations in a parameter space. Computationally efficient, standard pattern scaling techniques only generate one realization and do not capture the complicated dynamics of the climate system (i.e., stochastic variations with a frequency-domain structure). In this study, we generate large ensembles of climate data with spatially and temporally coherent variability across a subselection of Coupled Model Intercomparison Project Phase 5 (CMIP5) models. First, for each CMIP5 model we apply a pattern emulation approach to derive the model response to external forcing. We take all the spatial and temporal variability that isn't explained by the emulator and decompose it into non-physically based structures through use of empirical orthogonal functions (EOFs). Then, we perform a Fourier decomposition of the EOF projection coefficients to capture the input fields' temporal autocorrelation so that our new emulated patterns reproduce the proper timescales of climate response and "memory" in the climate system. Through this 3-step process, we derive computationally efficient climate projections consistent with CMIP5 model trends and modes of variability, which address a number of deficiencies inherent in the ability of pattern scaling to reproduce complex climate model behavior.
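
    A rough sketch of the general idea: decompose the residual variability into EOFs, then randomise the Fourier phases of each principal-component series to generate new realisations with the same spatial patterns and power spectra. This is an illustration of the approach under our own simplifying assumptions, not the authors' exact implementation, and the residual field is synthetic.

        # Hedged sketch: EOF decomposition plus Fourier phase randomisation
        import numpy as np

        rng = np.random.default_rng(7)
        resid = rng.standard_normal((240, 500))        # (months, grid cells) residual field (toy)

        U, s, Vh = np.linalg.svd(resid, full_matrices=False)
        k = 20
        pcs, patterns = U[:, :k] * s[:k], Vh[:k]

        def surrogate(pc, rng):
            """Randomise Fourier phases of one PC series, keeping its amplitude spectrum."""
            spec = np.fft.rfft(pc)
            phases = rng.uniform(0, 2 * np.pi, spec.size)
            phases[0] = 0.0                            # keep the mean untouched
            return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=pc.size)

        new_pcs = np.column_stack([surrogate(pcs[:, i], rng) for i in range(k)])
        new_field = new_pcs @ patterns                 # one synthetic realisation of variability
        print(new_field.shape)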

  9. Newtonian Nudging For A Richards Equation-based Distributed Hydrological Model

    NASA Astrophysics Data System (ADS)

    Paniconi, C.; Marrocu, M.; Putti, M.; Verbunt, M.

    In this study a relatively simple data assimilation method has been implemented in a relatively complex hydrological model. The data assimilation technique is Newtonian relaxation or nudging, in which model variables are driven towards observations by a forcing term added to the model equations. The forcing term is proportional to the difference between simulation and observation (relaxation component) and contains four-dimensional weighting functions that can incorporate prior knowledge about the spatial and temporal variability and characteristic scales of the state variable(s) being assimilated. The numerical model couples a three-dimensional finite element Richards equation solver for variably saturated porous media and a finite difference diffusion wave approximation based on digital elevation data for surface water dynamics. We describe the implementation of the data assimilation algorithm for the coupled model and report on the numerical and hydrological performance of the resulting assimilation scheme. Nudging is shown to be successful in improving the hydrological simulation results, and it introduces little computational cost, in terms of CPU and other numerical aspects of the model's behavior, in some cases even improving numerical performance compared to model runs without nudging. We also examine the sensitivity of the model to nudging term parameters including the spatio-temporal influence coefficients in the weighting functions. Overall the nudging algorithm is quite flexible, for instance in dealing with concurrent observation datasets, gridded or scattered data, and different state variables, and the implementation presented here can be readily extended to any features not already incorporated. Moreover the nudging code and tests can serve as a basis for implementation of more sophisticated data assimilation techniques in a Richards equation-based hydrological model.
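
    A minimal sketch of the nudging idea on a toy scalar equation: the model state is driven toward sparse observations by a forcing term G * w(t) * (obs - x) with a simple temporal weight. The dynamics, weights and parameter values are placeholders; the full 3-D Richards-equation setting is not reproduced.

        # Hedged sketch: Newtonian relaxation (nudging) on a toy state equation
        import numpy as np

        def f(x):                                   # toy model dynamics (placeholder physics)
            return -0.5 * (x - 1.0)

        obs_times = {200: 2.0, 600: 1.5}            # time-step index -> observed value
        G, half_width = 2.0, 50                     # nudging strength and temporal influence radius

        dt, n = 0.01, 1000
        x = 0.0
        xs = np.zeros(n)
        for t in range(n):
            nudge = 0.0
            for t_obs, val in obs_times.items():
                w = max(0.0, 1.0 - abs(t - t_obs) / half_width)   # simple temporal weight
                nudge += G * w * (val - x)
            x = x + dt * (f(x) + nudge)             # forward Euler step with relaxation term
            xs[t] = x
        print(xs[195:205].round(3))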

  10. Aspects of Complexity in Sleep Analysis

    NASA Astrophysics Data System (ADS)

    Leitão, José M. N.; Da Rosa, Agostinho C.

    The paper presents a selection of sleep analysis problems in which some aspects and concepts of complexity come into play. Emphasis is given to the electroencephalogram (EEG) as the most important sleep-related variable. The conception of the EEG as a message to be deciphered stresses the importance of communication and information theories in this field. An optimal detector of K complexes and vertex sharp waves based on a stochastic model of sleep EEG is considered. Besides detecting, the algorithm is also able to follow the evolution of the basic ongoing activity. It is shown that both the macrostructure and microstructure of sleep can be described in terms of symbols and interpreted as sentences of a language. Syntactic models and Markov chain representations play an important role in this context.

  11. Impact of earthquake source complexity and land elevation data resolution on tsunami hazard assessment and fatality estimation

    NASA Astrophysics Data System (ADS)

    Muhammad, Ario; Goda, Katsuichiro

    2018-03-01

    This study investigates the impact of model complexity in source characterization and digital elevation model (DEM) resolution on the accuracy of tsunami hazard assessment and fatality estimation through a case study in Padang, Indonesia. Two types of earthquake source models, i.e. complex and uniform slip models, are adopted by considering three resolutions of DEMs, i.e. 150 m, 50 m, and 10 m. For each of the three grid resolutions, 300 complex source models are generated using new statistical prediction models of earthquake source parameters developed from extensive finite-fault models of past subduction earthquakes, whilst 100 uniform slip models are constructed with variable fault geometry without slip heterogeneity. The results highlight that significant changes to tsunami hazard and fatality estimates are observed with regard to earthquake source complexity and grid resolution. Coarse resolution (i.e. 150 m) leads to inaccurate tsunami hazard prediction and fatality estimation, whilst 50-m and 10-m resolutions produce similar results. However, velocity and momentum flux are sensitive to the grid resolution and hence, at least 10-m grid resolution needs to be implemented when considering flow-based parameters for tsunami hazard and risk assessments. In addition, the results indicate that the tsunami hazard parameters and fatality number are more sensitive to the complexity of earthquake source characterization than the grid resolution. Thus, the uniform models are not recommended for probabilistic tsunami hazard and risk assessments. Finally, the findings confirm that uncertainties of tsunami hazard level and fatality in terms of depth, velocity and momentum flux can be captured and visualized through the complex source modeling approach. From tsunami risk management perspectives, this indeed creates big data, which are useful for making effective and robust decisions.

  12. Distinct promoter activation mechanisms modulate noise-driven HIV gene expression

    NASA Astrophysics Data System (ADS)

    Chavali, Arvind K.; Wong, Victor C.; Miller-Jensen, Kathryn

    2015-12-01

    Latent human immunodeficiency virus (HIV) infections occur when the virus occupies a transcriptionally silent but reversible state, presenting a major obstacle to cure. There is experimental evidence that random fluctuations in gene expression, when coupled to the strong positive feedback encoded by the HIV genetic circuit, act as a ‘molecular switch’ controlling cell fate, i.e., viral replication versus latency. Here, we implemented a stochastic computational modeling approach to explore how different promoter activation mechanisms in the presence of positive feedback would affect noise-driven activation from latency. We modeled the HIV promoter as existing in one, two, or three states that are representative of increasingly complex mechanisms of promoter repression underlying latency. We demonstrate that two-state and three-state models are associated with greater variability in noisy activation behaviors, and we find that Fano factor (defined as variance over mean) proves to be a useful noise metric to compare variability across model structures and parameter values. Finally, we show how three-state promoter models can be used to qualitatively describe complex reactivation phenotypes in response to therapeutic perturbations that we observe experimentally. Ultimately, our analysis suggests that multi-state models more accurately reflect observed heterogeneous reactivation and may be better suited to evaluate how noise affects viral clearance.
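
    For intuition about the noise metric used here, a minimal Gillespie simulation of a two-state (telegraph) promoter and the Fano factor (variance over mean) of the resulting transcript counts; the rate constants are illustrative, not fitted HIV parameters, and the positive-feedback circuit is omitted.

        # Hedged sketch: Gillespie simulation of a telegraph promoter and its Fano factor
        import numpy as np

        rng = np.random.default_rng(8)
        k_on, k_off, k_tx, k_deg = 0.1, 0.9, 20.0, 1.0

        def simulate(t_end=200.0):
            t, on, m = 0.0, 0, 0
            while t < t_end:
                rates = np.array([k_on * (1 - on), k_off * on, k_tx * on, k_deg * m])
                total = rates.sum()
                t += rng.exponential(1.0 / total)          # time to next reaction
                r = rng.choice(4, p=rates / total)          # which reaction fires
                if r == 0: on = 1
                elif r == 1: on = 0
                elif r == 2: m += 1
                else: m -= 1
            return m

        counts = np.array([simulate() for _ in range(300)])
        fano = counts.var() / counts.mean()
        print(counts.mean(), fano)      # Fano > 1 signals bursty, promoter-state-driven noise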

  13. Neural Network Machine Learning and Dimension Reduction for Data Visualization

    NASA Technical Reports Server (NTRS)

    Liles, Charles A.

    2014-01-01

    Neural network machine learning in computer science is a continuously developing field of study. Although neural network models have been developed that can accurately predict a numeric value or nominal classification, a general-purpose method for constructing neural network architectures has yet to be developed. Computer scientists are often forced to rely on a trial-and-error process of developing and improving accurate neural network models. In many cases, models are constructed from a large number of input parameters. Determining which input parameters have the greatest impact on the model's prediction is often difficult, especially when the number of input variables is very high. This challenge is often labeled the "curse of dimensionality" in scientific fields. However, techniques exist for reducing the dimensionality of problems to just two dimensions. Once a problem's dimensions have been mapped to two dimensions, it can be easily plotted and understood by humans. The ability to visualize a multi-dimensional dataset can provide a means of identifying which input variables have the greatest effect on determining a nominal or numeric output. Identifying these variables can provide a better means of training neural network models; models can be more easily and quickly trained using only the input variables that appear to affect the outcome variable. The purpose of this project is to explore various means of training neural networks and to utilize dimension reduction for visualizing and understanding complex datasets.
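
    A minimal sketch of mapping a high-dimensional dataset to two dimensions for visual inspection; PCA and t-SNE are used here as generic examples of such dimension-reduction techniques, chosen by us for illustration rather than named in the record, and the data are synthetic.

        # Hedged sketch: reducing a many-input dataset to two dimensions for plotting
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.manifold import TSNE

        rng = np.random.default_rng(11)
        X = rng.standard_normal((300, 40))             # 40 input parameters (placeholder data)
        y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # nominal outcome driven by a few inputs

        xy_pca = PCA(n_components=2).fit_transform(X)
        xy_tsne = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)
        print(xy_pca.shape, xy_tsne.shape)             # both (300, 2), ready for 2-D plotting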

  14. Modeling the cardiovascular system using a nonlinear additive autoregressive model with exogenous input

    NASA Astrophysics Data System (ADS)

    Riedl, M.; Suhrbier, A.; Malberg, H.; Penzel, T.; Bretthauer, G.; Kurths, J.; Wessel, N.

    2008-07-01

    The parameters of heart rate variability and blood pressure variability have proved to be useful analytical tools in cardiovascular physics and medicine. Model-based analysis of these variabilities additionally leads to new prognostic information about mechanisms behind regulations in the cardiovascular system. In this paper, we analyze the complex interaction between heart rate, systolic blood pressure, and respiration by nonparametric fitted nonlinear additive autoregressive models with external inputs. Therefore, we consider measurements of healthy persons and patients suffering from obstructive sleep apnea syndrome (OSAS), with and without hypertension. It is shown that the proposed nonlinear models are capable of describing short-term fluctuations in heart rate as well as systolic blood pressure significantly better than similar linear ones, which confirms the assumption of nonlinear controlled heart rate and blood pressure. Furthermore, the comparison of the nonlinear and linear approaches reveals that the heart rate and blood pressure variability in healthy subjects is caused by a higher level of noise as well as nonlinearity than in patients suffering from OSAS. The residue analysis points at a further source of heart rate and blood pressure variability in healthy subjects, in addition to heart rate, systolic blood pressure, and respiration. Comparison of the nonlinear models within and among the different groups of subjects suggests the ability to discriminate the cohorts that could lead to a stratification of hypertension risk in OSAS patients.
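
    A nonlinear additive autoregressive structure with exogenous inputs can be sketched with spline basis expansions whose contributions are summed by a linear fit. The example below uses synthetic beat-to-beat series and scikit-learn's SplineTransformer (available from version 1.0); the signal construction and all parameters are assumptions for illustration and are not the fitting procedure used in the study.

    ```python
    import numpy as np
    from sklearn.compose import ColumnTransformer
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import SplineTransformer

    # Hypothetical beat-to-beat series (replace with measured data)
    rng = np.random.default_rng(1)
    n = 1000
    resp = np.sin(np.linspace(0, 60, n)) + 0.1 * rng.standard_normal(n)
    sbp = 120 + 5 * resp + rng.standard_normal(n)
    hr = 60 + 3 * np.tanh(resp) + 0.02 * (sbp - 120) ** 2 + rng.standard_normal(n)

    # Design matrix: lagged heart rate (AR part) plus exogenous inputs
    X = np.column_stack([hr[:-1], sbp[:-1], resp[:-1]])
    y = hr[1:]

    # Additive nonlinearity: each predictor gets its own spline expansion,
    # and the expanded terms are summed by an ordinary linear regression.
    splines = ColumnTransformer(
        [(f"s{i}", SplineTransformer(n_knots=6, degree=3), [i]) for i in range(3)]
    )
    model = make_pipeline(splines, LinearRegression()).fit(X, y)
    print("in-sample R^2:", round(model.score(X, y), 3))
    ```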

  15. Project EASE: a study to test a psychosocial model of epilepsy medication management.

    PubMed

    DiIorio, Colleen; Shafer, Patricia Osborne; Letz, Richard; Henry, Thomas R; Schomer, Donald L; Yeager, Kate

    2004-12-01

    The purpose of this study was to test a psychosocial model of medication self-management among people with epilepsy. This model was based primarily on social cognitive theory and included personal (self-efficacy, outcome expectations, goals, stigma, and depressive symptoms), social (social support), and provider (patient satisfaction and desire for control) variables. Participants for the study were enrolled at research sites in Atlanta, Georgia, and Boston, Massachusetts, and completed computer-based assessments that included measures of the study variables listed above. The mean age of the 317 participants was 43.3 years; about 50% were female, and 81% white. Self-efficacy and patient satisfaction explained the most variance in medication management. Social support was related to self-efficacy; stigma to self-efficacy and depressive symptoms; and self-efficacy to outcome expectations and depressive symptoms. Findings reinforce that medication-taking behavior is affected by a complex set of interactions among psychosocial variables.

  16. EUV local CDU healing performance and modeling capability towards 5nm node

    NASA Astrophysics Data System (ADS)

    Jee, Tae Kwon; Timoshkov, Vadim; Choi, Peter; Rio, David; Tsai, Yu-Cheng; Yaegashi, Hidetami; Koike, Kyohei; Fonseca, Carlos; Schoofs, Stijn

    2017-10-01

    Both local variability and optical proximity correction (OPC) errors are major contributors to the edge placement error (EPE) budget, which is closely related to the device yield. The post-litho contact hole healing will be demonstrated to meet after-etch local variability specifications using a low dose, 30mJ/cm2 dose-to-size, positive tone developed (PTD) resist with relevant throughput in high volume manufacturing (HVM). The total local variability of the node 5nm (N5) contact holes will be characterized in terms of local CD uniformity (LCDU), local placement error (LPE), and contact edge roughness (CER) using a statistical methodology. The CD healing process has complex etch proximity effects, so it is challenging for the OPC prediction accuracy to meet EPE requirements for the N5. Thus, the prediction accuracy of an after-etch model will be investigated and discussed using an ASML Tachyon OPC model.

  17. Propagation of uncertainty in nasal spray in vitro performance models using Monte Carlo simulation: Part II. Error propagation during product performance modeling.

    PubMed

    Guo, Changning; Doub, William H; Kauffman, John F

    2010-08-01

    Monte Carlo simulations were applied to investigate the propagation of uncertainty in both input variables and response measurements on model prediction for nasal spray product performance design of experiment (DOE) models in the first part of this study, with an initial assumption that the models perfectly represent the relationship between input variables and the measured responses. In this article, we discard the initial assumption and extend the Monte Carlo simulation study to examine the influence of both input variable variation and product performance measurement variation on the uncertainty in DOE model coefficients. The Monte Carlo simulations presented in this article illustrate the importance of careful error propagation during product performance modeling. Our results show that the error estimates based on Monte Carlo simulation result in smaller model coefficient standard deviations than those from regression methods. This suggests that the estimated standard deviations from regression may overestimate the uncertainties in the model coefficients. Monte Carlo simulations provide a simple software solution to understand the propagation of uncertainty in complex DOE models so that design space can be specified with statistically meaningful confidence levels. (c) 2010 Wiley-Liss, Inc. and the American Pharmacists Association
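
    The general idea of propagating both input-setting and response-measurement noise into regression coefficients can be sketched as below. The design matrix, true coefficients, and noise levels are hypothetical stand-ins, not the nasal spray DOE from the study; the point is only the mechanics of repeated perturbation and refitting.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical two-factor DOE with a first-order model y = b0 + b1*x1 + b2*x2
    X_design = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1], [0, 0], [0, 0]], float)
    beta_true = np.array([10.0, 2.0, -1.5])
    sigma_x, sigma_y = 0.05, 0.3   # assumed input-setting and response measurement noise

    def fit(X, y):
        A = np.column_stack([np.ones(len(X)), X])   # add intercept column
        return np.linalg.lstsq(A, y, rcond=None)[0]

    coefs = []
    for _ in range(5000):
        X_real = X_design + rng.normal(0, sigma_x, X_design.shape)  # input variation
        y_true = np.column_stack([np.ones(len(X_real)), X_real]) @ beta_true
        y_meas = y_true + rng.normal(0, sigma_y, len(y_true))       # response variation
        coefs.append(fit(X_design, y_meas))   # analyst fits using the *nominal* settings

    coefs = np.array(coefs)
    print("Monte Carlo coefficient standard deviations:", coefs.std(axis=0).round(3))
    ```

    Comparing these Monte Carlo spreads with the standard errors reported by an ordinary regression fit is the kind of contrast the abstract draws.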

  18. A multiple-time-scale turbulence model based on variable partitioning of turbulent kinetic energy spectrum

    NASA Technical Reports Server (NTRS)

    Kim, S.-W.; Chen, C.-P.

    1987-01-01

    A multiple-time-scale turbulence model of a single point closure and a simplified split-spectrum method is presented. In the model, the effect of the ratio of the production rate to the dissipation rate on eddy viscosity is modeled by use of the multiple-time-scales and a variable partitioning of the turbulent kinetic energy spectrum. The concept of a variable partitioning of the turbulent kinetic energy spectrum and the rest of the model details are based on the previously reported algebraic stress turbulence model. Example problems considered include: a fully developed channel flow, a plane jet exhausting into a moving stream, a wall jet flow, and a weakly coupled wake-boundary layer interaction flow. The computational results compared favorably with those obtained by using the algebraic stress turbulence model as well as experimental data. The present turbulence model, as well as the algebraic stress turbulence model, yielded significantly improved computational results for the complex turbulent boundary layer flows, such as the wall jet flow and the wake boundary layer interaction flow, compared with available computational results obtained by using the standard kappa-epsilon turbulence model.

  19. A multiple-time-scale turbulence model based on variable partitioning of the turbulent kinetic energy spectrum

    NASA Technical Reports Server (NTRS)

    Kim, S.-W.; Chen, C.-P.

    1989-01-01

    A multiple-time-scale turbulence model of a single point closure and a simplified split-spectrum method is presented. In the model, the effect of the ratio of the production rate to the dissipation rate on eddy viscosity is modeled by use of the multiple-time-scales and a variable partitioning of the turbulent kinetic energy spectrum. The concept of a variable partitioning of the turbulent kinetic energy spectrum and the rest of the model details are based on the previously reported algebraic stress turbulence model. Example problems considered include: a fully developed channel flow, a plane jet exhausting into a moving stream, a wall jet flow, and a weakly coupled wake-boundary layer interaction flow. The computational results compared favorably with those obtained by using the algebraic stress turbulence model as well as experimental data. The present turbulence model, as well as the algebraic stress turbulence model, yielded significantly improved computational results for the complex turbulent boundary layer flows, such as the wall jet flow and the wake boundary layer interaction flow, compared with available computational results obtained by using the standard kappa-epsilon turbulence model.

  20. Learning from catchments to understand hydrological drought (HS Division Outstanding ECS Award Lecture)

    NASA Astrophysics Data System (ADS)

    Van Loon, Anne

    2017-04-01

    Drought is a global challenge. To be able to manage drought effectively on global or national scales without losing smaller scale variability and local context, we need to understand what the important hydrological drought processes are at different scales. Global scale models and satellite data are providing a global overview, and catchment scale studies provide detailed site-specific information. I am interested in bridging these two scale levels by learning from catchments from around the world. Much information from local case studies is currently underused on larger scales because there is too much complexity. However, some of this complexity might be crucial on the level where people are facing the consequences of drought. In this talk, I will take you on a journey around the world to unlock catchment scale information and see if the comparison of many catchments gives us additional understanding of hydrological drought processes on the global scale. I will focus on the role of storage in different compartments of the terrestrial hydrological cycle, and how we as humans interact with that storage. I will discuss aspects of spatial and temporal variability in storage that are crucial for hydrological drought development and persistence, drawing from examples of catchments with storage in groundwater, lakes and wetlands, and snow and ice. The added complexity of human activities shifts the focus from natural catchments to catchments with anthropogenic increases in storage (reservoirs), decreases in storage (groundwater abstraction), and changes in hydrological processes (urbanisation). We learn how local information is providing valuable insights, in some cases challenging theoretical understanding or model outcomes. Despite the challenges of working across countries, with a high number of collaborators, in a multitude of languages, under data-scarce conditions, the scientific advantages of bridging scales are substantial. The comparison of catchments around the world can inform global scale models, give the needed spatial variability to satellite data, and help us make steps in understanding and managing the complex challenge of drought, now and in the future.

  1. Coupled effects of vertical mixing and benthic grazing on phytoplankton populations in shallow, turbid estuaries

    USGS Publications Warehouse

    Koseff, Jeffrey R.; Holen, Jacqueline K.; Monismith, Stephen G.; Cloern, James E.

    1993-01-01

    Coastal ocean waters tend to have very different patterns of phytoplankton biomass variability from the open ocean, and the connections between physical variability and phytoplankton bloom dynamics are less well established for these shallow systems. Prediction of biological responses to physical variability in these environments is inherently difficult because the recurrent seasonal patterns of mixing are complicated by aperiodic fluctuations in river discharge and the high-frequency components of tidal variability. We might expect, then, less predictable and more complex bloom dynamics in these shallow coastal systems compared with the open ocean. Given this complex and dynamic physical environment, can we develop a quantitative framework to define the physical regimes necessary for bloom inception, and can we identify the important mechanisms of physical-biological coupling that lead to the initiation and termination of blooms in estuaries and shallow coastal waters? Numerical modeling provides one approach to address these questions. Here we present results of simulation experiments with a refined version of Cloern's (1991) model in which mixing processes are treated more realistically to reflect the dynamic nature of turbulence generation in estuaries. We investigated several simple models for the turbulent mixing coefficient. We found that the addition of diurnal tidal variation to Cloern's model greatly reduces biomass growth, indicating that variations of mixing on the time scale of hours are crucial. Furthermore, we found that for conditions representative of South San Francisco Bay, numerical simulations only allowed for bloom development when the water column was stratified and when minimal mixing was prescribed in the upper layer. Stratification itself, however, is not sufficient to ensure that a bloom will develop: minimal wind stirring is a further prerequisite to bloom development in shallow turbid estuaries with abundant populations of benthic suspension feeders.

  2. Predicting radiotherapy outcomes using statistical learning techniques

    NASA Astrophysics Data System (ADS)

    El Naqa, Issam; Bradley, Jeffrey D.; Lindsay, Patricia E.; Hope, Andrew J.; Deasy, Joseph O.

    2009-09-01

    Radiotherapy outcomes are determined by complex interactions between treatment, anatomical and patient-related variables. A common obstacle to building maximally predictive outcome models for clinical practice is the failure to capture potential complexity of heterogeneous variable interactions and applicability beyond institutional data. We describe a statistical learning methodology that can automatically screen for nonlinear relations among prognostic variables and generalize to previously unseen data. In this work, several types of linear and nonlinear kernels to generate interaction terms and approximate the treatment-response function are evaluated. Examples of institutional datasets of esophagitis, pneumonitis and xerostomia endpoints were used. Furthermore, an independent RTOG dataset was used for 'generalizability' validation. We formulated the discrimination between risk groups as a supervised learning problem. The distribution of patient groups was initially analyzed using principal components analysis (PCA) to uncover potential nonlinear behavior. The performance of the different methods was evaluated using bivariate correlations and actuarial analysis. Over-fitting was controlled via cross-validation resampling. Our results suggest that a modified support vector machine (SVM) kernel method provided superior performance on leave-one-out testing compared to logistic regression and neural networks in cases where the data exhibited nonlinear behavior on PCA. For instance, in prediction of esophagitis and pneumonitis endpoints, which exhibited nonlinear behavior on PCA, the method provided 21% and 60% improvements, respectively. Furthermore, evaluation on the independent pneumonitis RTOG dataset demonstrated good generalizability beyond institutional data in contrast with other models. This indicates that the prediction of treatment response can be improved by utilizing nonlinear kernel methods for discovering important nonlinear interactions among model variables. These models have the capacity to predict on unseen data. Part of this work was first presented at the Seventh International Conference on Machine Learning and Applications, San Diego, CA, USA, 11-13 December 2008.
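
    The comparison of a nonlinear-kernel SVM against logistic regression under leave-one-out testing can be sketched as follows. The clinical datasets are not public, so a synthetic nonlinear classification problem stands in for the dose-volume and clinical predictors; the kernel and hyperparameters are illustrative assumptions, not the study's modified kernel.

    ```python
    from sklearn.datasets import make_moons
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Synthetic stand-in for predictors with a nonlinear relation to a binary endpoint
    X, y = make_moons(n_samples=120, noise=0.3, random_state=0)

    loo = LeaveOneOut()
    for name, clf in [("logistic regression", LogisticRegression(max_iter=1000)),
                      ("RBF-kernel SVM", SVC(kernel="rbf", C=1.0, gamma="scale"))]:
        model = make_pipeline(StandardScaler(), clf)
        acc = cross_val_score(model, X, y, cv=loo).mean()
        print(f"{name}: leave-one-out accuracy = {acc:.2f}")
    ```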

  3. Prediction of moisture variation during composting process: A comparison of mathematical models.

    PubMed

    Wang, Yongjiang; Ai, Ping; Cao, Hongliang; Liu, Zhigang

    2015-10-01

    This study was carried out to develop and compare three models for simulating the moisture content during composting. Model 1 described changes in water content using mass balance, while Model 2 introduced a liquid-gas transferred water term. Model 3 predicted changes in moisture content without complex degradation kinetics. Average deviations for Models 1-3 were 8.909, 7.422 and 5.374 kg m(-3), while standard deviations were 10.299, 8.374 and 6.095, respectively. The results showed that Model 1 is complex and involves more state variables, but can be used to reveal the effect of humidity on moisture content. Model 2 tested the hypothesis of liquid-gas transfer and was shown to be capable of predicting moisture content during composting. Model 3 could predict water content well without considering degradation kinetics. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Application of geologic-mathematical 3D modeling for complex structure deposits by the example of Lower- Cretaceous period depositions in Western Ust - Balykh oil field (Khanty-Mansiysk Autonomous District)

    NASA Astrophysics Data System (ADS)

    Perevertailo, T.; Nedolivko, N.; Prisyazhnyuk, O.; Dolgaya, T.

    2015-11-01

    The complex structure of the Lower-Cretaceous formation by the example of the reservoir BC101 in Western Ust - Balykh Oil Field (Khanty-Mansiysk Autonomous District) has been studied. Reservoir range relationships have been identified. A 3D geologic-mathematical modeling technique considering the heterogeneity and variability of a natural reservoir structure has been suggested. To improve the integrity of the deposit geological structure, methods of mathematical statistics were applied, which, in turn, made it possible to obtain equal probability models with similar input data and to consider the formation conditions of reservoir rocks and cap rocks.

  5. An Algorithm for Integrated Subsystem Embodiment and System Synthesis

    NASA Technical Reports Server (NTRS)

    Lewis, Kemper

    1997-01-01

    Consider the statement, 'A system has two coupled subsystems, one of which dominates the design process. Each subsystem consists of discrete and continuous variables, and is solved using sequential analysis and solution.' To address this type of statement in the design of complex systems, three steps are required, namely, the embodiment of the statement in terms of entities on a computer, the mathematical formulation of subsystem models, and the resulting solution and system synthesis. In complex system decomposition, the subsystems are not isolated, self-supporting entities. Information such as constraints, goals, and design variables may be shared between entities. But many times in engineering problems, full communication and cooperation does not exist, information is incomplete, or one subsystem may dominate the design. Additionally, these engineering problems give rise to mathematical models involving nonlinear functions of both discrete and continuous design variables. In this dissertation an algorithm is developed to handle these types of scenarios for the domain-independent integration of subsystem embodiment, coordination, and system synthesis using constructs from Decision-Based Design, Game Theory, and Multidisciplinary Design Optimization. Implementation of the concept in this dissertation involves testing of the hypotheses using example problems and a motivating case study involving the design of a subsonic passenger aircraft.

  6. Importance of physical and hydraulic characteristics to unionid mussels: A retrospective analysis in a reach of large river

    USGS Publications Warehouse

    Zigler, S.J.; Newton, T.J.; Steuer, J.J.; Bartsch, M.R.; Sauer, J.S.

    2008-01-01

    Interest in understanding physical and hydraulic factors that might drive distribution and abundance of freshwater mussels has been increasing due to their decline throughout North America. We assessed whether the spatial distribution of unionid mussels could be predicted from physical and hydraulic variables in a reach of the Upper Mississippi River. Classification and regression tree (CART) models were constructed using mussel data compiled from various sources and explanatory variables derived from GIS coverages. Prediction success of CART models for presence-absence of mussels ranged from 71 to 76% across three gears (brail, sled-dredge, and dive-quadrat), and the models explained 51% of the deviance in abundance. Models were largely driven by shear stress and substrate stability variables, but interactions with simple physical variables, especially slope, were also important. Geospatial models, which were based on tree model results, predicted few mussels in poorly connected backwater areas (e.g., floodplain lakes) and the navigation channel, whereas main channel border areas with high geomorphic complexity (e.g., river bends, islands, side channel entrances) and small side channels were typically favorable to mussels. Moreover, bootstrap aggregation of discharge-specific regression tree models of dive-quadrat data indicated that variables measured at low discharge were about 25% more predictive (PMSE = 14.8) than variables measured at median discharge (PMSE = 20.4), with high-discharge variables (PMSE = 17.1) intermediate. This result suggests that episodic events such as droughts and floods were important in structuring mussel distributions. Although the substantial mussel and ancillary data in our study reach are unusual, our approach to develop exploratory statistical and geospatial models should be useful even when data are more limited. © 2007 Springer Science+Business Media B.V.
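
    A classification tree for presence-absence can be sketched in a few lines. The explanatory variables, their ranges, and the rule generating "presence" below are hypothetical placeholders for the GIS-derived predictors in the study; the sketch only shows the CART mechanics of fitting, cross-validating, and printing the splits.

    ```python
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(3)

    # Hypothetical hydraulic/physical predictors (not the study's data)
    n = 400
    shear = rng.uniform(0, 10, n)        # shear stress
    stability = rng.uniform(0, 1, n)     # substrate stability index
    slope = rng.uniform(0, 0.02, n)      # bed slope
    present = ((shear < 4) & (stability > 0.4) | (slope < 0.005)).astype(int)

    X = np.column_stack([shear, stability, slope])
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, present)

    print("cross-validated accuracy:",
          round(cross_val_score(tree, X, present, cv=5).mean(), 2))
    print(export_text(tree, feature_names=["shear", "stability", "slope"]))
    ```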

  7. Associations between complex OHC mixtures and thyroid and cortisol hormone levels in East Greenland polar bears.

    PubMed

    Bechshøft, T Ø; Sonne, C; Dietz, R; Born, E W; Muir, D C G; Letcher, R J; Novak, M A; Henchey, E; Meyer, J S; Jenssen, B M; Villanger, G D

    2012-07-01

    The multivariate relationship between hair cortisol, whole blood thyroid hormones, and the complex mixtures of organohalogen contaminant (OHC) levels measured in subcutaneous adipose of 23 East Greenland polar bears (eight males and 15 females, all sampled between the years 1999 and 2001) was analyzed using projection to latent structure (PLS) regression modeling. In the resulting PLS model, the most important variables with a negative influence on cortisol levels were particularly BDE-99, but also CB-180, -201, BDE-153, and CB-170/190. The most important variables with a positive influence on cortisol were CB-66/95, α-HCH, TT3, as well as heptachlor epoxide, dieldrin, BDE-47, p,p'-DDD. Although statistical modeling does not necessarily fully explain biological cause-effect relationships, the observed relationships indicate that (1) the hypothalamic-pituitary-adrenal (HPA) axis in East Greenland polar bears is likely to be affected by OHC-contaminants and (2) the association between OHCs and cortisol may be linked with the hypothalamus-pituitary-thyroid (HPT) axis. Copyright © 2012 Elsevier Inc. All rights reserved.

  8. Dynamic Uncertain Causality Graph for Knowledge Representation and Reasoning: Utilization of Statistical Data and Domain Knowledge in Complex Cases.

    PubMed

    Zhang, Qin; Yao, Quanying

    2018-05-01

    The dynamic uncertain causality graph (DUCG) is a newly presented framework for uncertain causality representation and probabilistic reasoning. It has been successfully applied to online fault diagnoses of large, complex industrial systems, and disease diagnoses. This paper extends the DUCG to model more complex cases than could previously be modeled, e.g., the case in which statistical data are in different groups with or without overlap, and some domain knowledge and actions (new variables with uncertain causalities) are introduced. In other words, this paper proposes to use -mode, -mode, and -mode of the DUCG to model such complex cases and then transform them into either the standard -mode or the standard -mode. In the former situation, if no directed cyclic graph is involved, the transformed result is simply a Bayesian network (BN), and existing inference methods for BNs can be applied. In the latter situation, an inference method based on the DUCG is proposed. Examples are provided to illustrate the methodology.

  9. Cross-scale modeling of surface temperature and tree seedling establishment in mountain landscapes

    USGS Publications Warehouse

    Dingman, John; Sweet, Lynn C.; McCullough, Ian M.; Davis, Frank W.; Flint, Alan L.; Franklin, Janet; Flint, Lorraine E.

    2013-01-01

    Introduction: Estimating surface temperature from above-ground field measurements is important for understanding the complex landscape patterns of plant seedling survival and establishment, processes which occur at heights of only several centimeters. Currently, future climate models predict temperature at 2 m above ground, leaving ground-surface microclimate not well characterized. Methods: Using a network of field temperature sensors and climate models, a ground-surface temperature method was used to estimate microclimate variability of minimum and maximum temperature. Temperature lapse rates were derived from field temperature sensors and distributed across the landscape capturing differences in solar radiation and cold air drainages modeled at a 30-m spatial resolution. Results: The surface temperature estimation method used for this analysis successfully estimated minimum surface temperatures on north-facing, south-facing, valley, and ridgeline topographic settings, and when compared to measured temperatures yielded an R2 of 0.88, 0.80, 0.88, and 0.80, respectively. Maximum surface temperatures generally had slightly more spatial variability than minimum surface temperatures, resulting in R2 values of 0.86, 0.77, 0.72, and 0.79 for north-facing, south-facing, valley, and ridgeline topographic settings. Quasi-Poisson regressions predicting recruitment of Quercus kelloggii (black oak) seedlings from temperature variables were significantly improved using these estimates of surface temperature compared to air temperature modeled at 2 m. Conclusion: Predicting minimum and maximum ground-surface temperatures using a downscaled climate model coupled with temperature lapse rates estimated from field measurements provides a method for modeling temperature effects on plant recruitment. Such methods could be applied to improve projections of species’ range shifts under climate change. Areas of complex topography can provide intricate microclimates that may allow species to redistribute locally as climate changes.

  10. A multi-model approach to monitor emissions of CO2 and CO from an urban-industrial complex

    NASA Astrophysics Data System (ADS)

    Super, Ingrid; Denier van der Gon, Hugo A. C.; van der Molen, Michiel K.; Sterk, Hendrika A. M.; Hensen, Arjan; Peters, Wouter

    2017-11-01

    Monitoring urban-industrial emissions is often challenging because observations are scarce and regional atmospheric transport models are too coarse to represent the high spatiotemporal variability in the resulting concentrations. In this paper we apply a new combination of an Eulerian model (Weather Research and Forecast, WRF, with chemistry) and a Gaussian plume model (Operational Priority Substances - OPS). The modelled mixing ratios are compared to observed CO2 and CO mole fractions at four sites along a transect from an urban-industrial complex (Rotterdam, the Netherlands) towards rural conditions for October-December 2014. Urban plumes are well-mixed at our semi-urban location, making this location suited for an integrated emission estimate over the whole study area. The signals at our urban measurement site (with average enhancements of 11 ppm CO2 and 40 ppb CO over the baseline) are highly variable due to the presence of distinct source areas dominated by road traffic/residential heating emissions or industrial activities. This causes different emission signatures that are translated into a large variability in observed ΔCO : ΔCO2 ratios, which can be used to identify dominant source types. We find that WRF-Chem is able to represent synoptic variability in CO2 and CO (e.g. the median CO2 mixing ratio is 9.7 ppm, observed, against 8.8 ppm, modelled), but it fails to reproduce the hourly variability of daytime urban plumes at the urban site (R2 up to 0.05). For the urban site, adding a plume model to the model framework is beneficial to adequately represent plume transport especially from stack emissions. The explained variance in hourly, daytime CO2 enhancements from point source emissions increases from 30 % with WRF-Chem to 52 % with WRF-Chem in combination with the most detailed OPS simulation. The simulated variability in ΔCO :  ΔCO2 ratios decreases drastically from 1.5 to 0.6 ppb ppm-1, which agrees better with the observed standard deviation of 0.4 ppb ppm-1. This is partly due to improved wind fields (increase in R2 of 0.10) but also due to improved point source representation (increase in R2 of 0.05) and dilution (increase in R2 of 0.07). Based on our analysis we conclude that a plume model with detailed and accurate dispersion parameters adds substantially to top-down monitoring of greenhouse gas emissions in urban environments with large point source contributions within a ˜ 10 km radius from the observation sites.

  11. Techniques and resources for storm-scale numerical weather prediction

    NASA Technical Reports Server (NTRS)

    Droegemeier, Kelvin; Grell, Georg; Doyle, James; Soong, Su-Tzai; Skamarock, William; Bacon, David; Staniforth, Andrew; Crook, Andrew; Wilhelmson, Robert

    1993-01-01

    The topics discussed include the following: multiscale application of the 5th-generation PSU/NCAR mesoscale model, the coupling of nonhydrostatic atmospheric and hydrostatic ocean models for air-sea interaction studies; a numerical simulation of cloud formation over complex topography; adaptive grid simulations of convection; an unstructured grid, nonhydrostatic meso/cloud scale model; efficient mesoscale modeling for multiple scales using variable resolution; initialization of cloud-scale models with Doppler radar data; and making effective use of future computing architectures, networks, and visualization software.

  12. Complex earthquake rupture and local tsunamis

    USGS Publications Warehouse

    Geist, E.L.

    2002-01-01

    In contrast to far-field tsunami amplitudes that are fairly well predicted by the seismic moment of subduction zone earthquakes, there exists significant variation in the scaling of local tsunami amplitude with respect to seismic moment. From a global catalog of tsunami runup observations, this variability is greatest for the most frequently occurring tsunamigenic subduction zone earthquakes in the magnitude range of 7 < Mw < 8.5. Variability in local tsunami runup scaling can be ascribed to tsunami source parameters that are independent of seismic moment: variations in the water depth in the source region, the combination of higher slip and lower shear modulus at shallow depth, and rupture complexity in the form of heterogeneous slip distribution patterns. The focus of this study is on the effect that rupture complexity has on the local tsunami wave field. A wide range of slip distribution patterns are generated using a stochastic, self-affine source model that is consistent with the falloff of far-field seismic displacement spectra at high frequencies. The synthetic slip distributions generated by the stochastic source model are discretized and the vertical displacement fields from point source elastic dislocation expressions are superimposed to compute the coseismic vertical displacement field. For shallow subduction zone earthquakes it is demonstrated that self-affine irregularities of the slip distribution result in significant variations in local tsunami amplitude. The effects of rupture complexity are less pronounced for earthquakes at greater depth or along faults with steep dip angles. For a test region along the Pacific coast of central Mexico, peak nearshore tsunami amplitude is calculated for a large number (N = 100) of synthetic slip distribution patterns, all with identical seismic moment (Mw = 8.1). Analysis of the results indicates that for earthquakes of a fixed location, geometry, and seismic moment, peak nearshore tsunami amplitude can vary by a factor of 3 or more. These results indicate that there is substantially more variation in the local tsunami wave field derived from the inherent complexity of subduction zone earthquakes than predicted by a simple elastic dislocation model. Probabilistic methods that take into account variability in earthquake rupture processes are likely to yield more accurate assessments of tsunami hazards.
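
    The stochastic, self-affine slip idea can be sketched with spectral synthesis: draw random phases under a power-law amplitude spectrum, invert the FFT, and rescale so every realization has the same mean slip (hence the same moment for fixed area and rigidity). The one-dimensional along-strike profile, spectral decay, and target mean slip below are illustrative assumptions, not the study's two-dimensional source parameterization.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def self_affine_slip(n=256, length_km=200.0, decay=2.0, mean_slip=5.0):
        """One random along-strike slip profile with a k**(-decay) power spectrum,
        rescaled to a target mean slip."""
        k = np.fft.rfftfreq(n, d=length_km / n)
        amp = np.zeros_like(k)
        amp[1:] = k[1:] ** (-decay / 2.0)            # amplitude ~ sqrt(power)
        phase = rng.uniform(0, 2 * np.pi, k.size)    # random phases
        slip = np.fft.irfft(amp * np.exp(1j * phase), n)
        slip -= slip.min()                            # keep slip non-negative
        return slip * (mean_slip / slip.mean())

    # Ensemble of ruptures with identical mean slip but different heterogeneity
    ensemble = np.array([self_affine_slip() for _ in range(100)])
    peaks = ensemble.max(axis=1)
    print(f"peak slip across ensemble: {peaks.min():.1f} to {peaks.max():.1f} m")
    ```

    Feeding each realization through the same dislocation and tsunami propagation chain is what produces the spread in nearshore amplitude described above.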

  13. Sensitivity analysis of a sound absorption model with correlated inputs

    NASA Astrophysics Data System (ADS)

    Chai, W.; Christen, J.-L.; Zine, A.-M.; Ichchou, M.

    2017-04-01

    Sound absorption in porous media is a complex phenomenon, which is usually addressed with homogenized models, depending on macroscopic parameters. Since these parameters emerge from the structure at microscopic scale, they may be correlated. This paper deals with sensitivity analysis methods of a sound absorption model with correlated inputs. Specifically, the Johnson-Champoux-Allard model (JCA) is chosen as the objective model with correlation effects generated by a secondary micro-macro semi-empirical model. To deal with this case, a relatively new sensitivity analysis method, the Fourier Amplitude Sensitivity Test with Correlation design (FASTC), based on Iman's transform, is applied. This method requires a priori information such as variables' marginal distribution functions and their correlation matrix. The results are compared to the Correlation Ratio Method (CRM) for reference and validation. The distribution of the macroscopic variables arising from the microstructure, as well as their correlation matrix, are studied. Finally, the test results show that correlation has a very important impact on the results of sensitivity analysis. The effect of correlation strength among input variables on the sensitivity analysis is also assessed.
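
    A minimal sketch of sensitivity analysis under correlated inputs, in the spirit of the correlation-ratio reference method rather than FASTC itself: correlated samples are drawn through a Gaussian copula (Cholesky factor of the correlation matrix), pushed through a toy response, and first-order indices are estimated by binning. The input names, ranges, correlation matrix, and the placeholder response are all assumptions, not the JCA model or the study's values.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    # Assumed correlation structure among three hypothetical macroscopic inputs
    corr = np.array([[1.0, 0.6, 0.2],
                     [0.6, 1.0, 0.0],
                     [0.2, 0.0, 1.0]])
    L = np.linalg.cholesky(corr)
    z = rng.standard_normal((20000, 3)) @ L.T      # correlated standard normals

    # Map to illustrative physical ranges
    porosity    = 0.90 + 0.05 * z[:, 0]
    tortuosity  = 1.50 + 0.20 * z[:, 1]
    resistivity = 1e4 * np.exp(0.3 * z[:, 2])

    def toy_absorption(phi, alpha, sigma):
        # Placeholder response standing in for an absorption coefficient
        return phi / alpha * (1.0 - np.exp(-sigma / 2e4))

    y = toy_absorption(porosity, tortuosity, resistivity)

    # Correlation-ratio style first-order index: Var(E[Y|Xi]) / Var(Y),
    # estimated by binning each input into quantile classes.
    for name, x in [("porosity", porosity), ("tortuosity", tortuosity),
                    ("resistivity", resistivity)]:
        edges = np.quantile(x, np.linspace(0, 1, 21))
        idx = np.clip(np.digitize(x, edges) - 1, 0, 19)
        cond_means = np.array([y[idx == b].mean() for b in range(20)])
        counts = np.array([(idx == b).sum() for b in range(20)])
        s1 = np.average((cond_means - y.mean()) ** 2, weights=counts) / y.var()
        print(f"{name}: first-order index ~= {s1:.2f}")
    ```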

  14. Complexity, accuracy and practical applicability of different biogeochemical model versions

    NASA Astrophysics Data System (ADS)

    Los, F. J.; Blaas, M.

    2010-04-01

    The construction of validated biogeochemical model applications as prognostic tools for the marine environment involves a large number of choices, particularly with respect to the level of detail of the physical, chemical and biological aspects. Generally speaking, enhanced complexity might enhance veracity, accuracy and credibility. However, very complex models are not necessarily effective or efficient forecast tools. In this paper, models of varying degrees of complexity are evaluated with respect to their forecast skills. In total 11 biogeochemical model variants have been considered based on four different horizontal grids. The applications vary in spatial resolution, in vertical resolution (2DH versus 3D), in nature of transport, in turbidity and in the number of phytoplankton species. Included models range from 15 year old applications with relatively simple physics up to present state of the art 3D models. With all applications the same year, 2003, has been simulated. During the model intercomparison it has been noticed that the 'OSPAR' Goodness of Fit cost function (Villars and de Vries, 1998) leads to insufficient discrimination of different models. This results in models obtaining similar scores although closer inspection of the results reveals large differences. In this paper, therefore, we have adopted the target diagram by Jolliff et al. (2008) which provides a concise and more contrasting picture of model skill on the entire model domain and for the entire period of the simulations. Correctness in predicting the mean and the variability is assessed separately, which enhances insight into model functioning. Using the target diagrams it is demonstrated that recent models are more consistent and have smaller biases. Graphical inspection of time series confirms this, as the level of variability appears more realistic, also given the multi-annual background statistics of the observations. Nevertheless, whether the improvements are all genuine for the particular year cannot be judged due to the low sampling frequency of the traditional monitoring data at hand. Specifically, the overall results for chlorophyll-a are rather consistent throughout all models, but regionally recent models are better; resolution is crucial for the accuracy of transport and more important than the nature of the forcing of the transport; SPM strongly affects the biomass simulation and species composition, but even the most recent SPM results do not yet obtain a good overall score; coloured dissolved organic matter (CDOM) should be included in the calculation of the light regime; more complexity in the phytoplankton model improves the chlorophyll-a simulation, but the simulated species composition needs further improvement for some of the functional groups.

  15. A Parent-Child Interactional Model of Social Anxiety Disorder in Youth

    ERIC Educational Resources Information Center

    Ollendick, Thomas H.; Benoit, Kristy E.

    2012-01-01

    In this paper, one of the most common disorders of childhood and adolescence, social anxiety disorder (SAD), is examined to illustrate the complex and delicate interplay between parent and child factors that can result in normal development gone awry. Our parent-child model of SAD posits a host of variables that converge to occasion the onset and…

  16. A Path Analysis of Pre-Service Teachers' Attitudes to Computer Use: Applying and Extending the Technology Acceptance Model in an Educational Context

    ERIC Educational Resources Information Center

    Teo, Timothy

    2010-01-01

    The purpose of this study is to examine pre-service teachers' attitudes to computers. This study extends the technology acceptance model (TAM) framework by adding subjective norm, facilitating conditions, and technological complexity as external variables. Results show that the TAM and subjective norm, facilitating conditions, and technological…

  17. A Three-Step Approach To Model Tree Mortality in the State of Georgia

    Treesearch

    Qingmin Meng; Chris J. Cieszewski; Roger C. Lowe; Michal Zasada

    2005-01-01

    Tree mortality is one of the most complex phenomena of forest growth and yield. Many types of factors affect tree mortality, which is considered difficult to predict. This study presents a new systematic approach to simulate tree mortality based on the integration of statistical models and geographical information systems. This method begins with variable preselection...

  18. Modeling of Damage Initiation and Progression in a SiC/SiC Woven Ceramic Matrix Composite

    NASA Technical Reports Server (NTRS)

    Mital, Subodh K.; Goldberg, Robert K.; Bonacuse, Peter J.

    2012-01-01

    The goal of an ongoing project at NASA Glenn is to investigate the effects of the complex microstructure of a woven ceramic matrix composite and its variability on the effective properties and the durability of the material. Detailed analysis of these complex microstructures may provide clues for the material scientists who 'design the material' or to structural analysts and designers who 'design with the material' regarding damage initiation and damage propagation. A model material system, specifically a five-harness satin weave architecture CVI SiC/SiC composite composed of Sylramic-iBN fibers and a SiC matrix, has been analyzed. Specimens of the material were serially sectioned and polished to capture the detailed images of fiber tows, matrix and porosity. Open source analysis tools were used to isolate various constituents and finite element models were then generated from simplified models of those images. Detailed finite element analyses were performed that examine how the variability in the local microstructure affected the macroscopic behavior as well as the local damage initiation and progression. Results indicate that the locations where damage initiated and propagated are linked to specific microstructural features.

  19. Non-Linear Approach in Kinesiology Should Be Preferred to the Linear--A Case of Basketball.

    PubMed

    Trninić, Marko; Jeličić, Mario; Papić, Vladan

    2015-07-01

    In kinesiology, medicine, biology and psychology, in which the research focus is on dynamical self-organized systems, complex connections exist between variables. The non-linear nature of complex systems has been discussed and explained by the example of non-linear anthropometric predictors of performance in basketball. Previous studies interpreted relations between anthropometric features and measures of effectiveness in basketball by (a) using linear correlation models, and by (b) including all basketball athletes in the same sample of participants regardless of their playing position. In this paper the significance and character of linear and non-linear relations between simple anthropometric predictors (AP) and performance criteria consisting of situation-related measures of effectiveness (SE) in basketball were determined and evaluated. The sample of participants consisted of top-level junior basketball players divided into three groups according to their playing time (8 minutes and more per game) and playing position: guards (N = 42), forwards (N = 26) and centers (N = 40). Linear (general model) and non-linear (general model) regression models were calculated simultaneously and separately for each group. The conclusion is viable: non-linear regressions are frequently superior to linear correlations when interpreting actual association logic among research variables.

  20. SAINT: A combined simulation language for modeling man-machine systems

    NASA Technical Reports Server (NTRS)

    Seifert, D. J.

    1979-01-01

    SAINT (Systems Analysis of Integrated Networks of Tasks) is a network modeling and simulation technique for design and analysis of complex man machine systems. SAINT provides the conceptual framework for representing systems that consist of discrete task elements, continuous state variables, and interactions between them. It also provides a mechanism for combining human performance models and dynamic system behaviors in a single modeling structure. The SAINT technique is described and applications of the SAINT are discussed.

  1. Integrating an artificial intelligence approach with k-means clustering to model groundwater salinity: the case of Gaza coastal aquifer (Palestine)

    NASA Astrophysics Data System (ADS)

    Alagha, Jawad S.; Seyam, Mohammed; Md Said, Md Azlin; Mogheir, Yunes

    2017-12-01

    Artificial intelligence (AI) techniques have increasingly become efficient alternative modeling tools in the water resources field, particularly when the modeled process is influenced by complex and interrelated variables. In this study, two AI techniques—artificial neural networks (ANNs) and support vector machine (SVM)—were employed to achieve deeper understanding of the salinization process (represented by chloride concentration) in complex coastal aquifers influenced by various salinity sources. Both models were trained using 11 years of groundwater quality data from 22 municipal wells in Khan Younis Governorate, Gaza, Palestine. Both techniques showed satisfactory prediction performance, where the mean absolute percentage error (MAPE) and correlation coefficient (R) for the test data set were, respectively, about 4.5 and 99.8% for the ANNs model, and 4.6 and 99.7% for the SVM model. The performances of the developed models were further noticeably improved through preprocessing the wells data set using a k-means clustering method, then conducting AI techniques separately for each cluster. The developed models with clustered data were associated with higher performance, greater ease of use and simplicity. They can be employed as an analytical tool to investigate the influence of input variables on coastal aquifer salinity, which is of great importance for understanding salinization processes, leading to more effective water-resources-related planning and decision making.
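
    The cluster-then-model workflow described above can be sketched as follows: wells are grouped by k-means on their input characteristics, one small ANN is trained per cluster, and test wells are scored by the model of their nearest cluster. The well attributes, their ranges, and the synthetic chloride relation are hypothetical placeholders, not the Gaza data set.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(5)

    # Hypothetical well records: [pumping rate, distance to coast, recharge, initial Cl]
    X = rng.uniform([50, 100, 0.1, 100], [500, 3000, 0.6, 600], size=(600, 4))
    chloride = (100 + 0.8 * X[:, 3] + 0.3 * X[:, 0] - 0.03 * X[:, 1]
                + 200 * X[:, 2] + rng.normal(0, 20, 600))

    X_tr, X_te, y_tr, y_te = train_test_split(X, chloride, random_state=0)

    # Step 1: cluster wells by their input characteristics
    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_tr)

    # Step 2: one ANN per cluster; score each test well with its cluster's model
    models = {}
    for c in range(3):
        m = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                                       random_state=0))
        models[c] = m.fit(X_tr[km.labels_ == c], y_tr[km.labels_ == c])

    pred = np.array([models[c].predict(x.reshape(1, -1))[0]
                     for x, c in zip(X_te, km.predict(X_te))])
    mape = np.mean(np.abs((pred - y_te) / y_te)) * 100
    print(f"clustered-ANN MAPE on held-out wells: {mape:.1f}%")
    ```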

  2. Application fields for the new Object Management Group (OMG) Standards Case Management Model and Notation (CMMN) and Decision Management Notation (DMN) in the perioperative field.

    PubMed

    Wiemuth, M; Junger, D; Leitritz, M A; Neumann, J; Neumuth, T; Burgert, O

    2017-08-01

    Medical processes can be modeled using different methods and notations. Currently used modeling systems like Business Process Model and Notation (BPMN) are not capable of describing the highly flexible and variable medical processes in sufficient detail. We combined two modeling systems, Business Process Management (BPM) and Adaptive Case Management (ACM), to be able to model non-deterministic medical processes. We used the new Standards Case Management Model and Notation (CMMN) and Decision Management Notation (DMN). First, we explain how CMMN, DMN and BPMN could be used to model non-deterministic medical processes. We applied this methodology to model 79 cataract operations provided by University Hospital Leipzig, Germany, and four cataract operations provided by University Eye Hospital Tuebingen, Germany. Our model consists of 85 tasks and about 20 decisions in BPMN. We were able to expand the system with more complex situations that might appear during an intervention. An effective modeling of the cataract intervention is possible using the combination of BPM and ACM. The combination gives the possibility to depict complex processes with complex decisions. This combination allows a significant advantage for modeling perioperative processes.

  3. The Complex Action Recognition via the Correlated Topic Model

    PubMed Central

    Tu, Hong-bin; Xia, Li-min; Wang, Zheng-wu

    2014-01-01

    Human complex action recognition is an important research area of action recognition. Among various obstacles to human complex action recognition, one of the most challenging is to deal with self-occlusion, where one body part occludes another one. This paper presents a new method of human complex action recognition, which is based on optical flow and correlated topic model (CTM). Firstly, the Markov random field was used to represent the occlusion relationship between human body parts in terms of an occlusion state variable. Secondly, the structure from motion (SFM) is used for reconstructing the missing data of point trajectories. Then, key frames are extracted based on motion features from optical flow, and width-to-height ratios are extracted from the human silhouette. Finally, the correlated topic model (CTM) is used to classify actions. Experiments were performed on the KTH, Weizmann, and UIUC action datasets to test and evaluate the proposed method. The comparative experimental results showed that the proposed method was more effective than the compared methods. PMID:24574920

  4. A Sensitivity Analysis Method to Study the Behavior of Complex Process-based Models

    NASA Astrophysics Data System (ADS)

    Brugnach, M.; Neilson, R.; Bolte, J.

    2001-12-01

    The use of process-based models as a tool for scientific inquiry is becoming increasingly relevant in ecosystem studies. Process-based models are artificial constructs that simulate the system by mechanistically mimicking the functioning of its component processes. Structurally, a process-based model can be characterized in terms of its processes and the relationships established among them. Each process comprises a set of functional relationships among several model components (e.g., state variables, parameters and input data). While not encoded explicitly, the dynamics of the model emerge from this set of components and interactions organized in terms of processes. It is the task of the modeler to guarantee that the dynamics generated are appropriate and semantically equivalent to the phenomena being modeled. Despite the availability of techniques to characterize and understand model behavior, they do not suffice to completely and easily understand how a complex process-based model operates. For example, sensitivity analysis studies model behavior by determining the rate of change in model output as parameters or input data are varied. One of the problems with this approach is that it considers the model as a "black box", and it focuses on explaining model behavior by analyzing the input-output relationship. Since these models have a high degree of non-linearity, understanding how the input affects an output can be an extremely difficult task. Operationally, the application of this technique may constitute a challenging task because complex process-based models are generally characterized by a large parameter space. In order to overcome some of these difficulties, we propose a method of sensitivity analysis to be applicable to complex process-based models. This method focuses sensitivity analysis at the process level, and it aims to determine how sensitive the model output is to variations in the processes. Once the processes that exert the major influence on the output are identified, the causes of its variability can be found. Some of the advantages of this approach are that it reduces the dimensionality of the search space, it facilitates the interpretation of the results and it provides information that allows exploration of uncertainty at the process level, and how it might affect model output. We present an example using the vegetation model BIOME-BGC.

  5. Towards Improved Forecasts of Atmospheric and Oceanic Circulations over the Complex Terrain of the Eastern Mediterranean

    NASA Technical Reports Server (NTRS)

    Chronis, Themis; Case, Jonathan L.; Papadopoulos, Anastasios; Anagnostou, Emmanouil N.; Mecikalski, John R.; Haines, Stephanie L.

    2008-01-01

    Forecasting atmospheric and oceanic circulations accurately over the Eastern Mediterranean has proved to be an exceptional challenge. The existence of fine-scale topographic variability (land/sea coverage) and seasonal dynamics variations can create strong spatial gradients in temperature, wind and other state variables, which numerical models may have difficulty capturing. The Hellenic Center for Marine Research (HCMR) is one of the main operational centers for wave forecasting in the eastern Mediterranean. Currently, HCMR's operational numerical weather/ocean prediction model is based on the coupled Eta/Princeton Ocean Model (POM). Since 1999, HCMR has also operated the POSEIDON floating buoys as a means of state-of-the-art, real-time observations of several oceanic and surface atmospheric variables. This study presents a first assessment of improving both atmospheric and oceanic prediction by initializing a regional Numerical Weather Prediction (NWP) model with high-resolution sea surface temperatures (SST) from remotely sensed platforms in order to capture the small-scale characteristics.

  6. Some considerations concerning the challenge of incorporating social variables into epidemiological models of infectious disease transmission.

    PubMed

    Barnett, Tony; Fournié, Guillaume; Gupta, Sunetra; Seeley, Janet

    2015-01-01

    Incorporation of 'social' variables into epidemiological models remains a challenge. Too much detail and models cease to be useful; too little and the very notion of infection - a highly social process in human populations - may be considered with little reference to the social. The French sociologist Émile Durkheim proposed that the scientific study of society required identification and study of 'social currents'. Such 'currents' are what we might today describe as 'emergent properties', specifiable variables appertaining to individuals and groups, which represent the perspectives of social actors as they experience the environment in which they live their lives. Here we review the ways in which one particular emergent property, hope, relevant to a range of epidemiological situations, might be used in epidemiological modelling of infectious diseases in human populations. We also indicate how such an approach might be extended to include a range of other potential emergent properties to represent complex social and economic processes bearing on infectious disease transmission.

  7. Robust Bayesian clustering.

    PubMed

    Archambeau, Cédric; Verleysen, Michel

    2007-01-01

    A new variational Bayesian learning algorithm for Student-t mixture models is introduced. This algorithm leads to (i) robust density estimation, (ii) robust clustering and (iii) robust automatic model selection. Gaussian mixture models are learning machines which are based on a divide-and-conquer approach. They are commonly used for density estimation and clustering tasks, but are sensitive to outliers. The Student-t distribution has heavier tails than the Gaussian distribution and is therefore less sensitive to any departure of the empirical distribution from Gaussianity. As a consequence, the Student-t distribution is suitable for constructing robust mixture models. In this work, we formalize the Bayesian Student-t mixture model as a latent variable model in a different way from Svensén and Bishop [Svensén, M., & Bishop, C. M. (2005). Robust Bayesian mixture modelling. Neurocomputing, 64, 235-252]. The main difference resides in the fact that it is not necessary to assume a factorized approximation of the posterior distribution on the latent indicator variables and the latent scale variables in order to obtain a tractable solution. Not neglecting the correlations between these unobserved random variables leads to a Bayesian model having an increased robustness. Furthermore, it is expected that the lower bound on the log-evidence is tighter. Based on this bound, the model complexity, i.e. the number of components in the mixture, can be inferred with a higher confidence.

  8. A Complex Systems Approach to Causal Discovery in Psychiatry.

    PubMed

    Saxe, Glenn N; Statnikov, Alexander; Fenyo, David; Ren, Jiwen; Li, Zhiguo; Prasad, Meera; Wall, Dennis; Bergman, Nora; Briggs, Ernestine C; Aliferis, Constantin

    2016-01-01

    Conventional research methodologies and data analytic approaches in psychiatric research are unable to reliably infer causal relations without experimental designs, or to make inferences about the functional properties of the complex systems in which psychiatric disorders are embedded. This article describes a series of studies to validate a novel hybrid computational approach--the Complex Systems-Causal Network (CS-CN) method--designed to integrate causal discovery within a complex systems framework for psychiatric research. The CS-CN method was first applied to an existing dataset on psychopathology in 163 children hospitalized with injuries (validation study). Next, it was applied to a much larger dataset of traumatized children (replication study). Finally, the CS-CN method was applied in a controlled experiment using a 'gold standard' dataset for causal discovery and compared with other methods for accurately detecting causal variables (resimulation controlled experiment). The CS-CN method successfully detected a causal network of 111 variables and 167 bivariate relations in the initial validation study. This causal network had well-defined adaptive properties and a set of variables was found that disproportionally contributed to these properties. Modeling the removal of these variables resulted in significant loss of adaptive properties. The CS-CN method was successfully applied in the replication study and performed better than traditional statistical methods, and similarly to state-of-the-art causal discovery algorithms in the causal detection experiment. The CS-CN method was validated, replicated, and yielded both novel and previously validated findings related to risk factors and potential treatments of psychiatric disorders. The novel approach yields both fine-grain (micro) and high-level (macro) insights and thus represents a promising approach for complex systems-oriented research in psychiatry.

  9. Variable Cultural Acquisition Costs Constrain Cumulative Cultural Evolution

    PubMed Central

    Mesoudi, Alex

    2011-01-01

    One of the hallmarks of the human species is our capacity for cumulative culture, in which beneficial knowledge and technology is accumulated over successive generations. Yet previous analyses of cumulative cultural change have failed to consider the possibility that as cultural complexity accumulates, it becomes increasingly costly for each new generation to acquire from the previous generation. In principle this may result in an upper limit on the cultural complexity that can be accumulated, at which point accumulated knowledge is so costly and time-consuming to acquire that further innovation is not possible. In this paper I first review existing empirical analyses of the history of science and technology that support the possibility that cultural acquisition costs may constrain cumulative cultural evolution. I then present macroscopic and individual-based models of cumulative cultural evolution that explore the consequences of this assumption of variable cultural acquisition costs, showing that making acquisition costs vary with cultural complexity causes the latter to reach an upper limit above which no further innovation can occur. These models further explore the consequences of different cultural transmission rules (directly biased, indirectly biased and unbiased transmission), population size, and cultural innovations that themselves reduce innovation or acquisition costs. PMID:21479170
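
    The macroscopic argument above has a very compact form: if acquisition time grows with accumulated complexity and innovation uses whatever time is left, complexity saturates at a ceiling. The sketch below is an illustrative toy version of that logic with arbitrary units; the lifetime, per-trait cost, and innovation rate are assumptions, not parameters from the paper's models.

    ```python
    # Macroscopic sketch: each generation first pays a time cost to acquire the
    # existing cultural repertoire, then spends any remaining time innovating.
    lifetime = 100.0          # time budget per generation (assumed units)
    cost_per_trait = 0.8      # time needed to learn one existing trait
    innovation_rate = 0.5     # new traits produced per unit of free time

    complexity = 0.0
    trajectory = []
    for generation in range(200):
        acquisition_time = cost_per_trait * complexity
        free_time = max(lifetime - acquisition_time, 0.0)
        complexity += innovation_rate * free_time
        trajectory.append(complexity)

    # Complexity approaches lifetime / cost_per_trait, the point at which all
    # available time goes to acquisition and none is left for innovation.
    print("final complexity:", round(trajectory[-1], 1),
          "  theoretical ceiling:", lifetime / cost_per_trait)
    ```

    Making the per-trait cost itself a decreasing function of complexity (cheaper acquisition technologies) raises or removes the ceiling, which mirrors the paper's discussion of innovations that reduce acquisition costs.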

  10. Control of complex networks requires both structure and dynamics

    NASA Astrophysics Data System (ADS)

    Gates, Alexander J.; Rocha, Luis M.

    2016-04-01

    The study of network structure has uncovered signatures of the organization of complex systems. However, there is also a need to understand how to control them; for example, identifying strategies to revert a diseased cell to a healthy state, or a mature cell to a pluripotent state. Two recent methodologies suggest that the controllability of complex systems can be predicted solely from the graph of interactions between variables, without considering their dynamics: structural controllability and minimum dominating sets. We demonstrate that such structure-only methods fail to characterize controllability when dynamics are introduced. We study Boolean network ensembles of network motifs as well as three models of biochemical regulation: the segment polarity network in Drosophila melanogaster, the cell cycle of budding yeast Saccharomyces cerevisiae, and the floral organ arrangement in Arabidopsis thaliana. We demonstrate that structure-only methods both undershoot and overshoot the number of critical variables, and misidentify which sets of variables best control the dynamics of these models, highlighting the importance of the actual system dynamics in determining control. Our analysis further shows that the logic of automata transition functions, namely how canalizing they are, plays an important role in the extent to which structure predicts dynamics.
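
    The point that identical wiring can differ in controllability once update logic is specified can be illustrated with a toy Boolean network; the three-node rules below are hypothetical examples, not the models studied in the paper:

        from itertools import product

        # Two networks share the same interaction graph (x1 -> x2, x1 -> x3, x2 -> x3)
        # but use different Boolean logic for x3. We clamp ("pin") x1 = 1 and ask
        # whether every initial condition is driven to the all-ones target state.
        def step(state, rules):
            return tuple(rule(state) for rule in rules)

        def pinned_to_target(rules, steps=20):
            reached = set()
            for init in product([0, 1], repeat=3):
                s = (1,) + init[1:]                      # clamp x1 to 1
                for _ in range(steps):
                    s = (1,) + step(s, rules)[1:]        # keep x1 clamped every update
                reached.add(s)
            return reached == {(1, 1, 1)}

        rules_or  = [lambda s: s[0], lambda s: s[0], lambda s: s[0] or s[1]]
        rules_xor = [lambda s: s[0], lambda s: s[0], lambda s: s[0] ^ s[1]]
        print(pinned_to_target(rules_or))    # True: pinning x1 steers the network to 111
        print(pinned_to_target(rules_xor))   # False: same graph, different dynamics

    A structure-only analysis sees the same driver set in both cases, yet only one choice of logic is actually steered to the target, which is the qualitative point made above.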

  11. What model resolution is required in climatological downscaling over complex terrain?

    NASA Astrophysics Data System (ADS)

    El-Samra, Renalda; Bou-Zeid, Elie; El-Fadel, Mutasem

    2018-05-01

    This study presents results from the Weather Research and Forecasting (WRF) model applied for climatological downscaling simulations over highly complex terrain along the Eastern Mediterranean. We sequentially downscale general circulation model results, for a mild and wet year (2003) and a hot and dry year (2010), to three local horizontal resolutions of 9, 3 and 1 km. Simulated near-surface hydrometeorological variables are compared at different time scales against data from an observational network over the study area comprising rain gauges, anemometers, and thermometers. The overall performance of WRF at 1 and 3 km horizontal resolution was satisfactory, with significant improvement over the 9 km downscaling simulation. The total yearly precipitation from WRF's 1 km and 3 km domains exhibited < 10% bias with respect to observational data. The errors in minimum and maximum temperatures were reduced by the downscaling, along with a high-quality delineation of temperature variability and extremes for both the 1 and 3 km resolution runs. Wind speeds, on the other hand, are generally overestimated at all model resolutions in comparison with observational data, particularly at coastal stations (up to 50%) compared with inland stations (up to 40%). The findings therefore indicate that a 3 km resolution is sufficient for the downscaling, especially since it would allow more years and scenarios to be investigated than the higher 1 km resolution at the same computational effort. In addition, the results provide a quantitative measure of the potential errors for various hydrometeorological variables.

  12. Quasi steady-state aerodynamic model development for race vehicle simulations

    NASA Astrophysics Data System (ADS)

    Mohrfeld-Halterman, J. A.; Uddin, M.

    2016-01-01

    Presented in this paper is a procedure to develop a high fidelity quasi steady-state aerodynamic model for use in race car vehicle dynamic simulations. Developed to fit quasi steady-state wind tunnel data, the aerodynamic model is regressed against three independent variables: front ground clearance, rear ride height, and yaw angle. An initial dual range model is presented and then further refined to reduce the model complexity while maintaining a high level of predictive accuracy. The model complexity reduction decreases the required amount of wind tunnel data thereby reducing wind tunnel testing time and cost. The quasi steady-state aerodynamic model for the pitch moment degree of freedom is systematically developed in this paper. This same procedure can be extended to the other five aerodynamic degrees of freedom to develop a complete six degree of freedom quasi steady-state aerodynamic model for any vehicle.
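
    A hedged sketch of the regression step, fitting a pitch-moment coefficient to the three independent variables named above with a low-order polynomial basis; the data and coefficient values are synthetic placeholders, not wind tunnel measurements:

        import numpy as np

        rng = np.random.default_rng(0)
        n = 200
        h_f = rng.uniform(20, 60, n)     # front ground clearance (assumed units: mm)
        h_r = rng.uniform(40, 90, n)     # rear ride height (mm)
        psi = rng.uniform(-5, 5, n)      # yaw angle (degrees)
        cm  = 0.02*h_f - 0.015*h_r + 0.004*psi**2 + 0.3 + rng.normal(0, 0.05, n)

        # Design matrix: constant, linear, quadratic and interaction terms.
        X = np.column_stack([np.ones(n), h_f, h_r, psi,
                             h_f**2, h_r**2, psi**2, h_f*h_r, h_f*psi, h_r*psi])
        coef, *_ = np.linalg.lstsq(X, cm, rcond=None)
        resid = cm - X @ coef
        print("RMS residual:", np.sqrt(np.mean(resid**2)))

    Dropping basis terms with negligible coefficients and refitting mimics the complexity-reduction step described in the abstract, trading a slightly larger residual for less required wind tunnel data.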

  13. Quantifying uncertainty in high-resolution coupled hydrodynamic-ecosystem models

    NASA Astrophysics Data System (ADS)

    Allen, J. I.; Somerfield, P. J.; Gilbert, F. J.

    2007-01-01

    Marine ecosystem models are becoming increasingly complex and sophisticated, and are being used to estimate the effects of future changes in the earth system with a view to informing important policy decisions. Despite their potential importance, far too little attention is generally paid to model errors and the extent to which model outputs actually relate to real-world processes. With the increasing complexity of the models themselves comes an increasing complexity among model results. If we are to develop useful modelling tools for the marine environment we need to be able to understand and quantify the uncertainties inherent in the simulations. Analysing errors within highly multivariate model outputs, and relating them to even more complex and multivariate observational data, are not trivial tasks. Here we describe the application of a series of techniques, including a 2-stage self-organising map (SOM), non-parametric multivariate analysis, and error statistics, to a complex spatio-temporal model run for the period 1988-1989 in the Southern North Sea, coinciding with the North Sea Project, which collected a wealth of observational data. We use model output, large spatio-temporally resolved data sets and a combination of methodologies (SOM, MDS, uncertainty metrics) to simplify the problem and to provide tractable information on model performance. The use of a SOM as a clustering tool allows us to simplify the dimensions of the problem while the use of MDS on independent data grouped according to the SOM classification allows us to validate the SOM. The combination of classification and uncertainty metrics allows us to pinpoint the variables and associated processes which require attention in each region. We recommend the use of this combination of techniques for simplifying complex comparisons of model outputs with real data, and analysis of error distributions.
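
    The clustering step can be illustrated with a small self-organising map written directly in NumPy; the grid size, training schedule and the synthetic stand-in for multivariate model output are assumptions, not the study's configuration:

        import numpy as np

        def train_som(data, grid=(4, 4), iters=2000, lr0=0.5, sigma0=1.5, seed=0):
            rng = np.random.default_rng(seed)
            coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])], float)
            weights = rng.normal(size=(len(coords), data.shape[1]))
            for t in range(iters):
                x = data[rng.integers(len(data))]
                bmu = np.argmin(((weights - x) ** 2).sum(axis=1))   # best-matching unit
                lr = lr0 * np.exp(-t / iters)
                sigma = sigma0 * np.exp(-t / iters)
                d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
                h = np.exp(-d2 / (2 * sigma ** 2))                  # neighbourhood kernel
                weights += lr * h[:, None] * (x - weights)
            return weights

        def assign(data, weights):
            return np.argmin(((data[:, None, :] - weights[None]) ** 2).sum(-1), axis=1)

        data = np.random.default_rng(1).normal(size=(500, 6))   # stand-in for multivariate output
        w = train_som(data)
        print(np.bincount(assign(data, w), minlength=16))       # cluster membership counts

    Each SOM node then plays the role of a characteristic regime; independent observations grouped by these labels can be compared with MDS and error statistics, as the abstract describes.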

  14. Microprocessor based implementation of attitude and shape control of large space structures

    NASA Technical Reports Server (NTRS)

    Reddy, A. S. S. R.

    1984-01-01

    The feasibility of using off-the-shelf 8-bit and 16-bit microprocessors to implement linear state variable feedback control laws, and of assessing the real-time response to spacecraft dynamics, is studied. The complexity of the dynamic model is described along with the appropriate software. An experimental setup consisting of a beam, a microprocessor system for implementing the control laws, and the generalized software needed to implement any state variable feedback control system is included.
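
    A sketch of the control loop such a processor would execute, with an assumed single structural mode and assumed gains (none of the values below come from the report):

        import numpy as np

        dt, w0, zeta = 0.01, 2.0, 0.02            # sample time [s], mode frequency [rad/s], damping
        A = np.array([[0.0, 1.0], [-w0**2, -2*zeta*w0]])
        B = np.array([0.0, 1.0])
        K = np.array([3.0, 2.5])                  # state-feedback gains (assumed)

        x = np.array([1.0, 0.0])                  # initial modal displacement and rate
        for _ in range(2000):                     # 20 s of simulated sampled-data operation
            u = -K @ x                            # linear state variable feedback law
            u = float(np.clip(u, -5.0, 5.0))      # actuator saturation, as on real hardware
            x = x + dt * (A @ x + B * u)          # forward-Euler plant update
        print("final state:", x)                  # vibration decays toward zero

    On an 8-bit or 16-bit processor the same loop would be coded in fixed-point arithmetic, and the achievable sample time determines the real-time feasibility assessed in the report.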

  15. High-resolution spatial databases of monthly climate variables (1961-2010) over a complex terrain region in southwestern China

    NASA Astrophysics Data System (ADS)

    Wu, Wei; Xu, An-Ding; Liu, Hong-Bin

    2015-01-01

    Climate data in gridded format are critical for understanding climate change and its impact on the eco-environment. The aim of the current study is to develop spatial databases for three climate variables (maximum and minimum temperatures, and relative humidity) over a large region with complex topography in southwestern China. Five widely used approaches, including inverse distance weighting, ordinary kriging, universal kriging, co-kriging, and thin-plate smoothing spline, were tested. Root mean square error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE) showed that thin-plate smoothing spline with latitude, longitude, and elevation outperformed other models. Average RMSE, MAE, and MAPE of the best models were 1.16 °C, 0.74 °C, and 7.38 % for maximum temperature; 0.826 °C, 0.58 °C, and 6.41 % for minimum temperature; and 3.44 %, 2.28 %, and 3.21 % for relative humidity, respectively. Spatial datasets of annual and monthly climate variables with 1-km resolution covering the period 1961-2010 were then obtained using the best-performing methods. Comparative study showed that the current outcomes were in good agreement with public datasets. Based on the gridded datasets, changes in temperature variables were investigated across the study area. Future studies might be needed to capture the uncertainty induced by environmental conditions through remote sensing and knowledge-based methods.
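
    One of the tested interpolators (inverse distance weighting) together with the three reported error measures can be sketched as follows; station locations and temperatures are synthetic placeholders:

        import numpy as np

        def idw(xy_obs, z_obs, xy_new, power=2.0):
            d = np.sqrt(((xy_new[:, None, :] - xy_obs[None, :, :]) ** 2).sum(-1))
            w = 1.0 / np.maximum(d, 1e-10) ** power      # inverse-distance weights
            return (w * z_obs).sum(axis=1) / w.sum(axis=1)

        def rmse(a, b): return float(np.sqrt(np.mean((a - b) ** 2)))
        def mae(a, b):  return float(np.mean(np.abs(a - b)))
        def mape(a, b): return float(np.mean(np.abs((a - b) / b)) * 100.0)

        rng = np.random.default_rng(0)
        xy = rng.uniform(0, 100, size=(40, 2))               # station coordinates (km)
        t  = 30.0 - 0.1 * xy[:, 0] + rng.normal(0, 0.5, 40)  # e.g. maximum temperature (deg C)

        # Leave-one-out cross-validation, as commonly used to rank interpolation methods:
        pred = np.array([idw(np.delete(xy, i, 0), np.delete(t, i), xy[i:i+1])[0]
                         for i in range(len(t))])
        print(rmse(pred, t), mae(pred, t), mape(pred, t))

    The thin-plate spline and kriging variants would replace the idw function while the cross-validation and error metrics stay the same, which is how the methods were ranked above.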

  16. PREDICTING TWO-DIMENSIONAL STEADY-STATE SOIL FREEZING FRONTS USING THE CVBEM.

    USGS Publications Warehouse

    Hromadka, T.V.

    1986-01-01

    The complex variable boundary element method (CVBEM) is used instead of a real variable boundary element method because of the modeling-error evaluation techniques that have been developed for it. The modeling accuracy is evaluated by the model-user in the determination of an approximate boundary upon which the CVBEM provides an exact solution. Although inhomogeneity (and anisotropy) can be included in the CVBEM model, the resulting fully populated matrix system quickly becomes large. Therefore, in this paper, the domain is assumed homogeneous and isotropic except for differences in frozen and thawed conduction parameters on either side of the freezing front. The example problems presented were obtained by use of a popular 64K microcomputer (the current version of the program used in this study has the capacity to accommodate 30 nodal points).

  17. Ontology and modeling patterns for state-based behavior representation

    NASA Technical Reports Server (NTRS)

    Castet, Jean-Francois; Rozek, Matthew L.; Ingham, Michel D.; Rouquette, Nicolas F.; Chung, Seung H.; Kerzhner, Aleksandr A.; Donahue, Kenneth M.; Jenkins, J. Steven; Wagner, David A.; Dvorak, Daniel L.

    2015-01-01

    This paper provides an approach to capture state-based behavior of elements, that is, the specification of their state evolution in time, and the interactions amongst them. Elements can be components (e.g., sensors, actuators) or environments, and are characterized by state variables that vary with time. The behaviors of these elements, as well as interactions among them are represented through constraints on state variables. This paper discusses the concepts and relationships introduced in this behavior ontology, and the modeling patterns associated with it. Two example cases are provided to illustrate their usage, as well as to demonstrate the flexibility and scalability of the behavior ontology: a simple flashlight electrical model and a more complex spacecraft model involving instruments, power and data behaviors. Finally, an implementation in a SysML profile is provided.

  18. Circuit variability interacts with excitatory-inhibitory diversity of interneurons to regulate network encoding capacity.

    PubMed

    Tsai, Kuo-Ting; Hu, Chin-Kun; Li, Kuan-Wei; Hwang, Wen-Liang; Chou, Ya-Hui

    2018-05-23

    Local interneurons (LNs) in the Drosophila olfactory system exhibit neuronal diversity and variability, yet it is still unknown how these features impact information encoding capacity and reliability in a complex LN network. We employed two strategies to construct a diverse excitatory-inhibitory neural network, beginning with a ring network structure and then introducing distinct types of inhibitory interneurons and circuit variability into the simulated network. The continuity of activity within the node ensemble (oscillation pattern) was used as a readout to describe the temporal dynamics of network activity. We found that inhibitory interneurons enhance the encoding capacity by protecting the network from extremely short activation periods when the network wiring complexity is very high. In addition, distinct types of interneurons have differential effects on encoding capacity and reliability. Circuit variability may enhance the encoding reliability, with or without compromising encoding capacity. Therefore, we have described how circuit variability of interneurons may interact with excitatory-inhibitory diversity to enhance the encoding capacity and distinguishability of neural networks. In this work, we evaluate the effects of different types and degrees of connection diversity on a ring model, which may simulate interneuron networks in the Drosophila olfactory system or other biological systems.

  19. The treatment of parental height as a biological factor in studies of birth weight and childhood growth

    PubMed Central

    Spencer, N; Logan, S

    2002-01-01

    Parental height is frequently treated as a biological variable in studies of birth weight and childhood growth. Elimination of social variables from multivariate models including parental height as a biological variable leads researchers to conclude that social factors have no independent effect on the outcome. This paper challenges the treatment of parental height as a biological variable, drawing on extensive evidence for the determination of adult height through a complex interaction of genetic and social factors. The paper firstly seeks to establish the importance of social factors in the determination of height. The methodological problems associated with treatment of parental height as a purely biological variable are then discussed, illustrated by data from published studies and by analysis of data from the 1958 National Childhood Development Study (NCDS). The paper concludes that a framework for studying pathways to pregnancy and childhood outcomes needs to take account of the complexity of the relation between genetic and social factors and be able to account for the effects of multiple risk factors acting cumulatively across time and across generations. Illustrations of these approaches are given using NCDS data. PMID:12193422

  20. Inferring Weighted Directed Association Network from Multivariate Time Series with a Synthetic Method of Partial Symbolic Transfer Entropy Spectrum and Granger Causality

    PubMed Central

    Hu, Yanzhu; Ai, Xinbo

    2016-01-01

    Complex network methodology is very useful for exploring complex systems. However, the relationships among variables in complex systems are usually not clear. Therefore, inferring association networks among variables from their observed data has been a popular research topic. We propose a synthetic method, named small-shuffle partial symbolic transfer entropy spectrum (SSPSTES), for inferring an association network from multivariate time series. The method synthesizes surrogate data, partial symbolic transfer entropy (PSTE) and Granger causality. Proper threshold selection is crucial for common correlation identification methods, and it is not easy for users. The proposed method can not only identify strong correlations without selecting a threshold but also quantify correlations, identify their direction, and identify temporal relations. The method can be divided into three layers, i.e. data layer, model layer and network layer. In the model layer, the method identifies all possible pair-wise correlations. In the network layer, we introduce a filter algorithm to remove indirect weak correlations and retain strong ones. Finally, we build a weighted adjacency matrix, the value of each entry representing the correlation level between pair-wise variables, and then obtain the weighted directed association network. Two numerically simulated datasets, one from a linear system and one from a nonlinear system, are used to illustrate the steps and performance of the proposed approach. Finally, the ability of the proposed method is demonstrated in an application. PMID:27832153
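
    The linear Granger-causality ingredient of such a pipeline can be sketched with plain lag regressions and an F-test; the symbolic transfer entropy and surrogate-data steps are not shown, and the data below are synthetic:

        import numpy as np
        from scipy import stats

        def lag_matrix(series, p, n):
            # columns are the series lagged by 1..p, rows are t = p..n-1
            return np.column_stack([series[p - k:n - k] for k in range(1, p + 1)])

        def granger_xy(x, y, p=2):
            n = len(y)
            Y = y[p:]
            Z_own  = np.column_stack([np.ones(n - p), lag_matrix(y, p, n)])
            Z_full = np.column_stack([Z_own, lag_matrix(x, p, n)])
            rss_own  = np.sum((Y - Z_own  @ np.linalg.lstsq(Z_own,  Y, rcond=None)[0]) ** 2)
            rss_full = np.sum((Y - Z_full @ np.linalg.lstsq(Z_full, Y, rcond=None)[0]) ** 2)
            df1, df2 = p, (n - p) - Z_full.shape[1]
            F = ((rss_own - rss_full) / df1) / (rss_full / df2)
            return F, stats.f.sf(F, df1, df2)    # F statistic and p-value

        rng = np.random.default_rng(0)
        x = rng.normal(size=500)
        y = np.zeros(500)
        for t in range(1, 500):
            y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()
        print(granger_xy(x, y))   # x -> y: large F, tiny p-value
        print(granger_xy(y, x))   # y -> x: no significant improvement

    The directed, weighted edges of the association network are then obtained by quantifying such asymmetric predictive improvements for every variable pair, as described above.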

  1. Decadal predictions of Southern Ocean sea ice : testing different initialization methods with an Earth-system Model of Intermediate Complexity

    NASA Astrophysics Data System (ADS)

    Zunz, Violette; Goosse, Hugues; Dubinkina, Svetlana

    2013-04-01

    The sea ice extent in the Southern Ocean has increased since 1979 but the causes of this expansion have not been firmly identified. In particular, the contribution of internal variability and external forcing to this positive trend has not been fully established. In this region, the lack of observations and the overestimation of internal variability of the sea ice by contemporary General Circulation Models (GCMs) make it difficult to understand the behaviour of the sea ice. Nevertheless, if its evolution is governed by the internal variability of the system and if this internal variability is in some way predictable, a suitable initialization method should lead to simulation results that better fit the reality. Current GCM decadal predictions are generally initialized through a nudging towards some observed fields. This relatively simple method does not seem to be appropriate for the initialization of sea ice in the Southern Ocean. The present study aims at identifying an initialization method that could improve the quality of the predictions of Southern Ocean sea ice at decadal timescales. We use LOVECLIM, an Earth-system Model of Intermediate Complexity that allows us to perform, within a reasonable computational time, the large number of simulations required to test systematically different initialization procedures. These involve three data assimilation methods: a nudging, a particle filter and an efficient particle filter. In a first step, simulations are performed in an idealized framework, i.e. data from a reference simulation of LOVECLIM are used instead of observations, hereinafter called pseudo-observations. In this configuration, the internal variability of the model obviously agrees with that of the pseudo-observations. This allows us to get rid of the issues related to the overestimation of the internal variability by models compared to the observed one. This way, we can work out a suitable methodology to assess the efficiency of the initialization procedures tested. It also allows us to determine the upper limit of improvement that can be expected if more sophisticated initialization methods are used in decadal prediction simulations and if models have an internal variability agreeing with the observed one. Furthermore, since pseudo-observations are available everywhere at any time step, we also analyse the differences between simulations initialized with a complete dataset of pseudo-observations and the ones for which pseudo-observation data are not assimilated everywhere. In a second step, simulations are performed in a realistic framework, i.e. through the use of actual available observations. The same data assimilation methods are tested in order to check if more sophisticated methods can improve the reliability and the accuracy of decadal prediction simulations, even if they are performed with models that overestimate the internal variability of the sea ice extent in the Southern Ocean.
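
    The particle filter ingredient can be illustrated on a toy scalar state with pseudo-observations; the dynamics, noise levels and ensemble size below are assumptions and do not represent LOVECLIM or the study's configuration:

        import numpy as np

        rng = np.random.default_rng(0)
        T, Np = 50, 500
        phi, q, r = 0.9, 0.3, 0.5                 # AR(1) persistence, model and obs noise std

        truth = np.zeros(T)
        for t in range(1, T):
            truth[t] = phi * truth[t - 1] + q * rng.normal()
        obs = truth + r * rng.normal(size=T)      # pseudo-observations

        particles = rng.normal(size=Np)           # initial ensemble
        est = np.zeros(T)
        for t in range(T):
            particles = phi * particles + q * rng.normal(size=Np)   # propagate ensemble
            w = np.exp(-0.5 * ((obs[t] - particles) / r) ** 2)      # likelihood weights
            w /= w.sum()
            particles = particles[rng.choice(Np, size=Np, p=w)]     # bootstrap resampling
            est[t] = particles.mean()

        print("RMS error of filtered estimate:", np.sqrt(np.mean((est - truth) ** 2)))

    Nudging would instead relax the model state directly toward the observations, and the "efficient" particle filter modifies the proposal step; comparing such variants in an idealized setting is the purpose of the first experiments described above.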

  2. An advanced stochastic weather generator for simulating 2-D high-resolution climate variables

    NASA Astrophysics Data System (ADS)

    Peleg, Nadav; Fatichi, Simone; Paschalis, Athanasios; Molnar, Peter; Burlando, Paolo

    2017-07-01

    A new stochastic weather generator, the Advanced WEather GENerator for a two-dimensional grid (AWE-GEN-2d), is presented. The model combines physical and stochastic approaches to simulate key meteorological variables at high spatial and temporal resolution: 2 km × 2 km and 5 min for precipitation and cloud cover, and 100 m × 100 m and 1 h for near-surface air temperature, solar radiation, vapor pressure, atmospheric pressure, and near-surface wind. The model requires spatially distributed data for the calibration process, which can nowadays be obtained by remote sensing devices (weather radar and satellites), reanalysis data sets and ground stations. AWE-GEN-2d is parsimonious in terms of computational demand and therefore is particularly suitable for studies where exploring internal climatic variability at multiple spatial and temporal scales is fundamental. Applications of the model include models of environmental systems, such as hydrological and geomorphological models, where high-resolution spatial and temporal meteorological forcing is crucial. The weather generator was calibrated and validated for the Engelberg region, an area with complex topography in the Swiss Alps. Model tests show that the climate variables are generated by AWE-GEN-2d with a level of accuracy that is sufficient for many practical applications.

  3. Simulation of crop yield variability by improved root-soil-interaction modelling

    NASA Astrophysics Data System (ADS)

    Duan, X.; Gayler, S.; Priesack, E.

    2009-04-01

    Understanding the processes and factors that govern the within-field variability in crop yield has gained great importance due to applications in precision agriculture. Crop response to environment at field scale is a complex dynamic process involving the interactions of soil characteristics, weather conditions and crop management. The numerous static factors combined with temporal variations make it very difficult to identify and manage the variability pattern. Therefore, crop simulation models are considered to be useful tools in analyzing separately the effects of change in soil or weather conditions on the spatial variability, in order to identify the cause of yield variability and to quantify the spatial and temporal variation. However, tests showed that standard crop models such as CERES-Wheat and CERES-Maize were not able to quantify the observed within-field yield variability, while their performance on crop growth simulation under more homogeneous and mainly non-limiting conditions was sufficient to simulate average yields at the field scale. On a study site in South Germany, within-field variability in crop growth has been documented for years. After detailed analysis and classification of the soil patterns, two site-specific factors, the plant-available water and the O2 deficiency, were considered as the main causes of the crop growth variability in this field. Based on our measurement of root distribution in the soil profile, we hypothesize that in our case the inability of the applied crop models to simulate the yield variability may be due to the oversimplification of the root models involved, which fail to be sensitive to different soil conditions. In this study, the root growth model described by Jones et al. (1991) was adapted by using data of root distributions in the field and linking the adapted root model to the CERES crop model. The ability of the new root model to increase the sensitivity of the CERES crop models to different environmental conditions was then evaluated by comparing the simulation results with measured data and by scenario calculations.

  4. Determining the Ocean's Role on the Variable Gravity Field and Earth Rotation

    NASA Technical Reports Server (NTRS)

    Ponte, Rui M.

    2000-01-01

    Our three year investigation, carried out over the period 18-19 Nov 2000, focused on the study of the variability in ocean angular momentum and mass signals and their relation to the Earth's variable rotation and gravity field. This final report includes a summary description of our work and a list of related publications and presentations. One thrust of the investigation was to determine and interpret the changes in the ocean mass field, as they impact on the variable gravity field and Earth rotation. In this regard, the seasonal cycle in local vertically-integrated ocean mass was analyzed using two ocean models of different complexity: (1) the simple constant-density, coarse resolution model of Ponte; and (2) the fully stratified, eddy-resolving model of Semtner and Chervin. The dynamics and thermodynamics of the seasonal variability in ocean mass were examined in detail, as well as the methodologies to calculate those changes under different model formulations. Another thrust of the investigation was to examine signals in ocean angular momentum (OAM) in relation to Earth rotation changes. A number of efforts were undertaken in this regard. Sensitivity of the oceanic excitation to different assumptions about how the ocean is forced and how it dissipates its energy was explored.

  5. Ensemble survival tree models to reveal pairwise interactions of variables with time-to-events outcomes in low-dimensional setting

    PubMed Central

    Dazard, Jean-Eudes; Ishwaran, Hemant; Mehlotra, Rajeev; Weinberg, Aaron; Zimmerman, Peter

    2018-01-01

    Unraveling interactions among variables such as genetic, clinical, demographic and environmental factors is essential to understand the development of common and complex diseases. To increase the power to detect such variable interactions associated with clinical time-to-events outcomes, we borrowed established concepts from random survival forest (RSF) models. We introduce a novel RSF-based pairwise interaction estimator and derive a randomization method with bootstrap confidence intervals for inferring interaction significance. Using various linear and nonlinear time-to-events survival models in simulation studies, we first show the efficiency of our approach: true pairwise interaction-effects between variables are uncovered, while they may not be accompanied by their corresponding main-effects, and may not be detected by standard semi-parametric regression modeling and test statistics used in survival analysis. Moreover, using an RSF-based cross-validation scheme for generating prediction estimators, we show that informative predictors may be inferred. We applied our approach to an HIV cohort study recording key host gene polymorphisms and their association with HIV change of tropism or AIDS progression. Altogether, this shows how linear or nonlinear pairwise statistical interactions of variables may be efficiently detected with a predictive value in observational studies with time-to-event outcomes. PMID:29453930

  6. Ensemble survival tree models to reveal pairwise interactions of variables with time-to-events outcomes in low-dimensional setting.

    PubMed

    Dazard, Jean-Eudes; Ishwaran, Hemant; Mehlotra, Rajeev; Weinberg, Aaron; Zimmerman, Peter

    2018-02-17

    Unraveling interactions among variables such as genetic, clinical, demographic and environmental factors is essential to understand the development of common and complex diseases. To increase the power to detect such variable interactions associated with clinical time-to-events outcomes, we borrowed established concepts from random survival forest (RSF) models. We introduce a novel RSF-based pairwise interaction estimator and derive a randomization method with bootstrap confidence intervals for inferring interaction significance. Using various linear and nonlinear time-to-events survival models in simulation studies, we first show the efficiency of our approach: true pairwise interaction-effects between variables are uncovered, while they may not be accompanied by their corresponding main-effects, and may not be detected by standard semi-parametric regression modeling and test statistics used in survival analysis. Moreover, using an RSF-based cross-validation scheme for generating prediction estimators, we show that informative predictors may be inferred. We applied our approach to an HIV cohort study recording key host gene polymorphisms and their association with HIV change of tropism or AIDS progression. Altogether, this shows how linear or nonlinear pairwise statistical interactions of variables may be efficiently detected with a predictive value in observational studies with time-to-event outcomes.

  7. Elastic scattering spectroscopy for detection of cancer risk in Barrett's esophagus: experimental and clinical validation of error removal by orthogonal subtraction for increasing accuracy

    NASA Astrophysics Data System (ADS)

    Zhu, Ying; Fearn, Tom; MacKenzie, Gary; Clark, Ben; Dunn, Jason M.; Bigio, Irving J.; Bown, Stephen G.; Lovat, Laurence B.

    2009-07-01

    Elastic scattering spectroscopy (ESS) may be used to detect high-grade dysplasia (HGD) or cancer in Barrett's esophagus (BE). When spectra are measured in vivo by a hand-held optical probe, variability among replicated spectra from the same site can hinder the development of a diagnostic model for cancer risk. An experiment was carried out on excised tissue to investigate how two potential sources of this variability, pressure and angle, influence spectral variability, and the results were compared with the variations observed in spectra collected in vivo from patients with Barrett's esophagus. A statistical method called error removal by orthogonal subtraction (EROS) was applied to model and remove this measurement variability, which accounted for 96.6% of the variation in the spectra, from the in vivo data. Its removal allowed the construction of a diagnostic model with specificity improved from 67% to 82% (with sensitivity fixed at 90%). The improvement was maintained in predictions on an independent in vivo data set. EROS works well as an effective pretreatment for Barrett's in vivo data by identifying measurement variability and ameliorating its effect. The procedure reduces the complexity and increases the accuracy and interpretability of the model for classification and detection of cancer risk in Barrett's esophagus.
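
    The orthogonal-subtraction idea can be sketched in a few lines of linear algebra: directions spanning within-site (replicate) variability are estimated from difference spectra and projected out before modelling. The spectra below are synthetic and the number of removed components is a tuning choice, not the paper's value:

        import numpy as np

        rng = np.random.default_rng(0)
        n_sites, n_reps, n_wl = 30, 3, 200
        site_signal = rng.normal(size=(n_sites, 1, n_wl))
        nuisance_dir = rng.normal(size=(1, 1, n_wl))              # e.g. probe pressure/angle effect
        spectra = (site_signal
                   + rng.normal(size=(n_sites, n_reps, 1)) * nuisance_dir
                   + 0.05 * rng.normal(size=(n_sites, n_reps, n_wl)))

        # Difference spectra within each site capture measurement variability only.
        diffs = (spectra - spectra.mean(axis=1, keepdims=True)).reshape(-1, n_wl)
        _, _, Vt = np.linalg.svd(diffs, full_matrices=False)
        k = 2                                                      # components to subtract (assumed)
        P = np.eye(n_wl) - Vt[:k].T @ Vt[:k]                       # projector onto the complement
        corrected = spectra.reshape(-1, n_wl) @ P                  # EROS-style corrected spectra
        print(corrected.shape)

    The diagnostic classifier is then trained on the corrected spectra, which is what produced the reported gain in specificity.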

  8. Final Technical Report for Collaborative Research: Regional climate-change projections through next-generation empirical and dynamical models, DE-FG02-07ER64429

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smyth, Padhraic

    2013-07-22

    This is the final report for a DOE-funded research project describing the outcome of research on non-homogeneous hidden Markov models (NHMMs) and coupled ocean-atmosphere (O-A) intermediate-complexity models (ICMs) to identify the potentially predictable modes of climate variability, and to investigate their impacts on the regional-scale. The main results consist of extensive development of the hidden Markov models for rainfall simulation and downscaling specifically within the non-stationary climate change context together with the development of parallelized software; application of NHMMs to downscaling of rainfall projections over India; identification and analysis of decadal climate signals in data and models; and studies of climate variability in terms of the dynamics of atmospheric flow regimes.

  9. Neural control of fast nonlinear systems--application to a turbocharged SI engine with VCT.

    PubMed

    Colin, Guillaume; Chamaillard, Yann; Bloch, Gérard; Corde, Gilles

    2007-07-01

    Today, (engine) downsizing using turbocharging appears to be a major way of reducing fuel consumption and pollutant emissions of spark ignition (SI) engines. In this context, an efficient control of the air actuators [throttle, turbo wastegate, and variable camshaft timing (VCT)] is needed for engine torque control. This paper proposes a nonlinear model-based control scheme which combines separate, but coordinated, control modules. These modules are based on different control strategies: internal model control (IMC), model predictive control (MPC), and optimal control. It is shown how neural models can be used at different levels and included in the control modules to replace physical models, which are too complex to be embedded online, or to estimate unmeasured variables. The results obtained from two different test benches show the real-time applicability and good control performance of the proposed methods.

  10. Power of data mining methods to detect genetic associations and interactions.

    PubMed

    Molinaro, Annette M; Carriero, Nicholas; Bjornson, Robert; Hartge, Patricia; Rothman, Nathaniel; Chatterjee, Nilanjan

    2011-01-01

    Genetic association studies, thus far, have focused on the analysis of individual main effects of SNP markers. Nonetheless, there is a clear need for modeling epistasis or gene-gene interactions to better understand the biologic basis of existing associations. Tree-based methods have been widely studied as tools for building prediction models based on complex variable interactions. An understanding of the power of such methods for the discovery of genetic associations in the presence of complex interactions is of great importance. Here, we systematically evaluate the power of three leading algorithms: random forests (RF), Monte Carlo logic regression (MCLR), and multifactor dimensionality reduction (MDR). We use the algorithm-specific variable importance measures (VIMs) as statistics and employ permutation-based resampling to generate the null distribution and associated p values. The power of the three methods is assessed via simulation studies. Additionally, in a data analysis, we evaluate the associations between individual SNPs in pro-inflammatory and immunoregulatory genes and the risk of non-Hodgkin lymphoma. The power of RF is highest in all simulation models, that of MCLR is similar to that of RF in half of them, and that of MDR is consistently the lowest. Our study indicates that the power of RF VIMs is most reliable. However, in addition to tuning parameters, the power of RF is notably influenced by the type of variable (continuous vs. categorical) and the chosen VIM. Copyright © 2011 S. Karger AG, Basel.

  11. Climate and dengue transmission: evidence and implications.

    PubMed

    Morin, Cory W; Comrie, Andrew C; Ernst, Kacey

    2013-01-01

    Climate influences dengue ecology by affecting vector dynamics, agent development, and mosquito/human interactions. Although these relationships are known, the impact climate change will have on transmission is unclear. Climate-driven statistical and process-based models are being used to refine our knowledge of these relationships and predict the effects of projected climate change on dengue fever occurrence, but results have been inconsistent. We sought to identify major climatic influences on dengue virus ecology and to evaluate the ability of climate-based dengue models to describe associations between climate and dengue, simulate outbreaks, and project the impacts of climate change. We reviewed the evidence for direct and indirect relationships between climate and dengue generated from laboratory studies, field studies, and statistical analyses of associations between vectors, dengue fever incidence, and climate conditions. We assessed the potential contribution of climate-driven, process-based dengue models and provide suggestions to improve their performance. Relationships between climate variables and factors that influence dengue transmission are complex. A climate variable may increase dengue transmission potential through one aspect of the system while simultaneously decreasing transmission potential through another. This complexity may at least partly explain inconsistencies in statistical associations between dengue and climate. Process-based models can account for the complex dynamics but often omit important aspects of dengue ecology, notably virus development and host-species interactions. Synthesizing and applying current knowledge of climatic effects on all aspects of dengue virus ecology will help direct future research and enable better projections of climate change effects on dengue incidence.

  12. [The theory of the demographic transition as a reference for demo-economic models].

    PubMed

    Genne, M

    1981-01-01

    The aim of the theory of demographic transition (TTD) is to better understand the behavior and interrelationship of economic and demographic variables. There are 2 types of demo-economic models: 1) Malthusian models, which consider demographic variables as purely exogenous, and 2) neoclassical models, which consider demographic variables as strictly endogenous. While TTD can explore the behavior of exogenous and endogenous demographic variables, it can demonstrate neither the relation nor the order of causality among the various demographic and economic variables; it is simply the theoretical framework of a complex social and economic phenomenon that started in Europe in the 19th century and that today can be extended to developing countries. There are 4 stages in the TTD: the 1st stage is characterized by high levels of fertility and mortality; the 2nd stage by high fertility levels and declining mortality levels; the 3rd stage by declining fertility levels and low mortality levels; and the 4th stage by low fertility and mortality levels. The impact of economic variables on mortality and birth rates is most evident for mortality rates, which decline earlier and faster than birth rates. According to mathematical predictions, around the year 1987 mortality rates in developing countries will have reached the low level of European countries, and the growth rate will be only 1.5%. Although the validity of demo-economic models has not yet been established, TTD has clearly shown that social and economic development is the factor which influences demographic expansion.

  13. Managing complexity in simulations of land surface and near-surface processes

    DOE PAGES

    Coon, Ethan T.; Moulton, J. David; Painter, Scott L.

    2016-01-12

    Increasing computing power and the growing role of simulation in Earth systems science have led to an increase in the number and complexity of processes in modern simulators. We present a multiphysics framework that specifies interfaces for coupled processes and automates weak and strong coupling strategies to manage this complexity. Process management is enabled by viewing the system of equations as a tree, where individual equations are associated with leaf nodes and coupling strategies with internal nodes. A dynamically generated dependency graph connects a variable to its dependencies, streamlining and automating model evaluation, easing model development, and ensuring models are modular and flexible. Additionally, the dependency graph is used to ensure that data requirements are consistent between all processes in a given simulation. Here we discuss the design and implementation of these concepts within the Arcos framework, and demonstrate their use for verification testing and hypothesis evaluation in numerical experiments.
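
    A toy illustration of the dependency-graph idea (not the Arcos API): each variable declares its dependencies and an evaluator, and a topological order automates consistent evaluation. The variable names and formulas are invented for illustration:

        from graphlib import TopologicalSorter   # standard library, Python 3.9+

        deps = {
            "saturation":   ["pressure"],
            "conductivity": ["saturation", "porosity"],
            "flux":         ["conductivity", "pressure"],
        }
        evaluators = {
            "saturation":   lambda v: max(0.0, min(1.0, v["pressure"] / 101325.0)),
            "conductivity": lambda v: 1e-12 * v["porosity"] * v["saturation"],
            "flux":         lambda v: -v["conductivity"] * v["pressure"] / 10.0,
        }

        def evaluate(independent):
            values = dict(independent)                       # leaf (independent) variables
            for name in TopologicalSorter(deps).static_order():
                if name in evaluators:                       # dependencies are already computed
                    values[name] = evaluators[name](values)
            return values

        print(evaluate({"pressure": 90000.0, "porosity": 0.3}))

    The same ordering information also reveals when two processes disagree about a variable's dependencies, which is how a framework of this kind can check that data requirements are consistent across processes.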

  14. Assessing knowledge ambiguity in the creation of a model based on expert knowledge and comparison with the results of a landscape succession model in central Labrador. Chapter 10.

    Treesearch

    Frederik Doyon; Brian Sturtevant; Michael J. Papaik; Andrew Fall; Brian Miranda; Daniel D. Kneeshaw; Christian Messier; Marie-Josee Fortin; Patrick M.A. James

    2012-01-01

    Sustainable forest management (SFM) recognizes that the spatial and temporal patterns generated at different scales by natural landscape and stand dynamics processes should serve as a guide for managing the forest within its range of natural variability. Landscape simulation modeling is a powerful tool that can help encompass such complexity and support SFM planning....

  15. Quantitative predictions of streamflow variability in the Susquehanna River Basin

    NASA Astrophysics Data System (ADS)

    Alexander, R.; Boyer, E. W.; Leonard, L. N.; Duffy, C.; Schwarz, G. E.; Smith, R. A.

    2012-12-01

    Hydrologic researchers and water managers have increasingly sought an improved understanding of the major processes that control fluxes of water and solutes across diverse environmental settings and large spatial scales. Regional analyses of observed streamflow data have led to advances in our knowledge of relations among land use, climate, and streamflow, with methodologies ranging from statistical assessments of multiple monitoring sites to the regionalization of the parameters of catchment-scale mechanistic simulation models. However, gaps remain in our understanding of the best ways to transfer the knowledge of hydrologic response and governing processes among locations, including methods for regionalizing streamflow measurements and model predictions. We developed an approach to predict variations in streamflow using the SPARROW (SPAtially Referenced Regression On Watershed attributes) modeling infrastructure, with mechanistic functions, mass conservation constraints, and statistical estimation of regional and sub-regional parameters. We used the model to predict discharge in the Susquehanna River Basin (SRB) under varying hydrological regimes that are representative of contemporary flow conditions. The resulting basin-scale water balance describes mean monthly flows in stream reaches throughout the entire SRB (represented at a 1:100,000 scale using the National Hydrologic Data network), with water supply and demand components that are inclusive of a range of hydrologic, climatic, and cultural properties (e.g., precipitation, evapotranspiration, soil and groundwater storage, runoff, baseflow, water use). We compare alternative models of varying complexity that reflect differences in the number and types of explanatory variables and functional expressions as well as spatial and temporal variability in the model parameters. Statistical estimation of the models reveals the levels of complexity that can be uniquely identified, subject to the information content and uncertainties of the hydrologic and climate measurements. Assessment of spatial variations in the model parameters and predictions provides an improved understanding of how much of the hydrologic response to land use, climate, and other properties is unique to specific locations versus more universally observed across catchments of the SRB. This approach advances understanding of water cycle variability at any location throughout the stream network, as a function of both landscape characteristics (e.g., soils, vegetation, land use) and external forcings (e.g., precipitation quantity and frequency). These improvements in predictions of streamflow dynamics will advance the ability to predict spatial and temporal variability in key solutes, such as nutrients, and their delivery to the Chesapeake Bay.

  16. Unforced decadal fluctuations in a coupled model of the atmosphere and ocean mixed layer

    NASA Technical Reports Server (NTRS)

    Barnett, T. P.; Del Genio, A. D.; Ruedy, R. A.

    1992-01-01

    Global average temperature in a 100-year control run of a model used for greenhouse gas response simulations showed low-frequency natural variability comparable in magnitude to that observed over the last 100 years. The model variability was found to be barotropic in the atmosphere, and located in the tropical strip with largest values near the equator in the Pacific. The model variations were traced to complex, low-frequency interactions between the meridional sea surface temperature gradients in the eastern equatorial Pacific, clouds at both high and low levels, and features of the tropical atmospheric circulation. The variations in these and other model parameters appear to oscillate between two limiting climate states. The physical scenario accounting for the oscillations on decadal time scales is almost certainly not found in the real world on shorter time scales due to limited resolution and the omission of key physics (e.g., equatorial ocean dynamics) in the model. The real message is that models with dynamical limitations can still produce significant long-term variability. Only a thorough physical diagnosis of such simulations and comparisons with decadal-length data sets will allow one to decide if faith in the model results is, or is not, warranted.

  17. Permutation importance: a corrected feature importance measure.

    PubMed

    Altmann, André; Toloşi, Laura; Sander, Oliver; Lengauer, Thomas

    2010-05-15

    In life sciences, interpretability of machine learning models is as important as their prediction accuracy. Linear models are probably the most frequently used methods for assessing feature relevance, despite their relative inflexibility. However, in the past years effective estimators of feature relevance have been derived for highly complex or non-parametric models such as support vector machines and RandomForest (RF) models. Recently, it has been observed that RF models are biased in such a way that categorical variables with a large number of categories are preferred. In this work, we introduce a heuristic for normalizing feature importance measures that can correct the feature importance bias. The method is based on repeated permutations of the outcome vector for estimating the distribution of measured importance for each variable in a non-informative setting. The P-value of the observed importance provides a corrected measure of feature importance. We apply our method to simulated data and demonstrate that (i) non-informative predictors do not receive significant P-values, (ii) informative variables can successfully be recovered among non-informative variables and (iii) P-values computed with permutation importance (PIMP) are very helpful for deciding the significance of variables, and therefore improve model interpretability. Furthermore, PIMP was used to correct RF-based importance measures for two real-world case studies. We propose an improved RF model that uses the significant variables with respect to the PIMP measure and show that its prediction accuracy is superior to that of other existing models. R code for the method presented in this article is available at http://www.mpi-inf.mpg.de/~altmann/download/PIMP.R. Contact: altmann@mpi-inf.mpg.de, laura.tolosi@mpi-inf.mpg.de. Supplementary data are available at Bioinformatics online.
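
    A condensed sketch of the permutation scheme using scikit-learn's random forest on simulated data (the permutation count and data are illustrative, not those of the paper):

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 10))
        y = (X[:, 0] + 0.8 * X[:, 1] + rng.normal(0, 0.5, 300) > 0).astype(int)

        obs_imp = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y).feature_importances_

        n_perm = 50
        null_imp = np.zeros((n_perm, X.shape[1]))
        for i in range(n_perm):
            y_perm = rng.permutation(y)               # permuting y breaks every X-y association
            null_imp[i] = RandomForestClassifier(
                n_estimators=200, random_state=i).fit(X, y_perm).feature_importances_

        # P-value: how often the null importance reaches the observed importance.
        p_values = (1 + (null_imp >= obs_imp).sum(axis=0)) / (n_perm + 1)
        print(np.round(p_values, 3))

    Features 0 and 1 are informative by construction, so only they should receive small P-values, mirroring properties (i) and (ii) above.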

  18. Effect of promoter architecture on the cell-to-cell variability in gene expression.

    PubMed

    Sanchez, Alvaro; Garcia, Hernan G; Jones, Daniel; Phillips, Rob; Kondev, Jané

    2011-03-01

    According to recent experimental evidence, promoter architecture, defined by the number, strength and regulatory role of the operators that control transcription, plays a major role in determining the level of cell-to-cell variability in gene expression. These quantitative experiments call for a corresponding modeling effort that addresses the question of how changes in promoter architecture affect variability in gene expression in a systematic rather than case-by-case fashion. In this article we make such a systematic investigation, based on a microscopic model of gene regulation that incorporates stochastic effects. In particular, we show how operator strength and operator multiplicity affect this variability. We examine different modes of transcription factor binding to complex promoters (cooperative, independent, simultaneous) and how each of these affects the level of variability in transcriptional output from cell-to-cell. We propose that direct comparison between in vivo single-cell experiments and theoretical predictions for the moments of the probability distribution of mRNA number per cell can be used to test kinetic models of gene regulation. The emphasis of the discussion is on prokaryotic gene regulation, but our analysis can be extended to eukaryotic cells as well.
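
    A minimal Gillespie simulation of a two-state (ON/OFF) promoter, the kind of microscopic stochastic model referred to above; the rates are illustrative, and the Fano factor (variance over mean of the mRNA copy number) summarises cell-to-cell variability:

        import random

        def simulate(k_on=0.1, k_off=0.1, k_tx=5.0, k_deg=1.0, t_end=5000.0, seed=0):
            random.seed(seed)
            t, on, m, samples, next_sample = 0.0, False, 0, [], 0.0
            while t < t_end:
                rates = [k_on * (not on), k_off * on, k_tx * on, k_deg * m]
                total = sum(rates)
                dt = random.expovariate(total)
                while next_sample < t + dt and next_sample < t_end:
                    samples.append(m)                    # record copy number on a regular grid
                    next_sample += 1.0
                t += dt
                r = random.uniform(0.0, total)
                if r < rates[0]:
                    on = True                            # promoter switches ON
                elif r < rates[0] + rates[1]:
                    on = False                           # promoter switches OFF
                elif r < rates[0] + rates[1] + rates[2]:
                    m += 1                               # transcription event
                else:
                    m -= 1                               # mRNA degradation
            mean = sum(samples) / len(samples)
            var = sum((x - mean) ** 2 for x in samples) / len(samples)
            return mean, var / mean                      # mean mRNA count and Fano factor

        print(simulate())   # slow promoter switching gives a Fano factor well above 1

    Changing the switching rates or adding operators changes the moments of the mRNA distribution, which is the kind of architecture dependence the article analyses systematically.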

  19. Effect of Promoter Architecture on the Cell-to-Cell Variability in Gene Expression

    PubMed Central

    Sanchez, Alvaro; Garcia, Hernan G.; Jones, Daniel; Phillips, Rob; Kondev, Jané

    2011-01-01

    According to recent experimental evidence, promoter architecture, defined by the number, strength and regulatory role of the operators that control transcription, plays a major role in determining the level of cell-to-cell variability in gene expression. These quantitative experiments call for a corresponding modeling effort that addresses the question of how changes in promoter architecture affect variability in gene expression in a systematic rather than case-by-case fashion. In this article we make such a systematic investigation, based on a microscopic model of gene regulation that incorporates stochastic effects. In particular, we show how operator strength and operator multiplicity affect this variability. We examine different modes of transcription factor binding to complex promoters (cooperative, independent, simultaneous) and how each of these affects the level of variability in transcriptional output from cell-to-cell. We propose that direct comparison between in vivo single-cell experiments and theoretical predictions for the moments of the probability distribution of mRNA number per cell can be used to test kinetic models of gene regulation. The emphasis of the discussion is on prokaryotic gene regulation, but our analysis can be extended to eukaryotic cells as well. PMID:21390269

  20. Collection Evaluation for Interdisciplinary Fields: A Comprehensive Approach.

    ERIC Educational Resources Information Center

    Dobson, Cynthia; And Others

    1996-01-01

    Collection development for interdisciplinary areas is more complex than for traditionally well-defined disciplines, so new evaluation methods are needed. This article identifies variables in interdisciplinary fields and presents a model of their typical information components. Traditional use-centered and materials-centered evaluation methods…

  1. Spatial variability in denitrification rates in an Oregon tidal salt marsh

    EPA Science Inventory

    Modeling denitrification (DeN) is particularly challenging in tidal systems, which play a vital role in buffering adjacent coastal waters from nitrogen inputs. These systems are hydrologically and biogeochemically complex, varying on fine temporal and spatial scales. As part of a...

  2. Elementary Schools Where Students Succeed in Reading.

    ERIC Educational Resources Information Center

    Mosenthal, Jim; Lipson, Marjorie; Mekkelsen, Jane; Russ, Barbara; Sortino, Susan

    A number of studies have demonstrated the existence of "effective" schools in comparison to other "ineffective" models. To identify the contexts for success, a study examined "teacher instructional" and "school" variables to characterize the complex of factors that might be needed to achieve high levels of…

  3. An Enduring Rapidly Moving Storm as a Guide to Saturn's Equatorial Jet's Complex Structure

    NASA Technical Reports Server (NTRS)

    Sanchez-Lavega, A.; Garcia-Melendo, E.; Perez-Hoyos, S.; Hueso, R.; Wong, M. H.; Simon, A.; Sanz-Requena, J. F.; Antunano, A.; Barrado-Izagirre, N.; Garate-Lopez, I.

    2016-01-01

    Saturn has an intense and broad eastward equatorial jet with a complex three-dimensional structure mixed with time variability. The equatorial region experiences strong seasonal insolation variations enhanced by ring shadowing, and three of the six known giant planetary-scale storms have developed in it. These factors make Saturn's equator a natural laboratory to test models of jets in giant planets. Here we report on a bright equatorial atmospheric feature imaged in 2015 that moved steadily at a high speed of 450 m/s, not measured since 1980-1981, with other equatorial clouds moving within a wide range of velocities. Radiative transfer models show that these motions occur at three altitude levels within the upper haze and clouds. We find that the peak of the jet (latitudes 10°N to 10°S) suffers intense vertical shears reaching +2.5 m/s per km, two orders of magnitude higher than the meridional shears, and temporal variability above the 1 bar altitude level.

  4. Low-rank approximation in the numerical modeling of the Farley-Buneman instability in ionospheric plasma

    NASA Astrophysics Data System (ADS)

    Dolgov, S. V.; Smirnov, A. P.; Tyrtyshnikov, E. E.

    2014-04-01

    We consider numerical modeling of the Farley-Buneman instability in the Earth's ionospheric plasma. The ion behavior is governed by the kinetic Vlasov equation with the BGK collisional term in the four-dimensional phase space, and since the finite difference discretization on a tensor product grid is used, this equation becomes the most computationally challenging part of the scheme. To reduce the computational complexity and memory consumption, an adaptive model reduction using the low-rank separation of variables, namely the Tensor Train format, is employed. The approach was verified via a prototype MATLAB implementation. Numerical experiments demonstrate the possibility of efficient separation of space and velocity variables, resulting in the solution storage reduction by a factor of order tens.
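
    The separation-of-variables idea behind the Tensor Train reduction can be seen already on a two-dimensional slice f(x, v), where a truncated SVD stores two thin factors instead of the full grid; the distribution below is a synthetic stand-in, not the ionospheric solution:

        import numpy as np

        nx, nv = 256, 256
        x = np.linspace(0.0, 2.0 * np.pi, nx)
        v = np.linspace(-4.0, 4.0, nv)
        X, V = np.meshgrid(x, v, indexing="ij")
        f = np.exp(-V**2 / 2) * (1.0 + 0.1 * np.cos(X) * V)   # weakly perturbed Maxwellian

        U, s, Vt = np.linalg.svd(f, full_matrices=False)
        r = int(np.sum(s / s[0] > 1e-8))                       # numerical rank at this tolerance
        f_r = (U[:, :r] * s[:r]) @ Vt[:r]                      # rank-r reconstruction

        print("rank:", r,
              "relative error:", np.linalg.norm(f - f_r) / np.linalg.norm(f),
              "storage ratio:", (nx * nv) / (r * (nx + nv)))

    The Tensor Train format chains such low-rank splittings across all four phase-space dimensions, which is what makes the kinetic solve affordable in the scheme described above.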

  5. Model selection and averaging in the assessment of the drivers of household food waste to reduce the probability of false positives.

    PubMed

    Grainger, Matthew James; Aramyan, Lusine; Piras, Simone; Quested, Thomas Edward; Righi, Simone; Setti, Marco; Vittuari, Matteo; Stewart, Gavin Bruce

    2018-01-01

    Food waste from households contributes the greatest proportion to total food waste in developed countries. Therefore, food waste reduction requires an understanding of the socio-economic (contextual and behavioural) factors that lead to its generation within the household. Addressing such a complex subject calls for sound methodological approaches that until now have been conditioned by the large number of factors involved in waste generation, by the lack of a recognised definition, and by limited available data. This work contributes to food waste generation literature by using one of the largest available datasets that includes data on the objective amount of avoidable household food waste, along with information on a series of socio-economic factors. In order to address one aspect of the complexity of the problem, machine learning algorithms (random forests and Boruta) for variable selection integrated with linear modelling, model selection and averaging are implemented. Model selection addresses model structural uncertainty, which is not routinely considered in assessments of food waste in literature. The main drivers of food waste in the home selected in the most parsimonious models include household size, the presence of fussy eaters, employment status, home ownership status, and the local authority. Results, regardless of which variable set the models are run on, point toward large households as being a key target element for food waste reduction interventions.
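
    A hedged sketch of the two-stage strategy (random-forest importance screening as a stand-in for Boruta, followed by all-subsets linear models combined with Akaike weights); the variable names and data are synthetic placeholders, not the household survey:

        import numpy as np
        from itertools import combinations
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)
        n = 400
        names = ["household_size", "fussy_eater", "employment", "home_owner", "income", "age"]
        X = rng.normal(size=(n, len(names)))
        y = 1.2 * X[:, 0] + 0.6 * X[:, 1] + rng.normal(0.0, 1.0, n)   # toy waste outcome

        imp = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y).feature_importances_
        keep = [i for i in range(len(names)) if imp[i] > imp.mean()]  # crude screening rule

        def aic(cols):
            Z = np.column_stack([np.ones(n), X[:, cols]])
            beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
            rss = np.sum((y - Z @ beta) ** 2)
            return n * np.log(rss / n) + 2 * (len(cols) + 1)

        models = [list(c) for k in range(1, len(keep) + 1) for c in combinations(keep, k)]
        aics = np.array([aic(c) for c in models])
        weights = np.exp(-0.5 * (aics - aics.min())); weights /= weights.sum()  # Akaike weights
        best = models[int(np.argmin(aics))]
        print("screened:", [names[i] for i in keep])
        print("best model:", [names[i] for i in best], "weight:", round(float(weights.max()), 2))

    Reporting Akaike weights rather than a single selected model is one way of addressing the structural uncertainty emphasised in the abstract.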

  6. Fitting direct covariance structures by the MSTRUCT modeling language of the CALIS procedure.

    PubMed

    Yung, Yiu-Fai; Browne, Michael W; Zhang, Wei

    2015-02-01

    This paper demonstrates the usefulness and flexibility of the general structural equation modelling (SEM) approach to fitting direct covariance patterns or structures (as opposed to fitting implied covariance structures from functional relationships among variables). In particular, the MSTRUCT modelling language (or syntax) of the CALIS procedure (SAS/STAT version 9.22 or later: SAS Institute, 2010) is used to illustrate the SEM approach. The MSTRUCT modelling language supports a direct covariance pattern specification of each covariance element. It also supports the input of additional independent and dependent parameters. Model tests, fit statistics, estimates, and their standard errors are then produced under the general SEM framework. By using numerical and computational examples, the following tests of basic covariance patterns are illustrated: sphericity, compound symmetry, and multiple-group covariance patterns. Specification and testing of two complex correlation structures, the circumplex pattern and the composite direct product models with or without composite errors and scales, are also illustrated by the MSTRUCT syntax. It is concluded that the SEM approach offers a general and flexible modelling of direct covariance and correlation patterns. In conjunction with the use of SAS macros, the MSTRUCT syntax provides an easy-to-use interface for specifying and fitting complex covariance and correlation structures, even when the number of variables or parameters becomes large. © 2014 The British Psychological Society.

  7. Model selection and averaging in the assessment of the drivers of household food waste to reduce the probability of false positives

    PubMed Central

    Aramyan, Lusine; Piras, Simone; Quested, Thomas Edward; Righi, Simone; Setti, Marco; Vittuari, Matteo; Stewart, Gavin Bruce

    2018-01-01

    Food waste from households contributes the greatest proportion to total food waste in developed countries. Therefore, food waste reduction requires an understanding of the socio-economic (contextual and behavioural) factors that lead to its generation within the household. Addressing such a complex subject calls for sound methodological approaches that until now have been conditioned by the large number of factors involved in waste generation, by the lack of a recognised definition, and by limited available data. This work contributes to the food waste generation literature by using one of the largest available datasets that includes data on the objective amount of avoidable household food waste, along with information on a series of socio-economic factors. In order to address one aspect of the complexity of the problem, machine learning algorithms (random forests and Boruta) for variable selection are integrated with linear modelling, model selection and averaging. Model selection addresses model structural uncertainty, which is not routinely considered in assessments of food waste in the literature. The main drivers of food waste in the home selected in the most parsimonious models include household size, the presence of fussy eaters, employment status, home ownership status, and the local authority. Results, regardless of which variable set the models are run on, point toward large households as being a key target for food waste reduction interventions. PMID:29389949
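
    For readers who want to reproduce the variable-selection step, the following is a minimal sketch in the spirit of the random forest plus Boruta screening described above; a shadow-feature screen stands in for the reference Boruta implementation. The data frame `waste`, the response column `avoidable_waste_kg`, and numerically encoded predictors are assumptions for illustration, not the study's dataset.

    ```python
    # Sketch of Boruta-style variable screening followed by a linear model.
    # Assumes a pandas DataFrame `waste` with the response in "avoidable_waste_kg"
    # and numerically encoded socio-economic candidate predictors elsewhere.
    import numpy as np
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor
    import statsmodels.api as sm

    def shadow_screen(X: pd.DataFrame, y: pd.Series, n_trees: int = 500, seed: int = 0):
        """Keep predictors whose importance beats the best permuted 'shadow' copy."""
        rng = np.random.default_rng(seed)
        shadows = X.apply(lambda col: rng.permutation(col.values))   # permuted copies
        shadows.columns = ["shadow_" + c for c in X.columns]
        both = pd.concat([X, shadows], axis=1)
        rf = RandomForestRegressor(n_estimators=n_trees, random_state=seed).fit(both, y)
        imp = pd.Series(rf.feature_importances_, index=both.columns)
        threshold = imp[shadows.columns].max()
        return [c for c in X.columns if imp[c] > threshold]

    # selected = shadow_screen(waste.drop(columns="avoidable_waste_kg"),
    #                          waste["avoidable_waste_kg"])
    # ols = sm.OLS(waste["avoidable_waste_kg"],
    #              sm.add_constant(waste[selected])).fit()   # linear model on kept drivers
    ```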

  8. High-amplitude fluctuations and alternative dynamical states of midges in Lake Myvatn.

    PubMed

    Ives, Anthony R; Einarsson, Arni; Jansen, Vincent A A; Gardarsson, Arnthor

    2008-03-06

    Complex dynamics are often shown by simple ecological models and have been clearly demonstrated in laboratory and natural systems. Yet many classes of theoretically possible dynamics are still poorly documented in nature. Here we study long-term time-series data of a midge, Tanytarsus gracilentus (Diptera: Chironomidae), in Lake Myvatn, Iceland. The midge undergoes density fluctuations of almost six orders of magnitude. Rather than regular cycles, however, these fluctuations have irregular periods of 4-7 years, indicating complex dynamics. We fit three consumer-resource models capable of qualitatively distinct dynamics to the data. Of these, the best-fitting model shows alternative dynamical states in the absence of environmental variability; depending on the initial midge densities, the model shows either fluctuations around a fixed point or high-amplitude cycles. This explains the observed complex population dynamics: high-amplitude but irregular fluctuations occur because stochastic variability causes the dynamics to switch between domains of attraction to the alternative states. In the model, the amplitude of fluctuations depends strongly on minute resource subsidies into the midge habitat. These resource subsidies may be sensitive to human-caused changes in the hydrology of the lake, with human impacts such as dredging leading to higher-amplitude fluctuations. Tanytarsus gracilentus is a key component of the Myvatn ecosystem, representing two-thirds of the secondary productivity of the lake and providing vital food resources to fish and to breeding bird populations. Therefore the high-amplitude, irregular fluctuations in midge densities generated by alternative dynamical states dominate much of the ecology of the lake.

  9. Modelling the meteorological forest fire niche in heterogeneous pyrologic conditions.

    PubMed

    De Angelis, Antonella; Ricotta, Carlo; Conedera, Marco; Pezzatti, Gianni Boris

    2015-01-01

    Fire regimes are strongly related to weather conditions that directly and indirectly influence fire ignition and propagation. Identifying the most important meteorological fire drivers is thus fundamental for daily fire risk forecasting. In this context, several fire weather indices have been developed, focussing mainly on fire-related local weather conditions and fuel characteristics. The specificity of the conditions for which fire danger indices are developed makes their direct transfer and applicability problematic in different areas or with other fuel types. In this paper we used the low-to-intermediate fire-prone region of Canton Ticino as a case study to develop a new daily fire danger index by implementing a niche modelling approach (Maxent). In order to identify the most suitable weather conditions for fires, different combinations of input variables were tested (meteorological variables, existing fire danger indices or a combination of both). Our findings demonstrate that such combinations of input variables increase the predictive power of the resulting index and, surprisingly, that even using meteorological variables alone allows similar or better performance than using the complex Canadian Fire Weather Index (FWI). Furthermore, the niche modelling approach based on Maxent resulted in slightly improved model performance and in a reduced number of selected variables with respect to the classical logistic approach. Factors influencing final model robustness were the number of fire events considered and the specificity of the meteorological conditions leading to fire ignition.

  10. Modelling the Meteorological Forest Fire Niche in Heterogeneous Pyrologic Conditions

    PubMed Central

    De Angelis, Antonella; Ricotta, Carlo; Conedera, Marco; Pezzatti, Gianni Boris

    2015-01-01

    Fire regimes are strongly related to weather conditions that directly and indirectly influence fire ignition and propagation. Identifying the most important meteorological fire drivers is thus fundamental for daily fire risk forecasting. In this context, several fire weather indices have been developed, focussing mainly on fire-related local weather conditions and fuel characteristics. The specificity of the conditions for which fire danger indices are developed makes their direct transfer and applicability problematic in different areas or with other fuel types. In this paper we used the low-to-intermediate fire-prone region of Canton Ticino as a case study to develop a new daily fire danger index by implementing a niche modelling approach (Maxent). In order to identify the most suitable weather conditions for fires, different combinations of input variables were tested (meteorological variables, existing fire danger indices or a combination of both). Our findings demonstrate that such combinations of input variables increase the predictive power of the resulting index and, surprisingly, that even using meteorological variables alone allows similar or better performance than using the complex Canadian Fire Weather Index (FWI). Furthermore, the niche modelling approach based on Maxent resulted in slightly improved model performance and in a reduced number of selected variables with respect to the classical logistic approach. Factors influencing final model robustness were the number of fire events considered and the specificity of the meteorological conditions leading to fire ignition. PMID:25679957

  11. Spatial Models for Prediction and Early Warning of Aedes aegypti Proliferation from Data on Climate Change and Variability in Cuba.

    PubMed

    Ortiz, Paulo L; Rivero, Alina; Linares, Yzenia; Pérez, Alina; Vázquez, Juan R

    2015-04-01

    Climate variability, the primary expression of climate change, is one of the most important environmental problems affecting human health, particularly through vector-borne diseases. Despite research efforts worldwide, there are few studies addressing the use of information on climate variability for prevention and early warning of vector-borne infectious diseases. The objective was to show the utility of climate information for vector surveillance by developing spatial models using an entomological indicator and information on predicted climate variability in Cuba, to provide early warning of increased risk of dengue transmission. An ecological study was carried out using retrospective and prospective analyses of time series combined with spatial statistics. Several entomological and climatic indicators were considered using the complex Bultó indices -1 and -2. Moran's I spatial autocorrelation coefficient, specified for a matrix of neighbors with a radius of 20 km, was used to identify the spatial structure. Spatial structure simulation was based on simultaneous autoregressive and conditional autoregressive models; agreement between predicted and observed values for the number of Aedes aegypti foci was determined by the concordance index Di and skill factor Bi. Spatial and temporal distributions of populations of Aedes aegypti were obtained. Models for describing, simulating and predicting spatial patterns of Aedes aegypti populations associated with climate variability patterns were put forward. The ranges of climate variability affecting Aedes aegypti populations were identified. Forecast maps were generated at the municipal level. Using the Bultó indices of climate variability, it is possible to construct spatial models for predicting increased Aedes aegypti populations in Cuba. At 20 x 20 km resolution, the models are able to provide warning of potential changes in vector populations in rainy and dry seasons and by month, thus demonstrating the usefulness of climate information for epidemiological surveillance.
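
    As an illustration of the spatial-structure step, here is a minimal sketch of Moran's I for a binary neighbour matrix defined by a 20 km radius, as described above. The coordinates and focus counts in the commented usage are hypothetical, not the study's data.

    ```python
    # Minimal Moran's I with a binary neighbour matrix built from a distance radius.
    import numpy as np

    def morans_i(values: np.ndarray, coords: np.ndarray, radius_km: float = 20.0) -> float:
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        w = ((d > 0) & (d <= radius_km)).astype(float)      # neighbours within the radius
        z = values - values.mean()
        s0 = w.sum()
        return len(values) / s0 * (w * np.outer(z, z)).sum() / (z @ z)

    # foci  = np.array([12, 30, 8, 45, 22])                  # foci per unit (hypothetical)
    # xy_km = np.array([[0, 0], [10, 5], [25, 0], [12, 18], [30, 22]])
    # print(morans_i(foci, xy_km))
    ```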

  12. An artificial neural network improves prediction of observed survival in patients with laryngeal squamous carcinoma.

    PubMed

    Jones, Andrew S; Taktak, Azzam G F; Helliwell, Timothy R; Fenton, John E; Birchall, Martin A; Husband, David J; Fisher, Anthony C

    2006-06-01

    The accepted method of modelling and predicting failure/survival, Cox's proportional hazards model, is theoretically inferior to neural network derived models for analysing highly complex systems with large datasets. We performed a blinded comparison of the neural network versus the Cox model in predicting survival, utilising data from 873 treated patients with laryngeal cancer. These were divided randomly and equally into a training set and a study set, and the Cox and neural network models were applied in turn. Data were then divided into seven sets of binary covariates and the analysis repeated. Overall survival was not significantly different on Kaplan-Meier plots, or with either test model. Although the network produced qualitatively similar results to the Cox model, it was significantly more sensitive to differences in survival curves for age and N stage. We propose that neural networks are capable of prediction in systems involving complex interactions between variables and non-linearity.

  13. Foundational Principles for Large-Scale Inference: Illustrations Through Correlation Mining

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hero, Alfred O.; Rajaratnam, Bala

    When can reliable inference be drawn in the ‘‘Big Data’’ context? This article presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large-scale inference. In large-scale data applications like genomics, connectomics, and eco-informatics, the data set is often variable rich but sample starved: a regime where the number n of acquired samples (statistical replicates) is far fewer than the number p of observed variables (genes, neurons, voxels, or chemical constituents). Much of recent work has focused on understanding the computational complexity of proposed methods for ‘‘Big Data.’’ Sample complexity, however, has received relatively less attention, especially in the setting when the sample size n is fixed, and the dimension p grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime where both variable dimension and sample size go to infinity at comparable rates; and 3) the purely high-dimensional asymptotic regime where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche but only the latter regime applies to exa-scale data dimension. We illustrate this high-dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that are of interest. Correlation mining arises in numerous applications and subsumes the regression context as a special case. We demonstrate various regimes of correlation mining based on the unifying perspective of high-dimensional learning rates and sample complexity for different structured covariance models and different inference tasks.
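
    A toy numerical illustration of the sample-starved regime discussed above (fixed n, growing p): even when all true correlations are zero, the largest sample correlation grows with p, which is why sample-complexity analysis matters. The sizes below are arbitrary.

    ```python
    # With independent Gaussian variables and a fixed number of samples, the largest
    # absolute sample correlation grows as the number of variables increases.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 20                                   # fixed number of samples
    for p in (10, 100, 1000, 2000):          # growing number of variables
        x = rng.standard_normal((n, p))
        r = np.corrcoef(x, rowvar=False)     # p x p sample correlation matrix
        np.fill_diagonal(r, 0.0)
        print(f"p={p:>5}: max |sample correlation| = {np.abs(r).max():.2f}")
    ```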

  14. Foundational Principles for Large-Scale Inference: Illustrations Through Correlation Mining

    PubMed Central

    Hero, Alfred O.; Rajaratnam, Bala

    2015-01-01

    When can reliable inference be drawn in the “Big Data” context? This paper presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large scale inference. In large scale data applications like genomics, connectomics, and eco-informatics the dataset is often variable-rich but sample-starved: a regime where the number n of acquired samples (statistical replicates) is far fewer than the number p of observed variables (genes, neurons, voxels, or chemical constituents). Much of recent work has focused on understanding the computational complexity of proposed methods for “Big Data”. Sample complexity however has received relatively less attention, especially in the setting when the sample size n is fixed, and the dimension p grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime where both variable dimension and sample size go to infinity at comparable rates; 3) the purely high dimensional asymptotic regime where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche but only the latter regime applies to exa-scale data dimension. We illustrate this high dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that are of interest. Correlation mining arises in numerous applications and subsumes the regression context as a special case. We demonstrate various regimes of correlation mining based on the unifying perspective of high dimensional learning rates and sample complexity for different structured covariance models and different inference tasks. PMID:27087700

  15. Foundational Principles for Large-Scale Inference: Illustrations Through Correlation Mining

    DOE PAGES

    Hero, Alfred O.; Rajaratnam, Bala

    2015-12-09

    When can reliable inference be drawn in the ‘‘Big Data’’ context? This article presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large-scale inference. In large-scale data applications like genomics, connectomics, and eco-informatics, the data set is often variable rich but sample starved: a regime where the number n of acquired samples (statistical replicates) is far fewer than the number p of observed variables (genes, neurons, voxels, or chemical constituents). Much of recent work has focused on understanding the computational complexity of proposed methods for ‘‘Big Data.’’ Sample complexity, however, has received relatively less attention, especially in the setting when the sample size n is fixed, and the dimension p grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime where both variable dimension and sample size go to infinity at comparable rates; and 3) the purely high-dimensional asymptotic regime where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche but only the latter regime applies to exa-scale data dimension. We illustrate this high-dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that are of interest. Correlation mining arises in numerous applications and subsumes the regression context as a special case. We demonstrate various regimes of correlation mining based on the unifying perspective of high-dimensional learning rates and sample complexity for different structured covariance models and different inference tasks.

  16. Human systems dynamics: Toward a computational model

    NASA Astrophysics Data System (ADS)

    Eoyang, Glenda H.

    2012-09-01

    A robust and reliable computational model of complex human systems dynamics could support advancements in theory and practice for social systems at all levels, from intrapersonal experience to global politics and economics. Models of human interactions have evolved from traditional, Newtonian systems assumptions, which served a variety of practical and theoretical needs of the past. Another class of models has been inspired and informed by models and methods from nonlinear dynamics, chaos, and complexity science. None of the existing models, however, is able to represent the open, high dimension, and nonlinear self-organizing dynamics of social systems. An effective model will represent interactions at multiple levels to generate emergent patterns of social and political life of individuals and groups. Existing models and modeling methods are considered and assessed against characteristic pattern-forming processes in observed and experienced phenomena of human systems. A conceptual model, CDE Model, based on the conditions for self-organizing in human systems, is explored as an alternative to existing models and methods. While the new model overcomes the limitations of previous models, it also provides an explanatory base and foundation for prospective analysis to inform real-time meaning making and action taking in response to complex conditions in the real world. An invitation is extended to readers to engage in developing a computational model that incorporates the assumptions, meta-variables, and relationships of this open, high dimension, and nonlinear conceptual model of the complex dynamics of human systems.

  17. Integrating Map Algebra and Statistical Modeling for Spatio- Temporal Analysis of Monthly Mean Daily Incident Photosynthetically Active Radiation (PAR) over a Complex Terrain.

    PubMed

    Evrendilek, Fatih

    2007-12-12

    This study aims at quantifying spatio-temporal dynamics of monthly mean daily incident photosynthetically active radiation (PAR) over a vast and complex terrain such as Turkey. The spatial interpolation method of universal kriging, and the combination of multiple linear regression (MLR) models and map algebra techniques, were implemented to generate surface maps of PAR with a grid resolution of 500 x 500 m as a function of five geographical and 14 climatic variables. Performance of the geostatistical and MLR models was compared using mean prediction error (MPE), root-mean-square prediction error (RMSPE), average standard prediction error (ASE), mean standardized prediction error (MSPE), root-mean-square standardized prediction error (RMSSPE), and adjusted coefficient of determination (R² adj.). The best-fit MLR- and universal kriging-generated models of monthly mean daily PAR were validated against an independent 37-year observed dataset of 35 climate stations, derived from 160 stations across Turkey, by the Jackknifing method. The spatial variability patterns of monthly mean daily incident PAR were more accurately reflected in the surface maps created by the MLR-based models than in those created by the universal kriging method, in particular for spring (May) and autumn (November). The MLR-based spatial interpolation algorithms of PAR described in this study indicated the significance of the multifactor approach to understanding and mapping spatio-temporal dynamics of PAR for a complex terrain over meso-scales.

  18. Dynamic Target Acquisition: Empirical Models of Operator Performance.

    DTIC Science & Technology

    1980-08-01

    [Tabular residue from the source document: ordered mean acquisition values for the Signature x Scene Complexity and Signature x Speed interactions at 30,000 ft initial slant range (active and inactive FLIR targets); the full table cannot be reconstructed from this extract.]

  19. A structural equation model of soil metal bioavailability to earthworms: confronting causal theory and observations using a laboratory exposure to field-contaminated soils.

    PubMed

    Beaumelle, Léa; Vile, Denis; Lamy, Isabelle; Vandenbulcke, Franck; Gimbert, Frédéric; Hedde, Mickaël

    2016-11-01

    Structural equation models (SEM) are increasingly used in ecology as multivariate analyses that can represent theoretical variables and address complex sets of hypotheses. Here we demonstrate the interest of SEM in ecotoxicology, more precisely to test the three-step concept of metal bioavailability to earthworms. The SEM modeled the three-step causal chain between environmental availability, environmental bioavailability and toxicological bioavailability. In the model, each step is an unmeasured (latent) variable reflected by several observed variables. In an exposure experiment designed specifically to test this SEM for Cd, Pb and Zn, Aporrectodea caliginosa was exposed to 31 agricultural field-contaminated soils. Chemical and biological measurements used included CaCl2-extractable metal concentrations in soils, free ion concentration in soil solution as predicted by a geochemical model, dissolved metal concentration as predicted by a semi-mechanistic model, internal metal concentrations in total earthworms and in subcellular fractions, and several biomarkers. The observations verified the causal definition of Cd and Pb bioavailability in the SEM, but not for Zn. Several indicators consistently reflected the hypothetical causal definition and could thus be pertinent measurements of Cd and Pb bioavailability to earthworms in field-contaminated soils. SEM highlights that the metals present in the soil solution and easily extractable are not the main source of available metals for earthworms. This study further highlights SEM as a powerful tool that can handle natural ecosystem complexity, thus contributing to the paradigm shift in ecotoxicology from a bottom-up to a top-down approach. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. Modeling and enhanced sampling of molecular systems with smooth and nonlinear data-driven collective variables

    NASA Astrophysics Data System (ADS)

    Hashemian, Behrooz; Millán, Daniel; Arroyo, Marino

    2013-12-01

    Collective variables (CVs) are low-dimensional representations of the state of a complex system, which help us rationalize molecular conformations and sample free energy landscapes with molecular dynamics simulations. Given their importance, there is need for systematic methods that effectively identify CVs for complex systems. In recent years, nonlinear manifold learning has shown its ability to automatically characterize molecular collective behavior. Unfortunately, these methods fail to provide a differentiable function mapping high-dimensional configurations to their low-dimensional representation, as required in enhanced sampling methods. We introduce a methodology that, starting from an ensemble representative of molecular flexibility, builds smooth and nonlinear data-driven collective variables (SandCV) from the output of nonlinear manifold learning algorithms. We demonstrate the method with a standard benchmark molecule, alanine dipeptide, and show how it can be non-intrusively combined with off-the-shelf enhanced sampling methods, here the adaptive biasing force method. We illustrate how enhanced sampling simulations with SandCV can explore regions that were poorly sampled in the original molecular ensemble. We further explore the transferability of SandCV from a simpler system, alanine dipeptide in vacuum, to a more complex system, alanine dipeptide in explicit water.

  1. Modeling and enhanced sampling of molecular systems with smooth and nonlinear data-driven collective variables.

    PubMed

    Hashemian, Behrooz; Millán, Daniel; Arroyo, Marino

    2013-12-07

    Collective variables (CVs) are low-dimensional representations of the state of a complex system, which help us rationalize molecular conformations and sample free energy landscapes with molecular dynamics simulations. Given their importance, there is need for systematic methods that effectively identify CVs for complex systems. In recent years, nonlinear manifold learning has shown its ability to automatically characterize molecular collective behavior. Unfortunately, these methods fail to provide a differentiable function mapping high-dimensional configurations to their low-dimensional representation, as required in enhanced sampling methods. We introduce a methodology that, starting from an ensemble representative of molecular flexibility, builds smooth and nonlinear data-driven collective variables (SandCV) from the output of nonlinear manifold learning algorithms. We demonstrate the method with a standard benchmark molecule, alanine dipeptide, and show how it can be non-intrusively combined with off-the-shelf enhanced sampling methods, here the adaptive biasing force method. We illustrate how enhanced sampling simulations with SandCV can explore regions that were poorly sampled in the original molecular ensemble. We further explore the transferability of SandCV from a simpler system, alanine dipeptide in vacuum, to a more complex system, alanine dipeptide in explicit water.
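
    The following is not the authors' SandCV construction, only a minimal stand-in for the idea: learn a nonlinear two-dimensional embedding of configurations (here with Isomap) and wrap it in a smooth, differentiable map (here a radial-basis interpolant) so it can be evaluated at new configurations as a collective variable. The array `configs` of aligned molecular coordinates is an assumed input.

    ```python
    # Stand-in for a smooth, data-driven collective variable: nonlinear manifold
    # learning followed by a smooth interpolant from configurations to the embedding.
    import numpy as np
    from sklearn.manifold import Isomap
    from scipy.interpolate import RBFInterpolator

    def build_smooth_cv(configs: np.ndarray, n_neighbors: int = 12):
        embedding = Isomap(n_components=2, n_neighbors=n_neighbors).fit_transform(configs)
        cv_map = RBFInterpolator(configs, embedding,
                                 kernel="thin_plate_spline", smoothing=1e-3)
        return cv_map            # cv_map(new_configs) -> smooth 2-D collective variables

    # cvs = build_smooth_cv(configs)(new_configs)   # evaluate CVs for unseen configurations
    ```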

  2. Modeling Bivariate Change in Individual Differences: Prospective Associations Between Personality and Life Satisfaction.

    PubMed

    Hounkpatin, Hilda Osafo; Boyce, Christopher J; Dunn, Graham; Wood, Alex M

    2017-09-18

    A number of structural equation models have been developed to examine change in 1 variable or the longitudinal association between 2 variables. The most common of these are the latent growth model, the autoregressive cross-lagged model, the autoregressive latent trajectory model, and the latent change score model. The authors first overview each of these models through evaluating their different assumptions surrounding the nature of change and how these assumptions may result in different data interpretations. They then, to elucidate these issues in an empirical example, examine the longitudinal association between personality traits and life satisfaction. In a representative Dutch sample (N = 8,320), with participants providing data on both personality and life satisfaction measures every 2 years over an 8-year period, the authors reproduce findings from previous research. However, some of the structural equation models overviewed have not previously been applied to the personality-life satisfaction relation. The extended empirical examination suggests intraindividual changes in life satisfaction predict subsequent intraindividual changes in personality traits. The availability of data sets with 3 or more assessment waves allows the application of more advanced structural equation models such as the autoregressive latent trajectory or the extended latent change score model, which accounts for the complex dynamic nature of change processes and allows stronger inferences on the nature of the association between variables. However, the choice of model should be determined by theories of change processes in the variables being studied. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  3. Modelling Inter-relationships among water, governance, human development variables in developing countries with Bayesian networks.

    NASA Astrophysics Data System (ADS)

    Dondeynaz, C.; Lopez-Puga, J.; Carmona-Moreno, C.

    2012-04-01

    Improving Water and Sanitation Services (WSS) is a complex, interdisciplinary issue that requires collaboration and coordination across different sectors (environment, health, economic activities, governance, and international cooperation). This inter-dependency has been recognised with the adoption of the "Integrated Water Resources Management" principles, which push for the integration of the various dimensions involved in WSS delivery to ensure efficient and sustainable management. Understanding these interrelations is crucial for decision makers in the water sector, in particular in developing countries where WSS still represent an important lever for livelihood improvement. In this framework, the Joint Research Centre of the European Commission has developed a coherent database (WatSan4Dev database) containing 29 indicators from environmental, socio-economic, governance and financial aid flows data focusing on developing countries (Celine et al, 2011 under publication). The aim of this work is to model the WatSan4Dev dataset using probabilistic models to identify the key variables influencing or being influenced by the water supply and sanitation access levels. Bayesian Network Models are suitable to map the conditional dependencies between variables and also allow ordering variables by level of influence on the dependent variable. Separate models have been built for water supply and for sanitation because of their different behaviour. The models are validated not only against statistical criteria but also against scientific knowledge and the literature. A two-step approach has been adopted to build the structure of the model: a Bayesian network is first built for each thematic cluster of variables (e.g. governance, agricultural pressure, or human development), keeping a detailed level for later interpretation. A global model is then built from the significant indicators of each previously modelled cluster. The structure of the relationships between variables is set a priori according to literature and/or experience in the field (expert knowledge). Statistical validation is based on the classification error rate and the significance of the variables. Sensitivity analysis has also been performed to characterise the relative influence of every single variable in the model. Once validated, the models allow the impact of each variable on water supply or sanitation behaviour to be estimated, providing a useful means to test scenarios and predict variable behaviour. The choices made, methods and description of the various models, for each cluster as well as the global model for water supply and sanitation, will be presented. Key results and interpretation of the relationships depicted by the models will be detailed during the conference.
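
    As a highly simplified stand-in for one node of such a Bayesian network, the sketch below estimates the conditional distribution of a discretised water-access indicator given two parent indicators from synthetic data. The variable names and the synthetic dependence are illustrative assumptions, not the WatSan4Dev schema or results.

    ```python
    # Estimate a conditional probability table P(water level | governance level, GDP level)
    # from synthetic country-level data; real Bayesian network software would also learn
    # the graph structure and handle many more nodes.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    n = 300
    df = pd.DataFrame({
        "governance": rng.normal(size=n),
        "gdp_per_capita": rng.lognormal(mean=8, sigma=1, size=n),
    })
    # Synthetic dependence: better governance and higher GDP raise water access.
    df["water_access"] = (0.4 * df["governance"]
                          + 0.3 * np.log(df["gdp_per_capita"])
                          + rng.normal(scale=0.5, size=n))
    for col in ["governance", "gdp_per_capita", "water_access"]:
        df[col + "_lvl"] = pd.qcut(df[col], 3, labels=["low", "mid", "high"])

    cpt = (df.groupby(["governance_lvl", "gdp_per_capita_lvl"], observed=True)["water_access_lvl"]
             .value_counts(normalize=True)
             .unstack(fill_value=0.0))          # conditional probability table
    print(cpt.round(2))
    ```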

  4. An improved switching converter model using discrete and average techniques

    NASA Technical Reports Server (NTRS)

    Shortt, D. J.; Lee, F. C.

    1982-01-01

    The nonlinear modeling and analysis of dc-dc converters has been done by averaging and discrete-sampling techniques. The averaging technique is simple, but inaccurate as the modulation frequencies approach the theoretical limit of one-half the switching frequency. The discrete technique is accurate even at high frequencies, but is very complex and cumbersome. An improved model is developed by combining the aforementioned techniques. This new model is easy to implement in circuit and state variable forms and is accurate to the theoretical limit.
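
    For context, the standard state-space averaged model that such work builds on can be written as follows; this is the textbook baseline, valid for modulation frequencies well below half the switching frequency, not the combined discrete/average model of the paper:

    ```latex
    % State-space averaged model of a converter toggling between two linear networks
    % (A_1, B_1) and (A_2, B_2) with duty ratio d.
    \dot{x}(t) = \bigl[d\,A_1 + (1-d)\,A_2\bigr]x(t) + \bigl[d\,B_1 + (1-d)\,B_2\bigr]u(t),
    \qquad f_{\text{mod}} \ll \tfrac{1}{2} f_{\text{sw}} .
    ```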

  5. The Health Belief Model as an Explanatory Framework in Communication Research: Exploring Parallel, Serial, and Moderated Mediation

    PubMed Central

    Jones, Christina L.; Jensen, Jakob D.; Scherr, Courtney L.; Brown, Natasha R.; Christy, Katheryn; Weaver, Jeremy

    2015-01-01

    The Health Belief Model (HBM) posits that messages will achieve optimal behavior change if they successfully target perceived barriers, benefits, self-efficacy, and threat. While the model seems to be an ideal explanatory framework for communication research, theoretical limitations have limited its use in the field. Notably, variable ordering is currently undefined in the HBM. Thus, it is unclear whether constructs mediate relationships comparably (parallel mediation), in sequence (serial mediation), or in tandem with a moderator (moderated mediation). To investigate variable ordering, adults (N = 1,377) completed a survey in the aftermath of an 8-month flu vaccine campaign grounded in the HBM. Exposure to the campaign was positively related to vaccination behavior. Statistical evaluation supported a model where the indirect effect of exposure on behavior through perceived barriers and threat was moderated by self-efficacy (moderated mediation). Perceived barriers and benefits also formed a serial mediation chain. The results indicate that variable ordering in the Health Belief Model may be complex, may help to explain conflicting results of the past, and may be a good focus for future research. PMID:25010519

  6. Rupture Propagation for Stochastic Fault Models

    NASA Astrophysics Data System (ADS)

    Favreau, P.; Lavallee, D.; Archuleta, R.

    2003-12-01

    The inversion of strong motion data of large earthquakes gives the spatial distribution of pre-stress on the ruptured faults, and it can be partially reproduced by stochastic models, but a fundamental question remains: how does rupture propagate, constrained by the presence of spatial heterogeneity? For this purpose we investigate how the underlying random variables, which control the pre-stress spatial variability, condition the propagation of the rupture. Two stochastic models of prestress distributions are considered, respectively based on Cauchy and Gaussian random variables. The parameters of the two stochastic models have values corresponding to the slip distribution of the 1979 Imperial Valley earthquake. We use a finite difference code to simulate the spontaneous propagation of shear rupture on a flat fault in a 3D continuum elastic body. The friction law is the slip-dependent friction law. The simulations show that the propagation of the rupture front is more complex, incoherent or snake-like for a prestress distribution based on Cauchy random variables. This may be related to the presence of a higher number of asperities in this case. These simulations suggest that directivity is stronger in the Cauchy scenario, compared to the smoother rupture of the Gauss scenario.
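
    A toy sketch of the contrast described above: prestress fields drawn from Gaussian versus Cauchy random variables, with the heavy Cauchy tails concentrating stress into a few strong asperities. The grid size and threshold are illustrative, not the Imperial Valley parameterisation.

    ```python
    # Compare how heavy-tailed (Cauchy) and light-tailed (Gaussian) random variables
    # distribute extreme values over a fault grid.
    import numpy as np

    rng = np.random.default_rng(42)
    shape = (64, 128)                                  # along-dip x along-strike cells
    fields = {"Gaussian": rng.standard_normal(shape),
              "Cauchy":   rng.standard_cauchy(shape)}

    for name, field in fields.items():
        frac = np.mean(np.abs(field) > 3 * np.median(np.abs(field)))
        print(f"{name:8s}: fraction of cells with |stress| > 3x median = {frac:.3f}")
    ```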

  7. Unification of the complex Langevin method and the Lefschetz thimble method

    NASA Astrophysics Data System (ADS)

    Nishimura, Jun; Shimasaki, Shinji

    2018-03-01

    Recently there has been remarkable progress in solving the sign problem, which occurs in investigating statistical systems with a complex weight. The two promising methods, the complex Langevin method and the Lefschetz thimble method, share the idea of complexifying the dynamical variables, but their relationship has not been clear. Here we propose a unified formulation, in which the sign problem is taken care of by both the Langevin dynamics and the holomorphic gradient flow. We apply our formulation to a simple model in three different ways and show that one of them interpolates the two methods by changing the flow time.
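
    To make the idea of complexified dynamical variables concrete, here is a toy complex Langevin run for a one-variable Gaussian model with complex weight exp(-σz²/2), for which the exact answer ⟨z²⟩ = 1/σ is known. This illustrates only the Langevin side of the story, not the unified Langevin/thimble formulation proposed in the paper.

    ```python
    # Complex Langevin for a Gaussian model with complex coupling: the variable z is
    # complexified and driven by the drift -sigma*z plus a real Gaussian noise.
    import numpy as np

    rng = np.random.default_rng(7)
    sigma = 1.0 + 0.8j                       # complex coupling makes the weight non-positive
    dt, n_steps, n_therm = 1e-3, 400_000, 50_000
    z = 0.0 + 0.0j
    samples = []
    for step in range(n_steps):
        z += -sigma * z * dt + np.sqrt(2 * dt) * rng.standard_normal()   # drift + real noise
        if step >= n_therm:
            samples.append(z * z)

    print("complex Langevin <z^2> ~", np.mean(samples))
    print("exact 1/sigma         =", 1 / sigma)
    ```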

  8. Combining disparate data for decision making

    NASA Astrophysics Data System (ADS)

    Gettings, M. E.

    2010-12-01

    Combining information of disparate types from multiple data or model sources is a fundamental task in decision making theory. Procedures for combining and utilizing quantitative data with uncertainties are well-developed in several approaches, but methods for including qualitative and semi-quantitative data are much less so. Possibility theory offers an approach to treating all three data types in an objective and repeatable way. In decision making, biases are frequently present in several forms, including those arising from data quality, data spatial and temporal distribution, and the analyst's knowledge and beliefs as to which data or models are most important. The latter bias is particularly evident in the case of qualitative data and there are numerous examples of analysts feeling that a qualitative dataset is more relevant than a quantified one. Possibility theory and fuzzy logic now provide fairly general rules for quantifying qualitative and semi-quantitative data in ways that are repeatable and minimally biased. Once a set of quantified data and/or model layers is obtained, there are several methods of combining them to obtain insight useful in decision making. These include: various combinations of layers using formal fuzzy logic (for example, layer A and (layer B or layer C) but not layer D); connecting the layers with varying influence links in a Fuzzy Cognitive Map; and using the set of layers for the universe of discourse for agent based model simulations. One example of logical combinations that have proven useful is the definition of possible habitat for valley fever fungus (Coccidioides sp.) using variables such as soil type, altitude, aspect, moisture and temperature. A second example is the delineation of the lithology and possible mineralization of several areas beneath basin fill in southern Arizona. A Fuzzy Cognitive Map example is the impacts of development and operation of a hypothetical mine in an area adjacent to a city. In this model variables such as water use, environmental quality measures (visual and geochemical), deposit quality, rate of development, and commodity price combine in complex ways to yield frequently counter-intuitive results. By varying the interaction strengths linking the variables, insight into the complex interactions of the system can be gained. An example using agent-based modeling is a model designed to test the hypothesis that new valley fever fungus sites could be established from existing sites by wind transport of fungal spores. The variables include layers simulating precipitation, temperature, soil moisture, and soil chemistry based on historical climate records and studies of known valley fever habitat. Numerous agent-based model runs show that the system is self organizing to the extent that there will be new sites established by wind transport over decadal scales. Possibility theory provides a framework for gaining insight into the interaction of known or suspected variables in a complex system. Once the data layers are quantified into possibility functions, varying hypotheses of the relative importance of variables and processes can be obtained by repeated combinations with varying weights. This permits an evaluation of the effects of various data layers, their uncertainties, and biases from the layers, all of which improve the objectivity of decision making.
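
    A minimal sketch of the kind of fuzzy-logic layer combination described above, using min, max and complement as AND, OR and NOT on possibility layers scaled to [0, 1]. The layer names and values are hypothetical.

    ```python
    # Combine possibility layers with fuzzy logic: min = AND, max = OR, 1 - x = NOT.
    import numpy as np

    soil  = np.array([0.9, 0.2, 0.6, 0.8])   # possibility that soil type is suitable
    moist = np.array([0.7, 0.9, 0.3, 0.6])   # possibility that moisture regime is suitable
    temp  = np.array([0.8, 0.5, 0.9, 0.4])   # possibility that temperature is suitable
    high  = np.array([0.1, 0.7, 0.2, 0.0])   # possibility that altitude is too high

    # "soil AND (moisture OR temperature) BUT NOT high altitude", cell by cell
    habitat = np.minimum(np.minimum(soil, np.maximum(moist, temp)), 1.0 - high)
    print(habitat.round(2))
    ```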

  9. An evaluation of Bayesian techniques for controlling model complexity and selecting inputs in a neural network for short-term load forecasting.

    PubMed

    Hippert, Henrique S; Taylor, James W

    2010-04-01

    Artificial neural networks have frequently been proposed for electricity load forecasting because of their capabilities for the nonlinear modelling of large multivariate data sets. Modelling with neural networks is not an easy task though; two of the main challenges are defining the appropriate level of model complexity, and choosing the input variables. This paper evaluates techniques for automatic neural network modelling within a Bayesian framework, as applied to six samples containing daily load and weather data for four different countries. We analyse input selection as carried out by the Bayesian 'automatic relevance determination', and the usefulness of the Bayesian 'evidence' for the selection of the best structure (in terms of number of neurones), as compared to methods based on cross-validation. Copyright 2009 Elsevier Ltd. All rights reserved.

  10. Alternatives for jet engine control

    NASA Technical Reports Server (NTRS)

    Sain, M. K.

    1980-01-01

    Nonlinear modeling researches involving the use of tensor analysis are presented. Progress was achieved by extending the studies to a controlled equation and by considering more complex situations. Included in the report are calculations illustrating the modeling methodology for cases in which variables take values in real spaces of dimension up to three, and in which the degree of tensor term retention is as high as three.

  11. Estimating the impact of internal climate variability on ice sheet model simulations

    NASA Astrophysics Data System (ADS)

    Tsai, C. Y.; Forest, C. E.; Pollard, D.

    2016-12-01

    Rising sea level threatens human societies and coastal habitats, and melting ice sheets are a major contributor to sea level rise (SLR). Thus, understanding uncertainty in both forcing and variability within the climate system is essential for assessing the long-term risk of SLR, given their impact on ice sheet evolution. The predictability of polar climate is limited by uncertainties from the given forcing, the climate model response to this forcing, and the internal variability from feedbacks within the fully coupled climate system. Among those sources of uncertainty, the impact of internal climate variability on ice sheet changes has not yet been robustly assessed. Here we investigate how internal variability affects ice sheet projections using climate fields from two Community Earth System Model (CESM) large-ensemble (LE) experiments to force a three-dimensional ice sheet model. Each ensemble member in an LE experiment undergoes the same external forcings but with unique initial conditions. We find that for both LEs, 2 m air temperature variability over the Greenland ice sheet (GrIS) can lead to significantly different ice sheet responses. Our results show that the internal variability from the two fully coupled CESM LEs can cause differences of about 25-35 mm in GrIS's contribution to SLR by 2100 compared to present day (about 20% of the total change), and differences of around 100 mm by 2300. Moreover, using only ensemble-mean climate fields as the forcing in the ice sheet model can significantly underestimate the melt of the GrIS. As the Arctic region becomes warmer, the role of internal variability is critical given the complex nonlinear interactions between surface temperature and the ice sheet. Our results demonstrate that internal variability from a coupled atmosphere-ocean general circulation model can affect ice sheet simulations and the resulting sea-level projections. This study highlights an urgent need to reassess the associated uncertainties in projecting ice sheet loss over the next few centuries to obtain robust estimates of the contribution of ice sheet melt to SLR.

  12. Statistical optics

    NASA Astrophysics Data System (ADS)

    Goodman, J. W.

    This book is based on the thesis that some training in the area of statistical optics should be included as a standard part of any advanced optics curriculum. Random variables are discussed, taking into account definitions of probability and random variables, distribution functions and density functions, an extension to two or more random variables, statistical averages, transformations of random variables, sums of real random variables, Gaussian random variables, complex-valued random variables, and random phasor sums. Other subjects examined are related to random processes, some first-order properties of light waves, the coherence of optical waves, some problems involving high-order coherence, effects of partial coherence on imaging systems, imaging in the presence of randomly inhomogeneous media, and fundamental limits in photoelectric detection of light. Attention is given to deterministic versus statistical phenomena and models, the Fourier transform, and the fourth-order moment of the spectrum of a detected speckle image.

  13. Solute Transport Dynamics in a Large Hyporheic Corridor System

    NASA Astrophysics Data System (ADS)

    Zachara, J. M.; Chen, X.; Murray, C. J.; Shuai, P.; Rizzo, C.; Song, X.; Dai, H.

    2016-12-01

    A hyporheic corridor is an extended zone of groundwater-surface water interaction that occurs within permeable aquifer sediments in hydrologic continuity with a river. These systems are dynamic and tightly coupled to river stage variations that may occur over variable time scales. Here we describe the behavior of a persistent uranium (U) contaminant plume that exists within the hyporheic corridor of a large, managed river system - the Columbia River. Temporally dense monitoring data were collected for a two-year period from wells located within the plume at varying distances up to 400 m from the river shore. Groundwater U originates from desorption of residual U in the lower vadose zone during periods of high river stage and associated elevated water table. U is weakly adsorbed to aquifer sediments because of their coarse texture, and along with specific conductance, serves as a tracer of vadose zone source terms, solute transport pathways, and groundwater-surface water mixing. Complex U concentration and specific conductance trends were observed for all wells, varying with distance from the river shoreline and the river hydrograph, although trends for each well were generally repeatable for each year during the monitoring period. Statistical clustering analysis was used to identify four groups of wells that exhibited common trends in dissolved U and specific conductance. A flow and reactive transport code, PFLOTRAN, was implemented within a hydrogeologic model of the groundwater-surface water interaction zone to provide insights into hydrologic processes controlling monitoring trends and cluster behavior. The hydrogeologic model was informed by extensive subsurface characterization, with the spatially variable topography of a basal aquitard being one of several key parameters. Numerical tracer experiments using PFLOTRAN revealed the presence of temporally complex flow trajectories, spatially variable domains of groundwater-river water mixing, and locations of enhanced groundwater-river exchange that helped to explain monitoring trends. Observations and modeling results are integrated into a conceptual model of this highly complex and dynamic system with applicability to hyporheic corridor systems elsewhere.

  14. Combined Parameter and State Estimation Problem in a Complex Domain: RF Hyperthermia Treatment Using Nanoparticles

    NASA Astrophysics Data System (ADS)

    Bermeo Varon, L. A.; Orlande, H. R. B.; Eliçabe, G. E.

    2016-09-01

    Particle filter methods have been widely used to solve inverse problems with sequential Bayesian inference in dynamic models, simultaneously estimating sequential state variables and fixed model parameters. These methods approximate sequences of probability distributions of interest using a large set of random samples, in the presence of uncertainties in the model, the measurements and the parameters. In this paper the main focus is the solution of the combined parameter and state estimation problem in radiofrequency hyperthermia with nanoparticles in a complex domain. This domain contains different tissues, such as muscle, pancreas, lungs and small intestine, and a tumor which is loaded with iron oxide nanoparticles. The results indicate that excellent agreement between estimated and exact values is obtained.
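
    The sketch below shows the basic mechanics of combined state and parameter estimation with a bootstrap particle filter on a toy scalar heating model: each particle carries both the state (a temperature) and the unknown rate parameter, and resampling concentrates the ensemble on parameter values consistent with the measurements. It is not the bioheat-transfer model or the specific filter of the paper.

    ```python
    # Bootstrap particle filter with parameter augmentation on a toy heating model.
    import numpy as np

    rng = np.random.default_rng(3)
    dt, n_steps, n_part = 0.1, 100, 2000
    theta_true, ambient = 0.5, 37.0                  # true rate parameter, baseline temperature

    # Synthetic "measurements": a temperature relaxing toward ambient + 8 degrees.
    truth = np.empty(n_steps); truth[0] = 37.0
    for k in range(1, n_steps):
        truth[k] = truth[k-1] + dt * theta_true * (ambient + 8.0 - truth[k-1])
    obs = truth + rng.normal(scale=0.2, size=n_steps)

    # Particles carry both the state (temperature) and the unknown parameter theta.
    temp  = np.full(n_part, 37.0) + rng.normal(scale=0.5, size=n_part)
    theta = rng.uniform(0.1, 1.0, size=n_part)
    for k in range(1, n_steps):
        temp += dt * theta * (ambient + 8.0 - temp) + rng.normal(scale=0.05, size=n_part)
        theta += rng.normal(scale=0.005, size=n_part)             # small parameter jitter
        w = np.exp(-0.5 * ((obs[k] - temp) / 0.2) ** 2)           # Gaussian likelihood weights
        w /= w.sum()
        idx = rng.choice(n_part, size=n_part, p=w)                # multinomial resampling
        temp, theta = temp[idx], theta[idx]

    print(f"estimated theta ~ {theta.mean():.3f} (true {theta_true})")
    ```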

  15. Bayesian inference for multivariate meta-analysis Box-Cox transformation models for individual patient data with applications to evaluation of cholesterol lowering drugs

    PubMed Central

    Kim, Sungduk; Chen, Ming-Hui; Ibrahim, Joseph G.; Shah, Arvind K.; Lin, Jianxin

    2013-01-01

    In this paper, we propose a class of Box-Cox transformation regression models with multidimensional random effects for analyzing multivariate responses for individual patient data (IPD) in meta-analysis. Our modeling formulation uses a multivariate normal response meta-analysis model with multivariate random effects, in which each response is allowed to have its own Box-Cox transformation. Prior distributions are specified for the Box-Cox transformation parameters as well as the regression coefficients in this complex model, and the Deviance Information Criterion (DIC) is used to select the best transformation model. Since the model is quite complex, a novel Monte Carlo Markov chain (MCMC) sampling scheme is developed to sample from the joint posterior of the parameters. This model is motivated by a very rich dataset comprising 26 clinical trials involving cholesterol lowering drugs where the goal is to jointly model the three dimensional response consisting of Low Density Lipoprotein Cholesterol (LDL-C), High Density Lipoprotein Cholesterol (HDL-C), and Triglycerides (TG) (LDL-C, HDL-C, TG). Since the joint distribution of (LDL-C, HDL-C, TG) is not multivariate normal and in fact quite skewed, a Box-Cox transformation is needed to achieve normality. In the clinical literature, these three variables are usually analyzed univariately: however, a multivariate approach would be more appropriate since these variables are correlated with each other. A detailed analysis of these data is carried out using the proposed methodology. PMID:23580436

  16. Bayesian inference for multivariate meta-analysis Box-Cox transformation models for individual patient data with applications to evaluation of cholesterol-lowering drugs.

    PubMed

    Kim, Sungduk; Chen, Ming-Hui; Ibrahim, Joseph G; Shah, Arvind K; Lin, Jianxin

    2013-10-15

    In this paper, we propose a class of Box-Cox transformation regression models with multidimensional random effects for analyzing multivariate responses for individual patient data in meta-analysis. Our modeling formulation uses a multivariate normal response meta-analysis model with multivariate random effects, in which each response is allowed to have its own Box-Cox transformation. Prior distributions are specified for the Box-Cox transformation parameters as well as the regression coefficients in this complex model, and the deviance information criterion is used to select the best transformation model. Because the model is quite complex, we develop a novel Monte Carlo Markov chain sampling scheme to sample from the joint posterior of the parameters. This model is motivated by a very rich dataset comprising 26 clinical trials involving cholesterol-lowering drugs where the goal is to jointly model the three-dimensional response consisting of low density lipoprotein cholesterol (LDL-C), high density lipoprotein cholesterol (HDL-C), and triglycerides (TG) (LDL-C, HDL-C, TG). Because the joint distribution of (LDL-C, HDL-C, TG) is not multivariate normal and in fact quite skewed, a Box-Cox transformation is needed to achieve normality. In the clinical literature, these three variables are usually analyzed univariately; however, a multivariate approach would be more appropriate because these variables are correlated with each other. We carry out a detailed analysis of these data by using the proposed methodology. Copyright © 2013 John Wiley & Sons, Ltd.
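
    A one-dimensional illustration of the Box-Cox transformation used in these models: scipy estimates the λ that makes a right-skewed variable approximately normal. The values are simulated to mimic skewed triglyceride levels; they are not trial data.

    ```python
    # Estimate the Box-Cox lambda for a skewed variable and check the skewness reduction.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(11)
    tg = rng.lognormal(mean=4.9, sigma=0.5, size=500)       # right-skewed, like TG levels
    tg_transformed, lam = stats.boxcox(tg)
    print(f"estimated Box-Cox lambda = {lam:.2f}")
    print(f"skewness before = {stats.skew(tg):.2f}, after = {stats.skew(tg_transformed):.2f}")
    ```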

  17. Performance of Geno-Fuzzy Model on rainfall-runoff predictions in claypan watersheds

    USDA-ARS's Scientific Manuscript database

    Fuzzy logic provides a relatively simple approach to simulate complex hydrological systems while accounting for the uncertainty of environmental variables. The objective of this study was to develop a fuzzy inference system (FIS) with genetic algorithm (GA) optimization for membership functions (MF...

  18. Complex small pelagic fish population patterns arising from individual behavioral responses to their environment

    NASA Astrophysics Data System (ADS)

    Brochier, Timothée; Auger, Pierre-Amaël; Pecquerie, Laure; Machu, Eric; Capet, Xavier; Thiaw, Modou; Mbaye, Baye Cheikh; Braham, Cheikh-Baye; Ettahiri, Omar; Charouki, Najib; Sène, Ousseynou Ndaw; Werner, Francisco; Brehmer, Patrice

    2018-05-01

    Small pelagic fish (SPF) species are heavily exploited in eastern boundary upwelling systems (EBUS) as their transformation products are increasingly used in the world's food chain. Management relies on regular monitoring, but there is a lack of robust theories for the emergence of the populations' traits and their evolution in highly variable environments. This work aims to address existing knowledge gaps by combining physical and biogeochemical modelling with an individual life-cycle based model applied to round sardinella (Sardinella aurita) off northwest Africa, a key species for regional food security. Our approach focused on the processes responsible for seasonal migrations, spatio-temporal size-structure, and interannual biomass fluctuations. Emergence of preferred habitat resulted from interactions between natal homing behavior and environmental variability that impacts early life stages. Exploration of the environment by the fishes was determined by swimming capabilities, mesoscale to regional habitat structure, and horizontal currents. Fish spatio-temporal abundance variability emerged from a complex combination of distinct life-history traits. An alongshore gradient in fish size distributions is reported and validated by in situ measurements. New insights into population structure are provided, within an area where the species is abundant year-round (Mauritania) and with latitudinal migrations of variable (300-1200 km) amplitude. Interannual biomass fluctuations were linked to modulations of fish recruitment over the Sahara Bank driven by variability in alongshore current intensity. The identified processes constitute an analytical framework that can be implemented in other EBUS and used to explore impacts of regional climate change on SPF.

  19. ADAM: analysis of discrete models of biological systems using computer algebra.

    PubMed

    Hinkelmann, Franziska; Brandon, Madison; Guang, Bonny; McNeill, Rustin; Blekherman, Grigoriy; Veliz-Cuba, Alan; Laubenbacher, Reinhard

    2011-07-20

    Many biological systems are modeled qualitatively with discrete models, such as probabilistic Boolean networks, logical models, Petri nets, and agent-based models, to gain a better understanding of them. The computational complexity to analyze the complete dynamics of these models grows exponentially in the number of variables, which impedes working with complex models. There exist software tools to analyze discrete models, but they either lack the algorithmic functionality to analyze complex models deterministically or they are inaccessible to many users as they require understanding the underlying algorithm and implementation, do not have a graphical user interface, or are hard to install. Efficient analysis methods that are accessible to modelers and easy to use are needed. We propose a method for efficiently identifying attractors and introduce the web-based tool Analysis of Dynamic Algebraic Models (ADAM), which provides this and other analysis methods for discrete models. ADAM converts several discrete model types automatically into polynomial dynamical systems and analyzes their dynamics using tools from computer algebra. Specifically, we propose a method to identify attractors of a discrete model that is equivalent to solving a system of polynomial equations, a long-studied problem in computer algebra. Based on extensive experimentation with both discrete models arising in systems biology and randomly generated networks, we found that the algebraic algorithms presented in this manuscript are fast for systems with the structure maintained by most biological systems, namely sparseness and robustness. For a large set of published complex discrete models, ADAM identified the attractors in less than one second. Discrete modeling techniques are a useful tool for analyzing complex biological systems and there is a need in the biological community for accessible efficient analysis tools. ADAM provides analysis methods based on mathematical algorithms as a web-based tool for several different input formats, and it makes analysis of complex models accessible to a larger community, as it is platform independent as a web-service and does not require understanding of the underlying mathematics.
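
    To make the attractor idea concrete, here is a tiny exhaustive attractor search for a synchronous Boolean network with three hypothetical genes. This brute-force enumeration is only feasible for small networks and is not ADAM's algebraic method; it simply illustrates what an attractor is.

    ```python
    # Exhaustive attractor search for a small synchronous Boolean network:
    # every initial state eventually enters a cycle, and the cycles are the attractors.
    from itertools import product

    def step(state):
        a, b, c = state
        return (b and not c,        # a' = b AND NOT c
                a or c,             # b' = a OR c
                a)                  # c' = a

    attractors = set()
    for start in product([False, True], repeat=3):
        seen, s = [], start
        while s not in seen:
            seen.append(s)
            s = step(s)
        cycle = tuple(seen[seen.index(s):])                 # the periodic part reached from `start`
        attractors.add(min(cycle[i:] + cycle[:i] for i in range(len(cycle))))  # canonical rotation
    print(attractors)   # one fixed point and one 3-cycle for this example network
    ```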

  20. Observations and Modeling of the Transient General Circulation of the North Pacific Basin

    NASA Technical Reports Server (NTRS)

    McWilliams, James C.

    2000-01-01

    Because of recent progress in satellite altimetry and numerical modeling and the accumulation and archiving of long records of hydrographic and meteorological variables, it is becoming feasible to describe and understand the transient general circulation of the ocean (i.e., variations with spatial scales larger than a few hundred kilometers and time scales of seasonal and longer, beyond the mesoscale). We have carried out various studies investigating the transient general circulation of the Pacific Ocean from a coordinated analysis of satellite altimeter data, historical hydrographic gauge data, scatterometer wind observations, reanalyzed operational wind fields, and a variety of ocean circulation models. Broadly stated, our goal was to achieve a phenomenological catalogue of different possible types of large-scale, low-frequency variability, as a context for understanding the observational record. The approach is to identify the simplest possible model from which particular observed phenomena can be isolated and understood dynamically and then to determine how well these dynamical processes are represented in more complex Oceanic General Circulation Models (OGCMs). Research results have been obtained on Rossby wave propagation and transformation, oceanic intrinsic low-frequency variability, effects of surface gravity waves, Pacific data analyses, OGCM formulation and developments, and OGCM simulations of forced variability.

  1. Effects of topography on simulated net primary productivity at landscape scale.

    PubMed

    Chen, X F; Chen, J M; An, S Q; Ju, W M

    2007-11-01

    Local topography significantly affects spatial variations of climatic variables and soil water movement in complex terrain. Therefore, the distribution and productivity of ecosystems are closely linked to topography. Using a coupled terrestrial carbon and hydrological model (BEPS-TerrainLab model), the topographic effects on the net primary productivity (NPP) are analyzed through four modelling experiments for a 5700 km(2) area in the Baohe River basin, Shaanxi Province, northwestern China. The model was able to capture 81% of the variability in NPP estimated from tree rings, with a mean relative error of 3.1%. The average NPP in 2003 for the study area was 741 gCm(-2)yr(-1) from a model run including topographic effects on the distributions of climate variables and lateral flow of ground water. Topography has a considerable effect on NPP, which peaks near 1350 m above sea level. An elevation increase of 100 m above this level reduces the average annual NPP by about 25 gCm(-2). The terrain aspect gives rise to an NPP change of 5% for forests located below 1900 m as a result of its influence on incident solar radiation. For the whole study area, a simulation totally excluding topographic effects on the distributions of climatic variables and ground water movement overestimated the average NPP by 5%.

  2. 2D Potential Theory using Complex Algebra: New Perspectives for Interpretation of Marine Magnetic Anomaly

    NASA Astrophysics Data System (ADS)

    Le Maire, P.; Munschy, M.

    2017-12-01

    Interpretation of marine magnetic anomalies enables the construction of accurate global kinematic models. Several methods have been proposed to compute the paleo-latitude of the oceanic crust at the time of its formation. A model of the Earth's magnetic field is used to determine a relationship between the apparent inclination of the magnetization and the paleo-latitude. Usually, the estimation of the apparent inclination is qualitative, based on the fit between magnetic data and forward models. We propose to apply a new method using complex algebra to obtain the apparent inclination of the magnetization of the oceanic crust. For two dimensional bodies, we rewrite Talwani's equations using complex algebra; the corresponding complex function of the complex variable, called CMA (complex magnetic anomaly), is easier to use for forward modelling and inversion of the magnetic data. This complex equation allows the data to be visualized in the complex plane (Argand diagram) and offers a new way to interpret them, complementing the standard display of magnetic anomaly profiles. In the complex plane, the effect of the apparent inclination is to rotate the curves, while on the standard display the evolution of the shape of the anomaly is more complicated. This innovative method gives the opportunity to study a set of magnetic profiles (provided by the Geological Survey of Norway) acquired in the Norwegian Sea, near the Jan Mayen fracture zone. In this area, the age of the oceanic crust ranges from 40 to 55 Ma and the apparent inclination of the magnetization is computed.
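
    The rotation property described above can be illustrated with a toy complex-valued anomaly profile. This is not the Talwani-based CMA of the abstract; it only demonstrates that a change of apparent inclination acts as multiplication by a unit complex number, which rotates the curve in the Argand diagram while the conventional (real-part) profile changes shape.

```python
# Toy complex anomaly profile f(x) = exp(i*theta) / (x + i*h)**2, where theta
# stands in for an apparent inclination and h is a hypothetical source depth.
import numpy as np

x = np.linspace(-10.0, 10.0, 401)    # along-profile coordinate (arbitrary units)
h = 2.0                              # assumed source depth

def cma(theta_deg):
    theta = np.deg2rad(theta_deg)
    return np.exp(1j * theta) / (x + 1j * h) ** 2

f0, f45 = cma(0.0), cma(45.0)
# The two curves differ only by a rigid 45-degree rotation in the Argand plane:
print(np.allclose(f45 / f0, np.exp(1j * np.pi / 4)))   # True
# whereas the conventional profiles Re(f0) and Re(f45) have different shapes.
```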

  3. Mapping wildland fuels for fire management across multiple scales: integrating remote sensing, GIS, and biophysical modeling

    USGS Publications Warehouse

    Keane, Robert E.; Burgan, Robert E.; Van Wagtendonk, Jan W.

    2001-01-01

    Fuel maps are essential for computing spatial fire hazard and risk and simulating fire growth and intensity across a landscape. However, fuel mapping is an extremely difficult and complex process requiring expertise in remotely sensed image classification, fire behavior, fuels modeling, ecology, and geographical information systems (GIS). This paper first presents the challenges of mapping fuels: canopy concealment, fuelbed complexity, fuel type diversity, fuel variability, and fuel model generalization. Then, four approaches to mapping fuels are discussed with examples provided from the literature: (1) field reconnaissance; (2) direct mapping methods; (3) indirect mapping methods; and (4) gradient modeling. A fuel mapping method is proposed that uses current remote sensing and image processing technology. Future fuel mapping needs are also discussed, including better field data and fuel models, accurate GIS reference layers, improved satellite imagery, and comprehensive ecosystem models.

  4. Statistical Analysis of Large Simulated Yield Datasets for Studying Climate Effects

    NASA Technical Reports Server (NTRS)

    Makowski, David; Asseng, Senthold; Ewert, Frank; Bassu, Simona; Durand, Jean-Louis; Martre, Pierre; Adam, Myriam; Aggarwal, Pramod K.; Angulo, Carlos; Baron, Christian

    2015-01-01

    Many studies have been carried out during the last decade to study the effect of climate change on crop yields and other key crop characteristics. In these studies, one or several crop models were used to simulate crop growth and development for different climate scenarios that correspond to different projections of atmospheric CO2 concentration, temperature, and rainfall changes (Semenov et al., 1996; Tubiello and Ewert, 2002; White et al., 2011). The Agricultural Model Intercomparison and Improvement Project (AgMIP; Rosenzweig et al., 2013) builds on these studies with the goal of using an ensemble of multiple crop models in order to assess effects of climate change scenarios for several crops in contrasting environments. These studies generate large datasets, including thousands of simulated crop yield data. They include series of yield values obtained by combining several crop models with different climate scenarios that are defined by several climatic variables (temperature, CO2, rainfall, etc.). Such datasets potentially provide useful information on the possible effects of different climate change scenarios on crop yields. However, it is sometimes difficult to analyze these datasets and to summarize them in a useful way due to their structural complexity; simulated yield data can differ among contrasting climate scenarios, sites, and crop models. Another issue is that it is not straightforward to extrapolate the results obtained for the scenarios to alternative climate change scenarios not initially included in the simulation protocols. Additional dynamic crop model simulations for new climate change scenarios are an option but this approach is costly, especially when a large number of crop models are used to generate the simulated data, as in AgMIP. Statistical models have been used to analyze responses of measured yield data to climate variables in past studies (Lobell et al., 2011), but the use of a statistical model to analyze yields simulated by complex process-based crop models is a rather new idea. We demonstrate herewith that statistical methods can play an important role in analyzing simulated yield data sets obtained from the ensembles of process-based crop models. Formal statistical analysis is helpful to estimate the effects of different climatic variables on yield, and to describe the between-model variability of these effects.
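
    The kind of statistical analysis advocated above can be sketched as a regression "emulator" of the simulated yields. The sketch below assumes a hypothetical table of simulated yields with columns for temperature, CO2, and rainfall changes plus a crop-model identifier; a random intercept per crop model describes the between-model variability. Once fitted, such a surface can be evaluated for climate scenarios that were not in the original simulation protocol without new process-model runs.

```python
# Hedged sketch of an emulator of simulated yields (hypothetical file and columns).
import pandas as pd
import statsmodels.formula.api as smf

sims = pd.read_csv("simulated_yields.csv")   # columns: sim_yield, temp, co2, rain, crop_model

# Mixed-effects regression: fixed effects of the climatic variables,
# random intercept for each crop model (between-model variability).
model = smf.mixedlm("sim_yield ~ temp + co2 + rain",
                    data=sims, groups=sims["crop_model"])
fit = model.fit()
print(fit.summary())   # climate-variable effects plus the between-model variance
```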

  5. Effects of model complexity and priors on estimation using sequential importance sampling/resampling for species conservation

    USGS Publications Warehouse

    Dunham, Kylee; Grand, James B.

    2016-01-01

    We examined the effects of complexity and priors on the accuracy of models used to estimate ecological and observational processes, and to make predictions regarding population size and structure. State-space models are useful for estimating complex, unobservable population processes and making predictions about future populations based on limited data. To better understand the utility of state space models in evaluating population dynamics, we used them in a Bayesian framework and compared the accuracy of models with differing complexity, with and without informative priors using sequential importance sampling/resampling (SISR). Count data were simulated for 25 years using known parameters and observation process for each model. We used kernel smoothing to reduce the effect of particle depletion, which is common when estimating both states and parameters with SISR. Models using informative priors estimated parameter values and population size with greater accuracy than their non-informative counterparts. While the estimates of population size and trend did not suffer greatly in models using non-informative priors, the algorithm was unable to accurately estimate demographic parameters. This model framework provides reasonable estimates of population size when little to no information is available; however, when information on some vital rates is available, SISR can be used to obtain more precise estimates of population size and process. Incorporating model complexity such as that required by structured populations with stage-specific vital rates affects precision and accuracy when estimating latent population variables and predicting population dynamics. These results are important to consider when designing monitoring programs and conservation efforts requiring management of specific population segments.
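
    A minimal sketch of the sequential importance sampling/resampling idea, assuming a toy population state-space model (random-walk log-abundance with Poisson counts) and fixed parameters; the paper's kernel smoothing of parameters and informative priors are not reproduced here.

```python
# Bootstrap SISR (particle filter) for a simple population state-space model.
import numpy as np

rng = np.random.default_rng(1)

# Simulate 25 years of count data from known parameters (as in the study design).
T, r_true, sigma_true = 25, 0.05, 0.1
log_n = np.cumsum(rng.normal(r_true, sigma_true, T)) + np.log(200.0)
counts = rng.poisson(np.exp(log_n))

P = 5000                                                   # number of particles
particles = rng.normal(np.log(200.0), 0.5, P)              # prior on log-abundance
estimates = []
for y in counts:
    particles = particles + rng.normal(r_true, sigma_true, P)   # propagate the state
    lam = np.exp(particles)
    logw = y * np.log(lam) - lam                                 # Poisson log-likelihood (up to a constant)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    estimates.append(np.sum(w * lam))                            # filtered population size
    particles = rng.choice(particles, size=P, p=w)               # resample

print(np.round(estimates[-5:], 1))        # filtered estimates for the last 5 years
print(np.round(np.exp(log_n[-5:]), 1))    # true simulated population sizes
```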

  6. Comparative analysis of used car price evaluation models

    NASA Astrophysics Data System (ADS)

    Chen, Chuancan; Hao, Lulu; Xu, Cong

    2017-05-01

    Accurate used car price evaluation is a catalyst for the healthy development of the used car market. Data mining has been applied to predict used car prices in several articles. However, little has been published comparing different algorithms for used car price estimation. This paper collects more than 100,000 used car dealing records from throughout China for an empirical analysis and a thorough comparison of two algorithms: linear regression and random forest. These two algorithms are used to predict used car price in three different models: a model for a certain car make, a model for a certain car series, and a universal model. Results show that random forest has a stable but not ideal effect in the price evaluation model for a certain car make, but it shows a great advantage in the universal model compared with linear regression. This indicates that random forest is an optimal algorithm when handling complex models with a large number of variables and samples, yet it shows no obvious advantage when coping with simple models with fewer variables.
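
    A hedged sketch of the comparison described above, using scikit-learn and hypothetical column names and file; it contrasts linear regression and a random forest on the same features via cross-validated R^2.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder

cars = pd.read_csv("used_car_records.csv")   # hypothetical columns: price, age_years, mileage_km, make, series
X, y = cars.drop(columns="price"), cars["price"]

# One-hot encode the categorical make/series columns, pass numeric columns through.
prep = ColumnTransformer(
    [("cat", OneHotEncoder(handle_unknown="ignore"), ["make", "series"])],
    remainder="passthrough",
)

for name, model in [("linear regression", LinearRegression()),
                    ("random forest", RandomForestRegressor(n_estimators=200, random_state=0))]:
    pipe = make_pipeline(prep, model)
    scores = cross_val_score(pipe, X, y, cv=5, scoring="r2")
    print(f"{name}: mean CV R^2 = {scores.mean():.3f}")
```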

  7. Field Extension of Real Values of Physical Observables in Classical Theory can Help Attain Quantum Results

    NASA Astrophysics Data System (ADS)

    Wang, Hai; Kumar, Asutosh; Cho, Minhyung; Wu, Junde

    2018-04-01

    Physical quantities are assumed to take real values, which stems from the fact that a typical measuring instrument that measures a physical observable always yields a real number. Here we consider the question of what would happen if physical observables are allowed to assume complex values. In this paper, we show that by allowing observables in the Bell inequality to take complex values, a classical physical theory can actually attain the same upper bound of the Bell expression as quantum theory. Also, by extending the real field to the quaternionic field, we can resolve the GHZ problem using a local hidden-variable model. Furthermore, we try to build a new type of hidden-variable theory of a single qubit based on this result.

  8. Detecting and interpreting distortions in hierarchical organization of complex time series

    NASA Astrophysics Data System (ADS)

    Drożdż, Stanisław; Oświęcimka, Paweł

    2015-03-01

    Hierarchical organization is a cornerstone of complexity and multifractality constitutes its central quantifying concept. For model uniform cascades the corresponding singularity spectra are symmetric while those extracted from empirical data are often asymmetric. Using selected time series representing such diverse phenomena as price changes and intertransaction times in financial markets, sentence length variability in narrative texts, Missouri River discharge, and sunspot number variability as examples, we show that the resulting singularity spectra appear strongly asymmetric, more often left sided but in some cases also right sided. We present a unified view on the origin of such effects and indicate that they may be crucially informative for identifying the composition of the time series. One particularly intriguing case of this latter kind of asymmetry is detected in the daily reported sunspot number variability. This signals that either the commonly used famous Wolf formula distorts the real dynamics in expressing the largest sunspot numbers or, if not, that their dynamics is governed by a somewhat different mechanism.

  9. Adaptive iterative design (AID): a novel approach for evaluating the interactive effects of multiple stressors on aquatic organisms.

    PubMed

    Glaholt, Stephen P; Chen, Celia Y; Demidenko, Eugene; Bugge, Deenie M; Folt, Carol L; Shaw, Joseph R

    2012-08-15

    The study of stressor interactions by eco-toxicologists using nonlinear response variables is limited by required amounts of a priori knowledge, complexity of experimental designs, the use of linear models, and the lack of use of optimal designs of nonlinear models to characterize complex interactions. Therefore, we developed AID, an adaptive-iterative design that allows eco-toxicologists to more accurately and efficiently examine complex multiple stressor interactions. AID incorporates the power of the general linear model and A-optimal criteria with an iterative process that: 1) minimizes the required amount of a priori knowledge, 2) simplifies the experimental design, and 3) quantifies both individual and interactive effects. Once a stable model is determined, the best-fit model is identified and the direction and magnitude of stressor effects, individually and in all combinations (including complex interactions), are quantified. To validate AID, we selected five commonly co-occurring components of polluted aquatic systems: three metal stressors (Cd, Zn, As) and two water chemistry parameters (pH, hardness), to be tested using standard acute toxicity tests in which Daphnia mortality is the (nonlinear) response variable. We found that, after the initial input of experimental data (literature values, e.g. EC-values, may also be used) and only two iterations of AID, our dose-response model was stable. The model ln(Cd)*ln(Zn) was determined to be the best predictor of Daphnia mortality response to the combined effects of Cd, Zn, As, pH, and hardness. This model was then used to accurately identify and quantify the strength of both greater-than-additive (e.g. As*Cd) and less-than-additive (e.g. Cd*Zn) interactions. Interestingly, our study found only binary interactions to be significant, not higher-order interactions. We conclude that AID is more efficient and effective at assessing multiple stressor interactions than current methods. Other applications, including life-history endpoints commonly used by regulators, could benefit from AID's efficiency in assessing water quality criteria. Copyright © 2012 Elsevier B.V. All rights reserved.
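
    The final dose-response model can be illustrated as a binomial GLM with an ln(Cd) x ln(Zn) interaction term. The sketch below uses hypothetical assay columns and does not reproduce AID's A-optimal design iterations; it only shows the form of the fitted model.

```python
# Binomial GLM for Daphnia mortality with a log-scale Cd x Zn interaction.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

assays = pd.read_csv("daphnia_assays.csv")   # hypothetical columns: dead, total, cd, zn
assays["alive"] = assays["total"] - assays["dead"]
assays["ln_cd"] = np.log(assays["cd"])
assays["ln_zn"] = np.log(assays["zn"])

# Two-column (successes, failures) response; main effects plus the interaction.
fit = smf.glm("dead + alive ~ ln_cd * ln_zn",
              data=assays, family=sm.families.Binomial()).fit()
print(fit.summary())
# A negative interaction coefficient would indicate a less-than-additive joint
# effect of Cd and Zn, as reported in the abstract.
```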

  10. Batch-mode Reinforcement Learning for improved hydro-environmental systems management

    NASA Astrophysics Data System (ADS)

    Castelletti, A.; Galelli, S.; Restelli, M.; Soncini-Sessa, R.

    2010-12-01

    Despite the great progress made in recent decades, the optimal management of hydro-environmental systems still remains a very active and challenging research area. The combination of multiple, often conflicting interests, high non-linearities of the physical processes and the management objectives, strong uncertainties in the inputs, and a high-dimensional state makes the problem challenging and intriguing. Stochastic Dynamic Programming (SDP) is one of the most suitable methods for designing (Pareto) optimal management policies preserving the original problem complexity. However, it suffers from a dual curse, which, de facto, prevents its practical application to even reasonably complex water systems. (i) Computational requirements grow exponentially with state and control dimension (Bellman's curse of dimensionality), so that SDP cannot be used with water systems where the state vector includes more than a few (2-3) units. (ii) An explicit model of each system's component is required (curse of modelling) to anticipate the effects of the system transitions, i.e. any information included into the SDP framework can only be either a state variable described by a dynamic model or a stochastic disturbance, independent in time, with the associated pdf. Any exogenous information that could effectively improve the system operation cannot be explicitly considered in taking the management decision, unless a dynamic model is identified for each additional piece of information, thus adding to the problem complexity through the curse of dimensionality (additional state variables). To mitigate this dual curse, the combined use of batch-mode Reinforcement Learning (bRL) and Dynamic Model Reduction (DMR) techniques is explored in this study. bRL overcomes the curse of modelling by replacing explicit modelling with an external simulator and/or historical observations. The curse of dimensionality is averted using a functional approximation of the SDP value function based on proper non-linear regressors. DMR reduces the complexity and the associated computational requirements of non-linear distributed process-based models, making them suitable for inclusion in optimization schemes. Results from real-world applications of the approach are also presented, including reservoir operation with both quality and quantity targets.
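
    Batch-mode RL of the kind described can be sketched as fitted Q-iteration over a batch of (state, action, reward, next-state) tuples, here with tree-based regressors and an invented one-reservoir example; this is an illustration of the idea, not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

rng = np.random.default_rng(0)
actions = np.array([0.0, 0.5, 1.0])          # release fractions (discrete controls)

# Fake batch of experience: state = (storage, inflow); the reward penalizes
# deviation of the release from a hypothetical downstream demand of 20 units.
N = 20000
storage = rng.uniform(0.0, 100.0, N)
inflow = rng.lognormal(1.0, 0.5, N)
a = rng.choice(actions, N)
release = a * storage
next_storage = np.clip(storage - release + inflow, 0.0, 100.0)
next_inflow = rng.lognormal(1.0, 0.5, N)
reward = -np.abs(release - 20.0)

S = np.column_stack([storage, inflow])
S_next = np.column_stack([next_storage, next_inflow])
gamma, Q = 0.95, None

for _ in range(30):                          # fitted Q-iteration
    if Q is None:
        target = reward
    else:
        q_next = np.column_stack(
            [Q.predict(np.column_stack([S_next, np.full(N, u)])) for u in actions])
        target = reward + gamma * q_next.max(axis=1)
    Q = ExtraTreesRegressor(n_estimators=50, min_samples_leaf=20, random_state=0)
    Q.fit(np.column_stack([S, a]), target)   # regress Q(s, a) on the batch

# Greedy policy for one example state (storage = 60, inflow = 3):
s = np.array([60.0, 3.0])
q_values = [Q.predict(np.append(s, u).reshape(1, -1))[0] for u in actions]
print("best release fraction:", actions[int(np.argmax(q_values))])
```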

  11. ALC: automated reduction of rule-based models

    PubMed Central

    Koschorreck, Markus; Gilles, Ernst Dieter

    2008-01-01

    Background Combinatorial complexity is a challenging problem for the modeling of cellular signal transduction since the association of a few proteins can give rise to an enormous amount of feasible protein complexes. The layer-based approach is an approximative, but accurate method for the mathematical modeling of signaling systems with inherent combinatorial complexity. The number of variables in the simulation equations is highly reduced and the resulting dynamic models show a pronounced modularity. Layer-based modeling allows for the modeling of systems not accessible previously. Results ALC (Automated Layer Construction) is a computer program that highly simplifies the building of reduced modular models, according to the layer-based approach. The model is defined using a simple but powerful rule-based syntax that supports the concepts of modularity and macrostates. ALC performs consistency checks on the model definition and provides the model output in different formats (C MEX, MATLAB, Mathematica and SBML) as ready-to-run simulation files. ALC also provides additional documentation files that simplify the publication or presentation of the models. The tool can be used offline or via a form on the ALC website. Conclusion ALC allows for a simple rule-based generation of layer-based reduced models. The model files are given in different formats as ready-to-run simulation files. PMID:18973705

  12. National-scale aboveground biomass geostatistical mapping with FIA inventory and GLAS data: Preparation for sparsely sampled lidar assisted forest inventory

    NASA Astrophysics Data System (ADS)

    Babcock, C. R.; Finley, A. O.; Andersen, H. E.; Moskal, L. M.; Morton, D. C.; Cook, B.; Nelson, R.

    2017-12-01

    Upcoming satellite lidar missions, such as GEDI and IceSat-2, are designed to collect laser altimetry data from space for narrow bands along orbital tracts. As a result lidar metric sets derived from these sources will not be of complete spatial coverage. This lack of complete coverage, or sparsity, means traditional regression approaches that consider lidar metrics as explanatory variables (without error) cannot be used to generate wall-to-wall maps of forest inventory variables. We implement a coregionalization framework to jointly model sparsely sampled lidar information and point-referenced forest variable measurements to create wall-to-wall maps with full probabilistic uncertainty quantification of all inputs. We inform the model with USFS Forest Inventory and Analysis (FIA) in-situ forest measurements and GLAS lidar data to spatially predict aboveground forest biomass (AGB) across the contiguous US. We cast our model within a Bayesian hierarchical framework to better model complex space-varying correlation structures among the lidar metrics and FIA data, which yields improved prediction and uncertainty assessment. To circumvent computational difficulties that arise when fitting complex geostatistical models to massive datasets, we use a Nearest Neighbor Gaussian process (NNGP) prior. Results indicate that a coregionalization modeling approach to leveraging sampled lidar data to improve AGB estimation is effective. Further, fitting the coregionalization model within a Bayesian mode of inference allows for AGB quantification across scales ranging from individual pixel estimates of AGB density to total AGB for the continental US with uncertainty. The coregionalization framework examined here is directly applicable to future spaceborne lidar acquisitions from GEDI and IceSat-2. Pairing these lidar sources with the extensive FIA forest monitoring plot network using a joint prediction framework, such as the coregionalization model explored here, offers the potential to improve forest AGB accounting certainty and provide maps for post-model fitting analysis of the spatial distribution of AGB.

  13. Relating brain signal variability to knowledge representation.

    PubMed

    Heisz, Jennifer J; Shedden, Judith M; McIntosh, Anthony R

    2012-11-15

    We assessed the hypothesis that brain signal variability is a reflection of functional network reconfiguration during memory processing. In the present experiments, we use multiscale entropy to capture the variability of human electroencephalogram (EEG) while manipulating the knowledge representation associated with faces stored in memory. Across two experiments, we observed increased variability as a function of greater knowledge representation. In Experiment 1, individuals with greater familiarity for a group of famous faces displayed more brain signal variability. In Experiment 2, brain signal variability increased with learning after multiple experimental exposures to previously unfamiliar faces. The results demonstrate that variability increases with face familiarity; cognitive processes during the perception of familiar stimuli may engage a broader network of regions, which manifests as higher complexity/variability in spatial and temporal domains. In addition, effects of repetition suppression on brain signal variability were observed, and the pattern of results is consistent with a selectivity model of neural adaptation. Crown Copyright © 2012. Published by Elsevier Inc. All rights reserved.
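
    Multiscale entropy, the variability measure used above, can be sketched as sample entropy computed on coarse-grained copies of a signal. The parameters (m = 2, r = 0.15 SD) are common defaults, not necessarily the paper's settings.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.15):
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    n = len(x) - m                        # same number of templates for lengths m and m+1
    def matches(mm):
        templ = np.array([x[i:i + mm] for i in range(n)])
        d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)  # Chebyshev distance
        return np.sum(d <= tol) - n       # exclude self-matches
    B, A = matches(m), matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.nan

def multiscale_entropy(x, scales=range(1, 11)):
    out = []
    for s in scales:
        k = len(x) // s
        coarse = np.asarray(x[:k * s]).reshape(k, s).mean(axis=1)   # coarse-graining
        out.append(sample_entropy(coarse))
    return np.array(out)

rng = np.random.default_rng(0)
white = rng.normal(size=1000)
print(np.round(multiscale_entropy(white), 2))   # entropy decreases with scale for white noise
```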

  14. [Representation and mathematical analysis of human crystalline lens].

    PubMed

    Tălu, Stefan; Giovanzana, Stefano; Tălu, Mihai

    2011-01-01

    The surface of the human crystalline lens can be described and analyzed using mathematical models based on parametric representations, used in biomechanical studies and 3D solid modeling of the lens. The mathematical models used in lens biomechanics allow the behavior of the crystalline lens to be studied under variable and complex dynamic loads. Lens biomechanics also has the potential to improve results in the development of intraocular lenses and in cataract surgery. The paper presents the most representative mathematical models currently used for modeling the human crystalline lens, both optically and biomechanically.

  15. Advances in Parameter and Uncertainty Quantification Using Bayesian Hierarchical Techniques with a Spatially Referenced Watershed Model (Invited)

    NASA Astrophysics Data System (ADS)

    Alexander, R. B.; Boyer, E. W.; Schwarz, G. E.; Smith, R. A.

    2013-12-01

    Estimating water and material stores and fluxes in watershed studies is frequently complicated by uncertainties in quantifying hydrological and biogeochemical effects of factors such as land use, soils, and climate. Although these process-related effects are commonly measured and modeled in separate catchments, researchers are especially challenged by their complexity across catchments and diverse environmental settings, leading to a poor understanding of how model parameters and prediction uncertainties vary spatially. To address these concerns, we illustrate the use of Bayesian hierarchical modeling techniques with a dynamic version of the spatially referenced watershed model SPARROW (SPAtially Referenced Regression On Watershed attributes). The dynamic SPARROW model is designed to predict streamflow and other water cycle components (e.g., evapotranspiration, soil and groundwater storage) for monthly varying hydrological regimes, using mechanistic functions, mass conservation constraints, and statistically estimated parameters. In this application, the model domain includes nearly 30,000 NHD (National Hydrography Dataset) stream reaches and their associated catchments in the Susquehanna River Basin. We report the results of our comparisons of alternative models of varying complexity, including models with different explanatory variables as well as hierarchical models that account for spatial and temporal variability in model parameters and variance (error) components. The model errors are evaluated for changes with season and catchment size and correlations in time and space. The hierarchical models consist of a two-tiered structure in which climate forcing parameters are modeled as random variables, conditioned on watershed properties. Quantification of spatial and temporal variations in the hydrological parameters and model uncertainties in this approach leads to more efficient (lower variance) and less biased model predictions throughout the river network. Moreover, predictions of water-balance components are reported according to probabilistic metrics (e.g., percentiles, prediction intervals) that include both parameter and model uncertainties. These improvements in predictions of streamflow dynamics can inform the development of more accurate predictions of spatial and temporal variations in biogeochemical stores and fluxes (e.g., nutrients and carbon) in watersheds.
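
    The two-tiered hierarchical idea (climate-forcing parameters treated as random variables conditioned on watershed properties) can be sketched with a generic probabilistic-programming model. The example below is not the SPARROW code; all variable names, priors, and data are invented, and PyMC is used purely for illustration.

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
n_catch, n_obs = 40, 24                                  # catchments, monthly observations each
forest = rng.uniform(0.1, 0.9, n_catch)                  # watershed property (e.g., forest cover)
precip = rng.gamma(4.0, 20.0, (n_catch, n_obs))          # monthly precipitation forcing
true_coef = 0.3 + 0.4 * forest + rng.normal(0, 0.05, n_catch)
flow = true_coef[:, None] * precip + rng.normal(0, 5.0, (n_catch, n_obs))

catch_idx = np.repeat(np.arange(n_catch), n_obs)         # long-format index of catchments
precip_flat, flow_flat = precip.ravel(), flow.ravel()

with pm.Model() as model:
    # Tier 2: hyper-parameters linking a runoff coefficient to the watershed property.
    alpha0 = pm.Normal("alpha0", 0.0, 1.0)
    alpha1 = pm.Normal("alpha1", 0.0, 1.0)
    tau = pm.HalfNormal("tau", 0.2)
    # Tier 1: catchment-level coefficients, conditioned on the watershed property.
    coef = pm.Normal("coef", mu=alpha0 + alpha1 * forest, sigma=tau, shape=n_catch)
    sigma = pm.HalfNormal("sigma", 10.0)
    pm.Normal("flow", mu=coef[catch_idx] * precip_flat, sigma=sigma, observed=flow_flat)
    idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)

print(float(idata.posterior["alpha1"].mean()))           # strength of the watershed-property effect
```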

  16. Computational analysis of Variable Thrust Engine (VTE) performance

    NASA Technical Reports Server (NTRS)

    Giridharan, M. G.; Krishnan, A.; Przekwas, A. J.

    1993-01-01

    The Variable Thrust Engine (VTE) of the Orbital Maneuvering Vehicle (OMV) uses a hypergolic propellant combination of Monomethyl Hydrazine (MMH) and Nitrogen Tetroxide (NTO) as fuel and oxidizer, respectively. The performance of the VTE depends on a number of complex interacting phenomena such as atomization, spray dynamics, vaporization, turbulent mixing, convective/radiative heat transfer, and hypergolic combustion. This study involved the development of a comprehensive numerical methodology to facilitate detailed analysis of the VTE. An existing Computational Fluid Dynamics (CFD) code was extensively modified to include the following models: a two-liquid, two-phase Eulerian-Lagrangian spray model; a chemical equilibrium model; and a discrete ordinate radiation heat transfer model. The modified code was used to conduct a series of simulations to assess the effects of various physical phenomena and boundary conditions on the VTE performance. The details of the models and the results of the simulations are presented.

  17. The seasonal response of the Held-Suarez climate model to prescribed ocean temperature anomalies. I - Results of decadal integrations

    NASA Technical Reports Server (NTRS)

    Phillips, T. J.; Semtner, A. J., Jr.

    1984-01-01

    Anomalies in ocean surface temperature have been identified as possible causes of variations in the climate of particular seasons or as a source of interannual climatic variability, and attempts have been made to forecast seasonal climate by using ocean temperatures as predictor variables. However, the seasonal atmospheric response to ocean temperature anomalies has not yet been systematically investigated with nonlinear models. The present investigation is concerned with ten-year integrations involving a model of intermediate complexity, the Held-Suarez climate model. The calculations have been performed to investigate the changes in seasonal climate which result from a fixed anomaly imposed on a seasonally varying, global ocean temperature field. Part I of the paper provides a report on the results of these decadal integrations. Attention is given to model properties, the experimental design, and the anomaly experiments.

  18. Hydropower Modeling Challenges

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stoll, Brady; Andrade, Juan; Cohen, Stuart

    Hydropower facilities are important assets for the electric power sector and represent a key source of flexibility for electric grids with large amounts of variable generation. As variable renewable generation sources expand, understanding the capabilities and limitations of the flexibility from hydropower resources is important for grid planning. Appropriately modeling these resources, however, is difficult because of the wide variety of constraints these plants face that other generators do not. These constraints can be broadly categorized as environmental, operational, and regulatory. This report highlights several key issues involved in incorporating these constraints when modeling hydropower operations in terms of production cost and capacity expansion. Many of these challenges involve a lack of data to adequately represent the constraints or issues of model complexity and run time. We present several potential methods for improving the accuracy of hydropower representation in these models to allow for a better understanding of hydropower's capabilities.

  19. High-order sliding-mode control for blood glucose regulation in the presence of uncertain dynamics.

    PubMed

    Hernández, Ana Gabriela Gallardo; Fridman, Leonid; Leder, Ron; Andrade, Sergio Islas; Monsalve, Cristina Revilla; Shtessel, Yuri; Levant, Arie

    2011-01-01

    The success of automatic blood glucose regulation depends on the robustness of the control algorithm used. It is a difficult task due to the complexity of the glucose-insulin regulation system. The variety of existing models reflects the great number of phenomena involved in the process, and the inter-patient variability of the parameters represents another challenge. In this research, a high-order sliding-mode controller is proposed. It is applied to two well-known models, the Bergman minimal model and the Sorensen model, to test its robustness with respect to uncertain dynamics and patient parameter variability. The controller designed based on the simulations is tested with the specific Bergman minimal model of a diabetic patient whose parameters were identified from an in vivo assay. To minimize the insulin infusion rate and avoid the risk of hypoglycemia, the glucose target is a dynamic profile.
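
    A hedged sketch of a second-order sliding-mode (super-twisting) controller acting on the Bergman minimal model. Parameter values, gains, and the fixed glucose target are illustrative only; they are not the identified patient parameters or the exact controller of the paper.

```python
import numpy as np

# Bergman minimal model states: G (glucose, mg/dl), X (remote insulin action, 1/min),
# I (plasma insulin, mU/l).  u is the insulin infusion rate (illustrative units).
p1, p2, p3, n, Vi = 0.028, 0.025, 1.3e-4, 0.09, 12.0
Gb, Ib, G_target = 90.0, 7.0, 90.0

dt, T = 0.5, 600.0                      # time step and horizon in minutes
G, X, I = 250.0, 0.0, Ib                # start from a hyperglycemic state
k1, k2, w = 0.1, 0.002, 0.0             # super-twisting gains and integral state

trace = []
for _ in range(int(T / dt)):
    s = G - G_target                    # sliding variable
    # Sign is flipped relative to the textbook form because insulin lowers glucose.
    u = k1 * np.sqrt(abs(s)) * np.sign(s) + w
    w += k2 * np.sign(s) * dt
    u = max(u, 0.0)                     # infusion cannot be negative
    dG = -p1 * (G - Gb) - X * G
    dX = -p2 * X + p3 * (I - Ib)
    dI = -n * (I - Ib) + u / Vi
    G, X, I = G + dG * dt, X + dX * dt, I + dI * dt
    trace.append(G)

print(f"glucose after {T:.0f} min: {trace[-1]:.1f} mg/dl (minimum {min(trace):.1f})")
```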

  20. Multistressor predictive models of invertebrate condition in the Corn Belt, USA

    USGS Publications Warehouse

    Waite, Ian R.; Van Metre, Peter C.

    2017-01-01

    Understanding the complex relations between multiple environmental stressors and ecological conditions in streams can help guide resource-management decisions. During 14 weeks in spring/summer 2013, personnel from the US Geological Survey and the US Environmental Protection Agency sampled 98 wadeable streams across the Midwest Corn Belt region of the USA for water and sediment quality, physical and habitat characteristics, and ecological communities. We used these data to develop independent predictive disturbance models for 3 macroinvertebrate metrics and a multimetric index. We developed the models based on boosted regression trees (BRT) for 3 stressor categories: land use/land cover (geographic information system [GIS]) variables, all in-stream stressors combined (nutrients, habitat, and contaminants), and GIS plus in-stream stressors. The GIS plus in-stream stressor models had the best overall performance with an average cross-validation R2 across all models of 0.41. The models were generally consistent in the explanatory variables selected within each stressor group across the 4 invertebrate metrics modeled. Variables related to riparian condition, substrate size or embeddedness, velocity and channel shape, nutrients (primarily NH3), and contaminants (pyrethroid degradates) were important descriptors of the invertebrate metrics. Models based on all measured in-stream stressors performed comparably to models based on GIS landscape variables, suggesting that the in-stream stressor characterization reasonably represents the dominant factors affecting invertebrate communities and that GIS variables are acting as surrogates for in-stream stressors that directly affect in-stream biota.
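
    A hedged sketch of the modeling approach, with scikit-learn gradient boosting standing in for the BRT implementation used in the study; the file and stressor column names are hypothetical. It reports a cross-validated R^2 and a permutation-based analogue of BRT variable importance.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import cross_val_score, train_test_split

streams = pd.read_csv("corn_belt_sites.csv")   # hypothetical site-by-stressor table
predictors = ["pct_row_crop", "riparian_cover", "embeddedness", "velocity",
              "nh3", "pyrethroid_degradates"]
X, y = streams[predictors], streams["invert_mmi"]   # invertebrate multimetric index

brt = GradientBoostingRegressor(n_estimators=500, learning_rate=0.01,
                                max_depth=3, random_state=0)
print("cross-validated R^2:", cross_val_score(brt, X, y, cv=10, scoring="r2").mean())

# Relative influence of each stressor, analogous to BRT variable importance.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
brt.fit(X_tr, y_tr)
imp = permutation_importance(brt, X_te, y_te, random_state=0)
for name, score in sorted(zip(predictors, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```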

  1. Determining the Ocean's Role on the Variable Gravity Field on Earth Rotation

    NASA Technical Reports Server (NTRS)

    Ponte, Rui M.

    1999-01-01

    A number of ocean models of different complexity have been used to study changes in the oceanic mass field and angular momentum and their relation to the variable Earth rotation and gravity field. Time scales examined range from seasonal to a few days. Results point to the importance of oceanic signals in driving polar motion, in particular the Chandler and annual wobbles. Results also show that oceanic signals have a measurable impact on length-of-day variations. Various circulation features and associated mass signals, including the North Pacific subtropical gyre, the equatorial currents, and the Antarctic Circumpolar Current play a significant role in oceanic angular momentum variability.

  2. Fluid Mechanics and Complex Variable Theory: Getting Past the 19th Century

    ERIC Educational Resources Information Center

    Newton, Paul K.

    2017-01-01

    The subject of fluid mechanics is a rich, vibrant, and rapidly developing branch of applied mathematics. Historically, it has developed hand-in-hand with the elegant subject of complex variable theory. The Westmont College NSF-sponsored workshop on the revitalization of complex variable theory in the undergraduate curriculum focused partly on…

  3. DigitalHuman (DH): An Integrative Mathematical Model of Human Physiology

    NASA Technical Reports Server (NTRS)

    Hester, Robert L.; Summers, Richard L.; Iliescu, Radu; Esters, Joyee; Coleman, Thomas G.

    2010-01-01

    Mathematical models and simulation are important tools in discovering the key causal relationships governing physiological processes and improving medical intervention when physiological complexity is a central issue. We have developed a model of integrative human physiology called DigitalHuman (DH) consisting of ~5000 variables describing cardiovascular, renal, respiratory, endocrine, neural and metabolic physiology. Users can view time-dependent solutions and interactively introduce perturbations by altering numerical parameters to investigate new hypotheses. The variables, parameters and quantitative relationships as well as all other model details are described in XML text files. All aspects of the model, including the mathematical equations describing the physiological processes, are written in XML open source, text-readable files. Model structure is based upon empirical data of physiological responses documented within the peer-reviewed literature. The model can be used to understand proposed physiological mechanisms and physiological interactions that may not be otherwise intuitively evident. Some of the current uses of this model include the analyses of renal control of blood pressure, the central role of the liver in creating and maintaining insulin resistance, and the mechanisms causing orthostatic hypotension in astronauts. Additionally, the open source aspect of the modeling environment allows any investigator to add detailed descriptions of human physiology to test new concepts. The model accurately predicts both qualitative and, more importantly, quantitative changes in clinically and experimentally observed responses. DigitalHuman provides scientists with a modeling environment to understand the complex interactions of integrative physiology. This research was supported by NIH HL 51971, NSF EPSCoR, and NASA.

  4. Interpreting Carbon Fluxes from a Spatially Heterogeneous Peatland with Thawing Permafrost: Scaling from Plant Community Scale to Ecosystem Scale

    NASA Astrophysics Data System (ADS)

    Harder, S. R.; Roulet, N. T.; Strachan, I. B.; Crill, P. M.; Persson, A.; Pelletier, L.; Watt, C.

    2014-12-01

    Various microforms, created by spatial differential thawing of permafrost, make up the subarctic heterogeneous Stordalen peatland complex (68°22'N, 19°03'E), near Abisko, Sweden. This results in significantly different peatland vegetation communities across short distances, as well as differences in wetness, temperature and peat substrates. We have been measuring the spatially integrated CO2, heat and water vapour fluxes from this peatland complex using eddy covariance and the CO2 exchange from specific plant communities within the EC tower footprint since spring 2008. With this data we are examining if it is possible to derive the spatially integrated ecosystem-wide fluxes from community-level simple light use efficiency (LUE) and ecosystem respiration (ER) models. These models have been developed using several years of continuous autochamber flux measurements for the three major plant functional types (PFTs) as well as knowledge of the spatial variability of the vegetation, water table and active layer depths. LIDAR was used to produce a 1 m resolution digital elevation model of the complex and the spatial distribution of PFTs was obtained from concurrent high-resolution digital colour air photography trained from vegetation surveys. Continuous water table depths have been measured for four years at over 40 locations in the complex, and peat temperatures and active layer depths are surveyed every 10 days at more than 100 locations. The EC footprint is calculated for every half-hour and the PFT based models are run with the corresponding environmental variables weighted for the PFTs within the EC footprint. Our results show that the Sphagnum, palsa, and sedge PFTs have distinctly different LUE models, and that the tower fluxes are dominated by a blend of the Sphagnum and palsa PFTs. We also see a distinctly different energy partitioning between the fetches containing intact palsa and those with thawed palsa: the evaporative efficiency is higher and the Bowen ratio lower for the thawed palsa fetches.

  5. Influence of variable chemical conditions on EDTA-enhanced transport of metal ions in mildly acidic groundwater

    USGS Publications Warehouse

    Kent, D.B.; Davis, J.A.; Joye, J.L.; Curtis, G.P.

    2008-01-01

    Adsorption of Ni and Pb on aquifer sediments from Cape Cod, Massachusetts, USA increased with increasing pH and metal-ion concentration. Adsorption could be described quantitatively using a semi-mechanistic surface complexation model (SCM), in which adsorption is described using chemical reactions between metal ions and adsorption sites. Equilibrium reactive transport simulations incorporating the SCMs, formation of metal-ion-EDTA complexes, and either Fe(III)-oxyhydroxide solubility or Zn desorption from sediments identified important factors responsible for trends observed during transport experiments conducted with EDTA complexes of Ni, Zn, and Pb in the Cape Cod aquifer. Dissociation of Pb-EDTA by Fe(III) is more favorable than Ni-EDTA because of differences in Ni- and Pb-adsorption to the sediments. Dissociation of Ni-EDTA becomes more favorable with decreasing Ni-EDTA concentration and decreasing pH. In contrast to Ni, Pb-EDTA can be dissociated by Zn desorbed from the aquifer sediments. Variability in adsorbed Zn concentrations has a large impact on Pb-EDTA dissociation.

  6. Recent experience in simultaneous control-structure optimization

    NASA Technical Reports Server (NTRS)

    Salama, M.; Ramaker, R.; Milman, M.

    1989-01-01

    To show the feasibility of simultaneous optimization as a design procedure, low-order problems were used in conjunction with simple control formulations. The numerical results indicate that simultaneous optimization is not only feasible, but also advantageous. Such advantages come at the expense of introducing complexities beyond those encountered in structure optimization alone, or control optimization alone. Examples include a larger design parameter space, optimization over a combination of continuous and combinatoric variables, and a combined objective function that may be nonconvex. Future extensions to large-order problems, more complex objective functions and constraints, and more sophisticated control formulations will require further research to ensure that the additional complexities do not outweigh the advantages of simultaneous optimization. Some areas requiring more efficient tools than are currently available include multiobjective criteria and nonconvex optimization. Efficient techniques also need to be developed to deal with optimization over combinatoric and continuous variables, and with truncation issues for structure and control parameters in both the model space and the design space.

  7. Tracking the complex absorption in NGC 2110 with two Suzaku observations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rivers, Elizabeth; Markowitz, Alex; Rothschild, Richard

    2014-05-10

    We present spectral analysis of two Suzaku observations of the Seyfert 2 galaxy, NGC 2110. This source has been known to show complex, variable absorption which we study in depth by analyzing these two observations set 7 yr apart and by comparing them to previously analyzed observations with the XMM-Newton and Chandra observatories. We find that there is a relatively stable, full-covering absorber with a column density of ~3 × 10^22 cm^-2, with an additional patchy absorber that is likely variable in both column density and covering fraction over timescales of years, consistent with clouds in a patchy torus or in the broad line region. We model a soft emission line complex, likely arising from ionized plasma and consistent with previous studies. We find no evidence for reflection from an accretion disk in this source, with no contribution from either relativistically broadened Fe Kα line emission or a Compton reflection hump.

  8. Reef flattening effects on total richness and species responses in the Caribbean.

    PubMed

    Newman, Steven P; Meesters, Erik H; Dryden, Charlie S; Williams, Stacey M; Sanchez, Cristina; Mumby, Peter J; Polunin, Nicholas V C

    2015-11-01

    There has been ongoing flattening of Caribbean coral reefs with the loss of habitat having severe implications for these systems. Complexity and its structural components are important to fish species richness and community composition, but little is known about its role for other taxa or species-specific responses. This study reveals the importance of reef habitat complexity and structural components to different taxa of macrofauna, total species richness, and individual coral and fish species in the Caribbean. Species presence and richness of different taxa were visually quantified in one hundred 25-m(2) plots in three marine reserves in the Caribbean. Sampling was evenly distributed across five levels of visually estimated reef complexity, with five structural components also recorded: the number of corals, number of large corals, slope angle, maximum sponge and maximum octocoral height. Taking advantage of natural heterogeneity in structural complexity within a particular coral reef habitat (Orbicella reefs) and discrete environmental envelope, thus minimizing other sources of variability, the relative importance of reef complexity and structural components was quantified for different taxa and individual fish and coral species on Caribbean coral reefs using boosted regression trees (BRTs). Boosted regression tree models performed very well when explaining variability in total (82·3%), coral (80·6%) and fish species richness (77·3%), for which the greatest declines in richness occurred below intermediate reef complexity levels. Complexity accounted for very little of the variability in octocorals, sponges, arthropods, annelids or anemones. BRTs revealed species-specific variability and importance for reef complexity and structural components. Coral and fish species occupancy generally declined at low complexity levels, with the exception of two coral species (Pseudodiploria strigosa and Porites divaricata) and four fish species (Halichoeres bivittatus, H. maculipinna, Malacoctenus triangulatus and Stegastes partitus) more common at lower reef complexity levels. A significant interaction between country and reef complexity revealed a non-additive decline in species richness in areas of low complexity and the reserve in Puerto Rico. Flattening of Caribbean coral reefs will result in substantial species losses, with few winners. Individual structural components have considerable value to different species, and their loss may have profound impacts on population responses of coral and fish due to identity effects of key species, which underpin population richness and resilience and may affect essential ecosystem processes and services. © 2015 The Authors. Journal of Animal Ecology © 2015 British Ecological Society.

  9. Simulating the natural variability of the freshwater budget of the Arctic ocean from the mid to late Holocene using LOVECLIM

    NASA Astrophysics Data System (ADS)

    Davies, F. J.; Goosse, H.; Renssen, H.

    2012-04-01

    The influence of freshwater on the long term climatic variability of the Arctic region is currently of significant interest. Alterations to the natural variability of the oceanic, terrestrial and atmospheric sources of freshwater to the Arctic ocean, caused by anthropogenic induced warming, are likely to have far reaching effects on oceanic processes and climate. A number of these changes are already observable, such as an intensification of the hydrological cycle, a 7% increase in Eurasian river runoff (1936-1999), a 9% reduction of sea-ice extent per decade (1979-2006), a 120km northward migration of permafrost in Northern Canada (1968-1994), and air temperatures 6°C warmer, in parts, from 2007 to 2010, when compared to the 1958-1996 average. All of these changes add another layer of complexity to understanding the role of the freshwater budget, and this makes it difficult to say with any certainty how these future changes will impact freshwater fluxes of the Arctic gateways, such as the Bering Strait, Fram Strait, Canadian Arctic Archipelago and the Barents Sea inflow. Despite these difficulties, there have been studies that have integrated the available data, from both in situ measurements and modelling studies, and used this as a basis to form a picture of the current freshwater budget, and then project upon these hypotheses for the future (Holland et al., 2007). However, one particular aspect of these future projections that is lacking is the accountability of how much future variance is attributable to both natural variability and anthropogenic influences. Here we present results of a mid to late (6-0ka) Holocene transient simulation, using the earth model of intermediate complexity, LOVECLIM (Goosse et al., 2010). The model is forced with orbital and greenhouse gas forcings appropriate for the time period. The results will highlight the natural variability of the oceanic, terrestrial and atmospheric components of the freshwater budget, over decadal and centennial timescales. When computing the freshwater budget for the period, where in situ measurements are available, LOVECLIM has been shown to perform reasonably well. The intention here is not to present a fully quantitative assessment of the freshwater budget of the Arctic Ocean as such, but to highlight the natural variability of the freshwater budget and its individual components. We hope that this inspires other modelling groups to take a similar approach and work towards understanding the natural variability of the freshwater budget over timescales longer than current measurements allow, and modelling studies have previously attempted. Goosse, H., Brovkin, V., Fichefet, T., Haarsma, R., Huybrechts, P., Jongma, J., Mouchet, A., Selten, F., Barriat, P-Y., Campin, J-M., Deleersnijder, E., Driesschaert, E., Goelzer, H., Janssens, I., Loutre, M-F., Morales Maqueda, M.A., Opsteegh, T., Mathieu, P-P., Munhoven, G., Pettersson, E.J., Renssen, H., Roche, D.M., Schaeffer, M., Tartinville, B., Timmermann, A., Weber, S.L. (2010) Description of the Earth System Model of Intermediate Complexity LOVECLIM Version 1.2, Geoscientific Model Development, 3:603-633 doi: 10.5194/gmd-3-603-2010. Holland, M.M., Finnis, J., Barrett, A.P., Serreze, M.C. (2007) Projected Changes in Arctic Ocean Freshwater Budgets, Journal of Geophysical Research, 112, G04S55, doi:10.1029/2006JG000354, 2007

  10. Combining data visualization and statistical approaches for interpreting measurements and meta-data: Integrating heatmaps, variable clustering, and mixed regression models

    EPA Science Inventory

    The advent of new higher throughput analytical instrumentation has put a strain on interpreting and explaining the results from complex studies. Contemporary human, environmental, and biomonitoring data sets are comprised of tens or hundreds of analytes, multiple repeat measures...

  11. The Role of Probability-Based Inference in an Intelligent Tutoring System.

    ERIC Educational Resources Information Center

    Mislevy, Robert J.; Gitomer, Drew H.

    Probability-based inference in complex networks of interdependent variables is an active topic in statistical research, spurred by such diverse applications as forecasting, pedigree analysis, troubleshooting, and medical diagnosis. This paper concerns the role of Bayesian inference networks for updating student models in intelligent tutoring…

  12. Towards a Dynamic Model of Skills Involved in Sight Reading Music

    ERIC Educational Resources Information Center

    Kopiez, Reinhard; Lee, Ji In

    2006-01-01

    This study investigates the relationship between selected predictors of achievement in playing unrehearsed music (sight reading) and the changing complexity of sight reading tasks. The question under investigation is, how different variables gain or lose significance as sight reading stimuli become more difficult. Fifty-two piano major graduates…

  13. Motivational Determinants of Alcohol Use: A Theory and Its Applications.

    ERIC Educational Resources Information Center

    Cox, W. Miles

    This transcript of a conference presentation describes a motivational model of alcohol use that shows the interrelationship between the various factors that affect drinking. First, a flow diagram is presented and described that shows how complex biological, psychological, and environmental variables contribute to a person's motivation for…

  14. Estimating under-five mortality in space and time in a developing world context.

    PubMed

    Wakefield, Jon; Fuglstad, Geir-Arne; Riebler, Andrea; Godwin, Jessica; Wilson, Katie; Clark, Samuel J

    2018-01-01

    Accurate estimates of the under-five mortality rate in a developing world context are a key barometer of the health of a nation. This paper describes a new model to analyze survey data on mortality in this context. We are interested in both spatial and temporal description, that is, we wish to estimate the under-five mortality rate across regions and years and to investigate the association between the under-five mortality rate and spatially varying covariate surfaces. We illustrate the methodology by producing yearly estimates for subnational areas in Kenya over the period 1980-2014 using data from the Demographic and Health Surveys, which use stratified cluster sampling. We use a binomial likelihood with fixed effects for the urban/rural strata and random effects for the clustering to account for the complex survey design. Smoothing is carried out using Bayesian hierarchical models with continuous spatial and temporally discrete components. A key component of the model is an offset to adjust for bias due to the effects of HIV epidemics. Substantively, there has been a sharp decline in Kenya in the under-five mortality rate in the period 1980-2014, but large variability in estimated subnational rates remains. A priority for future research is understanding this variability. In exploratory work, we examine whether a variety of spatial covariate surfaces can explain the variability in the under-five mortality rate. Temperature, precipitation, a measure of malaria infection prevalence, and a measure of nearness to cities were candidates for inclusion in the covariate model, but the interplay between space, time, and covariates is complex.

  15. Aspects of porosity prediction using multivariate linear regression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Byrnes, A.P.; Wilson, M.D.

    1991-03-01

    Highly accurate multiple linear regression models have been developed for sandstones of diverse compositions. Porosity reduction or enhancement processes are controlled by the fundamental variables, Pressure (P), Temperature (T), Time (t), and Composition (X), where composition includes mineralogy, size, sorting, fluid composition, etc. The multiple linear regression equation, of which all linear porosity prediction models are subsets, takes the generalized form: Porosity = C0 + C1(P) + C2(T) + C3(X) + C4(t) + C5(PT) + C6(PX) + C7(Pt) + C8(TX) + C9(Tt) + C10(Xt) + C11(PTX) + C12(PXt) + C13(PTt) + C14(TXt) + C15(PTXt). The first four primary variables are often interactive, thus requiring terms involving two or more primary variables (the form shown implies interaction and not necessarily multiplication). The final terms used may also involve simple mathematical transforms such as log X, e^T, X^2, or more complex transformations such as the Time-Temperature Index (TTI). The X term in the equation above represents a suite of compositional variables and, therefore, a fully expanded equation may include a series of terms incorporating these variables. Numerous published bivariate porosity prediction models involving P (or depth) or Tt (TTI) are effective to a degree, largely because of the high degree of colinearity between P and TTI. However, all such bivariate models ignore the unique contributions of P and Tt, as well as various X terms. These simpler models become poor predictors in regions where colinear relations change, where important variables have been ignored, or where the database does not include a sufficient range or weight distribution for the critical variables.
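
    The generalized porosity model above maps directly onto a multiple linear regression with interaction terms. A minimal sketch, assuming a hypothetical core-data table; only the formula expansion over the four primary variables is shown, without the additional transforms discussed in the abstract.

```python
import pandas as pd
import statsmodels.formula.api as smf

cores = pd.read_csv("sandstone_cores.csv")   # hypothetical columns: porosity, P, T, t, X
# "P * T * X * t" expands to all main effects plus every interaction up to PTXt,
# i.e. the 16-term generalized form quoted above.
fit = smf.ols("porosity ~ P * T * X * t", data=cores).fit()
print(fit.summary())
```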

  16. Uncertainties in Future Regional Sea Level Trends: How to Deal with the Internal Climate Variability?

    NASA Astrophysics Data System (ADS)

    Becker, M.; Karpytchev, M.; Hu, A.; Deser, C.; Lennartz-Sassinek, S.

    2017-12-01

    Today, climate models (CMs) are the main tools for forecasting sea level rise (SLR) at global and regional scales. The CM forecasts are accompanied by inherent uncertainties. Understanding and reducing these uncertainties is becoming a matter of increasing urgency in order to provide robust estimates of SLR impact on coastal societies, which must make sustainable choices of climate adaptation strategy. These CM uncertainties are linked to structural model formulation, initial conditions, emission scenario and internal variability. The internal variability is due to complex non-linear interactions within the Earth climate system and can induce diverse quasi-periodic oscillatory modes and long-term persistence. To quantify the effects of internal variability, most studies use multi-model ensembles or sea level projections from a single model run with perturbed initial conditions. However, large ensembles are often unavailable or too small, and they are computationally expensive to produce. In this study, we use a power-law scaling of sea level fluctuations, as observed in many other geophysical signals and natural systems, to characterize the internal climate variability. Within this statistical framework, we (1) use the pre-industrial control run of the National Center for Atmospheric Research Community Climate System Model (NCAR-CCSM) to test the robustness of the power-law scaling hypothesis; (2) employ the power-law statistics as a tool for assessing the spread of 21st-century regional sea level projections due to internal climate variability in NCAR-CCSM; (3) compare the uncertainties in predicted sea level changes obtained from a NCAR-CCSM multi-member ensemble of simulations with estimates derived for power-law processes; and (4) explore the sensitivity of spatial patterns of the internal variability and its effects on regional sea level projections.
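
    As an illustration of the power-law framework, the scaling exponent of a sea level series can be estimated from the slope of its log-log power spectrum. The sketch below uses a synthetic surrogate series and a simple periodogram fit; it is not the authors' procedure and uses no NCAR-CCSM data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic surrogate for a control-run sea level series at one grid point:
# a weak random walk (long-memory component) plus white noise
n = 2048
series = 0.05 * np.cumsum(rng.normal(size=n)) + rng.normal(size=n)
series = series - series.mean()

# Periodogram (skip the zero frequency)
freqs = np.fft.rfftfreq(n, d=1.0)[1:]
power = np.abs(np.fft.rfft(series))[1:] ** 2 / n

# Power-law scaling: power ~ freq**(-beta); beta is the slope in log-log space
slope, _ = np.polyfit(np.log10(freqs), np.log10(power), 1)
print(f"estimated spectral slope beta ~ {-slope:.2f}")
```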

  17. Fuzzy logic modeling of the resistivity parameter and topography features for aquifer assessment in hydrogeological investigation of a crystalline basement complex

    NASA Astrophysics Data System (ADS)

    Adabanija, M. A.; Omidiora, E. O.; Olayinka, A. I.

    2008-05-01

    A linguistic fuzzy logic system (LFLS)-based expert system model has been developed for the assessment of aquifers for the location of productive water boreholes in a crystalline basement complex. The model design employed a multiple input/single output (MISO) approach with geoelectrical parameters and topographic features as input variables and control crisp value as the output. The application of the method to the data acquired in Khondalitic terrain, a basement complex in Vizianagaram District, south India, shows that potential groundwater resource zones that have control output values in the range 0.3295-0.3484 have a yield greater than 6,000 liters per hour (LPH). The range 0.3174-0.3226 gives a yield less than 4,000 LPH. The validation of the control crisp value using data acquired from Oban Massif, a basement complex in southeastern Nigeria, indicates a yield less than 3,000 LPH for control output values in the range 0.2938-0.3065. This validation corroborates the ability of control output values to predict a yield, thereby vindicating the applicability of linguistic fuzzy logic system in siting productive water boreholes in a basement complex.
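
    A toy multiple-input/single-output (MISO) fuzzy inference sketch in the spirit of the approach described: triangular membership functions, min as the AND operator for rule firing, and weighted-average defuzzification. The input variables, membership ranges, rules, and consequent values are invented for illustration and are not the calibrated LFLS of the study:

```python
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function evaluated at point x."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def aquifer_score(resistivity, slope_deg):
    """Toy MISO fuzzy inference: two inputs -> one crisp control value."""
    # Input fuzzification (illustrative ranges)
    res_low  = trimf(resistivity, 0, 50, 150)
    res_high = trimf(resistivity, 100, 400, 800)
    slope_gentle = trimf(slope_deg, 0, 2, 8)
    slope_steep  = trimf(slope_deg, 5, 20, 45)

    # Rule firing strengths (min as AND) and crisp consequents
    rules = [
        (min(res_low,  slope_gentle), 0.35),  # favourable: weathered rock, flat terrain
        (min(res_low,  slope_steep),  0.30),
        (min(res_high, slope_gentle), 0.31),
        (min(res_high, slope_steep),  0.29),  # unfavourable: fresh rock, steep terrain
    ]
    weights = np.array([w for w, _ in rules])
    values  = np.array([v for _, v in rules])
    if weights.sum() == 0:
        return 0.0
    return float(np.dot(weights, values) / weights.sum())  # weighted-average defuzzification

print(round(aquifer_score(resistivity=60.0, slope_deg=3.0), 4))
```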

  18. Capturing complexity in work disability research: application of system dynamics modeling methodology.

    PubMed

    Jetha, Arif; Pransky, Glenn; Hettinger, Lawrence J

    2016-01-01

    Work disability (WD) is characterized by variable and occasionally undesirable outcomes. The underlying determinants of WD outcomes include patterns of dynamic relationships among health, personal, organizational and regulatory factors that have been challenging to characterize, and inadequately represented by contemporary WD models. System dynamics modeling (SDM) methodology applies a sociotechnical systems thinking lens to view WD systems as comprising a range of influential factors linked by feedback relationships. SDM can potentially overcome limitations in contemporary WD models by uncovering causal feedback relationships, and conceptualizing dynamic system behaviors. It employs a collaborative and stakeholder-based model building methodology to create a visual depiction of the system as a whole. SDM can also enable researchers to run dynamic simulations to provide evidence of anticipated or unanticipated outcomes that could result from policy and programmatic intervention. SDM may advance rehabilitation research by providing greater insights into the structure and dynamics of WD systems while helping to understand inherent complexity. Challenges related to data availability, determining validity, and the extensive time and technical skill requirements for model building may limit SDM's use in the field and should be considered. Contemporary work disability (WD) models provide limited insight into complexity associated with WD processes. System dynamics modeling (SDM) has the potential to capture complexity through a stakeholder-based approach that generates a simulation model consisting of multiple feedback loops. SDM may enable WD researchers and practitioners to understand the structure and behavior of the WD system as a whole, and inform development of improved strategies to manage straightforward and complex WD cases.
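
    To make the stock-and-flow idea concrete, the sketch below simulates a toy work-disability system with one balancing feedback loop (a rehabilitation lever that speeds return to work) using simple Euler integration. The structure and parameters are invented for illustration; a real SDM exercise would be built collaboratively with stakeholders and validated against data:

```python
# Toy stock-and-flow model: workers move from "on duty" to "off work" at a rate
# that rises with accumulated stress, and return at a rate boosted by a
# rehabilitation program (a balancing feedback loop).
dt, T = 0.1, 52.0                   # weekly steps over one year
steps = int(T / dt)

on_duty, off_work, stress = 1000.0, 0.0, 0.2
rehab_effort = 0.3                  # policy lever to experiment with

history = []
for _ in range(steps):
    onset_rate  = 0.002 * (1 + stress) * on_duty            # flow: duty -> off work
    return_rate = (0.05 + 0.10 * rehab_effort) * off_work    # flow: off work -> duty
    stress += dt * (0.01 * off_work / 1000.0 - 0.05 * stress)  # feedback on stress

    on_duty  += dt * (return_rate - onset_rate)
    off_work += dt * (onset_rate - return_rate)
    history.append(off_work)

print(f"off-work stock after one year: {history[-1]:.1f}")
```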

  19. A tree-like Bayesian structure learning algorithm for small-sample datasets from complex biological model systems.

    PubMed

    Yin, Weiwei; Garimalla, Swetha; Moreno, Alberto; Galinski, Mary R; Styczynski, Mark P

    2015-08-28

    There are increasing efforts to bring high-throughput systems biology techniques to bear on complex animal model systems, often with a goal of learning about underlying regulatory network structures (e.g., gene regulatory networks). However, complex animal model systems typically have significant limitations on cohort sizes, number of samples, and the ability to perform follow-up and validation experiments. These constraints are particularly problematic for many current network learning approaches, which require large numbers of samples and may predict many more regulatory relationships than actually exist. Here, we test the idea that by leveraging the accuracy and efficiency of classifiers, we can construct high-quality networks that capture important interactions between variables in datasets with few samples. We start from a previously-developed tree-like Bayesian classifier and generalize its network learning approach to allow for arbitrary depth and complexity of tree-like networks. Using four diverse sample networks, we demonstrate that this approach performs consistently better at low sample sizes than the Sparse Candidate Algorithm, a representative approach for comparison because it is known to generate Bayesian networks with high positive predictive value. We develop and demonstrate a resampling-based approach to enable the identification of a viable root for the learned tree-like network, important for cases where the root of a network is not known a priori. We also develop and demonstrate an integrated resampling-based approach to the reduction of variable space for the learning of the network. Finally, we demonstrate the utility of this approach via the analysis of a transcriptional dataset of a malaria challenge in a non-human primate model system, Macaca mulatta, suggesting the potential to capture indicators of the earliest stages of cellular differentiation during leukopoiesis. We demonstrate that by starting from effective and efficient approaches for creating classifiers, we can identify interesting tree-like network structures with significant ability to capture the relationships in the training data. This approach represents a promising strategy for inferring networks with high positive predictive value under the constraint of small numbers of samples, meeting a need that will only continue to grow as more high-throughput studies are applied to complex model systems.
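
    The tree-like Bayesian classifier generalized in this work is not reproduced here; as a generic illustration of learning tree-structured networks from few samples, the sketch below builds a Chow-Liu-style maximum-spanning tree over pairwise mutual information. The synthetic data and variable dependencies are assumptions:

```python
import numpy as np
from itertools import combinations

def mutual_info(x, y):
    """Empirical mutual information between two discrete (0/1) vectors."""
    mi = 0.0
    for a in np.unique(x):
        for b in np.unique(y):
            pxy = np.mean((x == a) & (y == b))
            px, py = np.mean(x == a), np.mean(y == b)
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * py))
    return mi

rng = np.random.default_rng(2)
n_samples, n_vars = 30, 6           # deliberately few samples
data = rng.integers(0, 2, size=(n_samples, n_vars))
data[:, 1] = data[:, 0] ^ (rng.random(n_samples) < 0.1)   # make var 1 depend on var 0

# Pairwise mutual information matrix
mi = np.zeros((n_vars, n_vars))
for i, j in combinations(range(n_vars), 2):
    mi[i, j] = mi[j, i] = mutual_info(data[:, i], data[:, j])

# Maximum-spanning tree (Prim's algorithm) gives the tree skeleton
in_tree, edges = {0}, []
while len(in_tree) < n_vars:
    i, j = max(((i, j) for i in in_tree for j in range(n_vars) if j not in in_tree),
               key=lambda e: mi[e])
    edges.append((i, j))
    in_tree.add(j)

print("tree edges (parent, child):", edges)
```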

  20. Deterministic modelling and stochastic simulation of biochemical pathways using MATLAB.

    PubMed

    Ullah, M; Schmidt, H; Cho, K H; Wolkenhauer, O

    2006-03-01

    The analysis of complex biochemical networks is conducted in two popular conceptual frameworks for modelling. The deterministic approach requires the solution of ordinary differential equations (ODEs, reaction rate equations) with concentrations as continuous state variables. The stochastic approach involves the simulation of differential-difference equations (chemical master equations, CMEs) with probabilities as variables, generating counts of molecules for chemical species as realisations of random variables drawn from the probability distribution described by the CMEs. Although there are numerous tools available, many of them free, the modelling and simulation environment MATLAB is widely used in the physical and engineering sciences. We describe a collection of MATLAB functions to construct and solve ODEs for deterministic simulation and to implement realisations of CMEs for stochastic simulation using advanced MATLAB coding (Release 14). The program was successfully applied to pathway models from the literature for both cases. The results were compared to implementations using alternative tools for dynamic modelling and simulation of biochemical networks. The aim is to provide a concise set of MATLAB functions that encourage experimentation with systems biology models. All the script files are available from www.sbi.uni-rostock.de/publications_matlab-paper.html.
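
    The MATLAB functions themselves are available from the authors' site; as a language-neutral illustration of the two frameworks, the sketch below contrasts a deterministic ODE solution with a Gillespie stochastic simulation for a simple synthesis-degradation species. Rate constants are illustrative:

```python
import numpy as np

# Simple synthesis-degradation model:  0 -> X (rate k1),  X -> 0 (rate k2*X)
k1, k2, x0, t_end = 10.0, 0.1, 0, 60.0

# Deterministic ODE, dx/dt = k1 - k2*x, solved with explicit Euler
dt = 0.01
ts = np.arange(0, t_end, dt)
x_det = np.empty_like(ts)
x = float(x0)
for i, _ in enumerate(ts):
    x_det[i] = x
    x += dt * (k1 - k2 * x)

# Stochastic simulation (Gillespie's direct method)
rng = np.random.default_rng(3)
t, x = 0.0, x0
traj_t, traj_x = [0.0], [x0]
while t < t_end:
    a1, a2 = k1, k2 * x              # reaction propensities
    a0 = a1 + a2
    t += rng.exponential(1.0 / a0)   # time to next reaction
    x += 1 if rng.random() < a1 / a0 else -1
    traj_t.append(t); traj_x.append(x)

print(f"ODE steady state ~ {x_det[-1]:.1f}, SSA final count = {traj_x[-1]}")
```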

  1. Soil depth influence on Amazonian ecophysiology

    NASA Astrophysics Data System (ADS)

    Fagerstrom, I.; Baker, I. T.; Gallup, S.; Denning, A. S.

    2017-12-01

    Models of land-atmosphere interaction are important for simulating present-day weather and critical for predictions of future climate. Land-atmosphere interaction models have become increasingly complex in the last 30 years, leading to the need for further studies examining their intricacies and improvement. This research focuses on the effect of variable soil depth on Amazonian Gross Primary Production (GPP), respiration, and their combination into overall carbon flux. We compare a control, which has a uniform soil depth of 10 meters, against two experiments with variable soil depths. To conduct this study we ran the three models for the period 2000-2012, evaluating similarities and differences between them. We focus on the Amazon rain forest, and compare differences in components of carbon flux. Not surprisingly, we find that the main differences between the models arise in regions where the soil depth is dissimilar between models. However, we did not observe significant differences in GPP between known drought, wet, and average years; interannual variability in carbon dynamics was less than anticipated. We also anticipated that differences between models would be most significant during the dry season, but found discrepancies that persisted through the entire annual cycle.

  2. An attentional drift diffusion model over binary-attribute choice.

    PubMed

    Fisher, Geoffrey

    2017-11-01

    In order to make good decisions, individuals need to identify and properly integrate information about various attributes associated with a choice. Since choices are often complex and made rapidly, they are typically affected by contextual variables that are thought to influence how much attention is paid to different attributes. I propose a modification of the attentional drift-diffusion model, the binary-attribute attentional drift diffusion model (baDDM), which describes the choice process over simple binary-attribute choices and how it is affected by fluctuations in visual attention. Using an eye-tracking experiment, I find the baDDM makes accurate quantitative predictions about several key variables including choices, reaction times, and how these variables are correlated with attention to two attributes in an accept-reject decision. Furthermore, I estimate an attribute-based fixation bias that suggests attention to an attribute increases its subjective weight by 5%, while the unattended attribute's weight is decreased by 10%. Copyright © 2017 Elsevier B.V. All rights reserved.
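
    A simulation sketch of an attention-weighted, two-attribute drift diffusion process in the spirit of the baDDM. The 1.05/0.90 attention weights echo the 5% increase and 10% decrease reported in the abstract, but the gaze process, noise level, threshold, and attribute values are illustrative assumptions:

```python
import numpy as np

def simulate_baddm(v1, v2, gaze1=0.6, w_att=1.05, w_unatt=0.90,
                   noise=0.3, threshold=1.0, dt=0.01, rng=None):
    """One trial of a two-attribute, attention-weighted drift diffusion process.

    v1, v2   : subjective values of the two attributes (positive favors accept)
    gaze1    : probability that attribute 1 is fixated in a given time step
    w_att    : multiplicative weight on the currently attended attribute
    w_unatt  : weight on the unattended attribute
    Returns (choice, reaction_time); choice 1 = accept, 0 = reject.
    """
    rng = rng or np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        attending_1 = rng.random() < gaze1          # which attribute is fixated now
        w1 = w_att if attending_1 else w_unatt
        w2 = w_unatt if attending_1 else w_att
        drift = w1 * v1 + w2 * v2
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return (1 if x > 0 else 0), t

rng = np.random.default_rng(4)
trials = [simulate_baddm(v1=0.8, v2=-0.3, rng=rng) for _ in range(1000)]
accept_rate = np.mean([c for c, _ in trials])
mean_rt = np.mean([t for _, t in trials])
print(f"P(accept) = {accept_rate:.2f}, mean RT = {mean_rt:.2f} s")
```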

  3. Analysis of Neuronal Spike Trains, Deconstructed

    PubMed Central

    Aljadeff, Johnatan; Lansdell, Benjamin J.; Fairhall, Adrienne L.; Kleinfeld, David

    2016-01-01

    As information flows through the brain, neuronal firing progresses from encoding the world as sensed by the animal to driving the motor output of subsequent behavior. One of the more tractable goals of quantitative neuroscience is to develop predictive models that relate the sensory or motor streams with neuronal firing. Here we review and contrast analytical tools used to accomplish this task. We focus on classes of models in which the external variable is compared with one or more feature vectors to extract a low-dimensional representation, the history of spiking and other variables are potentially incorporated, and these factors are nonlinearly transformed to predict the occurrences of spikes. We illustrate these techniques in application to datasets of different degrees of complexity. In particular, we address the fitting of models in the presence of strong correlations in the external variable, as occurs in natural sensory stimuli and in movement. Spectral correlation between predicted and measured spike trains is introduced to contrast the relative success of different methods. PMID:27477016
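
    As a concrete instance of the model class described (a stimulus compared with a feature vector, passed through a nonlinearity, and driving stochastic spiking), the sketch below simulates a linear-nonlinear-Poisson neuron and recovers its filter with a spike-triggered average. For correlated natural stimuli, as discussed in the review, this simple estimator would need whitening or a GLM fit; all parameters here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulate an LNP neuron: the stimulus is filtered, passed through an
# exponential nonlinearity, and spikes are drawn from a Poisson process.
T, L = 20000, 20                        # time bins, filter length
stim = rng.normal(size=T)               # white-noise stimulus (no correlations)
true_filter = np.exp(-np.arange(L) / 5.0) * np.sin(np.arange(L) / 2.0)

drive = np.convolve(stim, true_filter, mode="full")[:T]
rate = np.exp(0.8 * drive - 1.0)        # spikes per bin
spikes = rng.poisson(rate)

# Spike-triggered average: for white-noise stimuli this recovers the filter
# (up to scale); correlated stimuli would require whitening or GLM fitting.
sta = np.zeros(L)
for t in range(L, T):
    sta += spikes[t] * stim[t - L + 1:t + 1][::-1]
sta /= spikes[L:].sum()

corr = np.corrcoef(sta, true_filter)[0, 1]
print(f"correlation between STA and true filter: {corr:.2f}")
```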

  4. Probabilistic Thermal Analysis During Mars Reconnaissance Orbiter Aerobraking

    NASA Technical Reports Server (NTRS)

    Dec, John A.

    2007-01-01

    A method for performing a probabilistic thermal analysis during aerobraking has been developed. The analysis is performed on the Mars Reconnaissance Orbiter solar array during aerobraking. The methodology makes use of a response surface model derived from a more complex finite element thermal model of the solar array. The response surface is a quadratic equation which calculates the peak temperature for a given orbit drag pass at a specific location on the solar panel. Five different response surface equations are used, one of which predicts the overall maximum solar panel temperature, and the remaining four predict the temperatures of the solar panel thermal sensors. The variables used to define the response surface can be characterized as either environmental, material property, or modeling variables. Response surface variables are statistically varied in a Monte Carlo simulation. The Monte Carlo simulation produces mean temperatures and 3 sigma bounds as well as the probability of exceeding the designated flight allowable temperature for a given orbit. Response surface temperature predictions are compared with the Mars Reconnaissance Orbiter flight temperature data.
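
    A sketch of the general procedure: a quadratic response surface standing in for the finite element model, evaluated under Monte Carlo sampling of environmental, material-property, and modeling variables to obtain the mean, 3-sigma bounds, and probability of exceeding an allowable temperature. The coefficients, input distributions, and limit below are invented, not the MRO values:

```python
import numpy as np

rng = np.random.default_rng(6)

def peak_temp(rho, cp, q, alpha):
    """Illustrative quadratic response surface for peak panel temperature [deg C].

    rho   : atmospheric density scale at periapsis (environmental)
    cp    : panel heat capacity (material property)
    q     : heating-rate factor (modeling)
    alpha : solar absorptance (material property)
    Coefficients are invented for demonstration only.
    """
    x = np.array([rho, cp, q, alpha])
    b0, b = 40.0, np.array([450.0, -0.02, 30.0, 80.0])
    B = np.diag([1200.0, 1e-6, 4.0, 15.0])      # quadratic terms only, no cross terms
    return b0 + b @ x + x @ B @ x

n = 20_000
samples = np.column_stack([
    rng.normal(0.05, 0.01, n),      # density
    rng.normal(900.0, 30.0, n),     # heat capacity
    rng.normal(1.0, 0.1, n),        # heating factor
    rng.normal(0.85, 0.02, n),      # absorptance
])
temps = np.array([peak_temp(*row) for row in samples])

allowable = 175.0                    # hypothetical flight-allowable limit
print(f"mean = {temps.mean():.1f} C, 3-sigma = {3 * temps.std():.1f} C, "
      f"P(exceed {allowable} C) = {(temps > allowable).mean():.3f}")
```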

  5. Study of Environmental Data Complexity using Extreme Learning Machine

    NASA Astrophysics Data System (ADS)

    Leuenberger, Michael; Kanevski, Mikhail

    2017-04-01

    The main goals of environmental data science using machine learning algorithms revolve, in a broad sense, around the calibration, prediction, and visualization of hidden relationships between input and output variables. In order to optimize the models and to understand the phenomenon under study, the characterization of the complexity (at different levels) should be taken into account. Therefore, the identification of linear or non-linear behavior between input and output variables adds valuable information about the complexity of the phenomenon. The present research highlights and investigates the different issues that can occur when identifying the complexity (linear/non-linear) of environmental data using machine learning algorithms. In particular, the main attention is paid to the description of a self-consistent methodology for the use of Extreme Learning Machines (ELM, Huang et al., 2006), which have recently gained great popularity. By applying two ELM models (with linear and non-linear activation functions) and by comparing their efficiency, the degree of linearity can be quantified. The considered approach is accompanied by simulated and real high-dimensional and multivariate data case studies. In conclusion, the current challenges and future developments in complexity quantification using environmental data mining are discussed. References - Huang, G.-B., Zhu, Q.-Y., Siew, C.-K., 2006. Extreme learning machine: theory and applications. Neurocomputing 70 (1-3), 489-501. - Kanevski, M., Pozdnoukhov, A., Timonin, V., 2009. Machine Learning for Spatial Environmental Data. EPFL Press; Lausanne, Switzerland, p.392. - Leuenberger, M., Kanevski, M., 2015. Extreme Learning Machines for spatial environmental data. Computers and Geosciences 85, 64-73.
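
    A minimal ELM sketch following the recipe of Huang et al. (2006): random hidden-layer weights, output weights by least squares, with identity versus sigmoid activation to compare linear and non-linear fits, as in the methodology described. The data and hyperparameters are illustrative:

```python
import numpy as np

def elm_fit_predict(X_tr, y_tr, X_te, n_hidden=200, nonlinear=True, seed=0):
    """Extreme Learning Machine: random hidden weights, least-squares output layer."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X_tr.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)

    def hidden(X):
        A = X @ W + b
        return 1.0 / (1.0 + np.exp(-A)) if nonlinear else A   # sigmoid vs identity

    H_tr, H_te = hidden(X_tr), hidden(X_te)
    beta, *_ = np.linalg.lstsq(H_tr, y_tr, rcond=None)
    return H_te @ beta

# Synthetic environmental-like data with a nonlinear input-output relation
rng = np.random.default_rng(7)
X = rng.uniform(-1, 1, size=(600, 5))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=600)
X_tr, X_te, y_tr, y_te = X[:400], X[400:], y[:400], y[400:]

for nl in (False, True):
    pred = elm_fit_predict(X_tr, y_tr, X_te, nonlinear=nl)
    rmse = np.sqrt(np.mean((pred - y_te) ** 2))
    print(f"{'nonlinear' if nl else 'linear':>9} ELM test RMSE: {rmse:.3f}")
```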

  6. Evaluation of an improved intermediate complexity snow scheme in the ORCHIDEE land surface model

    NASA Astrophysics Data System (ADS)

    Wang, Tao; Ottlé, Catherine; Boone, Aaron; Ciais, Philippe; Brun, Eric; Morin, Samuel; Krinner, Gerhard; Piao, Shilong; Peng, Shushi

    2013-06-01

    Snow plays an important role in land surface models (LSMs) used for climate studies and model applications over France, but its current treatment as a single layer of constant density and thermal conductivity in ORCHIDEE (Organizing Carbon and Hydrology in Dynamic Ecosystems) induces significant deficiencies. The intermediate complexity snow scheme ISBA-ES (Interaction between Soil, Biosphere and Atmosphere-Explicit Snow), which includes key snow processes, has been adapted and implemented into ORCHIDEE, referred to here as ORCHIDEE-ES. In this study, the adapted scheme is evaluated against observations from the alpine site Col de Porte (CDP), with a continuous 18-year data set, and from sites distributed across northern Eurasia. At CDP, comparisons of snow depth, snow water equivalent, surface temperature, snow albedo, and snowmelt runoff reveal that the improved scheme in ORCHIDEE simulates the internal snow processes better than the original one. Preliminary sensitivity tests indicate that snow albedo parameterization is the main cause of the large differences in snow-related variables, but not in soil temperature, simulated by the two models. The ability of ORCHIDEE-ES to better simulate snow thermal conductivity is the main source of the differences in soil temperatures. These findings are confirmed by a sensitivity analysis of ORCHIDEE-ES parameters using the Morris method. These features should enable more realistic investigation of interactions between snow and soil thermal regimes (and related soil carbon decomposition). When the two models are compared over sites located in northern Eurasia from 1979 to 1993, snow-related variables and 20 cm soil temperature are better reproduced by ORCHIDEE-ES than by ORCHIDEE, revealing a more accurate representation of spatio-temporal variability.

  7. A unifying framework for marginalized random intercept models of correlated binary outcomes

    PubMed Central

    Swihart, Bruce J.; Caffo, Brian S.; Crainiceanu, Ciprian M.

    2013-01-01

    We demonstrate that many current approaches for marginal modeling of correlated binary outcomes produce likelihoods that are equivalent to the copula-based models herein. These general copula models of underlying latent threshold random variables yield likelihood-based models for marginal fixed effects estimation and interpretation in the analysis of correlated binary data with exchangeable correlation structures. Moreover, we propose a nomenclature and set of model relationships that substantially elucidates the complex area of marginalized random intercept models for binary data. A diverse collection of didactic mathematical and numerical examples are given to illustrate concepts. PMID:25342871

  8. Divergent patterns of experimental and model derived variables of tundra ecosystem carbon exchange in response to arctic warming

    NASA Astrophysics Data System (ADS)

    Schaedel, C.; Koven, C.; Celis, G.; Hutchings, J.; Lawrence, D. M.; Mauritz, M.; Pegoraro, E.; Salmon, V. G.; Taylor, M.; Wieder, W. R.; Schuur, E.

    2017-12-01

    Warming over the Arctic in the last decades has been twice as high as for the rest of the globe and has exposed large amounts of organic carbon to microbial decomposition in permafrost ecosystems. Continued warming and associated changes in soil moisture conditions not only lead to enhanced microbial decomposition of permafrost soil carbon but also to enhanced plant carbon uptake. Both processes impact the overall contribution of permafrost carbon dynamics to the global carbon cycle, yet field and modeling studies show large uncertainties with regard to both uptake and release mechanisms. Here, we compare variables associated with ecosystem carbon exchange (GPP: gross primary production; Reco: ecosystem respiration; and NEE: net ecosystem exchange) from eight years of experimental soil warming in moist acidic tundra with the same variables derived from a model experiment (Community Land Model version 4.5: CLM4.5) that simulates the same degree of arctic warming. While soil temperatures and thaw depths exhibited comparable increases with warming in the field and the model, carbon exchange related variables showed divergent patterns. In the field, non-linear responses to experimentally induced permafrost thaw were observed in GPP, Reco, and NEE. Indirect effects of continued soil warming and thaw changed soil moisture conditions, causing ground surface subsidence and suppressing ecosystem carbon exchange over time. In contrast, the model predicted linear increases in GPP, Reco, and NEE with every year of warming, turning the ecosystem into a net annual carbon sink. The field experiment revealed the importance of hydrology in carbon flux responses to permafrost thaw, a complexity that the model may fail to capture. Further parameterization of the variables that drive GPP, Reco, and NEE in the model will help to inform and refine future model development.

  9. A Comparison between Multiple Regression Models and CUN-BAE Equation to Predict Body Fat in Adults

    PubMed Central

    Fuster-Parra, Pilar; Bennasar-Veny, Miquel; Tauler, Pedro; Yañez, Aina; López-González, Angel A.; Aguiló, Antoni

    2015-01-01

    Background Because the accurate measurement of body fat (BF) is difficult, several prediction equations have been proposed. The aim of this study was to compare different multiple regression models to predict BF, including the recently reported CUN-BAE equation. Methods Multiple regression models using body mass index (BMI) and body adiposity index (BAI) as predictors of BF were compared. These models were also compared with the CUN-BAE equation. For all the analyses, one sample including all the participants and another including only the overweight and obese subjects were considered. The BF reference measure was obtained using Bioelectrical Impedance Analysis. Results The simplest models, including only BMI or BAI as independent variables, showed that BAI is a better predictor of BF. However, adding the variable sex to both models made BMI a better predictor than BAI. For both the whole group of participants and the group of overweight and obese participants, simple models (with BMI, age and sex as variables) achieved correlations with BF similar to those obtained with the more complex CUN-BAE (ρ = 0.87 vs. ρ = 0.86 for the whole sample and ρ = 0.88 vs. ρ = 0.89 for overweight and obese subjects, the second value in each pair corresponding to CUN-BAE). Conclusions There are models simpler than the CUN-BAE equation that fit BF as well as CUN-BAE does; it could therefore be considered that CUN-BAE overfits. Using a simple linear regression model, BAI as the only variable predicts BF better than BMI. However, when the sex variable is introduced, BMI becomes the indicator of choice to predict BF. PMID:25821960

  10. A comparison between multiple regression models and CUN-BAE equation to predict body fat in adults.

    PubMed

    Fuster-Parra, Pilar; Bennasar-Veny, Miquel; Tauler, Pedro; Yañez, Aina; López-González, Angel A; Aguiló, Antoni

    2015-01-01

    Because the accurate measurement of body fat (BF) is difficult, several prediction equations have been proposed. The aim of this study was to compare different multiple regression models to predict BF, including the recently reported CUN-BAE equation. Multiple regression models using body mass index (BMI) and body adiposity index (BAI) as predictors of BF were compared. These models were also compared with the CUN-BAE equation. For all the analyses, one sample including all the participants and another including only the overweight and obese subjects were considered. The BF reference measure was obtained using Bioelectrical Impedance Analysis. The simplest models, including only BMI or BAI as independent variables, showed that BAI is a better predictor of BF. However, adding the variable sex to both models made BMI a better predictor than BAI. For both the whole group of participants and the group of overweight and obese participants, simple models (with BMI, age and sex as variables) achieved correlations with BF similar to those obtained with the more complex CUN-BAE (ρ = 0.87 vs. ρ = 0.86 for the whole sample and ρ = 0.88 vs. ρ = 0.89 for overweight and obese subjects, the second value in each pair corresponding to CUN-BAE). There are models simpler than the CUN-BAE equation that fit BF as well as CUN-BAE does; it could therefore be considered that CUN-BAE overfits. Using a simple linear regression model, BAI as the only variable predicts BF better than BMI. However, when the sex variable is introduced, BMI becomes the indicator of choice to predict BF.

  11. Modeling soybean canopy resistance from micrometeorological and plant variables for estimating evapotranspiration using one-step Penman-Monteith approach

    NASA Astrophysics Data System (ADS)

    Irmak, Suat; Mutiibwa, Denis; Payero, Jose; Marek, Thomas; Porter, Dana

    2013-12-01

    Canopy resistance (rc) is one of the most important variables in evapotranspiration, agronomy, hydrology and climate change studies that link vegetation response to changing environmental and climatic variables. This study investigates a generalized nonlinear/linear modeling approach for estimating rc from micrometeorological and plant variables for soybean [Glycine max (L.) Merr.] canopies in different climatic zones in Nebraska, USA (Clay Center, Geneva, Holdrege and North Platte). Eight models estimating rc as a function of different combinations of micrometeorological and plant variables are presented. The models integrate the linear and non-linear effects of regulating variables (net radiation, Rn; relative humidity, RH; wind speed, U3; air temperature, Ta; vapor pressure deficit, VPD; leaf area index, LAI; aerodynamic resistance, ra; and solar zenith angle, Za) to predict hourly rc. The most complex rc model has all regulating variables and the simplest has only Rn, Ta and RH. The rc models were developed at Clay Center in the growing season of 2007 and applied to other independent sites and years. The predicted rc for the growing seasons at the four locations were then used to estimate actual crop evapotranspiration (ETc) in a one-step process using the Penman-Monteith model and compared to the measured data at all locations. The models were able to account for 66-93% of the variability in measured hourly ETc across locations. Models without LAI generally underperformed and underestimated ETc due to overestimation of rc, especially during the full canopy cover stage. Using vapor pressure deficit or relative humidity in the models had a similar effect on estimating rc. The root squared error (RSE) between measured and estimated ETc was about 0.07 mm h-1 for most of the models at Clay Center, Geneva and Holdrege; at North Platte, RSE was above 0.10 mm h-1. The results for different sites and growing seasons demonstrate the robustness and consistency of the models in estimating soybean rc, which is encouraging for the general application of one-step estimation of soybean canopy ETc with the Penman-Monteith model and could aid wider adoption of the approach by the irrigation and water management community.
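
    For reference, the one-step Penman-Monteith combination equation into which a modeled rc feeds can be coded directly. The function below uses standard FAO-56-style constants and is a generic sketch, not the authors' implementation; the example inputs are assumptions:

```python
import numpy as np

def penman_monteith_et(Rn, G, Ta, RH, ra, rc, P=101.3):
    """Hourly ET from the one-step Penman-Monteith combination equation.

    Rn, G  : net radiation and soil heat flux [W m-2]
    Ta     : air temperature [deg C]
    RH     : relative humidity [%]
    ra, rc : aerodynamic and canopy (surface) resistances [s m-1]
    P      : atmospheric pressure [kPa]
    Returns ET in mm h-1. Constants follow FAO-56-style conventions.
    """
    es = 0.6108 * np.exp(17.27 * Ta / (Ta + 237.3))     # saturation vapor pressure [kPa]
    ea = es * RH / 100.0                                # actual vapor pressure [kPa]
    vpd = es - ea                                       # vapor pressure deficit [kPa]
    delta = 4098.0 * es / (Ta + 237.3) ** 2             # slope of saturation curve [kPa/C]
    gamma = 0.000665 * P                                # psychrometric constant [kPa/C]
    rho_a = P / (0.287 * 1.01 * (Ta + 273.0))           # moist air density [kg m-3]
    cp = 1013.0                                         # specific heat of air [J kg-1 C-1]
    lam = 2.45e6                                        # latent heat of vaporization [J kg-1]

    LE = (delta * (Rn - G) + rho_a * cp * vpd / ra) / (delta + gamma * (1.0 + rc / ra))
    return LE / lam * 3600.0                            # W m-2 -> mm h-1 (1 kg m-2 = 1 mm)

# Example: midday conditions with an rc value of the kind such models would predict
print(f"ETc = {penman_monteith_et(Rn=500, G=50, Ta=28, RH=55, ra=40, rc=70):.2f} mm/h")
```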

  12. Heart-Rate Variability—More than Heart Beats?

    PubMed Central

    Ernst, Gernot

    2017-01-01

    Heart-rate variability (HRV) is frequently presented as mirroring imbalances within the autonomic nervous system. Many investigations are based on the paradigm that increased sympathetic tone is associated with decreased parasympathetic tone and vice versa. But HRV is probably more than an indicator of possible disturbances in the autonomic system. Some perturbations trigger not reciprocal, but parallel changes of vagal and sympathetic nerve activity. HRV has also been considered a surrogate parameter of the complex interaction between brain and cardiovascular system. Systems biology is an interdisciplinary field of study focusing on complex interactions within biological systems like the cardiovascular system, with the help of computational models and time series analysis, among other tools. Time series are considered surrogates of the particular system, reflecting robustness or fragility. Increased variability is usually seen as associated with a good health condition, whereas lowered variability might signify pathological changes. This might explain why lower HRV parameters were related to decreased life expectancy in several studies. Newer integrating theories have been proposed. According to them, HRV reflects the state of the heart as much as the state of the brain. The polyvagal theory suggests that the physiological state dictates the range of behavior and psychological experience. Stressful events perpetuate the rhythms of autonomic states and, subsequently, behaviors. According to this theory, reduced variability is not only a surrogate but represents a fundamental homeostatic mechanism in a pathological state. The neurovisceral integration model proposes that cardiac vagal tone, described in HRV by the HF index among other measures, can mirror the functional balance of the neural networks implicated in emotion-cognition interactions. Both recent models represent a more holistic approach to understanding the significance of HRV. PMID:28955705

  13. Seasonality of semi-arid and savanna-type ecosystems in an Earth system model

    NASA Astrophysics Data System (ADS)

    Dahlin, K.; Swenson, S. C.; Lombardozzi, D.; Kamoske, A.

    2016-12-01

    Recent work has identified semi-arid and savanna-type (SAST) ecosystems as a critical component of interannual variability in the Earth system (Poulter et al. 2014, Ahlström et al. 2015), yet our understanding of the spatial and temporal patterns present in these systems remains limited. There are three major factors that contribute to the complex behavior of SAST ecosystems, globally. First is leaf phenology, the timing of the appearance, presence, and senescence of plant leaves. Plants grow and drop their leaves in response to a variety of cues, including soil moisture, rainfall, day length, and relative humidity, and alternative phenological strategies might often co-exist in the same location. The second major factor in savannas is soil moisture. The complex nature of soil behavior under extremely dry, then extremely wet conditions is critical to our understanding of how savannas function. The third factor is fire. Globally, virtually all savanna-type ecosystems operate with some non-zero fire return interval. Here we compare model output from the Community Land Model (CLM5-BGC) in SAST regions to remotely sensed data on these three variables - phenology (MODIS LAI), soil moisture (SMAP), and fire (GFED4) - assessing both annual spatial patterns and intra-annual variability, which is critical in these highly variable systems. We present new SAST-specific first- and second-order benchmarks, including numbers of annual LAI peaks (often >1 in SAST systems) and correlations between soil moisture, LAI, and fire. Developing a better understanding of how plants respond to seasonal patterns is a critical first step in understanding how SAST ecosystems will respond to and influence climate under future scenarios.

  14. Reconstruction of Complex Directional Networks with Group Lasso Nonlinear Conditional Granger Causality.

    PubMed

    Yang, Guanxue; Wang, Lin; Wang, Xiaofan

    2017-06-07

    Reconstruction of the networks underlying complex systems is one of the most crucial problems in many areas of engineering and science. In this paper, rather than identifying parameters of complex systems governed by pre-defined models or taking polynomial and rational functions as prior information for subsequent model selection, we put forward a general framework for nonlinear causal network reconstruction from time series with limited observations. Obtaining multi-source datasets through a data-fusion strategy, we propose a novel method to handle the nonlinearity and directionality of complex networked systems, namely group lasso nonlinear conditional Granger causality. Specifically, our method can exploit different sets of radial basis functions to approximate the nonlinear interactions between each pair of nodes and integrates sparsity through grouped variable selection. The performance of our approach is first assessed with two types of simulated datasets from nonlinear vector autoregressive models and nonlinear dynamic models, and then verified on the benchmark datasets from DREAM3 Challenge 4. Effects of data size and noise intensity are also discussed. All of the results demonstrate that the proposed method performs better in terms of a higher area under the precision-recall curve.

  15. Fuzzy logic based robotic controller

    NASA Technical Reports Server (NTRS)

    Attia, F.; Upadhyaya, M.

    1994-01-01

    Existing Proportional-Integral-Derivative (PID) robotic controllers rely on an inverse kinematic model to convert user-specified Cartesian trajectory coordinates to joint variables. These joints experience friction, stiction, and gear backlash effects. Due to the lack of proper linearization of these effects, modern control theory based on state space methods cannot provide adequate control for robotic systems. In the presence of loads, the dynamic behavior of robotic systems is complex and nonlinear, especially where mathematical models must be evaluated in real time. Fuzzy Logic Control is a fast-emerging alternative to conventional control systems in situations where it may not be feasible to formulate an analytical model of the complex system. Fuzzy logic techniques track a user-defined trajectory without requiring the host computer to explicitly solve the nonlinear inverse kinematic equations. The goal is to provide a rule-based approach, which is closer to human reasoning. The approach used expresses end-point error, location of manipulator joints, and proximity to obstacles as fuzzy variables. The resulting decisions are based upon linguistic and non-numerical information. This paper presents an alternative to the conventional robot controller that is independent of computationally intensive kinematic equations. Computer simulation results of this approach, as obtained from a software implementation, are also discussed.

  16. It's a Sooty Problem: Black Carbon and Aerosols from Space

    NASA Technical Reports Server (NTRS)

    Kaufman, Yoram J.

    2005-01-01

    Our knowledge of atmospheric aerosols (smoke, pollution, dust or sea salt particles small enough to be suspended in the air), their evolution, composition, variability in space and time, and interaction with solar radiation, clouds and precipitation is lacking despite decades of research. Only recently have we recognized that understanding the global aerosol system is fundamental for progress in climate change and hydrological cycle research. While a single instrument was used to demonstrate 50 years ago that global CO2 levels are rising, posing a threat to our climate, we need an array of satellites, surface networks of radiometers, and elaborate laboratory and field experiments coupled with chemical transport models to understand the global aerosol system. This complexity of the aerosol problem results from the particles' short lifetime (about one week), the variability of their chemical composition, and the complex chemical and physical processes in the atmosphere. The result is a heterogeneous distribution of aerosols and their properties. The new generation of satellites and surface networks of radiometers provides exciting opportunities to measure aerosol properties and their interaction with clouds and climate. However, further development in satellite capability, aerosol chemical models and climate models is needed to fully decipher the aerosols' secrets with the accuracy required to predict future climates.

  17. User’s Guide for the VTRPE (Variable Terrain Radio Parabolic Equation) Computer Model

    DTIC Science & Technology

    1991-10-01

    ... propagation effects and antenna characteristics in radar system performance calculations, the radar transmission equation is often employed. Following Kerr [2], ... electromagnetic wave equations for the complex electric and magnetic radiation fields. The model accounts for the effects of nonuniform atmospheric refractivity ... transmission equation, which is used in the performance prediction and analysis of radar and communication systems. Optimized fast Fourier transform (FFT ...

  18. Cognitive components of a mathematical processing network in 9-year-old children.

    PubMed

    Szűcs, Dénes; Devine, Amy; Soltesz, Fruzsina; Nobes, Alison; Gabriel, Florence

    2014-07-01

    We determined how various cognitive abilities, including several measures of a proposed domain-specific number sense, relate to mathematical competence in nearly 100 9-year-old children with normal reading skill. Results are consistent with an extended number processing network and suggest that important processing nodes of this network are phonological processing, verbal knowledge, visuo-spatial short-term and working memory, spatial ability and general executive functioning. The model was highly specific to predicting arithmetic performance. There were no strong relations between mathematical achievement and verbal short-term and working memory, sustained attention, response inhibition, finger knowledge and symbolic number comparison performance. Non-verbal intelligence measures were also non-significant predictors when added to our model. Number sense variables were non-significant predictors in the model and they were also non-significant predictors when entered into regression analysis with only a single visuo-spatial WM measure. Number sense variables were predicted by sustained attention. Results support a network theory of mathematical competence in primary school children and falsify the importance of a proposed modular 'number sense'. We suggest an 'executive memory function centric' model of mathematical processing. Mapping a complex processing network requires that studies consider the complex predictor space of mathematics rather than just focusing on a single or a few explanatory factors.

  19. Cognitive components of a mathematical processing network in 9-year-old children

    PubMed Central

    Szűcs, Dénes; Devine, Amy; Soltesz, Fruzsina; Nobes, Alison; Gabriel, Florence

    2014-01-01

    We determined how various cognitive abilities, including several measures of a proposed domain-specific number sense, relate to mathematical competence in nearly 100 9-year-old children with normal reading skill. Results are consistent with an extended number processing network and suggest that important processing nodes of this network are phonological processing, verbal knowledge, visuo-spatial short-term and working memory, spatial ability and general executive functioning. The model was highly specific to predicting arithmetic performance. There were no strong relations between mathematical achievement and verbal short-term and working memory, sustained attention, response inhibition, finger knowledge and symbolic number comparison performance. Non-verbal intelligence measures were also non-significant predictors when added to our model. Number sense variables were non-significant predictors in the model and they were also non-significant predictors when entered into regression analysis with only a single visuo-spatial WM measure. Number sense variables were predicted by sustained attention. Results support a network theory of mathematical competence in primary school children and falsify the importance of a proposed modular ‘number sense’. We suggest an ‘executive memory function centric’ model of mathematical processing. Mapping a complex processing network requires that studies consider the complex predictor space of mathematics rather than just focusing on a single or a few explanatory factors. PMID:25089322

  20. The "Horns" of FK Comae and the Complex Structure of its Outer Atmosphere

    NASA Astrophysics Data System (ADS)

    Saar, Steven H.; Ayres, T. R.; Kashyap, V.

    2014-01-01

    As part of a large multiwavelength campaign (COCOA-PUFS*) to explore magnetic activity in the unusual, single, rapidly rotating giant FK Comae, we have taken a time series of moderate-resolution FUV spectra of the star with the COS spectrograph on HST. We find that the star has unusual, time-variable emission profiles in the chromosphere and transition region which show horn-like features. We use simple spatially inhomogeneous models to explain the variable line shapes. Modeling the lower chromospheric Cl I 1351 Å line, we find evidence for a very extended, spatially inhomogeneous outer atmosphere, likely composed of many huge "sling-shot" prominences of cooler material embedded in a rotationally distended corona. We compare these results with hotter transition region lines (Si IV) and optical spectra of the chromospheric He I D3 line. We also employ the model Cl I profiles, and data-derived empirical models, to fit the complex spectral region around the coronal Fe XXI 1354.1 Å line. We place limits on the flux of this line, and show these limits are consistent with expectations from the observed X-ray spectrum. *Campaign for Observation of the Corona and Outer Atmosphere of the Fast-rotating Star, FK Comae. This work was supported by HST grant GO-12376.01-A.
