Sample records for observations process-based models

  1. Identification of AR(I)MA processes for modelling temporal correlations of GPS observations

    NASA Astrophysics Data System (ADS)

    Luo, X.; Mayer, M.; Heck, B.

    2009-04-01

    In many geodetic applications observations of the Global Positioning System (GPS) are routinely processed by means of the least-squares method. However, this algorithm delivers reliable estimates of unknown parameters and realistic accuracy measures only if both the functional and stochastic models are appropriately defined within GPS data processing. One deficiency of the stochastic model used in many GPS software products is the neglect of temporal correlations of GPS observations. In practice, knowledge of the temporal stochastic behaviour of GPS observations can be improved by analysing time series of residuals resulting from the least-squares evaluation. This paper presents an approach based on the theory of autoregressive (integrated) moving average (AR(I)MA) processes to model temporal correlations of GPS observations using time series of observation residuals. A practicable integration of AR(I)MA models in GPS data processing first requires determining the order parameters of the AR(I)MA processes. In the case of GPS, the identification of AR(I)MA processes can be affected by various factors impacting GPS positioning results, e.g. baseline length, multipath effects, observation weighting, or weather variations. The influences of these factors on AR(I)MA identification are empirically analysed based on a large set of representative residual time series resulting from differential GPS post-processing using 1-Hz observation data collected within the permanent SAPOS® (Satellite Positioning Service of the German State Survey) network. Both short and long time series are modelled by means of AR(I)MA processes. The final order parameters are determined based on the whole residual database; the corresponding empirical distribution functions illustrate that multipath and weather variations seem to affect the identification of AR(I)MA processes much more significantly than baseline length and observation weighting. Additionally, the results of modelling temporal correlations using high-order AR(I)MA processes are compared with those obtained using first-order autoregressive (AR(1)) processes and empirically estimated autocorrelation functions.
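
    As an illustration of the order-identification step the paper describes, the following minimal Python sketch fits candidate ARMA(p, q) models to a residual-like series and selects an order by AIC. The synthetic AR(1) series, the order ranges, and the use of statsmodels are assumptions for illustration, not the paper's SAPOS data or procedure.

    ```python
    # Minimal sketch: identify an ARMA order for a synthetic series standing
    # in for 1-Hz GPS observation residuals, selecting by AIC.
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(0)

    # Synthetic stand-in for a temporally correlated residual series (metres).
    n = 3600
    noise = rng.normal(scale=0.002, size=n)
    resid = np.empty(n)
    resid[0] = noise[0]
    for t in range(1, n):
        resid[t] = 0.8 * resid[t - 1] + noise[t]   # AR(1) with phi = 0.8

    # Fit candidate ARMA(p, q) models and keep the one with the lowest AIC.
    best = None
    for p in range(4):
        for q in range(3):
            if p == q == 0:
                continue
            res = ARIMA(resid, order=(p, 0, q)).fit()
            if best is None or res.aic < best[0]:
                best = (res.aic, (p, 0, q))

    print(f"selected ARMA order {best[1]}, AIC = {best[0]:.1f}")
    ```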

  2. Process-oriented Observational Metrics for CMIP6 Climate Model Assessments

    NASA Astrophysics Data System (ADS)

    Jiang, J. H.; Su, H.

    2016-12-01

    Observational metrics based on satellite observations have been developed and effectively applied in post-CMIP5 model evaluation and improvement projects. As new physics and parameterizations continue to be included in models for the upcoming CMIP6, it is important to continue objective comparisons between observations and model results. This talk will summarize the process-oriented observational metrics and methodologies for constraining climate models with A-Train satellite observations in support of CMIP6 model assessments. We target parameters and processes related to atmospheric clouds and water vapor, which are critically important for Earth's radiative budget, climate feedbacks, and water and energy cycles, and thus for reducing uncertainties in climate models.

  3. An assessment of the carbon balance of arctic tundra: comparisons among observations, process models, and atmospheric inversions

    USGS Publications Warehouse

    McGuire, A.D.; Christensen, T.R.; Hayes, D.; Heroult, A.; Euskirchen, E.; Yi, Y.; Kimball, J.S.; Koven, C.; Lafleur, P.; Miller, P.A.; Oechel, W.; Peylin, P.; Williams, M.

    2012-01-01

    Although arctic tundra has been estimated to cover only 8% of the global land surface, the large and potentially labile carbon pools currently stored in tundra soils have the potential for large emissions of carbon (C) under a warming climate. These emissions, as radiatively active greenhouse gases in the form of both CO2 and CH4, could amplify global warming. Given the potential sensitivity of these ecosystems to climate change and the expectation that the Arctic will experience appreciable warming over the next century, it is important to assess whether responses of C exchange in tundra regions are likely to enhance or mitigate warming. In this study we compared analyses of C exchange of Arctic tundra between 1990–1999 and 2000–2006 among observations, regional and global applications of process-based terrestrial biosphere models, and atmospheric inversion models. Syntheses of the compilation of flux observations and of inversion model results indicate that the annual exchange of CO2 between arctic tundra and the atmosphere has large uncertainties that cannot be distinguished from neutral balance. The mean estimate from an ensemble of process-based model simulations suggests that arctic tundra acted as a sink for atmospheric CO2 in recent decades, but based on the uncertainty estimates it cannot be determined with confidence whether these ecosystems represent a weak or a strong sink. Tundra was 0.6 °C warmer in the 2000s than in the 1990s. The central estimates of the observations, process-based models, and inversion models each identify stronger sinks in the 2000s compared with the 1990s. Similarly, the observations and the applications of regional process-based models suggest that CH4 emissions from arctic tundra have increased from the 1990s to the 2000s. Based on our analyses of the estimates from observations, process-based models, and inversion models, we estimate that arctic tundra was a sink for atmospheric CO2 of 110 Tg C yr-1 (uncertainty between a sink of 291 Tg C yr-1 and a source of 80 Tg C yr-1) and a source of CH4 to the atmosphere of 19 Tg C yr-1 (uncertainty between sources of 8 and 29 Tg C yr-1). The suite of analyses conducted in this study indicates that it is clearly important to reduce uncertainties in the observations, process-based models, and inversions in order to better understand the degree to which Arctic tundra is influencing atmospheric CO2 and CH4 concentrations. This reduction of uncertainties can be accomplished through (1) the strategic placement of more CO2 and CH4 monitoring stations to reduce uncertainties in inversions, (2) improved observation networks of ground-based measurements of CO2 and CH4 exchange to understand exchange in response to disturbance and across gradients of hydrological variability, and (3) the effective transfer of information from enhanced observation networks into process-based models to improve the simulation of CO2 and CH4 exchange from arctic tundra to the atmosphere.

  4. Dominant root locus in state estimator design for material flow processes: A case study of hot strip rolling.

    PubMed

    Fišer, Jaromír; Zítek, Pavel; Skopec, Pavel; Knobloch, Jan; Vyhlídal, Tomáš

    2017-05-01

    The purpose of the paper is to achieve constrained estimation of process state variables using an anisochronic state observer tuned by the dominant root locus technique. The anisochronic state observer is based on a state-space time-delay model of the process; moreover, the process model is identified as both delayed and non-linear. This model is developed to describe a material flow process. The root locus technique, combined with the magnitude optimum method, is used to investigate the estimation process. The resulting dominant root locations serve as a measure of estimation performance: the higher the dominant (natural) frequency, with the dominant roots in the leftmost position of the complex plane, the better the performance and robustness achieved. A model-based observer control methodology for material flow processes is also provided by means of the separation principle. For demonstration purposes, the computer-based anisochronic state observer is applied to strip temperature estimation in a hot strip finishing mill composed of seven stands. This application was the original motivation for the presented research. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
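
    The anisochronic observer itself addresses time-delay models, which the sketch below does not attempt; as a simplified illustration of tuning estimator dynamics by pushing the dominant roots leftward in the complex plane, it designs a standard Luenberger observer gain by pole placement for a hypothetical delay-free two-state system (all matrices and pole locations are assumed).

    ```python
    # Illustrative sketch only: a Luenberger observer on a delay-free 2-state
    # model shows the idea of tuning estimator dynamics by root placement.
    import numpy as np
    from scipy.signal import place_poles

    # Hypothetical 2-state linear process: x' = A x + B u, y = C x
    A = np.array([[0.0, 1.0],
                  [-2.0, -3.0]])
    B = np.array([[0.0], [1.0]])
    C = np.array([[1.0, 0.0]])

    # Desired observer roots: well to the left of the process poles so the
    # estimation error decays faster than the plant dynamics.
    desired = np.array([-8.0, -9.0])
    L = place_poles(A.T, C.T, desired).gain_matrix.T   # observer gain (duality)

    # Observer dynamics: xhat' = A xhat + B u + L (y - C xhat)
    def observer_step(xhat, u, y, dt=1e-3):
        dxhat = A @ xhat + B @ u + L @ (y - C @ xhat)
        return xhat + dt * dxhat

    print("observer error eigenvalues:", np.linalg.eigvals(A - L @ C))
    ```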

  5. Virtual Observation System for Earth System Model: An Application to ACME Land Model Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Dali; Yuan, Fengming; Hernandez, Benjamin

    Investigating and evaluating physical-chemical-biological processes within an Earth system model (ESM) can be very challenging due to the complexity of both model design and software implementation. A virtual observation system (VOS) is presented to enable interactive observation of these processes during system simulation. Based on advanced computing technologies, such as compiler-based software analysis, automatic code instrumentation, and high-performance data transport, the VOS provides run-time observation capability, in-situ data analytics for Earth system model simulation, and model behavior adjustment opportunities through simulation steering. A VOS for a terrestrial land model simulation within the Accelerated Climate Modeling for Energy model is also presented to demonstrate the implementation details and system innovations.

  6. Virtual Observation System for Earth System Model: An Application to ACME Land Model Simulations

    DOE PAGES

    Wang, Dali; Yuan, Fengming; Hernandez, Benjamin; ...

    2017-01-01

    Investigating and evaluating physical-chemical-biological processes within an Earth system model (ESM) can be very challenging due to the complexity of both model design and software implementation. A virtual observation system (VOS) is presented to enable interactive observation of these processes during system simulation. Based on advanced computing technologies, such as compiler-based software analysis, automatic code instrumentation, and high-performance data transport, the VOS provides run-time observation capability, in-situ data analytics for Earth system model simulation, and model behavior adjustment opportunities through simulation steering. A VOS for a terrestrial land model simulation within the Accelerated Climate Modeling for Energy model is also presented to demonstrate the implementation details and system innovations.

  7. Virtual Sensor Web Architecture

    NASA Astrophysics Data System (ADS)

    Bose, P.; Zimdars, A.; Hurlburt, N.; Doug, S.

    2006-12-01

    NASA envisions the development of smart sensor webs: intelligent, integrated observation networks that harness distributed sensing assets, their associated continuous and complex data sets, and predictive observation processing mechanisms for timely, collaborative hazard mitigation and enhanced science productivity and reliability. This paper presents the Virtual Sensor Web Infrastructure for Collaborative Science (VSICS) architecture for sustained coordination of (numerical and distributed) model-based processing, closed-loop resource allocation, and observation planning. VSICS's key ideas include i) rich descriptions of sensors as services based on semantic markup languages like OWL and SensorML; ii) service-oriented workflow composition and repair for simple and ensemble models; iii) event-driven workflow execution based on event-based and distributed workflow management mechanisms; and iv) development of autonomous model interaction management capabilities providing closed-loop control of collection resources driven by competing targeted observation needs. We present results from initial work on collaborative science processing involving distributed services (the COSEC framework) that is being extended to create VSICS.

  8. Rapid response tools and datasets for post-fire modeling: Linking Earth Observations and process-based hydrological models to support post-fire remediation

    Treesearch

    M. E. Miller; M. Billmire; W. J. Elliot; K. A. Endsley; P. R. Robichaud

    2015-01-01

    Preparation is key to utilizing Earth Observations and process-based models to support post-wildfire mitigation. Post-fire flooding and erosion can pose a serious threat to life, property and municipal water supplies. Increased runoff and sediment delivery due to the loss of surface cover and fire-induced changes in soil properties are of great concern. Remediation...

  9. Upscaling from research watersheds: an essential stage of trustworthy general-purpose hydrologic model building

    NASA Astrophysics Data System (ADS)

    McNamara, J. P.; Semenova, O.; Restrepo, P. J.

    2011-12-01

    Highly instrumented research watersheds provide excellent opportunities for investigating hydrologic processes. A danger, however, is that the processes observed at a particular research watershed are too specific to that watershed and not representative even of the larger watershed that contains it. Models developed from such partial observations may therefore not be suitable for general hydrologic use. Demonstrating the upscaling of hydrologic processes from research watersheds to larger watersheds is thus essential to validate concepts and test model structure. The Hydrograph model has been developed as a general-purpose, process-based, distributed hydrologic system. In its applications and further development we evaluate the scaling of model concepts and parameters in a wide range of hydrologic landscapes. All models, whether lumped or distributed, are based on a discretization concept: it is common practice to discretize watersheds into so-called hydrologic units or hydrologic landscapes assumed to have homogeneous hydrologic functioning. If a model structure is fixed, differences in hydrologic functioning (differences between hydrologic landscapes) should be reflected by specific sets of model parameters. Research watersheds make it possible to combine processes in reasonable detail into typical hydrologic concepts such as the hydrologic units, hydrologic forms, and runoff formation complexes of the Hydrograph model. By upscaling we thus mean not the upscaling of a single process but the upscaling of such unified hydrologic functioning. The simulation of runoff processes for the Dry Creek research watershed, Idaho, USA (27 km2) was undertaken using the Hydrograph model. The information on the watershed was provided by Boise State University and included a GIS database of watershed characteristics and a detailed hydrometeorological observational dataset. The model provided good simulation results in terms of runoff and variable states of soil and snow over the simulation period 2000-2009. The parameters of the model were hand-adjusted based on rational judgment, observational data, and available understanding of the underlying processes. For the first run, some processes, such as the impact of riparian vegetation on runoff and streamflow/groundwater interaction, were handled in a conceptual way. It was shown that the Hydrograph model, which requires only a modest amount of parameter calibration, may also serve as a quality control for observations. Based on the obtained parameter values and the process understanding gained at the research watershed, the model was applied to larger watersheds located in a similar environment: the Boise River at South Fork (1660 km2) and at Twin Springs (2155 km2). The evaluation of the results of this upscaling will be presented.

  10. Observer-based perturbation extremum seeking control with input constraints for direct-contact membrane distillation process

    NASA Astrophysics Data System (ADS)

    Eleiwi, Fadi; Laleg-Kirati, Taous Meriem

    2018-06-01

    An observer-based perturbation extremum seeking control is proposed for a direct-contact membrane distillation (DCMD) process. The process is described with a dynamic model based on a 2D advection-diffusion equation with pump flow rates as process inputs. The objective of the controller is to optimise the trade-off between the permeate mass flux and the energy consumption of the pumps inside the process. Cases of single and multiple control inputs are considered through the use of only the feed pump flow rate or both the feed and permeate pump flow rates. A nonlinear Lyapunov-based observer is designed to provide an estimate of the temperature distribution throughout the designated domain of the DCMD process. Moreover, control inputs are constrained with an anti-windup technique to remain within feasible physical ranges. The performance of the proposed structure is analysed, and simulations based on real DCMD process parameters are provided for each control input.
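
    As a rough illustration of the control idea only (not the paper's DCMD model or observer), the sketch below runs classic perturbation-based extremum seeking on a hypothetical static objective J(u), with the input clipped to an assumed feasible range as a crude stand-in for the anti-windup constraint handling.

    ```python
    # Minimal sketch of perturbation extremum seeking with an input constraint.
    # J(u), the gains, and the dither settings are all assumptions.
    import numpy as np

    def J(u):
        # Hypothetical concave objective with its maximum at u = 2.0.
        return -(u - 2.0) ** 2

    dt, omega, a, k = 0.01, 5.0, 0.1, 1.0
    u_min, u_max = 0.0, 3.0            # assumed feasible pump flow-rate range
    uhat = 0.5                         # initial estimate of the optimiser
    y_lp = J(uhat)                     # low-pass state for the washout filter

    for i in range(40000):
        t = i * dt
        u = np.clip(uhat + a * np.sin(omega * t), u_min, u_max)  # dithered input
        y = J(u)
        y_lp += dt * 1.0 * (y - y_lp)          # first-order low-pass (washout)
        # Demodulate the high-passed objective and integrate: the average of
        # (y - y_lp) * sin(omega t) approximates the gradient of J at uhat.
        uhat += dt * k * (y - y_lp) * np.sin(omega * t)
        uhat = np.clip(uhat, u_min, u_max)     # crude anti-windup clamp

    print(f"converged input estimate: {uhat:.3f} (true optimum at 2.000)")
    ```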

  11. Technical Note: Approximate Bayesian parameterization of a process-based tropical forest model

    NASA Astrophysics Data System (ADS)

    Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.

    2014-02-01

    Inverse parameter estimation of process-based models is a long-standing problem in many scientific disciplines. A key question for inverse parameter estimation is how to define the metric that quantifies how well model predictions fit to the data. This metric can be expressed by general cost or objective functions, but statistical inversion methods require a particular metric, the probability of observing the data given the model parameters, known as the likelihood. For technical and computational reasons, likelihoods for process-based stochastic models are usually based on general assumptions about variability in the observed data, and not on the stochasticity generated by the model. Only in recent years have new methods become available that allow the generation of likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual inventory data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and demonstrate the application of this method by fitting the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can be successfully applied to process-based models of high complexity. The methodology is particularly suitable for heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models.
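
    A minimal sketch of the core idea under strong simplifying assumptions: a toy stochastic simulator stands in for FORMIND, a Gaussian fitted to repeated simulations provides the parametric likelihood approximation, and that approximation drives a plain Metropolis sampler with a flat prior.

    ```python
    # Sketch of a parametric (Gaussian) likelihood approximation generated from
    # stochastic simulations, placed inside a Metropolis sampler.
    import numpy as np

    rng = np.random.default_rng(1)

    def simulate(theta, n=50):
        # Toy stochastic model: a summary statistic whose mean tracks theta.
        return rng.normal(loc=theta, scale=1.0, size=n).mean()

    obs = 3.2          # observed summary statistic (virtual data)

    def log_like(theta, reps=100):
        # Fit a Gaussian to summaries from repeated simulations at theta.
        sims = np.array([simulate(theta) for _ in range(reps)])
        mu, sd = sims.mean(), sims.std(ddof=1)
        return -0.5 * ((obs - mu) / sd) ** 2 - np.log(sd)

    # Plain Metropolis sampler with a flat prior on [0, 10].
    theta, ll = 5.0, log_like(5.0)
    chain = []
    for _ in range(2000):
        prop = theta + rng.normal(scale=0.5)
        if 0.0 <= prop <= 10.0:
            ll_prop = log_like(prop)
            if np.log(rng.uniform()) < ll_prop - ll:
                theta, ll = prop, ll_prop
        chain.append(theta)

    print(f"posterior mean ~ {np.mean(chain[500:]):.2f} (data generated near {obs})")
    ```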

  12. OBSERVATIONAL DATA PROCESSING AT NCEP

    Science.gov Websites

    NCEP's observational database serves not only operations but also research and study, and is accessed by the various NCEP networks. Page maintained by Dennis Keyser, NOAA/NWS/NCEP/EMC.

  13. Developing Emotion-Based Case Formulations: A Research-Informed Method.

    PubMed

    Pascual-Leone, Antonio; Kramer, Ueli

    2017-01-01

    New research-informed methods for case conceptualization that cut across traditional therapy approaches are increasingly popular. This paper presents a trans-theoretical approach to case formulation based on research observations of emotion. The sequential model of emotional processing (Pascual-Leone & Greenberg, 2007) is a process research model that provides concrete markers for therapists to observe the emerging emotional development of their clients. We illustrate how this model can be used by clinicians to track change; it provides a 'clinical map' by which therapists may orient themselves in-session and plan treatment interventions. Emotional processing offers a trans-theoretical framework for therapists who wish to conduct emotion-based case formulations. First, we present criteria for why this research model translates well into practice. Second, two contrasting case studies are presented to demonstrate the method. The model bridges research with practice by using client emotion as an axis of integration. Key Practitioner Message: Process research on emotion can offer a template for therapists to make case formulations while using a range of treatment approaches. The sequential model of emotional processing provides a 'process map' of concrete markers for therapists to (1) observe the emerging emotional development of their clients, and (2) develop a treatment plan. Copyright © 2016 John Wiley & Sons, Ltd.

  14. Observability Analysis of a Matrix Kalman Filter-Based Navigation System Using Visual/Inertial/Magnetic Sensors

    PubMed Central

    Feng, Guohu; Wu, Wenqi; Wang, Jinling

    2012-01-01

    A matrix Kalman filter (MKF) has been implemented for an integrated navigation system using visual/inertial/magnetic sensors. The MKF rearranges the original nonlinear process model into a pseudo-linear process model. We employ the observability rank criterion based on Lie derivatives to verify the conditions under which the nonlinear system is observable. It has been proved that the observability conditions are: (a) at least one degree of rotational freedom is excited, and (b) at least two linearly independent horizontal lines and one vertical line are observed. Experimental results have validated the correctness of these observability conditions. PMID:23012523
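
    A minimal sketch of the observability rank criterion based on Lie derivatives, applied to a hypothetical two-state system rather than the paper's full visual/inertial/magnetic model:

    ```python
    # Observability rank criterion via Lie derivatives on a toy system.
    import sympy as sp

    x1, x2 = sp.symbols("x1 x2")
    x = sp.Matrix([x1, x2])

    f = sp.Matrix([x2, -sp.sin(x1)])   # drift vector field (pendulum-like)
    h = sp.Matrix([x1])                # measurement: position only

    # Stack the gradients of h and its Lie derivatives along f.
    grads = [h.jacobian(x)]
    Lfh = h
    for _ in range(len(x) - 1):
        Lfh = Lfh.jacobian(x) * f      # next Lie derivative L_f^k h
        grads.append(Lfh.jacobian(x))

    O = sp.Matrix.vstack(*grads)       # observability codistribution
    print("rank:", O.rank(), "of", len(x))  # full rank => locally observable
    ```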

  15. Space-based infrared scanning sensor LOS determination and calibration using star observation

    NASA Astrophysics Data System (ADS)

    Chen, Jun; Xu, Zhan; An, Wei; Deng, Xin-Pu; Yang, Jun-Gang

    2015-10-01

    This paper provides a novel methodology for removing sensor bias from a space-based infrared (IR) system (SBIRS) through the use of stars detected in the background field of the sensor. A space-based IR system uses the line of sight (LOS) to a target for target location. LOS determination and calibration is the key precondition for accurate location and tracking of targets, and the LOS calibration of a scanning sensor is one of the main difficulties. Subsequent changes of sensor bias are not taken into account in the conventional LOS determination and calibration process. Based on an analysis of the imaging process of the scanning sensor, a theoretical model for estimating the bias angles from star observations is proposed: a process model for the bias angles and an observation model for the stars are established, an extended Kalman filter (EKF) is used to estimate the bias angles, and the sensor LOS is then calibrated. Time-domain simulation results indicate that the proposed method has high precision and smooth performance for sensor LOS determination and calibration. The timeliness and precision requirements of the target tracking process in a space-based IR tracking system can be met with the proposed algorithm.
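
    A minimal sketch of the estimation step under assumed geometry: the bias angles follow a random-walk process model, each detected star yields a linearised two-dimensional residual, and the filter updates the bias estimate star by star (with this linear stand-in measurement, the EKF update reduces to the standard Kalman form).

    ```python
    # Sketch: estimate slowly varying sensor bias angles from star residuals.
    # The scanning-sensor geometry is replaced by a randomised linear map H.
    import numpy as np

    rng = np.random.default_rng(2)

    n_bias = 2                       # hypothetical roll/pitch bias angles (rad)
    true_bias = np.array([1e-3, -5e-4])

    F = np.eye(n_bias)               # random-walk process model
    Q = np.eye(n_bias) * 1e-12       # process noise: biases drift very slowly
    R = np.eye(2) * (5e-5) ** 2      # star-centroid measurement noise

    x = np.zeros(n_bias)             # bias estimate
    P = np.eye(n_bias) * 1e-6

    for k in range(200):
        # Each star gives a 2-D residual between predicted and observed
        # direction; for small biases this is ~ H @ bias + noise.
        H = rng.normal(size=(2, n_bias))
        z = H @ true_bias + rng.multivariate_normal(np.zeros(2), R)

        # Predict (trivial for a random walk), then update.
        x, P = F @ x, F @ P @ F.T + Q
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (z - H @ x)
        P = (np.eye(n_bias) - K @ H) @ P

    print("estimated bias angles:", x, " true:", true_bias)
    ```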

  16. Synchrony and motor mimicking in chimpanzee observational learning

    PubMed Central

    Fuhrmann, Delia; Ravignani, Andrea; Marshall-Pescini, Sarah; Whiten, Andrew

    2014-01-01

    Cumulative tool-based culture underwrote our species' evolutionary success, and tool-based nut-cracking is one of the strongest candidates for cultural transmission in our closest relatives, chimpanzees. However, the social learning processes that may explain both the similarities and differences between the species remain unclear. A previous study of nut-cracking by initially naïve chimpanzees suggested that a learning chimpanzee holding no hammer nevertheless replicated hammering actions it witnessed. This observation has potentially important implications for the nature of the social learning processes and underlying motor coding involved. In the present study, model and observer actions were quantified frame-by-frame and analysed with stringent statistical methods, demonstrating synchrony between the observer's and model's movements, cross-correlation of these movements above chance level and a unidirectional transmission process from model to observer. These results provide the first quantitative evidence for motor mimicking underlain by motor coding in apes, with implications for mirror neuron function. PMID:24923651

  17. Synchrony and motor mimicking in chimpanzee observational learning.

    PubMed

    Fuhrmann, Delia; Ravignani, Andrea; Marshall-Pescini, Sarah; Whiten, Andrew

    2014-06-13

    Cumulative tool-based culture underwrote our species' evolutionary success, and tool-based nut-cracking is one of the strongest candidates for cultural transmission in our closest relatives, chimpanzees. However, the social learning processes that may explain both the similarities and differences between the species remain unclear. A previous study of nut-cracking by initially naïve chimpanzees suggested that a learning chimpanzee holding no hammer nevertheless replicated hammering actions it witnessed. This observation has potentially important implications for the nature of the social learning processes and underlying motor coding involved. In the present study, model and observer actions were quantified frame-by-frame and analysed with stringent statistical methods, demonstrating synchrony between the observer's and model's movements, cross-correlation of these movements above chance level and a unidirectional transmission process from model to observer. These results provide the first quantitative evidence for motor mimicking underlain by motor coding in apes, with implications for mirror neuron function.
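
    As an illustration of the statistical approach rather than the study's video-coded data, the sketch below tests whether the maximum lagged cross-correlation between two movement series exceeds chance using a permutation null; the series, lag window, and permutation scheme are all assumptions.

    ```python
    # Permutation test for above-chance cross-correlation of two series.
    import numpy as np

    rng = np.random.default_rng(3)

    n = 500
    model = rng.normal(size=n)
    observer = 0.4 * model + rng.normal(size=n)   # partially synchronous

    def max_xcorr(a, b, max_lag=10):
        a = (a - a.mean()) / a.std()
        b = (b - b.mean()) / b.std()
        return max(abs(np.corrcoef(a[:n - lag], b[lag:])[0, 1])
                   for lag in range(max_lag + 1))

    stat = max_xcorr(model, observer)

    # Chance level: cross-correlation after randomly permuting one series.
    null = np.array([max_xcorr(model, rng.permutation(observer))
                     for _ in range(1000)])
    p = (null >= stat).mean()
    print(f"max cross-correlation = {stat:.3f}, permutation p = {p:.3f}")
    ```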

  18. From point process observations to collective neural dynamics: Nonlinear Hawkes process GLMs, low-dimensional dynamics and coarse graining

    PubMed Central

    Truccolo, Wilson

    2017-01-01

    This review presents a perspective on capturing collective dynamics in recorded neuronal ensembles based on multivariate point process models, inference of low-dimensional dynamics and coarse graining of spatiotemporal measurements. A general probabilistic framework for continuous-time point processes is reviewed, with an emphasis on multivariate nonlinear Hawkes processes with exogenous inputs. A point process generalized linear model (PP-GLM) framework for the estimation of discrete-time multivariate nonlinear Hawkes processes is described. The approach is illustrated with the modeling of collective dynamics in neocortical neuronal ensembles recorded in human and non-human primates, and prediction of single-neuron spiking. A complementary approach to capture collective dynamics based on low-dimensional dynamics (“order parameters”) inferred via latent state-space models with point process observations is presented. The approach is illustrated by inferring and decoding low-dimensional dynamics in primate motor cortex during naturalistic reach and grasp movements. Finally, we briefly review hypothesis tests based on conditional inference and spatiotemporal coarse graining for assessing collective dynamics in recorded neuronal ensembles. PMID:28336305

  19. From point process observations to collective neural dynamics: Nonlinear Hawkes process GLMs, low-dimensional dynamics and coarse graining.

    PubMed

    Truccolo, Wilson

    2016-11-01

    This review presents a perspective on capturing collective dynamics in recorded neuronal ensembles based on multivariate point process models, inference of low-dimensional dynamics and coarse graining of spatiotemporal measurements. A general probabilistic framework for continuous-time point processes is reviewed, with an emphasis on multivariate nonlinear Hawkes processes with exogenous inputs. A point process generalized linear model (PP-GLM) framework for the estimation of discrete-time multivariate nonlinear Hawkes processes is described. The approach is illustrated with the modeling of collective dynamics in neocortical neuronal ensembles recorded in human and non-human primates, and prediction of single-neuron spiking. A complementary approach to capture collective dynamics based on low-dimensional dynamics ("order parameters") inferred via latent state-space models with point process observations is presented. The approach is illustrated by inferring and decoding low-dimensional dynamics in primate motor cortex during naturalistic reach and grasp movements. Finally, we briefly review hypothesis tests based on conditional inference and spatiotemporal coarse graining for assessing collective dynamics in recorded neuronal ensembles. Published by Elsevier Ltd.
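
    A minimal sketch of the PP-GLM idea under simplifying assumptions: a single self-exciting neuron is simulated as a discrete-time nonlinear Hawkes process (log-linear rate in lagged spike counts), and the history kernel is recovered with a Poisson GLM.

    ```python
    # Discrete-time nonlinear Hawkes process fitted as a Poisson GLM (PP-GLM).
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(4)

    T, n_lags = 20000, 5
    beta0 = -3.0
    beta = np.array([0.8, 0.4, 0.2, 0.1, 0.05])   # true history kernel

    # Simulate: lambda_t = exp(beta0 + sum_k beta_k * y_{t-k}), y_t ~ Poisson.
    y = np.zeros(T)
    for t in range(n_lags, T):
        eta = beta0 + beta @ y[t - n_lags:t][::-1]
        lam = np.exp(min(eta, 2.0))   # clip to keep the toy simulation stable
        y[t] = rng.poisson(lam)

    # Design matrix of lagged counts (column k holds y_{t-k-1}), then fit.
    X = np.column_stack([y[n_lags - k - 1:T - k - 1] for k in range(n_lags)])
    X = sm.add_constant(X)
    fit = sm.GLM(y[n_lags:], X, family=sm.families.Poisson()).fit()
    print("estimated [beta0, beta_1..5]:", np.round(fit.params, 2))
    ```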

  20. Inverse Modeling of Tropospheric Methane Constrained by 13C Isotope in Methane

    NASA Astrophysics Data System (ADS)

    Mikaloff Fletcher, S. E.; Tans, P. P.; Bruhwiler, L. M.

    2001-12-01

    Understanding the budget of methane is crucial to predicting climate change and managing Earth's carbon reservoirs. Methane is responsible for approximately 15% of the anthropogenic greenhouse forcing and has a large impact on the oxidative capacity of Earth's atmosphere due to its reaction with the hydroxyl radical. At present, many of the sources and sinks of methane are poorly understood, due in part to the large spatial and temporal variability of the methane flux. Model calculations of methane mixing ratios using most process-based source estimates typically over-predict the inter-hemispheric gradient of atmospheric methane. Inverse models, which estimate trace gas budgets by using observations of atmospheric mixing ratios and transport models to estimate sources and sinks, have been used to incorporate features of the atmospheric observations into methane budgets. While inverse models of methane generally tend to find a decrease in northern hemisphere sources and an increase in southern hemisphere sources relative to process-based estimates, no inverse study has definitively associated the inter-hemispheric gradient difference with a specific source process or group of processes. In this presentation, observations of isotopic ratios of 13C in methane and isotopic signatures of methane source processes are used in conjunction with an inverse model of methane to further constrain the source estimates. To investigate the advantages of incorporating 13C, the TM3 three-dimensional transport model was used. The methane and carbon dioxide measurements used are from a cooperative international effort, the Cooperative Air Sampling Network, led by the Climate Monitoring and Diagnostics Laboratory (CMDL) at the National Oceanic and Atmospheric Administration (NOAA). Experiments using model calculations based on process-based source estimates show that the inter-hemispheric gradient of δ13CH4 is not reproduced by these source estimates, indicating that the addition of δ13CH4 observations should provide unique insight into the methane problem.
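
    As a schematic of the inversion step, with a hypothetical footprint matrix standing in for TM3 transport and the sampling network, and without the isotopic constraint (which would add δ13CH4 rows to the observation operator), a linear Bayesian inversion looks like this:

    ```python
    # Sketch of a linear Bayesian inversion: estimate source adjustments from
    # mixing-ratio observations. All matrices are hypothetical stand-ins.
    import numpy as np

    rng = np.random.default_rng(5)

    n_obs, n_src = 40, 4            # stations x regional source processes
    H = rng.uniform(0.1, 1.0, size=(n_obs, n_src))   # transport/footprint matrix

    s_prior = np.array([200.0, 150.0, 100.0, 80.0])  # prior sources (Tg CH4/yr)
    s_true = s_prior + np.array([-30.0, 10.0, 25.0, -5.0])
    y = H @ s_true + rng.normal(scale=2.0, size=n_obs)  # pseudo-observations

    P = np.diag([50.0 ** 2] * n_src)   # prior source-error covariance
    R = np.eye(n_obs) * 2.0 ** 2       # observation-error covariance

    # Posterior mean: s = s_prior + P H^T (H P H^T + R)^-1 (y - H s_prior)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    s_post = s_prior + K @ (y - H @ s_prior)
    print("posterior sources:", np.round(s_post, 1))
    print("true sources:     ", s_true)
    ```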

  1. Technical Note: Approximate Bayesian parameterization of a complex tropical forest model

    NASA Astrophysics Data System (ADS)

    Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.

    2013-08-01

    Inverse parameter estimation of process-based models is a long-standing problem in ecology and evolution. A key problem of inverse parameter estimation is to define a metric that quantifies how well model predictions fit the data. Such a metric can be expressed by general cost or objective functions, but statistical inversion approaches are based on a particular metric, the probability of observing the data given the model, known as the likelihood. Deriving likelihoods for dynamic models requires making assumptions about the probability of observations deviating from mean model predictions. For technical reasons, these assumptions are usually derived without explicit consideration of the processes in the simulation. Only in recent years have new methods become available that allow generating likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual field data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and show results from using this method to fit the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can successfully be applied to process-based models of high complexity. The methodology is particularly suited to heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models in ecology and evolution.

  2. Computational modeling of residual stress formation during the electron beam melting process for Inconel 718

    DOE PAGES

    Prabhakar, P.; Sames, William J.; Dehoff, Ryan R.; ...

    2015-03-28

    A computational modeling approach to simulate residual stress formation during the electron beam melting (EBM) process, one of the additive manufacturing (AM) technologies, for Inconel 718 is presented in this paper. The EBM process has demonstrated a high potential to fabricate components with complex geometries, but the resulting components are influenced by the thermal cycles observed during the manufacturing process. When processing nickel-based superalloys, very high temperatures (approx. 1000 °C) are observed in the powder bed, base plate, and build. These high temperatures, when combined with substrate adherence, can result in warping of the base plate and affect the final component by causing defects. It is important to understand the thermo-mechanical response of the entire system, that is, its mechanical behavior under the thermal loading occurring during the EBM process, prior to manufacturing a component. Computational models that predict the response of the system during the EBM process will therefore aid in eliminating undesired process conditions, a priori, in order to fabricate the optimum component. Such a comprehensive computational modeling approach is demonstrated to analyze warping of the base plate, stress and plastic strain accumulation within the material, and thermal cycles in the system during different stages of the EBM process.

  3. Optimization of Nd: YAG Laser Marking of Alumina Ceramic Using RSM And ANN

    NASA Astrophysics Data System (ADS)

    Peter, Josephine; Doloi, B.; Bhattacharyya, B.

    2011-01-01

    The present research paper deals with artificial neural network (ANN) and response surface methodology (RSM) based mathematical modeling and an optimization analysis of marking characteristics on alumina ceramic. The experiments have been planned and carried out based on design of experiments (DOE). The paper also analyses the influence of the major laser marking process parameters, and the optimal combination of laser marking process parameter settings has been obtained. The output of the RSM optimization is validated through experimentation and the ANN predictive model. Good agreement is observed between the results based on the ANN predictive model and actual experimental observations.
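
    As an illustration of the RSM step only, with hypothetical factors and response standing in for the laser marking parameters and marking quality, a second-order response surface can be fitted to DOE data and its stationary point located as follows:

    ```python
    # Fit a second-order response surface to DOE data and find its optimum.
    import numpy as np

    rng = np.random.default_rng(6)

    # Hypothetical two-factor central-composite-style design (coded units).
    X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                  [-1.4, 0], [1.4, 0], [0, -1.4], [0, 1.4],
                  [0, 0], [0, 0], [0, 0]], dtype=float)
    true = lambda x: 10 - 2 * (x[:, 0] - 0.3) ** 2 - 3 * (x[:, 1] + 0.2) ** 2
    y = true(X) + rng.normal(scale=0.1, size=len(X))

    # Full quadratic model: 1, x1, x2, x1^2, x2^2, x1*x2.
    A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                         X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
    b, *_ = np.linalg.lstsq(A, y, rcond=None)

    # Stationary point: solve grad = 0 for the fitted quadratic.
    Bmat = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
    x_opt = np.linalg.solve(Bmat, -b[1:3])
    print("fitted optimum (coded units):", np.round(x_opt, 2), " true: [0.3, -0.2]")
    ```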

  4. A novel track-before-detect algorithm based on optimal nonlinear filtering for detecting and tracking infrared dim target

    NASA Astrophysics Data System (ADS)

    Tian, Yuexin; Gao, Kun; Liu, Ying; Han, Lu

    2015-08-01

    Aiming at the nonlinear and non-Gaussian features of real infrared scenes, an optimal nonlinear filtering based algorithm for infrared dim target track-before-detect applications is proposed. It uses nonlinear theory to construct the state and observation models and uses the Wiener chaos expansion method with a spectral separation scheme to resolve the stochastic differential equation of the constructed models. To improve computational efficiency, the most time-consuming operations, which are independent of the observation data, are carried out in advance, before observations arrive; the remaining observation-dependent computations are fast and are performed online. Simulation results show that the algorithm possesses excellent detection performance and is well suited to real-time processing.
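
    The paper's Wiener-chaos-based solver is specialised and not reproduced here; as a generic stand-in that illustrates track-before-detect with sequential Monte Carlo filtering instead, the sketch below runs a bootstrap particle filter on a hypothetical 1-D dim target buried in pixel noise.

    ```python
    # Bootstrap particle filter for a 1-D dim target in heavy pixel noise.
    import numpy as np

    rng = np.random.default_rng(7)

    n_pix, T, n_part = 64, 50, 2000
    sigma, amp = 1.0, 1.5           # pixel noise and target amplitude (low SNR)

    pos, vel = 10.0, 0.6            # true constant-velocity target
    particles = np.column_stack([rng.uniform(0, n_pix, n_part),
                                 rng.normal(0, 1, n_part)])
    weights = np.full(n_part, 1.0 / n_part)

    for t in range(T):
        pos += vel
        frame = rng.normal(0, sigma, n_pix)
        frame[int(pos) % n_pix] += amp          # dim target buried in noise

        # Propagate particles with the dynamic model plus process noise.
        particles[:, 0] += particles[:, 1] + rng.normal(0, 0.2, n_part)
        particles[:, 1] += rng.normal(0, 0.05, n_part)

        # Weight by the target-present vs target-absent likelihood ratio.
        idx = particles[:, 0].astype(int) % n_pix
        ll = ((frame[idx] - amp) ** 2 - frame[idx] ** 2) / (-2 * sigma ** 2)
        weights *= np.exp(ll - ll.max())
        weights /= weights.sum()

        # Resample when the effective sample size collapses.
        if 1.0 / (weights ** 2).sum() < n_part / 2:
            keep = rng.choice(n_part, n_part, p=weights)
            particles = particles[keep]
            weights = np.full(n_part, 1.0 / n_part)

    est = (weights * particles[:, 0]).sum()
    print(f"true position {pos:.1f}, PF estimate {est:.1f}")
    ```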

  5. Observations and 3D hydrodynamics-based modeling of decadal-scale shoreline change along the Outer Banks, North Carolina

    USGS Publications Warehouse

    Safak, Ilgar; List, Jeffrey; Warner, John C.; Kumar, Nirnimesh

    2017-01-01

    Long-term decadal-scale shoreline change is an important parameter for quantifying the stability of coastal systems. Decadal-scale coastal change is controlled by processes that occur on short time scales (such as storms) and by long-term processes (such as prevailing waves). The ability to predict decadal-scale shoreline change is not well established, and the fundamental physical processes controlling this change are not well understood. Here we investigate the processes that create large-scale, long-term shoreline change along the Outer Banks of North Carolina, an uninterrupted 60 km stretch of coastline, using both observations and a numerical modeling approach. Shoreline positions for a 24-yr period were derived from aerial photographs of the Outer Banks. Analysis of the shoreline position data showed that, although variable, the shoreline eroded an average of 1.5 m/yr throughout this period. The modeling approach uses a three-dimensional hydrodynamics-based numerical model coupled to a spectral wave model, simulating the full 24-yr time period on a spatial grid with a short (second-scale) time step to compute the sediment transport patterns. The observations and the model results show similar magnitudes (O(10^5 m^3/yr)) and patterns of alongshore sediment fluxes. Both the observed and the modeled alongshore sediment transport rates change more rapidly in the northern part of the study area, due to the continuously curving coastline and possible effects of alongshore variations in shelf bathymetry; the southern section, with a relatively uniform orientation, shows less rapid changes in transport rates. Alongshore gradients of the modeled sediment fluxes are translated into shoreline change rates that agree with observations in some locations but differ in others. Differences between observations and model results are potentially influenced by geologic framework processes not included in the model. Both the observations and the model results show higher rates of erosion (∼−1 m/yr) averaged over the northern half of the section, as compared to the southern half where the observed and modeled averaged net shoreline changes are smaller (<0.1 m/yr). The model indicates accretion in some shallow embayments, whereas observations indicate erosion in these locations. Further analysis identifies that the magnitude of net alongshore sediment transport is strongly dominated by events associated with high wave energy. However, both big- and small-wave events cause shoreline change of the same order of magnitude, because it is the gradients in transport, not the magnitude, that control shoreline change. Results also indicate that alongshore momentum is not a simple balance between wave breaking and bottom stress, but also includes processes of horizontal vortex force, horizontal advection, and pressure gradient that contribute to long-term alongshore sediment transport. As a comparison to a simpler approach, an empirical formulation for alongshore sediment transport is used. The empirical estimates capture the effect of the breaking term in the hydrodynamics-based model; however, other processes that are accounted for in the hydrodynamics-based model improve the agreement with the observed alongshore sediment transport.
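
    The step from flux gradients to shoreline change can be made concrete with a one-line sediment-conservation sketch, dy/dt = -(1/D) dQ/dx; the flux profile and active profile depth below are hypothetical, not the study's model output.

    ```python
    # One-line model: shoreline change rate from alongshore flux gradients.
    import numpy as np

    dx = 1000.0                     # alongshore spacing (m)
    D = 8.0                         # active profile depth (m), assumed

    x = np.arange(0, 60000.0, dx)   # a 60 km stretch, as in the study reach
    Q = 1e5 * (1 + 0.5 * np.sin(2 * np.pi * x / 30000.0))  # m^3/yr, hypothetical

    dQdx = np.gradient(Q, dx)       # alongshore flux gradient (m^3/yr per m)
    dydt = -dQdx / D                # shoreline change rate (m/yr)

    print(f"shoreline change rate: {dydt.min():.2f} to {dydt.max():.2f} m/yr")
    ```

    With these assumed numbers the gradients yield rates on the order of 1 m/yr, the same order as the erosion rates reported above.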

  6. PREDICTING SUBSURFACE CONTAMINANT TRANSPORT AND TRANSFORMATION: CONSIDERATIONS FOR MODEL SELECTION AND FIELD VALIDATION

    EPA Science Inventory

    Predicting subsurface contaminant transport and transformation requires mathematical models based on a variety of physical, chemical, and biological processes. The mathematical model is an attempt to quantitatively describe observed processes in order to permit systematic forecas...

  7. Verification of ARMA identification for modelling temporal correlation of GPS observations using the toolbox ARMASA

    NASA Astrophysics Data System (ADS)

    Luo, Xiaoguang; Mayer, Michael; Heck, Bernhard

    2010-05-01

    One essential deficiency of the stochastic model used in many GNSS (Global Navigation Satellite Systems) software products consists in neglecting temporal correlation of GNSS observations. By analysing appropriately detrended time series of observation residuals resulting from GPS (Global Positioning System) data processing, the temporal correlation behaviour of GPS observations can be adequately described by means of so-called autoregressive moving average (ARMA) processes. Using the toolbox ARMASA, which is available free of charge on MATLAB® Central (the open exchange platform for the MATLAB® and SIMULINK® user community), a well-fitting time series model can be identified automatically in three steps. First, AR, MA, and ARMA models are computed up to some user-specified maximum order. Subsequently, for each model type the best-fitting model is selected using the combined information criterion (for AR processes) or the generalised information criterion (for MA and ARMA processes). The final model identification among the best-fitting AR, MA, and ARMA candidates is performed based on the minimum prediction error, which characterises the discrepancies between the given data and the fitted model. The ARMA coefficients are computed using Burg's maximum entropy algorithm (for AR processes) and Durbin's first (for MA processes) and second (for ARMA processes) methods. This paper verifies the performance of automated ARMA identification using the toolbox ARMASA. For this purpose, a representative database is generated by means of ARMA simulation with respect to sample size, correlation level, and model complexity. The model error, defined as a transform of the prediction error, is used as a measure of the deviation between the true and the estimated model. The results of the study show that the recognition rates of the underlying true processes increase with increasing sample size and decrease with rising model complexity. For large sample sizes, the true underlying processes are correctly recognised for nearly 80% of the analysed data sets. Additionally, the model errors of first-order AR and MA processes converge clearly more rapidly to the corresponding asymptotic values than those of high-order ARMA processes.
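
    ARMASA itself is a MATLAB toolbox; the following Python stand-in mirrors the structure of the three-step procedure (statsmodels in place of ARMASA, AIC in place of the combined/generalised criteria, and a held-out one-step prediction error for the final choice) rather than reproducing it.

    ```python
    # Three-step selection sketch: best AR, best MA, best ARMA by AIC, then a
    # final choice by one-step prediction error on held-out data.
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(8)

    # Synthetic correlated series standing in for detrended GPS residuals.
    n = 2000
    e = rng.normal(size=n)
    x = np.empty(n)
    x[0] = e[0]
    for t in range(1, n):
        x[t] = 0.7 * x[t - 1] + e[t] + 0.3 * e[t - 1]   # true ARMA(1,1)

    train, test = x[:1600], x[1600:]

    def best_by_aic(orders):
        fits = [(ARIMA(train, order=o).fit(), o) for o in orders]
        return min(fits, key=lambda fo: fo[0].aic)

    finalists = [
        best_by_aic([(p, 0, 0) for p in range(1, 6)]),   # best AR
        best_by_aic([(0, 0, q) for q in range(1, 4)]),   # best MA
        best_by_aic([(p, 0, q) for p in range(1, 4) for q in range(1, 3)]),
    ]

    # Final choice: smallest mean squared one-step prediction error on test data.
    def pred_err(fit):
        ext = fit.apply(np.concatenate([train, test]))   # re-filter, fixed params
        pred = ext.get_prediction(start=len(train)).predicted_mean
        return np.mean((test - pred) ** 2)

    winner = min(finalists, key=lambda fo: pred_err(fo[0]))
    print("selected model order (p, d, q):", winner[1])
    ```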

  8. Optimization of Nd: YAG Laser Marking of Alumina Ceramic Using RSM And ANN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peter, Josephine; Doloi, B.; Bhattacharyya, B.

    The present research paper deals with artificial neural network (ANN) and response surface methodology (RSM) based mathematical modeling and an optimization analysis of marking characteristics on alumina ceramic. The experiments have been planned and carried out based on design of experiments (DOE). The paper also analyses the influence of the major laser marking process parameters, and the optimal combination of laser marking process parameter settings has been obtained. The output of the RSM optimization is validated through experimentation and the ANN predictive model. Good agreement is observed between the results based on the ANN predictive model and actual experimental observations.

  9. Data Assimilation at FLUXNET to Improve Models towards Ecological Forecasting (Invited)

    NASA Astrophysics Data System (ADS)

    Luo, Y.

    2009-12-01

    Dramatically increased volumes of data from observational and experimental networks such as FLUXNET call for transformation of ecological research to increase its emphasis on quantitative forecasting. Ecological forecasting will also meet the societal need to develop better strategies for natural resource management in a world of ongoing global change. Traditionally, ecological forecasting has been based on process-based models, informed by data in largely ad hoc ways. Although most ecological models incorporate some representation of mechanistic processes, today’s ecological models are generally not adequate to quantify real-world dynamics and provide reliable forecasts with accompanying estimates of uncertainty. A key tool to improve ecological forecasting is data assimilation, which uses data to inform initial conditions and to help constrain a model during simulation to yield results that approximate reality as closely as possible. In an era with dramatically increased availability of data from observational and experimental networks, data assimilation is a key technique that helps convert the raw data into ecologically meaningful products so as to accelerate our understanding of ecological processes, test ecological theory, forecast changes in ecological services, and better serve the society. This talk will use examples to illustrate how data from FLUXNET have been assimilated with process-based models to improve estimates of model parameters and state variables; to quantify uncertainties in ecological forecasting arising from observations, models and their interactions; and to evaluate information contributions of data and model toward short- and long-term forecasting of ecosystem responses to global change.

  10. Support of surgical process modeling by using adaptable software user interfaces

    NASA Astrophysics Data System (ADS)

    Neumuth, T.; Kaschek, B.; Czygan, M.; Goldstein, D.; Strauß, G.; Meixensberger, J.; Burgert, O.

    2010-03-01

    Surgical Process Modeling (SPM) is a powerful method for acquiring data about the evolution of surgical procedures. Surgical process models are used in a variety of use cases, including evaluation studies, requirements analysis, procedure optimization, surgical education, and workflow management scheme design. This work proposes the use of adaptive, situation-aware user interfaces in observation support software for SPM. We developed a method to support the observer's modeling work using an ontological knowledge base, which drives the observer's graphical user interface and restricts the terminology search space depending on the current situation. The evaluation study shows that the workload of the observer was decreased significantly by using adaptive user interfaces: 54 SPM observation protocols were analyzed using the NASA Task Load Index, and the adaptive user interface significantly disburdened the observer in the workload criteria effort, mental demand, and temporal demand, helping the observer concentrate on the essential task of modeling the surgical process.

  11. Momentum Concept in the Process of Knowledge Construction

    ERIC Educational Resources Information Center

    Ergul, N. Remziye

    2013-01-01

    Abstraction is one of the methods for constructing knowledge through mental processes when that knowledge cannot be obtained through experiment and observation. The RBC model, which is based on abstraction in the process of creating knowledge, is directly related to mental processes. In this study, the RBC model is used for the high school students' processes of…

  12. A Method for Combining Experimentation and Molecular Dynamics Simulation to Improve Cohesive Zone Models for Metallic Microstructures

    NASA Technical Reports Server (NTRS)

    Hochhalter, J. D.; Glaessgen, E. H.; Ingraffea, A. R.; Aquino, W. A.

    2009-01-01

    Fracture processes within a material begin at the nanometer length scale at which the formation, propagation, and interaction of fundamental damage mechanisms occur. Physics-based modeling of these atomic processes quickly becomes computationally intractable as the system size increases. Thus, a multiscale modeling method, based on the aggregation of fundamental damage processes occurring at the nanoscale within a cohesive zone model, is under development and will enable computationally feasible and physically meaningful microscale fracture simulation in polycrystalline metals. This method employs atomistic simulation to provide an optimization loop with an initial prediction of a cohesive zone model (CZM). This initial CZM is then applied at the crack front region within a finite element model. The optimization procedure iterates upon the CZM until the finite element model acceptably reproduces the near-crack-front displacement fields obtained from experimental observation. With this approach, a comparison can be made between the original CZM predicted by atomistic simulation and the converged CZM that is based on experimental observation. Comparison of the two CZMs gives insight into how atomistic simulation scales.
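
    A miniature version of that optimization loop, with a scalar traction-separation surrogate standing in for the finite element model and hypothetical parameters throughout, can be sketched as follows:

    ```python
    # Iterate cohesive zone model (CZM) parameters until a forward model
    # reproduces "observed" near-crack data; a surrogate replaces the FEM.
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(9)

    def forward_model(params, separations):
        # Surrogate for FEM: bilinear traction-separation law, illustrative only.
        # params = (peak_traction, critical_separation)
        t_max, d_c = params
        return np.where(separations < d_c,
                        t_max * separations / d_c,
                        np.maximum(0.0, t_max * (2 - separations / d_c)))

    sep = np.linspace(0.01, 1.5, 30)
    true_params = (2.0, 0.8)
    observed = forward_model(true_params, sep) + rng.normal(0, 0.02, sep.size)

    # The atomistic simulation supplies the initial CZM guess (offset here).
    initial_guess = (1.2, 0.5)
    fit = least_squares(lambda p: forward_model(p, sep) - observed,
                        x0=initial_guess, bounds=([0.1, 0.1], [10.0, 5.0]))
    print("converged CZM parameters:", np.round(fit.x, 2), " true:", true_params)
    ```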

  13. Modeling Sediment Detention Ponds Using Reactor Theory and Advection-Diffusion Concepts

    NASA Astrophysics Data System (ADS)

    Wilson, Bruce N.; Barfield, Billy J.

    1985-04-01

    An algorithm is presented to model the sedimentation process in detention ponds. This algorithm is based on a mass balance for an infinitesimal layer that couples reactor theory concepts with advection-diffusion processes. Reactor theory concepts are used to (1) determine the residence time of sediment particles and (2) mix influent sediment with previously stored flow. Advection-diffusion processes are used to model (1) the settling characteristics of sediment and (2) the vertical diffusion of sediment due to turbulence. Predicted results of the model are compared to those observed on two pilot-scale ponds for a total of 12 runs. The average percent error between predicted and observed trap efficiency was 5.2%. Overall, the observed sedimentology values were predicted with reasonable accuracy.
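
    A much-reduced sketch of the coupling idea: a single well-mixed layer in which first-order settling competes with first-order outflow, with hypothetical size classes and pond dimensions (the paper's layer-by-layer advection-diffusion treatment is considerably richer than this):

    ```python
    # Trap efficiency from competing settling and outflow rates (toy numbers).
    import numpy as np

    depth, t_r = 1.5, 3600.0              # pond depth (m), mean residence time (s)
    w_s = np.array([1e-4, 5e-4, 2e-3])    # settling velocities, 3 size classes (m/s)
    frac = np.array([0.5, 0.3, 0.2])      # influent mass fraction per class

    # In a continuously stirred layer, the probability that a particle settles
    # before leaving is w_s / (w_s + depth / t_r) (competing first-order rates).
    trap = w_s / (w_s + depth / t_r)
    overall = (frac * trap).sum()
    print("per-class trap efficiency:", np.round(trap, 3))
    print(f"overall trap efficiency: {overall:.1%}")
    ```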

  14. Characterizing and reducing equifinality by constraining a distributed catchment model with regional signatures, local observations, and process understanding

    NASA Astrophysics Data System (ADS)

    Kelleher, Christa; McGlynn, Brian; Wagener, Thorsten

    2017-07-01

    Distributed catchment models are widely used tools for predicting hydrologic behavior. While distributed models require many parameters to describe a system, they are expected to simulate behavior that is more consistent with observed processes. However, obtaining a single set of acceptable parameters can be problematic, as parameter equifinality often results in several behavioral sets that fit observations (typically streamflow). In this study, we investigate the extent to which equifinality impacts a typical distributed modeling application. We outline a hierarchical approach to reduce the number of behavioral sets based on regional, observation-driven, and expert-knowledge-based constraints. For our application, we explore how each of these constraint classes reduced the number of behavioral parameter sets and altered distributions of spatiotemporal simulations, simulating a well-studied headwater catchment, Stringer Creek, Montana, using the distributed hydrology-soil-vegetation model (DHSVM). As a demonstrative exercise, we investigated model performance across 10 000 parameter sets. Constraints on regional signatures, the hydrograph, and two internal measurements of snow water equivalent time series reduced the number of behavioral parameter sets but still left a small number with similar goodness of fit. This subset was ultimately further reduced by incorporating pattern expectations of groundwater table depth across the catchment. Our results suggest that utilizing a hierarchical approach based on regional datasets, observations, and expert knowledge to identify behavioral parameter sets can reduce equifinality and bolster more careful application and simulation of spatiotemporal processes via distributed modeling at the catchment scale.
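
    A schematic of the hierarchical filtering itself, with random stand-in metrics in place of DHSVM output, shows how successive constraint classes shrink the behavioral set:

    ```python
    # Hierarchical behavioral filtering: apply regional, observation-based, and
    # expert-knowledge constraints in turn and count surviving parameter sets.
    import numpy as np

    rng = np.random.default_rng(10)
    n_sets = 10_000

    # Hypothetical performance metrics for each sampled parameter set.
    runoff_ratio = rng.uniform(0.1, 0.9, n_sets)     # regional signature
    nse_flow = rng.uniform(-1.0, 1.0, n_sets)        # streamflow fit
    swe_err = rng.uniform(0.0, 2.0, n_sets)          # snow water equivalent error
    wt_pattern_ok = rng.random(n_sets) < 0.3         # expert water-table check

    behavioral = np.ones(n_sets, dtype=bool)
    for name, keep in [
        ("regional signature", (runoff_ratio > 0.3) & (runoff_ratio < 0.6)),
        ("streamflow NSE > 0.7", nse_flow > 0.7),
        ("SWE error < 0.5", swe_err < 0.5),
        ("water-table pattern", wt_pattern_ok),
    ]:
        behavioral &= keep
        print(f"after {name}: {behavioral.sum()} sets remain")
    ```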

  15. Mechanisms and Model Diversity of Trade-Wind Shallow Cumulus Cloud Feedbacks: A Review.

    PubMed

    Vial, Jessica; Bony, Sandrine; Stevens, Bjorn; Vogel, Raphaela

    2017-01-01

    Shallow cumulus clouds in the trade-wind regions are at the heart of the long-standing uncertainty in climate sensitivity estimates. In current climate models, cloud feedbacks are strongly influenced by cloud-base cloud amount in the trades. Therefore, understanding the key factors controlling cloudiness near cloud-base in shallow convective regimes has emerged as an important topic of investigation. We review physical understanding of these key controlling factors and discuss the value of the different approaches that have been developed so far, based on global and high-resolution model experimentation and process-oriented analyses across a range of models and for observations. The trade-wind cloud feedbacks appear to depend on two important aspects: (1) how cloudiness near cloud-base is controlled by the local interplay between turbulent, convective and radiative processes; (2) how these processes interact with their surrounding environment and are influenced by mesoscale organization. Our synthesis of studies that have explored these aspects suggests that the large diversity of model responses is related to fundamental differences in how the processes controlling trade cumulus operate in models, notably, whether they are parameterized or resolved. In models with parameterized convection, cloudiness near cloud-base is very sensitive to the vigor of convective mixing in response to changes in environmental conditions. This is in contrast with results from high-resolution models, which suggest that cloudiness near cloud-base is nearly invariant with warming and independent of large-scale environmental changes. Uncertainties are difficult to narrow using current observations, as the trade cumulus variability and its relation to large-scale environmental factors strongly depend on the time and/or spatial scales at which the mechanisms are evaluated. New opportunities for testing physical understanding of the factors controlling shallow cumulus cloud responses using observations and high-resolution modeling on large domains are discussed.

  16. Mechanisms and Model Diversity of Trade-Wind Shallow Cumulus Cloud Feedbacks: A Review

    NASA Astrophysics Data System (ADS)

    Vial, Jessica; Bony, Sandrine; Stevens, Bjorn; Vogel, Raphaela

    2017-11-01

    Shallow cumulus clouds in the trade-wind regions are at the heart of the long-standing uncertainty in climate sensitivity estimates. In current climate models, cloud feedbacks are strongly influenced by cloud-base cloud amount in the trades. Therefore, understanding the key factors controlling cloudiness near cloud-base in shallow convective regimes has emerged as an important topic of investigation. We review physical understanding of these key controlling factors and discuss the value of the different approaches that have been developed so far, based on global and high-resolution model experimentation and process-oriented analyses across a range of models and for observations. The trade-wind cloud feedbacks appear to depend on two important aspects: (1) how cloudiness near cloud-base is controlled by the local interplay between turbulent, convective and radiative processes; (2) how these processes interact with their surrounding environment and are influenced by mesoscale organization. Our synthesis of studies that have explored these aspects suggests that the large diversity of model responses is related to fundamental differences in how the processes controlling trade cumulus operate in models, notably, whether they are parameterized or resolved. In models with parameterized convection, cloudiness near cloud-base is very sensitive to the vigor of convective mixing in response to changes in environmental conditions. This is in contrast with results from high-resolution models, which suggest that cloudiness near cloud-base is nearly invariant with warming and independent of large-scale environmental changes. Uncertainties are difficult to narrow using current observations, as the trade cumulus variability and its relation to large-scale environmental factors strongly depend on the time and/or spatial scales at which the mechanisms are evaluated. New opportunities for testing physical understanding of the factors controlling shallow cumulus cloud responses using observations and high-resolution modeling on large domains are discussed.

  18. PROcess Based Diagnostics PROBE

    NASA Technical Reports Server (NTRS)

    Clune, T.; Schmidt, G.; Kuo, K.; Bauer, M.; Oloso, H.

    2013-01-01

    Many of the aspects of the climate system that are of the greatest interest (e.g., the sensitivity of the system to external forcings) are emergent properties that arise via the complex interplay between disparate processes. This is also true for climate models: most diagnostics are not a function of an isolated portion of source code, but rather are affected by multiple components and procedures. Thus any model-observation mismatch is hard to attribute to any specific piece of code or imperfection in a specific model assumption. An alternative approach is to identify diagnostics that are more closely tied to specific processes -- implying that if a mismatch is found, it should be much easier to identify and address specific algorithmic choices that will improve the simulation. However, this approach requires looking at model output and observational data in a more sophisticated way than the more traditional production of monthly or annual mean quantities. The data must instead be filtered in time and space for examples of the specific process being targeted. We are developing a data analysis environment called PROcess-Based Explorer (PROBE) that seeks to enable efficient and systematic computation of process-based diagnostics on very large sets of data. In this environment, investigators can define arbitrarily complex filters and then seamlessly perform computations in parallel on the filtered output from their model. The same analysis can be performed on additional related data sets (e.g., reanalyses), thereby enabling routine comparisons between model and observational data. PROBE also incorporates workflow technology to automatically update computed diagnostics for subsequent executions of a model. In this presentation, we will discuss the design and current status of PROBE as well as share results from some preliminary use cases.
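
    The record above describes filtering model output in time and space before computing a diagnostic. The sketch below illustrates that idea in generic NumPy, independent of PROBE itself; the filter criterion, variable names, and threshold are hypothetical stand-ins, not part of the PROBE system.

```python
import numpy as np

def process_filter(field, mask_fn):
    """Apply a process-targeted filter: keep only the samples where
    mask_fn flags the process of interest; return the filtered values."""
    return field[mask_fn(field)]

# Toy data: daily precipitation on a (time, lat, lon) grid.
rng = np.random.default_rng(0)
precip = rng.gamma(shape=0.5, scale=4.0, size=(365, 90, 180))

# Hypothetical process filter: isolate heavy-precipitation events (> 20 mm/day).
heavy = process_filter(precip, lambda f: f > 20.0)

# Process-based diagnostic: mean intensity of the filtered events, comparable
# between a model run and a reanalysis filtered in exactly the same way.
print(f"{heavy.size} events, mean intensity {heavy.mean():.1f} mm/day")
```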

  19. Alternative ways of using field-based estimates to calibrate ecosystem models and their implications for ecosystem carbon cycle studies

    Treesearch

    Y. He; Q. Zhuang; A.D. McGuire; Y. Liu; M. Chen

    2013-01-01

    Model-data fusion is a process in which field observations are used to constrain model parameters. How observations are used to constrain parameters has a direct impact on the carbon cycle dynamics simulated by ecosystem models. In this study, we present an evaluation of several options for the use of observations in modeling regional carbon dynamics and explore the...

  20. Understanding the DayCent model: Calibration, sensitivity, and identifiability through inverse modeling

    USGS Publications Warehouse

    Necpálová, Magdalena; Anex, Robert P.; Fienen, Michael N.; Del Grosso, Stephen J.; Castellano, Michael J.; Sawyer, John E.; Iqbal, Javed; Pantoja, Jose L.; Barker, Daniel W.

    2015-01-01

    The ability of biogeochemical ecosystem models to represent agro-ecosystems depends on their correct integration with field observations. We report simultaneous calibration of 67 DayCent model parameters using multiple observation types through inverse modeling using the PEST parameter estimation software. Parameter estimation reduced the total sum of weighted squared residuals by 56% and improved model fit to crop productivity, soil carbon, volumetric soil water content, soil temperature, N2O, and soil NO3− compared to the default simulation. Inverse modeling substantially reduced predictive model error relative to the default model for all model predictions, except for soil NO3− and NH4+. Post-processing analyses provided insights into parameter–observation relationships based on parameter correlations, sensitivity and identifiability. Inverse modeling tools are shown to be a powerful way to systematize and accelerate the process of biogeochemical model interrogation, improving our understanding of model function and the underlying ecosystem biogeochemical processes that they represent.
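
    The core of the inverse-modeling setup described above is the minimization of a weighted sum of squared residuals over multiple observation types. The following sketch reproduces that objective with SciPy on a toy two-parameter model; the model equations, weights, and data are invented for illustration and bear no relation to DayCent or PEST internals.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy "ecosystem model": two parameters mapped to three observation types.
def model(params, forcing):
    k_decomp, alpha = params
    return np.column_stack([
        alpha * forcing,                 # crop productivity
        np.exp(-k_decomp * forcing),     # relative soil carbon
        k_decomp * alpha * forcing**0.5  # N2O flux proxy
    ]).ravel()

rng = np.random.default_rng(1)
forcing = np.linspace(1.0, 10.0, 20)
obs = model([0.3, 1.5], forcing) + rng.normal(0.0, 0.05, 60)
weights = np.tile([1.0, 2.0, 0.5], 20)   # one weight per observation type

# PEST-style objective: weighted residuals; least_squares minimizes their
# sum of squares (the quantity reduced by 56% in the study above).
res = least_squares(lambda p: weights * (model(p, forcing) - obs), x0=[0.1, 1.0])
print("estimated parameters:", res.x)
```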

  1. Parameter Estimation and Model Selection for Indoor Environments Based on Sparse Observations

    NASA Astrophysics Data System (ADS)

    Dehbi, Y.; Loch-Dehbi, S.; Plümer, L.

    2017-09-01

    This paper presents a novel method for the parameter estimation and model selection for the reconstruction of indoor environments based on sparse observations. While most approaches for the reconstruction of indoor models rely on dense observations, we predict scenes of the interior with high accuracy in the absence of indoor measurements. We use a model-based top-down approach and incorporate strong but profound prior knowledge. The latter includes probability density functions for model parameters and sparse observations such as room areas and the building footprint. The floorplan model is characterized by linear and bi-linear relations with discrete and continuous parameters. We focus on the stochastic estimation of model parameters based on a topological model derived by combinatorial reasoning in a first step. A Gauss-Markov model is applied for estimation and simulation of the model parameters. Symmetries are represented and exploited during the estimation process. Background knowledge as well as observations are incorporated in a maximum likelihood estimation and model selection is performed with AIC/BIC. The likelihood is also used for the detection and correction of potential errors in the topological model. Estimation results are presented and discussed.
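
    Model selection with AIC/BIC, as used in the record above, reduces to comparing penalized log-likelihoods of competing hypotheses. A minimal sketch under a Gaussian error assumption follows; the candidate floorplan predictions, parameter counts, and noise level are illustrative only.

```python
import numpy as np

def gaussian_loglik(residuals, sigma):
    n = residuals.size
    return -0.5 * n * np.log(2 * np.pi * sigma**2) - np.sum(residuals**2) / (2 * sigma**2)

def aic(loglik, k):        # k = number of estimated parameters
    return 2 * k - 2 * loglik

def bic(loglik, k, n):     # n = number of observations
    return k * np.log(n) - 2 * loglik

# Toy competing floorplan hypotheses: observed room areas vs. two candidate
# parameterizations (names and numbers are illustrative only).
observed_areas = np.array([12.1, 11.8, 20.3, 12.0])
model_a = np.array([12.0, 12.0, 20.0, 12.0])   # 2 free parameters
model_b = np.array([12.1, 11.9, 20.2, 12.1])   # 5 free parameters

for name, pred, k in [("A", model_a, 2), ("B", model_b, 5)]:
    ll = gaussian_loglik(observed_areas - pred, sigma=0.2)
    print(name, "AIC:", round(aic(ll, k), 2), "BIC:", round(bic(ll, k, 4), 2))
```

    BIC penalizes the extra parameters of model B more heavily, which is the behavior exploited for selecting among floorplan hypotheses.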

  2. An object-based approach to weather analysis and its applications

    NASA Astrophysics Data System (ADS)

    Troemel, Silke; Diederich, Malte; Horvath, Akos; Simmer, Clemens; Kumjian, Matthew

    2013-04-01

    The research group 'Object-based Analysis and SEamless prediction' (OASE) within the Hans Ertel Centre for Weather Research programme (HErZ) pursues an object-based approach to weather analysis. The object-based tracking approach adopts the Lagrangian perspective by identifying and following the development of convective events over the course of their lifetime. Prerequisites of the object-based analysis are a highly resolved observational database and a tracking algorithm. A near real-time radar and satellite remote-sensing-driven 3D observation-microphysics composite covering Germany, currently under development, contains gridded observations and estimated microphysical quantities. A 3D scale-space tracking identifies convective rain events in the dual composite and monitors their development over the course of their lifetime. The OASE group exploits the object-based approach in several fields of application: (1) for a better understanding and analysis of the precipitation processes responsible for extreme weather events, (2) in nowcasting, (3) as a novel approach for the validation of meso-γ atmospheric models, and (4) in data assimilation. Results from the different fields of application will be presented. The basic idea of the object-based approach is to identify a small set of radar- and satellite-derived descriptors which characterize the temporal development of the precipitation systems that constitute the objects. Such proxies of the precipitation process are, e.g., the temporal change of the bright band, vertically extensive columns of enhanced differential reflectivity ZDR, or the cloud-top temperature and height, identified in the 4D field of ground-based radar reflectivities and satellite retrievals generated by a cell during its lifetime. They quantify (micro-)physical differences among rain events and relate to the precipitation yield. Analyses of the information content of ZDR columns as precursors of storm evolution, for example, will be presented to demonstrate the use of such system-oriented predictors for nowcasting. Columns of differential reflectivity ZDR measured by polarimetric weather radars are prominent signatures associated with thunderstorm updrafts. Since greater vertical velocities can loft larger drops and water-coated ice particles to higher altitudes above the environmental freezing level, the integrated ZDR column above the freezing level increases with increasing updraft intensity. Validation of atmospheric models concerning precipitation representation or prediction is usually confined to comparisons of precipitation fields or their temporal and spatial statistics. A comparison of rain rates alone, however, does not immediately explain discrepancies between models and observations, because similar rain rates might be produced by different processes. Within the event-based approach for the validation of models, both observed and modeled rain events are analyzed by means of proxies of the precipitation process. Both sets of descriptors represent the basis for model validation, since different leading descriptors -- in a statistical sense -- hint at process formulations potentially responsible for model failures.

  3. Optimal Estimation with Two Process Models and No Measurements

    DTIC Science & Technology

    2015-08-01

    An observer is derived that blends two independent process models when no measurements are present. The observer follows a derivation similar to that of the discrete-time Kalman filter. A simulation example is provided in which a process model based on the dynamics of a ballistic projectile is blended with a second model; the benefits of blending are lost if either of the models includes deterministic modeling errors.
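
    Blending two process models without measurements can be illustrated with inverse-variance (Kalman-style) weighting, in which one model's state is treated as a pseudo-measurement of the other. The sketch below is a generic version of that idea, not the report's actual observer; the ballistic states and covariances are hypothetical.

```python
import numpy as np

def blend(x1, P1, x2, P2):
    """Measurement-free fusion of two model states by inverse-variance
    weighting, analogous to a Kalman update with model 2 playing the
    role of the measurement of model 1."""
    K = P1 @ np.linalg.inv(P1 + P2)      # gain weighting toward model 2
    x = x1 + K @ (x2 - x1)               # blended state
    P = (np.eye(len(x1)) - K) @ P1       # blended covariance
    return x, P

# Hypothetical 1D ballistic state [position, velocity] from two models.
x_drag   = np.array([120.0, -9.0]);  P_drag   = np.diag([4.0, 0.5])
x_vacuum = np.array([125.0, -9.5]);  P_vacuum = np.diag([9.0, 1.0])

x_b, P_b = blend(x_drag, P_drag, x_vacuum, P_vacuum)
print("blended state:", x_b)
```

    As the abstract notes, this weighting is only optimal when both models err randomly; a deterministic bias in either model contaminates the blend.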

  4. A multi-year estimate of methane fluxes in Alaska from CARVE atmospheric observations

    PubMed Central

    Miller, Scot M.; Miller, Charles E.; Commane, Roisin; Chang, Rachel Y.-W.; Dinardo, Steven J.; Henderson, John M.; Karion, Anna; Lindaas, Jakob; Melton, Joe R.; Miller, John B.; Sweeney, Colm; Wofsy, Steven C.; Michalak, Anna M.

    2016-01-01

    Methane (CH4) fluxes from Alaska and other arctic regions may be sensitive to thawing permafrost and future climate change, but estimates of both current and future fluxes from the region are uncertain. This study estimates CH4 fluxes across Alaska for 2012–2014 using aircraft observations from the Carbon in Arctic Reservoirs Vulnerability Experiment (CARVE) and a geostatistical inverse model (GIM). We find that a simple flux model based on a daily soil temperature map and a static map of wetland extent reproduces the atmospheric CH4 observations at the state-wide, multi-year scale more effectively than global-scale, state-of-the-art process-based models. This result points to a simple and effective way of representing CH4 flux patterns across Alaska. It further suggests that contemporary process-based models can improve their representation of key processes that control fluxes at regional scales, and that more complex processes included in these models cannot be evaluated given the information content of available atmospheric CH4 observations. In addition, we find that CH4 emissions from the North Slope of Alaska account for 24% of the total statewide flux of 1.74 ± 0.44 Tg CH4 (for May–Oct.). Contemporary global-scale process models only attribute an average of 3% of the total flux to this region. This mismatch occurs for two reasons: process models likely underestimate wetland area in regions without visible surface water, and these models prematurely shut down CH4 fluxes at soil temperatures near 0°C. As a consequence, wetlands covered by vegetation and wetlands with persistently cold soils could be larger contributors to natural CH4 fluxes than in process estimates. Lastly, we find that the seasonality of CH4 fluxes varied during 2012–2014, but that total emissions did not differ significantly among years, despite substantial differences in soil temperature and precipitation; year-to-year variability in these environmental conditions did not produce obvious changes in total CH4 fluxes from the state. PMID:28066129

  5. A multi-year estimate of methane fluxes in Alaska from CARVE atmospheric observations.

    PubMed

    Miller, Scot M; Miller, Charles E; Commane, Roisin; Chang, Rachel Y-W; Dinardo, Steven J; Henderson, John M; Karion, Anna; Lindaas, Jakob; Melton, Joe R; Miller, John B; Sweeney, Colm; Wofsy, Steven C; Michalak, Anna M

    2016-10-01

    Methane (CH4) fluxes from Alaska and other arctic regions may be sensitive to thawing permafrost and future climate change, but estimates of both current and future fluxes from the region are uncertain. This study estimates CH4 fluxes across Alaska for 2012-2014 using aircraft observations from the Carbon in Arctic Reservoirs Vulnerability Experiment (CARVE) and a geostatistical inverse model (GIM). We find that a simple flux model based on a daily soil temperature map and a static map of wetland extent reproduces the atmospheric CH4 observations at the state-wide, multi-year scale more effectively than global-scale, state-of-the-art process-based models. This result points to a simple and effective way of representing CH4 flux patterns across Alaska. It further suggests that contemporary process-based models can improve their representation of key processes that control fluxes at regional scales, and that more complex processes included in these models cannot be evaluated given the information content of available atmospheric CH4 observations. In addition, we find that CH4 emissions from the North Slope of Alaska account for 24% of the total statewide flux of 1.74 ± 0.44 Tg CH4 (for May-Oct.). Contemporary global-scale process models only attribute an average of 3% of the total flux to this region. This mismatch occurs for two reasons: process models likely underestimate wetland area in regions without visible surface water, and these models prematurely shut down CH4 fluxes at soil temperatures near 0°C. As a consequence, wetlands covered by vegetation and wetlands with persistently cold soils could be larger contributors to natural CH4 fluxes than in process estimates. Lastly, we find that the seasonality of CH4 fluxes varied during 2012-2014, but that total emissions did not differ significantly among years, despite substantial differences in soil temperature and precipitation; year-to-year variability in these environmental conditions did not produce obvious changes in total CH4 fluxes from the state.
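
    The geostatistical inverse model (GIM) used in both records above is, at its core, a linear Bayesian update of prior fluxes by atmospheric observations. A minimal dense-matrix sketch follows; the transport operator, covariances, and dimensions are toy stand-ins for the actual CARVE configuration.

```python
import numpy as np

# Minimal linear GIM-style update: s are gridded CH4 fluxes, z are
# atmospheric mole-fraction observations, H the transport sensitivity.
rng = np.random.default_rng(2)
n_flux, n_obs = 50, 20
H = rng.random((n_obs, n_flux)) * 0.01        # transport operator (toy)
Q = 0.25 * np.exp(-np.abs(np.subtract.outer(range(n_flux), range(n_flux))) / 5.0)
R = 0.01 * np.eye(n_obs)                      # observation-error covariance

s_true = 1.0 + rng.normal(0, 0.5, n_flux)
z = H @ s_true + rng.normal(0, 0.1, n_obs)

s_prior = np.ones(n_flux)    # e.g., a soil-temperature-based simple flux model
gain = Q @ H.T @ np.linalg.inv(H @ Q @ H.T + R)
s_hat = s_prior + gain @ (z - H @ s_prior)    # posterior mean fluxes
print("posterior mean flux:", s_hat.mean().round(3))
```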

  6. Modeling cell adhesion and proliferation: a cellular-automata based approach.

    PubMed

    Vivas, J; Garzón-Alvarado, D; Cerrolaza, M

    Cell adhesion is a process that involves the interaction between the cell membrane and another surface, either a cell or a substrate. Unlike experimental tests, computer models can simulate processes and study the results of experiments in a shorter time and at lower cost. One of the tools used to simulate biological processes is the cellular automaton, a dynamic system that is discrete in both space and time. This work describes a computer model based on cellular automata for the adhesion and cell proliferation process, to predict the behavior of a cell population in suspension and adhered to a substrate. The parameter values of the simulated system were obtained through experimental tests on fibroblast monolayer cultures. The results allow us to estimate the settling time of the cells in culture as well as the adhesion and proliferation times. The change in cell morphology as adhesion over the contact surface progresses was also observed. The formation of the initial link between the cell and the substrate was observed after 100 min, with the cell retaining its spherical morphology on the substrate during the simulation. The cellular automata model developed is, however, a simplified representation of the steps in the adhesion process and the subsequent proliferation. A combined framework of experimental and computational simulation based on cellular automata was proposed to represent fibroblast adhesion on substrates and the macro-scale changes observed in the cell during the adhesion process. The approach proved to be simple and efficient.
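
    A cellular automaton of the kind described above reduces to a grid of sites with probabilistic adhesion and division rules. The sketch below is one such toy implementation; the grid size, probabilities, and neighborhood rule are assumptions for illustration, not the paper's calibrated values.

```python
import numpy as np

rng = np.random.default_rng(3)
grid = np.zeros((50, 50), dtype=int)          # 0 = empty substrate, 1 = adhered cell
grid[rng.random(grid.shape) < 0.02] = 1       # initial cells settling from suspension

P_ADHERE, P_DIVIDE = 0.05, 0.02               # illustrative per-step probabilities

def step(grid):
    new = grid.copy()
    # New cells adhere from suspension onto empty sites.
    new[(grid == 0) & (rng.random(grid.shape) < P_ADHERE)] = 1
    # Adhered cells divide into a random neighboring site if it is empty.
    for i, j in zip(*np.nonzero(grid)):
        if rng.random() < P_DIVIDE:
            di, dj = rng.integers(-1, 2, size=2)
            ni, nj = (i + di) % grid.shape[0], (j + dj) % grid.shape[1]
            if new[ni, nj] == 0:
                new[ni, nj] = 1
    return new

for _ in range(100):                          # e.g., ~100 min at one step per minute
    grid = step(grid)
print("occupied fraction after 100 steps:", grid.mean().round(3))
```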

  7. Evaluation of NCAR CAM5 Simulated Marine Boundary Layer Cloud Properties Using a Combination of Satellite and Surface Observations

    NASA Astrophysics Data System (ADS)

    Zhang, Z.; Song, H.; Wang, M.; Ghan, S. J.; Dong, X.

    2016-12-01

    The main objective of this study is to systematically evaluate the MBL cloud properties simulated in CAM5 family models using a combination of satellite-based CloudSat/MODIS observations and ground-based observations from the ARM Azores site, with a special focus on MBL cloud microphysics and the warm rain process. First, we will present a global evaluation based on satellite observations and retrievals. We will compare global cloud properties (e.g., cloud fraction, cloud vertical structure, cloud CER, COT, and LWP, as well as drizzle frequency and intensity diagnosed using the CAM5-COSP instrument simulators) simulated in the CAM5 models with the collocated CloudSat and MODIS observations. We will also present some preliminary results from a regional evaluation based mainly on ground observations from the ARM Azores site. We will compare MBL cloud properties simulated in CAM5 models over the ARM Azores site with collocated satellite (MODIS and CloudSat) and ground-based observations from the ARM site.

  8. Assimilating solar-induced chlorophyll fluorescence into the terrestrial biosphere model BETHY-SCOPE v1.0: model description and information content

    NASA Astrophysics Data System (ADS)

    Norton, Alexander J.; Rayner, Peter J.; Koffi, Ernest N.; Scholze, Marko

    2018-04-01

    The synthesis of model and observational information using data assimilation can improve our understanding of the terrestrial carbon cycle, a key component of the Earth's climate-carbon system. Here we provide a data assimilation framework for combining observations of solar-induced chlorophyll fluorescence (SIF) and a process-based model to improve estimates of terrestrial carbon uptake or gross primary production (GPP). We then quantify and assess the constraint SIF provides on the uncertainty in global GPP through model process parameters in an error propagation study. By incorporating 1 year of SIF observations from the GOSAT satellite, we find that the parametric uncertainty in global annual GPP is reduced by 73 % from ±19.0 to ±5.2 Pg C yr-1. This improvement is achieved through strong constraint of leaf growth processes and weak to moderate constraint of physiological parameters. We also find that the inclusion of uncertainty in shortwave down-radiation forcing has a net-zero effect on uncertainty in GPP when incorporated into the SIF assimilation framework. This study demonstrates the powerful capacity of SIF to reduce uncertainties in process-based model estimates of GPP and the potential for improving our predictive capability of this uncertain carbon flux.

  9. Approximation of epidemic models by diffusion processes and their statistical inference.

    PubMed

    Guy, Romain; Larédo, Catherine; Vergu, Elisabeta

    2015-02-01

    Multidimensional continuous-time Markov jump processes (Z(t)) form a usual set-up for modeling SIR-like epidemics. However, when facing incomplete epidemic data, inference based on (Z(t)) is not easy to achieve. Here, we start building a new framework for the estimation of key parameters of epidemic models based on statistics of diffusion processes approximating (Z(t)). First, previous results on the approximation of density-dependent SIR-like models by diffusion processes with small diffusion coefficient 1/√N, where N is the population size, are generalized to non-autonomous systems. Second, our previous inference results on discretely observed diffusion processes with small diffusion coefficient are extended to time-dependent diffusions. Consistent and asymptotically Gaussian estimates are obtained for a fixed number n of observations, which corresponds to the epidemic context, and for N → ∞. A correction term, which yields better estimates non-asymptotically, is also included. Finally, performances and robustness of our estimators with respect to various parameters such as R0 (the basic reproduction number), N, and n are investigated on simulations. Two models, SIR and SIRS, corresponding to single and recurrent outbreaks, respectively, are used to simulate data. The findings indicate that our estimators have good asymptotic properties and behave noticeably well for realistic numbers of observations and population sizes. This study lays the foundations of a generic inference method currently under extension to incompletely observed epidemic data. Indeed, contrary to the majority of current inference techniques for partially observed processes, which necessitate computer-intensive simulations, our method, being mostly analytical, requires only the classical optimization steps.
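
    The diffusion approximation at the heart of this framework replaces the Markov jump process with a stochastic differential equation whose noise shrinks as 1/√N. A minimal Euler-Maruyama sketch for an SIR-type system follows; all parameter values are illustrative.

```python
import numpy as np

# Euler-Maruyama simulation of the diffusion approximation of a density-
# dependent SIR model, with noise of order 1/sqrt(N) as in the paper above.
rng = np.random.default_rng(4)
N, gamma, R0 = 10_000, 1.0 / 3.0, 1.5
beta = R0 * gamma
s, i = 0.99, 0.01                        # susceptible / infectious proportions
dt = 0.1

for _ in range(int(150 / dt)):
    inf_rate, rec_rate = beta * s * i, gamma * i      # infection / recovery rates
    dW1, dW2 = rng.normal(0.0, np.sqrt(dt), size=2)   # independent increments
    ds = -inf_rate * dt - np.sqrt(inf_rate / N) * dW1
    di = ((inf_rate - rec_rate) * dt
          + np.sqrt(inf_rate / N) * dW1 - np.sqrt(rec_rate / N) * dW2)
    s, i = max(s + ds, 0.0), max(i + di, 0.0)

print(f"final susceptible proportion: {s:.3f}")
```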

  10. High-resolution urban observation network for user-specific meteorological information service in the Seoul Metropolitan Area, South Korea

    NASA Astrophysics Data System (ADS)

    Park, Moon-Soo; Park, Sung-Hwa; Chae, Jung-Hoon; Choi, Min-Hyeok; Song, Yunyoung; Kang, Minsoo; Roh, Joon-Woo

    2017-04-01

    To improve our knowledge of urban meteorology, including those processes applicable to high-resolution meteorological models in the Seoul Metropolitan Area (SMA), the Weather Information Service Engine (WISE) Urban Meteorological Observation System (UMS-Seoul) has been designed and installed. The UMS-Seoul incorporates 14 surface energy balance (EB) systems, 7 surface-based three-dimensional (3-D) meteorological observation systems, applied meteorological (AP) observation systems, and the existing surface-based meteorological observation network. The EB system consists of a radiation balance system, sonic anemometers, infrared CO2/H2O gas analyzers, and many sensors measuring wind speed and direction, temperature and humidity, precipitation, and air pressure. The EB-produced radiation, meteorological, and turbulence data will be used to quantify the surface EB according to land use and to improve the boundary-layer and surface processes in meteorological models. The 3-D system, composed of a wind lidar, microwave radiometer, and aerosol lidar or ceilometer, produces the cloud height, vertical profiles of backscatter by aerosols, wind speed and direction, temperature, humidity, and liquid water content. It will be used for high-resolution observation-based reanalysis data and for the improvement of the boundary-layer, radiation, and microphysics processes in meteorological models. The AP system includes road weather information, mosquito activity, water quality, and agrometeorological observation instruments. The standardized metadata for networks and stations are documented and renewed periodically to provide a detailed observation environment. The UMS-Seoul data system supports real-time acquisition and display, and the data are automatically quality-checked within 10 min of observation. After the quality check, data can be distributed to relevant potential users such as researchers and policy makers. Finally, two case studies demonstrate that the observed data have great potential to deepen understanding of boundary-layer structures, improve the performance of high-resolution meteorological models, and provide useful information customized to user demands in the SMA.

  11. Learning-Testing Process in Classroom: An Empirical Simulation Model

    ERIC Educational Resources Information Center

    Buda, Rodolphe

    2009-01-01

    This paper presents an empirical micro-simulation model of the teaching and the testing process in the classroom (Programs and sample data are available--the actual names of pupils have been hidden). It is a non-econometric micro-simulation model describing informational behaviors of the pupils, based on the observation of the pupils'…

  12. Implementation of a GPS-RO data processing system for the KIAPS-LETKF data assimilation system

    NASA Astrophysics Data System (ADS)

    Kwon, H.; Kang, J.-S.; Jo, Y.; Kang, J. H.

    2014-11-01

    The Korea Institute of Atmospheric Prediction Systems (KIAPS) has been developing a new global numerical weather prediction model and an advanced data assimilation system. As part of the KIAPS Package for Observation Processing (KPOP) system for data assimilation, preprocessing and quality control modules for bending angle measurements of global positioning system radio occultation (GPS-RO) data have been implemented and examined. The GPS-RO data processing system is composed of several steps for checking observation locations, missing values, and physical values for the Earth radius of curvature and geoid undulation. An observation-minus-background check is implemented using a one-dimensional observational bending angle operator, and tangent point drift is also considered in the quality control process. We have tested GPS-RO observations utilized by the Korea Meteorological Administration (KMA) within KPOP, based on both the KMA global model and the National Center for Atmospheric Research (NCAR) Community Atmosphere Model-Spectral Element (CAM-SE) as a model background. Background fields from the CAM-SE model are incorporated for the preparation of assimilation experiments with the KIAPS-LETKF data assimilation system, which has been successfully implemented for a cubed-sphere model with fully unstructured quadrilateral meshes. As a result of the data processing, the bending angle departure statistics between observation and background show significant improvement. Also, the first experiment assimilating KPOP-processed GPS-RO bending angles within KIAPS-LETKF shows encouraging results.
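
    The observation-minus-background check mentioned above is, in essence, a gross-error test on the departure between observed and operator-simulated bending angles. The sketch below shows the generic form of such a check; the error standard deviations, threshold factor, and sample values are hypothetical, not KPOP's actual settings.

```python
import numpy as np

def omb_check(obs, background, sigma_o, sigma_b, k=3.0):
    """Flag observations whose departure from the model background exceeds
    k standard deviations of the combined error; a common gross-error check
    with purely illustrative thresholds."""
    departure = obs - background
    tolerance = k * np.sqrt(sigma_o**2 + sigma_b**2)
    return np.abs(departure) <= tolerance            # True = keep

bending_obs = np.array([0.0215, 0.0190, 0.0420, 0.0201])   # radians (toy values)
bending_bkg = np.array([0.0210, 0.0195, 0.0205, 0.0198])   # from a 1D operator

keep = omb_check(bending_obs, bending_bkg, sigma_o=0.0008, sigma_b=0.0006)
print("accepted:", keep)                                   # third value rejected
```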

  13. Observation and integrated Earth-system science: A roadmap for 2016-2025

    NASA Astrophysics Data System (ADS)

    Simmons, Adrian; Fellous, Jean-Louis; Ramaswamy, Venkatachalam; Trenberth, Kevin; Asrar, Ghassem; Balmaseda, Magdalena; Burrows, John P.; Ciais, Philippe; Drinkwater, Mark; Friedlingstein, Pierre; Gobron, Nadine; Guilyardi, Eric; Halpern, David; Heimann, Martin; Johannessen, Johnny; Levelt, Pieternel F.; Lopez-Baeza, Ernesto; Penner, Joyce; Scholes, Robert; Shepherd, Ted

    2016-05-01

    This report is the response to a request by the Committee on Space Research of the International Council for Science to prepare a roadmap on observation and integrated Earth-system science for the coming ten years. Its focus is on the combined use of observations and modelling to address the functioning, predictability and projected evolution of interacting components of the Earth system on timescales out to a century or so. It discusses how observations support integrated Earth-system science and its applications, and identifies planned enhancements to the contributing observing systems and other requirements for observations and their processing. All types of observation are considered, but emphasis is placed on those made from space. The origins and development of the integrated view of the Earth system are outlined, noting the interactions between the main components that lead to requirements for integrated science and modelling, and for the observations that guide and support them. What constitutes an Earth-system model is discussed. Summaries are given of key cycles within the Earth system. The nature of Earth observation and the arrangements for international coordination essential for effective operation of global observing systems are introduced. Instances are given of present types of observation, what is already on the roadmap for 2016-2025 and some of the issues to be faced. Observations that are organised on a systematic basis and observations that are made for process understanding and model development, or other research or demonstration purposes, are covered. Specific accounts are given for many of the variables of the Earth system. The current status and prospects for Earth-system modelling are summarized. The evolution towards applying Earth-system models for environmental monitoring and prediction as well as for climate simulation and projection is outlined. General aspects of the improvement of models, whether through refining the representations of processes that are already incorporated or through adding new processes or components, are discussed. Some important elements of Earth-system models are considered more fully. Data assimilation is discussed not only because it uses observations and models to generate datasets for monitoring the Earth system and for initiating and evaluating predictions, in particular through reanalysis, but also because of the feedback it provides on the quality of both the observations and the models employed. Inverse methods for surface-flux or model-parameter estimation are also covered. Reviews are given of the way observations and the processed datasets based on them are used for evaluating models, and of the combined use of observations and models for monitoring and interpreting the behaviour of the Earth system and for predicting and projecting its future. A set of concluding discussions covers general developmental needs, requirements for continuity of space-based observing systems, further long-term requirements for observations and other data, technological advances and data challenges, and the importance of enhanced international co-operation.

  14. Observation and integrated Earth-system science: A roadmap for 2016–2025

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simmons, Adrian; Fellous, Jean-Louis; Ramaswamy, V.

    This report is the response to a request by the Committee on Space Research of the International Council for Science to prepare a roadmap on observation and integrated Earth-system science for the coming ten years. Its focus is on the combined use of observations and modelling to address the functioning, predictability and projected evolution of interacting components of the Earth system on timescales out to a century or so. It discusses how observations support integrated Earth-system science and its applications, and identifies planned enhancements to the contributing observing systems and other requirements for observations and their processing. All types of observation are considered, but emphasis is placed on those made from space. The origins and development of the integrated view of the Earth system are outlined, noting the interactions between the main components that lead to requirements for integrated science and modelling, and for the observations that guide and support them. What constitutes an Earth-system model is discussed. Summaries are given of key cycles within the Earth system. The nature of Earth observation and the arrangements for international coordination essential for effective operation of global observing systems are introduced. Instances are given of present types of observation, what is already on the roadmap for 2016–2025 and some of the issues to be faced. Observations that are organized on a systematic basis and observations that are made for process understanding and model development, or other research or demonstration purposes, are covered. Specific accounts are given for many of the variables of the Earth system. The current status and prospects for Earth-system modelling are summarized. The evolution towards applying Earth-system models for environmental monitoring and prediction as well as for climate simulation and projection is outlined. General aspects of the improvement of models, whether through refining the representations of processes that are already incorporated or through adding new processes or components, are discussed. Some important elements of Earth-system models are considered more fully. Data assimilation is discussed not only because it uses observations and models to generate datasets for monitoring the Earth system and for initiating and evaluating predictions, in particular through reanalysis, but also because of the feedback it provides on the quality of both the observations and the models employed. Inverse methods for surface-flux or model-parameter estimation are also covered. Reviews are given of the way observations and the processed datasets based on them are used for evaluating models, and of the combined use of observations and models for monitoring and interpreting the behaviour of the Earth system and for predicting and projecting its future. A set of concluding discussions covers general developmental needs, requirements for continuity of space-based observing systems, further long-term requirements for observations and other data, technological advances and data challenges, and the importance of enhanced international co-operation.

  15. Modeling urbanized watershed flood response changes with distributed hydrological model: key hydrological processes, parameterization and case studies

    NASA Astrophysics Data System (ADS)

    Chen, Y.

    2017-12-01

    Urbanization has been the global development trend for the past century, and developing countries have experienced much more rapid urbanization in recent decades. Urbanization brings many benefits to human beings, but it also causes negative impacts, such as increased flood risk. The impact of urbanization on flood response has long been observed, but studying this effect quantitatively still faces great challenges. For example, setting up an appropriate hydrological model representing the changed flood responses and determining accurate model parameters are very difficult in an urbanized or urbanizing watershed. The Pearl River Delta area has seen some of the most rapid urbanization in China over the past decades, and dozens of highly urbanized watersheds have appeared there. In this study, a physically based distributed watershed hydrological model, the Liuxihe model, is employed and revised to simulate the hydrological processes of highly urbanized watershed floods in the Pearl River Delta area. A virtual soil type is defined in the terrain properties dataset, and its runoff production and routing algorithms are added to the Liuxihe model. Based on a parameter sensitivity analysis, the key hydrological processes of a highly urbanized watershed are identified, which provides insight into the hydrological processes and supports parameter optimization. Based on the above analysis, the model is set up in the Songmushan watershed, where hydrological observations are available. A model parameter optimization and updating strategy is proposed based on remotely sensed land use/cover (LUC) types, which optimizes the model parameters with the PSO algorithm and updates them when the LUC types change. The model parameters in the Songmushan watershed are regionalized to the Pearl River Delta area watersheds based on the LUC types of the other watersheds. A dozen watersheds in the highly urbanized area of Dongguan City in the Pearl River Delta area were studied for flood response changes due to urbanization, and the results show that urbanization has a big impact on watershed flood responses: peak flows increased severalfold after urbanization, a much larger change than previously reported.

  16. Implications of Bandura's Observational Learning Theory for a Competency Based Teacher Education Model.

    ERIC Educational Resources Information Center

    Hartjen, Raymond H.

    Albert Bandura of Stanford University has proposed four component processes to his theory of observational learning: a) attention, b) retention, c) motor reproduction, and d) reinforcement and motivation. This study represents one phase of an effort to relate modeling and observational learning theory to teacher training. The problem of this study…

  17. Development and evaluation of spatial point process models for epidermal nerve fibers.

    PubMed

    Olsbo, Viktor; Myllymäki, Mari; Waller, Lance A; Särkkä, Aila

    2013-06-01

    We propose two spatial point process models for the spatial structure of epidermal nerve fibers (ENFs) across human skin. The models derive from two point processes, Φb and Φe, describing the locations of the base and end points of the fibers. Each point of Φe (the end point process) is connected to a unique point in Φb (the base point process). In the first model, both Φe and Φb are Poisson processes, yielding a null model of uniform coverage of the skin by end points and general baseline results and reference values for moments of key physiologic indicators. The second model provides a mechanistic model to generate end points for each base, and we model the branching structure more directly by defining Φe as a cluster process conditioned on the realization of Φb as its parent points. In both cases, we derive distributional properties for observable quantities of direct interest to neurologists such as the number of fibers per base, and the direction and range of fibers on the skin. We contrast both models by fitting them to data from skin blister biopsy images of ENFs and provide inference regarding physiological properties of ENFs.
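
    The two-level structure of the models above (a base point process plus clustered end points) is straightforward to simulate. The following sketch generates a Poisson base process and Matern-like clusters of end points and computes fibers per base; all intensities and the dispersal kernel are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(5)

# Base points (nerve-fiber bases): homogeneous Poisson process on a unit square.
n_base = rng.poisson(lam=50)
bases = rng.random((n_base, 2))

# End points: each base spawns a Poisson number of fibers whose end points
# scatter around it (a Matern-like cluster process; parameters illustrative).
ends, parent = [], []
for b_idx, (x, y) in enumerate(bases):
    for _ in range(rng.poisson(lam=3)):
        r, theta = 0.02 * rng.rayleigh(), rng.uniform(0, 2 * np.pi)
        ends.append((x + r * np.cos(theta), y + r * np.sin(theta)))
        parent.append(b_idx)

ends = np.array(ends)
fibers_per_base = np.bincount(parent, minlength=n_base)  # a key observable
print("mean fibers per base:", fibers_per_base.mean().round(2))
```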

  18. A simulation framework for mapping risks in clinical processes: the case of in-patient transfers.

    PubMed

    Dunn, Adam G; Ong, Mei-Sing; Westbrook, Johanna I; Magrabi, Farah; Coiera, Enrico; Wobcke, Wayne

    2011-05-01

    To model how individual violations in routine clinical processes cumulatively contribute to the risk of adverse events in hospital using an agent-based simulation framework. An agent-based simulation was designed to model the cascade of common violations that contribute to the risk of adverse events in routine clinical processes. Clinicians and the information systems that support them were represented as a group of interacting agents using data from direct observations. The model was calibrated using data from 101 patient transfers observed in a hospital and results were validated for one of two scenarios (a misidentification scenario and an infection control scenario). Repeated simulations using the calibrated model were undertaken to create a distribution of possible process outcomes. The likelihood of end-of-chain risk is the main outcome measure, reported for each of the two scenarios. The simulations demonstrate end-of-chain risks of 8% and 24% for the misidentification and infection control scenarios, respectively. Over 95% of the simulations in both scenarios are unique, indicating that the in-patient transfer process diverges from prescribed work practices in a variety of ways. The simulation allowed us to model the risk of adverse events in a clinical process, by generating the variety of possible work subject to violations, a novel prospective risk analysis method. The in-patient transfer process has a high proportion of unique trajectories, implying that risk mitigation may benefit from focusing on reducing complexity rather than augmenting the process with further rule-based protocols.
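
    The cascade logic of the framework above can be caricatured as a Monte Carlo experiment over a chain of safeguards, each subject to an observed violation probability. A minimal sketch follows; the step probabilities are invented and do not reproduce the study's calibrated 8% and 24% figures.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical in-patient transfer: a chain of checks, each with a per-step
# violation probability; risk reaches the end of the chain only if every
# safeguard fails (probabilities illustrative, not the study's values).
p_violation = [0.40, 0.55, 0.60, 0.45, 0.35]

def one_transfer():
    """Simulate one transfer; True if every safeguard in the chain fails."""
    return all(rng.random() < p for p in p_violation)

n_runs = 100_000
end_of_chain = sum(one_transfer() for _ in range(n_runs)) / n_runs
print(f"simulated end-of-chain risk: {end_of_chain:.1%}")
```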

  19. Model-checking techniques based on cumulative residuals.

    PubMed

    Lin, D Y; Wei, L J; Ying, Z

    2002-03-01

    Residuals have long been used for graphical and numerical examinations of the adequacy of regression models. Conventional residual analysis based on the plots of raw residuals or their smoothed curves is highly subjective, whereas most numerical goodness-of-fit tests provide little information about the nature of model misspecification. In this paper, we develop objective and informative model-checking techniques by taking the cumulative sums of residuals over certain coordinates (e.g., covariates or fitted values) or by considering some related aggregates of residuals, such as moving sums and moving averages. For a variety of statistical models and data structures, including generalized linear models with independent or dependent observations, the distributions of these stochastic processes under the assumed model can be approximated by the distributions of certain zero-mean Gaussian processes whose realizations can be easily generated by computer simulation. Each observed process can then be compared, both graphically and numerically, with a number of realizations from the Gaussian process. Such comparisons enable one to assess objectively whether a trend seen in a residual plot reflects model misspecification or natural variation. The proposed techniques are particularly useful in checking the functional form of a covariate and the link function. Illustrations with several medical studies are provided.
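
    The technique above inspects cumulative sums of residuals and compares them against simulated zero-mean Gaussian realizations. The sketch below applies a simplified version (a wild-multiplier resample rather than the paper's exact construction) to a deliberately misspecified linear fit.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
x = rng.uniform(0, 1, n)
y = 2.0 + 1.5 * x**2 + rng.normal(0, 0.2, n)   # true relation is quadratic

# Fit a (misspecified) linear model and cumulate residuals over x.
X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
order = np.argsort(x)
W_obs = np.cumsum(resid[order]) / np.sqrt(n)

# Compare the observed supremum with realizations obtained by multiplying
# residuals by standard-normal noise (a simple wild resample).
sup_obs = np.abs(W_obs).max()
sims = [np.abs(np.cumsum(resid[order] * rng.standard_normal(n)) / np.sqrt(n)).max()
        for _ in range(1000)]
print("approx. p-value:", np.mean(np.array(sims) >= sup_obs))
```

    A small p-value indicates that the drift in the cumulative-residual process reflects misspecification (here, the missing quadratic term) rather than natural variation.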

  20. Aligning observed and modelled behaviour based on workflow decomposition

    NASA Astrophysics Data System (ADS)

    Wang, Lu; Du, YuYue; Liu, Wei

    2017-09-01

    When business processes are mostly supported by information systems, the availability of event logs generated from these systems, as well as the requirement for appropriate process models, is increasing. Business processes can be discovered, monitored and enhanced by extracting process-related information. However, some events cannot be correctly identified because of the explosion of the amount of event logs. Therefore, a new process mining technique is proposed in this paper based on a workflow decomposition method. Petri nets (PNs) are used to describe business processes, and then conformance checking of event logs and process models is investigated. A decomposition approach is proposed to divide large process models and event logs into several separate parts that can be analysed independently, while an alignment approach based on a state equation method in PN theory enhances the performance of conformance checking. Both approaches are implemented in the process mining framework ProM. The correctness and effectiveness of the proposed methods are illustrated through experiments.

  1. Pitfalls in alignment of observation models resolved using PROV as an upper ontology

    NASA Astrophysics Data System (ADS)

    Cox, S. J. D.

    2015-12-01

    A number of models for observation metadata have been developed in the earth and environmental science communities, including OGC's Observations and Measurements (O&M), the ecosystems community's Extensible Observation Ontology (OBOE), the W3C's Semantic Sensor Network Ontology (SSNO), and the CUAHSI/NSF Observations Data Model v2 (ODM2). In order to combine data formalized in the various models, mappings between these must be developed. In some cases this is straightforward: since ODM2 took O&M as its starting point, their terminology is almost completely aligned. In the eco-informatics world observations are almost never made in isolation of other observations, so OBOE pays particular attention to groupings, with multiple atomic 'Measurements' in each oboe:Observation which does not have a result of its own and thus plays a different role to an om:Observation. And while SSN also adopted terminology from O&M, mapping is confounded by the fact that SSN uses DOLCE as its foundation and places ssn:Observations as 'Social Objects' which are explicitly disjoint from 'Events', while O&M is formalized as part of the ISO/TC 211 harmonised (UML) model and sees om:Observations as value assignment activities. Foundational ontologies (such as BFO, GFO, UFO or DOLCE) can provide a framework for alignment, but different upper ontologies can be based in profoundly different worldviews and use of incommensurate frameworks can confound rather than help. A potential resolution is provided by comparing recent studies that align SSN and O&M, respectively, with the PROV-O ontology. PROV-O provides just three base classes: Entity, Activity and Agent. om:Observation is sub-classed from prov:Activity, while ssn:Observation is sub-classed from prov:Entity. This confirms that, despite the same name, om:Observation and ssn:Observation denote different aspects of the observation process: the observation event, and the record of the observation event, respectively. Alignment with the simple PROV-O classes has clarified this issue in a way that had previously proved difficult to resolve. The simple 3-class base model from PROV appears to provide just enough logic to serve as a lightweight upper ontology, particularly for workflow or process-based information.
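
    The alignment the record arrives at can be stated in a few RDF triples. A sketch using rdflib follows; the O&M and SSN namespace IRIs shown are illustrative, since both ontologies have been published under more than one IRI.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import RDFS

g = Graph()
PROV = Namespace("http://www.w3.org/ns/prov#")
# Namespace IRIs below are illustrative; the SSN and O&M ontologies have
# used several IRIs over time.
SSN = Namespace("http://purl.oclc.org/NET/ssnx/ssn#")
OM = Namespace("http://def.seegrid.csiro.au/ontology/om/om-lite#")

# The alignment described above: the O&M observation is an activity
# (the act of observing), while the SSN observation is an entity
# (the record of that act).
g.add((OM.Observation, RDFS.subClassOf, PROV.Activity))
g.add((SSN.Observation, RDFS.subClassOf, PROV.Entity))

print(g.serialize(format="turtle"))
```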

  2. An Approach to the Evaluation of Hypermedia.

    ERIC Educational Resources Information Center

    Knussen, Christina; And Others

    1991-01-01

    Discusses methods that may be applied to the evaluation of hypermedia, based on six models described by Lawton. Techniques described include observation, self-report measures, interviews, automated measures, psychometric tests, checklists and criterion-based techniques, process models, Experimentally Measuring Usability (EMU), and a naturalistic…

  3. Impacts of Subgrid Heterogeneous Mixing between Cloud Liquid and Ice on the Wegener-Bergeron-Findeisen Process and Mixed-phase Clouds in NCAR CAM5

    NASA Astrophysics Data System (ADS)

    Liu, X.; Zhang, M.; Zhang, D.; Wang, Z.; Wang, Y.

    2017-12-01

    Mixed-phase clouds are persistently observed over the Arctic, and the phase partitioning between cloud liquid and ice hydrometeors in mixed-phase clouds has important impacts on the surface energy budget and Arctic climate. In this study, we test the NCAR Community Atmosphere Model Version 5 (CAM5) in single-column and weather-forecast configurations and evaluate the model performance against observational data from the DOE Atmospheric Radiation Measurement (ARM) Program's M-PACE field campaign in October 2004 and long-term ground-based multi-sensor remote sensing measurements. Like most global climate models, CAM5 poorly simulates the phase partitioning in mixed-phase clouds, significantly underestimating the cloud liquid water content. Assuming pocket structures in the distribution of cloud liquid and ice in mixed-phase clouds, as suggested by in situ observations, provides a plausible solution to improve the model performance by reducing the Wegener-Bergeron-Findeisen (WBF) process rate. In this study, the modification of the WBF process in the CAM5 model has been achieved by applying a stochastic perturbation to the time scale of the WBF process relevant to both ice and snow, to account for the heterogeneous mixture of cloud liquid and ice. Our results show that this modification of the WBF process improves the modeled phase partitioning in mixed-phase clouds. The seasonal variation of mixed-phase cloud properties is also better reproduced by the model in comparison with the long-term ground-based remote sensing observations. Furthermore, the phase partitioning is insensitive to the reassignment time step of the perturbations.
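
    The modification described above amounts to drawing a random multiplier for the WBF timescale at each model step. The toy sketch below applies that idea to a single liquid-to-ice conversion equation; the base timescale, lognormal perturbation, and mixing ratios are assumptions, not CAM5's actual microphysics.

```python
import numpy as np

rng = np.random.default_rng(8)
dt, n_steps = 10.0, 360                    # 10 s steps, 1 h of simulation
q_liq, q_ice = 0.20, 0.02                  # g/kg, toy mixed-phase amounts
tau_wbf = 600.0                            # base WBF timescale (s), illustrative

for _ in range(n_steps):
    # Stochastically perturb the WBF timescale to mimic pockets where
    # liquid and ice are not homogeneously mixed (perturbation form assumed).
    tau = tau_wbf * rng.lognormal(mean=0.0, sigma=1.0)
    transfer = min(q_liq, q_liq * dt / tau)  # liquid converted to ice via WBF
    q_liq -= transfer
    q_ice += transfer

print(f"liquid fraction after 1 h: {q_liq / (q_liq + q_ice):.2f}")
```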

  4. Regulation-Structured Dynamic Metabolic Model Provides a Potential Mechanism for Delayed Enzyme Response in Denitrification Process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Hyun-Seob; Thomas, Dennis G.; Stegen, James C.

    In a recent study of denitrification dynamics in hyporheic zone sediments, we observed a significant time lag (up to several days) in the enzymatic response to changes in substrate concentration. To explore an underlying mechanism and understand the interactive dynamics between enzymes and nutrients, we developed a trait-based model that associates a community's traits with functional enzymes, instead of the typically used species guilds (or functional guilds). This enzyme-based formulation makes it possible to collectively describe the biogeochemical functions of microbial communities without directly parameterizing the dynamics of species guilds, and is therefore scalable to complex communities. As a key component of the modeling, we accounted for microbial regulation occurring through transcriptional and translational processes, whose dynamics were parameterized based on the temporal profiles of enzyme concentrations measured using a new signature peptide-based method. The simulation results using the resulting model showed several days of time lag in enzymatic responses, as observed in experiments. Further, the model showed that the delayed enzymatic reactions could be primarily controlled by transcriptional responses and that the dynamics of transcripts and enzymes are closely correlated. The developed model can serve as a useful tool for predicting biogeochemical processes in natural environments, either independently or through integration with hydrologic flow simulators.

  5. Introduction of the Notion of Differential Equations by Modelling Based Teaching

    ERIC Educational Resources Information Center

    Budinski, Natalija; Takaci, Djurdjica

    2011-01-01

    This paper proposes modelling based learning as a tool for learning and teaching mathematics. The example of modelling real world problems leading to the exponential function as the solution of differential equations is described, as well as the observations about students' activities during the process. The students were acquainted with the…

  6. Developing a probability-based model of aquifer vulnerability in an agricultural region

    NASA Astrophysics Data System (ADS)

    Chen, Shih-Kai; Jang, Cheng-Shin; Peng, Yi-Huei

    2013-04-01

    Hydrogeological settings of aquifers strongly influence regional groundwater movement and pollution processes. Establishing a map of aquifer vulnerability is critical for planning a scheme of groundwater quality protection. This study developed a novel probability-based DRASTIC model of aquifer vulnerability in the Choushui River alluvial fan, Taiwan, using indicator kriging, and determined various risk categories of contamination potential based on estimated vulnerability indexes. Categories and ratings of the six parameters in the probability-based DRASTIC model were probabilistically characterized according to two parameter classification methods: selecting the class with maximum estimation probability and calculating an expected value. Moreover, the probability-based estimation and assessment provided insight into the propagation of parameter uncertainty arising from limited observation data. To examine the model's capacity to predict pollution, the medium, high, and very high risk categories of contamination potential were compared with observed nitrate-N concentrations exceeding 0.5 mg/L, which indicate anthropogenic groundwater pollution. The results reveal that the developed probability-based DRASTIC model is capable of predicting high nitrate-N groundwater pollution and of characterizing parameter uncertainty via the probability estimation processes.
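
    A probability-based DRASTIC index can be formed by replacing each factor's fixed rating with its expected value over the class probabilities from indicator kriging. The sketch below shows that computation with the standard seven DRASTIC factors and weights (the study itself characterized six parameters); the cell's ratings and probabilities are invented.

```python
import numpy as np

# DRASTIC vulnerability index: weighted sum of seven factor ratings
# (Depth, Recharge, Aquifer, Soil, Topography, Impact of vadose zone,
# hydraulic Conductivity) with the standard weights.
WEIGHTS = {"D": 5, "R": 4, "A": 3, "S": 2, "T": 1, "I": 5, "C": 3}

def expected_rating(ratings, probs):
    """Probability-based rating: expected value over indicator-kriging
    class probabilities (the expected-value option described above)."""
    return float(np.dot(ratings, probs))

# Toy grid cell: candidate class ratings and their probabilities per factor.
cell = {
    "D": ([10, 7, 5], [0.6, 0.3, 0.1]),
    "R": ([8, 6],     [0.7, 0.3]),
    "A": ([8],        [1.0]),
    "S": ([9, 6],     [0.5, 0.5]),
    "T": ([10],       [1.0]),
    "I": ([8, 4],     [0.2, 0.8]),
    "C": ([6, 2],     [0.4, 0.6]),
}

index = sum(WEIGHTS[f] * expected_rating(np.array(r), np.array(p))
            for f, (r, p) in cell.items())
print("probability-based DRASTIC index:", round(index, 1))
```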

  7. Developing an Automated Method for Detection of Operationally Relevant Ocean Fronts and Eddies

    NASA Astrophysics Data System (ADS)

    Rogers-Cotrone, J. D.; Cadden, D. D. H.; Rivera, P.; Wynn, L. L.

    2016-02-01

    Since the early 1990s, the U.S. Navy has utilized an observation-based process for the identification of frontal systems and eddies. These Ocean Feature Assessments (OFA) rely on trained analysts to identify and position ocean features using satellite-observed sea surface temperatures. Meanwhile, as enhancements and expansion of the Navy's HYbrid Coordinate Ocean Model (HYCOM) and Regional Navy Coastal Ocean Model (RNCOM) domains have proceeded, the Naval Oceanographic Office (NAVO) has provided Tactical Oceanographic Feature Assessments (TOFA) that are based on data-validated model output but also rely on analyst identification of significant features. A recently completed project has migrated OFA production to the ArcGIS-based Acoustic Reach-back Cell Ocean Analysis Suite (ARCOAS), enabling the use of additional observational datasets and significantly decreasing production time; however, it has highlighted inconsistencies inherent to this analyst-based identification process. Current efforts are focused on the development of an automated method for detecting operationally significant fronts and eddies that integrates model output and observational data on a global scale. Previous attempts to employ techniques from the scientific community have been unable to meet the production tempo at NAVO. Thus, a system that incorporates existing techniques (Marr-Hildreth, Okubo-Weiss, etc.) with internally developed feature identification methods (from model-derived physical and acoustic properties) is required. Ongoing expansions to the ARCOAS toolset have shown promising early results.
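
    Of the techniques named above, Marr-Hildreth edge detection is easily sketched: smooth with a Gaussian, take the Laplacian, and locate zero crossings with a sufficiently strong gradient. The example below applies it to a synthetic SST front; the smoothing scale and gradient threshold are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

# Marr-Hildreth edge detection (Laplacian of Gaussian + zero crossings)
# applied to a synthetic SST field with a single zonal front.
yy, xx = np.mgrid[0:200, 0:200]
sst = 15.0 + 10.0 / (1.0 + np.exp(-(yy - 100) / 5.0))   # a 10 deg C front

lap = gaussian_laplace(sst, sigma=3.0)

# Zero crossings of the LoG (between adjacent rows) mark candidate fronts.
zero_cross = np.sign(lap[:-1, :]) != np.sign(lap[1:, :])

# Keep only crossings with a strong local gradient (threshold illustrative).
grad = np.hypot(*np.gradient(sst))
front = zero_cross & (grad[:-1, :] > 0.2)
print("frontal pixels detected:", int(front.sum()))
```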

  8. An analysis of USSPACECOM's space surveillance network sensor tasking methodology

    NASA Astrophysics Data System (ADS)

    Berger, Jeff M.; Moles, Joseph B.; Wilsey, David G.

    1992-12-01

    This study provides the basis for the development of a cost/benefit assessment model to determine the effects of alterations to the Space Surveillance Network (SSN) on orbital element (OE) set accuracy. It provides a review of current methods used by NORAD and the SSN to gather and process observations, an alternative to the current Gabbard classification method, and the development of a model to determine the effects of observation rate and correction interval on OE set accuracy. The proposed classification scheme is based on satellite J2 perturbations. Specifically, classes were established based on mean motion, eccentricity, and inclination, since J2 perturbation effects are functions of only these elements. Model development began by creating representative sensor observations using a highly accurate orbital propagation model. These observations were compared to predicted observations generated using the NORAD Simplified General Perturbation (SGP4) model and differentially corrected using a Bayes sequential estimation algorithm. A 10-run Monte Carlo analysis was performed using this model on 12 satellites using 16 different observation rate/correction interval combinations. An ANOVA and confidence-interval analysis of the results shows that this model demonstrates the differences in steady-state position error based on varying observation rate and correction interval.

  9. Spatial perspectives in state-and-transition models: A missing link to land management?

    USDA-ARS?s Scientific Manuscript database

    Conceptual models of alternative states and thresholds are based largely on observations of ecosystem processes at a few points in space. Because the distribution of alternative states in spatially-structured ecosystems is the result of variations in pattern-process interactions at different scales,...

  10. Polar Processes in a 50-year Simulation of Stratospheric Chemistry and Transport

    NASA Technical Reports Server (NTRS)

    Kawa, S.R.; Douglass, A. R.; Patrick, L. C.; Allen, D. R.; Randall, C. E.

    2004-01-01

    The unique chemical, dynamical, and microphysical processes that occur in the winter polar lower stratosphere are expected to interact strongly with changing climate and trace gas abundances. Significant changes in ozone have been observed and prediction of future ozone and climate interactions depends on modeling these processes successfully. We have conducted an off-line model simulation of the stratosphere for trace gas conditions representative of 1975-2025 using meteorology from the NASA finite-volume general circulation model. The objective of this simulation is to examine the sensitivity of stratospheric ozone and chemical change to varying meteorology and trace gas inputs. This presentation will examine the dependence of ozone and related processes in polar regions on the climatological and trace gas changes in the model. The model's past performance is baselined against available observations, and a future ozone recovery scenario is forecast. Overall the model ozone simulation is quite realistic, but initial analysis of the detailed evolution of some observable processes suggests systematic shortcomings in our description of the polar chemical rates and/or mechanisms. Model sensitivities, strengths, and weaknesses will be discussed with implications for uncertainty and confidence in coupled climate chemistry predictions.

  11. Bayesian Analysis of the Glacial-Interglacial Methane Increase Constrained by Stable Isotopes and Earth System Modeling

    NASA Astrophysics Data System (ADS)

    Hopcroft, Peter O.; Valdes, Paul J.; Kaplan, Jed O.

    2018-04-01

    The observed rise in atmospheric methane (CH4) from 375 ppbv during the Last Glacial Maximum (LGM: 21,000 years ago) to 680 ppbv during the late preindustrial era is not well understood. Atmospheric chemistry considerations implicate an increase in CH4 sources, but process-based estimates fail to reproduce the required amplitude. CH4 stable isotopes provide complementary information that can help constrain the underlying causes of the increase. We combine Earth System model simulations of the late preindustrial and LGM CH4 cycles, including process-based estimates of the isotopic discrimination of vegetation, in a box model of atmospheric CH4 and its isotopes. Using a Bayesian approach, we show how model-based constraints and ice core observations may be combined in a consistent probabilistic framework. The resultant posterior distributions point to a strong reduction in wetland and other biogenic CH4 emissions during the LGM, with a modest increase in the geological source, or potentially natural or anthropogenic fires, accounting for the observed enrichment of δ13CH4.
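
    The core bookkeeping can be shown with a two-source steady-state box model: atmospheric burden = total source x lifetime, plus a d13C mass balance over the source mix. The sketch below is a minimal illustration with assumed lifetime, source signatures, sink fractionation, and atmospheric d13C values; it is not the paper's Earth System model or its posterior estimates.

        import numpy as np

        # All numbers are illustrative assumptions, not the paper's values.
        PPB_TO_TG = 2.75        # approx. Tg CH4 per ppbv of global mixing ratio
        TAU = 9.0               # assumed atmospheric lifetime, years
        EPS_SINK = 7.0          # sinks leave the atmosphere ~7 permil heavier
        D13C_BIO, D13C_GEO = -60.0, -40.0   # assumed source signatures, permil

        def invert_sources(ch4_ppb, d13c_atm):
            """Solve a 2x2 budget: total source and d13C mass balance."""
            s_tot = ch4_ppb * PPB_TO_TG / TAU        # Tg/yr total source
            d_src = d13c_atm - EPS_SINK              # flux-weighted source d13C
            A = np.array([[1.0, 1.0], [D13C_BIO, D13C_GEO]])
            b = np.array([s_tot, s_tot * d_src])
            return np.linalg.solve(A, b)             # (s_bio, s_geo) in Tg/yr

        s_bio_lgm, s_geo_lgm = invert_sources(375.0, -46.0)   # assumed LGM d13C
        s_bio_pi, s_geo_pi = invert_sources(680.0, -47.5)     # assumed PI d13C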

  12. Parameter-induced uncertainty quantification of crop yields, soil N2O and CO2 emission for 8 arable sites across Europe using the LandscapeDNDC model

    NASA Astrophysics Data System (ADS)

    Santabarbara, Ignacio; Haas, Edwin; Kraus, David; Herrera, Saul; Klatt, Steffen; Kiese, Ralf

    2014-05-01

    When using biogeochemical models to estimate greenhouse gas emissions at site to regional/national levels, the assessment and quantification of the uncertainties of simulation results are of significant importance. The uncertainties in simulation results of process-based ecosystem models may result from uncertainties of the process parameters that describe the processes of the model, from model structure inadequacy, as well as from uncertainties in the observations. Data for development and testing of the uncertainty analysis were crop yield observations and measurements of soil fluxes of nitrous oxide (N2O) and carbon dioxide (CO2) from 8 arable sites across Europe. Using the process-based biogeochemical model LandscapeDNDC for simulating crop yields and N2O and CO2 emissions, our aim is to assess the simulation uncertainty by setting up a Bayesian framework based on the Metropolis-Hastings algorithm. Gelman convergence criteria and parallel computing techniques enable multiple Markov chains to run independently in parallel, creating a random walk that estimates the joint model parameter distribution. From this distribution we limit the parameter space, obtain probabilities of parameter values, and identify the complex dependencies among them. With this distribution of the parameters that determine soil-atmosphere C and N exchange, we are able to obtain the parameter-induced uncertainty of simulation results and compare it with the measurement data.
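
    A minimal random-walk Metropolis-Hastings sketch of this kind of sampling, with a toy Gaussian likelihood standing in for the far more expensive LandscapeDNDC runs, followed by a basic Gelman-Rubin check across chains started from dispersed points:

        import numpy as np

        rng = np.random.default_rng(42)
        obs = rng.normal(2.0, 0.5, size=50)   # stand-in for yield/N2O/CO2 data

        def log_posterior(theta):
            """Toy log-posterior: flat prior on [0, 5], Gaussian likelihood."""
            if not 0.0 <= theta <= 5.0:
                return -np.inf
            return -0.5 * np.sum((obs - theta) ** 2 / 0.5 ** 2)

        def metropolis_hastings(n_steps=20_000, step=0.2, theta0=1.0):
            chain = np.empty(n_steps)
            theta, lp = theta0, log_posterior(theta0)
            for i in range(n_steps):
                prop = theta + step * rng.standard_normal()   # random walk
                lp_prop = log_posterior(prop)
                if np.log(rng.uniform()) < lp_prop - lp:      # accept/reject
                    theta, lp = prop, lp_prop
                chain[i] = theta
            return chain

        chains = np.array([metropolis_hastings(theta0=t0) for t0 in (0.5, 2.5, 4.5)])
        n = chains.shape[1]
        W = chains.var(axis=1, ddof=1).mean()           # within-chain variance
        B = n * chains.mean(axis=1).var(ddof=1)         # between-chain variance
        rhat = np.sqrt(((n - 1) / n * W + B / n) / W)   # Gelman-Rubin R-hat ~ 1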

  13. SEIPS-based process modeling in primary care.

    PubMed

    Wooldridge, Abigail R; Carayon, Pascale; Hundt, Ann Schoofs; Hoonakker, Peter L T

    2017-04-01

    Process mapping, often used as part of the human factors and systems engineering approach to improve care delivery and outcomes, should be expanded to represent the complex, interconnected sociotechnical aspects of health care. Here, we propose a new sociotechnical process modeling method to describe and evaluate processes, using the SEIPS model as the conceptual framework. The method produces a process map and supplementary table, which identify work system barriers and facilitators. In this paper, we present a case study applying this method to three primary care processes. We used purposeful sampling to select staff (care managers, providers, nurses, administrators and patient access representatives) from two clinics to observe and interview. We show the proposed method can be used to understand and analyze healthcare processes systematically and identify specific areas of improvement. Future work is needed to assess usability and usefulness of the SEIPS-based process modeling method and further refine it. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. SEIPS-Based Process Modeling in Primary Care

    PubMed Central

    Wooldridge, Abigail R.; Carayon, Pascale; Hundt, Ann Schoofs; Hoonakker, Peter

    2016-01-01

    Process mapping, often used as part of the human factors and systems engineering approach to improve care delivery and outcomes, should be expanded to represent the complex, interconnected sociotechnical aspects of health care. Here, we propose a new sociotechnical process modeling method to describe and evaluate processes, using the SEIPS model as the conceptual framework. The method produces a process map and supplementary table, which identify work system barriers and facilitators. In this paper, we present a case study applying this method to three primary care processes. We used purposeful sampling to select staff (care managers, providers, nurses, administrators and patient access representatives) from two clinics to observe and interview. We show the proposed method can be used to understand and analyze healthcare processes systematically and identify specific areas of improvement. Future work is needed to assess usability and usefulness of the SEIPS-based process modeling method and further refine it. PMID:28166883

  15. Implementation of a GPS-RO data processing system for the KIAPS-LETKF data assimilation system

    NASA Astrophysics Data System (ADS)

    Kwon, H.; Kang, J.-S.; Jo, Y.; Kang, J. H.

    2015-03-01

    The Korea Institute of Atmospheric Prediction Systems (KIAPS) has been developing a new global numerical weather prediction model and an advanced data assimilation system. As part of the KIAPS package for observation processing (KPOP) for data assimilation, preprocessing and quality control modules for bending-angle measurements of global positioning system radio occultation (GPS-RO) data have been implemented and examined. The GPS-RO data processing system is composed of several steps for checking observation locations, missing values, physical values for the Earth's radius of curvature, and geoid undulation. An observation-minus-background check is implemented by use of a one-dimensional observational bending-angle operator, and tangent point drift is also considered in the quality control process. We have tested GPS-RO observations utilized by the Korea Meteorological Administration (KMA) within KPOP, based on both the KMA global model and the National Center for Atmospheric Research Community Atmosphere Model with Spectral Element dynamical core (CAM-SE) as a model background. Background fields from the CAM-SE model are incorporated for the preparation of assimilation experiments with the KIAPS local ensemble transform Kalman filter (LETKF) data assimilation system, which has been successfully implemented on a cubed-sphere model with unstructured quadrilateral meshes. As a result of data processing, the bending-angle departure statistics between observation and background show significant improvement. Also, the first experiment in assimilating GPS-RO bending angles from KPOP within KIAPS-LETKF shows encouraging results.
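
    A schematic observation-minus-background screening step of the kind described; the function name, missing-value rules, and 3-sigma threshold below are illustrative assumptions rather than the actual KPOP code.

        import numpy as np

        def qc_bending_angle(obs, bkg, sigma, n_sigma=3.0):
            """Reject bending angles whose O-B departure exceeds n_sigma * sigma.

            obs, bkg : observed and background (model-equivalent) bending angles
            sigma    : assumed observation-error standard deviation per level
            Returns a boolean acceptance mask and the departures.
            """
            obs = np.asarray(obs, dtype=float)
            departure = obs - np.asarray(bkg, dtype=float)
            ok = np.isfinite(obs)                    # missing-value check
            ok &= obs > 0.0                          # physical-value check
            ok &= np.abs(departure) <= n_sigma * np.asarray(sigma)
            return ok, departure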

  16. A Harris-Todaro Agent-Based Model to Rural-Urban Migration

    NASA Astrophysics Data System (ADS)

    Espíndola, Aquino L.; Silveira, Jaylson J.; Penna, T. J. P.

    2006-09-01

    The Harris-Todaro model of the rural-urban migration process is revisited under an agent-based approach. The migration of the workers is interpreted as a process of social learning by imitation, formalized by a computational model. By simulating this model, we observe transitional dynamics with continuous growth of the urban fraction of the overall population toward an equilibrium. Such an equilibrium is characterized by stabilization of the rural-urban expected-wage differential (the generalized Harris-Todaro equilibrium condition), urban concentration and urban unemployment. These classic results, obtained originally by Harris and Todaro, are emergent properties of our model.
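
    A minimal sketch of such dynamics: workers switch sectors by imitation with probability increasing in the expected-wage gap, where the expected urban wage is the wage times the employment probability. The system settles near the generalized Harris-Todaro equilibrium with persistent urban unemployment; all parameter values are illustrative, not the paper's.

        import numpy as np

        rng = np.random.default_rng(1)
        N, URBAN_WAGE, URBAN_JOBS = 10_000, 2.0, 3_000   # illustrative parameters
        n_urban = 1_000

        def wages(n_urban):
            employment = min(1.0, URBAN_JOBS / max(n_urban, 1))
            w_urban_expected = URBAN_WAGE * employment       # wage x job probability
            w_rural = 1.5 * (1.0 - 0.5 * (N - n_urban) / N)  # rises as labor leaves
            return w_urban_expected, w_rural

        for _ in range(200):
            w_u, w_r = wages(n_urban)
            p = min(1.0, 0.1 * abs(w_u - w_r))   # imitation: switch prob ~ wage gap
            if w_u > w_r:
                n_urban += rng.binomial(N - n_urban, p)
            else:
                n_urban -= rng.binomial(n_urban, p)

        w_u, w_r = wages(n_urban)   # near equilibrium: w_u ~ w_r
        unemployment = max(0.0, 1.0 - URBAN_JOBS / n_urban)  # persists at equilibrium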

  17. Orientational analysis of planar fibre systems observed as a Poisson shot-noise process.

    PubMed

    Kärkkäinen, Salme; Lantuéjoul, Christian

    2007-10-01

    We consider two-dimensional fibrous materials observed as a digital greyscale image. The problem addressed is to estimate the orientation distribution of unobservable thin fibres from a greyscale image modelled by a planar Poisson shot-noise process. The classical stereological approach is not straightforward, because the point intensities of thin fibres along sampling lines may not be observable. For such cases, Kärkkäinen et al. (2001) suggested the use of scaled variograms determined from grey values along sampling lines in several directions. Their method is based on the assumption that the proportion between the scaled variograms and point intensities in all directions of sampling lines is constant. This assumption is proved to be valid asymptotically for Boolean models and dead leaves models, under some regularity conditions. In this work, we derive the scaled variogram and its approximations for a planar Poisson shot-noise process using the modified Bessel function. In the case of reasonably high resolution of the observed image, the scaled variogram has an approximate functional relation to the point intensity, and in the case of high resolution the relation is proportional. As the obtained relations are approximate, they are tested on simulations. The existing orientation analysis method based on the proportional relation is further tested on images with different resolutions. The new result, the asymptotic proportionality between the scaled variograms and the point intensities for a Poisson shot-noise process, completes the earlier results for the Boolean models and for the dead leaves models.
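
    For illustration, a directional empirical variogram of grey values, scaled here by the overall image variance; the paper's estimator is derived for the shot-noise model specifically, so this shows only the generic ingredient (lags are in pixels, axis 0 or 1 selects the sampling direction):

        import numpy as np

        def scaled_variogram(img, lags, axis=0):
            """Empirical variogram of grey values along one image direction,
            divided by the overall variance as a simple scaling."""
            img = np.asarray(img, dtype=float)
            gamma = []
            for h in lags:
                a = np.take(img, range(h, img.shape[axis]), axis=axis)
                b = np.take(img, range(0, img.shape[axis] - h), axis=axis)
                gamma.append(0.5 * np.mean((a - b) ** 2))
            return np.array(gamma) / img.var()

        rng = np.random.default_rng(2)
        image = rng.poisson(5.0, size=(128, 128)).astype(float)  # stand-in image
        g_rows = scaled_variogram(image, lags=[1, 2, 4, 8], axis=0)
        g_cols = scaled_variogram(image, lags=[1, 2, 4, 8], axis=1)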

  18. Improved workflow modelling using role activity diagram-based modelling with application to a radiology service case study.

    PubMed

    Shukla, Nagesh; Keast, John E; Ceglarek, Darek

    2014-10-01

    The modelling of complex workflows is an important problem-solving technique within healthcare settings. However, currently most of the workflow models use a simplified flow chart of patient flow obtained using on-site observations, group-based debates and brainstorming sessions, together with historic patient data. This paper presents a systematic and semi-automatic methodology for knowledge acquisition with detailed process representation using sequential interviews of people in the key roles involved in the service delivery process. The proposed methodology allows the modelling of roles, interactions, actions, and decisions involved in the service delivery process. This approach is based on protocol generation and analysis techniques such as: (i) initial protocol generation based on qualitative interviews of radiology staff, (ii) extraction of key features of the service delivery process, (iii) discovering the relationships among the key features extracted, and, (iv) a graphical representation of the final structured model of the service delivery process. The methodology is demonstrated through a case study of a magnetic resonance (MR) scanning service-delivery process in the radiology department of a large hospital. A set of guidelines is also presented in this paper to visually analyze the resulting process model for identifying process vulnerabilities. A comparative analysis of different workflow models is also conducted. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  19. Spatiotemporal patterns of terrestrial gross primary production: A review

    NASA Astrophysics Data System (ADS)

    Anav, Alessandro; Friedlingstein, Pierre; Beer, Christian; Ciais, Philippe; Harper, Anna; Jones, Chris; Murray-Tortarolo, Guillermo; Papale, Dario; Parazoo, Nicholas C.; Peylin, Philippe; Piao, Shilong; Sitch, Stephen; Viovy, Nicolas; Wiltshire, Andy; Zhao, Maosheng

    2015-09-01

    Great advances have been made in the last decade in quantifying and understanding the spatiotemporal patterns of terrestrial gross primary production (GPP) with ground, atmospheric, and space observations. However, although global GPP estimates exist, each data set relies upon assumptions and none of the available data are based only on measurements. Consequently, there is no consensus on the global total GPP and large uncertainties exist in its benchmarking. The objective of this review is to assess how the different available data sets predict the spatiotemporal patterns of GPP, identify the differences among data sets, and highlight the main advantages/disadvantages of each data set. We compare GPP estimates for the historical period (1990-2009) from two observation-based data sets (Model Tree Ensemble and Moderate Resolution Imaging Spectroradiometer) to coupled carbon-climate models and terrestrial carbon cycle models from the Fifth Climate Model Intercomparison Project and TRENDY projects and to a new hybrid data set (CARBONES). Results show a large range in the mean global GPP estimates. The different data sets broadly agree on the GPP seasonal cycle in terms of phasing, while there is still discrepancy in the amplitude. For interannual variability (IAV) and trends, there is a clear separation between the observation-based data, which show little IAV and trend, and the process-based models, which have large GPP variability and significant trends. These results suggest that there is an urgent need to improve observation-based data sets and to develop carbon cycle modeling with better representation of processes that are currently treated either very simplistically or not at all, in order to correctly estimate present GPP and better quantify the future uptake of carbon dioxide by the world's vegetation.

  20. Statistical and engineering methods for model enhancement

    NASA Astrophysics Data System (ADS)

    Chang, Chia-Jung

    Models which describe the performance of a physical process are essential for quality prediction, experimental planning, process control and optimization. Engineering models developed based on the underlying physics/mechanics of the process, such as analytic models or finite element models, are widely used to capture the deterministic trend of the process. However, there usually exists stochastic randomness in the system, which may introduce discrepancy between physics-based model predictions and observations in reality. Alternatively, statistical models can be developed to obtain predictions purely based on the data generated from the process. However, such models tend to perform poorly when predictions are made away from the observed data points. This dissertation contributes to model enhancement research by integrating physics-based models and statistical models to mitigate their individual drawbacks and provide models with better accuracy by combining the strengths of both. The proposed model enhancement methodologies comprise two streams: (1) a data-driven enhancement approach and (2) an engineering-driven enhancement approach. Through these efforts, more adequate models are obtained, which leads to better performance in system forecasting, process monitoring and decision optimization. Among data-driven enhancement approaches, the Gaussian Process (GP) model provides a powerful methodology for calibrating a physical model in the presence of model uncertainties. However, if the data contain systematic experimental errors, the GP model can lead to an unnecessarily complex adjustment of the physical model. In Chapter 2, we propose a novel enhancement procedure, named "Minimal Adjustment", which brings the physical model closer to the data by making minimal changes to it. This is achieved by approximating the GP model by a linear regression model and then applying simultaneous variable selection to the model and experimental bias terms. Two real examples and simulations are presented to demonstrate the advantages of the proposed approach. Different from enhancing the model from a data-driven perspective, an alternative approach is to adjust the model by incorporating additional domain or engineering knowledge when available. This often leads to models that are very simple and easy to interpret. The concepts of engineering-driven enhancement are carried out through two applications to demonstrate the proposed methodologies. In the first application, which focuses on polymer composite quality, nanoparticle dispersion has been identified as a crucial factor affecting the mechanical properties. Transmission Electron Microscopy (TEM) images are commonly used to represent nanoparticle dispersion without further quantification of its characteristics. In Chapter 3, we develop an engineering-driven nonhomogeneous Poisson random field modeling strategy to characterize the nanoparticle dispersion status of nanocomposite polymer, which quantitatively represents the nanomaterial quality presented through image data. The model parameters are estimated through the Bayesian MCMC technique to overcome the challenge of the limited amount of accessible data due to time-consuming sampling schemes. The second application statistically calibrates the engineering-driven force models of the laser-assisted micro milling (LAMM) process, which facilitates a systematic understanding and optimization of the targeted processes. In Chapter 4, the force prediction interval is derived by incorporating the variability in the runout parameters as well as the variability in the measured cutting forces. The experimental results indicate that the model predicts the cutting force profile with good accuracy using a 95% confidence interval. To conclude, this dissertation draws attention to model enhancement, which has considerable impact on the modeling, design, and optimization of various processes and systems. The fundamental methodologies of model enhancement are developed and applied to various applications. These research activities develop engineering-compliant models for adequate system predictions based on observational data with complex variable relationships and uncertainty, facilitating process planning, monitoring, and real-time control.
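
    The data-driven stream can be illustrated with a plain GP discrepancy model, the starting point that "Minimal Adjustment" then simplifies via a linear approximation and variable selection. The physics model and data below are toy stand-ins, not the dissertation's case studies.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        def physics_model(x):
            """Stand-in engineering model capturing the deterministic trend."""
            return 2.0 * x

        rng = np.random.default_rng(7)
        x = rng.uniform(0.0, 1.0, size=40)
        y = physics_model(x) + 0.3 * np.sin(6.0 * x) + 0.05 * rng.standard_normal(40)

        # Fit a GP to the physics-model residuals (the model discrepancy).
        gp = GaussianProcessRegressor(kernel=RBF(0.2) + WhiteKernel(0.01),
                                      normalize_y=True)
        gp.fit(x[:, None], y - physics_model(x))

        # Enhanced prediction = physics trend + learned discrepancy.
        x_new = np.linspace(0.0, 1.0, 200)
        y_enhanced = physics_model(x_new) + gp.predict(x_new[:, None])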

  1. Sensory Processing Subtypes in Autism: Association with Adaptive Behavior

    ERIC Educational Resources Information Center

    Lane, Alison E.; Young, Robyn L.; Baker, Amy E. Z.; Angley, Manya T.

    2010-01-01

    Children with autism are frequently observed to experience difficulties in sensory processing. This study examined specific patterns of sensory processing in 54 children with autistic disorder and their association with adaptive behavior. Model-based cluster analysis revealed three distinct sensory processing subtypes in autism. These subtypes…

  2. Physics of Accretion in X-Ray Binaries

    NASA Technical Reports Server (NTRS)

    Vrtilek, Saeqa D.

    2004-01-01

    This project consists of several related investigations directed to the study of mass transfer processes in X-ray binaries. Models developed over several years incorporating highly detailed physics will be tested on a balanced mix of existing data and planned observations with both ground and space-based observatories. The extended time coverage of the observations and the existence of simultaneous X-ray, ultraviolet, and optical observations will be particularly beneficial for studying the accretion flows. These investigations, which take as detailed a look at the accretion process in X-ray binaries as is now possible, test current models to their limits, and force us to extend them. We now have the ability to do simultaneous ultraviolet/X-ray/optical spectroscopy with HST, Chandra, XMM, and ground-based observatories. The rich spectroscopy that these observations give us must be interpreted principally by reference to detailed models, the development of which is already well underway; tests of these essential interpretive tools are an important product of the proposed investigations.

  3. The Physics of Accretion in X-Ray Binaries

    NASA Technical Reports Server (NTRS)

    Vrtilek, S.; Oliversen, Ronald (Technical Monitor)

    2001-01-01

    This project consists of several related investigations directed to the study of mass transfer processes in X-ray binaries. Models developed over several years incorporating highly detailed physics will be tested on a balanced mix of existing data and planned observations with both ground and space-based observatories. The extended time coverage of the observations and the existence of simultaneous X-ray, ultraviolet, and optical observations will be particularly beneficial for studying the accretion flows. These investigations, which take as detailed a look at the accretion process in X-ray binaries as is now possible, test current models to their limits, and force us to extend them. We now have the ability to do simultaneous ultraviolet/X-ray/optical spectroscopy with HST, Chandra, XMM, and ground-based observatories. The rich spectroscopy that these observations give us must be interpreted principally by reference to detailed models, the development of which is already well underway; tests of these essential interpretive tools are an important product of the proposed investigations.

  4. Aircraft- and ground-based assessment of the CCN-AOD relationship and implications on model analysis of ACI and underlying aerosol processes

    NASA Astrophysics Data System (ADS)

    Shinozuka, Y.; Clarke, A. D.; Nenes, A.; Lathem, T. L.; Redemann, J.; Jefferson, A.; Wood, R.

    2014-12-01

    Contrary to common assumptions in satellite-based modeling of aerosol-cloud interactions, ∂logCCN/∂logAOD is less than unity, i.e., the number concentration of cloud condensation nuclei (CCN) less than doubles as aerosol optical depth (AOD) doubles. This can be explained by omnipresent aerosol processes. Condensation, coagulation and cloud processing, for example, generally make particles scatter more light while hardly increasing their number. This paper reports on the relationship in local air masses between CCN concentration, aerosol size distribution and light extinction observed from aircraft and the ground at diverse locations. The CCN-to-local-extinction relationship, when averaged over ~1 km distance and sorted by the wavelength dependence of extinction, varies approximately by a factor of 2, reflecting the variability in aerosol intensive properties. This, together with retrieval uncertainties and the variability in aerosol spatio-temporal distribution and hygroscopic growth, challenges satellite-based CCN estimates. However, the large differences in estimated CCN may correspond to a considerably lower uncertainty in cloud drop number concentration (CDNC), given the sublinear response of CDNC to CCN. Overall, our findings from airborne and ground-based observations call for model-based reexamination of aerosol-cloud interactions and underlying aerosol processes.
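
    The headline quantity is just a log-log slope. A synthetic sketch, with an assumed true exponent of 0.7, shows how dlogCCN/dlogAOD below unity would be estimated from paired retrievals:

        import numpy as np

        # If CCN ~ AOD**k with k < 1, doubling AOD less than doubles CCN.
        rng = np.random.default_rng(3)
        aod = rng.lognormal(mean=-2.0, sigma=0.6, size=500)
        ccn = 800.0 * aod ** 0.7 * rng.lognormal(0.0, 0.3, size=500)  # assumed k=0.7

        k, intercept = np.polyfit(np.log(aod), np.log(ccn), 1)  # least squares
        print(f"estimated dlogCCN/dlogAOD = {k:.2f}")            # ~0.7, below unity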

  5. REVIEWS OF TOPICAL PROBLEMS: Nonlinear dynamics of the brain: emotion and cognition

    NASA Astrophysics Data System (ADS)

    Rabinovich, Mikhail I.; Muezzinoglu, M. K.

    2010-07-01

    Experimental investigations of neural system functioning and brain activity are standardly based on the assumption that perceptions, emotions, and cognitive functions can be understood by analyzing steady-state neural processes and static tomographic snapshots. The new approaches discussed in this review are based on the analysis of transient processes and metastable states. Transient dynamics is characterized by two basic properties, structural stability and information sensitivity. The ideas and methods that we discuss provide an explanation for the occurrence of and successive transitions between metastable states observed in experiments, and offer new approaches to behavior analysis. Models of the emotional and cognitive functions of the brain are suggested. The mathematical object that represents the observed transient brain processes in the phase space of the model is a structurally stable heteroclinic channel. The possibility of using the suggested models to construct a quantitative theory of some emotional and cognitive functions is illustrated.

  6. Tree injury and mortality in fires: developing process-based models

    Treesearch

    Bret W. Butler; Matthew B. Dickinson

    2010-01-01

    Wildland fire managers are often required to predict tree injury and mortality when planning a prescribed burn or when considering wildfire management options; and, currently, statistical models based on post-fire observations are the only tools available for this purpose. Implicit in the derivation of statistical models is the assumption that they are strictly...

  7. An Amorphous Model for Morphological Processing in Visual Comprehension Based on Naive Discriminative Learning

    ERIC Educational Resources Information Center

    Baayen, R. Harald; Milin, Petar; Durdevic, Dusica Filipovic; Hendrix, Peter; Marelli, Marco

    2011-01-01

    A 2-layer symbolic network model based on the equilibrium equations of the Rescorla-Wagner model (Danks, 2003) is proposed. The study first presents 2 experiments in Serbian, which reveal for sentential reading the inflectional paradigmatic effects previously observed by Milin, Filipovic Durdevic, and Moscoso del Prado Martin (2009) for unprimed…

  8. Sieve estimation in semiparametric modeling of longitudinal data with informative observation times.

    PubMed

    Zhao, Xingqiu; Deng, Shirong; Liu, Li; Liu, Lei

    2014-01-01

    Analyzing irregularly spaced longitudinal data often involves modeling possibly correlated response and observation processes. In this article, we propose a new class of semiparametric mean models that allows for the interaction between the observation history and covariates, leaving patterns of the observation process to be arbitrary. For inference on the regression parameters and the baseline mean function, a spline-based least squares estimation approach is proposed. The consistency, rate of convergence, and asymptotic normality of the proposed estimators are established. Our new approach is different from the usual approaches relying on the model specification of the observation scheme, and it can be easily used for predicting the longitudinal response. Simulation studies demonstrate that the proposed inference procedure performs well and is more robust. The analyses of bladder tumor data and medical cost data are presented to illustrate the proposed method.
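
    A highly simplified sketch of the spline-based least squares ingredient, estimating only a baseline mean function over irregular observation times; the paper's estimator additionally handles covariates and the observation-history interaction, and the knot placement below is an assumption.

        import numpy as np
        from scipy.interpolate import LSQUnivariateSpline

        rng = np.random.default_rng(5)
        t = np.sort(rng.uniform(0.0, 10.0, size=300))   # irregular observation times
        y = 1.0 + 0.5 * np.sin(t) + 0.2 * rng.standard_normal(t.size)

        knots = np.linspace(1.0, 9.0, 7)                # interior knots (sieve size)
        baseline = LSQUnivariateSpline(t, y, knots, k=3)  # cubic LSQ spline
        mu_hat = baseline(np.linspace(0.0, 10.0, 101))  # estimated baseline mean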

  9. Direct observation of the oxidation of DNA bases by phosphate radicals formed under radiation: a model of the backbone-to-base hole transfer.

    PubMed

    Ma, Jun; Marignier, Jean-Louis; Pernot, Pascal; Houée-Levin, Chantal; Kumar, Anil; Sevilla, Michael D; Adhikary, Amitava; Mostafavi, Mehran

    2018-05-30

    In irradiated DNA, by the base-to-base and backbone-to-base hole transfer processes, the hole (i.e., the unpaired spin) localizes on the most easily oxidizable base, guanine. Phosphate radicals formed via ionization events in the DNA backbone must play an important role in the backbone-to-base hole transfer process. However, earlier studies on irradiated hydrated DNA, on irradiated DNA models in frozen aqueous solution and in neat dimethyl phosphate showed the formation of carbon-centered radicals and not phosphate radicals. Therefore, to model the backbone-to-base hole transfer process, we report picosecond pulse radiolysis studies of the reactions of H2PO4˙ with the DNA bases - G, A, T, and C - in 6 M H3PO4 at 22 °C. The time-resolved observations show that in 6 M H3PO4, H2PO4˙ causes the one-electron oxidation of adenine, guanine and thymine, forming the cation radicals via a single electron transfer (SET) process; however, the rate constant of the reaction of H2PO4˙ with cytosine is too low (<10^7 L mol^-1 s^-1) to be measured. The rates of these reactions are influenced by the protonation states and the reorganization energies of the base radicals and of the phosphate radical in 6 M H3PO4.

  10. A review of sources of systematic errors and uncertainties in observations and simulations at 183 GHz

    NASA Astrophysics Data System (ADS)

    Brogniez, Helene; English, Stephen; Mahfouf, Jean-Francois; Behrendt, Andreas; Berg, Wesley; Boukabara, Sid; Buehler, Stefan Alexander; Chambon, Philippe; Gambacorta, Antonia; Geer, Alan; Ingram, William; Kursinski, E. Robert; Matricardi, Marco; Odintsova, Tatyana A.; Payne, Vivienne H.; Thorne, Peter W.; Tretyakov, Mikhail Yu.; Wang, Junhong

    2016-05-01

    Several recent studies have observed systematic differences between measurements in the 183.31 GHz water vapor line by space-borne sounders and calculations using radiative transfer models, with inputs from either radiosondes (radiosonde observations, RAOBs) or short-range forecasts by numerical weather prediction (NWP) models. This paper discusses all the relevant categories of observation-based or model-based data, quantifies their uncertainties and separates biases that could be common to all causes from those attributable to a particular cause. Reference observations from radiosondes, Global Navigation Satellite System (GNSS) receivers, differential absorption lidar (DIAL) and Raman lidar are thus overviewed. Biases arising from their calibration procedures, NWP models and data assimilation, instrument biases and radiative transfer models (both the models themselves and the underlying spectroscopy) are presented and discussed. Although presently no single process in the comparisons seems capable of explaining the observed structure of bias, recommendations are made in order to better understand the causes.

  11. Diagnosis by integrating model-based reasoning with knowledge-based reasoning

    NASA Technical Reports Server (NTRS)

    Bylander, Tom

    1988-01-01

    Our research investigates how observations can be categorized by integrating a qualitative physical model with experiential knowledge. Our domain is diagnosis of pathologic gait in humans, in which the observations are the gait motions, muscle activity during gait, and physical exam data, and the diagnostic hypotheses are the potential muscle weaknesses, muscle mistimings, and joint restrictions. Patients with underlying neurological disorders typically have several malfunctions. Among the problems that need to be faced are: the ambiguity of the observations, the ambiguity of the qualitative physical model, correspondence of the observations and hypotheses to the qualitative physical model, the inherent uncertainty of experiential knowledge, and the combinatorics involved in forming composite hypotheses. Our system divides the work so that the knowledge-based reasoning suggests which hypotheses appear more likely than others, the qualitative physical model is used to determine which hypotheses explain which observations, and another process combines these functionalities to construct a composite hypothesis based on explanatory power and plausibility. We speculate that the reasoning architecture of our system is generally applicable to complex domains in which a less-than-perfect physical model and less-than-perfect experiential knowledge need to be combined to perform diagnosis.

  12. Circular analysis in complex stochastic systems

    PubMed Central

    Valleriani, Angelo

    2015-01-01

    Ruling out observations can lead to wrong models. This danger occurs unwittingly when one selects observations, experiments, simulations or time-series based on their outcome. In stochastic processes, conditioning on the future outcome biases all local transition probabilities and makes them consistent with the selected outcome. This circular self-consistency leads to models that are inconsistent with physical reality. It is also the reason why models built solely on macroscopic observations are prone to this fallacy. PMID:26656656
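
    The bias is easy to reproduce. In the sketch below, transition probabilities of a two-state Markov chain estimated only from trajectories that end in a selected state are inflated toward that outcome; all numbers are illustrative.

        import numpy as np

        rng = np.random.default_rng(11)
        P = np.array([[0.9, 0.1],     # two-state Markov chain, states 0 and 1
                      [0.5, 0.5]])

        def simulate(n_steps=20):
            path = [0]
            for _ in range(n_steps):
                path.append(rng.choice(2, p=P[path[-1]]))
            return path

        paths = [simulate() for _ in range(20_000)]
        selected = [p for p in paths if p[-1] == 1]   # condition on the outcome

        def estimate_p01(paths):
            """Fraction of visits to state 0 that are followed by state 1."""
            from0 = sum(p[i] == 0 for p in paths for i in range(len(p) - 1))
            to1 = sum(p[i] == 0 and p[i + 1] == 1
                      for p in paths for i in range(len(p) - 1))
            return to1 / from0

        print(estimate_p01(paths))      # ~0.10, the true transition probability
        print(estimate_p01(selected))   # inflated: conditioning biases the estimate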

  13. Advancing coastal ocean modelling, analysis, and prediction for the US Integrated Ocean Observing System

    USGS Publications Warehouse

    Wilkin, John L.; Rosenfeld, Leslie; Allen, Arthur; Baltes, Rebecca; Baptista, Antonio; He, Ruoying; Hogan, Patrick; Kurapov, Alexander; Mehra, Avichal; Quintrell, Josie; Schwab, David; Signell, Richard; Smith, Jane

    2017-01-01

    This paper outlines strategies that would advance coastal ocean modelling, analysis and prediction as a complement to the observing and data management activities of the coastal components of the US Integrated Ocean Observing System (IOOS®) and the Global Ocean Observing System (GOOS). The views presented are the consensus of a group of US-based researchers with a cross-section of coastal oceanography and ocean modelling expertise and community representation drawn from Regional and US Federal partners in IOOS. Priorities for research and development are suggested that would enhance the value of IOOS observations through model-based synthesis, deliver better model-based information products, and assist the design, evaluation, and operation of the observing system itself. The proposed priorities are: model coupling, data assimilation, nearshore processes, cyberinfrastructure and model skill assessment, modelling for observing system design, evaluation and operation, ensemble prediction, and fast predictors. Approaches are suggested to accomplish substantial progress in a 3–8-year timeframe. In addition, the group proposes steps to promote collaboration between research and operations groups in Regional Associations, US Federal Agencies, and the international ocean research community in general that would foster coordination on scientific and technical issues, and strengthen federal–academic partnerships benefiting IOOS stakeholders and end users.

  14. Quantifying the predictive consequences of model error with linear subspace analysis

    USGS Publications Warehouse

    White, Jeremy T.; Doherty, John E.; Hughes, Joseph D.

    2014-01-01

    All computer models are simplified and imperfect simulators of complex natural systems. The discrepancy arising from simplification induces bias in model predictions, which may be amplified by the process of model calibration. This paper presents a new method to identify and quantify the predictive consequences of calibrating a simplified computer model. The method is based on linear theory, and it scales efficiently to the large numbers of parameters and observations characteristic of groundwater and petroleum reservoir models. The method is applied to a range of predictions made with a synthetic integrated surface-water/groundwater model with thousands of parameters. Several different observation processing strategies and parameterization/regularization approaches are examined in detail, including use of the Karhunen-Loève parameter transformation. Predictive bias arising from model error is shown to be prediction specific and often invisible to the modeler. The amount of calibration-induced bias is influenced by several factors, including how expert knowledge is applied in the design of parameterization schemes, the number of parameters adjusted during calibration, how observations and model-generated counterparts are processed, and the level of fit with observations achieved through calibration. Failure to properly implement any of these factors in a prediction-specific manner may increase the potential for predictive bias in ways that are not visible to the calibration and uncertainty analysis process.

  15. ARTSN: An Automated Real-Time Spacecraft Navigation System

    NASA Technical Reports Server (NTRS)

    Burkhart, P. Daniel; Pollmeier, Vincent M.

    1996-01-01

    As part of the Deep Space Network (DSN) advanced technology program, an effort is underway to design a filter to automate the deep space navigation process. The automated real-time spacecraft navigation (ARTSN) filter task is based on a prototype consisting of a FORTRAN77 package operating on an HP-9000/700 workstation running HP-UX 9.05. This will be converted to C and maintained as the operational version. The processing tasks required are: (1) read a measurement, (2) integrate the spacecraft state to the current measurement time, (3) compute the observable based on the integrated state, and (4) incorporate the measurement information into the state using an extended Kalman filter. This filter processes radiometric data collected by the DSN. The dynamic (force) models currently include point-mass gravitational terms for all planets, the Sun and Moon, solar radiation pressure, finite maneuvers, and attitude maintenance activity modeled quadratically. In addition, observable errors due to the troposphere are included. Further data types and force and observable models will be included to enhance the accuracy of the models and the capability of the package. The heart of the ARTSN is a currently available continuous-discrete extended Kalman filter. Simulated data used to test the implementation at various stages of development and the results from processing actual mission data are presented.
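
    The four-step cycle can be sketched as a toy extended Kalman filter with a nonlinear range observable; the real ARTSN state vector, force models, and radiometric observables are of course far richer than this stand-in.

        import numpy as np

        rng = np.random.default_rng(0)
        dt, q, r = 1.0, 1e-4, 0.05             # step, process noise, meas. noise std
        F = np.array([[1.0, dt], [0.0, 1.0]])  # toy dynamics: position, velocity
        Q = q * np.eye(2)

        def h(x):                              # observable: range to offset sensor
            return np.hypot(x[0], 1.0)

        def H_jac(x):                          # Jacobian of h, shape (1, 2)
            return np.array([[x[0] / np.hypot(x[0], 1.0), 0.0]])

        x_true = np.array([0.0, 0.1])
        x, P = np.array([0.5, 0.0]), np.eye(2)
        for _ in range(50):
            x_true = F @ x_true
            z = h(x_true) + r * rng.standard_normal()   # 1) read a measurement
            x, P = F @ x, F @ P @ F.T + Q               # 2) propagate the state
            z_pred, H = h(x), H_jac(x)                  # 3) predicted observable
            S = H @ P @ H.T + r**2                      # innovation covariance
            K = P @ H.T / S                             # 4) EKF measurement update
            x = x + (K * (z - z_pred)).ravel()
            P = (np.eye(2) - K @ H) @ P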

  16. On-Line Critiques in Collaborative Design Studio

    ERIC Educational Resources Information Center

    Sagun, Aysu; Demirkan, Halime

    2009-01-01

    In this study, the Design Collaboration Model (DCM) was developed to provide a medium for the on-line collaboration of the design courses. The model was based on the situated and reflective practice characteristics of the design process. The segmentation method was used to analyse the design process observed both in the design diaries and the…

  17. VPPA weld model evaluation

    NASA Technical Reports Server (NTRS)

    Mccutcheon, Kimble D.; Gordon, Stephen S.; Thompson, Paul A.

    1992-01-01

    NASA uses the Variable Polarity Plasma Arc Welding (VPPAW) process extensively for fabrication of Space Shuttle External Tanks. This welding process has been in use at NASA since the late 1970's but the physics of the process have never been satisfactorily modeled and understood. In an attempt to advance the level of understanding of VPPAW, Dr. Arthur C. Nunes, Jr., (NASA) has developed a mathematical model of the process. The work described in this report evaluated and used two versions (level-0 and level-1) of Dr. Nunes' model, and a model derived by the University of Alabama at Huntsville (UAH) from Dr. Nunes' level-1 model. Two series of VPPAW experiments were done, using over 400 different combinations of welding parameters. Observations were made of VPPAW process behavior as a function of specific welding parameter changes. Data from these weld experiments was used to evaluate and suggest improvements to Dr. Nunes' model. Experimental data and correlations with the model were used to develop a multi-variable control algorithm for use with a future VPPAW controller. This algorithm is designed to control weld widths (both on the crown and root of the weld) based upon the weld parameters, base metal properties, and real-time observation of the crown width. The algorithm exhibited accuracy comparable to that of the weld width measurements for both aluminum and mild steel welds.

  18. Visual and Auditory Components in the Perception of Asynchronous Audiovisual Speech

    PubMed Central

    Alcalá-Quintana, Rocío

    2015-01-01

    Research on asynchronous audiovisual speech perception manipulates experimental conditions to observe their effects on synchrony judgments. Probabilistic models establish a link between the sensory and decisional processes underlying such judgments and the observed data, via interpretable parameters that allow testing hypotheses and making inferences about how experimental manipulations affect such processes. Two models of this type have recently been proposed, one based on independent channels and the other using a Bayesian approach. Both models are fitted here to a common data set, with a subsequent analysis of the interpretation they provide about how experimental manipulations affected the processes underlying perceived synchrony. The data consist of synchrony judgments as a function of audiovisual offset in a speech stimulus, under four within-subjects manipulations of the quality of the visual component. The Bayesian model could not accommodate asymmetric data, was rejected by goodness-of-fit statistics for 8/16 observers, and was found to be nonidentifiable, which renders uninterpretable parameter estimates. The independent-channels model captured asymmetric data, was rejected for only 1/16 observers, and identified how sensory and decisional processes mediating asynchronous audiovisual speech perception are affected by manipulations that only alter the quality of the visual component of the speech signal. PMID:27551361

  19. Sensor-Web Operations Explorer

    NASA Technical Reports Server (NTRS)

    Meemong, Lee; Miller, Charles; Bowman, Kevin; Weidner, Richard

    2008-01-01

    Understanding the atmospheric state and its impact on air quality requires observations of trace gases, aerosols, clouds, and physical parameters across temporal and spatial scales that range from minutes to days and from meters to more than 10,000 kilometers. Observations include continuous local monitoring for particle formation; field campaigns for emissions, local transport, and chemistry; and periodic global measurements for continental transport and chemistry. Understanding includes a global data assimilation framework capable of hierarchical coupling, dynamic integration of chemical data and atmospheric models, and feedback loops between models and observations. To observe trace gases, aerosols, clouds, and physical parameters, an integrated observation infrastructure composed of space-borne, airborne, and in-situ sensors will be simulated based on their measurement physics properties. To optimally plan operations for multiple heterogeneous sensors, sampling strategies will be explored and their science impact analyzed based on comprehensive modeling of atmospheric phenomena, including convection, transport, and chemical processes. Topics include system architecture, software architecture, hardware architecture, process flow, technology infusion, challenges, and future direction.

  20. Improvement of Storm Forecasts Using Gridded Bayesian Linear Regression for Northeast United States

    NASA Astrophysics Data System (ADS)

    Yang, J.; Astitha, M.; Schwartz, C. S.

    2017-12-01

    Bayesian linear regression (BLR) is a post-processing technique in which regression coefficients are derived and used to correct raw forecasts based on pairs of observation-model values. This study presents the development and application of gridded Bayesian linear regression (GBLR) as a new post-processing technique to improve numerical weather prediction (NWP) of rain and wind storm forecasts over the northeast United States. Ten controlled variables produced from ten ensemble members of the National Center for Atmospheric Research (NCAR) real-time prediction system are used for a GBLR model. In the GBLR framework, leave-one-storm-out cross-validation is utilized to study the performance of the post-processing technique in a database composed of 92 storms. To estimate the regression coefficients of the GBLR, optimization procedures that minimize the systematic and random error of predicted atmospheric variables (wind speed, precipitation, etc.) are implemented for the modeled-observed pairs of training storms. The regression coefficients calculated for meteorological stations of the National Weather Service are interpolated back to the model domain. An analysis of forecast improvements based on error reductions during the storms will demonstrate the value of the GBLR approach. This presentation will also illustrate how the variances are optimized for the training partition in GBLR and discuss the verification strategy for grid points where no observations are available. The new post-processing technique is successful in improving wind speed and precipitation storm forecasts using past event-based data and has the potential to be implemented in real-time.
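
    At a single location the core update is ordinary conjugate Bayesian linear regression on observation-model pairs. The sketch below assumes a known observation variance and a toy wind-speed example, and omits the gridding, cross-validation, and coefficient-interpolation steps of GBLR.

        import numpy as np

        def blr_fit(X, y, sigma2=1.0, tau2=10.0):
            """Conjugate BLR: N(0, tau2*I) prior on coefficients, known
            observation variance sigma2; returns posterior mean and covariance."""
            n, d = X.shape
            prec = np.eye(d) / tau2 + X.T @ X / sigma2   # posterior precision
            cov = np.linalg.inv(prec)
            mean = cov @ (X.T @ y / sigma2)
            return mean, cov

        # Toy example: correct a raw wind-speed forecast toward observations.
        rng = np.random.default_rng(2)
        raw = rng.uniform(5.0, 25.0, size=200)                   # raw forecasts
        obs = 0.8 * raw + 1.5 + rng.normal(0.0, 1.0, size=200)   # paired obs
        X = np.column_stack([np.ones_like(raw), raw])
        coef, coef_cov = blr_fit(X, obs)
        corrected = X @ coef      # post-processed forecast at the training points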

  1. Science-Grade Observing Systems as Process Observatories: Mapping and Understanding Nonlinearity and Multiscale Memory with Models and Observations

    NASA Astrophysics Data System (ADS)

    Barros, A. P.; Wilson, A. M.; Miller, D. K.; Tao, J.; Genereux, D. P.; Prat, O.; Petersen, W. A.; Brunsell, N. A.; Petters, M. D.; Duan, Y.

    2015-12-01

    Using the planet as a study domain and collecting observations over unprecedented ranges of spatial and temporal scales, NASA's EOS (Earth Observing System) program was an agent of transformational change in Earth Sciences over the last thirty years. The remarkable space-time organization and variability of atmospheric and terrestrial moist processes that emerged from the analysis of comprehensive satellite observations provided much impetus to expand the scope of land-atmosphere interaction studies in Hydrology and Hydrometeorology. Consequently, input and output terms in the mass and energy balance equations evolved from being treated as fluxes that can be used as boundary conditions, or forcing, to being viewed as dynamic processes of a coupled system interacting at multiple scales. Measurements of states or fluxes are most useful if together they map, reveal and/or constrain the underlying physical processes and their interactions. This can only be accomplished through an integrated observing system designed to capture the coupled physics, including nonlinear feedbacks and tipping points. Here, we first review and synthesize lessons learned from hydrometeorology studies in the Southern Appalachians and in the Southern Great Plains using both ground-based and satellite observations, physical models and data-assimilation systems. We will specifically focus on mapping and understanding nonlinearity and multiscale memory of rainfall-runoff processes in mountainous regions. It will be shown that beyond technical rigor, variety, quantity and duration of measurements, the utility of observing systems is determined by their interpretive value in the context of physical models to describe the linkages among different observations. Second, we propose a framework for designing science-grade and science-minded process-oriented integrated observing and modeling platforms for hydrometeorological studies.

  2. Evaluating the Utility of Satellite Soil Moisture Retrievals over Irrigated Areas and the Ability of Land Data Assimilation Methods to Correct for Unmodeled Processes

    NASA Technical Reports Server (NTRS)

    Kumar, S. V.; Peters-Lidard, C. D.; Santanello, J. A.; Reichle, R. H.; Draper, C. S.; Koster, R. D.; Nearing, G.; Jasinski, M. F.

    2015-01-01

    Earth's land surface is characterized by tremendous natural heterogeneity and human-engineered modifications, both of which are challenging to represent in land surface models. Satellite remote sensing is often the most practical and effective method to observe the land surface over large geographical areas. Agricultural irrigation is an important human-induced modification to natural land surface processes, as it is pervasive across the world and because of its significant influence on the regional and global water budgets. In this article, irrigation is used as an example of a human-engineered, often unmodeled land surface process, and the utility of satellite soil moisture retrievals over irrigated areas in the continental US is examined. Such retrievals are based on passive or active microwave observations from the Advanced Microwave Scanning Radiometer for the Earth Observing System (AMSR-E), the Advanced Microwave Scanning Radiometer 2 (AMSR2), the Soil Moisture Ocean Salinity (SMOS) mission, WindSat and the Advanced Scatterometer (ASCAT). The analysis suggests that the skill of these retrievals for representing irrigation effects is mixed, with ASCAT-based products somewhat more skillful than SMOS and AMSR2 products. The article then examines the suitability of typical bias correction strategies in current land data assimilation systems when unmodeled processes dominate the bias between the model and the observations. Using a suite of synthetic experiments that includes bias correction strategies such as quantile mapping and trained forward modeling, it is demonstrated that the bias correction practices lead to the exclusion of the signals from unmodeled processes, if these processes are the major source of the biases. It is further shown that new methods are needed to preserve the observational information about unmodeled processes during data assimilation.
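
    A minimal quantile-mapping (CDF-matching) sketch of the bias correction discussed; note that, exactly as the article warns, the mapping removes systematic model-observation differences wholesale, including any signal from unmodeled processes such as irrigation. The data below are synthetic.

        import numpy as np

        def quantile_map(model_new, model_train, obs_train):
            """Empirical CDF matching: map each new model value to the observed
            value at the same quantile of the training climatology."""
            model_train = np.sort(np.asarray(model_train))
            p = np.searchsorted(model_train, model_new) / model_train.size
            return np.quantile(obs_train, np.clip(p, 0.0, 1.0))

        # Toy soil-moisture example: the model climatology is biased dry.
        rng = np.random.default_rng(4)
        obs_train = rng.beta(4.0, 2.0, 1000) * 0.5    # wetter "observed" values
        model_train = rng.beta(2.0, 4.0, 1000) * 0.5  # drier model values
        mapped = quantile_map(model_train[:10], model_train, obs_train)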

  3. On modeling animal movements using Brownian motion with measurement error.

    PubMed

    Pozdnyakov, Vladimir; Meyer, Thomas; Wang, Yu-Bo; Yan, Jun

    2014-02-01

    Modeling animal movements with Brownian motion (or more generally by a Gaussian process) has a long tradition in ecological studies. The recent Brownian bridge movement model (BBMM), which incorporates measurement errors, has been quickly adopted by ecologists because of its simplicity and tractability. We discuss some nontrivial properties of the discrete-time stochastic process that results from observing a Brownian motion with added normal noise at discrete times. In particular, we demonstrate that the observed sequence of random variables is not Markov. Consequently the expected occupation time between two successively observed locations does not depend on just those two observations; the whole path must be taken into account. Nonetheless, the exact likelihood function of the observed time series remains tractable; it requires only sparse matrix computations. The likelihood-based estimation procedure is described in detail and compared to the BBMM estimation.
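
    Because the observed sequence is not Markov, the likelihood must be built from the full covariance rather than consecutive pairs: Cov[X_i, X_j] = sigma^2 min(t_i, t_j) + tau^2 1{i=j}. A dense-matrix sketch follows; the paper's implementation instead exploits sparse matrix computations.

        import numpy as np
        from scipy.stats import multivariate_normal

        def loglik(x, t, sigma2, tau2):
            """Exact log-likelihood of locations x at times t under Brownian
            motion (variance rate sigma2) plus i.i.d. N(0, tau2) noise."""
            t = np.asarray(t, dtype=float)
            cov = sigma2 * np.minimum.outer(t, t) + tau2 * np.eye(t.size)
            return multivariate_normal(mean=np.zeros(t.size), cov=cov).logpdf(x)

        rng = np.random.default_rng(8)
        t = np.cumsum(rng.uniform(0.1, 1.0, size=50))   # irregular times, t > 0
        bm = np.cumsum(rng.normal(0.0, np.sqrt(np.diff(t, prepend=0.0))))
        x = bm + 0.1 * rng.standard_normal(t.size)      # add measurement error
        print(loglik(x, t, sigma2=1.0, tau2=0.01))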

  4. Evaluation of climate-related carbon turnover processes in global vegetation models for boreal and temperate forests.

    PubMed

    Thurner, Martin; Beer, Christian; Ciais, Philippe; Friend, Andrew D; Ito, Akihiko; Kleidon, Axel; Lomas, Mark R; Quegan, Shaun; Rademacher, Tim T; Schaphoff, Sibyll; Tum, Markus; Wiltshire, Andy; Carvalhais, Nuno

    2017-08-01

    Turnover concepts in state-of-the-art global vegetation models (GVMs) account for various processes, but are often highly simplified and may not include an adequate representation of the dominant processes that shape vegetation carbon turnover rates in real forest ecosystems at a large spatial scale. Here, we evaluate vegetation carbon turnover processes in GVMs participating in the Inter-Sectoral Impact Model Intercomparison Project (ISI-MIP, including HYBRID4, JeDi, JULES, LPJml, ORCHIDEE, SDGVM, and VISIT) using estimates of vegetation carbon turnover rate (k) derived from a combination of remote sensing based products of biomass and net primary production (NPP). We find that current model limitations lead to considerable biases in the simulated biomass and in k (severe underestimations by all models except JeDi and VISIT compared to observation-based average k), likely contributing to underestimation of positive feedbacks of the northern forest carbon balance to climate change caused by changes in forest mortality. A need for improved turnover concepts related to frost damage, drought, and insect outbreaks to better reproduce observation-based spatial patterns in k is identified. As direct frost damage effects on mortality are usually not accounted for in these GVMs, simulated relationships between k and winter length in boreal forests are not consistent between different regions and strongly biased compared to the observation-based relationships. Some models show a response of k to drought in temperate forests as a result of impacts of water availability on NPP, growth efficiency or carbon balance dependent mortality as well as soil or litter moisture effects on leaf turnover or fire. However, further direct drought effects such as carbon starvation (only in HYBRID4) or hydraulic failure are usually not taken into account by the investigated GVMs. While they are considered dominant large-scale mortality agents, mortality mechanisms related to insects and pathogens are not explicitly treated in these models. © 2017 The Authors. Global Change Biology Published by John Wiley & Sons Ltd.
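
    The observation-based benchmark is simply k = NPP / biomass per grid cell, with turnover time 1/k; a minimal sketch with illustrative arrays and masking:

        import numpy as np

        npp = np.array([[0.4, 0.6], [0.5, np.nan]])    # kgC m-2 yr-1 (NPP product)
        biomass = np.array([[8.0, 4.0], [0.0, 5.0]])   # kgC m-2 (biomass product)

        valid = np.isfinite(npp) & np.isfinite(biomass) & (biomass > 0.0)
        k = np.where(valid, npp / np.where(valid, biomass, 1.0), np.nan)  # yr-1
        turnover_time = 1.0 / k                        # years, the reciprocal view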

  5. Modelling hydrological processes in mountainous permafrost basin in North-East of Russia

    NASA Astrophysics Data System (ADS)

    Makarieva, Olga; Lebedeva, Lyudmila; Nesterova, Natalia

    2017-04-01

    The studies of hydrological processes in continuous permafrost and the projections of their changes in the future have been receiving a lot of attention in recent years. They are limited by the availability of long-term joint observational data on permafrost dynamics and river runoff, which would allow revealing the mechanisms of interaction, tracking their dynamics in the historical period, and projecting changes in the future. The Kolyma Water-Balance Station (KWBS), the Kontaktovy Creek watershed with an area of 22 km2, is situated in the zone of continuous permafrost in the upper reaches of the Kolyma River (Magadan district of Russia). The topography at the KWBS is mountainous, with elevations up to 1700 m. Permafrost thickness ranges from 100 to 400 m, with temperatures of -4...-6 °C. Detailed observations of river runoff, active layer dynamics and water balance were carried out at the KWBS from 1948 to 1997. After that, permafrost studies ceased, but the runoff gauges have remained in use and provide continuous observation series of up to 68 years. The hydrological processes at the KWBS are representative of the vast northeastern region of Russia, where the standard observational network is very scarce. We aim to study and model the mechanisms of interaction between permafrost and runoff, including water flow paths in different landscapes of mountainous permafrost, based on the detailed historical data of the KWBS and the analysis of stable isotope composition from water samples collected at the KWBS in 2016. Mathematical modelling of soil temperature, active layer properties and dynamics, flow formation and interactions between ground and surface water is performed by means of the Hydrograph model (Vinogradov et al. 2011, Semenova et al. 2013). The model algorithms combine process-based and conceptual approaches, which allows for maintaining a balance between the complexity of model design and the use of limited input information. The method for modeling heat dynamics in soil was integrated into the Hydrograph model (Semenova et al., 2015; Lebedeva et al., 2015). Small watersheds of the KWBS with areas less than 0.5 km2, representing rocky talus, mountainous tundra and moist larch-forest landscapes, were modelled with satisfactory results. The dependence of surface and subsurface flow formation on thawing depth and landscape characteristics is parametrically described. Process analysis and modelling in permafrost regions, including ungauged basins, is suggested, with observable properties of landscapes being used as model parameters, combined with an appropriate level of physically-based conceptualization. The study is partially supported by the Russian Foundation for Basic Research, projects 16-35-50151 and 17-05-01138.

  6. Fault detection and diagnosis using neural network approaches

    NASA Technical Reports Server (NTRS)

    Kramer, Mark A.

    1992-01-01

    Neural networks can be used to detect and identify abnormalities in real-time process data. Two basic approaches can be used, the first based on training networks using data representing both normal and abnormal modes of process behavior, and the second based on statistical characterization of the normal mode only. Given data representative of process faults, radial basis function networks can effectively identify failures. This approach is often limited by the lack of fault data, but can be facilitated by process simulation. The second approach employs elliptical and radial basis function neural networks and other models to learn the statistical distributions of process observables under normal conditions. Analytical models of failure modes can then be applied in combination with the neural network models to identify faults. Special methods can be applied to compensate for sensor failures, to produce real-time estimation of missing or failed sensors based on the correlations codified in the neural network.
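
    A minimal sketch of the second approach (statistical characterization of the normal mode only); for brevity it uses a Mahalanobis-distance test on normal-operation data instead of the elliptical/radial basis function networks described above, so the data and threshold are illustrative:

      import numpy as np

      rng = np.random.default_rng(0)
      normal = rng.normal(size=(500, 3))         # training data: normal operation only

      mu = normal.mean(axis=0)
      cov_inv = np.linalg.inv(np.cov(normal.T))

      def fault_score(x):
          # Squared Mahalanobis distance to the normal-mode distribution.
          d = x - mu
          return d @ cov_inv @ d

      threshold = 16.27                           # roughly the chi2(3) 99.9% quantile
      new_obs = np.array([4.0, 0.0, -4.0])
      print("fault detected:", fault_score(new_obs) > threshold)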

  7. Unified Modeling Language (UML) for hospital-based cancer registration processes.

    PubMed

    Shiki, Naomi; Ohno, Yuko; Fujii, Ayumi; Murata, Taizo; Matsumura, Yasushi

    2008-01-01

    Hospital-based cancer registry involves complex processing steps that span across multiple departments. In addition, management techniques and registration procedures differ depending on each medical facility. Establishing processes for hospital-based cancer registry requires clarifying specific functions and labor needed. In recent years, the business modeling technique, in which management evaluation is done by clearly spelling out processes and functions, has been applied to business process analysis. However, there are few analytical reports describing the applications of these concepts to medical-related work. In this study, we initially sought to model hospital-based cancer registration processes using the Unified Modeling Language (UML), to clarify functions. The object of this study was the cancer registry of Osaka University Hospital. We organized the hospital-based cancer registration processes based on interview and observational surveys, and produced an As-Is model using activity, use-case, and class diagrams. After drafting every UML model, it was fed-back to practitioners to check its validity and improved. We were able to define the workflow for each department using activity diagrams. In addition, by using use-case diagrams we were able to classify each department within the hospital as a system, and thereby specify the core processes and staff that were responsible for each department. The class diagrams were effective in systematically organizing the information to be used for hospital-based cancer registries. Using UML modeling, hospital-based cancer registration processes were broadly classified into three separate processes, namely, registration tasks, quality control, and filing data. An additional 14 functions were also extracted. Many tasks take place within the hospital-based cancer registry office, but the process of providing information spans across multiple departments. Moreover, additional tasks were required in comparison to using a standardized system because the hospital-based cancer registration system was constructed with the pre-existing computer system in Osaka University Hospital. Difficulty of utilization of useful information for cancer registration processes was shown to increase the task workload. By using UML, we were able to clarify functions and extract the typical processes for a hospital-based cancer registry. Modeling can provide a basis of process analysis for establishment of efficient hospital-based cancer registration processes in each institute.

  8. Model-based adaptive 3D sonar reconstruction in reverberating environments.

    PubMed

    Saucan, Augustin-Alexandru; Sintes, Christophe; Chonavel, Thierry; Caillec, Jean-Marc Le

    2015-10-01

    In this paper, we propose a novel model-based approach for 3D underwater scene reconstruction, i.e., bathymetry, for side scan sonar arrays in complex and highly reverberating environments like shallow water areas. The presence of multipath echoes and volume reverberation generates false depth estimates. To improve the resulting bathymetry, this paper proposes and develops an adaptive filter based on several original geometrical models. This multimodel approach makes it possible to track and separate the direction-of-arrival trajectories of multiple echoes impinging on the array. Echo tracking is treated as a model-based processing stage, incorporating prior information on the temporal evolution of echoes in order to reject cluttered observations generated by interfering echoes. The results of the proposed filter on simulated and real sonar data showcase the clutter-free and regularized bathymetric reconstruction. Model validation is carried out with goodness-of-fit tests and demonstrates the importance of model-based processing for bathymetry reconstruction.
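
    The abstract does not specify the filter equations; the following sketch only illustrates the generic idea of model-based echo tracking, i.e. predicting a direction-of-arrival trajectory with a constant-velocity model and gating out observations that fall too far from the prediction (all numbers are invented):

      # Alpha-beta tracker for a direction-of-arrival (DOA) trajectory with gating.
      alpha, beta, dt, gate = 0.5, 0.1, 1.0, 3.0   # illustrative tuning values
      doa, rate = 10.0, 0.5                        # initial state (deg, deg/step)

      measurements = [10.4, 11.1, 25.0, 12.1, 12.4]  # 25.0 mimics multipath clutter
      for z in measurements:
          pred = doa + rate * dt
          if abs(z - pred) > gate:                 # gating: reject the cluttered echo
              doa = pred                           # coast on the model prediction
              continue
          r = z - pred                             # innovation
          doa = pred + alpha * r
          rate = rate + beta * r / dt
          print(round(doa, 2), round(rate, 2))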

  9. The Lunar Phases Project: A Mental Model-Based Observational Project for Undergraduate Nonscience Majors

    ERIC Educational Resources Information Center

    Meyer, Angela Osterman; Mon, Manuel J.; Hibbard, Susan T.

    2011-01-01

    We present our Lunar Phases Project, an ongoing effort utilizing students' actual observations within a mental model building framework to improve student understanding of the causes and process of the lunar phases. We implement this project with a sample of undergraduate, nonscience major students enrolled in a midsized public university located…

  10. Anvil Glaciation in a Deep Cumulus Updraught over Florida Simulated with the Explicit Microphysics Model. I: Impact of Various Nucleation Processes

    NASA Technical Reports Server (NTRS)

    Phillips, Vaughan T. J.; Andronache, Constantin; Sherwood, Steven C.; Bansemer, Aaron; Conant, William C.; Demott, Paul J.; Flagan, Richard C.; Heymsfield, Andy; Jonsson, Haflidi; Poellot, Micheal

    2005-01-01

    Simulations of a cumulonimbus cloud observed in the Cirrus regional Study of Tropical Anvils and Cirrus Layers-Florida Area Cirrus Experiment (CRYSTAL-FACE) with an advanced version of the Explicit Microphysics Model (EMM) are presented. The EMM has size-resolved aerosols and predicts the time evolution of sizes, bulk densities and axial ratios of ice particles. Observations by multiple aircraft in the troposphere provide inputs to the model, including observations of the ice nuclei and of the entire size distribution of condensation nuclei. Homogeneous droplet freezing is found to be the source of almost all of the ice crystals in the anvil updraught of this particular model cloud. Most of the simulated droplets that freeze to form anvil crystals appear to be nucleated by activation of aerosols far above cloud base in the interior of the cloud ("secondary" or "in cloud" droplet nucleation). This is partly because primary droplets formed at cloud base are invariably depleted by accretion before they can reach the anvil base in the updraught, which promotes an increase with height of the average supersaturation in the updraught aloft. More than half of these aerosols, activated far above cloud base, are entrained into the updraught of this model cloud from the lateral environment above about 5 km above mean sea level. This confirms the importance of remote sources of atmospheric aerosol for anvil glaciation. Other nucleation processes impinge indirectly upon the anvil glaciation by modifying the concentration of supercooled droplets in the upper levels of the mixed-phase region. For instance, the warm-rain process produces a massive indirect impact on the anvil crystal concentration, because it determines the mass of precipitation forming in the updraught. It competes with homogeneous freezing as a sink for cloud droplets. The effects from turbulent enhancement of the warm-rain process and from the nucleation processes on the anvil ice properties are assessed.

  11. Traffic dynamics of carnival processions

    NASA Astrophysics Data System (ADS)

    Polichronidis, Petros; Wegerle, Dominik; Dieper, Alexander; Schreckenberg, Michael

    2018-03-01

    The traffic dynamics of processions are described in this study. GPS data from participating groups in the Cologne Rose Monday processions 2014–2017 are used to analyze the kinematic characteristics. The preparation of the measured data requires an adjustment by a specially adapted algorithm for the map matching method. A higher average velocity is observed for the last participant, the Carnival Prince, than for the leading participant of the parade. Based on the results of the data analysis, for the first time a model can be established for defilading parade groups as a modified Nagel-Schreckenberg model. This model can reproduce the observed characteristics in simulations. They can be explained partly by the constantly moving vehicle driving ahead of the parade leaving the pathway and partly due to a spatial contraction of the parade during the procession.

  12. Development of KIAPS Observation Processing Package for Data Assimilation System

    NASA Astrophysics Data System (ADS)

    Kang, Jeon-Ho; Chun, Hyoung-Wook; Lee, Sihye; Han, Hyun-Jun; Ha, Su-Jin

    2015-04-01

    The Korea Institute of Atmospheric Prediction Systems (KIAPS) was founded in 2011 by the Korea Meteorological Administration (KMA) to develop Korea's own global Numerical Weather Prediction (NWP) system as a nine-year (2011-2019) project. The data assimilation team at KIAPS has been developing the observation processing system (KIAPS Package for Observation Processing: KPOP) to provide optimal observations to the data assimilation system for the KIAPS global model (KIAPS Integrated Model - Spectral Element method based on HOMME: KIM-SH). Currently, KPOP is capable of processing satellite radiance data (AMSU-A, IASI), GPS Radio Occultation (GPS-RO), aircraft (AMDAR, AIREP, etc.) and synoptic observations (SONDE and SURFACE). KPOP adopted the Radiative Transfer for TOVS version 10 (RTTOV_v10) to compute the brightness temperature (TB) of each channel at the top of the atmosphere (TOA), and the Radio Occultation Processing Package (ROPP) one-dimensional forward module to compute the bending angle (BA) at each tangent point. The observation data are obtained from the KMA in BUFR format and converted to the ODB format used for operational data assimilation and monitoring at the KMA. The Unified Model (UM), Community Atmosphere - Spectral Element (CAM-SE) and KIM-SH model outputs are used for the bias correction (BC) and quality control (QC) of the observations. KPOP provides radiance and RO data for a Local Ensemble Transform Kalman Filter (LETKF), and SONDE, SURFACE and AIRCRAFT data for Three-Dimensional Variational Assimilation (3DVAR). We expect that all observation types processed in KPOP will soon be usable with both data assimilation methods. Preliminary results for each observation type are presented, together with the current development status of KPOP.
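
    KPOP's actual QC rules are not given in the abstract; as a generic illustration of one standard observation-processing step, here is a background-departure check that rejects observations whose innovation exceeds a multiple of the combined expected error (all values hypothetical):

      import numpy as np

      obs = np.array([271.2, 269.8, 250.1, 272.5])         # e.g. brightness temperatures (K)
      background = np.array([270.9, 270.2, 270.5, 271.8])  # model equivalents H(x_b)
      sigma_o, sigma_b = 1.0, 0.8                          # assumed obs/background errors

      innovation = obs - background
      reject = np.abs(innovation) > 3.0 * np.hypot(sigma_o, sigma_b)
      print("QC flags:", reject)                           # the third observation is rejected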

  13. From Data to Knowledge: GEOSS experience and the GEOSS Knowledge Base contribution to the GCI

    NASA Astrophysics Data System (ADS)

    Santoro, M.; Nativi, S.; Mazzetti, P., Sr.; Plag, H. P.

    2016-12-01

    According to systems theory, data is raw: it simply exists and has no significance beyond its existence, while information is data that has been given meaning by way of relational connection. The appropriate collection of information, such that it contributes to understanding, is a process of knowledge creation. The Global Earth Observation System of Systems (GEOSS) developed by the Group on Earth Observations (GEO) is a set of coordinated, independent Earth observation, information and processing systems that interact and provide access to diverse information for a broad range of users in both public and private sectors. GEOSS links these systems to strengthen the monitoring of the state of the Earth. In the past ten years, the development of GEOSS has taught several lessons about the need to move from (open) data sharing to information and knowledge sharing. Advanced user-focused services require moving from a data-driven framework to a knowledge-sharing platform. Such a platform needs to manage information and knowledge in addition to the datasets linked to them. To this end, GEO has launched a specific task called the "GEOSS Knowledge Base", which deals with resources such as user requirements, Sustainable Development Goals (SDGs), observation and processing ontologies, publications, guidelines, best practices, business processes/algorithms, and definitions of advanced concepts like Essential Variables (EVs), indicators and strategic goals. In turn, information and knowledge (e.g. guidelines, best practices, user requirements, business processes, algorithms) can be used to generate additional information and knowledge from shared datasets. To fully utilize and leverage the GEOSS Knowledge Base, the current GEOSS Common Infrastructure (GCI) model will be extended and advanced to cover important concepts and implementation artifacts, such as data processing services and environmental/economic models, as well as EVs, Primary Indicators and SDGs. The new GCI model will link these concepts to the present dataset, observation and sensor concepts, enabling a set of very important new capabilities to be offered to GEOSS users.

  14. Geospatial application of the Water Erosion Prediction Project (WEPP) Model

    Treesearch

    D. C. Flanagan; J. R. Frankenberger; T. A. Cochrane; C. S. Renschler; W. J. Elliot

    2011-01-01

    The Water Erosion Prediction Project (WEPP) model is a process-based technology for prediction of soil erosion by water at hillslope profile, field, and small watershed scales. In particular, WEPP utilizes observed or generated daily climate inputs to drive the surface hydrology processes (infiltration, runoff, ET) component, which subsequently impacts the rest of the...

  15. Modeling Polio Data Using the First Order Non-Negative Integer-Valued Autoregressive, INAR(1), Model

    NASA Astrophysics Data System (ADS)

    Vazifedan, Turaj; Shitan, Mahendran

    Time series data may consist of counts, such as the number of road accidents, the number of patients in a certain hospital, or the number of customers waiting for service at a certain time. When the observed values are large, it is usual to use a Gaussian Autoregressive Moving Average (ARMA) process to model the time series. However, if the observed counts are small, it is not appropriate to model the observed phenomenon with an ARMA process. In such cases the time series should be modelled with a non-negative integer-valued autoregressive (INAR) process. The modeling of count data is based on the binomial thinning operator. In this paper we illustrate the modeling of count data using the monthly number of poliomyelitis cases in the United States from January 1970 to December 1983. We applied the AR(1) model, the Poisson regression model and the INAR(1) model, and the suitability of these models was assessed using the Index of Agreement (I.A.). We found the INAR(1) model more appropriate, in the sense that it had a better I.A., and it is natural since the data are counts.
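
    A minimal simulation of the INAR(1) recursion X_t = alpha o X_{t-1} + e_t, where o denotes binomial thinning (each of the X_{t-1} counts survives with probability alpha) and e_t is a Poisson innovation; the parameter values are illustrative, not those fitted to the polio series:

      import numpy as np

      rng = np.random.default_rng(42)
      alpha, lam, T = 0.6, 1.5, 200   # thinning prob., Poisson innovation mean, length

      x = np.empty(T, dtype=int)
      x[0] = rng.poisson(lam / (1 - alpha))          # start near the stationary mean
      for t in range(1, T):
          survivors = rng.binomial(x[t - 1], alpha)  # binomial thinning alpha o X_{t-1}
          x[t] = survivors + rng.poisson(lam)        # add the integer-valued innovation

      print(x[:20], x.mean())   # stationary mean is lam / (1 - alpha) = 3.75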

  16. Diagnosing the Dynamics of Observed and Simulated Ecosystem Gross Primary Productivity with Time Causal Information Theory Quantifiers

    PubMed Central

    Sippel, Sebastian; Mahecha, Miguel D.; Hauhs, Michael; Bodesheim, Paul; Kaminski, Thomas; Gans, Fabian; Rosso, Osvaldo A.

    2016-01-01

    Data analysis and model-data comparisons in the environmental sciences require diagnostic measures that quantify time series dynamics and structure, and are robust to noise in observational data. This paper investigates the temporal dynamics of environmental time series using measures quantifying their information content and complexity. The measures are used to classify natural processes on one hand, and to compare models with observations on the other. The present analysis focuses on the global carbon cycle as an area of research in which model-data integration and comparisons are key to improving our understanding of natural phenomena. We investigate the dynamics of observed and simulated time series of Gross Primary Productivity (GPP), a key variable in terrestrial ecosystems that quantifies ecosystem carbon uptake. However, the dynamics, patterns and magnitudes of GPP time series, both observed and simulated, vary substantially on different temporal and spatial scales. We demonstrate here that information content and complexity, or Information Theory Quantifiers (ITQ) for short, serve as robust and efficient data-analytical and model benchmarking tools for evaluating the temporal structure and dynamical properties of simulated or observed time series at various spatial scales. At continental scale, we compare GPP time series simulated with two models and an observations-based product. This analysis reveals qualitative differences between model evaluation based on ITQ compared to traditional model performance metrics, indicating that good model performance in terms of absolute or relative error does not imply that the dynamics of the observations is captured well. Furthermore, we show, using an ensemble of site-scale measurements obtained from the FLUXNET archive in the Mediterranean, that model-data or model-model mismatches as indicated by ITQ can be attributed to and interpreted as differences in the temporal structure of the respective ecological time series. At global scale, our understanding of C fluxes relies on the use of consistently applied land models. Here, we use ITQ to evaluate model structure: The measures are largely insensitive to climatic scenarios, land use and atmospheric gas concentrations used to drive them, but clearly separate the structure of 13 different land models taken from the CMIP5 archive and an observations-based product. In conclusion, diagnostic measures of this kind provide data-analytical tools that distinguish different types of natural processes based solely on their dynamics, and are thus highly suitable for environmental science applications such as model structural diagnostics. PMID:27764187
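
    A representative member of this family of time-causal quantifiers is the Bandt-Pompe permutation entropy; a self-contained sketch follows (the embedding dimension and test series are arbitrary, and the paper's full set of measures is richer than this single quantity):

      import math
      import numpy as np
      from itertools import permutations

      def permutation_entropy(x, d=3):
          # Normalized Bandt-Pompe entropy of the ordinal patterns of d consecutive values.
          counts = {p: 0 for p in permutations(range(d))}
          for i in range(len(x) - d + 1):
              counts[tuple(np.argsort(x[i:i + d]))] += 1
          p = np.array(list(counts.values()), dtype=float)
          p = p[p > 0] / p.sum()
          return float(-(p * np.log(p)).sum() / math.log(math.factorial(d)))

      rng = np.random.default_rng(0)
      print(permutation_entropy(rng.normal(size=2000)))             # noise: close to 1
      print(permutation_entropy(np.sin(np.linspace(0, 60, 2000))))  # regular signal: lower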

  17. Maximum profile likelihood estimation of differential equation parameters through model based smoothing state estimates.

    PubMed

    Campbell, D A; Chkrebtii, O

    2013-12-01

    Statistical inference for biochemical models often faces a variety of characteristic challenges. In this paper we examine state and parameter estimation for the JAK-STAT intracellular signalling mechanism, which exemplifies the implementation intricacies common in many biochemical inference problems. We introduce an extension to the Generalized Smoothing approach for estimating delay differential equation models, addressing selection of complexity parameters, choice of the basis system, and appropriate optimization strategies. Motivated by the JAK-STAT system, we further extend the generalized smoothing approach to consider a nonlinear observation process with additional unknown parameters, and highlight how the approach handles unobserved states and unevenly spaced observations. The methodology developed is generally applicable to problems of estimation for differential equation models with delays, unobserved states, nonlinear observation processes, and partially observed histories. Crown Copyright © 2013. Published by Elsevier Inc. All rights reserved.
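
    A stripped-down relative of this approach is gradient matching against a smooth state estimate, sketched below for a delay-free linear ODE; the Generalized Smoothing method additionally profiles out the smoothing coefficients and handles delays, unobserved states and nonlinear observation processes:

      import numpy as np
      from scipy.interpolate import UnivariateSpline
      from scipy.optimize import minimize_scalar

      # Noisy observations of x' = -theta * x with true theta = 0.7 (synthetic data).
      rng = np.random.default_rng(3)
      t = np.linspace(0, 5, 60)
      y = np.exp(-0.7 * t) + 0.02 * rng.normal(size=t.size)

      spline = UnivariateSpline(t, y, s=0.02)     # model-based smoothing state estimate
      xhat, dxhat = spline(t), spline.derivative()(t)

      # Estimate theta by matching the spline derivative to the ODE right-hand side.
      obj = lambda theta: np.sum((dxhat + theta * xhat) ** 2)
      print(minimize_scalar(obj, bounds=(0, 5), method="bounded").x)  # close to 0.7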

  18. Distribution of model uncertainty across multiple data streams

    NASA Astrophysics Data System (ADS)

    Wutzler, Thomas

    2014-05-01

    When confronting biogeochemical models with a diversity of observational data streams, we are faced with the problem of weighting the data streams. Without weighting or multiple blocked cost functions, model uncertainty is allocated to the sparse data streams, and possible bias in processes that are strongly constrained is exported to processes that are constrained only by sparse data streams. In this study we propose an approach that aims at making model uncertainty a multiplicative factor of observation uncertainty that is constant across all data streams. Further, we propose an implementation based on Markov chain Monte Carlo sampling combined with simulated annealing that is able to determine this variance factor. The method is exemplified both with very simple models and artificial data, and with an inversion of the DALEC ecosystem carbon model against multiple observations of Howland forest. We argue that the presented approach can alleviate, and perhaps resolve, the problem of bias export to sparse data streams.
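
    The core idea can be written as a misfit function in which a single factor c inflates every stream's observation variance, so a dense stream cannot dominate merely by its size; a schematic Gaussian version follows (the MCMC/simulated-annealing estimation of the factor is not reproduced here, and a simple grid search stands in for it):

      import numpy as np

      def neg_log_lik(c, streams):
          # Gaussian misfit with the inflated variance c * sigma_i^2 per data stream.
          total = 0.0
          for obs, sim, sigma in streams:
              var = c * sigma ** 2
              total += 0.5 * np.sum((obs - sim) ** 2 / var + np.log(2 * np.pi * var))
          return total

      # Two streams of very different size: dense flux-like data vs. sparse biomass-like data.
      rng = np.random.default_rng(7)
      dense = (rng.normal(0, 1.5, 1000), np.zeros(1000), 1.0)   # (obs, sim, sigma_obs)
      sparse = (rng.normal(0, 1.5, 5), np.zeros(5), 1.0)
      cs = np.linspace(0.5, 5, 50)
      print(cs[np.argmin([neg_log_lik(c, [dense, sparse]) for c in cs])])  # near 2.25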

  19. Aeronomy of the Venus Upper Atmosphere

    NASA Astrophysics Data System (ADS)

    Gérard, J.-C.; Bougher, S. W.; López-Valverde, M. A.; Pätzold, M.; Drossart, P.; Piccioni, G.

    2017-11-01

    We present aeronomical observations collected using remote sensing instruments on board Venus Express, complemented with ground-based observations and numerical modeling. They are mostly based on VIRTIS and SPICAV measurements of airglow obtained in the nadir mode and at the limb above 90 km. They complement our understanding of the behavior of Venus' upper atmosphere that was largely based on Pioneer Venus observations mostly performed over thirty years earlier. Following a summary of recent spectral data from the EUV to the infrared, we examine how these observations have improved our knowledge of the composition, thermal structure, dynamics and transport of the Venus upper atmosphere. We then synthesize progress in three-dimensional modeling of the upper atmosphere which is largely based on global mapping and observations of time variations of the nitric oxide and O2 nightglow emissions. Processes controlling the escape flux of atoms to space are described. Results based on the VeRA radio propagation experiment are summarized and compared to ionospheric measurements collected during earlier space missions. Finally, we point out some unsolved and open questions generated by these recent datasets and model comparisons.

  20. Modeling marine oily wastewater treatment by a probabilistic agent-based approach.

    PubMed

    Jing, Liang; Chen, Bing; Zhang, Baiyu; Ye, Xudong

    2018-02-01

    This study developed a novel probabilistic agent-based approach for modeling marine oily wastewater treatment processes. It begins by constructing a probability-based agent simulation model, followed by a global sensitivity analysis and a genetic algorithm-based calibration. The proposed modeling approach was tested through a case study of the removal of naphthalene from marine oily wastewater using UV irradiation. The removal of naphthalene was described by an agent-based simulation model using 8 types of agents and 11 reactions. Each reaction was governed by a probability parameter to determine its occurrence. The modeling results showed that the root mean square errors between modeled and observed removal rates were 8.73 and 11.03% for calibration and validation runs, respectively. Reaction competition was analyzed by comparing agent-based reaction probabilities, while agents' heterogeneity was visualized by plotting their real-time spatial distribution, showing a strong potential for reactor design and process optimization. Copyright © 2017 Elsevier Ltd. All rights reserved.
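
    A toy version of the probability-governed reaction step, with one agent type and one reaction instead of the 8 agent types and 11 reactions used in the study (the probability value is invented):

      import numpy as np

      rng = np.random.default_rng(11)
      naphthalene, product = 1000, 0        # agent counts
      p_photolysis = 0.02                   # per-agent, per-step reaction probability

      for _ in range(100):                  # UV irradiation time steps
          reacted = rng.binomial(naphthalene, p_photolysis)  # each agent reacts w.p. p
          naphthalene -= reacted
          product += reacted

      print("removal rate: %.1f%%" % (100 * product / (naphthalene + product)))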

  1. The Livingstone Model of a Main Propulsion System

    NASA Technical Reports Server (NTRS)

    Bajwa, Anupa; Sweet, Adam; Korsmeyer, David (Technical Monitor)

    2003-01-01

    Livingstone is a discrete, propositional logic-based inference engine that has been used for diagnosis of physical systems. We present a component-based model of a Main Propulsion System (MPS) and describe how it is used with Livingstone (L2) to implement a diagnostic system for integrated vehicle health management (IVHM) in the Propulsion IVHM Technology Experiment (PITEX). We start by discussing the process of conceptualizing such a model and describe graphical tools that facilitated its generation. The model is composed of components (which map onto physical components), connections between components, and constraints. A component is specified by variables, with a set of discrete, qualitative values for each variable in its local nominal and failure modes. For each mode, the model specifies the component's behavior and transitions. We describe the MPS components' nominal and fault modes and the associated Livingstone variables and data structures. Given this model, together with observed external commands and observations from the system, Livingstone tracks the state of the MPS over discrete time-steps by choosing trajectories that are consistent with the observations. We briefly discuss how the compiled model fits into the overall PITEX architecture. Finally, we summarize our modeling experience, discuss advantages and disadvantages of our approach, and suggest enhancements to the modeling process.
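
    A toy flavor of the consistency-based mode tracking described (a sketch only, not the L2 engine; the two-component model and its mode tables are invented): each component mode predicts qualitative observations, and candidate mode assignments inconsistent with the observed values are pruned.

      from itertools import product

      # Invented two-component model: each mode predicts qualitative observations;
      # None means the mode makes no prediction for that observable.
      valve = {"open": {"flow": "on"}, "closed": {"flow": "off"},
               "stuck-closed": {"flow": "off"}}
      sensor = {"nominal": {"reading": "ok"}, "failed": {"reading": None}}

      observed = {"flow": "off", "reading": "ok"}   # e.g. commanded open, but no flow

      def consistent(v_mode, s_mode):
          pred = {**valve[v_mode], **sensor[s_mode]}
          return all(pred[k] is None or pred[k] == v for k, v in observed.items())

      # Candidate diagnoses: mode assignments consistent with the observations.
      print([m for m in product(valve, sensor) if consistent(*m)])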

  2. Results from the VALUE perfect predictor experiment: process-based evaluation

    NASA Astrophysics Data System (ADS)

    Maraun, Douglas; Soares, Pedro; Hertig, Elke; Brands, Swen; Huth, Radan; Cardoso, Rita; Kotlarski, Sven; Casado, Maria; Pongracz, Rita; Bartholy, Judit

    2016-04-01

    Until recently, the evaluation of downscaled climate model simulations has typically been limited to surface climatologies, including long-term means, spatial variability and extremes. But these aspects are often, at least partly, tuned in regional climate models to match observed climate. The tuning issue is of course particularly relevant for bias-corrected regional climate models. In general, a good performance of a model for these aspects in present climate therefore does not imply a good performance in simulating climate change. It is now widely accepted that, to increase our confidence in climate change simulations, it is necessary to evaluate how climate models simulate the relevant underlying processes. In other words, it is important to assess whether downscaling does the right thing for the right reasons. Therefore, VALUE has carried out a broad process-based evaluation study based on its perfect predictor experiment simulations: the downscaling methods are driven by ERA-Interim data over the period 1979-2008, and reference observations are given by a network of 85 meteorological stations covering all European climates. More than 30 methods participated in the evaluation. In order to compare statistical and dynamical methods, only variables provided by both types of approaches could be considered. This limited the analysis to conditioning local surface variables on variables from driving processes that are simulated by ERA-Interim. We considered the following types of processes: at the continental scale, we evaluated the performance of downscaling methods for positive and negative North Atlantic Oscillation, Atlantic ridge and blocking situations. At synoptic scales, we considered Lamb weather types for selected European regions such as Scandinavia, the United Kingdom, the Iberian Peninsula or the Alps. At regional scales we considered phenomena such as the Mistral, the Bora or the Iberian coastal jet. Such process-based evaluation helps to attribute biases in surface variables to underlying processes and ultimately to improve climate models.

  3. Spatiotemporal variability of water and energy fluxes: TERENO- prealpine hydrometeorological data analysis and inverse modeling with GEOtop and PEST

    NASA Astrophysics Data System (ADS)

    Soltani, M.; Kunstmann, H.; Laux, P.; Mauder, M.

    2016-12-01

    In mountainous and prealpine regions, ecohydrological processes exhibit rapid changes within short distances due to the complex orography and strong elevation gradients. Water and energy fluxes between the land surface and the atmosphere are crucial drivers for nearly all ecosystem processes. The aim of this research is to analyze the variability of surface water and energy fluxes through both comprehensive observational hydrometeorological data analysis and process-based, high-resolution hydrological modeling for a mountainous and prealpine region in Germany. We particularly focus on the closure of the observed energy balance and on the added value of energy flux observations for parameter estimation in our hydrological model (GEOtop) by inverse modeling using PEST. Our study area is the catchment of the river Rott (55 km2), part of the TERENO prealpine observatory in Southern Germany, and we focus particularly on observations during the summer episode May to July 2013. We present the coupling of GEOtop and the parameter estimation tool PEST, which is based on the Gauss-Marquardt-Levenberg method, a gradient-based nonlinear parameter estimation algorithm. Analysis of the surface energy partitioning revealed the latent heat flux to be the main consumer of available energy. The relative imbalance was largest during nocturnal periods. An energy imbalance was observed at the eddy-covariance site Fendt, due to either underestimated turbulent fluxes or overestimated available energy. The calculation of the simulated energy and water balances for the entire catchment indicated that 78% of net radiation leaves the catchment as latent heat flux, 17% as sensible heat, and 5% enters the soil as soil heat flux; 45% of the catchment-aggregated precipitation leaves the catchment as discharge and 55% as evaporation. Using the developed GEOtop-PEST interface, the hydrological model is calibrated by comparing simulated and observed discharge, soil moisture and temperature, and sensible, latent and soil heat fluxes. A reasonable quality of fit could be achieved. Uncertainty and covariance analyses are performed, allowing the derivation of confidence intervals for all estimated parameters.
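
    The energy-balance-closure diagnostic mentioned above reduces to comparing available energy (Rn - G) with the turbulent fluxes (H + LE); a minimal sketch with hypothetical half-hourly eddy-covariance values:

      import numpy as np

      # Hypothetical half-hourly fluxes (W m-2): net radiation, sensible, latent, soil.
      Rn = np.array([450.0, 380.0, 120.0, -40.0])
      H  = np.array([120.0,  90.0,  30.0, -10.0])
      LE = np.array([260.0, 220.0,  60.0,   5.0])
      G  = np.array([ 40.0,  30.0,  10.0, -20.0])

      residual = Rn - G - (H + LE)               # imbalance of the surface energy budget
      closure = (H + LE).sum() / (Rn - G).sum()  # energy balance ratio
      print(residual, "EBR = %.2f" % closure)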

  4. Observationally-based Metrics of Ocean Carbon and Biogeochemical Variables are Essential for Evaluating Earth System Model Projections

    NASA Astrophysics Data System (ADS)

    Russell, J. L.; Sarmiento, J. L.

    2017-12-01

    The Southern Ocean is central to the climate's response to increasing levels of atmospheric greenhouse gases as it ventilates a large fraction of the global ocean volume. Global coupled climate models and earth system models, however, vary widely in their simulations of the Southern Ocean and its role in, and response to, the ongoing anthropogenic forcing. Due to its complex water-mass structure and dynamics, Southern Ocean carbon and heat uptake depend on a combination of winds, eddies, mixing, buoyancy fluxes and topography. Understanding how the ocean carries heat and carbon into its interior and how the observed wind changes are affecting this uptake is essential to accurately projecting transient climate sensitivity. Observationally-based metrics are critical for discerning processes and mechanisms, and for validating and comparing climate models. As the community shifts toward Earth system models with explicit carbon simulations, more direct observations of important biogeochemical parameters, like those obtained from the biogeochemically-sensored floats that are part of the Southern Ocean Carbon and Climate Observations and Modeling project, are essential. One goal of future observing systems should be to create observationally-based benchmarks that will lead to reducing uncertainties in climate projections, and especially uncertainties related to oceanic heat and carbon uptake.

  5. Diagnosing soil moisture anomalies and neglected soil moisture source/sink processes via a thermal infrared-based two-source energy balance model

    USDA-ARS?s Scientific Manuscript database

    Atmospheric processes, especially those that occur in the surface and boundary layer, are significantly impacted by soil moisture (SM). Due to the observational gaps in the ground-based monitoring of SM, methodologies have been developed to monitor SM from satellite platforms. While many have focuse...

  6. Comparison of Two Conceptually Different Physically-based Hydrological Models - Looking Beyond Streamflows

    NASA Astrophysics Data System (ADS)

    Rousseau, A. N.; Álvarez; Yu, X.; Savary, S.; Duffy, C.

    2015-12-01

    Most physically-based hydrological models simulate, to various extents, the relevant watershed processes occurring at different spatiotemporal scales. These models use different physical domain representations (e.g., hydrological response units, discretized control volumes) and numerical solution techniques (e.g., finite difference method, finite element method), as well as a variety of approximations for representing the physical processes. Despite the fact that several models have been developed so far, very few inter-comparison studies have been conducted to check, beyond streamflows, whether different modeling approaches simulate the other watershed-scale processes in a similar fashion. In this study, PIHM (Qu and Duffy, 2007), a fully coupled, distributed model, and HYDROTEL (Fortin et al., 2001; Turcotte et al., 2003, 2007), a pseudo-coupled, semi-distributed model, were compared to check whether the models could corroborate observed streamflows while representing other processes equally well, such as evapotranspiration, snow accumulation/melt or infiltration. For this study, the Young Womans Creek watershed, PA, was used to compare: streamflows (channel routing), actual evapotranspiration, snow water equivalent (snow accumulation and melt), infiltration, recharge, shallow water depth above the soil surface (surface flow), lateral flow into the river (surface and subsurface flow) and height of the saturated soil column (subsurface flow). Despite a lack of observed data for evaluating most of the simulated processes, it can be said that the two models can be used as simulation tools for streamflows, actual evapotranspiration, infiltration, lateral flows into the river, and height of the saturated soil column. However, each process presents particular differences as a result of the physical parameters and the modeling approaches used by each model. These differences should be the object of further analyses to definitively confirm or reject modeling hypotheses.

  7. BoolFilter: an R package for estimation and identification of partially-observed Boolean dynamical systems.

    PubMed

    Mcclenny, Levi D; Imani, Mahdi; Braga-Neto, Ulisses M

    2017-11-25

    Gene regulatory networks govern the function of key cellular processes, such as control of the cell cycle, response to stress, DNA repair mechanisms, and more. Boolean networks have been used successfully in modeling gene regulatory networks. In the Boolean network model, the transcriptional state of each gene is represented by 0 (inactive) or 1 (active), and the relationship among genes is represented by logical gates updated at discrete time points. However, the Boolean gene states are never observed directly, but only indirectly and incompletely through noisy measurements based on expression technologies such as cDNA microarrays, RNA-Seq, and cell imaging-based assays. The Partially-Observed Boolean Dynamical System (POBDS) signal model is distinct from other deterministic and stochastic Boolean network models in removing the requirement of a directly observable Boolean state vector and allowing uncertainty in the measurement process, addressing the scenario encountered in practice in transcriptomic analysis. BoolFilter is an R package that implements the POBDS model and associated algorithms for state and parameter estimation. It allows the user to estimate the Boolean states, network topology, and measurement parameters from time series of transcriptomic data using exact and approximated (particle) filters, as well as simulate the transcriptomic data for a given Boolean network model. Some of its infrastructure, such as the network interface, is the same as in the previously published R package for Boolean Networks BoolNet, which enhances compatibility and user accessibility to the new package. We introduce the R package BoolFilter for Partially-Observed Boolean Dynamical Systems (POBDS). The BoolFilter package provides a useful toolbox for the bioinformatics community, with state-of-the-art algorithms for simulation of time series transcriptomic data as well as the inverse process of system identification from data obtained with various expression technologies such as cDNA microarrays, RNA-Seq, and cell imaging-based assays.
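
    BoolFilter itself is an R package; the Python sketch below only illustrates the POBDS signal model it implements: a hidden Boolean state updated through logic gates with process noise, observed through a noisy measurement channel (the network wiring and noise levels are invented):

      import numpy as np

      rng = np.random.default_rng(5)
      x = np.array([1, 0, 1])               # hidden Boolean state of 3 genes
      p, q = 0.05, 0.1                      # process-noise and measurement-error probs

      def regulate(x):
          # Invented wiring: g0 <- g2, g1 <- g0 AND g2, g2 <- NOT g1.
          return np.array([x[2], x[0] & x[2], 1 - x[1]])

      states, observations = [], []
      for t in range(10):
          x = regulate(x) ^ (rng.random(3) < p)    # logic update plus random bit flips
          y = x ^ (rng.random(3) < q)              # noisy Boolean measurements
          states.append(x.copy())
          observations.append(y)
      print(np.array(states), np.array(observations), sep="\n\n")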

  8. Application of an Ensemble Smoother to Precipitation Assimilation

    NASA Technical Reports Server (NTRS)

    Zhang, Sara; Zupanski, Dusanka; Hou, Arthur; Zupanski, Milija

    2008-01-01

    Assimilation of precipitation in a global modeling system poses a special challenge in that the observation operators for precipitation processes are highly nonlinear. In the variational approach, substantial development work and model simplifications are required to include precipitation-related physical processes in the tangent linear model and its adjoint. An ensemble based data assimilation algorithm "Maximum Likelihood Ensemble Smoother (MLES)" has been developed to explore the ensemble representation of the precipitation observation operator with nonlinear convection and large-scale moist physics. An ensemble assimilation system based on the NASA GEOS-5 GCM has been constructed to assimilate satellite precipitation data within the MLES framework. The configuration of the smoother takes the time dimension into account for the relationship between state variables and observable rainfall. The full nonlinear forward model ensembles are used to represent components involving the observation operator and its transpose. Several assimilation experiments using satellite precipitation observations have been carried out to investigate the effectiveness of the ensemble representation of the nonlinear observation operator and the data impact of assimilating rain retrievals from the TMI and SSM/I sensors. Preliminary results show that this ensemble assimilation approach is capable of extracting information from nonlinear observations to improve the analysis and forecast if ensemble size is adequate, and a suitable localization scheme is applied. In addition to a dynamically consistent precipitation analysis, the assimilation system produces a statistical estimate of the analysis uncertainty.
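
    The essence of the ensemble representation is that the nonlinear observation operator is only ever applied to forward-model ensemble members, so no tangent-linear model or adjoint is needed; below is a toy scalar analysis step in the stochastic ensemble Kalman style (not the MLES algorithm itself, whose smoother formulation also spans the time dimension; all values invented):

      import numpy as np

      rng = np.random.default_rng(9)
      Ne = 100
      x_ens = rng.normal(1.0, 0.5, Ne)      # state ensemble (e.g. a moisture variable)

      def H(x):
          # Invented nonlinear rain "observation operator".
          return np.maximum(x, 0.0) ** 1.5

      y_obs, sigma_o = 1.4, 0.2
      y_ens = H(x_ens)                      # operator applied to ensemble, no adjoint
      x_mean, y_mean = x_ens.mean(), y_ens.mean()
      cov_xy = np.mean((x_ens - x_mean) * (y_ens - y_mean))
      var_yy = np.var(y_ens) + sigma_o ** 2
      K = cov_xy / var_yy                   # ensemble Kalman gain
      x_ens += K * (y_obs + rng.normal(0, sigma_o, Ne) - y_ens)  # perturbed-obs update
      print("analysis mean: %.3f" % x_ens.mean())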

  9. Generation of global VTEC maps from low latency GNSS observations based on B-spline modelling and Kalman filtering

    NASA Astrophysics Data System (ADS)

    Erdogan, Eren; Dettmering, Denise; Limberger, Marco; Schmidt, Michael; Seitz, Florian; Börger, Klaus; Brandert, Sylvia; Görres, Barbara; Kersten, Wilhelm F.; Bothmer, Volker; Hinrichs, Johannes; Venzmer, Malte

    2015-04-01

    In May 2014, DGFI-TUM (the former DGFI) and the German Space Situational Awareness Centre (GSSAC) started to develop an OPerational Tool for Ionospheric Mapping And Prediction (OPTIMAP); in November 2014 the Institute of Astrophysics at the University of Göttingen (IAG) joined the group as the third partner. This project aims at the computation and prediction of maps of the vertical total electron content (VTEC) and the electron density distribution of the ionosphere on a global scale, from various space-geodetic observation techniques such as GNSS and satellite altimetry as well as from Sun observations. In this contribution we present first results, i.e. a near-real-time processing framework for generating VTEC maps by assimilating GNSS (GPS, GLONASS) based ionospheric data into a two-dimensional global B-spline approach. To be more specific, the spatial variations of VTEC are modelled by trigonometric B-spline functions in longitude and by endpoint-interpolating polynomial B-spline functions in latitude. Since B-spline functions are compactly supported and highly localizing, our approach can handle large data gaps appropriately and thus provides a better approximation of data with heterogeneous density and quality than the commonly used spherical harmonics. The presented method models temporal variations of VTEC inside a Kalman filter. The unknown parameters of the filter state vector are composed of the B-spline coefficients as well as the satellite and receiver DCBs. To approximate the temporal variation of these state vector components within the filter, a dynamical model has to be set up. The current implementation of the filter allows selecting between a random walk process, a Gauss-Markov process and a dynamic process driven by an empirical ionosphere model, e.g. the International Reference Ionosphere (IRI). For running the model, ionospheric input data are acquired from terrestrial GNSS networks through online archive systems (such as the IGS) with approximately one hour latency. Before feeding the filter with new hourly data, the raw GNSS observations are downloaded and pre-processed via geometry-free linear combinations to provide signal delay information including the ionospheric effects and the differential code biases. Next steps will implement further space-geodetic techniques and will introduce the Sun observations into the procedure. The final goal is to develop a time-dependent model of the electron density based on different geodetic and solar observations.
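
    A scalar illustration of the filter structure described above (the state is a single B-spline coefficient and the dynamics use the random-walk option; the operational filter stacks many coefficients plus satellite and receiver DCBs into the state vector, and all numbers here are invented):

      import numpy as np

      rng = np.random.default_rng(2)
      q, r = 0.05, 0.5        # random-walk process variance, observation variance
      x_hat, P = 0.0, 10.0    # initial coefficient estimate and its variance

      truth = np.cumsum(rng.normal(0, np.sqrt(q), 48))   # hourly "true" coefficient
      for z in truth + rng.normal(0, np.sqrt(r), 48):    # hourly GNSS-derived updates
          P = P + q                                      # predict: random walk
          K = P / (P + r)                                # Kalman gain
          x_hat = x_hat + K * (z - x_hat)                # update with new hourly data
          P = (1 - K) * P
      print("final estimate %.2f vs truth %.2f" % (x_hat, truth[-1]))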

  10. Evaluating and improving count-based population inference: A case study from 31 years of monitoring Sandhill Cranes

    USGS Publications Warehouse

    Gerber, Brian D.; Kendall, William L.

    2017-01-01

    Monitoring animal populations can be difficult. Limited resources often force monitoring programs to rely on unadjusted or smoothed counts as an index of abundance. Smoothing counts is commonly done using a moving-average estimator to dampen sampling variation. These indices are commonly used to inform management decisions, although their reliability is often unknown. We outline a process to evaluate the biological plausibility of annual changes in population counts and indices from a typical monitoring scenario and compare results with a hierarchical Bayesian time series (HBTS) model. We evaluated spring and fall counts, fall indices, and model-based predictions for the Rocky Mountain population (RMP) of Sandhill Cranes (Antigone canadensis) by integrating juvenile recruitment, harvest, and survival into a stochastic stage-based population model. We used simulation to evaluate population indices from the HBTS model and the commonly used 3-yr moving average estimator. We found counts of the RMP to exhibit biologically unrealistic annual change, while the fall population index was largely biologically realistic. HBTS model predictions suggested that the RMP changed little over 31 yr of monitoring, but the pattern depended on assumptions about the observational process. The HBTS model fall population predictions were biologically plausible if observed crane harvest mortality was compensatory up to natural mortality, as empirical evidence suggests. Simulations indicated that the predicted mean of the HBTS model was generally a more reliable estimate of the true population than population indices derived using a moving 3-yr average estimator. Practitioners could gain considerable advantages from modeling population counts using a hierarchical Bayesian autoregressive approach. Advantages would include: (1) obtaining measures of uncertainty; (2) incorporating direct knowledge of the observational and population processes; (3) accommodating missing years of data; and (4) forecasting population size.
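
    For reference, the conventional index that the HBTS model is compared against is simply a centered three-year moving average of the counts (the count values below are invented):

      import numpy as np

      counts = np.array([18500, 21000, 17200, 19800, 22500, 20100, 18900], float)
      index = np.convolve(counts, np.ones(3) / 3, mode="valid")  # 3-yr moving average
      print(index)   # one smoothed value per interior year; the endpoints are lost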

  11. Spatio-Temporal Data Analysis at Scale Using Models Based on Gaussian Processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stein, Michael

    Gaussian processes are the most commonly used statistical model for spatial and spatio-temporal processes that vary continuously. They are broadly applicable in the physical sciences and engineering and are also frequently used to approximate the output of complex computer models, deterministic or stochastic. We undertook research related to theory, computation, and applications of Gaussian processes as well as some work on estimating extremes of distributions for which a Gaussian process assumption might be inappropriate. Our theoretical contributions include the development of new classes of spatial-temporal covariance functions with desirable properties and new results showing that certain covariance models lead to predictions with undesirable properties. To understand how Gaussian process models behave when applied to deterministic computer models, we derived what we believe to be the first significant results on the large sample properties of estimators of parameters of Gaussian processes when the actual process is a simple deterministic function. Finally, we investigated some theoretical issues related to maxima of observations with varying upper bounds and found that, depending on the circumstances, standard large sample results for maxima may or may not hold. Our computational innovations include methods for analyzing large spatial datasets when observations fall on a partially observed grid and methods for estimating parameters of a Gaussian process model from observations taken by a polar-orbiting satellite. In our application of Gaussian process models to deterministic computer experiments, we carried out some matrix computations that would have been infeasible using even extended precision arithmetic by focusing on special cases in which all elements of the matrices under study are rational and using exact arithmetic. The applications we studied include total column ozone as measured from a polar-orbiting satellite, sea surface temperatures over the Pacific Ocean, and annual temperature extremes at a site in New York City. In each of these applications, our theoretical and computational innovations were directly motivated by the challenges posed by analyzing these and similar types of data.
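
    As a minimal illustration of the class of models discussed, here is Gaussian-process interpolation with an exponential covariance (the Matérn family member with smoothness 1/2); the data are synthetic, and the report's covariance constructions are considerably more general:

      import numpy as np

      def k_exp(s, t, sill=1.0, range_=0.5):
          # Exponential covariance: Matern with smoothness parameter 1/2.
          return sill * np.exp(-np.abs(s[:, None] - t[None, :]) / range_)

      x = np.array([0.0, 0.3, 0.7, 1.0])
      y = np.sin(2 * np.pi * x)                  # observations of a spatial process
      x_new = np.linspace(0, 1, 5)

      K = k_exp(x, x) + 1e-6 * np.eye(x.size)    # small nugget for numerical stability
      w = np.linalg.solve(K, y)
      y_pred = k_exp(x_new, x) @ w               # simple kriging predictor
      print(y_pred)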

  12. Classification framework for partially observed dynamical systems

    NASA Astrophysics Data System (ADS)

    Shen, Yuan; Tino, Peter; Tsaneva-Atanasova, Krasimira

    2017-04-01

    We present a general framework for classifying partially observed dynamical systems based on the idea of learning in the model space. In contrast to the existing approaches using point estimates of model parameters to represent individual data items, we employ posterior distributions over model parameters, thus taking into account in a principled manner the uncertainty due to both the generative (observational and/or dynamic noise) and observation (sampling in time) processes. We evaluate the framework on two test beds: a biological pathway model and a stochastic double-well system. Crucially, we show that the classification performance is not impaired when the model structure used for inferring posterior distributions is much more simple than the observation-generating model structure, provided the reduced-complexity inferential model structure captures the essential characteristics needed for the given classification task.

  13. Physiological studies of the brain: Implications for science teaching

    NASA Astrophysics Data System (ADS)

    Esler, William K.

    Physiological changes resulting from repeated, long-term stimulation have been observed in the brains of both humans and laboratory animals. It may be speculated that these changes are related to short-term and long-term memory processes. A physiologically based model for memory processing (PBMMP) can serve to explain the interrelations of various areas of the brain as they process new stimuli and recall past events. The model can also serve to explain many current principles of learning theory and serve as a foundation for developing new theories of learning based upon the physiology of the brain.

  14. Unraveling the sub-processes of selective attention: insights from dynamic modeling and continuous behavior.

    PubMed

    Frisch, Simon; Dshemuchadse, Maja; Görner, Max; Goschke, Thomas; Scherbaum, Stefan

    2015-11-01

    Selective attention biases information processing toward stimuli that are relevant for achieving our goals. However, the nature of this bias is under debate: Does it solely rely on the amplification of goal-relevant information or is there a need for additional inhibitory processes that selectively suppress currently distracting information? Here, we explored the processes underlying selective attention with a dynamic, modeling-based approach that focuses on the continuous evolution of behavior over time. We present two dynamic neural field models incorporating the diverging theoretical assumptions. Simulations with both models showed that they make similar predictions with regard to response times but differ markedly with regard to their continuous behavior. Human data observed via mouse tracking as a continuous measure of performance revealed evidence for the model solely based on amplification but no indication of persisting selective distracter inhibition.

  15. The calculation of theoretical chromospheric models and the interpretation of the solar spectrum

    NASA Technical Reports Server (NTRS)

    Avrett, Eugene H.

    1994-01-01

    Since the early 1970s we have been developing the extensive computer programs needed to construct models of the solar atmosphere and to calculate detailed spectra for use in the interpretation of solar observations. This research involves two major related efforts: work by Avrett and Loeser on the Pandora computer program for non-LTE modeling of the solar atmosphere including a wide range of physical processes, and work by Kurucz on the detailed synthesis of the solar spectrum based on opacity data for over 58 million atomic and molecular lines. Our goals are to determine models of the various features observed on the sun (sunspots, different components of quiet and active regions, and flares) by means of physically realistic models, and to calculate detailed spectra at all wavelengths that match observations of those features. These two goals are interrelated: discrepancies between calculated and observed spectra are used to determine improvements in the structure of the models, and in the detailed physical processes used in both the model calculations and the spectrum calculations. The atmospheric models obtained in this way provide not only the depth variation of various atmospheric parameters, but also a description of the internal physical processes that are responsible for nonradiative heating, and for solar activity in general.

  16. The calculation of theoretical chromospheric models and the interpretation of solar spectra from rockets and spacecraft

    NASA Technical Reports Server (NTRS)

    Avrett, Eugene H.

    1993-01-01

    Since the early 1970s we have been developing the extensive computer programs needed to construct models of the solar atmosphere and to calculate detailed spectra for use in the interpretation of solar observations. This research involves two major related efforts: work by Avrett and Loeser on the Pandora computer program for non-LTE modeling of the solar atmosphere including a wide range of physical processes, and work by Kurucz on the detailed synthesis of the solar spectrum based on opacity data for over 58 million atomic and molecular lines. Our goals are to determine models of the various features observed on the Sun (sunspots, different components of quiet and active regions, and flares) by means of physically realistic models, and to calculate detailed spectra at all wavelengths that match observations of those features. These two goals are interrelated: discrepancies between calculated and observed spectra are used to determine improvements in the structure of the models, and in the detailed physical processes used in both the model calculations and the spectrum calculations. The atmospheric models obtained in this way provide not only the depth variation of various atmospheric parameters, but also a description of the internal physical processes that are responsible for non-radiative heating, and for solar activity in general.

  17. Snow multivariable data assimilation for hydrological predictions in Alpine sites

    NASA Astrophysics Data System (ADS)

    Piazzi, Gaia; Thirel, Guillaume; Campo, Lorenzo; Gabellani, Simone; Stevenin, Hervè

    2017-04-01

    Snowpack dynamics (snow accumulation and ablation) strongly impact hydrological processes in Alpine areas. During the winter season, the presence of snow cover (snow accumulation) reduces the drainage in the basin, resulting in a lower watershed time of concentration in case of rainfall events. Moreover, the release of the significant water volume stored in winter (snowmelt) contributes considerably to the total discharge during the melting period. Therefore, when modeling hydrological processes in snow-dominated catchments, the quality of the predictions depends strongly on how well the model captures snowpack dynamics. The integration of a hydrological model with a snow module allows improving predictions of river discharges. Besides the well-known modeling limitations (uncertainty in parameterizations, possible errors affecting both meteorological forcing data and initial conditions, approximations in boundary conditions), there are physical factors that complicate an exhaustive reconstruction of snow dynamics: snow intermittence in space and time, stratification and slow phenomena like metamorphism, uncertainty in snowfall evaluation, wind transport, etc. Data assimilation (DA) techniques provide an objective methodology to combine several independent snow-related data sources (model simulations, ground-based measurements and remotely sensed observations) in order to obtain the most likely estimate of the snowpack state. This study presents SMASH (Snow Multidata Assimilation System for Hydrology), a multi-layer snow dynamic model strengthened by a multivariable DA framework for hydrological purposes. The model is physically based on mass and energy balances and can be used to reproduce the main physical processes occurring within the snowpack: accumulation, density dynamics, melting, sublimation, radiative balance, and heat and mass exchanges. The model is driven by observed meteorological forcing data (air temperature, wind velocity, relative air humidity, precipitation and incident solar radiation) to provide a complete estimate of the snowpack state. The implementation of a DA scheme enables the simultaneous assimilation of ground-based observations of different snow-related variables (snow depth, snow density, surface temperature and albedo). SMASH performance is evaluated using observed data supplied by meteorological stations located at three experimental Alpine sites: Col de Porte (1325 m, France), Torgnon (2160 m, Italy) and Weissfluhjoch (2540 m, Switzerland). A comparative analysis of the resulting performances of Particle Filter and Ensemble Kalman Filter schemes is shown.
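
    A schematic particle-filter update for a single assimilated variable (snow depth), to show the mechanics behind the scheme comparison; the toy degree-day forecast below stands in for the far richer SMASH mass- and energy-balance propagation, and all numbers are invented:

      import numpy as np

      rng = np.random.default_rng(8)
      Np = 500
      particles = rng.normal(0.80, 0.10, Np)   # prior snow depth ensemble (m)

      def propagate(h, melt_dd=0.004, temp=2.0):
          # Toy degree-day ablation with model noise (not the SMASH physics).
          melt = melt_dd * max(temp, 0.0)
          return np.maximum(h - melt + rng.normal(0, 0.01, h.size), 0.0)

      obs, sigma_o = 0.72, 0.03                # ground-based snow depth observation (m)
      particles = propagate(particles)
      w = np.exp(-0.5 * ((obs - particles) / sigma_o) ** 2)
      w /= w.sum()                             # normalized importance weights
      particles = particles[rng.choice(Np, Np, p=w)]   # resampling step
      print("analysis: %.3f +/- %.3f m" % (particles.mean(), particles.std()))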

  18. Sojourning with the Homogeneous Poisson Process.

    PubMed

    Liu, Piaomu; Peña, Edsel A

    2016-01-01

    In this pedagogical article, distributional properties, some surprising, pertaining to the homogeneous Poisson process (HPP), when observed over a possibly random window, are presented. Properties of the gap-time that covered the termination time and the correlations among gap-times of the observed events are obtained. Inference procedures, such as estimation and model validation, based on event occurrence data over the observation window, are also presented. We envision that through the results in this paper, a better appreciation of the subtleties involved in the modeling and analysis of recurrent events data will ensue, since the HPP is arguably one of the simplest among recurrent event models. In addition, the use of the theorem of total probability, Bayes theorem, the iterated rules of expectation, variance and covariance, and the renewal equation could be illustrative when teaching distribution theory, mathematical statistics, and stochastic processes at both the undergraduate and graduate levels. This article is targeted towards both instructors and students.
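
    A compact simulation of the setup described, an HPP with rate lambda observed over a window [0, tau]; it can be used to check one of the surprising properties alluded to, namely that the gap-time covering the termination time is longer on average than a typical gap (the inspection paradox). The rate and window below are arbitrary:

      import numpy as np

      rng = np.random.default_rng(6)
      lam, tau, n_rep = 2.0, 5.0, 10000   # rate, observation window, replications

      covering_gaps = []
      for _ in range(n_rep):
          # Event times: cumulative sums of Exp(lam) gap times, continued well past tau.
          times = np.cumsum(rng.exponential(1 / lam, size=int(5 * lam * tau) + 20))
          i = np.searchsorted(times, tau)  # first event after the termination time
          left = times[i - 1] if i > 0 else 0.0
          covering_gaps.append(times[i] - left)

      # The gap covering tau is roughly twice 1/lam on average, not 1/lam.
      print(np.mean(covering_gaps), "vs mean gap", 1 / lam)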

  19. A Dirichlet process model for classifying and forecasting epidemic curves.

    PubMed

    Nsoesie, Elaine O; Leman, Scotland C; Marathe, Madhav V

    2014-01-09

    A forecast can be defined as an endeavor to quantitatively estimate a future event or the probabilities assigned to a future occurrence. Forecasting stochastic processes such as epidemics is challenging, since there are several biological, behavioral, and environmental factors that influence the number of cases observed at each point during an epidemic. However, accurate forecasts of epidemics would support timely and effective implementation of public health interventions. In this study, we introduce a Dirichlet process (DP) model for classifying and forecasting influenza epidemic curves. The DP model is a nonparametric Bayesian approach that enables the matching of current influenza activity to simulated and historical patterns, identifies epidemic curves different from those observed in the past and enables prediction of the expected epidemic peak time. The method was validated using simulated influenza epidemics from an individual-based model, and the accuracy was compared to that of the tree-based classification technique, Random Forest (RF), which has been shown to achieve high accuracy in the early prediction of epidemic curves using a classification approach. We also applied the method to forecasting influenza outbreaks in the United States from 1997-2013 using influenza-like illness (ILI) data from the Centers for Disease Control and Prevention (CDC). We made the following observations. First, the DP model performed as well as RF in identifying several of the simulated epidemics. Second, the DP model correctly forecasted the peak time several days in advance for most of the simulated epidemics. Third, the accuracy of identifying epidemics different from those already observed improved with additional data, as expected. Fourth, both methods classified epidemics with higher reproduction numbers (R) with higher accuracy than epidemics with lower R values. Lastly, in the classification of seasonal influenza epidemics based on ILI data from the CDC, the methods' performance was comparable. Although RF requires less computational time than the DP model, the algorithm is fully supervised, implying that epidemic curves different from those previously observed will always be misclassified. In contrast, the DP model can be unsupervised, semi-supervised or fully supervised. Since both methods have their relative merits, an approach that uses both RF and the DP model could be beneficial.
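
    The clustering mechanism underlying a DP model can be conveyed with a single Chinese-restaurant-process pass over a set of curves. The sketch below is a deliberately simplified heuristic version of that idea; the similarity kernel, its bandwidth and the new-cluster weight are ad hoc choices for the example, not the authors' likelihood or Gibbs sampler:

```python
import numpy as np

rng = np.random.default_rng(2)

def crp_cluster(curves, alpha=1.0, sigma=5.0):
    """One sequential CRP-style pass assigning curves to clusters.
    curves: (n, T) array of case counts per time step."""
    clusters = [[0]]                          # member indices per cluster
    for i in range(1, len(curves)):
        weights = []
        for members in clusters:
            mean_curve = curves[members].mean(axis=0)
            dist2 = np.sum((curves[i] - mean_curve) ** 2)
            # join weight: cluster size times a Gaussian similarity kernel
            weights.append(len(members)
                           * np.exp(-dist2 / (2 * sigma**2 * curves.shape[1])))
        weights.append(alpha * 1e-3)          # weight for opening a new cluster
        w = np.array(weights) / np.sum(weights)
        k = rng.choice(len(w), p=w)
        if k == len(clusters):
            clusters.append([i])
        else:
            clusters[k].append(i)
    return clusters

# Two families of Gaussian-shaped epidemic curves with different peak times;
# the pass should roughly separate them into two clusters.
t = np.arange(50)
early = [100 * np.exp(-(t - 15) ** 2 / 40) + rng.normal(0, 2, 50) for _ in range(10)]
late = [100 * np.exp(-(t - 35) ** 2 / 40) + rng.normal(0, 2, 50) for _ in range(10)]
print(crp_cluster(np.array(early + late)))
```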

  20. High-resolution modelling of waves, currents and sediment transport in the Catalan Sea.

    NASA Astrophysics Data System (ADS)

    Sánchez-Arcilla, Agustín; Grifoll, Manel; Pallares, Elena; Espino, Manuel

    2013-04-01

    In order to investigate coastal shelf dynamics, a sequence of high-resolution multi-scale models has been implemented for the Catalan shelf (north-western Mediterranean Sea). The suite consists of a set of increasing-resolution nested models, based on the circulation model ROMS (Regional Ocean Modelling System), the wave model SWAN (Simulating Waves Nearshore) and the sediment transport model CSTM (Community Sediment Transport Model), covering different ranges of spatial scales (from ~1 km at shelf-slope regions to ~40 m around river mouths or local beaches) and temporal scales (from storm events to seasonal variability). Contributions to the understanding of local processes, such as along-shelf dynamics in the inner shelf, sediment dispersal from river discharge, and bi-directional wave-current interactions under different synoptic conditions and resolutions, have been obtained using the Catalan coast as a pilot site. Numerical results have been compared with "ad hoc" intensive field campaigns, data from observational models and remote sensing products. The results exhibit acceptable agreement with observations, and the investigation has yielded generic knowledge and more efficient (process-based) strategies for coastal and shelf management.

  1. An accurate Kriging-based regional ionospheric model using combined GPS/BeiDou observations

    NASA Astrophysics Data System (ADS)

    Abdelazeem, Mohamed; Çelik, Rahmi N.; El-Rabbany, Ahmed

    2018-01-01

    In this study, we propose a regional ionospheric model (RIM) based on both GPS-only and combined GPS/BeiDou observations for single-frequency precise point positioning (SF-PPP) users in Europe. GPS/BeiDou observations from 16 reference stations are processed in zero-difference mode. A least-squares algorithm is developed to determine the vertical total electron content (VTEC) bi-linear function parameters for 15-minute time intervals. The Kriging interpolation method is used to estimate the VTEC values on a 1° × 1° grid. The resulting RIMs are validated for PPP applications using GNSS observations from another set of stations. The SF-PPP accuracy and convergence time obtained with the proposed RIMs are computed and compared with those obtained with the International GNSS Service global ionospheric maps (IGS-GIM). The results show that the RIMs speed up the convergence time and enhance the overall positioning accuracy in comparison with the IGS-GIM model, particularly the combined GPS/BeiDou-based model.
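
    Ordinary kriging predicts each grid value as a weighted sum of the station values, with weights solving a variogram-based linear system augmented by a Lagrange multiplier. A minimal sketch (the exponential variogram, its parameters and the planar-distance treatment of coordinates are assumptions for the example, not the paper's fitted model):

```python
import numpy as np

def ordinary_kriging(xy, vtec, grid_xy, sill=25.0, vrange=20.0, nugget=0.5):
    """Ordinary kriging with an assumed exponential variogram. Coordinates
    are treated as planar (degrees); variogram parameters are illustrative."""
    def gamma(h):
        return nugget + sill * (1.0 - np.exp(-h / vrange))

    n = len(xy)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))          # kriging system with Lagrange row/col
    A[:n, :n] = gamma(d)
    A[n, n] = 0.0
    preds = []
    for g in grid_xy:
        b = np.ones(n + 1)
        b[:n] = gamma(np.linalg.norm(xy - g, axis=1))
        w = np.linalg.solve(A, b)[:n]    # kriging weights (sum to 1)
        preds.append(w @ vtec)
    return np.array(preds)

# Toy usage: interpolate from 5 "stations" to one grid node
stations = np.array([[10.0, 45.0], [12.0, 47.0], [8.0, 50.0],
                     [15.0, 44.0], [11.0, 52.0]])
vals = np.array([12.0, 13.5, 11.0, 14.2, 10.3])     # VTEC in TECU
print(ordinary_kriging(stations, vals, np.array([[11.0, 48.0]])))
```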

  2. Toward a model-based cognitive neuroscience of mind wandering.

    PubMed

    Hawkins, G E; Mittner, M; Boekel, W; Heathcote, A; Forstmann, B U

    2015-12-03

    People often "mind wander" during everyday tasks, temporarily losing track of time, place, or current task goals. In laboratory-based tasks, mind wandering is often associated with performance decrements in behavioral variables and changes in neural recordings. Such empirical associations provide descriptive accounts of mind wandering - how it affects ongoing task performance - but fail to provide true explanatory accounts - why it affects task performance. In this perspectives paper, we consider mind wandering as a neural state or process that affects the parameters of quantitative cognitive process models, which in turn affect observed behavioral performance. Our approach thus uses cognitive process models to bridge the explanatory divide between neural and behavioral data. We provide an overview of two general frameworks for developing a model-based cognitive neuroscience of mind wandering. The first approach uses neural data to segment observed performance into a discrete mixture of latent task-related and task-unrelated states, and the second regresses single-trial measures of neural activity onto structured trial-by-trial variation in the parameters of cognitive process models. We discuss the relative merits of the two approaches and the research questions they can answer, and we highlight that both approaches allow neural data to provide additional constraints on the parameters of cognitive models, which will lead to a more precise account of the effect of mind wandering on brain and behavior. We conclude by summarizing prospects for mind wandering as conceived within a model-based cognitive neuroscience framework, highlighting the opportunities for its continued study and the benefits that arise from using well-developed quantitative techniques to study abstract theoretical constructs. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  3. Continuous data assimilation for downscaling large-footprint soil moisture retrievals

    NASA Astrophysics Data System (ADS)

    Altaf, Muhammad U.; Jana, Raghavendra B.; Hoteit, Ibrahim; McCabe, Matthew F.

    2016-10-01

    Soil moisture is a key component of the hydrologic cycle, influencing processes leading to runoff generation, infiltration and groundwater recharge, evaporation and transpiration. Generally, the measurement scale for soil moisture differs from the modeling scales of these processes. Reducing this mismatch between observation and model scales is necessary for improved hydrological modeling. An innovative approach to downscaling coarse-resolution soil moisture data by combining continuous data assimilation and physically based modeling is presented. In this approach, we exploit the features of Continuous Data Assimilation (CDA), which was initially designed for general dissipative dynamical systems and later tested numerically on the incompressible Navier-Stokes equations and the Bénard equation. A nudging term, estimated as the misfit between interpolants of the assimilated coarse-grid measurements and the fine-grid model solution, is added to the model equations to constrain the model's large-scale variability by the available measurements. Soil moisture fields generated at fine resolution by a physically based vadose zone model (HYDRUS) are subjected to data assimilation conditioned upon coarse-resolution observations. This nudges the model outputs towards values that honor the coarse-resolution dynamics while still being generated at the fine scale. Results show that the approach is feasible for generating fine-scale soil moisture fields across large extents based on coarse-scale observations. A likely application of this approach is the generation of fine- and intermediate-resolution soil moisture fields conditioned on the radiometer-based, coarse-resolution products from remote sensing satellites.
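
    The nudging idea is compact enough to show in full. In the sketch below a 1D diffusion equation stands in for the vadose-zone model, and the term -mu*(I_h(u) - I_h(u_obs)) pulls the fine-grid solution toward interpolants of coarse observations of a reference ("nature") run; all grids and coefficients are illustrative:

```python
import numpy as np

# nx grid points on [0, 1]; dt and kappa chosen for explicit-scheme stability
nx, kappa, mu, dt = 200, 0.001, 5.0, 1e-3
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
coarse = np.r_[np.arange(0, nx, 10), nx - 1]   # coarse "observation" sites

def laplacian(v):
    out = np.zeros_like(v)
    out[1:-1] = (v[2:] - 2.0 * v[1:-1] + v[:-2]) / dx**2
    return out

def I_h(v):
    """Interpolant of the coarse-grid samples back onto the fine grid."""
    return np.interp(x, x[coarse], v[coarse])

u_true = np.sin(2 * np.pi * x) + 0.5 * np.sin(6 * np.pi * x)   # "nature" run
u = np.zeros(nx)                                               # wrong initial state

for _ in range(5000):
    u_true = u_true + dt * kappa * laplacian(u_true)
    nudge = mu * (I_h(u) - I_h(u_true))        # misfit of coarse interpolants
    u = u + dt * (kappa * laplacian(u) - nudge)

# Fine-scale error is driven down using only coarse-site information
print("max error after assimilation:", np.max(np.abs(u - u_true)))
```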

  4. Diagnosing the Dynamics of Observed and Simulated Ecosystem Gross Primary Productivity with Time Causal Information Theory Quantifiers

    DOE PAGES

    Sippel, Sebastian; Lange, Holger; Mahecha, Miguel D.; ...

    2016-10-20

    Data analysis and model-data comparisons in the environmental sciences require diagnostic measures that quantify time series dynamics and structure, and are robust to noise in observational data. This paper investigates the temporal dynamics of environmental time series using measures quantifying their information content and complexity. The measures are used to classify natural processes on one hand, and to compare models with observations on the other. The present analysis focuses on the global carbon cycle as an area of research in which model-data integration and comparisons are key to improving our understanding of natural phenomena. We investigate the dynamics of observed and simulated time series of Gross Primary Productivity (GPP), a key variable in terrestrial ecosystems that quantifies ecosystem carbon uptake. However, the dynamics, patterns and magnitudes of GPP time series, both observed and simulated, vary substantially on different temporal and spatial scales. Here we demonstrate that information content and complexity, or Information Theory Quantifiers (ITQ) for short, serve as robust and efficient data-analytical and model benchmarking tools for evaluating the temporal structure and dynamical properties of simulated or observed time series at various spatial scales. At continental scale, we compare GPP time series simulated with two models and an observations-based product. This analysis reveals qualitative differences between model evaluation based on ITQ compared to traditional model performance metrics, indicating that good model performance in terms of absolute or relative error does not imply that the dynamics of the observations is captured well. Furthermore, we show, using an ensemble of site-scale measurements obtained from the FLUXNET archive in the Mediterranean, that model-data or model-model mismatches as indicated by ITQ can be attributed to and interpreted as differences in the temporal structure of the respective ecological time series. At global scale, our understanding of C fluxes relies on the use of consistently applied land models. Here, we use ITQ to evaluate model structure: the measures are largely insensitive to the climatic scenarios, land use and atmospheric gas concentrations used to drive them, but clearly separate the structure of 13 different land models taken from the CMIP5 archive and an observations-based product. In conclusion, diagnostic measures of this kind provide data-analytical tools that distinguish different types of natural processes based solely on their dynamics, and are thus highly suitable for environmental science applications such as model structural diagnostics.

  6. Bayesian experimental design for models with intractable likelihoods.

    PubMed

    Drovandi, Christopher C; Pettitt, Anthony N

    2013-12-01

    In this paper we present a methodology for designing experiments for efficiently estimating the parameters of models with computationally intractable likelihoods. The approach combines a commonly used methodology for robust experimental design, based on Markov chain Monte Carlo sampling, with approximate Bayesian computation (ABC) to ensure that no likelihood evaluations are required. The utility function considered for precise parameter estimation is based upon the precision of the ABC posterior distribution, which we form efficiently via the ABC rejection algorithm based on pre-computed model simulations. Our focus is on stochastic models and, in particular, we investigate the methodology for Markov process models of epidemics and macroparasite population evolution. The macroparasite example involves a multivariate process and we assess the loss of information from not observing all variables. © 2013, The International Biometric Society.
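
    The ABC rejection step at the heart of this approach is simple to sketch: simulate from the prior and keep the parameter draws whose simulated summary statistic lands within a tolerance of the observed one. The toy below does this for the transmission rate of a Markov (Gillespie) SIR epidemic; the model, prior, summary statistic and tolerance are illustrative choices, not the paper's design setting:

```python
import numpy as np

rng = np.random.default_rng(3)

def gillespie_sir(beta, gamma=0.5, n=100, i0=2, t_max=30.0):
    """Stochastic (Markov-process) SIR epidemic; returns the final size."""
    s, i, t = n - i0, i0, 0.0
    while i > 0 and t < t_max:
        rate_inf, rate_rec = beta * s * i / n, gamma * i
        t += rng.exponential(1.0 / (rate_inf + rate_rec))
        if rng.random() < rate_inf / (rate_inf + rate_rec):
            s, i = s - 1, i + 1
        else:
            i -= 1
    return n - s                          # total number ever infected

# "Observed" data generated with a known transmission rate
obs_size = gillespie_sir(beta=1.2)

# ABC rejection: no likelihood evaluations, only forward simulations
eps, accepted = 5, []
for _ in range(5000):
    beta = rng.uniform(0.1, 3.0)          # prior on the transmission rate
    if abs(gillespie_sir(beta) - obs_size) <= eps:
        accepted.append(beta)

print(len(accepted), np.mean(accepted), np.std(accepted))
```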

  7. A macrophysical life cycle description for precipitating systems

    NASA Astrophysics Data System (ADS)

    Evaristo, Raquel; Xie, Xinxin; Troemel, Silke; Diederich, Malte; Simon, Juergen; Simmer, Clemens

    2014-05-01

    The lack of understanding of cloud and precipitation processes is still the overarching problem of climate simulation and prediction. The work presented is part of the HD(CP)2 project (High Definition Clouds and Precipitation for Advancing Climate Predictions), which aims at building a very high resolution model in order to evaluate and exploit regional hindcasts for the purpose of parameterization development. To this end, an observational object-based climatology for precipitation systems will be built and later compared with a twin model-based climatological database of pseudo precipitation events within an event-based model validation approach. This is done by identifying internal structures, described by means of macrophysical descriptors used to characterize the temporal development of tracked rain events. Two prerequisites are necessary for this: (1) a tracking algorithm and (2) a 3D radar/satellite composite. Both prerequisites are ready to be used and have already been applied to a few case studies. Examples of these macrophysical descriptors are differential reflectivity columns, bright-band fraction and trend, cloud top heights, the spatial extent of updrafts or downdrafts, and the ice content. We will show one case study from 5 August 2012, when convective precipitation was observed simultaneously by the BOXPOL and JUXPOL X-band polarimetric radars. We will follow the main paths identified by the tracking algorithm during this event and identify in the 3D composite the descriptors that characterize precipitation development, their temporal evolution, and the different macrophysical processes that are ultimately related to the observed precipitation. In a later stage these observations will be compared with the results of a hydrometeor classification algorithm, in order to link the macrophysical and microphysical aspects of storm evolution. The detailed microphysical processes are the subject of a closely related work also presented in this session: "Microphysical processes observed by X-band polarimetric radars during the evolution of storm systems," by Xinxin Xie et al.

  8. Evidence of Nanoflare Heating in Coronal Loops Observed with Hinode-XRT and SDO-AIA

    NASA Technical Reports Server (NTRS)

    Lopez-Fuentes, M. C.; Klimchuk, James

    2013-01-01

    We study a series of coronal loop lightcurves from X-ray and EUV observations. In search of signatures of nanoflare heating, we analyze the statistical properties of the observed lightcurves and compare them with synthetic cases obtained with a 2D cellular-automaton model based on nanoflare heating driven by photospheric motions. Our analysis shows that the observed and model lightcurves have similar statistical properties. The asymmetries observed in the distribution of the intensity fluctuations indicate the possible presence of widespread cooling processes in sub-resolution magnetic strands.

  9. Accelerated Aging in Electrolytic Capacitors for Prognostics

    NASA Technical Reports Server (NTRS)

    Celaya, Jose R.; Kulkarni, Chetan; Saha, Sankalita; Biswas, Gautam; Goebel, Kai Frank

    2012-01-01

    The focus of this work is the analysis of different degradation phenomena based on thermal-overstress and electrical-overstress accelerated aging systems, and the use of accelerated aging techniques for prognostics algorithm development. Results of thermal-overstress and electrical-overstress experiments are presented. In addition, preliminary results toward the development of physics-based degradation models are presented, focusing on the electrolyte evaporation failure mechanism. An empirical degradation model based on percentage capacitance loss under electrical overstress is presented and used in: (i) a Bayesian implementation of model-based prognostics using a discrete Kalman filter for health state estimation, and (ii) a dynamic system representation of the degradation model for forecasting and remaining useful life (RUL) estimation. A leave-one-out validation methodology is used to assess the validity of the methodology under the small-sample-size constraint. The RUL estimation results are consistent across the validation tests when comparing relative accuracy and prediction error. The model's inability to represent the change in degradation behavior observed at the end of the test data is likewise consistent throughout the validation tests, indicating the need for a more detailed degradation model or for an algorithm that can estimate model parameters on-line. The degradation observed under different stress intensities with rest periods further supports the need for more sophisticated degradation models: the current degradation model does not represent the capacitance recovery over rest periods following an accelerated aging stress period.
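
    The health-tracking and forecasting pipeline named in (i) and (ii) can be sketched compactly. Below, a discrete Kalman filter tracks a toy linear capacitance-loss state, and the filtered state is then propagated to an assumed end-of-life threshold to estimate RUL; the degradation model, noise levels and threshold are invented for the example, not the paper's identified model:

```python
import numpy as np

rng = np.random.default_rng(4)

dt = 1.0                                  # aging time step (arbitrary units)
F = np.array([[1.0, dt], [0.0, 1.0]])     # state: [% capacitance loss, rate]
H = np.array([[1.0, 0.0]])                # only the loss itself is measured
Q = np.diag([1e-4, 1e-5])                 # process noise covariance
R = np.array([[0.05]])                    # measurement noise covariance

x, P = np.array([0.0, 0.1]), np.eye(2)
measurements = 0.2 * np.arange(60) + rng.normal(0, 0.2, 60)   # noisy degradation

for z in measurements:
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P

# Forecast RUL: propagate the filtered state until the loss crosses an
# assumed end-of-life threshold (here 20% capacitance loss).
threshold, rul, xf = 20.0, 0, x.copy()
while xf[0] < threshold and rul < 1000:
    xf = F @ xf
    rul += 1
print(f"estimated loss {x[0]:.2f}%, rate {x[1]:.3f}%/step, RUL ~ {rul} steps")
```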

  10. A decision-making model based on a spiking neural circuit and synaptic plasticity.

    PubMed

    Wei, Hui; Bu, Yijie; Dai, Dawei

    2017-10-01

    To adapt to the environment and survive, most animals can control their behaviors by making decisions. The process of decision-making and responding according to cues in the environment is stable, sustainable, and learnable. Understanding how behaviors are regulated by neural circuits, and the encoding and decoding mechanisms from stimuli to responses, are important goals in neuroscience. From results observed in Drosophila experiments, the underlying decision-making process is discussed, and a neural circuit that implements a two-choice decision-making model is proposed to explain and reproduce the observations. Compared with previous two-choice decision-making models, our model uses synaptic plasticity to explain changes in decision output given the same environment. Moreover, the biological meanings of the parameters of our decision-making model are discussed. In this paper, we explain at the micro-level (i.e., neurons and synapses) how observable decision-making behavior at the macro-level is acquired and achieved.
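
    As a generic illustration of these ingredients (mutual inhibition, noise-driven choice, and reward-modulated plasticity), the following toy rate-based circuit makes a two-choice decision and gradually biases its weights toward the rewarded option. It is a sketch, not the authors' spiking circuit, and every parameter is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(9)

tau, inhib, noise, thresh, lr = 10.0, 1.5, 0.05, 0.9, 0.05
stimulus = np.array([2.0, 2.0])       # equal evidence for both options
w = np.array([0.5, 0.5])              # stimulus -> choice synaptic weights

def trial(stimulus, w):
    """Noisy race between two mutually inhibiting rate units; the first
    unit to cross the threshold (or the larger one at timeout) wins."""
    r = np.zeros(2)
    for _ in range(500):
        drive = w * stimulus - inhib * r[::-1]    # cross inhibition
        r = np.clip(r + (drive - r) / tau + rng.normal(0, noise, 2), 0, None)
        if r.max() > thresh:
            break
    return int(np.argmax(r))

# Reward-modulated plasticity: choosing option 0 is rewarded, so the weight
# of the chosen option is potentiated or depressed by the trial outcome.
for _ in range(200):
    choice = trial(stimulus, w)
    reward = 1.0 if choice == 0 else 0.0
    w[choice] = np.clip(w[choice] + lr * (reward - 0.5), 0.1, 1.0)

print("learned weights:", w)          # biased toward the rewarded option
```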

  11. Investigating the Seasonal and Diurnal Evolution of Fog and its Effect on the Hydrometeorological Regime in the Southern Appalachian Mountains Using a Mobile Observing Platform

    NASA Astrophysics Data System (ADS)

    Wilson, A. M.; Barros, A.

    2015-12-01

    Accurate, high-resolution observations of fog and low clouds in regions of complex terrain are largely unavailable, due to a lack of existing in situ observations and obstacles to satellite observation such as ground clutter. For the past year, a mobile observing platform including a ground-based passive cavity aerosol spectrometer probe (PCASP-X2), an optical disdrometer (PARSIVEL-2), a tipping bucket rain gauge, and a Vaisala weather station, collocated with a Micro Rain Radar, has been recording observations at valley locations in the inner mountain region of the Southern Appalachian Mountains (SAM). In 2014, the SAM hosted a Global Precipitation Measurement (GPM) mission field campaign (the Integrated Precipitation and Hydrology Experiment), and during this experiment the platform was also collocated at various times with a microwave radiometer, W- and X-band radars, a Pluvio weighing rain gauge, and a 2D video disdrometer, among other instruments. These observations will be discussed in the context of previous findings based on observations and model results (a stochastic column model and the Advanced Research version of the Weather Research and Forecasting model (WRF)). Specifically, previous work has found that seeder-feeder processes govern the enhancement of light rainfall in the SAM through increased coalescence efficiency in stratiform rainfall due to interactions with low-level clouds and topography-modulated fog. This presentation will focus on measurements made by the platform and collocated instruments, as well as observations made by fog collectors on ridges, with the aim of developing a process-based understanding of the characteristics of low cloud and fog by describing the diurnal cycle of microphysical and dynamical processes and properties in the region. The overarching goal is to employ observations of the formation and evolution of the "feeder" clouds and fog to further understand the magnitude and function of their contribution to the local hydrometeorological regime.

  12. Validation in the Absence of Observed Events.

    PubMed

    Lathrop, John; Ezell, Barry

    2016-04-01

    This article addresses the problem of validating models in the absence of observed events, in the area of weapons of mass destruction (WMD) terrorism risk assessment. We address that problem with a broadened definition of "validation," based on stepping "up" a level to consider why decisionmakers seek validation, and from that basis we redefine validation as testing how well the model can advise decisionmakers in terrorism risk management decisions. We develop that into two conditions: validation must be based on cues available in the observable world, and it must focus on what can be done to affect that observable world, i.e., risk management. That leads to two foci: (1) the real-world risk-generating process, and (2) best use of available data. Based on our experience with nine WMD terrorism risk assessment models, we then describe three pitfalls in the best use of available data: SME confidence bias, lack of SME cross-referencing, and problematic initiation rates. Those two foci and three pitfalls provide a basis from which we define validation in this context in terms of four tests--Does the model: … capture initiation? … capture the sequence of events by which attack scenarios unfold? … consider unanticipated scenarios? … consider alternative causal chains? Finally, we corroborate our approach against three validation tests from the DOD literature: Is the model a correct representation of the process to be simulated? To what degree are the model results comparable to the real world? Over what range of inputs are the model results useful? © 2015 Society for Risk Analysis.

  13. Contract Monitoring in Agent-Based Systems: Case Study

    NASA Astrophysics Data System (ADS)

    Hodík, Jiří; Vokřínek, Jiří; Jakob, Michal

    Monitoring of fulfilment of obligations defined by electronic contracts in distributed domains is presented in this paper. A two-level model of contract-based systems and the types of observations needed for contract monitoring are introduced. The observations (inter-agent communication and agents’ actions) are collected and processed by the contract observation and analysis pipeline. The presented approach has been utilized in a multi-agent system for electronic contracting in a modular certification testing domain.

  14. Ag2S atomic switch-based `tug of war' for decision making

    NASA Astrophysics Data System (ADS)

    Lutz, C.; Hasegawa, T.; Chikyow, T.

    2016-07-01

    For a computing process such as making a decision, a software-controlled chip of several transistors is necessary. Inspired by how a single-cell amoeba decides its movements, the theoretical `tug of war' computing model was proposed but not yet implemented in an analogue device suitable for integrated circuits. Based on this model, we have now developed a new electronic element for decision-making processes, which requires no prior programming. The devices are based on the growth and shrinkage of Ag filaments in α-Ag2+δS gap-type atomic switches. Here we present the adapted device design and the new materials. We demonstrate the basic `tug of war' operation by I-V measurements and Scanning Electron Microscopy (SEM) observation. These devices could be the base for a CMOS-free new computer architecture.

  15. Connecting Satellite Observations with Water Cycle Variables Through Land Data Assimilation: Examples Using the NASA GEOS-5 LDAS

    NASA Technical Reports Server (NTRS)

    Reichle, Rolf H.; De Lannoy, Gabrielle J. M.; Forman, Barton A.; Draper, Clara S.; Liu, Qing

    2013-01-01

    A land data assimilation system (LDAS) can merge satellite observations (or retrievals) of land surface hydrological conditions, including soil moisture, snow, and terrestrial water storage (TWS), into a numerical model of land surface processes. In theory, the output from such a system is superior to estimates based on the observations or the model alone, thereby enhancing our ability to understand, monitor, and predict key elements of the terrestrial water cycle. In practice, however, satellite observations do not correspond directly to the water cycle variables of interest. The present paper addresses various aspects of this seeming mismatch using examples drawn from recent research with the ensemble-based NASA GEOS-5 LDAS. These aspects include (1) the assimilation of coarse-scale observations into higher-resolution land surface models, (2) the partitioning of satellite observations (such as TWS retrievals) into their constituent water cycle components, (3) the forward modeling of microwave brightness temperatures over land for radiance-based soil moisture and snow assimilation, and (4) the selection of the most relevant types of observations for the analysis of a specific water cycle variable that is not observed (such as root zone soil moisture). The solution to these challenges involves the careful construction of an observation operator that maps from the land surface model variables of interest to the space of the assimilated observations.
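
    The "observation operator" of the closing sentence is the map H from model space to the space of the assimilated observations. For challenge (1) above it is essentially a spatial aggregation. A minimal sketch, with grid sizes, footprint layout and values invented for the example:

```python
import numpy as np

# Model state: soil moisture on a fine grid; the satellite retrieves a
# coarse-footprint average, so H block-averages fine cells per footprint.
nf, nc = 36, 3                       # fine cells per side, footprints per side
ratio = nf // nc

model_sm = np.random.default_rng(10).uniform(0.1, 0.4, (nf, nf))   # m3/m3

def H(state):
    """Average fine-grid soil moisture over each coarse footprint."""
    return state.reshape(nc, ratio, nc, ratio).mean(axis=(1, 3))

obs = np.full((nc, nc), 0.25)        # retrieved coarse soil moisture
innovation = obs - H(model_sm)       # what the assimilation update acts on
print(innovation)
```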

  16. Model-Based Control of Observer Bias for the Analysis of Presence-Only Data in Ecology

    PubMed Central

    Warton, David I.; Renner, Ian W.; Ramp, Daniel

    2013-01-01

    Presence-only data, where information is available concerning species presence but not species absence, are subject to bias due to observers being more likely to visit and record sightings at some locations than others (hereafter “observer bias”). In this paper, we describe and evaluate a model-based approach to accounting for observer bias directly – by modelling presence locations as a function of known observer bias variables (such as accessibility variables) in addition to environmental variables, then conditioning on a common level of bias to make predictions of species occurrence free of such observer bias. We implement this idea using point process models with a LASSO penalty, a new presence-only method related to maximum entropy modelling, that implicitly addresses the “pseudo-absence problem” of where to locate pseudo-absences (and how many). The proposed method of bias-correction is evaluated using systematically collected presence/absence data for 62 plant species endemic to the Blue Mountains near Sydney, Australia. It is shown that modelling and controlling for observer bias significantly improves the accuracy of predictions made using presence-only data, and usually improves predictions as compared to pseudo-absence or “inventory” methods of bias correction based on absences from non-target species. Future research will consider the potential for improving the proposed bias-correction approach by estimating the observer bias simultaneously across multiple species. PMID:24260167
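
    The model family described here, a point process whose log-intensity is linear in environmental and observer-bias covariates with a LASSO penalty on the coefficients, can be approximated on a grid: cell counts are Poisson with mean exp(X·beta) times cell area, and the L1-penalised likelihood can be maximised by proximal gradient descent. The sketch below uses synthetic data and an ad hoc solver, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic grid-cell covariates (environmental plus observer-bias columns)
n_cells, n_cov = 2000, 6
X = rng.normal(size=(n_cells, n_cov))
beta_true = np.array([0.5, 0.0, 0.0, -0.4, 0.0, 0.6])   # sparse truth
area = 1.0
y = rng.poisson(np.exp(X @ beta_true) * area)           # presence counts

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

beta = np.zeros(n_cov)
step, lam = 1e-4, 100.0                   # step size and LASSO penalty
for _ in range(5000):
    mu = np.exp(X @ beta) * area
    grad = X.T @ (mu - y)                 # gradient of Poisson neg. log-lik.
    beta = soft_threshold(beta - step * grad, step * lam)

print(np.round(beta, 2))   # zeros approximately recovered on irrelevant columns
```

    Predicting with the observer-bias columns of X fixed at a common value would then give intensity surfaces corrected for observer bias, in the spirit described above.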

  17. Analyzing the Impact of Different Pcv Calibration Models on Height Determination Using Gps/Glonass Observations from Asg-Eupos Network

    NASA Astrophysics Data System (ADS)

    Dawidowicz, Karol

    2014-12-01

    The integration of GPS with GLONASS is very important in satellite-based positioning because it can clearly improve reliability and availability. However, unlike GPS, GLONASS satellites transmit signals at different frequencies. This results in significant difficulties in modeling and ambiguity resolution for integrated GNSS positioning. There are also difficulties related to antenna Phase Center Variations (PCV) because, as is well known, the PCV depends on the frequency of the received signal. Thus, when processing simultaneous observations from different positioning systems, e.g. GPS and GLONASS, we can expect complications resulting from the different signal structures and from differences in the satellite constellations. The ASG-EUPOS multifunctional system for precise satellite positioning is part of the EUPOS project involving countries of Central and Eastern Europe. The number of its users is increasing rapidly. Currently 31 of 101 reference stations are equipped with GPS/GLONASS receivers, and the number is still increasing. The aim of this paper is to study the height solution differences caused by using different PCV calibration models in integrated GPS/GLONASS observation processing. The study was conducted on datasets from the ASG-EUPOS network. Since it was intended to evaluate the impact on height determination from the users' point of view, so-called "commercial" software was chosen for post-processing. The analysis was done in baseline mode: three days of GNSS data collected with three different receivers and antennas were used. For the purposes of the research, the daily observations were divided into sessions of one hour each. The results show that switching between relative and absolute PCV models has a marked effect on height determination. This issue is particularly important when mixed GPS/GLONASS observations are post-processed.

  18. Modeling of 2D diffusion processes based on microscopy data: parameter estimation and practical identifiability analysis.

    PubMed

    Hock, Sabrina; Hasenauer, Jan; Theis, Fabian J

    2013-01-01

    Diffusion is a key component of many biological processes such as chemotaxis, developmental differentiation and tissue morphogenesis. Recently it has become possible to assess the spatial gradients caused by diffusion in vitro and in vivo using microscopy-based imaging techniques. The resulting time series of two-dimensional, high-resolution images, in combination with mechanistic models, enable the quantitative analysis of the underlying mechanisms. However, such a model-based analysis is still challenging due to measurement noise and sparse observations, which result in uncertainties in the model parameters. We introduce a likelihood function for image-based measurements with log-normally distributed noise. Based upon this likelihood function we formulate the maximum likelihood estimation problem, which is solved using PDE-constrained optimization methods. To assess the uncertainty and practical identifiability of the parameters we introduce profile likelihoods for diffusion processes. As a proof of concept, we model certain aspects of the guidance of dendritic cells towards lymphatic vessels, an example of haptotaxis. Using a realistic set of artificial measurement data, we estimate the five kinetic parameters of this model and compute profile likelihoods. Our novel approach for the estimation of model parameters from image data, as well as the proposed identifiability analysis approach, is widely applicable to diffusion processes. The profile likelihood based method provides more rigorous uncertainty bounds than local approximation methods.
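
    Both ingredients, a log-normal measurement likelihood and profile likelihoods, are easy to demonstrate on a toy problem. In the sketch below the PDE is replaced by a one-dimensional exponential decay model with two parameters; the model, noise level and grids are invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)

# Toy "measurements": log-normally distributed readings of an exponentially
# decaying concentration profile c(x) = c0 * exp(-x / ell)
x = np.linspace(0, 5, 60)
c0_true, ell_true, sigma = 10.0, 1.5, 0.2
data = c0_true * np.exp(-x / ell_true) * rng.lognormal(0.0, sigma, x.size)

def nll(params):
    """Negative log-likelihood for log-normal noise (constants dropped)."""
    c0, ell = params
    if c0 <= 0 or ell <= 0:
        return np.inf
    resid = np.log(data) - np.log(c0 * np.exp(-x / ell))
    return 0.5 * np.sum(resid**2) / sigma**2

fit = minimize(nll, x0=[5.0, 1.0], method="Nelder-Mead")
print("MLE:", fit.x)

# Profile likelihood for ell: fix ell on a grid, re-optimise c0 each time
ell_grid = np.linspace(0.8, 2.5, 30)
profile = [minimize(lambda p: nll([p[0], e]), x0=[fit.x[0]],
                    method="Nelder-Mead").fun for e in ell_grid]
# Values within chi2_1(0.95)/2 ~ 1.92 of the minimum give an approx. 95% CI
ci = [e for e, v in zip(ell_grid, profile) if v - fit.fun <= 1.92]
print("approx 95% CI for ell:", min(ci), "-", max(ci))
```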

  19. How Monte Carlo heuristics aid to identify the physical processes of drug release kinetics.

    PubMed

    Lecca, Paola

    2018-01-01

    We implement a Monte Carlo heuristic algorithm to model drug release from a solid dosage form. We show that with Monte Carlo simulations it is possible to identify and explain the causes of the unsatisfactory predictive power of current drug release models. It is well known that the power-law and exponential models, as well as those derived from or inspired by them, accurately reproduce only the first 60% of the release curve of a drug from a dosage form. In this study, using Monte Carlo simulation approaches, we show that these models fit almost the entire release profile quite accurately when the release kinetics is not governed by the coexistence of different physico-chemical mechanisms. We show that the accuracy of the traditional models is comparable with that of Monte Carlo heuristics when these heuristics approximate and oversimplify the phenomenology of drug release. This observation suggests developing and using novel Monte Carlo simulation heuristics able to describe the complexity of the release kinetics, and consequently to generate data more similar to those observed in real experiments. Implementing Monte Carlo simulation heuristics of the drug release phenomenology may be much more straightforward and efficient than hypothesizing and implementing complex mathematical models of the physical processes involved in drug release from scratch. Identifying and understanding through simulation heuristics which processes of this phenomenology reproduce the observed data, and then formalizing them mathematically, may avoid time-consuming, trial-and-error regression procedures. Three bullet points highlight the customization of the procedure:
    • An efficient heuristic based on Monte Carlo methods for simulating drug release from a solid dosage form is presented. It specifies the model of the physical process in a simple but accurate way through the formula for the Monte Carlo Micro Step (MCS) time interval.
    • Given the experimentally observed drug release curve, we point out how Monte Carlo heuristics can be integrated into an evolutionary algorithmic approach to infer the MCS model best fitting the observed data, and thus the observed release kinetics.
    • The software implementing the method is written in R, one of the languages most widely used in the bioinformatics community.
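
    The kind of Monte Carlo heuristic meant here can be sketched in a few lines: drug particles random-walk inside a matrix and are counted as released when they exit, with the Monte Carlo micro step as the time unit. The lattice, particle count and fitting window below are arbitrary choices for the example, not the paper's calibrated heuristic:

```python
import numpy as np

rng = np.random.default_rng(7)

n, n_particles, n_steps = 40, 2000, 4000
pos = rng.integers(0, n, size=(n_particles, 2))     # molecules in the matrix
released = np.zeros(n_particles, dtype=bool)
moves = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])
curve = []

for _ in range(n_steps):                            # one Monte Carlo micro step
    idx = np.flatnonzero(~released)
    pos[idx] += moves[rng.integers(0, 4, size=idx.size)]
    out = (pos[idx] < 0).any(axis=1) | (pos[idx] >= n).any(axis=1)
    released[idx[out]] = True                       # boundary is a perfect sink
    curve.append(released.mean())                   # fraction released vs time

# Fit the power-law model M(t)/M_inf = k * t^m on the window it is meant for
t = np.arange(1, n_steps + 1)
frac = np.array(curve)
window = (frac > 0.01) & (frac < 0.6)               # ~first 60% of the curve
m, logk = np.polyfit(np.log(t[window]), np.log(frac[window]), 1)
print(f"fitted power-law exponent m = {m:.2f}")
```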

  20. Regional TEC dynamic modeling based on Slepian functions

    NASA Astrophysics Data System (ADS)

    Sharifi, Mohammad Ali; Farzaneh, Saeed

    2015-09-01

    In this work, the three-dimensional state of the ionosphere has been estimated by combining spherical Slepian harmonic functions with a Kalman filter. The spherical Slepian functions have been used to establish the observation equations because of their suitability for local modeling: spherical harmonics are poor choices for representing or analyzing geophysical processes without perfect global coverage, whereas Slepian functions afford spatial and spectral selectivity. The Kalman filter has been utilized for the parameter estimation owing to its suitability for processing GPS measurements in real time. The proposed model has been applied to real data obtained from ground-based GPS observations across a portion of the IGS network in Europe. The results have been compared with the TECs estimated by the CODE, ESA and IGS centers and by the IRI-2012 model. The results indicate that the proposed model, which takes advantage of the Slepian basis and the Kalman filter, is efficient and allows for the generation of near-real-time regional TEC maps.

  1. Process-based Modeling of Ammonia Emission from Beef Cattle Feedyards with the Integrated Farm Systems Model.

    PubMed

    Waldrip, Heidi M; Rotz, C Alan; Hafner, Sasha D; Todd, Richard W; Cole, N Andy

    2014-07-01

    Ammonia (NH3) volatilization from manure in beef cattle feedyards results in loss of agronomically important nitrogen (N) and potentially leads to overfertilization and acidification of aquatic and terrestrial ecosystems. In addition, NH3 is involved in the formation of atmospheric fine particulate matter (PM), which can affect human health. Process-based models have been developed to estimate NH3 emissions from various livestock production systems; however, little work has been conducted to assess their accuracy for large, open-lot beef cattle feedyards. This work describes the extension of an existing process-based model, the Integrated Farm Systems Model (IFSM), to include simulation of N dynamics in this type of system. To evaluate the model, IFSM-simulated daily per capita NH3 emission rates were compared with emissions data collected from two commercial feedyards in the Texas High Plains from 2007 to 2009. Model predictions were in good agreement with observations and were sensitive to variations in air temperature and dietary crude protein concentration. Predicted mean daily NH3 emission rates for the two feedyards had 71 to 81% agreement with observations. In addition, IFSM estimates of annual feedyard emissions were within 11 to 24% of observations, whereas a constant emission factor currently in use by the USEPA underestimated feedyard emissions by as much as 79%. The results from this study indicate that IFSM can quantify average feedyard NH3 emissions, assist with emissions reporting, provide accurate information for legislators and policymakers, investigate methods to mitigate NH3 losses, and evaluate the effects of specific management practices on farm nutrient balances. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.

  2. Multi-scale hydrometeorological observation and modelling for flash flood understanding

    NASA Astrophysics Data System (ADS)

    Braud, I.; Ayral, P.-A.; Bouvier, C.; Branger, F.; Delrieu, G.; Le Coz, J.; Nord, G.; Vandervaere, J.-P.; Anquetin, S.; Adamovic, M.; Andrieu, J.; Batiot, C.; Boudevillain, B.; Brunet, P.; Carreau, J.; Confoland, A.; Didon-Lescot, J.-F.; Domergue, J.-M.; Douvinet, J.; Dramais, G.; Freydier, R.; Gérard, S.; Huza, J.; Leblois, E.; Le Bourgeois, O.; Le Boursicaud, R.; Marchand, P.; Martin, P.; Nottale, L.; Patris, N.; Renard, B.; Seidel, J.-L.; Taupin, J.-D.; Vannier, O.; Vincendon, B.; Wijbrans, A.

    2014-09-01

    This paper presents a coupled observation and modelling strategy aiming at improving the understanding of processes triggering flash floods. This strategy is illustrated for the Mediterranean area using two French catchments (Gard and Ardèche) larger than 2000 km2. The approach is based on the monitoring of nested spatial scales: (1) the hillslope scale, where processes influencing the runoff generation and its concentration can be tackled; (2) the small to medium catchment scale (1-100 km2), where the impact of the network structure and of the spatial variability of rainfall, landscape and initial soil moisture can be quantified; (3) the larger scale (100-1000 km2), where the river routing and flooding processes become important. These observations are part of the HyMeX (HYdrological cycle in the Mediterranean EXperiment) enhanced observation period (EOP), which will last 4 years (2012-2015). In terms of hydrological modelling, the objective is to set up regional-scale models, while addressing small and generally ungauged catchments, which represent the scale of interest for flood risk assessment. Top-down and bottom-up approaches are combined and the models are used as "hypothesis testing" tools by coupling model development with data analyses in order to incrementally evaluate the validity of model hypotheses. The paper first presents the rationale behind the experimental set-up and the instrumentation itself. Second, we discuss the associated modelling strategy. Results illustrate the potential of the approach in advancing our understanding of flash flood processes on various scales.

  3. Multi-scale hydrometeorological observation and modelling for flash-flood understanding

    NASA Astrophysics Data System (ADS)

    Braud, I.; Ayral, P.-A.; Bouvier, C.; Branger, F.; Delrieu, G.; Le Coz, J.; Nord, G.; Vandervaere, J.-P.; Anquetin, S.; Adamovic, M.; Andrieu, J.; Batiot, C.; Boudevillain, B.; Brunet, P.; Carreau, J.; Confoland, A.; Didon-Lescot, J.-F.; Domergue, J.-M.; Douvinet, J.; Dramais, G.; Freydier, R.; Gérard, S.; Huza, J.; Leblois, E.; Le Bourgeois, O.; Le Boursicaud, R.; Marchand, P.; Martin, P.; Nottale, L.; Patris, N.; Renard, B.; Seidel, J.-L.; Taupin, J.-D.; Vannier, O.; Vincendon, B.; Wijbrans, A.

    2014-02-01

    This paper presents a coupled observation and modelling strategy aiming at improving the understanding of processes triggering flash floods. This strategy is illustrated for the Mediterranean area using two French catchments (Gard and Ardèche) larger than 2000 km2. The approach is based on the monitoring of nested spatial scales: (1) the hillslope scale, where processes influencing the runoff generation and its concentration can be tackled; (2) the small to medium catchment scale (1-100 km2), where the impact of the network structure and of the spatial variability of rainfall, landscape and initial soil moisture can be quantified; (3) the larger scale (100-1000 km2), where the river routing and flooding processes become important. These observations are part of the HyMeX (Hydrological Cycle in the Mediterranean Experiment) Enhanced Observation Period (EOP), which lasts four years (2012-2015). In terms of hydrological modelling, the objective is to set up models at the regional scale, while addressing small and generally ungauged catchments, which is the scale of interest for flood risk assessment. Top-down and bottom-up approaches are combined and the models are used as "hypothesis testing" tools by coupling model development with data analyses, in order to incrementally evaluate the validity of model hypotheses. The paper first presents the rationale behind the experimental set-up and the instrumentation itself. Second, we discuss the associated modelling strategy. Results illustrate the potential of the approach in advancing our understanding of flash flood processes at various scales.

  4. Global validation of a process-based model on vegetation gross primary production using eddy covariance observations.

    PubMed

    Liu, Dan; Cai, Wenwen; Xia, Jiangzhou; Dong, Wenjie; Zhou, Guangsheng; Chen, Yang; Zhang, Haicheng; Yuan, Wenping

    2014-01-01

    Gross Primary Production (GPP) is the largest flux in the global carbon cycle. However, large uncertainties in current global estimates persist. In this study, we examined the performance of a process-based model (the Integrated BIosphere Simulator, IBIS) at 62 eddy covariance sites around the world. Our results indicated that the IBIS model explained 60% of the observed variation in daily GPP across all validation sites. Comparison with a satellite-based vegetation model (Eddy Covariance-Light Use Efficiency, EC-LUE) revealed that the IBIS simulations yielded GPP results comparable to those of the EC-LUE model. Global mean GPP estimated by the IBIS model was 107.50 ± 1.37 Pg C year⁻¹ (mean ± standard deviation) across the vegetated area for the period 2000-2006, consistent with the results of the EC-LUE model (109.39 ± 1.48 Pg C year⁻¹). To evaluate the uncertainty introduced by the parameter Vcmax, which represents the maximum photosynthetic capacity, we inverted Vcmax using Markov chain Monte Carlo (MCMC) procedures. Using the inverted Vcmax values, the simulated global GPP increased by 16.5 Pg C year⁻¹, indicating that the IBIS model is sensitive to Vcmax and that large uncertainty exists in the model parameterization.

  5. Advancing land surface model development with satellite-based Earth observations

    NASA Astrophysics Data System (ADS)

    Orth, Rene; Dutra, Emanuel; Trigo, Isabel F.; Balsamo, Gianpaolo

    2017-04-01

    The land surface forms an essential part of the climate system. It interacts with the atmosphere through the exchange of water and energy and hence influences weather and climate, as well as their predictability. Correspondingly, the land surface model (LSM) is an essential part of any weather forecasting system. LSMs rely on partly poorly constrained parameters, due to sparse land surface observations. With the use of newly available land surface temperature observations, we show in this study that novel satellite-derived datasets help to improve LSM configuration, and hence can contribute to improved weather predictability. We use the Hydrology Tiled ECMWF Scheme of Surface Exchanges over Land (HTESSEL) and validate it comprehensively against an array of Earth observation reference datasets, including the new land surface temperature product. This reveals satisfactory model performance in terms of hydrology, but poor performance in terms of land surface temperature. This is due to inconsistencies of process representations in the model as identified from an analysis of perturbed parameter simulations. We show that HTESSEL can be more robustly calibrated with multiple instead of single reference datasets as this mitigates the impact of the structural inconsistencies. Finally, performing coupled global weather forecasts we find that a more robust calibration of HTESSEL also contributes to improved weather forecast skills. In summary, new satellite-based Earth observations are shown to enhance the multi-dataset calibration of LSMs, thereby improving the representation of insufficiently captured processes, advancing weather predictability and understanding of climate system feedbacks. Orth, R., E. Dutra, I. F. Trigo, and G. Balsamo (2016): Advancing land surface model development with satellite-based Earth observations. Hydrol. Earth Syst. Sci. Discuss., doi:10.5194/hess-2016-628

  6. Process factors facilitating and inhibiting medical ethics teaching in small groups.

    PubMed

    Bentwich, Miriam Ethel; Bokek-Cohen, Ya'arit

    2017-11-01

    To examine process factors that either facilitate or inhibit learning medical ethics during case-based learning. A qualitative research approach using microanalysis of transcribed videotaped discussions of three consecutive small-group learning (SGL) sessions on medical ethics teaching (MET) for three groups, each with 10 students. This research effort revealed 12 themes of learning strategies, divided into 6 coping and 6 evasive strategies. Cognitive-based strategies were found to relate to Kamin's model of critical thinking in medical education, thereby supporting our distinction between the themes of coping and evasive strategies. The findings also showed that cognitive efforts as well as emotional strategies are involved in discussions of ethical dilemmas. Based on Kamin's model and the constructivist learning theory, an examination of the different themes within the two learning strategies-coping and evasive-revealed that these strategies may be understood as corresponding to process factors either facilitating or inhibiting MET in SGL, respectively. Our classification offers a more nuanced observation, specifically geared to pinpointing the desired and less desired process factors in the learning involved in MET in the SGL environment. Two key advantages of this observation are: (1) it brings to the forefront process factors that may inhibit and not merely facilitate MET in SGL and (2) it acknowledges the existence of emotional and not just cognitive process factors. Further enhancement of MET in SGL may thus be achieved based on these observations. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  7. Local Scale Radiobrightness Modeling During the Intensive Observing Period-4 of the Cold Land Processes Experiment-1

    NASA Astrophysics Data System (ADS)

    Kim, E.; Tedesco, M.; de Roo, R.; England, A. W.; Gu, H.; Pham, H.; Boprie, D.; Graf, T.; Koike, T.; Armstrong, R.; Brodzik, M.; Hardy, J.; Cline, D.

    2004-12-01

    The NASA Cold Land Processes Field Experiment (CLPX-1) was designed to provide microwave remote sensing observations and ground truth for studies of snow and frozen ground remote sensing, particularly issues related to scaling. CLPX-1 was conducted in 2002 and 2003 in Colorado, USA. One of the goals of the experiment was to test the capabilities of microwave emission models at different scales. Initial forward model validation work has concentrated on the Local-Scale Observation Site (LSOS), a 0.8 ha study site consisting of open meadows separated by trees, where the most detailed measurements were made of snow depth and temperature, density, and grain size profiles. Results obtained for the 3rd Intensive Observing Period (IOP3; February 2003, dry snow) suggest that a model based on Dense Medium Radiative Transfer (DMRT) theory is able to reproduce the recorded brightness temperatures using snow parameters derived from field measurements. This paper focuses on the ability of forward DMRT modelling, combined with snowpack measurements, to reproduce the radiobrightness signatures observed by the University of Michigan's Truck-Mounted Radiometer System (TMRS) at 19 and 37 GHz during the 4th IOP (IOP4) in March 2003. Unlike IOP3, conditions during IOP4 include both wet and dry periods, providing a valuable test of DMRT model performance. In addition, a comparison will be made for the one day of coincident observations by the University of Tokyo's Ground-Based Microwave Radiometer-7 (GBMR-7) and the TMRS. The plot-scale study in this paper establishes a baseline of DMRT performance for later studies at successively larger scales. These scaling studies will help guide the choice of future snow retrieval algorithms and the design of future Cold Lands observing systems.

  8. The impacts of data constraints on the predictive performance of a general process-based crop model (PeakN-crop v1.0)

    NASA Astrophysics Data System (ADS)

    Caldararu, Silvia; Purves, Drew W.; Smith, Matthew J.

    2017-04-01

    Improving international food security under a changing climate and increasing human population will be greatly aided by improving our ability to modify, understand and predict crop growth. What we predominantly have at our disposal are either process-based models of crop physiology or statistical analyses of yield datasets, both of which suffer from various sources of error. In this paper, we present a generic process-based crop model (PeakN-crop v1.0) which we parametrise using a Bayesian model-fitting algorithm and three different data sources: space-based vegetation indices, eddy covariance productivity measurements and regional crop yields. We show that the model parametrised without data, based on prior knowledge of the parameters, can largely capture the observed behaviour, but the data-constrained model greatly improves the model fit and reduces the prediction uncertainty. We investigate the extent to which each dataset contributes to the model performance and show that, while all data improve on the prior model fit, the satellite-based data and crop yield estimates are particularly important for reducing model error and uncertainty. Despite these improvements, we conclude that significant knowledge gaps remain in terms of the data available for model parametrisation, but our study can help indicate the data collection necessary to improve our predictions of crop yields and crop responses to environmental changes.
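
    The Bayesian model-fitting step can be illustrated generically. The sketch below fits a toy logistic biomass-growth model to synthetic observations with a random-walk Metropolis sampler; the model, priors, proposal scales and data are all invented for the example and are not PeakN-crop's:

```python
import numpy as np

rng = np.random.default_rng(8)

days = np.arange(0, 120, 10.0)

def crop_model(theta, t=days):
    """Logistic biomass growth: capacity K, rate r, inflection day t0."""
    K, r, t0 = theta
    return K / (1.0 + np.exp(-r * (t - t0)))

theta_true = np.array([8.0, 0.09, 60.0])        # e.g. t/ha, 1/day, day
obs = crop_model(theta_true) + rng.normal(0, 0.3, days.size)

def log_post(theta):
    K, r, t0 = theta
    if not (0 < K < 20 and 0 < r < 1 and 0 < t0 < 120):   # flat priors
        return -np.inf
    return -0.5 * np.sum((obs - crop_model(theta)) ** 2) / 0.3**2

theta, lp, chain = np.array([5.0, 0.05, 50.0]), -np.inf, []
lp = log_post(theta)
for _ in range(20000):
    prop = theta + rng.normal(0, [0.15, 0.003, 1.0])      # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:               # Metropolis accept
        theta, lp = prop, lp_prop
    chain.append(theta)

chain = np.array(chain[5000:])                            # drop burn-in
print("posterior means:", chain.mean(axis=0))
print("posterior sds:  ", chain.std(axis=0))
```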

  9. Dynamics of cross-bridge cycling, ATP hydrolysis, force generation, and deformation in cardiac muscle

    PubMed Central

    Tewari, Shivendra G.; Bugenhagen, Scott M.; Palmer, Bradley M.; Beard, Daniel A.

    2015-01-01

    Despite extensive study over the past six decades the coupling of chemical reaction and mechanical processes in muscle dynamics is not well understood. We lack a theoretical description of how chemical processes (metabolite binding, ATP hydrolysis) influence and are influenced by mechanical processes (deformation and force generation). To address this need, a mathematical model of the muscle cross-bridge (XB) cycle based on Huxley’s sliding filament theory is developed that explicitly accounts for the chemical transformation events and the influence of strain on state transitions. The model is identified based on elastic and viscous moduli data from mouse and rat myocardial strips over a range of perturbation frequencies, and MgATP and inorganic phosphate (Pi) concentrations. Simulations of the identified model reproduce the observed effects of MgATP and MgADP on the rate of force development. Furthermore, simulations reveal that the rate of force re-development measured in slack-restretch experiments is not directly proportional to the rate of XB cycling. For these experiments, the model predicts that the observed increase in the rate of force generation with increased Pi concentration is due to inhibition of cycle turnover by Pi. Finally, the model captures the observed phenomena of force yielding suggesting that it is a result of rapid detachment of stretched attached myosin heads. PMID:25681584

  10. Gaze-contingent displays: a review.

    PubMed

    Duchowski, Andrew T; Cournia, Nathan; Murphy, Hunter

    2004-12-01

    Gaze-contingent displays (GCDs) attempt to balance the amount of information displayed against the visual information processing capacity of the observer through real-time eye movement sensing. Based on the assumed knowledge of the instantaneous location of the observer's focus of attention, GCD content can be "tuned" through several display processing means. Screen-based displays alter pixel-level information, generally matching the resolvability of the human retina in an effort to maximize bandwidth. Model-based displays alter geometric-level primitives with similar goals. Attentive user interfaces (AUIs) manage object-level entities (e.g., windows, applications) depending on the assumed attentive state of the observer. Such real-time display manipulation is generally achieved through non-contact, unobtrusive tracking of the observer's eye movements. This paper briefly reviews past and present display techniques as well as emerging graphics and eye tracking technology for GCD development.
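
    A screen-based GCD can be approximated in a few lines: keep a sharp region around the current gaze sample and degrade the periphery. The sketch below is a toy single-level version (real systems use multi-resolution pyramids and millisecond-scale update paths), and the frame, gaze coordinates, and blur strength are placeholders.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def gaze_contingent(frame, gaze_xy, fovea_px=80):
            # Blend a sharp fovea into a blurred periphery around the gaze point
            blurred = gaussian_filter(frame, sigma=4)
            yy, xx = np.mgrid[0:frame.shape[0], 0:frame.shape[1]]
            dist = np.hypot(yy - gaze_xy[1], xx - gaze_xy[0])
            w = np.clip((dist - fovea_px) / fovea_px, 0.0, 1.0)  # 0 sharp, 1 blurred
            return (1.0 - w) * frame + w * blurred

        frame = np.random.rand(480, 640)         # stand-in for a grayscale frame
        out = gaze_contingent(frame, gaze_xy=(320, 240))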

  11. Implementation of nursing conceptual models: observations of a multi-site research team.

    PubMed

    Shea, H; Rogers, M; Ross, E; Tucker, D; Fitch, M; Smith, I

    1989-01-01

    The general acceptance by nursing of the nursing process as the methodology of practice enabled nurses to have a common grounding for practice, research and theory development in the 1970s. It has become clear, however, that the nursing process is just that--a process. What is sorely needed is the nursing content for that process, and consequently in the past 10 years nursing theorists have further developed their particular conceptual models (CMs). Three major teaching hospitals in Toronto have instituted a conceptual model of nursing as a basis of nursing practice. Mount Sinai Hospital has adopted Roy's adaptation model; Sunnybrook Medical Centre, King's goal attainment model; and Toronto General Hospital, Orem's self-care deficit theory model. All of these hospitals are affiliated through a series of cross appointments with the Faculty of Nursing at the University of Toronto. Two community hospitals, Mississauga and Scarborough General, have also adopted Orem's model and are related to the University through educational, community and interest groups. A group of researchers from these hospitals and the University of Toronto have proposed a collaborative project to determine what impact using a conceptual model makes on nursing practice. Discussions among the participants of this research group indicate that there are observations associated with instituting conceptual models that can be identified early in the process of implementation. These observations may be of assistance to others contemplating the implementation of conceptually based practice in their institution.

  12. Evaluation of Cirrus Cloud Simulations using ARM Data-Development of Case Study Data Set

    NASA Technical Reports Server (NTRS)

    Starr, David OC.; Demoz, Belay; Wang, Yansen; Lin, Ruei-Fong; Lare, Andrew; Mace, Jay; Poellot, Michael; Sassen, Kenneth; Brown, Philip

    2002-01-01

    Cloud-resolving models (CRMs) are being increasingly used to develop parametric treatments of clouds and related processes for use in global climate models (GCMs). CRMs represent the integrated knowledge of the physical processes acting to determine cloud system lifecycle and are well matched to typical observational data in terms of physical parameters/measurables and scale-resolved physical processes. Thus, they are suitable for direct comparison to field observations for model validation and improvement. The goal of this project is to improve state-of-the-art CRMs used for studies of cirrus clouds and to establish a relative calibration with GCMs through comparisons among CRMs, single column model (SCM) versions of the GCMs, and observations. The objective is to compare and evaluate a variety of CRMs and SCMs, under the auspices of the GEWEX Cloud System Study (GCSS) Working Group on Cirrus Cloud Systems (WG2), using ARM data acquired at the Southern Great Plains (SGP) site. This poster will report on progress in developing a suitable WG2 case study data set based on the September 26, 1996 ARM IOP case - the Hurricane Nora outflow case. Progress in assessing cloud and other environmental conditions will be described. Results of preliminary simulations using a regional cloud system model (MM5) and a CRM will be discussed. Focal science questions for the model comparison are strongly based on results of the idealized GCSS WG2 cirrus cloud model comparison projects (Idealized Cirrus Cloud Model Comparison Project and Cirrus Parcel Model Comparison Project), which will also be briefly summarized.

  13. Interactive Computing and Processing of NASA Land Surface Observations Using Google Earth Engine

    NASA Technical Reports Server (NTRS)

    Molthan, Andrew; Burks, Jason; Bell, Jordan

    2016-01-01

    Google's Earth Engine offers a "big data" approach to processing large volumes of NASA and other remote sensing products (https://earthengine.google.com/). Interfaces include JavaScript- and Python-based APIs, useful for accessing and processing long periods of record for Landsat and MODIS observations. Other data sets are frequently added, including weather and climate model data sets. Demonstrations here focus on exploratory efforts to perform land surface change detection related to severe weather and other disaster events.
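
    As a flavour of the Python API, the sketch below computes a before/after NDVI difference over a region, a simple form of the change detection described. The collection ID and band names refer to the public Landsat 8 Collection 2 surface-reflectance data in Earth Engine; the region and dates are placeholders.

        import ee
        ee.Initialize()

        region = ee.Geometry.Rectangle([-87.0, 34.0, -86.0, 35.0])

        def median_ndvi(start, end):
            col = (ee.ImageCollection('LANDSAT/LC08/C02/T1_L2')
                   .filterBounds(region)
                   .filterDate(start, end))
            return col.median().normalizedDifference(['SR_B5', 'SR_B4'])

        # Change detection: NDVI difference across a hypothetical event date
        change = median_ndvi('2016-05-01', '2016-06-01').subtract(
            median_ndvi('2016-03-01', '2016-04-01'))
        stats = change.reduceRegion(ee.Reducer.mean(), region, scale=30)
        print(stats.getInfo())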

  14. Assessing the detail needed to capture rainfall-runoff dynamics with physics-based hydrologic response simulation

    USGS Publications Warehouse

    Mirus, B.B.; Ebel, B.A.; Heppner, C.S.; Loague, K.

    2011-01-01

    Concept development simulation with distributed, physics-based models provides a quantitative approach for investigating runoff generation processes across environmental conditions. Disparities within data sets employed to design and parameterize boundary value problems used in heuristic simulation inevitably introduce various levels of bias. The objective was to evaluate the impact of boundary value problem complexity on process representation for different runoff generation mechanisms. The comprehensive physics-based hydrologic response model InHM has been employed to generate base case simulations for four well-characterized catchments. The C3 and CB catchments are located within steep, forested environments dominated by subsurface stormflow; the TW and R5 catchments are located in gently sloping rangeland environments dominated by Dunne and Horton overland flows. Observational details are well captured within all four of the base case simulations, but the characterization of soil depth, permeability, rainfall intensity, and evapotranspiration differs for each. These differences are investigated through the conversion of each base case into a reduced case scenario, all sharing the same level of complexity. Evaluation of how individual boundary value problem characteristics impact simulated runoff generation processes is facilitated by quantitative analysis of integrated and distributed responses at high spatial and temporal resolution. Generally, the base case reduction causes moderate changes in discharge and runoff patterns, with the dominant process remaining unchanged. Moderate differences between the base and reduced cases highlight the importance of detailed field observations for parameterizing and evaluating physics-based models. Overall, similarities between the base and reduced cases indicate that the simpler boundary value problems may be useful for concept development simulation to investigate fundamental controls on the spectrum of runoff generation mechanisms. Copyright 2011 by the American Geophysical Union.

  15. The Monash University Interactive Simple Climate Model

    NASA Astrophysics Data System (ADS)

    Dommenget, D.

    2013-12-01

    The Monash University Interactive Simple Climate Model is a web-based interface that allows students and the general public to explore the physical simulation of the climate system with a real global climate model. It is based on the Globally Resolved Energy Balance (GREB) model, a climate model published by Dommenget and Floeter [2011] in the peer-reviewed journal Climate Dynamics. The model simulates most of the main physical processes in the climate system in a very simplistic way and therefore allows very fast and simple climate model simulations on a normal PC. Despite its simplicity, the model simulates the climate response to external forcings, such as a doubling of CO2 concentrations, very realistically (similar to state-of-the-art climate models). The web interface allows you to study the results of more than 2000 different model experiments in an interactive way, to work through a number of tutorials on the interactions of physical processes in the climate system, and to solve some puzzles. By switching physical processes ON and OFF you can deconstruct the climate and learn how the different processes interact to generate the observed climate, and how they interact to generate the IPCC-predicted climate change for an anthropogenic CO2 increase. The presentation will illustrate how this web-based tool works and what possibilities it offers for teaching students.
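
    The flavour of such a model can be conveyed with a zero-dimensional energy balance integrated forward in time. This is far cruder than GREB (no resolved processes, no geography), and all parameter values below are textbook-style illustrations rather than the model's own.

        import numpy as np

        S0 = 1361.0      # solar constant, W m^-2
        alpha = 0.30     # planetary albedo
        sigma = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
        eps = 0.62       # effective emissivity (crude greenhouse effect)
        C = 2.1e8        # mixed-layer heat capacity, J m^-2 K^-1

        T, dt = 273.0, 86400.0                    # initial temperature; 1-day step
        for day in range(365 * 50):               # integrate 50 years to equilibrium
            net = S0 / 4.0 * (1.0 - alpha) - eps * sigma * T ** 4
            T += dt * net / C
        print(f"equilibrium temperature ~ {T - 273.15:.1f} degrees C")

    In such a toy, doubling CO2 corresponds to slightly lowering the effective emissivity eps, which raises the equilibrium temperature; switching processes ON and OFF in the web interface plays the analogous role for the much richer GREB physics.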

  16. Separation of time scales in one-dimensional directed nucleation-growth processes

    NASA Astrophysics Data System (ADS)

    Pierobon, Paolo; Miné-Hattab, Judith; Cappello, Giovanni; Viovy, Jean-Louis; Lagomarsino, Marco Cosentino

    2010-12-01

    Proteins involved in homologous recombination such as RecA and hRad51 polymerize on single- and double-stranded DNA according to nucleation-growth kinetics, which can be monitored by single-molecule in vitro assays. The basic models currently used to extract biochemical rates rely on ensemble averages and are typically based on an underlying process of bidirectional polymerization, in contrast with the often-observed anisotropic polymerization of similar proteins. For these reasons, when one considers single-molecule experiments, the available models are useful for understanding observations only in some regimes. In particular, recent experiments have highlighted a steplike polymerization kinetics. The classical model of one-dimensional nucleation growth, the Kolmogorov-Avrami-Mehl-Johnson (KAMJ) model, predicts the correct polymerization kinetics only in some regimes and fails to predict the steplike behavior. This work illustrates, by simulations and analytical arguments, the limits of applicability of the KAMJ description and proposes a minimal model for the statistics of the steps based on the so-called stick-breaking stochastic process. We argue that this insight might be useful for extracting information on the time and length scales involved in the polymerization kinetics.
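
    The proposed step statistics can be simulated directly. The sketch below draws successive fragment lengths from a stick-breaking process; the Beta(1, a) share taken at each break is a generic choice for illustration, not the specific distribution fitted in the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        def stick_breaking(total_length=1.0, a=3.0, n_steps=20):
            # Break a 'stick' (the DNA substrate) into sequential fragments:
            # each step claims a Beta(1, a)-distributed share of what remains
            remaining, steps = total_length, []
            for _ in range(n_steps):
                cut = rng.beta(1.0, a) * remaining
                steps.append(cut)
                remaining -= cut
            return np.array(steps)

        steps = stick_breaking()
        print(steps[:5], steps.sum())   # early steps are largest; total < 1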

  17. Health behavior change in advance care planning: an agent-based model.

    PubMed

    Ernecoff, Natalie C; Keane, Christopher R; Albert, Steven M

    2016-02-29

    A practical and ethical challenge in advance care planning research is controlling and intervening on human behavior. Additionally, observing dynamic changes in advance care planning (ACP) behavior proves difficult, though tracking changes over time is important for intervention development. Agent-based modeling (ABM) allows researchers to integrate complex behavioral data about advance care planning behaviors and thought processes into a controlled environment that is more easily alterable and observable. Literature to date has not addressed how best to motivate individuals, increase facilitators and reduce barriers associated with ACP. We aimed to build an ABM that applies the Transtheoretical Model of behavior change to ACP as a health behavior and accurately reflects: 1) the rates at which individuals complete the process, 2) how individuals respond to barriers, facilitators, and behavioral variables, and 3) the interactions between these variables. We developed a dynamic ABM of the ACP decision making process based on the stages of change posited by the Transtheoretical Model. We integrated barriers, facilitators, and other behavioral variables that agents encounter as they move through the process. We successfully incorporated ACP barriers, facilitators, and other behavioral variables into our ABM, forming a plausible representation of ACP behavior and decision-making. The resulting distributions across the stages of change replicated those found in the literature, with approximately half of participants in the action-maintenance stage in both the model and the literature. Our ABM is a useful method for representing dynamic social and experiential influences on the ACP decision making process. This model suggests structural interventions, e.g. increasing access to ACP materials in primary care clinics, in addition to improved methods of data collection for behavioral studies, e.g. incorporating longitudinal data to capture behavioral dynamics.
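
    A minimal version of such an ABM needs only a stage index per agent and stochastic transitions shaped by barriers and facilitators. In the sketch below the stage list follows the Transtheoretical Model, but the transition probabilities and the Poisson-distributed barrier/facilitator counts are illustrative assumptions, not the calibrated values from the study.

        import numpy as np

        rng = np.random.default_rng(1)
        STAGES = ['precontemplation', 'contemplation',
                  'preparation', 'action_maintenance']

        def step(stage, barriers, facilitators):
            # Facilitators raise, barriers lower, the odds of advancing a stage
            p = np.clip(0.02 + 0.02 * facilitators - 0.015 * barriers, 0.0, 1.0)
            return stage + 1 if stage < 3 and rng.uniform() < p else stage

        agents = np.zeros(1000, dtype=int)       # all start at precontemplation
        for t in range(100):                     # simulate 100 time steps
            b = rng.poisson(1.0, agents.size)    # barriers encountered this step
            f = rng.poisson(1.0, agents.size)    # facilitators encountered
            agents = np.array([step(s, bi, fi)
                               for s, bi, fi in zip(agents, b, f)])

        for i, name in enumerate(STAGES):
            print(name, np.mean(agents == i))    # stage distribution at the end

    A structural intervention such as easier access to ACP materials would be represented here by raising the facilitator rate, and its effect read off the final stage distribution.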

  18. Potential Applications of Gosat Based Carbon Budget Products to Refine Terrestrial Ecosystem Model

    NASA Astrophysics Data System (ADS)

    Kondo, M.; Ichii, K.

    2011-12-01

    Estimation of carbon exchange in terrestrial ecosystems is fraught with difficulties owing to the complex entanglement of physical and biological processes: thus, the net ecosystem productivity (NEP) estimated from simulations often differs among process-based terrestrial ecosystem models. In addition to the complexity of the system, validation can only be conducted at the point scale, since reliable observations are available only from ground stations. Lacking large-scale spatial data, extension of model simulations to the global scale results in significant uncertainty in the future carbon balance and climate change. The Greenhouse gases Observing SATellite (GOSAT), launched by the Japanese space agency (JAXA) in January 2009, is the first operational satellite expected to deliver the net land-atmosphere carbon budget to the terrestrial biosphere research community. Using that information, the reproducibility of the modelled carbon budget is expected to improve, and hence to give a better estimate of future climate change. This initial analysis seeks to evaluate potential applications of GOSAT observations toward the refinement of terrestrial ecosystem models. The present study was conducted in two stages: a site-based analysis using eddy covariance observation data to assess the potential use of terrestrial carbon fluxes (GPP, RE, and NEP) to refine the model, and an extension of the point-scale analysis to the spatial domain using the CarbonTracker product as a prototype of the GOSAT product. In the first phase of the experiment, it was verified that an optimization routine applied to the terrestrial model Biome-BGC yielded improved results with respect to eddy covariance observation data from the AsiaFlux network. The spatial data sets used in the second phase consisted of GPP from an empirical algorithm (e.g., support vector machine), NEP from CarbonTracker, and RE from their combination. These spatial carbon flux estimates were used to refine the model by applying exactly the same optimization procedure as in the point analysis, and we found that these spatial data help improve the model's overall reproducibility. The GOSAT product is expected to have higher accuracy because it uses global CO2 observations; therefore, with GOSAT data, a better estimate of the terrestrial carbon cycle can be achieved through optimization. More detailed analysis is anticipated upon the arrival of the GOSAT product, to verify the reduction in uncertainty in the future carbon budget and climate change obtainable with the calibrated models, which is the major contribution expected from GOSAT.

  19. Model medication management process in Australian nursing homes using business process modeling.

    PubMed

    Qian, Siyu; Yu, Ping

    2013-01-01

    One of the reasons for end-user avoidance or rejection of health information systems is poor alignment of the system with the healthcare workflow, likely caused by system designers' lack of thorough understanding of the healthcare process. Therefore, understanding the healthcare workflow is the essential first step in the design of optimal technologies that will enable care staff to complete the intended tasks faster and better. The frequent use of multiple or "high-risk" medicines by older people in nursing homes has the potential to increase the medication error rate. To facilitate the design of the information systems with the most potential to improve patient safety, this study aims to understand the medication management process in nursing homes using a business process modeling method. The paper presents the study design and preliminary findings from interviews with two registered nurses, who were team leaders in two nursing homes. Although there were subtle differences in medication management between the two homes, the major medication management activities were similar. Further field observations will be conducted. Based on the data collected from those observations, an as-is process model for medication management will be developed.

  20. Modeling, simulation, and analysis of optical remote sensing systems

    NASA Technical Reports Server (NTRS)

    Kerekes, John Paul; Landgrebe, David A.

    1989-01-01

    Remote sensing of the Earth's resources from space-based sensors has evolved in the past 20 years from a scientific experiment to a commonly used technological tool. The scientific applications and engineering aspects of remote sensing systems have been studied extensively. However, most of these studies have been aimed at understanding individual aspects of the remote sensing process, while relatively few have studied their interrelations. A motivation for studying these interrelationships has arisen with the advent of highly sophisticated configurable sensors as part of the Earth Observing System (EOS) proposed by NASA for the 1990s. Two approaches to investigating remote sensing systems are developed. In one approach, detailed models of the scene, the sensor, and the processing aspects of the system are implemented in a discrete simulation. This approach is useful in creating simulated images with desired characteristics for use in sensor or processing algorithm development. A less complete, but computationally simpler, method based on a parametric model of the system is also developed. In this analytical model the various informational classes are parameterized by their spectral mean vector and covariance matrix. These class statistics are modified by models for the atmosphere, the sensor, and processing algorithms, and an estimate is made of the resulting classification accuracy among the informational classes. These models are applied to the study of the proposed High Resolution Imaging Spectrometer (HRIS). The interrelationships among observational conditions, sensor effects, and processing choices are investigated, with several interesting results.
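
    The parametric approach can be sketched end to end: perturb the class covariances with sensor noise and re-estimate classification accuracy by Monte Carlo. The two-band class statistics and noise levels below are invented for illustration; the analytical model in the paper propagates atmosphere, sensor, and processing effects rather than a single noise term.

        import numpy as np
        from scipy.stats import multivariate_normal

        rng = np.random.default_rng(7)

        # Two informational classes: spectral mean vector and covariance matrix
        mu = [np.array([0.20, 0.35]), np.array([0.28, 0.30])]
        cov = [np.diag([0.002, 0.003]), np.diag([0.003, 0.002])]

        def accuracy(noise_var, n=5000):
            # Inflate class covariances with additive sensor noise, then
            # score a maximum-likelihood Gaussian classifier by simulation
            noisy = [c + noise_var * np.eye(2) for c in cov]
            correct = 0
            for k in range(2):
                x = rng.multivariate_normal(mu[k], noisy[k], n)
                ll = np.stack([multivariate_normal.logpdf(x, mu[j], noisy[j])
                               for j in range(2)])
                correct += np.sum(ll.argmax(axis=0) == k)
            return correct / (2 * n)

        for v in [0.0, 0.001, 0.005]:
            print(f"sensor noise variance {v}: accuracy {accuracy(v):.3f}")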

  1. Quantifying structural uncertainty on fault networks using a marked point process within a Bayesian framework

    NASA Astrophysics Data System (ADS)

    Aydin, Orhun; Caers, Jef Karel

    2017-08-01

    Faults are one of the building blocks for subsurface modeling studies. Incomplete observations of subsurface fault networks lead to uncertainty pertaining to the location, geometry and existence of faults. In practice, gaps in incomplete fault network observations are filled based on tectonic knowledge and the interpreter's intuition about fault relationships. Modeling fault network uncertainty with realistic models that represent tectonic knowledge is still a challenge. Although methods exist that address specific sources of fault network uncertainty and the complexities of fault modeling, a unifying framework is still lacking. In this paper, we propose a rigorous approach to quantify fault network uncertainty. Fault pattern and intensity information are expressed by means of a marked point process, the marked Strauss point process. Fault network information is conditioned to fault surface observations (complete or partial) within a Bayesian framework. A structural prior model is defined to quantitatively express fault patterns, geometries and relationships within the Bayesian framework. Structural relationships between faults, in particular fault abutting relations, are represented with a level-set-based approach. A Markov chain Monte Carlo sampler is used to sample posterior fault network realizations that reflect tectonic knowledge and honor fault observations. We apply the methodology to a field study from the Nankai Trough and Kumano Basin. The target for uncertainty quantification is a deep site with attenuated seismic data, with only partially visible faults and many faults missing from the survey or interpretation. A structural prior model is built from shallow analog sites that are believed to have undergone tectonics similar to those at the site of study. Fault network uncertainty for the field is quantified with fault network realizations that are conditioned to structural rules, tectonic information and partially observed fault surfaces. We show that the proposed methodology generates realistic fault network models conditioned to data and a conceptual model of the underlying tectonics.

  2. Modeling Elevation and Aspect Controls on Emerging Ecohydrologic Processes and Ecosystem Patterns Using the Component-based Landlab Framework

    NASA Astrophysics Data System (ADS)

    Nudurupati, S. S.; Istanbulluoglu, E.; Adams, J. M.; Hobley, D. E. J.; Gasparini, N. M.; Tucker, G. E.; Hutton, E. W. H.

    2014-12-01

    Topography plays a commanding role in the organization of ecohydrologic processes and the resulting vegetation patterns. In the southwestern United States, climate conditions lead to terrain aspect- and elevation-controlled ecosystems, with mesic north-facing and xeric south-facing vegetation types, and changes in biodiversity as a function of elevation, from shrublands at low desert elevations, to mixed grass/shrublands at mid elevations, to forests at high elevations and ridge tops. These observed patterns have been attributed to differences in topography-mediated local soil moisture availability, micro-climatology, and the life history processes of plants that control the chances of plant establishment and survival. While ecohydrologic models represent local vegetation dynamics in sufficient detail down to sub-hourly time scales, plant life history and competition for space and resources have not been adequately represented in models. In this study we develop an ecohydrologic cellular automata model within the Landlab component-based modeling framework. This model couples local vegetation dynamics (biomass production, death) with plant establishment and competition processes for resources and space. The model is used to study vegetation organization in a semiarid New Mexico catchment where elevation and hillslope aspect play a defining role in plant types. Processes that lead to the observed plant types across the landscape are examined by initializing the domain with randomly assigned plant types and systematically changing model parameters that couple plant response with soil moisture dynamics. Climate perturbation experiments are conducted to examine the plant response in space and time. Understanding these inherently transient ecohydrologic systems is critical to improving predictions of climate change impacts on ecosystems.

  3. Steering operational synergies in terrestrial observation networks: opportunity for advancing Earth system dynamics modelling

    NASA Astrophysics Data System (ADS)

    Baatz, Roland; Sullivan, Pamela L.; Li, Li; Weintraub, Samantha R.; Loescher, Henry W.; Mirtl, Michael; Groffman, Peter M.; Wall, Diana H.; Young, Michael; White, Tim; Wen, Hang; Zacharias, Steffen; Kühn, Ingolf; Tang, Jianwu; Gaillardet, Jérôme; Braud, Isabelle; Flores, Alejandro N.; Kumar, Praveen; Lin, Henry; Ghezzehei, Teamrat; Jones, Julia; Gholz, Henry L.; Vereecken, Harry; Van Looy, Kris

    2018-05-01

    Advancing our understanding of Earth system dynamics (ESD) depends on the development of models and other analytical tools that apply physical, biological, and chemical data. This ambition to increase understanding and develop models of ESD based on site observations was the stimulus for creating the networks of Long-Term Ecological Research (LTER) sites, Critical Zone Observatories (CZOs), and others. We organized a survey, the results of which identified pressing gaps in data availability from these networks, in particular for the future development and evaluation of models that represent ESD processes, and provide insights for improvement in both data collection and model integration. From this survey overview of data applications in the context of LTER and CZO research, we identified three challenges: (1) widen the application of terrestrial observation network data in Earth system modelling, (2) develop integrated Earth system models that incorporate process representation and data from multiple disciplines, and (3) identify complementarity in measured variables and spatial extent, and promote synergies among the existing observational networks. These challenges lead to perspectives and recommendations for an improved dialogue between the observation networks and the ESD modelling community, including co-location of sites in the existing networks and further formalizing these recommendations among these communities. Developing these synergies will enable cross-site and cross-network comparison and synthesis studies, which will help produce insights around organizing principles, classifications, and general rules for coupling processes with environmental conditions.

  4. Analysis of the hydrological response of a distributed physically-based model using post-assimilation (EnKF) diagnostics of streamflow and in situ soil moisture observations

    NASA Astrophysics Data System (ADS)

    Trudel, Mélanie; Leconte, Robert; Paniconi, Claudio

    2014-06-01

    Data assimilation techniques not only enhance model simulations and forecasts, they also provide a diagnostic of both the model and the observations used in the assimilation process. In this research, an ensemble Kalman filter was used to assimilate streamflow observations at a basin outlet and at interior locations, as well as soil moisture at two different depths (15 and 45 cm). The simulation model is the distributed physically-based hydrological model CATHY (CATchment HYdrology), and the study site is the Des Anglais watershed, a 690 km2 river basin located in southern Quebec, Canada. Use of Latin hypercube sampling instead of a conventional Monte Carlo method to generate the ensemble reduced the size of the ensemble, and therefore the calculation time. Different post-assimilation diagnostics, based on innovations (observation minus background), analysis residuals (observation minus analysis), and analysis increments (analysis minus background), were used to evaluate assimilation optimality. An important issue in data assimilation is the estimation of error covariance matrices. These diagnostics were also used in a calibration exercise to determine the standard deviations of model parameters, forcing data, and observations that led to optimal assimilations. The analysis of innovations showed a lag between the model forecast and the observations during rainfall events. Assimilation of streamflow observations corrected this discrepancy. Assimilation of outlet streamflow observations improved the Nash-Sutcliffe efficiencies (NSE) between the model forecast (one day) and the observations at both the outlet and interior point locations, owing to the structure of the state vector used. However, assimilation of streamflow observations systematically increased the simulated soil moisture values.
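
    The analysis step at the heart of such an experiment is compact enough to sketch. Below is a generic stochastic (perturbed-observation) EnKF update on a toy ensemble; the state dimension, observation operator, and error levels are placeholders, not the CATHY configuration.

        import numpy as np

        def enkf_analysis(X, y, H, R, rng):
            # X: (n_state, n_ens) forecast ensemble; y: (n_obs,) observations
            # H: (n_obs, n_state) observation operator; R: obs-error covariance
            n_obs, n_ens = y.size, X.shape[1]
            A = X - X.mean(axis=1, keepdims=True)           # ensemble anomalies
            Pf = A @ A.T / (n_ens - 1)                      # forecast covariance
            K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)  # Kalman gain
            Y = y[:, None] + rng.multivariate_normal(
                np.zeros(n_obs), R, n_ens).T                # perturbed observations
            return X + K @ (Y - H @ X)                      # analysis ensemble

        rng = np.random.default_rng(3)
        X = rng.normal(1.0, 0.5, size=(4, 50))   # toy 4-state, 50-member ensemble
        H = np.array([[1.0, 0.0, 0.0, 0.0]])     # observe the first state only
        Xa = enkf_analysis(X, np.array([1.4]), H, np.array([[0.05]]), rng)
        print(X.mean(axis=1), '->', Xa.mean(axis=1))

    The diagnostics discussed above are computed from exactly these quantities: innovations are y - HX before the update, analysis residuals are y - HXa after it, and analysis increments are Xa - X.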

  5. Filtering with Marked Point Process Observations via Poisson Chaos Expansion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun Wei, E-mail: wsun@mathstat.concordia.ca; Zeng Yong, E-mail: zengy@umkc.edu; Zhang Shu, E-mail: zhangshuisme@hotmail.com

    2013-06-15

    We study a general filtering problem with marked point process observations. The motivation comes from modeling financial ultra-high frequency data. First, we rigorously derive the unnormalized filtering equation with marked point process observations under mild assumptions, especially relaxing the bounded condition of stochastic intensity. Then, we derive the Poisson chaos expansion for the unnormalized filter. Based on the chaos expansion, we establish the uniqueness of solutions of the unnormalized filtering equation. Moreover, we derive the Poisson chaos expansion for the unnormalized filter density under additional conditions. To explore the computational advantage, we further construct a new consistent recursive numerical scheme based on the truncation of the chaos density expansion for a simple case. The new algorithm divides the computations into those containing solely system coefficients and those including the observations, assigning the former off-line.

  6. Features Based Assessments of Warm Season Convective Precipitation Forecasts From the High Resolution Rapid Refresh Model

    NASA Astrophysics Data System (ADS)

    Bytheway, Janice L.

    Forecast models have seen vast improvements in recent years, via increased spatial and temporal resolution, rapid updating, assimilation of more observational data, and continued development and improvement of the representation of the atmosphere. One such model is the High Resolution Rapid Refresh (HRRR) model, a 3 km, hourly-updated, convection-allowing model that has been in development since 2010 and running operationally over the contiguous US since 2014. In 2013, the HRRR became the only US model to assimilate radar reflectivity via diabatic assimilation, a process in which the observed reflectivity is used to induce a latent heating perturbation in the model initial state in order to produce precipitation in those areas where it is indicated by the radar. In order to support the continued development and improvement of the HRRR model with regard to forecasts of convective precipitation, the concept of an assessment is introduced. The assessment process aims to connect model output with observations by first validating model performance then attempting to connect that performance to model assumptions, parameterizations and processes to identify areas for improvement. Observations from remote sensing platforms such as radar and satellite can provide valuable information about three-dimensional storm structure and microphysical properties for use in the assessment, including estimates of surface rainfall, hydrometeor types and size distributions, and column moisture content. A features-based methodology is used to identify warm season convective precipitating objects in the 2013, 2014, and 2015 versions of HRRR precipitation forecasts, Stage IV multisensor precipitation products, and Global Precipitation Measurement (GPM) core satellite observations. Quantitative precipitation forecasts (QPFs) are evaluated for biases in hourly rainfall intensity, total rainfall, and areal coverage in both the US Central Plains (29-49N, 85-105W) and US Mountain West (29-49N, 105-125W). Features identified in the model and Stage IV were tracked through time in order to evaluate forecasts through several hours of the forecast period. The 2013 version of the model was found to produce significantly stronger convective storms than observed, with a slight southerly displacement from the observed storms during the peak hours of convective activity (17-00 UTC). This version of the model also displayed a strong relationship between atmospheric water vapor content and cloud thickness over the central plains. In the 2014 and 2015 versions of the model, storms in the western US were found to be smaller and weaker than the observed, and satellite products (brightness temperatures and reflectivities) simulated using model output indicated that many of the forecast storms contained too much ice above the freezing level. Model upgrades intended to decrease the biases seen in early versions include changes to the reflectivity assimilation, the addition of sub-grid scale cloud parameterizations, changes to the representation of surface processes and the addition of aerosol processes to the microphysics. The effects of these changes are evident in each successive version of the model, with reduced biases in intensity, elimination of the southerly bias, and improved representation of the onset of convection.
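
    The object-identification core of a features-based method reduces to a threshold plus connected-component labeling on the rain field. The synthetic field, threshold, and attributes below are illustrative stand-ins for the HRRR and Stage IV grids and the object properties actually assessed.

        import numpy as np
        from scipy import ndimage

        rng = np.random.default_rng(5)

        # Spatially coherent stand-in for an hourly rain-rate grid (mm/h)
        rain = ndimage.gaussian_filter(rng.normal(size=(200, 200)), sigma=4)
        rain = np.maximum(rain / rain.std() * 3.0, 0.0)

        mask = rain > 4.0                         # precipitating-object threshold
        labels, n_objects = ndimage.label(mask)   # connected-component features

        idx = range(1, n_objects + 1)
        areas = ndimage.sum(mask, labels, index=idx)      # object areas (pixels)
        peaks = ndimage.maximum(rain, labels, index=idx)  # object peak rates
        print(f"{n_objects} objects; mean area {np.mean(areas):.0f} px, "
              f"mean peak rate {np.mean(peaks):.1f} mm/h")

    Matching the labeled objects between forecast and observed grids, and tracking them across forecast hours, is then what allows the intensity, total-rainfall, and areal-coverage biases described above to be attributed to individual storms.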

  7. A dynamic model of reasoning and memory.

    PubMed

    Hawkins, Guy E; Hayes, Brett K; Heit, Evan

    2016-02-01

    Previous models of category-based induction have neglected how the process of induction unfolds over time. We conceive of induction as a dynamic process and provide the first fine-grained examination of the distribution of response times observed in inductive reasoning. We used these data to develop and empirically test the first major quantitative modeling scheme that simultaneously accounts for inductive decisions and their time course. The model assumes that knowledge of similarity relations among novel test probes and items stored in memory drive an accumulation-to-bound sequential sampling process: Test probes with high similarity to studied exemplars are more likely to trigger a generalization response, and more rapidly, than items with low exemplar similarity. We contrast data and model predictions for inductive decisions with a recognition memory task using a common stimulus set. Hierarchical Bayesian analyses across 2 experiments demonstrated that inductive reasoning and recognition memory primarily differ in the threshold to trigger a decision: Observers required less evidence to make a property generalization judgment (induction) than an identity statement about a previously studied item (recognition). Experiment 1 and a condition emphasizing decision speed in Experiment 2 also found evidence that inductive decisions use lower quality similarity-based information than recognition. The findings suggest that induction might represent a less cautious form of recognition. We conclude that sequential sampling models grounded in exemplar-based similarity, combined with hierarchical Bayesian analysis, provide a more fine-grained and informative analysis of the processes involved in inductive reasoning than is possible solely through examination of choice data. PsycINFO Database Record (c) 2016 APA, all rights reserved.
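
    The decision mechanism is easy to simulate. The sketch below runs accumulation-to-bound trials with two absorbing bounds; the drift, thresholds, and non-decision time are invented values chosen only to echo the paper's interpretation (exemplar similarity sets the drift, response caution sets the threshold).

        import numpy as np

        rng = np.random.default_rng(11)

        def trial(drift, threshold, dt=0.001, noise=1.0, t0=0.3, tmax=5.0):
            # Evidence accumulates to an upper (generalize/recognize) or
            # lower (reject) bound; returns (choice, response time)
            x, t = 0.0, 0.0
            while abs(x) < threshold and t < tmax:
                x += drift * dt + noise * np.sqrt(dt) * rng.normal()
                t += dt
            return x >= threshold, t0 + t

        # A lower threshold (less caution) mimics induction vs recognition
        for label, thr in [('induction', 0.8), ('recognition', 1.2)]:
            out = [trial(drift=1.5, threshold=thr) for _ in range(1000)]
            p = np.mean([c for c, _ in out])
            rt = np.median([t for c, t in out if c])
            print(f"{label}: P(yes) = {p:.2f}, median RT = {rt:.2f} s")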

  8. Mammalian cell culture process for monoclonal antibody production: nonlinear modelling and parameter estimation.

    PubMed

    Selişteanu, Dan; Șendrescu, Dorin; Georgeanu, Vlad; Roman, Monica

    2015-01-01

    Monoclonal antibodies (mAbs) are at present one of the fastest growing products of the pharmaceutical industry, with widespread applications in biochemistry, biology, and medicine. The operation of mAb production processes is predominantly based on empirical knowledge, with improvements achieved through trial-and-error experiments and precedent practices. The nonlinearity of these processes and the absence of suitable instrumentation require an enhanced modelling effort and modern kinetic parameter estimation strategies. The present work is dedicated to nonlinear dynamic modelling and parameter estimation for a mammalian cell culture process used for mAb production. Using a dynamical model of this class of process, an optimization-based technique for estimating the kinetic parameters of the mammalian cell culture model is developed. The estimation is achieved by minimizing an error function with a particle swarm optimization (PSO) algorithm. The proposed estimation approach is analyzed here using a particular model of mammalian cell culture as a case study, but it is generic for this class of bioprocesses. The case study shows that the proposed parameter estimation technique provides a more accurate simulation of the experimentally observed process behaviour than reported in previous studies.
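
    The estimation loop itself is a generic global-best PSO minimizing a sum-of-squares error between model output and data. The Monod-type kinetics, parameter bounds, and swarm settings below are illustrative assumptions, not the cell-culture model or tuning from the paper.

        import numpy as np

        rng = np.random.default_rng(2)

        def model(params, s):
            mu_max, ks = params                  # toy Monod-type growth kinetics
            return mu_max * s / (ks + s)

        s = np.linspace(0.1, 10.0, 30)
        data = model([0.9, 1.5], s) + rng.normal(0, 0.02, s.size)

        def error(params):
            return np.sum((data - model(params, s)) ** 2)

        # Global-best PSO over the 2-D parameter space
        n, w, c1, c2 = 30, 0.7, 1.5, 1.5
        lo, hi = np.array([0.1, 0.1]), np.array([2.0, 5.0])
        pos = rng.uniform(lo, hi, size=(n, 2))
        vel = np.zeros_like(pos)
        pbest, pbest_err = pos.copy(), np.array([error(p) for p in pos])
        gbest = pbest[pbest_err.argmin()].copy()

        for _ in range(200):
            r1, r2 = rng.uniform(size=(2, n, 1))
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, lo, hi)
            err = np.array([error(p) for p in pos])
            better = err < pbest_err
            pbest[better], pbest_err[better] = pos[better], err[better]
            gbest = pbest[pbest_err.argmin()].copy()

        print("estimated parameters:", gbest)    # should approach [0.9, 1.5]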

  9. Mammalian Cell Culture Process for Monoclonal Antibody Production: Nonlinear Modelling and Parameter Estimation

    PubMed Central

    Selişteanu, Dan; Șendrescu, Dorin; Georgeanu, Vlad

    2015-01-01

    Monoclonal antibodies (mAbs) are at present one of the fastest growing products of the pharmaceutical industry, with widespread applications in biochemistry, biology, and medicine. The operation of mAb production processes is predominantly based on empirical knowledge, with improvements achieved through trial-and-error experiments and precedent practices. The nonlinearity of these processes and the absence of suitable instrumentation require an enhanced modelling effort and modern kinetic parameter estimation strategies. The present work is dedicated to nonlinear dynamic modelling and parameter estimation for a mammalian cell culture process used for mAb production. Using a dynamical model of this class of process, an optimization-based technique for estimating the kinetic parameters of the mammalian cell culture model is developed. The estimation is achieved by minimizing an error function with a particle swarm optimization (PSO) algorithm. The proposed estimation approach is analyzed here using a particular model of mammalian cell culture as a case study, but it is generic for this class of bioprocesses. The case study shows that the proposed parameter estimation technique provides a more accurate simulation of the experimentally observed process behaviour than reported in previous studies. PMID:25685797

  10. The PEcAn Project: Accessible Tools for On-demand Ecosystem Modeling

    NASA Astrophysics Data System (ADS)

    Cowdery, E.; Kooper, R.; LeBauer, D.; Desai, A. R.; Mantooth, J.; Dietze, M.

    2014-12-01

    Ecosystem models play a critical role in understanding the terrestrial biosphere and forecasting changes in the carbon cycle; however, current forecasts carry considerable uncertainty. The amount of data being collected and produced is increasing on a daily basis as we enter the "big data" era, but only a fraction of these data is being used to constrain models. Until we solve the problems of model accessibility and model-data communication, none of these resources can be used to their full potential. The Predictive Ecosystem Analyzer (PEcAn) is an ecoinformatics toolbox and a set of workflows that wrap around an ecosystem model and manage the flow of information in and out of regional-scale terrestrial biosphere models (TBMs). Here we present new modules developed in PEcAn to manage the processing of meteorological data, one of the primary driver dependencies for ecosystem models. The module downloads, reads, extracts, and converts meteorological observations to the Unidata Climate Forecast (CF) NetCDF community standard, a convention used by most climate forecast and weather models. The module also automates the conversion from NetCDF to model-specific formats, including basic merging, gap-filling, and downscaling procedures. PEcAn currently supports tower-based micrometeorological observations at Ameriflux and FluxNET sites, site-level CSV-formatted data, and regional and global reanalysis products such as the North American Regional Reanalysis and CRU-NCEP. The workflow is easily extensible to additional products and processing algorithms. These meteorological workflows have been coupled with the PEcAn web interface and now allow anyone to run multiple ecosystem models for any location on Earth by simply clicking on an intuitive Google-map-based interface. This will allow users to more readily compare models to observations at those sites, leading to better calibration and validation. Current work is extending these workflows to also process field, remotely sensed, and historical observations of vegetation composition and structure. The processing of heterogeneous met and veg data within PEcAn is made possible using the Brown Dog cyberinfrastructure tools for unstructured data.

  11. Modeling the Diurnal Tides in the MLT Region with the Doppler Spread Parameterization of Gravity Waves

    NASA Technical Reports Server (NTRS)

    Mayr, H. G.; Mengel, J. G.; Chan, K. L.; Trob, D.; Porter, H. C.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Special Session: SA03 The mesosphere/lower thermosphere region: Structure, dynamics, composition, and emission. Ground-based and satellite observations in the upper mesosphere and lower thermosphere (MLT) reveal large seasonal variations in the horizontal wind fields of the diurnal and semidiurnal tides. To provide an understanding of the observations, we discuss results obtained with our Numerical Spectral Model (NSM), which incorporates the gravity wave Doppler Spread Parameterization (DSP) of Hines. Our model reproduces many of the salient features observed, and we discuss numerical experiments that delineate the important processes involved. Gravity wave momentum deposition and the seasonal variations in the tidal excitation contribute primarily to produce the large equinoctial amplitude maxima in the diurnal tide. Gravity-wave-induced variations in eddy viscosity, not accounted for in the model, have been shown by Akmaev to be important too. For the semidiurnal tide, with its amplitude maximum observed during winter solstice, these processes also contribute, but filtering by the mean zonal circulation is more important. A deficiency of our model is that it cannot reproduce the observed seasonal variations in the phase of the semidiurnal tide, and numerical experiments are being carried out to diagnose the cause and alleviate this problem. The dynamical components of the upper mesosphere are tightly coupled through non-linear processes and wave filtering, and this may constrain the model and require it to reproduce in detail the observed phenomenology.

  12. Implementation of the nursing process in a health area: models and assessment structures used

    PubMed Central

    Huitzi-Egilegor, Joseba Xabier; Elorza-Puyadena, Maria Isabel; Urkia-Etxabe, Jose Maria; Asurabarrena-Iraola, Carmen

    2014-01-01

    OBJECTIVE: to analyze what nursing models and nursing assessment structures have been used in the implementation of the nursing process at the public and private centers in the health area Gipuzkoa (Basque Country). METHOD: a retrospective study was undertaken, based on the analysis of the nursing records used at the 158 centers studied. RESULTS: the Henderson model, Carpenito's bifocal structure, Gordon's assessment structure and the Resident Assessment Instrument Nursing Home 2.0 have been used as nursing models and assessment structures to implement the nursing process. At some centers, the selected model or assessment structure has varied over time. CONCLUSION: Henderson's model has been the most used to implement the nursing process. Furthermore, the trend is observed to complement or replace Henderson's model by nursing assessment structures. PMID:25493672

  13. Thermoplastic matrix composite processing model

    NASA Technical Reports Server (NTRS)

    Dara, P. H.; Loos, A. C.

    1985-01-01

    The effects of the processing parameters pressure, temperature, and time on the quality of continuous graphite fiber reinforced thermoplastic matrix composites were quantitatively assessed by defining the extent to which intimate contact and bond formation have occurred at successive ply interfaces. Two models are presented that predict the extents to which the ply interfaces have achieved intimate contact and cohesive strength. The models are based on experimental observation of compression molded laminates and of neat resin conditions, respectively. The theory of autohesion (or self-diffusion) is identified as the mechanism explaining the phenomenon by which the plies bond to themselves. Theoretical predictions from Reptation Theory relating autohesive strength to contact time are used to explain the effects of the processing parameters on the observed experimental strengths. The application of a time-temperature relationship for autohesive strength predictions is evaluated. A viscoelastic compression molding model of a tow was developed to explain the phenomenon by which the prepreg ply interfaces develop intimate contact.
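
    For context, the Reptation Theory scaling invoked for autohesive strength is commonly written (this is the standard polymer-healing result, not a formula quoted from the paper) as

        sigma(t) / sigma_inf = (t / t_r)^(1/4),   for t <= t_r,

    where sigma(t) is the autohesive bond strength after contact time t, sigma_inf is the fully healed strength, and t_r is the reptation time at the molding temperature. A time-temperature relationship enters through the strong temperature dependence of t_r.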

  14. A New Strategy in Observer Modeling for Greenhouse Cucumber Seedling Growth

    PubMed Central

    Qiu, Quan; Zheng, Chenfei; Wang, Wenping; Qiao, Xiaojun; Bai, He; Yu, Jingquan; Shi, Kai

    2017-01-01

    State observers are an essential component of computerized control loops for greenhouse-crop systems. However, current work on observer modeling for greenhouse-crop systems focuses mainly on mass/energy balance, ignoring the physiological responses of crops. As a result, state observers for crop physiological responses are rarely developed, and control operations are typically based on experience rather than actual crop requirements. In addition, existing observer models require a large number of parameters, leading to heavy computational loads and poor application feasibility. To address these problems, we present a new state observer modeling strategy that takes both environmental information and crop physiological responses into consideration during the observer modeling process. Using greenhouse cucumber seedlings as an instance, we sample 10 physiological parameters of cucumber seedlings at different time points during the exponential growth stage and employ them, together with 8 environmental parameters, to build growth state observers. A support vector machine (SVM) acts as the mathematical tool for observer modeling. Canonical correlation analysis (CCA) is used to select the dominant environmental and physiological parameters in the modeling process. With the dominant parameters, simplified observer models are built and tested. We conduct contrast experiments with different input parameter combinations on simplified and unsimplified observers. Experimental results indicate that physiological information can improve the prediction accuracy of the growth state observers. Furthermore, the simplified observer models can give equivalent or even better performance than the unsimplified ones, which verifies the feasibility of CCA. The current study enables state observers to reflect crop requirements and makes them feasible for applications in simplified form, which is significant for developing intelligent greenhouse control systems for modern greenhouse production. PMID:28848565

  15. A New Strategy in Observer Modeling for Greenhouse Cucumber Seedling Growth.

    PubMed

    Qiu, Quan; Zheng, Chenfei; Wang, Wenping; Qiao, Xiaojun; Bai, He; Yu, Jingquan; Shi, Kai

    2017-01-01

    State observers are an essential component of computerized control loops for greenhouse-crop systems. However, current work on observer modeling for greenhouse-crop systems focuses mainly on mass/energy balance, ignoring the physiological responses of crops. As a result, state observers for crop physiological responses are rarely developed, and control operations are typically based on experience rather than actual crop requirements. In addition, existing observer models require a large number of parameters, leading to heavy computational loads and poor application feasibility. To address these problems, we present a new state observer modeling strategy that takes both environmental information and crop physiological responses into consideration during the observer modeling process. Using greenhouse cucumber seedlings as an instance, we sample 10 physiological parameters of cucumber seedlings at different time points during the exponential growth stage and employ them, together with 8 environmental parameters, to build growth state observers. A support vector machine (SVM) acts as the mathematical tool for observer modeling. Canonical correlation analysis (CCA) is used to select the dominant environmental and physiological parameters in the modeling process. With the dominant parameters, simplified observer models are built and tested. We conduct contrast experiments with different input parameter combinations on simplified and unsimplified observers. Experimental results indicate that physiological information can improve the prediction accuracy of the growth state observers. Furthermore, the simplified observer models can give equivalent or even better performance than the unsimplified ones, which verifies the feasibility of CCA. The current study enables state observers to reflect crop requirements and makes them feasible for applications in simplified form, which is significant for developing intelligent greenhouse control systems for modern greenhouse production.
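
    The CCA-plus-SVM pipeline can be sketched with standard tools. Everything below is synthetic: the 8 environmental and 10 physiological variables, their couplings, and the growth-state target are invented to mirror the setup, and scikit-learn's CCA and SVR stand in for the paper's implementations.

        import numpy as np
        from sklearn.cross_decomposition import CCA
        from sklearn.svm import SVR

        rng = np.random.default_rng(4)

        n = 200
        env = rng.normal(size=(n, 8))                       # environmental inputs
        phys = env[:, :3] @ rng.normal(size=(3, 10)) \
            + 0.3 * rng.normal(size=(n, 10))                # physiological responses
        growth = env[:, 0] + phys[:, 1] + 0.1 * rng.normal(size=n)

        # CCA extracts the dominant, mutually correlated env/phys directions
        cca = CCA(n_components=2).fit(env, phys)
        env_c, phys_c = cca.transform(env, phys)

        # Simplified observer: SVM regression on the dominant components only
        X = np.hstack([env_c, phys_c])
        observer = SVR(kernel='rbf', C=10.0).fit(X[:150], growth[:150])
        print("held-out R^2:", observer.score(X[150:], growth[150:]))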

  16. The implementation of assessment model based on character building to improve students’ discipline and achievement

    NASA Astrophysics Data System (ADS)

    Rusijono; Khotimah, K.

    2018-01-01

    The purpose of this research was to investigate the effect of implementing an assessment model based on character building on students' discipline and achievement. The assessment model based on character building includes three components: student behaviour, effort, and achievement. The model was implemented in the science philosophy and educational assessment courses of the Graduate Program of the Educational Technology Department, Educational Faculty, Universitas Negeri Surabaya. The research used a control-group pre-test and post-test design. The data collection methods were observation and testing: observation was used to collect data on student discipline during the instructional process, while the test was used to collect data on student achievement. A t-test was applied in the data analysis. The results showed that the assessment model based on character building improved students' discipline and achievement.

  17. Coupling biology and oceanography in models.

    PubMed

    Fennel, W; Neumann, T

    2001-08-01

    The dynamics of marine ecosystems, i.e. the changes of observable chemical-biological quantities in space and time, are driven by biological and physical processes. Predictions of future developments of marine systems need a theoretical framework, i.e. models, solidly based on research and understanding of the different processes involved. The natural way to describe marine systems theoretically seems to be the embedding of chemical-biological models into circulation models. However, while circulation models are relatively advanced the quantitative theoretical description of chemical-biological processes lags behind. This paper discusses some of the approaches and problems in the development of consistent theories and indicates the beneficial potential of the coupling of marine biology and oceanography in models.

  18. Introducing Multisensor Satellite Radiance-Based Evaluation for Regional Earth System Modeling

    NASA Technical Reports Server (NTRS)

    Matsui, T.; Santanello, J.; Shi, J. J.; Tao, W.-K.; Wu, D.; Peters-Lidard, C.; Kemp, E.; Chin, M.; Starr, D.; Sekiguchi, M.

    2014-01-01

    Earth System modeling has become more complex, and its evaluation using satellite data has also become more difficult due to model and data diversity. Therefore, the fundamental methodology of using satellite direct measurements with instrumental simulators should be addressed especially for modeling community members lacking a solid background of radiative transfer and scattering theory. This manuscript introduces principles of multisatellite, multisensor radiance-based evaluation methods for a fully coupled regional Earth System model: NASA-Unified Weather Research and Forecasting (NU-WRF) model. We use a NU-WRF case study simulation over West Africa as an example of evaluating aerosol-cloud-precipitation-land processes with various satellite observations. NU-WRF-simulated geophysical parameters are converted to the satellite-observable raw radiance and backscatter under nearly consistent physics assumptions via the multisensor satellite simulator, the Goddard Satellite Data Simulator Unit. We present varied examples of simple yet robust methods that characterize forecast errors and model physics biases through the spatial and statistical interpretation of various satellite raw signals: infrared brightness temperature (Tb) for surface skin temperature and cloud top temperature, microwave Tb for precipitation ice and surface flooding, and radar and lidar backscatter for aerosol-cloud profiling simultaneously. Because raw satellite signals integrate many sources of geophysical information, we demonstrate user-defined thresholds and a simple statistical process to facilitate evaluations, including the infrared-microwave-based cloud types and lidar/radar-based profile classifications.

  19. Modelling the Cooling of Coffee: Insights from a Preliminary Study in Indonesia

    ERIC Educational Resources Information Center

    Widjaja, Wanty

    2010-01-01

    This paper discusses an attempt to examine pre-service teachers' mathematical modelling skills. A modelling project investigating relationships between temperature and time in the process of cooling of coffee was chosen. The analysis was based on group written reports of the cooling of coffee project and observation of classroom discussion.…
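
    The model such a project aims at is Newton's law of cooling, T(t) = T_env + (T0 - T_env) exp(-k t), and fitting it to class data takes only a few lines. The temperature readings below are hypothetical, invented to illustrate the fit.

        import numpy as np
        from scipy.optimize import curve_fit

        T_ENV = 24.0                              # assumed room temperature, deg C

        def newton_cooling(t, k, T0):
            return T_ENV + (T0 - T_ENV) * np.exp(-k * t)

        t = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 30.0])    # minutes
        T = np.array([90.0, 76.5, 66.0, 57.8, 51.2, 41.5])  # deg C (hypothetical)

        (k, T0), _ = curve_fit(newton_cooling, t, T, p0=[0.05, 90.0])
        print(f"cooling constant k = {k:.3f} per minute, T0 = {T0:.1f} deg C")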

  20. Field warming experiments shed light on the wheat yield response to temperature in China

    PubMed Central

    Zhao, Chuang; Piao, Shilong; Huang, Yao; Wang, Xuhui; Ciais, Philippe; Huang, Mengtian; Zeng, Zhenzhong; Peng, Shushi

    2016-01-01

    Wheat growth is sensitive to temperature, but the effect of future warming on yield is uncertain. Here, focusing on China, we compiled 46 observations of the sensitivity of wheat yield to temperature change (S_Y,T, yield change per °C) from field warming experiments and 102 S_Y,T estimates from local process-based and statistical models. The average S_Y,T from field warming experiments, local process-based models and statistical models is −0.7±7.8 (±s.d.) % per °C, −5.7±6.5% per °C and 0.4±4.4% per °C, respectively. Moreover, S_Y,T differs across regions: warming experiments indicate positive S_Y,T values in regions where growing-season mean temperature is low and water supply is not limiting, and negative values elsewhere. Gridded crop model simulations from the Inter-Sectoral Impact Model Intercomparison Project appear to capture the spatial pattern of S_Y,T deduced from the warming observations. These results from local manipulative experiments could be used to improve crop models in the future. PMID:27853151

  1. A necessarily complex model to explain the biogeography of the amphibians and reptiles of Madagascar.

    PubMed

    Brown, Jason L; Cameron, Alison; Yoder, Anne D; Vences, Miguel

    2014-10-09

    Pattern and process are inextricably linked in biogeographic analyses: though we can observe pattern, we must infer process. Inferences of process are often based on ad hoc comparisons using a single spatial predictor. Here, we present an alternative approach that uses mixed-spatial models to measure the predictive potential of combinations of hypotheses. Biodiversity patterns are estimated from 8,362 occurrence records from 745 species of Malagasy amphibians and reptiles. By incorporating 18 spatially explicit predictions of 12 major biogeographic hypotheses, we show that mixed models greatly improve our ability to explain the observed biodiversity patterns. We conclude that patterns are influenced by a combination of diversification processes rather than by a single predominant mechanism. A 'one-size-fits-all' model does not exist. By developing a novel method for examining and synthesizing spatial parameters such as species richness, endemism and community similarity, we demonstrate the potential of these analyses for understanding the diversification history of Madagascar's biota.

  2. Energy-based and process-based constraints on aerosol-climate interaction

    NASA Astrophysics Data System (ADS)

    Suzuki, K.; Sato, Y.; Takemura, T.; Michibata, T.; Goto, D.; Oikawa, E.

    2017-12-01

    Recent advances in both satellite observations and global modeling provide us with a novel opportunity to investigate the long-standing aerosol-climate interaction issue at a fundamental process level, particularly when the two are used in combination. In this presentation, we will highlight our recent progress in understanding the aerosol-cloud-precipitation interaction and its implication for global climate with a synergistic use of a state-of-the-art global climate model (MIROC), a global cloud-resolving model (NICAM) and recent satellite observations (A-Train). In particular, we explore two different aspects of the aerosol-climate interaction issue, i.e. (i) the global energy balance perspective with its modulation due to aerosols and (ii) the process-level characteristics of the aerosol-induced perturbations to cloud and precipitation. For the former, climate model simulations are used to quantify how components of the global energy budget are modulated by the aerosol forcing. The moist processes are shown to be a critical pathway that links the forcing efficacy and the hydrologic sensitivity arising from aerosol perturbations. Effects of scattering (e.g. sulfate) and absorbing (e.g. black carbon) aerosols are compared in this context to highlight their distinctively different impacts on climate and the hydrologic cycle. The aerosol-induced modulation of moist processes is also investigated in the context of the second aspect above to inform recent arguments on possible overestimates of the aerosol-cloud interaction in climate models. Our recent simulations with NICAM highlight how diverse responses of cloud to aerosol perturbation, which traditional climate models have failed to represent, are reproduced by the high-resolution global model with sophisticated cloud microphysics. We will discuss implications of these findings for a linkage between the two aspects above, both to advance process-based understanding of the aerosol-climate interaction and to mitigate a "dichotomy" recently found by the authors between the two aspects in the context of climate projection.

  3. Intelligent earthquake data processing for global adjoint tomography

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Hill, J.; Li, T.; Lei, W.; Ruan, Y.; Lefebvre, M. P.; Tromp, J.

    2016-12-01

    Due to the increased computational capability afforded by modern and future computing architectures, the seismology community is demanding a more comprehensive understanding of the full waveform information from recorded earthquake seismograms. Global waveform tomography is a complex workflow that matches observed seismic data with synthesized seismograms by iteratively updating the earth model parameters based on the adjoint state method. This methodology allows us to compute a very accurate model of the earth's interior. The synthetic data are simulated by solving the wave equation over the entire globe using a spectral-element method. To ensure inversion accuracy and stability, both the synthesized and observed seismograms must be carefully pre-processed. Because the scale of the inversion problem is extremely large and there is a very large volume of data to be read and written, an efficient and reliable pre-processing workflow must be developed. We are investigating intelligent algorithms based on a machine-learning (ML) framework that will automatically tune parameters for the data processing chain. One straightforward application of ML in data processing is to classify all possible misfit calculation windows into usable and unusable ones, based on ML models such as neural networks, support vector machines or principal component analysis. The intelligent earthquake data processing framework will enable the seismology community to compute global waveform tomography using seismic data from an arbitrarily large number of earthquake events in the fastest, most efficient way.
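
    A minimal sketch of the window-screening idea (the window features, labels, and model choice below are hypothetical; per the abstract, the actual framework is still under investigation):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical features per candidate measurement window:
# [cross-correlation coefficient, signal-to-noise ratio, |time shift| in s].
X_train = np.array([
    [0.95, 12.0, 1.0],   # clean window
    [0.90,  8.0, 2.0],
    [0.40,  2.0, 9.0],   # noisy / cycle-skipped window
    [0.55,  3.0, 7.5],
])
y_train = np.array([1, 1, 0, 0])  # 1 = usable, 0 = unusable

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)

# Screen new windows before they enter the adjoint misfit calculation.
print(clf.predict([[0.85, 6.0, 2.5], [0.30, 1.5, 10.0]]))
```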

  4. MODFLOW-2000, the U.S. Geological Survey Modular Ground-Water Model -Documentation of the Hydrogeologic-Unit Flow (HUF) Package

    USGS Publications Warehouse

    Anderman, E.R.; Hill, M.C.

    2000-01-01

    This report documents the Hydrogeologic-Unit Flow (HUF) Package for the groundwater modeling computer program MODFLOW-2000. The HUF Package is an alternative internal flow package that allows the vertical geometry of the system hydrogeology to be defined explicitly within the model using hydrogeologic units that can differ from the definition of the model layers. The HUF Package works with all the processes of MODFLOW-2000. For the Ground-Water Flow Process, the HUF Package calculates effective hydraulic properties for the model layers based on the hydraulic properties of the hydrogeologic units, which are defined by the user using parameters. The hydraulic properties are used to calculate the conductance coefficients and other terms needed to solve the ground-water flow equation. The sensitivity of the model to the parameters defined within the HUF Package input file can be calculated using the Sensitivity Process, with observations defined through the Observation Process. Optimal values of the parameters can be estimated using the Parameter-Estimation Process. The HUF Package is nearly identical to the Layer-Property Flow (LPF) Package, the major difference being the definition of the vertical geometry of the system hydrogeology. Use of the HUF Package is illustrated in two test cases, which also serve to verify the performance of the package by showing that the Parameter-Estimation Process produces the true parameter values when exact observations are used.
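
    As a simplified analogue of the HUF calculation of effective layer properties (the report documents the actual equations; the thickness-weighted average and all numbers below are illustrative assumptions):

```python
# Thickness-weighted effective hydraulic conductivity for one model layer
# built from overlapping hydrogeologic units -- a simplified analogue of
# the HUF calculation, not the package's exact formulation.

def effective_kh(units, layer_top, layer_bot):
    """units: list of (unit_top, unit_bot, Kh); elevations decrease downward."""
    total_t, weighted = 0.0, 0.0
    for top, bot, kh in units:
        # Thickness of this unit that falls inside the model layer.
        t = max(0.0, min(top, layer_top) - max(bot, layer_bot))
        total_t += t
        weighted += t * kh
    return weighted / total_t if total_t > 0 else 0.0

# Two hypothetical units intersecting a layer spanning elevations 100..80 m.
units = [(120.0, 90.0, 5.0), (90.0, 60.0, 0.5)]  # (top, bottom, Kh in m/d)
print(effective_kh(units, layer_top=100.0, layer_bot=80.0))  # -> 2.75 m/d
```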

  5. On-Ground Processing of Yaogan-24 Remote Sensing Satellite Attitude Data and Verification Using Geometric Field Calibration

    PubMed Central

    Wang, Mi; Fan, Chengcheng; Yang, Bo; Jin, Shuying; Pan, Jun

    2016-01-01

    Satellite attitude accuracy is an important factor affecting the geometric processing accuracy of high-resolution optical satellite imagery. Because the accuracy of the Yaogan-24 remote sensing satellite’s on-board attitude data processing is not high enough to meet its image geometry processing requirements, we developed an approach involving on-ground attitude data processing and verification against the digital orthophoto (DOM) and digital elevation model (DEM) of a geometric calibration field. The approach comprises three modules: on-ground processing based on a bidirectional filter, overall weighted smoothing and fitting, and evaluation in the geometric calibration field. Our experimental results demonstrate that the proposed on-ground processing method is both robust and feasible, ensuring the reliability of the observation data quality and the convergence and stability of the parameter estimation model. In addition, both Euler angles and quaternions can be used to build a mathematical fitting model, while the orthogonal polynomial fitting model is more suitable for modeling the attitude parameters. Furthermore, compared to image geometric processing based on the on-board attitude data, the accuracy of uncontrolled and relative geometric positioning of the imagery can be increased by about 50%. PMID:27483287

  6. Process-based upscaling of surface-atmosphere exchange

    NASA Astrophysics Data System (ADS)

    Keenan, T. F.; Prentice, I. C.; Canadell, J.; Williams, C. A.; Wang, H.; Raupach, M. R.; Collatz, G. J.; Davis, T.; Stocker, B.; Evans, B. J.

    2015-12-01

    Empirical upscaling techniques such as machine learning and data mining have proven invaluable for the global scaling of disparate observations of surface-atmosphere exchange, but they are not based on a theoretical understanding of the key processes involved. This makes spatial and temporal extrapolation outside of the training domain difficult at best. There is therefore a clear need to incorporate knowledge of ecosystem function, in combination with the strength of data mining. Here, we present such an approach. We describe a novel diagnostic process-based model of global photosynthesis and ecosystem respiration, which is directly informed by a variety of global datasets relevant to ecosystem state and function. We use the model framework to estimate global carbon cycling both spatially and temporally, with a specific focus on the mechanisms responsible for long-term change. Our results show the importance of incorporating process knowledge into upscaling approaches, and highlight the effect of key processes on the terrestrial carbon cycle.

  7. Integrating statistical and process-based models to produce probabilistic landslide hazard at regional scale

    NASA Astrophysics Data System (ADS)

    Strauch, R. L.; Istanbulluoglu, E.

    2017-12-01

    We develop a landslide hazard modeling approach that integrates a data-driven statistical model and a probabilistic process-based shallow landslide model for mapping probability of landslide initiation, transport, and deposition at regional scales. The empirical model integrates the influence of seven site attribute (SA) classes: elevation, slope, curvature, aspect, land use-land cover, lithology, and topographic wetness index, on over 1,600 observed landslides using a frequency ratio (FR) approach. A susceptibility index is calculated by adding FRs for each SA on a grid-cell basis. Using landslide observations we relate the susceptibility index to an empirically derived probability of landslide impact. This probability is combined with results from a physically-based model to produce an integrated probabilistic map. Slope was key for landslide initiation, while deposition was linked to lithology and elevation. The transition from forest to alpine vegetation and barren land cover, with lower root cohesion, leads to a higher frequency of initiation. Aspect effects are likely linked to differences in root cohesion and moisture controlled by solar insolation and snow. We demonstrate the model in the North Cascades of Washington, USA and identify locations of high and low probability of landslide impacts that can be used by land managers in their design, planning, and maintenance.
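
    A minimal sketch of the frequency ratio (FR) approach on toy 1-D grids (the attribute classes and landslide inventory below are invented, and only two of the seven attributes are shown):

```python
import numpy as np

def frequency_ratio(attr_class, landslide_mask):
    """FR per class: (landslide cells in class / all landslide cells)
       divided by (cells in class / all cells)."""
    fr = {}
    n_cells = attr_class.size
    n_slides = landslide_mask.sum()
    for c in np.unique(attr_class):
        in_class = attr_class == c
        slide_frac = landslide_mask[in_class].sum() / n_slides
        area_frac = in_class.sum() / n_cells
        fr[c] = slide_frac / area_frac
    return fr

# Hypothetical 1-D grids for two site attributes (e.g. slope class, lithology class).
slope = np.array([0, 0, 1, 1, 1, 2, 2, 2, 2, 2])
litho = np.array([0, 1, 1, 0, 0, 1, 1, 1, 0, 1])
slides = np.array([0, 0, 0, 1, 0, 1, 1, 1, 0, 1], dtype=bool)

fr_slope, fr_litho = frequency_ratio(slope, slides), frequency_ratio(litho, slides)
# Susceptibility index: sum the FRs of each cell's classes across attributes.
susceptibility = np.array([fr_slope[s] for s in slope]) + np.array([fr_litho[l] for l in litho])
print(susceptibility)
```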

  8. A data-driven approach to identify controls on global fire activity from satellite and climate observations (SOFIA V1)

    NASA Astrophysics Data System (ADS)

    Forkel, Matthias; Dorigo, Wouter; Lasslop, Gitta; Teubner, Irene; Chuvieco, Emilio; Thonicke, Kirsten

    2017-12-01

    Vegetation fires affect human infrastructures, ecosystems, global vegetation distribution, and atmospheric composition. However, the climatic, environmental, and socioeconomic factors that control global fire activity in vegetation are only poorly understood, and they are represented with varying complexity and formulation in global process-oriented vegetation-fire models. Data-driven model approaches such as machine learning algorithms have successfully been used to identify and better understand the factors controlling fire activity. However, such machine learning models cannot be easily adapted or implemented within process-oriented global vegetation-fire models. To overcome this gap between machine learning-based approaches and process-oriented global fire models, we introduce here a new flexible data-driven fire modelling approach (Satellite Observations to predict FIre Activity, SOFIA, approach version 1). SOFIA models use several predictor variables and functional relationships to estimate burned area and can be easily adapted within more complex process-oriented vegetation-fire models. We created an ensemble of SOFIA models to test the importance of several predictor variables. SOFIA models achieve the highest performance in predicting burned area if they account for a direct restriction of fire activity under wet conditions and if they include a land cover-dependent restriction or allowance of fire activity by vegetation density and biomass. The use of vegetation optical depth data from microwave satellite observations, a proxy for vegetation biomass and water content, yields higher model performance than commonly used vegetation variables from optical sensors. We further analyse spatial patterns of the sensitivity between anthropogenic, climate, and vegetation predictor variables and burned area. We finally discuss how multiple observational datasets on climate, hydrological, vegetation, and socioeconomic variables, together with data-driven modelling and model-data integration approaches, can guide the future development of global process-oriented vegetation-fire models.
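
    A toy sketch of the general idea of multiplying restriction/allowance functions to estimate burned area (the functional forms, parameters, and scaling below are assumptions, not the paper's fitted SOFIA model):

```python
import numpy as np

def logistic(x, x0, k):
    """Generic restriction/allowance factor in [0, 1]."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

def burned_area_fraction(dryness, vod):
    """Toy SOFIA-style model: fire requires dry conditions (restriction
    under wet conditions) and enough vegetation to carry fire (biomass
    allowance, with VOD as a biomass/water-content proxy). All parameter
    values here are illustrative, not from the paper."""
    f_dry = logistic(dryness, x0=0.5, k=10.0)   # suppresses fire when wet
    f_fuel = logistic(vod, x0=0.3, k=8.0)       # needs fuel to carry fire
    return 0.05 * f_dry * f_fuel                # scale to a burned fraction

print(burned_area_fraction(dryness=0.8, vod=0.6))
```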

  9. Statistical physics of medical diagnostics: Study of a probabilistic model.

    PubMed

    Mashaghi, Alireza; Ramezanpour, Abolfazl

    2018-03-01

    We study a diagnostic strategy which is based on the anticipation of the diagnostic process by simulation of the dynamical process starting from the initial findings. We show that such a strategy could result in more accurate diagnoses compared to a strategy that is solely based on the direct implications of the initial observations. We demonstrate this by employing the mean-field approximation of statistical physics to compute the posterior disease probabilities for a given subset of observed signs (symptoms) in a probabilistic model of signs and diseases. A Monte Carlo optimization algorithm is then used to maximize an objective function of the sequence of observations, which favors the more decisive observations resulting in more polarized disease probabilities. We see how the observed signs change the nature of the macroscopic (Gibbs) states of the sign and disease probability distributions. The structure of these macroscopic states in the configuration space of the variables affects the quality of any approximate inference algorithm (so the diagnostic performance) which tries to estimate the sign-disease marginal probabilities. In particular, we find that the simulation (or extrapolation) of the diagnostic process is helpful when the disease landscape is not trivial and the system undergoes a phase transition to an ordered phase.

  10. Statistical physics of medical diagnostics: Study of a probabilistic model

    NASA Astrophysics Data System (ADS)

    Mashaghi, Alireza; Ramezanpour, Abolfazl

    2018-03-01

    We study a diagnostic strategy which is based on the anticipation of the diagnostic process by simulation of the dynamical process starting from the initial findings. We show that such a strategy could result in more accurate diagnoses compared to a strategy that is solely based on the direct implications of the initial observations. We demonstrate this by employing the mean-field approximation of statistical physics to compute the posterior disease probabilities for a given subset of observed signs (symptoms) in a probabilistic model of signs and diseases. A Monte Carlo optimization algorithm is then used to maximize an objective function of the sequence of observations, which favors the more decisive observations resulting in more polarized disease probabilities. We see how the observed signs change the nature of the macroscopic (Gibbs) states of the sign and disease probability distributions. The structure of these macroscopic states in the configuration space of the variables affects the quality of any approximate inference algorithm (so the diagnostic performance) which tries to estimate the sign-disease marginal probabilities. In particular, we find that the simulation (or extrapolation) of the diagnostic process is helpful when the disease landscape is not trivial and the system undergoes a phase transition to an ordered phase.

  11. Physics-based process model approach for detecting discontinuity during friction stir welding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shrivastava, Amber; Pfefferkorn, Frank E.; Duffie, Neil A.

    2015-02-12

    The goal of this work is to develop a method for detecting the creation of discontinuities during friction stir welding. This in situ weld monitoring method could significantly reduce the need for post-process inspection. A process force model and a discontinuity force model were created based on the state-of-the-art understanding of flow around a friction stir welding (FSW) tool. These models are used to predict the FSW forces and the size of discontinuities formed in the weld. Friction stir welds with discontinuities and welds without discontinuities were created, and the differences in force dynamics were observed. In this paper, discontinuities were generated by reducing the tool rotation frequency and increasing the tool traverse speed in order to create "cold" welds. Experimental force data for welds with discontinuities and welds without discontinuities compared favorably with the predicted forces. The model currently overpredicts the discontinuity size.

  12. Optimal filtering and Bayesian detection for friction-based diagnostics in machines.

    PubMed

    Ray, L R; Townsend, J R; Ramasubramanian, A

    2001-01-01

    Non-model-based diagnostic methods typically rely on measured signals that must be empirically related to process behavior or incipient faults. The difficulty in interpreting a signal that is indirectly related to the fundamental process behavior is significant. This paper presents an integrated non-model and model-based approach to detecting when process behavior varies from a proposed model. The method, which is based on nonlinear filtering combined with maximum likelihood hypothesis testing, is applicable to dynamic systems whose constitutive model is well known, and whose process inputs are poorly known. Here, the method is applied to friction estimation and diagnosis during motion control in a rotating machine. A nonlinear observer estimates friction torque in a machine from shaft angular position measurements and the known input voltage to the motor. The resulting friction torque estimate can be analyzed directly for statistical abnormalities, or it can be directly compared to friction torque outputs of an applicable friction process model in order to diagnose faults or model variations. Nonlinear estimation of friction torque provides a variable on which to apply diagnostic methods that is directly related to model variations or faults. The method is evaluated experimentally by its ability to detect normal load variations in a closed-loop controlled motor driven inertia with bearing friction and an artificially-induced external line contact. Results show an ability to detect statistically significant changes in friction characteristics induced by normal load variations over a wide range of underlying friction behaviors.

  13. Quadratic Polynomial Regression using Serial Observation Processing:Implementation within DART

    NASA Astrophysics Data System (ADS)

    Hodyss, D.; Anderson, J. L.; Collins, N.; Campbell, W. F.; Reinecke, P. A.

    2017-12-01

    Many Ensemble-Based Kalman Filtering (EBKF) algorithms process the observations serially. Serial observation processing views the data assimilation process as an iterative sequence of scalar update equations. What is useful about this data assimilation algorithm is that it has very low memory requirements and does not need complex methods to perform the typical high-dimensional inverse calculation of many other algorithms. Recently, the push has been towards the prediction, and therefore the assimilation of observations, for regions and phenomena for which high resolution is required and/or highly nonlinear physical processes are operating. For these situations, a basic hypothesis is that the use of the EBKF is sub-optimal and performance gains could be achieved by accounting for aspects of the non-Gaussianity. To this end, we develop here a new component of the Data Assimilation Research Testbed (DART) to allow a wide variety of users to test this hypothesis. This new version of DART allows one to run several variants of the EBKF as well as several variants of the quadratic polynomial filter using the same forecast model and observations. Differences between the results of the two systems will then highlight the degree of non-Gaussianity in the system being examined. We will illustrate in this work the differences between the performance of linear versus quadratic polynomial regression in a hierarchy of models from Lorenz-63 to a simple general circulation model.
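
    For reference, a minimal perturbed-observation sketch of serial scalar observation processing (this illustrates the general EBKF idea, not DART's exact algorithm or its quadratic polynomial filter; all numbers are invented):

```python
import numpy as np

def serial_enkf_update(ens, obs_vals, obs_ops, obs_err_var, rng):
    """Perturbed-observation EnKF, processing one scalar observation at a
    time. ens: (n_state, n_members) ensemble matrix."""
    ens = ens.copy()
    for y, h, r in zip(obs_vals, obs_ops, obs_err_var):
        hx = h @ ens                                  # obs-space ensemble, (n_members,)
        cov = np.cov(np.vstack([ens, hx]), ddof=1)    # joint state/obs covariance
        gain = cov[:-1, -1] / (cov[-1, -1] + r)       # scalar Kalman gain per state var
        y_pert = y + rng.normal(0.0, np.sqrt(r), hx.size)
        ens += np.outer(gain, y_pert - hx)            # regress scalar innovation onto state
    return ens

# Toy example: 3 state variables, 20 members, 2 scalar observations of x0 and x2.
rng = np.random.default_rng(0)
ens = rng.normal(0.0, 1.0, size=(3, 20))
H = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])]
post = serial_enkf_update(ens, [0.5, -0.2], H, [0.1, 0.1], rng)
print(post.mean(axis=1))   # posterior ensemble mean
```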

  14. Charge transport model in nanodielectric composites based on quantum tunneling mechanism and dual-level traps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Guochang; Chen, George

    Charge transport properties in nanodielectrics show different tendencies at different loading concentrations. The exact mechanisms responsible for charge transport in nanodielectrics have not been detailed, especially for high loading concentrations. A charge transport model in nanodielectrics is proposed based on a quantum tunneling mechanism and dual-level traps. In the model, the thermally assisted hopping (TAH) process for the shallow traps and the tunneling process for the deep traps are considered. For different loading concentrations, the dominant charge transport mechanisms are different. The quantum tunneling mechanism plays a major role in determining charge conduction in nanodielectrics with high loading concentrations, while for low loading concentrations the thermal hopping mechanism dominates the charge conduction process. The model can explain the observed conductivity behavior in nanodielectrics at different loading concentrations.
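
    The two conduction channels can be illustrated with textbook forms (Arrhenius-activated hopping and WKB barrier transmission); these are not the paper's exact expressions, and all material parameters below are invented:

```python
import numpy as np

KB = 8.617e-5    # Boltzmann constant, eV/K
HBAR = 1.055e-34 # reduced Planck constant, J s
M_E = 9.109e-31  # electron mass, kg
EV = 1.602e-19   # J per eV

def hopping_conductivity(T, sigma0=1e-10, Ea=0.5):
    """Thermally assisted hopping: Arrhenius-activated (textbook form)."""
    return sigma0 * np.exp(-Ea / (KB * T))

def tunneling_transmission(barrier_eV, width_nm):
    """WKB-style transmission through a rectangular barrier (textbook form)."""
    kappa = np.sqrt(2.0 * M_E * barrier_eV * EV) / HBAR
    return np.exp(-2.0 * kappa * width_nm * 1e-9)

# Interparticle gaps shrink as loading concentration rises, so the tunneling
# term grows steeply, while hopping only tracks temperature.
for width in (4.0, 2.0, 1.0):   # nm, hypothetical interparticle gaps
    print(width, tunneling_transmission(1.0, width), hopping_conductivity(300.0))
```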

  15. Land Surface Data Assimilation

    NASA Astrophysics Data System (ADS)

    Houser, P. R.

    2012-12-01

    Information about land surface water, energy and carbon conditions is of critical importance to real-world applications such as agricultural production, water resource management, flood prediction, water supply, weather and climate forecasting, and environmental preservation. While ground-based observational networks are improving, the only practical way to observe these land surface states on continental to global scales is via satellites. Remote sensing can make spatially comprehensive measurements of various components of the terrestrial system, but it cannot provide information on the entire system (e.g. evaporation), and the observations represent only an instant in time. Land surface process models may be used to predict temporal and spatial terrestrial dynamics, but these predictions are often poor due to errors in model initialization, parameters, forcing, and physics. Therefore, an attractive prospect is to combine the strengths of land surface models and observations (and minimize the weaknesses) to provide a superior terrestrial state estimate. This is the goal of land surface data assimilation. Data assimilation combines observations into a dynamical model, using the model's equations to provide time continuity and coupling between the estimated fields. Land surface data assimilation aims to utilize both our land surface process knowledge, as embodied in a land surface model, and information that can be gained from observations. Both model predictions and observations are imperfect and we wish to use both synergistically to obtain a more accurate result. Moreover, each contains different kinds of information that, when used together, provide an accuracy level that cannot be obtained individually. Model biases can be mitigated using a complementary calibration and parameterization process. Limited point measurements are often used to calibrate the model(s) and validate the assimilation results. This presentation will provide a brief background on land surface observation, modeling and data assimilation, followed by a discussion of various hydrologic data assimilation challenges, and finally conclude with several land surface data assimilation case studies.
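
    The core of this synergy is the minimum-variance blend of a model estimate with an observation; a scalar sketch (with made-up soil-moisture numbers) is:

```python
def analysis(model_est, model_var, obs, obs_var):
    """Minimum-variance (scalar Kalman/BLUE) blend of a model forecast
    and an observation; the analysis is better than either input alone."""
    gain = model_var / (model_var + obs_var)
    x_a = model_est + gain * (obs - model_est)
    var_a = (1.0 - gain) * model_var
    return x_a, var_a

# Hypothetical soil-moisture example: model says 0.30 (error variance 0.004),
# a satellite retrieval says 0.24 (error variance 0.002).
print(analysis(0.30, 0.004, 0.24, 0.002))   # -> (0.26, ~0.00133)
```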

  16. Analysis of the 20th November 2003 Extreme Geomagnetic Storm using CTIPe Model and GNSS Data

    NASA Astrophysics Data System (ADS)

    Fernandez-Gomez, I.; Borries, C.; Codrescu, M.

    2016-12-01

    The ionospheric instabilities produced by solar activity generate disturbances in ionospheric density (ionospheric storms) with important terrestrial consequences, such as disruptions to communications and positioning. During the 20th November 2003 extreme geomagnetic storm, significant perturbations were produced in the ionosphere-thermosphere system. In this work, we replicate how this system responded to the onset of this particular storm using the Coupled Thermosphere Ionosphere Plasmasphere electrodynamics (CTIPe) physics-based model. CTIPe simulates the changes in the neutral winds, temperature, composition and electron densities. Although modelling the ionosphere under these conditions is a challenging task due to energy flow uncertainties, the model reproduces some of the storm features necessary to interpret the physical mechanisms behind the Total Electron Content (TEC) increase and the dramatic changes in composition during this event. Corresponding effects are observed in the TEC simulations from other physics-based models and in observations derived from the Global Navigation Satellite System (GNSS) and ground-based measurements. The study illustrates the necessity of using both measurements and models to gain a complete understanding of the processes that are most likely responsible for the observed effects.

  17. Single baseline GLONASS observations with VLBI: data processing and first results

    NASA Astrophysics Data System (ADS)

    Tornatore, V.; Haas, R.; Duev, D.; Pogrebenko, S.; Casey, S.; Molera Calvés, G.; Keimpema, A.

    2011-07-01

    Several tests to observe signals transmitted by GLONASS (GLObal NAvigation Satellite System) satellites have been performed using the geodetic VLBI (Very Long Baseline Interferometry) technique. The radio telescopes involved in these experiments were Medicina (Italy) and Onsala (Sweden), both equipped with L-band receivers. Observations at the stations were performed using the standard Mark4 VLBI data acquisition rack and Mark5A disk-based recorders. The goals of the observations were to develop and test the scheduling, signal acquisition and processing routines, and to verify the full tracking pipeline, culminating in the cross-correlation of the recorded data on the Onsala-Medicina baseline. The natural radio source 3c286 was used as a calibrator before the start of the satellite observation sessions. Delay models, including the tropospheric and ionospheric corrections, which are consistent for both far- and near-field sources, are under development. Correlation of the calibrator signal has been performed using the DiFX software, while the satellite signals have been processed using the narrow-band approach with the Metsaehovi software and analysed with a near-field delay model. Delay models for both the calibrator and satellite signals, using the same geometrical, tropospheric and ionospheric models, are under investigation to make a correlation of the satellite signals possible.

  18. Estimation of single plane unbalance parameters of a rotor-bearing system using Kalman filtering based force estimation technique

    NASA Astrophysics Data System (ADS)

    Shrivastava, Akash; Mohanty, A. R.

    2018-03-01

    This paper proposes a model-based method to estimate single-plane unbalance parameters (amplitude and phase angle) of a rotor using a Kalman filter and a recursive least squares based input force estimation technique. The Kalman filter based input force estimation technique requires a state-space model and response measurements. A modified system equivalent reduction expansion process (SEREP) technique is employed to obtain a reduced-order model of the rotor system so that a limited set of response measurements can be used. The method is demonstrated using numerical simulations on a rotor-disk-bearing system. Results are presented for different measurement sets including displacement, velocity, and rotational response. The effects of measurement noise level, filter parameters (process noise covariance and forgetting factor), and modeling error are also presented, and it is observed that the unbalance parameter estimation is robust with respect to measurement noise.
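
    The recursive least squares ingredient can be sketched on a synthetic synchronous force signal (the shaft speed, noise level, and forgetting factor below are invented; the paper's full SEREP/Kalman pipeline is not reproduced):

```python
import numpy as np

# Recursive least squares (RLS) fit of an unbalance force
# f(t) = A cos(w t) + B sin(w t); amplitude = sqrt(A^2 + B^2), phase = atan2(B, A).
rng = np.random.default_rng(1)
w = 2 * np.pi * 25.0                    # shaft speed, rad/s (hypothetical)
t = np.arange(0.0, 1.0, 1e-3)
A_true, B_true = 3.0, -1.5
f_meas = A_true * np.cos(w * t) + B_true * np.sin(w * t) + rng.normal(0, 0.2, t.size)

theta = np.zeros(2)                     # [A, B] estimate
P = np.eye(2) * 1e3                     # inverse-information matrix
lam = 0.995                             # forgetting factor
for tk, fk in zip(t, f_meas):
    phi = np.array([np.cos(w * tk), np.sin(w * tk)])   # regressor
    k = P @ phi / (lam + phi @ P @ phi)                # RLS gain
    theta += k * (fk - phi @ theta)                    # update estimate
    P = (P - np.outer(k, phi @ P)) / lam               # update covariance

amp, phase = np.hypot(*theta), np.arctan2(theta[1], theta[0])
print(f"amplitude ~ {amp:.2f}, phase ~ {np.degrees(phase):.1f} deg")
```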

  19. A hybrid agent-based approach for modeling microbiological systems.

    PubMed

    Guo, Zaiyi; Sloot, Peter M A; Tay, Joc Cing

    2008-11-21

    Models for systems biology commonly adopt differential-equation or agent-based modeling approaches for simulating the processes as a whole. Models based on differential equations presuppose phenomenological intracellular behavioral mechanisms, while models based on a multi-agent approach often use directly translated, and quantitatively less precise, if-then logical rule constructs. We propose an extendible systems model based on a hybrid agent-based approach where biological cells are modeled as individuals (agents) while molecules are represented by quantities. This hybridization in entity representation entails a combined modeling strategy with agent-based behavioral rules and differential equations, thereby balancing the requirements of extendible model granularity with computational tractability. We demonstrate the efficacy of this approach with models of chemotaxis involving an assay of 10^3 cells and 1.2×10^6 molecules. The model produces cell migration patterns that are comparable to laboratory observations.
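
    A minimal sketch of the hybrid idea, with cells as discrete agents and the molecule field as a continuously updated quantity (the rules, rates, and 1-D geometry are invented; periodic boundaries are used for brevity):

```python
import numpy as np

# Hybrid toy model: cells are discrete agents; the chemoattractant is a 1-D
# concentration field updated as a quantity (difference-equation step).
rng = np.random.default_rng(2)
n_bins = 100
conc = np.linspace(0.0, 1.0, n_bins)          # chemoattractant field
cells = rng.integers(0, n_bins, size=50)      # agent positions (bin indices)

for step in range(200):
    # Agent rule: move one bin toward higher concentration, with some noise.
    left = np.clip(cells - 1, 0, n_bins - 1)
    right = np.clip(cells + 1, 0, n_bins - 1)
    uphill = np.where(conc[right] > conc[left], right, left)
    noisy = rng.random(cells.size) < 0.2
    cells = np.where(noisy, rng.integers(0, n_bins, cells.size), uphill)
    # Field update (a quantity, not agents): diffusion plus slow decay.
    conc = conc + 0.1 * (np.roll(conc, 1) + np.roll(conc, -1) - 2 * conc) - 0.001 * conc

print(f"mean cell position after run: {cells.mean():.1f} (gradient top = {n_bins - 1})")
```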

  20. Three dimensional modeling of cirrus during the 1991 FIRE IFO 2: Detailed process study

    NASA Technical Reports Server (NTRS)

    Jensen, Eric J.; Toon, Owen B.; Westphal, Douglas L.

    1993-01-01

    A three-dimensional model of cirrus cloud formation and evolution, including microphysical, dynamical, and radiative processes, was used to simulate cirrus observed in the FIRE Phase 2 Cirrus field program (13 Nov. - 7 Dec. 1991). Sulfate aerosols, solution drops, ice crystals, and water vapor are all treated as interactive elements in the model. Ice crystal size distributions are fully resolved based on calculations of homogeneous freezing of solution drops, growth by water vapor deposition, evaporation, aggregation, and vertical transport. Visible and infrared radiative fluxes, and radiative heating rates are calculated using the two-stream algorithm described by Toon et al. Wind velocities, diffusion coefficients, and temperatures were taken from the MAPS analyses and the MM4 mesoscale model simulations. Within the model, moisture is transported and converted to liquid or vapor by the microphysical processes. The simulated cloud bulk and microphysical properties are shown in detail for the Nov. 26 and Dec. 5 case studies. Comparisons with lidar, radar, and in situ data are used to determine how well the simulations reproduced the observed cirrus. The roles played by various processes in the model are described in detail. The potential modes of nucleation are evaluated, and the importance of small-scale variations in temperature and humidity is discussed. The importance of competing ice crystal growth mechanisms (water vapor deposition and aggregation) is evaluated based on model simulations. Finally, the importance of ice crystal shape for crystal growth and vertical transport of ice is discussed.

  1. Maximum likelihood-based analysis of single-molecule photon arrival trajectories

    NASA Astrophysics Data System (ADS)

    Hajdziona, Marta; Molski, Andrzej

    2011-02-01

    In this work we explore the statistical properties of the maximum likelihood-based analysis of one-color photon arrival trajectories. This approach does not involve binning and, therefore, all of the information contained in an observed photon trajectory is used. We study the accuracy and precision of parameter estimates and the efficiency of the Akaike information criterion and the Bayesian information criterion (BIC) in selecting the true kinetic model. We focus on the low excitation regime where photon trajectories can be modeled as realizations of Markov modulated Poisson processes. The number of observed photons is the key parameter in determining model selection and parameter estimation. For example, the BIC can select the true three-state model from competing two-, three-, and four-state kinetic models even for relatively short trajectories made up of 2 × 10^3 photons. When the intensity levels are well-separated and 10^4 photons are observed, the two-state model parameters can be estimated with about 10% precision and those for a three-state model with about 20% precision.
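
    The BIC comparison itself is simple; a sketch with invented maximized log-likelihoods for two-, three-, and four-state Markov modulated Poisson models (parameter counts assumed as k intensities plus k(k-1) transition rates) is:

```python
import numpy as np

def bic(log_likelihood, n_params, n_obs):
    """Bayesian information criterion: lower is better."""
    return n_params * np.log(n_obs) - 2.0 * log_likelihood

# Hypothetical maximized log-likelihoods for 2-, 3-, 4-state kinetic models
# fitted to a trajectory of 2e3 photons (values invented for illustration).
n_photons = 2000
fits = {"2-state": (-5230.0, 4), "3-state": (-5160.0, 9), "4-state": (-5155.0, 16)}

scores = {name: bic(ll, k, n_photons) for name, (ll, k) in fits.items()}
print(min(scores, key=scores.get), scores)   # the 3-state model wins here
```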

  2. Unmanned Aerial Systems as Part of a Multi-Component Assessment Strategy to Address Climate Change and Atmospheric Processes

    NASA Astrophysics Data System (ADS)

    Lange, Manfred; Vrekoussis, Mihalis; Sciare, Jean; Argyrides, Marios; Ioannou, Stelios; Keleshis, Christos

    2015-04-01

    Unmanned Aerial Systems (UAS) have been established as versatile tools for different applications, providing data and observations for atmospheric and Earth-system research. They offer an urgently needed link between in-situ ground-based measurements and satellite remote sensing observations and are distinguished by significant versatility, flexibility and moderate operational costs. UAS have the proven potential to contribute to a multi-component assessment strategy that combines remote sensing, numerical modelling and surface measurements in order to elucidate important atmospheric processes. This includes physical and chemical transformations related to ongoing climate change as well as issues linked to aerosol-cloud interactions and air quality. The distinct advantages offered by UAS comprise, to name but a few: (i) their ability to operate from altitudes of a few meters up to a few kilometers; (ii) their capability to perform autonomously controlled missions, which allows repeat measurements to be carried out at precisely defined locations; (iii) their relative ease of operation, which enables flexible employment at short-term notice and (iv) the employment of more than one platform in stacked formation, which allows for unique, quasi-3D observations of atmospheric properties and processes. These advantages are brought to bear in combining in-situ ground-based observations and numerical modeling with UAS-based remote sensing in elucidating specific research questions that require both horizontally and vertically resolved measurements at high spatial and temporal resolutions. Employing numerical atmospheric modelling, UAS can provide survey information over spatially and temporally localized, focused areas of evolving atmospheric phenomena, as they become identified by the numerical models. Conversely, UAS observations offer urgently needed data for model verification and provide boundary conditions for numerical models. In this presentation, we will briefly describe the current elements of our observational capabilities that enable the aforementioned multi-component assessment strategy by the Unmanned Systems Research Laboratory of the Cyprus Institute. This strategy is applied and utilized in the context of the EU-funded BACCHUS project, among other tasks. The ongoing and planned observations are particularly relevant as they are carried out in the Eastern Mediterranean and the Middle East, a region characterized by increasing anthropogenic pressures and ongoing and anticipated severe climatic changes and their impacts.

  3. Model Improvement by Assimilating Observations of Storm-Induced Coastal Change

    NASA Astrophysics Data System (ADS)

    Long, J. W.; Plant, N. G.; Sopkin, K.

    2010-12-01

    Discrete, large scale, meteorological events such as hurricanes can cause wide-spread destruction of coastal islands, habitats, and infrastructure. The effects can vary significantly along the coast depending on the configuration of the coastline, variable dune elevations, changes in geomorphology (sandy beach vs. marshland), and alongshore variations in storm hydrodynamic forcing. There are two primary methods of determining the changing state of a coastal system. Process-based numerical models provide highly resolved (in space and time) representations of the dominant dynamics in a physical system but must employ certain parameterizations due to computational limitations. The predictive capability may also suffer from the lack of reliable initial or boundary conditions. On the other hand, observations of coastal topography before and after the storm allow the direct quantification of cumulative storm impacts. Unfortunately these measurements suffer from instrument noise and a lack of necessary temporal resolution. This research focuses on the combination of these two pieces of information to make more reliable forecasts of storm-induced coastal change. Of primary importance is the development of a data assimilation strategy that is efficient, applicable for use with highly nonlinear models, and able to quantify the remaining forecast uncertainty based on the reliability of each individual piece of information used in the assimilation process. We concentrate on an event time-scale and estimate/update unobserved model information (boundary conditions, free parameters, etc.) by assimilating direct observations of coastal change with those simulated by the model. The data assimilation can help estimate spatially varying quantities (e.g. friction coefficients) that are often modeled as homogeneous and identify processes inadequately characterized in the model.

  4. Pursuing the method of multiple working hypotheses to understand differences in process-based snow models

    NASA Astrophysics Data System (ADS)

    Clark, Martyn; Essery, Richard

    2017-04-01

    When faced with the complex and interdisciplinary challenge of building process-based land models, different modelers make different decisions at different points in the model development process. These modeling decisions are generally based on several considerations, including fidelity (e.g., what approaches faithfully simulate observed processes), complexity (e.g., which processes should be represented explicitly), practicality (e.g., what is the computational cost of the model simulations; are there sufficient resources to implement the desired modeling concepts), and data availability (e.g., is there sufficient data to force and evaluate models). Consequently the research community, comprising modelers of diverse background, experience, and modeling philosophy, has amassed a wide range of models, which differ in almost every aspect of their conceptualization and implementation. Model comparison studies have been undertaken to explore model differences, but have not been able to meaningfully attribute inter-model differences in predictive ability to individual model components because there are often too many structural and implementation differences among the different models considered. As a consequence, model comparison studies to date have provided limited insight into the causes of differences in model behavior, and model development has often relied on the inspiration and experience of individual modelers rather than on a systematic analysis of model shortcomings. This presentation will summarize the use of "multiple-hypothesis" modeling frameworks to understand differences in process-based snow models. Multiple-hypothesis frameworks define a master modeling template, and include a wide variety of process parameterizations and spatial configurations that are used in existing models. Such frameworks provide the capability to decompose complex models into the individual decisions that are made as part of model development, and evaluate each decision in isolation. It is hence possible to attribute differences in system-scale model predictions to individual modeling decisions, providing scope to mimic the behavior of existing models, understand why models differ, characterize model uncertainty, and identify productive pathways to model improvement. Results will be presented applying multiple-hypothesis frameworks to snow model comparison projects, including PILPS, SnowMIP, and the upcoming ESM-SnowMIP project.

  5. Diurnal hysteresis between soil CO2 and soil temperature is controlled by soil water content

    Treesearch

    Diego A. Riveros-Iregui; Ryan E. Emanuel; Daniel J. Muth; L. McGlynn Brian; Howard E. Epstein; Daniel L. Welsch; Vincent J. Pacific; Jon M. Wraith

    2007-01-01

    Recent years have seen a growing interest in measuring and modeling soil CO2 efflux, as this flux represents a large component of ecosystem respiration and is a key determinant of ecosystem carbon balance. Process-based models of soil CO2 production and efflux, commonly based on soil temperature, are limited by nonlinearities such as the observed diurnal hysteresis...

  6. Alternative ways of using field-based estimates to calibrate ecosystem models and their implications for carbon cycle studies

    USGS Publications Warehouse

    He, Yujie; Zhuang, Qianlai; McGuire, David; Liu, Yaling; Chen, Min

    2013-01-01

    Model-data fusion is a process in which field observations are used to constrain model parameters. How observations are used to constrain parameters has a direct impact on the carbon cycle dynamics simulated by ecosystem models. In this study, we present an evaluation of several options for the use of observations in modeling regional carbon dynamics and explore the implications of those options. We calibrated the Terrestrial Ecosystem Model on a hierarchy of three vegetation classification levels for the Alaskan boreal forest: species level, plant-functional-type level (PFT level), and biome level, and we examined the differences in simulated carbon dynamics. Species-specific field-based estimates were directly used to parameterize the model for species-level simulations, while weighted averages based on species percent cover were used to generate estimates for PFT- and biome-level model parameterization. We found that calibrated key ecosystem process parameters differed substantially among species and overlapped for species that are categorized into different PFTs. Our analysis of parameter sets suggests that the PFT-level parameterizations primarily reflected the dominant species and that functional information for some species was lost in the PFT-level parameterizations. The biome-level parameterization was primarily representative of the needleleaf PFT and lost information on broadleaf species or PFT function. Our results indicate that PFT-level simulations may be potentially representative of the performance of species-level simulations, while biome-level simulations may result in biased estimates. Improved theoretical and empirical justifications for grouping species into PFTs or biomes are needed to adequately represent the dynamics of ecosystem functioning and structure.

  7. Understanding the effects of different HIV transmission models in individual-based microsimulation of HIV epidemic dynamics in people who inject drugs

    PubMed Central

    MONTEIRO, J.F.G.; ESCUDERO, D.J.; WEINREB, C.; FLANIGAN, T.; GALEA, S.; FRIEDMAN, S.R.; MARSHALL, B.D.L.

    2017-01-01

    We investigated how different models of HIV transmission, and assumptions regarding the distribution of unprotected sex and syringe-sharing events (‘risk acts’), affect quantitative understanding of the HIV transmission process in people who inject drugs (PWID). The individual-based model simulated HIV transmission in a dynamic sexual and injecting network representing New York City. We constructed four HIV transmission models: model 1, constant probabilities; model 2, random number of sexual and parenteral acts; model 3, individually assigned viral load; and model 4, two groups of partnerships (low and high risk). Overall, models with less heterogeneity were more sensitive to changes in the number of risk acts, producing HIV incidence up to four times higher than that empirically observed. Although all models overestimated HIV incidence, microsimulations with greater heterogeneity in the HIV transmission modelling process produced more robust results and better reproduced empirical epidemic dynamics. PMID:26753627

  8. Using Sensor Web Processes and Protocols to Assimilate Satellite Data into a Forecast Model

    NASA Technical Reports Server (NTRS)

    Goodman, H. Michael; Conover, Helen; Zavodsky, Bradley; Maskey, Manil; Jedlovec, Gary; Regner, Kathryn; Li, Xiang; Lu, Jessica; Botts, Mike; Berthiau, Gregoire

    2008-01-01

    The goal of the Sensor Management Applied Research Technologies (SMART) On-Demand Modeling project is to develop and demonstrate the readiness of the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) capabilities to integrate both space-based Earth observations and forecast model output into new data acquisition and assimilation strategies. The project is developing sensor web-enabled processing plans to assimilate Atmospheric Infrared Sounder (AIRS) satellite temperature and moisture retrievals into a regional Weather Research and Forecasting (WRF) model over the southeastern United States.

  9. Toward a self-organizing pre-symbolic neural model representing sensorimotor primitives.

    PubMed

    Zhong, Junpei; Cangelosi, Angelo; Wermter, Stefan

    2014-01-01

    The acquisition of symbolic and linguistic representations of sensorimotor behavior is a cognitive process performed by an agent when it is executing and/or observing its own and others' actions. According to Piaget's theory of cognitive development, these representations develop during the sensorimotor stage and the pre-operational stage. We propose a model that relates the conceptualization of the higher-level information from visual stimuli to the development of ventral/dorsal visual streams. This model employs a neural network architecture incorporating a predictive sensory module based on an RNNPB (Recurrent Neural Network with Parametric Biases) and a horizontal product model. We exemplify this model through a robot passively observing an object to learn its features and movements. During the learning process of observing sensorimotor primitives, i.e., observing a set of trajectories of arm movements and its oriented object features, the pre-symbolic representation is self-organized in the parametric units. These representational units act as bifurcation parameters, guiding the robot to recognize and predict various learned sensorimotor primitives. The pre-symbolic representation also accounts for the learning of sensorimotor primitives in a latent learning context.

  10. Toward a self-organizing pre-symbolic neural model representing sensorimotor primitives

    PubMed Central

    Zhong, Junpei; Cangelosi, Angelo; Wermter, Stefan

    2014-01-01

    The acquisition of symbolic and linguistic representations of sensorimotor behavior is a cognitive process performed by an agent when it is executing and/or observing its own and others' actions. According to Piaget's theory of cognitive development, these representations develop during the sensorimotor stage and the pre-operational stage. We propose a model that relates the conceptualization of the higher-level information from visual stimuli to the development of ventral/dorsal visual streams. This model employs a neural network architecture incorporating a predictive sensory module based on an RNNPB (Recurrent Neural Network with Parametric Biases) and a horizontal product model. We exemplify this model through a robot passively observing an object to learn its features and movements. During the learning process of observing sensorimotor primitives, i.e., observing a set of trajectories of arm movements and its oriented object features, the pre-symbolic representation is self-organized in the parametric units. These representational units act as bifurcation parameters, guiding the robot to recognize and predict various learned sensorimotor primitives. The pre-symbolic representation also accounts for the learning of sensorimotor primitives in a latent learning context. PMID:24550798

  11. Modeling Hospital Discharge and Placement Decision Making: Whither the Elderly.

    ERIC Educational Resources Information Center

    Clark, William F.; Pelham, Anabel O.

    This paper examines the hospital discharge decision making process for elderly patients, based on observations of the operations of a long term care agency, the California Multipurpose Senior Services Project. The analysis is divided into four components: actors, factors, processes, and strategy critique. The first section discusses the major…

  12. The evaluation of GCMs and a new cloud parameterisation using satellite and in-situ data as part of a Climate Process Team

    NASA Astrophysics Data System (ADS)

    Grosvenor, D. P.; Wood, R.

    2012-12-01

    As part of one of the Climate Process Teams (CPTs) we have been testing the implementation of a new cloud parameterization into the CAM5 and AM3 GCMs. The CLUBB parameterization replaces all but the deep convection cloud scheme and uses an innovative PDF based approach to diagnose cloud water content and turbulence. We have evaluated the base models and the CLUBB parameterization in the SE Pacific stratocumulus region using a suite of satellite observation metrics including: Liquid Water Path (LWP) measurements from AMSRE; cloud fractions from CloudSat/CALIPSO; droplet concentrations (Nd) and Cloud Top Temperatures from MODIS; CloudSat precipitation; and relationships between Estimated Inversion Strength (calculated from AMSRE SSTs, Cloud Top Temperatures from MODIS and ECMWF re-analysis fields) and cloud fraction. This region has the advantage of an abundance of in-situ aircraft observations taken during the VOCALS campaign, which is facilitating the diagnosis of the model problems highlighted by the model evaluation. This data has also been recently used to demonstrate the reliability of MODIS Nd estimates. The satellite data needs to be filtered to ensure accurate retrievals and we have been careful to apply the same screenings to the model fields. For example, scenes with high cloud fractions and with output times near to the satellite overpass times can be extracted from the model for a fair comparison with MODIS Nd estimates. To facilitate this we have been supplied with instantaneous model output since screening would not be possible based on time averaged data. We also have COSP satellite simulator output, which allows a fairer comparison between satellite and model. For example, COSP cloud fraction is based upon the detection threshold of the satellite instrument in question. These COSP fields are also used for the model output filtering just described. The results have revealed problems with both the base models and the versions with the CLUBB parameterization. The CAM5 model produces realistic near-coast cloud cover, but too little further west in the stratocumulus to cumulus regions. The implementation of CLUBB has vastly improved this situation with cloud cover that is very similar to that observed. CLUBB also improves the Nd field in CAM5 by producing realistic near-coast increases and by removing high Nd values associated with the detrainment of droplets by cumulus clouds. AM3 has a lack of stratocumulus cloud near the South American coast and has much lower droplet concentrations than observed. VOCALS measurements showed that sulfate mass loadings were generally too high in both base models, whereas CCN concentrations were too low. This suggests a problem with the mass distribution partitioning of sulfate that is being investigated. Diurnal and seasonal comparisons have been very illuminating. CLUBB produces very little diurnal variation in LWP, but large variations in precipitation rates. This is likely to point to problems that are now being addressed by the modeling part of the CPT team, creating an iterative workflow process between the model developers and the model testers, which should facilitate efficient parameterization improvement. We will report on the latest developments of this process.

  13. A magnetic model for low/hard state of black hole binaries

    NASA Astrophysics Data System (ADS)

    Ye, Yong-Chun; Wang, Ding-Xiong; Huang, Chang-Yin; Cao, Xiao-Feng

    2016-03-01

    A magnetic model for the low/hard state (LHS) of two black hole X-ray binaries (BHXBs), H1743-322 and GX 339-4, is proposed based on transport of the magnetic field from a companion into an accretion disk around a black hole (BH). This model consists of a truncated thin disk with an inner advection-dominated accretion flow (ADAF). The spectral profiles of the sources are fitted in agreement with the data observed at four different dates corresponding to the rising phase of the LHS. In addition, the association of the LHS with a quasi-steady jet is modeled based on transport of magnetic field, where the Blandford-Znajek (BZ) and Blandford-Payne (BP) processes are invoked to drive the jets from BH and inner ADAF. It turns out that the steep radio/X-ray correlations observed in H1743-322 and GX 339-4 can be interpreted based on our model.

  14. A Dirichlet process model for classifying and forecasting epidemic curves

    PubMed Central

    2014-01-01

    Background: A forecast can be defined as an endeavor to quantitatively estimate a future event or probabilities assigned to a future occurrence. Forecasting stochastic processes such as epidemics is challenging since there are several biological, behavioral, and environmental factors that influence the number of cases observed at each point during an epidemic. However, accurate forecasts of epidemics would impact timely and effective implementation of public health interventions. In this study, we introduce a Dirichlet process (DP) model for classifying and forecasting influenza epidemic curves. Methods: The DP model is a nonparametric Bayesian approach that enables the matching of current influenza activity to simulated and historical patterns, identifies epidemic curves different from those observed in the past and enables prediction of the expected epidemic peak time. The method was validated using simulated influenza epidemics from an individual-based model, and the accuracy was compared to that of the tree-based classification technique, Random Forest (RF), which has been shown to achieve high accuracy in the early prediction of epidemic curves using a classification approach. We also applied the method to forecasting influenza outbreaks in the United States from 1997 to 2013 using influenza-like illness (ILI) data from the Centers for Disease Control and Prevention (CDC). Results: We made the following observations. First, the DP model performed as well as RF in identifying several of the simulated epidemics. Second, the DP model correctly forecasted the peak time several days in advance for most of the simulated epidemics. Third, the accuracy of identifying epidemics different from those already observed improved with additional data, as expected. Fourth, both methods correctly classified epidemics with higher reproduction numbers (R) with higher accuracy than epidemics with lower R values. Lastly, in the classification of seasonal influenza epidemics based on ILI data from the CDC, the methods’ performance was comparable. Conclusions: Although RF requires less computational time than the DP model, the algorithm is fully supervised, implying that epidemic curves different from those previously observed will always be misclassified. In contrast, the DP model can be unsupervised, semi-supervised or fully supervised. Since both methods have their relative merits, an approach that uses both RF and the DP model could be beneficial. PMID:24405642

  15. Point process modeling and estimation: Advances in the analysis of dynamic neural spiking data

    NASA Astrophysics Data System (ADS)

    Deng, Xinyi

    2016-08-01

    A common interest of scientists in many fields is to understand the relationship between the dynamics of a physical system and the occurrences of discrete events within such physical system. Seismologists study the connection between mechanical vibrations of the Earth and the occurrences of earthquakes so that future earthquakes can be better predicted. Astrophysicists study the association between the oscillating energy of celestial regions and the emission of photons to learn about the Universe's various objects and their interactions. Neuroscientists study the link between behavior and the millisecond-timescale spike patterns of neurons to understand higher brain functions. Such relationships can often be formulated within the framework of state-space models with point process observations. The basic idea is that the dynamics of the physical systems are driven by the dynamics of some stochastic state variables, and the discrete events we observe in an interval are noisy observations with distributions determined by the state variables. This thesis proposes several new methodological developments that advance the framework of state-space models with point process observations at the intersection of statistics and neuroscience. In particular, we develop new methods 1) to characterize the rhythmic spiking activity using history-dependent structure, 2) to model population spike activity using marked point process models, 3) to allow for real-time decision making, and 4) to take into account the need for dimensionality reduction for high-dimensional state and observation processes. We applied these methods to a novel problem of tracking rhythmic dynamics in the spiking of neurons in the subthalamic nucleus of Parkinson's patients with the goal of optimizing placement of deep brain stimulation electrodes. We developed a decoding algorithm that can make decisions in real time (for example, to stimulate the neurons or not) based on various sources of information present in population spiking data. Lastly, we proposed a general three-step paradigm that allows us to relate behavioral outcomes of various tasks to simultaneously recorded neural activity across multiple brain areas, which is a step towards closed-loop therapies for psychological diseases using real-time neural stimulation. These methods are suitable for real-time implementation for content-based feedback experiments.
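
    The simplest instance of a state-space model with point process observations is a random-walk latent state driving spikes through a log-linear intensity; a sketch of the standard point-process analogue of the Kalman filter (all parameters below are invented, and this is not the thesis's full machinery) is:

```python
import numpy as np

# Point-process state-space filter: random-walk latent state x_k drives
# spikes through lambda_k = exp(b0 + b1 * x_k).
rng = np.random.default_rng(3)
dt, q, b0, b1 = 1e-3, 1e-4, np.log(10.0), 1.0   # 10 Hz baseline rate (assumed)

# Simulate a latent state and a Bernoulli spike train from it.
T = 5000
x = np.cumsum(rng.normal(0, np.sqrt(q), T))
spikes = rng.random(T) < np.exp(b0 + b1 * x) * dt

# Filter: predict (random walk), then Gaussian-approximation update.
x_f, v_f = 0.0, 1.0
est = np.empty(T)
for k in range(T):
    v_pred = v_f + q                              # predicted variance
    lam = np.exp(b0 + b1 * x_f) * dt              # expected spikes this bin
    v_f = 1.0 / (1.0 / v_pred + b1**2 * lam)      # posterior variance
    x_f = x_f + v_f * b1 * (spikes[k] - lam)      # innovation on spike counts
    est[k] = x_f

print(f"tracking correlation: {np.corrcoef(est, x)[0, 1]:.2f}")
```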

  16. How can model comparison help improving species distribution models?

    PubMed

    Gritti, Emmanuel Stephan; Gaucherel, Cédric; Crespo-Perez, Maria-Veronica; Chuine, Isabelle

    2013-01-01

    Today, more than ever, robust projections of potential species range shifts are needed to anticipate and mitigate the impacts of climate change on biodiversity and ecosystem services. Such projections are so far provided almost exclusively by correlative species distribution models (correlative SDMs). However, concerns regarding the reliability of their predictive power are growing, and several authors call for the development of process-based SDMs. Still, each of these methods presents strengths and weaknesses which have to be estimated if they are to be reliably used by decision makers. In this study we compare projections of three different SDMs (STASH, LPJ and PHENOFIT) that lie in the continuum between correlative models and process-based models for the current distribution of three major European tree species, Fagus sylvatica L., Quercus robur L. and Pinus sylvestris L. We compare the consistency of the model simulations using an innovative comparison map profile method, integrating local and multi-scale comparisons. The three models simulate the current distribution of the three species relatively accurately. The process-based model performs almost as well as the correlative model, although the former's parameters are not fitted to the observed species distributions. According to our simulations, species range limits are triggered, at the European scale, by establishment and survival through processes primarily related to phenology and resistance to abiotic stress rather than to growth efficiency. The accuracy of projections of the hybrid and process-based models could, however, be improved by integrating a more realistic representation of, for instance, the species' resistance to water stress, advocating for pursuing efforts to understand and formulate explicitly the impact of climatic conditions and variations on these processes.

  17. How Can Model Comparison Help Improving Species Distribution Models?

    PubMed Central

    Gritti, Emmanuel Stephan; Gaucherel, Cédric; Crespo-Perez, Maria-Veronica; Chuine, Isabelle

    2013-01-01

    Today, more than ever, robust projections of potential species range shifts are needed to anticipate and mitigate the impacts of climate change on biodiversity and ecosystem services. Such projections are so far provided almost exclusively by correlative species distribution models (correlative SDMs). However, concerns regarding the reliability of their predictive power are growing, and several authors call for the development of process-based SDMs. Still, each of these methods presents strengths and weaknesses which have to be estimated if they are to be reliably used by decision makers. In this study we compare projections of three different SDMs (STASH, LPJ and PHENOFIT) that lie in the continuum between correlative models and process-based models for the current distribution of three major European tree species, Fagus sylvatica L., Quercus robur L. and Pinus sylvestris L. We compare the consistency of the model simulations using an innovative comparison map profile method, integrating local and multi-scale comparisons. The three models simulate the current distribution of the three species relatively accurately. The process-based model performs almost as well as the correlative model, although the former's parameters are not fitted to the observed species distributions. According to our simulations, species range limits are triggered, at the European scale, by establishment and survival through processes primarily related to phenology and resistance to abiotic stress rather than to growth efficiency. The accuracy of projections of the hybrid and process-based models could, however, be improved by integrating a more realistic representation of, for instance, the species' resistance to water stress, advocating for pursuing efforts to understand and formulate explicitly the impact of climatic conditions and variations on these processes. PMID:23874779

  18. Hydrological modelling of the Mabengnong catchment in the southeast Tibet with support of short term intensive precipitation observation

    NASA Astrophysics Data System (ADS)

    Wang, L.; Zhang, F.; Zhang, H.; Scott, C. A.; Zeng, C.; SHI, X.

    2017-12-01

    Precipitation is one of the crucial inputs for models used to better understand hydrological processes. In high mountain areas, it is difficult to obtain a reliable precipitation data set describing spatial and temporal characteristics, due to the limited meteorological observations and the high variability of precipitation. This study carried out intensive observation of precipitation in a high mountain catchment in southeastern Tibet from July to August 2013. Rain gauges set up at different altitudes show that precipitation is strongly influenced by altitude. The observed precipitation is used to derive the precipitation gradient (PG) and hourly distribution (HD), showing that the average duration is around 0.1, 0.8 and 6.0 hours and the average PG is 0.10, 0.28 and 0.26 mm/d/100m for trace, light and moderate rain, respectively. Based on the gridded precipitation derived from the PG and HD and the nearby Linzhi meteorological station at lower altitude, a distributed biosphere hydrological model based on water and energy budgets (WEB-DHM) is applied to simulate the hydrological processes. Besides the observed runoff, MODIS/Terra snow cover area (SCA) data and MODIS/Terra land surface temperature (LST) data are also used for model calibration and validation. The resulting runoff, SCA and LST simulations are all reasonable. Sensitivity analyses indicate that runoff is greatly underestimated when the PG is not considered, illustrating that short-term intensive precipitation observation contributes to improving hydrological modelling of poorly gauged high mountain catchments.
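
    A minimal sketch of the PG correction described above: station precipitation is extrapolated linearly in altitude using the class-dependent gradient. The gradient value comes from the abstract; the station altitude and rainfall amount are made-up inputs.

    ```python
    def distribute_precip(p_station, z_station, z_grid, pg_per_100m):
        """Extrapolate station precipitation (mm/d) to a grid-cell altitude
        using a linear precipitation gradient (mm/d per 100 m).

        A minimal sketch of the PG correction; the class-dependent gradients
        (trace/light/moderate) come from the paper, everything else here is
        illustrative.
        """
        return max(0.0, p_station + pg_per_100m * (z_grid - z_station) / 100.0)

    # Hypothetical example: light rain (PG = 0.28 mm/d/100m) observed at a
    # low-altitude station (~3000 m), extrapolated to a 4500 m grid cell.
    print(distribute_precip(p_station=5.0, z_station=3000.0,
                            z_grid=4500.0, pg_per_100m=0.28))  # -> 9.2 mm/d
    ```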

  19. Numerical Study of Solar Storms from the Sun to Earth

    NASA Astrophysics Data System (ADS)

    Feng, Xueshang; Jiang, Chaowei; Zhou, Yufen

    2017-04-01

    As solar storms sweep past the Earth, adverse changes occur in the geospace environment. How humans can mitigate and avoid the destructive damage caused by solar storms has become an important frontier issue in the high-tech era. It is of scientific significance to understand the dynamic processes of solar storms propagating through interplanetary space, and of practical value to conduct physics-based numerical studies of the three-dimensional evolution of solar storms in interplanetary space, with the aid of powerful computing capacity, to predict their arrival times, intensities, and probable geoeffectiveness at the Earth. So far, numerical studies based on magnetohydrodynamics (MHD) have gone through the transition from initial qualitative, principle-oriented research to systematic quantitative studies of concrete events and numerical predictions. The numerical modeling community shares a common goal of developing an end-to-end physics-based modeling system for forecasting the Sun-Earth relationship. The transition of these models to operational use depends on the availability of computational resources at reasonable cost, and the models' prediction capabilities may be improved by incorporating observational findings and constraints into the physics-based models, combining observations, empirical models and MHD simulations in organic ways. In this talk, we focus on our recent progress in using solar observations to produce realistic magnetic configurations of CMEs as they leave the Sun and in coupling data-driven simulations of CMEs to heliospheric simulations that propagate the CME configuration to 1 AU, and we outline important numerical issues and their possible solutions in numerical space weather modeling from the Sun to Earth for future research.

  20. Bayesian Approaches for Model and Multi-mission Satellites Data Fusion

    NASA Astrophysics Data System (ADS)

    Khaki, M., , Dr; Forootan, E.; Awange, J.; Kuhn, M.

    2017-12-01

    Traditionally, data assimilation is formulated as a Bayesian approach that allows one to update model simulations with new incoming observations. This integration is necessary due to the uncertainty in model outputs, which mainly results from several drawbacks, e.g., limitations in accounting for the complexity of real-world processes, uncertainties of (unknown) empirical model parameters, and the absence of high-resolution (both spatial and temporal) data. Data assimilation, however, requires knowledge of the physical process of a model, which may be either poorly described or entirely unavailable. Therefore, an alternative method is required to avoid this dependency. In this study we present a novel approach which can be used in hydrological applications. A non-parametric framework based on the Kalman filtering technique is proposed to improve hydrological model estimates without using model dynamics. In particular, we assess Kalman-Takens formulations that take advantage of the delay-coordinate method to reconstruct nonlinear dynamics in the absence of the physical process. This empirical relationship is then used instead of model equations to integrate satellite products with model outputs. We use water storage variables from World-Wide Water Resources Assessment (W3RA) simulations and update them using terrestrial water storage (TWS) data from the Gravity Recovery And Climate Experiment (GRACE) as well as surface soil moisture data from the Advanced Microwave Scanning Radiometer for the Earth Observing System (AMSR-E) over Australia for the period 2003 to 2011. The performance of the proposed integration method is compared with data obtained from the more traditional assimilation scheme using the Ensemble Square-Root Filter (EnSRF) technique (Khaki et al., 2017), as well as by evaluating both against ground-based soil moisture and groundwater observations within the Murray-Darling Basin.
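
    The Kalman-Takens idea can be sketched as follows: the model-propagation step of the filter is replaced by a nonparametric analog forecast in delay coordinates. This is a simplified illustration under an assumed embedding dimension and neighbor count, not the authors' implementation.

    ```python
    import numpy as np

    def analog_forecast(history, dim=4, k=10):
        """One-step forecast without model equations: embed the series in
        delay coordinates (Takens), find the k past states nearest to the
        current one, and average their successors. This nonparametric
        forecast plays the role of the model-propagation step in a
        Kalman-Takens filter; dim and k are illustrative choices.
        """
        x = np.asarray(history, float)
        # Delay vectors v_t = [x_t, x_{t-1}, ..., x_{t-dim+1}]
        vecs = np.column_stack([x[dim - 1 - j : len(x) - j] for j in range(dim)])
        current, past = vecs[-1], vecs[:-1]   # last vector has no successor yet
        succ = x[dim:]                        # x_{t+1} for each past vector
        nearest = np.argsort(np.linalg.norm(past - current, axis=1))[:k]
        return succ[nearest].mean()

    # Hypothetical usage on a noisy periodic signal standing in for a water
    # storage anomaly time series.
    t = np.arange(300)
    series = (np.sin(2 * np.pi * t / 50)
              + 0.1 * np.random.default_rng(1).standard_normal(300))
    print("forecast of next value:", analog_forecast(series))
    ```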

  1. Multiple model analysis with discriminatory data collection (MMA-DDC): A new method for improving measurement selection

    NASA Astrophysics Data System (ADS)

    Kikuchi, C.; Ferre, P. A.; Vrugt, J. A.

    2011-12-01

    Hydrologic models are developed, tested, and refined based on the ability of those models to explain available hydrologic data. The optimization of model performance based upon the mismatch between model outputs and real-world observations has been extensively studied. However, identification of plausible models is sensitive not only to the models themselves - including model structure and model parameters - but also to the location, timing, type, and number of observations used in model calibration. Therefore, careful selection of hydrologic observations has the potential to significantly improve the performance of hydrologic models. In this research, we seek to reduce prediction uncertainty through optimization of the data collection process. A new tool - multiple model analysis with discriminatory data collection (MMA-DDC) - was developed to address this challenge. In this approach, multiple hydrologic models are developed and treated as competing hypotheses. Potential new data are then evaluated on their ability to discriminate between competing hypotheses. MMA-DDC is well-suited for use in recursive mode, in which new observations are continuously used in the optimization of subsequent observations. This new approach was applied to a synthetic solute transport experiment, in which ranges of parameter values constitute the multiple hydrologic models, and model predictions are calculated using likelihood-weighted model averaging. MMA-DDC was used to determine the optimal location, timing, number, and type of new observations. From comparison with an exhaustive search of all possible observation sequences, we find that MMA-DDC consistently selects observations which lead to the highest reduction in model prediction uncertainty. We conclude that using MMA-DDC to evaluate potential observations may significantly improve the performance of hydrologic models while reducing the cost associated with collecting new data.
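
    A toy sketch of the MMA-DDC loop, under the assumption of Gaussian observation errors and a simple weighted-variance discrimination criterion (the paper's actual criterion may differ): candidate observations are scored by how strongly the likelihood-weighted models disagree about them, and weights are updated recursively once an observation is collected.

    ```python
    import numpy as np

    def discrimination_score(predictions, weights):
        """Score each candidate observation by how strongly competing
        models disagree about it: the likelihood-weighted variance of
        their predictions.

        predictions: (n_models, n_candidates) predicted values at each
                     candidate observation (location/time/type).
        weights:     (n_models,) current likelihood weights of the models.
        """
        mean = weights @ predictions                  # model-averaged prediction
        return weights @ (predictions - mean) ** 2    # weighted spread per candidate

    def update_weights(weights, predictions, observed, sigma):
        """Recursive step: reweight models by the Gaussian likelihood of
        the newly collected observation, then renormalize."""
        lik = np.exp(-0.5 * ((predictions - observed) / sigma) ** 2)
        w = weights * lik
        return w / w.sum()

    # Hypothetical example: 3 competing transport models, 4 candidate samples.
    preds = np.array([[1.0, 2.0, 5.0, 0.5],
                      [1.1, 2.1, 9.0, 0.4],
                      [0.9, 1.9, 1.0, 0.6]])
    w = np.ones(3) / 3
    best = int(np.argmax(discrimination_score(preds, w)))
    print("most discriminatory candidate:", best)     # -> 2
    w = update_weights(w, preds[:, best], observed=8.5, sigma=1.0)
    print("updated weights:", np.round(w, 3))
    ```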

  2. Processing Satellite Data for Slant Total Electron Content Measurements

    NASA Technical Reports Server (NTRS)

    Stephens, Philip John (Inventor); Komjathy, Attila (Inventor); Wilson, Brian D. (Inventor); Mannucci, Anthony J. (Inventor)

    2016-01-01

    A method, system, and apparatus provide the ability to estimate ionospheric observables using space-borne observations. Space-borne global positioning system (GPS) data of ionospheric delay are obtained from a satellite. The space-borne GPS data are combined with ground-based GPS observations. The combination is utilized in a model to estimate a global three-dimensional (3D) electron density field.

  3. Agent-based modeling: a new approach for theory building in social psychology.

    PubMed

    Smith, Eliot R; Conrey, Frederica R

    2007-02-01

    Most social and psychological phenomena occur not as the result of isolated decisions by individuals but rather as the result of repeated interactions between multiple individuals over time. Yet the theory-building and modeling techniques most commonly used in social psychology are less than ideal for understanding such dynamic and interactive processes. This article describes an alternative approach to theory building, agent-based modeling (ABM), which involves simulation of large numbers of autonomous agents that interact with each other and with a simulated environment and the observation of emergent patterns from their interactions. The authors believe that the ABM approach is better able than prevailing approaches in the field, variable-based modeling (VBM) techniques such as causal modeling, to capture types of complex, dynamic, interactive processes so important in the social world. The article elaborates several important contrasts between ABM and VBM and offers specific recommendations for learning more and applying the ABM approach.

  4. Wintertime nitric acid chemistry - Implications from three-dimensional model calculations

    NASA Technical Reports Server (NTRS)

    Rood, Richard B.; Kaye, Jack A.; Douglass, Anne R.; Allen, Dale J.; Steenford, Stephen

    1990-01-01

    A three-dimensional simulation of the evolution of HNO3 has been run for the winter of 1979. Winds and temperatures are taken from a stratospheric data assimilation analysis, and the chemistry is based on Limb Infrared Monitor of the Stratosphere (LIMS) observations. The model is compared to LIMS observations to investigate the problem of 'missing' nitric acid chemistry in the winter hemisphere. Both the model and observations support the contention that a nitric acid source is needed outside of the polar vortex and north of the subtropics. Observations suggest that HNO3 is not dynamically controlled in middle latitudes. The model shows that given the time scales of conventional chemistry, dynamical control is expected. Therefore, an error exists in the conventional chemistry or additional processes are needed to bring the model and data into agreement. Since the polar vortex is dynamically isolated from the middle latitudes, and since the highest HNO3 values are observed in October and November, a source associated solely with polar stratospheric clouds cannot explain the deficiencies in the chemistry. The role of heterogeneous processes on background aerosols is reviewed in light of these results.

  5. Lyapunov-Based Sensor Failure Detection And Recovery For The Reverse Water Gas Shift Process

    NASA Technical Reports Server (NTRS)

    Haralambous, Michael G.

    2001-01-01

    Livingstone, a model-based AI software system, is planned for use in the autonomous fault diagnosis, reconfiguration, and control of the oxygen-producing reverse water gas shift (RWGS) process test-bed located in the Applied Chemistry Laboratory at KSC. In this report the RWGS process is first briefly described and an overview of Livingstone is given. Next, a Lyapunov-based approach for detecting and recovering from sensor failures, differing significantly from that used by Livingstone, is presented. In this new method, models used are in terms of the defining differential equations of system components, thus differing from the qualitative, static models used by Livingstone. An easily computed scalar inequality constraint, expressed in terms of sensed system variables, is used to determine the existence of sensor failures. In the event of sensor failure, an observer/estimator is used for determining which sensors have failed. The theory underlying the new approach is developed. Finally, a recommendation is made to use the Lyapunov-based approach to complement the capability of Livingstone and to use this combination in the RWGS process.

  6. LYAPUNOV-Based Sensor Failure Detection and Recovery for the Reverse Water Gas Shift Process

    NASA Technical Reports Server (NTRS)

    Haralambous, Michael G.

    2002-01-01

    Livingstone, a model-based AI software system, is planned for use in the autonomous fault diagnosis, reconfiguration, and control of the oxygen-producing reverse water gas shift (RWGS) process test-bed located in the Applied Chemistry Laboratory at KSC. In this report the RWGS process is first briefly described and an overview of Livingstone is given. Next, a Lyapunov-based approach for detecting and recovering from sensor failures, differing significantly from that used by Livingstone, is presented. In this new method, models used are in terms of the defining differential equations of system components, thus differing from the qualitative, static models used by Livingstone. An easily computed scalar inequality constraint, expressed in terms of sensed system variables, is used to determine the existence of sensor failures. In the event of sensor failure, an observer/estimator is used for determining which sensors have failed. The theory underlying the new approach is developed. Finally, a recommendation is made to use the Lyapunov-based approach to complement the capability of Livingstone and to use this combination in the RWGS process.
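
    The report's scalar-inequality idea can be caricatured as follows. In this hedged sketch, a generic quadratic Lyapunov-like function of the sensed deviations stands in for the RWGS-specific constraint, which the report does not spell out here; a positive finite-difference estimate of dV/dt flags an inconsistent (possibly failed) sensor.

    ```python
    import numpy as np

    def sensor_fault_detected(v_dot, threshold=0.0, margin=1e-3):
        """Flag a fault when the Lyapunov-style scalar constraint is violated.

        The idea: for a healthy plant, a scalar function of the sensed
        variables (here, the numerically estimated derivative of a
        Lyapunov-like energy function V) must satisfy an inequality such
        as dV/dt <= 0. Persistent violation beyond a margin indicates that
        at least one sensor reading is inconsistent with the physics. The
        quadratic V below is a generic stand-in, not the RWGS-specific one.
        """
        return v_dot > threshold + margin

    def V(x):
        return 0.5 * float(x @ x)          # generic quadratic energy function

    # Hypothetical usage: x holds deviations of sensed states from setpoints.
    x_prev = np.array([0.4, -0.2])
    x_curr = np.array([0.7, -0.2])         # a drifting (possibly failed) sensor
    dt = 1.0
    v_dot = (V(x_curr) - V(x_prev)) / dt   # finite-difference estimate of dV/dt
    print("fault suspected:", sensor_fault_detected(v_dot))   # -> True
    ```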

  7. Land Surface Model Biases and their Impacts on the Assimilation of Snow-related Observations

    NASA Astrophysics Data System (ADS)

    Arsenault, K. R.; Kumar, S.; Hunter, S. M.; Aman, R.; Houser, P. R.; Toll, D.; Engman, T.; Nigro, J.

    2007-12-01

    Some recent snow modeling studies have employed a wide range of assimilation methods to incorporate snow cover or other snow-related observations into different hydrological or land surface models. These methods often include taking both model and observation biases into account throughout the model integration. This study focuses more on diagnosing the model biases and presenting their subsequent impacts on assimilating snow observations and modeled snowmelt processes. In this study, the land surface model, the Community Land Model (CLM), is used within the Land Information System (LIS) modeling framework to show how such biases impact the assimilation of MODIS snow cover observations. Alternative in-situ and satellite-based observations are used to help guide the CLM LSM in better predicting snowpack conditions and more realistic timing of snowmelt for a western US mountainous region. Also, MODIS snow cover observation biases will be discussed, and validation results will be provided. The issues faced with inserting or assimilating MODIS snow cover at moderate spatial resolutions (like 1km or less) will be addressed, and the impacts on CLM will be presented.

  8. An agent-based model for queue formation of powered two-wheelers in heterogeneous traffic

    NASA Astrophysics Data System (ADS)

    Lee, Tzu-Chang; Wong, K. I.

    2016-11-01

    This paper presents an agent-based model (ABM) for simulating the queue formation of powered two-wheelers (PTWs) in heterogeneous traffic at a signalized intersection. The main novelty is that the proposed interaction rule describing the position choice behavior of PTWs when queuing in heterogeneous traffic can capture the stochastic nature of the decision making process. The interaction rule is formulated as a multinomial logit model, which is calibrated by using a microscopic traffic trajectory dataset obtained from video footage. The ABM is validated against the survey data for the vehicular trajectory patterns, queuing patterns, queue lengths, and discharge rates. The results demonstrate that the proposed model is capable of replicating the observed queue formation process for heterogeneous traffic.
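
    A minimal sketch of the multinomial logit interaction rule: each candidate queuing position gets a systematic utility, and choice probabilities follow a softmax. The attributes and coefficients below are invented placeholders; the paper calibrates the model from video trajectory data.

    ```python
    import numpy as np

    def position_choice_probs(utilities):
        """Multinomial logit: probability of a powered two-wheeler choosing
        each candidate queuing position, given each position's systematic
        utility (softmax over utilities).
        """
        u = np.asarray(utilities, float)
        e = np.exp(u - u.max())            # subtract max for numerical stability
        return e / e.sum()

    # Hypothetical utility: prefer positions near the stop line (small gap)
    # and with wide lateral clearance; the beta coefficients are made up.
    gap_to_stopline = np.array([2.0, 5.0, 9.0])     # metres
    lateral_clear = np.array([1.2, 0.8, 1.5])       # metres
    u = -0.4 * gap_to_stopline + 1.1 * lateral_clear
    p = position_choice_probs(u)
    rng = np.random.default_rng(7)
    chosen = rng.choice(len(p), p=p)                # stochastic position choice
    print(np.round(p, 3), "chosen:", chosen)
    ```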

  9. Chemistry-Transport Modeling of the Satellite Observed Distribution of Tropical Tropospheric Ozone

    NASA Technical Reports Server (NTRS)

    Peters, Wouter; Krol, Maarten; Dentener, Frank; Thompson, Anne M.; Leloeveld, Jos; Bhartia, P. K. (Technical Monitor)

    2002-01-01

    We have compared the 14-year record of satellite-derived tropical tropospheric ozone columns (TTOC) from the NIMBUS-7 Total Ozone Mapping Spectrometer (TOMS) to TTOC calculated by a chemistry-transport model (CTM). An objective measure of error, based on the zonal distribution of TTOC in the tropics, is applied to perform this comparison systematically. In addition, the sensitivity of the model to several key processes in the tropics is quantified to select directions for future improvements. The comparisons indicate a widespread, systematic (20%) discrepancy over the tropical Atlantic Ocean, which maximizes during austral spring. Although independent evidence from ozonesondes shows that some of the disagreement is due to a satellite overestimate of TTOC, the Atlantic mismatch is largely due to a misrepresentation of seasonally recurring processes in the model. Only minor differences between the model and observations over the Pacific occur, mostly due to interannual variability not captured by the model. Although chemical processes determine the TTOC extent, dynamical processes dominate the TTOC distribution, as the use of actual meteorology pertaining to the year of observations always leads to a better agreement with TTOC observations than using a random year or a climatology. The modeled TTOC is remarkably insensitive to many model parameters due to efficient feedbacks in the ozone budget. Nevertheless, the simulations would profit from an improved biomass burning calendar, as well as from an increase in NOX abundances in free tropospheric biomass burning plumes. The model showed the largest response to lightning NOX emissions, but systematic improvements could not be found. The use of multi-year satellite-derived tropospheric data to systematically test and improve a CTM is a promising new addition to existing methods of model validation, and is a first step to integrating tropospheric satellite observations into global ozone modeling studies. Conversely, the CTM may suggest improvements to evolving satellite retrievals for tropospheric ozone.

  10. Insights into aerosols, chemistry, and clouds from NETCARE: Observations from the Canadian Arctic in summer 2014

    NASA Astrophysics Data System (ADS)

    Abbatt, J.

    2015-12-01

    The Canadian Network on Aerosols and Climate: Addressing Key Uncertainties in Remote Canadian Regions (or NETCARE) was established in 2013 to study the interactions between aerosols, chemistry, clouds and climate. The network brings together Canadian academic and government researchers, along with key international collaborators. Attention is being given to observations and modeling of Arctic aerosol, with the goal to understand underlying processes and so improve predictions of aerosol climate forcing. Motivation to understand the summer Arctic atmosphere comes from the retreat of summer sea ice and the associated increase in marine influence. To address these goals, a suite of measurements was conducted from two platforms in summer 2014 in the Canadian Arctic, i.e. an aircraft-based campaign on the Alfred Wegener Institute POLAR 6 and an ocean-based campaign from the CCGS Amundsen icebreaker. NETCARE-POLAR was based out of Resolute Bay, Nunavut during an initial period of little transport and cloud-free conditions and a later period characterized by more transport with potential biomass burning influence. Measurements included particle and cloud droplet numbers and size distributions, aerosol composition, cloud nuclei, and levels of gaseous tracers. Ultrafine particle events were more frequently observed in the marine boundary layer than above, with particle growth observed in some cases to cloud condensation nucleus sizes. The influence of biological processes on atmospheric constituents was also assessed from the ship during NETCARE-AMUNDSEN, as indicated by high measured levels of gaseous ammonia, DMS and oxygenated VOCs, as well as isolated particle formation and growth episodes. The cruise took place in Baffin Bay and through the Canadian archipelago. Interpretation of the observations from both campaigns is enhanced through the use of chemical transport and particle dispersion models. This talk will provide an overview of NETCARE Arctic observational and related modeling activities, focusing on 2014 Arctic activities and highlighting upcoming presentations within the session and the work of individual research teams. An attempt will be made to synthesize the observations and model results, drawing connections from aerosol sources through to cloud formation and deposition processes.

  11. Key issues, observations and goals for coupled, thermodynamic/geodynamic models

    NASA Astrophysics Data System (ADS)

    Kelemen, P. B.

    2017-12-01

    In coupled, thermodynamic/geodynamic models, focus should be on processes involving major rock-forming minerals and simple fluid compositions, and on parameters with first-order effects on likely dynamic processes: In a given setting, will fluid mass increase or decrease? How about solid density? Will flow become localized or diffuse? Will rocks flow or break? How do reactions affect global processes such as the formation and evolution of the plates, plate-boundary deformation, metamorphism, weathering, climate and geochemical cycles? Important reaction feedbacks in geodynamics include formation of dissolution channels and armored channels; divergence of flow and formation of permeability barriers due to crystallization in pore space; localization of fluid transport and ductile deformation in shear zones; reaction-driven cracking; mechanical channeling in granular media; shear heating; density instabilities; viscous fluid-weakening; fluid-induced frictional failure; and hydraulic fracture. Density instabilities often lead to melting, and there is an interesting dialectic between porous flow and diapirs. The best models provide a simple but comprehensive framework that can account for the general features of many or most of these phenomena. Ideally, calculations based on thermodynamic data and rheological observations alone should delineate the regimes in which each of these processes will occur and the boundaries between them. These often start with "toy models" and lab experiments on analog systems, with highly approximate scaling to simplified geological conditions and materials. Geologic observations provide the best constraints where 'frozen' fluid transport pathways or deformation processes are preserved. Inferences about completed processes based on fluid or solid products alone are more challenging and less unique. Not all important processes have good examples in outcrop, so directed searches for specific phenomena may fail. A highly generalized approach provides a way forward, allowing serendipitous discoveries of iconic examples wherever they are best developed. These then constrain and inspire the overall "phase diagram" of geodynamic processes.

  12. Modelling Extortion Racket Systems: Preliminary Results

    NASA Astrophysics Data System (ADS)

    Nardin, Luis G.; Andrighetto, Giulia; Székely, Áron; Conte, Rosaria

    Mafias are highly powerful and deeply entrenched organised criminal groups that cause both economic and social damage. Overcoming, or at least limiting, their harmful effects is a societally beneficial objective, which makes understanding their dynamics an objective of both scientific and political interest. We propose an agent-based simulation model aimed at understanding how independent and combined effects of legal and social norm-based processes help to counter mafias. Our results show that legal processes are effective in directly countering mafias by reducing their activities and changing the behaviour of the rest of the population, yet they are not able to change people's mind-set, which renders the change fragile. When combined with social norm-based processes, however, people's mind-set shifts towards a culture of legality, rendering the observed behaviour resilient to change.

  13. Improved simulation of regional CO2 surface concentrations using GEOS-Chem and fluxes from VEGAS

    NASA Astrophysics Data System (ADS)

    Chen, Z. H.; Zhu, J.; Zeng, N.

    2013-08-01

    CO2 measurements have been combined with simulated CO2 distributions from a transport model in order to produce optimal estimates of CO2 surface fluxes in inverse modeling. However, one persistent problem in using model-observation comparisons for this goal relates to the issue of compatibility. Observations at a single station reflect all underlying processes of various scales. These processes usually cannot be fully resolved by model simulations at the grid points nearest the station, due to lack of spatial or temporal resolution or missing processes in the model. In this study the stations in one region were grouped based on the amplitude and phase of the seasonal cycle at each station. The regionally averaged CO2 over all stations in one region represents the regional CO2 concentration of this region. The regional CO2 concentrations from model simulations and observations were used to evaluate the regional model results. The difference in regional CO2 concentration between observations and modeled results reflects the uncertainty of the large-scale flux in the region containing the grouped stations. We compared the regional CO2 concentrations from model simulations driven by biospheric fluxes from the Carnegie-Ames-Stanford Approach (CASA) and VEgetation-Global-Atmosphere-Soil (VEGAS) models, and used observations from GLOBALVIEW-CO2 to evaluate the regional model results. The results show that the largest difference of the regionally averaged values between simulations with fluxes from VEGAS and observations is less than 5 ppm for the North American boreal, North American temperate, Eurasian boreal, Eurasian temperate and European regions, which is smaller than the largest difference between CASA simulations and observations (more than 5 ppm). A large difference between both model results and observations remains for the regional CO2 concentration in the North Atlantic, Indian Ocean, and South Pacific tropics. The regionally averaged CO2 concentrations will be helpful for comparing CO2 concentrations from modeled results and observations and for evaluating regional surface fluxes from different methods.
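
    The grouping criterion can be illustrated with a short harmonic-regression sketch: fit an annual sine/cosine pair (plus mean and trend) to each station's series and characterize it by seasonal amplitude and phase; stations with similar (amplitude, phase) pairs are then grouped and averaged. The data below are synthetic.

    ```python
    import numpy as np

    def seasonal_amplitude_phase(co2, t_years):
        """Fit an annual harmonic to a station's CO2 series and return the
        seasonal-cycle amplitude and phase used to group stations. A
        minimal sketch of the grouping criterion described in the abstract.
        """
        X = np.column_stack([np.ones_like(t_years), t_years,   # mean + trend
                             np.cos(2 * np.pi * t_years),
                             np.sin(2 * np.pi * t_years)])
        beta, *_ = np.linalg.lstsq(X, co2, rcond=None)
        a, b = beta[2], beta[3]
        return np.hypot(a, b), np.arctan2(b, a)   # amplitude (ppm), phase (rad)

    # Hypothetical station: monthly data over 8 years with trend and noise.
    t = np.arange(96) / 12.0
    co2 = (380 + 2.0 * t + 7.5 * np.cos(2 * np.pi * t - 0.6)
           + np.random.default_rng(3).normal(0, 0.5, 96))
    amp, phase = seasonal_amplitude_phase(co2, t)
    print(f"amplitude ~ {amp:.1f} ppm, phase ~ {phase:.2f} rad")
    ```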

  14. Resolving the Antarctic contribution to sea-level rise: a hierarchical modelling framework.

    PubMed

    Zammit-Mangion, Andrew; Rougier, Jonathan; Bamber, Jonathan; Schön, Nana

    2014-06-01

    Determining the Antarctic contribution to sea-level rise from observational data is a complex problem. The number of physical processes involved (such as ice dynamics and surface climate) exceeds the number of observables, some of which have very poor spatial definition. This has led, in general, to solutions that utilise strong prior assumptions or physically based deterministic models to simplify the problem. Here, we present a new approach for estimating the Antarctic contribution, which only incorporates descriptive aspects of the physically based models in the analysis and in a statistical manner. By combining physical insights with modern spatial statistical modelling techniques, we are able to provide probability distributions on all processes deemed to play a role in both the observed data and the contribution to sea-level rise. Specifically, we use stochastic partial differential equations and their relation to geostatistical fields to capture our physical understanding and employ a Gaussian Markov random field approach for efficient computation. The method, an instantiation of Bayesian hierarchical modelling, naturally incorporates uncertainty in order to reveal credible intervals on all estimated quantities. The estimated sea-level rise contribution using this approach corroborates those found using a statistically independent method. © 2013 The Authors. Environmetrics Published by John Wiley & Sons, Ltd.

  15. Resolving the Antarctic contribution to sea-level rise: a hierarchical modelling framework†

    PubMed Central

    Zammit-Mangion, Andrew; Rougier, Jonathan; Bamber, Jonathan; Schön, Nana

    2014-01-01

    Determining the Antarctic contribution to sea-level rise from observational data is a complex problem. The number of physical processes involved (such as ice dynamics and surface climate) exceeds the number of observables, some of which have very poor spatial definition. This has led, in general, to solutions that utilise strong prior assumptions or physically based deterministic models to simplify the problem. Here, we present a new approach for estimating the Antarctic contribution, which only incorporates descriptive aspects of the physically based models in the analysis and in a statistical manner. By combining physical insights with modern spatial statistical modelling techniques, we are able to provide probability distributions on all processes deemed to play a role in both the observed data and the contribution to sea-level rise. Specifically, we use stochastic partial differential equations and their relation to geostatistical fields to capture our physical understanding and employ a Gaussian Markov random field approach for efficient computation. The method, an instantiation of Bayesian hierarchical modelling, naturally incorporates uncertainty in order to reveal credible intervals on all estimated quantities. The estimated sea-level rise contribution using this approach corroborates those found using a statistically independent method. © 2013 The Authors. Environmetrics Published by John Wiley & Sons, Ltd. PMID:25505370

  16. Comparative Protein Structure Modeling Using MODELLER

    PubMed Central

    Webb, Benjamin; Sali, Andrej

    2016-01-01

    Comparative protein structure modeling predicts the three-dimensional structure of a given protein sequence (target) based primarily on its alignment to one or more proteins of known structure (templates). The prediction process consists of fold assignment, target-template alignment, model building, and model evaluation. This unit describes how to calculate comparative models using the program MODELLER and how to use the ModBase database of such models, and discusses all four steps of comparative modeling, frequently observed errors, and some applications. Modeling lactate dehydrogenase from Trichomonas vaginalis (TvLDH) is described as an example. The download and installation of the MODELLER software is also described. PMID:27322406
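
    For concreteness, a basic MODELLER run for the TvLDH example follows the pattern below (adapted from the standard MODELLER tutorial; the file and code names are the tutorial's placeholders and assume a prepared target-template alignment):

    ```python
    # Requires the MODELLER package (https://salilab.org/modeller/).
    from modeller import *
    from modeller.automodel import *

    env = environ()
    env.io.atom_files_directory = ['.']            # where template PDBs live

    a = automodel(env,
                  alnfile='TvLDH-1bdmA.ali',       # target-template alignment
                  knowns='1bdmA',                  # template structure code
                  sequence='TvLDH',                # target sequence code
                  assess_methods=(assess.DOPE,))   # score models for evaluation
    a.starting_model = 1
    a.ending_model = 5                             # build five candidate models
    a.make()
    ```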

  17. Can we reconcile our understanding of the atmospheric methane budget over the past decades with atmospheric observations?

    NASA Astrophysics Data System (ADS)

    Bruhwiler, L. M.; Matthews, E.

    2007-12-01

    The balance of methane in the atmosphere is determined by surface emissions and losses due to uptake in soils and reaction with the hydroxyl radical. The atmospheric abundance of methane has risen by about a factor of three since pre-industrial times, but the growth rate has decreased substantially since the 1990s. Thus, global atmospheric methane appears to have equilibrated at around 1780 ppb, subject to considerable interannual variability, the causes of which are not well understood. Methane emissions are expected to increase in the future due to increases in fossil fuel use and possible changes in wetlands at high latitudes, and it is therefore important to test our understanding of the methane budget over the last two decades against network observations of atmospheric methane. Issues of interest are whether we can match the rise in methane over the 1980s, whether we can explain the decrease in growth rate during the 1990s, and whether we are able to simulate the observed interannual variability in the observations. We will show results from a multi-decade model simulation using analyzed meteorology from the ERA-40 reanalysis over this period. New time series of methane sources for 1980 through the early 2000s are used in the simulation. Anthropogenic sources include fossil fuels, with a total of 7 fuel-process emission combinations associated with the mining, processing, transport and distribution of coal, natural gas and oil; ruminant animals and manure, based on regionally representative profiles of bovine populations; landfills, including the impact of on-site methane capture; and irrigated rice cultivation, based on seasonal rice-cropping calendars. Natural sources we include are biomass burning from the GFED emission database, oceans, termites, and natural wetlands using a multiple-regression model derived from a process-based model. If time permits, we will also show preliminary results of a methane data assimilation using the Cooperative Air-Sampling and GMD network observations, and our new estimates of methane sources.

  18. Quantifying Atmospheric Moist Processes from Earth Observations. Really?

    NASA Astrophysics Data System (ADS)

    Shepson, P. B.; Cambaliza, M. O. L.; Salmon, O. E.; Heimburger, A. M. F.; Davis, K. J.; Lauvaux, T.; McGowan, L. E.; Miles, N.; Richardson, S.; Sarmiento, D. P.; Hardesty, M.; Karion, A.; Sweeney, C.; Iraci, L. T.; Hillyard, P. W.; Podolske, J. R.; Gurney, K. R.; Patarasuk, R.; Razlivanov, I. N.; Song, Y.; O'Keeffe, D.; Turnbull, J. C.; Vimont, I.; Whetstone, J. R.; Possolo, A.; Prasad, K.; Lopez-Coto, I.

    2014-12-01

    The amount of water in the Earth's atmosphere is tiny compared to all other sources of water on our planet, fresh or otherwise. However, this tiny amount of water is fundamental to most aspects of human life. The water that cycles from the Earth's surface, through condensation into clouds in the atmosphere, and back as precipitation is not only nature's way of delivering fresh water to land-locked human societies; it also exerts a fundamental control on our climate system, producing the most important feedbacks in the system. The representation of these processes in Earth system models contains many errors that produce well-known biases in the hydrological cycle. Surprisingly, the parameterizations of these important processes are not well validated with observations. Part of the reason for this situation stems from the fact that process evaluation is difficult to achieve on the global scale, since it has commonly been assumed that the static observations available from snapshots of individual parameters contain little information on processes. One of the successes of the A-Train has been the development of multi-parameter analysis based on the multi-sensor data produced by the satellite constellation. This has led to new insights into how water cycles through the Earth's atmosphere. Examples of these insights will be highlighted. It will be described how the rain formation process has been observed and how this has been used to constrain this process in models, with a huge impact. How these observations are beginning to reveal insights on deep convection, and examples of the use of these observations applied to models, will also be highlighted, as will the effects of aerosols on clouds and radiation.

  19. Quantifying Atmospheric Moist Processes from Earth Observations. Really?

    NASA Astrophysics Data System (ADS)

    Stephens, G. L.

    2015-12-01

    The amount of water in the Earth's atmosphere is tiny compared to all other sources of water on our planet, fresh or otherwise. However, this tiny amount of water is fundamental to most aspects of human life. The water that cycles from the Earth's surface, through condensation into clouds in the atmosphere, and back as precipitation is not only nature's way of delivering fresh water to land-locked human societies; it also exerts a fundamental control on our climate system, producing the most important feedbacks in the system. The representation of these processes in Earth system models contains many errors that produce well-known biases in the hydrological cycle. Surprisingly, the parameterizations of these important processes are not well validated with observations. Part of the reason for this situation stems from the fact that process evaluation is difficult to achieve on the global scale, since it has commonly been assumed that the static observations available from snapshots of individual parameters contain little information on processes. One of the successes of the A-Train has been the development of multi-parameter analysis based on the multi-sensor data produced by the satellite constellation. This has led to new insights into how water cycles through the Earth's atmosphere. Examples of these insights will be highlighted. It will be described how the rain formation process has been observed and how this has been used to constrain this process in models, with a huge impact. How these observations are beginning to reveal insights on deep convection, and examples of the use of these observations applied to models, will also be highlighted, as will the effects of aerosols on clouds and radiation.

  20. Inferring biogeochemistry past: a millennial-scale multimodel assimilation of multiple paleoecological proxies.

    NASA Astrophysics Data System (ADS)

    Dietze, M.; Raiho, A.; Fer, I.; Dawson, A.; Heilman, K.; Hooten, M.; McLachlan, J. S.; Moore, D. J.; Paciorek, C. J.; Pederson, N.; Rollinson, C.; Tipton, J.

    2017-12-01

    The pre-industrial period serves as an essential baseline against which we judge anthropogenic impacts on the earth's systems. However, direct measurements of key biogeochemical processes, such as carbon, water, and nutrient cycling, are absent for this period and there is no direct way to link paleoecological proxies, such as pollen and tree rings, to these processes. Process-based terrestrial ecosystem models provide a way to make inferences about the past, but have large uncertainties and by themselves often fail to capture much of the observed variability. Here we investigate the ability to improve inferences about pre-industrial biogeochemical cycles through the formal assimilation of proxy data into multiple process-based models. A Tobit ensemble filter with explicit estimation of process error was run at five sites across the eastern US for three models (LINKAGES, ED2, LPJ-GUESS). In addition to process error, the ensemble accounted for parameter uncertainty, estimated through the assimilation of the TRY and BETY trait databases, and driver uncertainty, accommodated by probabilistically downscaling and debiasing CMIP5 GCM output then filtering based on paleoclimate reconstructions. The assimilation was informed by four PalEON data products, each of which includes an explicit Bayesian error estimate: (1) STEPPS forest composition estimated from fossil pollen; (2) REFAB aboveground biomass (AGB) estimated from fossil pollen; (3) tree ring AGB and woody net primary productivity (wNPP); and (4) public land survey composition, stem density, and AGB. By comparing ensemble runs with and without data assimilation we are able to assess the information contribution of the proxy data to constraining biogeochemical fluxes, which is driven by the combination of model uncertainty, data uncertainty, and the strength of correlation between observed and unobserved quantities in the model ensemble. To our knowledge this is the first attempt at multi-model data assimilation with terrestrial ecosystem models. Results from the data-model assimilation allow us to assess the consistency across models in post-assimilation inferences about indirectly inferred quantities, such as GPP, soil carbon, and the water budget.
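
    The analysis step can be sketched with a standard stochastic ensemble Kalman update, shown below as a simplified stand-in for the Tobit ensemble filter (which additionally handles censored, zero-bounded proxies and estimates process error); states, observation, and error variances are hypothetical. Note how the unobserved wNPP component is updated through its ensemble covariance with the observed AGB, the correlation mechanism the abstract highlights.

    ```python
    import numpy as np

    def enkf_update(ensemble, obs, obs_err_var, H):
        """One stochastic ensemble Kalman analysis step: update an ensemble
        of model states (e.g., site-level biomass from each ecosystem model
        run) with a proxy-derived observation.

        ensemble: (n_ens, n_state) state ensemble
        obs:      observed value (scalar); obs_err_var: its error variance
        H:        (n_state,) linear observation operator
        """
        rng = np.random.default_rng(42)
        X = ensemble - ensemble.mean(axis=0)        # state anomalies
        Y = ensemble @ H                            # predicted observations
        y = Y - Y.mean()
        n = len(ensemble) - 1
        cov_xy = X.T @ y / n                        # state-obs covariance
        var_y = y @ y / n + obs_err_var
        K = cov_xy / var_y                          # Kalman gain (n_state,)
        perturbed = obs + rng.normal(0, np.sqrt(obs_err_var), len(ensemble))
        return ensemble + np.outer(perturbed - Y, K)

    # Hypothetical: 50-member ensemble of [AGB, wNPP]; pollen-derived AGB obs.
    rng = np.random.default_rng(0)
    ens = rng.normal([120.0, 5.0], [30.0, 1.0], size=(50, 2))
    H = np.array([1.0, 0.0])                        # we observe AGB only
    updated = enkf_update(ens, obs=95.0, obs_err_var=15.0**2, H=H)
    print("prior AGB mean:", ens[:, 0].mean(), "posterior:", updated[:, 0].mean())
    ```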

  1. Where does the carbon go? A model–data intercomparison of vegetation carbon allocation and turnover processes at two temperate forest free-air CO2 enrichment sites

    PubMed Central

    De Kauwe, Martin G; Medlyn, Belinda E; Zaehle, Sönke; Walker, Anthony P; Dietze, Michael C; Wang, Ying-Ping; Luo, Yiqi; Jain, Atul K; El-Masri, Bassil; Hickler, Thomas; Wårlind, David; Weng, Ensheng; Parton, William J; Thornton, Peter E; Wang, Shusen; Prentice, I Colin; Asao, Shinichi; Smith, Benjamin; McCarthy, Heather R; Iversen, Colleen M; Hanson, Paul J; Warren, Jeffrey M; Oren, Ram; Norby, Richard J

    2014-01-01

    Elevated atmospheric CO2 concentration (eCO2) has the potential to increase vegetation carbon storage if increased net primary production causes increased long-lived biomass. Model predictions of eCO2 effects on vegetation carbon storage depend on how allocation and turnover processes are represented. We used data from two temperate forest free-air CO2 enrichment (FACE) experiments to evaluate representations of allocation and turnover in 11 ecosystem models. Observed eCO2 effects on allocation were dynamic. Allocation schemes based on functional relationships among biomass fractions that vary with resource availability were best able to capture the general features of the observations. Allocation schemes based on constant fractions or resource limitations performed less well, with some models having unintended outcomes. Few models represent turnover processes mechanistically and there was wide variation in predictions of tissue lifespan. Consequently, models did not perform well at predicting eCO2 effects on vegetation carbon storage. Our recommendations to reduce uncertainty include: use of allocation schemes constrained by biomass fractions; careful testing of allocation schemes; and synthesis of allocation and turnover data in terms of model parameters. Data from intensively studied ecosystem manipulation experiments are invaluable for constraining models and we recommend that such experiments should attempt to fully quantify carbon, water and nutrient budgets. PMID:24844873

  2. The Irrational Science of Educational Reform.

    ERIC Educational Resources Information Center

    Gordon, Rick

    This paper discusses the problems encountered in applying rational and participatory models to school reform and presents an alternative model based on action research. The group processes of a school-improvement team (SIT) at a high school are examined. Data were collected through participant observation, interviews with three faculty members,…

  3. A framework for modeling scenario-based barrier island storm impacts

    USGS Publications Warehouse

    Mickey, Rangley; Long, Joseph W.; Dalyander, P. Soupy; Plant, Nathaniel G.; Thompson, David M.

    2018-01-01

    Methods for investigating the vulnerability of existing or proposed coastal features to storm impacts often rely on simplified parametric models or one-dimensional process-based modeling studies that focus on changes to a profile across a dune or barrier island. These simple studies tend to neglect impacts on curvilinear or alongshore-varying island planforms, the influence of non-uniform nearshore hydrodynamics and sediment transport, the irregular morphology of the offshore bathymetry, and impacts from low-magnitude wave events (e.g. cold fronts). Presented here is a framework for simulating regionally specific, low- and high-magnitude scenario-based storm impacts to assess the alongshore-variable vulnerabilities of a coastal feature. Storm scenarios based on historic hydrodynamic conditions were derived and simulated using the process-based morphologic evolution model XBeach. Model results show that the scenarios predicted similar patterns of erosion and overwash when compared to observed qualitative morphologic changes from recent storm events that were not included in the dataset used to build the scenarios. The framework model simulations were capable of predicting specific areas of vulnerability in the existing feature, and the results illustrate how this storm-vulnerability simulation framework could be used as a tool to help inform the decision-making process for scientists, engineers, and stakeholders involved in coastal zone management or restoration projects.

  4. The Role of Laboratory-Based Studies of the Physical and Biological Properties of Sea Ice in Supporting the Observation and Modeling of Ice Covered Seas

    NASA Astrophysics Data System (ADS)

    Light, B.; Krembs, C.

    2003-12-01

    Laboratory-based studies of the physical and biological properties of sea ice are an essential link between high latitude field observations and existing numerical models. Such studies promote improved understanding of climatic variability and its impact on sea ice and the structure of ice-dependent marine ecosystems. Controlled laboratory experiments can help identify feedback mechanisms between physical and biological processes and their response to climate fluctuations. Climatically sensitive processes occurring between sea ice and the atmosphere and sea ice and the ocean determine surface radiative energy fluxes and the transfer of nutrients and mass across these boundaries. High temporally and spatially resolved analyses of sea ice under controlled environmental conditions lend insight to the physics that drive these transfer processes. Techniques such as optical probing, thin section photography, and microscopy can be used to conduct experiments on natural sea ice core samples and laboratory-grown ice. Such experiments yield insight on small scale processes from the microscopic to the meter scale and can be powerful interdisciplinary tools for education and model parameterization development. Examples of laboratory investigations by the authors include observation of the response of sea ice microstructure to changes in temperature, assessment of the relationships between ice structure and the partitioning of solar radiation by first-year sea ice covers, observation of pore evolution and interfacial structure, and quantification of the production and impact of microbial metabolic products on the mechanical, optical, and textural characteristics of sea ice.

  5. Sensitivity of Attitude Determination on the Model Assumed for ISAR Radar Mappings

    NASA Astrophysics Data System (ADS)

    Lemmens, S.; Krag, H.

    2013-09-01

    Inverse synthetic aperture radars (ISAR) are valuable instruments for assessing the state of a large object in low Earth orbit. The images generated by these radars can reach a sufficient quality to be used during launch support or contingency operations, e.g. for confirming the deployment of structures, determining structural integrity, or analysing the dynamic behaviour of an object. However, the direct interpretation of ISAR images can be a demanding task due to the nature of the range-Doppler space in which these images are produced. Recently, a tool has been developed by the European Space Agency's Space Debris Office to generate radar mappings of a target in orbit. Such mappings are a 3D-model-based simulation of how an ideal ISAR image would be generated by a ground-based radar under given processing conditions. These radar mappings can be used to support a data interpretation process. For example, by processing predefined attitude scenarios during an observation sequence and comparing them with actual observations, one can detect non-nominal behaviour. Conversely, one can also estimate the attitude states of the target by fitting the radar mappings to the observations. It has been demonstrated for the latter use case that a coarse approximation of the target through a 3D model is already sufficient to derive the attitude information from the generated mappings. The level of detail required for the 3D model is determined by the process of generating ISAR images, which is based on the theory of scattering bodies. Therefore, a complex surface can return an intrinsically noisy ISAR image. For example, when many instruments on a satellite are visible to the observer, the ISAR image can suffer from multipath reflections. In this paper, we further analyse the sensitivity of the attitude fitting algorithms to variations in the dimensions and the level of detail of the underlying 3D model. Moreover, we investigate the ability to estimate the orientations of different spacecraft components with respect to each other from the fitting procedure.

  6. Dual Processes in Decision Making and Developmental Neuroscience: A Fuzzy-Trace Model.

    PubMed

    Reyna, Valerie F; Brainerd, Charles J

    2011-09-01

    From Piaget to the present, traditional and dual-process theories have predicted improvement in reasoning from childhood to adulthood, and improvement has been observed. However, developmental reversals - that reasoning biases emerge with development - have also been observed in a growing list of paradigms. We explain how fuzzy-trace theory predicts both improvement and developmental reversals in reasoning and decision making. Drawing on research on logical and quantitative reasoning, as well as on risky decision making in the laboratory and in life, we illustrate how the same small set of theoretical principles apply to typical neurodevelopment, encompassing childhood, adolescence, and adulthood, and to neurological conditions such as autism and Alzheimer's disease. For example, framing effects - that risk preferences shift when the same decisions are phrased in terms of gains versus losses - emerge in early adolescence as gist-based intuition develops. In autistic individuals, who rely less on gist-based intuition and more on verbatim-based analysis, framing biases are attenuated (i.e., they outperform typically developing control subjects). In adults, simple manipulations based on fuzzy-trace theory can make framing effects appear and disappear depending on whether gist-based intuition or verbatim-based analysis is induced. These theoretical principles are summarized and integrated in a new mathematical model that specifies how dual modes of reasoning combine to produce predictable variability in performance. In particular, we show how the most popular and extensively studied model of decision making - prospect theory - can be derived from fuzzy-trace theory by combining analytical (verbatim-based) and intuitive (gist-based) processes.

  7. Dual Processes in Decision Making and Developmental Neuroscience: A Fuzzy-Trace Model

    PubMed Central

    Reyna, Valerie F.; Brainerd, Charles J.

    2011-01-01

    From Piaget to the present, traditional and dual-process theories have predicted improvement in reasoning from childhood to adulthood, and improvement has been observed. However, developmental reversals—that reasoning biases emerge with development—have also been observed in a growing list of paradigms. We explain how fuzzy-trace theory predicts both improvement and developmental reversals in reasoning and decision making. Drawing on research on logical and quantitative reasoning, as well as on risky decision making in the laboratory and in life, we illustrate how the same small set of theoretical principles apply to typical neurodevelopment, encompassing childhood, adolescence, and adulthood, and to neurological conditions such as autism and Alzheimer's disease. For example, framing effects—that risk preferences shift when the same decisions are phrased in terms of gains versus losses—emerge in early adolescence as gist-based intuition develops. In autistic individuals, who rely less on gist-based intuition and more on verbatim-based analysis, framing biases are attenuated (i.e., they outperform typically developing control subjects). In adults, simple manipulations based on fuzzy-trace theory can make framing effects appear and disappear depending on whether gist-based intuition or verbatim-based analysis is induced. These theoretical principles are summarized and integrated in a new mathematical model that specifies how dual modes of reasoning combine to produce predictable variability in performance. In particular, we show how the most popular and extensively studied model of decision making—prospect theory—can be derived from fuzzy-trace theory by combining analytical (verbatim-based) and intuitive (gist-based) processes. PMID:22096268

  8. NAME Modeling and Climate Process Team

    NASA Astrophysics Data System (ADS)

    Schemm, J. E.; Williams, L. N.; Gutzler, D. S.

    2007-05-01

    The NAME Climate Process and Modeling Team (CPT) has been established to address the need to link climate process research to model development and testing activities for warm season climate prediction. The project builds on two existing NAME-related modeling efforts. One major component of this project is the organization and implementation of a second phase of NAMAP, based on the 2004 season. NAMAP2 will re-examine the metrics proposed by NAMAP, extend the NAMAP analysis to transient variability, exploit the extensive observational database provided by NAME 2004 to analyze simulation targets of special interest, and expand participation. Vertical column analysis will bring local NAME observations and model outputs together in a context where key physical processes in the models can be evaluated and improved. The second component builds on the current NAME-related modeling effort focused on the diurnal cycle of precipitation in several global models, including those implemented at NCEP, NASA and GFDL. Our activities will focus on the ability of the operational NCEP Global Forecast System (GFS) to simulate the diurnal and seasonal evolution of warm season precipitation during the NAME 2004 EOP, and on changes to the treatment of deep convection in the complicated terrain of the NAMS domain that are necessary to improve the simulations, and ultimately the predictions, of warm season precipitation. These activities will be strongly tied to NAMAP2 to ensure technology transfer from research to operations. Results based on experiments conducted with the NCEP CFS GCM will be reported at the conference, with emphasis on the impact of horizontal resolution in predicting warm season precipitation over North America.

  9. Extended Kalman Doppler tracking and model determination for multi-sensor short-range radar

    NASA Astrophysics Data System (ADS)

    Mittermaier, Thomas J.; Siart, Uwe; Eibert, Thomas F.; Bonerz, Stefan

    2016-09-01

    A tracking solution for collision avoidance in industrial machine tools based on short-range millimeter-wave radar Doppler observations is presented. At the core of the tracking algorithm is an Extended Kalman Filter (EKF) that provides dynamic estimation and localization in real-time. The underlying sensor platform consists of several homodyne continuous wave (CW) radar modules. Based on in-phase/quadrature (IQ) processing and down-conversion, they provide only Doppler shift information about the observed target. Localization with Doppler shift estimates is a nonlinear problem that needs to be linearized before the linear KF can be applied. The accuracy of state estimation depends strongly on the introduced linearization errors, the initialization, and the models that represent the true physics as well as the stochastic properties. The important issue of filter consistency is addressed, and an initialization procedure based on data fitting and maximum likelihood estimation is suggested. Models for both measurement and process noise are developed. Tracking results from typical three-dimensional courses of movement at short distances in front of a multi-sensor radar platform are presented.
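
    To make the linearization step concrete, the following is a minimal sketch of a Doppler-only EKF in Python, assuming a constant-velocity planar target, three illustrative sensor positions, and noise levels chosen only for demonstration; it is not the authors' implementation.

        import numpy as np

        # State: [px, py, vx, vy]; constant-velocity motion model (assumed).
        dt = 0.01
        F = np.array([[1, 0, dt, 0],
                      [0, 1, 0, dt],
                      [0, 0, 1, 0],
                      [0, 0, 0, 1]], dtype=float)
        Q = 1e-4 * np.eye(4)   # process noise covariance (illustrative)
        R = 1e-2 * np.eye(3)   # Doppler measurement noise (illustrative)
        sensors = np.array([[0.0, 0.0], [0.5, 0.0], [0.0, 0.5]])  # sensor positions (made up)

        def h(x):
            """Doppler (radial velocity) seen by each CW sensor."""
            p, v = x[:2], x[2:]
            return np.array([v @ (p - s) / np.linalg.norm(p - s) for s in sensors])

        def H_jac(x):
            """Jacobian of h at x: rows = sensors, columns = [px, py, vx, vy]."""
            p, v = x[:2], x[2:]
            H = np.zeros((len(sensors), 4))
            for i, s in enumerate(sensors):
                d = p - s
                r = np.linalg.norm(d)
                H[i, :2] = v / r - (v @ d) * d / r**3   # d(Doppler)/d(position)
                H[i, 2:] = d / r                        # d(Doppler)/d(velocity)
            return H

        def ekf_step(x, P, z):
            # Predict with the linear motion model.
            x = F @ x
            P = F @ P @ F.T + Q
            # Update: linearize the nonlinear Doppler measurement around the prediction.
            H = H_jac(x)
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
            x = x + K @ (z - h(x))
            P = (np.eye(4) - K @ H) @ P
            return x, P

        # One synthetic step: measurement generated from a nearby "true" state.
        x_est, P_est = np.array([1.0, 1.0, 0.0, 0.0]), np.eye(4)
        z = h(np.array([1.0, 1.0, -0.5, 0.2]))
        x_est, P_est = ekf_step(x_est, P_est, z)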

  10. Prediction of Proper Temperatures for the Hot Stamping Process Based on the Kinetics Models

    NASA Astrophysics Data System (ADS)

    Samadian, P.; Parsa, M. H.; Mirzadeh, H.

    2015-02-01

    Nowadays, the application of kinetics models for predicting the microstructures of steels subjected to thermo-mechanical treatments has increased, to minimize direct experimentation, which is costly and time consuming. In the current work, the final microstructures of AISI 4140 steel sheets after the hot stamping process were predicted using the Kirkaldy and Li kinetics models combined with new thermodynamically based models in order to determine appropriate process temperatures. In this way, the effect of deformation during hot stamping on the Ae3, Acm, and Ae1 temperatures was considered, and then the equilibrium volume fractions of phases at different temperatures were calculated. Moreover, the ferrite transformation rate equations of the Kirkaldy and Li models were modified by a term proposed by Åkerström to account for the influence of plastic deformation. Results showed that the modified Kirkaldy model is satisfactory for determining appropriate austenitization temperatures for the hot stamping process of AISI 4140 steel sheets, because its microstructure predictions agree well with the experimental observations.
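
    The full Kirkaldy/Li rate equations are beyond a short example, but the general shape of isothermal transformation kinetics can be sketched with the classical JMAK form, X = 1 - exp(-k t^n); the coefficients below are illustrative placeholders, not values from the Kirkaldy or Li models.

        import numpy as np

        def jmak_fraction(t_s, k, n):
            """Transformed phase fraction after t_s seconds at one temperature."""
            return 1.0 - np.exp(-k * t_s ** n)

        # Illustrative coefficients at two hold temperatures; in Kirkaldy-type
        # models the rate constant depends on undercooling, grain size and alloying.
        times = np.array([10.0, 60.0, 300.0])
        for label, k in [("lower T (faster)", 5e-3), ("higher T (slower)", 1e-3)]:
            print(label, np.round(jmak_fraction(times, k, n=1.5), 3))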

  11. Response time modeling reveals multiple contextual cuing mechanisms.

    PubMed

    Sewell, David K; Colagiuri, Ben; Livesey, Evan J

    2017-08-24

    Contextual cuing refers to a response time (RT) benefit that occurs when observers search through displays that have been repeated over the course of an experiment. Although it is generally agreed that contextual cuing arises via an associative learning mechanism, there is uncertainty about the type(s) of process(es) that allow learning to influence RT. We contrast two leading accounts of the contextual cuing effect that differ in terms of the general process that is credited with producing the effect. The first, the expedited search account, attributes the cuing effect to an increase in the speed with which the target is acquired. The second, the decision threshold account, attributes the cuing effect to a reduction in the response threshold used by observers when making a subsequent decision about the target (e.g., judging its orientation). We use the diffusion model to contrast the quantitative predictions of these two accounts at the level of individual observers. Our use of the diffusion model also allows us to explore a novel decision-level locus of the cuing effect based on perceptual learning. This novel account attributes the RT benefit to a perceptual learning process that increases the quality of information used to drive the decision process. Our results reveal individual differences in the process(es) involved in contextual cuing, but also identify several striking regularities across observers. We find strong support for both the decision threshold account and the novel perceptual learning account. We find relatively weak support for the expedited search account.
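
    The three candidate loci map naturally onto diffusion-model parameters: non-decision time (expedited search), boundary separation (decision threshold) and drift rate (perceptual learning). A minimal Euler simulation sketch, with illustrative parameter values rather than fitted ones:

        import numpy as np

        rng = np.random.default_rng(0)

        def mean_rt(v, a, t0, n=2000, dt=0.001, sigma=1.0):
            """Simulate a diffusion model with drift v, boundary separation a
            (boundaries at +/- a/2, unbiased start) and non-decision time t0;
            return the mean RT of upper-boundary (correct) responses."""
            rts = []
            for _ in range(n):
                x, t = 0.0, 0.0
                while abs(x) < a / 2:
                    x += v * dt + sigma * np.sqrt(dt) * rng.standard_normal()
                    t += dt
                if x >= a / 2:
                    rts.append(t0 + t)
            return np.mean(rts)

        base = mean_rt(v=1.0, a=2.0, t0=0.30)
        # Three candidate loci of the contextual-cuing RT benefit:
        faster_search = mean_rt(v=1.0, a=2.0, t0=0.25)  # expedited search: smaller t0
        lower_thresh  = mean_rt(v=1.0, a=1.6, t0=0.30)  # decision threshold: smaller a
        better_info   = mean_rt(v=1.3, a=2.0, t0=0.30)  # perceptual learning: larger v
        print(base, faster_search, lower_thresh, better_info)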

  12. Towards a Stochastic Predictive Understanding of Ecosystem Functioning and Resilience to Environmental Changes

    NASA Astrophysics Data System (ADS)

    Pappas, C.

    2017-12-01

    Terrestrial ecosystem processes respond differently to hydrometeorological variability across timescales, and so does our scientific understanding of the underlying mechanisms. Process-based modeling of ecosystem functioning is therefore challenging, especially when long-term predictions are envisioned. Here we analyze the statistical properties of hydrometeorological and ecosystem variability, i.e., the variability of ecosystem processes related to vegetation carbon dynamics, from hourly to decadal timescales. Twenty-three extra-tropical forest sites, covering different climatic zones and vegetation characteristics, are examined. Micrometeorological and reanalysis data of precipitation, air temperature, shortwave radiation and vapor pressure deficit are used to describe hydrometeorological variability. Ecosystem variability is quantified using long-term eddy covariance flux data of hourly net ecosystem exchange of CO2 between the land surface and the atmosphere, monthly remote sensing vegetation indices, annual tree-ring widths and above-ground biomass increment estimates. We find that across sites and timescales ecosystem variability is confined within a hydrometeorological envelope that describes the range of variability of the available resources, i.e., water and energy. Furthermore, ecosystem variability demonstrates long-term persistence, highlighting ecological memory and slow ecosystem recovery rates after disturbances. We derive an analytical model, combining deterministic harmonics and stochastic processes, that represents major mechanisms and uncertainties and mimics the observed pattern of hydrometeorological and ecosystem variability. This stochastic framework offers a parsimonious and mathematically tractable approach for modelling ecosystem functioning and for understanding its response and resilience to environmental changes. Furthermore, this framework reflects well the observed ecological memory, an inherent property of ecosystem functioning that is currently not captured by simulation results with process-based models. Our analysis offers a perspective for terrestrial ecosystem modelling, combining current process understanding with stochastic methods, and paves the way for new model-data integration opportunities in Earth system sciences.
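
    A minimal sketch of the model class described, an annual harmonic plus a persistent AR(1) stochastic term, with purely illustrative parameter values:

        import numpy as np

        rng = np.random.default_rng(1)

        t = np.arange(10 * 365)  # ten years of daily steps

        # Deterministic part: annual harmonic in, e.g., a daily carbon flux
        # (mean, amplitude and units are illustrative).
        seasonal = 2.0 + 3.0 * np.cos(2 * np.pi * t / 365.0)

        # Stochastic part: AR(1) noise; phi close to 1 produces the long-term
        # persistence ("ecological memory") described in the abstract.
        phi, sigma = 0.98, 0.3
        noise = np.zeros(t.size)
        for i in range(1, t.size):
            noise[i] = phi * noise[i - 1] + sigma * rng.standard_normal()

        flux = seasonal + noise
        # The autocorrelation of `noise` decays as phi**lag, i.e., anomalies
        # (disturbances) are forgotten only slowly.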

  13. Complex Networks in Psychological Models

    NASA Astrophysics Data System (ADS)

    Wedemann, R. S.; Carvalho, L. S. A. V. D.; Donangelo, R.

    We develop schematic, self-organizing, neural-network models to describe mechanisms associated with mental processes in terms of a neurocomputational substrate. These models are examples of real-world complex networks with interesting general topological structures. Considering dopaminergic signal-to-noise neuronal modulation in the central nervous system, we propose neural network models to explain the development of cortical map structure and the dynamics of memory access, and to unify different mental processes into a single neurocomputational substrate. Based on our neural network models, neurotic behavior may be understood as an associative memory process in the brain, and the linguistic, symbolic associative process involved in psychoanalytic working-through can be mapped onto a corresponding process of reconfiguration of the neural network. The models are illustrated through computer simulations, in which we varied dopaminergic modulation and observed the self-organizing emergent patterns in the resulting semantic map, interpreting them as different manifestations of mental functioning, from psychotic through normal and neurotic behavior, to creativity.

  14. Boltzmann Energy-based Image Analysis Demonstrates that Extracellular Domain Size Differences Explain Protein Segregation at Immune Synapses

    PubMed Central

    Burroughs, Nigel J.; Köhler, Karsten; Miloserdov, Vladimir; Dustin, Michael L.; van der Merwe, P. Anton; Davis, Daniel M.

    2011-01-01

    Immune synapses formed by T and NK cells both show segregation of the integrin ICAM1 from other proteins such as CD2 (T cell) or KIR (NK cell). However, the mechanism by which these proteins segregate remains unclear; one key hypothesis is a redistribution based on protein size. Simulations of this mechanism qualitatively reproduce observed segregation patterns, but only in certain parameter regimes. Verifying that these parameter constraints in fact hold has not been possible to date, as this requires a quantitative coupling of theory to experimental data. Here, we address this challenge, developing a new methodology for analysing and quantifying image data and integrating it with biophysical models. Specifically, we fit a binding kinetics model to two-colour fluorescence data for cytoskeleton-independent synapses (2D and 3D) and test whether the observed inverse correlation between fluorophores conforms to size-dependent exclusion, and further, whether patterned states are predicted when model parameters are estimated on individual synapses. All synapses analysed satisfy these conditions, demonstrating that the mechanisms of protein redistribution have identifiable signatures in their spatial patterns. We conclude that energy processes implicit in protein-size-based segregation can drive the patternation observed in individual synapses, at least for the specific examples tested, such that no additional processes need to be invoked. This implies that biophysical processes within the membrane interface have a crucial impact on cell-cell communication and cell signalling, governing protein interactions and protein aggregation. PMID:21829338

  15. Diviner lunar radiometer gridded brightness temperatures from geodesic binning of modeled fields of view

    NASA Astrophysics Data System (ADS)

    Sefton-Nash, E.; Williams, J.-P.; Greenhagen, B. T.; Aye, K.-M.; Paige, D. A.

    2017-12-01

    An approach is presented to efficiently produce high-quality gridded data records from the large, global point-based dataset returned by the Diviner Lunar Radiometer Experiment aboard NASA's Lunar Reconnaissance Orbiter. The need to minimize data volume and processing time in production of science-ready map products is increasingly important with the growth in data volume of planetary datasets. Diviner makes on average >1400 observations per second of radiance that is reflected and emitted from the lunar surface, using 189 detectors divided into 9 spectral channels. Data management and processing bottlenecks are amplified by modeling every observation as a probability distribution function over the field of view, which can increase the required processing time by 2-3 orders of magnitude. Geometric corrections, such as projection of data points onto a digital elevation model, are numerically intensive, and therefore it is desirable to perform them only once. Our approach reduces bottlenecks through parallel binning and efficient storage of a pre-processed database of observations. Database construction is via subdivision of a geodesic icosahedral grid, with a spatial resolution that can be tailored to suit the field of view of the observing instrument. Global geodesic grids with high spatial resolution are normally impractically memory intensive. We therefore demonstrate a minimum-storage and highly parallel method to bin very large numbers of data points onto such a grid. A database of the pre-processed and binned points is then used for production of mapped data products, which is significantly faster than if unprocessed points were used. We explore quality controls in the production of gridded data records by conditional interpolation, allowed only where data density is sufficient. The resultant effects on the spatial continuity and uncertainty in maps of lunar brightness temperatures are illustrated. We identify four binning regimes based on trades between the spatial resolution of the grid, the size of the FOV and the on-target spacing of observations. Our approach may be applicable and beneficial for many existing and future point-based planetary datasets.

  16. Caliber Corrected Markov Modeling (C2M2): Correcting Equilibrium Markov Models.

    PubMed

    Dixit, Purushottam D; Dill, Ken A

    2018-02-13

    Rate processes are often modeled using Markov State Models (MSMs). Suppose you know a prior MSM and then learn that your prediction of some particular observable rate is wrong. What is the best way to correct the whole MSM? For example, molecular dynamics simulations of protein folding may sample many microstates, possibly giving correct pathways through them while also giving the wrong overall folding rate when compared to experiment. Here, we describe Caliber Corrected Markov Modeling (C2M2), an approach based on the principle of maximum entropy for updating a Markov model by imposing state- and trajectory-based constraints. We show that such corrections are equivalent to asserting position-dependent diffusion coefficients in continuous-time continuous-space Markov processes modeled by a Smoluchowski equation. We derive the functional form of the diffusion coefficient explicitly in terms of the trajectory-based constraints. We illustrate with examples of 2D particle diffusion and an overdamped harmonic oscillator.
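
    The starting point for any such correction is knowing what rates the prior MSM actually implies. A minimal sketch of computing implied relaxation timescales from a transition matrix (the matrix and lag time below are illustrative, not from the paper):

        import numpy as np

        # A prior 3-state MSM transition matrix at lag time tau (illustrative numbers).
        tau = 1.0  # assumed lag time, in whatever time unit the MSM uses
        T = np.array([[0.95, 0.04, 0.01],
                      [0.10, 0.85, 0.05],
                      [0.02, 0.08, 0.90]])

        # Implied relaxation timescales: t_i = -tau / ln(lambda_i) for the
        # eigenvalues below 1 (lambda = 1 corresponds to equilibrium).
        eigvals = np.sort(np.linalg.eigvals(T).real)[::-1]
        timescales = -tau / np.log(eigvals[1:])
        print(timescales)  # if these disagree with experiment, the MSM needs correcting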

  17. Modeling High Rate Phosphorus and Nitrogen Removal in a Vertical Flow Alum Sludge based Constructed Wetlands

    NASA Astrophysics Data System (ADS)

    Jeyakumar, Lordwin; Zhao, Yaqian

    2014-05-01

    Increased awareness of the impacts of diffuse pollution and their intensification has pushed forward the need for the development of low-cost wastewater treatment techniques. One such effort is the use of novel DASC (Dewatered Alum Sludge Cakes) based constructed wetlands (CWs) for removing nutrients, organics, trace elements and other pollutants from wastewater. Understanding of the processes in CWs requires a numerical model that describes the biochemical transformation and degradation processes in subsurface vertical flow (VF) CWs. Therefore, this research focuses on the development of a process-based model for phosphorus (P) and nitrogen (N) removal to achieve a stable performance by using DASC as a substrate in the CW treatment system. An object-oriented modelling tool known as "STELLA", which works on the principle of system dynamics, is used for the development of the P and N model. The core objective of the modelling work is to understand the processes in DASC-based CWs and optimize design criteria. A P and N dynamic model was developed for DASC-based CWs. The P model developed exclusively for the DASC-based CW was able to simulate the effluent P concentration leaving the system satisfactorily. Moreover, the developed P dynamic model identified the major P pathways as adsorption (72%), followed by plant uptake (20%) and microbial uptake (7%), in a single-stage laboratory-scale DASC-based CW. Similarly, a P dynamic simulation model was developed to simulate the four-stage laboratory-scale DASC-based CWs. It was found that simulated and observed values of P removal were in good agreement. The fate of P in all four stages clearly shows that adsorption played a pivotal role in each stage of the system due to the use of DASC as a substrate. P adsorption by the wetland substrate/DASC represents 59-75% of total P reduction. Subsequently, plant uptake and microbial uptake play a lesser role in P removal (as compared to adsorption). With regard to N, a DASC-based CW dynamic model was developed and run for 18 months, from Feb 2009 to May 2010. The results reveal that the simulated effluent DN, NH4-N, NO3-N and TN agreed reasonably well with the observed results. The TN removal was found to be 52% in the DASC-based CW. Interestingly, NIT is the main agent of removal (65.60%), followed by ad (11.90%), AMM (8.90%), NH4-N (P) (5.90%), and NO3-N (P) (4.40%). DeN did not produce any significant removal (2.90%) in the DASC-based CW, which may be due to the lack of anaerobic conditions and the absence of carbon sources. The N model also attempted to simulate the internal process behaviour of the system, which provided a useful tool for gaining insight into the N dynamics of VFCWs. The results obtained for both the N and P models can be used to improve the design of the newly developed DASC-based CWs to increase the efficiency of nutrient removal by CWs.

  18. Understanding interannual variability in the distribution of, and transport processes affecting, the early life stages of Todarodes pacificus using behavioral-hydrodynamic modeling approaches

    NASA Astrophysics Data System (ADS)

    Kim, Jung Jin; Stockhausen, William; Kim, Suam; Cho, Yang-Ki; Seo, Gwang-Ho; Lee, Joon-Soo

    2015-11-01

    To understand interannual variability in the distribution of the early life stages of the Todarodes pacificus summer-spawning population, and to identify the key transport processes influencing this variability, we used a coupled bio-physical model that combines an individual-based model (IBM), incorporating ontogenetic vertical migration for paralarval behavior and a temperature-dependent survival process, with a ROMS oceanographic model. Using the distribution of paralarvae observed in the northern East China Sea (ECS) during several field cruises as an end point, the spawning ground for the summer-spawning population was estimated to extend from southeast of Jeju Island to the central ECS near 29°N by running the model backwards in time. Running the model forward, interannual variability in the distribution of paralarvae predicted by the model was consistent with that observed in several field surveys; surviving individuals in the northern ECS were substantially more abundant in late July 2006 than in 2007, in agreement with observed paralarval distributions. The total number of surviving individuals at 60 days after release, based on simulation throughout the summer spawning period (June-August), was 20,329 for 2006, compared with 13,816 for 2007. The surviving individuals were mainly distributed in the East/Japan Sea (EJS), corresponding to a pathway following the nearshore branch of the Tsushima Warm Current flowing along the Japanese coast during both years. In contrast, the abundance of surviving individuals was extremely low in 2007 compared to 2006 on the Pacific side of Japan. Interannual variability in transport and survival processes had a substantial impact not only on the abundance of surviving paralarvae, but also on the flux of paralarvae to adjacent waters. Our simulation results for between-year variation in paralarval abundance coincide with recruitment (year n + 1) variability of T. pacificus in the field. The agreement between the simulation and field data indicates our model may be useful for predicting the recruitment of T. pacificus.
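
    A toy version of the coupled IBM idea, advection plus random-walk dispersion with temperature-dependent survival, can be sketched as follows; the flow field, thermal window and mortality rates are invented stand-ins for the ROMS-driven model:

        import numpy as np

        rng = np.random.default_rng(9)

        n_part, n_days = 2000, 60
        pos = np.zeros((n_part, 2))          # release point, degrees offset from origin
        alive = np.ones(n_part, dtype=bool)

        def current(day):
            """Idealized time-varying mean flow (deg/day), standing in for ROMS."""
            return np.array([0.02, 0.05]) * (1 + 0.5 * np.sin(2 * np.pi * day / 30))

        for day in range(n_days):
            # Advection by the mean flow plus an eddy-diffusive random walk.
            pos[alive] += current(day) + 0.01 * rng.standard_normal((alive.sum(), 2))
            # Temperature-dependent survival: mortality rises outside a thermal window.
            temp = 22.0 + 2.0 * np.sin(2 * np.pi * day / 60.0) - 5.0 * pos[:, 1]
            p_survive = np.where((temp > 15.0) & (temp < 28.0), 0.99, 0.90)
            alive &= rng.random(n_part) < p_survive

        print("survivors at 60 days:", alive.sum())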

  19. Occurrence analysis of daily rainfalls through non-homogeneous Poissonian processes

    NASA Astrophysics Data System (ADS)

    Sirangelo, B.; Ferrari, E.; de Luca, D. L.

    2011-06-01

    A stochastic model based on a non-homogeneous Poisson process, characterised by a time-dependent intensity of rainfall occurrence, is employed to explain seasonal effects in daily rainfalls exceeding prefixed threshold values. Modelling was performed by partitioning the observed daily rainfall data into a calibration period for parameter estimation and a validation period for checking for changes in the occurrence process. The model has been applied to a set of rain gauges located in different geographical areas of Southern Italy. The results show that a 2-harmonic Fourier law fits the time-varying intensity of the rainfall occurrence process well, with no statistically significant evidence of changes in the validation period for different threshold values.
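
    Simulating such a process is straightforward with Lewis-Shedler thinning; the sketch below assumes a 2-harmonic Fourier intensity with made-up coefficients:

        import numpy as np

        rng = np.random.default_rng(2)

        def intensity(t):
            """2-harmonic Fourier intensity (events/day); t in days, period one year.
            Coefficients are illustrative, not fitted values from the paper."""
            w = 2 * np.pi / 365.25
            lam = 0.25 + 0.15 * np.cos(w * t - 2.5) + 0.05 * np.cos(2 * w * t - 0.5)
            return np.maximum(lam, 0.0)

        def simulate_nhpp(t_max, lam_max):
            """Lewis-Shedler thinning: draw a homogeneous Poisson process at rate
            lam_max and keep each point t with probability intensity(t)/lam_max."""
            times, t = [], 0.0
            while True:
                t += rng.exponential(1.0 / lam_max)
                if t > t_max:
                    return np.array(times)
                if rng.random() < intensity(t) / lam_max:
                    times.append(t)

        # lam_max must dominate the intensity everywhere (here max = 0.45/day).
        events = simulate_nhpp(t_max=10 * 365.25, lam_max=0.45)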

  20. A Regional Climate Model Evaluation System based on Satellite and other Observations

    NASA Astrophysics Data System (ADS)

    Lean, P.; Kim, J.; Waliser, D. E.; Hall, A. D.; Mattmann, C. A.; Granger, S. L.; Case, K.; Goodale, C.; Hart, A.; Zimdars, P.; Guan, B.; Molotch, N. P.; Kaki, S.

    2010-12-01

    Regional climate models are a fundamental tool needed for downscaling global climate simulations and projections, such as those contributing to the Coupled Model Intercomparison Projects (CMIPs) that form the basis of the IPCC Assessment Reports. The regional modeling process provides the means to accommodate higher resolution and a greater complexity of Earth System processes. Evaluation of both global and regional climate models against observations is essential to identify model weaknesses and to direct future model development efforts focused on reducing the uncertainty associated with climate projections. However, the lack of reliable observational data and the lack of formal tools are among the serious limitations to addressing these objectives. Recent satellite observations are particularly useful as they provide a wealth of information on many different aspects of the climate system, but due to their large volume and the difficulties associated with accessing and using the data, these datasets have been generally underutilized in model evaluation studies. Recognizing this problem, NASA JPL / UCLA is developing a model evaluation system to help make satellite observations, in conjunction with in-situ, assimilated, and reanalysis datasets, more readily accessible to the modeling community. The system includes a central database to store multiple datasets in a common format and codes for calculating predefined statistical metrics to assess model performance. This allows the time taken to compare model simulations with satellite observations to be reduced from weeks to days. Early results from the use of this new model evaluation system for evaluating regional climate simulations over the California/western US region will be presented.
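
    A sketch of the kind of predefined statistical metric such a system computes, assuming model and observation fields already regridded to a common grid (array names are hypothetical):

        import numpy as np

        def evaluation_metrics(model, obs):
            """Basic skill metrics for two fields on a common grid
            (e.g., time x lat x lon); NaNs mark missing observations."""
            m, o = model.ravel(), obs.ravel()
            ok = ~np.isnan(m) & ~np.isnan(o)
            m, o = m[ok], o[ok]
            return {"bias": np.mean(m - o),
                    "rmse": np.sqrt(np.mean((m - o) ** 2)),
                    "corr": np.corrcoef(m, o)[0, 1]}

        # Example with synthetic stand-in fields:
        rng = np.random.default_rng(10)
        obs = rng.standard_normal((12, 40, 60))
        model = obs + 0.3 + 0.5 * rng.standard_normal(obs.shape)
        print(evaluation_metrics(model, obs))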

  1. A Synergistic Approach to Interpreting Planetary Atmospheres

    NASA Astrophysics Data System (ADS)

    Batalha, Natasha E.

    We will soon have the technological capability to measure the atmospheric composition of temperate Earth-sized planets orbiting nearby stars. Interpreting these atmospheric signals poses a new challenge to planetary science. In contrast to jovian-like atmospheres, whose bulk compositions consist of hydrogen and helium, terrestrial planet atmospheres are likely composed of high mean molecular weight secondary atmospheres, which have gone through a high degree of evolution. For example, present-day Mars has a frozen surface with a thin, tenuous atmosphere, but 4 billion years ago it may have been warmed by a thick greenhouse atmosphere. Several processes contribute to a planet's atmospheric evolution: stellar evolution, geological processes, atmospheric escape, biology, etc. Each of these individual processes affects the planetary system as a whole, and therefore they all must be considered in the modeling of terrestrial planets. In order to demonstrate the intricacies in modeling terrestrial planets, I use early Mars as a case study. I leverage a combination of one-dimensional climate, photochemical and energy balance models in order to create one self-consistent model that closely matches currently available climate data. One-dimensional models can address several processes: the influence of greenhouse gases on heating, the effect of the planet's geological processes (i.e. volcanoes and the carbonate-silicate cycle) on the atmosphere, the effect of rainfall on atmospheric composition and the stellar irradiance. After demonstrating the number of assumptions required to build a model, I look towards what exactly we can learn from remote observations of temperate Earths and super-Earths. However, unlike in-situ observations from our own solar system, remote sensing techniques need to be developed and understood in order to accurately characterize exo-atmospheres. I describe the models used to create synthetic transit transmission observations, which include models of transit spectroscopy and instrumental noise. Using these, I lay the framework for an information-content-based approach to optimize our observations and maximize the retrievable information from exo-atmospheres. First I test the method on observing strategies for the well-studied, low mean molecular weight atmospheres of warm Neptunes and hot Jupiters. Upon verifying the methodology, I finally address optimal observing strategies for temperate, high mean molecular weight atmospheres (Earths/super-Earths).

  2. The Chaotic Light Curves of Accreting Black Holes

    NASA Technical Reports Server (NTRS)

    Kazanas, Demosthenes

    2007-01-01

    We present model light curves for accreting Black Hole Candidates (BHC) based on a recently developed model of these sources. According to this model, the observed light curves and aperiodic variability of BHC are due to a series of soft photon injections at random (Poisson) intervals and the stochastic nature of the Comptonization process in converting these soft photons to the observed high energy radiation. The additional assumption of our model is that the Comptonization process takes place in an extended but non-uniform hot plasma corona surrounding the compact object. We compute the corresponding Power Spectral Densities (PSD), autocorrelation functions, time skewness of the light curves and time lags between the light curves of the sources at different photon energies, and compare our results to observations. Our model reproduces the observed light curves well, in that it provides good fits to their overall morphology (as manifested by the autocorrelation and time skewness) and also to their PSDs and time lags, by producing most of the variability power at time scales of a few seconds or longer, while at the same time allowing for shots of a few msec in duration, in accordance with observations. We suggest that refinement of this type of model, along with spectral and phase-lag information, can be used to probe the structure of this class of high energy sources.
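
    The shot-noise picture lends itself to a short simulation: Poisson-timed injections convolved with an extended (here exponential) Comptonization response, followed by a PSD. All rates and timescales below are illustrative, not the paper's values:

        import numpy as np

        rng = np.random.default_rng(3)

        dt, t_max = 0.001, 100.0            # 1 ms bins, 100 s light curve
        n = int(t_max / dt)

        # Soft-photon injections at Poisson-distributed times, random amplitudes.
        shot_rate = 5.0                      # shots per second (illustrative)
        shot_times = np.cumsum(rng.exponential(1.0 / shot_rate, size=2000))
        shot_times = shot_times[shot_times < t_max]
        impulses = np.zeros(n)
        idx = (shot_times / dt).astype(int)
        np.add.at(impulses, idx, rng.exponential(1.0, size=idx.size))

        # Each injection smeared by the extended corona's response, idealized
        # here as a one-sided exponential with a ~1 s decay.
        tau = 1.0
        kernel = np.exp(-np.arange(0, 10 * tau, dt) / tau)
        lc = np.convolve(impulses, kernel)[:n]

        # Power spectral density: power concentrated at timescales >= tau,
        # while the msec shots set the fine structure.
        psd = np.abs(np.fft.rfft(lc - lc.mean())) ** 2
        freqs = np.fft.rfftfreq(n, dt)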

  3. Hot Deformation Behavior and Intrinsic Workability of Carbon Nanotube-Aluminum Reinforced ZA27 Composites

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Geng, Cong; Zhu, Yunke; Peng, Jinfeng; Xu, Junrui

    2017-04-01

    Using a controlled thermal simulator system, hybrid carbon nanotube-aluminum reinforced ZA27 composites were subjected to hot compression testing in the temperature range of 473-523 K at strain rates of 0.01-10 s-1. Based on the experimental results, a flow stress model was established using a constitutive equation coupled with strain to describe the strain softening arising from dynamic recrystallization. The intrinsic workability was further investigated by constructing three-dimensional (3D) processing maps aided by optical observations of microstructures. The 3D processing maps were constructed based on the dynamic materials model to delineate variations in the efficiency of power dissipation and flow instability domains. The instability domains exhibited adiabatic shear bands and flow localization, which must be avoided during hot processing. The recommended domain is predicted to be within the temperature range 550-590 K and strain rate range 0.01-0.35 s-1. In this state, the main softening mechanism is dynamic recrystallization. The results from the processing maps agree well with the microstructure observations.
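
    Processing maps of this kind are built around the dynamic materials model's efficiency of power dissipation, eta = 2m/(m+1), where m is the strain-rate sensitivity of the flow stress. A minimal sketch with invented flow-stress values, not the paper's data:

        import numpy as np

        # Flow stress (MPa) at one temperature and strain over a grid of
        # strain rates (s^-1); numbers are illustrative placeholders.
        strain_rates = np.array([0.01, 0.1, 1.0, 10.0])
        flow_stress = np.array([38.0, 55.0, 80.0, 118.0])

        # Strain-rate sensitivity m = d(ln sigma)/d(ln strain_rate),
        # estimated here by a linear fit in log-log space.
        m = np.polyfit(np.log(strain_rates), np.log(flow_stress), 1)[0]

        # Dynamic materials model: efficiency of power dissipation.
        eta = 2 * m / (m + 1)
        print(f"m = {m:.3f}, efficiency = {eta:.2%}")
        # Repeating this over a (temperature, strain rate) grid and contouring
        # eta yields the processing map; high-eta domains favor hot working.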

  4. A new physics-based modeling approach for tsunami-ionosphere coupling

    NASA Astrophysics Data System (ADS)

    Meng, X.; Komjathy, A.; Verkhoglyadova, O. P.; Yang, Y.-M.; Deng, Y.; Mannucci, A. J.

    2015-06-01

    Tsunamis can generate gravity waves propagating upward through the atmosphere, inducing total electron content (TEC) disturbances in the ionosphere. To capture this process, we have implemented tsunami-generated gravity waves into the Global Ionosphere-Thermosphere Model (GITM) to construct a three-dimensional physics-based model, WP (Wave Perturbation)-GITM. WP-GITM takes tsunami wave properties, including the wave height, wave period, wavelength, and propagation direction, as inputs and time-dependently characterizes the responses of the upper atmosphere between 100 km and 600 km altitudes. We apply WP-GITM to simulate the ionosphere above the West Coast of the United States around the time when the tsunami associated with the March 2011 Tohoku-Oki earthquake arrived. The simulated TEC perturbations agree with Global Positioning System observations reasonably well. For the first time, a fully self-consistent and physics-based model has reproduced the GPS-observed traveling ionospheric signatures of an actual tsunami event.

  5. Rain-rate data base development and rain-rate climate analysis

    NASA Technical Reports Server (NTRS)

    Crane, Robert K.

    1993-01-01

    The single-year rain-rate distribution data available within the archives of Consultative Committee for International Radio (CCIR) Study Group 5 were compiled into a data base for use in rain-rate climate modeling and for the preparation of predictions of attenuation statistics. The four-year set of tip-time sequences provided by J. Goldhirsh for locations near Wallops Island was processed to compile monthly and annual distributions of rain rate and of event durations for intervals above and below preset thresholds. A four-year data set of tropical rain-rate tip-time sequences was acquired from the NASA TRMM program for 30 gauges near Darwin, Australia. These were also processed for inclusion in the CCIR data base and the expanded data base for monthly observations at the University of Oklahoma. The empirical rain-rate distribution functions (EDFs) accepted for inclusion in the CCIR data base were used to estimate parameters for several rain-rate distribution models: the lognormal model, the Crane two-component model, and the three-parameter model proposed by Moupfuma. The intent of this segment of the study is to obtain a limited set of parameters that can be mapped globally for use in rain attenuation predictions. If the form of the distribution can be established, then perhaps available climatological data can be used to estimate the parameters rather than requiring years of rain-rate observations to set them. The two-component model provided the best fit to the Wallops Island data, but the Moupfuma model provided the best fit to the Darwin data.
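
    Fitting one of these candidate distributions is a short exercise with scipy; the sketch below fits a lognormal to synthetic rain rates standing in for gauge data and evaluates an exceedance level of the kind used in attenuation prediction:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)

        # Synthetic one-minute rain rates (mm/h) standing in for gauge-derived data.
        rates = rng.lognormal(mean=1.0, sigma=1.2, size=5000)

        # Fit the lognormal model; loc is fixed at 0 since rain rates are non-negative.
        shape, loc, scale = stats.lognorm.fit(rates, floc=0)

        # Rain rate exceeded 0.01% of the time, a typical attenuation-design quantity.
        r_0001 = stats.lognorm.ppf(1 - 1e-4, shape, loc, scale)
        print(f"sigma = {shape:.2f}, median = {scale:.2f} mm/h, R(0.01%) = {r_0001:.1f} mm/h")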

  6. Inferring Instantaneous, Multivariate and Nonlinear Sensitivities for the Analysis of Feedback Processes in a Dynamical System: Lorenz Model Case Study

    NASA Technical Reports Server (NTRS)

    Aires, Filipe; Rossow, William B.; Hansen, James E. (Technical Monitor)

    2001-01-01

    A new approach is presented for the analysis of feedback processes in a nonlinear dynamical system by observing its variations. The new methodology consists of statistical estimates of the sensitivities between all pairs of variables in the system, based on a neural network model of the dynamical system. The model can then be used to estimate the instantaneous, multivariate and nonlinear sensitivities, which are shown to be essential for the analysis of the feedback processes involved in the dynamical system. The method is described and tested on synthetic data from the low-order Lorenz circulation model, where the correct sensitivities can be evaluated analytically.
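
    For the Lorenz model, the "correct sensitivities" are the entries of the analytic Jacobian evaluated along a trajectory. A sketch of that benchmark computation (the neural-network estimation itself is not reproduced here):

        import numpy as np

        # Lorenz-63 system with the standard parameters.
        sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0

        def f(x):
            return np.array([sigma * (x[1] - x[0]),
                             x[0] * (rho - x[2]) - x[1],
                             x[0] * x[1] - beta * x[2]])

        def jacobian(x):
            """Instantaneous sensitivities d(dx_i/dt)/dx_j at state x."""
            return np.array([[-sigma, sigma, 0.0],
                             [rho - x[2], -1.0, -x[0]],
                             [x[1], x[0], -beta]])

        # Integrate with fourth-order Runge-Kutta, recording the Jacobian: the
        # sensitivities are state-dependent, i.e., genuinely nonlinear and multivariate.
        dt, n = 0.01, 5000
        x = np.array([1.0, 1.0, 1.0])
        jacs = []
        for _ in range(n):
            k1 = f(x); k2 = f(x + dt / 2 * k1); k3 = f(x + dt / 2 * k2); k4 = f(x + dt * k3)
            x = x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
            jacs.append(jacobian(x))

        mean_sens = np.mean(jacs, axis=0)  # the time average hides the variability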

  7. Ensuring congruency in multiscale modeling: towards linking agent based and continuum biomechanical models of arterial adaptation.

    PubMed

    Hayenga, Heather N; Thorne, Bryan C; Peirce, Shayn M; Humphrey, Jay D

    2011-11-01

    There is a need to develop multiscale models of vascular adaptations to understand tissue-level manifestations of cellular level mechanisms. Continuum-based biomechanical models are well suited for relating blood pressures and flows to stress-mediated changes in geometry and properties, but less so for describing underlying mechanobiological processes. Discrete stochastic agent-based models are well suited for representing biological processes at a cellular level, but not for describing tissue-level mechanical changes. We present here a conceptually new approach to facilitate the coupling of continuum and agent-based models. Because of ubiquitous limitations in both the tissue- and cell-level data from which one derives constitutive relations for continuum models and rule-sets for agent-based models, we suggest that model verification should enforce congruency across scales. That is, multiscale model parameters initially determined from data sets representing different scales should be refined, when possible, to ensure that common outputs are consistent. Potential advantages of this approach are illustrated by comparing simulated aortic responses to a sustained increase in blood pressure predicted by continuum and agent-based models both before and after instituting a genetic algorithm to refine 16 objectively bounded model parameters. We show that congruency-based parameter refinement not only yielded increased consistency across scales, it also yielded predictions that are closer to in vivo observations.

  8. Four decades of modeling methane cycling in terrestrial ecosystems: Where we are heading?

    NASA Astrophysics Data System (ADS)

    Xu, X.; Yuan, F.; Hanson, P. J.; Wullschleger, S. D.; Thornton, P. E.; Tian, H.; Riley, W. J.; Song, X.; Graham, D. E.; Song, C.

    2015-12-01

    Modeling approaches are widely used to quantify the methane (CH4) budget, investigate spatial and temporal variabilities, and understand the mechanistic processes and environmental controls on CH4 fluxes across spatial and temporal scales. Moreover, CH4 models are an important tool for integrating CH4 data from multiple sources, such as laboratory-based incubation and molecular analysis, field observational experiments, remote sensing, and aircraft-based measurements, across a variety of terrestrial ecosystems. We reviewed 39 terrestrial CH4 models to characterize their strengths and weaknesses and to design a roadmap for future model improvement and application. We found that: (1) the focus of CH4 models has shifted from theoretical studies to site- and regional-level applications over the past four decades, expressed as a dramatic increase in CH4 model development for regional budget quantification; (2) large discrepancies exist among models in terms of representing CH4 processes and their environmental controls; (3) significant data-model and model-model mismatches are partially attributed to different representations of wetland characterization and inundation dynamics. Three efforts deserve special attention in the future improvement and application of fully mechanistic CH4 models: (1) CH4 models should be improved to represent the mechanisms underlying land-atmosphere CH4 exchange, with emphasis on improving and validating individual CH4 processes over depth and horizontal space; (2) models should be developed that are capable of simulating CH4 fluxes across space and time (particularly hot moments and hot spots); (3) efforts should be invested to develop model benchmarking frameworks that can easily be used for model improvement, evaluation, and integration with data from molecular to global scales. A newly developed microbial functional group-based CH4 model (CLM-Microbe) was further used to demonstrate the features of mechanistic representation and integration with multiple sources of observational data.

  9. Hierarchical spatial models of abundance and occurrence from imperfect survey data

    USGS Publications Warehouse

    Royle, J. Andrew; Kery, M.; Gautier, R.; Schmid, Hans

    2007-01-01

    Many estimation and inference problems arising from large-scale animal surveys are focused on developing an understanding of patterns in abundance or occurrence of a species based on spatially referenced count data. One fundamental challenge, then, is that it is generally not feasible to completely enumerate ('census') all individuals present in each sample unit. This observation bias may consist of several components, including spatial coverage bias (not all individuals in the population are exposed to sampling) and detection bias (exposed individuals may go undetected). Thus, observations are biased for the state variable (abundance, occupancy) that is the object of inference. Moreover, data are often sparse for most observation locations, requiring consideration of methods for spatially aggregating or otherwise combining sparse data among sample units. The development of methods that unify spatial statistical models with models accommodating non-detection is necessary to resolve important spatial inference problems based on animal survey data. In this paper, we develop a novel hierarchical spatial model for estimation of abundance and occurrence from survey data wherein detection is imperfect. Our application is focused on spatial inference problems in the Swiss Survey of Common Breeding Birds. The observation model for the survey data is specified conditional on the unknown quadrat population size, N(s). We augment the observation model with a spatial process model for N(s), describing the spatial variation in abundance of the species. The model includes explicit sources of variation in habitat structure (forest, elevation) and latent variation in the form of a correlated spatial process. This provides a model-based framework for combining the spatially referenced samples while at the same time yielding a unified treatment of estimation problems involving both abundance and occurrence. We provide a Bayesian framework for analysis and prediction based on the integrated likelihood, and we use the model to obtain estimates of abundance and occurrence maps for the European Jay (Garrulus glandarius), a widespread, elusive forest bird. The naive national abundance estimate ignoring imperfect detection and incomplete quadrat coverage was 77 766 territories. Accounting for imperfect detection added approximately 18 000 territories, and adjusting for coverage bias added another 131 000 territories to yield a fully corrected estimate of the national total of about 227 000 territories. This is approximately three times as high as previous estimates that assume every territory is detected in each quadrat.
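
    The observation model described here is easy to simulate, which also shows why naive counts understate abundance. A sketch with illustrative abundance and detection values (not those estimated for the Swiss survey):

        import numpy as np

        rng = np.random.default_rng(5)

        n_sites, n_visits = 200, 3
        lam = 4.0   # mean abundance per quadrat (illustrative)
        p = 0.4     # per-individual detection probability (illustrative)

        # Latent state: true abundance N(s) at each site.
        N = rng.poisson(lam, size=n_sites)

        # Observation model: repeated counts with imperfect detection.
        y = rng.binomial(N[:, None], p, size=(n_sites, n_visits))

        print("true total:", N.sum())
        print("naive estimate (max count per site):", y.max(axis=1).sum())
        # The naive estimate is biased low; hierarchical models of the kind
        # developed in the paper estimate N and p jointly from repeated counts.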

  10. Data-based mechanistic modeling of dissolved organic carbon load through storms using continuous 15-minute resolution observations within UK upland watersheds

    NASA Astrophysics Data System (ADS)

    Jones, T.; Chappell, N. A.

    2013-12-01

    Few watershed modeling studies have addressed DOC dynamics through storm hydrographs (notable exceptions include Boyer et al., 1997 Hydrol Process; Jutras et al., 2011 Ecol Model; Xu et al., 2012 Water Resour Res). In part this has been a consequence of an incomplete understanding of the biogeochemical processes leading to DOC export to streams (Neff & Asner, 2001, Ecosystems) & an insufficient frequency of DOC monitoring to capture sometimes complex time-varying relationships between DOC & storm hydrographs (Kirchner et al., 2004, Hydrol Process). We present the results of a new & ongoing UK study that integrates two components - 1/ New observations of DOC concentrations (& derived load) continuously monitored at 15 minute intervals through multiple seasons for replicated watersheds; & 2/ A dynamic modeling technique that is able to quantify storage-decay effects, plus hysteretic, nonlinear, lagged & non-stationary relationships between DOC & controlling variables (including rainfall, streamflow, temperature & specific biogeochemical variables e.g., pH, nitrate). DOC concentration is being monitored continuously using the latest generation of UV spectrophotometers (i.e. S::CAN spectro::lysers) with in situ calibrations to laboratory analyzed DOC. The controlling variables are recorded simultaneously at the same stream stations. The watersheds selected for study are among the most intensively studied basins in the UK uplands, namely the Plynlimon & Llyn Brianne experimental basins. All contain areas of organic soils, with three having improved grasslands & three conifer afforested. The dynamic response characteristics (DRCs) that describe detailed DOC behaviour through sequences of storms are simulated using the latest identification routines for continuous time transfer function (CT-TF) models within the Matlab-based CAPTAIN toolbox (some incorporating nonlinear components). To our knowledge this is the first application of CT-TFs to modelling DOC processes. Furthermore this allows a data-based mechanistic (DBM) modelling philosophy to be followed where no assumptions about processes are defined a priori (given that dominant processes are often not known before analysis) & where the information contained in the time-series is used to identify multiple structures of models that are statistically robust. Within the final stage of DBM, biogeochemical & hydrological processes are interpreted from those models that are observable from the available stream time-series. We show that this approach can simulate the key features of DOC dynamics within & between storms & that some of the resultant response characteristics change with varying DOC processes in different seasons. Through the use of MISO (multiple-input single-output) models we demonstrate the relative importance of different variables (e.g., rainfall, temperature) in controlling DOC responses. The contrasting behaviour of the six experimental catchments is also reflected in differing response characteristics. These characteristics are shown to contribute to understanding of basin-integrated DOC export processes & to the ecosystem service impacts of DOC & color on commercial water treatment within the surrounding water supply basins.
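
    A discrete-time analogue of the first-order transfer functions identified in such DBM studies can be sketched in a few lines; the recession, gain and delay values below are illustrative placeholders, not identified DRCs:

        import numpy as np

        rng = np.random.default_rng(6)

        n = 24 * 4 * 30                      # one month of 15-minute steps
        rain = rng.gamma(0.1, 2.0, size=n)   # synthetic 15-min rainfall input

        # First-order transfer function with a pure time delay:
        #   doc[t] = a * doc[t-1] + b * rain[t - delay]
        # a sets the recession (storage-decay) rate, b the gain, delay the lag.
        a, b, delay = 0.97, 0.5, 8

        doc = np.zeros(n)
        for t in range(1, n):
            u = rain[t - delay] if t >= delay else 0.0
            doc[t] = a * doc[t - 1] + b * u

        # Time constant of this model in hours (15-minute sampling interval):
        tau_hours = -0.25 / np.log(a)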

  11. Exploring the Effect of Embedded Scaffolding Within Curricular Tasks on Third-Grade Students' Model-Based Explanations about Hydrologic Cycling

    NASA Astrophysics Data System (ADS)

    Zangori, Laura; Forbes, Cory T.; Schwarz, Christina V.

    2015-10-01

    Opportunities to generate model-based explanations are crucial for elementary students, yet are rarely foregrounded in elementary science learning environments despite evidence that early learners can reason from models when provided with scaffolding. We used a quasi-experimental research design to investigate the comparative impact of a scaffold test condition consisting of embedded physical scaffolds within a curricular modeling task on third-grade (age 8-9) students' formulation of model-based explanations for the water cycle. This condition was contrasted to the control condition where third-grade students used a curricular modeling task with no embedded physical scaffolds. Students from each condition (n = 60 scaffolded; n = 56 unscaffolded) generated models of the water cycle before and after completion of a 10-week water unit. Results from quantitative analyses suggest that students in the scaffolded condition represented and linked more subsurface water process sequences with surface water process sequences than did students in the unscaffolded condition. However, results of qualitative analyses indicate that students in the scaffolded condition were less likely to build upon these process sequences to generate model-based explanations and experienced difficulties understanding their models as abstracted representations rather than recreations of real-world phenomena. We conclude that embedded curricular scaffolds may support students to consider non-observable components of the water cycle but, alone, may be insufficient for generation of model-based explanations about subsurface water movement.

  12. Modelling rapid subsurface flow at the hillslope scale with explicit representation of preferential flow paths

    NASA Astrophysics Data System (ADS)

    Wienhöfer, J.; Zehe, E.

    2012-04-01

    Rapid lateral flow processes via preferential flow paths are widely accepted to play a key role for rainfall-runoff response in temperate humid headwater catchments. A quantitative description of these processes, however, is still a major challenge in hydrological research, not least because detailed information about the architecture of subsurface flow paths are often impossible to obtain at a natural site without disturbing the system. Our study combines physically based modelling and field observations with the objective to better understand how flow network configurations influence the hydrological response of hillslopes. The system under investigation is a forested hillslope with a small perennial spring at the study area Heumöser, a headwater catchment of the Dornbirnerach in Vorarlberg, Austria. In-situ points measurements of field-saturated hydraulic conductivity and dye staining experiments at the plot scale revealed that shrinkage cracks and biogenic macropores function as preferential flow paths in the fine-textured soils of the study area, and these preferential flow structures were active in fast subsurface transport of artificial tracers at the hillslope scale. For modelling of water and solute transport, we followed the approach of implementing preferential flow paths as spatially explicit structures of high hydraulic conductivity and low retention within the 2D process-based model CATFLOW. Many potential configurations of the flow path network were generated as realisations of a stochastic process informed by macropore characteristics derived from the plot scale observations. Together with different realisations of soil hydraulic parameters, this approach results in a Monte Carlo study. The model setups were used for short-term simulation of a sprinkling and tracer experiment, and the results were evaluated against measured discharges and tracer breakthrough curves. Although both criteria were taken for model evaluation, still several model setups produced acceptable matches to the observed behaviour. These setups were selected for long-term simulation, the results of which were compared against water level measurements at two piezometers along the hillslope and the integral discharge response of the spring to reject some non-behavioural model setups and further reduce equifinality. The results of this study indicate that process-based modelling can provide a means to distinguish preferential flow networks on the hillslope scale when complementary measurements to constrain the range of behavioural model setups are available. These models can further be employed as a virtual reality to investigate the characteristics of flow path architectures and explore effective parameterisations for larger scale applications.

  13. Biophysical modelling of intra-ring variations in tracheid features and wood density of Pinus pinaster trees exposed to seasonal droughts

    Treesearch

    Sarah Wilkinson; Jerome Ogee; Jean-Christophe Domec; Mark Rayment; Lisa Wingate

    2015-01-01

    Process-based models that link seasonally varying environmental signals to morphological features within tree rings are essential tools to predict tree growth response and commercially important wood quality traits under future climate scenarios. This study evaluated model portrayal of radial growth and wood anatomy observations within a mature maritime pine (Pinus...

  14. A Model-Based Cluster Analysis of Maternal Emotion Regulation and Relations to Parenting Behavior.

    PubMed

    Shaffer, Anne; Whitehead, Monica; Davis, Molly; Morelen, Diana; Suveg, Cynthia

    2017-10-15

    In a diverse community sample of mothers (N = 108) and their preschool-aged children (mean age = 3.50 years), this study conducted person-oriented analyses of maternal emotion regulation (ER) based on a multimethod assessment incorporating physiological, observational, and self-report indicators. A model-based cluster analysis was applied to five indicators of maternal ER: maternal self-report, observed negative affect in a parent-child interaction, baseline respiratory sinus arrhythmia (RSA), and RSA suppression across two laboratory tasks. Model-based cluster analyses revealed four maternal ER profiles, including a group of mothers with average ER functioning, characterized by socioeconomic advantage and more positive parenting behavior. A dysregulated cluster demonstrated the greatest challenges with parenting and dyadic interactions. Two clusters of intermediate dysregulation were also identified. Implications for assessment and applications to parenting interventions are discussed. © 2017 Family Process Institute.
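
    Model-based cluster analysis of this kind typically means fitting finite Gaussian mixtures and selecting the number of profiles by an information criterion. A sketch using scikit-learn, with random data standing in for the five ER indicators:

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(7)

        # Hypothetical stand-in for the five standardized ER indicators
        # (self-report, observed negative affect, baseline RSA, two RSA
        # suppression scores) for 108 mothers.
        X = rng.standard_normal((108, 5))

        # Fit candidate models with 1-6 profiles and keep the best by BIC,
        # the usual selection rule in model-based clustering.
        models = [GaussianMixture(k, covariance_type="diag", random_state=0).fit(X)
                  for k in range(1, 7)]
        best = min(models, key=lambda m: m.bic(X))
        profiles = best.predict(X)         # hard assignment of each mother to a profile
        posterior = best.predict_proba(X)  # classification uncertainty per mother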

  15. Observations and modeling of methane flux in northern wetlands

    NASA Astrophysics Data System (ADS)

    Futakuchi, Y.; Ueyama, M.; Matsumoto, Y.; Yazaki, T.; Hirano, T.; Kominami, Y.; Harazono, Y.; Igarashi, Y.

    2016-12-01

    Methane (CH4) budgets in northern wetlands vary greatly, with high spatio-temporal heterogeneity. Owing to limited available data, however, it is difficult to constrain the CH4 emission from northern wetlands. In this context, we continuously measured CH4 fluxes at two northern wetlands. Measured fluxes were used to constrain a new model that empirically partitions net CH4 fluxes into the processes of production, oxidation, and transport via ebullition, diffusion, and plants, based on an optimization technique. This study reveals the important processes related to the seasonal variations in CH4 emission through continuous observations and inverse model analysis. The measurements have been conducted at a Sphagnum-dominated cool temperate bog (BBY) since April 2015 using the open-path eddy covariance method, and at a sub-arctic forested bog on permafrost at the University of Alaska Fairbanks (UAF) since May 2016 using three automated chambers coupled to a laser-based gas analyzer (FGGA-24r-EP, Los Gatos Research Inc., USA). At BBY, daily CH4 fluxes ranged from 1.9 nmol m-2 s-1 in early spring to 97.9 nmol m-2 s-1 in mid-summer. The growing-season total CH4 flux was 13 g m-2 yr-1 in 2015. In contrast, CH4 flux at the UAF site was small (0.2 to 1.0 nmol m-2 s-1) and hardly increased after the start of the observations. This difference could be caused by differences in climate and soil conditions: mean air and soil temperatures and the presence of permafrost. At BBY, the seasonal variation of CH4 emission was mostly explained by soil temperature, suggesting that production was the dominant controlling process. In mid-summer, when soil temperature was high, decreases in atmospheric pressure and increases in vegetation greenness stimulated CH4 emission, probably through plant-mediated transport and bubble formation, suggesting that the transport processes were important. Based on preliminary results of the model optimization at the BBY site, CH4 fluxes were strongly influenced by the processes associated with production, ebullition, and plant-mediated transport rather than by oxidation and diffusion. In this presentation, we will show that the data-model fusion we developed is an effective tool for evaluating CH4 fluxes and their controlling processes at northern wetlands.
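
    A minimal sketch of the empirical partitioning idea: production as a Q10 function of soil temperature, a fixed oxidized fraction, and the remainder split across transport pathways. All parameter values are placeholders, not the study's optimized ones:

        import numpy as np

        def ch4_flux(soil_temp_c, p_ref=20.0, q10=3.0, t_ref=10.0,
                     f_oxidized=0.4, f_plant=0.5, f_ebullition=0.3, f_diffusion=0.2):
            """Empirical CH4 partition sketch (nmol m-2 s-1). Production follows
            a Q10 temperature response; a fixed fraction is oxidized; the rest is
            emitted via plant-mediated transport, ebullition and diffusion.
            All parameters are illustrative placeholders."""
            production = p_ref * q10 ** ((soil_temp_c - t_ref) / 10.0)
            emitted = production * (1.0 - f_oxidized)
            return {"total": emitted,
                    "plant": emitted * f_plant,
                    "ebullition": emitted * f_ebullition,
                    "diffusion": emitted * f_diffusion}

        print(ch4_flux(soil_temp_c=15.0))
        # Fitting such a model to measured fluxes (the optimization step in the
        # abstract) amounts to estimating p_ref, q10 and the pathway fractions.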

  16. Century Scale Evaporation Trend: An Observational Study

    NASA Technical Reports Server (NTRS)

    Bounoui, Lahouari

    2012-01-01

    Several climate models with different complexity indicate that under increased CO2 forcing, runoff would increase faster than precipitation overland. However, observations over large U.S. watersheds indicate otherwise. This inconsistency between models and observations suggests that there may be important feedbacks between climate and land surface unaccounted for in the present generation of models. We have analyzed century-scale observed annual runoff and precipitation time-series over several United States Geological Survey hydrological units covering large forested regions of the Eastern United States not affected by irrigation. Both time-series exhibit a positive long-term trend; however, in contrast to model results, these historic data records show that precipitation increases at roughly double the rate of runoff. We considered several hydrological processes to close the water budget and found that none of these processes acting alone could account for the total water excess generated by the observed difference between precipitation and runoff. We conclude that evaporation has increased over the period of observations, and show that the increasing trend in precipitation minus runoff is correlated with the observed increase in vegetation density based on the longest available global satellite record. The increase in vegetation density has important implications for climate; it slows, but does not alleviate, the projected warming associated with greenhouse gas emissions.

  17. Modeling the Hydrologic Processes of a Permeable Pavement ...

    EPA Pesticide Factsheets

    A permeable pavement system can capture stormwater to reduce runoff volume and flow rate, improve onsite groundwater recharge, and enhance pollutant controls within the site. A new unit process model for evaluating the hydrologic performance of a permeable pavement system has been developed in this study. The developed model can continuously simulate infiltration through the permeable pavement surface, exfiltration from the storage to the surrounding in situ soils, and clogging impacts on infiltration/exfiltration capacity at the pavement surface and the bottom of the subsurface storage unit. The exfiltration modeling component simulates vertical and horizontal exfiltration independently, based on Darcy's formula with the Green-Ampt approximation. The developed model can be parameterized with physically based modeling parameters, such as hydraulic conductivity, Manning's friction flow parameters, saturated and field-capacity volumetric water contents, porosity, density, etc. The developed model was calibrated using high-frequency observed data. The modeled water depths match the observed values well (R2 = 0.90). The modeling results show that horizontal exfiltration through the side walls of the subsurface storage unit is a prevailing factor in determining the hydrologic performance of the system, especially where the storage unit has a long, narrow shape or a high risk of bottom compaction and clogging. This paper presents unit
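
    The Green-Ampt approximation mentioned here gives the infiltration (or exfiltration) rate as f = K(1 + psi * delta_theta / F), with F the cumulative infiltrated depth. A minimal time-stepping sketch with illustrative soil parameters:

        import numpy as np

        # Green-Ampt parameters (illustrative values for a sandy loam).
        K = 10.0         # saturated hydraulic conductivity, mm/h
        psi = 110.0      # wetting-front suction head, mm
        d_theta = 0.25   # porosity minus initial volumetric water content

        dt = 0.01        # time step, h
        F = 1e-3         # cumulative infiltration, mm (small seed to avoid 1/0)
        t_hist, f_hist = [], []
        for step in range(int(4 / dt)):          # four hours of ponded infiltration
            f = K * (1.0 + psi * d_theta / F)    # Green-Ampt infiltration rate, mm/h
            F += f * dt
            t_hist.append(step * dt)
            f_hist.append(f)
        # The rate starts high and decays toward K as the wetting front advances;
        # the same relation drives the exfiltration terms in the model above.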

  18. Conceptual modelling to predict unobserved system states - the case of groundwater flooding in the UK Chalk

    NASA Astrophysics Data System (ADS)

    Hartmann, A. J.; Ireson, A. M.

    2017-12-01

    Chalk aquifers represent an important source of drinking water in the UK. Due to their fractured-porous structure, Chalk aquifers are characterized by highly dynamic groundwater fluctuations that enhance the risk of groundwater flooding. This risk can be assessed with physically based groundwater models, but reliable results require a priori information about the distribution of hydraulic conductivities and porosities, which is often not available. For that reason, conceptual simulation models are often used to predict groundwater behaviour. They commonly require calibration against historical groundwater observations, and consequently their predictive performance may degrade significantly for system states that did not occur within the calibration time series. In this study, we calibrate a conceptual model to observed groundwater levels at several locations within a Chalk system in Southern England. During the calibration period, no groundwater flooding occurred. We then apply our model to predict the groundwater dynamics of the system over a period that includes a groundwater flooding event. We show that the calibrated model provides reasonable predictions before and after the flooding event but overestimates groundwater levels during the event. After modifying the model structure to include topographic information, the model is capable of predicting the groundwater flooding event even though no flooding occurred in the calibration period. Although straightforward, our approach shows how conceptual process-based models can be applied to predict system states and dynamics that did not occur in the calibration period. We believe such an approach can be transferred to similar cases, especially to regions where rainfall intensities are expected to trigger processes and system states that have not yet been observed.
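
    A toy version of the structural modification described above can be written as a single linear groundwater store that produces an extra (flood) outflow once its simulated level exceeds a topography-derived threshold; all names and parameter values here are invented for illustration, not the study's model:

      def step(storage, recharge, k=0.05, z_surface=50.0, k_flood=0.5):
          """One step of a linear reservoir with topographic overflow."""
          baseflow = k * storage
          flood = k_flood * max(storage - z_surface, 0.0)   # active only above ground level
          storage += recharge - baseflow - flood
          return storage, baseflow + flood

      storage, discharge = 30.0, []
      for recharge in [0.5] * 100 + [6.0] * 30 + [0.5] * 100:   # a prolonged wet spell
          storage, q = step(storage, recharge)
          discharge.append(q)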

  19. Accuracy of latent-variable estimation in Bayesian semi-supervised learning.

    PubMed

    Yamazaki, Keisuke

    2015-09-01

    Hierarchical probabilistic models, such as Gaussian mixture models, are widely used for unsupervised learning tasks. These models consist of observable and latent variables, which represent the observable data and the underlying data-generation process, respectively. Unsupervised learning tasks, such as cluster analysis, are regarded as estimations of latent variables based on the observable ones. The estimation of latent variables in semi-supervised learning, where some labels are observed, will be more precise than in unsupervised learning, and one concern is to clarify the effect of the labeled data. However, there has not been sufficient theoretical analysis of the accuracy of latent-variable estimation. In a previous study, a distribution-based error function was formulated, and its asymptotic form was calculated for unsupervised learning with generative models. It has been shown that, for the estimation of latent variables, the Bayes method is more accurate than the maximum-likelihood method. The present paper reveals the asymptotic forms of the error function in Bayesian semi-supervised learning for both discriminative and generative models. The results show that the generative model, which uses all of the given data, performs better when the model is well specified. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. A Microphysics-Based Black Carbon Aging Scheme in a Global Chemical Transport Model: Constraints from HIPPO Observations

    NASA Astrophysics Data System (ADS)

    He, C.; Li, Q.; Liou, K. N.; Qi, L.; Tao, S.; Schwarz, J. P.

    2015-12-01

    Black carbon (BC) aging significantly affects its distribution and radiative properties and is an important source of uncertainty in estimating BC climatic effects. Global models often use a fixed aging timescale for the hydrophobic-to-hydrophilic BC conversion, or a simple parameterization. We have developed and implemented a microphysics-based BC aging scheme that accounts for condensation and coagulation processes in a global 3-D chemical transport model (GEOS-Chem). Model results are systematically evaluated against the HIPPO observations across the Pacific (67°S-85°N) during 2009-2011. We find that the microphysics-based scheme substantially increases the BC aging rate over source regions compared with the fixed aging timescale (1.2 days), due to the condensation of sulfate and secondary organic aerosols (SOA) and coagulation with pre-existing hydrophilic aerosols. However, the microphysics-based scheme slows down BC aging over polar regions, where condensation and coagulation are rather weak. We find that BC aging is primarily dominated by condensation, which accounts for ~75% of global BC aging, while coagulation is important over source regions where large amounts of pre-existing aerosols are available. Model results show that the fixed aging scheme tends to overestimate BC concentrations over the Pacific throughout the troposphere by a factor of 2-5 at different latitudes, while the microphysics-based scheme reduces the discrepancies by up to a factor of 2, particularly in the middle troposphere. The microphysics-based scheme developed in this work decreases BC column total concentrations at all latitudes and seasons, especially over tropical regions, leading to a large improvement in model simulations. We are presently analyzing the impact of this scheme on the global BC budget and lifetime, quantifying its uncertainty associated with key parameters, and investigating the effects of heterogeneous chemical oxidation on BC aging.
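
    The fixed-timescale baseline that the new scheme replaces is simply first-order decay of the hydrophobic fraction; a sketch using the 1.2-day e-folding time quoted above (everything else illustrative):

      import math

      tau, dt = 1.2, 0.1                 # days: e-folding aging time, time step
      bc_phobic, bc_philic = 1.0, 0.0    # normalized BC burdens
      for _ in range(int(5.0 / dt)):     # integrate 5 days
          aged = bc_phobic * (1.0 - math.exp(-dt / tau))
          bc_phobic -= aged
          bc_philic += aged
      # a microphysics-based scheme instead diagnoses the conversion rate each step
      # from simulated condensation and coagulation, so tau varies in space and time
      print(bc_phobic, bc_philic)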

  1. Model selection using cosmic chronometers with Gaussian Processes

    NASA Astrophysics Data System (ADS)

    Melia, Fulvio; Yennapureddy, Manoj K.

    2018-02-01

    The use of Gaussian Processes with a measurement of the cosmic expansion rate based solely on the observation of cosmic chronometers provides a completely cosmology-independent reconstruction of the Hubble parameter H(z) suitable for testing different models. The corresponding dispersion σH is smaller than ~9% over the entire redshift range (0 ≲ z ≲ 2.0) of the observations, rivaling many kinds of cosmological measurements available today. We use the reconstructed H(z) function to test six different cosmologies and show that it favours the R_h = ct universe, which has only one free parameter (i.e., H0), over other models, including Planck ΛCDM. The parameters of the standard model may be re-optimized to improve the fits to the reconstructed H(z) function, but the results have smaller p-values than one finds with R_h = ct.
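
    A minimal Gaussian-process reconstruction of H(z) in the spirit described can be set up with scikit-learn; the data points below are placeholders, not the chronometer compilation used in the paper:

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, ConstantKernel

      z = np.array([[0.1], [0.5], [1.0], [1.5]])   # redshifts
      H = np.array([71.0, 88.0, 120.0, 160.0])     # H(z) in km/s/Mpc, illustrative
      sigma = np.array([5.0, 6.0, 8.0, 12.0])      # 1-sigma measurement errors

      gp = GaussianProcessRegressor(kernel=ConstantKernel(100.0) * RBF(1.0),
                                    alpha=sigma**2, normalize_y=True)
      gp.fit(z, H)
      z_grid = np.linspace(0.0, 2.0, 50).reshape(-1, 1)
      H_rec, H_err = gp.predict(z_grid, return_std=True)  # reconstruction and sigma_H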

  2. Climate Projections from the NARCliM Project: Bayesian Model Averaging of Maximum Temperature Projections

    NASA Astrophysics Data System (ADS)

    Olson, R.; Evans, J. P.; Fan, Y.

    2015-12-01

    NARCliM (NSW/ACT Regional Climate Modelling Project) is a regional climate modelling project for Australia and the surrounding region. It dynamically downscales 4 General Circulation Models (GCMs) using three Regional Climate Models (RCMs) to provide climate projections for the CORDEX-AustralAsia region at 50 km resolution, and for south-east Australia at 10 km resolution. The project differs from previous work in the level of sophistication of model selection. Specifically, the selection process for GCMs included (i) conducting a literature review to evaluate model performance, (ii) analysing model independence, and (iii) selecting models that span the future temperature and precipitation change space. The RCMs used for downscaling the GCMs were chosen based on their performance for several precipitation events over south-east Australia, and on model independence. Bayesian Model Averaging (BMA) provides a statistically consistent framework for weighting the models based on their likelihood given the available observations. These weights are used to provide probability distribution functions (pdfs) for model projections. We develop a BMA framework for constructing probabilistic climate projections for spatially averaged variables from the NARCliM project. The first step in the procedure is smoothing model output in order to exclude the influence of internal climate variability. Our statistical model for the model-observation residuals is a homoskedastic iid process. Comparing the RCMs with Australian Water Availability Project (AWAP) observations is used to determine model weights through Monte Carlo integration. Posterior pdfs of the statistical parameters of the model-data residuals are obtained using Markov Chain Monte Carlo. The uncertainty in the properties of the model-data residuals is fully accounted for when constructing the projections. We present preliminary results of the BMA analysis for yearly maximum temperature for New South Wales state planning regions for the period 2060-2079.
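
    The weighting step can be sketched as follows: under a homoskedastic iid Gaussian residual model like the one stated above, each RCM's likelihood given the observations is a product of normal densities, and the BMA weights are the normalized likelihoods (a uniform prior and a fixed residual variance are assumed here; the study itself integrates over the residual-model parameters with Monte Carlo):

      import numpy as np
      from scipy.stats import norm

      obs = np.array([24.1, 25.3, 23.8, 26.0])                 # smoothed observations
      models = {"RCM1": np.array([24.5, 25.0, 24.2, 25.5]),
                "RCM2": np.array([22.0, 23.1, 21.9, 23.8])}    # smoothed model output
      sigma = 0.8                                              # residual std (illustrative)

      log_like = {name: norm.logpdf(obs - sim, scale=sigma).sum()
                  for name, sim in models.items()}
      m = max(log_like.values())
      w = {k: np.exp(v - m) for k, v in log_like.items()}      # stabilized likelihoods
      total = sum(w.values())
      weights = {k: v / total for k, v in w.items()}           # BMA weights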

  3. Effect of plasma spray processing variations on particle melting and splat spreading of hydroxylapatite and alumina

    NASA Astrophysics Data System (ADS)

    Yankee, S. J.; Pletka, B. J.

    1993-09-01

    Splats of hydroxylapatite (HA) and alumina were obtained via plasma spraying using systematically varied combinations of plasma velocity and temperature, which were achieved by altering the primary plasma gas flow rate and plasma gas composition. Particle size was also varied in the case of alumina. Splat spreading was quantified via computer-aided image analysis as a function of the processing variations. A comparison of the splat dimensions predicted by a model developed by Madejski with experimental observations of HA and alumina splats was performed. The model tended to underestimate the HA splat sizes, suggesting that evaporation of smaller particles occurred under the chosen experimental conditions, and to overestimate the observed alumina splat dimensions. Based on this latter result and on the surface appearance of the substrates, incomplete melting appeared to take place in all but the smaller alumina particles. Analysis of the spreading data as a function of the processing variations indicated that the particle size as well as the plasma temperature and velocity influenced the extent of particle melting. Based on these data and other considerations, a physical model was developed that described the degree of particle melting in terms of material and processing parameters. The physical model correctly predicted the relative splat spreading behavior of HA and alumina, assuming that spreading was directly linked to the extent of particle melting.

  4. Crop monitoring & yield forecasting system based on Synthetic Aperture Radar (SAR) and process-based crop growth model: Development and validation in South and South East Asian Countries

    NASA Astrophysics Data System (ADS)

    Setiyono, T. D.

    2014-12-01

    Accurate and timely information on rice crop growth and yield helps governments and other stakeholders adapt their economic policies and enables relief organizations to better anticipate and coordinate relief efforts in the wake of a natural catastrophe. Such delivery of rice growth and yield information is made possible by regular earth observation using space-borne Synthetic Aperture Radar (SAR) technology combined with a crop modeling approach to estimate yield. Radar-based remote sensing is capable of observing rice vegetation growth irrespective of cloud coverage, an important feature given that during flooding events the sky is often cloud-covered. The system allows rapid damage assessment over the area of interest. Rice yield monitoring is based on a crop growth simulation and SAR-derived key information, particularly the start of season and leaf growth rate. Results from pilot study sites in South and South East Asian countries suggest that incorporating SAR data into the crop model improves estimates of actual yields. Remote-sensing data assimilation into the crop model effectively captures the responses of rice crops to environmental conditions over large spatial coverage, which otherwise would be practically impossible to achieve. Such improvement of actual yield estimates enables practical applications such as crop insurance programs. A process-based crop simulation model is used in the system to ensure that climate information is adequately captured and to enable mid-season yield forecasts.

  5. Models of formation and some algorithms of hyperspectral image processing

    NASA Astrophysics Data System (ADS)

    Achmetov, R. N.; Stratilatov, N. R.; Yudakov, A. A.; Vezenov, V. I.; Eremeev, V. V.

    2014-12-01

    Algorithms and information technologies for processing Earth hyperspectral imagery are presented, and several new approaches are discussed. Specific challenges of processing hyperspectral imagery, such as multifold signal-to-noise reduction, atmospheric distortions, access to the spectral characteristics of every image point, and the high dimensionality of the data, were studied. Different measures of similarity between individual hyperspectral image points, and the effect of additive uncorrelated noise on these measures, were analyzed. It was shown that these measures are substantially affected by noise, and a new measure free of this disadvantage was proposed. The problem of detecting observed scene object boundaries, based on comparing the spectral characteristics of image points, is considered; contours are detected much more reliably when spectral characteristics are used instead of energy brightness. A statistical approach to the correction of atmospheric distortions was proposed, which solves the problem by analyzing the distorted image itself, in contrast to analytical multiparametric models. Several algorithms for fusing spectral zonal images with data from other survey systems, which make it possible to image observed scene objects with higher quality, are considered. Quality metrics for hyperspectral data processing were proposed and studied.
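
    One classical point-to-point similarity measure of the kind discussed is the spectral angle, which compares spectral shape rather than energy brightness and is insensitive to a multiplicative gain, though additive noise still perturbs it; a brief sketch:

      import numpy as np

      def spectral_angle(s1, s2):
          """Angle between two spectra in radians; 0 means identical shape."""
          cos = np.dot(s1, s2) / (np.linalg.norm(s1) * np.linalg.norm(s2))
          return np.arccos(np.clip(cos, -1.0, 1.0))

      a = np.array([0.12, 0.35, 0.50, 0.41])
      print(spectral_angle(a, 2.5 * a))                           # ~0: same shape
      print(spectral_angle(a, a + np.random.normal(0, 0.05, 4)))  # noise inflates the angle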

  6. Empirically Derived and Simulated Sensitivity of Vegetation to Climate Across Global Gradients of Temperature and Precipitation

    NASA Astrophysics Data System (ADS)

    Quetin, G. R.; Swann, A. L. S.

    2017-12-01

    Successfully predicting the state of vegetation in a novel environment depends on our process-level understanding of the ecosystem and its interactions with the environment. We derive a global empirical map of the sensitivity of vegetation to climate using the response of satellite-observed greenness and leaf area to interannual variations in temperature and precipitation. Our analysis provides observations of ecosystem functioning, i.e., the vegetation's interactions with the physical environment, across a wide range of climates, and provides a functional constraint for hypotheses engendered in process-based models. We infer mechanisms constraining ecosystem functioning by contrasting how the observed and simulated sensitivity of vegetation to climate varies across climate space. Our analysis yields empirical evidence for multiple physical and biological mediators of the sensitivity of vegetation to climate as a systematic change across climate space. Our comparison of remote sensing-based vegetation sensitivity with modeled estimates provides evidence for which physiological mechanisms - photosynthetic efficiency, respiration, water supply, atmospheric water demand, and sunlight availability - dominate ecosystem functioning in places with different climates. Earth system models are generally successful in reproducing the broad sign and shape of ecosystem functioning across climate space. However, this general agreement breaks down in hot, wet climates, where models simulate less leaf area during a warmer year while observations show a mixed response but overall more leaf area during warmer years. In addition, the simulated ecosystem interaction with temperature is generally larger and changes more rapidly across a gradient of temperature than is observed. We hypothesize that the amplified interaction and change are both due to a lack of adaptation and acclimation in the simulations. This discrepancy with observations suggests that the simulated responses of vegetation to global warming, and the feedbacks between vegetation and climate, are too strong in the models.
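
    The empirical sensitivity map rests on per-pixel regression of interannual vegetation anomalies on climate anomalies; a single-pixel sketch with synthetic anomalies (coefficients invented for illustration):

      import numpy as np

      rng = np.random.default_rng(0)
      t_anom = rng.normal(0.0, 1.0, 30)      # temperature anomalies (K)
      p_anom = rng.normal(0.0, 50.0, 30)     # precipitation anomalies (mm)
      lai_anom = 0.08 * t_anom + 0.002 * p_anom + rng.normal(0.0, 0.05, 30)

      X = np.column_stack([t_anom, p_anom, np.ones_like(t_anom)])
      beta, *_ = np.linalg.lstsq(X, lai_anom, rcond=None)
      dLAI_dT, dLAI_dP = beta[0], beta[1]    # sensitivities to compare against models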

  7. Latent Heating Retrieval from TRMM Observations Using a Simplified Thermodynamic Model

    NASA Technical Reports Server (NTRS)

    Grecu, Mircea; Olson, William S.

    2003-01-01

    A procedure for the retrieval of hydrometeor latent heating from TRMM active and passive observations is presented. The procedure is based on current methods for estimating multiple-species hydrometeor profiles from TRMM observations; the species include cloud water, cloud ice, rain, and graupel (or snow). A three-dimensional wind field is prescribed based on the retrieved hydrometeor profiles, and, assuming a steady state, the sources and sinks in the hydrometeor conservation equations are determined. Then, the momentum and thermodynamic equations, in which the heating and cooling are derived from the hydrometeor sources and sinks, are integrated one step forward in time. The hydrometeor sources and sinks are reevaluated based on the new wind field, and the momentum and thermodynamic equations are integrated one more step. The reevaluation-integration process is repeated until a steady state is reached. The procedure is tested using cloud model simulations: cloud-model-derived fields are used to synthesize TRMM observations, from which hydrometeor profiles are derived; the procedure is applied to the retrieved hydrometeor profiles, and the latent heating estimates are compared to the actual latent heating produced by the cloud model. Examples of the procedure's application to real TRMM data are also provided.

  8. Investigating the effect of invasion characteristics on onion thrips (Thysanoptera: Thripidae) populations in onions with a temperature-driven process model.

    PubMed

    Mo, Jianhua; Stevens, Mark; Liu, De Li; Herron, Grant

    2009-12-01

    A temperature-driven process model was developed to describe the seasonal patterns of populations of onion thrips, Thrips tabaci Lindeman, in onions. The model used daily cohorts (individuals of the same developmental stage and daily age) as the population unit. Stage transitions were modeled as a logistic function of accumulated degree-days to account for variability in development rate among individuals, and daily survival was modeled as a logistic function of daily mean temperature. Parameters for development, survival, and fecundity were estimated from published data. A single invasion event was used to initiate the population process, starting at 1-100 d after onion emergence (DAE), lasting 10-100 d, at a daily rate of 0.001-0.9 adults/plant/d. The model was validated against five observed seasonal patterns of onion thrips populations from two unsprayed sites in the Riverina, New South Wales, Australia, during 2003-2006. Performance of the model was measured by a fit index based on the proportion of variation in the observed data explained by the model (R2) and the differences in total thrips-days between observed and predicted populations. Satisfactory matching between simulated and observed seasonal patterns was obtained within the ranges of invasion parameters tested. The best model fit was obtained for invasion starting dates of 6-98 DAE with a daily invasion rate of 0.002-0.2 adults/plant/d and an invasion duration of 30-100 d. Under the best-fit invasion scenarios, the model closely reproduced the observed seasonal patterns, explaining 73-95% of the variability in adult and larval densities during population increase periods. The results showed that small invasions of adult thrips followed by a gradual population build-up within onion crops were sufficient to bring about the observed seasonal patterns. Implications of the model for the timing of chemical controls are discussed.
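
    The two logistic components translate directly into code; a sketch with invented coefficients (the paper estimates its parameters from published data):

      import math

      def transition_prob(dd, dd50=120.0, slope=0.08):
          """Probability that a cohort has completed a stage after dd degree-days."""
          return 1.0 / (1.0 + math.exp(-slope * (dd - dd50)))

      def daily_survival(t_mean, a=4.0, b=-0.12):
          """Logistic daily survival as a function of daily mean temperature."""
          return 1.0 / (1.0 + math.exp(-(a + b * (t_mean - 25.0))))

      dd = sum(max(t - 11.5, 0.0) for t in [18, 22, 25, 27])  # degree-day accumulation
      print(transition_prob(dd), daily_survival(24.0))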

  9. Physico-chemical processes for landfill leachate treatment: Experiments and mathematical models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xing, W.; Ngo, H.H.; Kim, S.H.

    2008-07-01

    In this study, the adsorption of synthetic landfill leachate onto four kinds of activated carbon was investigated. From the equilibrium and kinetics experiments, it was observed that coal-based PAC presented the highest organic pollutant removal efficiency (54%), followed by coal-based GAC (50%), wood-based GAC (33%) and wood-based PAC (14%). The adsorption equilibrium of PAC and GAC was successfully predicted by the Henry-Freundlich adsorption model, whilst the LDFA + dual isotherm kinetics model described the batch adsorption kinetics well. Flocculation and flocculation-adsorption experiments were also conducted. The results indicated that flocculation did not perform well on organics removal because of the dominance of low-molecular-weight organic compounds in synthetic landfill leachate. Consequently, flocculation as a pretreatment to adsorption, and a combination of flocculation and adsorption, could not substantially improve the organic removal efficiency relative to the single adsorption process.
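
    Isotherm fitting of this kind can be sketched with scipy; here a linear Henry term plus a Freundlich term q = K_F C^(1/n) stands in for the combined Henry-Freundlich form named above (the exact functional form used in the paper is an assumption here), with synthetic equilibrium data:

      import numpy as np
      from scipy.optimize import curve_fit

      def henry_freundlich(C, k_h, k_f, n):
          """Assumed hybrid isotherm: linear Henry term plus Freundlich term."""
          return k_h * C + k_f * C ** (1.0 / n)

      C_eq = np.array([5.0, 20.0, 60.0, 150.0, 400.0])   # equilibrium conc. (mg/L)
      q_eq = np.array([3.1, 9.8, 21.5, 38.0, 71.0])      # adsorbed amount (mg/g), synthetic
      (k_h, k_f, n), _ = curve_fit(henry_freundlich, C_eq, q_eq, p0=[0.05, 1.0, 2.0])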

  10. Creation of an ensemble of simulated cardiac cases and a human observer study: tools for the development of numerical observers for SPECT myocardial perfusion imaging

    NASA Astrophysics Data System (ADS)

    O'Connor, J. Michael; Pretorius, P. Hendrik; Gifford, Howard C.; Licho, Robert; Joffe, Samuel; McGuiness, Matthew; Mehurg, Shannon; Zacharias, Michael; Brankov, Jovan G.

    2012-02-01

    Our previous Single Photon Emission Computed Tomography (SPECT) myocardial perfusion imaging (MPI) research explored the utility of numerical observers. We recently created two hundred and eighty simulated SPECT cardiac cases using Dynamic MCAT (DMCAT) and SIMIND Monte Carlo tools. All simulated cases were then processed with two reconstruction methods: iterative ordered subset expectation maximization (OSEM) and filtered back-projection (FBP). Observer study sets were assembled for both OSEM and FBP methods. Five physicians performed an observer study on one hundred and seventy-nine images from the simulated cases. The observer task was to indicate detection of any myocardial perfusion defect using the American Society of Nuclear Cardiology (ASNC) 17-segment cardiac model and the ASNC five-scale rating guidelines. Human observer Receiver Operating Characteristic (ROC) studies established the guidelines for the subsequent evaluation of numerical model observer (NO) performance. Several NOs were formulated and their performance was compared with the human observer performance. One type of NO was based on evaluation of a cardiac polar map that had been pre-processed using a gradient-magnitude watershed segmentation algorithm. The second type of NO was also based on analysis of a cardiac polar map but with use of a priori calculated average image derived from an ensemble of normal cases.

  11. The Cold Land Processes Experiment (CLPX-1): Analysis and Modelling of LSOS Data (IOP3 Period)

    NASA Technical Reports Server (NTRS)

    Tedesco, Marco; Kim, Edward J.; Cline, Don; Graf, Tobias; Koike, Toshio; Hardy, Janet; Armstrong, Richard; Brodzik, Mary

    2004-01-01

    Microwave brightness temperatures at 18.7, 36.5, and 89 GHz collected at the Local-Scale Observation Site (LSOS) of the NASA Cold Land Processes Field Experiment in February 2003 (third Intensive Observation Period) were simulated using a Dense Media Radiative Transfer model (DMRT) based on the Quasi-Crystalline Approximation with Coherent Potential (QCA-CP). Inputs to the model were averaged from LSOS snow pit measurements, although different averages were used for the lower frequencies than for the highest one, due to the different penetration depths and the stratigraphy of the snowpack. Mean snow particle radius was computed as a best-fit parameter. Results show that the model was able to reproduce satisfactorily the brightness temperatures measured by the University of Tokyo's Ground-Based Microwave Radiometer system (GBMR-7). The values of the best-fit snow particle radii were found to fall within the range of values obtained by averaging the field-measured mean particle sizes for the three classes of small, medium, and large grain sizes measured at the LSOS site.

  12. A simulation to study the feasibility of improving the temporal resolution of LAGEOS geodynamic solutions by using a sequential process noise filter

    NASA Technical Reports Server (NTRS)

    Hartman, Brian Davis

    1995-01-01

    A key drawback to estimating geodetic and geodynamic parameters over time from satellite laser ranging (SLR) observations is the inability to accurately model all the forces acting on the satellite; errors associated with the observations and the measurement model can detract from the estimates as well. These 'model errors' corrupt the solutions obtained from the satellite orbit determination process. Dynamical models for satellite motion use known geophysical parameters to mathematically describe the forces acting on the satellite. However, these parameters, while estimated as constants, vary over time, and these temporal variations must be accounted for in some fashion to maintain meaningful solutions. The primary goal of this study is to analyze the feasibility of using a sequential process noise filter for estimating geodynamic parameters over time from Laser Geodynamics Satellite (LAGEOS) SLR data. This evaluation is achieved by first simulating a sequence of realistic LAGEOS laser ranging observations. These observations are generated using models with known temporal variations in several geodynamic parameters (along-track drag and the J(sub 2), J(sub 3), J(sub 4), and J(sub 5) geopotential coefficients). A standard (non-stochastic) filter and a stochastic process noise filter are then used to estimate the model parameters from the simulated observations. The standard non-stochastic filter estimates these parameters as constants over consecutive fixed time intervals; the resulting solutions therefore contain constant estimates of parameters that vary in time, which limits the temporal resolution and accuracy of the solution. The stochastic process noise filter estimates these parameters as correlated process noise variables and thus has the potential to estimate the temporal variations more accurately, since the constraint of estimating the parameters as constants is eliminated. A comparison of the temporal resolution of solutions obtained from standard sequential filtering and from process noise sequential filtering shows that the accuracy is significantly improved using process noise, and that the positional accuracy of the orbit is improved as well. The temporal resolution of the resulting solutions is detailed, and conclusions are drawn about the results. Benefits and drawbacks of using process noise filtering in this type of scenario are also identified.
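
    The contrast between the two filters reduces to the process-noise variance Q: the constant-parameter filter sets Q = 0, while the process noise filter lets the parameter random-walk between observations. A one-dimensional scalar sketch (values illustrative):

      import numpy as np

      def kalman_track(obs, r, q):
          """Scalar Kalman filter: state x_k = x_{k-1} + w (var q), obs z_k = x_k + v (var r)."""
          x, p, out = 0.0, 1e6, []
          for z in obs:
              p += q                    # predict: inflate variance by process noise
              k = p / (p + r)           # Kalman gain
              x += k * (z - x)          # measurement update
              p *= (1.0 - k)
              out.append(x)
          return np.array(out)

      truth = 1e-9 * np.sin(np.linspace(0.0, 6.0, 200))         # slowly varying parameter
      obs = truth + np.random.normal(0.0, 5e-10, 200)
      rigid = kalman_track(obs, r=(5e-10)**2, q=0.0)            # constant-parameter filter
      flexible = kalman_track(obs, r=(5e-10)**2, q=(5e-11)**2)  # tracks the drift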

  13. Modeling of Powder Bed Manufacturing Defects

    NASA Astrophysics Data System (ADS)

    Mindt, H.-W.; Desmaison, O.; Megahed, M.; Peralta, A.; Neumann, J.

    2018-01-01

    Powder bed additive manufacturing offers unmatched capabilities. The deposition resolution achieved is extremely high enabling the production of innovative functional products and materials. Achieving the desired final quality is, however, hampered by many potential defects that have to be managed in due course of the manufacturing process. Defects observed in products manufactured via powder bed fusion have been studied experimentally. In this effort we have relied on experiments reported in the literature and—when experimental data were not sufficient—we have performed additional experiments providing an extended foundation for defect analysis. There is large interest in reducing the effort and cost of additive manufacturing process qualification and certification using integrated computational material engineering. A prerequisite is, however, that numerical methods can indeed capture defects. A multiscale multiphysics platform is developed and applied to predict and explain the origin of several defects that have been observed experimentally during laser-based powder bed fusion processes. The models utilized are briefly introduced. The ability of the models to capture the observed defects is verified. The root cause of the defects is explained by analyzing the numerical results thus confirming the ability of numerical methods to provide a foundation for rapid process qualification.

  14. Assessment of snow-dominated water resources: (Ir-)relevant scales for observation and modelling

    NASA Astrophysics Data System (ADS)

    Schaefli, Bettina; Ceperley, Natalie; Michelon, Anthony; Larsen, Joshua; Beria, Harsh

    2017-04-01

    High Alpine catchments play an essential role for many world regions since they 1) provide water resources to low-lying and often relatively dry regions, 2) are important for hydropower production as a result of their high hydraulic heads, 3) offer relatively undisturbed habitat for fauna and flora, and 4) provide a source of cold water often late into the summer season (due to snowmelt), which is essential for many downstream river ecosystems. However, the water balance of such high Alpine hydrological systems is often difficult to estimate accurately, in part because of the seasonal to interannual accumulation of precipitation in the form of snow and ice and because of relatively low but highly seasonal evapotranspiration rates. These processes are strongly driven by the topography and related vegetation patterns, by air temperature gradients, solar radiation and wind patterns. Based on selected examples, we will discuss how the spatial scale of these patterns dictates at which scales we can make reliable water balance assessments. Overall, this contribution will provide an overview of some of the key open questions in observing and modelling the dominant hydrological processes in Alpine areas at the right scale. A particular focus will be on the observation and modelling of snow accumulation and melt processes, discussing in particular the usefulness of simple versus fully physical models at different spatial scales and the role of observed data.

  15. Application of time-variable process noise in terrestrial reference frames determined from VLBI data

    NASA Astrophysics Data System (ADS)

    Soja, Benedikt; Gross, Richard S.; Abbondanza, Claudio; Chin, Toshio M.; Heflin, Michael B.; Parker, Jay W.; Wu, Xiaoping; Balidakis, Kyriakos; Nilsson, Tobias; Glaser, Susanne; Karbon, Maria; Heinkelmann, Robert; Schuh, Harald

    2018-05-01

    In recent years, Kalman filtering has emerged as a suitable technique for determining terrestrial reference frames (TRFs), a prime example being JTRF2014. The time series approach allows variations of station coordinates that are neither reduced by observational corrections nor considered in the functional model to be taken into account. These variations are primarily due to non-tidal geophysical loading effects that are not reduced according to the current IERS Conventions (2010). It is standard practice that the process noise models applied in Kalman filter TRF solutions are derived from time series of loading displacements and account for station-dependent differences. So far, it has been assumed that the parameters of these process noise models are constant over time. However, due to the presence of seasonal and irregular variations, this assumption does not truly reflect reality. In this study, we derive a station coordinate process noise model allowing for such temporal variations. This process noise model, and a parameterized version of it, are applied in the computation of TRF solutions based on very long baseline interferometry data. In comparison with a solution based on a constant process noise model, we find that the station coordinates are affected at the millimeter level.

  16. Sea Fog Forecasting with Lagrangian Models

    NASA Astrophysics Data System (ADS)

    Lewis, J. M.

    2014-12-01

    In 1913, G. I. Taylor introduced us to a Lagrangian view of sea fog formation. He conducted his study off the coast of Newfoundland in the aftermath of the Titanic disaster. We briefly review Taylor's classic work and then apply these same principles to a case of sea fog formation and dissipation off the coast of California. The resources used in this study consist of: 1) land-based surface and upper-air observations, 2) NDBC (National Data Buoy Center) observations from moored buoys equipped to measure dew point temperature as well as the standard surface observations at sea (wind, sea surface temperature, pressure, and air temperature), 3) satellite observations of cloud, and 4) a one-dimensional (vertically directed) boundary layer model that tracks with the surface air motion and makes use of sophisticated turbulence-radiation parameterizations. Results of the investigation indicate that delicate interplay and interaction between the radiation and turbulence processes makes accurate forecasts of sea fog onset unlikely in the near future. This pessimistic attitude stems from inadequacy of the existing network of observations and uncertainties in modeling dynamical processes within the boundary layer.

  17. Coordinated Parameterization Development and Large-Eddy Simulation for Marine and Arctic Cloud-Topped Boundary Layers

    NASA Technical Reports Server (NTRS)

    Bretherton, Christopher S.

    2002-01-01

    The goal of this project was to compare observations of marine and arctic boundary layers with: (1) parameterization systems used in climate and weather forecast models; and (2) two and three dimensional eddy resolving (LES) models for turbulent fluid flow. Based on this comparison, we hoped to better understand, predict, and parameterize the boundary layer structure and cloud amount, type, and thickness as functions of large scale conditions that are predicted by global climate models. The principal achievements of the project were as follows: (1) Development of a novel boundary layer parameterization for large-scale models that better represents the physical processes in marine boundary layer clouds; and (2) Comparison of column output from the ECMWF global forecast model with observations from the SHEBA experiment. Overall the forecast model did predict most of the major precipitation events and synoptic variability observed over the year of observation of the SHEBA ice camp.

  18. BAIAP2 is related to emotional modulation of human memory strength.

    PubMed

    Luksys, Gediminas; Ackermann, Sandra; Coynel, David; Fastenrath, Matthias; Gschwind, Leo; Heck, Angela; Rasch, Bjoern; Spalek, Klara; Vogler, Christian; Papassotiropoulos, Andreas; de Quervain, Dominique

    2014-01-01

    Memory performance is the result of many distinct mental processes, such as memory encoding, forgetting, and modulation of memory strength by emotional arousal. These processes, which are subserved by partly distinct molecular profiles, are not always amenable to direct observation. Therefore, computational models can be used to make inferences about specific mental processes and to study their genetic underpinnings. Here we combined a computational model-based analysis of memory-related processes with high density genetic information derived from a genome-wide study in healthy young adults. After identifying the best-fitting model for a verbal memory task and estimating the best-fitting individual cognitive parameters, we found a common variant in the gene encoding the brain-specific angiogenesis inhibitor 1-associated protein 2 (BAIAP2) that was related to the model parameter reflecting modulation of verbal memory strength by negative valence. We also observed an association between the same genetic variant and a similar emotional modulation phenotype in a different population performing a picture memory task. Furthermore, using functional neuroimaging we found robust genotype-dependent differences in activity of the parahippocampal cortex that were specifically related to successful memory encoding of negative versus neutral information. Finally, we analyzed cortical gene expression data of 193 deceased subjects and detected significant BAIAP2 genotype-dependent differences in BAIAP2 mRNA levels. Our findings suggest that model-based dissociation of specific cognitive parameters can improve the understanding of genetic underpinnings of human learning and memory.

  19. Cascading Walks Model for Human Mobility Patterns

    PubMed Central

    Han, Xiao-Pu; Wang, Xiang-Wen; Yan, Xiao-Yong; Wang, Bing-Hong

    2015-01-01

    Background Uncovering the mechanism behind the scaling laws and the series of anomalies in human trajectories is of fundamental significance for understanding many spatio-temporal phenomena. Recently, several models, e.g. the explorations-returns model (Song et al., 2010) and the radiation model for intercity travel (Simini et al., 2012), have been proposed to study the origin of these anomalies and to predict human movements. However, an agent-based model that can reproduce most empirical observations without a priori assumptions is still lacking. Methodology/Principal Findings In this paper, considering the empirical findings on the correlations between move lengths and staying times in human trips, we propose a simple model based mainly on cascading processes to capture human mobility patterns. In this model, each long-range movement activates a series of shorter movements that are organized by the law of localized exploration and preferential return within a prescribed region. Conclusions/Significance Based on numerical simulations and analytical studies, we show more than five statistical characteristics that are well consistent with the empirical observations, including several types of scaling anomalies and ultraslow diffusion properties, implying that the cascading processes associated with localized exploration and preferential return are indeed key to understanding human mobility. Moreover, the model reproduces both the diverse individual mobility and the aggregated scaling of displacements, bridging the micro and macro patterns of human mobility. In summary, our model successfully explains most of the empirical findings and provides a deeper understanding of the emergence of human mobility patterns. PMID:25860140

  20. Cascading walks model for human mobility patterns.

    PubMed

    Han, Xiao-Pu; Wang, Xiang-Wen; Yan, Xiao-Yong; Wang, Bing-Hong

    2015-01-01

    Uncovering the mechanism behind the scaling laws and the series of anomalies in human trajectories is of fundamental significance for understanding many spatio-temporal phenomena. Recently, several models, e.g. the explorations-returns model (Song et al., 2010) and the radiation model for intercity travel (Simini et al., 2012), have been proposed to study the origin of these anomalies and to predict human movements. However, an agent-based model that can reproduce most empirical observations without a priori assumptions is still lacking. In this paper, considering the empirical findings on the correlations between move lengths and staying times in human trips, we propose a simple model based mainly on cascading processes to capture human mobility patterns. In this model, each long-range movement activates a series of shorter movements that are organized by the law of localized exploration and preferential return within a prescribed region. Based on numerical simulations and analytical studies, we show more than five statistical characteristics that are well consistent with the empirical observations, including several types of scaling anomalies and ultraslow diffusion properties, implying that the cascading processes associated with localized exploration and preferential return are indeed key to understanding human mobility. Moreover, the model reproduces both the diverse individual mobility and the aggregated scaling of displacements, bridging the micro and macro patterns of human mobility. In summary, our model successfully explains most of the empirical findings and provides a deeper understanding of the emergence of human mobility patterns.
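
    A compressed sketch of the cascading mechanism common to both versions of this record, in which one heavy-tailed long-range move spawns a burst of shorter localized moves; exponents and burst sizes are placeholders rather than the paper's fitted values:

      import numpy as np

      rng = np.random.default_rng(1)

      def move(pos, length, rng):
          """Displace a walker by `length` in a uniformly random direction."""
          angle = rng.uniform(0.0, 2.0 * np.pi)
          return pos + length * np.array([np.cos(angle), np.sin(angle)])

      pos, traj = np.zeros(2), []
      for _ in range(50):
          long_jump = rng.pareto(1.5) + 1.0            # heavy-tailed long-range move
          pos = move(pos, long_jump, rng)
          traj.append(pos)
          for _ in range(rng.integers(3, 10)):         # cascade of localized moves
              pos = move(pos, 0.05 * long_jump * (rng.pareto(1.5) + 1.0), rng)
              traj.append(pos)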

  1. Volcanic Ash Data Assimilation System for Atmospheric Transport Model

    NASA Astrophysics Data System (ADS)

    Ishii, K.; Shimbori, T.; Sato, E.; Tokumoto, T.; Hayashi, Y.; Hashimoto, A.

    2017-12-01

    The Japan Meteorological Agency (JMA) runs two operational volcanic ash forecast products: the Volcanic Ash Fall Forecast (VAFF) and the Volcanic Ash Advisory (VAA). In these operations, forecasts are calculated by atmospheric transport models that include advection, turbulent diffusion, gravitational settling, and (wet/dry) deposition. The initial distribution of volcanic ash in the models is the most important but most uncertain factor. Operationally, the model of Suzuki (1983), with many empirical assumptions, is adopted for the initial distribution, which adversely affects the reconstruction of actual eruption plumes. We are developing a volcanic ash data assimilation system using weather radars and meteorological satellite observations in order to improve the initial distribution in the atmospheric transport models. Our data assimilation system is based on the three-dimensional variational data assimilation method (3D-Var). The analysis variables are ash concentration and size distribution parameters, which are mutually independent. Radar observation is expected to provide three-dimensional parameters such as ash concentration and parameters of the ash particle size distribution, while satellite observation is expected to provide two-dimensional parameters of ash clouds such as mass loading, top height, and particle effective radius. In this study, we estimate the thickness of ash clouds using the vertical wind shear from JMA numerical weather prediction and apply it in the volcanic ash data assimilation system.
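
    The 3D-Var analysis step minimizes the standard quadratic cost J(x) = (x - xb)^T B^-1 (x - xb) + (y - Hx)^T R^-1 (y - Hx); a toy version with a linear observation operator (all matrices and values illustrative, not JMA's system):

      import numpy as np
      from scipy.optimize import minimize

      xb = np.array([1.0, 0.5, 0.2])              # background ash state
      B_inv = np.linalg.inv(np.diag([0.5, 0.5, 0.5]))
      H = np.array([[1.0, 0.0, 0.0],
                    [0.0, 1.0, 1.0]])             # maps state to radar/satellite observables
      y = np.array([1.4, 1.1])                    # observations
      R_inv = np.linalg.inv(np.diag([0.1, 0.2]))

      def cost(x):
          db, do = x - xb, y - H @ x
          return db @ B_inv @ db + do @ R_inv @ do

      xa = minimize(cost, xb).x                   # analysis state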

  2. Probabilistic Estimates of Global Mean Sea Level and its Underlying Processes

    NASA Astrophysics Data System (ADS)

    Hay, C.; Morrow, E.; Kopp, R. E.; Mitrovica, J. X.

    2015-12-01

    Local sea level can vary significantly from the global mean value due to a suite of processes that includes ongoing sea-level changes due to the last ice age, land water storage, ocean circulation changes, and the non-uniform sea-level changes that arise when modern-day land ice rapidly melts. Understanding these sources of spatial and temporal variability is critical to estimating past and present sea-level change and projecting future sea-level rise. Using two probabilistic techniques, a multi-model Kalman smoother and Gaussian process regression, we have reanalyzed 20th century tide gauge observations to produce a new estimate of global mean sea level (GMSL). Our methods allow us to extract global information from the sparse tide gauge field by taking advantage of the physics-based and model-derived geometry of the contributing processes. Both methods provide constraints on the sea-level contribution of glacial isostatic adjustment (GIA). The Kalman smoother tests multiple discrete GIA models, probabilistically computing the most likely model given the observations, while the Gaussian process regression characterizes the prior covariance structure of a suite of GIA models and then uses this structure to estimate the posterior distribution of local rates of GIA-induced sea-level change. We present the two methodologies, the model-derived geometries of the underlying processes, and our new probabilistic estimates of GMSL and GIA.

  3. Multiframe super resolution reconstruction method based on light field angular images

    NASA Astrophysics Data System (ADS)

    Zhou, Shubo; Yuan, Yan; Su, Lijuan; Ding, Xiaomin; Wang, Jichao

    2017-12-01

    The plenoptic camera can directly obtain 4-dimensional light field information from a 2-dimensional sensor. However, based on the sampling theorem, the spatial resolution is greatly limited by the microlenses. In this paper, we present a method of reconstructing high-resolution images from the angular images. First, the ray tracing method is used to model the telecentric-based light field imaging process. Then, we analyze the subpixel shifts between the angular images extracted from the defocused light field data and the blur in the angular images. According to the analysis above, we construct the observation model from the ideal high-resolution image to the angular images. Applying the regularized super resolution method, we can obtain the super resolution result with a magnification ratio of 8. The results demonstrate the effectiveness of the proposed observation model.

  4. Intensive precipitation observation greatly improves hydrological modelling of the poorly gauged high mountain Mabengnong catchment in the Tibetan Plateau

    NASA Astrophysics Data System (ADS)

    Wang, Li; Zhang, Fan; Zhang, Hongbo; Scott, Christopher A.; Zeng, Chen; Shi, Xiaonan

    2018-01-01

    Precipitation is one of the most critical inputs for models used to improve understanding of hydrological processes. In high mountain areas, it is challenging to generate a reliable precipitation data set capturing the spatial and temporal heterogeneity due to the harsh climate, extreme terrain and the lack of observations. This study conducts intensive observation of precipitation in the Mabengnong catchment in the southeast of the Tibetan Plateau during July to August 2013. Because precipitation is greatly influenced by altitude, the observed data are used to characterize the precipitation gradient (PG) and hourly distribution (HD), showing that the average PG is 0.10, 0.28 and 0.26 mm/d/100 m and the average duration is around 0.1, 0.8 and 5.2 h for trace, light and moderate rain, respectively. A distributed biosphere hydrological model based on water and energy budgets with improved physical process for snow (WEB-DHM-S) is applied to simulate the hydrological processes with gridded precipitation data derived from a lower altitude meteorological station and the PG and HD characterized for the study area. The observed runoff, MODIS/Terra snow cover area (SCA) data, and MODIS/Terra land surface temperature (LST) data are used for model calibration and validation. Runoff, SCA and LST simulations all show reasonable results. Sensitivity analyses illustrate that runoff is largely underestimated without considering PG, indicating that short-term intensive precipitation observation has the potential to greatly improve hydrological modelling of poorly gauged high mountain catchments.
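
    Applying the observed gradients is a one-line adjustment per grid cell; a sketch using the trace/light/moderate PG values quoted above (the intensity-class thresholds are assumed here):

      def distribute_precip(p_station, z_station, z_cell):
          """Extrapolate daily precipitation (mm/d) from a station to a grid cell."""
          if p_station < 0.1:        # trace rain (threshold assumed)
              pg = 0.10              # mm/d per 100 m
          elif p_station < 10.0:     # light rain (threshold assumed)
              pg = 0.28
          else:                      # moderate rain
              pg = 0.26
          return max(p_station + pg * (z_cell - z_station) / 100.0, 0.0)

      print(distribute_precip(5.0, 3800.0, 5000.0))  # 5 mm/d at 3800 m -> ~8.4 mm/d at 5000 m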

  5. Ultra-heavy cosmic rays: Theoretical implications of recent observations

    NASA Technical Reports Server (NTRS)

    Blake, J. B.; Hainebach, K. L.; Schramm, D. N.; Anglin, J. D.

    1977-01-01

    Extreme ultraheavy cosmic ray observations (Z ≥ 70) are compared with r-process models. A detailed cosmic ray propagation calculation is used to transform the calculated source distributions into those observed at the Earth. The r-process production abundances are calculated using different mass formulae and beta-rate formulae; an empirical estimate based on the observed solar system abundances is also used. There is continued strong indication of r-process dominance in the extreme ultraheavy cosmic rays. However, it is shown that the observed high actinide/Pt ratio in the cosmic rays cannot be fit with the same r-process calculation that also fits solar system material. This result suggests that the cosmic rays probably undergo some preferential acceleration in addition to the apparent general enrichment in heavy (r-process) material. An estimate is also made of the expected relative abundance of superheavy elements in the cosmic rays if the anomalous heavy xenon in carbonaceous chondrites is due to a fissioning superheavy element.

  6. Maximum likelihood-based analysis of single-molecule photon arrival trajectories.

    PubMed

    Hajdziona, Marta; Molski, Andrzej

    2011-02-07

    In this work we explore the statistical properties of the maximum likelihood-based analysis of one-color photon arrival trajectories. This approach does not involve binning and, therefore, all of the information contained in an observed photon trajectory is used. We study the accuracy and precision of parameter estimates and the efficiency of the Akaike information criterion and the Bayesian information criterion (BIC) in selecting the true kinetic model. We focus on the low-excitation regime, where photon trajectories can be modeled as realizations of Markov-modulated Poisson processes. The number of observed photons is the key parameter in determining model selection and parameter estimation. For example, the BIC can select the true three-state model from competing two-, three-, and four-state kinetic models even for relatively short trajectories made up of 2 × 10^3 photons. When the intensity levels are well separated and 10^4 photons are observed, the two-state model parameters can be estimated with about 10% precision and those for a three-state model with about 20% precision.
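
    The BIC comparison used above reduces to BIC = k ln n - 2 ln L, with n the number of observed photons and k the number of free parameters; a schematic comparison with hypothetical maximized log-likelihoods:

      import numpy as np

      def bic(log_likelihood, n_params, n_photons):
          return n_params * np.log(n_photons) - 2.0 * log_likelihood

      # hypothetical fits for 2-, 3-, and 4-state Markov-modulated Poisson models:
      # {states: (maximized log-likelihood, number of free parameters)}
      fits = {2: (-10450.0, 4), 3: (-10380.0, 9), 4: (-10377.0, 16)}
      n = 2000                                   # photons in the trajectory
      scores = {k: bic(ll, p, n) for k, (ll, p) in fits.items()}
      best = min(scores, key=scores.get)         # lowest BIC; here the 3-state model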

  7. Decisionmaking in practice: The dynamics of muddling through.

    PubMed

    Flach, John M; Feufel, Markus A; Reynolds, Peter L; Parker, Sarah Henrickson; Kellogg, Kathryn M

    2017-09-01

    An alternative to conventional models that treat decisions as open-loop independent choices is presented. The alternative model is based on observations of work situations, such as healthcare, where decisionmaking is more typically a closed-loop, dynamic, problem-solving process. The article suggests five important distinctions between the processes assumed by conventional models and the reality of decisionmaking in practice. It is suggested that the logic of abduction, in the form of an adaptive, muddling-through process, is more consistent with the realities of practice in domains such as healthcare. The practical implication is that the design goal should not be to improve consistency with normative models of rationality, but to tune the representations guiding the muddling process to increase functional perspicacity. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Assessment of Mars Pathfinder landing site predictions

    USGS Publications Warehouse

    Golombek, M.P.; Moore, H.J.; Haldemann, A.F.C.; Parker, T.J.; Schofield, J.T.

    1999-01-01

    Remote sensing data at scales of kilometers and an Earth analog were used to accurately predict the characteristics of the Mars Pathfinder landing site at a scale of meters. The surface surrounding the Mars Pathfinder lander in Ares Vallis appears consistent with orbital interpretations, namely, that it would be a rocky plain composed of materials deposited by catastrophic floods. The surface and observed maximum clast size appear similar to predictions based on an analogous surface of the Ephrata Fan in the Channeled Scabland of Washington state. The elevation of the site measured by relatively small footprint delay-Doppler radar is within 100 m of that determined by two-way ranging and Doppler tracking of the spacecraft. The nearly equal elevations of the Mars Pathfinder and Viking Lander 1 sites allowed a prediction of the atmospheric conditions with altitude (pressure, temperature, and winds) that were well within the entry, descent, and landing design margins. High-resolution (~38 m/pixel) Viking Orbiter 1 images showed a sparsely cratered surface with small knobs and relatively low slopes, consistent with observations of these features from the lander. Measured rock abundance is within 10% of that expected from Viking orbiter thermal observations and models. The fractional area covered by large, potentially hazardous rocks observed is similar to that estimated from model rock distributions based on data from the Viking landing sites, Earth analog sites, and total rock abundance. The bulk and fine-component thermal inertias measured from orbit are similar to those calculated from the observed rock size-frequency distribution. A simple radar echo model based on the reflectivity of the soil (estimated from its bulk density) and the measured fraction of area covered by rocks was used to approximate the quasi-specular and diffuse components of the Earth-based radar echoes. Color and albedo orbiter data were used to predict the relatively dust-free or unweathered surface around the Pathfinder lander compared to the Viking landing sites. Comparisons with the experiences of selecting the Viking landing sites demonstrate the enormous benefit the Viking data and its analyses and models had on the successful prediction of the Pathfinder site. The Pathfinder experience demonstrates that, in certain locations, geologic processes observed in orbiter data can be used to infer surface characteristics where those processes dominate over other processes affecting the Martian surface layer. Copyright 1999 by the American Geophysical Union.

  9. Analysis of Summer-Time Ozone and Precursor Species in the Southeast United States

    NASA Technical Reports Server (NTRS)

    Johnson, Matthew

    2016-01-01

    Ozone (O3) is a greenhouse gas and toxic pollutant which plays a major role in air quality and atmospheric chemistry. The understanding and ability to model the horizontal and vertical structure of O3 mixing ratios is difficult due to the complex formation/destruction processes and transport pathways that cause large variability of O3. The Environmental Protection Agency has National Ambient Air Quality Standards for O3 set at 75 ppb with future standards proposed to be as low as 65 ppb. These lower values emphasize the need to better understand and simulate the transport processes, emission sources, and chemical processes controlling precursor species (e.g., NOx, VOCs, and CO) which influence O3 mixing ratios. The uncertainty of these controlling variables is particularly large in the southeast United States (US), a region impacted by multiple different emission sources of precursor species (anthropogenic and biogenic) and transport processes, resulting in complex spatio-temporal O3 patterns. During this work we will evaluate O3 and precursor species in the southeast US applying models, ground-based and airborne in situ data, and lidar observations. In the summer of 2013, the UAH O3 Differential Absorption Lidar (DIAL) (part of the Tropospheric Ozone Lidar Network (TOLNet)) measured vertical O3 profiles from the surface up to approximately 12 km. During this period, the lidar observed numerous periods of dynamic temporal and vertical O3 structures. In order to determine the sources/processes impacting these O3 mixing ratios we will apply the CTM GEOS-Chem (v9-02) at a 0.25 deg x 0.3125 deg resolution. Using in situ ground-based (e.g., SEARCH Network, CASTNET), airborne (e.g., NOAA WP-3D - SENEX 2013, DC-8 - SEAC4RS), and TOLNet lidar data, we will first evaluate the model to determine the capability of GEOS-Chem to simulate the spatio-temporal variability of O3 in the southeast US. Secondly, we will perform model sensitivity studies in order to quantify which emission sources (e.g., anthropogenic, biogenic, lightning, wildfire) and transport processes (e.g., stratospheric, long-range, local-scale) are contributing to these TOLNet-observed dynamic O3 patterns. Results from the evaluation of the model and the study of sources/processes impacting observed O3 mixing ratios will be presented.

  10. Analysis of Summer-time Ozone and Precursor Species in the Southeast United States

    NASA Astrophysics Data System (ADS)

    Johnson, M. S.; Kuang, S.; Newchurch, M.; Hair, J. W.

    2015-12-01

    Ozone (O3) is a greenhouse gas and toxic pollutant which plays a major role in air quality and atmospheric chemistry. The understanding and ability to model the horizontal and vertical structure of O3 mixing ratios is difficult due to the complex formation/destruction processes and transport pathways that cause large variability of O3. The Environmental Protection Agency has National Ambient Air Quality Standards for O3 set at 75 ppb with future standards proposed to be as low as 65 ppb. These lower values emphasize the need to better understand and simulate the transport processes, emission sources, and chemical processes controlling precursor species (e.g., NOx, VOCs, and CO) which influence O3 mixing ratios. The uncertainty of these controlling variables is particularly large in the southeast United States (US), a region impacted by multiple different emission sources of precursor species (anthropogenic and biogenic) and transport processes, resulting in complex spatio-temporal O3 patterns. During this work we will evaluate O3 and precursor species in the southeast US applying models, ground-based and airborne in situ data, and lidar observations. In the summer of 2013, the UAH O3 Differential Absorption Lidar (DIAL) (part of the Tropospheric Ozone Lidar Network (TOLNet)) measured vertical O3 profiles from the surface up to ~12 km. During this period, the lidar observed numerous periods of dynamic temporal and vertical O3 structures. In order to determine the sources/processes impacting these O3 mixing ratios we will apply the CTM GEOS-Chem (v9-02) at a 0.25° × 0.3125° resolution. Using in situ ground-based (e.g., SEARCH Network, CASTNET), airborne (e.g., NOAA WP-3D - SENEX 2013, DC-8 - SEAC4RS), and TOLNet lidar data, we will first evaluate the model to determine the capability of GEOS-Chem to simulate the spatio-temporal variability of O3 in the southeast US. Secondly, we will perform model sensitivity studies in order to quantify which emission sources (e.g., anthropogenic, biogenic, lightning, wildfire) and transport processes (e.g., stratospheric, long-range, local-scale) are contributing to these TOLNet-observed dynamic O3 patterns. Results from the evaluation of the model and the study of sources/processes impacting observed O3 mixing ratios will be presented.

  11. Analysis of thermohydraulic explosion energetics

    NASA Astrophysics Data System (ADS)

    Büttner, Ralf; Zimanowski, Bernd; Mohrholz, Chris-Oliver; Kümmel, Reiner

    2005-08-01

    Thermohydraulic explosions, caused by the direct contact of hot liquids with cold water, represent a major danger in volcanism and in technical processes. Based on experimental observations and nonequilibrium thermodynamics, we propose a model of heat transfer from the hot liquid to the water during the thermohydraulic fragmentation process. The model was validated using the experimentally observed thermal energy release. From a database of more than 1000 experimental runs, conducted during the last 20 years, a standardized entrapment experiment was defined, in which a conversion of 1 MJ/kg of thermal energy to kinetic energy within 700 μs is observed. The results of the model calculations are in good agreement with this value. Furthermore, the model was found to be robust with respect to the material properties of the hot melt, which is also observed in experiments using different melt compositions. As the model parameters can be easily obtained from the size and shape properties of the products of thermohydraulic explosions and from the material properties of the hot melt, we believe that this method will not only allow a better analysis of volcanic eruptions and technical accidents but also significantly improve the quality of hazard assessment and mitigation.
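
    A quick back-of-envelope check, using only the figures quoted in the abstract, shows why the reported conversion is so violent: releasing 1 MJ/kg within 700 μs corresponds to a specific power of roughly 1.4 GW per kilogram of melt.

    ```python
    # Back-of-envelope check of the reported energy conversion: 1 MJ/kg of
    # thermal energy converted to kinetic energy within 700 microseconds.
    E_specific = 1.0e6        # J/kg, thermal-to-kinetic conversion per kg
    t_release = 700e-6        # s, observed release time scale
    P_specific = E_specific / t_release
    print(f"specific power ~ {P_specific:.2e} W/kg")   # ~1.4e9 W/kg (~1.4 GW/kg)
    ```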

  12. Computationally modeling interpersonal trust.

    PubMed

    Lee, Jin Joo; Knox, W Bradley; Wormwood, Jolie B; Breazeal, Cynthia; Desteno, David

    2013-01-01

    We present a computational model capable of predicting, with above-human accuracy, the degree of trust a person has toward their novel partner by observing the trust-related nonverbal cues expressed in their social interaction. We summarize our prior work, in which we identify nonverbal cues that signal untrustworthy behavior and also demonstrate the human mind's readiness to interpret those cues to assess the trustworthiness of a social robot. We demonstrate that domain knowledge gained from our prior work using human-subjects experiments, when incorporated into the feature engineering process, permits a computational model to outperform both human predictions and a baseline model built in naiveté of this domain knowledge. We then present the construction of hidden Markov models to investigate temporal relationships among the trust-related nonverbal cues. By interpreting the resulting learned structure, we observe that models built to emulate different levels of trust exhibit different sequences of nonverbal cues. From this observation, we derive sequence-based temporal features that further improve the accuracy of our computational model. The multi-step research process presented in this paper combines the strengths of experimental manipulation and machine learning not only to design a computational trust model but also to further our understanding of the dynamics of interpersonal trust.
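
    A minimal sketch of the HMM-per-trust-level approach the abstract outlines, assuming the hmmlearn library and per-time-step cue feature vectors (both assumptions; the cue encoding here is synthetic): fit one model per trust condition and classify a new interaction by which model assigns it the higher log-likelihood.

    ```python
    # Minimal sketch (assumes hmmlearn; cue features are synthetic).
    import numpy as np
    from hmmlearn import hmm

    def fit_hmm(sequences, n_states=3, seed=0):
        X = np.vstack(sequences)                 # stack (T_i, d) cue arrays
        lengths = [len(s) for s in sequences]
        m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                            n_iter=100, random_state=seed)
        m.fit(X, lengths)
        return m

    rng = np.random.default_rng(0)
    high = [rng.normal(0.0, 1.0, size=(20, 4)) for _ in range(30)]  # toy cues
    low = [rng.normal(0.8, 1.0, size=(20, 4)) for _ in range(30)]
    m_high, m_low = fit_hmm(high), fit_hmm(low)

    x_new = rng.normal(0.8, 1.0, size=(20, 4))
    label = "low trust" if m_low.score(x_new) > m_high.score(x_new) else "high trust"
    print(label)
    ```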

  13. Measurement, modeling, and analysis of nonmethane hydrocarbons and ozone in the southeast United States national parks

    NASA Astrophysics Data System (ADS)

    Kang, Daiwen

    In this research, the sources, distributions, transport, ozone formation potential, and biogenic emissions of VOCs are investigated at three Southeast United States National Parks: Shenandoah National Park at Big Meadows (SHEN), Great Smoky Mountains National Park at Cove Mountain (GRSM), and Mammoth Cave National Park (MACA). A detailed modeling analysis is conducted using the Multiscale Air Quality SImulation Platform (MAQSIP), focusing on nonmethane hydrocarbons and ozone during episodes characterized by high O3 surface concentrations. In the observation-based analysis, source classification techniques based on correlation coefficients, chemical reactivity, and characteristic ratios were developed and applied to the data set. Anthropogenic VOCs from automobile exhaust dominate at Mammoth Cave National Park and at Cove Mountain, Great Smoky Mountains National Park, while at Big Meadows, Shenandoah National Park, the source composition is complex and changed from 1995 to 1996. The dependence of isoprene concentrations on ambient temperature is investigated, and similar regression relationships are obtained for all three monitoring locations. Propylene-equivalent concentrations are calculated to account for differences in reaction rates between OH and individual hydrocarbons, and thereby to estimate their relative contributions to ozone formation. Isoprene fluxes were also estimated for these rural areas. Model predictions (base scenario) tend to give daily maximum O3 concentrations 10 to 30% lower than observations. Model-predicted concentrations of lumped paraffin compounds are of the same order of magnitude as the observed values, while the observed concentrations of other species (isoprene, ethene, surrogate olefin, surrogate toluene, and surrogate xylene) are usually an order of magnitude higher than the predictions. Detailed sensitivity and process analyses, covering nine emission-perturbation scenarios in addition to the base scenario, are designed and utilized in the model simulations, and model predictions are compared with the observed values at the three locations for the same time period. Detailed analyses of ozone and VOC budgets, and of the relative importance of various VOC species, are provided. (Abstract shortened by UMI.)

  14. Constraining land carbon cycle process understanding with observations of atmospheric CO2 variability

    NASA Astrophysics Data System (ADS)

    Collatz, G. J.; Kawa, S. R.; Liu, Y.; Zeng, F.; Ivanoff, A.

    2013-12-01

    We evaluate our understanding of the land biospheric carbon cycle by benchmarking a model and its variants against atmospheric CO2 observations and an atmospheric CO2 inversion. Though the seasonal cycle in CO2 observations is well simulated by the model (RMSE/standard deviation of observations <0.5 at most sites north of 15°N and <1 for Southern Hemisphere sites), different model setups suggest that the CO2 seasonal cycle provides some constraint on gross photosynthesis, respiration, and fire fluxes, revealed in the amplitude and phase at northern-latitude sites. CarbonTracker inversions (CT) and the model show similar phasing of the seasonal fluxes, but agreement in the amplitude varies by region. We also evaluate interannual variability (IAV) in the measured atmospheric CO2, which, in contrast to the seasonal cycle, is not well represented by the model. We estimate the contributions of biospheric and fire fluxes and of atmospheric transport variability to explaining observed variability in measured CO2. Comparisons with CT show that modeled IAV has some correspondence to the inversion results north of 40°N, though fluxes match poorly at regional to continental scales. Regional and global fire emissions are strongly correlated with variability observed at northern flask sample sites and in the global atmospheric CO2 growth rate, though in the latter case fire emission anomalies are not large enough to fully account for the observed variability. We discuss the remaining unexplained variability in CO2 observations in terms of the representation of fluxes by the model. This work also demonstrates the limitations of the current network of CO2 observations and the potential of new, denser surface measurements and space-based column measurements for constraining carbon cycle processes in models.
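
    The site-level benchmark quoted in the abstract, RMSE divided by the standard deviation of the observations, is straightforward to compute; a minimal sketch with a toy seasonal cycle (values below 1 indicate model error smaller than the observed variability):

    ```python
    # Sketch of the RMSE/sigma benchmark: model error normalized by the
    # standard deviation of the observed CO2 seasonal cycle.
    import numpy as np

    def rmse_over_sigma(model, obs):
        rmse = np.sqrt(np.mean((np.asarray(model) - np.asarray(obs)) ** 2))
        return rmse / np.std(obs)

    months = np.arange(12)
    obs = 8.0 * np.sin(2 * np.pi * (months - 3) / 12)    # toy cycle, ppm
    mod = 7.0 * np.sin(2 * np.pi * (months - 3.5) / 12)
    print(f"RMSE/sigma = {rmse_over_sigma(mod, obs):.2f}")
    ```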

  15. Evaluation of Aerosol-cloud Interaction in the GISS Model E Using ARM Observations

    NASA Technical Reports Server (NTRS)

    DeBoer, G.; Bauer, S. E.; Toto, T.; Menon, Surabi; Vogelmann, A. M.

    2013-01-01

    Observations from the US Department of Energy's Atmospheric Radiation Measurement (ARM) program are used to evaluate the ability of the NASA GISS ModelE global climate model to reproduce observed interactions between aerosols and clouds. Included in the evaluation are comparisons of basic meteorology and aerosol properties, droplet activation, effective radius parameterizations, and surface-based evaluations of aerosol-cloud interactions (ACI). Differences between the simulated and observed ACI are generally large, but these differences may result partially from the vertical distribution of aerosol in the model rather than from the representation of the physical processes governing the interactions between aerosols and clouds. Compared to the observations, ModelE often features elevated droplet concentrations for a given aerosol concentration, indicating that the activation parameterizations used may be too aggressive. Additionally, parameterizations for effective radius commonly used in models were tested against ARM observations, and no parameterization was clearly superior for the cases reviewed here. This lack of consensus is demonstrated to result in potentially large, statistically significant differences in surface radiative budgets, should one parameterization be chosen over another.
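
    The abstract does not give its exact ACI formulation; one standard surface-based metric is the log-log slope of droplet number concentration against an aerosol proxy, ACI = d ln(Nd) / d ln(alpha), sketched here on synthetic data:

    ```python
    # Illustrative ACI metric: log-log regression slope of droplet number
    # against an aerosol proxy (one common formulation, not necessarily the
    # one used in the study).
    import numpy as np

    rng = np.random.default_rng(1)
    alpha = rng.lognormal(mean=0.0, sigma=0.5, size=200)        # aerosol proxy
    n_d = 80.0 * alpha ** 0.6 * rng.lognormal(0.0, 0.1, 200)    # droplets, toy

    slope, _ = np.polyfit(np.log(alpha), np.log(n_d), 1)
    print(f"ACI ~ {slope:.2f}")   # recovers ~0.6 for the synthetic data
    ```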

  16. Pluto's Paleoglaciation: Processes and Bounds

    NASA Astrophysics Data System (ADS)

    Umurhan, Orkan; Howard, Alan D.; White, Oliver L.; Moore, Jeffrey M.; Grundy, William M.; Schenk, Paul M.; Beyer, Ross A.; McKinnon, William B.; Singer, Kelsi N.; Lauer, Tod R.; Cheng, Andrew F.; Stern, S. Alan; Weaver, Harold A.; Young, Leslie; Ennico, Kimberly; Olkin, Catherine; New Horizons Science Team

    2017-10-01

    New Horizons imaging of Pluto’s surface shows eroded landscapes reminiscent of assorted glaciated terrains found on the Earth, such as alpine valleys and dendritic networks. For example, LORRI imaging of fluted craters shows radially oriented ridging that also resembles Pluto’s washboard terrain. Digital elevation modeling indicates that these down-gradient-oriented ridges are spaced about 3-4 km apart, with depths ranging from 0.2-0.5 km. Present-day glaciation on Pluto is characterized by moving N2 ice blocks presumably riding over a H2O ice bedrock substrate. Assuming Pluto’s ancient surface was sculpted by N2 glaciation, what remains a mystery is the specific nature of the glacial erosion mechanism(s) responsible for the observed features. To better resolve this puzzle, we perform landform evolution modeling of several glacial erosion processes known from terrestrial H2O ice glaciation studies. These terrestrial processes, which depend upon whether the glacier’s base is wet or dry, include quarrying/plucking and fluvial erosion. We also consider new erosional processes (to be described in this presentation) that are unique to the highly insulating character of solid N2, including both phase-change-induced hydrofracture and geothermally driven basal melt. Until improvements in our knowledge of solid N2’s rheology become available (including its mechanical behavior as a binary/ternary mixture with CH4 and CO), it is difficult to assess with high precision which of the aforementioned erosion mechanisms are responsible for the observed surface etchings. Nevertheless, we consider a model crater surface and examine its erosional development due to flowing N2 glacial ice, built up over time according to N2 deposition rates based on GCM modeling of Pluto’s ancient atmosphere. For a given erosional mechanism, our aim is to determine the permissible ranges of model input parameters (e.g., ice strength, flow rates, grain sizes, quarrying rates, etc.) that best reproduce the observed length scales found on the observed fluted craters. As of the writing of this abstract, the processes of quarrying and phase-change-induced hydrofracture appear most promising at explaining the fluted crater ridging.

  17. Pluto's Paleoglaciation: Processes and Bounds.

    NASA Astrophysics Data System (ADS)

    Umurhan, O. M.; Howard, A. D.; White, O. L.; Moore, J. M.; Grundy, W. M.; Schenk, P.; Beyer, R. A.; McKinnon, W. B.; Singer, K. N.; Lauer, T.; Cheng, A. F.; Stern, A.; Weaver, H. A., Jr.; Young, L. A.; Ennico Smith, K.; Olkin, C.

    2017-12-01

    New Horizons imaging of Pluto's surface shows eroded landscapes reminiscent of assorted glaciated terrains found on the Earth, such as alpine valleys and dendritic networks. For example, LORRI imaging of fluted craters shows radially oriented ridging that also resembles Pluto's washboard terrain. Digital elevation modeling indicates that these down-gradient-oriented ridges are spaced about 3-4 km apart, with depths ranging from 0.2-0.5 km. Present-day glaciation on Pluto is characterized by moving N2 ice blocks presumably riding over a H2O ice bedrock substrate. Assuming Pluto's ancient surface was sculpted by N2 glaciation, what remains a mystery is the specific nature of the glacial erosion mechanism(s) responsible for the observed features. To better resolve this puzzle, we perform landform evolution modeling of several glacial erosion processes known from terrestrial H2O ice glaciation studies. These terrestrial processes, which depend upon whether the glacier's base is wet or dry, include quarrying/plucking and fluvial erosion. We also consider new erosional processes (to be described in this presentation) that are unique to the highly insulating character of solid N2, including both phase-change-induced hydrofracture and geothermally driven basal melt. Until improvements in our knowledge of solid N2's rheology become available (including its mechanical behavior as a binary/ternary mixture with CH4 and CO), it is difficult to assess with high precision which of the aforementioned erosion mechanisms are responsible for the observed surface etchings. Nevertheless, we consider a model crater surface and examine its erosional development due to flowing N2 glacial ice, built up over time according to N2 deposition rates based on GCM modeling of Pluto's ancient atmosphere. For a given erosional mechanism, our aim is to determine the permissible ranges of model input parameters (e.g., ice strength, flow rates, grain sizes, quarrying rates, etc.) that best reproduce the observed length scales found on the observed fluted craters. As of the writing of this abstract, the processes of quarrying and phase-change-induced hydrofracture appear most promising at explaining the fluted crater ridging.
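
    As a toy illustration of the landform-evolution approach (not the authors' model; the slope-driven flux closure and every constant are placeholders), a 1-D crater profile can be relaxed by a mass-conserving erosion law dh/dt = -dq/dx:

    ```python
    # Toy landform-evolution step: topography h(x) evolves under a
    # slope-driven (diffusive) ice flux. Constants are placeholders.
    import numpy as np

    def evolve(h, dx=100.0, dt=1.0e3, K=1.0, steps=500):
        h = h.copy()
        for _ in range(steps):
            slope = np.gradient(h, dx)
            q = -K * slope                   # slope-driven flux (toy closure)
            h -= dt * np.gradient(q, dx)     # mass conservation: dh/dt = -dq/dx
        return h

    x = np.linspace(0, 20e3, 201)                        # 20 km profile, m
    crater = -500.0 * np.exp(-((x - 10e3) / 3e3) ** 2)   # 0.5 km deep crater
    print(evolve(crater).min())                          # shallower after erosion
    ```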

  18. Using a spatially-distributed hydrologic biogeochemistry model to study the spatial variation of carbon processes in a Critical Zone Observatory

    NASA Astrophysics Data System (ADS)

    Shi, Y.; Eissenstat, D. M.; Davis, K. J.; He, Y.

    2015-12-01

    Forest carbon processes are affected by soil moisture, soil temperature, and solar radiation. Most current biogeochemical models are 1-D and represent one point in space; therefore they can neither resolve topographically driven hillslope soil moisture patterns nor simulate the nonlinear effects of soil moisture on carbon processes. A spatially distributed biogeochemistry model, Flux-PIHM-BGC, has been developed by coupling the Biome-BGC (BBGC) model with Flux-PIHM, a coupled physically based land surface hydrologic model. Flux-PIHM incorporates a land-surface scheme (adapted from the Noah land surface model) into the Penn State Integrated Hydrologic Model (PIHM). Because PIHM is capable of simulating lateral water flow and deep groundwater, Flux-PIHM is able to represent the link between groundwater and the surface energy balance, as well as the land surface heterogeneities caused by topography. The Flux-PIHM-BGC model was tested at the Susquehanna/Shale Hills Critical Zone Observatory (SSHCZO). The abundant observations at the SSHCZO, including eddy covariance fluxes, soil moisture, groundwater level, sap flux, stream discharge, litterfall, leaf area index, aboveground carbon stock, and soil carbon efflux, provided an ideal test bed for the coupled model. Model results show that when uniform solar radiation is used, vegetation carbon and soil carbon are positively correlated with soil moisture in space, which agrees with the observations within the watershed. When topographically driven solar radiation is used, however, the wetter valley floor becomes radiation limited and produces less vegetation and soil carbon than the drier hillslope, owing to the assumption that canopy height is uniform in the watershed. This contradicts the observations and suggests that a tree height model with a dynamic allocation scheme is needed to reproduce the spatial variation of carbon processes within a watershed.

  19. Process-based, morphodynamic hindcast of decadal deposition patterns in San Pablo Bay, California, 1856-1887

    USGS Publications Warehouse

    van der Wegen, M.; Jaffe, B.E.; Roelvink, J.A.

    2011-01-01

    This study investigates the possibility of hindcasting observed decadal-scale morphologic change in San Pablo Bay, a subembayment of the San Francisco Estuary, California, USA, by means of a 3-D numerical model (Delft3D). The hindcast period, 1856-1887, is characterized by upstream hydraulic mining that resulted in a high sediment input to the estuary. The model includes wind waves, salt water and fresh water interactions, and graded sediment transport, among other processes. Simplified initial conditions and hydrodynamic forcing were necessary because detailed historic descriptions were lacking. Model results show significant skill. The river discharge and sediment concentration have a strong positive influence on deposition volumes. Waves decrease deposition rates and have, together with tidal movement, the greatest effect on sediment distribution within San Pablo Bay. The applied process-based (or reductionist) modeling approach is valuable once reasonable values for model parameters and hydrodynamic forcing are obtained. Sensitivity analysis reveals the dominant forcing of the system and suggests that the model planform plays a dominant role in the morphodynamic development. A detailed physical explanation of the model outcomes is difficult because of the high nonlinearity of the processes. Process formulation refinement, a more detailed description of the forcing, or further model parameter variations may lead to enhanced model performance, albeit to a limited extent. The approach potentially provides a sound basis for prediction of future developments. Parallel use of highly schematized box models and a process-based approach as described in the present work is probably the most valuable method to assess decadal morphodynamic development. Copyright © 2011 by the American Geophysical Union.
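
    The abstract reports "significant skill" without stating the metric; in morphodynamic hindcasting, skill is commonly quantified with a Brier Skill Score against a "no change" baseline (here, the initial 1856 bed), as in this hypothetical sketch:

    ```python
    # Brier Skill Score: BSS = 1 - MSE(model, obs) / MSE(baseline, obs),
    # with the "no change" initial bed as baseline. Values are toy numbers.
    import numpy as np

    def brier_skill_score(predicted, observed, baseline):
        mse_model = np.mean((np.asarray(predicted) - np.asarray(observed)) ** 2)
        mse_base = np.mean((np.asarray(baseline) - np.asarray(observed)) ** 2)
        return 1.0 - mse_model / mse_base

    obs_change = np.array([0.8, 1.2, 0.5, -0.2])    # observed deposition, m
    model_change = np.array([0.6, 1.0, 0.7, 0.0])   # modeled deposition, m
    no_change = np.zeros_like(obs_change)           # baseline: initial bed
    print(f"BSS = {brier_skill_score(model_change, obs_change, no_change):.2f}")
    ```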

  20. Internal audit in a microbiology laboratory.

    PubMed Central

    Mifsud, A J; Shafi, M S

    1995-01-01

    AIM: To set up a programme of internal laboratory audit in a medical microbiology laboratory. METHODS: A model of laboratory based process audit is described. Laboratory activities were examined in turn by specimen type. Standards were set using laboratory standard operating procedures; practice was observed using a purpose designed questionnaire and the data were analysed by computer; performance was assessed at laboratory audit meetings; and the audit circle was closed by re-auditing topics after an interval. RESULTS: Improvements in performance scores (objective measures) and in staff morale (subjective impression) were observed. CONCLUSIONS: This model of process audit could be applied, with amendments to take local practice into account, in any microbiology laboratory. PMID:7665701

  1. The interaction of host genetics and disease processes in chronic livestock disease: a simulation model of ovine footrot.

    PubMed

    Russell, V N L; Green, L E; Bishop, S C; Medley, G F

    2013-03-01

    A stochastic, individual-based, simulation model of footrot in a flock of 200 ewes was developed that included flock demography, disease processes, host genetic variation for traits influencing infection and disease processes, and bacterial contamination of the environment. Sensitivity analyses were performed using ANOVA to examine the contribution of unknown parameters to outcome variation. The infection rate and bacterial death rate were the most significant factors determining the observed prevalence of footrot, as well as the heritability of resistance. The dominance of infection parameters in determining outcomes implies that observational data cannot be used to accurately estimate the strength of genetic control of underlying traits describing the infection process, i.e. resistance. Further work will allow us to address the potential for genetic selection to control ovine footrot. Copyright © 2012 Elsevier B.V. All rights reserved.
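
    A minimal sketch of the model class described (stochastic, individual-based, with an environmental bacterial pool); every rate below is a placeholder, not a parameter of the published model:

    ```python
    # Toy stochastic flock model: environment-driven infection, recovery,
    # and bacterial shedding/die-off. All rates are placeholders.
    import numpy as np

    rng = np.random.default_rng(42)
    n_ewes, days = 200, 365
    beta, recovery, shed, decay = 0.002, 0.05, 1.0, 0.3   # toy daily rates

    infected = np.zeros(n_ewes, dtype=bool)
    infected[:5] = True                       # seed infections
    env = 0.0                                 # environmental contamination

    prevalence = []
    for _ in range(days):
        p_inf = 1.0 - np.exp(-beta * env)                 # env-driven infection
        new_inf = ~infected & (rng.random(n_ewes) < p_inf)
        recov = infected & (rng.random(n_ewes) < recovery)
        infected = (infected | new_inf) & ~recov
        env = env * np.exp(-decay) + shed * infected.sum()  # shedding vs. die-off
        prevalence.append(infected.mean())

    print(f"final prevalence = {prevalence[-1]:.2f}")
    ```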

  2. Application of State Analysis and Goal-based Operations to a MER Mission Scenario

    NASA Technical Reports Server (NTRS)

    Morris, John Richard; Ingham, Michel D.; Mishkin, Andrew H.; Rasmussen, Robert D.; Starbird, Thomas W.

    2006-01-01

    State Analysis is a model-based systems engineering methodology employing a rigorous discovery process which articulates operations concepts and operability needs as an integrated part of system design. The process produces requirements on system and software design in the form of explicit models which describe the system behavior in terms of state variables and the relationships among them. By applying State Analysis to an actual MER flight mission scenario, this study addresses the specific real world challenges of complex space operations and explores technologies that can be brought to bear on future missions. The paper first describes the tools currently used on a daily basis for MER operations planning and provides an in-depth description of the planning process, in the context of a Martian day's worth of rover engineering activities, resource modeling, flight rules, science observations, and more. It then describes how State Analysis allows for the specification of a corresponding goal-based sequence that accomplishes the same objectives, with several important additional benefits.

  3. Application of State Analysis and Goal-Based Operations to a MER Mission Scenario

    NASA Technical Reports Server (NTRS)

    Morris, J. Richard; Ingham, Michel D.; Mishkin, Andrew H.; Rasmussen, Robert D.; Starbird, Thomas W.

    2006-01-01

    State Analysis is a model-based systems engineering methodology employing a rigorous discovery process which articulates operations concepts and operability needs as an integrated part of system design. The process produces requirements on system and software design in the form of explicit models which describe the behavior of states and the relationships among them. By applying State Analysis to an actual MER flight mission scenario, this study addresses the specific real world challenges of complex space operations and explores technologies that can be brought to bear on future missions. The paper describes the tools currently used on a daily basis for MER operations planning and provides an in-depth description of the planning process, in the context of a Martian day's worth of rover engineering activities, resource modeling, flight rules, science observations, and more. It then describes how State Analysis allows for the specification of a corresponding goal-based sequence that accomplishes the same objectives, with several important additional benefits.

  4. Systematic ionospheric electron density tilts (SITs) at mid-latitudes and their associated HF bearing errors

    NASA Astrophysics Data System (ADS)

    Tedd, B. L.; Strangeways, H. J.; Jones, T. B.

    1985-11-01

    Systematic ionospheric tilts (SITs) at midlatitudes and the diurnal variation of bearing error for different transmission paths are examined. An explanation of the diurnal variation of bearing error, based on the dependence of ionospheric tilt on solar zenith angle and on plasma transport processes, is presented. The effects of vertical ion drift and of momentum transfer from neutral winds are investigated. During the daytime, transmissions reflect at low heights and photochemical processes control SITs; at night, transmissions reflect at greater heights, where spatial and temporal variations of plasma transport processes influence SITs. An HF ray tracing technique that uses a prediction-based three-dimensional ionospheric model to simulate SIT-induced bearing errors is described; poor correlation with experimental data is observed, and the causes are studied. A second model, based on measured vertical-sounder data, is proposed. The second model is applicable for predicting bearing error over a range of transmission paths and correlates well with experimental data.

  5. Molecular modeling of the microstructure evolution during carbon fiber processing

    NASA Astrophysics Data System (ADS)

    Desai, Saaketh; Li, Chunyu; Shen, Tongtong; Strachan, Alejandro

    2017-12-01

    The rational design of carbon fibers with desired properties requires quantitative relationships between the processing conditions, microstructure, and resulting properties. We developed a molecular model that combines kinetic Monte Carlo and molecular dynamics techniques to predict the microstructure evolution during the processes of carbonization and graphitization of polyacrylonitrile (PAN)-based carbon fibers. The model accurately predicts the cross-sectional microstructure of the fibers with the molecular structure of the stabilized PAN fibers and physics-based chemical reaction rates as the only inputs. The resulting structures exhibit key features observed in electron microscopy studies, such as curved graphitic sheets and hairpin structures. In addition, computed X-ray diffraction patterns are in good agreement with experiments. We predict the transverse moduli of the resulting fibers to be between 1 GPa and 5 GPa, in good agreement with experimental results for high-modulus fibers and slightly lower than those of high-strength fibers. The transverse modulus is governed by sliding between graphitic sheets, and the relatively low value for the predicted microstructures can be attributed to their perfect longitudinal texture. Finally, the simulations provide insight into the relationships between chemical kinetics and the final microstructure; we observe that high reaction rates result in porous structures with lower moduli.
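
    To illustrate the kinetic Monte Carlo side of the workflow, here is a toy Gillespie-style loop of the kind used to advance chemistry between molecular-dynamics relaxations; the single "cyclization" channel and its rate are placeholders, not the PAN carbonization chemistry:

    ```python
    # Toy kinetic Monte Carlo (Gillespie) loop with one reaction channel.
    # The species and rate are placeholders for illustration only.
    import numpy as np

    rng = np.random.default_rng(5)
    counts = {"nitrile": 1000, "ring": 0}       # toy species inventory
    rate_per_group = 1.0e-2                     # 1/s per nitrile group (toy)

    t = 0.0
    while counts["nitrile"] > 0 and t < 600.0:
        total_rate = rate_per_group * counts["nitrile"]
        t += rng.exponential(1.0 / total_rate)  # waiting time to next event
        counts["nitrile"] -= 1                  # execute the chosen event
        counts["ring"] += 1

    print(f"t = {t:.1f} s, rings formed: {counts['ring']}")
    ```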

  6. Modeling biotic uptake by periphyton and transient hyporrheic storage of nitrate in a natural stream

    USGS Publications Warehouse

    Kim, Brian K.A.; Jackman, Alan P.; Triska, Frank J.

    1992-01-01

    To a convection-dispersion hydrologic transport model we coupled a transient storage submodel (Bencala, 1984) and a biotic uptake submodel based on Michaelis-Menten kinetics (Kim et al., 1990). Our purpose was threefold: (1) to simulate nitrate retention in response to change in load in a third-order stream, (2) to differentiate biotic versus hydrologic factors in nitrate retention, and (3) to produce a research tool whose properties are consistent with laboratory and field observations. Hydrodynamic parameters were fitted from chloride concentration during a 20-day chloride-nitrate coinjection (Bencala, 1984), and biotic uptake kinetics were based on flume studies by Kim et al. (1990) and Triska et al. (1983). Nitrate concentration from the 20-day coinjection experiment served as a base for model validation. The complete transport retention model reasonably predicted the observed nitrate concentration. However, simulations which lacked either the transient storage submodel or the biotic uptake submodel poorly predicted the observed nitrate concentration. Model simulations indicated that transient storage in channel and hyporrheic interstices dominated nitrate retention within the first 24 hours, whereas biotic uptake dominated thereafter. A sawtooth function for Vmax ranging from 0.10 to 0.17 μg NO3-N s−1 gAFDM−1 (grams ash-free dry mass) slightly underpredicted nitrate retention in simulations of 2-7 days. This result was reasonable since uptake by other nitrate-demanding processes was not included. The model demonstrated how ecosystem retention is an interaction between physical and biotic processes, and it supports the validity of coupling separate hydrodynamic and reactive submodels to established solute transport models in biological studies of fluvial ecosystems.
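
    A zero-dimensional (well-mixed reach) reduction of the coupled structure described, channel/storage exchange plus Michaelis-Menten uptake, can be sketched as two ODEs; all parameter values below are placeholders, not those of the calibrated model:

    ```python
    # Well-mixed-reach reduction: transient storage exchange plus
    # Michaelis-Menten uptake. Parameter values are placeholders.
    from scipy.integrate import solve_ivp

    alpha = 0.5      # 1/h, channel <-> storage exchange rate (toy)
    vmax = 2.0       # ug N L^-1 h^-1, maximum volumetric uptake (toy)
    km = 50.0        # ug N L^-1, half-saturation constant (toy)

    def rhs(t, y):
        c, cs = y                              # channel and storage NO3-N
        uptake = vmax * c / (km + c)           # Michaelis-Menten kinetics
        dc = -alpha * (c - cs) - uptake
        dcs = alpha * (c - cs)                 # storage zone, no uptake here
        return [dc, dcs]

    sol = solve_ivp(rhs, (0.0, 48.0), [100.0, 0.0])
    print(f"channel NO3-N after 48 h: {sol.y[0, -1]:.1f} ug/L")
    ```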

  7. Gaussian process inference for estimating pharmacokinetic parameters of dynamic contrast-enhanced MR images.

    PubMed

    Wang, Shijun; Liu, Peter; Turkbey, Baris; Choyke, Peter; Pinto, Peter; Summers, Ronald M

    2012-01-01

    In this paper, we propose a new pharmacokinetic model for parameter estimation of dynamic contrast-enhanced (DCE) MRI by using Gaussian process inference. Our model is based on the Tofts dual-compartment model for the description of tracer kinetics and the observed time series from DCE-MRI is treated as a Gaussian stochastic process. The parameter estimation is done through a maximum likelihood approach and we propose a variant of the coordinate descent method to solve this likelihood maximization problem. The new model was shown to outperform a baseline method on simulated data. Parametric maps generated on prostate DCE data with the new model also provided better enhancement of tumors, lower intensity on false positives, and better boundary delineation when compared with the baseline method. New statistical parameter maps from the process model were also found to be informative, particularly when paired with the PK parameter maps.
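
    The Tofts dual-compartment forward model named in the abstract is, in its standard form, Ct(t) = Ktrans ∫ Cp(τ) exp(-kep (t - τ)) dτ; a sketch by discrete convolution, with a toy biexponential arterial input function:

    ```python
    # Standard Tofts forward model evaluated by discrete convolution.
    # The arterial input function (Cp) below is a toy biexponential.
    import numpy as np

    def tofts(t, cp, ktrans, kep):
        dt = t[1] - t[0]
        kernel = np.exp(-kep * t)
        return ktrans * np.convolve(cp, kernel)[: len(t)] * dt

    t = np.arange(0, 300, 1.0)                           # s
    cp = 5.0 * (np.exp(-t / 60.0) - np.exp(-t / 10.0))   # toy AIF, mM
    ct = tofts(t, cp, ktrans=0.25 / 60, kep=0.5 / 60)    # per-second rates
    print(f"peak tissue concentration: {ct.max():.3f} mM")
    ```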

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Donner, Leo J.; O'Brien, Travis A.; Rieger, Daniel

    Both climate forcing and climate sensitivity persist as stubborn uncertainties limiting the extent to which climate models can provide actionable scientific scenarios for climate change. A key, explicit control on cloud-aerosol interactions, the largest uncertainty in climate forcing, is the vertical velocity of cloud-scale updrafts. Model-based studies of climate sensitivity indicate that convective entrainment, which is closely related to updraft speeds, is an important control on climate sensitivity. Updraft vertical velocities also drive many physical processes essential to numerical weather prediction. Vertical velocities and their role in atmospheric physical processes have been given very limited attention in models for climate and numerical weather prediction. The relevant physical scales range down to tens of meters and are thus frequently sub-grid and require parameterization. Many state-of-science convection parameterizations provide mass fluxes without specifying vertical velocities, and parameterizations which do provide vertical velocities have been subject to limited evaluation against what have until recently been scant observations. Atmospheric observations imply that the distribution of vertical velocities depends on the areas over which the vertical velocities are averaged. Distributions of vertical velocities in climate models may capture this behavior, but it has not been accounted for when parameterizing cloud and precipitation processes in current models. New observations of convective vertical velocities offer a potentially promising path toward developing process-level cloud models and parameterizations for climate and numerical weather prediction. Taking account of scale-dependence of resolved vertical velocities offers a path to matching cloud-scale physical processes and their driving dynamics more realistically, with a prospect of reduced uncertainty in both climate forcing and sensitivity.

  9. An interdisciplinary approach for earthquake modelling and forecasting

    NASA Astrophysics Data System (ADS)

    Han, P.; Zhuang, J.; Hattori, K.; Ogata, Y.

    2016-12-01

    Earthquakes are among the most serious disasters and may cause heavy casualties and economic losses. In the past two decades especially, huge/mega earthquakes have hit many countries. Effective earthquake forecasting (of time, location, and magnitude) has become extremely important and urgent. To date, various heuristically derived algorithms have been developed for forecasting earthquakes. Generally, they can be classified into two types: catalog-based approaches and non-catalog-based approaches. Thanks to the rapid development of statistical seismology in the past 30 years, we are now able to evaluate the performance of these earthquake forecast approaches quantitatively. Although a certain amount of precursory information is available in both earthquake catalogs and non-catalog observations, earthquake forecasting is still far from satisfactory. In most cases, precursory phenomena have been studied individually. An earthquake model that combines self-exciting and mutually exciting elements was developed by Ogata and Utsu from the Hawkes process. The core idea of this combined model is that the status of the process at present is controlled by the past events themselves (self-exciting) and by all the external factors (mutually exciting) in the past. In essence, the conditional intensity function is a time-varying Poisson process with rate λ(t), composed of the background rate, the self-exciting term (the information from past seismic events), and the external excitation term (the information from past non-seismic observations). This model shows a way to integrate catalog-based and non-catalog-based forecasts. Against this background, we are developing a new earthquake forecast model which combines catalog-based and non-catalog-based approaches.
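
    The conditional intensity described, a background rate plus self-excitation from past earthquakes plus external excitation from past non-seismic anomalies, can be sketched directly; the exponential kernels and all constants below are purely illustrative:

    ```python
    # Illustrative conditional intensity:
    # lambda(t) = mu + sum_i a*exp(-b*(t - t_i))   (self-excitation)
    #           + sum_j c*exp(-d*(t - s_j))        (external excitation)
    import numpy as np

    def intensity(t, quakes, anomalies, mu=0.1, a=0.5, b=1.0, c=0.3, d=0.5):
        quakes = np.asarray(quakes)
        anomalies = np.asarray(anomalies)
        self_term = np.sum(a * np.exp(-b * (t - quakes[quakes < t])))
        ext_term = np.sum(c * np.exp(-d * (t - anomalies[anomalies < t])))
        return mu + self_term + ext_term

    print(intensity(10.0, quakes=[2.0, 8.5], anomalies=[9.0]))
    ```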

  10. A Microstructure-Based Constitutive Model for Superplastic Forming

    NASA Astrophysics Data System (ADS)

    Jafari Nedoushan, Reza; Farzin, Mahmoud; Mashayekhi, Mohammad; Banabic, Dorel

    2012-11-01

    A constitutive model is proposed for simulations of hot metal forming processes. This model is constructed based on the dominant mechanisms that take part in hot forming and includes intragranular deformation, grain boundary sliding, and grain boundary diffusion. A Taylor-type polycrystalline model is used to predict intragranular deformation. Previous works on grain boundary sliding and grain boundary diffusion are extended to derive three-dimensional macroscopic stress-strain rate relationships for each mechanism. In these relationships, the effect of grain size is also taken into account. The proposed model is first used to simulate step strain-rate tests, and the results are compared with experimental data. It is shown that the model can be used to predict flow stresses for various grain sizes and strain rates. The yield locus is then predicted for multiaxial stress states, and it is observed to be very close to the von Mises yield criterion. It is also shown that the proposed model can be directly used to simulate hot forming processes. The bulge forming process and gas pressure tray forming are simulated, and the results are compared with experimental data.
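
    A composite flow law in the spirit of the abstract sums mechanism contributions, each with its own stress exponent n and grain-size exponent p (strain rate = A σ^n / d^p); the constants below are placeholders, not the calibrated values of the model:

    ```python
    # Toy composite flow law: total strain rate as a sum of mechanism
    # contributions, each A * sigma^n / d^p. Constants are placeholders.
    def strain_rate(sigma, d, mechanisms):
        return sum(A * sigma ** n / d ** p for (A, n, p) in mechanisms)

    mechanisms = [
        (1.0e-29, 2.0, 2.0),   # grain boundary sliding (toy A, n, p)
        (1.0e-27, 1.0, 3.0),   # grain boundary diffusion (toy)
        (1.0e-42, 5.0, 0.0),   # intragranular dislocation creep (toy)
    ]
    for d in (5e-6, 20e-6):    # grain sizes, m
        print(f"d = {d*1e6:.0f} um -> {strain_rate(20e6, d, mechanisms):.2e} 1/s")
    ```

    The toy numbers reproduce the qualitative behavior the model targets: refining the grain size strongly accelerates the boundary-mediated mechanisms while leaving dislocation creep unchanged.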

  11. Combining Model-Based and Feature-Driven Diagnosis Approaches - A Case Study on Electromechanical Actuators

    NASA Technical Reports Server (NTRS)

    Narasimhan, Sriram; Roychoudhury, Indranil; Balaban, Edward; Saxena, Abhinav

    2010-01-01

    Model-based diagnosis typically uses analytical redundancy to compare predictions from a model against observations from the system being diagnosed. However this approach does not work very well when it is not feasible to create analytic relations describing all the observed data, e.g., for vibration data which is usually sampled at very high rates and requires very detailed finite element models to describe its behavior. In such cases, features (in time and frequency domains) that contain diagnostic information are extracted from the data. Since this is a computationally intensive process, it is not efficient to extract all the features all the time. In this paper we present an approach that combines the analytic model-based and feature-driven diagnosis approaches. The analytic approach is used to reduce the set of possible faults and then features are chosen to best distinguish among the remaining faults. We describe an implementation of this approach on the Flyable Electro-mechanical Actuator (FLEA) test bed.

  12. Wound care clinical pathway: a conceptual model.

    PubMed

    Barr, J E; Cuzzell, J

    1996-08-01

    A clinical pathway is a written sequence of clinical processes or events that guides a patient with a defined problem toward an expected outcome. Clinical pathways are tools to assist with the cost-effective management of clinical outcomes related to specific problems or disease processes. The primary obstacles to developing clinical pathways for wound care are the chronic nature of some wounds and the many variables that can delay healing. The pathway introduced in this article was modeled upon the three phases of tissue repair: inflammatory, proliferative, and maturation. This physiology-based model allows clinicians to identify and monitor outcomes based on observable and measurable clinical parameters. The pathway design, which also includes educational and behavioral outcomes, allows the clinician to individualize the expected timeframe for outcome achievement based on individual patient criteria and expert judgement. Integral to the pathway are the "4 P's", which help standardize the clinical processes by wound type: Protocols, Policies, Procedures, and Patient education tools. Variances are grouped into four categories based on the cause of the deviation from the norm: patient, process/system, practitioner, and planning/discharge. Additional research is warranted to support the value of this clinical pathway in the clinical arena.

  13. Systematic errors in Monsoon simulation: importance of the equatorial Indian Ocean processes

    NASA Astrophysics Data System (ADS)

    Annamalai, H.; Taguchi, B.; McCreary, J. P., Jr.; Nagura, M.; Miyama, T.

    2015-12-01

    In climate models, simulating the monsoon precipitation climatology remains a grand challenge. Compared to CMIP3, the multi-model-mean (MMM) errors for the Asian-Australian monsoon (AAM) precipitation climatology in CMIP5, relative to GPCP observations, have shown little improvement. One implication is that uncertainties in future projections of time-mean changes to AAM rainfall may not have been reduced from CMIP3 to CMIP5. Despite dedicated efforts by the modeling community, progress in monsoon modeling is rather slow. This leads us to wonder: has the scientific community reached a "plateau" in modeling mean monsoon precipitation? Our focus here is to better understand the coupled air-sea interactions and moist processes that govern the precipitation characteristics over the tropical Indian Ocean, where large-scale errors persist. A series of idealized coupled model experiments is performed to test the hypothesis that errors in the coupled processes along the equatorial Indian Ocean during inter-monsoon seasons could potentially influence systematic errors during the monsoon season. Moist static energy budget diagnostics have been performed to identify the leading moist and radiative processes that account for the large-scale errors in the simulated precipitation. As a way forward, we propose three coordinated efforts: (i) idealized coupled model experiments; (ii) process-based diagnostics; and (iii) direct observations to constrain model physics. We will argue that a systematic and coordinated approach to identifying the various interactive processes that shape the precipitation basic state needs to be carried out, and that high-quality observations over the data-sparse monsoon region are needed to validate models and further improve model physics.

  14. Determining fundamental properties of matter created in ultrarelativistic heavy-ion collisions

    NASA Astrophysics Data System (ADS)

    Novak, J.; Novak, K.; Pratt, S.; Vredevoogd, J.; Coleman-Smith, C. E.; Wolpert, R. L.

    2014-03-01

    Posterior distributions for physical parameters describing relativistic heavy-ion collisions, such as the viscosity of the quark-gluon plasma, are extracted through a comparison of hydrodynamic-based transport models to experimental results from 100A GeV + 100A GeV Au+Au collisions at the Relativistic Heavy Ion Collider. By simultaneously varying six parameters and by evaluating several classes of observables, we are able to explore the complex intertwined dependencies of observables on model parameters. The methods provide a full multidimensional posterior distribution for the model output, including a range of acceptable values for each parameter, and reveal correlations between them. The breadth of observables and the number of parameters considered here go beyond previous studies in this field. The statistical tools, which are based upon Gaussian process emulators, are tested in detail and should be extendable to larger data sets and a higher number of parameters.
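
    A minimal Gaussian-process emulator in the spirit described, trained on a small design of model runs (parameters to observable) and queried with uncertainty at untried settings; this uses scikit-learn rather than the statistical machinery of the study, and the toy observable is synthetic:

    ```python
    # Minimal GP emulator sketch: learn the map from model parameters to an
    # observable from a small design of runs, then predict with uncertainty.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    rng = np.random.default_rng(0)
    X = rng.uniform(0.0, 1.0, size=(40, 6))        # 40 runs, 6 parameters
    y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] ** 2   # toy observable

    gp = GaussianProcessRegressor(
        kernel=ConstantKernel() * RBF(length_scale=np.ones(6)),
        normalize_y=True)
    gp.fit(X, y)

    x_new = rng.uniform(0.0, 1.0, size=(1, 6))
    mean, std = gp.predict(x_new, return_std=True)
    print(f"emulated observable: {mean[0]:.3f} +/- {std[0]:.3f}")
    ```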

  15. Back to the Future: Consistency-Based Trajectory Tracking

    NASA Technical Reports Server (NTRS)

    Kurien, James; Nayak, P. Pandurand; Norvig, Peter (Technical Monitor)

    2000-01-01

    Given a model of a physical process and a sequence of commands and observations received over time, the task of an autonomous controller is to determine the likely states of the process and the actions required to move the process to a desired configuration. We introduce a representation and algorithms for incrementally generating approximate belief states for a restricted but relevant class of partially observable Markov decision processes with very large state spaces. The algorithm presented incrementally generates, rather than revises, an approximate belief state at any point by abstracting and summarizing segments of the likely trajectories of the process. This enables applications to efficiently maintain a partial belief state when it remains consistent with observations and revisit past assumptions about the process' evolution when the belief state is ruled out. The system presented has been implemented and results on examples from the domain of spacecraft control are presented.

  16. Metallicity Distribution of Disk Stars and the Formation History of the Milky Way

    NASA Astrophysics Data System (ADS)

    Toyouchi, Daisuke; Chiba, Masashi

    2018-03-01

    We investigate the formation history of the stellar disk component in the Milky Way (MW) based on our new chemical evolution model. Our model considers several fundamental baryonic processes, including gas infall, reaccretion of outflowing gas, and radial migration of disk stars. Each of these baryonic processes in the disk evolution is characterized by model parameters that are determined by fitting to various observational data of the stellar disk in the MW, including the radial dependence of the metallicity distribution function (MDF) of the disk stars, which has recently been derived in the APOGEE survey. We succeeded in obtaining the best set of model parameters that well reproduces the observed radial dependences of the mean, standard deviation, skewness, and kurtosis of the MDFs for the disk stars. We analyze the basic properties of our model results in detail to gain new insights into the important baryonic processes in the formation history of the MW. One of the remarkable findings is that outflowing gas, containing many heavy elements, preferentially reaccretes onto the outer disk parts, and this recycling process of metal-enriched gas is a key ingredient for reproducing the observed narrower MDFs at larger radii. Moreover, important implications for the radial dependence of gas infall and the influence of radial migration on the MDFs are also inferred from our model calculation. Thus, the MDF of disk stars is a useful clue for studying the formation history of the MW.
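
    The four MDF statistics the model is fitted to are ordinary moments of [Fe/H] in radial bins; a sketch of their computation (the sample below is synthetic):

    ```python
    # Compute the four MDF statistics named in the abstract for a toy
    # [Fe/H] sample.
    import numpy as np
    from scipy.stats import skew, kurtosis

    rng = np.random.default_rng(3)
    feh = rng.normal(loc=-0.1, scale=0.25, size=5000)   # toy [Fe/H], dex

    stats = {
        "mean": np.mean(feh),
        "std": np.std(feh),
        "skewness": skew(feh),
        "kurtosis": kurtosis(feh),   # excess kurtosis (0 for a Gaussian)
    }
    print({k: round(v, 3) for k, v in stats.items()})
    ```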

  17. Nucleosynthesis Predictions for Intermediate-Mass AGB Stars: Comparison to Observations of Type I Planetary Nebulae

    NASA Technical Reports Server (NTRS)

    Karakas, Amanda I.; vanRaai, Mark A.; Lugaro, Maria; Sterling, Nicholas C.; Dinerstein, Harriet L.

    2008-01-01

    Type I planetary nebulae (PNe) have high He/H and N/O ratios and are thought to be descendants of stars with initial masses of approximately 3-8 M_sun. These characteristics indicate that the progenitor stars experienced proton-capture nucleosynthesis at the base of the convective envelope, in addition to the slow neutron capture process operating in the He-shell (the s-process). We compare the predicted abundances of elements up to Sr from models of intermediate-mass asymptotic giant branch (AGB) stars to measured abundances in Type I PNe. In particular, we compare predictions and observations for the light trans-iron elements Se and Kr, in order to constrain convective mixing and the s-process in these stars. A partial mixing zone is included in selected models to explore the effect of a 13C pocket on the s-process yields. The solar-metallicity models produce enrichments of [(Se, Kr)/Fe] ≲ 0.6, consistent with Galactic Type I PNe, where the observed enhancements are typically ≲ 0.3 dex, while lower metallicity models predict larger enrichments of C, N, Se, and Kr. O destruction occurs in the most massive models but is not efficient enough to account for the ≳ 0.3 dex O depletions observed in some Type I PNe. It is not possible to reach firm conclusions regarding the neutron source operating in massive AGB stars from Se and Kr abundances in Type I PNe; abundances for more s-process elements may help to distinguish between the two neutron sources. We predict that only the most massive (M ≳ 5 M_sun) models would evolve into Type I PNe, indicating that extra-mixing processes are active in lower-mass stars (3-4 M_sun) if these stars are to evolve into Type I PNe.

  18. Nucleosynthesis Predictions for Intermediate-Mass Asymptotic Giant Branch Stars: Comparison to Observations of Type I Planetary Nebulae

    NASA Astrophysics Data System (ADS)

    Karakas, Amanda I.; van Raai, Mark A.; Lugaro, Maria; Sterling, N. C.; Dinerstein, Harriet L.

    2009-01-01

    Type I planetary nebulae (PNe) have high He/H and N/O ratios and are thought to be descendants of stars with initial masses of ~3-8 M_sun. These characteristics indicate that the progenitor stars experienced proton-capture nucleosynthesis at the base of the convective envelope, in addition to the slow neutron capture process operating in the He-shell (the s-process). We compare the predicted abundances of elements up to Sr from models of intermediate-mass asymptotic giant branch (AGB) stars to measured abundances in Type I PNe. In particular, we compare predictions and observations for the light trans-iron elements Se and Kr, in order to constrain convective mixing and the s-process in these stars. A partial mixing zone is included in selected models to explore the effect of a 13C pocket on the s-process yields. The solar-metallicity models produce enrichments of [(Se, Kr)/Fe] ≲ 0.6, consistent with Galactic Type I PNe, where the observed enhancements are typically ≲ 0.3 dex, while lower metallicity models predict larger enrichments of C, N, Se, and Kr. O destruction occurs in the most massive models but is not efficient enough to account for the ≳ 0.3 dex O depletions observed in some Type I PNe. It is not possible to reach firm conclusions regarding the neutron source operating in massive AGB stars from Se and Kr abundances in Type I PNe; abundances for more s-process elements may help to distinguish between the two neutron sources. We predict that only the most massive (M ≳ 5 M_sun) models would evolve into Type I PNe, indicating that extra-mixing processes are active in lower-mass stars (3-4 M_sun) if these stars are to evolve into Type I PNe. This paper includes data taken at The McDonald Observatory of The University of Texas at Austin.

  19. The Impact of Secondary School Students' Preconceptions on the Evolution of Their Mental Models of the Greenhouse Effect and Global Warming

    ERIC Educational Resources Information Center

    Reinfried, Sibylle; Tempelmann, Sebastian

    2014-01-01

    This paper provides a video-based learning process study that investigates the kinds of mental models of the atmospheric greenhouse effect 13-year-old learners have and how these mental models change with a learning environment, which is optimised in regard to instructional psychology. The objective of this explorative study was to observe and…

  20. Comparative assessment of several post-processing methods for correcting evapotranspiration forecasts derived from TIGGE datasets.

    NASA Astrophysics Data System (ADS)

    Tian, D.; Medina, H.

    2017-12-01

    Post-processing of medium-range reference evapotranspiration (ETo) forecasts based on numerical weather prediction (NWP) models has the potential to improve the quality and utility of these forecasts. This work compares the performance of several post-processing methods for correcting ETo forecasts over the continental U.S. generated from The Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble (TIGGE) database, using data from Europe (EC), the United Kingdom (MO), and the United States (NCEP). The post-processing techniques considered are: simple bias correction, the use of multimodels, Ensemble Model Output Statistics (EMOS; Gneiting et al., 2005), and Bayesian Model Averaging (BMA; Raftery et al., 2005). ETo estimates based on quality-controlled U.S. Regional Climate Reference Network measurements, computed with the FAO 56 Penman-Monteith equation, are adopted as the baseline. EMOS and BMA are generally the most efficient post-processing techniques for the ETo forecasts. Nevertheless, simple bias correction of the best model is commonly much more rewarding than using multimodel raw forecasts. Our results demonstrate the potential of different forecasting and post-processing frameworks in operational evapotranspiration and irrigation advisory systems at the national scale.
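
    A sketch of the two simplest frameworks compared: additive bias correction, and an EMOS-style linear correction of the ensemble mean fitted by least squares (full EMOS also models the predictive variance; the data below are synthetic):

    ```python
    # Toy comparison of (i) additive bias removal and (ii) an EMOS-style
    # linear correction of the ensemble mean (mean equation only).
    import numpy as np

    rng = np.random.default_rng(7)
    obs = 5.0 + rng.normal(0, 0.8, 200)                       # ETo "obs", mm/day
    ens = obs[:, None] + 1.2 + rng.normal(0, 1.0, (200, 20))  # biased ensemble

    ens_mean = ens.mean(axis=1)
    bias_corrected = ens_mean - np.mean(ens_mean - obs)       # (i) bias removal

    A = np.column_stack([np.ones_like(ens_mean), ens_mean])   # (ii) a + b*mean
    (a, b), *_ = np.linalg.lstsq(A, obs, rcond=None)
    emos_mean = a + b * ens_mean

    for name, pred in [("raw", ens_mean), ("bias-corrected", bias_corrected),
                       ("EMOS mean", emos_mean)]:
        print(name, f"RMSE = {np.sqrt(np.mean((pred - obs) ** 2)):.2f}")
    ```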

  1. Genetic Programming for Automatic Hydrological Modelling

    NASA Astrophysics Data System (ADS)

    Chadalawada, Jayashree; Babovic, Vladan

    2017-04-01

    One of the recent challenges for the hydrologic research community is the need for the development of coupled systems that involve the integration of hydrologic, atmospheric, and socio-economic relationships. This poses a requirement for novel modelling frameworks that can accurately represent complex systems given the limited understanding of underlying processes, increasing volumes of data, and high levels of uncertainty. Each existing hydrological model varies in terms of conceptualization and process representation and is best suited to capture the environmental dynamics of a particular hydrological system. Data-driven approaches can be used to integrate alternative process hypotheses in order to achieve a unified theory at catchment scale. The key steps in the implementation of an integrated modelling framework informed by prior understanding and data include: choice of the technique for the induction of knowledge from data; identification of alternative structural hypotheses; definition of rules and constraints for meaningful, intelligent combination of model component hypotheses; and definition of evaluation metrics. This study aims at defining a Genetic Programming based modelling framework that tests different conceptual model constructs against a wide range of objective functions and evolves accurate and parsimonious models that capture dominant hydrological processes at catchment scale. In this paper, GP initializes the evolutionary process using the modelling decisions inspired by the Superflex framework [Fenicia et al., 2011] and automatically combines them into model structures that are scrutinized against observed data using statistical, hydrological, and flow duration curve based performance metrics. The collaboration between data-driven and physical, conceptual modelling paradigms improves the ability to model and manage hydrologic systems. Fenicia, F., D. Kavetski, and H. H. Savenije (2011), Elements of a flexible approach for conceptual hydrological modeling: 1. Motivation and theoretical development, Water Resources Research, 47(11).

  2. Improving snow density estimation for mapping SWE with Lidar snow depth: assessment of uncertainty in modeled density and field sampling strategies in NASA SnowEx

    NASA Astrophysics Data System (ADS)

    Raleigh, M. S.; Smyth, E.; Small, E. E.

    2017-12-01

    The spatial distribution of snow water equivalent (SWE) is not sufficiently monitored with either remotely sensed or ground-based observations for water resources management. Recent applications of airborne Lidar have yielded basin-wide mapping of SWE when combined with a snow density model. However, in the absence of snow density observations, the uncertainty in these SWE maps is dominated by uncertainty in modeled snow density rather than in Lidar measurement of snow depth. Available observations tend to have a bias in physiographic regime (e.g., flat open areas) and are often insufficient in number to support testing of models across a range of conditions. Thus, there is a need for targeted sampling strategies and controlled model experiments to understand where and why different snow density models diverge. This will enable identification of robust model structures that represent dominant processes controlling snow densification, in support of basin-scale estimation of SWE with remotely-sensed snow depth datasets. The NASA SnowEx mission is a unique opportunity to evaluate sampling strategies of snow density and to quantify and reduce uncertainty in modeled snow density. In this presentation, we present initial field data analyses and modeling results over the Colorado SnowEx domain in the 2016-2017 winter campaign. We detail a framework for spatially mapping the uncertainty in snowpack density, as represented across multiple models. Leveraging the modular SUMMA model, we construct a series of physically-based models to assess systematically the importance of specific process representations to snow density estimates. We will show how models and snow pit observations characterize snow density variations with forest cover in the SnowEx domains. Finally, we will use the spatial maps of density uncertainty to evaluate the selected locations of snow pits, thereby assessing the adequacy of the sampling strategy for targeting uncertainty in modeled snow density.
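
    The underlying arithmetic is SWE = depth × (ρ_snow/ρ_water); with independent errors, the relative SWE uncertainty adds in quadrature, which is why modeled density dominates the error budget when Lidar depth is accurate. A toy propagation with hypothetical numbers:

    ```python
    # SWE from Lidar depth and modeled density, with first-order error
    # propagation. All numbers are hypothetical.
    import numpy as np

    depth, sigma_depth = 1.50, 0.05    # m (Lidar depth and its error)
    rho, sigma_rho = 350.0, 45.0       # kg/m^3 (modeled density and its error)
    rho_water = 1000.0

    swe = depth * rho / rho_water                             # m of water
    rel_err = np.hypot(sigma_depth / depth, sigma_rho / rho)  # quadrature sum
    print(f"SWE = {swe:.3f} m w.e. +/- {swe * rel_err:.3f} "
          f"(depth term {sigma_depth/depth:.1%}, density term {sigma_rho/rho:.1%})")
    ```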

  3. The impact of system of care support in adherence to wraparound principles in Child and Family Teams in child welfare in North Carolina.

    PubMed

    Snyder, Elizabeth H; Lawrence, C Nicole; Dodge, Kenneth A

    2012-04-01

    North Carolina is one of a growing number of states to implement family meeting models in child welfare as a way to engage families, while simultaneously addressing complex familial needs and child safety issues. However, much is still unknown regarding how family meetings actually operate in child welfare, underscoring a clear need for further evaluation of this process. Utilizing direct observational data of Child and Family Team (CFT) meetings, collected as part of two separate evaluations of the North Carolina Division of Social Service's Multiple Response System (MRS) and System of Care (SOC) initiatives, the purpose of the current study was to examine whether the support provided by SOC improved fidelity to the CFT model in child welfare. The observations were conducted using the Team Observation Measure consisting of 78 indicators that measure adherence to ten domains associated with high quality family team meetings (e.g., collaborative, individualized, natural supports, outcomes based, strengths-based). Findings indicate that receiving SOC support in child welfare leads to a more collaborative and individualized decision-making process with families. Meeting facilitators in SOC counties were better prepared for CFTs, and had greater ability to lead a more robust and creative brainstorming process to develop a family-driven case plan. The current study also provides a much needed description of the CFT meeting process within child welfare using a direct observational measure.

  4. Improving Science Process Skills for Primary School Students Through 5E Instructional Model-Based Learning

    NASA Astrophysics Data System (ADS)

    Choirunnisa, N. L.; Prabowo, P.; Suryanti, S.

    2018-01-01

    The main objective of this study is to describe the effectiveness of 5E instructional model-based learning in improving primary school students' science process skills. Science process skills are important for students because they are the foundation for enhancing the mastery of concepts and the thinking skills needed in the 21st century. The study used an experimental one-group pre-test and post-test design. The results show that (1) in both classes, IVA and IVB, the percentage of learning implementation increased, indicating improved quality of learning, and (2) the percentage of students' science process skills test results on the aspects of observing, formulating hypotheses, determining variables, interpreting data and communicating increased as well.

  5. The DRAGON scale concept and results for remote sensing of aerosol properties

    NASA Astrophysics Data System (ADS)

    Holben, B. N.; Eck, T. F.; Schafer, J.; Giles, D. M.; Kim, J.; Sano, I.; Mukai, S.; Kim, Y. J.; Reid, J. S.; Pickering, K. E.; Crawford, J. H.; Smirnov, A.; Sinyuk, A.; Slutsker, I.; Sorokin, M.; Rodriguez, J.; Liew, S.; Trevino, N.; Lim, H.; Lefer, B. L.; Nadkarni, R.; Macke, A.; Kinne, S. A.; Anderson, B. E.; Russell, P. B.; Maring, H. B.; Welton, E. J.; da Silva, A.; Toon, O. B.; Redemann, J.

    2013-12-01

    Aerosol processes occur at microscales but are typically observed and reported at continental to global scales. Observable aerosol processes with significant anthropogenic impact often occur on spatial scales of tens to a few hundred km, representative of convective cloud processing, urban/megacity sources, anthropogenic burning and natural wildfires, dry lakebed dust sources, etc. Historically, remote sensing of aerosols has relied on relatively coarse temporal and spatial resolution satellite observations or high temporal resolution point observations from ground-based monitoring sites in networks such as AERONET, SKYNET, MPLNET and many other surface observation platforms. Airborne remote sensing and in situ observations, combined with assimilation models, were intended to be the mesoscale link between the ground- and space-based remote sensing scales. However, the in situ and ground-based remote sensing characterizations of aerosols clearly require a convergence of thought, parameterization and measurement scales in order to advance this goal. This has been served by periodic multidisciplinary field campaigns, yet only recently has a concerted effort been made to establish ground-based networks that capture mesoscale processes, through measurement programs such as DISCOVER-AQ and NASA AERONET's Distributed Regional Aerosol Gridded Observation Networks (DRAGON), short-term meso-networks with partners in Asia, Europe and North America. This talk will review the historical need for such networks and discuss some of the results, and in some cases unexpected findings, from the eight DRAGON campaigns conducted over the last several years. Emphasis will be placed on the most recent DISCOVER-AQ campaign, conducted in Houston, TX, and the synergy with a regional-to-global network plan through the SEAC4RS US campaign.

  6. Reconstructing Sediment Supply, Transport and Deposition Behind the Elwha River Dams

    NASA Astrophysics Data System (ADS)

    Beveridge, C.

    2017-12-01

    The Elwha River watershed in Olympic National Park of Washington State, USA is a predominantly steep, mountainous landscape where the dominant geomorphic processes include landslides, debris flows and gullying. The river is characterized by substantial variability of channel morphology and fluvial processes, and alternates between narrow bedrock canyons and wider alluvial reaches for much of its length. The literature suggests that the Elwha watershed is topographically and tectonically in steady state. The removal of the two massive hydropower dams along the river in 2013 marked the largest dam removal in history. Over the century-long lifespan of the dams, approximately 21 million cubic meters of sediment was impounded behind them. Long-term erosion rates documented in this region, together with the reservoir sedimentation data, give an unprecedented opportunity to test watershed sediment yield models and examine the dominant processes that control sediment yield over human time scales. In this study, we aim to reconstruct sediment supply, transport and deposition behind the Glines Canyon Dam (the upstream of the two dams) over its lifespan using a watershed modeling approach. We developed alternative models of varying complexity for sediment production and transport at the network scale, driven by hydrologic forcing, and simulate sediment supply and transport in tributaries upstream of the dam. The modeled sediment supply and transport dynamics are based on calibrated formulae (e.g., bedload transport is simulated using Wilcock-Crowe 2003 with modification based on observed bedload transport in the Elwha River). Observational data that support our approach include DEMs, channel morphology, meteorology, and streamflow and sediment (bedload and suspended load) discharge. We aim to demonstrate how the observed sediment yield behind the dams was influenced by upstream sediment supply and transport capacity limitations, thereby demonstrating the scale effects of flow and sediment transport processes in the Elwha River watershed.
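
    For reference, the core of the Wilcock-Crowe (2003) surface-based relation cited above can be sketched as follows. The Elwha-specific calibration, reference-stress estimation and hiding function are omitted, and the example numbers are hypothetical.

```python
# A minimal sketch of the Wilcock & Crowe (2003) dimensionless bedload
# relation; the study's calibrated modifications are not reproduced here.
import numpy as np

def wilcock_crowe_wstar(phi):
    """Dimensionless transport W* as a function of phi = tau / tau_r."""
    phi = np.asarray(phi, dtype=float)
    base = np.clip(1.0 - 0.894 / np.sqrt(phi), 0.0, None)
    return np.where(phi < 1.35, 0.002 * phi ** 7.5, 14.0 * base ** 4.5)

def bedload_flux(tau, tau_r, frac, rho=1000.0, s=2.65, g=9.81):
    """Volumetric bedload flux per unit width for one size fraction:
    q_b,i = W*_i * F_i * u*^3 / ((s - 1) * g), with u* = sqrt(tau / rho)."""
    ustar = np.sqrt(tau / rho)
    return wilcock_crowe_wstar(tau / tau_r) * frac * ustar ** 3 / ((s - 1.0) * g)

# Hypothetical example: 40 Pa shear stress, 30 Pa reference stress,
# 50% of the bed surface in this size fraction.
print(bedload_flux(40.0, 30.0, 0.5))  # m^2/s per unit width
```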

  7. The Influence of Fracturing Fluids on Fracturing Processes: A Comparison Between Water, Oil and SC-CO2

    NASA Astrophysics Data System (ADS)

    Wang, Jiehao; Elsworth, Derek; Wu, Yu; Liu, Jishan; Zhu, Wancheng; Liu, Yu

    2018-01-01

    Conventional water-based fracturing treatments may not work well for many shale gas reservoirs. This is because shale gas formations are much more sensitive to water, owing to significant capillary effects and potentially high contents of swelling clay, each of which may impair productivity. As an alternative to water-based fluids, gaseous stimulants not only avoid this potential impairment of productivity, but also conserve water as a resource and may sequester greenhouse gases underground. However, experimental observations have shown that different fracturing fluids yield variations in the induced fracture. During the hydraulic fracturing process, fracturing fluids penetrate the borehole wall, and the evolution of the fracture(s) then results from the coupled phenomena of fluid flow, solid deformation and damage. To represent this, coupled models of rock damage mechanics and fluid flow are presented for both slightly compressible fluids and CO2. We investigate the fracturing processes driven by pressurization of three kinds of fluids: water, viscous oil and supercritical CO2. Simulation results indicate that SC-CO2-based fracturing indeed has a lower breakdown pressure, as observed in experiments, and may develop fractures with greater complexity than those developed with water-based and oil-based fracturing. We explore the relation between the breakdown pressure and both the dynamic viscosity and the interfacial tension of the fracturing fluids. Modeling demonstrates an increase in the breakdown pressure with an increase in both the dynamic viscosity and the interfacial tension, consistent with experimental observations.

  8. Development of an Integrated Hydrologic Modeling System for Rainfall-Runoff Simulation

    NASA Astrophysics Data System (ADS)

    Lu, B.; Piasecki, M.

    2008-12-01

    This paper presents the development of an integrated hydrologic modeling system that provides functionality for digital watershed processing, online data retrieval, hydrologic simulation and post-event analysis. The proposed system is intended to work as a back end to the CUAHSI HIS cyberinfrastructure developments. As a first step in developing this system, the physics-based distributed hydrologic model PIHM (Penn State Integrated Hydrologic Model) is wrapped into the OpenMI (Open Modeling Interface and Environment) framework so that it can interact seamlessly with OpenMI-compliant meteorological models. The graphical user interface is being developed from the open GIS application MapWindow, which permits functionality expansion through the addition of plug-ins. Modules set up through the GUI workboard include those for retrieving meteorological data from existing databases or meteorological prediction models, obtaining geospatial data from the output of digital watershed processing, and importing initial and boundary conditions. These are connected to the OpenMI-compliant PIHM to simulate rainfall-runoff processes, and a further module automatically displays output after the simulation. Online databases are accessed through the WaterOneFlow web services, and the retrieved data are stored either in an observation database (OD) following the schema of the Observation Data Model (ODM), for time-series support, or in a grid-based storage facility, which may be a format like netCDF or a grid-based database schema. Specific development steps include the creation of a bridge to overcome the interoperability issue between PIHM and the ODM, as well as the embedding of TauDEM (Terrain Analysis Using Digital Elevation Models) into the model; this module is responsible for delineating the watershed and stream network from digital elevation models. Visualizing and editing geospatial data is achieved using MapWinGIS, an ActiveX control developed by the MapWindow team. After application to a practical watershed, the performance of the model can be tested by the post-event analysis module.
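
    OpenMI itself specifies language-level interfaces (e.g., ILinkableComponent in C#/Java); as a loose illustration of the component-linking pattern the paper relies on, here is a hypothetical Python analogy in which a toy runoff model pulls rainfall from a provider component. None of these class or method names are from PIHM or the OpenMI standard.

```python
# Not the actual OpenMI API; a Python analogy of the pull-driven
# linkable-component pattern used to couple models.
from abc import ABC, abstractmethod

class LinkableComponent(ABC):
    """Minimal stand-in for an OpenMI-style linkable component."""

    @abstractmethod
    def initialize(self, config: dict) -> None: ...

    @abstractmethod
    def get_values(self, quantity: str, time: float):
        """Return the requested quantity at the requested time,
        advancing internal computation if necessary (pull-driven)."""

class RainfallProvider(LinkableComponent):
    def initialize(self, config):
        self.series = config["rain_mm_hr"]   # hypothetical input series

    def get_values(self, quantity, time):
        return self.series.get(int(time), 0.0)

class RunoffModel(LinkableComponent):
    """Toy linear-reservoir model pulling rain from a provider."""
    def initialize(self, config):
        self.provider, self.storage, self.k = config["provider"], 0.0, 0.1

    def get_values(self, quantity, time):
        self.storage += self.provider.get_values("rainfall", time)
        q = self.k * self.storage     # outflow proportional to storage
        self.storage -= q
        return q

rain = RainfallProvider(); rain.initialize({"rain_mm_hr": {0: 5.0, 1: 2.0}})
model = RunoffModel(); model.initialize({"provider": rain})
print([model.get_values("runoff", t) for t in range(3)])
```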

  9. Development of a Dynamic Web Mapping Service for Vegetation Productivity Using Earth Observation and in situ Sensors in a Sensor Web Based Approach

    PubMed Central

    Kooistra, Lammert; Bergsma, Aldo; Chuma, Beatus; de Bruin, Sytze

    2009-01-01

    This paper describes the development of a sensor web based approach which combines earth observation and in situ sensor data to derive typical information offered by a dynamic web mapping service (WMS). A prototype has been developed which provides daily maps of vegetation productivity for the Netherlands with a spatial resolution of 250 m. Daily available MODIS surface reflectance products and meteorological parameters obtained through a Sensor Observation Service (SOS) were used as input for a vegetation productivity model. This paper presents the vegetation productivity model, the sensor data sources and the implementation of the automated processing facility. Finally, an evaluation is made of the opportunities and limitations of sensor web based approaches for the development of web services which combine both satellite and in situ sensor sources. PMID:22574019
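
    The abstract does not give the productivity model itself; as a sketch of the kind of daily computation such a service automates, a generic Monteith light-use-efficiency formulation is shown below. All parameter values are hypothetical (loosely in the style of common LUE-model defaults), not the paper's.

```python
# Generic Monteith light-use-efficiency sketch: GPP scales incident PAR
# by fAPAR and by temperature and dryness scalars. Values are hypothetical.
def daily_gpp(par_mj_m2, fapar, tmin_c, vpd_kpa,
              eps_max=1.8, tmin_lo=-8.0, tmin_hi=9.0, vpd_lo=0.65, vpd_hi=4.6):
    """Gross primary production (g C m^-2 d^-1) = eps_max * fT * fVPD * fAPAR * PAR."""
    def ramp(x, lo, hi, rising=True):
        f = min(max((x - lo) / (hi - lo), 0.0), 1.0)
        return f if rising else 1.0 - f

    f_t = ramp(tmin_c, tmin_lo, tmin_hi)                  # cold-temperature scalar
    f_vpd = ramp(vpd_kpa, vpd_lo, vpd_hi, rising=False)   # dryness scalar
    return eps_max * f_t * f_vpd * fapar * par_mj_m2

# Example day: 8 MJ m^-2 PAR, fAPAR 0.6 from reflectance, Tmin 5 C, VPD 1 kPa
print(f"{daily_gpp(8.0, 0.6, 5.0, 1.0):.1f} g C m^-2 d^-1")
```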

  10. Computational Modeling of Morphological Effects in Bangla Visual Word Recognition.

    PubMed

    Dasgupta, Tirthankar; Sinha, Manjira; Basu, Anupam

    2015-10-01

    In this paper we model the organization and processing of Bangla polymorphemic words in the mental lexicon. Our objective is to determine whether the mental lexicon accesses a polymorphemic word as a whole or decomposes it into its constituent morphemes and recognizes them accordingly. To address this issue, we adopted two strategies. First, we conducted a masked priming experiment with native speakers. Analysis of reaction times (RT) and error rates indicates that, in general, morphologically derived words are accessed via a decomposition process. Next, based on the collected RT data, we developed a computational model that explains the access and representation of Bangla derivationally suffixed words. To do so, we first explored the individual roles of different linguistic features of Bangla morphologically complex words and observed that their processing depends on several factors, such as base and surface word frequency, suffix type/token ratio, suffix family size and suffix productivity. Accordingly, we proposed different feature models. Finally, we combined these feature models into a new model that takes advantage of the individual feature models and successfully explains the processing of most Bangla morphologically derived words. Our proposed model shows an accuracy of around 80%, which outperforms related frequency-based models.

  11. Nonlinear Estimation of Discrete-Time Signals Under Random Observation Delay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caballero-Aguila, R.; Jimenez-Lopez, J. D.; Hermoso-Carazo, A.

    2008-11-06

    This paper presents an approximation to the nonlinear least-squares estimation problem for discrete-time stochastic signals using nonlinear observations with additive white noise which can be randomly delayed by one sampling time. The observation delay is modelled by a sequence of independent Bernoulli random variables whose values, zero or one, indicate whether the real observation arrives on time or is delayed, so that the available measurement used to estimate the signal may not be up to date. Assuming that the state-space model generating the signal is unknown and only the covariance functions of the processes involved in the observation equation are available, a filtering algorithm based on linear approximations of the real observations is proposed.
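
    To make the observation model concrete, here is a small simulation assuming a toy AR(1) signal and a tanh observation function (both illustrative choices, not from the paper): each measurement is, with Bernoulli probability p, the noisy observation of the previous state rather than the current one.

```python
# Simulate one-step randomly delayed nonlinear observations.
import numpy as np

rng = np.random.default_rng(0)
n, p_delay = 200, 0.3

x = np.zeros(n)                        # hidden signal (AR(1), illustrative)
for k in range(1, n):
    x[k] = 0.95 * x[k - 1] + rng.normal(0.0, 0.1)

h = np.tanh                            # a nonlinear observation function
gamma = rng.random(n) < p_delay        # Bernoulli delay indicators
y = np.empty(n)
for k in range(n):
    src = x[k - 1] if (gamma[k] and k > 0) else x[k]
    y[k] = h(src) + rng.normal(0.0, 0.05)

print(f"{gamma.mean():.2f} of observations arrived one step late")
```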

  12. Crustal block motion model and interplate coupling along Ecuador-Colombia trench based on GNSS observation network

    NASA Astrophysics Data System (ADS)

    Ito, T.; Mora-Páez, H.; Peláez-Gaviria, J. R.; Kimura, H.; Sagiya, T.

    2017-12-01

    Introduction: The Ecuador-Colombia trench lies at the boundary between the South American, Nazca and Caribbean plates. The region is very complex, featuring subduction of the Caribbean and Nazca plates and collision between Panama and the northern Andes. Large earthquakes have previously occurred along the Nazca subduction boundary, such as in 1906 (M8.8) and 1979 (M8.2), and earthquakes have also occurred inland. It is therefore important to evaluate earthquake potential in order to prepare for the damage a large earthquake could cause in the near future. GNSS observations: In the last decade, a GNSS observation network called GEORED, operated by the Servicio Geológico Colombiano, was established in Colombia for research on crustal deformation. GEORED consisted of 60 continuous GNSS stations as of 2017 (Mora et al., 2017), most with a sampling interval of 30 seconds. The GNSS data were processed by PPP using the GIPSY-OASIS II software, yielding a detailed crustal deformation map for the whole of Colombia. In addition, we use data from 100 GNSS stations in the Ecuador-Peru region (Nocquet et al., 2014). Method: We developed a crustal block motion model based on the crustal deformation derived from the GNSS observations. The model describes each block's motion by a pole location and angular velocity, together with interplate coupling on the block boundaries, including the subduction interface between the South American and Nazca plates. Block motions and interplate coupling coefficients are estimated with an MCMC method, so that each parameter is obtained as a probability density function (PDF). Results: We tested 11 crustal block configurations based on geological data such as active fault traces at the surface; using the AIC, the optimal number of crustal blocks for the geological and geodetic data is 11. With the optimal block model we estimate interplate coupling along the plate interface and rigid block motion, allowing us to evaluate the contributions of elastic deformation and rigid motion. We find weak plate coupling north of 3°N latitude; most of the crustal deformation is explained by rigid block motion.
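
    The rigid-block forward model at the heart of such an inversion is the Euler-pole relation v = ω × r. A minimal sketch, with hypothetical pole parameters rather than the paper's estimates, follows.

```python
# Rigid-block surface velocity from an Euler pole: v = omega x r.
import numpy as np

R_EARTH = 6.371e6  # m

def unit_vector(lat_deg, lon_deg):
    lat, lon = np.radians([lat_deg, lon_deg])
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

def block_velocity_mm_yr(pole_lat, pole_lon, rate_deg_myr, site_lat, site_lon):
    """Velocity (ECEF components, mm/yr) of a site on a rotating rigid block."""
    omega = np.radians(rate_deg_myr) / 1e6 * unit_vector(pole_lat, pole_lon)  # rad/yr
    r = R_EARTH * unit_vector(site_lat, site_lon)                             # m
    return np.cross(omega, r) * 1e3                                           # mm/yr

# Hypothetical pole for a 'North Andes' block, evaluated at Bogota
print(block_velocity_mm_yr(58.0, -90.0, 0.12, 4.6, -74.1))
```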

  13. Climate Process Team "Representing calving and iceberg dynamics in global climate models"

    NASA Astrophysics Data System (ADS)

    Sergienko, O. V.; Adcroft, A.; Amundson, J. M.; Bassis, J. N.; Hallberg, R.; Pollard, D.; Stearns, L. A.; Stern, A. A.

    2016-12-01

    Iceberg calving accounts for approximately 50% of the ice mass loss from the Greenland and Antarctic ice sheets. By changing a glacier's geometry, calving can also significantly perturb the glacier's stress regime far upstream of the grounding line, a process that can enhance discharge of ice across the grounding line. Once calved, icebergs drift into the open ocean where they melt, injecting freshwater into the ocean and affecting the large-scale ocean circulation. The spatial redistribution of this freshwater flux has a strong impact on sea-ice formation and its spatial variability. A Climate Process Team (CPT), "Representing calving and iceberg dynamics in global climate models", was established in the fall of 2014. The major objectives of the CPT are to: (1) develop parameterizations of calving processes that are suitable for continental-scale ice-sheet models simulating the evolution of the Antarctic and Greenland ice sheets; (2) compile the data sets of glaciological and oceanographic observations necessary to test, validate and constrain the developed parameterizations and models; and (3) develop a physically based iceberg component for inclusion in large-scale ocean circulation models. Several calving parameterizations suitable for various glaciological settings have been developed and implemented in a continental-scale ice sheet model. Simulations of the present-day Antarctic and Greenland ice sheets show that the ice-sheet geometric configurations (thickness and extent) are sensitive to the calving process. In order to guide the development as well as to test the calving parameterizations, available observations of various kinds have been compiled and organized into a database. Monthly estimates of iceberg distribution around the coast of Greenland have been produced, with the goal of constructing iceberg size distributions and probability functions for iceberg occurrence in particular regions. A physically based iceberg model component was used in a GFDL global climate model. The simulation results show that the Antarctic iceberg calving-size distribution affects iceberg trajectories and determines where iceberg meltwater enters the ocean, and that the increased iceberg freshwater transport leads to increased sea-ice growth around much of the East Antarctic coastline.

  14. A metrics for soil hydrological processes and their intrinsic dimensionality in heterogeneous systems

    NASA Astrophysics Data System (ADS)

    Lischeid, G.; Hohenbrink, T.; Schindler, U.

    2012-04-01

    Hydrology is based on the observation that catchments process input signals, e.g., precipitation, in a highly deterministic way. Thus, the Darcy or the Richards equation can be applied to model water fluxes in the saturated or vadose zone, respectively. Soils and aquifers usually exhibit substantial spatial heterogeneities at different scales that can, in principle, be represented by corresponding parameterisations of the models. In practice, however, data are hardly available at the required spatial resolution, and accounting for observed heterogeneities of soil and aquifer structure renders models very time- and CPU-consuming. We hypothesize that the intrinsic dimensionality of soil hydrological processes, which is induced by spatial heterogeneities, actually is very low, and that soil hydrological processes in heterogeneous soils follow approximately the same trajectory. That is, the way the soil transforms hydrological input signals is the same for different soil textures and structures; different soils differ only with respect to the extent of the transformation of the input signals. In a first step, we analysed the output of a soil hydrological model, based on the Richards equation, for homogeneous soils down to 5 m depth for different soil textures. A matrix of time series of soil matric potential and soil water content at 10 cm depth intervals was set up. The intrinsic dimensionality of that matrix was assessed using the correlation dimension and a non-linear principal component approach. The latter provided a metric for the extent of transformation ("damping") of the input signal. In a second step, model outputs for heterogeneous soils were analysed. In a final step, the same approaches were applied to 55 time series of observed soil water content from 15 sites and different depths. In all cases, the intrinsic dimensionality was in fact very close to unity, confirming our hypothesis. The metric provided a very efficient tool to quantify the observed behaviour, depending on depth and soil heterogeneity: different soils differed primarily with respect to the extent of damping per depth interval rather than the kind of damping. We will show how that metric can be used in a very efficient way to represent soil heterogeneities in simulation models.
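
    The authors use the correlation dimension and a nonlinear PCA; a plain linear PCA on synthetic, progressively damped series is enough to illustrate what an intrinsic dimensionality near one looks like. All data below are synthetic stand-ins, not the study's model output.

```python
# One common signal, increasingly damped with depth: the leading
# principal component then carries nearly all variance.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(1000)
signal = np.sin(2 * np.pi * t / 365.0)          # common input signal

# Series at increasing 'depth': same trajectory, more damping + noise
depths = np.arange(10)
X = np.stack([np.exp(-0.3 * d) * signal + 0.01 * rng.normal(size=t.size)
              for d in depths])

Xs = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)
eigvals = np.linalg.eigvalsh(np.cov(Xs))[::-1]   # descending eigenvalues
explained = eigvals / eigvals.sum()
print(f"first component explains {explained[0]:.1%} of variance")
```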

  15. Improving operational anodising process performance using simulation approach

    NASA Astrophysics Data System (ADS)

    Liong, Choong-Yeun; Ghazali, Syarah Syahidah

    2015-10-01

    The use of aluminium is widespread, especially in the transportation, electrical and electronics, architectural, automotive and engineering sectors. The anodizing process is therefore important for aluminium, making it durable, attractive and weather resistant. This research focuses on the anodizing process operations in the manufacturing and supply of aluminium extrusions. The data required for the development of the model were collected from observations and interviews conducted in the study. To study the current system, the processes involved in anodizing are modeled using Arena 14.5 simulation software. These consist of five main processes, namely degreasing, etching, desmutting, anodizing and sealing, together with 16 other processes. The results were analyzed to identify the problems or bottlenecks that occurred and to propose improvement methods that could be implemented on the original model. Based on comparisons between the improvement methods, productivity could be increased by reallocating workers and reducing loading time.
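
    Arena is a commercial package; as an open-source illustration of the same discrete-event idea, a hypothetical five-tank line can be sketched with SimPy. The stage names echo the abstract, but all processing times and capacities are invented.

```python
# Discrete-event sketch of a tank line with SimPy; queueing at the
# anodize tank reveals the kind of bottleneck such a study targets.
import simpy

STAGES = [("degrease", 1, 5), ("etch", 1, 8), ("desmut", 1, 4),
          ("anodize", 2, 25), ("seal", 1, 10)]   # (name, tanks, minutes)

def job(env, name, tanks):
    for (_stage, _cap, minutes), res in zip(STAGES, tanks):
        with res.request() as req:
            yield req                      # queue for a free tank
            yield env.timeout(minutes)     # processing time
    print(f"{name} done at t={env.now} min")

env = simpy.Environment()
tanks = [simpy.Resource(env, capacity=c) for _, c, _ in STAGES]
for i in range(4):                         # four racks arriving together
    env.process(job(env, f"rack{i}", tanks))
env.run()
```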

  16. A GUI-based Tool for Bridging the Gap between Models and Process-Oriented Studies

    NASA Astrophysics Data System (ADS)

    Kornfeld, A.; Van der Tol, C.; Berry, J. A.

    2014-12-01

    Models used for simulation of photosynthesis and transpiration by canopies of terrestrial plants typically have subroutines such as STOMATA.F90, PHOSIB.F90 or BIOCHEM.m that solve for photosynthesis and associated processes. Key parameters such as the Vmax for Rubisco and temperature response parameters are required by these subroutines. These are often taken from the literature or determined by separate analysis of gas exchange experiments. It is useful to note, however, that the subroutines can be extracted and run as standalone models to simulate leaf responses collected in gas exchange experiments. Furthermore, there are excellent non-linear fitting tools that can be used to optimize the parameter values in these models to fit the observations. Ideally, the Vmax fit in this way should be the same as that determined by a separate analysis, but it may not be, because of interactions with other kinetic constants and their temperature dependences in the full subroutine. We submit that it is more useful to fit the complete model to the calibration experiments than to fit disaggregated constants. We designed a graphical user interface (GUI) based tool that uses gas exchange photosynthesis data to directly estimate model parameters in the SCOPE (Soil Canopy Observation, Photochemistry and Energy fluxes) model and, at the same time, allows researchers to change parameters interactively to visualize how variation in model parameters affects predicted outcomes such as photosynthetic rates, electron transport, and chlorophyll fluorescence. We have also ported some of this functionality to an Excel spreadsheet, which could be used as a teaching tool to help integrate process-oriented and model-oriented studies.
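
    The full-subroutine fitting the tool performs is beyond an abstract, but the underlying idea, nonlinear least-squares against gas-exchange data, can be reduced to a single Rubisco-limited Farquhar branch. In this sketch, Km and Γ* are fixed at commonly quoted 25 °C values and the "observations" are synthetic.

```python
# Fit Vcmax and Rd of a reduced Farquhar branch to (Ci, A) data.
import numpy as np
from scipy.optimize import curve_fit

KM, GAMMA_STAR = 730.0, 42.75   # umol mol^-1, assumed 25 C constants

def a_net(ci, vcmax, rd):
    """Rubisco-limited net assimilation (umol m^-2 s^-1)."""
    return vcmax * (ci - GAMMA_STAR) / (ci + KM) - rd

# Synthetic 'gas exchange' observations standing in for real data
ci = np.array([100., 150., 200., 300., 400., 600.])
a_obs = a_net(ci, vcmax=60.0, rd=1.5) + np.random.default_rng(2).normal(0, 0.3, ci.size)

(vcmax_fit, rd_fit), cov = curve_fit(a_net, ci, a_obs, p0=[50.0, 1.0])
print(f"Vcmax = {vcmax_fit:.1f} +/- {np.sqrt(cov[0, 0]):.1f}, Rd = {rd_fit:.2f}")
```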

  17. Porosity and Variations in Microgravity Aerogel Nano-Structures. 2; New Laser Speckle Characterization Methods

    NASA Technical Reports Server (NTRS)

    Hunt, A. J.; Ayers, M. R.; Sibille, L.; Smith, D. D.

    2001-01-01

    The transition from sol to gel is a process that is critical to the properties of engineered nanomaterials, but one with few available techniques for observing the dynamic processes occurring during the evolution of the gel network. Specifically, the observation of various cluster aggregation models, such as diffusion-limited and reaction-limited cluster growth, can be quite difficult. This is important because the actual aggregation model can dramatically influence the mechanical properties of gels, and is significantly affected by the presence of convective flows, or their absence in microgravity. We have developed two new non-intrusive optical methods for observing the aggregation processes within gels in real time. These make use of the dynamic behavior of laser speckle patterns produced when an intense laser source is passed through a gelling sol. The first method is a simplified time-correlation measurement, in which the speckle pattern is observed using a CCD camera and information on the movement of the scattering objects is readily apparent. This approach is extremely sensitive to minute variations in the flow field, as the observed speckle pattern is a diffraction-based image and is therefore sensitive to motions within the sol on the order of the wavelength of the probing light. Additionally, this method has proven useful in determining a precise time for the gel point, an event often difficult to measure. Monitoring the evolution of contrast within the speckle field is another method that has proven useful for studying gelation. In this case, speckle contrast depends on the size (correlation length) and number of scattering centers, increasing with increasing size and decreasing with increasing number. The dynamic behavior of cluster growth in gels causes both of these to change simultaneously with time, at a rate determined by the specific aggregation model involved. Actual growth processes can now be observed, and the effects of varying gravity fields on the growth processes qualitatively described. Preliminary ground-based measurements have been obtained.
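
    The contrast statistic itself is simple to compute; a sketch on a synthetic fully developed speckle field (for which K ≈ 1) follows. The window size and the exponential intensity model are illustrative choices, not the paper's.

```python
# Local speckle contrast K = std/mean of intensity, per window.
import numpy as np

def speckle_contrast(img, win=8):
    """Speckle contrast over non-overlapping win x win tiles."""
    h, w = (img.shape[0] // win) * win, (img.shape[1] // win) * win
    tiles = img[:h, :w].reshape(h // win, win, w // win, win).swapaxes(1, 2)
    return tiles.std(axis=(2, 3)) / tiles.mean(axis=(2, 3))

rng = np.random.default_rng(3)
frame = rng.exponential(scale=100.0, size=(256, 256))  # fully developed speckle
print(f"mean contrast ~ {speckle_contrast(frame).mean():.2f}")  # ~1 ideally
```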

  18. A process model of technology innovation in governmental agencies: Insights from NASA’s science directorate

    NASA Astrophysics Data System (ADS)

    Szajnfarber, Zoe; Weigel, Annalisa L.

    2013-03-01

    This paper investigates the process through which new technical concepts are matured in the NASA innovation ecosystem. We propose an "epoch-shock" conceptualization as an alternative mental model to the traditional stage-gate view. The epoch-shock model is developed inductively, based on detailed empirical observations of the process, and validated, to the extent possible, through expert review. The paper concludes by illustrating how the new epoch-shock conceptualization could provide a useful basis for rethinking feasible interventions to improve innovation management in the space agency context. Where the more traditional stage-gate model leads to an emphasis on centralized flow control, the epoch-shock model acknowledges the decentralized, probabilistic nature of key interactions and highlights which aspects may be influenced.

  19. Neural Network-Based Retrieval of Surface and Root Zone Soil Moisture using Multi-Frequency Remotely-Sensed Observations

    NASA Astrophysics Data System (ADS)

    Hamed Alemohammad, Seyed; Kolassa, Jana; Prigent, Catherine; Aires, Filipe; Gentine, Pierre

    2017-04-01

    Knowledge of root zone soil moisture is essential for studying plant responses to different stress conditions, since plant photosynthetic activity and transpiration rates are constrained by the water available through the roots. Current global root zone soil moisture estimates are based either on outputs from physical models constrained by observations, or on assimilation of remotely-sensed microwave-based surface soil moisture estimates with physical model outputs. However, the quality of these estimates is limited by the accuracy of the model representations of physical processes (such as radiative transfer, infiltration, percolation, and evapotranspiration) as well as by errors in the estimates of the surface parameters. Statistical approaches provide an alternative, efficient platform for developing root zone soil moisture retrieval algorithms from remotely-sensed observations. In this study, we present a new neural-network-based retrieval algorithm to estimate surface and root zone soil moisture from passive microwave observations of the SMAP satellite (L-band) and the AMSR2 instrument (X-band). SMAP early-morning observations are ideal for surface soil moisture retrieval. AMSR2 midnight observations are used here as an indicator of plant hydraulic properties that are related to root zone soil moisture. The combined observations from SMAP and AMSR2, together with other ancillary observations including the Solar-Induced Fluorescence (SIF) estimates from the GOME-2 instrument, provide the necessary information to estimate surface and root zone soil moisture. The algorithm is applied to observations from the first 18 months of the SMAP mission, and the retrievals are validated against in-situ observations and other global datasets.
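
    This is not the authors' trained network, but the retrieval setup can be schematized in a few lines with scikit-learn: synthetic multi-frequency brightness temperatures and a SIF-like predictor are mapped to two soil-moisture targets. All relationships and noise levels below are invented for illustration.

```python
# Schematic multi-output NN retrieval on synthetic data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 5000
sm_surf = rng.uniform(0.05, 0.45, n)                 # surface soil moisture
sm_root = 0.7 * sm_surf + rng.normal(0.1, 0.03, n)   # correlated root zone

# Synthetic predictors: L-band TB, X-band TB, and a SIF-like variable
tb_l = 280 - 180 * sm_surf + rng.normal(0, 2, n)
tb_x = 285 - 120 * sm_root + rng.normal(0, 2, n)
sif = 1.0 + 2.0 * sm_root + rng.normal(0, 0.2, n)
X = np.column_stack([tb_l, tb_x, sif])
y = np.column_stack([sm_surf, sm_root])

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                   random_state=0).fit(Xtr, ytr)
print(f"R^2 on held-out data: {net.score(Xte, yte):.2f}")
```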

  20. Evaluation of observed blast loading effects on NIF x-ray diagnostic collimators.

    PubMed

    Masters, N D; Fisher, A; Kalantar, D; Prasad, R; Stölken, J S; Wlodarczyk, C

    2014-11-01

    We present the "debris wind" models used to estimate the impulsive load to which x-ray diagnostics and other structures are subject during National Ignition Facility experiments. These models are used as part of the engineering design process. Isotropic models, based on simulations or simplified "expanding shell" models, are augmented by debris wind multipliers to account for directional anisotropy. We present improvements to these multipliers based on measurements of the permanent deflections of diagnostic components: 4× for the polar direction and 2× within the equatorial plane, the latter relaxing the previous heuristic debris wind multiplier.

  1. A Process-based, Climate-Sensitive Model to Derive Methane Emissions from Natural Wetlands: Application to 5 Wetland Sites, Sensitivity to Model Parameters and Climate

    NASA Technical Reports Server (NTRS)

    Walter, Bernadette P.; Heimann, Martin

    1999-01-01

    Methane emissions from natural wetlands constitute the largest single methane source at present and depend strongly on climate. In order to investigate the response of methane emissions from natural wetlands to climate variations, a 1-dimensional, process-based, climate-sensitive model of methane emissions from natural wetlands is developed. In the model, the processes leading to methane emission are simulated within a 1-dimensional soil column, and the three transport mechanisms (diffusion, plant-mediated transport and ebullition) are modeled explicitly. The model forcing consists of daily values of soil temperature, water table and net primary productivity; at permafrost sites the thaw depth is included. The methane model is tested using observational data obtained at 5 wetland sites located in North America, Europe and Central America, representing a large variety of environmental conditions. It can be shown that in most cases seasonal variations in methane emissions can be explained by the combined effect of changes in soil temperature and the position of the water table. Our results also show that a process-based approach is needed, because there is no simple relationship between these controlling factors and methane emissions that applies across a variety of wetland sites. The sensitivity of the model to the choice of key model parameters is tested, and further sensitivity tests are performed to demonstrate how methane emissions from wetlands respond to climate variations.
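
    Of the three transport pathways, diffusion is the easiest to sketch: a minimal explicit finite-difference column with a production term below the water table is shown here. All coefficients and rates are hypothetical, and plant-mediated transport and ebullition are omitted.

```python
# 1-D diffusion of CH4 in a soil column with production at depth.
import numpy as np

nz, dz, dt = 50, 0.02, 60.0            # 1 m column, 2 cm layers, 60 s steps
D = 2e-9                               # CH4 diffusivity, m^2/s (illustrative)
prod = np.zeros(nz); prod[25:] = 1e-9  # production below water table, mol m^-3 s^-1

c = np.zeros(nz)                       # CH4 concentration profile
for _ in range(10000):                 # stable: D*dt/dz^2 << 0.5
    lap = np.zeros(nz)
    lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dz**2
    c += dt * (D * lap + prod)
    c[0] = 0.0                         # atmosphere: zero concentration at surface
    c[-1] = c[-2]                      # no-flux bottom boundary

flux = D * (c[1] - c[0]) / dz          # diffusive emission at the surface
print(f"surface flux ~ {flux:.3e} mol m^-2 s^-1")
```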

  2. Can climate variability information constrain a hydrological model for an ungauged Costa Rican catchment?

    NASA Astrophysics Data System (ADS)

    Quesada-Montano, Beatriz; Westerberg, Ida K.; Fuentes-Andino, Diana; Hidalgo-Leon, Hugo; Halldin, Sven

    2017-04-01

    Long-term hydrological data are key to understanding catchment behaviour and for decision making within water management and planning. Given the lack of observed data in many regions worldwide, hydrological models are an alternative for reproducing historical streamflow series. Additional types of information, beyond locally observed discharge, can be used to constrain model parameter uncertainty for ungauged catchments. Climate variability exerts a strong influence on streamflow variability at long and short time scales, in particular in the Central American region. We therefore explored the use of climate variability knowledge to constrain the simulated discharge uncertainty of a conceptual hydrological model applied to a Costa Rican catchment treated as ungauged. To reduce model uncertainty we first rejected parameter relationships that disagreed with our understanding of the system. We then assessed how well climate-based constraints applied at long-term, inter-annual and intra-annual time scales could constrain model uncertainty. Finally, we compared the climate-based constraints to a constraint on low-flow statistics based on information obtained from global maps. We evaluated our method in terms of the ability of the model to reproduce the observed hydrograph and the active catchment processes, using two efficiency measures, a statistical consistency measure, a spread measure and 17 hydrological signatures. We found that climate variability knowledge was useful for reducing model uncertainty, in particular by rejecting unrealistic representations of deep groundwater processes. The constraints based on global maps of low-flow statistics provided more constraining information than those based on climate variability, but the latter rejected slow rainfall-runoff representations that the low-flow statistics did not. The use of such knowledge, together with information on low-flow statistics and constraints on parameter relationships, proved useful for constraining model uncertainty for a basin assumed to be ungauged. This shows that our method is promising for reconstructing long-term flow data for ungauged catchments on the Pacific side of Central America, and that similar methods can be developed for ungauged basins in other regions where climate variability exerts a strong control on streamflow variability.

  3. Integrated Thermal Response Modeling System For Hypersonic Entry Vehicles

    NASA Technical Reports Server (NTRS)

    Chen, Y.-K.; Milos, F. S.; Partridge, Harry (Technical Monitor)

    2000-01-01

    We describe an extension of the Markov decision process model in which a continuous time dimension is included in the state space. This allows for the representation and exact solution of a wide range of problems in which transitions or rewards vary over time. We examine problems based on route planning with public transportation and telescope observation scheduling.
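
    A discretized illustration of such a time-dependent MDP is finite-horizon value iteration over (state, time) pairs; the transition and reward tables below are randomly generated hypotheticals, not from the paper.

```python
# Finite-horizon value iteration with time-varying rewards.
import numpy as np

n_states, n_actions, horizon = 3, 2, 5
rng = np.random.default_rng(5)
# P[a, s, s'] transition probabilities; R[t, s, a] time-varying rewards
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))
R = rng.uniform(0, 1, size=(horizon, n_states, n_actions))

V = np.zeros((horizon + 1, n_states))
policy = np.zeros((horizon, n_states), dtype=int)
for t in reversed(range(horizon)):
    # Q[s, a] = R[t, s, a] + sum_s' P[a, s, s'] * V[t+1, s']
    Q = R[t] + np.einsum("asn,n->sa", P, V[t + 1])
    V[t] = Q.max(axis=1)
    policy[t] = Q.argmax(axis=1)

print("time-dependent policy:\n", policy)   # best action per (time, state)
```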

  4. Embedding Task-Based Neural Models into a Connectome-Based Model of the Cerebral Cortex.

    PubMed

    Ulloa, Antonio; Horwitz, Barry

    2016-01-01

    A number of recent efforts have used large-scale, biologically realistic neural models to help understand the neural basis for the patterns of activity observed in both resting state and task-related functional neural imaging data. An example of the former is The Virtual Brain (TVB) software platform, which allows one to apply large-scale neural modeling in a whole brain framework. TVB provides a set of structural connectomes of the human cerebral cortex, a collection of neural processing units for each connectome node, and various forward models that can convert simulated neural activity into a variety of functional brain imaging signals. In this paper, we demonstrate how to embed a previously or newly constructed task-based large-scale neural model into the TVB platform. We tested our method on a previously constructed large-scale neural model (LSNM) of visual object processing that consisted of interconnected neural populations representing primary and secondary visual, inferotemporal, and prefrontal cortex. Some neural elements in the original model were "non-task-specific" (NS) neurons that served as noise generators for "task-specific" neurons that processed shapes during a delayed match-to-sample (DMS) task. We replaced the NS neurons with an anatomical TVB connectome model of the cerebral cortex comprising 998 regions of interest interconnected by white matter fiber tract weights. We embedded our LSNM of visual object processing into corresponding nodes within the TVB connectome. Reciprocal connections between TVB nodes and our task-based modules were included in this framework. We ran visual object processing simulations and showed that the TVB simulator successfully replaced the noise generation originally provided by NS neurons; i.e., the DMS tasks performed with the hybrid LSNM/TVB simulator generated neural and fMRI activity equivalent to that of the original task-based models. Additionally, we found partial agreement between the functional connectivities of the hybrid LSNM/TVB model and the original LSNM. Our framework thus presents a way to embed task-based neural models into the TVB platform, enabling a better comparison between empirical and computational data, which in turn can lead to a better understanding of how interacting neural populations give rise to human cognitive behaviors.

  5. Model-Based Fatigue Prognosis of Fiber-Reinforced Laminates Exhibiting Concurrent Damage Mechanisms

    NASA Technical Reports Server (NTRS)

    Corbetta, M.; Sbarufatti, C.; Saxena, A.; Giglio, M.; Goebel, K.

    2016-01-01

    Prognostics of large composite structures is a topic of increasing interest in the field of structural health monitoring for aerospace, civil, and mechanical systems. Along with recent advancements in real-time structural health data acquisition and processing for damage detection and characterization, model-based stochastic methods for life prediction are showing promising results in the literature. Among the various model-based approaches, particle-filtering algorithms are particularly capable of coping with the uncertainties associated with the process. These include uncertainties in the information on the damage extent and the inherent uncertainties of the damage propagation process. Some efforts have shown successful applications of particle-filtering-based frameworks for predicting matrix crack evolution and the structural stiffness degradation caused by repetitive fatigue loads. Effects of other damage modes such as delamination, however, are not incorporated in these works. It is well established that delamination and matrix cracks not only co-exist in most laminate structures during the fatigue degradation process but also affect each other's progression. Furthermore, delamination significantly alters the stress state in the laminates and accelerates material degradation, leading to catastrophic failure. Therefore, the work presented herein proposes a particle-filtering-based framework for predicting a structure's remaining useful life with consideration of multiple co-existing damage mechanisms. The framework uses an energy-based model from the composite modeling literature. The multiple-damage-mode model has been shown to suitably estimate the energy release rate of cross-ply laminates as affected by matrix cracks and delamination modes. The model is also able to estimate the reduction in stiffness of the damaged laminate. This information is then used in the algorithms for life prediction. First, a brief summary of the energy-based damage model is provided. Then, the paper describes how the model is embedded within the prognostic framework and how the prognostic performance is assessed using observations from run-to-failure experiments.
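
    The energy-based damage model is the authors'; the filtering machinery itself is generic, and a bootstrap particle filter for a scalar damage state with an exponential growth law (both invented here for illustration) shows the estimate-then-predict structure.

```python
# Bootstrap particle filter tracking a damage state and its growth rate.
import numpy as np

rng = np.random.default_rng(6)
n_p, n_cycles = 2000, 60
true_rate = 0.04

# Synthetic run-to-failure data: exponential damage growth, noisy observation
d_true = 0.05 * np.exp(true_rate * np.arange(n_cycles))
obs = d_true * rng.lognormal(0.0, 0.05, n_cycles)

particles = np.column_stack([np.full(n_p, 0.05),
                             rng.uniform(0.01, 0.10, n_p)])  # [damage, rate]
for y in obs:
    particles[:, 0] *= np.exp(particles[:, 1])              # propagate growth law
    w = np.exp(-0.5 * ((np.log(y) - np.log(particles[:, 0])) / 0.05) ** 2)
    w /= w.sum()
    idx = rng.choice(n_p, size=n_p, p=w)                    # resample by weight
    particles = particles[idx]
    particles[:, 1] += rng.normal(0.0, 0.002, n_p)          # jitter the rate

print(f"estimated growth rate: {particles[:, 1].mean():.3f} (true {true_rate})")
```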

  6. An approach for modelling snowcover ablation and snowmelt runoff in cold region environments

    NASA Astrophysics Data System (ADS)

    Dornes, Pablo Fernando

    Reliable hydrological model simulations are the result of numerous complex interactions among hydrological inputs, landscape properties, and initial conditions. Determination of the effects of these factors is one of the main challenges in hydrological modelling. This situation becomes even more difficult in cold regions due to the ungauged nature of subarctic and arctic environments. This research work is an attempt to apply a new approach for modelling snowcover ablation and snowmelt runoff in complex subarctic environments with limited data while retaining integrity in the process representations. The modelling strategy is based on the incorporation of both detailed process understanding and inputs along with information gained from observations of basin-wide streamflow phenomenon; essentially a combination of deductive and inductive approaches. The study was conducted in the Wolf Creek Research Basin, Yukon Territory, using three models, a small-scale physically based hydrological model, a land surface scheme, and a land surface hydrological model. The spatial representation was based on previous research studies and observations, and was accomplished by incorporating landscape units, defined according to topography and vegetation, as the spatial model elements. Comparisons between distributed and aggregated modelling approaches showed that simulations incorporating distributed initial snowcover and corrected solar radiation were able to properly simulate snowcover ablation and snowmelt runoff whereas the aggregated modelling approaches were unable to represent the differential snowmelt rates and complex snowmelt runoff dynamics. Similarly, the inclusion of spatially distributed information in a land surface scheme clearly improved simulations of snowcover ablation. Application of the same modelling approach at a larger scale using the same landscape based parameterisation showed satisfactory results in simulating snowcover ablation and snowmelt runoff with minimal calibration. Verification of this approach in an arctic basin illustrated that landscape based parameters are a feasible regionalisation framework for distributed and physically based models. In summary, the proposed modelling philosophy, based on the combination of an inductive and deductive reasoning, is a suitable strategy for reliable predictions of snowcover ablation and snowmelt runoff in cold regions and complex environments.

  7. Coupled diffusion processes and 2D affinities of adhesion molecules at synthetic membrane junctions

    NASA Astrophysics Data System (ADS)

    Peel, Christopher; Choudhuri, Kaushik; Schmid, Eva M.; Bakalar, Matthew H.; Ann, Hyoung Sook; Fletcher, Daniel A.; Journot, Celine; Turberfield, Andrew; Wallace, Mark; Dustin, Michael

    A more complete understanding of the physically intrinsic mechanisms underlying protein mobility at cellular interfaces will provide additional insights into processes driving adhesion and organization in signalling junctions such as the immunological synapse. We observed diffusional slowing of structurally diverse binding proteins at synthetic interfaces formed by giant unilamellar vesicles (GUVs) on supported lipid bilayers (SLBs) that shows size dependence not accounted for by existing models. To model the effects of size and intermembrane spacing on interfacial reaction-diffusion processes, we describe a multistate diffusion model incorporating entropic effects of constrained binding. This can be merged with hydrodynamic theories of receptor-ligand diffusion and coupling to thermal membrane roughness. A novel synthetic membrane adhesion assay based on reversible and irreversible DNA-mediated interactions between GUVs and SLBs is used to precisely vary length, affinity, and flexibility, and also provides a platform to examine these effects on the dynamics of processes such as size-based segregation of binding and non-binding species.

  8. Dynamic frailty models based on compound birth-death processes.

    PubMed

    Putter, Hein; van Houwelingen, Hans C

    2015-07-01

    Frailty models are used in survival analysis to model unobserved heterogeneity. They accommodate such heterogeneity by the inclusion of a random term, the frailty, which is assumed to multiply the hazard of a subject (individual frailty) or the hazards of all subjects in a cluster (shared frailty). Typically, the frailty term is assumed to be constant over time. This is a restrictive assumption and extensions to allow for time-varying or dynamic frailties are of interest. In this paper, we extend the auto-correlated frailty models of Henderson and Shimakura and of Fiocco, Putter and van Houwelingen, developed for longitudinal count data and discrete survival data, to continuous survival data. We present a rigorous construction of the frailty processes in continuous time based on compound birth-death processes. When the frailty processes are used as mixtures in models for survival data, we derive the marginal hazards and survival functions and the marginal bivariate survival functions and cross-ratio function. We derive distributional properties of the processes, conditional on observed data, and show how to obtain the maximum likelihood estimators of the parameters of the model using a (stochastic) expectation-maximization algorithm. The methods are applied to a publicly available data set.

  9. Evaluation of mean climate in a chemistry-climate model simulation

    NASA Astrophysics Data System (ADS)

    Hong, S.; Park, H.; Wie, J.; Park, R.; Lee, S.; Moon, B. K.

    2017-12-01

    Incorporation of interactive chemistry is essential for understanding chemistry-climate interactions and feedback processes in climate models. Here we assess a newly developed chemistry-climate model (GRIMs-Chem), which is based on the Global/Regional Integrated Model system (GRIMs) and includes the aerosol direct effect as well as stratospheric linearized ozone chemistry (LINOZ). We ran GRIMs-Chem with observed sea surface temperatures over the period 1979-2010, and compared the simulation results with observations and with CMIP models. To measure the relative performance of our model, we define a quantitative performance metric using the Taylor diagram. This metric allows us to assess overall skill in simulating multiple variables. Overall, our model reproduces the zonal-mean spatial patterns of temperature, horizontal wind, vertical motion, and relative humidity better than the other models. However, the model does not perform as well in the upper troposphere (200 hPa), and it is currently unclear which model processes are responsible. Acknowledgements: This research was supported by the Korea Ministry of Environment (MOE) "Climate Change Correspondence Program."
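
    The abstract does not state which Taylor-diagram statistic is used; one common choice (from Taylor, 2001) combines the pattern correlation R with the normalized standard deviation, as in this sketch on synthetic data.

```python
# A common Taylor-diagram skill score on synthetic model/obs fields.
import numpy as np

def taylor_stats(model, obs):
    """Pattern correlation and model/obs standard-deviation ratio."""
    m, o = model - model.mean(), obs - obs.mean()
    r = (m * o).mean() / (m.std() * o.std())
    return r, m.std() / o.std()

def taylor_skill(model, obs, r0=1.0):
    """Skill in (0, 1]: 4(1+R) / [(sigma_hat + 1/sigma_hat)^2 (1+R0)]."""
    r, s = taylor_stats(model, obs)
    return 4.0 * (1.0 + r) / ((s + 1.0 / s) ** 2 * (1.0 + r0))

rng = np.random.default_rng(7)
obs = np.sin(np.linspace(0, 4 * np.pi, 200))
model = 0.9 * obs + rng.normal(0, 0.2, obs.size)   # a 'pretty good' model
print(f"Taylor skill: {taylor_skill(model, obs):.2f}")
```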

  10. Nonparametric Bayesian Segmentation of a Multivariate Inhomogeneous Space-Time Poisson Process.

    PubMed

    Ding, Mingtao; He, Lihan; Dunson, David; Carin, Lawrence

    2012-12-01

    A nonparametric Bayesian model is proposed for segmenting time-evolving multivariate spatial point process data. An inhomogeneous Poisson process is assumed, with a logistic stick-breaking process (LSBP) used to encourage piecewise-constant spatial Poisson intensities. The LSBP explicitly favors spatially contiguous segments, and infers the number of segments based on the observed data. The temporal dynamics of the segmentation and of the Poisson intensities are modeled with exponential correlation in time, implemented in the form of a first-order autoregressive model for uniformly sampled discrete data, and via a Gaussian process with an exponential kernel for general temporal sampling. We consider and compare two different inference techniques: a Markov chain Monte Carlo sampler, which has relatively high computational complexity; and an approximate and efficient variational Bayesian analysis. The model is demonstrated with a simulated example and a real example of space-time crime events in Cincinnati, Ohio, USA.
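
    The stick-breaking step of the LSBP is compact enough to show directly: K-1 logistic "break" probabilities, which in the model vary smoothly over space, are mapped to K segment weights. The example logits below are arbitrary.

```python
# Logistic stick-breaking: map K-1 break logits to K weights summing to 1.
import numpy as np

def lsbp_weights(logits):
    """Return K mixture weights from K-1 logistic break probabilities."""
    v = 1.0 / (1.0 + np.exp(-np.asarray(logits, dtype=float)))  # break probs
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - v)])    # unbroken stick
    return np.concatenate([v, [1.0]]) * remaining               # last takes rest

# Example: at one spatial location, three candidate segments
w = lsbp_weights([1.5, -0.5])
print(w, w.sum())   # weights for segments 1..3; sums to 1
```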

  11. Using natural selection and optimization for smarter vegetation models - challenges and opportunities

    NASA Astrophysics Data System (ADS)

    Franklin, Oskar; Han, Wang; Dieckmann, Ulf; Cramer, Wolfgang; Brännström, Åke; Pietsch, Stephan; Rovenskaya, Elena; Prentice, Iain Colin

    2017-04-01

    Dynamic global vegetation models (DGVMs) are now indispensable for understanding the biosphere and for estimating the capacity of ecosystems to provide services. The models are continuously developed to include an increasing number of processes and to utilize the growing amounts of observed data becoming available. However, while the versatility of the models is increasing as new processes and variables are added, their accuracy suffers from the accumulation of uncertainty, especially in the absence of overarching principles controlling their concerted behaviour. We have initiated a collaborative working group to address this problem based on a 'missing law': adaptation and optimization principles rooted in natural selection. Even though this 'missing law' constrains relationships between traits, and therefore can vastly reduce the number of uncertain parameters in ecosystem models, it has rarely been applied to DGVMs. Our recent research has shown that optimization- and trait-based models of gross primary production can be both much simpler and more accurate than current models based on fixed functional types, and that observed plant carbon allocations and distributions of plant functional traits are predictable with eco-evolutionary models. While there are many other examples of the usefulness of these and other theoretical principles, it is not always straightforward to make them operational in predictive models. In particular, on longer time scales, the representation of functional diversity and the dynamic interactions among individuals and species presents a formidable challenge. Here we present recent ideas on the use of adaptation and optimization principles in vegetation models, including examples of promising developments, but also limitations of the principles and some key challenges.

  12. Numerical Modeling of Unsteady Thermofluid Dynamics in Cryogenic Systems

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok

    2003-01-01

    A finite-volume-based network analysis procedure has been applied to model unsteady flow with and without heat transfer. The liquid has been modeled as a compressible fluid whose compressibility factor is computed from the equation of state for a real fluid. The modeling approach recognizes that the pressure oscillation is linked with the variation of the compressibility factor; therefore, the speed of sound does not explicitly appear in the governing equations. The numerical results for the chilldown process also suggest that the flow and heat transfer are strongly coupled; this is evident in the observation that the mass flow rate increases by a factor of ten during the 90-second chilldown process.

  13. Image-Based Reconstruction and Analysis of Dynamic Scenes in a Landslide Simulation Facility

    NASA Astrophysics Data System (ADS)

    Scaioni, M.; Crippa, J.; Longoni, L.; Papini, M.; Zanzi, L.

    2017-12-01

    The application of image processing and photogrammetric techniques to the dynamic reconstruction of landslide simulations in a scaled-down facility is described. The simulations are also used here for active-learning purposes: students are helped to understand how physical processes happen and which kinds of observations may be obtained from a sensor network. In particular, the use of digital images to obtain multi-temporal information is presented. On the one hand, using a multi-view sensor setup based on four synchronized GoPro 4 Black® cameras, a 4D (3D spatial position plus time) reconstruction of the dynamic scene is obtained through the composition of several 3D models derived from dense image matching. The final textured 4D model allows one to revisit a completed experiment at any time in a dynamic and interactive mode. On the other hand, a digital image correlation (DIC) technique has been used to track surface point displacements in the image sequence obtained from the camera in front of the simulation facility. While the 4D model provides a qualitative description and documentation of the experiment as it runs, the DIC analysis outputs quantitative information, such as local point displacements and velocities, to be related to physical processes and to other observations. All the hardware and software equipment adopted for the photogrammetric reconstruction is based on low-cost and open-source solutions.
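
    At its core, DIC reduces to template matching between frames; a brute-force normalized cross-correlation tracker on a synthetic shifted image illustrates the displacement estimation. The window sizes and the test shift are arbitrary choices, not the study's settings.

```python
# Track one template between two frames by normalized cross-correlation.
import numpy as np

def ncc_displacement(frame0, frame1, y, x, tsize=16, search=8):
    """Return (dy, dx) maximizing normalized cross-correlation of the
    template frame0[y:y+tsize, x:x+tsize] within +/- search pixels."""
    t = frame0[y:y + tsize, x:x + tsize]
    t = (t - t.mean()) / t.std()
    best, best_dydx = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            w = frame1[y + dy:y + dy + tsize, x + dx:x + dx + tsize]
            w = (w - w.mean()) / w.std()
            score = (t * w).mean()
            if score > best:
                best, best_dydx = score, (dy, dx)
    return best_dydx

rng = np.random.default_rng(8)
f0 = rng.random((128, 128))
f1 = np.roll(f0, shift=(3, -2), axis=(0, 1))   # known displacement (3, -2)
print(ncc_displacement(f0, f1, 50, 50))         # expect (3, -2)
```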

  14. Impact of model complexity and multi-scale data integration on the estimation of hydrogeological parameters in a dual-porosity aquifer

    NASA Astrophysics Data System (ADS)

    Tamayo-Mas, Elena; Bianchi, Marco; Mansour, Majdi

    2018-03-01

    This study investigates the impact of model complexity and multi-scale prior hydrogeological data on the interpretation of pumping test data in a dual-porosity aquifer (the Chalk aquifer in England, UK). In order to characterize the hydrogeological properties, different approaches ranging from a traditional analytical solution (the Theis approach) to more sophisticated numerical models with automatically calibrated input parameters are applied. Comparisons of results from the different approaches show that neither traditional analytical solutions nor a numerical model assuming a homogeneous and isotropic aquifer can adequately explain the observed drawdowns. A better reproduction of the observed drawdowns in all seven monitoring locations is instead achieved when medium- and local-scale prior information about the vertical hydraulic conductivity (K) distribution is used to constrain the model calibration process. In particular, the integration of medium-scale vertical K variations based on flowmeter measurements led to an improvement in the goodness-of-fit of the simulated drawdowns of about 30%. Further improvements (up to 70%) were observed when a simple upscaling approach was used to integrate small-scale K data to constrain the automatic calibration process of the numerical model. Although the analysis focuses on a specific case study, these results provide insights into the representativeness of estimates of hydrogeological properties based on different interpretations of pumping test data, and promote the integration of multi-scale data for the characterization of heterogeneous aquifers in complex hydrogeological settings.
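
    For reference, the traditional analytical baseline mentioned above is the Theis (1935) solution, a one-liner with scipy's exponential integral; the parameter values below are hypothetical, not the study's.

```python
# Theis drawdown: s = Q/(4*pi*T) * W(u), u = r^2 * S / (4*T*t),
# with the well function W given by the exponential integral E1.
import numpy as np
from scipy.special import exp1

def theis_drawdown(r, t, Q, T, S):
    """Drawdown (m) at radius r (m), time t (s), pumping rate Q (m^3/s),
    transmissivity T (m^2/s), storativity S (-)."""
    u = r**2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

# Hypothetical values: r = 100 m, t = 1 day
print(f"{theis_drawdown(100.0, 86400.0, 0.02, 5e-3, 1e-4):.2f} m")
```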

  15. Preclinical Biokinetic Modelling of Tc-99m Radiophamaceuticals Obtained from Semi-Automatic Image Processing.

    PubMed

    Cornejo-Aragón, Luz G; Santos-Cuevas, Clara L; Ocampo-García, Blanca E; Chairez-Oria, Isaac; Diaz-Nieto, Lorenza; García-Quiroz, Janice

    2017-01-01

    The aim of this study was to develop a semi-automatic image processing algorithm (AIPA), based on the simultaneous information provided by X-ray and radioisotopic images, to determine the biokinetic models of Tc-99m radiopharmaceuticals from quantification of image radiation activity in murine models. The radioisotopic images were obtained by a CCD (charge-coupled device) camera coupled to an ultrathin phosphorous screen in a preclinical multimodal imaging system (Xtreme, Bruker). The AIPA consists of different image processing methods for background, scattering and attenuation correction in the activity quantification. A set of parametric identification algorithms was used to obtain the biokinetic models that characterize the interaction between different tissues and the radiopharmaceuticals considered in the study. The resulting biokinetic models corresponded to the Tc-99m biodistribution observed in different ex vivo studies, confirming the contribution of the semi-automatic image processing technique developed in this study.

  16. Progress in Modeling Global Atmospheric CO2 Fluxes and Transport: Results from Simulations with Diurnal Fluxes

    NASA Technical Reports Server (NTRS)

    Collatz, G. James; Kawa, R.

    2007-01-01

    Progress in better determining CO2 sources and sinks will almost certainly rely on utilization of more extensive and intensive CO2 and related observations, including those from satellite remote sensing. Use of advanced data requires improved modeling and analysis capability. Under NASA Carbon Cycle Science support we seek to develop and integrate improved formulations for 1) atmospheric transport, 2) terrestrial uptake and release, 3) biomass burning, 4) fossil fuel burning, and 5) observational data analysis including inverse calculations. The transport modeling is based on meteorological data assimilation analysis from the Goddard Global Modeling and Assimilation Office. Use of assimilated meteorological data enables model comparison to CO2 and other observations across a wide range of scales of variability. In this presentation we focus on the short end of the temporal variability spectrum: hourly to synoptic to seasonal. Using CO2 fluxes at varying temporal resolution from the SiB2 and CASA biosphere models, we examine the model's ability to simulate CO2 variability in comparison to observations at different times, locations, and altitudes. We find that the model can resolve much of the variability in the observations, although there are limits imposed by the vertical resolution of boundary layer processes. The influence of key process representations is inferred. The high degree of fidelity in these simulations leads us to anticipate incorporation of real-time, highly resolved observations into a multiscale carbon cycle analysis system that will begin to bridge the gap between top-down and bottom-up flux estimation, a primary focus of NACP.

  17. Observational clues to the energy release process in impulsive solar bursts

    NASA Technical Reports Server (NTRS)

    Batchelor, David

    1990-01-01

    The nature of the energy release process that produces impulsive bursts of hard X-rays and microwaves during solar flares is discussed, based on new evidence obtained using the method of Crannell et al. (1978). It is shown that the hard X-ray spectral index gamma is negatively correlated with the microwave peak frequency, suggesting a common source for the microwaves and X-rays. The thermal and nonthermal models are compared. It is found that the most straightforward explanations for burst time behavior are shock-wave particle acceleration in the nonthermal model and thermal conduction fronts in the thermal model.

  18. A Study on the Relationships among Surface Variables to Adjust the Height of Surface Temperature for Data Assimilation.

    NASA Astrophysics Data System (ADS)

    Kang, J. H.; Song, H. J.; Han, H. J.; Ha, J. H.

    2016-12-01

    The observation processing system KPOP (KIAPS - Korea Institute of Atmospheric Prediction Systems - Package for Observation Processing) has been developed to provide optimal observations to the data assimilation system for the KIAPS Integrated Model (KIM). Currently, KPOP is capable of processing almost all of the observations used in the KMA (Korea Meteorological Administration) operational global data assimilation system. Height adjustment of SURFACE observations is essential for quality control because of the difference in height between the observation station and the model topography. For SURFACE observations, it is usual to adjust the height using a lapse rate or the hypsometric equation, both of which determine the adjustment mainly from the height difference. We question whether the height can be properly adjusted by a linear or exponential relationship that depends solely on the height difference and disregards the atmospheric conditions. In this study, we first analyse the change of surface variables such as temperature (T2m), pressure (Psfc), humidity (RH2m and Q2m), and wind components (U and V) according to the height difference. Additionally, we look further into the relationships among the surface variables. The difference in pressure shows a strong linear relationship with the difference in height, but the difference in temperature shows a stronger correlation with the difference in relative humidity than with the height difference. A reliable model for the height adjustment of surface temperature is being developed based on these preliminary results.
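
    The two standard adjustments discussed above can be sketched as follows (a constant-lapse-rate correction for 2-m temperature and the hypsometric equation for surface pressure); the constants are standard values, and the station and model heights are hypothetical.

      # Height adjustment of surface observations: lapse rate and hypsometric.
      import math

      GAMMA = 0.0065   # K/m, standard atmospheric lapse rate
      G = 9.80665      # m/s^2
      RD = 287.05      # J/(kg K), dry-air gas constant

      def adjust_t2m(t_obs, z_station, z_model):
          """Move 2-m temperature (K) from station height to model height."""
          return t_obs + GAMMA * (z_station - z_model)

      def adjust_psfc(p_obs, t_mean, z_station, z_model):
          """Hypsometric correction using the layer-mean temperature t_mean (K)."""
          return p_obs * math.exp(-G * (z_model - z_station) / (RD * t_mean))

      print(adjust_t2m(288.15, z_station=350.0, z_model=500.0))           # K
      print(adjust_psfc(96500.0, 285.0, z_station=350.0, z_model=500.0))  # Pa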

  19. Case studies, cross-site comparisons, and the challenge of generalization: comparing agent-based models of land-use change in frontier regions

    PubMed Central

    Parker, Dawn C.; Entwisle, Barbara; Rindfuss, Ronald R.; Vanwey, Leah K.; Manson, Steven M.; Moran, Emilio; An, Li; Deadman, Peter; Evans, Tom P.; Linderman, Marc; Rizi, S. Mohammad Mussavi; Malanson, George

    2009-01-01

    Cross-site comparisons of case studies have been identified as an important priority by the land-use science community. From an empirical perspective, such comparisons potentially allow generalizations that may contribute to production of global-scale land-use and land-cover change projections. From a theoretical perspective, such comparisons can inform development of a theory of land-use science by identifying potential hypotheses and supporting or refuting evidence. This paper undertakes a structured comparison of four case studies of land-use change in frontier regions that follow an agent-based modeling approach. Our hypothesis is that each case study represents a particular manifestation of a common process. Given differences in initial conditions among sites and the time at which the process is observed, actual mechanisms and outcomes are anticipated to differ substantially between sites. Our goal is to reveal both commonalities and differences among research sites, model implementations, and ultimately, conclusions derived from the modeling process. PMID:19960107

  20. Case studies, cross-site comparisons, and the challenge of generalization: comparing agent-based models of land-use change in frontier regions.

    PubMed

    Parker, Dawn C; Entwisle, Barbara; Rindfuss, Ronald R; Vanwey, Leah K; Manson, Steven M; Moran, Emilio; An, Li; Deadman, Peter; Evans, Tom P; Linderman, Marc; Rizi, S Mohammad Mussavi; Malanson, George

    2008-01-01

    Cross-site comparisons of case studies have been identified as an important priority by the land-use science community. From an empirical perspective, such comparisons potentially allow generalizations that may contribute to production of global-scale land-use and land-cover change projections. From a theoretical perspective, such comparisons can inform development of a theory of land-use science by identifying potential hypotheses and supporting or refuting evidence. This paper undertakes a structured comparison of four case studies of land-use change in frontier regions that follow an agent-based modeling approach. Our hypothesis is that each case study represents a particular manifestation of a common process. Given differences in initial conditions among sites and the time at which the process is observed, actual mechanisms and outcomes are anticipated to differ substantially between sites. Our goal is to reveal both commonalities and differences among research sites, model implementations, and ultimately, conclusions derived from the modeling process.

  1. Potential for real-time understanding of coupled hydrologic and biogeochemical processes in stream ecosystems: Future integration of telemetered data with process models for glacial meltwater streams

    NASA Astrophysics Data System (ADS)

    McKnight, Diane M.; Cozzetto, Karen; Cullis, James D. S.; Gooseff, Michael N.; Jaros, Christopher; Koch, Joshua C.; Lyons, W. Berry; Neupauer, Roseanna; Wlostowski, Adam

    2015-08-01

    While continuous monitoring of streamflow and temperature has been common for some time, there is great potential to expand continuous monitoring to include water quality parameters such as nutrients, turbidity, oxygen, and dissolved organic material. In many systems, distinguishing between watershed and stream ecosystem controls can be challenging. The usefulness of such monitoring can be enhanced by the application of quantitative models to interpret observed patterns in real time. Examples are discussed primarily from the glacial meltwater streams of the McMurdo Dry Valleys, Antarctica. Although the Dry Valley landscape is barren of plants, many streams harbor thriving cyanobacterial mats. Whereas the daily cycle of streamflow is controlled by the surface energy balance on the glaciers and the temporal pattern of solar exposure, the daily signal for biogeochemical processes controlling water quality is generated along the stream. These features result in an excellent outdoor laboratory for investigating fundamental ecosystem processes and for the development and validation of process-based models. As part of the McMurdo Dry Valleys Long-Term Ecological Research project, we have conducted field experiments and developed coupled biogeochemical transport models for the role of hyporheic exchange in controlling weathering reactions, microbial nitrogen cycling, and stream temperature regulation. We have adapted modeling approaches from sediment transport to understand mobilization of stream biomass with increasing flows. These models help to elucidate the role of in-stream processes in systems where watershed processes also contribute to observed patterns, and may serve as a test case for applying real-time stream ecosystem models.
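
    A schematic of the kind of coupled transport model referred to above is a one-reach transient-storage (hyporheic exchange) system; the sketch below solves it with SciPy, with all rate constants and the solute pulse chosen as hypothetical illustrations rather than calibrated Dry Valley values.

      # Well-mixed reach exchanging solute with a storage (hyporheic) zone.
      from scipy.integrate import solve_ivp

      q_over_v = 0.5     # 1/h, flushing rate of the main channel
      alpha = 0.2        # 1/h, exchange coefficient with the storage zone
      area_ratio = 2.0   # main-channel to storage-zone cross-section ratio

      def c_in(t):
          return 1.0 if t < 2.0 else 0.0   # 2-hour solute pulse (mg/L)

      def rhs(t, y):
          c, cs = y   # main-channel and storage-zone concentrations
          return [q_over_v * (c_in(t) - c) + alpha * (cs - c),
                  alpha * area_ratio * (c - cs)]

      sol = solve_ivp(rhs, (0.0, 24.0), [0.0, 0.0], max_step=0.1,
                      dense_output=True)
      print(sol.sol(6.0), sol.sol(24.0))   # storage exchange produces a long tail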

  2. An Empirical Study on Washback Effects of the Internet-Based College English Test Band 4 in China

    ERIC Educational Resources Information Center

    Wang, Chao; Yan, Jiaolan; Liu, Bao

    2014-01-01

    Based on Bailey's washback model, with respect to participants, processes and products, the present empirical study was conducted to find the actual washback effects of the Internet-Based College English Test Band 4 (IB CET-4). The methods adopted are questionnaires, class observation, interviews and the analysis of both CET-4 teaching and testing…

  3. Estimation of Transpiration and Water Use Efficiency Using Satellite and Field Observations

    NASA Technical Reports Server (NTRS)

    Choudhury, Bhaskar J.; Quick, B. E.

    2003-01-01

    Structure and function of terrestrial plant communities bring about intimate relations between water, energy, and carbon exchange between the land surface and atmosphere. Total evaporation, which is the sum of transpiration, soil evaporation and evaporation of intercepted water, couples the water and energy balance equations. The rate of transpiration, which is the major fraction of total evaporation over most of the terrestrial land surface, is linked to the rate of carbon accumulation because the functioning of stomata is optimized by both of these processes. Thus, quantifying the spatial and temporal variations of the transpiration efficiency (defined as the ratio of the rate of carbon accumulation to the rate of transpiration) and the water use efficiency (defined as the ratio of the rate of carbon accumulation to the rate of total evaporation), and evaluating modeling results against observations, are of significant importance in developing a better understanding of land surface processes. An approach has been developed for quantifying spatial and temporal variations of transpiration and water-use efficiency based on biophysical process-based models, satellite and field observations. Calculations have been done using concurrent meteorological data derived from satellite observations and four-dimensional data assimilation for four consecutive years (1987-1990) over an agricultural area in the Northern Great Plains of the US, and compared with field observations within and outside the study area. The paper provides substantive new information about interannual variation, particularly the effect of drought, on the efficiency values at a regional scale.

  4. Building v/s Exploring Models: Comparing Learning of Evolutionary Processes through Agent-based Modeling

    NASA Astrophysics Data System (ADS)

    Wagh, Aditi

    Two strands of work motivate the three studies in this dissertation. First, evolutionary change can be viewed as a computational complex system in which a small set of rules operating at the individual level results in different population-level outcomes under different conditions, and extensive research has documented students' difficulties with learning about evolutionary change (Rosengren et al., 2012), particularly in terms of levels slippage (Wilensky & Resnick, 1999). Second, though building and using computational models is becoming increasingly common in K-12 science education, we know little about how these two modalities compare. This dissertation adopts agent-based modeling as a representational system to compare these modalities in the conceptual context of micro-evolutionary processes. Drawing on interviews, Study 1 examines middle-school students' productive ways of reasoning about micro-evolutionary processes and finds that the specific framing of traits plays a key role in whether slippage explanations are cued. Study 2, which was conducted in two schools with about 150 students, forms the crux of the dissertation. It compares learning processes and outcomes when students build their own models or explore a pre-built model. Analysis of Camtasia videos of student pairs reveals that builders' and explorers' ways of accessing rules, and their sense-making of observed trends, are of a different character. Builders notice rules through the available blocks-based primitives, often bypassing their enactment, while explorers attend to rules primarily through the enactment. Moreover, builders' sense-making of observed trends is more rule-driven while explorers' is more enactment-driven. Pre- and post-tests reveal that builders manifest a greater facility with accessing rules, providing explanations manifesting targeted assembly, whereas explorers use rules to construct explanations manifesting non-targeted assembly. Interviews reveal varying degrees of shifts away from slippage in both modalities, with students who built models not incorporating slippage explanations in their responses. Study 3 compares these modalities with a control using traditional activities; pre- and post-tests reveal that the two modalities manifested greater facility with accessing and assembling rules than the control. The dissertation offers implications for the design of learning environments for evolutionary change, the design of the two modalities based on their strengths and weaknesses, and teacher training.

  5. An Observation-based Assessment of Instrument Requirements for a Future Precipitation Process Observing System

    NASA Astrophysics Data System (ADS)

    Nelson, E.; L'Ecuyer, T. S.; Wood, N.; Smalley, M.; Kulie, M.; Hahn, W.

    2017-12-01

    Global models exhibit substantial biases in the frequency, intensity, duration, and spatial scales of precipitation systems. Much of this uncertainty stems from an inadequate representation of the processes by which water is cycled between the surface and atmosphere and, in particular, those that govern the formation and maintenance of cloud systems and their propensity to form precipitation. Progress toward improving precipitation process models requires observing systems capable of quantifying the coupling between the ice content, vertical mass fluxes, and precipitation yield of precipitating cloud systems. Spaceborne multi-frequency Doppler radar offers a unique opportunity to address this need, but the effectiveness of such a mission is heavily dependent on its ability to actually observe the processes of interest in the widest possible range of systems. Planning for a next-generation precipitation process observing system should, therefore, start with a fundamental evaluation of the trade-offs between sensitivity, resolution, sampling, cost, and the overall potential scientific yield of the mission. Here we provide an initial assessment of the scientific and economic trade-space by evaluating hypothetical spaceborne multi-frequency radars using a combination of current real-world and model-derived synthetic observations. Specifically, we alter the field of view, vertical resolution, and sensitivity of a hypothetical Ka- and W-band radar system and propagate those changes through precipitation detection and intensity retrievals. The results suggest that sampling biases introduced by reducing sensitivity disproportionately affect the light rainfall and frozen precipitation regimes that are critical for warm cloud feedbacks and ice sheet mass balance, respectively. Coarser spatial resolution introduces biases in both precipitation occurrence and intensity that depend on cloud regime, with even the sign of the bias varying within a single storm system. It is suggested that the next-generation spaceborne radar have a minimum sensitivity of -5 dBZ and a spatial resolution of at least 3 km at all frequencies to adequately sample liquid- and ice-phase precipitation processes globally.
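
    The sensitivity trade-off can be illustrated with a toy calculation: given assumed (synthetic) reflectivity distributions for light rain and snowfall, the fraction of precipitating scenes detected falls off as the minimum detectable reflectivity is raised. The distributions below are stand-ins, not the study's observations.

      # Detection fraction versus minimum detectable reflectivity (synthetic).
      import numpy as np

      rng = np.random.default_rng(0)
      light_rain = rng.normal(loc=0.0, scale=6.0, size=100_000)  # dBZ (synthetic)
      snowfall = rng.normal(loc=-2.0, scale=5.0, size=100_000)   # dBZ (synthetic)

      for threshold in (-15, -5, 0, 5):
          print(f"min detectable {threshold:>3} dBZ: "
                f"light rain {np.mean(light_rain > threshold):.0%}, "
                f"snow {np.mean(snowfall > threshold):.0%} detected")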

  6. A stochastic estimation procedure for intermittently-observed semi-Markov multistate models with back transitions.

    PubMed

    Aralis, Hilary; Brookmeyer, Ron

    2017-01-01

    Multistate models provide an important method for analyzing a wide range of life history processes including disease progression and patient recovery following medical intervention. Panel data consisting of the states occupied by an individual at a series of discrete time points are often used to estimate transition intensities of the underlying continuous-time process. When transition intensities depend on the time elapsed in the current state and back transitions between states are possible, this intermittent observation process presents difficulties in estimation due to intractability of the likelihood function. In this manuscript, we present an iterative stochastic expectation-maximization algorithm that relies on a simulation-based approximation to the likelihood function and implement this algorithm using rejection sampling. In a simulation study, we demonstrate the feasibility and performance of the proposed procedure. We then demonstrate application of the algorithm to a study of dementia, the Nun Study, consisting of intermittently-observed elderly subjects in one of four possible states corresponding to intact cognition, impaired cognition, dementia, and death. We show that the proposed stochastic expectation-maximization algorithm substantially reduces bias in model parameter estimates compared to an alternative approach used in the literature, minimal path estimation. We conclude that in estimating intermittently observed semi-Markov models, the proposed approach is a computationally feasible and accurate estimation procedure that leads to substantial improvements in back transition estimates.
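
    The basic building block of such a simulation-based approximation can be sketched as follows: simulate semi-Markov paths (Weibull sojourn times and an embedded transition matrix that allows a back transition) and keep only those consistent with the panel observations. The three-state structure, distributions, and panel data below are hypothetical, and practical implementations use far more efficient rejection schemes than this naive one.

      # Naive rejection sampling of semi-Markov paths against panel data.
      import numpy as np

      rng = np.random.default_rng(1)
      P = np.array([[0.0, 0.8, 0.2],    # embedded transition probabilities
                    [0.5, 0.0, 0.5],    # state 1 allows a back transition to 0
                    [0.0, 0.0, 1.0]])   # state 2 (death) is absorbing
      shape = [1.5, 1.2, 1.0]           # Weibull sojourn shapes (hypothetical)

      def simulate_path(t_end):
          t, state, path = 0.0, 0, [(0.0, 0)]
          while t < t_end and state != 2:
              t += rng.weibull(shape[state])       # sojourn in the current state
              state = rng.choice(3, p=P[state])    # embedded-chain jump
              path.append((t, state))
          return path

      def state_at(path, t):
          return [s for (u, s) in path if u <= t][-1]

      panel = [(1.0, 0), (2.5, 1), (4.0, 1)]       # (time, observed state)
      n = 20_000
      accepted = sum(all(state_at(p, t) == s for t, s in panel)
                     for p in (simulate_path(5.0) for _ in range(n)))
      print(f"acceptance rate: {accepted / n:.3%}")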

  7. Snow model design for operational purposes

    NASA Astrophysics Data System (ADS)

    Kolberg, Sjur

    2017-04-01

    A parsimonious distributed energy-balance snow model intended for operational use is evaluated using discharge, snow-covered area and grain size; the latter two as observed from the MODIS sensor. The snow model is an improvement of the existing GamSnow model, which is a part of the Enki modelling framework. Core requirements for the new version have been: 1. reduction of calibration freedom, motivated by previous experience of non-identifiable parameters in the existing version; 2. improvement of process representation based on recent advances in physically based snow modelling; 3. limited sensitivity to forcing data which are poorly known over the spatial domain of interest (often in mountainous areas); 4. preference for observable states, and the ability to improve from updates. The albedo calculation is completely revised, now based on grain size through an emulation of the SNICAR model (Flanner and Zender, 2006; Gardner and Sharp, 2010). The number of calibration parameters in the albedo model is reduced from 6 to 2. The wind function governing turbulent energy fluxes has been reduced from 2 parameters to 1. Following Raleigh et al. (2011), the snow surface radiant temperature is split from the top-layer thermodynamic temperature, using bias-corrected wet-bulb temperature to model the former. Analyses are ongoing, and the poster will present evaluation results from 16 years of MODIS observations and more than 25 catchments in southern Norway.
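
    The flavor of a grain-size-based albedo scheme can be conveyed with an illustrative two-parameter form in which broadband albedo decays roughly with the square root of the optical grain radius; this is not the actual SNICAR emulation used in GamSnow, and both coefficients below are hypothetical.

      # Illustrative two-parameter grain-size-to-albedo function (assumed form).
      import numpy as np

      def broadband_albedo(r_eff_um, a_fresh=0.92, k=0.003):
          """Albedo as a function of optical grain radius in micrometres."""
          return a_fresh - k * np.sqrt(r_eff_um)

      for r in (50, 200, 1000):   # fresh snow -> coarse, wet spring snow
          print(f"r_eff = {r:>4} um -> albedo = {broadband_albedo(r):.2f}")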

  8. Korea-United States Air Quality (KORUS-AQ) Campaign

    NASA Technical Reports Server (NTRS)

    Castellanos, Patricia; Da Silva, Arlindo; Longo-De Freitas, Karla

    2017-01-01

    The Korea-United States Air Quality (KORUS-AQ) campaign was an international cooperative field study based out of Osan Air Base, Songtan, South Korea (about 60 kilometers south of Seoul) in April-June 2016. A comprehensive suite of instruments capable of measuring atmospheric composition was deployed around the Korean peninsula on aircraft, on ships, and at ground sites in order to characterize local and transboundary pollution. The NASA Goddard Earth Observing System, version 5 (GEOS-5) forecast model was used for near-real-time meteorological and aerosol forecasting and flight planning during the KORUS-AQ campaign. Evaluation of GEOS-5 against observations from the campaign will help to identify inaccuracies in the model's physical and chemical processes in this region of East Asia and lead to further development of the modeling system.

  9. Comprehensive, Process-based Identification of Hydrologic Models using Satellite and In-situ Water Storage Data: A Multi-objective calibration Approach

    NASA Astrophysics Data System (ADS)

    Abdo Yassin, Fuad; Wheater, Howard; Razavi, Saman; Sapriza, Gonzalo; Davison, Bruce; Pietroniro, Alain

    2015-04-01

    The credible identification of vertical and horizontal hydrological components and their associated parameters is very challenging (if not impossible) when the model is constrained only by streamflow data, especially in regions where the vertical processes significantly dominate the horizontal processes. The prairie areas of the Saskatchewan River basin, a major water system in Canada, demonstrate such behavior, where the hydrologic connectivity and vertical fluxes are mainly controlled by the amount of surface and sub-surface water storage. In this study, we develop a framework for distributed hydrologic model identification and calibration that jointly constrains the model response (i.e., streamflows) as well as a set of model state variables (i.e., water storages) to observations. This framework is set up in the form of multi-objective optimization, where multiple performance criteria are defined and used to simultaneously evaluate the fidelity of the model to streamflow observations and to observed (estimated) changes of water storage in the gridded landscape over daily and monthly time scales. The time series of estimated changes in total water storage (including soil, canopy, snow and pond storage) used in this study were derived from an experimental study enhanced by information obtained from the GRACE satellite. We test this framework on the calibration of a land surface scheme-hydrology model, called MESH (Modélisation Environnementale Communautaire - Surface and Hydrology), for the Saskatchewan River basin. Pareto Archived Dynamically Dimensioned Search (PA-DDS) is used as the multi-objective optimization engine. The significance of the developed framework is demonstrated in comparison with the results obtained through a conventional calibration approach against streamflow observations alone. Incorporating water storage data into the model identification process can more tightly constrain the posterior parameter space, more comprehensively evaluate model fidelity, and yield more credible predictions.
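
    The joint-constraint idea can be sketched as two separate efficiency scores, one against streamflow and one against storage change, handed to a Pareto-based optimizer; the arrays and the dominance test below are illustrative, not the MESH/PA-DDS implementation.

      # Two objectives for a multi-objective calibrator, plus a dominance test.
      import numpy as np

      def nse(sim, obs):
          """Nash-Sutcliffe efficiency (1 is a perfect fit)."""
          return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

      def dominates(fa, fb):
          """True if objective vector fa Pareto-dominates fb (maximization)."""
          fa, fb = np.asarray(fa), np.asarray(fb)
          return bool(np.all(fa >= fb) and np.any(fa > fb))

      q_obs = np.array([10.0, 14.0, 30.0, 22.0, 12.0])  # streamflow (hypothetical)
      q_sim = np.array([11.0, 13.0, 26.0, 24.0, 13.0])
      s_obs = np.array([5.0, -2.0, 18.0, -9.0, -6.0])   # storage change (hypothetical)
      s_sim = np.array([4.0, -1.0, 15.0, -7.0, -8.0])

      objectives = (nse(q_sim, q_obs), nse(s_sim, s_obs))
      print(objectives, dominates(objectives, (0.5, 0.5)))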

  10. Foundation observation of teaching project--a developmental model of peer observation of teaching.

    PubMed

    Pattison, Andrew Timothy; Sherwood, Morgan; Lumsden, Colin James; Gale, Alison; Markides, Maria

    2012-01-01

    Peer observation of teaching is important in the development of educators. The foundation curriculum specifies teaching competencies that must be attained. We created a developmental model of peer observation of teaching to help our foundation doctors achieve these competencies and develop as educators. A process for peer observation was created based on key features of faculty development. The project consisted of a pre-observation meeting, the observation, a post-observation debrief, the writing of reflective reports and group feedback sessions. The project was evaluated through questionnaires and focus groups held with both the foundation doctors and the students they taught, to achieve triangulation. Twenty-one foundation doctors took part. All completed reflective reports on their teaching. Participants described the process as useful in their development as educators, citing specific examples of changes to their teaching practice. Medical students rated the sessions as better or much better in quality than their usual teaching. The study highlights the benefits of the project to individual foundation doctors, undergraduate medical students and faculty. It acknowledges potential anxieties involved in having teaching observed. A structured programme of observation of teaching can deliver specific teaching competencies required of foundation doctors and provides additional benefits.

  11. Markov Chain-Like Quantum Biological Modeling of Mutations, Aging, and Evolution.

    PubMed

    Djordjevic, Ivan B

    2015-08-24

    Recent evidence suggests that quantum mechanics is relevant in photosynthesis, magnetoreception, enzymatic catalytic reactions, olfactory reception, photoreception, genetics, electron transfer in proteins, and evolution, to mention a few. In our recent paper published in Life, we derived the operator-sum representation of a biological channel based on codon basekets, and determined the quantum channel model suitable for study of the quantum biological channel capacity. However, this model is essentially memoryless and is not able to properly model the propagation of mutation errors in time, the process of aging, or the evolution of genetic information through generations. To solve these problems, we propose novel quantum mechanical models to accurately describe the creation of spontaneous, induced, and adaptive mutations and their propagation in time. The different biological channel models with memory proposed in this paper include: (i) a Markovian classical model, (ii) a Markovian-like quantum model, and (iii) a hybrid quantum-classical model. We then apply these models in a study of aging and the evolution of quantum biological channel capacity through generations. We also discuss key differences of these models with respect to a multilevel symmetric channel-based Markovian model and a Kimura model-based Markovian process. These models are quite general and applicable to many open problems in biology, not only biological channel capacity, which is the main focus of the paper. We show that the famous quantum Master equation approach, commonly used to describe different biological processes, is just the first-order approximation of the proposed quantum Markov chain-like model, when the observation interval tends to zero. One important implication of this model is that the aging phenotype becomes determined by different underlying transition probabilities in both programmed and random (damage) Markov chain-like models of aging, which are mutually coupled.
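
    The classical model (i) can be illustrated with a toy per-generation transition matrix over a four-letter state space, propagated across generations by matrix powers; the substitution probability below is hypothetical.

      # Discrete-time Markov chain of mutation propagation across generations.
      import numpy as np

      mu = 1e-3                       # per-generation substitution probability
      P = np.full((4, 4), mu / 3.0)   # A, C, G, T off-diagonal transitions
      np.fill_diagonal(P, 1.0 - mu)   # rows sum to 1

      p0 = np.array([1.0, 0.0, 0.0, 0.0])   # population starts as all 'A'
      for gens in (1, 100, 10_000):
          pn = p0 @ np.linalg.matrix_power(P, gens)
          print(gens, np.round(pn, 4))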

  12. Markov Chain-Like Quantum Biological Modeling of Mutations, Aging, and Evolution

    PubMed Central

    Djordjevic, Ivan B.

    2015-01-01

    Recent evidence suggests that quantum mechanics is relevant in photosynthesis, magnetoreception, enzymatic catalytic reactions, olfactory reception, photoreception, genetics, electron transfer in proteins, and evolution, to mention a few. In our recent paper published in Life, we derived the operator-sum representation of a biological channel based on codon basekets, and determined the quantum channel model suitable for study of the quantum biological channel capacity. However, this model is essentially memoryless and is not able to properly model the propagation of mutation errors in time, the process of aging, or the evolution of genetic information through generations. To solve these problems, we propose novel quantum mechanical models to accurately describe the creation of spontaneous, induced, and adaptive mutations and their propagation in time. The different biological channel models with memory proposed in this paper include: (i) a Markovian classical model, (ii) a Markovian-like quantum model, and (iii) a hybrid quantum-classical model. We then apply these models in a study of aging and the evolution of quantum biological channel capacity through generations. We also discuss key differences of these models with respect to a multilevel symmetric channel-based Markovian model and a Kimura model-based Markovian process. These models are quite general and applicable to many open problems in biology, not only biological channel capacity, which is the main focus of the paper. We show that the famous quantum Master equation approach, commonly used to describe different biological processes, is just the first-order approximation of the proposed quantum Markov chain-like model, when the observation interval tends to zero. One important implication of this model is that the aging phenotype becomes determined by different underlying transition probabilities in both programmed and random (damage) Markov chain-like models of aging, which are mutually coupled. PMID:26305258

  13. Numerical simulation of dendrite growth in nickel-based superalloy and validated by in-situ observation using high temperature confocal laser scanning microscopy

    NASA Astrophysics Data System (ADS)

    Yan, Xuewei; Xu, Qingyan; Liu, Baicheng

    2017-12-01

    Dendritic structures are the predominant microstructural constituents of nickel-based superalloys, and an understanding of dendrite growth is required in order to obtain the desirable microstructure and improve the performance of castings. For this reason, a numerical simulation method and an in-situ observation technique employing high-temperature confocal laser scanning microscopy (HT-CLSM) were used to investigate dendrite growth during the solidification process. A combined cellular automaton-finite difference (CA-FD) model allowing for the prediction of dendrite growth in binary alloys was developed. The cell-capture algorithm was modified, and a deterministic cellular automaton (DCA) model was proposed to describe neighborhood tracking. Dendrite morphologies and their details, especially the distribution of hundreds of dendrites at a large scale and three-dimensional (3-D) polycrystalline growth, were successfully simulated with this model. The dendritic morphologies of samples before and after HT-CLSM were both observed by optical microscopy (OM) and scanning electron microscopy (SEM). The experimental observations showed reasonable agreement with the simulation results. It was also found that the primary and secondary dendrite arm spacings and the segregation pattern were significantly influenced by dendrite growth. Furthermore, the directional solidification (DS) dendritic evolution behavior and detailed morphology were also simulated with the proposed model, and the simulation results again agree well with the experimental results.
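
    A highly schematic flavor of the CA capture step (without the finite-difference thermal solver, anisotropy, or solute field of the actual CA-FD model) is sketched below: starting from a seed nucleus, liquid cells with a solid neighbour solidify where a frozen random "undercooling" field exceeds a threshold. All numbers are hypothetical.

      # Schematic CA capture rule on a 2-D grid; not the paper's DCA model.
      import numpy as np

      rng = np.random.default_rng(2)
      n = 101
      solid = np.zeros((n, n), dtype=bool)
      solid[n // 2, n // 2] = True                   # seed nucleus
      undercooling = rng.uniform(0.0, 1.0, (n, n))   # frozen random field

      for _ in range(60):
          nbr = (np.roll(solid, 1, 0) | np.roll(solid, -1, 0) |
                 np.roll(solid, 1, 1) | np.roll(solid, -1, 1))
          # crude capture rule: a liquid cell with a solid 4-neighbour
          # solidifies where the local undercooling is large enough
          solid |= nbr & (undercooling > 0.55)

      print(f"solid fraction after 60 steps: {solid.mean():.2%}")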

  14. Importance of vesicle release stochasticity in neuro-spike communication.

    PubMed

    Ramezani, Hamideh; Akan, Ozgur B

    2017-07-01

    The aim of this paper is to propose a stochastic model for the vesicle release process, a part of neuro-spike communication. We study the biological events occurring in this process and use microphysiological simulations to observe the functionality of these events. Since the most important source of variability in vesicle release probability is the opening of voltage-dependent calcium channels (VDCCs) followed by the influx of calcium ions through these channels, we propose a stochastic model for this event, while using a deterministic model for the other sources of variability. To capture the stochasticity of the calcium influx into the pre-synaptic neuron in our model, we study its statistics and find that it can be modeled by a distribution defined in terms of Normal and Logistic distributions.
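
    As a sketch of what a distribution defined in terms of Normal and Logistic distributions might look like in practice, the snippet below draws a calcium-influx statistic from a simple Normal/Logistic mixture; the mixture weight and all parameters are hypothetical, not the paper's fitted values.

      # Sampling from an illustrative Normal/Logistic mixture.
      import numpy as np

      rng = np.random.default_rng(3)

      def sample_ca_influx(size, w=0.6, mu_n=1.0, sd_n=0.2, mu_l=1.3, s_l=0.1):
          """Draw from a Normal/Logistic mixture (all parameters illustrative)."""
          pick = rng.random(size) < w
          return np.where(pick,
                          rng.normal(mu_n, sd_n, size),
                          rng.logistic(mu_l, s_l, size))

      x = sample_ca_influx(100_000)
      print(f"mean={x.mean():.3f}, std={x.std():.3f}")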

  15. Application of a simple cerebellar model to geologic surface mapping

    USGS Publications Warehouse

    Hagens, A.; Doveton, J.H.

    1991-01-01

    Neurophysiological research into the structure and function of the cerebellum has inspired computational models that simulate information processing associated with coordination and motor movement. The cerebellar model arithmetic computer (CMAC) has a design structure which makes it readily applicable as an automated mapping device that "senses" a surface, based on a sample of discrete observations of surface elevation. The model operates as an iterative learning process, where cell weights are continuously modified by feedback to improve surface representation. The storage requirements are substantially less than those of a conventional memory allocation, and the model is extended easily to mapping in multidimensional space, where the memory savings are even greater. © 1991.
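
    A minimal CMAC-style sketch is shown below: several coarse, offset tilings of the (x, y) plane each contribute one weight, and an iterative error-correction rule learns scattered "elevation" samples. The tiling counts, learning rate, and synthetic surface are hypothetical choices, not those of the cited study.

      # Tile-coding (CMAC-style) function approximation of a 2-D surface.
      import numpy as np

      rng = np.random.default_rng(4)
      n_tilings, tiles, lr = 8, 16, 0.2
      weights = np.zeros((n_tilings, tiles, tiles))
      offsets = rng.random((n_tilings, 2)) / tiles   # shift of each tiling

      def active_tiles(x, y):
          """One active tile per tiling for a point (x, y) in [0, 1)."""
          return [(k,
                   int((x + offsets[k, 0]) * tiles) % tiles,
                   int((y + offsets[k, 1]) * tiles) % tiles)
                  for k in range(n_tilings)]

      def predict(x, y):
          return sum(weights[k, i, j] for k, i, j in active_tiles(x, y))

      surface = lambda x, y: np.sin(2 * np.pi * x) + y ** 2   # "true" elevation
      samples = rng.random((2000, 2))
      for _ in range(20):                        # iterative learning passes
          for x, y in samples:
              err = surface(x, y) - predict(x, y)
              for k, i, j in active_tiles(x, y):
                  weights[k, i, j] += lr * err / n_tilings

      print(f"CMAC at (0.3, 0.7): {predict(0.3, 0.7):.3f} "
            f"(true {surface(0.3, 0.7):.3f})")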

  16. Mathematical modeling of malaria infection with innate and adaptive immunity in individuals and agent-based communities.

    PubMed

    Gurarie, David; Karl, Stephan; Zimmerman, Peter A; King, Charles H; St Pierre, Timothy G; Davis, Timothy M E

    2012-01-01

    Agent-based modeling of Plasmodium falciparum infection offers an attractive alternative to the conventional Ross-Macdonald methodology, as it allows simulation of heterogeneous communities subjected to realistic transmission (inoculation) patterns. We developed a new agent-based model that accounts for the essential in-host processes: parasite replication and its regulation by innate and adaptive immunity. The model also incorporates a simplified version of antigenic variation by Plasmodium falciparum. We calibrated the model using data from malaria-therapy (MT) studies and developed a novel calibration procedure that accounts for a deterministic and a pseudo-random component in the observed parasite density patterns. Using the parasite density patterns of 122 MT patients, we generated a large number of calibrated parameters. The resulting data set served as a basis for constructing and simulating heterogeneous agent-based (AB) communities of MT-like hosts. We conducted several numerical experiments subjecting AB communities to realistic inoculation patterns reported from previous field studies, and compared the model output to the observed malaria prevalence in the field. There was overall consistency, supporting the potential of this agent-based methodology to represent transmission in realistic communities. Our approach represents a novel, convenient and versatile method to model Plasmodium falciparum infection.

  17. Impacts of Aerosol Direct Effects on the South Asian climate: Assessment of Radiative Feedback Processes Using Model Simulations and Satellite/surface Measurements

    NASA Astrophysics Data System (ADS)

    Wang, S.; Gautam, R.; Lau, W. K.; Tsay, S.; Sun, W.; Kim, K.; Chern, J.; Colarco, P. R.; Hsu, N. C.; Lin, N.

    2011-12-01

    Current assessment of the aerosol radiative effect is hindered by our incomplete knowledge of aerosol optical properties, especially absorption, and by our current inability to quantify physical and microphysical processes. In this research, we investigate the direct aerosol radiative effect over heavy aerosol loading areas (e.g., the Indo-Gangetic Plains, South/East Asia) and its feedbacks on the South Asian climate during the pre-monsoon season (March-June) using the Purdue Regional Climate Model (PRCM) with prescribed aerosol data derived from the NASA Goddard Earth Observing System Model (GEOS-5). Our modeling domain covers South and East Asia (60-140E and 0-50N) with a horizontal resolution of 45 km and 28 vertical layers. The model is integrated from 15 February to 30 June 2008 continuously without nudging (i.e., forced only by initial/boundary conditions). Two numerical experiments are conducted, with and without the aerosol-radiation effects. Both simulations are successful in reproducing the synoptic patterns on seasonal-to-interannual time scales and in capturing a pre-monsoon feature of northward rainfall propagation over the Indian region in early June, as seen in Tropical Rainfall Measuring Mission (TRMM) observations. Preliminary results suggest that aerosol-radiation interactions mainly alter surface-atmosphere energetics and result in an adjustment of the vertical temperature distribution in the lower atmosphere (below 700 hPa). The modifications of temperature and the associated rainfall and circulation feedbacks on the regional climate will be discussed in the presentation. In addition to the modeling study, we will also present the most recent results on aerosol properties, regional aerosol absorption, and radiative forcing estimation based on NASA's operational satellite and ground-based remote sensing. Observational results show spatial gradients in aerosol loading and solar absorption over the Indo-Gangetic Plains during the pre-monsoon season. The direct radiative forcing of aerosols at the surface over NW India is estimated at -19 to -23 W m-2 (12-15% of the surface solar insolation) using an observational approach. A comparison of aerosol radiative forcing between the numerical simulation and the observational estimate will be presented. Overall, this work will demonstrate the aerosol direct effects from both modeling and observation perspectives, and will assess the physical processes underlying the aerosol radiative feedbacks and possible impacts on the large-scale South Asian monsoon system.

  18. The Coastal Ocean Prediction Systems program: Understanding and managing our coastal ocean

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eden, H.F.; Mooers, C.N.K.

    1990-06-01

    The goal of COPS is to couple a program of regular observations to numerical models, through techniques of data assimilation, in order to provide a predictive capability for the US coastal ocean including the Great Lakes, estuaries, and the entire Exclusive Economic Zone (EEZ). The objectives of the program include: determining the predictability of the coastal ocean and the processes that govern the predictability; developing efficient prediction systems for the coastal ocean based on the assimilation of real-time observations into numerical models; and coupling the predictive systems for the physical behavior of the coastal ocean to predictive systems for biological, chemical, and geological processes to achieve an interdisciplinary capability. COPS will provide the basis for effective monitoring and prediction of coastal ocean conditions by optimizing the use of increased scientific understanding, improved observations, advanced computer models, and computer graphics to make the best possible estimates of sea level, currents, temperatures, salinities, and other properties of entire coastal regions.

  19. Economic communication model set

    NASA Astrophysics Data System (ADS)

    Zvereva, Olga M.; Berg, Dmitry B.

    2017-06-01

    This paper details findings from research investigating economic communications using agent-based models. The agent-based model set was engineered to simulate economic communications. Money in the form of internal and external currencies was introduced into the models to support exchanges in communications. Each model is based on the same general concept but has its own algorithmic peculiarities and input data set, since each was engineered to solve a specific problem. Several data sets of different origins were used in the experiments: theoretical sets were estimated on the basis of the static Leontief equilibrium equation, and the real set was constructed on the basis of statistical data. During the simulation experiments, the communication process was observed in its dynamics, and system macroparameters were estimated. This research confirmed that the combination of an agent-based and a mathematical model can produce a synergetic effect.
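
    The theoretical data sets referred to above come from the static Leontief equilibrium x = A x + d, i.e. x = (I - A)^(-1) d; a minimal sketch of solving it follows, with a hypothetical three-sector technology matrix and final-demand vector.

      # Static Leontief equilibrium: gross outputs from a technology matrix.
      import numpy as np

      A = np.array([[0.2, 0.3, 0.1],     # input coefficients, 3 sectors
                    [0.1, 0.1, 0.4],
                    [0.3, 0.2, 0.1]])
      d = np.array([50.0, 30.0, 20.0])   # final demand per sector

      x = np.linalg.solve(np.eye(3) - A, d)   # x = (I - A)^-1 d
      print(np.round(x, 2), np.allclose(A @ x + d, x))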

  20. Genetic programming-based mathematical modeling of influence of weather parameters in BOD5 removal by Lemna minor.

    PubMed

    Chandrasekaran, Sivapragasam; Sankararajan, Vanitha; Neelakandhan, Nampoothiri; Ram Kumar, Mahalakshmi

    2017-11-04

    This study, through extensive experiments and mathematical modeling, reveals that, in addition to retention time and wastewater temperature (Tw), atmospheric parameters also play an important role in the effective functioning of an aquatic macrophyte-based treatment system. The duckweed species Lemna minor is considered in this study. It is observed that the combined effect of atmospheric temperature (Tatm), wind speed (Uw), and relative humidity (RH) can be reflected through one parameter, namely the "apparent temperature" (Ta). A total of eight different models are considered based on combinations of the input parameters, and the best mathematical model is identified and validated through a new experimental set-up outside the modeling period. The validation results are highly encouraging. Genetic programming (GP)-based models are found to reveal a deeper understanding of the wetland process.
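
    One widely used composite of this kind is the Steadman-style apparent temperature AT = Ta + 0.33*e - 0.70*ws - 4.00, with e the vapour pressure in hPa; the sketch below implements that formula as an illustration of how one parameter can stand in for temperature, wind, and humidity, and it may differ from the definition used in the paper.

      # Steadman-style apparent temperature (illustrative, assumed formula).
      import math

      def apparent_temperature(ta_c, rh_pct, ws_ms):
          """ta_c in deg C, rh_pct in %, ws_ms in m/s."""
          e = rh_pct / 100.0 * 6.105 * math.exp(17.27 * ta_c / (237.7 + ta_c))
          return ta_c + 0.33 * e - 0.70 * ws_ms - 4.00

      print(f"{apparent_temperature(30.0, 65.0, 2.0):.1f} deg C")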

  1. Developing a CD-CBM Anticipatory Approach for Cavitation - Defining a Model Descriptor Consistent Between Processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allgood, G.O.; Dress, W.B.; Kercel, S.W.

    1999-05-10

    A major problem with cavitation in pumps and other hydraulic devices is that there is no effective method for detecting or predicting its inception. The traditional approach is to declare the pump in cavitation when the total head pressure drops by some arbitrary value (typically 3%) in response to a reduction in pump inlet pressure. However, the pump is already cavitating at this point. A method is needed in which cavitation events are captured as they occur and characterized by their process dynamics. The object of this research was to identify specific features of cavitation that could be used as a model-based descriptor in a context-dependent condition-based maintenance (CD-CBM) anticipatory prognostic and health assessment model. This descriptor was based on the physics of the phenomenon, capturing the salient features of the process dynamics. An important element of this concept is the development and formulation of the extended process feature vector (EPFV), or model vector. This model-based descriptor encodes the specific information that describes the phenomenon and its dynamics, and is formulated as a data structure consisting of several elements. The first is a descriptive model abstracting the phenomenon. The second is the parameter list associated with the functional model. The third is a figure of merit, a single number in [0,1] representing a confidence factor that the functional model and parameter list actually describe the observed data. Using this as a basis and applying it to the cavitation problem, any given location in a flow loop will have this data structure, differing in value but not in content. The extended process feature vector is formulated as follows: EPFV => [<model>, {parameter list}, confidence factor]. (1) For this study, the model that characterized cavitation was a chirped, exponentially decaying sinusoid. Using the parameters defined by this model, the parameter list included frequency, decay, and chirp rate. Based on this, the process feature vector has the form: EPFV => [<chirped, exponentially decaying sinusoid>, {omega = a, lambda = b, gamma = c}, cf = 0.80]. (2) In this experiment a reversible catastrophe was examined, so that the same catastrophe could be repeated to ensure the statistical significance of the data.
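
    The descriptor's model element, a chirped, exponentially decaying sinusoid, and the EPFV-style packaging can be sketched as follows; the parameter values and the dictionary layout are hypothetical illustrations of the data structure described above.

      # Chirped, exponentially decaying sinusoid packaged EPFV-style.
      import numpy as np

      def chirped_decaying_sinusoid(t, amp, f0, chirp, decay):
          """amp * exp(-decay*t) * sin(2*pi*(f0 + chirp*t)*t)."""
          return amp * np.exp(-decay * t) * np.sin(2 * np.pi * (f0 + chirp * t) * t)

      epfv = {
          "model": chirped_decaying_sinusoid,
          "parameters": {"amp": 1.0, "f0": 500.0, "chirp": 40.0, "decay": 25.0},
          "confidence": 0.80,    # figure of merit in [0, 1]
      }

      t = np.linspace(0.0, 0.1, 1000)   # s (hypothetical)
      signal = epfv["model"](t, **epfv["parameters"])
      print(signal[:3])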

  2. Post-processing of multi-hydrologic model simulations for improved streamflow projections

    NASA Astrophysics Data System (ADS)

    Khajehei, Sepideh; Ahmadalipour, Ali; Moradkhani, Hamid

    2016-04-01

    Hydrologic model outputs are prone to bias and uncertainty due to knowledge deficiencies in models and data. Uncertainty in hydroclimatic projections arises from uncertainty in the hydrologic model as well as the epistemic or aleatory uncertainties in GCM parameterization and development. This study is conducted to: 1) evaluate the recently developed multivariate post-processing method for historical simulations and 2) assess the effect of post-processing on the uncertainty and reliability of future streamflow projections in both high-flow and low-flow conditions. The first objective is addressed for the historical period 1970-1999. Future streamflow projections are generated for 10 statistically downscaled GCMs from two widely used downscaling methods, Bias Corrected Statistically Downscaled (BCSD) and Multivariate Adaptive Constructed Analogs (MACA), over the period 2010-2099 for two representative concentration pathways, RCP4.5 and RCP8.5. Three semi-distributed hydrologic models were employed and calibrated at 1/16-degree latitude-longitude resolution for over 100 points across the Columbia River Basin (CRB) in the Pacific Northwest, USA. Streamflow outputs are post-processed through a Bayesian framework based on copula functions. The post-processing approach relies on a transfer function developed from the bivariate joint distribution of observations and simulations in the historical period. Results show that application of the post-processing technique leads to considerably higher accuracy in historical simulations and reduces model uncertainty in future streamflow projections.
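
    A much-simplified, Gaussian-copula-flavoured stand-in for such a transfer function is sketched below: historical observations and simulations are mapped to normal scores, their dependence is summarized by a correlation, and a new simulated value is mapped back through the observed quantiles. The data are synthetic and the full Bayesian framework is not reproduced.

      # Normal-scores transfer function (simplified stand-in, synthetic data).
      import numpy as np
      from scipy.stats import norm, rankdata

      rng = np.random.default_rng(5)
      obs = rng.gamma(2.0, 50.0, 1000)               # historical observed flows
      sim = 0.7 * obs + rng.normal(0.0, 20.0, 1000)  # biased historical simulation

      def normal_scores(x):
          return norm.ppf(rankdata(x) / (len(x) + 1.0))

      rho = np.corrcoef(normal_scores(sim), normal_scores(obs))[0, 1]

      def post_process(new_sim):
          # position of the new value within the simulated historical distribution
          u = np.searchsorted(np.sort(sim), new_sim) / (len(sim) + 1.0)
          z = rho * norm.ppf(np.clip(u, 1e-6, 1.0 - 1e-6))  # conditional mean
          return np.quantile(obs, norm.cdf(z))              # back to flow units

      print(f"raw 120.0 -> post-processed {post_process(120.0):.1f}")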

  3. Improving orbit prediction accuracy through supervised machine learning

    NASA Astrophysics Data System (ADS)

    Peng, Hao; Bai, Xiaoli

    2018-05-01

    Due to the lack of information such as the space environment condition and resident space objects' (RSOs') body characteristics, current orbit predictions that are solely grounded on physics-based models may fail to achieve the required accuracy for collision avoidance and have already led to satellite collisions. This paper presents a methodology to predict RSOs' trajectories with higher accuracy than that of current methods. Inspired by machine learning (ML), in which models are learned from large amounts of observed data and prediction is conducted without explicitly modeling space objects and the space environment, the proposed ML approach integrates physics-based orbit prediction algorithms with a learning-based process that focuses on reducing the prediction errors. Using a simulation-based space catalog environment as the test bed, the paper demonstrates three types of generalization capability for the proposed ML approach: (1) the ML model can be used to improve the same RSO's orbit information that is not available during the learning process but shares the same time interval as the training data; (2) the ML model can be used to improve predictions of the same RSO at future epochs; and (3) the ML model based on one RSO can be applied to other RSOs that share some common features.
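
    The hybrid idea can be sketched with a regressor trained on historical residuals of a physics-based prediction; the "physics error" below is a synthetic function of made-up features (e.g. prediction horizon), not an actual propagator, so the snippet only illustrates the learn-the-error-and-correct pattern.

      # Learn the physics-model error, then subtract it from new predictions.
      import numpy as np
      from sklearn.ensemble import GradientBoostingRegressor

      rng = np.random.default_rng(6)
      X = rng.random((5000, 3))   # hypothetical features (horizon, geometry, ...)
      err = (40.0 * X[:, 0] ** 2 + 10.0 * np.sin(6.0 * X[:, 1])
             + rng.normal(0.0, 2.0, 5000))   # synthetic physics-model error

      model = GradientBoostingRegressor().fit(X[:4000], err[:4000])
      resid = err[4000:] - model.predict(X[4000:])
      print(f"RMSE raw {np.sqrt(np.mean(err[4000:] ** 2)):.1f}, "
            f"ML-corrected {np.sqrt(np.mean(resid ** 2)):.1f}")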

  4. Effect of river flow fluctuations on riparian vegetation dynamics: Processes and models

    NASA Astrophysics Data System (ADS)

    Vesipa, Riccardo; Camporeale, Carlo; Ridolfi, Luca

    2017-12-01

    Several decades of field observations, laboratory experiments and mathematical modeling have demonstrated that the riparian environment is a disturbance-driven ecosystem, and that the main source of disturbance is river flow fluctuations. The focus of the present work is the key role that flow fluctuations play in determining the abundance, zonation and species composition of patches of riparian vegetation. To this aim, the scientific literature on the subject over the last 20 years has been reviewed. First, the most relevant ecological, morphological and chemical mechanisms induced by river flow fluctuations are described from a process-based perspective. The role of flow variability is discussed for the processes that affect the recruitment of vegetation, the vegetation during its adult life, and the morphological and nutrient dynamics occurring in the riparian habitat. Particular emphasis has been given to studies that were aimed at quantifying the effect of these processes on vegetation, and at linking them to the statistical characteristics of the river hydrology. Second, the advances made from a modeling point of view have been considered and discussed. The main models that have been developed to describe the dynamics of riparian vegetation are presented. Different modeling approaches are compared, and the corresponding advantages and drawbacks are pointed out. Finally, attention has been paid to identifying the processes considered by the models, and these processes have been compared with those that have actually been observed or measured in field and laboratory studies.

  5. Implementation of Web Processing Services (WPS) over IPSL Earth System Grid Federation (ESGF) node

    NASA Astrophysics Data System (ADS)

    Kadygrov, Nikolay; Denvil, Sebastien; Carenton, Nicolas; Levavasseur, Guillaume; Hempelmann, Nils; Ehbrecht, Carsten

    2016-04-01

    The Earth System Grid Federation (ESGF) aims to provide access to climate data for the international climate community. ESGF is a system of distributed and federated nodes that dynamically interact with each other. An ESGF user may search and download climate data, geographically distributed over the world, from one common web interface and through a standardized API. With the continuous development of climate models and the beginning of the sixth phase of the Coupled Model Intercomparison Project (CMIP6), the amount of data available from ESGF will continuously increase during the next 5 years. IPSL holds replicas of output from various global and regional climate models, together with observations and reanalysis data (CMIP5, CORDEX, obs4MIPs, etc.), all available on the IPSL ESGF node. To let scientists analyse the models without downloading vast amounts of data, Web Processing Services (WPS) were installed at the IPSL compute node. The work is part of the CONVERGENCE project funded by the French National Research Agency (ANR). The PyWPS implementation of the Web Processing Service standard from the Open Geospatial Consortium (OGC), within the framework of the birdhouse software, is used. Processes can be run remotely by the user through a web-based WPS client or a command-line tool. All calculations are performed on the server side, close to the data. If the models/observations are not available at IPSL, they will be downloaded and cached by a WPS process from the ESGF network using the synda tool. The outputs of the WPS processes are available for download as plots, tar archives or NetCDF files. We present the architecture of the WPS at IPSL along with processes for evaluation of model performance, on-site diagnostics and post-analysis processing of model output, e.g. regridding/interpolation/aggregation; ocgis (OpenClimateGIS)-based polygon subsetting of the data; average seasonal cycle, multimodel mean and multimodel mean bias; calculation of climate indices with the icclim library (CERFACS); and atmospheric modes of variability. In order to evaluate the performance of any new model once it becomes available in ESGF, we implement a WPS with several model diagnostics and performance metrics calculated using ESMValTool (Eyring et al., GMDD 2015). As a further step we are developing new WPS processes and core functions to be implemented at the IPSL ESGF compute node, following the needs of the scientific community.
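
    From the user side, a WPS process of this kind can be invoked remotely with OWSLib along the following lines; the endpoint URL, process identifier, and input names are hypothetical placeholders rather than the actual IPSL interface.

      # Calling a WPS process remotely with OWSLib (all identifiers hypothetical).
      from owslib.wps import WebProcessingService, monitorExecution

      wps = WebProcessingService("https://example-ipsl-node.fr/wps")  # hypothetical
      execution = wps.execute(
          "subset_polygon",                              # hypothetical process id
          inputs=[("dataset", "CMIP5.tas.some-model"),   # hypothetical inputs
                  ("region", "POLYGON((2 48, 3 48, 3 49, 2 49, 2 48))")],
      )
      monitorExecution(execution)   # poll the asynchronous job until it finishes
      for output in execution.processOutputs:
          print(output.identifier, output.reference)     # e.g. NetCDF download URL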

  6. Recurrent connectivity can account for the dynamics of disparity processing in V1

    PubMed Central

    Samonds, Jason M.; Potetz, Brian R.; Tyler, Christopher W.; Lee, Tai Sing

    2013-01-01

    Disparity tuning measured in the primary visual cortex (V1) is described well by the disparity energy model, but not all aspects of disparity tuning are fully explained by the model. Such deviations from the disparity energy model provide us with insight into how network interactions may play a role in disparity processing and help to solve the stereo correspondence problem. Here, we propose a neuronal circuit model with recurrent connections that provides a simple account of the observed deviations. The model is based on recurrent connections inferred from neurophysiological observations on spike timing correlations, and is in good accord with existing data on disparity tuning dynamics. We further performed two additional experiments to test predictions of the model. First, we increased the size of stimuli to drive more neurons and provide a stronger recurrent input. Our model predicted sharper disparity tuning for larger stimuli. Second, we displayed anti-correlated stereograms, where dots of opposite luminance polarity are matched between the left- and right-eye images and result in inverted disparity tuning in the disparity energy model. In this case, our model predicted reduced sharpening and strength of inverted disparity tuning. For both experiments, the dynamics of disparity tuning observed from the neurophysiological recordings in macaque V1 matched model simulation predictions. Overall, the results of this study support the notion that, while the disparity energy model provides a primary account of disparity tuning in V1 neurons, neural disparity processing in V1 neurons is refined by recurrent interactions among elements in the neural circuit. PMID:23407952
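
    Since the disparity energy model is the baseline throughout, a minimal one-dimensional sketch of it may help: quadrature-pair Gabor filters are applied to left- and right-eye signals, with the right-eye filter shifted by the neuron's preferred disparity, and the summed squared responses peak when the stimulus disparity matches. The filter parameters and bar stimulus are hypothetical.

      # 1-D disparity energy model with a quadrature pair of Gabor filters.
      import numpy as np

      x = np.linspace(-2.0, 2.0, 401)
      sigma, f = 0.5, 1.5   # Gabor envelope width and carrier (hypothetical)

      def gabor(phase, shift=0.0):
          return (np.exp(-(x - shift) ** 2 / (2 * sigma ** 2))
                  * np.cos(2 * np.pi * f * (x - shift) + phase))

      def binocular_energy(left, right, preferred_disparity):
          resp = 0.0
          for phase in (0.0, np.pi / 2):   # quadrature pair
              s = left @ gabor(phase) + right @ gabor(phase, preferred_disparity)
              resp += s ** 2
          return resp

      bar = lambda center: np.exp(-(x - center) ** 2 / 0.02)
      left, right = bar(0.0), bar(0.4)     # stimulus disparity of 0.4
      for d in (0.0, 0.2, 0.4, 0.6):
          print(f"preferred {d:.1f}: energy {binocular_energy(left, right, d):.2f}")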

  7. An integrative neural model of social perception, action observation, and theory of mind.

    PubMed

    Yang, Daniel Y-J; Rosenblau, Gabriela; Keifer, Cara; Pelphrey, Kevin A

    2015-04-01

    In the field of social neuroscience, major branches of research have been instrumental in describing independent components of typical and aberrant social information processing, but the field as a whole lacks a comprehensive model that integrates different branches. We review existing research related to the neural basis of three key neural systems underlying social information processing: social perception, action observation, and theory of mind. We propose an integrative model that unites these three processes and highlights the posterior superior temporal sulcus (pSTS), which plays a central role in all three systems. Furthermore, we integrate these neural systems with the dual system account of implicit and explicit social information processing. Large-scale meta-analyses based on Neurosynth confirmed that the pSTS is at the intersection of the three neural systems. Resting-state functional connectivity analysis with 1000 subjects confirmed that the pSTS is connected to all other regions in these systems. The findings presented in this review are specifically relevant for psychiatric research especially disorders characterized by social deficits such as autism spectrum disorder. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. An integrative neural model of social perception, action observation, and theory of mind

    PubMed Central

    Yang, Daniel Y.-J.; Rosenblau, Gabriela; Keifer, Cara; Pelphrey, Kevin A.

    2016-01-01

    In the field of social neuroscience, major branches of research have been instrumental in describing independent components of typical and aberrant social information processing, but the field as a whole lacks a comprehensive model that integrates different branches. We review existing research related to the neural basis of three key neural systems underlying social information processing: social perception, action observation, and theory of mind. We propose an integrative model that unites these three processes and highlights the posterior superior temporal sulcus (pSTS), which plays a central role in all three systems. Furthermore, we integrate these neural systems with the dual system account of implicit and explicit social information processing. Large-scale meta-analyses based on Neurosynth confirmed that the pSTS is at the intersection of the three neural systems. Resting-state functional connectivity analysis with 1000 subjects confirmed that the pSTS is connected to all other regions in these systems. The findings presented in this review are specifically relevant for psychiatric research especially disorders characterized by social deficits such as autism spectrum disorder. PMID:25660957

  9. Development of FWIGPR, an open-source package for full-waveform inversion of common-offset GPR data

    NASA Astrophysics Data System (ADS)

    Jazayeri, S.; Kruse, S.

    2017-12-01

    We introduce a package for full-waveform inversion (FWI) of Ground Penetrating Radar (GPR) data based on a combination of open-source programs. The FWI requires a good starting model, based on direct knowledge of field conditions or on traditional ray-based inversion methods. With a good starting model, the FWI can improve resolution of selected subsurface features. The package will be made available for general use in educational and research activities. The FWIGPR package consists of four main components: 3D to 2D data conversion, source wavelet estimation, forward modeling, and inversion. (These four components additionally require the development, by the user, of a good starting model.) A major challenge with GPR data is the unknown form of the waveform emitted by the transmitter held close to the ground surface. We apply a blind deconvolution method to estimate the source wavelet, based on a sparsity assumption about the reflectivity series of the subsurface model (Gholami and Sacchi 2012). The estimated wavelet is deconvolved from the data to recover the sparsest reflectivity series with the fewest reflectors. The gprMax code (www.gprmax.com) is used as the forward modeling tool and the PEST parameter estimation package (www.pesthomepage.com) for the inversion. To reduce computation time, the field data are converted to an effective 2D equivalent, and the gprMax code can be run in 2D mode. In the first step, the user must create a good starting model of the data, presumably using ray-based methods. This estimated model is introduced to the FWI process as an initial model. Next, the 3D data are converted to 2D, and the user estimates the source wavelet that best fits the observed data under a sparsity assumption on the earth's response. Last, PEST runs gprMax with the initial model, calculates the misfit between the synthetic and observed data, and, using an iterative algorithm that calls gprMax several times in each iteration, finds successive models that better fit the data. To gauge whether the iterative process has arrived at a local or the global minimum, the process can be repeated with a range of starting models. Tests have shown that this package can successfully improve estimates of selected subsurface model parameters for simple synthetic and real data. Ongoing research will focus on FWI of more complex scenarios.
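
    The iterative core of such a workflow (run the forward model, measure the misfit, update the model, repeat) can be illustrated with a minimal sketch. The code below is a hypothetical stand-in, not part of FWIGPR: a toy `forward_model` plays the role of gprMax, and a damped Gauss-Newton update stands in for PEST's Levenberg-Marquardt scheme.

    ```python
    import numpy as np

    def forward_model(params, t):
        """Toy stand-in for gprMax: delayed Gaussian wavelets whose travel
        times depend on the model parameters (layer 'velocities')."""
        trace = np.zeros_like(t)
        for i, p in enumerate(params):
            delay = 10.0 * (i + 1) / np.sqrt(p)
            trace += np.exp(-((t - delay) ** 2) / 0.5)
        return trace

    def residual(params, observed, t):
        return forward_model(params, t) - observed

    def fwi(observed, initial, t, n_iter=20, damping=1e-2):
        """Damped Gauss-Newton loop with a finite-difference Jacobian; PEST's
        Levenberg-Marquardt update plays this role in the real package."""
        m = np.asarray(initial, dtype=float)
        for _ in range(n_iter):
            r = residual(m, observed, t)
            J = np.empty((t.size, m.size))
            for j in range(m.size):             # one forward run per parameter
                dm = np.zeros_like(m)
                dm[j] = 1e-4 * max(abs(m[j]), 1.0)
                J[:, j] = (residual(m + dm, observed, t) - r) / dm[j]
            H = J.T @ J + damping * np.eye(m.size)
            m = np.maximum(m - np.linalg.solve(H, J.T @ r), 0.1)  # keep physical
        return m

    t = np.linspace(0, 50, 500)
    data = forward_model(np.array([4.0, 9.0]), t)   # "observed" trace
    start = np.array([5.0, 7.0])                    # ray-based starting model
    print(fwi(data, start, t))                      # should approach [4, 9]
    ```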

  10. Web-based Interactive Landform Simulation Model - Grand Canyon

    NASA Astrophysics Data System (ADS)

    Luo, W.; Pelletier, J. D.; Duffin, K.; Ormand, C. J.; Hung, W.; Iverson, E. A.; Shernoff, D.; Zhai, X.; Chowdary, A.

    2013-12-01

    Earth science educators need interactive tools to engage and enable students to better understand how Earth systems work over geologic time scales. The evolution of landforms is ripe for interactive, inquiry-based learning exercises because landforms exist all around us. The Web-based Interactive Landform Simulation Model - Grand Canyon (WILSIM-GC, http://serc.carleton.edu/landform/) is a continuation and upgrade of the simple cellular automata (CA) rule-based model (WILSIM-CA, http://www.niu.edu/landform/) that can be accessed from anywhere with an Internet connection. Major improvements in WILSIM-GC include adopting a physically based model and the latest Java technology. The physically based model is incorporated to illustrate the fluvial processes involved in land sculpting pertaining to the development and evolution of one of the most famous landforms on Earth: the Grand Canyon. It is hoped that this focus on a famous and specific landscape will attract greater student interest and provide opportunities for students to learn not only how different processes interact to form the landform we observe today, but also how models and data are used together to enhance our understanding of the processes involved. The latest developments in Java technology (such as Java OpenGL for access to ubiquitous fast graphics hardware, Trusted Applet for file input and output, and multithreaded ability to take advantage of modern multi-core CPUs) are incorporated into building WILSIM-GC, and active, standards-aligned curricular materials guided by educational psychology theory on science learning will be developed to accompany the model. This project is funded by the NSF TUES program.

  11. Viral epidemics in a cell culture: novel high resolution data and their interpretation by a percolation theory based model.

    PubMed

    Gönci, Balázs; Németh, Valéria; Balogh, Emeric; Szabó, Bálint; Dénes, Ádám; Környei, Zsuzsanna; Vicsek, Tamás

    2010-12-20

    Because of its relevance to everyday life, the spreading of viral infections has been of central interest in a variety of scientific communities involved in fighting, preventing and theoretically interpreting epidemic processes. Recent large scale observations have resulted in major discoveries concerning the overall features of the spreading process in systems with highly mobile susceptible units, but virtually no data are available about observations of infection spreading for a very large number of immobile units. Here we present the first detailed quantitative documentation of percolation-type viral epidemics in a highly reproducible in vitro system consisting of tens of thousands of virtually motionless cells. We use a confluent astroglial monolayer in a Petri dish and induce productive infection in a limited number of cells with a genetically modified herpesvirus strain. This approach allows extremely high-resolution tracking of the spatio-temporal development of the epidemic. We show that a simple model is capable of reproducing the basic features of our observations, i.e., the observed behaviour is likely to be applicable to many different kinds of systems. Statistical physics inspired approaches to our data, such as the fractal dimension of the infected clusters as well as their size distribution, seem to fit into a percolation theory based interpretation. We suggest that our observations may be used to model epidemics in more complex systems, which are difficult to study in isolation.
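
    A minimal sketch of the kind of simple lattice model invoked here: cells are immobile sites, a quenched fraction of them is susceptible to productive infection (site percolation), and infection spreads to susceptible nearest neighbours. Near the percolation threshold the infected cluster becomes fractal-like. The occupation probability and the mass-radius dimension estimate below are illustrative, not the paper's fitted values.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)
    N, P_SITE = 401, 0.60                      # occupation probability near the
                                               # square-lattice threshold (~0.593)
    susceptible = rng.random((N, N)) < P_SITE  # quenched disorder: which cells
                                               # can become infected at all
    susceptible[N // 2, N // 2] = True
    grid = np.zeros((N, N), bool)
    grid[N // 2, N // 2] = True                # single initially infected cell

    # Spread until no new infections: deterministic growth on susceptible sites
    while True:
        exposed = np.zeros_like(grid)
        exposed[1:, :] |= grid[:-1, :]
        exposed[:-1, :] |= grid[1:, :]
        exposed[:, 1:] |= grid[:, :-1]
        exposed[:, :-1] |= grid[:, 1:]
        new = exposed & susceptible & ~grid
        if not new.any():
            break
        grid |= new

    ys, xs = np.nonzero(grid)
    mass = grid.sum()
    radius = np.hypot(ys - N // 2, xs - N // 2).max()
    print(f"infected cluster: mass {mass}, radius {radius:.0f}")
    if radius >= 2:                            # guard against a tiny cluster
        print(f"mass-radius dimension ~ {np.log(mass) / np.log(radius):.2f}")
    ```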

  12. Viral Epidemics in a Cell Culture: Novel High Resolution Data and Their Interpretation by a Percolation Theory Based Model

    PubMed Central

    Gönci, Balázs; Németh, Valéria; Balogh, Emeric; Szabó, Bálint; Dénes, Ádám; Környei, Zsuzsanna; Vicsek, Tamás

    2010-01-01

    Because of its relevance to everyday life, the spreading of viral infections has been of central interest in a variety of scientific communities involved in fighting, preventing and theoretically interpreting epidemic processes. Recent large scale observations have resulted in major discoveries concerning the overall features of the spreading process in systems with highly mobile susceptible units, but virtually no data are available about observations of infection spreading for a very large number of immobile units. Here we present the first detailed quantitative documentation of percolation-type viral epidemics in a highly reproducible in vitro system consisting of tens of thousands of virtually motionless cells. We use a confluent astroglial monolayer in a Petri dish and induce productive infection in a limited number of cells with a genetically modified herpesvirus strain. This approach allows extremely high-resolution tracking of the spatio-temporal development of the epidemic. We show that a simple model is capable of reproducing the basic features of our observations, i.e., the observed behaviour is likely to be applicable to many different kinds of systems. Statistical physics inspired approaches to our data, such as the fractal dimension of the infected clusters as well as their size distribution, seem to fit into a percolation theory based interpretation. We suggest that our observations may be used to model epidemics in more complex systems, which are difficult to study in isolation. PMID:21187920

  13. Nonmethane Hydrocarbons and Ozone in the Rural Southeast United States National Parks: A Model Sensitivity Analysis and Its Comparison with Measurement

    NASA Astrophysics Data System (ADS)

    Kang, D.; Aneja, V. P.; Mathur, R.; Ray, J. D.

    2001-12-01

    A comprehensive modeling analysis is conducted using the Multiscale Air Quality SImulation Platform (MAQSIP) focusing on nonmethane hydrocarbons and ozone in three southeast United States national parks for a 15-day time period (July 14 to July 29, 1995) characterized by high O3 surface concentrations. Nine emission scenarios including the base scenario are analyzed. Model predictions are compared with and contrasted against observed data at the three locations for the same time period. Model predictions (base scenario) tend to give lower daily maximum O3 concentrations than observations, by 10.8% at Cove Mountain, Great Smoky Mountains National Park (GRSM), 26.8% at Mammoth Cave National Park (MACA), and 17.6% at Big Meadows, Shenandoah National Park (SHEN). Overall mean ozone concentrations are very similar at GRSM and SHEN (observed data at MACA are not available). Model-predicted concentrations of lumped paraffin compounds match the observed values to within the same order of magnitude, while the observed concentrations for other species (isoprene, ethene, surrogate olefin, surrogate toluene, and surrogate xylene) are usually an order of magnitude higher than the predictions. Sensitivity analyses indicate each location has its own characteristics in terms of the capacity of volatile organic compounds (VOCs) to produce O3, but a maximum VOC capacity point (MVCP) exists at all locations that changes the influence of VOCs on O3 from production to destruction. Analysis of individual model process budgets shows that more than 50% of daytime O3 concentrations at these rural locations are transported from other areas, local chemistry is the second largest contributor (13% to 42%), and all other processes combined contribute less than 10% of the daytime O3 concentrations. Local emissions (>99%) are predominantly responsible for VOCs at all locations, while vertical diffusion (>70%) is the predominant process moving VOCs away from the modeling grid. Dry deposition (~10%) and chemistry (2 to 13%) processes are also responsible for the removal of VOCs. Metrics such as O3 production efficiency of VOC emissions (VOPE), VOC potential for O3 production (VPOP), and MVCP are devised to quantitatively measure the different characteristics of O3 production and VOCs in these rural environments. Implications of this model exercise for understanding O3 production in rural atmospheres are analyzed and discussed. Even though this study focuses on three United States national parks, the research results and conclusions may be applicable to other rural atmospheres.

  14. Aircraft Observation of Gravity Wave Breaking at the Storm Top and Comparison with High Resolution Cloud Model Simulations and Satellite Images

    NASA Astrophysics Data System (ADS)

    Wang, P. K.; Cheng, K. Y.; Lindsey, D. T.

    2017-12-01

    Deep convective clouds play an important role in the transport of momentum, energy, and chemical species from the surface to the upper troposphere and lower stratosphere (UT/LS), but exactly how these processes occur and how important they are compared to other processes are still up for debate. The main hurdle to the complete understanding of these transport processes is the difficulty in observing storm systems directly. Remote sensing data such as those obtained by radars and satellites are very valuable, but they need correct interpretation before we can use them profitably. We have performed numerical simulations of thunderstorms using a physics-based cloud resolving model and compared model results with satellite observations. Many major features of observed satellite storm top images, such as the cold-V, the close-in warm area, and above-anvil cirrus plumes, are successfully simulated and can be interpreted by the model physics. However, due to the limitation of resolution and other ambiguities, we have been unable to determine the real cause of some features, such as the conversion of jumping cirrus to long trail plumes and whether or not small-scale (< 1 km) wave breaking occurs. We were fortunate to encounter a line of sea breeze storms along the coast of China during a flight from Beijing to Taipei in July 2016. The flight was at an altitude such that storm tops could be clearly observed. Nearly all of the mature storm cells that could be identified had very vigorous storm top activities, indicating very strong stratosphere/troposphere exchange (STE). There is no doubt that the signatures of wave breaking, i.e., jumping cirrus, occur at scales from less than 1 km to tens of km. This matches our previous model results very well. Furthermore, one storm cell shows very clearly the process whereby a jumping cirrus is transformed into a long trail cirrus plume of the kind often observed in satellite images. We have also obtained the corresponding Himawari-8 satellite images for this line of storms. Aircraft observations, satellite images and model results will be compared and the implications for STE discussed.

  15. Using a multinomial tree model for detecting mixtures in perceptual detection

    PubMed Central

    Chechile, Richard A.

    2014-01-01

    In the area of memory research there have been two rival approaches for memory measurement—signal detection theory (SDT) and multinomial processing trees (MPT). Both approaches provide measures for the quality of the memory representation, and both approaches provide for corrections for response bias. In recent years there has been a strong case advanced for the MPT approach because of the finding of stochastic mixtures on both target-present and target-absent tests. In this paper a case is made that perceptual detection, like memory recognition, involves a mixture of processes that are readily represented as a MPT model. The Chechile (2004) 6P memory measurement model is modified in order to apply to the case of perceptual detection. This new MPT model is called the Perceptual Detection (PD) model. The properties of the PD model are developed, and the model is applied to some existing data of a radiologist examining CT scans. The PD model brings out novel features that were absent from a standard SDT analysis. Also the topic of optimal parameter estimation on an individual-observer basis is explored with Monte Carlo simulations. These simulations reveal that the mean of the Bayesian posterior distribution is a more accurate estimator than the corresponding maximum likelihood estimator (MLE). Monte Carlo simulations also indicate that model estimates based on only the data from an individual observer can be improved upon (in the sense of being more accurate) by an adjustment that takes into account the parameter estimate based on the data pooled across all the observers. The adjustment of the estimate for an individual is discussed as an analogous statistical effect to the improvement over the individual MLE demonstrated by the James–Stein shrinkage estimator in the case of the multiple-group normal model. PMID:25018741
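
    The reported Monte Carlo result, that the posterior mean outperforms the MLE for individual observers, can be reproduced in miniature for a single detection-probability parameter (a deliberately simplified stand-in for the full PD model). With a uniform Beta(1, 1) prior, the posterior mean is (k+1)/(n+2), versus the MLE k/n.

    ```python
    import numpy as np

    # Monte Carlo sketch of the estimator comparison described above, for one
    # detection probability rather than the full multinomial tree. Parameters
    # are drawn from the prior, counts are binomial, and the two estimators
    # are compared by mean squared error.

    rng = np.random.default_rng(1)
    n_trials, n_sims = 20, 100_000
    true_theta = rng.uniform(0, 1, n_sims)          # draws from the uniform prior
    k = rng.binomial(n_trials, true_theta)          # observed detection counts

    mle = k / n_trials
    post_mean = (k + 1) / (n_trials + 2)            # Beta(1,1) posterior mean

    print("MLE mean squared error:           ", np.mean((mle - true_theta) ** 2))
    print("Posterior-mean mean squared error:", np.mean((post_mean - true_theta) ** 2))
    # The posterior mean shows the lower average error, mirroring the finding.
    ```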

  16. Drivers of dissolved organic carbon export in a subarctic catchment: Importance of microbial decomposition, sorption-desorption, peatland and lateral flow.

    PubMed

    Tang, Jing; Yurova, Alla Y; Schurgers, Guy; Miller, Paul A; Olin, Stefan; Smith, Benjamin; Siewert, Matthias B; Olefeldt, David; Pilesjö, Petter; Poska, Anneli

    2018-05-01

    Tundra soils account for 50% of global stocks of soil organic carbon (SOC), and it is expected that the amplified climate warming at high latitudes could cause loss of this SOC through decomposition. Decomposed SOC could become hydrologically accessible, which increases downstream dissolved organic carbon (DOC) export and subsequent carbon release to the atmosphere, constituting a positive feedback to climate warming. However, DOC export is often neglected in ecosystem models. In this paper, we incorporate processes related to DOC production, mineralization, diffusion, sorption-desorption, and leaching into a customized arctic version of the dynamic ecosystem model LPJ-GUESS in order to mechanistically model catchment DOC export, and to link this flux to other ecosystem processes. The extended LPJ-GUESS is compared with observed DOC export at Stordalen catchment in northern Sweden. Vegetation communities include flood-tolerant graminoids (Eriophorum) and Sphagnum moss, birch forest and dwarf shrub communities. The processes of sorption-desorption and microbial decomposition (DOC production and mineralization) are found to contribute most to the variance in DOC export, based on a detailed variance-based Sobol sensitivity analysis (SA) at grid cell-level. Catchment-level SA shows that the highest mean DOC exports come from the Eriophorum peatland (fen). A comparison with observations shows that the model captures the seasonality of DOC fluxes. Two catchment simulations, one without lateral water routing and one without peatland processes, were compared with the catchment simulations with all processes. The comparison showed that the current implementation of catchment lateral flow and peatland processes in LPJ-GUESS is essential to capture catchment-level DOC dynamics, and indicates the model is at an appropriate level of complexity to represent the main mechanisms of DOC dynamics in soils. The extended model provides a new tool to investigate potential interactions among climate change, vegetation dynamics, soil hydrology and DOC dynamics at scales from individual stands to whole catchments. Copyright © 2017 Elsevier B.V. All rights reserved.
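
    The process structure named here (DOC production from decomposing SOC, mineralization, sorption-desorption exchange, and leaching with lateral flow) can be caricatured as a two-pool box model. The sketch below illustrates that structure only; all rate constants are invented for the example and are not LPJ-GUESS parameter values.

    ```python
    import numpy as np

    # Two-pool DOC box model: dissolved DOC is produced from decomposing SOC,
    # mineralized, exchanged with a sorbed pool, and leached with lateral flow.
    # All rates and the seasonal runoff curve are illustrative assumptions.

    def doc_box_model(days=365, soc=10_000.0, doc=5.0, sorbed=50.0,
                      k_prod=2e-5, k_min=0.05, k_sorb=0.1, k_desorb=0.01):
        export = []
        for day in range(days):
            runoff = 2.0 + 1.5 * np.sin(2 * np.pi * (day - 120) / 365)  # mm/day
            production = k_prod * soc             # microbial DOC production
            mineralized = k_min * doc             # DOC respired to CO2
            to_sorbed = k_sorb * doc              # sorption to the soil matrix
            from_sorbed = k_desorb * sorbed       # desorption back to solution
            leached = 0.01 * runoff * doc         # lateral export with flow
            soc -= production
            doc += production - mineralized - to_sorbed + from_sorbed - leached
            sorbed += to_sorbed - from_sorbed
            export.append(leached)
        return np.array(export)

    export = doc_box_model()
    print(f"annual DOC export: {export.sum():.1f} g C m-2, "
          f"peak day: {export.argmax()}")
    ```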

  17. Exploring cultural differences in feedback processes and perceived instructiveness during clerkships: replicating a Dutch study in Indonesia.

    PubMed

    Suhoyo, Yoyo; van Hell, Elisabeth A; Prihatiningsih, Titi S; Kuks, Jan B M; Cohen-Schotanus, Janke

    2014-03-01

    Cultural differences between countries may entail differences in feedback processes. By replicating a Dutch study in Indonesia, we analysed whether differences in processes influenced the perceived instructiveness of feedback. Over a two-week period, Indonesian students (n = 215) recorded feedback moments during clerkships, noting who provided the feedback, whether the feedback was based on observations, who initiated the feedback, and its perceived instructiveness. Data were compared with the earlier Dutch study and analysed with χ² tests, t-tests and multilevel techniques. Cultural differences were explored using Hofstede's Model, with Indonesia and the Netherlands differing on "power distance" and "individualism." Perceived instructiveness of feedback did not differ significantly between both countries. However, significant differences were found in feedback provider, observation and initiative. Indonesian students perceived feedback as more instructive if provided by specialists and initiated jointly by the supervisor and student (β_residents = -0.201, p < 0.001 and β_joint = 0.193, p = 0.001). Dutch students appreciated feedback more when it was based on observation. We obtained empirical evidence that one model of feedback does not necessarily translate to another culture. Further research is necessary to unravel other possible influences of culture in implementing feedback procedures in different countries.

  18. Prediction of hemoglobin in blood donors using a latent class mixed-effects transition model.

    PubMed

    Nasserinejad, Kazem; van Rosmalen, Joost; de Kort, Wim; Rizopoulos, Dimitris; Lesaffre, Emmanuel

    2016-02-20

    Blood donors experience a temporary reduction in their hemoglobin (Hb) value after donation. At each visit, the Hb value is measured, and a too low Hb value leads to a deferral for donation. Because of the recovery process after each donation as well as state dependence and unobserved heterogeneity, longitudinal data of Hb values of blood donors provide unique statistical challenges. To estimate the shape and duration of the recovery process and to predict future Hb values, we employed three models for the Hb value: (i) a mixed-effects model; (ii) a latent-class mixed-effects model; and (iii) a latent-class mixed-effects transition model. In each model, a flexible function was used to model the recovery process after donation. The latent classes identify groups of donors with fast or slow recovery times and donors whose recovery time increases with the number of donations. The transition effect accounts for possible state dependence in the observed data. All models were estimated in a Bayesian way, using data of new entrant donors from the Donor InSight study. Informative priors were used for parameters of the recovery process that were not identified using the observed data, based on results from the clinical literature. The results show that the latent-class mixed-effects transition model fits the data best, which illustrates the importance of modeling state dependence, unobserved heterogeneity, and the recovery process after donation. The estimated recovery time is much longer than the current minimum interval between donations, suggesting that an increase of this interval may be warranted. Copyright © 2015 John Wiley & Sons, Ltd.

  19. Are Atmospheric Updrafts a Key to Unlocking Climate Forcing and Sensitivity?

    DOE PAGES

    Donner, Leo J.; O'Brien, Travis A.; Rieger, Daniel; ...

    2016-06-08

    Both climate forcing and climate sensitivity persist as stubborn uncertainties limiting the extent to which climate models can provide actionable scientific scenarios for climate change. A key, explicit control on cloud-aerosol interactions, the largest uncertainty in climate forcing, is the vertical velocity of cloud-scale updrafts. Model-based studies of climate sensitivity indicate that convective entrainment, which is closely related to updraft speeds, is an important control on climate sensitivity. Updraft vertical velocities also drive many physical processes essential to numerical weather prediction. Vertical velocities and their role in atmospheric physical processes have been given very limited attention in models for climate and numerical weather prediction. The relevant physical scales range down to tens of meters and are thus frequently sub-grid and require parameterization. Many state-of-science convection parameterizations provide mass fluxes without specifying vertical velocities, and parameterizations which do provide vertical velocities have been subject to limited evaluation against what have until recently been scant observations. Atmospheric observations imply that the distribution of vertical velocities depends on the areas over which the vertical velocities are averaged. Distributions of vertical velocities in climate models may capture this behavior, but it has not been accounted for when parameterizing cloud and precipitation processes in current models. New observations of convective vertical velocities offer a potentially promising path toward developing process-level cloud models and parameterizations for climate and numerical weather prediction. Taking account of scale-dependence of resolved vertical velocities offers a path to matching cloud-scale physical processes and their driving dynamics more realistically, with a prospect of reduced uncertainty in both climate forcing and sensitivity.

  20. Agent-Based Phytoplankton Models of Cellular and Population Processes: Fostering Individual-Based Learning in Undergraduate Research

    NASA Astrophysics Data System (ADS)

    Berges, J. A.; Raphael, T.; Rafa Todd, C. S.; Bate, T. C.; Hellweger, F. L.

    2016-02-01

    Engaging undergraduate students in research projects that require expertise in multiple disciplines (e.g. cell biology, population ecology, and mathematical modeling) can be challenging because they have often not developed the expertise that allows them to participate at a satisfying level. Use of agent-based modeling can allow exploration of concepts at more intuitive levels and encourage experimentation that emphasizes processes over computational skills. Over the past several years, we have involved undergraduate students in projects examining both ecological and cell biological aspects of aquatic microbial biology, using the freely downloadable, agent-based modeling environment NetLogo (https://ccl.northwestern.edu/netlogo/). In NetLogo, the actions of large numbers of individuals can be simulated, leading to complex systems with emergent behavior. The interface features appealing graphics, monitors, and control structures. In one example, a group of sophomores in a BioMathematics program developed an agent-based model of phytoplankton population dynamics in a pond ecosystem, motivated by observed macroscopic changes in cell numbers (due to growth and death), and driven by responses to irradiance, temperature and a limiting nutrient. In a second example, junior and senior undergraduates conducting Independent Studies created a model of the intracellular processes governing stress and cell death for individual phytoplankton cells (based on parameters derived from experiments using single-cell culturing and flow cytometry), and this model was then embedded in the agents in the pond ecosystem model. In our experience, students with a range of mathematical abilities learned to code quickly and could use the software with varying degrees of sophistication, for example, creating spatially explicit two- and three-dimensional models. Skills developed quickly and transferred readily to other platforms (e.g. Matlab).
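
    To make the agent-based idea concrete outside NetLogo, the sketch below implements a stripped-down version of the coupled projects described here in Python: each cell is an agent carrying its own stress state, division and death rules act per cell, and the population trajectory emerges from the ensemble. All rates and the stress rule are illustrative assumptions, not the students' fitted parameters.

    ```python
    import numpy as np

    # Agent-based phytoplankton sketch: per-cell internal stress state,
    # light- and nutrient-driven division, stress-driven death. The
    # population curve is an emergent property of the agent rules.

    rng = np.random.default_rng(5)

    def simulate(days=60, n0=100, nutrient0=5000.0):
        stress = list(rng.uniform(0, 0.2, n0))   # one state value per cell
        nutrient, counts = nutrient0, []
        for day in range(days):
            irradiance = 0.5 + 0.5 * np.sin(2 * np.pi * day / 365)
            survivors = []
            for s in stress:
                # nutrient-limited, light-driven division (Monod-style factor)
                p_divide = 0.4 * irradiance * nutrient / (nutrient + 1000.0)
                if nutrient >= 1.0 and rng.random() < p_divide:
                    nutrient -= 1.0
                    survivors.append(s * 0.5)    # daughter inherits half the stress
                # starvation raises stress; well-fed cells slowly recover
                s += 0.05 if nutrient < 500.0 else -0.01
                if rng.random() < max(s, 0.0):   # death probability = stress
                    continue
                survivors.append(max(s, 0.0))
            stress = survivors
            counts.append(len(stress))
        return counts

    counts = simulate()
    print("population every 10 days:", counts[::10])
    ```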

  1. Modeling the Secondary Drying Stage of Freeze Drying: Development and Validation of an Excel-Based Model.

    PubMed

    Sahni, Ekneet K; Pikal, Michael J

    2017-03-01

    Although several mathematical models of primary drying have been developed over the years, with significant impact on the efficiency of process design, models of secondary drying have been confined to highly complex formulations. The simple-to-use Excel-based model developed here is, in essence, a series of steady-state calculations of heat and mass transfer in the two halves of the dry layer, with drying time divided into a large number of time steps and steady-state conditions prevailing within each step. Water desorption isotherm and mass transfer coefficient data are required. We use the Excel "Solver" to estimate the parameters that define the mass transfer coefficient by minimizing the deviations in water content between calculation and a calibration drying experiment. This tool allows the user to input the parameters specific to the product, process, container, and equipment. Temporal variations in average moisture contents and product temperatures are outputs and are compared with experiment. We observe good agreement between experiments and calculations, generally well within experimental error, for sucrose at various concentrations, temperatures, and ice nucleation temperatures. We conclude that this model can serve as an important process development tool for process design and manufacturing problem-solving. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
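
    The time-stepping idea, dividing drying time into many steps with steady-state conditions within each, can be sketched in a few lines. The rate law (first-order approach to the isotherm equilibrium with an Arrhenius mass-transfer coefficient) and all constants below are illustrative assumptions, not the paper's fitted values.

    ```python
    import numpy as np

    # Secondary-drying sketch: within each time step a steady-state desorption
    # rate is applied, so the water content relaxes toward the isotherm value.

    R = 8.314  # J/(mol K)

    def secondary_drying(c0, c_eq, T, k0=3e6, Ea=6e4, dt=60.0, t_end=3600 * 10):
        """Water content c (%) relaxing toward the equilibrium (isotherm)
        value c_eq, with an Arrhenius mass-transfer coefficient."""
        times = np.arange(0.0, t_end, dt)
        c = np.empty_like(times)
        c[0] = c0
        k = k0 * np.exp(-Ea / (R * T))        # steady-state rate coefficient, 1/s
        for i in range(1, times.size):
            # one steady-state step of dc/dt = -k (c - c_eq)
            c[i] = c_eq + (c[i - 1] - c_eq) * np.exp(-k * dt)
        return times, c

    t, c = secondary_drying(c0=5.0, c_eq=0.5, T=298.15)
    print(f"residual water after 10 h: {c[-1]:.2f} %")
    ```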

  2. A simple 2D biofilm model yields a variety of morphological features.

    PubMed

    Hermanowicz, S W

    2001-01-01

    A two-dimensional biofilm model was developed based on the concept of cellular automata. Three simple, generic processes were included in the model: cell growth, internal and external mass transport and cell detachment (erosion). The model generated a diverse range of biofilm morphologies (from dense layers to open, mushroom-like forms) similar to those observed in real biofilm systems. Bulk nutrient concentration and external mass transfer resistance had a large influence on the biofilm structure.
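
    The three generic processes named here (growth, nutrient transport, erosion) translate into a very small cellular-automata sketch. All probabilities and the crude Jacobi diffusion solver below are illustrative choices, not the published model's rules.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    NX, NY, STEPS = 40, 60, 300
    biofilm = np.zeros((NY, NX), bool)
    biofilm[0, :] = True                       # substratum seeded with cells

    def nutrient_field(biofilm, bulk=1.0, n_sweeps=50):
        """Crude steady-state diffusion: Jacobi sweeps with the top row held
        at the bulk concentration and consumption inside the biofilm."""
        c = np.full(biofilm.shape, bulk)
        for _ in range(n_sweeps):
            c[1:-1, 1:-1] = 0.25 * (c[:-2, 1:-1] + c[2:, 1:-1] +
                                    c[1:-1, :-2] + c[1:-1, 2:])
            c[-1, :] = bulk                    # bulk liquid boundary (top)
            c[biofilm] *= 0.5                  # consumption by cells
        return c

    moves = ((-1, 0), (1, 0), (0, -1), (0, 1))
    for _ in range(STEPS):
        c = nutrient_field(biofilm)
        # growth: occupied cells divide into a random empty neighbour with
        # probability proportional to the local nutrient concentration
        ys, xs = np.nonzero(biofilm)
        for y, x in zip(ys, xs):
            if rng.random() < 0.2 * c[y, x]:
                dy, dx = moves[rng.integers(4)]
                yy, xx = y + dy, (x + dx) % NX
                if 0 <= yy < NY and not biofilm[yy, xx]:
                    biofilm[yy, xx] = True
        # erosion: exposed cells detach with height-dependent probability
        exposed = biofilm & ~np.roll(biofilm, -1, axis=0)
        heights = np.arange(NY)[:, None] / NY
        biofilm &= ~(exposed & (rng.random(biofilm.shape) < 0.05 * heights))

    print("biofilm thickness per column:", biofilm.sum(axis=0))
    ```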

  3. Towards an improved ensemble precipitation forecast: A probabilistic post-processing approach

    NASA Astrophysics Data System (ADS)

    Khajehei, Sepideh; Moradkhani, Hamid

    2017-03-01

    Recently, ensemble post-processing (EPP) has become a commonly used approach for reducing the uncertainty in forcing data and hence in hydrologic simulation. The procedure was introduced to build ensemble precipitation forecasts based on the statistical relationship between observations and forecasts. More specifically, the approach relies on a transfer function that is developed based on a bivariate joint distribution between the observations and the simulations in the historical period. The transfer function is used to post-process the forecast. In this study, we propose a Bayesian EPP approach based on copula functions (COP-EPP) to improve the reliability of the precipitation ensemble forecast. Evaluation of the copula-based method is carried out by comparing the performance of the generated ensemble precipitation with the outputs from an existing procedure, i.e. the mixed-type meta-Gaussian distribution. Monthly precipitation from the Climate Forecast System Reanalysis (CFS) and gridded observations from the Parameter-Elevation Relationships on Independent Slopes Model (PRISM) have been employed to generate the post-processed ensemble precipitation. Deterministic and probabilistic verification frameworks are utilized in order to evaluate the outputs from the proposed technique. The distribution of seasonal precipitation for the ensemble generated by the copula-based technique is compared to the observations and raw forecasts for three sub-basins located in the western United States. Results show that both techniques produce reliable and unbiased ensemble forecasts; however, COP-EPP demonstrates considerable improvement under both deterministic and probabilistic verification, particularly in characterizing extreme events in wet seasons.
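
    The copula idea can be sketched generically: fit the forecast-observation dependence in normal-score space, then, for a new raw forecast, sample the conditional distribution of the observation and map it back through the empirical margins. The sketch below is a plain Gaussian copula with empirical margins and synthetic data, an illustration of the concept rather than the COP-EPP implementation.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    def to_normal_scores(x):
        """Empirical CDF -> standard-normal quantiles (normal scores)."""
        ranks = stats.rankdata(x) / (len(x) + 1)
        return stats.norm.ppf(ranks)

    # Historical training pairs (synthetic stand-ins for reanalysis forecasts
    # versus gridded observations of monthly precipitation)
    obs = rng.gamma(2.0, 30.0, 500)
    fcst = obs * rng.lognormal(0.0, 0.3, 500)          # biased, noisy forecast

    z_obs, z_fcst = to_normal_scores(obs), to_normal_scores(fcst)
    rho = np.corrcoef(z_obs, z_fcst)[0, 1]             # Gaussian-copula parameter

    def postprocess(raw_fcst, n_members=50):
        """Generate a calibrated ensemble for one new raw forecast value."""
        # position of the new forecast within the historical forecast margin
        u = (np.sum(fcst <= raw_fcst) + 1) / (len(fcst) + 2)
        z_f = stats.norm.ppf(u)
        # conditional Gaussian: z_obs | z_fcst ~ N(rho * z_f, 1 - rho^2)
        z = rng.normal(rho * z_f, np.sqrt(1.0 - rho ** 2), n_members)
        # back-transform through the empirical observation margin
        return np.quantile(obs, stats.norm.cdf(z))

    ens = postprocess(120.0)
    print(f"rho={rho:.2f}, ensemble mean={ens.mean():.1f}, spread={ens.std():.1f}")
    ```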

  4. Littoral transport rates in the Santa Barbara Littoral Cell: a process-based model analysis

    USGS Publications Warehouse

    Elias, E. P. L.; Barnard, Patrick L.; Brocatus, John

    2009-01-01

    Identification of sediment transport patterns and pathways is essential for sustainable coastal zone management of the heavily modified coastline of Santa Barbara and Ventura County (California, USA). A process-based model application, based on Delft3D Online Morphology, is used to investigate the littoral transport potential along the Santa Barbara Littoral Cell (between Point Conception and Mugu Canyon). An advanced optimization procedure is applied to enable annual sediment transport computations by reducing the ocean wave climate into 10 wave-height and direction classes. Modeled littoral transport rates compare well with observed dredging volumes, and erosion or sedimentation hotspots coincide with the modeled divergence and convergence of the transport gradients. Sediment transport rates are strongly dependent on the alongshore variation in wave height due to wave sheltering, diffraction and focusing by the Northern Channel Islands, and on the local orientation of the geologically controlled coastline. Local transport gradients exceed the net eastward littoral transport, and are considered a primary driver of hot-spot erosion.

  5. Content analysis of science material in junior school-based inquiry and science process skills

    NASA Astrophysics Data System (ADS)

    Patonah, S.; Nuvitalia, D.; Saptaningrum, E.

    2018-03-01

    The purpose of this research is to map the characteristics of science content in junior secondary school that can be taught with the inquiry learning model to train science process skills. The method is qualitative research on the SMP (junior secondary school) science curriculum documents in Indonesia. Documents are reviewed on the basis of the basic competencies of each level as well as their potential to train science process skills using the inquiry learning model. The review was conducted by the research team. The results show that science process skills can potentially be trained using the inquiry learning model in 74% of grade 7 material, 83% of grade 8 material, and 75% of grade 9 material. The dominant process skill in each chapter and at each level is observing. Follow-up research will develop inquiry-based instructional tools to train science process skills.

  6. Comparing an Economic Model of News Selection with One Based on Professional Norms in Local Television Newscasts.

    ERIC Educational Resources Information Center

    McManus, John

    A study compared two models (economic and journalistic) of news selection in an attempt to explain what becomes news. The news gathering and news decisionmaking processes of three western United States network-affiliated television stations, one each in a small, medium, and large market, were observed during 12 "typical" days.…

  7. Climate change and the eco-hydrology of fire: Will area burned increase in a warming western USA?

    Treesearch

    Donald McKenzie; Jeremy S. Littell

    2017-01-01

    Wildfire area is predicted to increase with global warming. Empirical statistical models and process-based simulations agree almost universally. The key relationship for this unanimity, observed at multiple spatial and temporal scales, is between drought and fire. Predictive models often focus on ecosystems in which this relationship appears to be particularly strong,...

  8. An efficient surrogate-based simulation-optimization method for calibrating a regional MODFLOW model

    NASA Astrophysics Data System (ADS)

    Chen, Mingjie; Izady, Azizallah; Abdalla, Osman A.

    2017-01-01

    The simulation-optimization method entails a large number of model simulations, which is computationally intensive or even prohibitive if each model simulation is extremely time-consuming. Statistical models have been examined as surrogates of the high-fidelity physical model during the simulation-optimization process to tackle this problem. Among them, Multivariate Adaptive Regression Splines (MARS), a non-parametric adaptive regression method, is well suited to overcoming the problems of high dimensionality and discontinuity in the data. Furthermore, the stability and accuracy of the MARS model can be improved by bootstrap aggregating methods, namely bagging. In this paper, the Bagging MARS (BMARS) method is integrated into a surrogate-based simulation-optimization framework to calibrate a three-dimensional MODFLOW model, which is developed to simulate the groundwater flow in an arid hardrock-alluvium region in northwestern Oman. The physical MODFLOW model is surrogated by the statistical model developed using the BMARS algorithm. The surrogate model, which is fitted and validated using a training dataset generated by the physical model, can approximate solutions rapidly. An efficient Sobol' method is employed to calculate global sensitivities of head outputs to input parameters, which are used to analyze their importance for the model outputs spatiotemporally. Only sensitive parameters are included in the calibration process to further improve the computational efficiency. The normalized root mean square error (NRMSE) between measured and simulated heads at observation wells is used as the objective function to be minimized during optimization. The reasonable history match between the simulated and observed heads demonstrates the feasibility of this highly efficient calibration framework.
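
    The surrogate-based calibration loop reads, in outline, as follows: sample parameter sets, run the expensive model on them, fit a bagged regressor to the resulting objective surface, and optimize on the cheap surrogate. In the sketch below a bagged tree regressor stands in for BMARS (MARS itself is not in scikit-learn) and a cheap analytic function stands in for MODFLOW; only the NRMSE objective mirrors the paper.

    ```python
    import numpy as np
    from sklearn.ensemble import BaggingRegressor
    from sklearn.tree import DecisionTreeRegressor
    from scipy.optimize import differential_evolution

    rng = np.random.default_rng(0)

    def modflow_stand_in(params):
        """Hypothetical 'expensive' model: 2 parameters -> heads at 5 wells."""
        k, r = params
        wells = np.linspace(0.1, 1.0, 5)
        return 50.0 + 10.0 * np.exp(-k * wells) + r * wells

    observed_heads = modflow_stand_in([1.3, 4.0])    # synthetic "measurements"

    def nrmse(params):
        sim = modflow_stand_in(params)
        return np.sqrt(np.mean((sim - observed_heads) ** 2)) / np.ptp(observed_heads)

    # 1. Design of experiments: sample parameter sets, run the "physical" model
    X = rng.uniform([0.1, 0.0], [3.0, 8.0], size=(200, 2))
    y = np.array([nrmse(x) for x in X])

    # 2. Fit the bagged surrogate to the objective surface
    surrogate = BaggingRegressor(DecisionTreeRegressor(), n_estimators=50,
                                 random_state=0).fit(X, y)

    # 3. Minimize on the cheap surrogate instead of the physical model
    result = differential_evolution(
        lambda p: surrogate.predict(np.atleast_2d(p))[0],
        bounds=[(0.1, 3.0), (0.0, 8.0)], seed=0)
    print("calibrated parameters:", result.x)        # expected near [1.3, 4.0]
    ```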

  9. Seismological Field Observation of Mesoscopic Nonlinearity

    NASA Astrophysics Data System (ADS)

    Sens-Schönfelder, Christoph; Gassenmeier, Martina; Eulenfeld, Tom; Tilmann, Frederik; Korn, Michael; Niederleithinger, Ernst

    2016-04-01

    Noise-based observations of seismic velocity changes have been made in various environments. We know of seasonal changes of velocities related to groundwater or temperature changes, co-seismic changes originating from shaking or stress redistribution, and changes related to volcanic activity. It is often argued that a decrease of velocity is related to the opening of cracks, while the closure of cracks leads to a velocity increase if permanent stress changes are invoked. In contrast, shaking-induced changes are often related to "damage" and subsequent "healing" of the material. The co-seismic decrease and transient recovery of seismic velocities can thus be explained by both static stress changes and damage/healing processes, which results in ambiguous interpretations of the observations. Here we present the analysis of one particular seismic station in northern Chile that shows very strong and clear velocity changes associated with several earthquakes ranging from Mw=5.3 to Mw=8.1. The fact that we can observe the response to several events of various magnitudes from different directions offers the unique possibility to discern the two possible causative processes. We test the hypothesis that the velocity changes are related to shaking rather than stress changes by developing an empirical model that is based on the local ground acceleration at the sensor site. The eight years of almost continuous observations of velocity changes are well modeled by a daily drop of the velocity followed by an exponential recovery. Both the amplitude of the drop and the recovery time are proportional to the integrated acceleration at the seismic station. Effects of consecutive days are independent and superimposed, resulting in strong changes after earthquakes and constantly increasing velocities during quiet days thereafter. This model describes the continuous observations of the velocity changes solely based on the acceleration time series, without individually defined event dates with separately inverted parameters. As the local ground acceleration is not correlated with static stress changes, we can exclude static stress changes as the causative process. The shaking sensitivity and healing process is well known from laboratory experiments on composite materials as mesoscopic nonlinearity. The sensitive behavior at this station is related to the particular near-surface material, a conglomerate cemented with gypsum, so-called gypcrete. However, mesoscopic nonlinearity with different parameters might be a key to understanding velocity changes at other sites as well.
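
    The empirical model described, a daily velocity drop proportional to the integrated acceleration, exponential recovery, and linear superposition of the contributions from consecutive days, is compact enough to sketch directly. The sensitivity and recovery constants below are illustrative assumptions, not the inverted values for the Chilean station.

    ```python
    import numpy as np

    # Each day d contributes a velocity drop proportional to that day's
    # integrated ground acceleration A_d, recovering exponentially:
    #   dv/v(t) = sum_d  sens * A_d * exp(-(t - d) / tau_d)  for t >= d,
    # with the recovery time tau_d itself growing with A_d.

    def velocity_change(acc_daily, sens=-1e-4, tau_per_acc=2.0):
        n = len(acc_daily)
        t = np.arange(n)
        dv = np.zeros(n)
        for d, A in enumerate(acc_daily):
            if A <= 0:
                continue
            tau = max(tau_per_acc * A, 1e-6)     # recovery time, days
            dv[d:] += sens * A * np.exp(-(t[d:] - d) / tau)
        return dv

    # Quiet background shaking plus two "earthquakes" of different sizes
    acc = np.full(3000, 0.5)
    acc[500], acc[2000] = 300.0, 80.0
    dvv = velocity_change(acc)
    print(f"dv/v right after the large event: {dvv[501]:.4f}")
    ```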

  10. Methodology for the AutoRegressive Planet Search (ARPS) Project

    NASA Astrophysics Data System (ADS)

    Feigelson, Eric; Caceres, Gabriel; ARPS Collaboration

    2018-01-01

    The detection of periodic signals of transiting exoplanets is often impeded by the presence of aperiodic photometric variations. This variability is intrinsic to the host star in space-based observations (typically arising from magnetic activity) and to observational conditions in ground-based observations. The most common statistical procedures to remove stellar variations are nonparametric, such as wavelet decomposition or Gaussian Processes regression. However, many stars display variability with autoregressive properties, wherein later flux values are correlated with previous ones. Provided the time series is evenly spaced, parametric autoregressive models can prove very effective. Here we present the methodology of the Autoregressive Planet Search (ARPS) project, which uses Autoregressive Integrated Moving Average (ARIMA) models to treat a wide variety of stochastic short-memory processes, as well as nonstationarity. Additionally, we introduce a planet-search algorithm to detect periodic transits in the time-series residuals after application of ARIMA models. Our matched-filter algorithm, the Transit Comb Filter (TCF), replaces the traditional box-fitting step. We construct a periodogram based on the TCF to concentrate the signal of these periodic spikes. Various features of the original light curves, the ARIMA fits, the TCF periodograms, and folded light curves at peaks of the TCF periodogram can then be collected to provide constraints for planet detection. These features provide input into a multivariate classifier when a training set is available. The ARPS procedure has been applied to NASA's Kepler mission observations of ~200,000 stars (Caceres, Dissertation Talk, this meeting) and will be applied in the future to other datasets.
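
    The two ARPS stages can be sketched on synthetic data: fit an ARIMA model to remove autocorrelated variability, then scan the residuals for periodic transit-like dips. The phase-binning statistic below is a crude stand-in for the actual Transit Comb Filter, used only to show where it sits in the pipeline.

    ```python
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(7)
    n, true_period, depth = 2000, 57, 0.004

    # Synthetic light curve: AR(1) stellar variability plus box transits
    flux = np.zeros(n)
    for i in range(1, n):
        flux[i] = 0.9 * flux[i - 1] + rng.normal(0, 1e-3)
    flux[np.arange(n) % true_period < 3] -= depth

    # Stage 1: ARIMA fit removes the autocorrelated variability
    resid = ARIMA(flux, order=(1, 0, 0)).fit().resid

    # Stage 2: scan residuals for periodic dips (crude comb-like statistic)
    def comb_statistic(resid, period):
        """Depth of the deepest phase bin at a trial period."""
        phase = np.arange(resid.size) % period
        means = np.array([resid[phase == p].mean() for p in range(period)])
        return -means.min()

    periods = np.arange(20, 100)
    stat = np.array([comb_statistic(resid, p) for p in periods])
    print("best trial period:", periods[np.argmax(stat)])   # expect ~57
    ```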

  11. Global and Time-Resolved Monitoring of Crop Photosynthesis with Chlorophyll Fluorescence

    NASA Technical Reports Server (NTRS)

    Guanter, Luis; Zhang, Yongguang; Jung, Martin; Joiner, Joanna; Voigt, Maximilian; Berry, Joseph A.; Frankenberg, Christian; Huete, Alfredo R.; Zarco-Tejada, Pablo; Lee, Jung-Eun; et al.

    2014-01-01

    Photosynthesis is the process by which plants harvest sunlight to produce sugars from carbon dioxide and water. It is the primary source of energy for all life on Earth; hence it is important to understand how this process responds to climate change and human impact. However, model-based estimates of gross primary production (GPP, output from photosynthesis) are highly uncertain, in particular over heavily managed agricultural areas. Recent advances in spectroscopy enable the space-based monitoring of sun-induced chlorophyll fluorescence (SIF) from terrestrial plants. Here we demonstrate that spaceborne SIF retrievals provide a direct measure of the GPP of cropland and grassland ecosystems. Such a strong link with crop photosynthesis is not evident for traditional remotely sensed vegetation indices, nor for more complex carbon cycle models. We use SIF observations to provide a global perspective on agricultural productivity. Our SIF-based crop GPP estimates are 50-75% higher than results from state-of-the-art carbon cycle models over, for example, the US Corn Belt and the Indo-Gangetic Plain, implying that current models severely underestimate the role of management. Our results indicate that SIF data can help us improve our global models for more accurate projections of agricultural productivity and climate impact on crop yields. Extension of our approach to other ecosystems, along with increased observational capabilities for SIF in the near future, holds the prospect of reducing uncertainties in the modeling of the current and future carbon cycle.

  12. Global and time-resolved monitoring of crop photosynthesis with chlorophyll fluorescence

    PubMed Central

    Guanter, Luis; Zhang, Yongguang; Jung, Martin; Joiner, Joanna; Voigt, Maximilian; Berry, Joseph A.; Frankenberg, Christian; Huete, Alfredo R.; Zarco-Tejada, Pablo; Lee, Jung-Eun; Moran, M. Susan; Ponce-Campos, Guillermo; Beer, Christian; Camps-Valls, Gustavo; Buchmann, Nina; Gianelle, Damiano; Klumpp, Katja; Cescatti, Alessandro; Baker, John M.; Griffis, Timothy J.

    2014-01-01

    Photosynthesis is the process by which plants harvest sunlight to produce sugars from carbon dioxide and water. It is the primary source of energy for all life on Earth; hence it is important to understand how this process responds to climate change and human impact. However, model-based estimates of gross primary production (GPP, output from photosynthesis) are highly uncertain, in particular over heavily managed agricultural areas. Recent advances in spectroscopy enable the space-based monitoring of sun-induced chlorophyll fluorescence (SIF) from terrestrial plants. Here we demonstrate that spaceborne SIF retrievals provide a direct measure of the GPP of cropland and grassland ecosystems. Such a strong link with crop photosynthesis is not evident for traditional remotely sensed vegetation indices, nor for more complex carbon cycle models. We use SIF observations to provide a global perspective on agricultural productivity. Our SIF-based crop GPP estimates are 50–75% higher than results from state-of-the-art carbon cycle models over, for example, the US Corn Belt and the Indo-Gangetic Plain, implying that current models severely underestimate the role of management. Our results indicate that SIF data can help us improve our global models for more accurate projections of agricultural productivity and climate impact on crop yields. Extension of our approach to other ecosystems, along with increased observational capabilities for SIF in the near future, holds the prospect of reducing uncertainties in the modeling of the current and future carbon cycle. PMID:24706867

  13. Terrestrial gross carbon dioxide uptake: global distribution and covariation with climate.

    PubMed

    Beer, Christian; Reichstein, Markus; Tomelleri, Enrico; Ciais, Philippe; Jung, Martin; Carvalhais, Nuno; Rödenbeck, Christian; Arain, M Altaf; Baldocchi, Dennis; Bonan, Gordon B; Bondeau, Alberte; Cescatti, Alessandro; Lasslop, Gitta; Lindroth, Anders; Lomas, Mark; Luyssaert, Sebastiaan; Margolis, Hank; Oleson, Keith W; Roupsard, Olivier; Veenendaal, Elmar; Viovy, Nicolas; Williams, Christopher; Woodward, F Ian; Papale, Dario

    2010-08-13

    Terrestrial gross primary production (GPP) is the largest global CO2 flux driving several ecosystem functions. We provide an observation-based estimate of this flux at 123 ± 8 petagrams of carbon per year (Pg C year-1) using eddy covariance flux data and various diagnostic models. Tropical forests and savannahs account for 60%. GPP over 40% of the vegetated land is associated with precipitation. State-of-the-art process-oriented biosphere models used for climate predictions exhibit a large between-model variation of GPP's latitudinal patterns and show higher spatial correlations between GPP and precipitation, suggesting the existence of missing processes or feedback mechanisms which attenuate the vegetation response to climate. Our estimates of spatially distributed GPP and its covariation with climate can help improve coupled climate-carbon cycle process models.

  14. Extending data worth methods to select multiple observations targeting specific hydrological predictions of interest

    NASA Astrophysics Data System (ADS)

    Vilhelmsen, Troels N.; Ferré, Ty P. A.

    2016-04-01

    Hydrological models are often developed to forecast future behavior in response to natural or human-induced changes in the stresses affecting hydrologic systems. Commonly, these models are conceptualized and calibrated based on existing data/information about the hydrological conditions. However, most hydrologic systems lack sufficient data to constrain models with adequate certainty to support robust decision making. Therefore, a key element of a hydrologic study is the selection of additional data to improve model performance. Given the nature of hydrologic investigations, it is not practical to select data sequentially, i.e. to choose the next observation, collect it, refine the model, and then repeat the process. Rather, for timing and financial reasons, measurement campaigns include multiple wells or sampling points. There is a growing body of literature aimed at defining the expected data worth based on existing models. However, these studies are almost all limited to identifying single additional observations. In this study, we present a methodology for simultaneously selecting multiple potential new observations based on their expected ability to reduce the uncertainty of the forecasts of interest. This methodology is based on linear estimates of the predictive uncertainty, and it can be used to determine the optimal combinations of measurements (location and number) established to reduce the uncertainty of multiple predictions. The outcome of the analysis is an estimate of the optimal sampling locations and the optimal number of samples, as well as a probability map showing the locations within the investigated area that are most likely to provide useful information about the forecasts of interest.
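
    With a linearized model, the data-worth calculation described here reduces to covariance algebra: the worth of a candidate observation set is the reduction it yields in the forecast variance, computable before any data are collected. The sketch below uses random matrices as stand-ins for real model sensitivities and adds a greedy loop so that multiple observations are selected jointly.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_par, n_cand = 8, 30
    P = np.eye(n_par)                          # prior parameter covariance
    H = rng.normal(size=(n_cand, n_par))       # candidate observation sensitivities
    r = 0.5                                    # observation error variance
    g = rng.normal(size=n_par)                 # forecast sensitivity vector

    def forecast_var(P, rows):
        """Posterior forecast variance after collecting the given observations."""
        if rows:
            Hs = H[rows]
            Ppost = np.linalg.inv(np.linalg.inv(P) + Hs.T @ Hs / r)
        else:
            Ppost = P
        return g @ Ppost @ g

    def greedy_select(k):
        """Pick k observations, each maximizing the joint variance reduction."""
        chosen = []
        for _ in range(k):
            best = min((i for i in range(n_cand) if i not in chosen),
                       key=lambda i: forecast_var(P, chosen + [i]))
            chosen.append(best)
        return chosen

    sel = greedy_select(5)
    print("selected observations:", sel)
    print(f"forecast variance: {forecast_var(P, []):.3f} -> {forecast_var(P, sel):.3f}")
    ```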

  15. Boolean Dynamic Modeling Approaches to Study Plant Gene Regulatory Networks: Integration, Validation, and Prediction.

    PubMed

    Velderraín, José Dávila; Martínez-García, Juan Carlos; Álvarez-Buylla, Elena R

    2017-01-01

    Mathematical models based on dynamical systems theory are well-suited tools for the integration of available molecular experimental data into coherent frameworks in order to propose hypotheses about the cooperative regulatory mechanisms driving developmental processes. Computational analysis of the proposed models using well-established methods enables testing the hypotheses by contrasting predictions with observations. Within such framework, Boolean gene regulatory network dynamical models have been extensively used in modeling plant development. Boolean models are simple and intuitively appealing, ideal tools for collaborative efforts between theorists and experimentalists. In this chapter we present protocols used in our group for the study of diverse plant developmental processes. We focus on conceptual clarity and practical implementation, providing directions to the corresponding technical literature.
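
    A minimal Boolean network makes the approach concrete: genes are ON/OFF, each is updated synchronously by a logical rule over its regulators, and attractors are read off by exhaustive search. The three-gene toggle-switch rules below are illustrative, not a published plant network.

    ```python
    from itertools import product

    # Illustrative rules: A and B mutually repress (a bistable switch),
    # and C is a downstream target of A.
    RULES = {
        "A": lambda s: not s["B"],
        "B": lambda s: not s["A"],
        "C": lambda s: s["A"] and not s["B"],
    }
    GENES = sorted(RULES)

    def step(state):
        """One synchronous update of every gene."""
        return {g: RULES[g](state) for g in GENES}

    def attractors():
        """Exhaustively find fixed points over all 2^n initial states.
        (Cyclic attractors are ignored here for brevity.)"""
        fixed = set()
        for bits in product([False, True], repeat=len(GENES)):
            state = dict(zip(GENES, bits))
            for _ in range(2 ** len(GENES)):    # long enough to settle
                nxt = step(state)
                if nxt == state:
                    fixed.add(tuple(state[g] for g in GENES))
                    break
                state = nxt
        return fixed

    print("genes:", GENES)
    print("fixed-point attractors:", attractors())
    # Two stable states emerge (A on / B off and the reverse), the expected
    # behavior of a mutual-repression switch.
    ```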

  16. Towards a Decision Support System for Space Flight Operations

    NASA Technical Reports Server (NTRS)

    Meshkat, Leila; Hogle, Charles; Ruszkowski, James

    2013-01-01

    The Mission Operations Directorate (MOD) at the Johnson Space Center (JSC) has put in place a Model Based Systems Engineering (MBSE) technological framework for the development and execution of the Flight Production Process (FPP). This framework has provided much added value and return on investment to date. This paper describes a vision for a model-based Decision Support System (DSS) for the development and execution of the FPP and its design and development process. The envisioned system extends the existing MBSE methodology and technological framework which is currently in use. The MBSE technological framework currently in place enables the systematic collection and integration of data required for building an FPP model for a diverse set of missions. This framework includes the technology, people and processes required for rapid development of architectural artifacts. It is used to build a feasible FPP model for the first flight of a spacecraft and for recurrent flights throughout the life of the program. This model greatly enhances our ability to effectively engage with a new customer. It provides a preliminary work breakdown structure, data flow information and a master schedule based on its existing knowledge base. These artifacts are then refined and iterated upon with the customer for the development of a robust end-to-end, high-level integrated master schedule and its associated dependencies. The vision is to enhance this framework to enable its application for uncertainty management, decision support and optimization of the design and execution of the FPP by the program. Furthermore, this enhanced framework will enable the agile response and redesign of the FPP based on observed system behavior. The discrepancy between the anticipated system behavior and the observed behavior may be due to the processing of tasks internally, or due to external factors such as changes in program requirements or conditions associated with other organizations that are outside of MOD. The paper provides a roadmap for the three increments of this vision. These increments include (1) hardware and software system components and interfaces with the NASA ground system, (2) uncertainty management and (3) re-planning and automated execution. Each of these increments provides value independently, but some may also enable the building of a subsequent increment.

  17. QRS complex detection based on continuous density hidden Markov models using univariate observations

    NASA Astrophysics Data System (ADS)

    Sotelo, S.; Arenas, W.; Altuve, M.

    2018-04-01

    In the electrocardiogram (ECG), the detection of QRS complexes is a fundamental step in the ECG signal processing chain, since it allows the determination of other characteristic waves of the ECG and provides information about heart rate variability. In this work, an automatic QRS complex detector based on continuous-density hidden Markov models (HMM) is proposed. HMM were trained using univariate observation sequences taken either from QRS complexes or their derivatives. The detection approach is based on comparing the log-likelihood of the observation sequence with a fixed threshold. A sliding window was used to obtain the observation sequence to be evaluated by the model. The threshold was optimized by receiver operating characteristic curves. Sensitivity (Sen), specificity (Spc) and F1 score were used to evaluate the detection performance. The approach was validated using ECG recordings from the MIT-BIH Arrhythmia database. A 6-fold cross-validation shows that the best detection performance was achieved with a 2-state HMM trained with QRS complex sequences (Sen = 0.668, Spc = 0.360 and F1 = 0.309). We concluded that these univariate sequences provide enough information to characterize the QRS complex dynamics with HMM. Future work is directed toward the use of multivariate observations to increase the detection performance.
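
    The detection scheme can be sketched end to end with the hmmlearn library: train a continuous-density (Gaussian) HMM on univariate QRS windows, then slide a window over the signal and compare each window's log-likelihood with a threshold. The synthetic ECG, window length and percentile threshold below are illustrative assumptions standing in for the MIT-BIH data and the ROC-tuned threshold.

    ```python
    import numpy as np
    from hmmlearn.hmm import GaussianHMM

    rng = np.random.default_rng(0)
    W = 25                                     # window length in samples

    def synthetic_qrs():
        """Crude QRS-like spike: sharp biphasic deflection plus noise."""
        t = np.linspace(-1, 1, W)
        return -np.sign(t) * np.exp(-(t / 0.25) ** 2) + rng.normal(0, 0.05, W)

    # Train a 2-state Gaussian HMM on many example QRS windows
    train = [synthetic_qrs() for _ in range(200)]
    X = np.concatenate(train).reshape(-1, 1)   # univariate observations
    lengths = [W] * len(train)
    model = GaussianHMM(n_components=2, covariance_type="diag",
                        n_iter=50, random_state=0).fit(X, lengths)

    # Build a test strip: noise with two embedded QRS complexes
    ecg = rng.normal(0, 0.05, 600)
    for start in (100, 400):
        ecg[start:start + W] += synthetic_qrs()

    # Slide the window and threshold the per-window log-likelihood
    scores = np.array([model.score(ecg[i:i + W].reshape(-1, 1))
                       for i in range(len(ecg) - W)])
    threshold = np.percentile(scores, 99)      # stand-in for the ROC-tuned value
    print("detections near:", np.nonzero(scores > threshold)[0])
    ```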

  18. Effects of seasonality and land-use change on carbon and water fluxes across the Amazon basin: synthesizing results from satellite-based remote sensing, towers, and models

    NASA Astrophysics Data System (ADS)

    Saleska, S.; Goncalves, L. G.; Baker, I.; Costa, M.; Poulter, B.; Christoffersen, B.; Da Rocha, H. R.; Didan, K.; Huete, A.; Imbuziero, H.; Kruijt, B.; Manzi, A.; von Randow, C.; Restrepo-Coupe, N.; Silva, R.; Tota, J.; Denning, S.; Gulden, L.; Rosero, E.; Zeng, X.

    2008-12-01

    Amazon forests play an important and complex role in the global carbon cycle, and important advances have been made in understanding Amazon processes in recent years. However, reconciling modeled mechanisms of carbon cycling with observations across scales remains a challenge. To better address this challenge, we initiated a Model intercomparison Project for the 'Large-Scale Biosphere Atmosphere Experiment in Amazonia' (LBA-MIP) to integrate modeling and observational studies for improved understanding of Amazon basin carbon cycling. Here, we report on the initial results of this project, which used the network of meteorological and climate data (sunlight, radiation, precipitation) from Amazon tower sites in forest and converted lands to drive a suite of 20 ecosystem models that simulate energy, water and CO2 fluxes. We compared model mechanisms to each other and to the relevant flux observations from those towers, as well as from satellite data from the Moderate Resolution Imaging Spectroradiometer (MODIS). Remote sensing and flux tower observations tend to show higher primary forest photosynthetic activity in the dry season than in the wet season in central Amazon, a broad pattern that is now captured in many models, but for different reasons. A reversal from the primary forest pattern was observed in areas converted to pasture, agriculture, or secondary forests, likely a consequence of the elimination of deep root access to deep soil waters which often persist through the dry season. Testing the models with observed fluxes under different land use patterns, and across different spatial scales with remote sensing, is enabling us to distinguish correct vs. incorrect model mechanisms and improve understanding of Amazon processes.

  19. A Regional Climate Model Evaluation System based on contemporary Satellite and other Observations for Assessing Regional Climate Model Fidelity

    NASA Astrophysics Data System (ADS)

    Waliser, D. E.; Kim, J.; Mattman, C.; Goodale, C.; Hart, A.; Zimdars, P.; Lean, P.

    2011-12-01

    Evaluation of climate models against observations is an essential part of assessing the impact of climate variations and change on regionally important sectors and improving climate models. Regional climate models (RCMs) are of particular concern. RCMs provide the fine-scale climate needed by the assessment community by downscaling global climate model projections such as those contributing to the Coupled Model Intercomparison Project (CMIP), which form one aspect of the quantitative basis of the IPCC Assessment Reports. The lack of reliable fine-resolution observational data and of formal tools and metrics has represented a challenge in evaluating RCMs. Recent satellite observations are particularly useful as they provide a wealth of information and constraints on many different processes within the climate system. Due to their large volume and the difficulties associated with accessing and using contemporary observations, however, these datasets have been generally underutilized in model evaluation studies. Recognizing this problem, NASA JPL and UCLA have developed the Regional Climate Model Evaluation System (RCMES) to help make satellite observations, in conjunction with in-situ and reanalysis datasets, more readily accessible to the regional modeling community. The system includes a central database (Regional Climate Model Evaluation Database: RCMED) to store multiple datasets in a common format and codes for calculating and plotting statistical metrics to assess model performance (Regional Climate Model Evaluation Tool: RCMET). This allows the time taken to compare model data with satellite observations to be reduced from weeks to days. RCMES is a component of the recent ExArch project, an international effort to facilitate the archiving of and access to massive amounts of data using cloud-based infrastructure, in this case as applied to the study of climate and climate change. This presentation will describe RCMES and demonstrate its utility using examples from RCMs applied to the southwest US as well as to Africa based on output from the CORDEX activity. Application of RCMES to the evaluation of multi-RCM hindcasts for CORDEX-Africa will be presented in a companion paper in A41.

  20. New parsimonious simulation methods and tools to assess future food and environmental security of farm populations

    PubMed Central

    Antle, John M.; Stoorvogel, Jetse J.; Valdivia, Roberto O.

    2014-01-01

    This article presents conceptual and empirical foundations for new parsimonious simulation models that are being used to assess future food and environmental security of farm populations. The conceptual framework integrates key features of the biophysical and economic processes on which the farming systems are based. The approach represents a methodological advance by coupling important behavioural processes, for example, self-selection in adaptive responses to technological and environmental change, with aggregate processes, such as changes in market supply and demand conditions or in environmental conditions such as climate. Suitable biophysical and economic data are a critical limiting factor in modelling these complex systems, particularly for the characterization of out-of-sample counterfactuals in ex ante analyses. Parsimonious, population-based simulation methods are described that exploit available observational, experimental, modelled and expert data. The analysis makes use of a new scenario design concept called representative agricultural pathways. A case study illustrates how these methods can be used to assess food and environmental security. The concluding section addresses generalizations of parametric forms and linkages of regional models to global models. PMID:24535388

  1. New parsimonious simulation methods and tools to assess future food and environmental security of farm populations.

    PubMed

    Antle, John M; Stoorvogel, Jetse J; Valdivia, Roberto O

    2014-04-05

    This article presents conceptual and empirical foundations for new parsimonious simulation models that are being used to assess future food and environmental security of farm populations. The conceptual framework integrates key features of the biophysical and economic processes on which the farming systems are based. The approach represents a methodological advance by coupling important behavioural processes, for example, self-selection in adaptive responses to technological and environmental change, with aggregate processes, such as changes in market supply and demand conditions or in environmental conditions such as climate. Suitable biophysical and economic data are a critical limiting factor in modelling these complex systems, particularly for the characterization of out-of-sample counterfactuals in ex ante analyses. Parsimonious, population-based simulation methods are described that exploit available observational, experimental, modelled and expert data. The analysis makes use of a new scenario design concept called representative agricultural pathways. A case study illustrates how these methods can be used to assess food and environmental security. The concluding section addresses generalizations of parametric forms and linkages of regional models to global models.

  2. Stratospheric Aerosol--Observations, Processes, and Impact on Climate

    NASA Technical Reports Server (NTRS)

    Kremser, Stefanie; Thomason, Larry W.; von Hobe, Marc; Hermann, Markus; Deshler, Terry; Timmreck, Claudia; Toohey, Matthew; Stenke, Andrea; Schwarz, Joshua P.; Weigel, Ralf; et al.

    2016-01-01

    Interest in stratospheric aerosol and its role in climate has increased over the last decade, due to the observed increase in stratospheric aerosol since 2000 and the potential for climate-change-induced changes in the sulfur cycle. This review provides an overview of the advances in stratospheric aerosol research since the last comprehensive assessment of stratospheric aerosol, published in 2006. A crucial development since 2006 is the substantial improvement in the agreement between in situ and space-based inferences of stratospheric aerosol properties during volcanically quiescent periods. Furthermore, new measurement systems and techniques, both in situ and space based, have been developed for measuring physical aerosol properties with greater accuracy and for characterizing aerosol composition. However, these changes pose challenges for the construction of a long-term stratospheric aerosol climatology. Currently, changes in stratospheric aerosol levels of less than 20% cannot be confidently quantified, and volcanic signals tend to mask any nonvolcanically driven change, making such changes difficult to understand. While the role of carbonyl sulfide as a substantial and relatively constant source of stratospheric sulfur has been confirmed by new observations and model simulations, large uncertainties remain with respect to the contribution from anthropogenic sulfur dioxide emissions. New evidence shows that stratospheric aerosol can also contain small amounts of nonsulfate matter such as black carbon and organics. Chemistry-climate models have substantially increased in number and sophistication, and in many models the implementation of stratospheric aerosol processes is coupled to radiation and/or stratospheric chemistry modules to account for relevant feedback processes.

  3. Monte Carlo modeling of atomic oxygen attack of polymers with protective coatings on LDEF

    NASA Technical Reports Server (NTRS)

    Banks, Bruce A.; Degroh, Kim K.; Sechkar, Edward A.

    1992-01-01

    Characterization of the behavior of atomic oxygen interaction with materials on the Long Duration Exposure Facility (LDEF) will assist in understanding the mechanisms involved and will lead to improved reliability in predicting the in-space durability of materials based on ground-laboratory testing. A computational simulation of atomic oxygen interaction with protected polymers was developed using Monte Carlo techniques. Through the use of assumed mechanistic behavior of atomic oxygen and the results of both ground-laboratory and LDEF data, a predictive Monte Carlo model was developed that simulates the oxidation processes occurring on polymers with applied protective coatings that have defects. The use of high-fluence, ram-directed LDEF results has enabled mechanistic implications to be drawn by adjusting Monte Carlo modeling assumptions until they match results observed by scanning electron microscopy. Modeling assumptions, implications, and predictions are presented, along with a comparison of observed ground-laboratory and LDEF results.
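
    The record gives no implementation details, so the following is only a toy illustration of the general Monte Carlo idea it describes: atoms entering through a coating defect, scattering through the eroded cavity, and reacting with an assumed probability. The lattice size, walk cap and reaction probability are all invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 2-D lattice of polymer under a protective coating with a single defect.
W, D = 61, 40                       # lattice width and depth (arbitrary units)
polymer = np.ones((D, W), bool)     # True = intact polymer
entry = W // 2                      # column of the coating defect
polymer[0, entry] = False           # the defect exposes one surface cell
P_REACT = 0.1                       # assumed reaction probability per impact

for _ in range(30_000):             # atoms arriving through the defect
    x, y = entry, 0
    for _step in range(200):        # cap each atom's walk
        nx = x + rng.integers(-1, 2)
        ny = max(0, y + rng.integers(-1, 2))   # the coating acts as a lid
        if not (0 <= nx < W) or ny >= D:
            break                   # atom leaves the modeled region
        if polymer[ny, nx]:
            if rng.random() < P_REACT:
                polymer[ny, nx] = False        # oxidation erodes this cell
            break                   # atom is consumed or recombines at the wall
        x, y = nx, ny               # free flight through the eroded cavity

print("eroded cells (undercut cavity size):", np.count_nonzero(~polymer))
```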

  4. Estimating Western U.S. Reservoir Sedimentation

    NASA Astrophysics Data System (ADS)

    Bensching, L.; Livneh, B.; Greimann, B. P.

    2017-12-01

    Reservoir sedimentation is a long-term problem for water management across the Western U.S. Observations of sedimentation are limited to reservoir surveys that are costly and infrequent, with many reservoirs having two or fewer surveys. This work applies a recently developed ensemble of sediment algorithms to estimate sedimentation in several western U.S. reservoirs. The sediment algorithms include empirical, conceptual, stochastic, and process-based approaches and are coupled with a hydrologic modeling framework. Preliminary results showed that the more complex, process-based algorithms performed better in predicting high sediment flux values and in a basin-transferability experiment; however, more testing and validation are required to confirm sediment model skill. This work is carried out in partnership with the Bureau of Reclamation, with the goal of evaluating the viability of reservoir sediment yield prediction across the western U.S. using a multi-algorithm approach. Simulations of streamflow and sediment fluxes are validated against observed discharges, as well as against a Reservoir Sedimentation Information database being developed by the US Army Corps of Engineers. Specific goals of this research include (i) quantifying whether inter-algorithm differences consistently capture observational variability; (ii) identifying whether certain categories of models consistently produce the best results; and (iii) assessing the expected sedimentation life span of several western U.S. reservoirs through long-term simulations.

  5. Calibration of a distributed hydrologic model for six European catchments using remote sensing data

    NASA Astrophysics Data System (ADS)

    Stisen, S.; Demirel, M. C.; Mendiguren González, G.; Kumar, R.; Rakovec, O.; Samaniego, L. E.

    2017-12-01

    While observed streamflow has been the single reference for most conventional hydrologic model calibration exercises, the availability of spatially distributed remote sensing observations provides new possibilities for multi-variable calibration assessing both the spatial and temporal variability of different hydrologic processes. In this study, we first identify the key transfer parameters of the mesoscale Hydrologic Model (mHM) controlling both the discharge and the spatial distribution of actual evapotranspiration (AET) across six central European catchments (Elbe, Main, Meuse, Moselle, Neckar and Vienne). These catchments are selected for their limited topographic and climatic variability, which makes it possible to evaluate the effect of spatial parameterization on the simulated evapotranspiration patterns. We develop a European-scale remote-sensing-based actual evapotranspiration dataset at a 1 km grid scale, driven primarily by land surface temperature observations from MODIS using the TSEB approach. Using the observed AET maps, we analyze the potential benefits of incorporating spatial patterns from MODIS data into the calibration of the mHM model. The model's unique structure and multi-parameter regionalization approach allow calibrating one basin at a time or all basins together. Results will indicate any trade-offs between spatial pattern and discharge simulation during model calibration and through validation against independent internal discharge locations. Moreover, the added value for internal water balances will be analyzed.

  6. Investigation of flow and transport processes at the MADE site using ensemble Kalman filter

    USGS Publications Warehouse

    Liu, Gaisheng; Chen, Y.; Zhang, Dongxiao

    2008-01-01

    In this work the ensemble Kalman filter (EnKF) is applied to investigate the flow and transport processes at the macro-dispersion experiment (MADE) site in Columbus, MS. The EnKF is a sequential data assimilation approach that adjusts unknown model parameter values over time, based on the observed data. The classic advection-dispersion (AD) and dual-domain mass transfer (DDMT) models are employed to analyze the tritium plume from the second MADE tracer experiment. The hydraulic conductivity (K) and longitudinal dispersivity in the AD model, and the mass transfer rate coefficient and mobile porosity ratio in the DDMT model, are estimated in this investigation. Because of its sequential nature, the EnKF allows for the temporal scaling of transport parameters during the tritium concentration analysis. Inverse simulation results indicate that for the AD model to reproduce the extensive spatial spreading of tritium observed in the field, K in the downgradient area must be increased significantly: the estimated K in the AD model becomes an order of magnitude higher than the in situ flowmeter measurements over a large portion of the medium. The DDMT model, on the other hand, yields K estimates that are much more comparable with the flowmeter values. In addition, the concentrations simulated by the DDMT model agree better with the observed values: the root mean square (RMS) difference between the observed and simulated tritium plumes at 328 days is 0.77 for the AD model and 0.45 for the DDMT model. Unlike the AD model, which gives inconsistent K estimates at different times, the DDMT model is able to invert K values that consistently reproduce the observed tritium concentrations at all times. © 2008 Elsevier Ltd. All rights reserved.
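
    For orientation, the analysis step such a sequential filter performs can be sketched generically; this is a textbook stochastic-EnKF update with made-up dimensions and a linear observation operator, not the authors' code:

```python
import numpy as np

def enkf_update(X, y, H, R, rng):
    """One stochastic EnKF analysis step.

    X : (n, N) ensemble of states/parameters (e.g. log-K values)
    y : (m,)   observations (e.g. tritium concentrations)
    H : (m, n) linear(ized) observation operator
    R : (m, m) observation-error covariance
    """
    N = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)            # ensemble anomalies
    P = A @ A.T / (N - 1)                            # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(y.size), R, N).T
    return X + K @ (Y - H @ X)                       # analysis ensemble

rng = np.random.default_rng(2)
n, m, N = 50, 5, 100                                 # state, obs, ensemble sizes
X = rng.normal(0.0, 1.0, (n, N))                     # prior ensemble (illustrative)
H = np.zeros((m, n))
H[np.arange(m), np.arange(0, n, n // m)] = 1.0       # observe every 10th element
R = 0.1 * np.eye(m)
y = rng.normal(0.0, 1.0, m)                          # synthetic observations
print(enkf_update(X, y, H, R, rng).shape)            # (50, 100)
```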

  7. A model of plant isoprene emission based on available reducing power captures responses to atmospheric CO₂.

    PubMed

    Morfopoulos, Catherine; Sperlich, Dominik; Peñuelas, Josep; Filella, Iolanda; Llusià, Joan; Medlyn, Belinda E; Niinemets, Ülo; Possell, Malcolm; Sun, Zhihong; Prentice, Iain Colin

    2014-07-01

    We present a unifying model for isoprene emission by photosynthesizing leaves based on the hypothesis that isoprene biosynthesis depends on a balance between the supply of photosynthetic reducing power and the demands of carbon fixation. We compared the predictions from our model, as well as from two other widely used models, with measurements of isoprene emission from leaves of Populus nigra and hybrid aspen (Populus tremula × P. tremuloides) in response to changes in leaf internal CO2 concentration (C(i)) and photosynthetic photon flux density (PPFD) under diverse ambient CO2 concentrations (C(a)). Our model reproduces the observed changes in isoprene emissions with C(i) and PPFD, and also reproduces the tendency for the fraction of fixed carbon allocated to isoprene to increase with increasing PPFD. It also provides a simple mechanism for the previously unexplained decrease in the quantum efficiency of isoprene emission with increasing C(a). Experimental and modelled results support our hypothesis. Our model can reproduce the key features of the observations and has the potential to improve process-based modelling of isoprene emissions by land vegetation at the ecosystem and global scales. © 2014 The Authors. New Phytologist © 2014 New Phytologist Trust.

  8. Three-Dimension Visualization for Primary Wheat Diseases Based on Simulation Model

    NASA Astrophysics Data System (ADS)

    Shijuan, Li; Yeping, Zhu

    Crop simulation models have become central to agricultural production management and resource optimization. Displaying the crop growth process allows users to observe crop growth and development intuitively. Based on an understanding of the occurrence conditions, prevalent seasons, and key impact factors of the main wheat diseases (stripe rust, leaf rust, stem rust, head blight and powdery mildew), drawn from research material and the literature, we designed a 3D visualization model of wheat growth and disease occurrence. The model system will help farmers, technicians and decision-makers use crop growth simulation models more effectively and will provide decision support. A 3D visualization model for wheat growth based on the simulation model has been developed, and the visualization model for the primary wheat diseases is under development.

  9. Control of variable speed variable pitch wind turbine based on a disturbance observer

    NASA Astrophysics Data System (ADS)

    Ren, Haijun; Lei, Xin

    2017-11-01

    In this paper, a novel sliding mode controller based on a disturbance observer (DOB), designed to optimize the efficiency of a variable speed variable pitch (VSVP) wind turbine, is developed and analyzed. Because the VSVP system is highly nonlinear, the model is linearized to obtain a state-space model of the system. A conventional sliding mode controller is then designed, and a DOB is added to estimate the wind speed. The proposed control strategy can successfully deal with the random nature of wind speed, the nonlinearity of the VSVP system, parameter uncertainty and external disturbance. By adding the observer to the sliding mode controller, the chattering produced by the sliding mode switching gain can be greatly reduced. The simulation results show that the proposed control system is effective and robust.
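
    The paper's turbine model is not reproduced in the record; purely as a hedged illustration of the DOB-plus-sliding-mode idea, here is a scalar toy plant with an unknown disturbance. All dynamics and gains are assumed, and tanh() stands in for the discontinuous switching term:

```python
import numpy as np

# Toy first-order plant  x_dot = a*x + b*u + d,  with d an unknown disturbance
# (a stand-in for unmodelled wind-torque variation; all numbers are assumed).
a, b = -0.5, 1.0
L = 20.0                    # disturbance-observer gain
c, k = 2.0, 0.5             # sliding-surface slope and switching gain
dt, T = 1e-3, 10.0
x, z = 0.0, 0.0             # plant state and observer internal state
for t in np.arange(0.0, T, dt):
    d = 0.8 * np.sin(0.5 * t)         # disturbance, unknown to the controller
    x_ref, x_ref_dot = 1.0, 0.0       # constant set point (e.g. rotor speed)
    d_hat = z + L * x                 # disturbance estimate from the DOB
    s = x - x_ref                     # sliding variable
    # tanh() softens the discontinuous sign() term, reducing chattering:
    u = (-a * x + x_ref_dot - d_hat - c * s - k * np.tanh(s / 0.05)) / b
    z += dt * (-L * (a * x + b * u + d_hat))     # observer update
    x += dt * (a * x + b * u + d)                # plant update (explicit Euler)
print(f"final tracking error: {x - 1.0:+.4f}")   # small despite the disturbance
```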

  10. Patterning nanowire and micro-nanoparticle array on micropillar-structured surface: Experiment and modeling.

    PubMed

    Lin, Chung Hsun; Guan, Jingjiao; Chau, Shiu Wu; Chen, Shia Chung; Lee, L James

    2010-08-04

    DNA molecules in a solution can be immobilized and stretched into a highly ordered array on a solid surface containing micropillars by the molecular combing technique. However, the mechanism of this process is not well understood. In this study, we demonstrated the generation of DNA nanostrand arrays with linear, zigzag, and fork-zigzag patterns, and modeled the microfluidic processes based on a deforming body-fitted grid approach. The simulation results provide insights for explaining the stretching, immobilizing, and patterning of DNA molecules observed in the experiments.

  11. Reconstruction of dynamical systems from resampled point processes produced by neuron models

    NASA Astrophysics Data System (ADS)

    Pavlova, Olga N.; Pavlov, Alexey N.

    2018-04-01

    Characterization of the dynamical features of chaotic oscillations from point processes is based on embedding theorems for non-uniformly sampled signals, such as sequences of interspike intervals (ISIs). This theoretical background confirms that attractors can be reconstructed from ISIs generated by chaotically driven neuron models. The quality of such a reconstruction depends on the available length of the analyzed dataset. We discuss how data resampling improves the reconstruction for short datasets and show that this effect is observed for different types of spike-generation mechanisms.
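
    The resampling scheme itself is not detailed in the record, but the underlying embedding step is standard; a minimal sketch of a Takens-style delay embedding of an ISI-like series follows (the logistic-map surrogate is just a stand-in for model-generated ISIs):

```python
import numpy as np

def delay_embed(x, dim=3, tau=1):
    """Takens-style delay embedding of a scalar series (e.g. interspike intervals)."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Illustrative ISI-like series from a chaotic map (a stand-in for neuron-model ISIs).
x = np.empty(2000)
x[0] = 0.4
for i in range(1999):
    x[i + 1] = 4.0 * x[i] * (1.0 - x[i])      # logistic map in its chaotic regime
isi = 0.1 + 0.05 * x                          # map values onto positive "intervals"
emb = delay_embed(isi, dim=3, tau=1)          # points of the reconstructed attractor
print(emb.shape)                              # (1998, 3)
```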

  12. Experimental validation of a phenomenological model of the plasma contacting process

    NASA Technical Reports Server (NTRS)

    Williams, John D.; Wilbur, Paul J.; Monheiser, Jeff M.

    1988-01-01

    A preliminary model of the plasma coupling process is presented which describes the phenomena observed in ground-based experiments using a hollow cathode plasma contactor to collect electrons from a dilute ambient plasma under conditions where magnetic field effects can be neglected. The locations of the double-sheath region boundaries are estimated and correlated with experimental results. Ion production mechanisms in the plasma plume caused by discharge electrons from the contactor cathode and by electrons streaming into the plasma plume through the double-sheath from the ambient plasma are also discussed.

  13. Global Gross Primary Productivity for 2015 Inferred from OCO-2 SIF and a Carbon-Cycle Data Assimilation System

    NASA Astrophysics Data System (ADS)

    Norton, A.; Rayner, P. J.; Scholze, M.; Koffi, E. N. D.

    2016-12-01

    The intercomparison study CMIP5 among other studies (e.g. Bodman et al., 2013) has shown that the land carbon flux contributes significantly to the uncertainty in projections of future CO2 concentration and climate (Friedlingstein et al., 2014)). The main challenge lies in disaggregating the relatively well-known net land carbon flux into its component fluxes, gross primary production (GPP) and respiration. Model simulations of these processes disagree considerably, and accurate observations of photosynthetic activity have proved a hindrance. Here we build upon the Carbon Cycle Data Assimilation System (CCDAS) (Rayner et al., 2005) to constrain estimates of one of these uncertain fluxes, GPP, using satellite observations of Solar Induced Fluorescence (SIF). SIF has considerable benefits over other proxy observations as it tracks not just the presence of vegetation but actual photosynthetic activity (Walther et al., 2016; Yang et al., 2015). To combine these observations with process-based simulations of GPP we have coupled the model SCOPE with the CCDAS model BETHY. This provides a mechanistic relationship between SIF and GPP, and the means to constrain the processes relevant to SIF and GPP via model parameters in a data assimilation system. We ingest SIF observations from NASA's Orbiting Carbon Observatory 2 (OCO-2) for 2015 into the data assimilation system to constrain estimates of GPP in space and time, while allowing for explicit consideration of uncertainties in parameters and observations. Here we present first results of the assimilation with SIF. Preliminary results indicate a constraint on global annual GPP of at least 75% when using SIF observations, reducing the uncertainty to < 3 PgC yr-1. A large portion of the constraint is propagated via parameters that describe leaf phenology. These results help to bring together state-of-the-art observations and model to improve understanding and predictive capability of GPP.
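
    The record does not spell out the assimilation mathematics. For orientation, CCDAS-type systems minimize a Bayesian cost function over the parameter vector p (notation ours, not the paper's), with p0 the prior parameters, B the prior parameter covariance, y_t the SIF observations, H_t the coupled BETHY-SCOPE observation operator, and R_t the observation-error covariance:

```latex
J(\mathbf{p}) = \tfrac{1}{2}\,(\mathbf{p}-\mathbf{p}_0)^{\mathsf T}\mathbf{B}^{-1}(\mathbf{p}-\mathbf{p}_0)
  + \tfrac{1}{2}\sum_{t}\bigl(\mathbf{y}_t - H_t(\mathbf{p})\bigr)^{\mathsf T}\mathbf{R}_t^{-1}\bigl(\mathbf{y}_t - H_t(\mathbf{p})\bigr)
```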

  14. Loblolly pine foliar patterns and growth dynamics at age 12 in response to planting density and cultural intensity

    Treesearch

    Madison Katherine Akers; Michael Kane; Dehai Zhao; Richard F. Daniels; Robert O. Teskey

    2015-01-01

    Examining the role of foliage in stand development across a range of stand structures provides a more detailed understanding of the processes driving productivity and allows further development of process-based models for prediction. Productivity changes observed at the stand scale will be the integration of changes at the individual tree scale, but few studies have...

  15. Sketching the Invisible to Predict the Visible: From Drawing to Modeling in Chemistry.

    PubMed

    Cooper, Melanie M; Stieff, Mike; DeSutter, Dane

    2017-10-01

    Sketching as a scientific practice goes beyond the simple act of inscribing diagrams onto paper. Scientists produce a wide range of representations through sketching, as it is tightly coupled to model-based reasoning. Chemists in particular make extensive use of sketches to reason about chemical phenomena and to communicate their ideas. However, the chemical sciences have a unique problem in that chemists deal with the unseen world of the atomic-molecular level. Using sketches, chemists strive to develop causal mechanisms that emerge from the structure and behavior of molecular-level entities, to explain observations of the macroscopic visible world. Interpreting these representations and constructing sketches of molecular-level processes is a crucial component of student learning in the modern chemistry classroom. Sketches also serve as an important component of assessment in the chemistry classroom as student sketches give insight into developing mental models, which allows instructors to observe how students are thinking about a process. In this paper we discuss how sketching can be used to promote such model-based reasoning in chemistry and discuss two case studies of curricular projects, CLUE and The Connected Chemistry Curriculum, that have demonstrated a benefit of this approach. We show how sketching activities can be centrally integrated into classroom norms to promote model-based reasoning both with and without component visualizations. Importantly, each of these projects deploys sketching in support of other types of inquiry activities, such as making predictions or depicting models to support a claim; sketching is not an isolated activity but is used as a tool to support model-based reasoning in the discipline. Copyright © 2017 Cognitive Science Society, Inc.

  16. A Physical Model-based Correction for Charge Traps in the Hubble Space Telescope’s Wide Field Camera 3 Near-IR Detector and Its Applications to Transiting Exoplanets and Brown Dwarfs

    NASA Astrophysics Data System (ADS)

    Zhou, Yifan; Apai, Dániel; Lew, Ben W. P.; Schneider, Glenn

    2017-06-01

    The Hubble Space Telescope Wide Field Camera 3 (WFC3) near-IR channel is extensively used in time-resolved observations, especially for transiting exoplanet spectroscopy as well as brown dwarf and directly imaged exoplanet rotational phase mapping. The ramp effect is the dominant source of systematics in the WFC3 for time-resolved observations, limiting its photometric precision. Current mitigation strategies are based on empirical fits and require additional orbits to help the telescope reach thermal equilibrium. We show that ramp-effect profiles can be explained and corrected with high fidelity using charge trapping theories. We also present a model for this process that can be used to predict and to correct charge trap systematics. Our model is based on a very small number of parameters that are intrinsic to the detector. We find that these parameters are very stable between the different data sets, and we provide best-fit values. Our model is tested with more than 120 orbits (∼40 visits) of WFC3 observations and is shown to provide near photon-noise-limited corrections for observations of transiting exoplanets made in both staring and scanning modes, as well as for staring-mode observations of brown dwarfs. After our model correction, the light curve of the first orbit in each visit has the same photometric precision as subsequent orbits, so data from the first orbit no longer need to be discarded. Near-IR arrays with the same physical characteristics (e.g., JWST/NIRCam) may also benefit from an extension of this model if similar systematic profiles are observed.
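
    The calibrated WFC3 parameter values are in the paper, not in this record; to convey the mechanism, here is a deliberately simplified trap-filling sketch in which capture slows as traps fill and trapped charge is released on a slow timescale, producing the characteristic upward ramp under constant illumination. All numbers are illustrative, not the paper's best-fit values:

```python
import numpy as np

def ramp_profile(flux, dt, n_traps=500.0, eta=1e-3, tau=3000.0):
    """Toy charge-trap ramp model (illustrative parameters, not a WFC3 calibration).

    Traps capture a fraction of incoming charge while they fill and release it
    on a slow timescale tau; the measured rate is depressed early in the ramp
    and recovers as the traps saturate.
    """
    n = 0.0                                            # currently filled traps
    observed = []
    for f in flux:                                     # f: true count rate (e-/s)
        captured = eta * f * dt * (1.0 - n / n_traps)  # capture slows as traps fill
        released = (n / tau) * dt                      # slow trap emptying
        n += captured - released
        observed.append(f - captured / dt + released / dt)
    return np.array(observed)

dt = 10.0                                  # seconds per sample (assumed cadence)
flux = np.full(300, 1000.0)                # constant illumination (e-/s, toy)
ramp = ramp_profile(flux, dt)
print(ramp[0], ramp[-1])                   # depressed start, recovered end
```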

  17. Modeling Demic and Cultural Diffusion: An Introduction.

    PubMed

    Fort, Joaquim; Crema, Enrico R; Madella, Marco

    2015-07-01

    Identifying the processes by which human cultures spread across different populations is one of the most topical objectives shared among different fields of study. Seminal works have analyzed a variety of data and attempted to determine whether empirically observed patterns are the result of demic and/or cultural diffusion. This special issue collects articles exploring several themes (from modes of cultural transmission to drivers of dispersal mechanisms) and contexts (from the Neolithic in Europe to the spread of computer programming languages), which offer new insights that will augment the theoretical and empirical basis for the study of demic and cultural diffusion. In this introduction we outline the state of the art in the modeling of these processes, briefly discuss the pros and cons of two of the most commonly used frameworks (equation-based models and agent-based models), and summarize the significance of each article in this special issue.
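
    As standard background on the equation-based framework mentioned above (not a result of this special issue): the canonical demic-diffusion model is the Fisher-KPP reaction-diffusion equation, in which logistic population growth plus spatial diffusion yields a constant front propagation speed,

```latex
\frac{\partial p}{\partial t} = D\,\nabla^{2} p + a\,p\left(1-\frac{p}{K}\right),
\qquad v = 2\sqrt{aD},
```

    with p the population density, D the mobility (diffusion) coefficient, a the initial growth rate and K the carrying capacity.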

  18. The Effects of Science Models on Students' Understanding of Scientific Processes

    NASA Astrophysics Data System (ADS)

    Berglin, Riki Susan

    This action research study investigated how the use of science models affected fifth-grade students' ability to transfer their science curriculum to a deeper understanding of scientific processes. The study implemented a variety of science models in a chemistry unit over a 6-week period. The research question addressed was: In what ways does using models to learn and teach science help students transfer classroom knowledge to a deeper understanding of scientific processes? Qualitative and quantitative data were collected through pre- and post-science interest inventories, observation field notes, student work samples, focus group interviews, and chemistry unit tests. These data collection tools assessed students' attitudes, engagement, and content knowledge throughout the chemistry unit. The results indicate that the model-based instruction program helped with students' engagement in the lessons and understanding of the chemistry content. The results also showed that students displayed positive attitudes toward using science models.

  19. Wab-InSAR: a new wavelet-based InSAR time series technique applied to volcanic and tectonic areas

    NASA Astrophysics Data System (ADS)

    Walter, T. R.; Shirzaei, M.; Nankali, H.; Roustaei, M.

    2009-12-01

    Modern geodetic techniques such as InSAR and GPS provide valuable observations of the deformation field. Because of a variety of environmental interferences (e.g., atmosphere, topographic distortion) and the incompleteness of the models (e.g., the assumption of a linear deformation model), these observations are usually tainted by various systematic and random errors. We therefore develop and test new methods to identify and filter unwanted periodic or episodic artifacts so as to obtain accurate and precise deformation measurements. Here we present and implement a new wavelet-based InSAR (Wab-InSAR) time series approach. Because wavelets are excellent tools for identifying hidden patterns and capturing transient signals, we utilize wavelet functions to reduce the effects of atmospheric delay and digital elevation model inaccuracies. Wab-InSAR is a model-free technique, reducing digital elevation model errors in individual interferograms using a 2D spatial Legendre polynomial wavelet filter. Atmospheric delays are reduced using a 3D spatio-temporal wavelet transform algorithm and a novel technique for pixel selection. We apply Wab-InSAR to several targets, including volcano deformation processes on Hawaii Island and mountain-building processes in Iran. Both targets are chosen to investigate large- and small-amplitude signals, variable and complex topography, and atmospheric effects. In this presentation we explain the different steps of the technique, validate the results by comparison to other high-resolution processing methods (GPS, PS-InSAR, SBAS), and discuss the geophysical results.
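
    Wab-InSAR's actual 3D spatio-temporal transform and pixel selection are far more elaborate than anything that fits here; as a hedged, minimal illustration of the wavelet idea (separating a slow deformation signal from high-frequency atmospheric noise), a sketch using the PyWavelets package with invented data:

```python
import numpy as np
import pywt

def wavelet_lowpass(series, wavelet="db4", level=4, drop=2):
    """Crude multi-scale separation: zero the `drop` finest detail scales."""
    coeffs = pywt.wavedec(series, wavelet, level=level)
    for i in range(len(coeffs) - drop, len(coeffs)):
        coeffs[i] = np.zeros_like(coeffs[i])     # discard high-frequency content
    return pywt.waverec(coeffs, wavelet)[: len(series)]

t = np.linspace(0.0, 4.0, 256)
deformation = 5.0 * t                                   # slow "tectonic" signal (toy)
noise = np.random.default_rng(3).normal(0.0, 2.0, 256)  # "atmospheric" noise (toy)
recovered = wavelet_lowpass(deformation + noise)
print(np.abs(recovered - deformation).mean())           # mean residual after filtering
```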

  20. Grammar-Supported 3d Indoor Reconstruction from Point Clouds for As-Built Bim

    NASA Astrophysics Data System (ADS)

    Becker, S.; Peter, M.; Fritsch, D.

    2015-03-01

    The paper presents a grammar-based approach for the robust automatic reconstruction of 3D interiors from raw point clouds. The core of the approach is a 3D indoor grammar, an extension of our previously published grammar concept for the modeling of 2D floor plans. The grammar allows for the modeling of buildings whose horizontal, continuous floors are traversed by hallways providing access to the rooms, as is the case for most office buildings or public buildings such as schools, hospitals or hotels. The grammar is designed in such a way that it can be embedded in an iterative automatic learning process, providing a seamless transition from LOD3 to LOD4 building models. Starting from an initial low-level grammar, automatically derived from the window representations of an available LOD3 building model, hypotheses about indoor geometries can be generated. The hypothesized indoor geometries are checked against observation data - here, 3D point clouds - collected in the interior of the building. The verified and accepted geometries form the basis for an automatic update of the initial grammar. In this way, the knowledge content of the initial grammar is enriched, leading to a grammar of higher quality. This higher-level grammar can then be applied to predict realistic geometries for building parts where only sparse observation data are available. Thus, our approach allows for the robust generation of complete 3D indoor models whose quality can be improved continuously as new observation data are fed into the grammar-based reconstruction process. The feasibility of our approach is demonstrated on a real-world example.

  1. Studying the Impacts of Environmental Factors and Agricultural Management on Methane Emissions from Rice Paddies Using a Land Surface Model

    NASA Astrophysics Data System (ADS)

    Lin, T. S.; Gahlot, S.; Shu, S.; Jain, A. K.; Kheshgi, H. S.

    2017-12-01

    Continued growth in population is projected to drive increased future demand for rice and, with it, the methane emissions associated with rice production. However, observational studies of methane emissions from rice have reported seemingly conflicting results and do not all support this projection. In this study we couple an ecophysiological, process-based rice paddy module and a methane emission module with a land surface model, the Integrated Science Assessment Model (ISAM), to study the impacts of various environmental factors and agricultural management practices on rice production and methane emissions from rice fields. This coupled modeling framework accounts for dynamic rice growth processes, with adaptation of photosynthesis, rice-specific phenology, biomass accumulation, leaf area development, and structural responses to water, temperature, light and nutrient stresses. The coupled model is calibrated and validated with observations from various rice cultivation fields. We find that the differing results of observational studies can be explained by the interactions of environmental factors, including climate, atmospheric CO2 concentration, and N deposition, and agricultural management practices, such as irrigation and N fertilizer application, with rice production across spatial and temporal scales.

  2. The Influence of Atmosphere-Ocean Interaction on MJO Development and Propagation

    DTIC Science & Technology

    2014-09-30

    evaluate modeling results and process studies. The field phase of this project is associated with DYNAMO, which is the US contribution to the ... influence on ocean temperature. 4. Extended run for DYNAMO with high vertical resolution NCOM. RESULTS: Summary of project results. The work funded ... model experiments of the November 2011 MJO, the strongest MJO episode observed during DYNAMO. The previous conceptual model that was based on TOGA ...

  3. Assessment in health care education - modelling and implementation of a computer supported scoring process.

    PubMed

    Alfredsson, Jayne; Plichart, Patrick; Zary, Nabil

    2012-01-01

    Research on computer-supported scoring of assessments in health care education has mainly focused on automated scoring. Little attention has been given to how informatics can support the currently predominant human-based grading approach. This paper reports the steps taken to develop a model for a computer-supported scoring process that focuses on optimizing a task previously undertaken without computer support. The model was also implemented in the open-source assessment platform TAO in order to study its benefits. The ability to score test takers anonymously, analytics on grader reliability, and a more time-efficient process are examples of the observed benefits. Computer-supported scoring will increase the quality of assessment results.

  4. Validating modelled variable surface saturation in the riparian zone with thermal infrared images

    NASA Astrophysics Data System (ADS)

    Glaser, Barbara; Klaus, Julian; Frei, Sven; Frentress, Jay; Pfister, Laurent; Hopp, Luisa

    2015-04-01

    Variable contributing areas and hydrological connectivity have become prominent concepts for hydrologic process understanding in recent years. The dynamic connectivity within the hillslope-riparian-stream (HRS) system is known to exert first-order control on discharge generation, and the riparian zone in particular functions as a runoff-buffering or runoff-producing zone. Despite their importance, however, the highly dynamic processes of contraction and extension of saturation within the riparian zone, and their impact on runoff generation, remain incompletely understood. In this study, we analysed the potential of a distributed, fully coupled and physically based model (HydroGeoSphere) to represent the spatial and temporal water flux dynamics of a forested headwater HRS system (6 ha) in western Luxembourg. The model was set up and parameterised using experimentally derived knowledge of the catchment structure and was run for a period of four years (October 2010 to August 2014). For model evaluation, we focused especially on the temporally varying spatial patterns of surface saturation. We used ground-based thermal infrared (TIR) imagery to map surface saturation at high spatial and temporal resolution and collected 20 panoramic snapshots of the riparian zone (ca. 10 by 20 m) under different hydrologic conditions. These TIR panoramas were used in addition to several classical discharge and soil moisture time series for a spatially distributed model validation. In a manual calibration process we optimised model parameters (e.g. porosity, saturated hydraulic conductivity, evaporation depth) to achieve better agreement between observed and modelled discharges and soil moistures. The subsequent validation of surface saturation patterns, by visual comparison of processed TIR panoramas and corresponding model output panoramas, revealed overall good agreement for all but one region, which was always too dry in the model. Quantitative comparisons of modelled and observed saturated-pixel percentages, and of their relationships to concurrent discharges, likewise revealed remarkable similarities. During the calibration process we observed that surface saturation patterns were mostly affected by changing the soil properties of the topsoil in the riparian zone, while the discharge behaviour did not change substantially at the same time. This effect, in which various spatial patterns occur alongside a nearly unchanged integrated response, demonstrates the importance of spatially distributed validation data. Our study clearly benefited from using different kinds of data - spatially integrated and distributed, temporally continuous and discrete - in the model evaluation procedure.
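
    The study's saturation classification is more involved than a single threshold, but the quantitative comparison it describes reduces to computing a saturated-pixel fraction per panorama; a minimal sketch with an assumed brightness-temperature threshold and synthetic data:

```python
import numpy as np

def saturated_fraction(tir_image, threshold):
    """Fraction of valid pixels classified as saturated in a TIR panorama.

    Wet surfaces damp the temperature signal, so a simple threshold on
    brightness temperature (value assumed and site-specific) separates
    saturated from dry pixels; the study's actual classification is richer.
    """
    img = np.asarray(tir_image, float)
    valid = np.isfinite(img)
    return np.count_nonzero(img[valid] < threshold) / np.count_nonzero(valid)

rng = np.random.default_rng(4)
panorama = rng.normal(12.0, 2.0, (200, 400))      # synthetic brightness temps (deg C)
print(f"saturated fraction: {saturated_fraction(panorama, 10.0):.2%}")
```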

  5. Quantifying the Uncertainty in Estimates of Surface Atmosphere Fluxes by Evaluation of SEBS and SCOPE Models

    NASA Astrophysics Data System (ADS)

    Timmermans, J.; van der Tol, C.; Verhoef, A.; Wang, L.; van Helvoirt, M.; Verhoef, W.; Su, Z.

    2009-11-01

    An Earth-observation-based evapotranspiration (ET) product is essential to achieving the GEWEX CEOP science objectives and the GEOSS water resources societal benefit areas. Conventional techniques that employ point measurements to estimate the components of the energy balance are representative only of local scales and cannot be extended to large areas because of the heterogeneity of the land surface and the dynamic nature of heat transfer processes. The objective of this research is to quantify the uncertainties of evapotranspiration estimates from the Surface Energy Balance System (SEBS) algorithm through validation against the detailed Soil Canopy Observation, Photochemistry and Energy fluxes (SCOPE) process model with site-optimized parameters. The SCOPE model takes both radiative and biochemical processes into account; it combines the SAIL radiative transfer model with the energy balance at leaf level to simulate the interaction between surface and atmosphere. In this paper the validation results are presented for a semi-long-term dataset collected at Reading in 2002. The comparison between the two models showed a high correlation over the complete growth cycle of maize, capturing the daily variation well. The absolute values from the SEBS model are, however, much lower than those from the SCOPE model, because SEBS uses a surface resistance parameterization that cannot account for tall vegetation. An update of the SEBS model will resolve this problem.
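
    Both models close the surface energy balance, and the residual relation that energy-balance ET schemes such as SEBS ultimately rely on is the standard one; a trivial sketch (all flux values invented for the example):

```python
def latent_heat_flux(rn, g, h):
    """Latent heat flux as the energy-balance residual: LE = Rn - G - H (W m-2)."""
    return rn - g - h

def evaporative_fraction(le, rn, g):
    """Share of available energy (Rn - G) used for evapotranspiration."""
    return le / (rn - g)

rn, g, h = 450.0, 50.0, 120.0        # illustrative midday fluxes (W m-2)
le = latent_heat_flux(rn, g, h)
print(le, round(evaporative_fraction(le, rn, g), 2))   # 280.0 0.7
```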

  6. Assessing the Impact of Climatic Variability and Change on Maize Production in the Midwestern USA

    NASA Astrophysics Data System (ADS)

    Andresen, J.; Jain, A. K.; Niyogi, D. S.; Alagarswamy, G.; Biehl, L.; Delamater, P.; Doering, O.; Elias, A.; Elmore, R.; Gramig, B.; Hart, C.; Kellner, O.; Liu, X.; Mohankumar, E.; Prokopy, L. S.; Song, C.; Todey, D.; Widhalm, M.

    2013-12-01

    Weather and climate remain among the most important uncontrollable factors in agricultural production systems. In this study, three process-based crop simulation models were used to identify the impacts of climate on maize production in the Midwestern U.S.A. during the past century. The 12-state region is a key global production area, responsible for more than 80% of U.S. domestic production and 25% of total global production. The study is part of the Useful to Usable (U2U) Project, a USDA NIFA-sponsored project seeking to improve the resilience and profitability of farming operations in the region amid climate variability and change. The three process-based crop simulation models used were CERES-Maize (DSSAT, Hoogenboom et al., 2012), the Hybrid-Maize model (Yang et al., 2004), and the Integrated Science Assessment Model (ISAM, Song et al., 2013). Model validation was carried out against individual plot and county observations. The models were run with 4 to 50 km spatial resolution gridded weather data for representative soils and cultivars, 1981-2012, to examine spatial and temporal yield variability within the region. We also examined the influence of different crop models and spatial scales on regional-scale yield estimation, and carried out a yield gap analysis between observed and attainable yields. An additional study was carried out with the CERES-Maize model at 18 individual site locations for 1901-2012 to examine longer-term historical trends. For all simulations, all non-weather input variables were held constant in order to isolate the impacts of climate. In general, the model estimates were in good agreement with observed yields, especially in central sections of the region. Regionally, low precipitation and soil moisture stress were the chief limitations on simulated crop yields. The study suggests that at least part of the observed yield increase in the region during recent decades has occurred as a result of wetter, less stressful growing-season weather conditions.

  7. Benthic processes and coastal aquaculture: merging models and field data at a local scale

    NASA Astrophysics Data System (ADS)

    Brigolin, Daniele; Rabouille, Christophe; Bombled, Bruno; Colla, Silvia; Pastres, Roberto; Pranovi, Fabio

    2016-04-01

    Shellfish farming is regarded as an organic extractive aquaculture activity. However, the production of faeces and pseudofaeces leads to a net transfer of organic matter from the water column to the surface sediment. This process, which is expected to affect sediment biogeochemistry locally, may also cause substantial changes in coastal areas characterized by a high density of farms. In this paper, we present the results of a study recently carried out in the Gulf of Venice (northern Adriatic Sea), combining mathematical modelling and field sampling. The work aimed at using a longline mussel farm as an in-situ test case for modelling the differences in soft-sediment biogeochemical processes along a gradient of organic deposition. We used an existing integrated model, which allows us to describe biogeochemical fluxes associated with the mussel farm and to predict the extent of the deposition area underneath it. The model framework includes an individual-based population dynamics model of the Mediterranean mussel, coupled with a Lagrangian deposition model and a 1D benthic model of early diagenesis. The work was articulated in three steps: 1) the integrated model allowed us to simulate the downward fluxes of organic matter originating from the farm and the extent of its deposition area; 2) based on this first model application, two stations were selected, at which sediment cores were collected during a field campaign carried out in June 2015. Measurements included O2 and pH microprofiling, porosity and micro-porosity, total organic carbon, and pore-water NH4, PO4, SO4, alkalinity and dissolved inorganic carbon; 3) two distinct early diagenesis models were set up, reproducing the observed field data in the sampled cores. The observed oxygen microprofiles showed a different behavior underneath the farm than at the outside reference station. In particular, a remarkable decrease in the oxygen penetration depth and an increase in the O2 influx calculated from the concentration gradients were observed. The integrated model described above allowed us to extend the simulation over the entire farmed area and to explore the response of the predictions to changes in water temperature.
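
    The O2 influx mentioned above is conventionally computed from the measured microprofile gradient via Fick's first law; a minimal sketch with an assumed porosity, one common tortuosity correction, and invented profile values:

```python
import numpy as np

def o2_influx(depth_m, o2_umol_per_l, porosity=0.8, d0=1.2e-9):
    """O2 flux into the sediment from a microprofile, via Fick's first law.

    J = -phi * Ds * dC/dz near the interface, with the sediment diffusivity
    crudely corrected for tortuosity as Ds = D0 * phi**2 (one common
    approximation; all numbers are illustrative, not the study's values).
    Returns umol m-2 s-1 (positive = into the sediment).
    """
    c = np.asarray(o2_umol_per_l, float) * 1e3          # umol/L -> umol/m3
    dcdz = (c[1] - c[0]) / (depth_m[1] - depth_m[0])    # gradient at the interface
    ds = d0 * porosity ** 2                             # tortuosity-corrected diffusivity
    return porosity * ds * (-dcdz)

depth = np.array([0.0, 0.002])          # m: interface and 2 mm depth
o2 = np.array([250.0, 0.0])             # umol/L: oxygen vanishes within 2 mm
flux = o2_influx(depth, o2)
print(f"{flux:.3f} umol m-2 s-1 (~{flux * 86400 / 1000:.1f} mmol m-2 d-1)")
```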

  8. Picturing and modelling catchments by representative hillslopes

    NASA Astrophysics Data System (ADS)

    Loritz, Ralf; Hassler, Sibylle; Jackisch, Conrad; Zehe, Erwin

    2016-04-01

    Hydrological modelling studies often start with a qualitative sketch of the hydrological processes of a catchment. These so-called perceptual models are often pictured as hillslopes and are generalizations displaying only the dominant and relevant processes of a catchment or hillslope. The problem with these models is that they are prone to being overly predetermined by the designer's background and experience. Moreover, it is difficult to know whether such a picture is correct and contains enough complexity to represent the system under study. Nevertheless, because of their qualitative form, perceptual models are easy to understand and can be an excellent tool for multidisciplinary exchange between researchers with different backgrounds, helping to identify the dominant structures and processes in a catchment. In our study we explore whether a perceptual model built upon an intensive field campaign may serve as a blueprint for setting up representative hillslopes in a hydrological model to reproduce the functioning of two distinctly different catchments. We use a physically based 2D hillslope model which has proven capable of being driven by measured soil-hydrological parameters. A key asset of our approach is that the model structure itself remains a picture of the perceptual model, which is benchmarked against a) geophysical images of the subsurface and b) observed dynamics of discharge and of distributed state variables and fluxes (soil moisture, matric potential and sap flow). With this approach we are able to set up two behavioural model structures which simulate the most important hydrological fluxes and state variables in good accordance with available observations in the 19.4 km2 Colpach catchment and the 4.5 km2 Wollefsbach catchment in Luxembourg, without the need for calibration. Contrary to widespread opinion, this corroborates that a) lower mesoscale catchments may be modelled by representative hillslopes and b) physically based models can be parametrized from comprehensive field data and a good perceptual model. Our results particularly indicate that the main challenge in understanding and modelling the seasonal water balance of a catchment is a proper representation of the phenological cycle of vegetation, not just the structure of the subsurface and the spatial variability of soil hydraulic parameters.

  9. 3D Surface Temperature Measurement of Plant Canopies Using Photogrammetry Techniques From A UAV.

    NASA Astrophysics Data System (ADS)

    Irvine, M.; Lagouarde, J. P.

    2017-12-01

    The surface temperature of plant canopies, and the temperature within canopies, results from the coupling of the radiative and energy exchange processes which govern the fluxes at the soil-plant-atmosphere interface. As a key parameter, surface temperature permits the estimation of canopy exchanges using process-based modeling methods. However, detailed 3D surface temperature measurements, or even profile surface temperature measurements, are rarely made because of their inherent difficulties. Such measurements would greatly improve multi-level canopy models such as NOAH (Chen and Dudhia 2001) or MuSICA (Ogée and Brunet 2002, Ogée et al 2003), whose key surface temperature estimates are at present untested. Additionally, at larger scales, canopy structure greatly influences satellite-based surface temperature measurements, as the structure affects observations that are intrinsically made at varying satellite viewing angles and solar heights. Accounting for these differences again requires accurate modeling, through the multi-layer models mentioned above or through multiple-source models such as SCOPE (Van der Tol 2009), in order to standardize observations. As before, validating these models requires detailed field observations. With the need for detailed surface temperature observations in mind, we have planned a series of experiments over non-dense plant canopies to investigate the use of photogrammetry techniques. Photogrammetry is normally applied to visible wavelengths to produce 3D images using point-cloud reconstruction of aerial images (for example Dandois and Ellis, 2010, 2013 over a forest). From these point-cloud models it should be possible to establish 3D plant surface temperature images when using thermal infrared array sensors. To this end, our experiments are based on a thermal infrared camera carried on a UAV. We adapt standard photogrammetry to account for the limits imposed by thermal imagery, especially the low image resolution compared with standard RGB sensors. At session B081 we intend to present first results of our thermal photogrammetric experiments, with 3D surface temperature plots, in order to discuss and adapt our methods to the modelling community's needs.

  10. Shaping asteroid models using genetic evolution (SAGE)

    NASA Astrophysics Data System (ADS)

    Bartczak, P.; Dudziński, G.

    2018-02-01

    In this work, we present SAGE (shaping asteroid models using genetic evolution), an asteroid modelling algorithm based solely on photometric lightcurve data. It produces non-convex shapes, orientations of the rotation axes and rotational periods of asteroids. The main concept behind a genetic evolution algorithm is to produce random populations of shapes and spin-axis orientations by mutating a seed shape and iterating the process until it converges to a stable global minimum. We tested SAGE on five artificial shapes. We also modelled asteroids 433 Eros and 9 Metis, since ground truth observations for them exist, allowing us to validate the models. We compared the derived shape of Eros with the NEAR Shoemaker model and that of Metis with adaptive optics and stellar occultation observations since other models from various inversion methods were available for Metis.
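
    SAGE's shape representation and lightcurve forward model are far richer than can be shown here; as a hedged illustration of the genetic-evolution loop itself (mutate a seed, keep the fittest, anneal the mutation strength), a toy version in which a three-parameter "shape" is fitted to a synthetic lightcurve. Everything below is an invented stand-in:

```python
import numpy as np

rng = np.random.default_rng(5)

def lightcurve(params, phases):
    """Fake forward model: brightness as a smooth function of the parameters."""
    a, b, c = params
    return a + b * np.sin(phases) + c * np.sin(2 * phases)

phases = np.linspace(0, 2 * np.pi, 100)
observed = lightcurve([1.0, 0.3, 0.1], phases) + rng.normal(0, 0.01, phases.size)

def fitness(params):
    """Negative mean-square misfit between synthetic and observed lightcurves."""
    return -np.mean((lightcurve(params, phases) - observed) ** 2)

best = np.array([0.5, 0.0, 0.0])                       # seed "shape"
best_fit = fitness(best)
sigma = 0.2                                            # mutation strength
for generation in range(200):
    population = best + rng.normal(0, sigma, (50, 3))  # mutated offspring
    fits = np.array([fitness(p) for p in population])
    if fits.max() > best_fit:                          # keep the fittest mutant
        best, best_fit = population[fits.argmax()], fits.max()
    sigma *= 0.99                                      # anneal mutation strength
print(best.round(3))                                   # approaches [1.0, 0.3, 0.1]
```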

  11. Validation in the Absence of Observed Events

    DOE PAGES

    Lathrop, John; Ezell, Barry

    2015-07-22

    Here our paper addresses the problem of validating models in the absence of observed events, in the area of Weapons of Mass Destruction terrorism risk assessment. We address that problem with a broadened definition of "Validation," based on "backing up" to the reason why modelers and decision makers seek validation, and from that basis we re-define validation as testing how well the model can advise decision makers in terrorism risk management decisions. We develop that into two conditions: validation must be based on cues available in the observable world, and it must focus on what can be done to affect that observable world, i.e. risk management. That in turn leads to two foci: 1.) the risk generating process, 2.) best use of available data. Based on our experience with nine WMD terrorism risk assessment models, we then describe three best-use-of-available-data pitfalls: SME confidence bias, lack of SME cross-referencing, and problematic initiation rates. Those two foci and three pitfalls provide a basis from which we define validation in this context in terms of four tests -- Does the model: … capture initiation? … capture the sequence of events by which attack scenarios unfold? … consider unanticipated scenarios? … consider alternative causal chains? Finally, we corroborate our approach against three key validation tests from the DOD literature: Is the model a correct representation of the simuland? To what degree are the model results comparable to the real world? Over what range of inputs are the model results useful?

  12. A Modularized Efficient Framework for Non-Markov Time Series Estimation

    NASA Astrophysics Data System (ADS)

    Schamberg, Gabriel; Ba, Demba; Coleman, Todd P.

    2018-06-01

    We present a compartmentalized approach to finding the maximum a-posteriori (MAP) estimate of a latent time series that obeys a dynamic stochastic model and is observed through noisy measurements. We specifically consider modern signal processing problems with non-Markov signal dynamics (e.g. group sparsity) and/or non-Gaussian measurement models (e.g. the point process observation models used in neuroscience). Through the use of auxiliary variables in the MAP estimation problem, we show that a consensus formulation of the alternating direction method of multipliers (ADMM) enables iteratively computing separate estimates based on the likelihood and the prior and subsequently "averaging" them in an appropriate sense using a Kalman smoother. As such, the approach can be applied to a broad class of problem settings and requires only modular adjustments when interchanging various aspects of the statistical model. Under broad log-concavity assumptions, we show that the separate estimation problems are convex optimization problems and that the iterative algorithm converges to the MAP estimate. This framework can therefore capture non-Markov latent time series models and non-Gaussian measurement models. We provide example applications involving (i) group-sparsity priors, within the context of electrophysiologic spectrotemporal estimation, and (ii) non-Gaussian measurement models, within the context of dynamic analyses of learning with neural spiking and behavioral observations.
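
    As a hedged, minimal instance of the consensus-ADMM pattern the abstract describes, here is a toy MAP problem with a Gaussian likelihood and an L1 (sparsity) prior; the separate likelihood- and prior-based estimates are computed by closed-form proximal steps and then combined in the consensus step. In this unstructured toy the "averaging" is a plain mean, whereas the paper performs it with a Kalman smoother:

```python
import numpy as np

# Consensus ADMM for  minimize f(x) + g(x)  with
# f(x) = 0.5*||y - x||^2 (Gaussian likelihood) and g(x) = lam*||x||_1
# (a sparsity prior standing in for the paper's structured priors).

def prox_f(v, y, rho):
    """prox of f/rho: closed-form minimizer of 0.5||y-x||^2 + rho/2||x-v||^2."""
    return (y + rho * v) / (1.0 + rho)

def prox_g(v, lam, rho):
    """prox of g/rho: soft thresholding at lam/rho."""
    return np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)

rng = np.random.default_rng(6)
truth = np.zeros(100)
truth[[10, 40, 70]] = [3.0, -2.0, 4.0]
y = truth + rng.normal(0.0, 0.3, 100)            # noisy measurements

rho, lam = 1.0, 0.5
z = np.zeros(100)
u1, u2 = np.zeros(100), np.zeros(100)            # scaled dual variables
for _ in range(100):
    x1 = prox_f(z - u1, y, rho)                  # likelihood-based estimate
    x2 = prox_g(z - u2, lam, rho)                # prior-based estimate
    z = 0.5 * (x1 + u1 + x2 + u2)                # consensus ("averaging") step
    u1 += x1 - z                                 # dual updates
    u2 += x2 - z
print(np.flatnonzero(np.abs(z) > 0.5))           # recovers the support [10 40 70]
```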

  13. Modeling winter hydrological processes under differing climatic conditions: Modifying WEPP

    NASA Astrophysics Data System (ADS)

    Dun, Shuhui

    Water erosion is a serious and continuing environmental problem worldwide. In cold regions, soil freezing and thawing have great impacts on infiltration and erosion: rain or snowmelt on a thawing soil can cause severe water erosion. Of equal importance are snow accumulation and snowmelt, which can be the predominant hydrological processes in areas of mid to high latitude and in forested watersheds. Modelers must properly simulate winter processes to adequately represent the overall hydrological outcome and the sediment and chemical transport in these areas. Winter hydrology is presently poorly represented in water erosion models. Most of these models are based on the functional Universal Soil Loss Equation (USLE) or its revised forms, e.g., the Revised USLE (RUSLE). In RUSLE a seasonally variable soil erodibility factor (K) is used to account for the effects of frozen and thawing soil. Yet the use of this factor requires observational data for calibration, and such a simplified approach cannot represent the complicated transient freeze-thaw processes and their impacts on surface runoff and erosion. The Water Erosion Prediction Project (WEPP) watershed model, physically based erosion prediction software developed by the USDA-ARS, has seen numerous applications within and outside the US. WEPP simulates winter processes, including snow accumulation, snowmelt, and soil freeze-thaw, using an approach based on mass and energy conservation. However, previous studies showed the inadequacy of the winter routines in the WEPP model. The objectives of this study were therefore: (1) to adapt a modeling approach for winter hydrology based on mass and energy conservation, and to implement this approach in a physically oriented hydrological model such as WEPP; and (2) to assess this modeling approach through case applications under different geographic conditions. A new winter routine was developed and its performance was evaluated by incorporating it into WEPP (v2008.9) and then applying WEPP to four study sites at different spatial scales under different climatic conditions, including experimental plots in Pullman, WA and Morris, MN, two agricultural drainages in Pendleton, OR, and a forest watershed in Mica Creek, ID. The model applications showed promising results, indicating the adequacy of the mass- and energy-balance-based approach for winter hydrology simulation.
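
    The dissertation's routine is energy-balance based; for contrast, the minimal mass-balance bookkeeping any winter routine must do can be shown with a deliberately simpler temperature-index (degree-day) snowpack sketch. The melt factor and thresholds below are assumed values, not WEPP parameters:

```python
def snowpack_series(precip_mm, temp_c, ddf=3.0, t_snow=0.0, t_melt=0.0):
    """Daily snow water equivalent via a temperature-index (degree-day) model.

    ddf: degree-day melt factor (mm per deg C per day, assumed value).
    Note: the WEPP routine discussed above is energy-balance based; this
    simpler scheme only illustrates the accumulation/melt bookkeeping.
    """
    swe, swe_series, melt_series = 0.0, [], []
    for p, t in zip(precip_mm, temp_c):
        if t <= t_snow:
            swe += p                                  # precipitation falls as snow
        melt = min(swe, max(0.0, ddf * (t - t_melt))) # melt limited by the snowpack
        swe -= melt
        swe_series.append(swe)
        melt_series.append(melt)                      # melt feeds runoff/infiltration
    return swe_series, melt_series

precip = [5, 0, 10, 0, 0, 2, 0]                       # mm/day (illustrative)
temp = [-4, -6, -1, 2, 5, 1, 6]                       # deg C daily means
swe, melt = snowpack_series(precip, temp)
print(swe)   # snow builds on cold days and ablates on warm ones
```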

  14. A magnetic model for low/hard state of black hole binaries

    NASA Astrophysics Data System (ADS)

    Wang, Ding-Xiong

    2015-08-01

    A magnetic model for the low/hard state (LHS) of the black hole X-ray binaries (BHXBs) H1743-322 and GX 339-4 is proposed, based on the transport of magnetic field from a companion into an accretion disc around a black hole (BH). The model consists of a truncated thin disc with an inner advection-dominated accretion flow (ADAF). The spectral profiles of the sources are fitted in agreement with the data observed at four different dates corresponding to the rising stage of the LHS. In addition, the association of the LHS with a quasi-steady jet is modelled based on the transport of magnetic field, where the Blandford-Znajek (BZ) and Blandford-Payne (BP) processes are invoked to drive the jets from the BH and the inner ADAF. It turns out that the steep radio-X-ray correlations observed in H1743-322 and GX 339-4 can be interpreted with our model. It is suggested that large-scale magnetic field can be regarded as the second parameter governing state transitions in some BHXBs.

  15. A Mesoscale Model-Based Climatography of Nocturnal Boundary-Layer Characteristics over the Complex Terrain of North-Western Utah.

    PubMed

    Serafin, Stefano; De Wekker, Stephan F J; Knievel, Jason C

    Nocturnal boundary-layer phenomena in regions of complex topography are extremely diverse and respond to a multiplicity of forcing factors acting primarily at the mesoscale and microscale. The interaction between different physical processes, e.g., drainage promoted by near-surface cooling and ambient flow over topography in a statically stable environment, may give rise to flow patterns that are uncommon over flat terrain. Here we present a climatography of boundary-layer flows based on a 2-year archive of simulations from a high-resolution operational mesoscale weather modelling system, 4DWX. The geographical context is Dugway Proving Ground in north-western Utah, USA, the target area of the field campaigns of the MATERHORN (Mountain Terrain Atmospheric Modeling and Observations Program) project. A comparison between model fields and available observations in 2012-2014 shows that the 4DWX model system provides a realistic representation of wind speed and direction in the area, at least in an average sense. Regions displaying strong spatial gradients in the field variables, thought to be responsible for enhanced nocturnal mixing, are typically located in transition areas from mountain sidewalls to adjacent plains. A key dynamical process in this respect is the separation of dynamically accelerated downslope flows from the surface.
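
    One detail worth noting when averaging wind direction for such a climatography is that direction is a circular quantity, so arithmetic means mis-handle values straddling 360/0 degrees. The sketch below shows the standard circular-mean approach; it is a textbook technique, not code from the 4DWX system.

      import numpy as np

      # Circular mean of wind direction; a standard technique, not 4DWX code.
      def mean_wind_direction(directions_deg):
          """Circular-mean wind direction in degrees."""
          rad = np.deg2rad(np.asarray(directions_deg))
          mean_angle = np.arctan2(np.sin(rad).mean(), np.cos(rad).mean())
          return np.rad2deg(mean_angle) % 360.0

      # Two northerly observations either side of north average to ~0 deg,
      # not to the spurious 180 deg an arithmetic mean would give:
      print(mean_wind_direction([350.0, 10.0]))  # -> 0.0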

  16. Accessing the inaccessible: making (successful) field observations at tidewater glacier termini

    NASA Astrophysics Data System (ADS)

    Kienholz, C.; Amundson, J. M.; Jackson, R. H.; Motyka, R. J.; Nash, J. D.; Sutherland, D.

    2017-12-01

    Glaciers terminating in ocean water (tidewater glaciers) show complex dynamic behavior driven predominantly by processes at the ice-ocean interface (sedimentation, erosion, iceberg calving, submarine melting). A quantitative understanding of these processes is required, for example, to better assess tidewater glaciers' fate in our rapidly warming environment. A lack of observations close to glacier termini, owing to the unpredictable hazards of calving, hampers this understanding. In an effort to remedy this lack of knowledge, we initiated a large field-based effort at LeConte Glacier, southeast Alaska, in 2016. LeConte Glacier is a regional analog for many tidewater glaciers, but it is more accessible and observable than most, making it an ideal target for our multi-disciplinary effort. Our ongoing campaigns comprise measurements of temperature, salinity, and current from novel autonomous vessels in the immediate proximity of the glacier terminus, and additional surveys (including multibeam bathymetry) from boats and moorings in the proglacial fjord. These measurements are complemented by iceberg and glacier velocity measurements from time-lapse cameras and a portable radar interferometer situated above LeConte Bay, and by GPS-based velocity observations and melt measurements conducted on the glacier. Together they provide the necessary input for process-based understanding and numerical modeling of the glacier and fjord systems. In the presentation, we discuss promising initial results and lessons learned from the campaign.

  17. Partially linear mixed-effects joint models for skewed and missing longitudinal competing risks outcomes.

    PubMed

    Lu, Tao; Lu, Minggen; Wang, Min; Zhang, Jun; Dong, Guang-Hui; Xu, Yong

    2017-12-18

    Longitudinal competing risks data frequently arise in clinical studies, and skewness and missingness are commonly observed in such data in practice. However, most joint models do not account for these data features. In this article, we propose partially linear mixed-effects joint models to analyze skewed longitudinal competing risks data with missingness. In particular, to account for skewness, we replace the commonly assumed symmetric error distributions with asymmetric ones. To deal with missingness, we employ an informative missing data model. We develop joint models that couple a partially linear mixed-effects model for the longitudinal process, cause-specific proportional hazards models for the competing risks process, and a model for the missing data process. To estimate the parameters in the joint models, we propose a fully Bayesian approach based on the joint likelihood. To illustrate the proposed model and method, we apply them to an AIDS clinical study and report some interesting findings. We also conduct simulation studies to validate the proposed method.
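
    To make the skewness point concrete, the sketch below simulates longitudinal trajectories with skew-normal (asymmetric) measurement errors using scipy.stats.skewnorm. All parameter values are illustrative assumptions; this is not the authors' model specification or the AIDS study data.

      import numpy as np
      from scipy.stats import skewnorm

      rng = np.random.default_rng(42)

      # Simulate mixed-effects longitudinal data with skew-normal errors,
      # the data feature the joint model targets. Values are illustrative.
      n_subjects, n_visits = 50, 6
      times = np.linspace(0.0, 1.0, n_visits)

      random_intercepts = rng.normal(0.0, 0.5, size=n_subjects)  # random effects
      fixed_trend = 2.0 - 1.5 * times                            # population trend
      # skewnorm shape a > 0 gives right skew; a = 0 recovers the normal case.
      errors = skewnorm.rvs(a=4.0, loc=0.0, scale=0.3,
                            size=(n_subjects, n_visits), random_state=rng)

      y = fixed_trend + random_intercepts[:, None] + errors
      print(y.shape)  # (50, 6)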

  18. Estimation of Carbon Flux of Forest Ecosystem over Qilian Mountains by BIOME-BGC Model

    NASA Astrophysics Data System (ADS)

    Yan, Min; Tian, Xin; Li, Zengyuan; Chen, Erxue; Li, Chunmei

    2014-11-01

    Gross primary production (GPP) and net ecosystem exchange (NEE) are important indicators of carbon fluxes. This study aims at evaluating forest GPP and NEE over the Qilian Mountains at large scale using meteorological, remotely sensed, and other ancillary data. To this end, the widely used ecological-process-based Biome-BGC model and the remote-sensing-based MODIS GPP algorithm were selected to simulate the forest carbon fluxes, with the two models combined by calibrating Biome-BGC against the optimized MODIS GPP algorithm. The simulated GPP and NEE values were evaluated against eddy covariance observations of GPP and NEE, and good agreement was reached, with R2 = 0.76 and 0.67, respectively.
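
    The agreement statistic reported here is R2; the generic sketch below shows the two common conventions (squared Pearson correlation, and 1 - SSres/SStot), with placeholder arrays rather than the study's flux data, since the abstract does not state which convention was used.

      import numpy as np

      # Two common R2 conventions for model-vs-observation agreement.
      def r2_pearson(observed, simulated):
          """Squared Pearson correlation between observed and simulated fluxes."""
          r = np.corrcoef(observed, simulated)[0, 1]
          return r ** 2

      def r2_residual(observed, simulated):
          """Coefficient of determination, 1 - SS_res / SS_tot."""
          observed = np.asarray(observed, dtype=float)
          simulated = np.asarray(simulated, dtype=float)
          ss_res = np.sum((observed - simulated) ** 2)
          ss_tot = np.sum((observed - observed.mean()) ** 2)
          return 1.0 - ss_res / ss_tot

      obs = np.array([2.1, 3.4, 4.0, 5.2, 6.8])   # placeholder observed GPP
      sim = np.array([2.4, 3.1, 4.3, 5.0, 6.5])   # placeholder simulated GPP
      print(r2_pearson(obs, sim), r2_residual(obs, sim))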

  20. "Observing" the Circumnuclear Stars and Gas in Disk Galaxy Simulations

    NASA Astrophysics Data System (ADS)

    Cook, Angela; Hicks, Erin K. S.

    2018-06-01

    We present simulations based on theoretical models of common disk processes designed to represent the potential inflow observed within the central 500 pc of local Seyfert galaxies. Mock observations of these N-body plus smoothed-particle-hydrodynamics simulations provide the conceptual framework in which to identify the driving inflow mechanism, for example nuclear bars, and to quantify the inflow based on observable properties. From these mock observations, azimuthal averages of the flux distribution, velocity dispersion, and velocity of both the stars and the interstellar medium on scales of 50 pc have been measured at a range of inclination angles. A comparison of the simulated disk galaxies with the corresponding azimuthal averages measured in 40 Seyfert galaxies as part of the KONA (Keck OSIRIS Nearby AGN) survey will be presented.
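
    A generic sketch of the azimuthal-averaging step (annular means of a 2-D map about the nucleus) is shown below. It is a standard technique, not code from the KONA pipeline, and the bin width and test image are assumptions for illustration.

      import numpy as np

      # Azimuthally average a 2-D map (flux, velocity, or dispersion) in
      # annular bins about a center; standard technique, not KONA code.
      def azimuthal_average(image, center_xy, bin_width_px=2.0):
          """Return (bin_radii_px, annular_means) around center_xy = (x, y)."""
          ny, nx = image.shape
          y, x = np.indices((ny, nx))
          r = np.hypot(x - center_xy[0], y - center_xy[1])
          bins = (r / bin_width_px).astype(int)
          sums = np.bincount(bins.ravel(), weights=image.ravel())
          counts = np.bincount(bins.ravel())
          radii = (np.arange(len(counts)) + 0.5) * bin_width_px
          return radii, sums / np.maximum(counts, 1)  # guard empty bins

      test_map = np.random.default_rng(0).random((64, 64))  # placeholder map
      radii, profile = azimuthal_average(test_map, center_xy=(32.0, 32.0))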
