Sample records for observations process-based models

  1. Process-oriented Observational Metrics for CMIP6 Climate Model Assessments

    NASA Astrophysics Data System (ADS)

    Jiang, J. H.; Su, H.

    2016-12-01

Observational metrics based on satellite observations have been developed and effectively applied during post-CMIP5 model evaluation and improvement projects. As new physics and parameterizations continue to be included in models for the upcoming CMIP6, it is important to continue objective comparisons between observations and model results. This talk will summarize the process-oriented observational metrics and methodologies for constraining climate models with A-Train satellite observations and supporting CMIP6 model assessments. We target parameters and processes related to atmospheric clouds and water vapor, which are critically important for Earth's radiative budget, climate feedbacks, and water and energy cycles; constraining them thus reduces uncertainties in climate models.

  2. Identification of AR(I)MA processes for modelling temporal correlations of GPS observations

    NASA Astrophysics Data System (ADS)

    Luo, X.; Mayer, M.; Heck, B.

    2009-04-01

In many geodetic applications observations of the Global Positioning System (GPS) are routinely processed by means of the least-squares method. However, this algorithm delivers reliable estimates of unknown parameters and realistic accuracy measures only if both the functional and stochastic models are appropriately defined within GPS data processing. One deficiency of the stochastic model used in many GPS software products is the neglect of temporal correlations of GPS observations. In practice the knowledge of the temporal stochastic behaviour of GPS observations can be improved by analysing time series of residuals resulting from the least-squares evaluation. This paper presents an approach based on the theory of autoregressive (integrated) moving average (AR(I)MA) processes to model temporal correlations of GPS observations using time series of observation residuals. A practicable integration of AR(I)MA models in GPS data processing requires first determining the order parameters of the AR(I)MA processes. In the case of GPS, the identification of AR(I)MA processes can be affected by various factors impacting GPS positioning results, e.g. baseline length, multipath effects, observation weighting, or weather variations. The influences of these factors on AR(I)MA identification are empirically analysed based on a large set of representative residual time series resulting from differential GPS post-processing using 1-Hz observation data collected within the permanent SAPOS® (Satellite Positioning Service of the German State Survey) network. Both short and long time series are modelled by means of AR(I)MA processes. The final order parameters are determined based on the whole residual database; the corresponding empirical distribution functions illustrate that multipath and weather variations seem to affect the identification of AR(I)MA processes much more significantly than baseline length and observation weighting. Additionally, the modelling
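The order-identification step described above can be sketched with a toy example. The AR(1) simulation, conditional least-squares fit, and AIC comparison below are illustrative stand-ins for the SAPOS residual analysis, not the authors' pipeline; the coefficient and noise level are invented values.

```python
import math
import random

random.seed(42)

# Simulate temporally correlated "observation residuals" as an AR(1) process:
# r_t = phi * r_{t-1} + e_t  (phi and the noise level are made-up values)
phi_true, n = 0.7, 4000
r = [0.0]
for _ in range(n - 1):
    r.append(phi_true * r[-1] + random.gauss(0.0, 1.0))

def aic_ar(series, p):
    """AIC of an AR(p) fit, p in {0, 1}, via conditional least squares."""
    if p == 0:
        mean = sum(series) / len(series)
        rss = sum((x - mean) ** 2 for x in series)
        k, n_eff = 1, len(series)
    else:
        x, y = series[:-1], series[1:]
        phi = sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)
        rss = sum((yi - phi * xi) ** 2 for xi, yi in zip(x, y))
        k, n_eff = 2, len(y)
    return n_eff * math.log(rss / n_eff) + 2 * k

# A lower AIC for p = 1 identifies the temporal correlation in the residuals
best_order = min((0, 1), key=lambda p: aic_ar(r, p))
```

In a real analysis the candidate orders would extend well beyond {0, 1} and the fits would come from a time-series library rather than this hand-rolled regression.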

  3. OBSERVATIONAL DATA PROCESSING AT NCEP

    Science.gov Websites

Observational data at NCEP serve not only operations, but also research and study. The various NCEP networks access the observational data base. Observational Data Processing at NCEP: Dennis Keyser, NOAA/NWS/NCEP/EMC (Last Revised

  4. Rapid response tools and datasets for post-fire modeling: Linking Earth Observations and process-based hydrological models to support post-fire remediation

    Treesearch

    M. E. Miller; M. Billmire; W. J. Elliot; K. A. Endsley; P. R. Robichaud

    2015-01-01

    Preparation is key to utilizing Earth Observations and process-based models to support post-wildfire mitigation. Post-fire flooding and erosion can pose a serious threat to life, property and municipal water supplies. Increased runoff and sediment delivery due to the loss of surface cover and fire-induced changes in soil properties are of great concern. Remediation...

  5. Global validation of a process-based model on vegetation gross primary production using eddy covariance observations.

    PubMed

    Liu, Dan; Cai, Wenwen; Xia, Jiangzhou; Dong, Wenjie; Zhou, Guangsheng; Chen, Yang; Zhang, Haicheng; Yuan, Wenping

    2014-01-01

Gross Primary Production (GPP) is the largest flux in the global carbon cycle. However, large uncertainties persist in current global estimates. In this study, we examined the performance of a process-based model (Integrated BIosphere Simulator, IBIS) at 62 eddy covariance sites around the world. Our results indicated that the IBIS model explained 60% of the observed variation in daily GPP at all validation sites. Comparison with a satellite-based vegetation model (Eddy Covariance-Light Use Efficiency, EC-LUE) revealed that the IBIS simulations yielded GPP results comparable to those of the EC-LUE model. Global mean GPP estimated by the IBIS model was 107.50±1.37 Pg C year⁻¹ (mean value ± standard deviation) across the vegetated area for the period 2000-2006, consistent with the results of the EC-LUE model (109.39±1.48 Pg C year⁻¹). To evaluate the uncertainty introduced by the parameter Vcmax, which represents the maximum photosynthetic capacity, we inverted Vcmax using Markov Chain-Monte Carlo (MCMC) procedures. Using the inverted Vcmax values, the simulated global GPP increased by 16.5 Pg C year⁻¹, indicating that the IBIS model is sensitive to Vcmax and that large uncertainty exists in model parameterization.
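The MCMC inversion of a single photosynthesis parameter can be sketched with a random-walk Metropolis sampler. The saturating light-response model, noise level, flat prior, and all numbers below are illustrative assumptions, not the IBIS formulation:

```python
import math
import random

random.seed(0)

# Toy GPP model: a saturating light response scaled by vcmax (illustrative),
# with synthetic "eddy covariance" observations
vcmax_true, noise_sd = 60.0, 2.0
light = [i * 0.5 for i in range(1, 41)]

def gpp(vcmax, par):
    return vcmax * par / (par + 5.0)

obs = [gpp(vcmax_true, x) + random.gauss(0.0, noise_sd) for x in light]

def log_likelihood(vcmax):
    return -0.5 * sum((o - gpp(vcmax, x)) ** 2
                      for o, x in zip(obs, light)) / noise_sd ** 2

# Random-walk Metropolis over the single parameter vcmax (flat prior)
chain, v = [], 30.0
ll = log_likelihood(v)
for _ in range(4000):
    prop = v + random.gauss(0.0, 2.0)
    ll_prop = log_likelihood(prop)
    if math.log(random.random()) < ll_prop - ll:   # accept/reject step
        v, ll = prop, ll_prop
    chain.append(v)

posterior = chain[1000:]           # discard burn-in
v_hat = sum(posterior) / len(posterior)
```

The posterior mean recovers the value used to generate the synthetic data; a sensitivity study like the one in the abstract would then propagate the posterior spread through the forward model.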

  6. Observer-based perturbation extremum seeking control with input constraints for direct-contact membrane distillation process

    NASA Astrophysics Data System (ADS)

    Eleiwi, Fadi; Laleg-Kirati, Taous Meriem

    2018-06-01

    An observer-based perturbation extremum seeking control is proposed for a direct-contact membrane distillation (DCMD) process. The process is described with a dynamic model that is based on a 2D advection-diffusion equation model which has pump flow rates as process inputs. The objective of the controller is to optimise the trade-off between the permeate mass flux and the energy consumption by the pumps inside the process. Cases of single and multiple control inputs are considered through the use of only the feed pump flow rate or both the feed and the permeate pump flow rates. A nonlinear Lyapunov-based observer is designed to provide an estimation for the temperature distribution all over the designated domain of the DCMD process. Moreover, control inputs are constrained with an anti-windup technique to be within feasible and physical ranges. Performance of the proposed structure is analysed, and simulations based on real DCMD process parameters for each control input are provided.
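A perturbation-based extremum seeking loop fits in a few lines. The static quadratic objective, gains, perturbation parameters, and saturation bounds below are illustrative assumptions standing in for the DCMD flux/energy trade-off, not the paper's controller:

```python
import math

# Static toy objective: a concave trade-off with its optimum at u = 2.0
def J(u):
    return -(u - 2.0) ** 2

a, w, gain, dt = 0.2, 5.0, 0.8, 0.01   # perturbation amplitude/frequency, gain
u_hat = 0.0                             # initial estimate of the optimal input
for i in range(60000):
    t = i * dt
    u = u_hat + a * math.sin(w * t)              # perturbed input (flow rate)
    u_hat += gain * dt * J(u) * math.sin(w * t)  # demodulate -> gradient ascent
    u_hat = min(max(u_hat, 0.0), 5.0)            # input constraint (saturation)
```

Multiplying the measured objective by the same sinusoid used to perturb the input extracts a gradient estimate on average, so `u_hat` climbs to the optimum without any model of `J`; the clamp mimics the physical bounds on pump flow rates.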

  7. SEIPS-based process modeling in primary care.

    PubMed

    Wooldridge, Abigail R; Carayon, Pascale; Hundt, Ann Schoofs; Hoonakker, Peter L T

    2017-04-01

    Process mapping, often used as part of the human factors and systems engineering approach to improve care delivery and outcomes, should be expanded to represent the complex, interconnected sociotechnical aspects of health care. Here, we propose a new sociotechnical process modeling method to describe and evaluate processes, using the SEIPS model as the conceptual framework. The method produces a process map and supplementary table, which identify work system barriers and facilitators. In this paper, we present a case study applying this method to three primary care processes. We used purposeful sampling to select staff (care managers, providers, nurses, administrators and patient access representatives) from two clinics to observe and interview. We show the proposed method can be used to understand and analyze healthcare processes systematically and identify specific areas of improvement. Future work is needed to assess usability and usefulness of the SEIPS-based process modeling method and further refine it. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. SEIPS-Based Process Modeling in Primary Care

    PubMed Central

    Wooldridge, Abigail R.; Carayon, Pascale; Hundt, Ann Schoofs; Hoonakker, Peter

    2016-01-01

    Process mapping, often used as part of the human factors and systems engineering approach to improve care delivery and outcomes, should be expanded to represent the complex, interconnected sociotechnical aspects of health care. Here, we propose a new sociotechnical process modeling method to describe and evaluate processes, using the SEIPS model as the conceptual framework. The method produces a process map and supplementary table, which identify work system barriers and facilitators. In this paper, we present a case study applying this method to three primary care processes. We used purposeful sampling to select staff (care managers, providers, nurses, administrators and patient access representatives) from two clinics to observe and interview. We show the proposed method can be used to understand and analyze healthcare processes systematically and identify specific areas of improvement. Future work is needed to assess usability and usefulness of the SEIPS-based process modeling method and further refine it. PMID:28166883

  9. An assessment of the carbon balance of arctic tundra: comparisons among observations, process models, and atmospheric inversions

    USGS Publications Warehouse

    McGuire, A.D.; Christensen, T.R.; Hayes, D.; Heroult, A.; Euskirchen, E.; Yi, Y.; Kimball, J.S.; Koven, C.; Lafleur, P.; Miller, P.A.; Oechel, W.; Peylin, P.; Williams, M.

    2012-01-01

    Although arctic tundra has been estimated to cover only 8% of the global land surface, the large and potentially labile carbon pools currently stored in tundra soils have the potential for large emissions of carbon (C) under a warming climate. These emissions as radiatively active greenhouse gases in the form of both CO2 and CH4 could amplify global warming. Given the potential sensitivity of these ecosystems to climate change and the expectation that the Arctic will experience appreciable warming over the next century, it is important to assess whether responses of C exchange in tundra regions are likely to enhance or mitigate warming. In this study we compared analyses of C exchange of Arctic tundra between 1990–1999 and 2000–2006 among observations, regional and global applications of process-based terrestrial biosphere models, and atmospheric inversion models. Syntheses of the compilation of flux observations and of inversion model results indicate that the annual exchange of CO2 between arctic tundra and the atmosphere has large uncertainties that cannot be distinguished from neutral balance. The mean estimate from an ensemble of process-based model simulations suggests that arctic tundra acted as a sink for atmospheric CO2 in recent decades, but based on the uncertainty estimates it cannot be determined with confidence whether these ecosystems represent a weak or a strong sink. Tundra was 0.6 °C warmer in the 2000s compared to the 1990s. The central estimates of the observations, process-based models, and inversion models each identify stronger sinks in the 2000s compared with the 1990s. Similarly, the observations and the applications of regional process-based models suggest that CH4 emissions from arctic tundra have increased from the 1990s to 2000s. Based on our analyses of the estimates from observations, process-based models, and inversion models, we estimate that arctic tundra was a sink for atmospheric CO2 of 110 Tg C yr-1 (uncertainty between a

  10. Science-Grade Observing Systems as Process Observatories: Mapping and Understanding Nonlinearity and Multiscale Memory with Models and Observations

    NASA Astrophysics Data System (ADS)

    Barros, A. P.; Wilson, A. M.; Miller, D. K.; Tao, J.; Genereux, D. P.; Prat, O.; Petersen, W. A.; Brunsell, N. A.; Petters, M. D.; Duan, Y.

    2015-12-01

    Using the planet as a study domain and collecting observations over unprecedented ranges of spatial and temporal scales, NASA's EOS (Earth Observing System) program was an agent of transformational change in Earth Sciences over the last thirty years. The remarkable space-time organization and variability of atmospheric and terrestrial moist processes that emerged from the analysis of comprehensive satellite observations provided much impetus to expand the scope of land-atmosphere interaction studies in Hydrology and Hydrometeorology. Consequently, input and output terms in the mass and energy balance equations evolved from being treated as fluxes that can be used as boundary conditions, or forcing, to being viewed as dynamic processes of a coupled system interacting at multiple scales. Measurements of states or fluxes are most useful if together they map, reveal and/or constrain the underlying physical processes and their interactions. This can only be accomplished through an integrated observing system designed to capture the coupled physics, including nonlinear feedbacks and tipping points. Here, we first review and synthesize lessons learned from hydrometeorology studies in the Southern Appalachians and in the Southern Great Plains using both ground-based and satellite observations, physical models and data-assimilation systems. We will specifically focus on mapping and understanding nonlinearity and multiscale memory of rainfall-runoff processes in mountainous regions. It will be shown that beyond technical rigor, variety, quantity and duration of measurements, the utility of observing systems is determined by their interpretive value in the context of physical models to describe the linkages among different observations. Second, we propose a framework for designing science-grade and science-minded process-oriented integrated observing and modeling platforms for hydrometeorological studies.

  11. Parameter Estimation and Model Selection for Indoor Environments Based on Sparse Observations

    NASA Astrophysics Data System (ADS)

    Dehbi, Y.; Loch-Dehbi, S.; Plümer, L.

    2017-09-01

    This paper presents a novel method for the parameter estimation and model selection for the reconstruction of indoor environments based on sparse observations. While most approaches for the reconstruction of indoor models rely on dense observations, we predict scenes of the interior with high accuracy in the absence of indoor measurements. We use a model-based top-down approach and incorporate strong but profound prior knowledge. The latter includes probability density functions for model parameters and sparse observations such as room areas and the building footprint. The floorplan model is characterized by linear and bi-linear relations with discrete and continuous parameters. We focus on the stochastic estimation of model parameters based on a topological model derived by combinatorial reasoning in a first step. A Gauss-Markov model is applied for estimation and simulation of the model parameters. Symmetries are represented and exploited during the estimation process. Background knowledge as well as observations are incorporated in a maximum likelihood estimation and model selection is performed with AIC/BIC. The likelihood is also used for the detection and correction of potential errors in the topological model. Estimation results are presented and discussed.
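The Gauss-Markov estimation and AIC/BIC selection steps can be illustrated generically. The linear "footprint" toy model, the per-observation weights, and the two candidate models below are invented for the sketch, not the paper's floorplan parameterization:

```python
import math
import random

random.seed(2)

# Sparse, heteroscedastic "observations" of room widths along a footprint:
# w_i = a + b * i + noise, with per-observation standard deviations
a_true, b_true = 3.0, 0.5
idx = list(range(8))
sd = [0.05, 0.05, 0.2, 0.05, 0.2, 0.05, 0.05, 0.2]
obs = [a_true + b_true * i + random.gauss(0.0, s) for i, s in zip(idx, sd)]

def wls_line(xs, ys, sds):
    """Gauss-Markov (weighted least squares) estimate of y = a + b*x."""
    w = [1.0 / s ** 2 for s in sds]
    sw = sum(w)
    sx = sum(wi * x for wi, x in zip(w, xs))
    sy = sum(wi * y for wi, y in zip(w, ys))
    sxx = sum(wi * x * x for wi, x in zip(w, xs))
    sxy = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
    b = (sw * sxy - sx * sy) / (sw * sxx - sx * sx)
    return (sy - b * sx) / sw, b

def bic(ys, preds, sds, k):
    """BIC (up to constants) from the weighted residual sum of squares."""
    rss = sum(((y - p) / s) ** 2 for y, p, s in zip(ys, preds, sds))
    return rss + k * math.log(len(ys))

a_hat, b_hat = wls_line(idx, obs, sd)
bic_linear = bic(obs, [a_hat + b_hat * i for i in idx], sd, 2)
mean_w = sum(o / s ** 2 for o, s in zip(obs, sd)) / sum(1 / s ** 2 for s in sd)
bic_const = bic(obs, [mean_w] * len(obs), sd, 1)
# BIC selects the linear model when the trend is real
```

The same pattern generalizes to the paper's setting: each candidate topology defines a (bi-)linear observation model, the Gauss-Markov step estimates its parameters, and AIC/BIC arbitrates between topologies.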

  12. Bayesian framework for modeling diffusion processes with nonlinear drift based on nonlinear and incomplete observations.

    PubMed

    Wu, Hao; Noé, Frank

    2011-03-01

Diffusion processes are relevant for a variety of phenomena in the natural sciences, including diffusion of cells or biomolecules within cells, diffusion of molecules on a membrane or surface, and diffusion of a molecular conformation within a complex energy landscape. Many experimental tools exist now to track such diffusive motions in single cells or molecules, including high-resolution light microscopy, optical tweezers, fluorescence quenching, and Förster resonance energy transfer (FRET). Experimental observations are most often indirect and incomplete: (1) They do not directly reveal the potential or diffusion constants that govern the diffusion process, (2) they have limited time and space resolution, and (3) the highest-resolution experiments do not track the motion directly but rather probe it stochastically by recording single events, such as photons, whose properties depend on the state of the system under investigation. Here, we propose a general Bayesian framework to model diffusion processes with nonlinear drift based on incomplete observations as generated by various types of experiments. A maximum penalized likelihood estimator is given as well as a Gibbs sampling method that allows one to estimate the trajectories that have caused the measurement, the nonlinear drift or potential function, and the noise or diffusion matrices, as well as uncertainty estimates of these properties. The approach is illustrated on numerical simulations of FRET experiments where it is shown that trajectories, potentials, and diffusion constants can be efficiently and reliably estimated even in cases with little statistics or nonequilibrium measurement conditions.
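As a flavor of the estimation problem (not the authors' Bayesian machinery), drift and diffusion coefficients of a simple overdamped Langevin process can be recovered from a densely observed path by regressing increments on the state. The linear drift, time step, and path length are illustrative assumptions:

```python
import math
import random

random.seed(3)

# Simulate dx = -k*x dt + sqrt(2 D) dW with Euler-Maruyama (toy values)
k_true, D_true, dt, n = 1.5, 0.5, 0.01, 20000
x = [0.0]
for _ in range(n):
    x.append(x[-1] - k_true * x[-1] * dt
             + math.sqrt(2.0 * D_true * dt) * random.gauss(0.0, 1.0))

# Drift: least-squares regression of the increments on the current state
num = sum(xi * (xn - xi) for xi, xn in zip(x[:-1], x[1:]))
den = sum(xi * xi for xi in x[:-1])
k_hat = -num / (den * dt)

# Diffusion: variance of the drift-corrected increments
resid = [xn - xi + k_hat * xi * dt for xi, xn in zip(x[:-1], x[1:])]
D_hat = sum(r * r for r in resid) / (2.0 * dt * len(resid))
```

The paper's contribution is precisely to replace this direct regression, which fails for sparse, indirect, or event-based observations, with trajectory inference via penalized likelihood and Gibbs sampling.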

  13. Technical Note: Approximate Bayesian parameterization of a process-based tropical forest model

    NASA Astrophysics Data System (ADS)

    Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.

    2014-02-01

    Inverse parameter estimation of process-based models is a long-standing problem in many scientific disciplines. A key question for inverse parameter estimation is how to define the metric that quantifies how well model predictions fit to the data. This metric can be expressed by general cost or objective functions, but statistical inversion methods require a particular metric, the probability of observing the data given the model parameters, known as the likelihood. For technical and computational reasons, likelihoods for process-based stochastic models are usually based on general assumptions about variability in the observed data, and not on the stochasticity generated by the model. Only in recent years have new methods become available that allow the generation of likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual inventory data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and demonstrate the application of this method by fitting the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations. 
Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation
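For contrast with the parametric likelihood approximation used above, plain ABC rejection can be sketched in a few lines. The flat prior, the single summary statistic, and the tolerance are arbitrary illustrative choices, not FORMIND quantities:

```python
import random

random.seed(7)

# "Observed" summary statistic (e.g., a mean stem density from inventory data)
obs_summary = 5.0

def simulate_summary(theta):
    """Stochastic forward model reduced to one summary statistic."""
    return sum(random.gauss(theta, 1.0) for _ in range(50)) / 50

# ABC rejection: draw from the prior, keep draws whose simulated summary
# falls within a tolerance of the observed one
accepted = []
while len(accepted) < 300:
    theta = random.uniform(0.0, 10.0)          # flat prior
    if abs(simulate_summary(theta) - obs_summary) < 0.2:
        accepted.append(theta)

theta_hat = sum(accepted) / len(accepted)
```

The accepted draws approximate the posterior; the approach in the abstract instead fits a parametric likelihood to the simulated summaries and places it inside a conventional MCMC sampler, which avoids the tolerance parameter and the heavy rejection rate.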

  14. Advancing land surface model development with satellite-based Earth observations

    NASA Astrophysics Data System (ADS)

    Orth, Rene; Dutra, Emanuel; Trigo, Isabel F.; Balsamo, Gianpaolo

    2017-04-01

    The land surface forms an essential part of the climate system. It interacts with the atmosphere through the exchange of water and energy and hence influences weather and climate, as well as their predictability. Correspondingly, the land surface model (LSM) is an essential part of any weather forecasting system. LSMs rely on partly poorly constrained parameters, due to sparse land surface observations. With the use of newly available land surface temperature observations, we show in this study that novel satellite-derived datasets help to improve LSM configuration, and hence can contribute to improved weather predictability. We use the Hydrology Tiled ECMWF Scheme of Surface Exchanges over Land (HTESSEL) and validate it comprehensively against an array of Earth observation reference datasets, including the new land surface temperature product. This reveals satisfactory model performance in terms of hydrology, but poor performance in terms of land surface temperature. This is due to inconsistencies of process representations in the model as identified from an analysis of perturbed parameter simulations. We show that HTESSEL can be more robustly calibrated with multiple instead of single reference datasets as this mitigates the impact of the structural inconsistencies. Finally, performing coupled global weather forecasts we find that a more robust calibration of HTESSEL also contributes to improved weather forecast skills. In summary, new satellite-based Earth observations are shown to enhance the multi-dataset calibration of LSMs, thereby improving the representation of insufficiently captured processes, advancing weather predictability and understanding of climate system feedbacks. Orth, R., E. Dutra, I. F. Trigo, and G. Balsamo (2016): Advancing land surface model development with satellite-based Earth observations. Hydrol. Earth Syst. Sci. Discuss., doi:10.5194/hess-2016-628
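The benefit of multi-dataset calibration can be shown generically. The toy model, the structural bias injected into dataset A, and the grid search below are illustrative assumptions, not HTESSEL specifics:

```python
import random

random.seed(11)

p_true = 2.0
xs = [x / 10 for x in range(1, 21)]

# Two reference datasets of the same system; dataset A carries a structural
# bias (an unmodelled offset), dataset B does not
obs_a = [p_true * x + 0.3 + random.gauss(0.0, 0.05) for x in xs]
obs_b = [p_true * x ** 2 + random.gauss(0.0, 0.05) for x in xs]

sim_a = lambda p, x: p * x          # model output compared against dataset A
sim_b = lambda p, x: p * x ** 2     # model output compared against dataset B

def cost(p, pairs):
    """Sum of mean squared errors over the chosen reference datasets."""
    return sum(
        sum((sim(p, x) - o) ** 2 for x, o in zip(xs, obs)) / len(xs)
        for sim, obs in pairs
    )

grid = [1.0 + i * 0.001 for i in range(2001)]
p_single = min(grid, key=lambda p: cost(p, [(sim_a, obs_a)]))
p_multi = min(grid, key=lambda p: cost(p, [(sim_a, obs_a), (sim_b, obs_b)]))
# Calibrating against both datasets mitigates the bias from dataset A alone
```

Single-dataset calibration absorbs the structural bias into the parameter; adding an independent reference dataset pulls the estimate back toward the true value, which is the robustness argument made in the abstract.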

  15. Unified Modeling Language (UML) for hospital-based cancer registration processes.

    PubMed

    Shiki, Naomi; Ohno, Yuko; Fujii, Ayumi; Murata, Taizo; Matsumura, Yasushi

    2008-01-01

Hospital-based cancer registry involves complex processing steps that span across multiple departments. In addition, management techniques and registration procedures differ depending on each medical facility. Establishing processes for a hospital-based cancer registry requires clarifying the specific functions and labor needed. In recent years, the business modeling technique, in which management evaluation is done by clearly spelling out processes and functions, has been applied to business process analysis. However, there are few analytical reports describing the applications of these concepts to medical-related work. In this study, we initially sought to model hospital-based cancer registration processes using the Unified Modeling Language (UML), to clarify functions. The object of this study was the cancer registry of Osaka University Hospital. We organized the hospital-based cancer registration processes based on interview and observational surveys, and produced an As-Is model using activity, use-case, and class diagrams. After drafting every UML model, it was fed back to practitioners to check its validity and then improved. We were able to define the workflow for each department using activity diagrams. In addition, by using use-case diagrams we were able to classify each department within the hospital as a system, and thereby specify the core processes and staff that were responsible for each department. The class diagrams were effective in systematically organizing the information to be used for hospital-based cancer registries. Using UML modeling, hospital-based cancer registration processes were broadly classified into three separate processes, namely, registration tasks, quality control, and filing data. An additional 14 functions were also extracted. Many tasks take place within the hospital-based cancer registry office, but the process of providing information spans across multiple departments. 
Moreover, additional tasks were required in comparison to using a

  16. The Geolocation model for lunar-based Earth observation

    NASA Astrophysics Data System (ADS)

    Ding, Yixing; Liu, Guang; Ren, Yuanzhen; Ye, Hanlin; Guo, Huadong; Lv, Mingyang

    2016-07-01

In recent years, people have become increasingly aware that the Earth needs to be treated as an entirety, and consequently to be observed in a holistic, systematic and multi-scale view. However, the interaction mechanism between the Earth's inner layers and outer layers is still unclear. Therefore, we propose to observe the Earth's inner layers and outer layers simultaneously from the Moon, which may be helpful to studies in climatology, meteorology, seismology, etc. At present, the Moon has been proved to be an irreplaceable platform for observing the Earth's outer layers. Meanwhile, some discussions have been made on lunar-based observation of the Earth's inner layers, but the geolocation model of lunar-based observation has not been specified yet. In this paper, we present a geolocation model based on transformation matrices. The model includes six coordinate systems: the telescope coordinate system, the lunar local coordinate system, the lunar-reference coordinate system, the selenocentric inertial coordinate system, the geocentric inertial coordinate system and the geo-reference coordinate system. The parameters, including the positions of the Sun, the Earth and the Moon, the libration, and the attitude of the Earth, can be acquired from the ephemeris. By giving an elevation angle and an azimuth angle of the lunar-based telescope, this model links each image pixel to a unique ground point.
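The matrix-chain geolocation idea can be illustrated with generic rotation matrices. The frame pairings and all angles below are placeholders; real values would come from the ephemeris, and a full implementation would chain all six frames listed in the abstract:

```python
import math

def rot_z(a):
    """Rotation about the z-axis by angle a (radians)."""
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def rot_x(a):
    """Rotation about the x-axis by angle a (radians)."""
    c, s = math.cos(a), math.sin(a)
    return [[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_vec(A, v):
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

# Placeholder angles standing in for telescope pointing, lunar libration and
# Earth attitude
telescope_to_lunar = mat_mul(rot_z(0.3), rot_x(0.1))
lunar_to_earth = mat_mul(rot_z(-1.2), rot_x(0.05))

pointing_tel = [0.0, 0.0, 1.0]      # boresight in the telescope frame
pointing_earth = mat_vec(lunar_to_earth,
                         mat_vec(telescope_to_lunar, pointing_tel))

norm = math.sqrt(sum(c * c for c in pointing_earth))
# Rotations preserve length, so the chained pointing vector stays unit-length
```

Intersecting the resulting line of sight with an Earth ellipsoid (not shown) would then yield the unique ground point per pixel that the abstract describes.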

  17. An Observation-based Assessment of Instrument Requirements for a Future Precipitation Process Observing System

    NASA Astrophysics Data System (ADS)

    Nelson, E.; L'Ecuyer, T. S.; Wood, N.; Smalley, M.; Kulie, M.; Hahn, W.

    2017-12-01

Global models exhibit substantial biases in the frequency, intensity, duration, and spatial scales of precipitation systems. Much of this uncertainty stems from an inadequate representation of the processes by which water is cycled between the surface and atmosphere and, in particular, those that govern the formation and maintenance of cloud systems and their propensity to form precipitation. Progress toward improving precipitation process models requires observing systems capable of quantifying the coupling between the ice content, vertical mass fluxes, and precipitation yield of precipitating cloud systems. Spaceborne multi-frequency, Doppler radar offers a unique opportunity to address this need, but the effectiveness of such a mission is heavily dependent on its ability to actually observe the processes of interest in the widest possible range of systems. Planning for a next generation precipitation process observing system should, therefore, start with a fundamental evaluation of the trade-offs between sensitivity, resolution, sampling, cost, and the overall potential scientific yield of the mission. Here we provide an initial assessment of the scientific and economic trade-space by evaluating hypothetical spaceborne multi-frequency radars using a combination of current real-world and model-derived synthetic observations. Specifically, we alter the field of view, vertical resolution, and sensitivity of a hypothetical Ka- and W-band radar system and propagate those changes through precipitation detection and intensity retrievals. The results suggest that sampling biases introduced by reducing sensitivity disproportionately affect the light rainfall and frozen precipitation regimes that are critical for warm cloud feedbacks and ice sheet mass balance, respectively. 
Coarser spatial resolution observations introduce regime-dependent biases in both precipitation occurrence and intensity that depend on cloud regime, with even the sign of the bias varying within a
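The disproportionate loss of light precipitation at reduced sensitivity can be demonstrated with a toy rain-rate population. The lognormal parameters and the detection threshold below are illustrative, not values from any actual radar trade study:

```python
import math
import random

random.seed(5)

# Skewed (lognormal) population of rain rates, in mm/h (illustrative values)
rates = [math.exp(random.gauss(-1.0, 1.2)) for _ in range(100000)]

def missed(threshold):
    """Fractions of occurrences and of accumulation below the detection limit."""
    below = [r for r in rates if r < threshold]
    return len(below) / len(rates), sum(below) / sum(rates)

occ_missed, acc_missed = missed(0.2)
# Occurrence is hit much harder than accumulation: light events dominate the
# counts but contribute comparatively little to total rainfall
```

This is the sampling-bias mechanism the abstract points to: a sensitivity cut removes a large share of precipitation occurrences while leaving the accumulated total almost intact, skewing any occurrence-based statistics.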

  18. Process observation in fiber laser-based selective laser melting

    NASA Astrophysics Data System (ADS)

    Thombansen, Ulrich; Gatej, Alexander; Pereira, Milton

    2015-01-01

The process observation in selective laser melting (SLM) focuses on observing the interaction point where the powder is processed. To provide process-relevant information, signals have to be acquired that are resolved in both time and space. Especially in high-power SLM, where more than 1 kW of laser power is used, processing speeds of several meters per second are required for high-quality processing results. Therefore, an implementation of a suitable process observation system has to acquire a large amount of spatially resolved data at low sampling speeds, or it has to restrict the acquisition to a predefined area at a high sampling speed. In any case, it is vitally important to synchronously record the laser beam position and the acquired signal. This is a prerequisite that allows the recorded data to become information. Today, most SLM systems employ f-theta lenses to focus the processing laser beam onto the powder bed. This report describes the drawbacks that result for process observation and suggests a variable retro-focus system which solves these issues. The beam quality of fiber lasers delivers the processing laser beam to the powder bed at relevant focus diameters, which is a key prerequisite for this solution to be viable. The optical train we present here couples the processing laser beam and the process observation coaxially, ensuring consistent alignment of the interaction zone and the observed area. With respect to signal processing, we have developed a solution that synchronously acquires signals from a pyrometer and the position of the laser beam by sampling the data with a field programmable gate array. The relevance of the acquired signals has been validated by the scanning of a sample filament. Experiments with grooved samples show a correlation between different powder thicknesses and the acquired signals at relevant processing parameters. This basic work takes a first step toward self-optimization of the manufacturing process in SLM. It enables the

  19. Fuzzy model-based observers for fault detection in CSTR.

    PubMed

    Ballesteros-Moncada, Hazael; Herrera-López, Enrique J; Anzurez-Marín, Juan

    2015-11-01

Given the vast variety of fuzzy model-based observers reported in the literature, which would be the proper one to use for fault detection in a class of chemical reactors? In this study four fuzzy model-based observers for sensor fault detection of a Continuous Stirred Tank Reactor were designed and compared. The designs include (i) a Luenberger fuzzy observer, (ii) a Luenberger fuzzy observer with sliding modes, (iii) a Walcott-Zak fuzzy observer, and (iv) an Utkin fuzzy observer. A negative fault signal, an oscillating fault signal, and a bounded random noise signal with a maximum value of ±0.4 were used to evaluate and compare the performance of the fuzzy observers. The Utkin fuzzy observer showed the best performance under the tested conditions. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
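At their core, all four designs extend a Luenberger-type observer. A minimal discrete-time linear (non-fuzzy) version for a one-state toy "reactor" shows the correction mechanism; the system and gain values are made up for the illustration:

```python
# Plant: x_{k+1} = a*x_k, measured as y_k = c*x_k; the observer corrects its
# own prediction with the measured output error (toy scalar system)
a, c, L = 0.95, 1.0, 0.4
x, x_hat = 1.0, 0.0          # true state vs observer estimate
errors = []
for _ in range(60):
    y = c * x                                 # sensor reading
    x_hat = a * x_hat + L * (y - c * x_hat)   # predict + innovate
    x = a * x                                 # plant evolves
    errors.append(abs(x - x_hat))
# The estimation error contracts by |a - L*c| = 0.55 each step
```

For fault detection, the innovation `y - c * x_hat` is the quantity of interest: once the estimate has converged, a persistent residual signals a sensor fault. The fuzzy versions in the paper blend several such local observers via Takagi-Sugeno-style membership weights.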

  20. Aligning observed and modelled behaviour based on workflow decomposition

    NASA Astrophysics Data System (ADS)

    Wang, Lu; Du, YuYue; Liu, Wei

    2017-09-01

When business processes are mostly supported by information systems, the availability of event logs generated from these systems, as well as the requirement of appropriate process models, is increasing. Business processes can be discovered, monitored and enhanced by extracting process-related information. However, some events cannot be correctly identified because of the explosion in the volume of event logs. Therefore, a new process mining technique based on a workflow decomposition method is proposed in this paper. Petri nets (PNs) are used to describe business processes, and conformance checking of event logs and process models is then investigated. A decomposition approach is proposed to divide large process models and event logs into several separate parts that can be analysed independently, while an alignment approach based on a state equation method in PN theory enhances the performance of conformance checking. Both approaches are implemented in the process mining framework ProM. The correctness and effectiveness of the proposed methods are illustrated through experiments.
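At its simplest, alignment-based conformance checking can be approximated as a minimum edit distance between an observed trace and the runs the model admits. The event names, the toy run set, and the unit costs below are illustrative; real alignments are computed on the Petri net itself rather than an enumerated run set:

```python
def edit_distance(a, b):
    """Levenshtein distance: minimal insertions/deletions/substitutions."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # delete from a
                           cur[j - 1] + 1,         # insert into a
                           prev[j - 1] + (ca != cb)))  # (mis)match
        prev = cur
    return prev[-1]

# Runs admitted by a toy process model (in practice: firing sequences of a PN)
model_runs = [
    ("register", "check", "pay", "ship"),
    ("register", "pay", "ship"),
]

def alignment_cost(trace):
    return min(edit_distance(trace, run) for run in model_runs)

fitting = alignment_cost(("register", "pay", "ship"))     # conforms
deviating = alignment_cost(("register", "ship", "pay"))   # swapped events
```

A cost of zero means the trace replays on the model; positive costs quantify the deviation, and the decomposition idea in the abstract amounts to computing such costs per sub-model and aggregating them.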

  1. Models of Quantitative Estimations: Rule-Based and Exemplar-Based Processes Compared

    ERIC Educational Resources Information Center

    von Helversen, Bettina; Rieskamp, Jorg

    2009-01-01

    The cognitive processes underlying quantitative estimations vary. Past research has identified task-contingent changes between rule-based and exemplar-based processes (P. Juslin, L. Karlsson, & H. Olsson, 2008). B. von Helversen and J. Rieskamp (2008), however, proposed a simple rule-based model--the mapping model--that outperformed the…

  2. Characterizing and reducing equifinality by constraining a distributed catchment model with regional signatures, local observations, and process understanding

    NASA Astrophysics Data System (ADS)

    Kelleher, Christa; McGlynn, Brian; Wagener, Thorsten

    2017-07-01

    Distributed catchment models are widely used tools for predicting hydrologic behavior. While distributed models require many parameters to describe a system, they are expected to simulate behavior that is more consistent with observed processes. However, obtaining a single set of acceptable parameters can be problematic, as parameter equifinality often results in several behavioral sets that fit observations (typically streamflow). In this study, we investigate the extent to which equifinality impacts a typical distributed modeling application. We outline a hierarchical approach to reduce the number of behavioral sets based on regional, observation-driven, and expert-knowledge-based constraints. For our application, we explore how each of these constraint classes reduced the number of behavioral parameter sets and altered distributions of spatiotemporal simulations, simulating a well-studied headwater catchment, Stringer Creek, Montana, using the distributed hydrology-soil-vegetation model (DHSVM). As a demonstrative exercise, we investigated model performance across 10 000 parameter sets. Constraints on regional signatures, the hydrograph, and two internal measurements of snow water equivalent time series reduced the number of behavioral parameter sets but still left a small number with similar goodness of fit. This subset was ultimately further reduced by incorporating pattern expectations of groundwater table depth across the catchment. Our results suggest that utilizing a hierarchical approach based on regional datasets, observations, and expert knowledge to identify behavioral parameter sets can reduce equifinality and bolster more careful application and simulation of spatiotemporal processes via distributed modeling at the catchment scale.
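The hierarchical constraint idea can be sketched as successive filters over a pool of candidate parameter sets, each class only ever shrinking the behavioral subset. The metric names, thresholds, and random stand-in "performance" values below are hypothetical, not the DHSVM setup from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000  # candidate parameter sets, as in the demonstrative exercise

# Hypothetical per-parameter-set metrics (stand-ins for the regional-signature,
# hydrograph, snow-water-equivalent and water-table criteria in the study).
runoff_ratio_err = rng.uniform(0.0, 1.0, n)    # regional signature misfit
nse_streamflow   = rng.uniform(-1.0, 1.0, n)   # hydrograph goodness of fit
swe_rmse_mm      = rng.uniform(0.0, 200.0, n)  # SWE error at two snow sites
wt_pattern_ok    = rng.random(n) < 0.5         # expert-judged water-table pattern

behavioral = np.ones(n, dtype=bool)
counts = []
for name, keep in [
    ("regional signatures", runoff_ratio_err < 0.2),
    ("hydrograph fit",      nse_streamflow > 0.5),
    ("SWE time series",     swe_rmse_mm < 50.0),
    ("expert knowledge",    wt_pattern_ok),
]:
    behavioral &= keep                         # each class can only shrink the set
    counts.append(int(behavioral.sum()))
    print(f"after {name}: {counts[-1]} behavioral sets remain")
```

The monotone shrinkage of `counts` mirrors the paper's hierarchy: cheap regional constraints first, then local observations, with expert pattern knowledge breaking the final ties among near-equifinal sets.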

  3. Modelling Ground Based X- and Ku-Band Observations of Tundra Snow

    NASA Astrophysics Data System (ADS)

    Kasurak, A.; King, J. M.; Kelly, R. E.

    2012-12-01

    The main physical processes represented by the model are snow volume scattering and ground surface reflectance. With a larger correction needed for X-band, where the ground portion of backscatter is expected to be larger, the contribution from the underlying soil is explored first. The ground contribution in sRT is computed using the semi-empirical Oh et al. (1992) model, with permittivity from a temperate mineral-soil-based model. The ground response is tested against two observations of snow-removed tundra, and one observation of snow-free tundra. A secondary analysis is completed using a modified sRT ground model, incorporating recent work on frozen organic permittivity by Mironov et al. (2010). Multi-scale surface roughness resulting from microtopography superimposed on regularly distributed hummocks is also addressed. These results demonstrate the applicability of microwave scattering models to tundra snowpacks underlain with peat, and demonstrate the applicability of the CoReH2O sRT model.

  4. An accurate Kriging-based regional ionospheric model using combined GPS/BeiDou observations

    NASA Astrophysics Data System (ADS)

    Abdelazeem, Mohamed; Çelik, Rahmi N.; El-Rabbany, Ahmed

    2018-01-01

    In this study, we propose a regional ionospheric model (RIM) based on GPS-only and on combined GPS/BeiDou observations for single-frequency precise point positioning (SF-PPP) users in Europe. GPS/BeiDou observations from 16 reference stations are processed in zero-difference mode. A least-squares algorithm is developed to determine the vertical total electron content (VTEC) bi-linear function parameters for a 15-minute time interval. The Kriging interpolation method is used to estimate the VTEC values on a 1° × 1° grid. The resulting RIMs are validated for PPP applications using GNSS observations from another set of stations. The SF-PPP accuracy and convergence time obtained through the proposed RIMs are computed and compared with those obtained through the International GNSS Service global ionospheric maps (IGS-GIM). The results show that the RIMs speed up the convergence time and enhance the overall positioning accuracy in comparison with the IGS-GIM model, particularly the combined GPS/BeiDou-based model.
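A minimal sketch of the gridding step, assuming an ordinary-kriging formulation with a Gaussian covariance model. The station coordinates, VTEC values (in TECU), and covariance parameters below are invented for illustration and are not those of the paper.

```python
import numpy as np

def ordinary_kriging(xy_obs, v_obs, xy_grid, length=5.0, sill=1.0, nugget=1e-6):
    """Ordinary kriging with a Gaussian covariance model (illustrative only;
    the paper's variogram model and parameters are not specified here)."""
    def cov(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return sill * np.exp(-(d / length) ** 2)

    n = len(xy_obs)
    # Kriging system with a Lagrange multiplier enforcing weights that sum to 1
    K = np.empty((n + 1, n + 1))
    K[:n, :n] = cov(xy_obs, xy_obs) + nugget * np.eye(n)
    K[n, :n] = K[:n, n] = 1.0
    K[n, n] = 0.0
    k = np.vstack([cov(xy_obs, xy_grid), np.ones((1, len(xy_grid)))])
    w = np.linalg.solve(K, k)
    return w[:n].T @ v_obs

# Hypothetical VTEC (TECU) at four stations, interpolated to one grid node
obs_xy = np.array([[10.0, 45.0], [12.0, 46.0], [11.0, 44.0], [13.0, 45.5]])
vtec = np.array([18.0, 20.0, 17.5, 21.0])
grid = np.array([[11.5, 45.2]])
print(ordinary_kriging(obs_xy, vtec, grid))
```

Because the weights are constrained to sum to one, the estimator is unbiased, and with a near-zero nugget it interpolates the station values exactly, which is the property that makes kriging attractive for building grid maps from sparse reference stations.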

  5. The Iterative Research Cycle: Process-Based Model Evaluation

    NASA Astrophysics Data System (ADS)

    Vrugt, J. A.

    2014-12-01

    The ever-increasing pace of computational power, along with continued advances in measurement technologies and improvements in process understanding, has stimulated the development of increasingly complex physics-based models that simulate a myriad of processes at different spatial and temporal scales. Reconciling these high-order system models with perpetually larger volumes of field data is becoming more and more difficult, particularly because classical likelihood-based fitting methods lack the power to detect and pinpoint deficiencies in the model structure. In this talk I will give an overview of our latest research on process-based model calibration and evaluation. This approach, rooted in Bayesian theory, uses summary metrics of the calibration data rather than the data itself to help detect which component(s) of the model is (are) malfunctioning and in need of improvement. A few case studies involving hydrologic and geophysical models will be used to demonstrate the proposed methodology.

  6. Dynamic frailty models based on compound birth-death processes.

    PubMed

    Putter, Hein; van Houwelingen, Hans C

    2015-07-01

    Frailty models are used in survival analysis to model unobserved heterogeneity. They accommodate such heterogeneity by the inclusion of a random term, the frailty, which is assumed to multiply the hazard of a subject (individual frailty) or the hazards of all subjects in a cluster (shared frailty). Typically, the frailty term is assumed to be constant over time. This is a restrictive assumption and extensions to allow for time-varying or dynamic frailties are of interest. In this paper, we extend the auto-correlated frailty models of Henderson and Shimakura and of Fiocco, Putter and van Houwelingen, developed for longitudinal count data and discrete survival data, to continuous survival data. We present a rigorous construction of the frailty processes in continuous time based on compound birth-death processes. When the frailty processes are used as mixtures in models for survival data, we derive the marginal hazards and survival functions and the marginal bivariate survival functions and cross-ratio function. We derive distributional properties of the processes, conditional on observed data, and show how to obtain the maximum likelihood estimators of the parameters of the model using a (stochastic) expectation-maximization algorithm. The methods are applied to a publicly available data set. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  7. Livingstone Model-Based Diagnosis of Earth Observing One Infusion Experiment

    NASA Technical Reports Server (NTRS)

    Hayden, Sandra C.; Sweet, Adam J.; Christa, Scott E.

    2004-01-01

    The Earth Observing One satellite, launched in November 2000, is an active earth science observation platform. This paper reports on the progress of an infusion experiment in which the Livingstone 2 Model-Based Diagnostic engine is deployed on Earth Observing One, demonstrating the capability to monitor the nominal operation of the spacecraft under command of an on-board planner, and demonstrating on-board diagnosis of spacecraft failures. Design and development of the experiment, specification and validation of diagnostic scenarios, characterization of performance results and benefits of the model-based approach are presented.

  8. Successes and Challenges in Linking Observations and Modeling of Marine and Terrestrial Cryospheric Processes

    NASA Astrophysics Data System (ADS)

    Herzfeld, U. C.; Hunke, E. C.; Trantow, T.; Greve, R.; McDonald, B.; Wallin, B.

    2014-12-01

    Understanding the state of the cryosphere and its relationship to other components of the Earth system requires both models of geophysical processes and observations of geophysical properties and processes; however, linking observations and models is far from trivial. This paper looks at examples of sea-ice and land-ice model-observation linkages to examine some approaches, challenges and solutions. In a sea-ice example, ice deformation is analyzed as a key process that indicates fundamental changes in the Arctic sea ice cover. Simulation results from the Los Alamos Sea-Ice Model CICE, which is also the sea-ice component of the Community Earth System Model (CESM), are compared to parameters indicative of deformation as derived from mathematical analysis of remote sensing data. Data include altimeter, micro-ASAR and image data from manned and unmanned aircraft campaigns (NASA OIB and the Characterization of Arctic Sea Ice Experiment, CASIE). The key problem in linking data and model results is the derivation of matching parameters on both the model and observation sides. For terrestrial glaciology, we include an example of a surge process in a glacier system and an example of a dynamic ice sheet model for Greenland. To investigate the surge of the Bering Bagley Glacier System, we use numerical forward modeling experiments and, on the data analysis side, a connectionist approach to analyze crevasse provinces. In the Greenland ice sheet example, we look at the influence of ice surface and bed topography, as derived from remote sensing data, on results from a dynamic ice sheet model.

  9. Virtual Observation System for Earth System Model: An Application to ACME Land Model Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Dali; Yuan, Fengming; Hernandez, Benjamin

    Investigating and evaluating physical-chemical-biological processes within an Earth system model (ESM) can be very challenging due to the complexity of both model design and software implementation. A virtual observation system (VOS) is presented to enable interactive observation of these processes during system simulation. Based on advanced computing technologies, such as compiler-based software analysis, automatic code instrumentation, and high-performance data transport, the VOS provides run-time observation capability, in situ data analytics for Earth system model simulation, and model behavior adjustment opportunities through simulation steering. A VOS for a terrestrial land model simulation within the Accelerated Climate Modeling for Energy model is also presented to demonstrate the implementation details and system innovations.

  10. Virtual Observation System for Earth System Model: An Application to ACME Land Model Simulations

    DOE PAGES

    Wang, Dali; Yuan, Fengming; Hernandez, Benjamin; ...

    2017-01-01

    Investigating and evaluating physical-chemical-biological processes within an Earth system model (ESM) can be very challenging due to the complexity of both model design and software implementation. A virtual observation system (VOS) is presented to enable interactive observation of these processes during system simulation. Based on advanced computing technologies, such as compiler-based software analysis, automatic code instrumentation, and high-performance data transport, the VOS provides run-time observation capability, in situ data analytics for Earth system model simulation, and model behavior adjustment opportunities through simulation steering. A VOS for a terrestrial land model simulation within the Accelerated Climate Modeling for Energy model is also presented to demonstrate the implementation details and system innovations.

  11. Model-Based Control of Observer Bias for the Analysis of Presence-Only Data in Ecology

    PubMed Central

    Warton, David I.; Renner, Ian W.; Ramp, Daniel

    2013-01-01

    Presence-only data, where information is available concerning species presence but not species absence, are subject to bias due to observers being more likely to visit and record sightings at some locations than others (hereafter “observer bias”). In this paper, we describe and evaluate a model-based approach to accounting for observer bias directly – by modelling presence locations as a function of known observer bias variables (such as accessibility variables) in addition to environmental variables, then conditioning on a common level of bias to make predictions of species occurrence free of such observer bias. We implement this idea using point process models with a LASSO penalty, a new presence-only method related to maximum entropy modelling, that implicitly addresses the “pseudo-absence problem” of where to locate pseudo-absences (and how many). The proposed method of bias-correction is evaluated using systematically collected presence/absence data for 62 plant species endemic to the Blue Mountains near Sydney, Australia. It is shown that modelling and controlling for observer bias significantly improves the accuracy of predictions made using presence-only data, and usually improves predictions as compared to pseudo-absence or “inventory” methods of bias correction based on absences from non-target species. Future research will consider the potential for improving the proposed bias-correction approach by estimating the observer bias simultaneously across multiple species. PMID:24260167

  12. A Test of Bayesian Observer Models of Processing in the Eriksen Flanker Task

    ERIC Educational Resources Information Center

    White, Corey N.; Brown, Scott; Ratcliff, Roger

    2012-01-01

    Two Bayesian observer models were recently proposed to account for data from the Eriksen flanker task, in which flanking items interfere with processing of a central target. One model assumes that interference stems from a perceptual bias to process nearby items as if they are compatible, and the other assumes that the interference is due to…

  13. Observations and 3D hydrodynamics-based modeling of decadal-scale shoreline change along the Outer Banks, North Carolina

    USGS Publications Warehouse

    Safak, Ilgar; List, Jeffrey; Warner, John C.; Kumar, Nirnimesh

    2017-01-01

    Long-term decadal-scale shoreline change is an important parameter for quantifying the stability of coastal systems. The decadal-scale coastal change is controlled by processes that occur on short time scales (such as storms) and long-term processes (such as prevailing waves). The ability to predict decadal-scale shoreline change is not well established and the fundamental physical processes controlling this change are not well understood. Here we investigate the processes that create large-scale long-term shoreline change along the Outer Banks of North Carolina, an uninterrupted 60 km stretch of coastline, using both observations and a numerical modeling approach. Shoreline positions for a 24-yr period were derived from aerial photographs of the Outer Banks. Analysis of the shoreline position data showed that, although variable, the shoreline eroded an average of 1.5 m/yr throughout this period. The modeling approach uses a three-dimensional hydrodynamics-based numerical model coupled to a spectral wave model and simulates the full 24-yr time period on a spatial grid running on a short (second scale) time-step to compute the sediment transport patterns. The observations and the model results show similar magnitudes (O(10^5 m^3/yr)) and patterns of alongshore sediment fluxes. Both the observed and the modeled alongshore sediment transport rates have more rapid changes at the north of our section due to continuously curving coastline, and possible effects of alongshore variations in shelf bathymetry. The southern section with a relatively uniform orientation, on the other hand, has less rapid transport rate changes. Alongshore gradients of the modeled sediment fluxes are translated into shoreline change rates that have agreement in some locations but vary in others. Differences between observations and model results are potentially influenced by geologic framework processes not included in the model. Both the observations and the model results show higher rates of

  14. A Petri Net-Based Software Process Model for Developing Process-Oriented Information Systems

    NASA Astrophysics Data System (ADS)

    Li, Yu; Oberweis, Andreas

    Aiming at increasing flexibility, efficiency, effectiveness, and transparency of information processing and resource deployment in organizations to ensure customer satisfaction and high quality of products and services, process-oriented information systems (POIS) represent a promising realization form of computerized business information systems. Due to the complexity of POIS, explicit and specialized software process models are required to guide POIS development. In this chapter we characterize POIS with an architecture framework and present a Petri net-based software process model tailored for POIS development with consideration of organizational roles. As integrated parts of the software process model, we also introduce XML nets, a variant of high-level Petri nets as basic methodology for business processes modeling, and an XML net-based software toolset providing comprehensive functionalities for POIS development.

  15. [GSH fermentation process modeling using entropy-criterion based RBF neural network model].

    PubMed

    Tan, Zuoping; Wang, Shitong; Deng, Zhaohong; Du, Guocheng

    2008-05-01

    The prediction accuracy and generalization of GSH fermentation process models are often degraded by noise in the corresponding experimental data. To avoid this problem, we present a novel RBF neural network modeling approach based on an entropy criterion. It considers the whole distribution structure of the training data set in the parameter learning process, in contrast with traditional MSE-criterion-based parameter learning, and thus effectively avoids weak generalization and over-learning. The proposed approach is then applied to GSH fermentation process modeling. Our results demonstrate that the proposed method has better prediction accuracy, generalization and robustness, such that it offers a potential application merit for GSH fermentation process modeling.
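For contrast with the entropy criterion, a minimal RBF network fitted by ordinary least squares (the MSE-based baseline the paper argues against) might look like the sketch below. The centers, basis width, and toy "fermentation" curve are invented for illustration.

```python
import numpy as np

# Minimal RBF network fitted by least squares (the MSE baseline); the
# centers, width and toy GSH "fermentation" data below are invented.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 40.0, 60)                  # fermentation time, h
y = 1.0 / (1.0 + np.exp(-(t - 20.0) / 4.0))     # sigmoid-like product curve
y = y + rng.normal(0.0, 0.02, t.size)           # measurement noise

centers = np.linspace(0.0, 40.0, 8)             # hidden-layer RBF centers
width = 5.0
Phi = np.exp(-((t[:, None] - centers[None, :]) / width) ** 2)  # hidden outputs
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)     # output-layer weights (MSE fit)

rmse = np.sqrt(np.mean((Phi @ w - y) ** 2))
print(f"training RMSE: {rmse:.3f}")
```

An entropy-criterion variant would replace the squared-error objective with one built from the error distribution, which is exactly where the paper's robustness to noisy fermentation data comes from.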

  16. Solar Ion Processing of Itokawa Grains: Reconciling Model Predictions with Sample Observations

    NASA Technical Reports Server (NTRS)

    Christoffersen, Roy; Keller, L. P.

    2014-01-01

    Analytical TEM observations of Itokawa grains reported to date show complex solar wind ion processing effects in the outer 30-100 nm of pyroxene and olivine grains. The effects include loss of long-range structural order, formation of isolated internal cavities or "bubbles", and other nanoscale compositional/microstructural variations. None of the effects so far described have, however, included complete ion-induced amorphization. To link the array of observed relationships to grain surface exposure times, we have adapted our previous numerical model for progressive solar ion processing effects in lunar regolith grains to the Itokawa samples. The model uses SRIM ion collision damage and implantation calculations within a framework of a constant-deposited-energy model for amorphization. Inputs include experimentally measured amorphization fluences, a π-steradian variable ion incidence geometry required for a rotating asteroid, and a numerical flux-versus-velocity solar wind spectrum.

  17. Agent-Based Modeling of Growth Processes

    ERIC Educational Resources Information Center

    Abraham, Ralph

    2014-01-01

    Growth processes abound in nature, and are frequently the target of modeling exercises in the sciences. In this article we illustrate an agent-based approach to modeling, in the case of a single example from the social sciences: bullying.

  18. Species distribution modeling based on the automated identification of citizen observations.

    PubMed

    Botella, Christophe; Joly, Alexis; Bonnet, Pierre; Monestiez, Pascal; Munoz, François

    2018-02-01

    A species distribution model computed with automatically identified plant observations was developed and evaluated to contribute to future ecological studies. We used deep learning techniques to automatically identify opportunistic plant observations made by citizens through a popular mobile application. We compared species distribution modeling of invasive alien plants based on these data to inventories made by experts. The trained models have a reasonable predictive effectiveness for some species, but they are biased by the massive presence of cultivated specimens. The method proposed here allows for fine-grained and regular monitoring of some species of interest based on opportunistic observations. More in-depth investigation of the typology of the observations and the sampling bias should help improve the approach in the future.

  19. Inverse modeling of Texas NOx emissions using space-based and ground-based NO2 observations

    NASA Astrophysics Data System (ADS)

    Tang, W.; Cohan, D. S.; Lamsal, L. N.; Xiao, X.; Zhou, W.

    2013-11-01

    Inverse modeling of nitrogen oxide (NOx) emissions using satellite-based NO2 observations has become more prevalent in recent years, but has rarely been applied to regulatory modeling at regional scales. In this study, OMI satellite observations of NO2 column densities are used to conduct inverse modeling of NOx emission inventories for two Texas State Implementation Plan (SIP) modeling episodes. Addition of lightning, aircraft, and soil NOx emissions to the regulatory inventory narrowed but did not close the gap between modeled and satellite-observed NO2 over rural regions. Satellite-based top-down emission inventories are created with the regional Comprehensive Air Quality Model with extensions (CAMx) using two techniques: the direct scaling method and discrete Kalman filter (DKF) with decoupled direct method (DDM) sensitivity analysis. The simulations with satellite-inverted inventories are compared to the modeling results using the a priori inventory as well as an inventory created by a ground-level NO2-based DKF inversion. The DKF inversions yield conflicting results: the satellite-based inversion scales up the a priori NOx emissions in most regions by factors of 1.02 to 1.84, leading to 3-55% increase in modeled NO2 column densities and 1-7 ppb increase in ground 8 h ozone concentrations, while the ground-based inversion indicates the a priori NOx emissions should be scaled by factors of 0.34 to 0.57 in each region. However, none of the inversions improve the model performance in simulating aircraft-observed NO2 or ground-level ozone (O3) concentrations.
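The discrete Kalman filter inversion can be illustrated with a scalar sketch that estimates a single emission scaling factor from pseudo-observations; the sensitivity, noise level, and "true" factor below are hypothetical, not the CAMx/DDM values from the study.

```python
# Scalar discrete Kalman filter inversion of an emission scaling factor
# (illustrative sketch; region names, sensitivities and noise levels are
# invented, not taken from the study).
def dkf_scale(obs, model_base, sensitivity, x0=1.0, p0=1.0, r=0.04):
    """Estimate x so that model_base + sensitivity*(x - 1) matches obs."""
    x, p = x0, p0
    for y in obs:
        h = sensitivity                     # d(modeled NO2)/d(scale), cf. DDM
        innov = y - (model_base + h * (x - 1.0))
        s = h * p * h + r                   # innovation variance
        k = p * h / s                       # Kalman gain
        x += k * innov                      # state (scaling factor) update
        p *= (1.0 - k * h)                  # variance update
    return x

# Pseudo-observations generated with a "true" scaling factor of 1.4
true_x, base, sens = 1.4, 10.0, 8.0
obs = [base + sens * (true_x - 1.0) + e for e in (0.1, -0.2, 0.05, 0.0)]
x_hat = dkf_scale(obs, base, sens)
print(round(x_hat, 2))  # converges toward the true factor of ~1.4
```

In the paper the same recursion runs per region with DDM-computed sensitivities of modeled NO2 columns to regional NOx emissions, which is why the satellite-based and ground-based inversions can pull the scaling factors in opposite directions.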

  20. Inverse modeling of Texas NOx emissions using space-based and ground-based NO2 observations

    NASA Astrophysics Data System (ADS)

    Tang, W.; Cohan, D.; Lamsal, L. N.; Xiao, X.; Zhou, W.

    2013-07-01

    Inverse modeling of nitrogen oxide (NOx) emissions using satellite-based NO2 observations has become more prevalent in recent years, but has rarely been applied to regulatory modeling at regional scales. In this study, OMI satellite observations of NO2 column densities are used to conduct inverse modeling of NOx emission inventories for two Texas State Implementation Plan (SIP) modeling episodes. Addition of lightning, aircraft, and soil NOx emissions to the regulatory inventory narrowed but did not close the gap between modeled and satellite-observed NO2 over rural regions. Satellite-based top-down emission inventories are created with the regional Comprehensive Air Quality Model with extensions (CAMx) using two techniques: the direct scaling method and discrete Kalman filter (DKF) with decoupled direct method (DDM) sensitivity analysis. The simulations with satellite-inverted inventories are compared to the modeling results using the a priori inventory as well as an inventory created by a ground-level NO2-based DKF inversion. The DKF inversions yield conflicting results: the satellite-based inversion scales up the a priori NOx emissions in most regions by factors of 1.02 to 1.84, leading to 3-55% increase in modeled NO2 column densities and 1-7 ppb increase in ground 8 h ozone concentrations, while the ground-based inversion indicates the a priori NOx emissions should be scaled by factors of 0.34 to 0.57 in each region. However, none of the inversions improve the model performance in simulating aircraft-observed NO2 or ground-level ozone (O3) concentrations.

  1. A-Train Based Observational Metrics for Model Evaluation in Extratropical Cyclones

    NASA Technical Reports Server (NTRS)

    Naud, Catherine M.; Booth, James F.; Del Genio, Anthony D.; van den Heever, Susan C.; Posselt, Derek J.

    2015-01-01

    Extratropical cyclones contribute most of the precipitation in the midlatitudes, i.e. up to 70% during winter in the northern hemisphere, and can generate flooding, extreme winds, blizzards and have large socio-economic impacts. As such, it is important that general circulation models (GCMs) accurately represent these systems so their evolution in a warming climate can be understood. However, there are still uncertainties on whether warming will increase their frequency of occurrence, their intensity and how much rain or snow they bring. Part of the issue is that models have trouble representing their strength, but models also have biases in the amount of clouds and precipitation they produce. This is caused by potential issues in various aspects of the models: convection, boundary layer, and cloud scheme to only mention a few. In order to pinpoint which aspects of the models need improvement for a better representation of extratropical cyclone precipitation and cloudiness, we will present A-Train based observational metrics: cyclone-centered, warm and cold frontal composites of cloud amount and type, precipitation rate and frequency of occurrence. Using the same method to extract similar fields from the model, we will present an evaluation of the GISS-ModelE2 and the IPSL-LMDZ-5B models, based on their AR5 and more recent versions. The AR5 version of the GISS model underestimates cloud cover in extratropical cyclones while the IPSL AR5 version overestimates it. In addition, we will show how the observed CloudSat-CALIPSO cloud vertical distribution across cold fronts changes with moisture amount and cyclone strength, and test if the two models successfully represent these changes. We will also show how CloudSat-CALIPSO derived cloud type (i.e. convective vs. stratiform) evolves across warm fronts as cyclones age, and again how this is represented in the models. Our third process-based analysis concerns cumulus clouds in the post-cold frontal region and how their

  2. QRS complex detection based on continuous density hidden Markov models using univariate observations

    NASA Astrophysics Data System (ADS)

    Sotelo, S.; Arenas, W.; Altuve, M.

    2018-04-01

    In the electrocardiogram (ECG), the detection of QRS complexes is a fundamental step in the ECG signal processing chain, since it allows the determination of other characteristic waves of the ECG and provides information about heart rate variability. In this work, an automatic QRS complex detector based on continuous-density hidden Markov models (HMMs) is proposed. HMMs were trained using univariate observation sequences taken either from QRS complexes or their derivatives. The detection approach is based on the comparison of the observation sequence's log-likelihood with a fixed threshold. A sliding window was used to obtain the observation sequence to be evaluated by the model. The threshold was optimized by receiver operating characteristic curves. Sensitivity (Sen), specificity (Spc) and F1 score were used to evaluate the detection performance. The approach was validated using ECG recordings from the MIT-BIH Arrhythmia database. A 6-fold cross-validation shows that the best detection performance was achieved with a 2-state HMM trained on QRS complex sequences (Sen = 0.668, Spc = 0.360 and F1 = 0.309). We concluded that these univariate sequences provide enough information to characterize the QRS complex dynamics with HMMs. Future work is directed at the use of multivariate observations to increase the detection performance.
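A toy version of the log-likelihood detection idea, using a hand-built two-state Gaussian HMM and the scaled forward algorithm. All parameters and the synthetic "ECG" below are invented for illustration; they are not the models trained on MIT-BIH in the paper.

```python
import math

# Tiny two-state left-to-right continuous-density HMM scored with the scaled
# forward algorithm; parameters and the toy signal are invented.
A  = [[0.5, 0.5], [0.0, 1.0]]        # baseline state may jump to the peak state
pi = [1.0, 0.0]                      # sequences start at baseline
mu, sigma = [0.0, 1.0], [0.3, 0.3]   # Gaussian emission per state

def gauss(x, m, s):
    return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

def loglik(seq):
    """Log-likelihood of a univariate observation sequence under the HMM."""
    alpha = [pi[j] * gauss(seq[0], mu[j], sigma[j]) for j in range(2)]
    c = sum(alpha)
    ll = math.log(c)
    alpha = [a / c for a in alpha]
    for x in seq[1:]:
        alpha = [gauss(x, mu[j], sigma[j]) *
                 sum(alpha[i] * A[i][j] for i in range(2)) for j in range(2)]
        c = sum(alpha)
        ll += math.log(c)
        alpha = [a / c for a in alpha]
    return ll

# Slide a window over a toy signal; windows scoring above the threshold are
# flagged as QRS-like (a baseline segment followed by an upstroke).
signal = [0.0] * 5 + [0.9, 1.1, 1.0] + [0.0] * 5
win, thr = 3, -0.2
hits = [t for t in range(len(signal) - win + 1)
        if loglik(signal[t:t + win]) > thr]
print(hits)  # only the window capturing the baseline-to-peak transition fires
```

In the paper the threshold is tuned via ROC curves rather than fixed by hand, but the mechanics are the same: score each windowed observation sequence under the trained model and compare against the threshold.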

  3. Bridging process-based and empirical approaches to modeling tree growth

    Treesearch

    Harry T. Valentine; Annikki Makela

    2005-01-01

    The gulf between process-based and empirical approaches to modeling tree growth may be bridged, in part, by the use of a common model. To this end, we have formulated a process-based model of tree growth that can be fitted and applied in an empirical mode. The growth model is grounded in pipe model theory and an optimal control model of crown development. Together, the...

  4. PROcess Based Diagnostics PROBE

    NASA Technical Reports Server (NTRS)

    Clune, T.; Schmidt, G.; Kuo, K.; Bauer, M.; Oloso, H.

    2013-01-01

    Many of the aspects of the climate system that are of the greatest interest (e.g., the sensitivity of the system to external forcings) are emergent properties that arise via the complex interplay between disparate processes. This is also true for climate models: most diagnostics are not a function of an isolated portion of source code, but rather are affected by multiple components and procedures. Thus any model-observation mismatch is hard to attribute to any specific piece of code or imperfection in a specific model assumption. An alternative approach is to identify diagnostics that are more closely tied to specific processes -- implying that if a mismatch is found, it should be much easier to identify and address specific algorithmic choices that will improve the simulation. However, this approach requires looking at model output and observational data in a more sophisticated way than the more traditional production of monthly or annual mean quantities. The data must instead be filtered in time and space for examples of the specific process being targeted. We are developing a data analysis environment called PROcess-Based Explorer (PROBE) that seeks to enable efficient and systematic computation of process-based diagnostics on very large sets of data. In this environment, investigators can define arbitrarily complex filters and then seamlessly perform computations in parallel on the filtered output from their model. The same analysis can be performed on additional related data sets (e.g., reanalyses), thereby enabling routine comparisons between model and observational data. PROBE also incorporates workflow technology to automatically update computed diagnostics for subsequent executions of a model. In this presentation, we will discuss the design and current status of PROBE as well as share results from some preliminary use cases.

  5. Building v/s Exploring Models: Comparing Learning of Evolutionary Processes through Agent-based Modeling

    NASA Astrophysics Data System (ADS)

    Wagh, Aditi

Two strands of work motivate the three studies in this dissertation. First, evolutionary change can be viewed as a computational complex system in which a small set of rules operating at the individual level results in different population-level outcomes under different conditions, and extensive research has documented students' difficulties with learning about evolutionary change (Rosengren et al., 2012), particularly in terms of levels slippage (Wilensky & Resnick, 1999). Second, though building and using computational models is becoming increasingly common in K-12 science education, we know little about how these two modalities compare. This dissertation adopts agent-based modeling as a representational system to compare these modalities in the conceptual context of micro-evolutionary processes. Drawing on interviews, Study 1 examines middle-school students' productive ways of reasoning about micro-evolutionary processes and finds that the specific framing of traits plays a key role in whether slippage explanations are cued. Study 2, which was conducted in 2 schools with about 150 students, forms the crux of the dissertation. It compares learning processes and outcomes when students build their own models or explore a pre-built model. Analysis of Camtasia videos of student pairs reveals that builders' and explorers' ways of accessing rules and sense-making of observed trends are of a different character. Builders notice rules through the available blocks-based primitives, often bypassing their enactment, while explorers attend to rules primarily through the enactment. Moreover, builders' sense-making of observed trends is more rule-driven while explorers' is more enactment-driven. Pre- and post-tests reveal that builders manifest a greater facility with accessing rules, providing explanations manifesting targeted assembly. Explorers use rules to construct explanations manifesting non-targeted assembly. Interviews reveal varying degrees of shifts away from slippage in both

  6. Inverse Modeling of Texas NOx Emissions Using Space-Based and Ground-Based NO2 Observations

    NASA Technical Reports Server (NTRS)

    Tang, Wei; Cohan, D.; Lamsal, L. N.; Xiao, X.; Zhou, W.

    2013-01-01

Inverse modeling of nitrogen oxide (NOx) emissions using satellite-based NO2 observations has become more prevalent in recent years, but has rarely been applied to regulatory modeling at regional scales. In this study, OMI satellite observations of NO2 column densities are used to conduct inverse modeling of NOx emission inventories for two Texas State Implementation Plan (SIP) modeling episodes. Addition of lightning, aircraft, and soil NOx emissions to the regulatory inventory narrowed but did not close the gap between modeled and satellite-observed NO2 over rural regions. Satellite-based top-down emission inventories are created with the regional Comprehensive Air Quality Model with extensions (CAMx) using two techniques: the direct scaling method and the discrete Kalman filter (DKF) with Decoupled Direct Method (DDM) sensitivity analysis. The simulations with satellite-inverted inventories are compared to the modeling results using the a priori inventory as well as an inventory created by a ground-level NO2-based DKF inversion. The DKF inversions yield conflicting results: the satellite-based inversion scales up the a priori NOx emissions in most regions by factors of 1.02 to 1.84, leading to a 3-55% increase in modeled NO2 column densities and a 1-7 ppb increase in ground-level 8-h ozone concentrations, while the ground-based inversion indicates the a priori NOx emissions should be scaled by factors of 0.34 to 0.57 in each region. However, none of the inversions improve the model performance in simulating aircraft-observed NO2 or ground-level ozone (O3) concentrations.
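
    The direct scaling method mentioned above can be illustrated with a toy linear forward model; the numbers and the linear surrogate for CAMx are invented for this sketch:

    ```python
    import numpy as np

    # Direct scaling inversion sketch: regional NOx emissions are rescaled
    # by the ratio of observed to modeled NO2 column densities, assuming a
    # roughly linear column-emission response. The forward model here is a
    # made-up linear surrogate, not CAMx.
    true_scale = np.array([1.4, 0.8, 1.1])       # unknown "truth" per region
    prior_emis = np.array([100.0, 250.0, 60.0])  # a priori NOx emissions
    response = 0.02                              # column per unit emission (invented)

    obs_column = response * true_scale * prior_emis  # synthetic satellite columns

    emis = prior_emis.copy()
    for _ in range(5):                               # iterate to convergence
        model_column = response * emis
        emis *= obs_column / model_column

    scale_factors = emis / prior_emis
    print("inverted scale factors:", np.round(scale_factors, 3))
    ```

    With a linear response the iteration converges immediately; the real inversion differs because the chemistry-transport response of columns to emissions is nonlinear, which is what the DKF/DDM machinery handles.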

  7. Testing the Model-Observer Similarity Hypothesis with Text-Based Worked Examples

    ERIC Educational Resources Information Center

    Hoogerheide, Vincent; Loyens, Sofie M. M.; Jadi, Fedora; Vrins, Anna; van Gog, Tamara

    2017-01-01

    Example-based learning is a very effective and efficient instructional strategy for novices. It can be implemented using text-based worked examples that provide a written demonstration of how to perform a task, or (video) modelling examples in which an instructor (the "model") provides a demonstration. The model-observer similarity (MOS)…

  8. A Regional Climate Model Evaluation System based on Satellite and other Observations

    NASA Astrophysics Data System (ADS)

    Lean, P.; Kim, J.; Waliser, D. E.; Hall, A. D.; Mattmann, C. A.; Granger, S. L.; Case, K.; Goodale, C.; Hart, A.; Zimdars, P.; Guan, B.; Molotch, N. P.; Kaki, S.

    2010-12-01

Regional climate models are a fundamental tool needed for downscaling global climate simulations and projections, such as those contributing to the Coupled Model Intercomparison Projects (CMIPs) that form the basis of the IPCC Assessment Reports. The regional modeling process provides the means to accommodate higher resolution and a greater complexity of Earth System processes. Evaluation of both the global and regional climate models against observations is essential to identify model weaknesses and to direct future model development efforts focused on reducing the uncertainty associated with climate projections. However, the lack of reliable observational data and the lack of formal tools are among the serious limitations to addressing these objectives. Recent satellite observations are particularly useful as they provide a wealth of information on many different aspects of the climate system, but due to their large volume and the difficulties associated with accessing and using the data, these datasets have been generally underutilized in model evaluation studies. Recognizing this problem, NASA JPL / UCLA is developing a model evaluation system to help make satellite observations, in conjunction with in-situ, assimilated, and reanalysis datasets, more readily accessible to the modeling community. The system includes a central database to store multiple datasets in a common format and codes for calculating predefined statistical metrics to assess model performance. This allows the time taken to compare model simulations with satellite observations to be reduced from weeks to days. Early results from the use of this new model evaluation system for evaluating regional climate simulations over the California/western US region will be presented.
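
    The kind of predefined statistical metrics such a system computes, once model output and observations are on a common grid, can be sketched as follows (the data are synthetic; the system's actual metric codes are not shown in the abstract):

    ```python
    import numpy as np

    # Standard model-evaluation metrics against an observational series:
    # bias, RMSE, and correlation. A warm-biased synthetic "model" is
    # compared against a synthetic seasonal-cycle "observation".
    rng = np.random.default_rng(1)
    obs = 280.0 + 5.0 * np.sin(np.linspace(0, 2 * np.pi, 48))  # e.g. monthly T2m, K
    model = obs + 1.5 + rng.normal(0.0, 0.5, obs.size)         # warm-biased model

    bias = np.mean(model - obs)
    rmse = np.sqrt(np.mean((model - obs) ** 2))
    corr = np.corrcoef(model, obs)[0, 1]

    print(f"bias={bias:.2f} K  rmse={rmse:.2f} K  corr={corr:.3f}")
    ```

    Note that RMSE is always at least the absolute bias (RMSE squared equals bias squared plus error variance), which is why both are reported.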

  9. Process-based Modeling of Ammonia Emission from Beef Cattle Feedyards with the Integrated Farm Systems Model.

    PubMed

    Waldrip, Heidi M; Rotz, C Alan; Hafner, Sasha D; Todd, Richard W; Cole, N Andy

    2014-07-01

Ammonia (NH3) volatilization from manure in beef cattle feedyards results in loss of agronomically important nitrogen (N) and potentially leads to overfertilization and acidification of aquatic and terrestrial ecosystems. In addition, NH3 is involved in the formation of atmospheric fine particulate matter (PM2.5), which can affect human health. Process-based models have been developed to estimate NH3 emissions from various livestock production systems; however, little work has been conducted to assess their accuracy for large, open-lot beef cattle feedyards. This work describes the extension of an existing process-based model, the Integrated Farm Systems Model (IFSM), to include simulation of N dynamics in this type of system. To evaluate the model, IFSM-simulated daily per capita NH3 emission rates were compared with emissions data collected from two commercial feedyards in the Texas High Plains from 2007 to 2009. Model predictions were in good agreement with observations and were sensitive to variations in air temperature and dietary crude protein concentration. Predicted mean daily NH3 emission rates for the two feedyards had 71 to 81% agreement with observations. In addition, IFSM estimates of annual feedyard emissions were within 11 to 24% of observations, whereas a constant emission factor currently in use by the USEPA underestimated feedyard emissions by as much as 79%. The results from this study indicate that IFSM can quantify average feedyard NH3 emissions, assist with emissions reporting, provide accurate information for legislators and policymakers, investigate methods to mitigate NH3 losses, and evaluate the effects of specific management practices on farm nutrient balances. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.

  10. Stochastic simulation by image quilting of process-based geological models

    NASA Astrophysics Data System (ADS)

    Hoffimann, Júlio; Scheidt, Céline; Barfod, Adrian; Caers, Jef

    2017-09-01

Process-based modeling offers a way to represent realistic geological heterogeneity in subsurface models. The main limitation lies in conditioning such models to data. Multiple-point geostatistics can use these process-based models as training images and address the data conditioning problem. In this work, we further develop image quilting as a method for 3D stochastic simulation capable of mimicking the realism of process-based geological models with minimal modeling effort (i.e. parameter tuning) and at the same time condition them to a variety of data. In particular, we develop a new probabilistic data aggregation method for image quilting that bypasses traditional ad-hoc weighting of auxiliary variables. In addition, we propose a novel criterion for template design in image quilting that generalizes the entropy plot for continuous training images. The criterion is based on the new concept of voxel reuse, a stochastic and quilting-aware function of the training image. We compare our proposed method with other established simulation methods on a set of process-based training images of varying complexity, including a real-case example of stochastic simulation of the buried-valley groundwater system in Denmark.
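
    Image quilting itself is easiest to see in one dimension. The sketch below stitches patches from a synthetic "training image" by matching overlaps; the patch and overlap sizes are illustrative choices, not those of the paper:

    ```python
    import numpy as np

    # 1-D caricature of image quilting: patches from a training series are
    # sequentially stitched so that each candidate patch's leading overlap
    # best matches the trailing overlap of the simulation so far.
    rng = np.random.default_rng(2)
    train = np.sin(np.linspace(0, 12 * np.pi, 600)) + 0.05 * rng.normal(size=600)

    patch, overlap = 40, 10
    starts = np.arange(0, train.size - patch)
    patches = np.stack([train[s:s + patch] for s in starts])

    sim = list(patches[rng.integers(len(patches))])  # seed with a random patch
    for _ in range(8):
        tail = np.array(sim[-overlap:])
        # misfit of each candidate patch's head against the current tail
        cost = ((patches[:, :overlap] - tail) ** 2).sum(axis=1)
        best = patches[np.argmin(cost)]
        sim.extend(best[overlap:])                   # quilt past the overlap

    sim = np.array(sim)
    print("simulated length:", sim.size)
    ```

    The 3D method in the paper adds the template-design criterion and probabilistic data aggregation on top of this basic overlap-matching idea.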

  12. Quantifying Atmospheric Moist Processes from Earth Observations. Really?

    NASA Astrophysics Data System (ADS)

    Stephens, G. L.

    2015-12-01

The amount of water in the Earth's atmosphere is tiny compared to all other sources of water on our planet, fresh or otherwise. However, this tiny amount of water is fundamental to most aspects of human life. The water that cycles from the Earth's surface, condenses into clouds in the atmosphere, and returns as precipitation is not only nature's way of delivering fresh water to land-locked human societies but also exerts a fundamental control on our climate system, producing the most important feedbacks in the system. The representation of these processes in Earth system models contains many errors that produce well-known biases in the hydrological cycle. Surprisingly, the parameterizations of these important processes are not well validated with observations. Part of the reason for this situation stems from the fact that process evaluation is difficult to achieve on the global scale, since it has commonly been assumed that the static observations available from snap-shots of individual parameters contain little information on processes. One of the successes of the A-Train has been the development of multi-parameter analysis based on the multi-sensor data produced by the satellite constellation. This has led to new insights on how water cycles through the Earth's atmosphere. Examples of these insights will be highlighted. It will be described how the rain formation process has been observed and how this has been used to constrain this process in models, with a huge impact. How these observations are beginning to reveal insights on deep convection, and examples of the use of these observations applied to models, will also be highlighted, as will the effects of aerosols on clouds and radiation.

  13. A Regional Climate Model Evaluation System based on contemporary Satellite and other Observations for Assessing Regional Climate Model Fidelity

    NASA Astrophysics Data System (ADS)

    Waliser, D. E.; Kim, J.; Mattman, C.; Goodale, C.; Hart, A.; Zimdars, P.; Lean, P.

    2011-12-01

Evaluation of climate models against observations is an essential part of assessing the impact of climate variations and change on regionally important sectors and improving climate models. Regional climate models (RCMs) are of particular concern: RCMs provide the fine-scale climate information needed by the assessment community by downscaling global climate model projections, such as those contributing to the Coupled Model Intercomparison Project (CMIP), that form one aspect of the quantitative basis of the IPCC Assessment Reports. The lack of reliable fine-resolution observational data and formal tools and metrics has represented a challenge in evaluating RCMs. Recent satellite observations are particularly useful as they provide a wealth of information and constraints on many different processes within the climate system. Due to their large volume and the difficulties associated with accessing and using contemporary observations, however, these datasets have been generally underutilized in model evaluation studies. Recognizing this problem, NASA JPL and UCLA have developed the Regional Climate Model Evaluation System (RCMES) to help make satellite observations, in conjunction with in-situ and reanalysis datasets, more readily accessible to the regional modeling community. The system includes a central database (Regional Climate Model Evaluation Database: RCMED) to store multiple datasets in a common format and codes for calculating and plotting statistical metrics to assess model performance (Regional Climate Model Evaluation Tool: RCMET). This allows the time taken to compare model data with satellite observations to be reduced from weeks to days. RCMES is a component of the recent ExArch project, an international effort to facilitate the archiving of and access to massive amounts of data using cloud-based infrastructure, in this case as applied to the study of climate and climate change. This presentation will describe RCMES and demonstrate its utility using examples.

  14. Observationally-based Metrics of Ocean Carbon and Biogeochemical Variables are Essential for Evaluating Earth System Model Projections

    NASA Astrophysics Data System (ADS)

    Russell, J. L.; Sarmiento, J. L.

    2017-12-01

    The Southern Ocean is central to the climate's response to increasing levels of atmospheric greenhouse gases as it ventilates a large fraction of the global ocean volume. Global coupled climate models and earth system models, however, vary widely in their simulations of the Southern Ocean and its role in, and response to, the ongoing anthropogenic forcing. Due to its complex water-mass structure and dynamics, Southern Ocean carbon and heat uptake depend on a combination of winds, eddies, mixing, buoyancy fluxes and topography. Understanding how the ocean carries heat and carbon into its interior and how the observed wind changes are affecting this uptake is essential to accurately projecting transient climate sensitivity. Observationally-based metrics are critical for discerning processes and mechanisms, and for validating and comparing climate models. As the community shifts toward Earth system models with explicit carbon simulations, more direct observations of important biogeochemical parameters, like those obtained from the biogeochemically-sensored floats that are part of the Southern Ocean Carbon and Climate Observations and Modeling project, are essential. One goal of future observing systems should be to create observationally-based benchmarks that will lead to reducing uncertainties in climate projections, and especially uncertainties related to oceanic heat and carbon uptake.

  15. Local Scale Radiobrightness Modeling During the Intensive Observing Period-4 of the Cold Land Processes Experiment-1

    NASA Astrophysics Data System (ADS)

    Kim, E.; Tedesco, M.; de Roo, R.; England, A. W.; Gu, H.; Pham, H.; Boprie, D.; Graf, T.; Koike, T.; Armstrong, R.; Brodzik, M.; Hardy, J.; Cline, D.

    2004-12-01

The NASA Cold Land Processes Field Experiment (CLPX-1) was designed to provide microwave remote sensing observations and ground truth for studies of snow and frozen ground remote sensing, particularly issues related to scaling. CLPX-1 was conducted in 2002 and 2003 in Colorado, USA. One of the goals of the experiment was to test the capabilities of microwave emission models at different scales. Initial forward model validation work has concentrated on the Local-Scale Observation Site (LSOS), a 0.8 ha study site consisting of open meadows separated by trees where the most detailed measurements were made of snow depth and temperature, density, and grain size profiles. Results obtained for the 3rd Intensive Observing Period (IOP3; February 2003, dry snow) suggest that a model based on Dense Medium Radiative Transfer (DMRT) theory is able to model the recorded brightness temperatures using snow parameters derived from field measurements. This paper focuses on the ability of forward DMRT modelling, combined with snowpack measurements, to reproduce the radiobrightness signatures observed by the University of Michigan's Truck-Mounted Radiometer System (TMRS) at 19 and 37 GHz during the 4th IOP (IOP4) in March 2003. Unlike in IOP3, conditions during IOP4 include both wet and dry periods, providing a valuable test of DMRT model performance. In addition, a comparison will be made for the one day of coincident observations by the University of Tokyo's Ground-Based Microwave Radiometer-7 (GBMR-7) and the TMRS. The plot-scale study in this paper establishes a baseline of DMRT performance for later studies at successively larger scales. These scaling studies will help guide the choice of future snow retrieval algorithms and the design of future Cold Lands observing systems.

  16. Modeling treatment of ischemic heart disease with partially observable Markov decision processes.

    PubMed

    Hauskrecht, M; Fraser, H

    1998-01-01

Diagnosis of a disease and its treatment are not separate, one-shot activities. Instead, they are very often dependent and interleaved over time, mostly due to uncertainty about the underlying disease, uncertainty associated with the response of a patient to the treatment, and the varying costs of different diagnostic (investigative) and treatment procedures. The framework of partially observable Markov decision processes (POMDPs), developed and used in the operations research, control theory, and artificial intelligence communities, is particularly suitable for modeling such a complex decision process. In the paper, we show how the POMDP framework can be used to model and solve the problem of the management of patients with ischemic heart disease, and point out the modeling advantages of the framework over standard decision formalisms.
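
    The core POMDP machinery the abstract refers to is the belief-state update: the decision maker never observes the disease state directly, only noisy test results. A minimal sketch, with invented two-state transition and observation probabilities (not the paper's ischemic heart disease model):

    ```python
    import numpy as np

    # states: 0 = no disease, 1 = disease
    T = np.array([[0.95, 0.05],   # P(next state | current state) under "wait"
                  [0.10, 0.90]])
    O = np.array([[0.80, 0.20],   # P(test result | state): rows = state,
                  [0.30, 0.70]])  # cols = (negative, positive)

    def belief_update(b, obs):
        """Bayes filter: predict through T, then weight by P(obs | state)."""
        predicted = b @ T
        weighted = predicted * O[:, obs]
        return weighted / weighted.sum()

    b = np.array([0.9, 0.1])      # prior belief: probably healthy
    b = belief_update(b, obs=1)   # a positive test shifts the belief
    print("belief after positive test:", np.round(b, 3))
    ```

    A POMDP policy then maps the belief vector, rather than the unknown true state, to a diagnostic or treatment action.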

  17. Model-based software process improvement

    NASA Technical Reports Server (NTRS)

    Zettervall, Brenda T.

    1994-01-01

    The activities of a field test site for the Software Engineering Institute's software process definition project are discussed. Products tested included the improvement model itself, descriptive modeling techniques, the CMM level 2 framework document, and the use of process definition guidelines and templates. The software process improvement model represents a five stage cyclic approach for organizational process improvement. The cycles consist of the initiating, diagnosing, establishing, acting, and leveraging phases.

  18. Double-observer line transect surveys with Markov-modulated Poisson process models for animal availability.

    PubMed

    Borchers, D L; Langrock, R

    2015-12-01

    We develop maximum likelihood methods for line transect surveys in which animals go undetected at distance zero, either because they are stochastically unavailable while within view or because they are missed when they are available. These incorporate a Markov-modulated Poisson process model for animal availability, allowing more clustered availability events than is possible with Poisson availability models. They include a mark-recapture component arising from the independent-observer survey, leading to more accurate estimation of detection probability given availability. We develop models for situations in which (a) multiple detections of the same individual are possible and (b) some or all of the availability process parameters are estimated from the line transect survey itself, rather than from independent data. We investigate estimator performance by simulation, and compare the multiple-detection estimators with estimators that use only initial detections of individuals, and with a single-observer estimator. Simultaneous estimation of detection function parameters and availability model parameters is shown to be feasible from the line transect survey alone with multiple detections and double-observer data but not with single-observer data. Recording multiple detections of individuals improves estimator precision substantially when estimating the availability model parameters from survey data, and we recommend that these data be gathered. We apply the methods to estimate detection probability from a double-observer survey of North Atlantic minke whales, and find that double-observer data greatly improve estimator precision here too. © 2015 The Authors Biometrics published by Wiley Periodicals, Inc. on behalf of International Biometric Society.
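
    A Markov-modulated Poisson process is straightforward to simulate, and a simulation also shows the clustering property mentioned above. All rates below are illustrative, not estimates from the whale survey:

    ```python
    import numpy as np

    # Two-state MMPP for "availability" events: a hidden Markov chain
    # switches between a high-rate and a low-rate state, producing more
    # clustered events than a plain Poisson process.
    rng = np.random.default_rng(3)
    switch = np.array([0.2, 0.5])   # rate of leaving state 0 and state 1
    lam = np.array([3.0, 0.2])      # event (surfacing) rate in each state

    t, t_end, state = 0.0, 500.0, 0
    events = []
    while t < t_end:
        sojourn = rng.exponential(1.0 / switch[state])  # time spent in state
        # Poisson number of events during the sojourn, spread uniformly
        n = rng.poisson(lam[state] * sojourn)
        events.extend(t + rng.uniform(0.0, sojourn, n))
        t += sojourn
        state = 1 - state

    events = np.sort(np.array(events))
    gaps = np.diff(events)
    # clustering: coefficient of variation of gaps exceeds 1 (the Poisson value)
    cv = gaps.std() / gaps.mean()
    print(f"{events.size} events, gap CV = {cv:.2f}")
    ```

    A homogeneous Poisson process has gap CV equal to 1; the excess here is the extra clustering the MMPP availability model is designed to capture.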

  19. Development of KIAPS Observation Processing Package for Data Assimilation System

    NASA Astrophysics Data System (ADS)

    Kang, Jeon-Ho; Chun, Hyoung-Wook; Lee, Sihye; Han, Hyun-Jun; Ha, Su-Jin

    2015-04-01

The Korea Institute of Atmospheric Prediction Systems (KIAPS) was founded in 2011 by the Korea Meteorological Administration (KMA) to develop Korea's own global Numerical Weather Prediction (NWP) system as a nine-year (2011-2019) project. The data assimilation team at KIAPS has been developing the observation processing system (KIAPS Package for Observation Processing: KPOP) to provide optimal observations to the data assimilation system for the KIAPS global model (KIAPS Integrated Model - Spectral Element method based on HOMME: KIM-SH). Currently, KPOP is capable of processing satellite radiance data (AMSU-A, IASI), GPS Radio Occultation (GPS-RO), AIRCRAFT (AMDAR, AIREP, etc.), and synoptic observations (SONDE and SURFACE). KPOP adopted Radiative Transfer for TOVS version 10 (RTTOV_v10) to compute the brightness temperature (TB) for each channel at the top of the atmosphere (TOA), and the Radio Occultation Processing Package (ROPP) 1-dimensional forward module to compute the bending angle (BA) at each tangent point. The observation data are obtained from the KMA in BUFR format and converted to ODB, the format used for operational data assimilation and monitoring at the KMA. The Unified Model (UM), Community Atmosphere - Spectral Element (CAM-SE), and KIM-SH model outputs are used for the bias correction (BC) and quality control (QC) of the observations. KPOP provides radiance and RO data for the Local Ensemble Transform Kalman Filter (LETKF) and also provides SONDE, SURFACE, and AIRCRAFT data for Three-Dimensional Variational Assimilation (3DVAR). We expect that every observation type processed in KPOP will soon be usable with both data assimilation methods. Preliminary results for each observation type will be introduced along with the current development status of KPOP.
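
    Observation processing systems like the one described typically include a background (first-guess) quality-control check. Below is a toy version with invented error statistics and thresholds, not KPOP's actual criteria:

    ```python
    import numpy as np

    # First-guess check: reject observations whose innovation (observation
    # minus model equivalent) exceeds a multiple of the combined expected
    # observation and background error. All numbers are synthetic.
    rng = np.random.default_rng(4)
    sigma_o, sigma_b = 1.0, 0.5
    truth = 250.0 + rng.normal(0.0, 3.0, 200)           # unknown true TB, K
    background = truth + rng.normal(0.0, sigma_b, 200)  # model first guess
    obs = truth + rng.normal(0.0, sigma_o, 200)         # observations
    obs[::25] += rng.choice([-8.0, 8.0], size=8)        # inject gross errors

    innovation = obs - background
    threshold = 4.0 * np.sqrt(sigma_o**2 + sigma_b**2)  # 4-sigma check
    accepted = np.abs(innovation) < threshold

    print(f"rejected {np.count_nonzero(~accepted)} of {obs.size} observations")
    ```

    The check catches the injected gross errors while passing observations whose departures are consistent with the assumed error statistics.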

  20. Symbolic Processing Combined with Model-Based Reasoning

    NASA Technical Reports Server (NTRS)

    James, Mark

    2009-01-01

    A computer program for the detection of present and prediction of future discrete states of a complex, real-time engineering system utilizes a combination of symbolic processing and numerical model-based reasoning. One of the biggest weaknesses of a purely symbolic approach is that it enables prediction of only future discrete states while missing all unmodeled states or leading to incorrect identification of an unmodeled state as a modeled one. A purely numerical approach is based on a combination of statistical methods and mathematical models of the applicable physics and necessitates development of a complete model to the level of fidelity required for prediction. In addition, a purely numerical approach does not afford the ability to qualify its results without some form of symbolic processing. The present software implements numerical algorithms to detect unmodeled events and symbolic algorithms to predict expected behavior, correlate the expected behavior with the unmodeled events, and interpret the results in order to predict future discrete states. The approach embodied in this software differs from that of the BEAM methodology (aspects of which have been discussed in several prior NASA Tech Briefs articles), which provides for prediction of future measurements in the continuous-data domain.

  1. Agent-Based Phytoplankton Models of Cellular and Population Processes: Fostering Individual-Based Learning in Undergraduate Research

    NASA Astrophysics Data System (ADS)

    Berges, J. A.; Raphael, T.; Rafa Todd, C. S.; Bate, T. C.; Hellweger, F. L.

    2016-02-01

Engaging undergraduate students in research projects that require expertise in multiple disciplines (e.g. cell biology, population ecology, and mathematical modeling) can be challenging because they have often not developed the expertise that allows them to participate at a satisfying level. Use of agent-based modeling can allow exploration of concepts at more intuitive levels, and encourage experimentation that emphasizes processes over computational skills. Over the past several years, we have involved undergraduate students in projects examining both ecological and cell biological aspects of aquatic microbial biology, using the freely-downloadable, agent-based modeling environment NetLogo (https://ccl.northwestern.edu/netlogo/). In NetLogo, the actions of large numbers of individuals can be simulated, leading to complex systems with emergent behavior. The interface features appealing graphics, monitors, and control structures. In one example, a group of sophomores in a BioMathematics program developed an agent-based model of phytoplankton population dynamics in a pond ecosystem, motivated by observed macroscopic changes in cell numbers (due to growth and death), and driven by responses to irradiance, temperature, and a limiting nutrient. In a second example, junior and senior undergraduates conducting Independent Studies created a model of the intracellular processes governing stress and cell death for individual phytoplankton cells (based on parameters derived from experiments using single-cell culturing and flow cytometry), and this model was then embedded in the agents of the pond ecosystem model. In our experience, students with a range of mathematical abilities learned to code quickly and could use the software with varying degrees of sophistication, for example, the creation of spatially explicit two- and three-dimensional models. Skills developed quickly and transferred readily to other platforms (e.g. Matlab).
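
    NetLogo code would not run here, so the following is a minimal Python analogue of the kind of pond model described: each agent divides or dies stochastically, limited by light and a shared nutrient. All rates are made up for illustration:

    ```python
    import numpy as np

    # Agent-style phytoplankton sketch: division probability depends on a
    # diel light cycle and a shared, depletable nutrient pool; mortality is
    # a constant background rate. Rates and pool sizes are invented.
    rng = np.random.default_rng(5)
    n = 50                                    # initial number of cell agents
    nutrient = 500.0

    for step in range(100):
        light = 0.5 + 0.5 * np.sin(2 * np.pi * step / 24)       # diel cycle
        p_divide = 0.1 * light * nutrient / (nutrient + 100.0)  # nutrient-limited
        births = rng.binomial(n, p_divide)                      # agents that divide
        deaths = rng.binomial(n, 0.02)                          # background mortality
        n += births - deaths
        nutrient = max(nutrient - 0.5 * births, 0.0)            # division costs nutrient

    print(f"final population: {n}, nutrient left: {nutrient:.1f}")
    ```

    The emergent behavior (growth that slows as the nutrient pool drains) arises from the individual-level rules, which is the pedagogical point of the agent-based approach.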

  2. Crustal block motion model and interplate coupling along Ecuador-Colombia trench based on GNSS observation network

    NASA Astrophysics Data System (ADS)

    Ito, T.; Mora-Páez, H.; Peláez-Gaviria, J. R.; Kimura, H.; Sagiya, T.

    2017-12-01

Introduction: The Ecuador-Colombia trench is located at the boundary between the South American, Nazca, and Caribbean plates. The region is tectonically complex, involving subduction of the Caribbean and Nazca plates and collision between Panama and the northern Andes. Previous large earthquakes, such as the 1906 (M8.8) and 1979 (M8.2) events, occurred along the subduction boundary of the Nazca plate, and earthquakes have also occurred inland. It is therefore important to evaluate earthquake potential to prepare for the damage a large earthquake could cause in the near future. GNSS observation: In the last decade, a GNSS observation network called GEORED, operated by the Servicio Geológico Colombiano for research on crustal deformation, was established in Colombia. GEORED consisted of 60 continuous GNSS sites as of 2017 (Mora et al., 2017), most with a sampling interval of 30 seconds. These GNSS data were processed by PPP using the GIPSY-OASIS II software, so GEORED can provide a detailed crustal deformation map of the whole of Colombia. In addition, we use 100 GNSS data from the Ecuador-Peru region (Nocquet et al., 2014). Method: We developed a crustal block motion model based on the crustal deformation derived from the GNSS observations. Our model describes each block's motion by a pole location and angular velocity, together with interplate coupling on each block boundary, including the subduction interface between the South American and Nazca plates. The block motions and interplate coupling coefficients are estimated with an MCMC method, so each parameter is obtained as a probability density function (PDF). Result: We tested 11 crustal block models based on geological data, such as active fault traces at the surface. Based on the geological and geodetic data, the optimal number of crustal blocks selected using AIC is 11. We use this optimal block motion model. And also, we estimate
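
    The MCMC estimation strategy described (parameters recovered as posterior PDFs rather than point values) can be sketched with a one-parameter Metropolis sampler. The geometry is reduced to a scalar angular velocity and all numbers are synthetic:

    ```python
    import numpy as np

    # Metropolis sampler recovering a block's angular velocity omega from
    # noisy "GNSS" velocities, with velocity = omega * distance in this
    # reduced 1-D geometry. The output is a posterior PDF sample.
    rng = np.random.default_rng(6)
    r = rng.uniform(100.0, 1000.0, 30)     # site distances from rotation pole (km)
    omega_true = 0.02                      # synthetic "true" angular velocity
    sigma = 2.0                            # velocity noise (mm/yr, say)
    v_obs = omega_true * r + rng.normal(0.0, sigma, r.size)

    def log_like(omega):
        resid = v_obs - omega * r
        return -0.5 * np.sum((resid / sigma) ** 2)

    samples, omega = [], 0.0
    ll = log_like(omega)
    for _ in range(20000):
        prop = omega + rng.normal(0.0, 0.002)          # random-walk proposal
        ll_prop = log_like(prop)
        if np.log(rng.uniform()) < ll_prop - ll:       # Metropolis acceptance
            omega, ll = prop, ll_prop
        samples.append(omega)

    post = np.array(samples[5000:])                    # discard burn-in
    print(f"posterior mean omega = {post.mean():.4f} +/- {post.std():.4f}")
    ```

    The real problem has many blocks, Euler poles, and coupling coefficients, but the output in each case is the same kind of posterior distribution.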

  3. Simplified process model discovery based on role-oriented genetic mining.

    PubMed

    Zhao, Weidong; Liu, Xi; Dai, Weihui

    2014-01-01

    Process mining is the automated acquisition of process models from event logs. Although many process mining techniques have been developed, most of them are based on control flow. Meanwhile, existing role-oriented process mining methods focus on the correctness and integrity of roles while ignoring the role complexity of the process model, which directly impacts the understandability and quality of the model. To address these problems, we propose a genetic programming approach to mine simplified process models. Using a new metric of process complexity in terms of roles as the fitness function, we can find simpler process models. The new role complexity metric of process models is designed from role cohesion and coupling, and applied to discover roles in process models. Moreover, the higher fitness derived from the role complexity metric also provides a guideline for redesigning process models. Finally, we conduct a case study and experiments to show that the proposed method is more effective for streamlining the process than related approaches.
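
    A cohesion/coupling fitness of the kind described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's actual metric: the role/handover representation, the example roles and activities, and the equal weighting are all assumptions.

```python
def role_fitness(roles, handovers, w_cohesion=0.5, w_coupling=0.5):
    """roles: dict mapping role name -> set of activities.
    handovers: set of (a, b) pairs where activity a hands work to b."""
    def cohesion(acts):
        # fraction of a role's handovers that stay inside the role
        internal = sum(1 for a, b in handovers if a in acts and b in acts)
        total = sum(1 for a, b in handovers if a in acts or b in acts)
        return internal / total if total else 1.0

    avg_cohesion = sum(cohesion(acts) for acts in roles.values()) / len(roles)
    avg_coupling = 1.0 - avg_cohesion  # handovers crossing role boundaries
    # higher cohesion and lower coupling -> simpler, more understandable roles
    return w_cohesion * avg_cohesion - w_coupling * avg_coupling

roles = {"nurse": {"triage", "record"}, "doctor": {"diagnose", "prescribe"}}
handovers = {("triage", "record"), ("record", "diagnose"),
             ("diagnose", "prescribe")}
print(role_fitness(roles, handovers))
```

    In a genetic programming loop, candidate role assignments with higher fitness would be selected and recombined, steering the search toward simpler models.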

  4. Pursuing the method of multiple working hypotheses to understand differences in process-based snow models

    NASA Astrophysics Data System (ADS)

    Clark, Martyn; Essery, Richard

    2017-04-01

    When faced with the complex and interdisciplinary challenge of building process-based land models, different modelers make different decisions at different points in the model development process. These modeling decisions are generally based on several considerations, including fidelity (e.g., what approaches faithfully simulate observed processes), complexity (e.g., which processes should be represented explicitly), practicality (e.g., what is the computational cost of the model simulations; are there sufficient resources to implement the desired modeling concepts), and data availability (e.g., is there sufficient data to force and evaluate models). Consequently, the research community, comprising modelers of diverse background, experience, and modeling philosophy, has amassed a wide range of models, which differ in almost every aspect of their conceptualization and implementation. Model comparison studies have been undertaken to explore model differences, but have not been able to meaningfully attribute inter-model differences in predictive ability to individual model components because there are often too many structural and implementation differences among the different models considered. As a consequence, model comparison studies to date have provided limited insight into the causes of differences in model behavior, and model development has often relied on the inspiration and experience of individual modelers rather than on a systematic analysis of model shortcomings. This presentation will summarize the use of "multiple-hypothesis" modeling frameworks to understand differences in process-based snow models. Multiple-hypothesis frameworks define a master modeling template, and include a wide variety of process parameterizations and spatial configurations that are used in existing models. Such frameworks provide the capability to decompose complex models into the individual decisions that are made as part of model development, and evaluate each decision in

  5. Physics-based process model approach for detecting discontinuity during friction stir welding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shrivastava, Amber; Pfefferkorn, Frank E.; Duffie, Neil A.

    2015-02-12

    The goal of this work is to develop a method for detecting the creation of discontinuities during friction stir welding. This in situ weld monitoring method could significantly reduce the need for post-process inspection. A process force model and a discontinuity force model were created based on the state-of-the-art understanding of flow around a friction stir welding (FSW) tool. These models are used to predict the FSW forces and size of discontinuities formed in the weld. Friction stir welds with discontinuities and welds without discontinuities were created, and the differences in force dynamics were observed. In this paper, discontinuities were generated by reducing the tool rotation frequency and increasing the tool traverse speed in order to create "cold" welds. Experimental force data for welds with discontinuities and welds without discontinuities compared favorably with the predicted forces. The model currently overpredicts the discontinuity size.

  6. Gaze data reveal distinct choice processes underlying model-based and model-free reinforcement learning

    PubMed Central

    Konovalov, Arkady; Krajbich, Ian

    2016-01-01

    Organisms appear to learn and make decisions using different strategies known as model-free and model-based learning; the former is mere reinforcement of previously rewarded actions and the latter is a forward-looking strategy that involves evaluation of action-state transition probabilities. Prior work has used neural data to argue that both model-based and model-free learners implement a value comparison process at trial onset, but model-based learners assign more weight to forward-looking computations. Here using eye-tracking, we report evidence for a different interpretation of prior results: model-based subjects make their choices prior to trial onset. In contrast, model-free subjects tend to ignore model-based aspects of the task and instead seem to treat the decision problem as a simple comparison process between two differentially valued items, consistent with previous work on sequential-sampling models of decision making. These findings illustrate a problem with assuming that experimental subjects make their decisions at the same prescribed time. PMID:27511383
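
    The two strategies contrasted above can be caricatured in a few lines. The toy two-stage task, its rewards, and the transition probabilities are illustrative assumptions, not the study's actual paradigm:

```python
# Model-free: reinforce whatever action was just rewarded.
def model_free_update(q, action, reward, alpha=0.1):
    q[action] += alpha * (reward - q[action])
    return q

# Model-based: evaluate each action through action-state transition
# probabilities and second-stage values (a forward-looking computation).
def model_based_values(transitions, state_values):
    return {a: sum(p * state_values[s] for s, p in probs.items())
            for a, probs in transitions.items()}

q = model_free_update({"left": 0.0, "right": 0.0}, "left", 1.0)
print(q)  # only the taken action changed

transitions = {"left": {"A": 0.7, "B": 0.3}, "right": {"A": 0.3, "B": 0.7}}
state_values = {"A": 1.0, "B": 0.0}
print(model_based_values(transitions, state_values))
```

    The model-based values can be computed before the trial even starts, which is consistent with the paper's interpretation that model-based subjects make their choices prior to trial onset.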

  7. Comparing estimates of climate change impacts from process-based and statistical crop models

    NASA Astrophysics Data System (ADS)

    Lobell, David B.; Asseng, Senthold

    2017-01-01

    The potential impacts of climate change on crop productivity are of widespread interest to those concerned with addressing climate change and improving global food security. Two common approaches to assess these impacts are process-based simulation models, which attempt to represent key dynamic processes affecting crop yields, and statistical models, which estimate functional relationships between historical observations of weather and yields. Examples of both approaches are increasingly found in the scientific literature, although often published in different disciplinary journals. Here we compare published sensitivities to changes in temperature, precipitation, carbon dioxide (CO2), and ozone from each approach for the subset of crops, locations, and climate scenarios for which both have been applied. Despite a common perception that statistical models are more pessimistic, we find no systematic differences between the predicted sensitivities to warming from process-based and statistical models up to +2 °C, with limited evidence at higher levels of warming. For precipitation, there are many reasons why estimates could be expected to differ, but few estimates exist to develop robust comparisons, and precipitation changes are rarely the dominant factor for predicting impacts given the prominent role of temperature, CO2, and ozone changes. A common difference between process-based and statistical studies is that the former tend to include the effects of CO2 increases that accompany warming, whereas statistical models typically do not. Major needs moving forward include incorporating CO2 effects into statistical studies, improving both approaches’ treatment of ozone, and increasing the use of both methods within the same study. At the same time, those who fund or use crop model projections should understand that in the short-term, both approaches when done well are likely to provide similar estimates of warming impacts, with statistical models generally

  8. Cascade process modeling with mechanism-based hierarchical neural networks.

    PubMed

    Cong, Qiumei; Yu, Wen; Chai, Tianyou

    2010-02-01

    Cascade processes, such as wastewater treatment plants, comprise many nonlinear sub-systems and many variables. When the number of sub-systems is large, the input-output relation between the first and last blocks cannot represent the whole process. In this paper we use two techniques to overcome this problem. First, we propose a new neural model, hierarchical neural networks, to identify the cascade process; we then use a serial-structure mechanism model based on the physical equations to connect with the neural model. A stable learning algorithm and theoretical analysis are given. Finally, the method is used to model a wastewater treatment plant, and real operational data from the plant illustrate the modeling approach.

  9. Spatio-Temporal Data Analysis at Scale Using Models Based on Gaussian Processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stein, Michael

    Gaussian processes are the most commonly used statistical model for spatial and spatio-temporal processes that vary continuously. They are broadly applicable in the physical sciences and engineering and are also frequently used to approximate the output of complex computer models, deterministic or stochastic. We undertook research related to theory, computation, and applications of Gaussian processes as well as some work on estimating extremes of distributions for which a Gaussian process assumption might be inappropriate. Our theoretical contributions include the development of new classes of spatial-temporal covariance functions with desirable properties and new results showing that certain covariance models lead to predictions with undesirable properties. To understand how Gaussian process models behave when applied to deterministic computer models, we derived what we believe to be the first significant results on the large sample properties of estimators of parameters of Gaussian processes when the actual process is a simple deterministic function. Finally, we investigated some theoretical issues related to maxima of observations with varying upper bounds and found that, depending on the circumstances, standard large sample results for maxima may or may not hold. Our computational innovations include methods for analyzing large spatial datasets when observations fall on a partially observed grid and methods for estimating parameters of a Gaussian process model from observations taken by a polar-orbiting satellite. In our application of Gaussian process models to deterministic computer experiments, we carried out some matrix computations that would have been infeasible using even extended precision arithmetic by focusing on special cases in which all elements of the matrices under study are rational and using exact arithmetic.
The applications we studied include total column ozone as measured from a polar-orbiting satellite, sea surface temperatures over
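
    As a concrete illustration of the basic Gaussian-process machinery referred to above, a minimal prediction sketch with a squared-exponential covariance. The data, length scale, and noise level are arbitrary choices, unrelated to the applications studied:

```python
import numpy as np

def sq_exp_cov(x1, x2, length=1.0, sigma=1.0):
    # squared-exponential (RBF) covariance between two sets of 1-D points
    d = x1[:, None] - x2[None, :]
    return sigma**2 * np.exp(-0.5 * (d / length) ** 2)

def gp_predict(x_obs, y_obs, x_new, noise=1e-3):
    K = sq_exp_cov(x_obs, x_obs) + noise * np.eye(len(x_obs))
    K_s = sq_exp_cov(x_new, x_obs)
    # posterior mean: K_s K^{-1} y; a Cholesky solve keeps this stable
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_obs))
    return K_s @ alpha

x = np.array([0.0, 1.0, 2.0])
y = np.sin(x)
print(gp_predict(x, y, np.array([1.0])))  # close to sin(1) at an observed point
```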

  10. A Model of Process-Based Automation: Cost and Quality Implications in the Medication Management Process

    ERIC Educational Resources Information Center

    Spaulding, Trent Joseph

    2011-01-01

    The objective of this research is to understand how a set of systems, as defined by the business process, creates value. The three studies contained in this work develop the model of process-based automation. The model states that complementarities among systems are specified by handoffs in the business process. The model also provides theory to…

  11. Generation of global VTEC maps from low latency GNSS observations based on B-spline modelling and Kalman filtering

    NASA Astrophysics Data System (ADS)

    Erdogan, Eren; Dettmering, Denise; Limberger, Marco; Schmidt, Michael; Seitz, Florian; Börger, Klaus; Brandert, Sylvia; Görres, Barbara; Kersten, Wilhelm F.; Bothmer, Volker; Hinrichs, Johannes; Venzmer, Malte

    2015-04-01

    In May 2014 DGFI-TUM (the former DGFI) and the German Space Situational Awareness Centre (GSSAC) started to develop an OPerational Tool for Ionospheric Mapping And Prediction (OPTIMAP); in November 2014 the Institute of Astrophysics at the University of Göttingen (IAG) joined the group as the third partner. This project aims at the computation and prediction of maps of the vertical total electron content (VTEC) and the electron density distribution of the ionosphere on a global scale from various space-geodetic observation techniques, such as GNSS and satellite altimetry, as well as Sun observations. In this contribution we present first results, i.e. a near-real-time processing framework for generating VTEC maps by assimilating GNSS (GPS, GLONASS) based ionospheric data into a two-dimensional global B-spline approach. To be more specific, the spatial variations of VTEC are modelled by trigonometric B-spline functions in longitude and by endpoint-interpolating polynomial B-spline functions in latitude. Since B-spline functions are compactly supported and highly localizing, our approach can handle large data gaps appropriately and thus provides a better approximation of data with heterogeneous density and quality than the commonly used spherical harmonics. The presented method models temporal variations of VTEC inside a Kalman filter. The unknown parameters of the filter state vector are composed of the B-spline coefficients as well as the satellite and receiver DCBs. To approximate the temporal variation of these state vector components within the filter, a dynamical model has to be set up. The current implementation of the filter allows selecting among a random walk process, a Gauss-Markov process, and a dynamic process driven by an empirical ionosphere model, e.g. the International Reference Ionosphere (IRI). To run the model, ionospheric input data are acquired from terrestrial GNSS networks through online archive systems
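
    The random-walk option of the filter's dynamical model corresponds to a standard Kalman step in which the predicted state equals the previous state and only its uncertainty grows. The toy state dimension, design matrix, and noise levels below are illustrative assumptions (in OPTIMAP the state holds B-spline coefficients and DCBs, and the observations are GNSS-derived ionospheric data):

```python
import numpy as np

def kalman_step(x, P, z, H, Q, R):
    # prediction under a random-walk dynamic model: x_k = x_{k-1} + w_k
    x_pred, P_pred = x, P + Q
    # update with the new observation z = H x + v
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

n = 3                                  # toy state: three coefficients
x, P = np.zeros(n), np.eye(n)
Q, R = 0.01 * np.eye(n), np.array([[0.1]])
H = np.array([[1.0, 0.5, 0.2]])        # one scalar observation per epoch
for z in [1.0, 1.1, 0.9]:
    x, P = kalman_step(x, P, np.array([z]), H, Q, R)
print(x)
```

    Swapping the prediction step for a Gauss-Markov decay or an IRI-driven forecast would implement the other two dynamical models mentioned in the abstract.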

  12. From point process observations to collective neural dynamics: Nonlinear Hawkes process GLMs, low-dimensional dynamics and coarse graining

    PubMed Central

    Truccolo, Wilson

    2017-01-01

    This review presents a perspective on capturing collective dynamics in recorded neuronal ensembles based on multivariate point process models, inference of low-dimensional dynamics and coarse graining of spatiotemporal measurements. A general probabilistic framework for continuous time point processes is reviewed, with an emphasis on multivariate nonlinear Hawkes processes with exogenous inputs. A point process generalized linear model (PP-GLM) framework for the estimation of discrete time multivariate nonlinear Hawkes processes is described. The approach is illustrated with the modeling of collective dynamics in neocortical neuronal ensembles recorded in human and non-human primates, and prediction of single-neuron spiking. A complementary approach to capture collective dynamics based on low-dimensional dynamics (“order parameters”) inferred via latent state-space models with point process observations is presented. The approach is illustrated by inferring and decoding low-dimensional dynamics in primate motor cortex during naturalistic reach and grasp movements. Finally, we briefly review hypothesis tests based on conditional inference and spatiotemporal coarse graining for assessing collective dynamics in recorded neuronal ensembles. PMID:28336305

  13. From point process observations to collective neural dynamics: Nonlinear Hawkes process GLMs, low-dimensional dynamics and coarse graining.

    PubMed

    Truccolo, Wilson

    2016-11-01

    This review presents a perspective on capturing collective dynamics in recorded neuronal ensembles based on multivariate point process models, inference of low-dimensional dynamics and coarse graining of spatiotemporal measurements. A general probabilistic framework for continuous time point processes is reviewed, with an emphasis on multivariate nonlinear Hawkes processes with exogenous inputs. A point process generalized linear model (PP-GLM) framework for the estimation of discrete time multivariate nonlinear Hawkes processes is described. The approach is illustrated with the modeling of collective dynamics in neocortical neuronal ensembles recorded in human and non-human primates, and prediction of single-neuron spiking. A complementary approach to capture collective dynamics based on low-dimensional dynamics ("order parameters") inferred via latent state-space models with point process observations is presented. The approach is illustrated by inferring and decoding low-dimensional dynamics in primate motor cortex during naturalistic reach and grasp movements. Finally, we briefly review hypothesis tests based on conditional inference and spatiotemporal coarse graining for assessing collective dynamics in recorded neuronal ensembles. Published by Elsevier Ltd.
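
    A minimal simulation of a discrete-time multivariate nonlinear Hawkes process of the kind estimated in the PP-GLM framework: each unit's conditional intensity is a nonlinear function (here an exponential link) of a baseline plus history-filtered past spikes. The two-neuron network and all parameter values are illustrative assumptions:

```python
import math, random

def simulate_hawkes(n_steps, baseline, coupling, decay=0.5, dt=0.001, seed=1):
    rng = random.Random(seed)
    n = len(baseline)
    trace = [0.0] * n                    # exponentially filtered spike history
    spikes = []
    for _ in range(n_steps):
        step = []
        for i in range(n):
            drive = baseline[i] + sum(coupling[i][j] * trace[j]
                                      for j in range(n))
            lam = math.exp(drive)        # nonlinear link: intensity > 0
            p = 1 - math.exp(-lam * dt)  # spike probability in this time bin
            step.append(1 if rng.random() < p else 0)
        for j in range(n):               # update history after the bin
            trace[j] = trace[j] * math.exp(-decay) + step[j]
        spikes.append(step)
    return spikes

spikes = simulate_hawkes(1000, baseline=[2.0, 1.0],
                         coupling=[[0.0, 0.3], [0.3, 0.0]])
print(sum(s[0] for s in spikes), sum(s[1] for s in spikes))
```

    Fitting such a model in the PP-GLM framework amounts to maximizing the point-process likelihood of the observed spike trains with respect to the baseline and coupling parameters.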

  14. Global GNSS processing based on the raw observation approach

    NASA Astrophysics Data System (ADS)

    Strasser, Sebastian; Zehentner, Norbert; Mayer-Gürr, Torsten

    2017-04-01

    Many global navigation satellite system (GNSS) applications, e.g. Precise Point Positioning (PPP), require high-quality GNSS products, such as precise GNSS satellite orbits and clocks. These products are routinely determined by analysis centers of the International GNSS Service (IGS). The current processing methods of the analysis centers make use of the ionosphere-free linear combination to reduce the ionospheric influence. Some of the analysis centers also form observation differences, in general double-differences, to eliminate several additional error sources. The raw observation approach is a new GNSS processing approach that was developed at Graz University of Technology for kinematic orbit determination of low Earth orbit (LEO) satellites and subsequently adapted to global GNSS processing in general. This new approach offers some benefits compared to well-established approaches, such as a straightforward incorporation of new observables due to the avoidance of observation differences and linear combinations. This becomes especially important in view of the changing GNSS landscape with two new systems, the European system Galileo and the Chinese system BeiDou, currently in deployment. GNSS products generated at Graz University of Technology using the raw observation approach currently comprise precise GNSS satellite orbits and clocks, station positions and clocks, code and phase biases, and Earth rotation parameters. To evaluate the new approach, products generated using the Global Positioning System (GPS) constellation and observations from the global IGS station network are compared to those of the IGS analysis centers. The comparisons show that the products generated at Graz University of Technology are on a similar level of quality to the products determined by the IGS analysis centers. This confirms that the raw observation approach is applicable to global GNSS processing. Some areas requiring further work have been identified, enabling future

  15. Model-Based Verification and Validation of the SMAP Uplink Processes

    NASA Technical Reports Server (NTRS)

    Khan, M. Omair; Dubos, Gregory F.; Tirona, Joseph; Standley, Shaun

    2013-01-01

    This case study stands as an example of how a project can validate a system-level design earlier in the project life cycle than traditional V&V processes by using simulation on a system model. Specifically, this paper describes how simulation was added to a system model of the Soil Moisture Active-Passive (SMAP) mission's uplink process. Also discussed are the advantages and disadvantages of the methods employed and the lessons learned, which are intended to benefit future model-based and simulation-based V&V development efforts.

  16. A Sensitivity Analysis Method to Study the Behavior of Complex Process-based Models

    NASA Astrophysics Data System (ADS)

    Brugnach, M.; Neilson, R.; Bolte, J.

    2001-12-01

    The use of process-based models as a tool for scientific inquiry is becoming increasingly relevant in ecosystem studies. Process-based models are artificial constructs that simulate the system by mechanistically mimicking the functioning of its component processes. Structurally, a process-based model can be characterized in terms of its processes and the relationships established among them. Each process comprises a set of functional relationships among several model components (e.g., state variables, parameters and input data). While not encoded explicitly, the dynamics of the model emerge from this set of components and interactions organized in terms of processes. It is the task of the modeler to guarantee that the dynamics generated are appropriate and semantically equivalent to the phenomena being modeled. Despite the availability of techniques to characterize and understand model behavior, they do not suffice to completely and easily understand how a complex process-based model operates. For example, sensitivity analysis studies model behavior by determining the rate of change in model output as parameters or input data are varied. One problem with this approach is that it treats the model as a "black box" and explains model behavior solely through the input-output relationship. Since these models are highly non-linear, understanding how an input affects an output can be extremely difficult. Operationally, applying this technique can also be challenging because complex process-based models are generally characterized by a large parameter space. To overcome some of these difficulties, we propose a method of sensitivity analysis applicable to complex process-based models. This method focuses sensitivity analysis at the process level, and it aims to determine how sensitive the model output is to variations in the processes.
Once the processes that exert the major influence in
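
    The process-level idea can be illustrated with a toy model: each process (rather than each parameter) is scaled by a small factor, and the relative change in model output is recorded. The two-process growth/decay model below is an invented example, not the model from the abstract:

```python
def run_model(growth_scale=1.0, decay_scale=1.0, steps=100, dt=0.1):
    # toy state variable driven by two competing processes
    x = 1.0
    for _ in range(steps):
        growth = growth_scale * 0.5 * x   # process 1: growth
        decay = decay_scale * 0.3 * x     # process 2: decay
        x += dt * (growth - decay)
    return x

def process_sensitivity(process, delta=0.01):
    # scale one whole process by (1 + delta), not a single parameter
    base = run_model()
    perturbed = run_model(**{process: 1.0 + delta})
    return (perturbed - base) / (base * delta)   # relative sensitivity

for p in ("growth_scale", "decay_scale"):
    print(p, process_sensitivity(p))
```

    The signs and magnitudes of the resulting sensitivities indicate which processes dominate the output, which is the information the process-level method aims to deliver without inspecting every individual parameter.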

  17. Modeling of 2D diffusion processes based on microscopy data: parameter estimation and practical identifiability analysis.

    PubMed

    Hock, Sabrina; Hasenauer, Jan; Theis, Fabian J

    2013-01-01

    Diffusion is a key component of many biological processes such as chemotaxis, developmental differentiation and tissue morphogenesis. Recently, it has become possible to assess the spatial gradients caused by diffusion in vitro and in vivo using microscopy-based imaging techniques. The resulting time series of two-dimensional, high-resolution images, in combination with mechanistic models, enable the quantitative analysis of the underlying mechanisms. However, such a model-based analysis is still challenging due to measurement noise and sparse observations, which result in uncertainties of the model parameters. We introduce a likelihood function for image-based measurements with log-normally distributed noise. Based upon this likelihood function we formulate the maximum likelihood estimation problem, which is solved using PDE-constrained optimization methods. To assess the uncertainty and practical identifiability of the parameters we introduce profile likelihoods for diffusion processes. As proof of concept, we model certain aspects of the guidance of dendritic cells towards lymphatic vessels, an example of haptotaxis. Using a realistic set of artificial measurement data, we estimate the five kinetic parameters of this model and compute profile likelihoods. Our novel approach for the estimation of model parameters from image data and the proposed identifiability analysis are widely applicable to diffusion processes. The profile-likelihood-based method provides more rigorous uncertainty bounds than local approximation methods.
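
    The two ingredients named above, a log-normal measurement likelihood and a profile likelihood, can be sketched for a one-parameter toy model. The exponential-decay "model", the noise level, and the parameter grid are illustrative assumptions (the paper treats PDE-constrained diffusion models with five parameters):

```python
import math

def neg_log_likelihood(rate, sigma, times, data):
    # log-normal noise: log y ~ Normal(mu, sigma^2), mu = log(model prediction);
    # additive constants independent of the data are omitted
    nll = 0.0
    for t, y in zip(times, data):
        mu = math.log(math.exp(-rate * t))
        nll += 0.5 * ((math.log(y) - mu) / sigma) ** 2 + math.log(y * sigma)
    return nll

def profile_likelihood(rates, sigma, times, data):
    # with a single parameter the profile is the likelihood itself; with
    # more parameters, the others would be re-optimized at each grid point
    return [neg_log_likelihood(r, sigma, times, data) for r in rates]

times = [0.5, 1.0, 2.0]
data = [math.exp(-0.8 * t) for t in times]   # noise-free data for clarity
grid = [0.4 + 0.1 * i for i in range(9)]     # candidate rates 0.4 .. 1.2
profile = profile_likelihood(grid, 0.1, times, data)
best = grid[profile.index(min(profile))]
print(best)  # → 0.8, the true rate
```

    The width of the region where the profile stays close to its minimum gives the parameter's uncertainty bound; a flat profile would signal practical non-identifiability.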

  18. An assembly process model based on object-oriented hierarchical time Petri Nets

    NASA Astrophysics Data System (ADS)

    Wang, Jiapeng; Liu, Shaoli; Liu, Jianhua; Du, Zenghui

    2017-04-01

    In order to improve the versatility, accuracy and integrity of the assembly process model of complex products, an assembly process model based on object-oriented hierarchical time Petri Nets is presented. A complete assembly process information model including assembly resources, assembly inspection, time, structure and flexible parts is established; this model describes the static and dynamic data involved in the assembly process. Through the analysis of three-dimensional assembly process information, the assembly information is hierarchically divided from the whole, through the local, to the details, and subnet models of object-oriented Petri Nets are established at the different levels. The communication problem between Petri subnets is solved by using a message database, which effectively reduces the complexity of system modeling. Finally, the modeling process is presented, and a five-layer Petri Nets model is established based on the hoisting process of the engine compartment of a wheeled armored vehicle.

  19. Ozone distributions over southern Lake Michigan: comparisons between ferry-based observations, shoreline-based DOAS observations and model forecasts

    NASA Astrophysics Data System (ADS)

    Cleary, P. A.; Fuhrman, N.; Schulz, L.; Schafer, J.; Fillingham, J.; Bootsma, H.; McQueen, J.; Tang, Y.; Langel, T.; McKeen, S.; Williams, E. J.; Brown, S. S.

    2015-05-01

    Air quality forecast models typically predict large summertime ozone abundances over water relative to land in the Great Lakes region. While each state bordering Lake Michigan has dedicated monitoring systems, offshore measurements have been sparse, mainly executed through specific short-term campaigns. This study examines ozone abundances over Lake Michigan as measured on the Lake Express ferry, by shoreline differential optical absorption spectroscopy (DOAS) observations in southeastern Wisconsin and as predicted by the Community Multiscale Air Quality (CMAQ) model. From 2008 to 2009 measurements of O3, SO2, NO2 and formaldehyde were made in the summertime by DOAS at a shoreline site in Kenosha, WI. From 2008 to 2010 measurements of ambient ozone were conducted on the Lake Express, a high-speed ferry that travels between Milwaukee, WI, and Muskegon, MI, up to six times daily from spring to fall. Ferry ozone observations over Lake Michigan were an average of 3.8 ppb higher than those measured at shoreline in Kenosha, with little dependence on position of the ferry or temperature and with greatest differences during evening and night. Concurrent 1-48 h forecasts from the CMAQ model in the upper Midwestern region surrounding Lake Michigan were compared to ferry ozone measurements, shoreline DOAS measurements and Environmental Protection Agency (EPA) station measurements. The bias of the model O3 forecast was computed and evaluated with respect to ferry-based measurements. Trends in the bias with respect to location and time of day were explored showing non-uniformity in model bias over the lake. Model ozone bias was consistently high over the lake in comparison to land-based measurements, with highest biases for 25-48 h after initialization.

  20. Model medication management process in Australian nursing homes using business process modeling.

    PubMed

    Qian, Siyu; Yu, Ping

    2013-01-01

    One of the reasons for end-user avoidance or rejection of health information systems is poor alignment of the system with the healthcare workflow, likely caused by system designers' lack of thorough understanding of the healthcare process. Therefore, understanding the healthcare workflow is the essential first step in the design of optimal technologies that will enable care staff to complete the intended tasks faster and better. The frequent use of multiple or "high-risk" medicines by older people in nursing homes has the potential to increase the medication error rate. To facilitate the design of information systems with the most potential to improve patient safety, this study aims to understand the medication management process in nursing homes using a business process modeling method. The paper presents the study design and preliminary findings from interviewing two registered nurses, who were team leaders in two nursing homes. Although there were subtle differences in medication management between the two homes, the major medication management activities were similar. Further field observation will be conducted. Based on the data collected from observations, an as-is process model for medication management will be developed.

  1. Filtering with Marked Point Process Observations via Poisson Chaos Expansion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun Wei, E-mail: wsun@mathstat.concordia.ca; Zeng Yong, E-mail: zengy@umkc.edu; Zhang Shu, E-mail: zhangshuisme@hotmail.com

    2013-06-15

    We study a general filtering problem with marked point process observations. The motivation comes from modeling financial ultra-high-frequency data. First, we rigorously derive the unnormalized filtering equation with marked point process observations under mild assumptions, in particular relaxing the boundedness condition on the stochastic intensity. Then, we derive the Poisson chaos expansion for the unnormalized filter. Based on the chaos expansion, we establish the uniqueness of solutions of the unnormalized filtering equation. Moreover, we derive the Poisson chaos expansion for the unnormalized filter density under additional conditions. To explore the computational advantage, we further construct a new consistent recursive numerical scheme based on the truncation of the chaos density expansion for a simple case. The new algorithm divides the computations into those containing solely system coefficients and those including the observations, and assigns the former to off-line processing.

  2. Model-based verification and validation of the SMAP uplink processes

    NASA Astrophysics Data System (ADS)

    Khan, M. O.; Dubos, G. F.; Tirona, J.; Standley, S.

    Model-Based Systems Engineering (MBSE) is being used increasingly within the spacecraft design community because of its benefits when compared to document-based approaches. As the complexity of projects expands dramatically with continually increasing computational power and technology infusion, the time and effort needed for verification and validation (V&V) increase geometrically. Using simulation to perform design validation with system-level models earlier in the life cycle stands to bridge the gap between design of the system (based on system-level requirements) and verifying those requirements/validating the system as a whole. This case study stands as an example of how a project can validate a system-level design earlier in the project life cycle than traditional V&V processes by using simulation on a system model. Specifically, this paper describes how simulation was added to a system model of the Soil Moisture Active-Passive (SMAP) mission's uplink process. Also discussed are the advantages and disadvantages of the methods employed and the lessons learned, which are intended to benefit future model-based and simulation-based development efforts.

  3. Stratospheric processes: Observations and interpretation

    NASA Technical Reports Server (NTRS)

    Brune, William H.; Cox, R. Anthony; Turco, Richard; Brasseur, Guy P.; Matthews, W. Andrew; Zhou, Xiuji; Douglass, Anne; Zander, Rudi J.; Prendez, Margarita; Rodriguez, Jose M.

    1991-01-01

    Explaining the observed ozone trends discussed in an earlier update and predicting future trends requires an understanding of the stratospheric processes that affect ozone. Stratospheric processes occur on both large and small spatial scales and over both long and short periods of time. Because these diverse processes interact with each other, only in rare cases can individual processes be studied by direct observation. Generally the cause and effect relationships for ozone changes were established by comparisons between observations and model simulations. Increasingly, these comparisons rely on the developing, observed relationships among trace gases and dynamical quantities to initialize and constrain the simulations. The goal of this discussion of stratospheric processes is to describe the causes for the observed ozone trends as they are currently understood. At present, we understand with considerable confidence the stratospheric processes responsible for the Antarctic ozone hole but are only beginning to understand the causes of the ozone trends at middle latitudes. Even though the causes of the ozone trends at middle latitudes were not clearly determined, it is likely that they, just as those over Antarctica, involved chlorine and bromine chemistry that was enhanced by heterogeneous processes. This discussion generally presents only an update of the observations that have occurred for stratospheric processes since the last assessment (World Meteorological Organization (WMO), 1990), and is not a complete review of all the new information about stratospheric processes. It begins with an update of the previous assessment of polar stratospheres (WMO, 1990), followed by a discussion on the possible causes for the ozone trends at middle latitudes and on the effects of bromine and of volcanoes.

  4. Performance of a process-based hydrodynamic model in predicting shoreline change

    NASA Astrophysics Data System (ADS)

    Safak, I.; Warner, J. C.; List, J. H.

    2012-12-01

    Shoreline change is controlled by a complex combination of processes that include waves, currents, sediment characteristics and availability, geologic framework, human interventions, and sea level rise. A comprehensive data set of shoreline position (14 shorelines between 1978 and 2002) along the continuous and relatively uninterrupted North Carolina coast from Oregon Inlet to Cape Hatteras (65 km) reveals a spatial pattern of alternating erosion and accretion, with an average erosional shoreline change rate of -1.6 m/yr and up to -8 m/yr in some locations. This data set offers a unique opportunity to study long-term shoreline change in an area hit by frequent storm events yet relatively uninfluenced by human interventions and the effects of tidal inlets. Accurate predictions of long-term shoreline change may require a model that accurately resolves surf zone processes and sediment transport patterns. Conventional methods for predicting shoreline change, such as one-line models and regression of shoreline positions, were designed for computational efficiency. These methods, however, not only rest on several restrictive assumptions (validity for small angles of wave approach, bottom contours assumed parallel to the shoreline, a fixed depth of closure, etc.), but their empirical estimates of sediment transport rates in the surf zone have also been shown to differ greatly from the calculations of process-based hydrodynamic models. We focus on hindcasting long-term shoreline change using components of the process-based, three-dimensional coupled-ocean-atmosphere-wave-sediment transport modeling system (COAWST). COAWST is forced with historical predictions of atmospheric and oceanographic data from public-domain global models. 
Through a coupled, concurrent grid-refinement approach in COAWST, the finest grid, with a resolution of O(10 m) covering the surf zone along the section of interest, is forced at its spatial boundaries with waves and currents computed on the grids
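The one-line approach that the abstract contrasts with process-based modeling can be stated in a few lines: shoreline position changes in proportion to the alongshore gradient of sediment transport, dy/dt = -(1/D) dQ/dx. Below is a generic sketch of that idea only (the function name and the simple centered-difference discretization are illustrative, not from the paper):

```python
def one_line_update(y, q, dx, dt, closure_depth):
    """One-line shoreline model step: dy/dt = -(1/D) * dQ/dx.

    y  : shoreline positions (m) at alongshore nodes
    q  : alongshore sediment transport rate (m^3/time) at the same nodes
    dx : alongshore node spacing (m); dt : time step
    closure_depth : active profile depth D (m)
    Boundary nodes are left unchanged in this minimal sketch.
    """
    new_y = list(y)
    for i in range(1, len(y) - 1):
        dqdx = (q[i + 1] - q[i - 1]) / (2.0 * dx)  # centered difference
        new_y[i] = y[i] - dt * dqdx / closure_depth
    return new_y
```

A positive transport gradient (more sand leaving a cell than entering) produces erosion, which is the alternating pattern the data set above resolves.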

  5. On the upscaling of process-based models in deltaic applications

    NASA Astrophysics Data System (ADS)

    Li, L.; Storms, J. E. A.; Walstra, D. J. R.

    2018-03-01

    Process-based numerical models are increasingly used to study the evolution of marine and terrestrial depositional environments. Whilst a detailed description of small-scale processes provides an accurate representation of reality, application on geological timescales is constrained by the associated increase in computational time. In order to reduce the computational time, a number of acceleration methods are combined and evaluated for a schematic supply-driven delta (static base level) and an accommodation-driven delta (variable base level). The performance of the combined acceleration methods is evaluated by comparing morphological indicators, such as distributary channel networking and delta volumes, derived from the model predictions at various levels of acceleration. The results of the accelerated models are compared to the outcomes of a series of simulations designed to capture autogenic variability. Autogenic variability is quantified by re-running identical models on an initial bathymetry with 1 cm of added noise. The overall results show that the variability of the accelerated models falls within the autogenic variability range, suggesting that the application of acceleration methods does not significantly affect the simulated delta evolution. The Time-scale compression method (the acceleration method introduced in this paper) results in an increased computational efficiency of 75% without adversely affecting the simulated delta evolution compared to a base case. The combination of the Time-scale compression method with the existing acceleration methods has the potential to extend the application range of process-based models towards geologic timescales.
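Acceleration methods of this kind commonly scale the bed-level change computed in one hydrodynamic step by a morphological factor, so that many morphological time units elapse per hydrodynamic step. A minimal sketch of that general idea (this illustrates the generic morphological-acceleration concept, not the paper's specific Time-scale compression scheme; names are our own):

```python
def accelerated_bed_update(bed, dzdt, dt, morfac):
    """Advance bed levels one hydrodynamic step of length dt, with the
    erosion/deposition rate dzdt multiplied by a morphological
    acceleration factor `morfac`, so morfac*dt of morphological time
    elapses per hydrodynamic step."""
    return [b + morfac * r * dt for b, r in zip(bed, dzdt)]
```

The trade-off evaluated in the abstract is whether such scaling distorts the simulated morphology beyond the autogenic variability envelope.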

  6. Direct observation of the oxidation of DNA bases by phosphate radicals formed under radiation: a model of the backbone-to-base hole transfer.

    PubMed

    Ma, Jun; Marignier, Jean-Louis; Pernot, Pascal; Houée-Levin, Chantal; Kumar, Anil; Sevilla, Michael D; Adhikary, Amitava; Mostafavi, Mehran

    2018-05-30

    In irradiated DNA, via the base-to-base and backbone-to-base hole transfer processes, the hole (i.e., the unpaired spin) localizes on the most electropositive base, guanine. Phosphate radicals formed via ionization events in the DNA backbone must play an important role in the backbone-to-base hole transfer process. However, earlier studies on irradiated hydrated DNA, on irradiated DNA models in frozen aqueous solution, and in neat dimethyl phosphate showed the formation of carbon-centered radicals and not phosphate radicals. Therefore, to model the backbone-to-base hole transfer process, we report picosecond pulse radiolysis studies of the reactions of H2PO4˙ with the DNA bases - G, A, T, and C - in 6 M H3PO4 at 22 °C. The time-resolved observations show that in 6 M H3PO4, H2PO4˙ causes the one-electron oxidation of adenine, guanine, and thymine, forming the cation radicals via a single electron transfer (SET) process; however, the rate constant of the reaction of H2PO4˙ with cytosine is too low (<10^7 L mol^-1 s^-1) to be measured. The rates of these reactions are influenced by the protonation states and the reorganization energies of the base radicals and of the phosphate radical in 6 M H3PO4.

  7. The involvement of model-based but not model-free learning signals during observational reward learning in the absence of choice.

    PubMed

    Dunne, Simon; D'Souza, Arun; O'Doherty, John P

    2016-06-01

    A major open question is whether computational strategies thought to be used during experiential learning, specifically model-based and model-free reinforcement learning, also support observational learning. Furthermore, the question of how observational learning occurs when observers must learn about the value of options from observing outcomes in the absence of choice has not been addressed. In the present study we used a multi-armed bandit task that encouraged human participants to employ both experiential and observational learning while they underwent functional magnetic resonance imaging (fMRI). We found evidence for the presence of model-based learning signals during both observational and experiential learning in the intraparietal sulcus. However, unlike during experiential learning, model-free learning signals in the ventral striatum were not detectable during this form of observational learning. These results provide insight into the flexibility of the model-based learning system, implicating this system in learning during observation as well as from direct experience, and further suggest that the model-free reinforcement learning system may be less flexible with regard to its involvement in observational learning. Copyright © 2016 the American Physiological Society.

  8. Processing electronic photos of Mercury produced by ground based observation

    NASA Astrophysics Data System (ADS)

    Ksanfomality, Leonid

    New images of Mercury have been obtained by processing ground-based observations carried out using the short-exposure technique. The disk of the planet usually subtends 6 to 7 arc seconds, corresponding to a linear image size in the telescope focal plane of about 0.3-0.5 mm on average. Processing the initial millisecond electronic photos of the planet is very labor-intensive. Some features of processing initial millisecond electronic photos by correlation-stacking methods were considered in (Ksanfomality et al., 2005; Ksanfomality and Sprague, 2007). The method uses manual selection of good photos, including a so-called pilot-file, the search for which usually must be done manually. The pilot-file is the most successful frame, in the opinion of the operator, and it defines the future result of the stacking. Changing pilot-files multiplies the labor of processing. Processing programs analyze the contents of a sample, find in it any details, and search for the recurrence of these almost imperceptible details in thousands of other stacked electronic pictures. While the form and position of a pilot-file can be estimated from experience, judging whether barely distinct details in it are real lies somewhere between imaging and imagination. In 2006-07 some programs for automatic processing were created. Unfortunately, the efficiency of all the automatic programs is not as good as manual selection. Together with the selection, some other known methods are used. The point spread function (PSF) is described by a known mathematical function whose central part decreases smoothly from the center. Usually the width of this function is taken at a level of 0.7 or 0.5 of the maximum. If many thousands of initial electronic pictures are acquired, it is possible during their processing to take advantage of the known statistics of random variables and to choose the width of the function at a level of, say, 0.9 of the maximum. Then the

  9. A Mixed Kijima Model Using the Weibull-Based Generalized Renewal Processes

    PubMed Central

    2015-01-01

    Generalized Renewal Processes are useful for modeling the rejuvenation of dynamical systems resulting from planned or unplanned interventions. We present new perspectives for Generalized Renewal Processes in general and for Weibull-based Generalized Renewal Processes in particular. In a departure from the existing literature, we present a mixed Generalized Renewal Processes approach involving Kijima Type I and II models, allowing one to infer the impact of distinct interventions on the performance of the system under study. The first and second theoretical moments of this model are introduced, as well as its maximum likelihood estimation and random sampling approaches. In order to illustrate the usefulness of the proposed Weibull-based Generalized Renewal Processes model, some real data sets involving improving, stable, and deteriorating systems are used. PMID:26197222
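The Kijima virtual-age recursions at the core of such Generalized Renewal Processes are compact enough to state directly. A minimal sketch (our own illustrative code, not the authors' implementation):

```python
def virtual_ages(inter_arrival_times, q, kijima_type):
    """Kijima virtual-age recursion for a Generalized Renewal Process.

    Type I : V_n = V_{n-1} + q * X_n    (repair undoes part of the last sojourn)
    Type II: V_n = q * (V_{n-1} + X_n)  (repair acts on the whole history)
    q = 0 gives perfect repair (as good as new); q = 1 gives minimal
    repair (as bad as old). Returns the virtual age after each event.
    """
    v = 0.0
    ages = []
    for x in inter_arrival_times:
        if kijima_type == 1:
            v = v + q * x
        else:
            v = q * (v + x)
        ages.append(v)
    return ages
```

In a Weibull-based GRP, the next inter-arrival time is then drawn conditional on the current virtual age; the mixed approach above allows the type (I or II) to differ between interventions.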

  10. Demonstrating the value of community-based ('citizen science') observations for catchment modelling and characterisation

    NASA Astrophysics Data System (ADS)

    Starkey, Eleanor; Parkin, Geoff; Birkinshaw, Stephen; Large, Andy; Quinn, Paul; Gibson, Ceri

    2017-05-01

    Despite there being well-established meteorological and hydrometric monitoring networks in the UK, many smaller catchments remain ungauged. This leaves a challenge for characterisation, modelling, forecasting and management activities. Here we demonstrate the value of community-based ('citizen science') observations for modelling and understanding catchment response as a contribution to catchment science. The scheme implemented within the 42 km2 Haltwhistle Burn catchment, a tributary of the River Tyne in northeast England, has harvested and used quantitative and qualitative observations from the public in a novel way to effectively capture spatial and temporal river response. Community-based rainfall, river level and flood observations have been successfully collected and quality-checked, and used to build and run a physically-based, spatially-distributed catchment model, SHETRAN. Model performance using different combinations of observations is tested against traditionally-derived hydrographs. Our results show how the local network of community-based observations alongside traditional sources of hydro-information supports characterisation of catchment response more accurately than using traditional observations alone over both spatial and temporal scales. We demonstrate that these community-derived datasets are most valuable during local flash flood events, particularly towards peak discharge. This information is often missed or poorly represented by ground-based gauges, or significantly underestimated by rainfall radar, as this study clearly demonstrates. While community-based observations are less valuable during prolonged and widespread floods, or over longer hydrological periods of interest, they can still ground-truth existing traditional sources of catchment data to increase confidence during characterisation and management activities. 
Involvement of the public in data collection activities also encourages wider community engagement, and provides important

  11. Quadratic Polynomial Regression using Serial Observation Processing: Implementation within DART

    NASA Astrophysics Data System (ADS)

    Hodyss, D.; Anderson, J. L.; Collins, N.; Campbell, W. F.; Reinecke, P. A.

    2017-12-01

    Many Ensemble-Based Kalman Filtering (EBKF) algorithms process the observations serially. Serial observation processing views the data assimilation process as an iterative sequence of scalar update equations. This data assimilation algorithm is useful because it has very low memory requirements and does not need complex methods to perform the high-dimensional inverse calculation typical of many other algorithms. Recently, the push has been towards prediction, and therefore the assimilation of observations, for regions and phenomena for which high resolution is required and/or highly nonlinear physical processes are operating. For these situations, a basic hypothesis is that the use of the EBKF is sub-optimal and performance gains could be achieved by accounting for aspects of the non-Gaussianity. To this end, we develop here a new component of the Data Assimilation Research Testbed (DART) to allow a wide variety of users to test this hypothesis. This new version of DART allows one to run several variants of the EBKF as well as several variants of the quadratic polynomial filter using the same forecast model and observations. Differences between the results of the two systems will then highlight the degree of non-Gaussianity in the system being examined. We will illustrate in this work the differences between the performance of linear versus quadratic polynomial regression in a hierarchy of models from Lorenz-63 to a simple general circulation model.
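The scalar serial update at the heart of such algorithms can be sketched as follows, assuming a one-dimensional state that is observed directly (a simplified illustration of the serial idea, not DART's actual EAKF/EnKF code):

```python
def serial_enkf_update(ens, observations):
    """Assimilate scalar observations one at a time into a 1-D ensemble.

    Each observation is a (value, error_variance) pair. For each one:
        K = P / (P + r),   x_i <- x_i + K * (y - x_i)
    where P is the current ensemble variance. Each member is shifted
    deterministically toward the observation (a sketch; operational
    serial filters use stochastic or adjustment variants).
    """
    ens = list(ens)
    for y, r in observations:
        m = sum(ens) / len(ens)
        p = sum((x - m) ** 2 for x in ens) / (len(ens) - 1)  # sample variance
        k = p / (p + r)                                      # scalar Kalman gain
        ens = [x + k * (y - x) for x in ens]
    return ens
```

Because each update is scalar, no matrix inverse is ever formed, which is the low-memory property the abstract highlights.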

  12. A model of a sunspot chromosphere based on OSO 8 observations

    NASA Technical Reports Server (NTRS)

    Lites, B. W.; Skumanich, A.

    1982-01-01

    OSO 8 spectrometer observations of the H I, Mg II, and Ca II resonance lines of a large quiet sunspot during November 16-17, 1975, along with a C IV line of that event obtained by a ground-based spectrometer, are analyzed together with near-simultaneous ground-based Stokes measurements to yield an umbral chromosphere and transition region model. Features of this model include a chromosphere that is effectively thin in the resonance lines of H I and Mg II, while being saturated in Ca II, and an upper chromospheric structure similar to that of quiet-sun models. The similarity of the upper chromosphere of the sunspot umbra to the quiet-sun chromosphere suggests that the intense magnetic field plays only a passive role in the chromospheric heating mechanism, and the observations cited indicate that solar-type stars with large areas of ordered magnetic flux would not necessarily exhibit extremely active chromospheres.

  13. A Nanoflare-Based Cellular Automaton Model and the Observed Properties of the Coronal Plasma

    NASA Technical Reports Server (NTRS)

    Lopez-Fuentes, Marcelo; Klimchuk, James Andrew

    2016-01-01

    We use the cellular automaton model described in Lopez Fuentes and Klimchuk to study the evolution of coronal loop plasmas. The model, based on the idea of a critical misalignment angle in tangled magnetic fields, produces nanoflares of varying frequency with respect to the plasma cooling time. We compare the results of the model with active region (AR) observations obtained with the Hinode/XRT and SDO/AIA instruments. The comparison is based on the statistical properties of synthetic and observed loop light curves. Our results show that the model reproduces the main observational characteristics of the evolution of the plasma in AR coronal loops. The typical intensity fluctuations have amplitudes of 10%-15% both for the model and the observations. The sign of the skewness of the intensity distributions indicates the presence of cooling plasma in the loops. We also study the emission measure (EM) distribution predicted by the model and obtain slopes in log(EM) versus log(T) between 2.7 and 4.3, in agreement with published observational values.

  14. A NANOFLARE-BASED CELLULAR AUTOMATON MODEL AND THE OBSERVED PROPERTIES OF THE CORONAL PLASMA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fuentes, Marcelo López; Klimchuk, James A., E-mail: lopezf@iafe.uba.ar

    2016-09-10

    We use the cellular automaton model described in López Fuentes and Klimchuk to study the evolution of coronal loop plasmas. The model, based on the idea of a critical misalignment angle in tangled magnetic fields, produces nanoflares of varying frequency with respect to the plasma cooling time. We compare the results of the model with active region (AR) observations obtained with the Hinode/XRT and SDO/AIA instruments. The comparison is based on the statistical properties of synthetic and observed loop light curves. Our results show that the model reproduces the main observational characteristics of the evolution of the plasma in AR coronal loops. The typical intensity fluctuations have amplitudes of 10%–15% both for the model and the observations. The sign of the skewness of the intensity distributions indicates the presence of cooling plasma in the loops. We also study the emission measure (EM) distribution predicted by the model and obtain slopes in log(EM) versus log(T) between 2.7 and 4.3, in agreement with published observational values.

  15. Prediction of Proper Temperatures for the Hot Stamping Process Based on the Kinetics Models

    NASA Astrophysics Data System (ADS)

    Samadian, P.; Parsa, M. H.; Mirzadeh, H.

    2015-02-01

    Nowadays, the application of kinetics models for predicting the microstructures of steels subjected to thermo-mechanical treatments has increased, in order to minimize direct experimentation, which is costly and time consuming. In the current work, the final microstructures of AISI 4140 steel sheets after the hot stamping process were predicted using the Kirkaldy and Li kinetics models combined with new thermodynamically based models in order to determine the appropriate process temperatures. In this way, the effect of deformation during hot stamping on the Ae3, Acm, and Ae1 temperatures was considered, and the equilibrium volume fractions of phases at different temperatures were then calculated. Moreover, the ferrite transformation rate equations of the Kirkaldy and Li models were modified by a term proposed by Åkerström to account for the influence of plastic deformation. Results showed that the modified Kirkaldy model is satisfactory for determining appropriate austenitization temperatures for the hot stamping of AISI 4140 steel sheets, because its microstructure predictions agree well with the experimental observations.

  16. MO-FG-209-05: Towards a Feature-Based Anthropomorphic Model Observer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Avanaki, A.

    2016-06-15

    This symposium will review recent advances in the simulation methods for evaluation of novel breast imaging systems – the subject of AAPM Task Group TG234. Our focus will be on the various approaches to development and validation of software anthropomorphic phantoms and their use in the statistical assessment of novel imaging systems using such phantoms along with computational models for the x-ray image formation process. Due to the dynamic development and complex design of modern medical imaging systems, the simulation of anatomical structures, image acquisition modalities, and the image perception and analysis offers substantial benefits of reduced cost, duration, and radiation exposure, as well as the known ground-truth and wide variability in simulated anatomies. For these reasons, Virtual Clinical Trials (VCTs) have been increasingly accepted as a viable tool for preclinical assessment of x-ray and other breast imaging methods. Activities of TG234 have encompassed the optimization of protocols for simulation studies, including phantom specifications, the simulated data representation, models of the imaging process, and statistical assessment of simulated images. The symposium will discuss the state of the science of VCTs for novel breast imaging systems, emphasizing recent developments and future directions. Presentations will discuss virtual phantoms for intermodality breast imaging performance comparisons, extension of the breast anatomy simulation to the cellular level, optimized integration of the simulated imaging chain, and the novel directions in observer model design. Learning Objectives: Review novel results in developing and applying virtual phantoms for inter-modality breast imaging performance comparisons; Discuss the efforts to extend the computer simulation of breast anatomy and pathology to the cellular level; Summarize the state of the science in optimized integration of modules in the simulated imaging chain; Compare novel

  17. Comparing and combining process-based crop models and statistical models with some implications for climate change

    NASA Astrophysics Data System (ADS)

    Roberts, Michael J.; Braun, Noah O.; Sinclair, Thomas R.; Lobell, David B.; Schlenker, Wolfram

    2017-09-01

    We compare predictions of a simple process-based crop model (Soltani and Sinclair 2012), a simple statistical model (Schlenker and Roberts 2009), and a combination of both models to actual maize yields on a large, representative sample of farmer-managed fields in the Corn Belt region of the United States. After statistical post-model calibration, the process model (Simple Simulation Model, or SSM) predicts actual outcomes slightly better than the statistical model, but the combined model performs significantly better than either model. The SSM, statistical model and combined model all show similar relationships with precipitation, while the SSM better accounts for temporal patterns of precipitation, vapor pressure deficit and solar radiation. The statistical and combined models show a more negative impact associated with extreme heat for which the process model does not account. Due to the extreme heat effect, predicted impacts under uniform climate change scenarios are considerably more severe for the statistical and combined models than for the process-based model.
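Statistical post-model calibration of the kind described can be as simple as an ordinary least-squares regression of observed yields on the process-model predictions. A minimal sketch of that idea (illustrative only; the paper's calibration procedure may differ):

```python
def post_model_calibration(pred, obs):
    """Fit obs = a + b * pred by ordinary least squares, so the calibrated
    process-model output a + b * pred is unbiased on the sample.
    Returns the intercept and slope (a, b)."""
    n = len(pred)
    mx = sum(pred) / n
    my = sum(obs) / n
    b = sum((x - mx) * (y - my) for x, y in zip(pred, obs)) / \
        sum((x - mx) ** 2 for x in pred)
    a = my - b * mx
    return a, b
```

A combined model can then go further by regressing yields on the calibrated process-model output together with additional statistical covariates such as extreme-heat exposure.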

  18. Model-based design of experiments for cellular processes.

    PubMed

    Chakrabarty, Ankush; Buzzard, Gregery T; Rundell, Ann E

    2013-01-01

    Model-based design of experiments (MBDOE) assists in the planning of highly effective and efficient experiments. Although the foundations of this field are well-established, the application of these techniques to understand cellular processes is a fertile and rapidly advancing area as the community seeks to understand ever more complex cellular processes and systems. This review discusses the MBDOE paradigm along with applications and challenges within the context of cellular processes and systems. It also provides a brief tutorial on Fisher information matrix (FIM)-based and Bayesian experiment design methods along with an overview of existing software packages and computational advances that support MBDOE application and adoption within the Systems Biology community. As cell-based products and biologics progress into the commercial sector, it is anticipated that MBDOE will become an essential practice for design, quality control, and production. Copyright © 2013 Wiley Periodicals, Inc.
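For a model with additive Gaussian measurement noise, the Fisher information matrix used in FIM-based design reduces to a product of local sensitivity matrices. A minimal sketch of that computation (an assumed textbook form, for illustration; MBDOE software packages implement far richer variants):

```python
def fisher_information(sensitivities, sigma2):
    """FIM for independent Gaussian noise of variance sigma2:
        F = S^T S / sigma2,
    where S[t][j] = d y_t / d theta_j is the sensitivity of observation t
    to parameter j. D-optimal design chooses measurements maximizing
    det(F); the inverse of F bounds the parameter covariance
    (Cramer-Rao)."""
    n_p = len(sensitivities[0])
    return [[sum(row[i] * row[j] for row in sensitivities) / sigma2
             for j in range(n_p)] for i in range(n_p)]
```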

  19. Implications of Bandura's Observational Learning Theory for a Competency Based Teacher Education Model.

    ERIC Educational Resources Information Center

    Hartjen, Raymond H.

    Albert Bandura of Stanford University has proposed four component processes to his theory of observational learning: a) attention, b) retention, c) motor reproduction, and d) reinforcement and motivation. This study represents one phase of an effort to relate modeling and observational learning theory to teacher training. The problem of this study…

  20. Model-based query language for analyzing clinical processes.

    PubMed

    Barzdins, Janis; Barzdins, Juris; Rencis, Edgars; Sostaks, Agris

    2013-01-01

    Nowadays, large databases of clinical process data exist in hospitals. However, these data are rarely used to their full extent. In order to perform queries on hospital processes, one must either choose from predefined queries or develop queries using an MS Excel-type software system, which is not always a trivial task. In this paper we propose a new query language for analyzing clinical processes that is easily comprehensible also to non-IT professionals. We develop this language based on a process modeling language, which is also described in this paper. Prototypes of both languages have already been verified using real examples from hospitals.

  1. Process-based model with flood control measures towards more realistic global flood modeling

    NASA Astrophysics Data System (ADS)

    Tang, Q.; Zhang, X.; Wang, Y.; Mu, M.; Lv, A.; Li, Z.

    2017-12-01

    In the profoundly human-influenced era, the Anthropocene, an increasing amount of land has been developed in flood plains, and many flood control measures have been implemented to protect the people and infrastructure placed in flood-prone areas. These human influences (for example, dams and dykes) have altered peak streamflow and flood risk, and are already an integral part of flooding. However, most process-based flood models have yet to take these human influences into account. In this study, we used a hydrological model together with an advanced hydrodynamic model to assess flood risk in the Baiyangdian catchment. The Baiyangdian Lake is the largest shallow freshwater lake in North China, and it was used as a flood storage area in the past. A new development hub for the Beijing-Tianjin-Hebei economic triangle, namely the Xiongan new area, was recently established in the flood-prone area around the lake. The Shuttle Radar Topography Mission (SRTM) digital elevation model (DEM) was used to parameterize the hydrodynamic model simulation, and the inundation estimates were compared with published flood maps and observed inundation areas during extreme historical flood events. A simple scheme was implemented to account for the impacts of flood control measures, including the reservoirs in the headwaters and the dykes to be built. By comparing model simulations with and without the influences of flood control measures, we demonstrated the importance of human influences in altering the inundated area and depth under design flood conditions. Based on the SRTM DEM and the dam and reservoir data in the Global Reservoir and Dam (GRanD) database, we further discuss the potential to develop a global flood model with human influences.

  2. Integrating statistical and process-based models to produce probabilistic landslide hazard at regional scale

    NASA Astrophysics Data System (ADS)

    Strauch, R. L.; Istanbulluoglu, E.

    2017-12-01

    We develop a landslide hazard modeling approach that integrates a data-driven statistical model and a probabilistic process-based shallow landslide model for mapping the probability of landslide initiation, transport, and deposition at regional scales. The empirical model integrates the influence of seven site attribute (SA) classes: elevation, slope, curvature, aspect, land use-land cover, lithology, and topographic wetness index, on over 1,600 observed landslides using a frequency ratio (FR) approach. A susceptibility index is calculated by adding the FRs for each SA on a grid-cell basis. Using landslide observations we relate the susceptibility index to an empirically derived probability of landslide impact. This probability is combined with results from a physically based model to produce an integrated probabilistic map. Slope was key in landslide initiation, while deposition was linked to lithology and elevation. Vegetation transition from forest to alpine vegetation and barren land cover with lower root cohesion leads to a higher frequency of initiation. Aspect effects are likely linked to differences in root cohesion and moisture controlled by solar insolation and snow. We demonstrate the model in the North Cascades of Washington, USA and identify locations of high and low probability of landslide impacts that can be used by land managers in their design, planning, and maintenance.
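The frequency-ratio bookkeeping described above is straightforward to state; a minimal sketch (function names are illustrative, not the authors' code):

```python
def frequency_ratio(class_cells, class_slides, total_cells, total_slides):
    """Frequency ratio of one site-attribute class:
        FR = (slides_in_class / all_slides) / (cells_in_class / all_cells).
    FR > 1 means landslides are over-represented in that class."""
    return (class_slides / total_slides) / (class_cells / total_cells)

def susceptibility_index(cell_frs):
    """Grid-cell susceptibility index: the sum of the FRs of the classes
    (one per site attribute) to which the cell belongs."""
    return sum(cell_frs)
```

For example, a class covering 10% of the study area but containing 40% of the observed landslides has FR = 4, and a cell inherits one such FR per attribute.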

  3. Kinect Posture Reconstruction Based on a Local Mixture of Gaussian Process Models.

    PubMed

    Liu, Zhiguang; Zhou, Liuyang; Leung, Howard; Shum, Hubert P H

    2016-11-01

    Depth-sensor-based 3D human motion estimation hardware such as Kinect has made interactive applications more popular recently. However, it is still challenging to accurately recognize postures from a single depth camera due to the inherently noisy data derived from depth images and self-occluding actions performed by the user. In this paper, we propose a new real-time probabilistic framework to enhance the accuracy of live captured postures that belong to one of the action classes in the database. We adopt the Gaussian Process model as a prior to leverage the position data obtained from Kinect and a marker-based motion capture system. We also incorporate a temporal consistency term into the optimization framework to constrain the velocity variations between successive frames. To ensure that the reconstructed posture resembles the accurate parts of the observed posture, we embed a set of joint reliability measurements into the optimization framework. A major drawback of Gaussian Processes is their cubic learning complexity when dealing with a large database, due to the inversion of a covariance matrix. To solve the problem, we propose a new method based on a local mixture of Gaussian Processes, in which Gaussian Processes are defined in local regions of the state space. Due to the significantly decreased sample size in each local Gaussian Process, the learning time is greatly reduced. At the same time, the prediction speed is enhanced, as the weighted mean prediction for a given sample is determined by the nearby local models only. Our system also allows incrementally updating a specific local Gaussian Process in real time, which enhances the likelihood of adapting to run-time postures that are different from those in the database. Experimental results demonstrate that our system can generate high-quality postures even under severe self-occlusion situations, which is beneficial for real-time applications such as motion-based gaming and sport training.
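The weighted-mean prediction over nearby local models can be sketched generically as follows (a one-dimensional toy with inverse-distance weights over the k nearest local models; the paper's GP-based weighting and posture representation are more sophisticated):

```python
def local_mixture_predict(x, local_models, k=2):
    """Predict at x by a weighted mean over the k nearest local models.

    Each model is a (center, predict_fn) pair; weights are inverse
    distance to the model center, so only nearby local models matter,
    which is what makes prediction fast in a local mixture.
    """
    eps = 1e-9  # avoid division by zero when x hits a center exactly
    nearest = sorted(local_models, key=lambda m: abs(x - m[0]))[:k]
    weights = [1.0 / (abs(x - c) + eps) for c, _ in nearest]
    total = sum(weights)
    return sum(w * f(x) for w, (_, f) in zip(weights, nearest)) / total
```

Because each local model is trained on a small region, the cubic cost of GP learning applies only to the (much smaller) local sample sizes.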

  4. Predicting the Mineral Composition of Dust Aerosols. Part 2: Model Evaluation and Identification of Key Processes with Observations

    NASA Technical Reports Server (NTRS)

    Perlwitz, J. P.; Garcia-Pando, C. Perez; Miller, R. L.

    2015-01-01

    A global compilation of nearly sixty measurement studies is used to evaluate two methods of simulating the mineral composition of dust aerosols in an Earth system model. Both methods are based upon a Mean Mineralogical Table (MMT) that relates the soil mineral fractions to a global atlas of arid soil type. The Soil Mineral Fraction (SMF) method assumes that the aerosol mineral fractions match the fractions of the soil. The MMT is based upon soil measurements after wet sieving, a process that destroys aggregates of soil particles that would have been emitted from the original, undisturbed soil. The second method approximately reconstructs the emitted aggregates. This model is referred to as the Aerosol Mineral Fraction (AMF) method because the mineral fractions of the aerosols differ from those of the wet-sieved parent soil, partly due to reaggregation. The AMF method remedies some of the deficiencies of the SMF method in comparison to observations. Only the AMF method exhibits phyllosilicate mass at silt sizes, where phyllosilicates are abundant according to observations. In addition, the AMF quartz fraction of silt particles is in better agreement with measured values, in contrast to the overestimated SMF fraction. Measurements at distinct clay and silt particle sizes are shown to be more useful for evaluation of the models, in contrast to the sum over all particle sizes, which is susceptible to compensating errors, as illustrated by the SMF experiment. Model errors suggest that the allocation of the emitted silt fraction of each mineral into the corresponding transported size categories is an important remaining source of uncertainty. Evaluation of both models and the MMT is hindered by the limited number of size-resolved measurements of mineral content, which sparsely sample aerosols from the major dust sources. The importance of climate processes dependent upon aerosol mineral composition shows the need for global and routine mineral measurements.

  5. A Microphysics-Based Black Carbon Aging Scheme in a Global Chemical Transport Model: Constraints from HIPPO Observations

    NASA Astrophysics Data System (ADS)

    He, C.; Li, Q.; Liou, K. N.; Qi, L.; Tao, S.; Schwarz, J. P.

    2015-12-01

Black carbon (BC) aging significantly affects its distributions and radiative properties and is an important source of uncertainty in estimating BC climatic effects. Global models often use a fixed aging timescale for the hydrophobic-to-hydrophilic BC conversion or a simple parameterization. We have developed and implemented a microphysics-based BC aging scheme that accounts for condensation and coagulation processes into a global 3-D chemical transport model (GEOS-Chem). Model results are systematically evaluated by comparing with the HIPPO observations across the Pacific (67°S-85°N) during 2009-2011. We find that the microphysics-based scheme substantially increases the BC aging rate over source regions as compared with the fixed aging timescale (1.2 days), due to the condensation of sulfate and secondary organic aerosols (SOA) and coagulation with pre-existing hydrophilic aerosols. However, the microphysics-based scheme slows down BC aging over polar regions, where condensation and coagulation are rather weak. We find that BC aging is dominated by the condensation process, which accounts for ~75% of global BC aging, while the coagulation process is important over source regions where a large amount of pre-existing aerosols is available. Model results show that the fixed aging scheme tends to overestimate BC concentrations over the Pacific throughout the troposphere by a factor of 2-5 at different latitudes, while the microphysics-based scheme reduces the discrepancies by up to a factor of 2, particularly in the middle troposphere. The microphysics-based scheme developed in this work decreases BC column total concentrations at all latitudes and seasons, especially over tropical regions, leading to a large improvement in model simulations.
We are presently analyzing the impact of this scheme on global BC budget and lifetime, quantifying its uncertainty associated with key parameters, and investigating the effects of heterogeneous chemical oxidation on BC aging.
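The contrast between the two aging treatments can be sketched with first-order conversion kinetics; all rate values below are illustrative, not the GEOS-Chem parameters:

```python
import math

def aged_fraction_fixed(t_days, tau_days=1.2):
    """Hydrophilic fraction under a fixed e-folding aging timescale."""
    return 1.0 - math.exp(-t_days / tau_days)

def aged_fraction_microphysical(t_days, k_cond, k_coag):
    """Hydrophilic fraction when condensation and coagulation each
    contribute a first-order conversion rate (per day)."""
    k = k_cond + k_coag                # total hydrophobic-to-hydrophilic rate
    return 1.0 - math.exp(-k * t_days)

# Near sources (hypothetical rates): strong condensation -> faster aging
near_source = aged_fraction_microphysical(1.0, k_cond=1.5, k_coag=0.5)
# Polar conditions: weak condensation and coagulation -> slower aging
polar = aged_fraction_microphysical(1.0, k_cond=0.05, k_coag=0.01)
fixed = aged_fraction_fixed(1.0)
```

The microphysical rate brackets the fixed-timescale result: faster conversion near sources, slower conversion in the condensation-poor polar atmosphere.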

  6. Stratospheric Aerosol--Observations, Processes, and Impact on Climate

    NASA Technical Reports Server (NTRS)

Kremser, Stefanie; Thomason, Larry W.; von Hobe, Marc; Hermann, Markus; Deshler, Terry; Timmreck, Claudia; Toohey, Matthew; Stenke, Andrea; Schwarz, Joshua P.; Weigel, Ralf

    2016-01-01

Interest in stratospheric aerosol and its role in climate has increased over the last decade due to the observed increase in stratospheric aerosol since 2000 and the potential for changes in the sulfur cycle induced by climate change. This review provides an overview of the advances in stratospheric aerosol research since the last comprehensive assessment of stratospheric aerosol was published in 2006. A crucial development since 2006 is the substantial improvement in the agreement between in situ and space-based inferences of stratospheric aerosol properties during volcanically quiescent periods. Furthermore, new measurement systems and techniques, both in situ and space based, have been developed for measuring physical aerosol properties with greater accuracy and for characterizing aerosol composition. However, these changes induce challenges to constructing a long-term stratospheric aerosol climatology. Currently, changes in stratospheric aerosol levels less than 20% cannot be confidently quantified. The volcanic signals tend to mask any nonvolcanically driven change, making such changes difficult to understand. While the role of carbonyl sulfide as a substantial and relatively constant source of stratospheric sulfur has been confirmed by new observations and model simulations, large uncertainties remain with respect to the contribution from anthropogenic sulfur dioxide emissions. New evidence has been provided that stratospheric aerosol can also contain small amounts of nonsulfate matter such as black carbon and organics. Chemistry-climate models have substantially increased in number and sophistication. In many models the implementation of stratospheric aerosol processes is coupled to radiation and/or stratospheric chemistry modules to account for relevant feedback processes.

  7. Development of 4D mathematical observer models for the task-based evaluation of gated myocardial perfusion SPECT

    NASA Astrophysics Data System (ADS)

    Lee, Taek-Soo; Frey, Eric C.; Tsui, Benjamin M. W.

    2015-04-01

This paper presents two 4D mathematical observer models for the detection of motion defects in 4D gated medical images. Their performance was compared with results from human observers in detecting a regional motion abnormality in simulated 4D gated myocardial perfusion (MP) SPECT images. The first 4D mathematical observer model extends the conventional channelized Hotelling observer (CHO) based on a set of 2D spatial channels and the second is a proposed model that uses a set of 4D space-time channels. Simulated projection data were generated using the 4D NURBS-based cardiac-torso (NCAT) phantom with 16 gates/cardiac cycle. The activity distribution modelled uptake of 99mTc MIBI with normal perfusion and a regional wall motion defect. An analytical projector was used in the simulation and the filtered backprojection (FBP) algorithm was used in image reconstruction, followed by spatial and temporal low-pass filtering with various cut-off frequencies. Then, we extracted 2D image slices from each time frame and reorganized them into a set of cine images. For the first model, we applied 2D spatial channels to the cine images and generated a set of feature vectors that were stacked for the images from different slices of the heart. The process was repeated for each of the 1,024 noise realizations, and CHO and receiver operating characteristics (ROC) analysis methodologies were applied to the ensemble of the feature vectors to compute areas under the ROC curves (AUCs). For the second model, a set of 4D space-time channels was developed and applied to the sets of cine images to produce space-time feature vectors to which the CHO methodology was applied. The AUC values of the second model showed better agreement (Spearman's rank correlation (SRC) coefficient = 0.8) to human observer results than those from the first model (SRC coefficient = 0.4). The agreement with human observers indicates the proposed 4D mathematical observer model provides a good predictor of human observer performance.
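The core CHO computation (channelizing images, building the Hotelling template from channel statistics, scoring with a Mann-Whitney AUC) can be sketched on toy 1-D "images"; the random orthonormal channels and defect profile below are stand-ins for the spatial and space-time channels used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_ch, n_real = 64, 4, 200

# Hypothetical channel matrix: rows of a random orthonormal basis stand in
# for Gabor / difference-of-Gaussian channels used in practice.
U = np.linalg.qr(rng.standard_normal((n_pix, n_ch)))[0].T   # (n_ch, n_pix)

signal = np.zeros(n_pix)
signal[20:28] = 1.0                                         # toy "defect" profile

absent = rng.standard_normal((n_real, n_pix))               # defect-absent images
present = rng.standard_normal((n_real, n_pix)) + signal     # defect-present images

vA = absent @ U.T                                           # channelized features
vP = present @ U.T

# Hotelling template from the pooled channel covariance
S = 0.5 * (np.cov(vA, rowvar=False) + np.cov(vP, rowvar=False))
w = np.linalg.solve(S, vP.mean(axis=0) - vA.mean(axis=0))

tA, tP = vA @ w, vP @ w                                     # decision variables
auc = (tP[:, None] > tA[None, :]).mean()                    # Mann-Whitney AUC
```

The AUC exceeds 0.5 because the channels capture part of the defect energy; in the paper this statistic is what is compared against human-observer ROC areas.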

  8. Creating "Intelligent" Climate Model Ensemble Averages Using a Process-Based Framework

    NASA Astrophysics Data System (ADS)

    Baker, N. C.; Taylor, P. C.

    2014-12-01

The CMIP5 archive contains future climate projections from over 50 models provided by dozens of modeling centers from around the world. Individual model projections, however, are subject to biases created by structural model uncertainties. As a result, ensemble averaging of multiple models is often used to add value to model projections: consensus projections have been shown to consistently outperform individual models. Previous reports for the IPCC establish climate change projections based on an equal-weighted average of all model projections. However, certain models reproduce climate processes better than other models. Should models be weighted based on performance? Unequally weighted ensemble averages have previously been constructed using a variety of mean state metrics. What metrics are most relevant for constraining future climate projections? This project develops a framework for systematically testing metrics in models to identify optimal metrics for unequally weighting multi-model ensembles. A unique aspect of this project is the construction and testing of climate process-based model evaluation metrics. A climate process-based metric is defined as a metric based on the relationship between two physically related climate variables, e.g., outgoing longwave radiation and surface temperature. Metrics are constructed using high-quality Earth radiation budget data from NASA's Clouds and Earth's Radiant Energy System (CERES) instrument and surface temperature data sets. It is found that regional values of tested quantities can vary significantly when comparing weighted and unweighted model ensembles. For example, one tested metric weights the ensemble by how well models reproduce the time-series probability distribution of the cloud forcing component of reflected shortwave radiation. The weighted ensemble for this metric indicates lower simulated precipitation (up to 0.7 mm/day) in tropical regions than the unweighted ensemble: since CMIP5 models have been shown to
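The weighting step itself is simple once a per-model skill score exists; a minimal sketch with hypothetical skill values standing in for the process-based metric scores:

```python
import numpy as np

def weighted_ensemble(projections, skill_scores):
    """Skill-weighted ensemble mean.

    projections: (n_models, ...) array of projected fields;
    skill_scores: per-model metric values where larger = better agreement
    with observations (e.g. inverse error against CERES-based metrics).
    """
    w = np.asarray(skill_scores, dtype=float)
    w = w / w.sum()                                   # normalize to unit sum
    return np.tensordot(w, np.asarray(projections, dtype=float), axes=1)

# Three hypothetical model projections of a scalar quantity
proj = np.array([2.0, 3.0, 10.0])
equal = weighted_ensemble(proj, [1, 1, 1])            # equal-weighted mean
skewed = weighted_ensemble(proj, [4, 4, 2])           # downweights the outlier
```

Downweighting the poorly scoring third model pulls the consensus value toward the better models, which is the mechanism behind the precipitation shift described above.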

  9. MODFLOW-2000, the U.S. Geological Survey modular ground-water model; user guide to the observation, sensitivity, and parameter-estimation processes and three post-processing programs

    USGS Publications Warehouse

    Hill, Mary C.; Banta, E.R.; Harbaugh, A.W.; Anderman, E.R.

    2000-01-01

    This report documents the Observation, Sensitivity, and Parameter-Estimation Processes of the ground-water modeling computer program MODFLOW-2000. The Observation Process generates model-calculated values for comparison with measured, or observed, quantities. A variety of statistics is calculated to quantify this comparison, including a weighted least-squares objective function. In addition, a number of files are produced that can be used to compare the values graphically. The Sensitivity Process calculates the sensitivity of hydraulic heads throughout the model with respect to specified parameters using the accurate sensitivity-equation method. These are called grid sensitivities. If the Observation Process is active, it uses the grid sensitivities to calculate sensitivities for the simulated values associated with the observations. These are called observation sensitivities. Observation sensitivities are used to calculate a number of statistics that can be used (1) to diagnose inadequate data, (2) to identify parameters that probably cannot be estimated by regression using the available observations, and (3) to evaluate the utility of proposed new data. The Parameter-Estimation Process uses a modified Gauss-Newton method to adjust values of user-selected input parameters in an iterative procedure to minimize the value of the weighted least-squares objective function. Statistics produced by the Parameter-Estimation Process can be used to evaluate estimated parameter values; statistics produced by the Observation Process and post-processing program RESAN-2000 can be used to evaluate how accurately the model represents the actual processes; statistics produced by post-processing program YCINT-2000 can be used to quantify the uncertainty of model simulated values. Parameters are defined in the Ground-Water Flow Process input files and can be used to calculate most model inputs, such as: for explicitly defined model layers, horizontal hydraulic conductivity
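The Parameter-Estimation Process minimizes the weighted least-squares objective with a modified Gauss-Newton method. A generic, unmodified Gauss-Newton iteration on a toy model, with a finite-difference Jacobian standing in for MODFLOW-2000's exact sensitivity-equation method, looks like:

```python
import numpy as np

def gauss_newton(f, y, w, b0, n_iter=25, eps=1e-6):
    """Minimize S(b) = sum_i w_i * (y_i - f(b)_i)**2 by Gauss-Newton
    iteration (no damping or Marquardt modification, unlike MODFLOW)."""
    b = np.asarray(b0, dtype=float).copy()
    W = np.diag(w)
    for _ in range(n_iter):
        r = y - f(b)                                    # residual vector
        J = np.empty((len(y), len(b)))
        for j in range(len(b)):                         # sensitivities df/db_j
            db = np.zeros_like(b)
            db[j] = eps
            J[:, j] = (f(b + db) - f(b)) / eps
        b += np.linalg.solve(J.T @ W @ J, J.T @ W @ r)  # normal equations
    return b

# Toy "model": exponential decline with unknown amplitude and rate
x = np.linspace(0.0, 4.0, 20)
true = np.array([2.0, 0.7])
obs = true[0] * np.exp(-true[1] * x)                    # noise-free observations
fit = gauss_newton(lambda b: b[0] * np.exp(-b[1] * x), obs,
                   w=np.ones_like(x), b0=[1.0, 1.0])
```

With noise-free data the iteration recovers the generating parameters; with real observations, the residual statistics described above diagnose data adequacy and parameter identifiability.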

  10. Space-based infrared scanning sensor LOS determination and calibration using star observation

    NASA Astrophysics Data System (ADS)

    Chen, Jun; Xu, Zhan; An, Wei; Deng, Xin-Pu; Yang, Jun-Gang

    2015-10-01

This paper provides a novel methodology for removing sensor bias from a space-based infrared (IR) system (SBIRS) through the use of stars detected in the background field of the sensor. A space-based IR system uses the line of sight (LOS) of a target for target location; LOS determination and calibration are therefore key preconditions for accurate location and tracking of targets, and the LOS calibration of a scanning sensor is one of the main difficulties. Subsequent changes of the sensor bias are not taken into account in the conventional LOS determination and calibration process. Based on an analysis of the imaging process of the scanning sensor, a theoretical model for estimating the bias angles from star observations is proposed: a process model for the bias angles and an observation model for the stars are established, an extended Kalman filter (EKF) is used to estimate the bias angles, and the sensor LOS is then calibrated. Time-domain simulation results indicate that the proposed method determines and calibrates the sensor LOS with high precision and smooth performance, meeting the timeliness and precision requirements of the target tracking process in a space-based IR tracking system.
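Because the toy observation model below is linear (each star detection directly measures the pointing offset, so H = I), the EKF reduces to a standard Kalman filter; all bias values and noise levels are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
true_bias = np.array([0.003, -0.002])        # hypothetical bias angles (rad)

# Random-walk process model for the slowly drifting bias (F = I)
Q = np.eye(2) * 1e-10                        # process noise covariance
R = np.eye(2) * 1e-6                         # star-centroid noise covariance
x = np.zeros(2)                              # bias estimate
P = np.eye(2) * 1e-4                         # estimate covariance

for _ in range(500):                         # one update per star detection
    P = P + Q                                # predict step
    z = true_bias + rng.normal(0.0, 1e-3, 2) # observed star offset
    K = P @ np.linalg.inv(P + R)             # Kalman gain
    x = x + K @ (z - x)                      # update state
    P = (np.eye(2) - K) @ P                  # update covariance
```

Each star observation tightens the bias estimate, which is then subtracted to calibrate the LOS; the real filter additionally carries the nonlinear scanning-geometry Jacobians.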

  11. Integration of a three-dimensional process-based hydrological model into the Object Modeling System

    USDA-ARS?s Scientific Manuscript database

    The integration of a spatial process model into an environmental modelling framework can enhance the model’s capabilities. We present the integration of the GEOtop model into the Object Modeling System (OMS) version 3.0 and illustrate its application in a small watershed. GEOtop is a physically base...

  12. Single baseline GLONASS observations with VLBI: data processing and first results

    NASA Astrophysics Data System (ADS)

    Tornatore, V.; Haas, R.; Duev, D.; Pogrebenko, S.; Casey, S.; Molera Calvés, G.; Keimpema, A.

    2011-07-01

Several tests to observe signals transmitted by GLONASS (GLObal NAvigation Satellite System) satellites have been performed using the geodetic VLBI (Very Long Baseline Interferometry) technique. The radio telescopes involved in these experiments were Medicina (Italy) and Onsala (Sweden), both equipped with L-band receivers. Observations at the stations were performed using the standard Mark4 VLBI data acquisition rack and Mark5A disk-based recorders. The goals of the observations were to develop and test the scheduling, signal acquisition and processing routines and to verify the full tracking pipeline, leading up to the cross-correlation of the recorded data on the Onsala-Medicina baseline. The natural radio source 3C286 was used as a calibrator before the start of the satellite observation sessions. Delay models, including tropospheric and ionospheric corrections, that are consistent for both far-field and near-field sources are under development. Correlation of the calibrator signal has been performed using the DiFX software, while the satellite signals have been processed using the narrow-band approach with the Metsähovi software and analysed with a near-field delay model. Delay models for both the calibrator and satellite signals, using the same geometrical, tropospheric and ionospheric models, are under investigation to make a correlation of the satellite signals possible.

  13. Evaluation of atmospheric dust prediction models using ground-based observations

    NASA Astrophysics Data System (ADS)

    Terradellas, Enric; María Baldasano, José; Cuevas, Emilio; Basart, Sara; Huneeus, Nicolás; Camino, Carlos; Dundar, Cinhan; Benincasa, Francesco

    2013-04-01

An important step in the numerical prediction of mineral dust is model evaluation, aimed at assessing the model's performance in forecasting the atmospheric dust content and at pointing to new directions in model development and improvement. The first problem in such an evaluation is the scarcity of routine ground-based observations intended for dust monitoring. An alternative option is the use of satellite products, which have the advantage of large spatial coverage and regular availability. However, they also have numerous drawbacks that make quantitative retrievals of aerosol-related variables difficult and imprecise. This work presents the use of different ground-based observing systems for the evaluation of dust models in the Regional Center for Northern Africa, Middle East and Europe of the World Meteorological Organization (WMO) Sand and Dust Storm Warning Advisory and Assessment System (SDS-WAS). The dust optical depth at 550 nm forecast by different models is regularly compared with AERONET measurements of Aerosol Optical Depth (AOD) for 40 selected stations. Photometric measurements are a powerful tool for remote sensing of the atmosphere, allowing retrieval of aerosol properties such as AOD. This variable integrates the contribution of different aerosol types, but may be complemented with spectral information that enables hypotheses about the nature of the particles. The comparison is restricted to cases with low Ångström exponent values in order to ensure that coarse mineral dust is the dominant aerosol type. In addition to the column dust load, it is important to evaluate dust surface concentration and dust vertical profiles. Air quality monitoring stations are the main source of data for the evaluation of surface concentration. However, these stations are concentrated in populated and industrialized areas around the Mediterranean. In the present contribution, results of different models are compared with observations of PM10 from the Turkish air quality network for

  14. The Gravitational Process Path (GPP) model (v1.0) - a GIS-based simulation framework for gravitational processes

    NASA Astrophysics Data System (ADS)

    Wichmann, Volker

    2017-09-01

    The Gravitational Process Path (GPP) model can be used to simulate the process path and run-out area of gravitational processes based on a digital terrain model (DTM). The conceptual model combines several components (process path, run-out length, sink filling and material deposition) to simulate the movement of a mass point from an initiation site to the deposition area. For each component several modeling approaches are provided, which makes the tool configurable for different processes such as rockfall, debris flows or snow avalanches. The tool can be applied to regional-scale studies such as natural hazard susceptibility mapping but also contains components for scenario-based modeling of single events. Both the modeling approaches and precursor implementations of the tool have proven their applicability in numerous studies, also including geomorphological research questions such as the delineation of sediment cascades or the study of process connectivity. This is the first open-source implementation, completely re-written, extended and improved in many ways. The tool has been committed to the main repository of the System for Automated Geoscientific Analyses (SAGA) and thus will be available with every SAGA release.
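One simple choice for the process-path component, steepest descent over a gridded DTM that stops in a sink, can be sketched as follows (the GPP tool itself offers several alternative approaches per component; this is not its implementation):

```python
import numpy as np

def process_path(dtm, start, max_steps=100):
    """Trace a mass point downslope by steepest descent over the 8
    neighbouring cells, stopping at a local sink or flat area."""
    path = [start]
    r, c = start
    for _ in range(max_steps):
        nbrs = [(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr or dc)
                and 0 <= r + dr < dtm.shape[0] and 0 <= c + dc < dtm.shape[1]]
        nxt = min(nbrs, key=lambda p: dtm[p])     # lowest neighbour
        if dtm[nxt] >= dtm[r, c]:                 # sink: deposit and stop
            break
        r, c = nxt
        path.append(nxt)
    return path

# Toy inclined plane: elevation decreases with row index
dtm = np.add.outer(np.arange(5, 0, -1.0), np.zeros(4))
path = process_path(dtm, (0, 2))                  # initiation site at the top
```

Run-out length and material deposition would be added on top of this path component, as in the combined conceptual model described above.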

  15. Generating structure from experience: A retrieval-based model of language processing.

    PubMed

    Johns, Brendan T; Jones, Michael N

    2015-09-01

Standard theories of language generally assume that some abstraction of linguistic input is necessary to create higher level representations of linguistic structures (e.g., a grammar). However, the importance of individual experiences with language has recently been emphasized by both usage-based theories (Tomasello, 2003) and grounded and situated theories (e.g., Zwaan & Madden, 2005). Following the usage-based approach, we present a formal exemplar model that stores instances of sentences across a natural language corpus, applying recent advances from models of semantic memory. In this model, an exemplar memory is used to generate expectations about the future structure of sentences, using a mechanism for prediction in language processing (Altmann & Mirković, 2009). The model successfully captures a broad range of behavioral effects: reduced relative clause processing (Reali & Christiansen, 2007), the role of contextual constraint (Rayner & Well, 1996), and event knowledge activation (Ferretti, Kutas, & McRae, 2007), among others. We further demonstrate how perceptual knowledge could be integrated into this exemplar-based framework, with the goal of grounding language processing in perception. Finally, we illustrate how an exemplar memory system could have been used in the cultural evolution of language. The model provides evidence that an impressive amount of language processing may be bottom-up in nature, built on the storage and retrieval of individual linguistic experiences.
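The retrieval idea can be caricatured with a tiny corpus and word-overlap similarity in place of the paper's semantic vectors; the squared-overlap weighting below is a simplification chosen so that full-context matches dominate the vote, not the published model:

```python
from collections import Counter

# Toy exemplar memory: every stored sentence is an experience
corpus = [
    "the dog chased the cat",
    "the dog bit the man",
    "a cat chased a mouse",
    "the man fed the dog",
    "the dog chased a bird",
]

def similarity(a, b):
    """Word-overlap (multiset intersection) between two word sequences."""
    return sum((Counter(a) & Counter(b)).values())

def expected_next(context):
    """Predict the next word by a similarity-weighted vote over the
    continuations of stored exemplars (squared overlap as the weight)."""
    ctx = context.split()
    votes = Counter()
    for sent in corpus:
        words = sent.split()
        for i in range(len(words) - 1):
            prefix = words[max(0, i + 1 - len(ctx)):i + 1]  # trailing window
            s = similarity(ctx, prefix)
            if s:
                votes[words[i + 1]] += s * s
    return votes.most_common(1)[0][0]

prediction = expected_next("the dog")
```

Exemplars whose stored context fully matches "the dog" dominate the vote, so the retrieved expectation is a verb continuation, with no grammar rule ever being abstracted.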

  16. Modeling and validation of photometric characteristics of space targets oriented to space-based observation.

    PubMed

    Wang, Hongyuan; Zhang, Wei; Dong, Aotuo

    2012-11-10

A modeling and validation method for the photometric characteristics of space targets was presented in order to track and identify different satellites effectively. The background radiation characteristics models of the target were built based on blackbody radiation theory. The geometry of the target was described by surface equations in its body coordinate system. The material characteristics of the target surface were described by a bidirectional reflectance distribution function (BRDF) model, which accounts for the Gaussian statistics and microscale self-shadowing of the surface and is obtained by measurement and modeling in advance. The surfaces of the target contributing to the observation system were determined by coordinate transformation according to the relative positions of the space-based target, the background radiation sources, and the observation platform. A mathematical model of the photometric characteristics of the space target was then built by summing the reflection components of all the contributing surfaces. Photometric characteristics simulation of the space-based target was achieved from its given geometrical dimensions, physical parameters, and orbital parameters. Experimental validation was performed with a scale model of the satellite. The calculated results fit well with the measured results, which indicates that the modeling method for the photometric characteristics of the space target is correct.

  17. Geocenter Coordinates from a Combined Processing of LEO and Ground-based GPS Observations

    NASA Astrophysics Data System (ADS)

    Männel, Benjamin; Rothacher, Markus

    2017-04-01

The GPS observations provided by the global IGS (International GNSS Service) tracking network play an important role for the realization of a unique terrestrial reference frame that is accurate enough to allow the monitoring of the Earth's system. Combining these ground-based data with GPS observations tracked by high-quality dual-frequency receivers on-board Low Earth Orbiters (LEO) might help to further improve the realization of the terrestrial reference frame and the estimation of the geocenter coordinates, GPS satellite orbits and Earth rotation parameters (ERP). To assess the scope of improvement, we processed a network of 50 globally distributed and stable IGS stations together with four LEOs (GRACE-A, GRACE-B, OSTM/Jason-2 and GOCE) over a time interval of three years (2010-2012). To ensure fully consistent solutions, the zero-difference phase observations of the ground stations and LEOs were processed in a common least-squares adjustment, estimating GPS orbits, LEO orbits, station coordinates, ERPs, site-specific tropospheric delays, satellite and receiver clocks and ambiguities. We present the significant impact of the individual LEOs and of a combination of all four LEOs on geocenter coordinates derived by using a translational approach (also called the network shift approach). In addition, we present geocenter coordinates derived from the same set of GPS observations by using a unified approach, which combines the translational and the degree-one approaches by estimating translations and surface deformations simultaneously. Based on comparisons against each other and against geocenter time series derived by other techniques, the effect of the selected approach is assessed.

  18. Density and crosswind from GOCE - comparisons with other satellite data, ground-based observations and models

    NASA Astrophysics Data System (ADS)

    Doornbos, E.; Bruinsma, S.; Conde, M.; Forbes, J. M.

    2013-12-01

Observations made by the European Space Agency (ESA) Gravity field and steady-state Ocean Circulation Explorer (GOCE) satellite have enabled the production of a spin-off product: high-resolution, high-accuracy data on thermosphere density, derived from aerodynamic analysis of acceleration measurements. In this regard, the mission follows in the footsteps of the earlier accelerometer-carrying gravity missions CHAMP and GRACE. The extremely high accuracy and redundancy of the six accelerometers carried by GOCE in its gravity gradiometer instrument have provided new insights into the performance and calibration of these instruments. Housekeeping data on the activation of the GOCE drag-free control thruster, made available by ESA, has made the production of the thermosphere data possible. The long-duration, low-altitude flight of GOCE, enabled by its drag-free control system, has ensured the presence of very large aerodynamic accelerations throughout its lifetime, which has been beneficial for the accurate derivation of data on the wind speed encountered by the satellite. We have compared the GOCE density observations with data from CHAMP and GRACE. The crosswind data have been compared with CHAMP observations, as well as ground-based observations made using Scanning Doppler Imagers in Alaska. Models of the thermosphere can provide a bigger, global picture, required as a background in the interpretation of the local space- and ground-based measurements. The comparison of these different sources of information on thermosphere density and wind, each with their own strengths and weaknesses, can provide scientific insight, as well as inputs for further refinement of the processing algorithms and models that are part of the various techniques. (Figure caption: density and crosswind data derived from GOCE (dusk-dawn) and CHAMP (midnight-noon) satellite accelerometer data, superimposed over HWM07 modelled horizontal wind vectors.)

  19. Thermoplastic matrix composite processing model

    NASA Technical Reports Server (NTRS)

    Dara, P. H.; Loos, A. C.

    1985-01-01

The effects of the processing parameters pressure, temperature, and time on the quality of continuous graphite fiber reinforced thermoplastic matrix composites were quantitatively assessed by defining the extent to which intimate contact and bond formation have occurred at successive ply interfaces. Two models are presented that predict the extents to which the ply interfaces have achieved intimate contact and cohesive strength. The models are based on experimental observation of compression molded laminates and neat resin conditions, respectively. The theory of autohesion (or self-diffusion) is identified as the mechanism by which the plies bond to themselves. Theoretical predictions from reptation theory relating autohesive strength to contact time are used to explain the effects of the processing parameters on the observed experimental strengths. The application of a time-temperature relationship for autohesive strength predictions is evaluated. A viscoelastic compression molding model of a tow was developed to explain the phenomenon by which the prepreg ply interfaces develop intimate contact.
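Reptation theory predicts that autohesive strength grows with the quarter power of contact time until full cohesive strength is reached; combined with an Arrhenius time-temperature shift, that relationship can be sketched as follows (all parameter values are illustrative, not measured for any resin):

```python
import math

def autohesive_strength_ratio(contact_time, reptation_time):
    """Reptation theory: interfacial strength / cohesive strength grows as
    (t_contact / t_reptation)**(1/4), saturating at full strength."""
    return min((contact_time / reptation_time) ** 0.25, 1.0)

def reptation_time(T_kelvin, t_ref=100.0, T_ref=600.0, Ea_over_R=8000.0):
    """Hypothetical Arrhenius time-temperature shift of the reptation time."""
    return t_ref * math.exp(Ea_over_R * (1.0 / T_kelvin - 1.0 / T_ref))

# Higher processing temperature -> shorter reptation time -> faster bonding
s_hot = autohesive_strength_ratio(10.0, reptation_time(650.0))
s_cold = autohesive_strength_ratio(10.0, reptation_time(550.0))
```

The same contact time yields a stronger bond at the higher temperature, which is the coupling of the pressure-temperature-time parameters that the models above quantify.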

  20. Modeling and Simulation of Metallurgical Process Based on Hybrid Petri Net

    NASA Astrophysics Data System (ADS)

    Ren, Yujuan; Bao, Hong

    2016-11-01

In order to achieve the energy saving and emission reduction goals of iron and steel enterprises, an increasing number of modeling and simulation technologies are used to study and analyse the metallurgical production process. In this paper, the basic principles of Hybrid Petri nets are used to model and analyse the metallurgical process. Firstly, the definition of the Hybrid Petri Net System of Metallurgical Process (MPHPNS) and its modeling theory are proposed. Secondly, a model of MPHPNS based on material flow is constructed. The dynamic flow of materials and the real-time change of each technological state in the metallurgical process are simulated vividly by using this model. The simulation implements interaction between the continuous event dynamic system and the discrete event dynamic system at the same level, and plays a positive role in production decision-making.

  1. Soil Methane uptake Model (MeMo): a process based model for global methane consumption by soils

    NASA Astrophysics Data System (ADS)

    Murguia-Flores, F.; Arndt, S.; Ganesan, A.; Hornibrook, E. R. C.; Murray-Tortarolo, G.

    2016-12-01

Atmospheric methane (CH4) is a powerful greenhouse gas, responsible for 20% of global warming. The only terrestrial and biological sink is uptake in soils by methanotrophic bacteria; however, there is large spatial and temporal heterogeneity in the magnitude of this sink. One way to obtain a global understanding of this process is to use a mathematical model to simulate the mechanisms of the underlying physical and biological drivers. Here we present the soil Methane uptake Model (MeMo), a process-based model for global methane consumption by soils. We have built on the previous models of Ridgwell et al. (1999) and Curry et al. (2007) by making several advances. First, a general analytical solution of the one-dimensional diffusion-reaction equation was implemented that accounts for a maximum uptake depth and for a CH4 flux coming from below the surface (i.e. CH4 production in the soil). Secondly, we revisited and improved the treatment of the effects of nitrogen inhibition, soil moisture and soil temperature on CH4 uptake in the light of newly available data and advances in our understanding of these drivers. Using observed forcing data, we estimated a global mean CH4 uptake of 31.2±1.2 Tg y-1 for the period 1990-2009, with an increasing trend of 0.1 Tg y-2. Our model reproduces the latitudinal pattern of uptake shown by field observations, with the highest uptake per unit area occurring over dry tropical forest and the lowest uptake in the polar desert. The highest seasonality occurred in the Northern Hemisphere, showing that the main driver of variability in a given year is a combination of temperature and soil moisture. Our model showed that CH4 uptake is reduced relative to previous studies by approximately 10% in the regions with the highest nitrogen deposition: East Asia and Europe. Finally, our results suggest that more field measurements are needed to improve the modelling of the process, such as the basal oxidation rate for different ecosystems, the Q10
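The backbone of such diffusion-reaction models is the classic semi-infinite-column solution: with diffusivity D and first-order oxidation rate k, solving D·C'' = k·C with C(0) = c_atm gives C(z) = c_atm·exp(-z·sqrt(k/D)) and a surface uptake flux of c_atm·sqrt(k·D). MeMo's additions (finite uptake depth, below-surface CH4 production, inhibition factors) are omitted from this sketch, and all parameter values are illustrative:

```python
import math

def ch4_uptake_flux(c_atm, D, k):
    """Steady-state CH4 flux into a semi-infinite soil column:
    F = c_atm * sqrt(k * D), from C(z) = c_atm * exp(-z * sqrt(k / D))."""
    return c_atm * math.sqrt(k * D)

c_atm = 1.2e-3   # CH4 concentration in soil air at the surface, g m-3
D = 2.0e-6       # effective diffusivity of CH4 in soil air, m2 s-1
k = 1.0e-4       # first-order oxidation rate constant, s-1
flux = ch4_uptake_flux(c_atm, D, k)   # g CH4 m-2 s-1
```

The square-root dependence means moisture (which lowers D) and temperature or nitrogen inhibition (which alter k) both damp the flux sublinearly, consistent with the combined temperature-moisture control on variability described above.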

  2. Modelling and simulating decision processes of linked lives: An approach based on concurrent processes and stochastic race.

    PubMed

    Warnke, Tom; Reinhardt, Oliver; Klabunde, Anna; Willekens, Frans; Uhrmacher, Adelinde M

    2017-10-01

    Individuals' decision processes play a central role in understanding modern migration phenomena and other demographic processes. Their integration into agent-based computational demography depends largely on suitable support by a modelling language. We are developing the Modelling Language for Linked Lives (ML3) to describe the diverse decision processes of linked lives succinctly in continuous time. The context of individuals is modelled by networks the individual is part of, such as family ties and other social networks. Central concepts, such as behaviour conditional on agent attributes, age-dependent behaviour, and stochastic waiting times, are tightly integrated in the language. Thereby, alternative decisions are modelled by concurrent processes that compete by stochastic race. Using a migration model, we demonstrate how this allows for compact description of complex decisions, here based on the Theory of Planned Behaviour. We describe the challenges for the simulation algorithm posed by stochastic race between multiple concurrent complex decisions.
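The stochastic-race semantics can be sketched directly: each alternative decision is a concurrent process that draws an exponential waiting time from its hazard rate, and the earliest draw wins. The rates below are hypothetical, not taken from the migration model:

```python
import random

def stochastic_race(rates, rng):
    """Alternative decisions compete by stochastic race: each draws an
    exponential waiting time from its rate; the earliest one is executed."""
    times = {name: rng.expovariate(rate) for name, rate in rates.items()}
    winner = min(times, key=times.get)
    return winner, times[winner]

rng = random.Random(42)
# Hypothetical hazard rates (per year) for one agent's options
rates = {"migrate": 0.05, "stay": 0.90, "move_within_region": 0.25}

wins = {name: 0 for name in rates}
for _ in range(10_000):
    winner, _ = stochastic_race(rates, rng)
    wins[winner] += 1
```

By the properties of competing exponentials, each option wins with probability proportional to its rate, which is what lets ML3 express complex decisions compactly in continuous time.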

  3. Space-based Observational Constraints for 1-D Plume Rise Models

    NASA Technical Reports Server (NTRS)

    Martin, Maria Val; Kahn, Ralph A.; Logan, Jennifer A.; Paguam, Ronan; Wooster, Martin; Ichoku, Charles

    2012-01-01

    We use a space-based plume height climatology derived from observations made by the Multi-angle Imaging SpectroRadiometer (MISR) instrument aboard the NASA Terra satellite to evaluate the ability of a plume-rise model currently embedded in several atmospheric chemical transport models (CTMs) to produce accurate smoke injection heights. We initialize the plume-rise model with assimilated meteorological fields from the NASA Goddard Earth Observing System and estimated fuel moisture content at the location and time of the MISR measurements. Fire properties that drive the plume-rise model are difficult to estimate, so we test the model with four estimates of active fire area and four of total heat flux, obtained using empirical data and Moderate Resolution Imaging Spectroradiometer (MODIS) fire radiative power (FRP) thermal anomalies available for each MISR plume. We show that the model is not able to reproduce the plume heights observed by MISR over the range of conditions studied (the maximum r2 obtained in all configurations is 0.3). The model also fails to determine which plumes are in the free troposphere (according to MISR), key information needed for atmospheric models to properly simulate smoke dispersion. We conclude that embedding a plume-rise model using currently available fire constraints in large-scale atmospheric studies remains a difficult proposition. However, we demonstrate the degree to which the fire dynamical heat flux (related to active fire area and sensible heat flux) and atmospheric stability structure influence plume rise, although other, less well constrained factors (e.g., entrainment) may also be significant. Using atmospheric stability conditions, MODIS FRP, and MISR plume heights, we offer some constraints on the main physical factors that drive smoke plume rise.
We find that smoke plumes reaching high altitudes are characterized by higher FRP and weaker atmospheric stability conditions than those at low altitude, which tend to remain confined

  4. Process-based modelling of NH3 exchange with grazed grasslands

    NASA Astrophysics Data System (ADS)

    Móring, Andrea; Vieno, Massimo; Doherty, Ruth M.; Milford, Celia; Nemitz, Eiko; Twigg, Marsailidh M.; Horváth, László; Sutton, Mark A.

    2017-09-01

    In this study the GAG model, a process-based ammonia (NH3) emission model for urine patches, was extended and applied at the field scale. The new model (GAG_field) was tested over two modelling periods for which micrometeorological NH3 flux data were available. Acknowledging uncertainties in the measurements, the model was able to simulate the main features of the observed fluxes. The temporal evolution of the simulated NH3 exchange flux was found to be dominated by NH3 emission from the urine patches, offset by simultaneous NH3 deposition to areas of the field not affected by urine. The simulations show how NH3 fluxes over a grazed field on a given day can be affected by urine patches deposited several days earlier, linked to the interaction of volatilization processes with soil pH dynamics. Sensitivity analysis showed that GAG_field was more sensitive to soil buffering capacity (β), field capacity (θfc) and permanent wilting point (θpwp) than the patch-scale model. The reason for these different sensitivities is twofold. Firstly, the difference originates from the different scales. Secondly, the difference can be explained by the different initial soil pH and physical properties, which determine the maximum volume of urine that can be stored in the NH3 source layer. It was found that in the case of urine patches with a higher initial soil pH and higher initial soil water content, the sensitivity of NH3 exchange to β was stronger. Also, in the case of a higher initial soil water content, NH3 exchange was more sensitive to changes in θfc and θpwp. The sensitivity analysis showed that the nitrogen content of urine (cN) is associated with high uncertainty in the simulated fluxes. However, model experiments based on cN values randomized from an estimated statistical distribution indicated that this uncertainty is considerably smaller in practice. Finally, GAG_field was tested with a constant soil pH of 7.5. The variation of NH3 fluxes simulated in this way

  5. A joint modelling approach for multistate processes subject to resolution and under intermittent observations.

    PubMed

    Yiu, Sean; Tom, Brian

    2017-02-10

    Multistate processes provide a convenient framework when interest lies in characterising the transition intensities between a set of defined states. If, however, there is an unobserved event of interest (not known if and when the event occurs), which when it occurs stops future transitions in the multistate process from occurring, then drawing inference from the joint multistate and event process can be problematic. In health studies, a particular example of this could be resolution, where a resolved patient can no longer experience any further symptoms, and this is explored here for illustration. A multistate model that includes the state space of the original multistate process but partitions the state representing absent symptoms into a latent absorbing resolved state and a temporary transient state of absent symptoms is proposed. The expanded state space explicitly distinguishes between resolved and temporary spells of absent symptoms through disjoint states and allows the uncertainty of not knowing if resolution has occurred to be easily captured when constructing the likelihood; observations of absent symptoms can be considered to be temporary or having resulted from resolution. The proposed methodology is illustrated on a psoriatic arthritis data set where the outcome of interest is a set of intermittently observed disability scores. Estimated probabilities of resolving are also obtained from the model. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
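
    The expanded state space can be illustrated as a small continuous-time Markov model: a generator matrix (toy numbers of my own, not fitted to the psoriatic arthritis data) carries a latent absorbing "resolved" state alongside the transient "absent symptoms" state:

    ```python
    import numpy as np

    # States: 0 = symptoms present, 1 = symptoms absent (temporary),
    # 2 = resolved (latent, absorbing). Observations cannot distinguish
    # states 1 and 2; the expanded state space separates them.
    Q = np.array([[-0.5,  0.4, 0.1],
                  [ 0.3, -0.5, 0.2],
                  [ 0.0,  0.0, 0.0]])  # last row: resolved state is absorbing

    def transition_probabilities(Q, t, terms=40):
        """P(t) = expm(Q*t) via a plain Taylor series (adequate for the
        small ||Q*t|| used here)."""
        A = Q * t
        P = np.eye(Q.shape[0])
        term = np.eye(Q.shape[0])
        for k in range(1, terms):
            term = term @ A / k
            P += term
        return P

    P = transition_probabilities(Q, t=2.0)
    # Each row of P is a probability distribution; P[2] shows that the
    # resolved state is never left, so resolution stops future transitions.
    ```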

  6. Multi-scale Drivers of Variations in Atmospheric Evaporative Demand Based on Observations and Physically-based Modeling

    NASA Astrophysics Data System (ADS)

    Peng, L.; Sheffield, J.; Li, D.

    2015-12-01

    Evapotranspiration (ET) is a key link between the availability of water resources and climate change and climate variability. Variability of ET has important environmental and socioeconomic implications for managing hydrological hazards, food and energy production. Although there have been many observational and modeling studies of ET, how ET has varied and the drivers of the variations at different temporal scales remain elusive. Much of the uncertainty comes from the atmospheric evaporative demand (AED), which is the combined effect of radiative and aerodynamic controls. The inconsistencies among modeled AED estimates and the limited observational data may originate from multiple sources including the limited time span and uncertainties in the data. To fully investigate and untangle the intertwined drivers of AED, we present a spectrum analysis to identify key controls of AED across multiple temporal scales. We use long-term records of observed pan evaporation for 1961-2006 from 317 weather stations across China and physically-based model estimates of potential evapotranspiration (PET). The model estimates are based on surface meteorology and radiation derived from reanalysis, satellite retrievals and station data. Our analyses show that temperature plays a dominant role in regulating variability of AED at the inter-annual scale. At the monthly and seasonal scales, the primary control of AED shifts from radiation in humid regions to humidity in dry regions. Unlike many studies focusing on the spatial pattern of ET drivers based on a traditional supply and demand framework, this study underlines the importance of temporal scales when discussing controls of ET variations.

  7. A neuroanatomical model of space-based and object-centered processing in spatial neglect.

    PubMed

    Pedrazzini, Elena; Schnider, Armin; Ptak, Radek

    2017-11-01

    Visual attention can be deployed in space-based or object-centered reference frames. Right-hemisphere damage may lead to distinct deficits of space- or object-based processing, and such dissociations are thought to underlie the heterogeneous nature of spatial neglect. Previous studies have suggested that object-centered processing deficits (such as in copying, reading or line bisection) result from damage to retro-rolandic regions, while impaired spatial exploration reflects damage to more anterior regions. However, this evidence is based on small samples and heterogeneous tasks. Here, we tested a theoretical model of neglect that takes into account space-based and object-centered processing and relates them to neuroanatomical predictors. One hundred and one right-hemisphere-damaged patients were examined with classic neuropsychological tests and structural brain imaging. Relations between neglect measures and damage to the temporal-parietal junction, intraparietal cortex, insula and middle frontal gyrus were examined with two structural equation models, assuming that object-centered processing (involved in line bisection and single-word reading) and space-based processing (involved in cancelation tasks) either represented a unique latent variable or two distinct variables. Of these two models, the latter had better explanatory power. Damage to the intraparietal sulcus was a significant predictor of object-centered, but not space-based processing, while damage to the temporal-parietal junction predicted space-based, but not object-centered processing. Space-based and object-centered processing were strongly intercorrelated, indicating that they rely on similar, albeit partly dissociated processes. These findings indicate that object-centered and space-based deficits in neglect are partly independent and result from superior parietal and inferior parietal damage, respectively.

  8. Development of a flash flood warning system based on real-time radar data and process-based erosion modelling

    NASA Astrophysics Data System (ADS)

    Schindewolf, Marcus; Kaiser, Andreas; Buchholtz, Arno; Schmidt, Jürgen

    2017-04-01

    Extreme rainfall events and the resulting flash floods caused massive devastation in Germany during spring 2016. The study presented aims at the development of an early warning system that allows the simulation and assessment of negative effects on infrastructure from radar-based heavy rainfall predictions, which serve as input data for the process-based soil loss and deposition model EROSION 3D. Our approach enables a detailed identification of runoff and sediment fluxes in agriculturally used landscapes. In a first step, documented historical events were analysed for agreement between measured radar rainfall and large-scale erosion risk maps. A second step focused on small-scale erosion monitoring via UAV of the source areas of heavy flooding events and a model reconstruction of the processes involved. In all examples, damage was caused to local infrastructure. Both analyses are promising for detecting runoff- and sediment-delivering areas at high temporal and spatial resolution. Results prove the important role of late-covering crops such as maize, sugar beet or potatoes in runoff generation. While winter wheat, for example, has a positive effect with respect to runoff generation on undulating landscapes, massive soil loss and thus muddy flows are observed and depicted in the model results. Future research aims at large-scale model parameterization and application in real time, uncertainty estimation of precipitation forecasts, and interface development.

  9. Tree injury and mortality in fires: developing process-based models

    Treesearch

    Bret W. Butler; Matthew B. Dickinson

    2010-01-01

    Wildland fire managers are often required to predict tree injury and mortality when planning a prescribed burn or when considering wildfire management options; and, currently, statistical models based on post-fire observations are the only tools available for this purpose. Implicit in the derivation of statistical models is the assumption that they are strictly...

  10. Soft sensor modeling based on variable partition ensemble method for nonlinear batch processes

    NASA Astrophysics Data System (ADS)

    Wang, Li; Chen, Xiangguang; Yang, Kai; Jin, Huaiping

    2017-01-01

    Batch processes are typically characterized by nonlinearity and system uncertainty; therefore, a conventional single model may be ill-suited. A local-learning soft sensor based on a variable partition ensemble method is developed for quality prediction in nonlinear and non-Gaussian batch processes. A set of input variable subsets is obtained by bootstrapping and the PMI criterion. Multiple local GPR models are then developed, one for each local input variable set. When new test data arrive, the posterior probability of each best-performing local model is estimated by Bayesian inference and used to combine the local GPR models into the final prediction. The proposed soft sensor is demonstrated by application to an industrial fed-batch chlortetracycline fermentation process.
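
    A minimal sketch of the combination step, assuming simple RBF-kernel GP regressors and using predictive precision as a stand-in for the paper's full Bayesian posterior weighting; the data, partitions and hyperparameters here are invented:

    ```python
    import numpy as np

    def gp_fit_predict(X, y, Xq, length=1.0, noise=1e-4):
        """Minimal GP regression with an RBF kernel: predictive mean/variance."""
        def k(a, b):
            return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)
        K = k(X, X) + noise * np.eye(len(X))
        alpha = np.linalg.solve(K, y)
        Ks = k(Xq, X)
        mean = Ks @ alpha
        var = 1.0 + noise - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
        return mean, np.maximum(var, 1e-12)

    # Two local models trained on different (overlapping) data partitions.
    X1 = np.linspace(0.0, 1.5, 20)
    X2 = np.linspace(0.5, 2.0, 20)
    Xq = np.array([1.0])                      # new query point
    preds = [gp_fit_predict(X, np.sin(X), Xq) for X in (X1, X2)]

    # Stand-in for the Bayesian combination: weight each local model by its
    # predictive precision (inverse variance) at the query point.
    w = np.array([1.0 / v[0] for _, v in preds])
    w /= w.sum()
    y_hat = sum(wi * m[0] for wi, (m, _) in zip(w, preds))
    ```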

  11. Process-based models are required to manage ecological systems in a changing world

    Treesearch

    K. Cuddington; M.-J. Fortin; L.R. Gerber; A. Hastings; A. Liebhold; M. O'Connor; C. Ray

    2013-01-01

    Several modeling approaches can be used to guide management decisions. However, some approaches are better fitted than others to address the problem of prediction under global change. Process-based models, which are based on a theoretical understanding of relevant ecological processes, provide a useful framework to incorporate specific responses to altered...

  12. Aircraft-based Observations and Modeling of Wintertime Submicron Aerosol Composition over the Northeastern U.S.

    NASA Astrophysics Data System (ADS)

    Shah, V.; Jaegle, L.; Schroder, J. C.; Campuzano-Jost, P.; Jimenez, J. L.; Guo, H.; Sullivan, A.; Weber, R. J.; Green, J. R.; Fiddler, M.; Bililign, S.; Lopez-Hilfiker, F.; Lee, B. H.; Thornton, J. A.

    2017-12-01

    Submicron aerosol particles (PM1) remain a major air pollution concern in the urban areas of northeastern U.S. While SO2 and NOx emission controls have been effective at reducing summertime PM1 concentrations, this has not been the case for wintertime sulfate and nitrate concentrations, suggesting a nonlinear response during winter. During winter, organic aerosol (OA) is also an important contributor to PM1 mass despite low biogenic emissions, suggesting the presence of important urban sources. We use aircraft-based observations collected during the Wintertime INvestigation of Transport, Emissions and Reactivity (WINTER) campaign (Feb-March 2015), together with the GEOS-Chem chemical transport model, to investigate the sources and chemical processes governing wintertime PM1 over the northeastern U.S. The mean observed concentration of PM1 between the surface and 1 km was 4 μg m-3, about 30% of which was composed of sulfate, 20% nitrate, 10% ammonium, and 40% OA. The model reproduces the observed sulfate, nitrate and ammonium concentrations after updates to HNO3 production and loss, SO2 oxidation, and NH3 emissions. We find that 65% of the sulfate formation occurs in the aqueous phase, and 55% of nitrate formation through N2O5 hydrolysis, highlighting the importance of multiphase and heterogeneous processes during winter. Aqueous-phase sulfate production and the gas-particle partitioning of nitrate and ammonium are affected by atmospheric acidity, which in turn depends on the concentration of these species. We examine these couplings with GEOS-Chem, and assess the response of wintertime PM1 concentrations to further emission reductions based on the U.S. EPA projections for the year 2023. For OA, we find that the standard GEOS-Chem simulation underestimates the observed concentrations, but a simple parameterization developed from previous summer field campaigns is able to reproduce the observations and the contribution of primary and secondary OA. We find that

  13. Graphics Processing Unit (GPU) Acceleration of the Goddard Earth Observing System Atmospheric Model

    NASA Technical Reports Server (NTRS)

    Putnam, William

    2011-01-01

    The Goddard Earth Observing System 5 (GEOS-5) is the atmospheric model used by the Global Modeling and Assimilation Office (GMAO) for a variety of applications, from long-term climate prediction at relatively coarse resolution, to data assimilation and numerical weather prediction, to very high-resolution cloud-resolving simulations. GEOS-5 is being ported to a graphics processing unit (GPU) cluster at the NASA Center for Climate Simulation (NCCS). By utilizing GPU co-processor technology, we expect to increase the throughput of GEOS-5 by at least an order of magnitude and accelerate the process of scientific exploration across all scales of global modeling, including: large-scale, high-end applications of non-hydrostatic, global, cloud-resolving modeling at 10- to 1-kilometer (km) global resolutions; intermediate-resolution seasonal climate and weather prediction at 50- to 25-km resolution on small clusters of GPUs; and long-range, coarse-resolution climate modeling, enabled on a small box of GPUs for the individual researcher. After being ported to the GPU cluster, the primary physics components and the dynamical core of GEOS-5 have demonstrated a potential speedup of 15-40 times over conventional processor cores. Performance improvements of this magnitude reduce the required scalability of 1-km, global, cloud-resolving models from an unfathomable 6 million cores to an attainable 200,000 GPU-enabled cores.

  14. Foundation observation of teaching project--a developmental model of peer observation of teaching.

    PubMed

    Pattison, Andrew Timothy; Sherwood, Morgan; Lumsden, Colin James; Gale, Alison; Markides, Maria

    2012-01-01

    Peer observation of teaching is important in the development of educators. The foundation curriculum specifies teaching competencies that must be attained. We created a developmental model of peer observation of teaching to help our foundation doctors achieve these competencies and develop as educators. A process for peer observation was created based on key features of faculty development. The project consisted of a pre-observation meeting, the observation, a post-observation debrief, the writing of reflective reports, and group feedback sessions. The project was evaluated through questionnaires and focus groups held with both the foundation doctors and the students they taught, to achieve triangulation. Twenty-one foundation doctors took part. All completed reflective reports on their teaching. Participants described the process as useful in their development as educators, citing specific examples of changes to their teaching practice. Medical students rated the sessions as better or much better in quality than their usual teaching. The study highlights the benefits of the project to individual foundation doctors, undergraduate medical students and faculty. It acknowledges the potential anxieties involved in having teaching observed. A structured programme of observation of teaching can deliver specific teaching competencies required by foundation doctors and provides additional benefits.

  15. Earth observation data based rapid flood-extent modelling for tsunami-devastated coastal areas

    NASA Astrophysics Data System (ADS)

    Hese, Sören; Heyer, Thomas

    2016-04-01

    Earth observation (EO)-based mapping and analysis of natural hazards plays a critical role in various aspects of post-disaster aid management. Very high-resolution spatial Earth observation data provide important information for managing post-tsunami activities on devastated land and monitoring re-cultivation and reconstruction. The automatic and fast use of high-resolution EO data for rapid mapping is, however, complicated by high spectral variability in densely populated urban areas and unpredictable textural and spectral land-surface changes. This paper presents the results of the SENDAI project, which developed an automatic post-tsunami flood-extent modelling concept using RapidEye multispectral satellite data and ASTER Global Digital Elevation Model Version 2 (GDEM V2) data of the eastern coast of Japan (captured after the Tohoku earthquake). The authors developed both a bathtub-modelling approach and a cost-distance approach, and integrated the roughness parameters of different land-use types to increase the accuracy of flood-extent modelling. Overall, the accuracy of the developed models reached 87-92%, depending on the analysed test site. The flood-modelling approach is explained and the results are compared with published approaches. We conclude that the cost-factor-based approach reaches accuracy comparable to published results from hydrological modelling, while being based on a much simpler dataset that is available globally.
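
    The bathtub approach can be sketched as a connectivity-constrained elevation threshold: a cell floods only if it lies below the water level and is hydraulically connected to a flood source. The grid and levels below are toy values, and the sketch omits the roughness and cost-distance refinements the paper adds:

    ```python
    from collections import deque

    def bathtub_flood(elev, water_level, sources):
        """Flood-extent 'bathtub' model: a cell floods if its elevation is
        below the water level AND it is connected (4-neighbourhood) to a
        source cell, e.g. the coastline. Returns a boolean flood mask."""
        rows, cols = len(elev), len(elev[0])
        flooded = [[False] * cols for _ in range(rows)]
        q = deque((r, c) for r, c in sources if elev[r][c] < water_level)
        for r, c in q:
            flooded[r][c] = True
        while q:
            r, c = q.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and not flooded[nr][nc] and elev[nr][nc] < water_level):
                    flooded[nr][nc] = True
                    q.append((nr, nc))
        return flooded

    # A low-lying basin behind a ridge (third column) stays dry because it
    # is not connected to the coastal source at (0, 0).
    elev = [[0.0, 0.5, 3.0, 0.2],
            [0.1, 0.4, 3.0, 0.1]]
    mask = bathtub_flood(elev, water_level=1.0, sources=[(0, 0)])
    ```

    The connectivity check is what distinguishes this from a naive elevation threshold, which would wrongly flood the disconnected basin.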

  16. Reduced order model based on principal component analysis for process simulation and optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lang, Y.; Malacina, A.; Biegler, L.

    2009-01-01

    It is well known that distributed-parameter computational fluid dynamics (CFD) models provide more accurate results than the conventional, lumped-parameter unit operation models used in process simulation. Consequently, the use of CFD models in process/equipment co-simulation offers the potential to optimize overall plant performance with respect to complex thermal and fluid flow phenomena. Because solving CFD models is time-consuming compared to the overall process simulation, we consider the development of fast reduced order models (ROMs) based on CFD results to closely approximate the high-fidelity equipment models in the co-simulation. Considering process equipment items with complicated geometries and detailed thermodynamic property models, this study proposes a strategy to develop ROMs based on principal component analysis (PCA). Taking advantage of commercial process simulation and CFD software (for example, Aspen Plus and FLUENT), we are able to develop systematic CFD-based ROMs for equipment models in an efficient manner. In particular, we show that the validity of the ROM is more robust within a well-sampled input domain and that the CPU time is significantly reduced. Typically, it takes at most several CPU seconds to evaluate the ROM, compared to several CPU hours or more to solve the CFD model. Two case studies, involving two power plant equipment examples, are described and demonstrate the benefits of using the proposed ROM methodology for process simulation and optimization.
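
    The PCA-ROM workflow (snapshots, modal basis, input-to-coefficient map, reconstruction) can be sketched with an analytic toy field standing in for CFD output; the linear least-squares regression step here is my assumption, not necessarily the mapping used in the study:

    ```python
    import numpy as np

    # Snapshot matrix: each column is a "CFD" field at one parameter setting
    # (a toy analytic field standing in for, e.g., FLUENT output).
    x = np.linspace(0.0, np.pi, 50)
    params = np.array([[1.0, 0.2], [0.5, 1.0], [2.0, -0.3],
                       [1.5, 0.8], [-1.0, 0.5]])
    snapshots = np.column_stack([p1 * np.sin(x) + p2 * np.cos(x)
                                 for p1, p2 in params])

    # 1) PCA of the centred snapshots via SVD.
    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
    basis = U[:, :2]                  # two modes capture this family exactly

    # 2) Map input parameters to PCA coefficients with a linear fit.
    coeffs = basis.T @ (snapshots - mean)          # (2, n_snapshots)
    A, *_ = np.linalg.lstsq(np.column_stack([params, np.ones(len(params))]),
                            coeffs.T, rcond=None)

    # 3) Evaluate the ROM for an unseen parameter setting: milliseconds
    # instead of a full "CFD" solve.
    p_new = np.array([0.8, 0.4])
    c_new = np.array([*p_new, 1.0]) @ A
    rom_field = mean.ravel() + basis @ c_new
    true_field = p_new[0] * np.sin(x) + p_new[1] * np.cos(x)
    ```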

  17. Optimal Estimation with Two Process Models and No Measurements

    DTIC Science & Technology

    2015-08-01

    An observer is derived that optimally blends two independent process models when no measurements are present. The derivation is similar to that of the discrete-time Kalman filter. A simulation example is provided in which a process model based on the dynamics of a ballistic projectile is blended with a second process model. The optimality of the blend is lost if either of the models includes deterministic modeling errors.
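
    The "no measurement" blend of two model predictions can be sketched as a minimum-variance fusion, the same algebra as a Kalman update; the numbers are illustrative:

    ```python
    def fuse(x1, P1, x2, P2):
        """Minimum-variance blend of two independent estimates of the same
        state: weight each model's prediction by its inverse error variance,
        analogous to a Kalman update with the second model as 'measurement'."""
        K = P1 / (P1 + P2)            # gain on the second model's estimate
        x = x1 + K * (x2 - x1)        # = (x1/P1 + x2/P2) / (1/P1 + 1/P2)
        P = P1 * P2 / (P1 + P2)       # fused variance is below both inputs
        return x, P

    # Illustrative: a trusted prediction (variance 4) blended with a less
    # certain one (variance 16) pulls only slightly toward the latter.
    x_f, P_f = fuse(100.0, 4.0, 110.0, 16.0)
    ```

    The fused variance P1*P2/(P1+P2) is always smaller than either input variance, which is why blending two valid models helps; that optimality is exactly what deterministic modeling errors destroy.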

  18. A General Accelerated Degradation Model Based on the Wiener Process.

    PubMed

    Liu, Le; Li, Xiaoyang; Sun, Fuqiang; Wang, Ning

    2016-12-06

    Accelerated degradation testing (ADT) is an efficient tool to conduct material service reliability and safety evaluations by analyzing performance degradation data. Traditional stochastic process models are mainly for linear or linearization degradation paths. However, those methods are not applicable for the situations where the degradation processes cannot be linearized. Hence, in this paper, a general ADT model based on the Wiener process is proposed to solve the problem for accelerated degradation data analysis. The general model can consider the unit-to-unit variation and temporal variation of the degradation process, and is suitable for both linear and nonlinear ADT analyses with single or multiple acceleration variables. The statistical inference is given to estimate the unknown parameters in both constant stress and step stress ADT. The simulation example and two real applications demonstrate that the proposed method can yield reliable lifetime evaluation results compared with the existing linear and time-scale transformation Wiener processes in both linear and nonlinear ADT analyses.
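
    A sketch of the core idea, with an invented power-law time scale Λ(t) = t**b standing in for the paper's general transformation: the time-transformed Wiener path has a normal marginal distribution, so simulated exceedance probabilities of a failure threshold can be checked against the analytic tail:

    ```python
    import math
    import random

    def simulate_paths(mu, sigma, b, t_end, n_steps, n_paths, rng):
        """Simulate degradation X(t) = mu*L(t) + sigma*B(L(t)) with the
        nonlinear time scale L(t) = t**b (b != 1 gives a nonlinear path).
        Returns the end-of-test degradation of each path."""
        finals = []
        for _ in range(n_paths):
            x, lam_prev = 0.0, 0.0
            for i in range(1, n_steps + 1):
                lam = (i * t_end / n_steps) ** b
                d_lam = lam - lam_prev
                x += mu * d_lam + sigma * math.sqrt(d_lam) * rng.gauss(0.0, 1.0)
                lam_prev = lam
            finals.append(x)
        return finals

    rng = random.Random(1)
    mu, sigma, b, T, D = 2.0, 0.5, 0.7, 4.0, 6.0    # made-up parameters
    xs = simulate_paths(mu, sigma, b, T, 50, 20000, rng)

    # Marginally X(T) ~ Normal(mu*T**b, sigma**2 * T**b), so the simulated
    # exceedance of threshold D should match the normal tail probability.
    frac = sum(x > D for x in xs) / len(xs)
    lam_T = T ** b
    z = (D - mu * lam_T) / (sigma * math.sqrt(lam_T))
    p_analytic = 0.5 * math.erfc(z / math.sqrt(2))
    ```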

  19. A General Accelerated Degradation Model Based on the Wiener Process

    PubMed Central

    Liu, Le; Li, Xiaoyang; Sun, Fuqiang; Wang, Ning

    2016-01-01

    Accelerated degradation testing (ADT) is an efficient tool to conduct material service reliability and safety evaluations by analyzing performance degradation data. Traditional stochastic process models are mainly for linear or linearization degradation paths. However, those methods are not applicable for the situations where the degradation processes cannot be linearized. Hence, in this paper, a general ADT model based on the Wiener process is proposed to solve the problem for accelerated degradation data analysis. The general model can consider the unit-to-unit variation and temporal variation of the degradation process, and is suitable for both linear and nonlinear ADT analyses with single or multiple acceleration variables. The statistical inference is given to estimate the unknown parameters in both constant stress and step stress ADT. The simulation example and two real applications demonstrate that the proposed method can yield reliable lifetime evaluation results compared with the existing linear and time-scale transformation Wiener processes in both linear and nonlinear ADT analyses. PMID:28774107

  20. Observation-Based Dissipation and Input Terms for Spectral Wave Models, with End-User Testing

    DTIC Science & Technology

    2014-09-30

    This project implements new dissipation and input source functions, based on an advanced understanding of the physics of air-sea interactions, wave breaking and swell attenuation, in wave-forecast models. Reported publications include a refereed paper in Coral Reefs on the large-scale influence of the Great Barrier Reef matrix on wave attenuation, and Ghantous, M., and A.V. Babanin (2014).

  1. A Hilbert Space Representation of Generalized Observables and Measurement Processes in the ESR Model

    NASA Astrophysics Data System (ADS)

    Sozzo, Sandro; Garola, Claudio

    2010-12-01

    The extended semantic realism (ESR) model recently worked out by one of the authors embodies the mathematical formalism of standard (Hilbert space) quantum mechanics in a noncontextual framework, reinterpreting quantum probabilities as conditional instead of absolute. We provide here a Hilbert space representation of the generalized observables introduced by the ESR model that satisfy a simple physical condition, propose a generalization of the projection postulate, and suggest a possible mathematical description of the measurement process in terms of the evolution of the compound system made up of the measured system and the measuring apparatus.

  2. Process-based modelling of the methane balance in periglacial landscapes (JSBACH-methane)

    NASA Astrophysics Data System (ADS)

    Kaiser, Sonja; Göckede, Mathias; Castro-Morales, Karel; Knoblauch, Christian; Ekici, Altug; Kleinen, Thomas; Zubrzycki, Sebastian; Sachs, Torsten; Wille, Christian; Beer, Christian

    2017-01-01

    A detailed process-based methane module for a global land surface scheme has been developed which is general enough to be applied in permafrost regions as well as wetlands outside permafrost areas. Methane production, oxidation and transport by ebullition, diffusion and plants are represented. In this model, oxygen has been explicitly incorporated into diffusion, transport by plants and two oxidation processes, of which one uses soil oxygen, while the other uses oxygen that is available via roots. Permafrost and wetland soils show special behaviour, such as variable soil pore space due to freezing and thawing or water table depths due to changing soil water content. This has been integrated directly into the methane-related processes. A detailed application at the Samoylov polygonal tundra site, Lena River Delta, Russia, is used for evaluation purposes. The application at Samoylov also shows differences in the importance of the several transport processes and in the methane dynamics under varying soil moisture, ice and temperature conditions during different seasons and on different microsites. These microsites are the elevated moist polygonal rim and the depressed wet polygonal centre. The evaluation shows sufficiently good agreement with field observations despite the fact that the module has not been specifically calibrated to these data. This methane module is designed such that the advanced land surface scheme is able to model recent and future methane fluxes from periglacial landscapes across scales. In addition, the methane contribution to carbon cycle-climate feedback mechanisms can be quantified when running coupled to an atmospheric model.

  3. Littoral transport rates in the Santa Barbara Littoral Cell: a process-based model analysis

    USGS Publications Warehouse

    Elias, E. P. L.; Barnard, Patrick L.; Brocatus, John

    2009-01-01

    Identification of sediment transport patterns and pathways is essential for sustainable coastal zone management of the heavily modified coastline of Santa Barbara and Ventura County (California, USA). A process-based model application, based on Delft3D Online Morphology, is used to investigate the littoral transport potential along the Santa Barbara Littoral Cell (between Point Conception and Mugu Canyon). An advanced optimization procedure is applied to enable annual sediment transport computations by reducing the ocean wave climate to 10 wave height and direction classes. Modeled littoral transport rates compare well with observed dredging volumes, and erosion or sedimentation hotspots coincide with the modeled divergence and convergence of the transport gradients. Sediment transport rates are strongly dependent on the alongshore variation in wave height due to wave sheltering, diffraction and focusing by the Northern Channel Islands, and on the local orientation of the geologically controlled coastline. Local transport gradients exceed the net eastward littoral transport and are considered a primary driver of hot-spot erosion.
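
    The wave-climate reduction step can be sketched as binning a long wave record into height-direction classes, each represented by its mean conditions and weighted by its frequency of occurrence. The class counts and synthetic record below are my assumptions; the paper's optimization procedure is more sophisticated than plain binning:

    ```python
    import random

    def reduce_wave_climate(heights, directions, n_h=5, n_d=2):
        """Bin a wave record into n_h x n_d height-direction classes and
        return one representative (mean Hs, mean direction, weight) per
        occupied bin; the weight is the fraction of the record in the bin."""
        h_min, h_max = min(heights), max(heights)
        d_min, d_max = min(directions), max(directions)
        bins = {}
        for h, d in zip(heights, directions):
            i = min(int((h - h_min) / (h_max - h_min + 1e-12) * n_h), n_h - 1)
            j = min(int((d - d_min) / (d_max - d_min + 1e-12) * n_d), n_d - 1)
            bins.setdefault((i, j), []).append((h, d))
        classes = []
        for members in bins.values():
            hs = [m[0] for m in members]
            ds = [m[1] for m in members]
            classes.append((sum(hs) / len(hs), sum(ds) / len(ds),
                            len(members) / len(heights)))
        return classes

    # Synthetic record standing in for a multi-year offshore wave climate.
    rng = random.Random(7)
    heights = [abs(rng.gauss(1.5, 0.6)) for _ in range(5000)]
    directions = [rng.uniform(240.0, 300.0) for _ in range(5000)]
    classes = reduce_wave_climate(heights, directions)   # at most 10 classes
    ```

    Running the transport model once per class and summing the weighted results is what makes an annual computation tractable.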

  4. Orientational analysis of planar fibre systems observed as a Poisson shot-noise process.

    PubMed

    Kärkkäinen, Salme; Lantuéjoul, Christian

    2007-10-01

    We consider two-dimensional fibrous materials observed as a digital greyscale image. The problem addressed is to estimate the orientation distribution of unobservable thin fibres from a greyscale image modelled by a planar Poisson shot-noise process. The classical stereological approach is not straightforward, because the point intensities of thin fibres along sampling lines may not be observable. For such cases, Kärkkäinen et al. (2001) suggested the use of scaled variograms determined from grey values along sampling lines in several directions. Their method is based on the assumption that the proportion between the scaled variograms and the point intensities in all directions of sampling lines is constant. This assumption is proved to be valid asymptotically for Boolean models and dead leaves models, under some regularity conditions. In this work, we derive the scaled variogram and its approximations for a planar Poisson shot-noise process using the modified Bessel function. In the case of reasonably high resolution of the observed image, the scaled variogram has an approximate functional relation to the point intensity, and in the case of high resolution the relation is proportional. As the obtained relations are approximate, they are tested on simulations. The existing orientation analysis method based on the proportional relation is further tested on images with different resolutions. The new result, the asymptotic proportionality between the scaled variograms and the point intensities for a Poisson shot-noise process, completes the earlier results for Boolean models and dead leaves models.
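
    The raw ingredient of the method, a directional variogram of grey values along sampling lines, can be sketched directly. This is a plain first-order variogram on a toy binary image, not the authors' scaled variogram with its Bessel-function correction:

    ```python
    def directional_variogram(img, dr, dc, lag=1):
        """Directional variogram of grey values: half the mean squared
        difference between pixels separated by lag*(dr, dc)."""
        rows, cols = len(img), len(img[0])
        total, n = 0.0, 0
        for r in range(rows):
            for c in range(cols):
                r2, c2 = r + lag * dr, c + lag * dc
                if 0 <= r2 < rows and 0 <= c2 < cols:
                    total += (img[r][c] - img[r2][c2]) ** 2
                    n += 1
        return total / (2 * n)

    # Toy image of horizontal 'fibres': grey varies across rows but not
    # along them, so the variogram vanishes along the fibre direction and
    # is positive across it, revealing the dominant orientation.
    img = [[float(r % 2)] * 8 for r in range(8)]
    gamma_along = directional_variogram(img, 0, 1)   # along fibres
    gamma_across = directional_variogram(img, 1, 0)  # across fibres
    ```

    Comparing such variograms over many sampling directions is what lets the orientation distribution be estimated without observing individual fibre crossings.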

  5. Improving the Aircraft Design Process Using Web-Based Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.; Follen, Gregory J. (Technical Monitor)

    2000-01-01

    Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.

  6. Improving the Aircraft Design Process Using Web-based Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.

    2003-01-01

    Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.

  7. A GUI-based Tool for Bridging the Gap between Models and Process-Oriented Studies

    NASA Astrophysics Data System (ADS)

    Kornfeld, A.; Van der Tol, C.; Berry, J. A.

    2014-12-01

    Models used for simulation of photosynthesis and transpiration by canopies of terrestrial plants typically have subroutines such as STOMATA.F90, PHOSIB.F90 or BIOCHEM.m that solve for photosynthesis and associated processes. Key parameters, such as the Vmax for Rubisco and temperature response parameters, are required by these subroutines; these are often taken from the literature or determined by separate analysis of gas exchange experiments. It is useful to note, however, that such subroutines can be extracted and run as standalone models to simulate leaf responses collected in gas exchange experiments. Furthermore, there are excellent non-linear fitting tools that can be used to optimize the parameter values in these models to fit the observations. Ideally the Vmax fit in this way should be the same as that determined by a separate analysis, but it may not be, because of interactions with other kinetic constants and their temperature dependence in the full subroutine. We submit that it is more useful to fit the complete model to the calibration experiments rather than to fit disaggregated constants. We designed a graphical user interface (GUI) based tool that uses gas exchange photosynthesis data to directly estimate model parameters in the SCOPE (Soil Canopy Observation, Photochemistry and Energy fluxes) model and, at the same time, allows researchers to change parameters interactively to visualize how variation in model parameters affects predicted outcomes such as photosynthetic rates, electron transport, and chlorophyll fluorescence. We have also ported some of this functionality to an Excel spreadsheet, which could be used as a teaching tool to help integrate process-oriented and model-oriented studies.
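    The "fit the complete model to the calibration data" idea can be shown in miniature. The rectangular-hyperbola light response, the parameter values and the grid-search fit below are stand-ins for a full subroutine such as BIOCHEM.m and a proper non-linear optimizer.

```python
def photosynthesis_model(par, vmax, km=300.0):
    """Toy rectangular-hyperbola light response: A = vmax * PAR / (PAR + km).
    (A stand-in for a full photosynthesis subroutine; km is hypothetical.)"""
    return vmax * par / (par + km)

def fit_vmax(par_obs, a_obs, candidates):
    """Least-squares fit of vmax by scanning candidate values: the
    'fit the complete model' idea, with the whole response function
    evaluated against every calibration observation."""
    def sse(v):
        return sum((photosynthesis_model(p, v) - a) ** 2
                   for p, a in zip(par_obs, a_obs))
    return min(candidates, key=sse)

# Synthetic gas-exchange observations generated with vmax = 50.
par_obs = [100, 300, 600, 1200, 1800]
a_obs = [photosynthesis_model(p, 50.0) for p in par_obs]
vmax_hat = fit_vmax(par_obs, a_obs, candidates=range(10, 101))
```

    In practice a gradient-based non-linear least-squares routine replaces the grid scan, but the structure, full model inside the objective function, is the same.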

  8. Research on application of intelligent computation based LUCC model in urbanization process

    NASA Astrophysics Data System (ADS)

    Chen, Zemin

    2007-06-01

    Global change study is a comprehensive, interdisciplinary research activity with international cooperation that arose in the 1980s and has the broadest scope. The interaction between land use and cover change (LUCC), as a research field crossing natural and social science, has become one of the core subjects of global change study as well as its frontier and focus. It is necessary to study land use and cover change in the urbanization process and to build an analogue model of urbanization that describes, simulates and analyses the dynamic behaviour of urban development, in order to understand the basic characteristics and rules of the urbanization process. This has positive practical and theoretical significance for formulating urban and regional sustainable development strategies. The effect of urbanization on land use and cover change is mainly embodied in changes in the quantity and spatial structure of urban space, and LUCC modelling of the urbanization process has become an important research subject of urban geography and urban planning. In this paper, building on previous research, the author systematically analyses research on land use/cover change in the urbanization process using the theories of complexity science and intelligent computation; builds a model for simulating and forecasting the dynamic evolution of urban land use and cover change, based on the cellular automaton model of complexity science and multi-agent theory; extends the Markov model, the traditional CA model and the agent model, introducing complexity science and intelligent computation theory into LUCC modelling to build an intelligent-computation-based LUCC model for analogue research on land use and cover change in urbanization; and performs a case study. The concrete contents are as follows: 1. Complexity of LUCC research in the urbanization process.
    Analyze the urbanization process in combination with the contents
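    A toy cellular automaton conveys the simulation idea in its simplest form. The neighbour-count transition rule and the grid below are invented for illustration; real LUCC models add suitability surfaces, Markov transition probabilities and agent behaviour.

```python
def step(grid):
    """One cellular-automaton step: a non-urban cell (0) converts to
    urban (1) when at least two of its eight neighbours are urban.
    (A deliberately simple, deterministic transition rule.)"""
    n = len(grid)
    new = [row[:] for row in grid]
    for i in range(n):
        for j in range(n):
            if grid[i][j] == 0:
                nb = sum(grid[x][y]
                         for x in range(max(0, i - 1), min(n, i + 2))
                         for y in range(max(0, j - 1), min(n, j + 2))
                         if (x, y) != (i, j))
                if nb >= 2:
                    new[i][j] = 1
    return new

seed = [[0] * 5 for _ in range(5)]
seed[2][2] = seed[2][3] = 1        # a small urban core
grown = step(step(seed))           # two growth iterations
urban_cells = sum(map(sum, grown))
```

    Even this minimal rule reproduces the qualitative behaviour of interest: urban land expands outward from the existing core, and the growth pattern emerges from purely local interactions.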

  9. Enhancing Users' Participation in Business Process Modeling through Ontology-Based Training

    NASA Astrophysics Data System (ADS)

    Macris, A.; Malamateniou, F.; Vassilacopoulos, G.

    Successful business process design requires active participation of users who are familiar with organizational activities and business process modelling concepts. Hence, there is a need to provide users with reusable, flexible, agile and adaptable training material in order to enable them to instil their knowledge and expertise in business process design and automation activities. Knowledge reusability is of paramount importance in designing training material on process modelling, since it enables users to participate actively in process design/redesign activities stimulated by the changing business environment. This paper presents a prototype approach for the design and use of training material that provides significant advantages to both the designer (knowledge-content reusability and semantic web enabling) and the user (semantic search, knowledge navigation and knowledge dissemination). The approach is based on externalizing domain knowledge in the form of ontology-based knowledge networks (i.e. training scenarios serving specific training needs) so that it is made reusable.

  10. Process based modeling of total longshore sediment transport

    USGS Publications Warehouse

    Haas, K.A.; Hanes, D.M.

    2004-01-01

    Waves, currents, and longshore sand transport are calculated locally as a function of position in the nearshore region using process based numerical models. The resultant longshore sand transport is then integrated across the nearshore to provide predictions of the total longshore transport of sand due to waves and longshore currents. Model results are in close agreement with the I1-P1 correlation described by Komar and Inman (1970) and the CERC (1984) formula. Model results also indicate that the proportionality constant in the I1-P1 formula depends weakly upon the sediment size, the shape of the beach profile, and the particular local sediment flux formula that is employed. Model results indicate that the various effects and influences of sediment size tend to cancel out, resulting in little overall dependence on sediment size.
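    The CERC (1984) energy-flux formula that the model results are compared against can be sketched as follows. The coefficient K, the breaker index and the density/porosity values are typical textbook choices, not those of the paper.

```python
import math

# Illustrative constants: seawater and quartz-sand densities [kg/m^3],
# gravity [m/s^2], bed porosity [-], breaker index Hb/db [-].
RHO, RHO_S, G, POROSITY, GAMMA = 1025.0, 2650.0, 9.81, 0.4, 0.78

def cerc_transport(Hb, alpha_deg, K=0.39):
    """Potential longshore transport rate Q [m^3/s] via the CERC
    energy-flux formula (assumed coefficient values):
      E   = rho*g*Hb^2/8          wave energy density at breaking
      c_g = sqrt(g*Hb/GAMMA)      shallow-water group speed at breaking
      P_l = E*c_g*sin(a)*cos(a)   longshore component of energy flux
      Q   = K*P_l / ((rho_s - rho)*g*(1 - porosity))
    """
    a = math.radians(alpha_deg)
    E = RHO * G * Hb ** 2 / 8.0
    cg = math.sqrt(G * Hb / GAMMA)
    Pl = E * cg * math.sin(a) * math.cos(a)
    return K * Pl / ((RHO_S - RHO) * G * (1 - POROSITY))

q = cerc_transport(2.0, 10.0)   # ~2 m breakers at 10 degrees obliquity
```

    Transport vanishes for shore-normal waves and grows with obliquity up to 45 degrees, which is why the alongshore variation of breaker angle dominates the net drift.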

  11. A 2-D process-based model for suspended sediment dynamics: A first step towards ecological modeling

    USGS Publications Warehouse

    Achete, F. M.; van der Wegen, M.; Roelvink, D.; Jaffe, B.

    2015-01-01

    In estuaries, suspended sediment concentration (SSC) is one of the most important contributors to turbidity, which influences habitat conditions and ecological functions of the system. Sediment dynamics differ depending on sediment supply and hydrodynamic forcing conditions, which vary over space and time. A robust sediment transport model is a first step in developing a chain of models enabling simulations of contaminants, phytoplankton and habitat conditions. This work aims to determine turbidity levels in the complex-geometry delta of the San Francisco estuary using a process-based approach (Delft3D Flexible Mesh software). Our approach includes a detailed calibration against measured SSC levels, a sensitivity analysis on model parameters, the determination of a yearly sediment budget, and an assessment of model results in terms of turbidity levels for a single year, water year (WY) 2011. Model results show that our process-based approach is a valuable tool for assessing sediment dynamics and related ecological parameters over a range of spatial and temporal scales. The model may act as the base model for a chain of ecological models assessing the impact of climate change and management scenarios. Here we present a modeling approach that, with limited data, produces reliable predictions and can be useful for estuaries without a large amount of process data.

  12. A 2-D process-based model for suspended sediment dynamics: a first step towards ecological modeling

    NASA Astrophysics Data System (ADS)

    Achete, F. M.; van der Wegen, M.; Roelvink, D.; Jaffe, B.

    2015-06-01

    In estuaries, suspended sediment concentration (SSC) is one of the most important contributors to turbidity, which influences habitat conditions and ecological functions of the system. Sediment dynamics differ depending on sediment supply and hydrodynamic forcing conditions, which vary over space and time. A robust sediment transport model is a first step in developing a chain of models enabling simulations of contaminants, phytoplankton and habitat conditions. This work aims to determine turbidity levels in the complex-geometry delta of the San Francisco estuary using a process-based approach (Delft3D Flexible Mesh software). Our approach includes a detailed calibration against measured SSC levels, a sensitivity analysis on model parameters, the determination of a yearly sediment budget, and an assessment of model results in terms of turbidity levels for a single year, water year (WY) 2011. Model results show that our process-based approach is a valuable tool for assessing sediment dynamics and related ecological parameters over a range of spatial and temporal scales. The model may act as the base model for a chain of ecological models assessing the impact of climate change and management scenarios. Here we present a modeling approach that, with limited data, produces reliable predictions and can be useful for estuaries without a large amount of process data.

  13. Aerosol processing in mixed-phase clouds in ECHAM5-HAM: Model description and comparison to observations

    NASA Astrophysics Data System (ADS)

    Hoose, C.; Lohmann, U.; Stier, P.; Verheggen, B.; Weingartner, E.

    2008-04-01

    The global aerosol-climate model ECHAM5-HAM has been extended by an explicit treatment of cloud-borne particles. Two additional modes for in-droplet and in-crystal particles are introduced, which are coupled to the cloud droplet and ice crystal number concentrations simulated by the ECHAM5 double-moment cloud microphysics scheme. Transfer, production, and removal of cloud-borne aerosol number and mass by cloud droplet activation, collision scavenging, aqueous-phase sulfate production, freezing, melting, evaporation, sublimation, and precipitation formation are taken into account. The model performance is demonstrated and validated with observations of the evolution of total and interstitial aerosol concentrations and size distributions during three different mixed-phase cloud events at the alpine high-altitude research station Jungfraujoch (Switzerland). Although the single-column simulations cannot be compared one-to-one with the observations, the governing processes in the evolution of the cloud and aerosol parameters are captured qualitatively well. High scavenged fractions are found during the presence of liquid water, while the release of particles during the Bergeron-Findeisen process results in low scavenged fractions after cloud glaciation. The observed coexistence of liquid and ice, which might be related to cloud heterogeneity at subgrid scales, can only be simulated in the model when assuming nonequilibrium conditions.
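    The scavenged fraction discussed above is a simple diagnostic: the share of aerosol held in cloud droplets or crystals versus remaining interstitial. The concentrations below are invented to mimic the liquid and glaciated stages of a mixed-phase cloud event.

```python
def scavenged_fraction(cloud_borne, interstitial):
    """Fraction of aerosol number (or mass) incorporated into cloud
    droplets/crystals relative to the total (cloud-borne + interstitial)."""
    total = cloud_borne + interstitial
    return cloud_borne / total if total else 0.0

# Liquid stage: most particles activated into droplets -> high fraction.
f_liquid = scavenged_fraction(cloud_borne=800.0, interstitial=200.0)
# After glaciation: Bergeron-Findeisen releases particles -> low fraction.
f_glaciated = scavenged_fraction(cloud_borne=150.0, interstitial=850.0)
```

    Comparing this diagnostic between model output and the Jungfraujoch total/interstitial measurements is exactly the kind of process-level check the abstract describes.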

  14. The human body metabolism process mathematical simulation based on Lotka-Volterra model

    NASA Astrophysics Data System (ADS)

    Oliynyk, Andriy; Oliynyk, Eugene; Pyptiuk, Olexandr; DzierŻak, RóŻa; Szatkowska, Małgorzata; Uvaysova, Svetlana; Kozbekova, Ainur

    2017-08-01

    A mathematical model of the metabolism process in the human organism based on the Lotka-Volterra model has been proposed, considering the healing regime, the nutrition system, and the features of insulin and sugar fragmentation processes in the organism. A numerical algorithm for the model using the fourth-order Runge-Kutta method has been implemented. Based on the calculation results, conclusions are drawn, recommendations for using the modeling results are given, and directions for future research are defined.
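    A minimal sketch of the numerical scheme named in the abstract: a classic fourth-order Runge-Kutta step applied to Lotka-Volterra dynamics. The coefficients and initial state are illustrative; the paper adapts this model form to insulin/sugar dynamics rather than predator-prey populations.

```python
def lotka_volterra(t, y, a=1.1, b=0.4, c=0.4, d=0.1):
    """Classic Lotka-Volterra right-hand side (illustrative coefficients)."""
    x, z = y
    return [a * x - b * x * z, d * x * z - c * z]

def rk4_step(f, t, y, h):
    """One fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a1 + 2 * a2 + 2 * a3 + a4)
            for yi, a1, a2, a3, a4 in zip(y, k1, k2, k3, k4)]

y, t, h = [10.0, 5.0], 0.0, 0.01
for _ in range(1000):           # integrate to t = 10
    y = rk4_step(lotka_volterra, t, y, h)
    t += h
```

    The fourth-order scheme keeps the closed orbits of the Lotka-Volterra system from drifting visibly over this integration span, which lower-order methods fail to do at the same step size.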

  15. Modeling Cloud Phase Fraction Based on In-situ Observations in Stratiform Clouds

    NASA Astrophysics Data System (ADS)

    Boudala, F. S.; Isaac, G. A.

    2005-12-01

    Mixed-phase clouds influence weather and climate in several ways. Because they exhibit very different optical properties from ice-only or liquid-only clouds, they play an important role in the earth's radiation balance by modifying the optical properties of clouds. Precipitation development is also enhanced under mixed-phase conditions, and these clouds may contain large supercooled drops that freeze quickly on contact with aircraft surfaces, posing a hazard to aviation. The coexistence of ice and liquid phase clouds in the same environment is thermodynamically unstable, and thus they are expected to disappear quickly. However, several observations show that mixed-phase clouds are relatively stable in the natural environment and last for several hours. Although some efforts have been made in the past to study the microphysical properties of mixed-phase clouds, there are still a number of uncertainties in modeling these clouds, particularly in large-scale numerical models. In most models, very simple temperature-dependent parameterizations of cloud phase fraction are used to estimate the fraction of ice or liquid phase in a given mixed-phase cloud. In this talk, two different parameterizations of ice fraction using in-situ aircraft measurements of cloud microphysical properties collected in extratropical stratiform clouds during several field programs will be presented. One of the parameterizations has been tested using a single prognostic equation developed by Tremblay et al. (1996) for application in the Canadian regional weather prediction model. The addition of small ice particles significantly increased the vapor deposition rate when the natural atmosphere is assumed to be water saturated, and thus enhanced the glaciation of the simulated mixed-phase cloud via the Bergeron-Findeisen process without significantly affecting other cloud microphysical processes such as riming and particle sedimentation

  16. NAME Modeling and Climate Process Team

    NASA Astrophysics Data System (ADS)

    Schemm, J. E.; Williams, L. N.; Gutzler, D. S.

    2007-05-01

    The NAME Climate Process and Modeling Team (CPT) has been established to address the need to link climate process research to model development and testing activities for warm season climate prediction. The project builds on two existing NAME-related modeling efforts. One major component of this project is the organization and implementation of a second phase of NAMAP, based on the 2004 season. NAMAP2 will re-examine the metrics proposed by NAMAP, extend the NAMAP analysis to transient variability, exploit the extensive observational database provided by NAME 2004 to analyze simulation targets of special interest, and expand participation. Vertical column analysis will bring local NAME observations and model outputs together in a context where key physical processes in the models can be evaluated and improved. The second component builds on the current NAME-related modeling effort focused on the diurnal cycle of precipitation in several global models, including those implemented at NCEP, NASA and GFDL. Our activities will focus on the ability of the operational NCEP Global Forecast System (GFS) to simulate the diurnal and seasonal evolution of warm season precipitation during the NAME 2004 EOP, and on changes to the treatment of deep convection in the complicated terrain of the NAMS domain that are necessary to improve the simulations, and ultimately predictions, of warm season precipitation. These activities will be strongly tied to NAMAP2 to ensure technology transfer from research to operations. Results based on experiments conducted with the NCEP CFS GCM will be reported at the conference, with emphasis on the impact of horizontal resolution in predicting warm season precipitation over North America.

  17. A New Strategy in Observer Modeling for Greenhouse Cucumber Seedling Growth

    PubMed Central

    Qiu, Quan; Zheng, Chenfei; Wang, Wenping; Qiao, Xiaojun; Bai, He; Yu, Jingquan; Shi, Kai

    2017-01-01

    The state observer is an essential component in computerized control loops for greenhouse-crop systems. However, current observer modeling for greenhouse-crop systems mainly focuses on mass/energy balance, ignoring the physiological responses of crops. As a result, state observers for crop physiological responses are rarely developed, and control operations are typically based on experience rather than actual crop requirements. In addition, existing observer models require a large number of parameters, leading to heavy computational load and poor application feasibility. To address these problems, we present a new state observer modeling strategy that takes both environmental information and crop physiological responses into consideration during the observer modeling process. Using greenhouse cucumber seedlings as an example, we sample 10 physiological parameters of cucumber seedlings at different time points during the exponential growth stage, and employ them to build growth state observers together with 8 environmental parameters. A support vector machine (SVM) serves as the mathematical tool for observer modeling. Canonical correlation analysis (CCA) is used to select the dominant environmental and physiological parameters in the modeling process. With the dominant parameters, simplified observer models are built and tested. We conduct contrast experiments with different input parameter combinations on simplified and un-simplified observers. Experimental results indicate that physiological information can improve the prediction accuracy of the growth state observers. Furthermore, the simplified observer models can give equivalent or even better performance than the un-simplified ones, which verifies the feasibility of CCA. The current study enables state observers to reflect crop requirements and makes them feasible for applications in simplified form, which is significant for developing intelligent greenhouse control systems for modern

  18. A New Strategy in Observer Modeling for Greenhouse Cucumber Seedling Growth.

    PubMed

    Qiu, Quan; Zheng, Chenfei; Wang, Wenping; Qiao, Xiaojun; Bai, He; Yu, Jingquan; Shi, Kai

    2017-01-01

    The state observer is an essential component in computerized control loops for greenhouse-crop systems. However, current observer modeling for greenhouse-crop systems mainly focuses on mass/energy balance, ignoring the physiological responses of crops. As a result, state observers for crop physiological responses are rarely developed, and control operations are typically based on experience rather than actual crop requirements. In addition, existing observer models require a large number of parameters, leading to heavy computational load and poor application feasibility. To address these problems, we present a new state observer modeling strategy that takes both environmental information and crop physiological responses into consideration during the observer modeling process. Using greenhouse cucumber seedlings as an example, we sample 10 physiological parameters of cucumber seedlings at different time points during the exponential growth stage, and employ them to build growth state observers together with 8 environmental parameters. A support vector machine (SVM) serves as the mathematical tool for observer modeling. Canonical correlation analysis (CCA) is used to select the dominant environmental and physiological parameters in the modeling process. With the dominant parameters, simplified observer models are built and tested. We conduct contrast experiments with different input parameter combinations on simplified and un-simplified observers. Experimental results indicate that physiological information can improve the prediction accuracy of the growth state observers. Furthermore, the simplified observer models can give equivalent or even better performance than the un-simplified ones, which verifies the feasibility of CCA. The current study enables state observers to reflect crop requirements and makes them feasible for applications in simplified form, which is significant for developing intelligent greenhouse control systems for modern
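    The parameter-selection step can be approximated with standard-library tools alone. As a simplified stand-in for full canonical correlation analysis, the sketch below ranks environmental inputs by absolute Pearson correlation with a single growth variable; all data values and parameter names are hypothetical.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical samples: environmental parameters vs. seedling leaf area.
data = {
    "air_temp":  [22, 24, 26, 28, 30, 32],
    "humidity":  [60, 55, 65, 50, 70, 45],
    "par_light": [100, 150, 200, 250, 300, 350],
}
leaf_area = [1.0, 1.4, 1.9, 2.3, 2.9, 3.3]

ranked = sorted(data, key=lambda k: abs(pearson(data[k], leaf_area)),
                reverse=True)
dominant = ranked[:2]   # keep the two most informative inputs
```

    CCA proper finds linear combinations of the environmental and physiological blocks with maximal correlation; this single-variable ranking only mimics its role of discarding weakly informative inputs before training the observer.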

  19. How does higher frequency monitoring data affect the calibration of a process-based water quality model?

    NASA Astrophysics Data System (ADS)

    Jackson-Blake, Leah; Helliwell, Rachel

    2015-04-01

    -calibrated was lower for the fortnightly data, with a physically unrealistic TDP simulation being produced when too many parameters were allowed to vary during model calibration. Parameters should not therefore be varied spatially for models such as INCA-P unless there is solid evidence that this is appropriate, or a real need to do so for the model to fulfil its purpose. This study highlights the potential pitfalls of using low-frequency time series of observed water quality to calibrate complex process-based models. For reliable model calibrations, monitoring programmes need to be designed to capture system variability, in particular nutrient dynamics during high-flow events. In addition, there is a need for simpler models, so that all model parameters can be included in auto-calibration and uncertainty analysis, and to reduce data needs during calibration.

  20. Sieve estimation in semiparametric modeling of longitudinal data with informative observation times.

    PubMed

    Zhao, Xingqiu; Deng, Shirong; Liu, Li; Liu, Lei

    2014-01-01

    Analyzing irregularly spaced longitudinal data often involves modeling possibly correlated response and observation processes. In this article, we propose a new class of semiparametric mean models that allows for the interaction between the observation history and covariates, leaving patterns of the observation process to be arbitrary. For inference on the regression parameters and the baseline mean function, a spline-based least squares estimation approach is proposed. The consistency, rate of convergence, and asymptotic normality of the proposed estimators are established. Our new approach is different from the usual approaches relying on the model specification of the observation scheme, and it can be easily used for predicting the longitudinal response. Simulation studies demonstrate that the proposed inference procedure performs well and is more robust. The analyses of bladder tumor data and medical cost data are presented to illustrate the proposed method.
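    The sieve idea, approximating an unknown baseline mean by a finite-dimensional basis and fitting by least squares, has an especially simple instance: with a piecewise-constant (indicator) basis the normal equations decouple, and the least-squares solution is the within-interval mean. The data and knots below are invented for illustration; the paper uses spline bases, not indicators.

```python
def sieve_piecewise_mean(times, responses, knots):
    """Least-squares sieve estimate of a baseline mean function with a
    piecewise-constant (indicator) basis: for this basis the LS fit in
    each knot interval [knots[k], knots[k+1]) is just the mean response
    of the observations falling in that interval."""
    est = []
    for lo, hi in zip(knots, knots[1:]):
        ys = [y for t, y in zip(times, responses) if lo <= t < hi]
        est.append(sum(ys) / len(ys) if ys else float("nan"))
    return est

# Hypothetical irregular observation times and longitudinal responses.
t = [0.1, 0.4, 0.6, 1.2, 1.7, 2.3, 2.8]
y = [1.0, 1.2, 1.1, 2.0, 2.2, 3.1, 2.9]
mu_hat = sieve_piecewise_mean(t, y, knots=[0, 1, 2, 3])
```

    Refining the knot grid as the sample grows is what makes this a sieve: the finite-dimensional approximation space expands toward the full nonparametric model.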

  1. Multi-scale hydrometeorological observation and modelling for flash flood understanding

    NASA Astrophysics Data System (ADS)

    Braud, I.; Ayral, P.-A.; Bouvier, C.; Branger, F.; Delrieu, G.; Le Coz, J.; Nord, G.; Vandervaere, J.-P.; Anquetin, S.; Adamovic, M.; Andrieu, J.; Batiot, C.; Boudevillain, B.; Brunet, P.; Carreau, J.; Confoland, A.; Didon-Lescot, J.-F.; Domergue, J.-M.; Douvinet, J.; Dramais, G.; Freydier, R.; Gérard, S.; Huza, J.; Leblois, E.; Le Bourgeois, O.; Le Boursicaud, R.; Marchand, P.; Martin, P.; Nottale, L.; Patris, N.; Renard, B.; Seidel, J.-L.; Taupin, J.-D.; Vannier, O.; Vincendon, B.; Wijbrans, A.

    2014-09-01

    This paper presents a coupled observation and modelling strategy aiming at improving the understanding of processes triggering flash floods. This strategy is illustrated for the Mediterranean area using two French catchments (Gard and Ardèche) larger than 2000 km2. The approach is based on the monitoring of nested spatial scales: (1) the hillslope scale, where processes influencing the runoff generation and its concentration can be tackled; (2) the small to medium catchment scale (1-100 km2), where the impact of the network structure and of the spatial variability of rainfall, landscape and initial soil moisture can be quantified; (3) the larger scale (100-1000 km2), where the river routing and flooding processes become important. These observations are part of the HyMeX (HYdrological cycle in the Mediterranean EXperiment) enhanced observation period (EOP), which will last 4 years (2012-2015). In terms of hydrological modelling, the objective is to set up regional-scale models, while addressing small and generally ungauged catchments, which represent the scale of interest for flood risk assessment. Top-down and bottom-up approaches are combined and the models are used as "hypothesis testing" tools by coupling model development with data analyses in order to incrementally evaluate the validity of model hypotheses. The paper first presents the rationale behind the experimental set-up and the instrumentation itself. Second, we discuss the associated modelling strategy. Results illustrate the potential of the approach in advancing our understanding of flash flood processes on various scales.

  2. Multi-scale hydrometeorological observation and modelling for flash-flood understanding

    NASA Astrophysics Data System (ADS)

    Braud, I.; Ayral, P.-A.; Bouvier, C.; Branger, F.; Delrieu, G.; Le Coz, J.; Nord, G.; Vandervaere, J.-P.; Anquetin, S.; Adamovic, M.; Andrieu, J.; Batiot, C.; Boudevillain, B.; Brunet, P.; Carreau, J.; Confoland, A.; Didon-Lescot, J.-F.; Domergue, J.-M.; Douvinet, J.; Dramais, G.; Freydier, R.; Gérard, S.; Huza, J.; Leblois, E.; Le Bourgeois, O.; Le Boursicaud, R.; Marchand, P.; Martin, P.; Nottale, L.; Patris, N.; Renard, B.; Seidel, J.-L.; Taupin, J.-D.; Vannier, O.; Vincendon, B.; Wijbrans, A.

    2014-02-01

    This paper presents a coupled observation and modelling strategy aiming at improving the understanding of processes triggering flash floods. This strategy is illustrated for the Mediterranean area using two French catchments (Gard and Ardèche) larger than 2000 km2. The approach is based on the monitoring of nested spatial scales: (1) the hillslope scale, where processes influencing the runoff generation and its concentration can be tackled; (2) the small to medium catchment scale (1-100 km2) where the impact of the network structure and of the spatial variability of rainfall, landscape and initial soil moisture can be quantified; (3) the larger scale (100-1000 km2) where the river routing and flooding processes become important. These observations are part of the HyMeX (Hydrological Cycle in the Mediterranean Experiment) Enhanced Observation Period (EOP) and will last four years (2012-2015). In terms of hydrological modelling the objective is to set up models at the regional scale, while addressing small and generally ungauged catchments, which is the scale of interest for flood risk assessment. Top-down and bottom-up approaches are combined and the models are used as "hypothesis testing" tools by coupling model development with data analyses, in order to incrementally evaluate the validity of model hypotheses. The paper first presents the rationale behind the experimental set up and the instrumentation itself. Second, we discuss the associated modelling strategy. Results illustrate the potential of the approach in advancing our understanding of flash flood processes at various scales.

  3. Predicting the mineral composition of dust aerosols – Part 2: Model evaluation and identification of key processes with observations

    DOE PAGES

    Perlwitz, J. P.; Perez Garcia-Pando, C.; Miller, R. L.

    2015-10-21

    A global compilation of nearly sixty measurement studies is used to evaluate two methods of simulating the mineral composition of dust aerosols in an Earth system model. Both methods are based upon a Mean Mineralogical Table (MMT) that relates the soil mineral fractions to a global atlas of arid soil type. The Soil Mineral Fraction (SMF) method assumes that the aerosol mineral fractions match the fractions of the soil. The MMT is based upon soil measurements after wet sieving, a process that destroys aggregates of soil particles that would have been emitted from the original, undisturbed soil. The second method approximately reconstructs the emitted aggregates. This model is referred to as the Aerosol Mineral Fraction (AMF) method because the mineral fractions of the aerosols differ from those of the wet-sieved parent soil, partly due to reaggregation. The AMF method remedies some of the deficiencies of the SMF method in comparison to observations. Only the AMF method exhibits phyllosilicate mass at silt sizes, where they are abundant according to observations. In addition, the AMF quartz fraction of silt particles is in better agreement with measured values, in contrast to the overestimated SMF fraction. Measurements at distinct clay and silt particle sizes are shown to be more useful for evaluation of the models, in contrast to the sum over all particle sizes that is susceptible to compensating errors, as illustrated by the SMF experiment. Model errors suggest that allocation of the emitted silt fraction of each mineral into the corresponding transported size categories is an important remaining source of uncertainty. Evaluation of both models and the MMT is hindered by the limited number of size-resolved measurements of mineral content that sparsely sample aerosols from the major dust sources. In conclusion, the importance of climate processes that depend upon aerosol mineral composition shows the need for global and routine mineral measurements.

  4. Linking observations at active volcanoes to physical processes through conduit flow modelling

    NASA Astrophysics Data System (ADS)

    Thomas, Mark; Neuberg, Jurgen

    2010-05-01

    Low frequency seismic events observed at volcanoes such as Soufrière Hills, Montserrat, may offer key indications of the state of a volcanic system. To obtain a better understanding of the source of these events and of the physical processes that take place within a volcano, it is necessary to understand the conditions of magma at depth. This can be achieved through conduit flow modelling (Collier & Neuberg, 2006). The 2-D compressible Navier-Stokes equations are solved with a finite-element approach for differing initial water and crystal contents, magma temperatures, chamber overpressures and conduit geometries. In the fully interdependent modelled system each of these variables affects the magma density, viscosity, gas content, and the pressure within the flow. These variables in turn affect the magma ascent velocity and the overall eruption dynamics of an active system. Of particular interest are the changes engendered in the flow by relatively small variations in the conduit geometry. These changes can have a profound local effect on the ascent velocity of the magma. By restricting the width of a 15 m wide, 5000 m long vertical conduit over a 100 m distance, a significant acceleration of the magma is seen in this area. This has implications for the generation of low-frequency (LF) events at volcanic systems. The strain-induced fracture of viscoelastic magma, or brittle failure of melt, has previously been discussed as a possible source of LF events by several authors (e.g. Tuffen et al., 2003; Neuberg et al., 2006). The location of such brittle failure, however, has been seen to occur at relatively shallow depths (<1000 m), which does not agree with the location of recorded LF events. By varying the geometry of the conduit and causing accelerations in the magma flow, localised increases in the shear strain rate of up to 30% are observed. This provides a mechanism for increasing the depth over which brittle failure of melt may occur.
A key observable
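    The local acceleration through a constriction described above can be illustrated with a minimal sketch, assuming steady incompressible 2-D flow (mean velocity follows continuity, v * w = constant) and a plane Poiseuille profile (wall shear strain rate 6 * v / w). The constriction width and ascent velocity used below are illustrative assumptions, not values from the study.

```python
# Hedged sketch: effect of a conduit constriction on mean ascent
# velocity and wall shear strain rate, assuming steady incompressible
# 2-D flow (continuity: v * w = const) and a plane Poiseuille profile
# (wall shear strain rate = 6 * v_mean / w). The constricted width and
# inflow velocity are illustrative, not taken from the study.

def ascent_velocity(v_in, w_in, w_out):
    """Mean velocity after a width change, from continuity v * w = const."""
    return v_in * w_in / w_out

def wall_shear_rate(v_mean, w):
    """Wall shear strain rate for plane Poiseuille flow between walls."""
    return 6.0 * v_mean / w

w1, v1 = 15.0, 0.01          # 15 m wide conduit, 1 cm/s ascent (assumed)
w2 = 10.0                    # hypothetical constricted width
v2 = ascent_velocity(v1, w1, w2)

gamma1 = wall_shear_rate(v1, w1)
gamma2 = wall_shear_rate(v2, w2)
print(f"velocity: {v1:.4f} -> {v2:.4f} m/s")
print(f"shear rate increase: {100 * (gamma2 / gamma1 - 1):.0f}%")
```

    Under these assumptions the shear strain rate scales as (w1/w2)^2, so even a modest narrowing amplifies shear markedly; the study's reported 30% increases come from the full Navier-Stokes model, not this toy scaling.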

  5. SU-F-J-178: A Computer Simulation Model Observer for Task-Based Image Quality Assessment in Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dolly, S; Mutic, S; Anastasio, M

    Purpose: Traditionally, image quality in radiation therapy is assessed subjectively or by utilizing physically-based metrics. Some model observers exist for task-based medical image quality assessment, but almost exclusively for diagnostic imaging tasks. As opposed to disease diagnosis, the task for image observers in radiation therapy is to utilize the available images to design and deliver a radiation dose which maximizes patient disease control while minimizing normal tissue damage. The purpose of this study was to design and implement a new computer simulation model observer to enable task-based image quality assessment in radiation therapy. Methods: A modular computer simulation framework was developed to resemble the radiotherapy observer by simulating an end-to-end radiation therapy treatment. Given images and the ground-truth organ boundaries from a numerical phantom as inputs, the framework simulates an external beam radiation therapy treatment and quantifies patient treatment outcomes using the previously defined therapeutic operating characteristic (TOC) curve. As a preliminary demonstration, TOC curves were calculated for various CT acquisition and reconstruction parameters, with the goal of assessing and optimizing simulation CT image quality for radiation therapy. Sources of randomness and bias within the system were analyzed. Results: The relationship between CT imaging dose and patient treatment outcome was objectively quantified in terms of a singular value, the area under the TOC (AUTOC) curve. The AUTOC decreases more rapidly for low-dose imaging protocols. AUTOC variation introduced by the dose optimization algorithm was approximately 0.02%, at the 95% confidence interval. Conclusion: A model observer has been developed and implemented to assess image quality based on radiation therapy treatment efficacy. It enables objective determination of appropriate imaging parameter values (e.g. imaging dose). Framework flexibility allows for
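    The AUTOC figure of merit described above reduces to a simple numerical integration over the TOC curve. A minimal sketch using trapezoidal integration; the curve points below are synthetic illustrations, not data from the study:

```python
# Hedged sketch: reducing a therapeutic operating characteristic (TOC)
# curve to a single figure of merit, the area under the curve (AUTOC),
# by trapezoidal integration. The sample points are invented for
# illustration, not taken from the study.

def autoc(ntcp, tcp):
    """Trapezoidal area under a TOC curve given matched sample points."""
    area = 0.0
    for i in range(1, len(ntcp)):
        area += 0.5 * (tcp[i] + tcp[i - 1]) * (ntcp[i] - ntcp[i - 1])
    return area

# Synthetic TOC: tumor control probability (y) versus normal tissue
# complication probability (x), both swept over treatment dose levels.
ntcp = [0.0, 0.1, 0.2, 0.4, 0.7, 1.0]
tcp  = [0.0, 0.5, 0.7, 0.85, 0.95, 1.0]
print(f"AUTOC = {autoc(ntcp, tcp):.3f}")
```

    A steeper TOC curve (high tumor control at low complication probability) pushes the AUTOC toward 1, which is why the study can use it as a single scalar to compare imaging protocols.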

  6. Process Correlation Analysis Model for Process Improvement Identification

    PubMed Central

    Park, Sooyong

    2014-01-01

    Software process improvement aims at improving the development process of software systems. It is initiated by a process assessment identifying strengths and weaknesses, and improvement plans are developed based on the findings. In general, a process reference model (e.g., CMMI) is used as the base throughout the software process improvement effort. CMMI defines a set of process areas involved in software development and what is to be carried out in each process area in terms of goals and practices. Process areas and their elements (goals and practices) are often correlated due to the iterative nature of the software development process. However, in current practice, correlations of process elements are often overlooked in the development of an improvement plan, which diminishes the efficiency of the plan. This is mainly attributed to the significant effort involved and the lack of required expertise. In this paper, we present a process correlation analysis model that helps identify correlations of process elements from the results of process assessment. This model is defined based on CMMI and empirical data of improvement practices. We evaluate the model using industrial data. PMID:24977170

  7. Process correlation analysis model for process improvement identification.

    PubMed

    Choi, Su-jin; Kim, Dae-Kyoo; Park, Sooyong

    2014-01-01

    Software process improvement aims at improving the development process of software systems. It is initiated by a process assessment identifying strengths and weaknesses, and improvement plans are developed based on the findings. In general, a process reference model (e.g., CMMI) is used as the base throughout the software process improvement effort. CMMI defines a set of process areas involved in software development and what is to be carried out in each process area in terms of goals and practices. Process areas and their elements (goals and practices) are often correlated due to the iterative nature of the software development process. However, in current practice, correlations of process elements are often overlooked in the development of an improvement plan, which diminishes the efficiency of the plan. This is mainly attributed to the significant effort involved and the lack of required expertise. In this paper, we present a process correlation analysis model that helps identify correlations of process elements from the results of process assessment. This model is defined based on CMMI and empirical data of improvement practices. We evaluate the model using industrial data.

  8. Basin infilling of a schematic 1D estuary using two different approaches: an aggregate diffusive type model and a process-based model.

    NASA Astrophysics Data System (ADS)

    Laginha Silva, Patricia; Martins, Flávio A.; Boski, Tomász; Sampath, Dissanayake M. R.

    2010-05-01

    processes. In this viewpoint the system is broken down into its fundamental components and processes, and the model is built up by selecting the important processes regardless of their time and space scales. This viewpoint has only become possible to pursue in recent years due to improvements in system knowledge and computer power (Paola, 2000). The primary aim of this paper is to demonstrate that it is possible to simulate the evolution of the sediment river bed, traditionally studied with synthetic models, with a process-based hydrodynamic, sediment transport and morphodynamic model, solving explicitly the mass and momentum conservation equations. With this objective, a comparison between two mathematical models for alluvial rivers is made to simulate the evolution of the sediment river bed of a conceptual 1D embayment over periods on the order of a thousand years: the traditional synthetic basin-infilling aggregate diffusive type model based on the diffusion equation (Paola, 2000), used in the "synthesist" viewpoint, and the process-based model MOHID (Miranda et al., 2000). The simulation of the sediment river bed evolution achieved by the process-based model MOHID is very similar to that obtained by the diffusive type model, but more complete due to the complexity of the process-based model. The MOHID results are more comprehensive and realistic because this type of model includes processes that a synthetic model cannot describe. Finally, the combined effects of tide, sea-level rise and river discharges were investigated in the process-based model. These effects cannot be simulated using the diffusive type model. The results demonstrate the feasibility of using process-based models to perform studies on scales of 10000 years. This is an advance relative to the use of synthetic models, enabling the use of variable forcing. REFERENCES • Briggs, L.I. and Pollack, H.N., 1967. Digital model of evaporite sedimentation. 
Science, 155, 453
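    The diffusive type model referenced above evolves the bed elevation h by a diffusion equation, dh/dt = K d2h/dx2. A minimal explicit finite-difference sketch; the grid size, diffusivity and boundary values are illustrative assumptions, not the paper's configuration:

```python
# Hedged sketch of the aggregate diffusive-type approach (Paola, 2000):
# bed elevation evolving by dh/dt = K * d2h/dx2, solved with an
# explicit FTCS finite-difference scheme with fixed-elevation ends.
# Grid, diffusivity and boundary values are illustrative only.

def diffuse_bed(h, K, dx, dt, steps):
    """Explicit FTCS update of bed elevation h (fixed-elevation ends)."""
    r = K * dt / dx**2
    assert r <= 0.5, "explicit scheme unstable for K*dt/dx^2 > 0.5"
    for _ in range(steps):
        h = [h[0]] + [
            h[i] + r * (h[i + 1] - 2 * h[i] + h[i - 1])
            for i in range(1, len(h) - 1)
        ] + [h[-1]]
    return h

# A step-like initial bed relaxes toward a linear infill profile.
h0 = [1.0] * 25 + [0.0] * 25
h = diffuse_bed(h0, K=1.0, dx=1.0, dt=0.4, steps=2000)
print([round(v, 2) for v in h[::10]])
```

    The stability bound r <= 0.5 is the classical constraint on the explicit scheme; a process-based model like MOHID replaces this single equation with explicit mass and momentum conservation, which is why it can also carry tidal and sea-level forcing.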

  9. Assessing changes to South African maize production areas in 2055 using empirical and process-based crop models

    NASA Astrophysics Data System (ADS)

    Estes, L.; Bradley, B.; Oppenheimer, M.; Beukes, H.; Schulze, R. E.; Tadross, M.

    2010-12-01

    Rising temperatures and altered precipitation patterns associated with climate change pose a significant threat to crop production, particularly in developing countries. In South Africa, a semi-arid country with a diverse agricultural sector, anthropogenic climate change is likely to affect staple crops and decrease food security. Here, we focus on maize production, South Africa’s most widely grown crop and one with high socio-economic value. We build on previous coarser-scaled studies by working at a finer spatial resolution and by employing two different modeling approaches: the process-based DSSAT Cropping System Model (CSM, version 4.5), and an empirical distribution model (Maxent). For climate projections, we use an ensemble of 10 general circulation models (GCMs) run under both high and low CO2 emissions scenarios (SRES A2 and B1). The models were down-scaled to historical climate records for 5838 quinary-scale catchments covering South Africa (mean area = 164.8 km2), using a technique based on self-organizing maps (SOMs) that generates precipitation patterns more consistent with observed gradients than those produced by the parent GCMs. Soil hydrological and mechanical properties were derived from textural and compositional data linked to a map of 26422 land forms (mean area = 46 km2), while organic carbon from 3377 soil profiles was mapped using regression kriging with 8 spatial predictors. CSM was run using typical management parameters for the several major dryland maize production regions, and with projected CO2 values. The Maxent distribution model was trained using maize locations identified using annual phenology derived from satellite images coupled with airborne crop sampling observations. Temperature and precipitation projections were based on GCM output, with an additional 10% increase in precipitation to simulate higher water-use efficiency under future CO2 concentrations. The two modeling approaches provide spatially explicit projections of

  10. Observability Analysis of a Matrix Kalman Filter-Based Navigation System Using Visual/Inertial/Magnetic Sensors

    PubMed Central

    Feng, Guohu; Wu, Wenqi; Wang, Jinling

    2012-01-01

    A matrix Kalman filter (MKF) has been implemented for an integrated navigation system using visual/inertial/magnetic sensors. The MKF rearranges the original nonlinear process model into a pseudo-linear process model. We employ the observability rank criterion based on Lie derivatives to verify the conditions under which the nonlinear system is observable. It has been proved that the observability conditions are: (a) at least one degree of rotational freedom is excited, and (b) at least two linearly independent horizontal lines and one vertical line are observed. Experimental results have validated the correctness of these observability conditions. PMID:23012523
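    The paper checks observability of a nonlinear system with the Lie-derivative rank criterion; the linear analogue is the Kalman rank condition rank([C; CA; ...; CA^(n-1)]) = n. A minimal sketch on a toy constant-velocity model (not the paper's visual/inertial/magnetic sensor model):

```python
# Hedged sketch: the Kalman rank condition for observability of a
# linear system x[k+1] = A x[k], y[k] = C x[k], illustrated on a toy
# position-velocity model. This is the linear analogue of the
# Lie-derivative rank criterion used in the paper, not its sensor model.

def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def rank(M, eps=1e-9):
    """Matrix rank via Gaussian elimination with partial pivoting."""
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = max(range(r, len(M)), key=lambda i: abs(M[i][c]))
        if abs(M[piv][c]) < eps:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and abs(M[i][c]) > eps:
                f = M[i][c] / M[r][c]
                M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        r += 1
        if r == len(M):
            break
    return r

def observable(A, C):
    """True iff the stacked observability matrix has full column rank."""
    n = len(A)
    O, CAk = [], C
    for _ in range(n):
        O.extend(CAk)
        CAk = mat_mul(CAk, A)
    return rank(O) == n

A = [[1.0, 1.0], [0.0, 1.0]]              # position-velocity dynamics
print(observable(A, [[1.0, 0.0]]))        # position measured
print(observable(A, [[0.0, 1.0]]))        # velocity only
```

    Measuring position makes both states recoverable, while measuring only velocity leaves position unobservable; the paper's conditions (excited rotation, enough observed lines) play the same role for its nonlinear model.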

  11. A stochastic hybrid systems based framework for modeling dependent failure processes

    PubMed Central

    Fan, Mengfei; Zeng, Zhiguo; Zio, Enrico; Kang, Rui; Chen, Ying

    2017-01-01

    In this paper, we develop a framework to model and analyze systems that are subject to dependent, competing degradation processes and random shocks. The degradation processes are described by stochastic differential equations, whereas transitions between the system's discrete states are triggered by random shocks. The modeling is then based on Stochastic Hybrid Systems (SHS), whose state space comprises a continuous state determined by stochastic differential equations and a discrete state driven by stochastic transitions and reset maps. A set of differential equations is derived to characterize the conditional moments of the state variables. System reliability and its lower bounds are estimated from these conditional moments, using the First Order Second Moment (FOSM) method and the Markov inequality, respectively. The developed framework is applied to model three dependent failure processes from the literature, and a comparison is made to Monte Carlo simulations. The results demonstrate that the developed framework yields an accurate estimation of reliability at lower computational cost than traditional Monte Carlo-based methods. PMID:28231313

  12. A stochastic hybrid systems based framework for modeling dependent failure processes.

    PubMed

    Fan, Mengfei; Zeng, Zhiguo; Zio, Enrico; Kang, Rui; Chen, Ying

    2017-01-01

    In this paper, we develop a framework to model and analyze systems that are subject to dependent, competing degradation processes and random shocks. The degradation processes are described by stochastic differential equations, whereas transitions between the system's discrete states are triggered by random shocks. The modeling is then based on Stochastic Hybrid Systems (SHS), whose state space comprises a continuous state determined by stochastic differential equations and a discrete state driven by stochastic transitions and reset maps. A set of differential equations is derived to characterize the conditional moments of the state variables. System reliability and its lower bounds are estimated from these conditional moments, using the First Order Second Moment (FOSM) method and the Markov inequality, respectively. The developed framework is applied to model three dependent failure processes from the literature, and a comparison is made to Monte Carlo simulations. The results demonstrate that the developed framework yields an accurate estimation of reliability at lower computational cost than traditional Monte Carlo-based methods.
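    The two reliability estimates named above can be sketched from the first two moments of a degradation state X at a given time: FOSM approximates P(X < threshold) assuming X is roughly normal, while the Markov inequality gives a distribution-free lower bound. The moment values below are illustrative assumptions, not results from the paper:

```python
# Hedged sketch: FOSM reliability estimate and Markov-inequality lower
# bound, computed from assumed first two moments of a non-negative
# degradation state X. Illustrative only; the paper derives the moments
# from its SHS conditional-moment equations.
import math

def fosm_reliability(mean, std, threshold):
    """FOSM estimate of P(X < threshold) via the normal CDF."""
    beta = (threshold - mean) / std        # reliability index
    return 0.5 * (1.0 + math.erf(beta / math.sqrt(2.0)))

def markov_reliability_bound(mean, threshold):
    """Lower bound on P(X < threshold): P(X >= t) <= E[X]/t for X >= 0."""
    return 1.0 - mean / threshold

mean, std, threshold = 2.0, 0.5, 3.5       # assumed degradation moments
print(f"FOSM reliability  : {fosm_reliability(mean, std, threshold):.4f}")
print(f"Markov lower bound: {markov_reliability_bound(mean, threshold):.4f}")
```

    The Markov bound is much looser than the FOSM estimate because it uses only the mean, which matches the paper's use of it as a conservative lower bound rather than a point estimate.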

  13. Modeling Elevation and Aspect Controls on Emerging Ecohydrologic Processes and Ecosystem Patterns Using the Component-based Landlab Framework

    NASA Astrophysics Data System (ADS)

    Nudurupati, S. S.; Istanbulluoglu, E.; Adams, J. M.; Hobley, D. E. J.; Gasparini, N. M.; Tucker, G. E.; Hutton, E. W. H.

    2014-12-01

    Topography plays a commanding role in the organization of ecohydrologic processes and resulting vegetation patterns. In the southwestern United States, climate conditions lead to terrain aspect- and elevation-controlled ecosystems, with mesic north-facing and xeric south-facing vegetation types, and changes in biodiversity as a function of elevation from shrublands at low desert elevations, to mixed grass/shrublands at mid elevations, and forests at high elevations and ridge tops. These observed patterns have been attributed to differences in topography-mediated local soil moisture availability, micro-climatology, and life history processes of plants that control chances of plant establishment and survival. While ecohydrologic models represent local vegetation dynamics in sufficient detail down to sub-hourly time scales, plant life history and competition for space and resources have not been adequately represented in models. In this study we develop an ecohydrologic cellular automata model within the Landlab component-based modeling framework. This model couples local vegetation dynamics (biomass production, death) with plant establishment and competition processes for resources and space. The model is used to study vegetation organization in a semiarid New Mexico catchment where elevation and hillslope aspect play a defining role in plant types. Processes that lead to observed plant types across the landscape are examined by initializing the domain with randomly assigned plant types and systematically changing model parameters that couple plant response with soil moisture dynamics. Climate perturbation experiments are conducted to examine the plant response in space and time. Understanding these inherently transient ecohydrologic systems is critical to improving predictions of climate change impacts on ecosystems.

  14. Processing speed enhances model-based over model-free reinforcement learning in the presence of high working memory functioning

    PubMed Central

    Schad, Daniel J.; Jünger, Elisabeth; Sebold, Miriam; Garbusow, Maria; Bernhardt, Nadine; Javadi, Amir-Homayoun; Zimmermann, Ulrich S.; Smolka, Michael N.; Heinz, Andreas; Rapp, Michael A.; Huys, Quentin J. M.

    2014-01-01

    Theories of decision-making and its neural substrates have long assumed the existence of two distinct and competing valuation systems, variously described as goal-directed vs. habitual, or, more recently and based on statistical arguments, as model-free vs. model-based reinforcement-learning. Though both have been shown to control choices, the cognitive abilities associated with these systems are under ongoing investigation. Here we examine the link to cognitive abilities, and find that individual differences in processing speed covary with a shift from model-free to model-based choice control in the presence of above-average working memory function. This suggests shared cognitive and neural processes; provides a bridge between literatures on intelligence and valuation; and may guide the development of process models of different valuation components. Furthermore, it provides a rationale for individual differences in the tendency to deploy valuation systems, which may be important for understanding the manifold neuropsychiatric diseases associated with malfunctions of valuation. PMID:25566131

  15. Proposal of diagnostic process model for computer based diagnosis.

    PubMed

    Matsumura, Yasushi; Takeda, Toshihiro; Manabe, Shiro; Saito, Hirokazu; Teramoto, Kei; Kuwata, Shigeki; Mihara, Naoki

    2012-01-01

    We aim at making a diagnosis support system that can be put to practical use. We proposed a diagnostic process model based on simple knowledge which can be gleaned from textbooks. We defined a clinical finding (CF) as a general concept for a patient's symptom or finding, etc., whose value is expressed as a Boolean. We call a combination of several CFs a "CF pattern", and a set of CF patterns with their associated diseases a "case base". We consider diagnosis as a process of searching for an instance in the case base whose CF pattern matches that of the patient. The diseases which have the same CF pattern are candidates for diagnosis. We then select a CF which is present in only part of the candidates and check whether it is present or absent in the patient in order to narrow down the candidates. Because the case base does not exist in reality, the probability of a CF pattern is calculated as the product of the CF occurrence rates, assuming that CF occurrences are independent. Therefore, the knowledge required for diagnosis is the disease frequency by sex and age group and the CF-disease relation (a CF and its occurrence rate in the disease). By processing these two types of knowledge, diagnosis can be made.
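    The scoring rule described above (disease frequency times the product of independent CF occurrence rates) can be sketched directly. The diseases, findings and rates below are invented for illustration; they are not from the paper's knowledge base:

```python
# Hedged sketch of the described diagnostic process: candidates are
# scored by prior disease frequency times the product of per-CF
# occurrence rates, assuming CF independence as in the abstract.
# Diseases, CFs and all rates are invented for illustration.

# knowledge: disease -> (prior frequency, {CF: occurrence rate})
KB = {
    "disease_A": (0.01, {"fever": 0.9, "cough": 0.8, "rash": 0.1}),
    "disease_B": (0.05, {"fever": 0.7, "cough": 0.2, "rash": 0.6}),
    "disease_C": (0.002, {"fever": 0.3, "cough": 0.9, "rash": 0.05}),
}

def score(disease, findings):
    """P(CF pattern, disease) = prior * product of per-CF rates."""
    prior, rates = KB[disease]
    p = prior
    for cf, present in findings.items():
        r = rates.get(cf, 0.0)
        p *= r if present else (1.0 - r)
    return p

def rank_candidates(findings):
    """Normalize joint scores into a posterior-style ranking."""
    scores = {d: score(d, findings) for d in KB}
    total = sum(scores.values()) or 1.0
    return sorted(((d, s / total) for d, s in scores.items()),
                  key=lambda x: -x[1])

# a patient with fever and cough, no rash
for disease, posterior in rank_candidates(
        {"fever": True, "cough": True, "rash": False}):
    print(f"{disease}: {posterior:.3f}")
```

    Narrowing the candidate set, as the abstract describes, corresponds to adding one more CF to `findings` and re-ranking; a discriminating CF is one whose occurrence rates differ strongly across the remaining candidates.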

  16. Improving Science Process Skills for Primary School Students Through 5E Instructional Model-Based Learning

    NASA Astrophysics Data System (ADS)

    Choirunnisa, N. L.; Prabowo, P.; Suryanti, S.

    2018-01-01

    The main objective of this study is to describe the effectiveness of 5E instructional model-based learning in improving primary school students' science process skills. Science process skills are important for students, as they are the foundation for enhancing the mastery of concepts and the thinking skills needed in the 21st century. The study used an experimental one-group pre-test/post-test design. The results of this study show that (1) the implementation of learning in both classes, IVA and IVB, shows that the percentage of learning implementation increased, indicating a better quality of learning, and (2) the percentage of students' science process skills test results on the aspects of observing, formulating hypotheses, determining variables, interpreting data and communicating increased as well.

  17. Pitfalls in alignment of observation models resolved using PROV as an upper ontology

    NASA Astrophysics Data System (ADS)

    Cox, S. J. D.

    2015-12-01

    A number of models for observation metadata have been developed in the earth and environmental science communities, including OGC's Observations and Measurements (O&M), the ecosystems community's Extensible Observation Ontology (OBOE), the W3C's Semantic Sensor Network Ontology (SSNO), and the CUAHSI/NSF Observations Data Model v2 (ODM2). In order to combine data formalized in the various models, mappings between these must be developed. In some cases this is straightforward: since ODM2 took O&M as its starting point, their terminology is almost completely aligned. In the eco-informatics world observations are almost never made in isolation of other observations, so OBOE pays particular attention to groupings, with multiple atomic 'Measurements' in each oboe:Observation which does not have a result of its own and thus plays a different role to an om:Observation. And while SSN also adopted terminology from O&M, mapping is confounded by the fact that SSN uses DOLCE as its foundation and places ssn:Observations as 'Social Objects' which are explicitly disjoint from 'Events', while O&M is formalized as part of the ISO/TC 211 harmonised (UML) model and sees om:Observations as value assignment activities. Foundational ontologies (such as BFO, GFO, UFO or DOLCE) can provide a framework for alignment, but different upper ontologies can be based in profoundly different worldviews and use of incommensurate frameworks can confound rather than help. A potential resolution is provided by comparing recent studies that align SSN and O&M, respectively, with the PROV-O ontology. PROV-O provides just three base classes: Entity, Activity and Agent. om:Observation is sub-classed from prov:Activity, while ssn:Observation is sub-classed from prov:Entity. This confirms that, despite the same name, om:Observation and ssn:Observation denote different aspects of the observation process: the observation event, and the record of the observation event, respectively. 
Alignment with the simple

  18. Modelling and control for laser based welding processes: modern methods of process control to improve quality of laser-based joining methods

    NASA Astrophysics Data System (ADS)

    Zäh, Ralf-Kilian; Mosbach, Benedikt; Hollwich, Jan; Faupel, Benedikt

    2017-02-01

    To ensure the competitiveness of manufacturing companies it is indispensable to optimize their manufacturing processes. Slight variations of process parameters and machine settings should have only marginal effects on product quality; therefore, the largest possible processing window is required. Such parameters are, for example, the movement of the laser beam across the component in laser keyhole welding. It is therefore necessary to keep the formation of weld seams within specified limits. Currently, the quality of laser welding processes is ensured by post-process methods, such as ultrasonic inspection, or by special in-process methods. These in-process systems only achieve a simple evaluation which shows whether the weld seam is acceptable or not. Furthermore, in-process systems provide no feedback for changing control variables such as laser speed or laser power. In this paper the research group presents current results from the research field of online monitoring, online control and model predictive control in laser welding processes to increase product quality. To record the characteristics of the welding process, tested online methods are used during the process. Based on the measurement data, a state-space model is ascertained which includes all the control variables of the system. Using simulation tools, the model predictive controller (MPC) is designed for the model and integrated into an NI real-time system.
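    The model-predictive idea described above can be sketched with a scalar discrete-time state-space model x[k+1] = a*x[k] + b*u[k] of one weld characteristic and a controller that, at each step, picks the input minimizing the predicted squared tracking error over a short horizon. The model coefficients, set point and candidate input grid are illustrative assumptions, not the paper's identified model:

```python
# Hedged sketch of receding-horizon predictive control on a toy scalar
# state-space model (e.g. x = seam depth, u = laser power level).
# Coefficients, set point and the brute-force candidate grid are
# illustrative assumptions only.

def mpc_step(x, target, a, b, horizon=5, candidates=None):
    """Brute-force one-input MPC: try constant inputs, keep the best."""
    if candidates is None:
        candidates = [i / 10.0 for i in range(0, 21)]   # u in [0, 2]
    best_u, best_cost = None, float("inf")
    for u in candidates:
        xk, cost = x, 0.0
        for _ in range(horizon):
            xk = a * xk + b * u               # predict forward
            cost += (xk - target) ** 2        # accumulate tracking error
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

a, b, target = 0.8, 0.5, 1.0
x, history = 0.0, []
for _ in range(20):                           # receding-horizon loop
    u = mpc_step(x, target, a, b)
    x = a * x + b * u                         # apply only the first input
    history.append(x)
print(f"final state: {history[-1]:.3f} (target {target})")
```

    Only the first input of each optimized sequence is applied before re-optimizing, which is the defining receding-horizon structure of MPC; a real implementation would use the identified multivariable state-space model and a proper QP solver.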

  19. Process-based upscaling of surface-atmosphere exchange

    NASA Astrophysics Data System (ADS)

    Keenan, T. F.; Prentice, I. C.; Canadell, J.; Williams, C. A.; Wang, H.; Raupach, M. R.; Collatz, G. J.; Davis, T.; Stocker, B.; Evans, B. J.

    2015-12-01

    Empirical upscaling techniques such as machine learning and data mining have proven to be invaluable tools for the global scaling of disparate observations of surface-atmosphere exchange, but they are not based on a theoretical understanding of the key processes involved. This makes spatial and temporal extrapolation outside of the training domain difficult at best. There is therefore a clear need to incorporate knowledge of ecosystem function in combination with the strength of data mining. Here, we present such an approach. We describe a novel diagnostic process-based model of global photosynthesis and ecosystem respiration, which is directly informed by a variety of global datasets relevant to ecosystem state and function. We use the model framework to estimate global carbon cycling both spatially and temporally, with a specific focus on the mechanisms responsible for long-term change. Our results show the importance of incorporating process knowledge into upscaling approaches, and highlight the effect of key processes on the terrestrial carbon cycle.

  20. Switching and optimizing control for coal flotation process based on a hybrid model

    PubMed Central

    Dong, Zhiyong; Wang, Ranfeng; Fan, Minqiang; Fu, Xiang

    2017-01-01

    Flotation is an important part of coal preparation, and the flotation column is widely applied as efficient flotation equipment. The process is complex and affected by many factors, with the froth depth and reagent dosage being two of the most important and frequently manipulated variables. This paper proposes a new method of switching and optimizing control for the coal flotation process. A hybrid model is built and evaluated using industrial data. First, wavelet analysis and principal component analysis (PCA) are applied for signal pre-processing. Second, a control model for optimizing the set point of the froth depth is constructed based on fuzzy control, and a control model is designed to optimize the reagent dosages based on an expert system. Finally, the least squares support vector machine (LS-SVM) is used to identify the operating conditions of the flotation process and to select one of the two models (froth depth or reagent dosage) for subsequent operation according to the condition parameters. The hybrid model is developed and evaluated on an industrial coal flotation column and exhibits satisfactory performance. PMID:29040305

  1. Extracting business vocabularies from business process models: SBVR and BPMN standards-based approach

    NASA Astrophysics Data System (ADS)

    Skersys, Tomas; Butleris, Rimantas; Kapocius, Kestutis

    2013-10-01

    Approaches for the analysis and specification of business vocabularies and rules are very relevant topics in both the Business Process Management and Information Systems Development disciplines. However, in the common practice of Information Systems Development, business modeling activities are still mostly empirical in nature. In this paper, basic aspects of an approach for the semi-automated extraction of business vocabularies from business process models are presented. The approach is based on the novel business modeling-level OMG standards "Business Process Model and Notation" (BPMN) and "Semantics of Business Vocabulary and Business Rules" (SBVR), thus contributing to OMG's vision of Model-Driven Architecture (MDA) and to model-driven development in general.

  2. Verification of ARMA identification for modelling temporal correlation of GPS observations using the toolbox ARMASA

    NASA Astrophysics Data System (ADS)

    Luo, Xiaoguang; Mayer, Michael; Heck, Bernhard

    2010-05-01

    One essential deficiency of the stochastic model used in many GNSS (Global Navigation Satellite Systems) software products consists in neglecting the temporal correlation of GNSS observations. By analysing appropriately detrended time series of observation residuals resulting from GPS (Global Positioning System) data processing, the temporal correlation behaviour of GPS observations can be sufficiently described by means of so-called autoregressive moving average (ARMA) processes. Using the toolbox ARMASA, which is available free of charge on MATLAB® Central (the open exchange platform for the MATLAB® and SIMULINK® user community), a well-fitting time series model can be identified automatically in three steps. Firstly, AR, MA, and ARMA models are computed up to some user-specified maximum order. Subsequently, for each model type, the best-fitting model is selected using the combined information criterion (for AR processes) and the generalised information criterion (for MA and ARMA processes), respectively. The final model identification among the best-fitting AR, MA, and ARMA models is performed based on the minimum prediction error characterising the discrepancies between the given data and the fitted model. The ARMA coefficients are computed using Burg's maximum entropy algorithm (for AR processes) and Durbin's first (for MA processes) and second (for ARMA processes) methods, respectively. This paper verifies the performance of the automated ARMA identification using the toolbox ARMASA. For this purpose, a representative database is generated by means of ARMA simulation with respect to sample size, correlation level, and model complexity. The model error, defined as a transform of the prediction error, is used as a measure of the deviation between the true and the estimated model. The results of the study show that the recognition rates of the underlying true processes increase with increasing sample size and decrease with rising model complexity. Considering large sample sizes, the true underlying processes can be
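    The identification task above can be illustrated on the simplest member of the ARMA family: simulate an AR(1) process and recover its coefficient from the Yule-Walker relation phi = r(1)/r(0). This is a toy illustration only; ARMASA uses Burg/Durbin estimators and information criteria for full automatic order selection:

```python
# Hedged sketch: simulate an AR(1) process (a simple ARMA member of the
# kind used to model temporal correlation of GPS residuals) and recover
# its coefficient via the Yule-Walker relation phi = r(1) / r(0).
# Illustrative only; not the ARMASA estimation chain.
import random

def simulate_ar1(phi, n, seed=42):
    """Generate n samples of x[t] = phi * x[t-1] + white noise."""
    rng = random.Random(seed)
    x, xs = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, 1.0)
        xs.append(x)
    return xs

def autocov(xs, lag):
    """Biased sample autocovariance at the given lag."""
    m = sum(xs) / len(xs)
    return sum((xs[i] - m) * (xs[i - lag] - m)
               for i in range(lag, len(xs))) / len(xs)

def estimate_phi(xs):
    """Yule-Walker estimate of the AR(1) coefficient."""
    return autocov(xs, 1) / autocov(xs, 0)

xs = simulate_ar1(phi=0.6, n=20000)
print(f"estimated AR(1) coefficient: {estimate_phi(xs):.3f}")
```

    As the paper's results suggest, recovery improves with sample size: with short series the estimate scatters widely around the true coefficient, while with long series it converges.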

  3. A Process-Based Transport-Distance Model of Aeolian Transport

    NASA Astrophysics Data System (ADS)

    Naylor, A. K.; Okin, G.; Wainwright, J.; Parsons, A. J.

    2017-12-01

    We present a new approach to modeling aeolian transport based on transport distance. Particle fluxes are based on statistical probabilities of particle detachment and distributions of transport lengths, which are functions of particle size classes. A computational saltation model is used to simulate transport distances over a variety of sizes. These are fit to an exponential distribution, which has the advantages of computational economy, concordance with current field measurements, and a meaningful relationship to theoretical assumptions about mean and median particle transport distance. This novel approach includes particle-particle interactions, which are important for sustaining aeolian transport and dust emission. Results from this model are compared with results from both bulk and particle-size-specific transport equations as well as empirical wind tunnel studies. The transport-distance approach has been successfully used for hydraulic processes, and extending this methodology from hydraulic to aeolian transport opens up the possibility of modeling joint transport by wind and water using consistent physics. Particularly in nutrient-limited environments, modeling the joint action of aeolian and hydraulic transport is essential for understanding the spatial distribution of biomass across landscapes and how it responds to climatic variability and change.
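    The exponential transport-length distribution described above lends itself to a simple sampling sketch: draw downwind distances per size class and deposit particles into bins to form a deposition profile. The mean transport lengths and bin geometry below are invented for illustration:

```python
# Hedged sketch: drawing aeolian transport distances from the
# exponential distribution the abstract fits, and depositing particles
# into downwind bins. Mean transport lengths per size class and the
# bin geometry are invented for illustration.
import random

def transport_profile(mean_lengths, n_particles, bin_width, n_bins, seed=1):
    """Deposit n_particles per size class at exponentially distributed
    downwind distances; return per-bin deposition counts."""
    rng = random.Random(seed)
    bins = [0] * n_bins
    for mean_len in mean_lengths:
        for _ in range(n_particles):
            d = rng.expovariate(1.0 / mean_len)   # mean distance = mean_len
            i = int(d / bin_width)
            if i < n_bins:
                bins[i] += 1
    return bins

# two illustrative size classes: finer grains travel farther on average
bins = transport_profile([2.0, 0.5], n_particles=10000,
                         bin_width=0.5, n_bins=20)
print(bins[:5])
```

    The exponential form makes both the mean (the distribution parameter) and the median (mean times ln 2) transport distances available in closed form, which is part of the computational economy the abstract mentions.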

  4. Support of surgical process modeling by using adaptable software user interfaces

    NASA Astrophysics Data System (ADS)

    Neumuth, T.; Kaschek, B.; Czygan, M.; Goldstein, D.; Strauß, G.; Meixensberger, J.; Burgert, O.

    2010-03-01

    Surgical Process Modeling (SPM) is a powerful method for acquiring data about the evolution of surgical procedures. Surgical Process Models are used in a variety of use cases including evaluation studies, requirements analysis and procedure optimization, surgical education, and workflow management scheme design. This work proposes the use of adaptive, situation-aware user interfaces in observation support software for SPM. We developed a method to support the observer's modeling task by using an ontological knowledge base, which drives the observer's graphical user interface to restrict the terminology search space depending on the current situation. The evaluation study shows that the workload of the observer was decreased significantly by using adaptive user interfaces. 54 SPM observation protocols were analyzed using the NASA Task Load Index, and it was shown that the adaptive user interface significantly disburdens the observer in the workload criteria effort, mental demand and temporal demand, helping the observer concentrate on the essential task of modeling the surgical process.

  5. A Process-based, Climate-Sensitive Model to Derive Methane Emissions from Natural Wetlands: Application to 5 Wetland Sites, Sensitivity to Model Parameters and Climate

    NASA Technical Reports Server (NTRS)

    Walter, Bernadette P.; Heimann, Martin

    1999-01-01

    Methane emissions from natural wetlands constitute the largest methane source at present and depend strongly on climate. In order to investigate the response of methane emissions from natural wetlands to climate variations, a one-dimensional, process-based, climate-sensitive model to derive methane emissions from natural wetlands is developed. In the model, the processes leading to methane emission are simulated within a one-dimensional soil column, and the three transport mechanisms (diffusion, plant-mediated transport, and ebullition) are modeled explicitly. The model forcing consists of daily values of soil temperature, water table and net primary productivity, and at permafrost sites the thaw depth is included. The methane model is tested using observational data obtained at five wetland sites located in North America, Europe and Central America, representing a large variety of environmental conditions. It can be shown that in most cases seasonal variations in methane emissions can be explained by the combined effect of changes in soil temperature and the position of the water table. Our results also show that a process-based approach is needed, because there is no simple relationship between these controlling factors and methane emissions that applies to a variety of wetland sites. The sensitivity of the model to the choice of key model parameters is tested, and further sensitivity tests are performed to demonstrate how methane emissions from wetlands respond to climate variations.

  6. Validating the ACE Model for Evaluating Student Performance Using a Teaching-Learning Process Based on Computational Modeling Systems

    ERIC Educational Resources Information Center

    Louzada, Alexandre Neves; Elia, Marcos da Fonseca; Sampaio, Fábio Ferrentini; Vidal, Andre Luiz Pestana

    2014-01-01

    The aim of this work is to adapt and test, in a Brazilian public school, the ACE model proposed by Borkulo for evaluating student performance as a teaching-learning process based on computational modeling systems. The ACE model is based on different types of reasoning involving three dimensions. In addition to adapting the model and introducing…

  7. Aircraft- and ground-based assessment of the CCN-AOD relationship and implications on model analysis of ACI and underlying aerosol processes

    NASA Astrophysics Data System (ADS)

    Shinozuka, Y.; Clarke, A. D.; Nenes, A.; Lathem, T. L.; Redemann, J.; Jefferson, A.; Wood, R.

    2014-12-01

    Contrary to common assumptions in satellite-based modeling of aerosol-cloud interactions, ∂logCCN/∂logAOD is less than unity, i.e., the number concentration of cloud condensation nuclei (CCN) less than doubles as aerosol optical depth (AOD) doubles. This can be explained by omnipresent aerosol processes. Condensation, coagulation and cloud processing, for example, generally make particles scatter more light while hardly increasing their number. This paper reports on the relationship in local air masses between CCN concentration, aerosol size distribution and light extinction observed from aircraft and the ground at diverse locations. The CCN-to-local-extinction relationship, when averaged over ~1 km distance and sorted by the wavelength dependence of extinction, varies approximately by a factor of 2, reflecting the variability in aerosol intensive properties. This, together with retrieval uncertainties and the variability in aerosol spatio-temporal distribution and hygroscopic growth, challenges satellite-based CCN estimates. However, the large differences in estimated CCN may correspond to a considerably lower uncertainty in cloud drop number concentration (CDNC), given the sublinear response of CDNC to CCN. Overall, our findings from airborne and ground-based observations call for model-based reexamination of aerosol-cloud interactions and underlying aerosol processes.
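    The headline quantity ∂logCCN/∂logAOD can be estimated as an ordinary least-squares slope in log-log space. The sketch below uses synthetic data with an assumed sublinear exponent of 0.7, purely to illustrate the estimator; none of the numbers come from the study:

    ```python
    import math
    import random

    def loglog_slope(aod, ccn):
        """OLS slope of log10(CCN) vs log10(AOD), i.e. dlogCCN/dlogAOD.
        A slope below 1 means CCN less than doubles as AOD doubles."""
        x = [math.log10(a) for a in aod]
        y = [math.log10(c) for c in ccn]
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        var = sum((xi - mx) ** 2 for xi in x)
        return cov / var

    # Synthetic data with an assumed sublinear relation CCN ∝ AOD^0.7
    random.seed(1)
    aod = [10 ** random.uniform(-2, 0) for _ in range(500)]
    ccn = [300.0 * a ** 0.7 * 10 ** random.gauss(0, 0.05) for a in aod]
    print(round(loglog_slope(aod, ccn), 2))
    ```

    In real retrievals the scatter term would be far larger, which is exactly the difficulty the abstract raises for satellite-based CCN estimates.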

  8. The Lunar Phases Project: A Mental Model-Based Observational Project for Undergraduate Nonscience Majors

    ERIC Educational Resources Information Center

    Meyer, Angela Osterman; Mon, Manuel J.; Hibbard, Susan T.

    2011-01-01

    We present our Lunar Phases Project, an ongoing effort utilizing students' actual observations within a mental model building framework to improve student understanding of the causes and process of the lunar phases. We implement this project with a sample of undergraduate, nonscience major students enrolled in a midsized public university located…

  9. Key issues, observations and goals for coupled, thermodynamic/geodynamic models

    NASA Astrophysics Data System (ADS)

    Kelemen, P. B.

    2017-12-01

    In coupled thermodynamic/geodynamic models, the focus should be on processes involving major rock-forming minerals and simple fluid compositions, and on parameters with first-order effects on likely dynamic processes: In a given setting, will fluid mass increase or decrease? How about solid density? Will flow become localized or diffuse? Will rocks flow or break? How do reactions affect global processes such as the formation and evolution of the plates, plate boundary deformation, metamorphism, weathering, climate and geochemical cycles? Important reaction feedbacks in geodynamics include formation of dissolution channels and armored channels; divergence of flow and formation of permeability barriers due to crystallization in pore space; localization of fluid transport and ductile deformation in shear zones; reaction-driven cracking; mechanical channels in granular media; shear heating; density instabilities; viscous fluid-weakening; fluid-induced frictional failure; and hydraulic fracture. Density instabilities often lead to melting, and there is an interesting dialectic between porous flow and diapirs. The best models provide a simple but comprehensive framework that can account for the general features of many or most of these phenomena. Ideally, calculations based on thermodynamic data and rheological observations alone should delineate the regimes in which each of these processes will occur and the boundaries between them. These often start with "toy models" and lab experiments on analog systems, with highly approximate scaling to simplified geological conditions and materials. Geologic observations provide the best constraints where 'frozen' fluid transport pathways or deformation processes are preserved. Inferences about completed processes based on fluid or solid products alone are more challenging and less unique. Not all important processes have good examples in outcrop, so directed searches for specific phenomena may fail.
A highly generalized approach provides a way

  10. Implementing a Nitrogen-Based Model for Autotrophic Respiration Using Satellite and Field Observations

    NASA Technical Reports Server (NTRS)

    Choudhury, Bhaskar J.; Houser, Paul (Technical Monitor)

    2001-01-01

    In process-level, mechanistic modeling, the rate of carbon accumulation by terrestrial plant communities is the difference between the rate of gross photosynthesis by a canopy (A(sub g)) and the autotrophic respiration (R) of the stand. Observations for different biomes often show R to be a large and variable fraction of A(sub g), ca. 35% to 75%, although other studies suggest the ratio of R to A(sub g) is less variable. Here, R has been calculated according to a two-compartment model as the sum of maintenance and growth components. The maintenance respiration of foliage and living fine roots for different biomes has been determined objectively from the observed nitrogen content of these organs. The sapwood maintenance respiration is based on pipe theory, and checked against an independently derived equation considering sapwood biomass and its maintenance coefficient. The growth respiration has been calculated from the difference of A(sub g) and maintenance respiration. A(sub g) is obtained as the product of biome-specific radiation use efficiency for gross photosynthesis under unstressed conditions and intercepted photosynthetically active radiation, adjusted for stress. Calculations have been done using satellite and ground observations for 36 consecutive months (1987-1989) over large contiguous areas (ca. 10(exp 5) sq km) of boreal forest, crop land, temperate deciduous forest, temperate grassland, tropical deciduous forest, tropical evergreen forest, tropical savanna, and tundra. The ratio of annual respiration to gross photosynthesis, R/A(sub g), is found to be 0.5-0.6 for temperate and cold-adapted biome areas, but somewhat higher for tropical biome areas (0.6-0.7). Interannual variation of the fluxes is found to be generally less than 15%. Calculated fluxes are compared with observations and several previous estimates. Results of sensitivity analysis are presented for uncertainties in parameterization and input data. It is found that
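    The two-compartment bookkeeping can be sketched in a few lines. All fluxes and the growth coefficient below are hypothetical illustration values, not the paper's parameters:

    ```python
    def autotrophic_respiration(a_g, maint_foliage, maint_roots, maint_sapwood,
                                growth_coeff=0.25):
        """Two-compartment model: R = maintenance + growth respiration.
        Foliage and fine-root maintenance terms would come from tissue
        nitrogen content, sapwood maintenance from pipe theory; growth
        respiration is an assumed fraction of (A_g - maintenance)."""
        maintenance = maint_foliage + maint_roots + maint_sapwood
        growth = growth_coeff * (a_g - maintenance)
        return maintenance + growth

    # Illustrative annual fluxes (g C m-2 yr-1, hypothetical values)
    a_g = 2000.0
    r = autotrophic_respiration(a_g, 420.0, 380.0, 150.0)
    print(round(r / a_g, 2))  # ratio R/A_g
    ```

    With these assumed numbers the ratio lands in the 0.5-0.7 range reported for the studied biomes, which is only meant to show the structure of the calculation.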

  11. Ionospheric effects in uncalibrated phase delay estimation and ambiguity-fixed PPP based on raw observable model

    NASA Astrophysics Data System (ADS)

    Gu, Shengfeng; Shi, Chuang; Lou, Yidong; Liu, Jingnan

    2015-05-01

    Zero-difference (ZD) ambiguity resolution (AR) reveals the potential to further improve the performance of precise point positioning (PPP). Traditionally, PPP AR is achieved with Melbourne-Wübbena and ionosphere-free combinations, in which ionosphere effects are removed. To exploit ionosphere characteristics, PPP AR with L1 and L2 raw observables has also been developed recently. In this study, we apply this new approach to uncalibrated phase delay (UPD) generation and ZD AR and compare it with the traditional model. The raw-observable processing strategy treats each ionosphere delay as an unknown parameter. In this manner, both an a priori ionosphere correction model and its spatio-temporal correlation can be employed as constraints to improve the ambiguity resolution. However, theoretical analysis indicates that for the wide-lane (WL) UPD retrieved from L1/L2 ambiguities to benefit from this raw-observable approach, high-precision ionosphere corrections of better than 0.7 total electron content units (TECU) are essential. This conclusion is then confirmed with over 1 year of data collected at about 360 stations. First, both global and regional ionosphere models were generated and evaluated; the results demonstrated that, for large-scale ionosphere modeling, only an accuracy of 3.9 TECU can be achieved on average for the vertical delays, and this accuracy can be improved to about 0.64 TECU when a dense network is involved. Based on these ionosphere products, WL/narrow-lane (NL) UPDs are then extracted with the raw observable model. The NL ambiguity reveals better stability and consistency compared to the traditional approach. Nonetheless, the WL ambiguity can hardly be improved even when constrained with the high spatio-temporal resolution ionospheric corrections. By applying both these approaches in PPP-RTK, it is interesting to find that the traditional model is more efficient in AR, as evidenced by the shorter time to first fix, while the three

  12. Using artificial neural networks to model aluminium based sheet forming processes and tools details

    NASA Astrophysics Data System (ADS)

    Mekras, N.

    2017-09-01

    In this paper, a methodology and a software system are presented for using Artificial Neural Networks (ANNs) to model aluminium-based sheet forming processes. ANN models are created by training the networks on experimental, trial, and historical records of process inputs and outputs. ANN models are useful in cases where mathematical models of a process are not accurate enough, are not well defined, or are missing, e.g. for complex product shapes, new material alloys, new process requirements, micro-scale products, etc. Usually, after the design and modeling of the forming tools (die, punch, etc.) and before mass production, a set of trials takes place on the shop floor to finalize process and tool details, e.g. minimum tool radii, die/punch clearance, press speed, and process temperature, in relation to the material type, the sheet thickness and the quality achieved in the trials. Using data from the shop-floor trials and forming theory, ANN models can be trained and used to estimate the final process and tool details, hence supporting efficient set-up of processes and tools before mass production starts. The proposed ANN methodology and the respective software system are implemented within the EU H2020 project LoCoMaTech for the aluminium-based sheet forming process HFQ (solution Heat treatment, cold die Forming and Quenching).
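    A minimal from-scratch illustration of the idea: a tiny one-hidden-layer network trained by stochastic gradient descent on an assumed smooth response surface standing in for shop-floor trial records. All inputs, targets, and network sizes are hypothetical, and a real system would use a proper ML library and measured data:

    ```python
    import math
    import random

    random.seed(42)

    def train_mlp(data, n_hidden=4, lr=0.05, epochs=3000):
        """Train a one-hidden-layer tanh network on (inputs, target) pairs,
        all scaled to [0, 1], via per-sample gradient descent."""
        n_in = len(data[0][0])
        w1 = [[random.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_hidden)]
        b1 = [0.0] * n_hidden
        w2 = [random.uniform(-0.5, 0.5) for _ in range(n_hidden)]
        b2 = 0.0
        for _ in range(epochs):
            for x, y in data:
                h = [math.tanh(sum(wi * xi for wi, xi in zip(w1[j], x)) + b1[j])
                     for j in range(n_hidden)]
                err = sum(w2[j] * h[j] for j in range(n_hidden)) + b2 - y
                for j in range(n_hidden):
                    grad_h = err * w2[j] * (1.0 - h[j] ** 2)  # backprop through tanh
                    w2[j] -= lr * err * h[j]
                    for i in range(n_in):
                        w1[j][i] -= lr * grad_h * x[i]
                    b1[j] -= lr * grad_h
                b2 -= lr * err

        def predict(x):
            h = [math.tanh(sum(wi * xi for wi, xi in zip(w1[j], x)) + b1[j])
                 for j in range(n_hidden)]
            return sum(w2[j] * h[j] for j in range(n_hidden)) + b2
        return predict

    # Hypothetical trial records: (scaled sheet thickness, scaled die temperature)
    # -> scaled quality measure, drawn from an assumed smooth response surface
    trials = [((t, T), 0.3 * t + 0.5 * T * T)
              for t in (0.0, 0.25, 0.5, 0.75, 1.0)
              for T in (0.0, 0.25, 0.5, 0.75, 1.0)]
    predict = train_mlp(trials)
    print(round(predict((0.4, 0.6)), 2))
    ```

    The trained `predict` plays the role described in the abstract: interpolating between trial records to estimate settings before mass production.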

  13. A dual-process perspective on fluency-based aesthetics: the pleasure-interest model of aesthetic liking.

    PubMed

    Graf, Laura K M; Landwehr, Jan R

    2015-11-01

    In this article, we develop an account of how aesthetic preferences can be formed as a result of two hierarchical, fluency-based processes. Our model suggests that processing performed immediately upon encountering an aesthetic object is stimulus driven, and aesthetic preferences that accrue from this processing reflect aesthetic evaluations of pleasure or displeasure. When sufficient processing motivation is provided by a perceiver's need for cognitive enrichment and/or the stimulus' processing affordance, elaborate perceiver-driven processing can emerge, which gives rise to fluency-based aesthetic evaluations of interest, boredom, or confusion. Because the positive outcomes in our model are pleasure and interest, we call it the Pleasure-Interest Model of Aesthetic Liking (PIA Model). Theoretically, this model integrates a dual-process perspective and ideas from lay epistemology into processing fluency theory, and it provides a parsimonious framework to embed and unite a wealth of aesthetic phenomena, including contradictory preference patterns for easy versus difficult-to-process aesthetic stimuli. © 2015 by the Society for Personality and Social Psychology, Inc.

  14. Ecosystem function in complex mountain terrain: Combining models and long-term observations to advance process-based understanding

    NASA Astrophysics Data System (ADS)

    Wieder, William R.; Knowles, John F.; Blanken, Peter D.; Swenson, Sean C.; Suding, Katharine N.

    2017-04-01

    Abiotic factors structure plant community composition and ecosystem function across many different spatial scales. Often, such variation is considered at regional or global scales, but here we ask whether ecosystem-scale simulations can be used to better understand landscape-level variation that might be particularly important in complex terrain, such as high-elevation mountains. We performed ecosystem-scale simulations by using the Community Land Model (CLM) version 4.5 to better understand how the increased length of growing seasons may impact carbon, water, and energy fluxes in an alpine tundra landscape. The model was forced with meteorological data and validated with observations from the Niwot Ridge Long Term Ecological Research Program site. Our results demonstrate that CLM is capable of reproducing the observed carbon, water, and energy fluxes for discrete vegetation patches across this heterogeneous ecosystem. We subsequently accelerated snowmelt and increased spring and summer air temperatures in order to simulate potential effects of climate change in this region. We found that vegetation communities that were characterized by different snow accumulation dynamics showed divergent biogeochemical responses to a longer growing season. Contrary to expectations, wet meadow ecosystems showed the strongest decreases in plant productivity under extended summer scenarios because of disruptions in hydrologic connectivity. These findings illustrate how Earth system models such as CLM can be used to generate testable hypotheses about the shifting nature of energy, water, and nutrient limitations across space and through time in heterogeneous landscapes; these hypotheses may ultimately guide further experimental work and model development.

  15. A first packet processing subdomain cluster model based on SDN

    NASA Astrophysics Data System (ADS)

    Chen, Mingyong; Wu, Weimin

    2017-08-01

    To address performance bottlenecks in first-packet processing by controller clusters, as well as controller downtime, a model is proposed in which an SDN (Software Defined Network) controller assigns a priority to each device in the network. Each domain contains several network devices and a controller; the controller is responsible for managing the network equipment within the domain, and the switches deliver data based on the load of the controller. The experimental results show that the model can effectively mitigate the risk of single-point controller failure and relieve the first-packet processing performance bottleneck.

  16. Modeling the Footprint and Equivalent Radiance Transfer Path Length for Tower-Based Hemispherical Observations of Chlorophyll Fluorescence

    PubMed Central

    Liu, Xinjie; Liu, Liangyun; Hu, Jiaochan; Du, Shanshan

    2017-01-01

    The measurement of solar-induced chlorophyll fluorescence (SIF) is a new tool for estimating gross primary production (GPP). Continuous tower-based spectral observations together with flux measurements are an efficient way of linking the SIF to the GPP. Compared to conical observations, hemispherical observations made with a cosine-corrected foreoptic have a much larger field of view and can better match the footprint of the tower-based flux measurements. However, estimating the equivalent radiation transfer path length (ERTPL) for hemispherical observations is more complex than for conical observations, and this is a key problem that needs to be addressed before accurate retrieval of SIF can be made. In this paper, we first modeled the footprint of hemispherical spectral measurements and found that, under convective conditions with light winds, 90% of the total radiation came from a field of view of width 72°, which in turn covered 75.68% of the source area of the flux measurements. In contrast, conical spectral observations covered only 1.93% of the flux footprint. Secondly, using theoretical considerations, we modeled the ERTPL of hemispherical spectral observations made with a cosine-corrected foreoptic and found that the ERTPL was approximately equal to twice the sensor height above the canopy. Finally, the modeled ERTPL was evaluated using a simulated dataset. The ERTPL calculated using the simulated data was about 1.89 times the sensor's height above the target surface, which is quite close to the modeled ERTPL. Furthermore, the SIF retrieved from atmospherically corrected spectra using the modeled ERTPL fitted the reference values well, giving a relative root mean square error of 18.22%. These results show that the modeled ERTPL is reasonable and that this method is applicable to tower-based hemispherical observations of SIF. PMID:28509843

  17. Observations and modeling of methane flux in northern wetlands

    NASA Astrophysics Data System (ADS)

    Futakuchi, Y.; Ueyama, M.; Matsumoto, Y.; Yazaki, T.; Hirano, T.; Kominami, Y.; Harazono, Y.; Igarashi, Y.

    2016-12-01

    Methane (CH4) budgets in northern wetlands vary greatly, with high spatio-temporal heterogeneity. Owing to limited available data, however, it remains difficult to constrain the CH4 emission from northern wetlands. In this context, we continuously measured CH4 fluxes at two northern wetlands. Measured fluxes were used to constrain a new model that empirically partitions net CH4 fluxes into the processes of production, oxidation, and transport via ebullition, diffusion, and plants, based on an optimization technique. This study reveals the important processes behind the seasonal variations in CH4 emission using continuous observations and inverse model analysis. The measurements have been conducted at a Sphagnum-dominated cool temperate bog (BBY) since April 2015, using the open-path eddy covariance method, and at a sub-arctic forested bog on permafrost at the University of Alaska Fairbanks (UAF) since May 2016, using three automated chambers and a laser-based gas analyzer (FGGA-24r-EP, Los Gatos Research Inc., USA). At BBY, daily CH4 fluxes ranged from 1.9 nmol m-2 s-1 in early spring to 97.9 nmol m-2 s-1 in mid-summer; the growing-season total CH4 flux was 13 g m-2 yr-1 in 2015. In contrast, the CH4 flux at the UAF site was small (0.2 to 1.0 nmol m-2 s-1) and hardly increased after the start of the observations. This difference could be caused by differences in climate and soil conditions: mean air and soil temperatures and the presence of permafrost. At BBY, the seasonal variation of CH4 emission was mostly explained by soil temperature, suggesting that production was the important controlling process. In mid-summer, when soil temperature was high, decreases in atmospheric pressure and increases in vegetation greenness stimulated CH4 emission, probably through plant-mediated transport and bubble formation, suggesting that the transport processes were important. Based on preliminary results from the model optimization at the BBY site, CH4 fluxes were strongly

  18. Model Improvement by Assimilating Observations of Storm-Induced Coastal Change

    NASA Astrophysics Data System (ADS)

    Long, J. W.; Plant, N. G.; Sopkin, K.

    2010-12-01

    Discrete, large-scale meteorological events such as hurricanes can cause widespread destruction of coastal islands, habitats, and infrastructure. The effects can vary significantly along the coast depending on the configuration of the coastline, variable dune elevations, changes in geomorphology (sandy beach vs. marshland), and alongshore variations in storm hydrodynamic forcing. There are two primary methods of determining the changing state of a coastal system. Process-based numerical models provide highly resolved (in space and time) representations of the dominant dynamics in a physical system but must employ certain parameterizations due to computational limitations; their predictive capability may also suffer from a lack of reliable initial or boundary conditions. On the other hand, observations of coastal topography before and after the storm allow direct quantification of cumulative storm impacts, but these measurements suffer from instrument noise and a lack of the necessary temporal resolution. This research focuses on combining these two pieces of information to make more reliable forecasts of storm-induced coastal change. Of primary importance is the development of a data assimilation strategy that is efficient, applicable for use with highly nonlinear models, and able to quantify the remaining forecast uncertainty based on the reliability of each individual piece of information used in the assimilation process. We concentrate on an event time-scale and estimate/update unobserved model information (boundary conditions, free parameters, etc.) by assimilating direct observations of coastal change with those simulated by the model. The data assimilation can help estimate spatially varying quantities (e.g. friction coefficients) that are often modeled as homogeneous and identify processes inadequately characterized in the model.
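    One common assimilation scheme suited to highly nonlinear models is the ensemble Kalman filter. The sketch below shows a single perturbed-observation analysis step in which an observed quantity (a hypothetical storm-induced erosion) also updates a correlated, unobserved parameter (a friction coefficient), mirroring the strategy of estimating unobserved model information. All numbers are illustrative assumptions, not the study's data:

    ```python
    import random

    def enkf_update(ensemble, obs, obs_var, h):
        """One perturbed-observation EnKF analysis step for a scalar observation.
        ensemble: list of state vectors; h maps a state to the observed quantity."""
        n, dim = len(ensemble), len(ensemble[0])
        hx = [h(x) for x in ensemble]
        hx_mean = sum(hx) / n
        x_mean = [sum(x[i] for x in ensemble) / n for i in range(dim)]
        # Sample covariance of each state component with the predicted observation
        p_xh = [sum((x[i] - x_mean[i]) * (hxj - hx_mean)
                    for x, hxj in zip(ensemble, hx)) / (n - 1) for i in range(dim)]
        p_hh = sum((hxj - hx_mean) ** 2 for hxj in hx) / (n - 1)
        gain = [pxh / (p_hh + obs_var) for pxh in p_xh]  # Kalman gain
        analysis = []
        for x, hxj in zip(ensemble, hx):
            d = obs + random.gauss(0.0, obs_var ** 0.5) - hxj  # perturbed innovation
            analysis.append([xi + ki * d for xi, ki in zip(x, gain)])
        return analysis

    random.seed(7)
    # Hypothetical prior: state = [friction coefficient, storm erosion (m)], with
    # erosion correlated to friction so the unobserved parameter gets updated too
    prior = []
    for _ in range(200):
        f = random.gauss(0.02, 0.005)
        prior.append([f, 50.0 * f + random.gauss(0.0, 0.2)])

    post = enkf_update(prior, obs=1.5, obs_var=0.01, h=lambda x: x[1])
    post_f = sum(x[0] for x in post) / len(post)
    post_e = sum(x[1] for x in post) / len(post)
    print(round(post_f, 4), round(post_e, 2))
    ```

    Because erosion and friction covary in the prior ensemble, assimilating the erosion observation pulls the friction estimate upward as well, which is the mechanism by which direct observations of coastal change can constrain quantities the survey never measured.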

  19. Advancing coastal ocean modelling, analysis, and prediction for the US Integrated Ocean Observing System

    USGS Publications Warehouse

    Wilkin, John L.; Rosenfeld, Leslie; Allen, Arthur; Baltes, Rebecca; Baptista, Antonio; He, Ruoying; Hogan, Patrick; Kurapov, Alexander; Mehra, Avichal; Quintrell, Josie; Schwab, David; Signell, Richard; Smith, Jane

    2017-01-01

    This paper outlines strategies that would advance coastal ocean modelling, analysis and prediction as a complement to the observing and data management activities of the coastal components of the US Integrated Ocean Observing System (IOOS®) and the Global Ocean Observing System (GOOS). The views presented are the consensus of a group of US-based researchers with a cross-section of coastal oceanography and ocean modelling expertise and community representation drawn from Regional and US Federal partners in IOOS. Priorities for research and development are suggested that would enhance the value of IOOS observations through model-based synthesis, deliver better model-based information products, and assist the design, evaluation, and operation of the observing system itself. The proposed priorities are: model coupling, data assimilation, nearshore processes, cyberinfrastructure and model skill assessment, modelling for observing system design, evaluation and operation, ensemble prediction, and fast predictors. Approaches are suggested to accomplish substantial progress in a 3–8-year timeframe. In addition, the group proposes steps to promote collaboration between research and operations groups in Regional Associations, US Federal Agencies, and the international ocean research community in general that would foster coordination on scientific and technical issues, and strengthen federal–academic partnerships benefiting IOOS stakeholders and end users.

  20. Pre- and Post-Processing Tools to Create and Characterize Particle-Based Composite Model Structures

    DTIC Science & Technology

    2017-11-01

    ARL-TR-8213 ● NOV 2017 ● US Army Research Laboratory. Pre- and Post-Processing Tools to Create and Characterize Particle-Based Composite Model Structures.

  1. CATS - A process-based model for turbulent turbidite systems at the reservoir scale

    NASA Astrophysics Data System (ADS)

    Teles, Vanessa; Chauveau, Benoît; Joseph, Philippe; Weill, Pierre; Maktouf, Fakher

    2016-09-01

    The Cellular Automata for Turbidite systems (CATS) model is intended to simulate the fine architecture and facies distribution of turbidite reservoirs with a multi-event, process-based approach. The main processes of low-density turbulent turbidity flow are modeled: downslope sediment-laden flow, entrainment of ambient water, and erosion and deposition of several distinct lithologies. This numerical model, derived from Salles (2006) and Salles et al. (2007), proposes a new approach based on the Rouse concentration profile to represent the flow's capacity to carry the sediment load in suspension. In CATS, the flow distribution on a given topography is modeled with local rules between neighboring cells (cellular automata) based on potential and kinetic energy balance and diffusion concepts. Input parameters are the initial flow parameters and a 3D topography at depositional time. An overview of CATS capabilities in different contexts is presented and discussed.
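    The Rouse profile invoked above is a standard result; a minimal sketch (with hypothetical flow and grain parameters, not CATS's actual implementation) shows how finer particles, with smaller settling velocity, remain more uniformly suspended:

    ```python
    def rouse_profile(z, h, a, c_a, w_s, u_star, kappa=0.41):
        """Standard Rouse suspended-sediment concentration profile:
        C(z)/C_a = [ ((h - z)/z) * (a/(h - a)) ] ** P, with Rouse number
        P = w_s / (kappa * u_star). z is height above the bed, h the flow
        depth, a a reference height where the concentration is c_a, w_s
        the particle settling velocity, u_star the shear velocity."""
        p = w_s / (kappa * u_star)
        return c_a * (((h - z) / z) * (a / (h - a))) ** p

    # Hypothetical flow: depth 1 m, reference concentration 1.0 at z = 0.05 m;
    # compare a fine grain (w_s = 0.01 m/s) with a coarse one (w_s = 0.05 m/s)
    c_fine = rouse_profile(0.5, 1.0, 0.05, 1.0, w_s=0.01, u_star=0.1)
    c_coarse = rouse_profile(0.5, 1.0, 0.05, 1.0, w_s=0.05, u_star=0.1)
    print(round(c_fine, 2), round(c_coarse, 3))
    ```

    The strong drop in mid-depth concentration for the coarser grain is what a capacity check built on this profile would translate into preferential deposition of coarse lithologies.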

  2. Observation- and model-based estimates of particulate dry nitrogen deposition to the oceans

    NASA Astrophysics Data System (ADS)

    Baker, Alex R.; Kanakidou, Maria; Altieri, Katye E.; Daskalakis, Nikos; Okin, Gregory S.; Myriokefalitakis, Stelios; Dentener, Frank; Uematsu, Mitsuo; Sarin, Manmohan M.; Duce, Robert A.; Galloway, James N.; Keene, William C.; Singh, Arvind; Zamora, Lauren; Lamarque, Jean-Francois; Hsu, Shih-Chieh; Rohekar, Shital S.; Prospero, Joseph M.

    2017-07-01

    Anthropogenic nitrogen (N) emissions to the atmosphere have significantly increased the deposition of nitrate (NO3-) and ammonium (NH4+) to the surface waters of the open ocean, with potential impacts on marine productivity and the global carbon cycle. Global-scale understanding of the impacts of N deposition to the oceans relies on our ability to produce and validate models of nitrogen emission, atmospheric chemistry, transport and deposition. In this work, ~2900 observations of aerosol NO3- and NH4+ concentrations, acquired from sampling aboard ships in the period 1995-2012, are used to assess the performance of modelled N concentration and deposition fields over the remote ocean. Three ocean regions (the eastern tropical North Atlantic, the northern Indian Ocean and the northwest Pacific) were selected, in which the density and distribution of observational data were considered sufficient to provide effective comparison to model products. All of these study regions are affected by transport and deposition of mineral dust, which alters the deposition of N due to uptake of nitrogen oxides (NOx) on mineral surfaces. Assessment of the impacts of atmospheric N deposition on the ocean requires atmospheric chemical transport models to report deposition fluxes; however, these fluxes cannot be measured over the ocean. Modelling studies such as the Atmospheric Chemistry and Climate Model Intercomparison Project (ACCMIP), which only report deposition flux, are therefore very difficult to validate for dry deposition. Here, the available observational data were averaged over a 5° × 5° grid and compared to ACCMIP dry deposition fluxes (ModDep) of oxidised N (NOy) and reduced N (NHx) and to the following parameters from the Tracer Model 4 of the Environmental Chemical Processes Laboratory (TM4): ModDep for NOy, NHx and particulate NO3- and NH4+, and surface-level particulate NO3- and NH4+ concentrations. As a model ensemble, ACCMIP can be expected to be more robust than

  3. Analysis of the hydrological response of a distributed physically-based model using post-assimilation (EnKF) diagnostics of streamflow and in situ soil moisture observations

    NASA Astrophysics Data System (ADS)

    Trudel, Mélanie; Leconte, Robert; Paniconi, Claudio

    2014-06-01

    Data assimilation techniques not only enhance model simulations and forecasts, they also provide a diagnostic of both the model and the observations used in the assimilation process. In this research, an ensemble Kalman filter was used to assimilate streamflow observations at a basin outlet and at interior locations, as well as soil moisture at two different depths (15 and 45 cm). The simulation model is the distributed physically based hydrological model CATHY (CATchment HYdrology), and the study site is the Des Anglais watershed, a 690 km2 river basin located in southern Quebec, Canada. Use of Latin hypercube sampling instead of a conventional Monte Carlo method to generate the ensemble reduced the size of the ensemble, and therefore the calculation time. Different post-assimilation diagnostics, based on innovations (observation minus background), analysis residuals (observation minus analysis), and analysis increments (analysis minus background), were used to evaluate assimilation optimality. An important issue in data assimilation is the estimation of error covariance matrices; these diagnostics were also used in a calibration exercise to determine the standard deviations of model parameters, forcing data, and observations that led to optimal assimilations. The analysis of innovations showed a lag between the model forecast and the observations during rainfall events; assimilation of streamflow observations corrected this discrepancy. Assimilation of outlet streamflow observations improved the Nash-Sutcliffe efficiencies (NSE) between the one-day model forecast and the observations at both the outlet and interior point locations, owing to the structure of the state vector used. However, assimilation of streamflow observations systematically increased the simulated soil moisture values.
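    The three post-assimilation diagnostics can be computed directly from matched series of observations, background forecasts, and analyses. The streamflow values below are hypothetical, chosen so the background lags an observed rise the way the abstract describes:

    ```python
    def assimilation_diagnostics(obs, background, analysis):
        """Post-assimilation diagnostics:
        innovations  d = y - h(x_b)   (observation minus background)
        residuals    r = y - h(x_a)   (observation minus analysis)
        increments   i = x_a - x_b    (analysis minus background)
        A persistent positive innovation mean indicates the model forecast
        lags or underpredicts the observations, e.g. during rainfall events."""
        n = len(obs)
        innov = [y - b for y, b in zip(obs, background)]
        resid = [y - a for y, a in zip(obs, analysis)]
        incr = [a - b for a, b in zip(analysis, background)]
        return {"innovation_mean": sum(innov) / n,
                "residual_mean": sum(resid) / n,
                "increment_mean": sum(incr) / n}

    # Hypothetical streamflow (m3/s): the forecast lags the observed storm peak
    obs        = [10.0, 30.0, 55.0, 40.0, 20.0]
    background = [10.0, 18.0, 40.0, 45.0, 25.0]
    analysis   = [10.0, 27.0, 52.0, 41.0, 21.0]
    d = assimilation_diagnostics(obs, background, analysis)
    print(round(d["innovation_mean"], 1), round(d["residual_mean"], 1))
    ```

    Residuals much smaller than innovations, as here, indicate the analysis has drawn usefully toward the observations; comparing their statistics against the assumed error covariances is the basis of the optimality checks used in the calibration exercise.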

  4. Data-based mechanistic modeling of dissolved organic carbon load through storms using continuous 15-minute resolution observations within UK upland watersheds

    NASA Astrophysics Data System (ADS)

    Jones, T.; Chappell, N. A.

    2013-12-01

    Few watershed modeling studies have addressed DOC dynamics through storm hydrographs (notable exceptions include Boyer et al., 1997 Hydrol Process; Jutras et al., 2011 Ecol Model; Xu et al., 2012 Water Resour Res). In part this has been a consequence of an incomplete understanding of the biogeochemical processes leading to DOC export to streams (Neff & Asner, 2001, Ecosystems) & an insufficient frequency of DOC monitoring to capture sometimes complex time-varying relationships between DOC & storm hydrographs (Kirchner et al., 2004, Hydrol Process). We present the results of a new & ongoing UK study that integrates two components - 1/ New observations of DOC concentrations (& derived load) continuously monitored at 15 minute intervals through multiple seasons for replicated watersheds; & 2/ A dynamic modeling technique that is able to quantify storage-decay effects, plus hysteretic, nonlinear, lagged & non-stationary relationships between DOC & controlling variables (including rainfall, streamflow, temperature & specific biogeochemical variables e.g., pH, nitrate). DOC concentration is being monitored continuously using the latest generation of UV spectrophotometers (i.e. S::CAN spectro::lysers) with in situ calibrations to laboratory analyzed DOC. The controlling variables are recorded simultaneously at the same stream stations. The watersheds selected for study are among the most intensively studied basins in the UK uplands, namely the Plynlimon & Llyn Brianne experimental basins. All contain areas of organic soils, with three having improved grasslands & three conifer afforested. The dynamic response characteristics (DRCs) that describe detailed DOC behaviour through sequences of storms are simulated using the latest identification routines for continuous time transfer function (CT-TF) models within the Matlab-based CAPTAIN toolbox (some incorporating nonlinear components). To our knowledge this is the first application of CT-TFs to modelling DOC processes
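
As a rough illustration of the kind of dynamic response a continuous-time transfer function captures, here is a discrete-time first-order lag between a storm-driven input and a DOC response. The gain and residence time are invented stand-ins, not CAPTAIN-identified estimates:

```python
# First-order transfer function stand-in: DOC responds to a streamflow step
# with an assumed gain and residence time, sampled at 15-minute intervals.
import numpy as np

dt = 0.25    # hours (15-minute observations)
tau = 12.0   # residence time in hours (assumed)
gain = 0.8   # steady-state gain (assumed)

# Step "storm": streamflow jumps from 1 to 5 at sample 50
streamflow = np.concatenate([np.ones(50), 5.0 * np.ones(150)])
doc = np.zeros_like(streamflow)
for t in range(1, streamflow.size):
    # Explicit Euler step of d(doc)/dt = (gain * u - doc) / tau
    doc[t] = doc[t - 1] + dt * (gain * streamflow[t] - doc[t - 1]) / tau
```

After the step, DOC relaxes toward the new steady state gain * 5 = 4.0 on the timescale tau; nonlinear, hysteretic, or non-stationary variants of this structure are what the CT-TF identification routines estimate from data.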

  5. Diagnosis by integrating model-based reasoning with knowledge-based reasoning

    NASA Technical Reports Server (NTRS)

    Bylander, Tom

    1988-01-01

    Our research investigates how observations can be categorized by integrating a qualitative physical model with experiential knowledge. Our domain is diagnosis of pathologic gait in humans, in which the observations are the gait motions, muscle activity during gait, and physical exam data, and the diagnostic hypotheses are the potential muscle weaknesses, muscle mistimings, and joint restrictions. Patients with underlying neurological disorders typically have several malfunctions. Among the problems that need to be faced are: the ambiguity of the observations, the ambiguity of the qualitative physical model, correspondence of the observations and hypotheses to the qualitative physical model, the inherent uncertainty of experiential knowledge, and the combinatorics involved in forming composite hypotheses. Our system divides the work so that the knowledge-based reasoning suggests which hypotheses appear more likely than others, the qualitative physical model is used to determine which hypotheses explain which observations, and another process combines these functionalities to construct a composite hypothesis based on explanatory power and plausibility. We speculate that the reasoning architecture of our system is generally applicable to complex domains in which a less-than-perfect physical model and less-than-perfect experiential knowledge need to be combined to perform diagnosis.
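
The division of labor described above can be caricatured in a few lines: experiential plausibility scores rank hypotheses, model-derived coverage says which observations each hypothesis explains, and a greedy pass assembles a composite hypothesis. All names and scores below are invented for illustration:

```python
# Greedy composite-hypothesis assembly: coverage (from a physical model)
# first, plausibility (from experiential knowledge) as tie-breaker.
observations = {"drop_foot", "knee_hyperextension", "reduced_push_off"}

# hypothesis -> (plausibility, observations it explains per the model)
hypotheses = {
    "weak_tibialis_anterior": (0.9, {"drop_foot"}),
    "weak_gastrocnemius": (0.7, {"reduced_push_off", "knee_hyperextension"}),
    "quadriceps_mistiming": (0.4, {"knee_hyperextension"}),
}

composite, unexplained = [], set(observations)
while unexplained:
    # Prefer hypotheses explaining more remaining observations,
    # breaking ties by plausibility.
    name, (plaus, covers) = max(
        hypotheses.items(),
        key=lambda kv: (len(kv[1][1] & unexplained), kv[1][0]),
    )
    if not covers & unexplained:
        break  # nothing left explains the remaining observations
    composite.append(name)
    unexplained -= covers
```

The actual system reasons over far richer structures (ambiguous observations, composite combinatorics), but the explanatory-power-plus-plausibility scoring has this flavor.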

  6. Automatic Earth observation data service based on reusable geo-processing workflow

    NASA Astrophysics Data System (ADS)

    Chen, Nengcheng; Di, Liping; Gong, Jianya; Yu, Genong; Min, Min

    2008-12-01

    A common Sensor Web data service framework for Geo-Processing Workflow (GPW) is presented as part of the NASA Sensor Web project. This framework consists of a data service node, a data processing node, a data presentation node, a Catalogue Service node, and a BPEL engine. An abstract model designer is used to design the top-level GPW model, a model instantiation service is used to generate the concrete BPEL, and a BPEL execution engine is adopted to run it. The framework is used to generate several kinds of data: raw data from live sensors, coverage or feature data, geospatial products, or sensor maps. A scenario for an EO-1 Sensor Web data service for fire classification is used to test the feasibility of the proposed framework. The execution time and influences of the service framework are evaluated. The experiments show that this framework can improve the quality of services for sensor data retrieval and processing.

  7. Learning-Testing Process in Classroom: An Empirical Simulation Model

    ERIC Educational Resources Information Center

    Buda, Rodolphe

    2009-01-01

    This paper presents an empirical micro-simulation model of the teaching and the testing process in the classroom (Programs and sample data are available--the actual names of pupils have been hidden). It is a non-econometric micro-simulation model describing informational behaviors of the pupils, based on the observation of the pupils'…

  8. Chemistry-Transport Modeling of the Satellite Observed Distribution of Tropical Tropospheric Ozone

    NASA Technical Reports Server (NTRS)

    Peters, Wouter; Krol, Maarten; Dentener, Frank; Thompson, Anne M.; Lelieveld, Jos; Bhartia, P. K. (Technical Monitor)

    2002-01-01

    We have compared the 14-year record of satellite-derived tropical tropospheric ozone columns (TTOC) from the NIMBUS-7 Total Ozone Mapping Spectrometer (TOMS) to TTOC calculated by a chemistry-transport model (CTM). An objective measure of error, based on the zonal distribution of TTOC in the tropics, is applied to perform this comparison systematically. In addition, the sensitivity of the model to several key processes in the tropics is quantified to select directions for future improvements. The comparisons indicate a widespread, systematic (20%) discrepancy over the tropical Atlantic Ocean, which maximizes during austral spring. Although independent evidence from ozonesondes shows that some of the disagreement is due to a satellite overestimate of TTOC, the Atlantic mismatch is largely due to a misrepresentation of seasonally recurring processes in the model. Only minor differences between the model and observations over the Pacific occur, mostly due to interannual variability not captured by the model. Although chemical processes determine the TTOC extent, dynamical processes dominate the TTOC distribution, as the use of actual meteorology pertaining to the year of observations always leads to a better agreement with TTOC observations than using a random year or a climatology. The modeled TTOC is remarkably insensitive to many model parameters due to efficient feedbacks in the ozone budget. Nevertheless, the simulations would profit from an improved biomass burning calendar, as well as from an increase in NOX abundances in free tropospheric biomass burning plumes. The model showed the largest response to lightning NOX emissions, but systematic improvements could not be found. The use of multi-year satellite-derived tropospheric data to systematically test and improve a CTM is a promising new addition to existing methods of model validation, and is a first step to integrating tropospheric satellite observations into global ozone modeling studies. Conversely

  9. Business Process Design Method Based on Business Event Model for Enterprise Information System Integration

    NASA Astrophysics Data System (ADS)

    Kobayashi, Takashi; Komoda, Norihisa

    The traditional business process design methods, of which the use case is the most typical, have no useful framework for designing the activity sequence. Therefore, the design efficiency and quality vary widely according to the designer’s experience and skill. In this paper, to solve this problem, we propose business events and their state transition model (a basic business event model) based on the language/action perspective, a result from the cognitive science domain. In the business process design, using this model, we decide event occurrence conditions so that the events synchronize with one another. We also propose a design pattern for deciding the event occurrence condition (a business event improvement strategy). Lastly, we apply the business process design method based on the business event model and the business event improvement strategy to the credit card issue process and evaluate its effect.

  10. Model-based evaluation of two BNR processes--UCT and A2N.

    PubMed

    Hao, X; Van Loosdrecht, M C; Meijer, S C; Qian, Y

    2001-08-01

    The activity of denitrifying P-accumulating bacteria (DPB) has been verified to exist in most WWTPs with biological nutrient removal (BNR). The modified UCT process has a high content of DPB. A new BNR process with a two-sludge system, named A2N, was especially developed to exploit denitrifying dephosphatation. With identical inflow and effluent standards, an existing full-scale UCT-type WWTP and a designed A2N process were evaluated by simulation. The model used is based on the Delft metabolic model for bio-P removal and the ASM2d model for COD and N removal. Both processes accommodate denitrifying dephosphatation, but the A2N process has a more stable performance in N removal. Although excess sludge is increased by 6%, the A2N process leads to savings of 35, 85 and 30% in aeration energy, mixed liquor internal recirculation and land occupation respectively, as compared to the UCT process. Low temperature has a negative effect on the growth of poly-P bacteria, which is especially apparent in the A2N process.

  11. Evaluate transport processes in MERRA driven chemical transport models using updated 222Rn emission inventories and global observations

    NASA Astrophysics Data System (ADS)

    Zhang, B.; Liu, H.; Crawford, J. H.; Fairlie, T. D.; Chen, G.; Chambers, S. D.; Kang, C. H.; Williams, A. G.; Zhang, K.; Considine, D. B.; Payer Sulprizio, M.; Yantosca, R.

    2015-12-01

    Convective and synoptic processes play a major role in determining the transport and distribution of trace gases and aerosols in the troposphere. The representation of these processes in global models (at ~100-1000 km horizontal resolution) is challenging, because convection is a sub-grid process and needs to be parameterized, while synoptic processes are close to the grid scale. Depending on the parameterization schemes used in climate models, the role of convection in transporting trace gases and aerosols may vary from model to model. 222Rn is a chemically inert and radioactive gas constantly emitted from soil and has a half-life (3.8 days) comparable to the synoptic timescale, which makes it an effective tracer for convective and synoptic transport. In this study, we evaluate the convective and synoptic transport in two chemical transport models (GMI and GEOS-Chem), both driven by NASA's MERRA reanalysis. Considering the uncertainties in 222Rn emissions, we incorporate two more recent scenarios with regionally varying 222Rn emissions into GEOS-Chem/MERRA and compare the simulation results with those using the relatively uniform 222Rn emissions in the standard model. We evaluate the global distribution and seasonality of 222Rn concentrations simulated by the two models against an extended collection of 222Rn observations from the 1970s to the 2010s. The intercomparison will improve our understanding of the spatial variability in global 222Rn emissions, including the suspected excessive 222Rn emissions in East Asia, and provide useful feedback on 222Rn emission models. We will assess 222Rn vertical distributions at different latitudes in the models using observations at surface sites and in the upper troposphere and lower stratosphere. Results will be compared with previous models driven by other meteorological fields (e.g., fvGCM and GEOS4). Since the decay of 222Rn is the source of 210Pb, a useful radionuclide tracer attached to submicron aerosols, improved
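
The suitability of 222Rn as a synoptic-scale tracer follows directly from its half-life; a substantial, but not overwhelming, fraction decays over a typical synoptic event. A minimal decay calculation (plain radioactive-decay arithmetic, independent of either model):

```python
# Fraction of 222Rn remaining after a week of transport, from its 3.8-day half-life.
import math

half_life_days = 3.8
k = math.log(2) / half_life_days          # decay constant, per day
fraction_remaining = math.exp(-k * 7.0)   # after one week (~synoptic timescale)
```

Roughly a quarter of the emitted radon survives a week of transport, so simulated concentrations aloft remain sensitive to how quickly convection and synoptic lifting move boundary-layer air upward.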

  12. Observational and Model Studies of Large-Scale Mixing Processes in the Stratosphere

    NASA Technical Reports Server (NTRS)

    Bowman, Kenneth P.

    1997-01-01

    The following is the final technical report for grant NAGW-3442, 'Observational and Model Studies of Large-Scale Mixing Processes in the Stratosphere'. Research efforts in the first year concentrated on transport and mixing processes in the polar vortices. Three papers on mixing in the Antarctic were published. The first was a numerical modeling study of wavebreaking and mixing and their relationship to the period of observed stratospheric waves (Bowman). The second paper presented evidence from TOMS for wavebreaking in the Antarctic (Bowman and Mangus 1993). The third paper used Lagrangian trajectory calculations from analyzed winds to show that there is very little transport into the Antarctic polar vortex prior to the vortex breakdown (Bowman). Mixing is significantly greater at lower levels. This research helped to confirm theoretical arguments for vortex isolation and data from the Antarctic field experiments that were interpreted as indicating isolation. A Ph.D. student, Steve Dahlberg, used the trajectory approach to investigate mixing and transport in the Arctic. While the Arctic vortex is much more disturbed than the Antarctic, there still appears to be relatively little transport across the vortex boundary at 450 K prior to the vortex breakdown. The primary reason for the absence of an ozone hole in the Arctic is the earlier warming and breakdown of the vortex compared to the Antarctic, not replenishment of ozone by greater transport. Two papers describing these results have appeared (Dahlberg and Bowman; Dahlberg and Bowman). Steve Dahlberg completed his Ph.D. thesis (Dahlberg and Bowman) and is now teaching in the Physics Department at Concordia College. We also prepared an analysis of the QBO in SBUV ozone data (Hollandsworth et al.). A numerical study in collaboration with Dr. Ping Chen investigated mixing by barotropic instability, which is the probable origin of the 4-day wave in the upper stratosphere (Bowman and Chen). The important result from

  13. Interactive Computing and Processing of NASA Land Surface Observations Using Google Earth Engine

    NASA Technical Reports Server (NTRS)

    Molthan, Andrew; Burks, Jason; Bell, Jordan

    2016-01-01

    Google's Earth Engine offers a "big data" approach to processing large volumes of NASA and other remote sensing products (https://earthengine.google.com/). Interfaces include a JavaScript or Python-based API, useful for accessing and processing long periods of record for Landsat and MODIS observations. Other data sets are frequently added, including weather and climate model data sets. Demonstrations here focus on exploratory efforts to perform land surface change detection related to severe weather and other disaster events.

  14. Identity in agent-based models : modeling dynamic multiscale social processes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ozik, J.; Sallach, D. L.; Macal, C. M.

    Identity-related issues play central roles in many current events, including those involving factional politics, sectarianism, and tribal conflicts. Two popular models from the computational-social-science (CSS) literature - the Threat Anticipation Program and SharedID models - incorporate notions of identity (individual and collective) and processes of identity formation. A multiscale conceptual framework that extends some ideas presented in these models and draws other capabilities from the broader CSS literature is useful in modeling the formation of political identities. The dynamic, multiscale processes that constitute and transform social identities can be mapped to expressive structures of the framework

  15. Results from the VALUE perfect predictor experiment: process-based evaluation

    NASA Astrophysics Data System (ADS)

    Maraun, Douglas; Soares, Pedro; Hertig, Elke; Brands, Swen; Huth, Radan; Cardoso, Rita; Kotlarski, Sven; Casado, Maria; Pongracz, Rita; Bartholy, Judit

    2016-04-01

    Until recently, the evaluation of downscaled climate model simulations has typically been limited to surface climatologies, including long-term means, spatial variability and extremes. But these aspects are often, at least partly, tuned in regional climate models to match observed climate. The tuning issue is of course particularly relevant for bias-corrected regional climate models. In general, a good performance of a model for these aspects in present climate therefore does not imply a good performance in simulating climate change. It is now widely accepted that, to increase our confidence in climate change simulations, it is necessary to evaluate how climate models simulate relevant underlying processes. In other words, it is important to assess whether downscaling does the right thing for the right reason. Therefore, VALUE has carried out a broad process-based evaluation study based on its perfect predictor experiment simulations: the downscaling methods are driven by ERA-Interim data over the period 1979-2008, reference observations are given by a network of 85 meteorological stations covering all European climates. More than 30 methods participated in the evaluation. In order to compare statistical and dynamical methods, only variables provided by both types of approaches could be considered. This limited the analysis to conditioning local surface variables on variables from driving processes that are simulated by ERA-Interim. We considered the following types of processes: at the continental scale, we evaluated the performance of downscaling methods for positive and negative North Atlantic Oscillation, Atlantic ridge and blocking situations. At synoptic scales, we considered Lamb weather types for selected European regions such as Scandinavia, the United Kingdom, the Iberian Peninsula or the Alps. At regional scales we considered phenomena such as the Mistral, the Bora or the Iberian coastal jet. 
Such process-based evaluation helps to attribute biases in surface

  16. Transforming Collaborative Process Models into Interface Process Models by Applying an MDA Approach

    NASA Astrophysics Data System (ADS)

    Lazarte, Ivanna M.; Chiotti, Omar; Villarreal, Pablo D.

    Collaborative business models among enterprises require defining collaborative business processes. Enterprises implement B2B collaborations to execute these processes. In B2B collaborations the integration and interoperability of processes and systems of the enterprises are required to support the execution of collaborative processes. From a collaborative process model, which describes the global view of the enterprise interactions, each enterprise must define the interface process that represents the role it performs in the collaborative process in order to implement the process in a Business Process Management System. Hence, in this work we propose a method for the automatic generation of the interface process model of each enterprise from a collaborative process model. This method is based on a Model-Driven Architecture to transform collaborative process models into interface process models. By applying this method, interface processes are guaranteed to be interoperable and defined according to a collaborative process.
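
The projection step of the proposed method can be sketched minimally: each interaction in the collaborative model maps to a send or receive activity in one enterprise's interface process. The model elements and the `interface_process` helper below are invented stand-ins for the MDA transformation rules:

```python
# Project a global collaborative process onto one enterprise's interface process.
# All model content is invented for illustration.
collaborative_process = [
    {"from": "Buyer", "to": "Supplier", "message": "PurchaseOrder"},
    {"from": "Supplier", "to": "Buyer", "message": "OrderConfirmation"},
    {"from": "Supplier", "to": "Buyer", "message": "Invoice"},
]

def interface_process(model, enterprise):
    """Derive the role an enterprise plays: send what it emits, receive the rest."""
    activities = []
    for step in model:
        if step["from"] == enterprise:
            activities.append(("send", step["message"]))
        elif step["to"] == enterprise:
            activities.append(("receive", step["message"]))
    return activities
```

Because both interface processes are derived from the same global model, every send in one is matched by a receive in the other, which is the interoperability guarantee the method aims for.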

  17. An Observation-Driven Agent-Based Modeling and Analysis Framework for C. elegans Embryogenesis.

    PubMed

    Wang, Zi; Ramsey, Benjamin J; Wang, Dali; Wong, Kwai; Li, Husheng; Wang, Eric; Bao, Zhirong

    2016-01-01

    With cutting-edge live microscopy and image analysis, biologists can now systematically track individual cells in complex tissues and quantify cellular behavior over extended time windows. Computational approaches that utilize the systematic and quantitative data are needed to understand how cells interact in vivo to give rise to the different cell types and 3D morphology of tissues. An agent-based, minimum descriptive modeling and analysis framework is presented in this paper to study C. elegans embryogenesis. The framework is designed to incorporate the large amounts of experimental observations on cellular behavior and reserve data structures/interfaces that allow regulatory mechanisms to be added as more insights are gained. Observed cellular behaviors are organized into lineage identity, timing and direction of cell division, and path of cell movement. The framework also includes global parameters such as the eggshell and a clock. Division and movement behaviors are driven by statistical models of the observations. Data structures/interfaces are reserved for gene list, cell-cell interaction, cell fate and landscape, and other global parameters until the descriptive model is replaced by a regulatory mechanism. This approach provides a framework to handle the ongoing experiments of single-cell analysis of complex tissues where mechanistic insights lag data collection and need to be validated on complex observations.

  18. Weighting climate model projections using observational constraints.

    PubMed

    Gillett, Nathan P

    2015-11-13

    Projected climate change integrates the net response to multiple climate feedbacks. Whereas existing long-term climate change projections are typically based on unweighted individual climate model simulations, as observed climate change intensifies it is increasingly becoming possible to constrain the net response to feedbacks and hence projected warming directly from observed climate change. One approach scales simulated future warming based on a fit to observations over the historical period, but this approach is only accurate for near-term projections and for scenarios of continuously increasing radiative forcing. For this reason, the recent Fifth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC AR5) included such observationally constrained projections in its assessment of warming to 2035, but used raw model projections of longer term warming to 2100. Here a simple approach to weighting model projections based on an observational constraint is proposed which does not assume a linear relationship between past and future changes. This approach is used to weight model projections of warming in 2081-2100 relative to 1986-2005 under the Representative Concentration Pathway 4.5 forcing scenario, based on an observationally constrained estimate of the Transient Climate Response derived from a detection and attribution analysis. The resulting observationally constrained 5-95% warming range of 0.8-2.5 K is somewhat lower than the unweighted range of 1.1-2.6 K reported in the IPCC AR5. © 2015 The Authors.
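
A numerical caricature of the proposed weighting, assuming (hypothetically) a Gaussian observational constraint on the Transient Climate Response; all numbers below are invented and do not reproduce the paper's 0.8-2.5 K result:

```python
# Weight model warming projections by the likelihood of each model's TCR
# under an observationally constrained TCR estimate. Values are illustrative.
import numpy as np

model_tcr = np.array([1.2, 1.5, 1.8, 2.1, 2.4])      # K, per model
model_warming = np.array([1.1, 1.5, 1.9, 2.3, 2.6])  # K, 2081-2100 vs 1986-2005

obs_tcr, obs_tcr_sd = 1.6, 0.3  # hypothetical constrained estimate and its spread

# Gaussian likelihood weights, normalized to sum to one
w = np.exp(-0.5 * ((model_tcr - obs_tcr) / obs_tcr_sd) ** 2)
w /= w.sum()

weighted_mean = np.sum(w * model_warming)
```

Weighted quantiles of `model_warming` (cumulative weights over the sorted projections) would give the constrained 5-95% range analog; note that, unlike simple scaling, nothing here assumes a linear relation between past and future change.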

  19. High-resolution CO2 and CH4 flux inverse modeling combining GOSAT, OCO-2 and ground-based observations

    NASA Astrophysics Data System (ADS)

    Maksyutov, S. S.; Oda, T.; Saito, M.; Ito, A.; Janardanan Achari, R.; Sasakawa, M.; Machida, T.; Kaiser, J. W.; Belikov, D.; Valsala, V.; O'Dell, C.; Yoshida, Y.; Matsunaga, T.

    2017-12-01

    We develop a high-resolution CO2 and CH4 flux inversion system that is based on a Lagrangian-Eulerian coupled tracer transport model, and is designed to estimate surface fluxes from atmospheric CO2 and CH4 data observed by the GOSAT and OCO-2 satellites and by global in-situ networks, including observations in Siberia. We use the Lagrangian particle dispersion model (LPDM) FLEXPART to estimate the surface flux footprints for each observation at 0.1-degree spatial resolution for three days of transport. The LPDM is coupled to a global atmospheric tracer transport model (NIES-TM). The adjoint of the coupled transport model is used in an iterative optimization procedure based on either a quasi-Newton algorithm or singular value decomposition. Combining surface and satellite data for use in inversion requires correcting for biases present in satellite observation data, which is done in a two-step procedure. As a first step, bi-weekly corrections to prior flux fields are estimated for the period of 2009 to 2015 from in-situ CO2 and CH4 data from the global observation network, included in the Obspack-GVP (for CO2), WDCGG (CH4) and JR-STATION datasets. High-resolution prior fluxes were prepared for anthropogenic emissions (ODIAC and EDGAR), biomass burning (GFAS), and the terrestrial biosphere. The terrestrial biosphere flux was constructed using a vegetation mosaic map and separate simulations of CO2 fluxes by the VISIT model for each vegetation type present in a grid. The prior flux uncertainty for land is scaled proportionally to the monthly mean GPP from the MODIS product for CO2 and to EDGAR emissions for CH4. Use of the high-resolution transport leads to improved representation of the anthropogenic plumes often observed at continental continuous observation sites. OCO-2 observations are aggregated to 1-second averages, to match the 0.1-degree resolution of the transport model. 
Before including satellite observations in the inversion, the monthly varying latitude-dependent bias is

  20. Tree-based flood damage modeling of companies: Damage processes and model performance

    NASA Astrophysics Data System (ADS)

    Sieg, Tobias; Vogel, Kristin; Merz, Bruno; Kreibich, Heidi

    2017-07-01

    Reliable flood risk analyses, including the estimation of damage, are an important prerequisite for efficient risk management. However, not much is known about flood damage processes affecting companies. Thus, we conduct a flood damage assessment of companies in Germany with regard to two aspects. First, we identify relevant damage-influencing variables. Second, we assess the prediction performance of the developed damage models with respect to the gains from using an increasing amount of training data and from a sector-specific evaluation of the data. Random forests are trained with data from two post-event surveys after flood events occurring in the years 2002 and 2013. For a sector-specific consideration, the data set is split into four subsets corresponding to the manufacturing, commercial, financial, and service sectors. Further, separate models are derived for three different company assets: buildings, equipment, and goods and stock. Calculated variable importance values reveal different variable sets relevant for the damage estimation, indicating significant differences in the damage process for various company sectors and assets. With an increasing number of data used to build the models, prediction errors decrease. Yet the effect is rather small and seems to saturate for a data set size of several hundred observations. In contrast, the prediction improvement achieved by a sector-specific consideration is more distinct, especially for damage to equipment and goods and stock. Consequently, sector-specific data acquisition and a consideration of sector-specific company characteristics in future flood damage assessments is expected to improve the model performance more than a mere increase in data.
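
The gain from sector-specific modeling can be illustrated with synthetic data, using a trivial per-group mean predictor in place of the paper's random forests; all numbers are invented:

```python
# Pooled vs sector-specific prediction on synthetic company damage data.
# A per-sector mean predictor stands in for a trained model.
import numpy as np

rng = np.random.default_rng(1)
sectors = np.repeat(["manufacturing", "commercial", "financial", "service"], 100)
base = {"manufacturing": 50.0, "commercial": 30.0, "financial": 10.0, "service": 20.0}
damage = np.array([base[s] for s in sectors]) + rng.normal(0, 5, size=sectors.size)

# One pooled predictor for all companies
pooled_pred = damage.mean()
pooled_mae = np.abs(damage - pooled_pred).mean()

# One predictor per sector
sector_mae = np.mean([
    np.abs(damage[sectors == s] - damage[sectors == s].mean()).mean()
    for s in base
])
```

When sectors genuinely differ in their damage processes, as the synthetic sector means do here, the sector-specific error is much lower; with random forests the same effect appears through sector-dependent variable importances.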

  1. A specialized face-processing model inspired by the organization of monkey face patches explains several face-specific phenomena observed in humans.

    PubMed

    Farzmahdi, Amirhossein; Rajaei, Karim; Ghodrati, Masoud; Ebrahimpour, Reza; Khaligh-Razavi, Seyed-Mahdi

    2016-04-26

    Converging reports indicate that face images are processed through specialized neural networks in the brain, i.e. face patches in monkeys and the fusiform face area (FFA) in humans. These studies were designed to find out how faces are processed in the visual system compared to other objects. Yet the underlying mechanism of face processing has not been completely revealed. Here, we show that a hierarchical computational model, inspired by electrophysiological evidence on face processing in primates, is able to generate representational properties similar to those observed in monkey face patches (posterior, middle and anterior patches). Since the most important goal of sensory neuroscience is linking neural responses with behavioral outputs, we test whether the proposed model, which is designed to account for neural responses in monkey face patches, is also able to predict well-documented behavioral face phenomena observed in humans. We show that the proposed model reproduces several cognitive face effects, such as the composite face effect and canonical face views. Our model provides insights about the underlying computations that transfer visual information from posterior to anterior face patches.

  2. Mathematical modeling of PDC bit drilling process based on a single-cutter mechanics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wojtanowicz, A.K.; Kuru, E.

    1993-12-01

    An analytical development of a new mechanistic drilling model for polycrystalline diamond compact (PDC) bits is presented. The derivation accounts for static balance of forces acting on a single PDC cutter and is based on an assumed similarity between bit and cutter. The model is fully explicit with physical meanings given to all constants and functions. Three equations constitute the mathematical model: torque, drilling rate, and bit life. The equations comprise the cutter's geometry, rock properties, drilling parameters, and four empirical constants. The constants are used to match the model to a PDC drilling process. Also presented are qualitative and predictive verifications of the model. Qualitative verification shows that the model's response to drilling process variables is similar to the behavior of full-size PDC bits. However, accuracy of the model's predictions of PDC bit performance is limited primarily by imprecision of bit-dull evaluation. The verification study is based upon the reported laboratory drilling and field drilling tests as well as field data collected by the authors.

  3. Latent Heating Retrieval from TRMM Observations Using a Simplified Thermodynamic Model

    NASA Technical Reports Server (NTRS)

    Grecu, Mircea; Olson, William S.

    2003-01-01

    A procedure for the retrieval of hydrometeor latent heating from TRMM active and passive observations is presented. The procedure is based on current methods for estimating multiple-species hydrometeor profiles from TRMM observations. The species include: cloud water, cloud ice, rain, and graupel (or snow). A three-dimensional wind field is prescribed based on the retrieved hydrometeor profiles, and, assuming a steady state, the sources and sinks in the hydrometeor conservation equations are determined. Then, the momentum and thermodynamic equations, in which the heating and cooling are derived from the hydrometeor sources and sinks, are integrated one step forward in time. The hydrometeor sources and sinks are reevaluated based on the new wind field, and the momentum and thermodynamic equations are integrated one more step. The reevaluation-integration process is repeated until a steady state is reached. The procedure is tested using cloud model simulations. Cloud-model-derived fields are used to synthesize TRMM observations, from which hydrometeor profiles are derived. The procedure is applied to the retrieved hydrometeor profiles, and the latent heating estimates are compared to the actual latent heating produced by the cloud model. Examples of the procedure's application to real TRMM data are also provided.
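
The reevaluation-integration loop amounts to a fixed-point iteration: re-derive the tendencies from the current state, step forward, and repeat until the state stops changing. A generic stand-in (the "tendency" here is a linear relaxation, not the actual momentum and thermodynamic equations):

```python
# Generic iterate-to-steady-state loop of the kind described above.
# The linear relaxation below is a placeholder for re-evaluated sources/sinks.
import numpy as np

state = np.array([1.0, 2.0, 3.0])
target = np.array([0.5, 0.5, 0.5])  # stand-in for the balance implied by sources/sinks

for _ in range(1000):
    tendency = 0.1 * (target - state)  # re-evaluate "sources and sinks"
    new_state = state + tendency       # one forward time step
    if np.max(np.abs(new_state - state)) < 1e-8:
        break                          # steady state reached
    state = new_state
```

Convergence of such loops is only guaranteed when each re-evaluation pulls the state toward a consistent balance; in the retrieval, that balance couples the wind field, the hydrometeor sources and sinks, and the heating they imply.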

  4. Wave Processes in Arctic Seas, Observed from TerraSAR-X

    DTIC Science & Technology

    2015-09-30

    The aim is to improve wave models as well as ice models applicable to a changing Arctic wave and ice climate. This includes observation and ... fields retrieved from the TS-X image swaths. Related report: “Wave Climate and Wave Mixing in the Marginal Ice Zones of Arctic Seas, Observations and Modelling”. DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited.

  5. Modeling urbanized watershed flood response changes with distributed hydrological model: key hydrological processes, parameterization and case studies

    NASA Astrophysics Data System (ADS)

    Chen, Y.

    2017-12-01

Urbanization has been the worldwide development trend of the past century, and developing countries have experienced much more rapid urbanization in recent decades. Urbanization brings many benefits to human beings, but it also causes negative impacts, such as increased flood risk. The impact of urbanization on flood response has long been observed, but studying this effect quantitatively still faces great challenges. For example, setting up an appropriate hydrological model representing the changed flood responses and determining accurate model parameters are very difficult in an urbanized or urbanizing watershed. The Pearl River Delta area has undergone the most rapid urbanization observed in China over the past decades, and dozens of highly urbanized watersheds have appeared there. In this study, a physically based distributed watershed hydrological model, the Liuxihe model, is employed and revised to simulate the flood hydrological processes of highly urbanized watersheds in the Pearl River Delta area. A virtual soil type is defined in the terrain properties dataset, and its runoff production and routing algorithms are added to the Liuxihe model. Based on a parameter sensitivity analysis, the key hydrological processes of a highly urbanized watershed are identified, which provides insight into the hydrological processes and informs parameter optimization. Based on the above analysis, the model is set up for the Songmushan watershed, where observed hydrological data are available. A model parameter optimization and updating strategy is proposed based on remotely sensed LUC types, which optimizes model parameters with the PSO algorithm and updates them when the LUC types change. The model parameters for the Songmushan watershed are regionalized to the Pearl River Delta watersheds based on the LUC types of the other watersheds. A dozen watersheds in the highly urbanized area of Dongguan City in the Pearl River Delta area were studied for the flood response changes due to

  6. Implementation of nursing conceptual models: observations of a multi-site research team.

    PubMed

    Shea, H; Rogers, M; Ross, E; Tucker, D; Fitch, M; Smith, I

    1989-01-01

The general acceptance by nursing of the nursing process as the methodology of practice enabled nurses to have a common grounding for practice, research and theory development in the 1970s. It has become clear, however, that the nursing process is just that--a process. What is sorely needed is the nursing content for that process, and consequently in the past 10 years nursing theorists have further developed their particular conceptual models (CMs). Three major teaching hospitals in Toronto have instituted a conceptual model of nursing as a basis of nursing practice. Mount Sinai Hospital has adopted Roy's adaptation model; Sunnybrook Medical Centre, King's goal attainment model; and Toronto General Hospital, Orem's self-care deficit theory model. All of these hospitals are affiliated through a series of cross appointments with the Faculty of Nursing at the University of Toronto. Two community hospitals, Mississauga and Scarborough General, have also adopted Orem's model and are related to the University through educational, community and interest groups. A group of researchers from these hospitals and the University of Toronto have proposed a collaborative project to determine what impact using a conceptual model will make on nursing practice. Discussions among the participants of this research group indicate that there are observations associated with instituting conceptual models that can be identified early in the process of implementation. These observations may be of assistance to others contemplating the implementation of conceptually based practice in their institutions.

  7. Geocenter variations derived from a combined processing of LEO- and ground-based GPS observations

    NASA Astrophysics Data System (ADS)

    Männel, Benjamin; Rothacher, Markus

    2017-08-01

GNSS observations provided by the global tracking network of the International GNSS Service (IGS, Dow et al. in J Geod 83(3):191-198, 2009) play an important role in the realization of a unique terrestrial reference frame that is accurate enough to allow a detailed monitoring of the Earth's system. Combining these ground-based data with GPS observations tracked by high-quality dual-frequency receivers on board low Earth orbiters (LEOs) is a promising way to further improve the realization of the terrestrial reference frame and the estimation of geocenter coordinates, GPS satellite orbits and Earth rotation parameters. To assess the scope of the improvement on the geocenter coordinates, we processed a network of 53 globally distributed and stable IGS stations together with four LEOs (GRACE-A, GRACE-B, OSTM/Jason-2 and GOCE) over a time interval of 3 years (2010-2012). To ensure fully consistent solutions, the zero-difference phase observations of the ground stations and LEOs were processed in a common least-squares adjustment, estimating all the relevant parameters such as GPS and LEO orbits, station coordinates, Earth rotation parameters and geocenter motion. We present the significant impact of the individual LEOs and of a combination of all four LEOs on the geocenter coordinates. The formal errors are reduced by around 20% due to the inclusion of one LEO into the ground-only solution, while in a solution with four LEOs the LEO-specific characteristics are significantly reduced. We compare the derived geocenter coordinates with LAGEOS results and external solutions based on GPS and SLR data. We find good agreement in the amplitudes of all components; however, the phases in x- and z-direction do not agree well.

  8. Development and evaluation of spatial point process models for epidermal nerve fibers.

    PubMed

    Olsbo, Viktor; Myllymäki, Mari; Waller, Lance A; Särkkä, Aila

    2013-06-01

    We propose two spatial point process models for the spatial structure of epidermal nerve fibers (ENFs) across human skin. The models derive from two point processes, Φb and Φe, describing the locations of the base and end points of the fibers. Each point of Φe (the end point process) is connected to a unique point in Φb (the base point process). In the first model, both Φe and Φb are Poisson processes, yielding a null model of uniform coverage of the skin by end points and general baseline results and reference values for moments of key physiologic indicators. The second model provides a mechanistic model to generate end points for each base, and we model the branching structure more directly by defining Φe as a cluster process conditioned on the realization of Φb as its parent points. In both cases, we derive distributional properties for observable quantities of direct interest to neurologists such as the number of fibers per base, and the direction and range of fibers on the skin. We contrast both models by fitting them to data from skin blister biopsy images of ENFs and provide inference regarding physiological properties of ENFs. Copyright © 2013 Elsevier Inc. All rights reserved.
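The base-to-end branching structure of the second model resembles a classical Thomas-type cluster process: parent (base) points are homogeneous Poisson, and each spawns a Poisson number of displaced offspring (end) points. A small simulation sketch under that simplifying assumption (parameter values are illustrative, not fitted to ENF data):

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_enf(lam_base=50.0, mu_ends=3.0, sigma=0.02, window=1.0):
    """Base points ~ homogeneous Poisson on a square window; each base spawns
    a Poisson(mu_ends) number of end points with isotropic Gaussian offsets."""
    n_base = rng.poisson(lam_base * window**2)
    bases = rng.uniform(0.0, window, size=(n_base, 2))
    ends, parent = [], []
    for i, b in enumerate(bases):
        k = rng.poisson(mu_ends)                    # fibers per base
        offsets = rng.normal(0.0, sigma, size=(k, 2))
        ends.append(b + offsets)
        parent += [i] * k
    ends = np.vstack(ends) if ends else np.empty((0, 2))
    return bases, ends, np.array(parent)

bases, ends, parent = simulate_enf()
```

Quantities of interest to neurologists, such as the number of fibers per base or fiber range, fall out directly from `parent` and the base-to-end distances.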

  9. Incorporating Ecosystem Experiments and Observations into Process Models of Forest Carbon and Water Cycles: Challenges and Solutions

    NASA Astrophysics Data System (ADS)

    Ward, E. J.; Thomas, R. Q.; Sun, G.; McNulty, S. G.; Domec, J. C.; Noormets, A.; King, J. S.

    2015-12-01

Numerous studies, both experimental and observational, have been conducted over the past two decades in an attempt to understand how water and carbon cycling in terrestrial ecosystems may respond to changes in climatic conditions. These studies have produced a wealth of detailed data on key processes driving these cycles. In parallel, sophisticated models of these processes have been formulated to answer a variety of questions relevant to natural resource management. Recent advances in data assimilation techniques offer exciting new possibilities to combine this wealth of ecosystem data with process models of ecosystem function to improve prediction and quantify associated uncertainty. Using forests of the southeastern United States as our focus, we will specify how fine-scale physiological data (e.g. half-hourly sap flux) can be scaled up with quantified error for use in models of stand growth and hydrology. This approach represents an opportunity to leverage current and past research from experiments including throughfall displacement × fertilization (PINEMAP), irrigation × fertilization (SETRES), elevated CO2 (Duke and ORNL FACE) and a variety of observational studies in both conifer and hardwood forests throughout the region, using a common platform for data assimilation and prediction. As part of this discussion, we will address variation in dominant species, stand structure, site age, management practices, soils and climate that represents both challenges to the development of a common analytical approach and opportunities to address questions of interest to policy makers and natural resource managers.

  10. Energy-based and process-based constraints on aerosol-climate interaction

    NASA Astrophysics Data System (ADS)

    Suzuki, K.; Sato, Y.; Takemura, T.; Michibata, T.; Goto, D.; Oikawa, E.

    2017-12-01

Recent advances in both satellite observations and global modeling provide a novel opportunity to investigate the long-standing aerosol-climate interaction issue at a fundamental process level, particularly through their combined use. In this presentation, we will highlight our recent progress in understanding the aerosol-cloud-precipitation interaction and its implications for global climate through a synergistic use of a state-of-the-art global climate model (MIROC), a global cloud-resolving model (NICAM) and recent satellite observations (A-Train). In particular, we explore two different aspects of the aerosol-climate interaction issue: (i) the global energy balance perspective and its modulation by aerosols and (ii) the process-level characteristics of the aerosol-induced perturbations to cloud and precipitation. For the former, climate model simulations are used to quantify how components of the global energy budget are modulated by the aerosol forcing. The moist processes are shown to be a critical pathway that links the forcing efficacy and the hydrologic sensitivity arising from aerosol perturbations. Effects of scattering (e.g. sulfate) and absorbing (e.g. black carbon) aerosols are compared in this context to highlight their distinctly different impacts on climate and the hydrologic cycle. The aerosol-induced modulation of moist processes is also investigated in the context of the second aspect, to inform recent arguments about possible overestimates of the aerosol-cloud interaction in climate models. Our recent simulations with NICAM highlight how diverse responses of cloud to aerosol perturbations, which traditional climate models have failed to represent, are reproduced by the high-resolution global model with sophisticated cloud microphysics. We will discuss implications of these findings for a linkage between the two aspects above to help advance process-based understanding of the aerosol-climate interaction and

  11. Dirichlet Process Gaussian-mixture model: An application to localizing coalescing binary neutron stars with gravitational-wave observations

    NASA Astrophysics Data System (ADS)

    Del Pozzo, W.; Berry, C. P. L.; Ghosh, A.; Haines, T. S. F.; Singer, L. P.; Vecchio, A.

    2018-06-01

We reconstruct posterior distributions for the position (sky area and distance) of a simulated set of binary neutron-star gravitational-wave signals observed with Advanced LIGO and Advanced Virgo. We use a Dirichlet Process Gaussian-mixture model, a fully Bayesian non-parametric method that can be used to estimate probability density functions with a flexible set of assumptions. The ability to reliably reconstruct the source position is important for multimessenger astronomy, as recently demonstrated with GW170817. We show that for detector networks comparable to the early operation of Advanced LIGO and Advanced Virgo, typical localization volumes are ~10^4-10^5 Mpc^3, corresponding to ~10^2-10^3 potential host galaxies. The localization volume is a strong function of the network signal-to-noise ratio, scaling roughly as ϱ_net^-6. Fractional localizations improve with the addition of further detectors to the network. Our Dirichlet Process Gaussian-mixture model can be adopted for localizing events detected during future gravitational-wave observing runs, and used to facilitate prompt multimessenger follow-up.
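As an illustration of the general technique (not the authors' pipeline), scikit-learn's variational Dirichlet Process Gaussian mixture can turn a cloud of posterior samples into a smooth density estimate, with the number of effective components inferred from the data rather than fixed in advance:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Stand-in "posterior samples" of (RA, dec, distance): two clusters mimic a
# bimodal sky localization; the real inputs would be MCMC posterior samples.
samples = np.vstack([
    rng.normal([1.0, 0.3, 100.0], [0.05, 0.05, 10.0], size=(500, 3)),
    rng.normal([2.0, -0.2, 150.0], [0.05, 0.05, 10.0], size=(500, 3)),
])

# Truncated Dirichlet-process mixture: superfluous components are driven
# toward zero weight by the stick-breaking prior.
dpgmm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full",
    max_iter=500,
    random_state=0,
).fit(samples)

log_density = dpgmm.score_samples(samples)  # smooth posterior density estimate
```

Credible volumes follow by integrating the fitted density over regions of decreasing probability until the desired credible level is enclosed.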

  12. Key Process Uncertainties in Soil Carbon Dynamics: Comparing Multiple Model Structures and Observational Meta-analysis

    NASA Astrophysics Data System (ADS)

    Sulman, B. N.; Moore, J.; Averill, C.; Abramoff, R. Z.; Bradford, M.; Classen, A. T.; Hartman, M. D.; Kivlin, S. N.; Luo, Y.; Mayes, M. A.; Morrison, E. W.; Riley, W. J.; Salazar, A.; Schimel, J.; Sridhar, B.; Tang, J.; Wang, G.; Wieder, W. R.

    2016-12-01

    Soil carbon (C) dynamics are crucial to understanding and predicting C cycle responses to global change and soil C modeling is a key tool for understanding these dynamics. While first order model structures have historically dominated this area, a recent proliferation of alternative model structures representing different assumptions about microbial activity and mineral protection is providing new opportunities to explore process uncertainties related to soil C dynamics. We conducted idealized simulations of soil C responses to warming and litter addition using models from five research groups that incorporated different sets of assumptions about processes governing soil C decomposition and stabilization. We conducted a meta-analysis of published warming and C addition experiments for comparison with simulations. Assumptions related to mineral protection and microbial dynamics drove strong differences among models. In response to C additions, some models predicted long-term C accumulation while others predicted transient increases that were counteracted by accelerating decomposition. In experimental manipulations, doubling litter addition did not change soil C stocks in studies spanning as long as two decades. This result agreed with simulations from models with strong microbial growth responses and limited mineral sorption capacity. In observations, warming initially drove soil C loss via increased CO2 production, but in some studies soil C rebounded and increased over decadal time scales. In contrast, all models predicted sustained C losses under warming. The disagreement with experimental results could be explained by physiological or community-level acclimation, or by warming-related changes in plant growth. In addition to the role of microbial activity, assumptions related to mineral sorption and protected C played a key role in driving long-term model responses. 
In general, simulations were similar in their initial responses to perturbations but diverged over

  13. The impacts of data constraints on the predictive performance of a general process-based crop model (PeakN-crop v1.0)

    NASA Astrophysics Data System (ADS)

    Caldararu, Silvia; Purves, Drew W.; Smith, Matthew J.

    2017-04-01

Improving international food security under a changing climate and an increasing human population will be greatly aided by improving our ability to modify, understand and predict crop growth. What we predominantly have at our disposal are either process-based models of crop physiology or statistical analyses of yield datasets, both of which suffer from various sources of error. In this paper, we present a generic process-based crop model (PeakN-crop v1.0) which we parametrise using a Bayesian model-fitting algorithm to three different data sources: space-based vegetation indices, eddy covariance productivity measurements and regional crop yields. We show that the model parametrised without data, based on prior knowledge of the parameters, can largely capture the observed behaviour, but the data-constrained model greatly improves the model fit and reduces prediction uncertainty. We investigate the extent to which each dataset contributes to the model performance and show that while all data improve on the prior model fit, the satellite-based data and crop yield estimates are particularly important for reducing model error and uncertainty. Despite these improvements, we conclude that there are still significant knowledge gaps in terms of the data available for model parametrisation, but our study can help indicate the data collection necessary to improve our predictions of crop yields and crop responses to environmental changes.
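A Bayesian model-fitting step of this kind can be sketched with a toy random-walk Metropolis sampler; the logistic "growth model", its parameter values, and the synthetic data are all hypothetical stand-ins for the actual PeakN-crop setup:

```python
import numpy as np

rng = np.random.default_rng(1)

def model(t, r):
    # Toy logistic growth curve standing in for the real crop model.
    return 1.0 / (1.0 + np.exp(-r * (t - 5.0)))

t = np.linspace(0, 10, 40)
obs = model(t, r=0.9) + rng.normal(0, 0.05, t.size)  # synthetic "observations"

def log_post(r, sigma=0.05):
    if not 0.0 < r < 5.0:                 # flat prior on (0, 5)
        return -np.inf
    resid = obs - model(t, r)
    return -0.5 * np.sum((resid / sigma) ** 2)

# Random-walk Metropolis: the data pull the parameter away from the prior.
chain, r = [], 2.5
lp = log_post(r)
for _ in range(5000):
    prop = r + rng.normal(0, 0.1)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        r, lp = prop, lp_prop
    chain.append(r)

posterior = np.array(chain[1000:])        # discard burn-in
```

The spread of `posterior` quantifies parameter uncertainty; adding more informative data narrows it, which is the mechanism behind the reduced prediction uncertainty reported above.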

  14. Overcoming uncertainty with carbonyl sulfide-based GPP estimates: observing and modeling soil COS fluxes in terrestrial ecosystems

    NASA Astrophysics Data System (ADS)

    Whelan, M.; Hilton, T. W.; Berry, J. A.; Berkelhammer, M. B.; Desai, A. R.; Rastogi, B.; Campbell, J. E.

    2015-12-01

Significant carbonyl sulfide (COS) exchange by soils limits the applicability of net ecosystem COS flux observations as a proxy for stomatal trace gas exchange. High frequency measurements of COS over urban and natural ecosystems offer a potential window into processes regulating the carbon and water cycle: photosynthetic carbon uptake and stomatal conductance. COS diffuses through plant stomata and is irreversibly consumed by enzymes involved in photosynthesis. In certain environments, the magnitude of soil COS fluxes may constitute one-quarter of COS uptake by plants. Here we present a way of anticipating conditions under which anomalously large soil COS fluxes are likely to occur so that they can be taken into account. Previous studies have pointed to either a tendency for soil uptake of COS from the atmosphere with a soil moisture optimum, or COS production that increases exponentially with temperature. Data from field and laboratory studies were used to deconvolve the two processes. CO2 and COS fluxes were observed from forest, desert, grassland, and agricultural soils under a range of temperature and soil moisture conditions. We demonstrate how to estimate temperature and soil moisture impacts on COS soil production based on our cross-site incubations. By building a model of soil COS exchange that combines production and consumption terms, we offer a framework for interpreting the two disparate conclusions about soil COS exchange in previous studies. Such a construction should be used in ecosystem and continental scale modeling of COS fluxes to anticipate where the influence of soil COS exchange needs to be accounted for, resulting in greater utility of carbonyl sulfide as a tracer of plant physiological processes.
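The proposed framework, a net flux combining temperature-driven production with moisture-dependent consumption, can be sketched as follows; the functional forms and parameter values are illustrative assumptions, not the paper's fitted model:

```python
import numpy as np

def soil_cos_flux(T, theta, p1=0.01, p2=0.1, u_max=1.0,
                  theta_opt=0.2, width=0.1):
    """Net soil COS flux = abiotic production (exponential in temperature T)
    minus microbial uptake (peaked at a soil-moisture optimum theta_opt).
    Positive values denote emission to the atmosphere; all parameters are
    illustrative, not fitted constants."""
    production = p1 * np.exp(p2 * T)
    uptake = u_max * np.exp(-((theta - theta_opt) / width) ** 2)
    return production - uptake

# Cool, moist soil: uptake dominates (net sink); hot, dry soil: net source.
sink = soil_cos_flux(T=15.0, theta=0.2)
source = soil_cos_flux(T=45.0, theta=0.02)
```

Superposing the two terms reproduces both earlier findings: a moisture optimum in uptake when temperatures are moderate, and apparent exponential emission when soils are hot and dry.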

  15. Model of Values-Based Management Process in Schools: A Mixed Design Study

    ERIC Educational Resources Information Center

    Dogan, Soner

    2016-01-01

    The aim of this paper is to evaluate the school administrators' values-based management behaviours according to the teachers' perceptions and opinions and, accordingly, to build a model of values-based management process in schools. The study was conducted using explanatory design which is inclusive of both quantitative and qualitative methods.…

  16. Extra-Tropical Cyclones at Climate Scales: Comparing Models to Observations

    NASA Astrophysics Data System (ADS)

    Tselioudis, G.; Bauer, M.; Rossow, W.

    2009-04-01

    Climate is often defined as the accumulation of weather, and weather is not the concern of climate models. Justification for this latter sentiment has long been hidden behind coarse model resolutions and blunt validation tools based on climatological maps. The spatial-temporal resolutions of today's climate models and observations are converging onto meteorological scales, however, which means that with the correct tools we can test the largely unproven assumption that climate model weather is correct enough that its accumulation results in a robust climate simulation. Towards this effort we introduce a new tool for extracting detailed cyclone statistics from observations and climate model output. These include the usual cyclone characteristics (centers, tracks), but also adaptive cyclone-centric composites. We have created a novel dataset, the MAP Climatology of Mid-latitude Storminess (MCMS), which provides a detailed 6 hourly assessment of the areas under the influence of mid-latitude cyclones, using a search algorithm that delimits the boundaries of each system from the outer-most closed SLP contour. Using this we then extract composites of cloud, radiation, and precipitation properties from sources such as ISCCP and GPCP to create a large comparative dataset for climate model validation. A demonstration of the potential usefulness of these tools in process-based climate model evaluation studies will be shown.

  17. Interpreting Field-based Observations of Complex Fluvial System Behavior through Theory and Numerical Models: Examples from the Ganges-Brahmaputra-Meghna Delta

    NASA Astrophysics Data System (ADS)

    Sincavage, R.; Goodbred, S. L., Jr.; Pickering, J.; Diamond, M. S.; Paola, C.; Liang, M.

    2016-12-01

    Field observations of depositional systems using outcrop, borehole, and geophysical data stimulate ideas regarding process-based creation of the sedimentary record. Theory and numerical modeling provide insight into the often perplexing nature of these systems by isolating the processes responsible for the observed response. An extensive dataset of physical and chemical sediment properties from field data in the Ganges-Brahmaputra-Meghna Delta (GBMD) indicate the presence of complex, multi-dimensional fluvial system behaviors. Paleodischarges during the last lowstand were insufficient to generate paleovalley geometries and transport boulder-sized basal gravel as observed in densely-spaced (3-5 km) borehole data and a 255 km long fluvial multichannel seismic survey. Instead, uniform flow-derived flood heights and Shields-derived flow velocities based on measured field observations support the conclusion that previously documented megafloods conveyed through the Tsangpo Gorge created the antecedent topography upon which the Holocene sediment dispersal system has since evolved. In the fault-bounded Sylhet Basin east of the main valley system, borehole data reveal three principal mid-Holocene sediment delivery pathways; two that terminate in the basin interior and exhibit rapid mass extraction, and one located along the western margin of Sylhet Basin that serves to bypass the basin interior to downstream depocenters. In spite of topographically favorable conditions and enhanced subsidence rates for delivery into the basin, the fluvial system has favored the bypass-dominated pathway, leaving the central basin perennially underfilled. A "hydrologic barrier" effect from seasonally high monsoon-lake levels has been proposed as a mechanism that precludes sediment delivery to Sylhet Basin. However, numerical models with varying lake level heights indicate that the presence or absence of a seasonal lake has little effect on channel path selection. 
Rather, it appears that pre

  18. Approximation of epidemic models by diffusion processes and their statistical inference.

    PubMed

    Guy, Romain; Larédo, Catherine; Vergu, Elisabeta

    2015-02-01

Multidimensional continuous-time Markov jump processes [Formula: see text] on [Formula: see text] form a usual set-up for modeling [Formula: see text]-like epidemics. However, when facing incomplete epidemic data, inference based on [Formula: see text] is not easy to achieve. Here, we start building a new framework for the estimation of key parameters of epidemic models based on statistics of diffusion processes approximating [Formula: see text]. First, previous results on the approximation of density-dependent [Formula: see text]-like models by diffusion processes with small diffusion coefficient [Formula: see text], where [Formula: see text] is the population size, are generalized to non-autonomous systems. Second, our previous inference results on discretely observed diffusion processes with small diffusion coefficient are extended to time-dependent diffusions. Consistent and asymptotically Gaussian estimates are obtained for a fixed number [Formula: see text] of observations, which corresponds to the epidemic context, and for [Formula: see text]. A correction term, which yields better estimates non-asymptotically, is also included. Finally, the performance and robustness of our estimators with respect to various parameters such as [Formula: see text] (the basic reproduction number), [Formula: see text], [Formula: see text] are investigated on simulations. Two models, [Formula: see text] and [Formula: see text], corresponding to single and recurrent outbreaks, respectively, are used to simulate data. The findings indicate that our estimators have good asymptotic properties and perform well for realistic numbers of observations and population sizes. This study lays the foundations of a generic inference method currently under extension to incompletely observed epidemic data. Indeed, contrary to the majority of current inference techniques for partially observed processes, which necessitate computer-intensive simulations, our method being mostly an
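The diffusion approximation idea, drift given by the deterministic epidemic dynamics plus noise shrinking like 1/sqrt(N), can be sketched for an SIR-type model with a simple Euler-Maruyama scheme (parameter values are illustrative, and this is a generic sketch rather than the authors' estimation code):

```python
import numpy as np

rng = np.random.default_rng(3)

def sir_diffusion(beta=1.5, gamma=0.5, N=10000, s0=0.99, i0=0.01,
                  T=30.0, dt=0.01):
    """Euler-Maruyama for the diffusion approximation of the SIR jump process:
    drift = ODE right-hand side, event noise scaled by 1/sqrt(N)."""
    steps = int(T / dt)
    s, i = s0, i0
    path = np.empty((steps, 2))
    for k in range(steps):
        inf, rec = beta * s * i, gamma * i        # infection / recovery rates
        dw1, dw2 = rng.normal(0.0, np.sqrt(dt), 2)
        # Infection events couple s and i with opposite signs; recoveries
        # only remove infectives.
        s += -inf * dt - np.sqrt(inf / N) * dw1
        i += (inf - rec) * dt + np.sqrt(inf / N) * dw1 - np.sqrt(rec / N) * dw2
        s, i = max(s, 0.0), max(i, 0.0)
        path[k] = s, i
    return path

path = sir_diffusion()
```

For large N the noise term is small, which is exactly the small-diffusion-coefficient regime the inference results above exploit.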

  19. BioNetSim: a Petri net-based modeling tool for simulations of biochemical processes.

    PubMed

    Gao, Junhui; Li, Li; Wu, Xiaolin; Wei, Dong-Qing

    2012-03-01

BioNetSim, Petri net-based software for modeling and simulating biochemical processes, is developed; its design and implementation are presented in this paper, including the logic construction, real-time access to KEGG (Kyoto Encyclopedia of Genes and Genomes), and the BioModel database. Furthermore, glycolysis is simulated as an example of its application. BioNetSim is a helpful tool for researchers to download data, model biological networks, and simulate complicated biochemical processes. Gene regulatory networks, metabolic pathways, signaling pathways, and kinetics of cell interaction are all available in BioNetSim, which makes modeling more efficient and effective. Like other Petri net-based software, BioNetSim does well in graphical presentation and mathematical construction. Moreover, it offers several advantages: (1) it creates models in a database; (2) it provides real-time access to KEGG and BioModel and transfers the data to Petri nets; (3) it provides qualitative analysis, such as computation of constants; (4) it generates graphs for tracing the concentration of every molecule during the simulation.
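The core Petri net token-game semantics that such tools build on can be sketched in a few lines; this is a generic illustration, not BioNetSim's implementation:

```python
from collections import Counter

class PetriNet:
    """Minimal token-game semantics: a transition is enabled when every input
    place holds enough tokens; firing moves tokens from inputs to outputs."""
    def __init__(self, marking):
        self.marking = Counter(marking)
        self.transitions = {}

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (Counter(inputs), Counter(outputs))

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking[p] >= n for p, n in inputs.items())

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"{name} is not enabled")
        inputs, outputs = self.transitions[name]
        self.marking -= inputs
        self.marking += outputs

# Toy reaction "glucose -> 2 pyruvate" as a single lumped transition.
net = PetriNet({"glucose": 3})
net.add_transition("glycolysis", {"glucose": 1}, {"pyruvate": 2})
net.fire("glycolysis")
```

Places map naturally to molecular species and transitions to reactions, which is why Petri nets suit metabolic and signaling pathway models.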

  20. Junior high school students' cognitive process in solving the developed algebraic problems based on information processing taxonomy model

    NASA Astrophysics Data System (ADS)

    Purwoko, Saad, Noor Shah; Tajudin, Nor'ain Mohd

    2017-05-01

This study aims to: i) develop problem-solving questions on the Linear Equation System of Two Variables (LESTV) based on the levels of the IPT Model; ii) explain the level of students' information-processing skill in solving LESTV problems; iii) explain students' information-processing skill in solving LESTV problems; and iv) explain students' cognitive process in solving LESTV problems. This study involves three phases: i) development of LESTV problem questions based on the Tessmer Model; ii) a quantitative survey method for analyzing students' skill level of information processing; and iii) a qualitative case study method for analyzing students' cognitive process. The population of the study was 545 eighth-grade students, represented by a sample of 170 students from five Junior High Schools in the Hilir Barat Zone, Palembang (Indonesia), chosen using cluster sampling. Fifteen students among them were drawn as a sample for the interview session, which continued until information saturation was reached. The data were collected using the LESTV problem-solving test and the interview protocol. The quantitative data were analyzed using descriptive statistics, while the qualitative data were analyzed using content analysis. The findings of this study indicated that students' cognitive process was only at the step of identifying external sources and fluently executing algorithms in short-term memory. Only 15.29% of students could retrieve type A information and 5.88% of students could retrieve type B information from long-term memory. The implication is that the developed LESTV problems validated the IPT Model for assessing students at different levels of the hierarchy.

  1. Likelihood-based inference for discretely observed birth-death-shift processes, with applications to evolution of mobile genetic elements.

    PubMed

    Xu, Jason; Guttorp, Peter; Kato-Maeda, Midori; Minin, Vladimir N

    2015-12-01

    Continuous-time birth-death-shift (BDS) processes are frequently used in stochastic modeling, with many applications in ecology and epidemiology. In particular, such processes can model evolutionary dynamics of transposable elements-important genetic markers in molecular epidemiology. Estimation of the effects of individual covariates on the birth, death, and shift rates of the process can be accomplished by analyzing patient data, but inferring these rates in a discretely and unevenly observed setting presents computational challenges. We propose a multi-type branching process approximation to BDS processes and develop a corresponding expectation maximization algorithm, where we use spectral techniques to reduce calculation of expected sufficient statistics to low-dimensional integration. These techniques yield an efficient and robust optimization routine for inferring the rates of the BDS process, and apply broadly to multi-type branching processes whose rates can depend on many covariates. After rigorously testing our methodology in simulation studies, we apply our method to study intrapatient time evolution of IS6110 transposable element, a genetic marker frequently used during estimation of epidemiological clusters of Mycobacterium tuberculosis infections. © 2015, The International Biometric Society.

  2. Observational and Modeling-based Study of Corsica Thunderstorms: Preparation of the EXAEDRE Airborne Campaign

    NASA Astrophysics Data System (ADS)

    Defer, E.; Coquillat, S.; Lambert, D.; Pinty, J. P.; Prieur, S.; Caumont, O.; Labatut, L.; Nuret, M.; Blanchet, P.; Buguet, M.; Lalande, P.; Labrouche, G.; Pedeboy, S.; Lojou, J. Y.; Schwarzenboeck, A.; Delanoë, J.; Bourdon, A.; Guiraud, L.

    2017-12-01

The 4-year EXAEDRE (EXploiting new Atmospheric Electricity Data for Research and the Environment; Oct 2016-Sept 2020) project is sponsored by the French Science Foundation ANR (Agence Nationale de la Recherche). This project is a French contribution to the HyMeX (HYdrological cycle in the Mediterranean EXperiment) program. The EXAEDRE activities rely on innovative, multi-disciplinary, state-of-the-art instrumentation and modeling tools to provide a comprehensive description of the electrical activity in thunderstorms. The EXAEDRE observational part is based on i) existing lightning records collected during the HyMeX Special Observation Period (SOP1; Sept-Nov 2012), and permanent lightning observations provided by the research Lightning Mapping Array SAETTA and the operational Météorage lightning locating systems, ii) additional lightning observations mapped with a new VHF interferometer especially developed within the EXAEDRE project, and iii) a dedicated airborne campaign over Corsica. The modeling part of the EXAEDRE project exploits the electrification and lightning schemes developed in the cloud-resolving model MesoNH and promotes an innovative technique of flash data assimilation in the French operational model AROME of Météo-France. An overview of the EXAEDRE project will be given with an emphasis on the instrumental, observational and modeling activities performed during the first year of the project. The preparation of the EXAEDRE airborne campaign scheduled for September 2018 over Corsica will then be discussed. Acknowledgements. The EXAEDRE project is sponsored by grant ANR-16-CE04-0005 with support from the MISTRALS/HyMeX meta program.

  3. Event-based hydrological modeling for detecting dominant hydrological process and suitable model strategy for semi-arid catchments

    NASA Astrophysics Data System (ADS)

    Huang, Pengnian; Li, Zhijia; Chen, Ji; Li, Qiaoling; Yao, Cheng

    2016-11-01

Properly simulating hydrological processes in semi-arid areas remains challenging. This study assesses the impact of different modeling strategies on simulating flood processes in semi-arid catchments. Four classic hydrological models, TOPMODEL, XINANJIANG (XAJ), SAC-SMA and TANK, were selected and applied to three semi-arid catchments in North China. Based on analysis and comparison of the simulation results of these classic models, four new flexible models were constructed and used to further investigate the suitability of various modeling strategies for semi-arid environments. Numerical experiments were also designed to examine the performances of the models. The results show that in semi-arid catchments a suitable model needs to include at least one nonlinear component to simulate the main process of surface runoff generation. If there are more than two nonlinear components in the hydrological model, they should be arranged in parallel rather than in series. In addition, the results show that parallel nonlinear components should be combined by multiplication rather than addition. Moreover, this study reveals that the key hydrological process in semi-arid catchments is infiltration-excess surface runoff, a nonlinear component.

  4. Surface Soil Moisture Estimates Across China Based on Multi-satellite Observations and A Soil Moisture Model

    NASA Astrophysics Data System (ADS)

    Zhang, Ke; Yang, Tao; Ye, Jinyin; Li, Zhijia; Yu, Zhongbo

    2017-04-01

Soil moisture is a key variable that regulates exchanges of water and energy between the land surface and the atmosphere. Soil moisture retrievals based on microwave satellite remote sensing have made it possible to estimate global surface (up to about 10 cm in depth) soil moisture routinely. Although many satellites are operating, including NASA's Soil Moisture Active Passive mission (SMAP), ESA's Soil Moisture and Ocean Salinity mission (SMOS), JAXA's Advanced Microwave Scanning Radiometer 2 mission (AMSR2), and China's Fengyun (FY) missions, key differences exist among the different satellite-based soil moisture products. In this study, we applied a single-channel soil moisture retrieval model forced by multiple sources of satellite brightness temperature observations to estimate consistent daily surface soil moisture across China at a spatial resolution of 25 km. By utilizing observations from multiple satellites, we are able to estimate daily soil moisture across the whole domain of China. We further developed a daily soil moisture accounting model and applied it to downscale the 25-km satellite-based soil moisture to 5 km. Compared with observations from a dense observation network implemented in Anhui Province, China, our estimated soil moisture shows reasonably good agreement (RMSE < 0.1 and r > 0.8).

  5. A specialized face-processing model inspired by the organization of monkey face patches explains several face-specific phenomena observed in humans

    PubMed Central

    Farzmahdi, Amirhossein; Rajaei, Karim; Ghodrati, Masoud; Ebrahimpour, Reza; Khaligh-Razavi, Seyed-Mahdi

    2016-01-01

Converging reports indicate that face images are processed through specialized neural networks in the brain, i.e. face patches in monkeys and the fusiform face area (FFA) in humans. These studies were designed to find out how faces are processed in the visual system compared to other objects. Yet the underlying mechanisms of face processing are not fully understood. Here, we show that a hierarchical computational model, inspired by electrophysiological evidence on face processing in primates, is able to generate representational properties similar to those observed in monkey face patches (posterior, middle and anterior patches). Since the most important goal of sensory neuroscience is linking neural responses with behavioral outputs, we test whether the proposed model, which is designed to account for neural responses in monkey face patches, is also able to predict well-documented behavioral face phenomena observed in humans. We show that the proposed model reproduces several cognitive face effects, such as the composite face effect and canonical face views. Our model provides insights about the underlying computations that transfer visual information from posterior to anterior face patches. PMID:27113635

  6. An Ontology-Based Conceptual Model For Accumulating And Reusing Knowledge In A DMAIC Process

    NASA Astrophysics Data System (ADS)

    Nguyen, ThanhDat; Kifor, Claudiu Vasile

    2015-09-01

DMAIC (Define, Measure, Analyze, Improve, and Control) is an important knowledge-based process used to enhance the quality of other processes. However, accessing DMAIC knowledge is difficult: conventional approaches struggle to structure and reuse it, mainly because DMAIC knowledge is not represented and organized systematically. In this article, we address this problem with a conceptual model that combines the DMAIC process, knowledge management, and ontology engineering. The main idea of our model is to use ontologies to represent the knowledge generated in each DMAIC phase. We build five knowledge bases storing the knowledge of the DMAIC phases, supported by appropriate information technology tools and techniques. These knowledge bases make knowledge available to experts, managers, and web users during or after DMAIC execution, so that existing knowledge can be shared and reused.

  7. Quantitative evaluation of specific vulnerability to nitrate for groundwater resource protection based on process-based simulation model.

    PubMed

    Huan, Huan; Wang, Jinsheng; Zhai, Yuanzheng; Xi, Beidou; Li, Juan; Li, Mingxiao

    2016-04-15

Groundwater vulnerability assessment has proved to be an effective tool for groundwater protection. Quantitative assessment methods for specific vulnerability remain scarce, however, owing to limited understanding of the complicated fate and transport processes of contaminants in the groundwater system. In this paper, a process-based simulation model of specific vulnerability to nitrate, using a 1D flow and solute transport model of the unsaturated vadose zone, is presented for groundwater resource protection. For a case study in Jilin City, northeast China, the rate constants of denitrification and nitrification, as well as the adsorption constants of ammonium and nitrate in the vadose zone, were acquired by laboratory experiments. The transfer time to the groundwater table, t50, was taken as the specific vulnerability indicator. Finally, overall vulnerability was assessed by establishing the relationship between groundwater net recharge, layer thickness and t50. The results suggest that the most vulnerable regions of Jilin City are mainly distributed in the floodplains of the Songhua and Mangniu rivers, while the least vulnerable areas mostly appear in the second terrace and the back of the first terrace. The areas of low, relatively low and moderate vulnerability together account for 76% of the study area, suggesting a relatively low possibility of nitrate contamination. In addition, the sensitivity analysis showed that the most sensitive factors of specific vulnerability in the vadose zone include the groundwater net recharge rate, the physical properties of the soil medium and the rate constant of nitrate denitrification. By validating the suitability of the process-based simulation model for specific vulnerability, and comparing it with an index-based method using a group of integrated indicators, more realistic and accurate specific vulnerability maps could be acquired with the process-based simulation model. In addition, the advantages, disadvantages, constraint conditions and

  8. Multistage degradation modeling for BLDC motor based on Wiener process

    NASA Astrophysics Data System (ADS)

    Yuan, Qingyang; Li, Xiaogang; Gao, Yuankai

    2018-05-01

Brushless DC motors are widely used, and their working temperatures, regarded as degradation processes, are nonlinear and multistage, so a nonlinear degradation model is needed. This study is based on accelerated degradation data of motors, namely their working temperatures. A multistage Wiener model was established by using a transition function to modify the linear model. A normal weighted average filter (Gaussian filter) was used to improve the estimates of the model parameters. Then, to maximize the likelihood function for parameter estimation, we applied a numerical optimization method, the simplex method, in a cyclic calculation. The modeling results show that the degradation mechanism changes during the degradation of a high-speed motor. The effectiveness and rationality of the model are verified by comparing its life distribution with that of the widely used nonlinear Wiener model, as well as by comparing Q-Q plots of the residuals. Finally, predictions of motor life are obtained from the life distributions at different times calculated by the multistage model.
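    The core construction, a Wiener process whose drift is modified by a transition function, can be sketched as follows. All parameter values, the two-stage drift, and the logistic form of the transition function are illustrative assumptions, not taken from the paper:

    ```python
    import numpy as np

    def multistage_wiener(t, mu1, mu2, sigma, t_c, k, rng):
        """Simulate one degradation path X(t) of a Wiener process whose drift
        blends from mu1 to mu2 around change point t_c via a logistic
        transition function; the diffusion coefficient sigma is constant."""
        dt = np.diff(t, prepend=t[0])                # first increment is zero
        w = 1.0 / (1.0 + np.exp(-k * (t - t_c)))     # transition function in [0, 1]
        drift = (1.0 - w) * mu1 + w * mu2
        increments = drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(len(t))
        return np.cumsum(increments)                 # X(0) = 0

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 100.0, 1001)
    path = multistage_wiener(t, mu1=0.05, mu2=0.2, sigma=0.1, t_c=60.0, k=0.5, rng=rng)
    ```

    Fitting such a model, as the paper does, would then amount to maximizing the likelihood of the observed increments over (mu1, mu2, sigma, t_c, k).
    
    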

  9. Monte Carlo based toy model for fission process

    NASA Astrophysics Data System (ADS)

    Kurniadi, R.; Waris, A.; Viridi, S.

    2014-09-01

There are many models and calculation techniques for obtaining a visible image of the fission yield process. In particular, fission yield can be calculated using two approaches, namely a macroscopic approach and a microscopic approach. This work proposes another approach in which the nucleus is treated as a toy model; hence, the model does not completely represent the real fission process in nature. The toy model is formed by Gaussian distributions of random numbers that randomize distances, such as the distance between a particle and a central point. The scission process starts by smashing the compound-nucleus central point into two parts, the left and right central points. These three points have different Gaussian distribution parameters, namely means (μCN, μL, μR) and standard deviations (σCN, σL, σR). By overlaying the three distributions, the numbers of particles (NL, NR) trapped by the central points can be obtained. This process is iterated until (NL, NR) become constant. The smashing process is then repeated with σL and σR changed randomly.
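    The overlay-and-count procedure described above can be sketched roughly as follows. The particle assignment rule, the convergence criterion, and all numerical values are assumptions for illustration, not details from the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Compound-nucleus "particles": distances drawn from a Gaussian (mu_CN, sigma_CN)
    mu_cn, sigma_cn = 0.0, 1.0
    particles = rng.normal(mu_cn, sigma_cn, size=5000)

    # Smash the central point into left and right central points,
    # each with its own Gaussian parameters
    mu_l, sigma_l = -0.8, 0.6
    mu_r, sigma_r = 0.8, 0.6

    def gauss_pdf(x, mu, sigma):
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

    def trapped_counts(x, mu_l, s_l, mu_r, s_r):
        """Assign each particle to the central point whose Gaussian gives the
        higher density at its position; return (N_L, N_R)."""
        left = gauss_pdf(x, mu_l, s_l) >= gauss_pdf(x, mu_r, s_r)
        return int(left.sum()), int((~left).sum())

    # Iterate, randomly perturbing sigma_L and sigma_R, until (N_L, N_R) repeats
    n_prev = (-1, -1)
    for _ in range(100):
        n_now = trapped_counts(particles, mu_l, sigma_l, mu_r, sigma_r)
        if n_now == n_prev:
            break
        n_prev = n_now
        sigma_l *= 1.0 + 0.01 * (rng.random() - 0.5)
        sigma_r *= 1.0 + 0.01 * (rng.random() - 0.5)
    ```

    Repeating the whole procedure over many random seeds would build up a distribution of (N_L, N_R) splits, the toy analogue of a fission yield curve.
    
    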

  10. Equifinality and process-based modelling

    NASA Astrophysics Data System (ADS)

    Khatami, S.; Peel, M. C.; Peterson, T. J.; Western, A. W.

    2017-12-01

Equifinality is understood as one of the fundamental difficulties in the study of open complex systems, including catchment hydrology. A review of the hydrologic literature reveals that the term equifinality has been widely used, but in many cases inconsistently and without coherent recognition of its various facets, which can lead to ambiguity and methodological fallacies. Therefore, in this study we first characterise the term equifinality within the context of hydrological modelling by reviewing the genesis of the concept and then presenting a theoretical framework. During the past decades, equifinality has mainly been studied as a subset of aleatory (arising due to randomness) uncertainty and for the assessment of model parameter uncertainty. Although the connection between parameter uncertainty and equifinality is undeniable, we argue there is more to equifinality than just aleatory parameter uncertainty. That is, the importance of equifinality and epistemic uncertainty (arising due to lack of knowledge), and their implications, are overlooked in our current practice of model evaluation. Equifinality and epistemic uncertainty in studying, modelling, and evaluating hydrologic processes are treated as if they can be simply discussed in (or often reduced to) probabilistic terms, as for aleatory uncertainty. The deficiencies of this approach to conceptual rainfall-runoff modelling are demonstrated for selected Australian catchments by examination of parameter and internal flux distributions and interactions within SIMHYD. On this basis, we present a new approach that expands the equifinality concept beyond model parameters to inform epistemic uncertainty. The new approach potentially facilitates the identification and development of more physically plausible models and model evaluation schemes, particularly within the multiple working hypotheses framework, and is generalisable to other fields of environmental modelling as well.

  11. Measurement and modeling of moist processes

    NASA Technical Reports Server (NTRS)

    Cotton, William; Starr, David; Mitchell, Kenneth; Fleming, Rex; Koch, Steve; Smith, Steve; Mailhot, Jocelyn; Perkey, Don; Tripoli, Greg

    1993-01-01

The keynote talk summarized five years of work simulating observed mesoscale convective systems with the RAMS (Regional Atmospheric Modeling System) model. Excellent results are obtained when simulating squall lines or other convective systems that are strongly forced by fronts or other lifting mechanisms; less strongly forced systems are difficult to model. The next topic in this colloquium was the measurement of water vapor and other constituents of the hydrologic cycle. Impressive accuracy was shown in measuring water vapor with both the airborne DIAL (Differential Absorption Lidar) system and the ground-based Raman lidar. NMC's plans for initializing land water hydrology in mesoscale models were presented before water vapor measurement concepts for GCIP were discussed. The subject of using satellite data to provide mesoscale moisture and wind analyses was next. Recent activities in modeling moist processes in mesoscale systems were then reported; these modeling activities at the Canadian Atmospheric Environment Service (AES) used a hydrostatic, variable-resolution grid model. Next, the effects of spatial resolution on moisture budgets were discussed, in particular the effects of temporal resolution on heat and moisture budgets for cumulus parameterization. The colloquium concluded with modeling of scale interaction processes.

  12. Two solar proton fluence models based on ground level enhancement observations

    NASA Astrophysics Data System (ADS)

    Raukunen, Osku; Vainio, Rami; Tylka, Allan J.; Dietrich, William F.; Jiggens, Piers; Heynderickx, Daniel; Dierckxsens, Mark; Crosby, Norma; Ganse, Urs; Siipola, Robert

    2018-01-01

Solar energetic particles (SEPs) constitute an important component of the radiation environment in interplanetary space. Accurate modeling of SEP events is crucial for the mitigation of radiation hazards in spacecraft design. In this study we present two new statistical models of high energy solar proton fluences based on ground level enhancement (GLE) observations during solar cycles 19-24. As the basis of our modeling, we utilize fits of a four-parameter double power law function (known as the Band function) to integral GLE fluence spectra in rigidity. In the first model, the integral and differential fluences for protons with energies between 10 MeV and 1 GeV are calculated using the fits, and the distributions of the fluences at certain energies are modeled with an exponentially cut-off power law function. In the second model, we use a more advanced methodology: by investigating the distributions and relationships of the spectral fit parameters we find that they can be modeled as two independent and two dependent variables. Therefore, instead of modeling the fluences separately at different energies, we can model the shape of the fluence spectrum. We present examples of modeling results and show that the two methodologies agree well except for a short mission duration (1 year) at a low confidence level. We also show that there is a reasonable agreement between our models and three well-known solar proton models (JPL, ESP and SEPEM), despite the differences in both the modeling methodologies and the data used to construct the models.
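    A Band-type double power law of the kind used for the GLE fluence fits can be sketched as below. The functional form follows the usual convention (power law with exponential rollover below the break, pure power law above, matched so the function and its first derivative are continuous); the parameter values are illustrative, not fitted to any GLE event:

    ```python
    import numpy as np

    def band_fluence(R, J0, gamma1, gamma2, R0):
        """Four-parameter double power law (Band function) in rigidity R [GV]:
        below the break R_b = (gamma2 - gamma1) * R0, a power law with an
        exponential rollover; above it, a pure power law whose prefactor is
        chosen so that the function and its first derivative are continuous."""
        R = np.asarray(R, dtype=float)
        Rb = (gamma2 - gamma1) * R0
        low = J0 * R ** (-gamma1) * np.exp(-R / R0)
        high = J0 * R ** (-gamma2) * Rb ** (gamma2 - gamma1) * np.exp(gamma1 - gamma2)
        return np.where(R <= Rb, low, high)

    # Illustrative parameter values only
    R = np.logspace(-1, 1, 200)   # 0.1 to 10 GV
    J = band_fluence(R, J0=1e6, gamma1=1.0, gamma2=4.0, R0=0.5)
    ```

    Fitting this function to an observed integral fluence spectrum then yields the four parameters (J0, gamma1, gamma2, R0) whose distributions the second model works with.
    
    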

  13. Developing a Data Driven Process-Based Model for Remote Sensing of Ecosystem Production

    NASA Astrophysics Data System (ADS)

    Elmasri, B.; Rahman, A. F.

    2010-12-01

Estimating ecosystem carbon fluxes at various spatial and temporal scales is essential for quantifying the global carbon cycle. Numerous models have been developed for this purpose using several environmental variables as well as vegetation indices derived from remotely sensed data. Here we present a data-driven modeling approach for gross primary production (GPP) that is based on the process-based model BIOME-BGC. The proposed model was run using available remote sensing data and does not depend on look-up tables. Furthermore, this approach combines the merits of both empirical and process models: empirical models were used to estimate certain input variables such as light use efficiency (LUE). This was achieved by applying remotely sensed data to the mathematical equations that represent biophysical photosynthesis processes in the BIOME-BGC model. Moreover, a new spectral index for estimating maximum photosynthetic activity, the maximum photosynthetic rate index (MPRI), is also developed and presented here. The new index is based on the ratio of the near-infrared and green bands (ρ858.5/ρ555). The model was tested and validated against the MODIS GPP product and flux measurements from two eddy covariance flux towers located at Morgan Monroe State Forest (MMSF) in Indiana and Harvard Forest in Massachusetts. Satellite data acquired by the Advanced Microwave Scanning Radiometer (AMSR-E) and MODIS were used. The data-driven model showed a strong correlation between predicted and measured GPP at the two eddy covariance flux tower sites, and produced better predictions of GPP than the MODIS GPP product. Moreover, the error in the predicted GPP for MMSF and Harvard Forest was dominated by unsystematic errors, suggesting that the results are unbiased. The analysis indicated that maintenance respiration is one of the main factors dominating the overall model outcome errors and improvement in maintenance respiration estimation

  14. Terrestrial N Cycling And C Storage: Some Insights From A Process-based Land Surface Model

    NASA Astrophysics Data System (ADS)

    Zaehle, S.; Friend, A. D.; Friedlingstein, P.

    2008-12-01

We present results of a new land surface model, O-CN, which includes a process-based coupling between the terrestrial cycling of energy, water, carbon, and nitrogen. The model represents the controls of terrestrial nitrogen (N) cycling on carbon (C) pools and fluxes through photosynthesis, respiration, changes in allocation, and soil organic matter decomposition, and explicitly accounts for N leaching and gaseous losses. O-CN has been shown to give realistic results in comparison to observations at a wide range of scales, including in situ flux measurements, productivity databases, and atmospheric CO2 concentration data. O-CN is run for three free air carbon dioxide enrichment (FACE) sites (Duke, Oak Ridge, Aspen), and reproduces observed magnitudes of changes in net primary productivity, foliage area and foliage N content. Several alternative hypotheses concerning the control of N on vegetation growth and decomposition, including effects of diluting foliage N concentrations, down-regulation of photosynthesis and respiration, acclimation of C allocation patterns and biological N fixation, are tested with respect to their effect on long-term C sequestration estimates. Differences in initial N availability, small transient changes in N inputs and the assumed plasticity of C:N stoichiometry can lead to substantial differences in the simulated long-term changes in productivity and C sequestration. We discuss the capacity of observations obtained at FACE sites to evaluate these alternative hypotheses, and investigate implications of a transient versus instantaneous increase in atmospheric carbon dioxide for the magnitude of the simulated limiting effect of N on C cycling. Finally, we re-examine earlier model-based assessments of the terrestrial C sequestration potential using a global transient O-CN simulation driven by increases in atmospheric CO2, N deposition and climatic changes over the 21st century.

  15. Gaussian process model for extrapolation of scattering observables for complex molecules: From benzene to benzonitrile

    NASA Astrophysics Data System (ADS)

    Cui, Jie; Li, Zhiying; Krems, Roman V.

    2015-10-01

We consider the problem of extrapolating the collision properties of a large polyatomic molecule A-H to make predictions of the dynamical properties of another molecule related to A-H by the substitution of the H atom with a small molecular group X, without explicitly computing the potential energy surface for A-X. We assume that the effect of the -H → -X substitution is embodied in a multidimensional function with unknown parameters characterizing the change of the potential energy surface. We propose to apply the Gaussian Process model to determine the dependence of the dynamical observables on the unknown parameters. This can be used to produce an interval of observable values corresponding to physical variations of the potential parameters. We show that the Gaussian Process model combined with classical trajectory calculations can be used to obtain the dependence of the cross sections for collisions of C6H5CN with He on the unknown parameters describing the interaction of the He atom with the CN fragment of the molecule. The unknown parameters are then varied within physically reasonable ranges to produce a prediction uncertainty of the cross sections. The results are normalized to the cross sections for He-C6H6 collisions obtained from quantum scattering calculations in order to provide a prediction interval of the thermally averaged cross sections for collisions of C6H5CN with He.
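    The basic machinery, a Gaussian process regression over the unknown potential parameters, can be sketched with scikit-learn as follows. The training data here are synthetic stand-ins for cross sections from trajectory calculations, and the parameter range, kernel choice and all values are assumptions for illustration:

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    # Hypothetical training set: a scaling parameter theta of the He-CN
    # interaction vs. a cross section computed at that value (arbitrary units);
    # in the paper's setting these would come from classical trajectory runs.
    theta = np.linspace(0.8, 1.2, 9).reshape(-1, 1)
    cross_section = 100.0 * theta.ravel() ** 2 + 5.0 * np.sin(10.0 * theta.ravel())

    gp = GaussianProcessRegressor(kernel=ConstantKernel(100.0) * RBF(length_scale=0.2),
                                  normalize_y=True)
    gp.fit(theta, cross_section)

    # Predict over a physically reasonable parameter range; the GP standard
    # deviation provides a prediction interval for the observable
    theta_test = np.linspace(0.75, 1.25, 101).reshape(-1, 1)
    mean, std = gp.predict(theta_test, return_std=True)
    ```

    Varying the parameters within their physically reasonable ranges and reading off the spread of `mean ± 2 * std` gives the kind of prediction interval the abstract describes.
    
    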

  16. Observed heavy precipitation increase confirms theory and early model

    NASA Astrophysics Data System (ADS)

    Fischer, E. M.; Knutti, R.

    2016-12-01

Environmental phenomena are often first observed, and then explained or simulated quantitatively. The complexity and diversity of processes, the range of scales involved, and the lack of first principles to describe many processes make it challenging to predict conditions beyond the ones observed. Here we use the intensification of heavy precipitation as a counterexample, where seemingly complex and potentially computationally intractable processes to first order manifest themselves in simple ways: the intensification of heavy precipitation is now emerging in the observed record across many regions of the world, confirming both theory and a variety of model predictions made decades ago, before robust evidence arose from observations. Here we compare heavy precipitation changes over Europe and the contiguous United States across station series and gridded observations, theoretical considerations and multi-model ensembles of GCMs and RCMs. We demonstrate that the observed heavy precipitation intensification, aggregated over large areas, agrees remarkably well with Clausius-Clapeyron scaling. The observed changes in heavy precipitation are consistent with, yet somewhat larger than, those predicted by very coarse resolution GCMs in the 1980s and simulated by the newest generation of GCMs and RCMs. For instance, the number of days with very heavy precipitation over Europe has increased by about 45% in observations (years 1981-2013 compared to 1951-1980) and by about 25% in the model average in both GCMs and RCMs, although with substantial spread across models and locations. As the anthropogenic climate signal strengthens, there will be more opportunities to test climate predictions for other variables against observations and across a hierarchy of different models and theoretical concepts. *Fischer, E. M., and R. Knutti, 2016: Observed heavy precipitation increase confirms theory and early models. Nature Climate Change, in press.

  17. An analysis of the Petri net based model of the human body iron homeostasis process.

    PubMed

    Sackmann, Andrea; Formanowicz, Dorota; Formanowicz, Piotr; Koch, Ina; Blazewicz, Jacek

    2007-02-01

In this paper a Petri net based model of human body iron homeostasis is presented and analyzed. Body iron homeostasis is an important but not fully understood complex process. The model presented in the paper is expressed in the language of Petri net theory; applying this theory to the description of biological processes allows very precise analysis of the resulting models. Here, such an analysis of the body iron homeostasis model from a mathematical point of view is given.
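    The Petri net formalism itself is compact enough to sketch: places hold tokens, and a transition fires by consuming a token from each input place and producing one in each output place. The place and transition names below are purely illustrative and are not taken from the iron-homeostasis model in the paper:

    ```python
    # Minimal Petri net: places hold token counts; a transition is enabled
    # when every input place holds at least one token, and firing moves
    # tokens from its input places to its output places.
    places = {"Fe_plasma": 1, "Fe_stored": 0}
    transitions = {
        "uptake":  {"in": ["Fe_plasma"], "out": ["Fe_stored"]},
        "release": {"in": ["Fe_stored"], "out": ["Fe_plasma"]},
    }

    def enabled(t):
        return all(places[p] >= 1 for p in transitions[t]["in"])

    def fire(t):
        assert enabled(t), f"transition {t} is not enabled"
        for p in transitions[t]["in"]:
            places[p] -= 1
        for p in transitions[t]["out"]:
            places[p] += 1

    fire("uptake")   # moves the token from Fe_plasma to Fe_stored
    ```

    In this toy net the total token count is conserved under any firing sequence, a place invariant; proving such invariants is exactly the kind of precise structural analysis Petri net theory offers for biological models.
    
    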

  18. Embedding Task-Based Neural Models into a Connectome-Based Model of the Cerebral Cortex.

    PubMed

    Ulloa, Antonio; Horwitz, Barry

    2016-01-01

A number of recent efforts have used large-scale, biologically realistic neural models to help understand the neural basis for the patterns of activity observed in both resting state and task-related functional neural imaging data. An example of the former is The Virtual Brain (TVB) software platform, which allows one to apply large-scale neural modeling in a whole brain framework. TVB provides a set of structural connectomes of the human cerebral cortex, a collection of neural processing units for each connectome node, and various forward models that can convert simulated neural activity into a variety of functional brain imaging signals. In this paper, we demonstrate how to embed a previously or newly constructed task-based large-scale neural model into the TVB platform. We tested our method on a previously constructed large-scale neural model (LSNM) of visual object processing that consisted of interconnected neural populations that represent primary and secondary visual, inferotemporal, and prefrontal cortex. Some neural elements in the original model were "non-task-specific" (NS) neurons that served as noise generators to "task-specific" neurons that processed shapes during a delayed match-to-sample (DMS) task. We replaced the NS neurons with an anatomical TVB connectome model of the cerebral cortex comprising 998 regions of interest interconnected by white matter fiber tract weights. We embedded our LSNM of visual object processing into corresponding nodes within the TVB connectome. Reciprocal connections between TVB nodes and our task-based modules were included in this framework. We ran visual object processing simulations and showed that the TVB simulator successfully replaced the noise generation originally provided by NS neurons; i.e., the DMS tasks performed with the hybrid LSNM/TVB simulator generated equivalent neural and fMRI activity to that of the original task-based models. Additionally, we found partial agreement between the functional

  19. Embedding Task-Based Neural Models into a Connectome-Based Model of the Cerebral Cortex

    PubMed Central

    Ulloa, Antonio; Horwitz, Barry

    2016-01-01

A number of recent efforts have used large-scale, biologically realistic neural models to help understand the neural basis for the patterns of activity observed in both resting state and task-related functional neural imaging data. An example of the former is The Virtual Brain (TVB) software platform, which allows one to apply large-scale neural modeling in a whole brain framework. TVB provides a set of structural connectomes of the human cerebral cortex, a collection of neural processing units for each connectome node, and various forward models that can convert simulated neural activity into a variety of functional brain imaging signals. In this paper, we demonstrate how to embed a previously or newly constructed task-based large-scale neural model into the TVB platform. We tested our method on a previously constructed large-scale neural model (LSNM) of visual object processing that consisted of interconnected neural populations that represent primary and secondary visual, inferotemporal, and prefrontal cortex. Some neural elements in the original model were “non-task-specific” (NS) neurons that served as noise generators to “task-specific” neurons that processed shapes during a delayed match-to-sample (DMS) task. We replaced the NS neurons with an anatomical TVB connectome model of the cerebral cortex comprising 998 regions of interest interconnected by white matter fiber tract weights. We embedded our LSNM of visual object processing into corresponding nodes within the TVB connectome. Reciprocal connections between TVB nodes and our task-based modules were included in this framework. We ran visual object processing simulations and showed that the TVB simulator successfully replaced the noise generation originally provided by NS neurons; i.e., the DMS tasks performed with the hybrid LSNM/TVB simulator generated equivalent neural and fMRI activity to that of the original task-based models. Additionally, we found partial agreement between the functional

  20. Comparing statistical and process-based flow duration curve models in ungauged basins and changing rain regimes

    NASA Astrophysics Data System (ADS)

    Müller, M. F.; Thompson, S. E.

    2016-02-01

The prediction of flow duration curves (FDCs) in ungauged basins remains an important task for hydrologists given the practical relevance of FDCs for water management and infrastructure design. Predicting FDCs in ungauged basins typically requires spatial interpolation of statistical or model parameters. This task is complicated if climate becomes non-stationary, as the prediction challenge now also requires extrapolation through time. In this context, process-based models for FDCs that mechanistically link the streamflow distribution to climate and landscape factors may have an advantage over purely statistical methods to predict FDCs. This study compares a stochastic (process-based) and a statistical method for FDC prediction in both stationary and non-stationary contexts, using Nepal as a case study. Under contemporary conditions, both models perform well in predicting FDCs, with Nash-Sutcliffe coefficients above 0.80 in 75% of the tested catchments. The main drivers of uncertainty differ between the models: parameter interpolation was the main source of error for the statistical model, while violations of the assumptions of the process-based model represented the main source of its error. The process-based approach performed better than the statistical approach in numerical simulations with non-stationary climate drivers. The predictions of the statistical method under non-stationary rainfall conditions were poor if (i) local runoff coefficients were not accurately determined from the gauge network, or (ii) streamflow variability was strongly affected by changes in rainfall. A Monte Carlo analysis shows that the streamflow regimes in catchments characterized by frequent wet-season runoff and a rapid, strongly non-linear hydrologic response are particularly sensitive to changes in rainfall statistics. In these cases, process-based prediction approaches are favored over statistical models.
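    The two quantities at the heart of this comparison, the empirical FDC itself and the Nash-Sutcliffe coefficient used to score the predictions, are simple to compute. A minimal sketch, with a toy flow series and the Weibull plotting position as an assumed convention:

    ```python
    import numpy as np

    def flow_duration_curve(q):
        """Empirical FDC: flows sorted in descending order against their
        exceedance probability (Weibull plotting position i / (n + 1))."""
        q_sorted = np.sort(np.asarray(q, dtype=float))[::-1]
        n = len(q_sorted)
        exceedance = np.arange(1, n + 1) / (n + 1)
        return exceedance, q_sorted

    def nash_sutcliffe(obs, sim):
        """Nash-Sutcliffe efficiency: 1 for a perfect fit, 0 for a model
        no better than the mean of the observations."""
        obs = np.asarray(obs, dtype=float)
        sim = np.asarray(sim, dtype=float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    q = np.array([3.0, 1.0, 2.0, 5.0, 4.0])   # toy daily flows
    exceedance, q_sorted = flow_duration_curve(q)
    ```

    Comparing an FDC predicted for an ungauged basin against the observed one via `nash_sutcliffe` is the kind of evaluation underlying the 0.80-in-75%-of-catchments result quoted above.
    
    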

  1. Testing a Dual Process Model of Gender-Based Violence: A Laboratory Examination.

    PubMed

    Berke, Danielle S; Zeichner, Amos

    2016-01-01

The dire impact of gender-based violence on society compels development of models comprehensive enough to capture the diversity of its forms. Research has established hostile sexism (HS) as a robust predictor of gender-based violence. However, to date, research has yet to link men's benevolent sexism (BS) to physical aggression toward women, despite correlations between BS and HS and between BS and victim blaming. One model, the opposing process model of benevolent sexism (Sibley & Perry, 2010), suggests that, for men, BS acts indirectly through HS to predict acceptance of hierarchy-enhancing social policy as an expression of a preference for in-group dominance (i.e., social dominance orientation [SDO]). The extent to which this model applies to gender-based violence remains untested. Therefore, in this study, 168 undergraduate men at a U.S. university participated in a competitive reaction time task, during which they had the option to shock an ostensible female opponent as a measure of gender-based violence. Results of multiple-mediation path analyses indicated dual pathways potentiating gender-based violence and highlighted SDO as a particularly potent mechanism of this violence. Findings are discussed in terms of group dynamics and norm-based violence prevention.

  2. Evaluation of Aerosol-cloud Interaction in the GISS Model E Using ARM Observations

    NASA Technical Reports Server (NTRS)

    DeBoer, G.; Bauer, S. E.; Toto, T.; Menon, Surabi; Vogelmann, A. M.

    2013-01-01

    Observations from the US Department of Energy's Atmospheric Radiation Measurement (ARM) program are used to evaluate the ability of the NASA GISS ModelE global climate model to reproduce observed interactions between aerosols and clouds. Included in the evaluation are comparisons of basic meteorology and aerosol properties, droplet activation, effective radius parameterizations, and surface-based evaluations of aerosol-cloud interactions (ACI). Differences between the simulated and observed ACI are generally large, but these differences may result partially from the vertical distribution of aerosol in the model, rather than the representation of the physical processes governing the interactions between aerosols and clouds. Compared to the current observations, ModelE often features elevated droplet concentrations for a given aerosol concentration, indicating that the activation parameterizations used may be too aggressive. Additionally, parameterizations for effective radius commonly used in models were tested using ARM observations, and no clearly superior parameterization emerged for the cases reviewed here. This lack of consensus is demonstrated to result in potentially large, statistically significant differences in surface radiative budgets, should one parameterization be chosen over another.

  3. Land Surface Model Biases and their Impacts on the Assimilation of Snow-related Observations

    NASA Astrophysics Data System (ADS)

    Arsenault, K. R.; Kumar, S.; Hunter, S. M.; Aman, R.; Houser, P. R.; Toll, D.; Engman, T.; Nigro, J.

    2007-12-01

    Some recent snow modeling studies have employed a wide range of assimilation methods to incorporate snow cover or other snow-related observations into different hydrological or land surface models. These methods often include taking both model and observation biases into account throughout the model integration. This study focuses more on diagnosing the model biases and presenting their subsequent impacts on assimilating snow observations and modeled snowmelt processes. In this study, the land surface model, the Community Land Model (CLM), is used within the Land Information System (LIS) modeling framework to show how such biases impact the assimilation of MODIS snow cover observations. Alternative in-situ and satellite-based observations are used to help guide the CLM LSM in better predicting snowpack conditions and more realistic timing of snowmelt for a western US mountainous region. Also, MODIS snow cover observation biases will be discussed, and validation results will be provided. The issues faced with inserting or assimilating MODIS snow cover at moderate spatial resolutions (like 1km or less) will be addressed, and the impacts on CLM will be presented.

  4. Constraining land carbon cycle process understanding with observations of atmospheric CO2 variability

    NASA Astrophysics Data System (ADS)

    Collatz, G. J.; Kawa, S. R.; Liu, Y.; Zeng, F.; Ivanoff, A.

    2013-12-01

    We evaluate our understanding of the land biospheric carbon cycle by benchmarking a model and its variants against atmospheric CO2 observations and an atmospheric CO2 inversion. Though the seasonal cycle in CO2 observations is well simulated by the model (RMSE/standard deviation of observations <0.5 at most sites north of 15N and <1 for Southern Hemisphere sites), different model setups suggest that the CO2 seasonal cycle provides some constraint on gross photosynthesis, respiration, and fire fluxes, revealed in the amplitude and phase at northern latitude sites. CarbonTracker inversions (CT) and the model show similar phasing of the seasonal fluxes, but agreement in the amplitude varies by region. We also evaluate interannual variability (IAV) in the measured atmospheric CO2, which, in contrast to the seasonal cycle, is not well represented by the model. We estimate the contributions of biospheric fluxes, fire fluxes, and atmospheric transport variability to explaining the observed variability in measured CO2. Comparisons with CT show that modeled IAV has some correspondence to the inversion results north of 40N, though fluxes match poorly at regional to continental scales. Regional and global fire emissions are strongly correlated with variability observed at northern flask sample sites and in the global atmospheric CO2 growth rate, though in the latter case fire emission anomalies are not large enough to account fully for the observed variability. We discuss the remaining unexplained variability in CO2 observations in terms of the representation of fluxes by the model. This work also demonstrates the limitations of the current network of CO2 observations and the potential of new denser surface measurements and space-based column measurements for constraining carbon cycle processes in models.
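
    The skill measure quoted above, RMSE normalized by the standard deviation of the observations, is simple to compute; a minimal sketch with an idealized seasonal cycle (illustrative amplitudes, not real station data):

```python
import numpy as np

def rmse_over_std(obs, model):
    """RMSE of model vs. observations, divided by the standard deviation of
    the observations. Values < 1 mean the model error is smaller than the
    observed variability; < 0.5 indicates a well-captured seasonal cycle
    in the abstract's terms."""
    obs, model = np.asarray(obs), np.asarray(model)
    rmse = np.sqrt(np.mean((model - obs) ** 2))
    return rmse / np.std(obs)

# Synthetic monthly CO2 seasonal cycle (ppm anomalies) and a model with a
# slightly damped amplitude -- invented numbers for illustration only.
t = np.arange(12)
obs = 8.0 * np.cos(2.0 * np.pi * t / 12.0)
model = 7.0 * np.cos(2.0 * np.pi * t / 12.0)
score = rmse_over_std(obs, model)
```

    For a pure amplitude error like this one the metric reduces to the fractional amplitude mismatch (here 1/8), which is why it is a convenient benchmark across sites with very different seasonal-cycle sizes.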

  5. Development of optimization model for sputtering process parameter based on gravitational search algorithm

    NASA Astrophysics Data System (ADS)

    Norlina, M. S.; Diyana, M. S. Nor; Mazidah, P.; Rusop, M.

    2016-07-01

    In the RF magnetron sputtering process, the desirable layer properties are largely influenced by the process parameters and conditions. If the quality of the thin film has not reached its intended level, the experiments have to be repeated until the desired quality is met. This research proposes the Gravitational Search Algorithm (GSA) as the optimization model to reduce the time and cost spent in thin film fabrication. The optimization model's engine has been developed using Java. The model is based on the GSA concept, which is inspired by the Newtonian laws of gravity and motion. In this research, the model is expected to optimize four deposition parameters: RF power, deposition time, oxygen flow rate and substrate temperature. The results are promising, and it can be concluded that the model performs satisfactorily on this parameter optimization problem. Future work could compare GSA with other nature-based algorithms and test them with various sets of data.
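
    A minimal sketch of the GSA update rule (mass-weighted attraction with a decaying gravitational constant, after Rashedi et al., 2009), applied to a stand-in objective over the four deposition parameters. The bounds, target values and quality function below are invented for illustration; the paper's engine is in Java and scores measured film quality instead:

```python
import numpy as np

def gsa_minimize(f, bounds, n_agents=20, n_iter=100, g0=100.0, alpha=20.0, seed=0):
    """Minimal Gravitational Search Algorithm: candidate solutions are
    'masses'; fitter candidates are heavier and attract the others."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, size=(n_agents, len(bounds)))
    v = np.zeros_like(x)
    best_x, best_f = x[0].copy(), np.inf
    for t in range(n_iter):
        fit = np.array([f(xi) for xi in x])
        if fit.min() < best_f:
            best_f, best_x = fit.min(), x[fit.argmin()].copy()
        # Smaller objective -> heavier (normalized) mass.
        m = (fit.max() - fit) / (fit.max() - fit.min() + 1e-12)
        m /= m.sum() + 1e-12
        g = g0 * np.exp(-alpha * t / n_iter)  # decaying gravitational constant
        acc = np.zeros_like(x)
        for i in range(n_agents):
            diff = x - x[i]
            dist = np.linalg.norm(diff, axis=1) + 1e-12
            acc[i] = np.sum(rng.uniform(size=(n_agents, 1)) * g
                            * m[:, None] * diff / dist[:, None], axis=0)
        v = rng.uniform(size=x.shape) * v + acc
        x = np.clip(x + v, lo, hi)
    return best_x, best_f

# Hypothetical stand-in objective: normalized distance of (RF power [W],
# deposition time [min], O2 flow [sccm], substrate temperature [C]) from an
# assumed optimum; a real run would score the deposited film instead.
bounds = [(50, 400), (5, 120), (0, 50), (100, 600)]
target = np.array([200.0, 30.0, 10.0, 400.0])
span = np.array(bounds, dtype=float).T
f_quality = lambda p: float(np.sum(((p - target) / (span[1] - span[0])) ** 2))
x_best, f_best = gsa_minimize(f_quality, bounds)
```

    This is the all-agents variant; production implementations usually restrict the attracting set to the Kbest fittest agents and shrink it over the iterations.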

  6. Observations-based GPP estimates

    NASA Astrophysics Data System (ADS)

    Joiner, J.; Yoshida, Y.; Jung, M.; Tucker, C. J.; Pinzon, J. E.

    2017-12-01

    We have developed global estimates of gross primary production based on a relatively simple satellite observations-based approach using reflectance data from the MODIS instruments in the form of vegetation indices that provide information about photosynthetic capacity at both high temporal and spatial resolution and combined with information from chlorophyll solar-induced fluorescence from the Global Ozone Monitoring Experiment-2 instrument that is noisier and available only at lower temporal and spatial scales. We compare our gross primary production estimates with those from eddy covariance flux towers and show that they are competitive with more complicated extrapolated machine learning gross primary production products. Our results provide insight into the amount of variance in gross primary production that can be explained with satellite observations data and also show how processing of the satellite reflectance data is key to using it for accurate GPP estimates.

  7. Generalized Processing Tree Models: Jointly Modeling Discrete and Continuous Variables.

    PubMed

    Heck, Daniel W; Erdfelder, Edgar; Kieslich, Pascal J

    2018-05-24

    Multinomial processing tree models assume that discrete cognitive states determine observed response frequencies. Generalized processing tree (GPT) models extend this conceptual framework to continuous variables such as response times, process-tracing measures, or neurophysiological variables. GPT models assume finite-mixture distributions, with weights determined by a processing tree structure, and continuous components modeled by parameterized distributions such as Gaussians with separate or shared parameters across states. We discuss identifiability, parameter estimation, model testing, a modeling syntax, and the improved precision of GPT estimates. Finally, a GPT version of the feature comparison model of semantic categorization is applied to computer-mouse trajectories.
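
    The core GPT idea, a processing tree supplying mixture weights for parameterized continuous distributions, can be sketched for a two-state (detect/guess) tree with Gaussian response-time components. All parameter values below are illustrative, not taken from the paper:

```python
import numpy as np

def normal_pdf(y, mu, sigma):
    return np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def gpt_loglik(rt, d, mu_detect, mu_guess, sigma):
    """Log-likelihood of response times under a two-state processing tree:
    with probability d the 'detect' state generates the response (fast RTs),
    otherwise the 'guess' state does (slow RTs). The tree supplies the
    mixture weights; Gaussians are the continuous component distributions."""
    dens = d * normal_pdf(rt, mu_detect, sigma) + (1.0 - d) * normal_pdf(rt, mu_guess, sigma)
    return float(np.sum(np.log(dens)))

rng = np.random.default_rng(1)
detected = rng.uniform(size=500) < 0.7                 # true detection rate 0.7
rt = np.where(detected, rng.normal(0.6, 0.1, 500), rng.normal(1.0, 0.1, 500))

# The likelihood is highest near the generating mixture weight d = 0.7.
lls = {d: gpt_loglik(rt, d, 0.6, 1.0, 0.1) for d in (0.3, 0.5, 0.7, 0.9)}
```

    This illustrates the paper's point about improved precision: the continuous variable informs the latent-state parameter d even when response frequencies alone would leave it weakly constrained.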

  8. Consistency between hydrological models and field observations: Linking processes at the hillslope scale to hydrological responses at the watershed scale

    USGS Publications Warehouse

    Clark, M.P.; Rupp, D.E.; Woods, R.A.; Tromp-van Meerveld; Peters, N.E.; Freer, J.E.

    2009-01-01

    The purpose of this paper is to identify simple connections between observations of hydrological processes at the hillslope scale and observations of the response of watersheds following rainfall, with a view to building a parsimonious model of catchment processes. The focus is on the well-studied Panola Mountain Research Watershed (PMRW), Georgia, USA. Recession analysis of discharge Q shows that while the relationship between dQ/dt and Q is approximately consistent with a linear reservoir for the hillslope, there is a deviation from linearity that becomes progressively larger with increasing spatial scale. To account for these scale differences conceptual models of streamflow recession are defined at both the hillslope scale and the watershed scale, and an assessment made as to whether models at the hillslope scale can be aggregated to be consistent with models at the watershed scale. Results from this study show that a model with parallel linear reservoirs provides the most plausible explanation (of those tested) for both the linear hillslope response to rainfall and non-linear recession behaviour observed at the watershed outlet. In this model each linear reservoir is associated with a landscape type. The parallel reservoir model is consistent with both geochemical analyses of hydrological flow paths and water balance estimates of bedrock recharge. Overall, this study demonstrates that standard approaches of using recession analysis to identify the functional form of storage-discharge relationships identify model structures that are inconsistent with field evidence, and that recession analysis at multiple spatial scales can provide useful insights into catchment behaviour. Copyright © 2008 John Wiley & Sons, Ltd.
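
    The aggregation effect described above, linear reservoirs at the unit scale producing non-linear recession at the watershed scale, can be reproduced with a toy two-reservoir sketch (the recession constants and initial discharges are illustrative, not PMRW values):

```python
import numpy as np

# Two parallel linear reservoirs (e.g., two landscape units): each unit obeys
# dQ_i/dt = -k_i * Q_i, so its recession Q_i(t) = Q_i(0) * exp(-k_i * t) is
# linear in the dQ/dt-vs-Q sense. Their sum is not: as the fast unit drains,
# the aggregate relation -dQ/dt ~ a * Q^b shows an apparent exponent b > 1.
k = np.array([0.5, 0.05])   # fast and slow recession constants (1/day)
q0 = np.array([10.0, 2.0])  # initial discharge from each reservoir

t = np.linspace(0.0, 60.0, 601)
q_units = q0[:, None] * np.exp(-k[:, None] * t)  # each unit: linear reservoir
q_total = q_units.sum(axis=0)                    # watershed-scale discharge

dqdt = np.gradient(q_total, t)
# Apparent exponent b, fitted early (both units active) and late (slow unit only):
b_early = np.polyfit(np.log(q_total[:50]), np.log(-dqdt[:50]), 1)[0]
b_late = np.polyfit(np.log(q_total[-50:]), np.log(-dqdt[-50:]), 1)[0]
```

    The early-time fit returns an exponent well above 1 even though every component is strictly linear, which is the scale-dependent deviation from linearity the recession analysis detects.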

  9. Theory and observations: Model simulations of the period 1955-1985

    NASA Technical Reports Server (NTRS)

    Isaksen, Ivar S. A.; Eckman, R.; Lacis, A.; Ko, Malcolm K. W.; Prather, M.; Pyle, J.; Rodhe, H.; Stordal, Frode; Stolarski, R. S.; Turco, R. P.

    1989-01-01

    The main objective of the theoretical studies presented here is to apply models of stratospheric chemistry and transport in order to understand the processes that control stratospheric ozone and that are responsible for the observed variations. The model calculations are intended to simulate the observed behavior of atmospheric ozone over the past three decades (1955-1985), for which there exists a substantial record of both ground-based and, more recently, satellite measurements. Ozone concentrations in the atmosphere vary on different time scales and for several different causes. The models described here were designed to simulate the effect on ozone of changes in the concentration of such trace gases as CFC, CH4, N2O, and CO2. Changes from year to year in ultraviolet radiation associated with the solar cycle are also included in the models. A third source of variability explicitly considered is the sporadic introduction of large amounts of NOx into the stratosphere during atmospheric nuclear tests.

  10. Supporting observation campaigns with high resolution modeling

    NASA Astrophysics Data System (ADS)

    Klocke, Daniel; Brueck, Matthias; Voigt, Aiko

    2017-04-01

    High resolution simulations in support of measurement campaigns offer a promising and emerging way to create large-scale context for small-scale observations of clouds and precipitation processes. As these simulations include the coupling of measured small-scale processes with the circulation, they also help to integrate the research communities from modeling and observations and allow for detailed model evaluations against dedicated observations. In connection with the measurement campaign NARVAL (August 2016 and December 2013), simulations with a grid-spacing of 2.5 km for the tropical Atlantic region (9000x3300 km), with local refinement to 1.2 km for the western part of the domain, were performed using the icosahedral non-hydrostatic (ICON) general circulation model. These simulations are in turn used to drive large-eddy-resolving simulations with the same model for selected days in the High Definition Clouds and Precipitation for Advancing Climate Prediction (HD(CP)2) project. The simulations are presented with a focus on selected results showing the benefit for the scientific communities doing atmospheric measurements and numerical modeling of climate and weather. Additionally, an outlook will be given on how similar simulations will support the NAWDEX measurement campaign in the North Atlantic and the AC3 measurement campaign in the Arctic.

  11. Kinetic Modeling of Microbiological Processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Chongxuan; Fang, Yilin

    Kinetic description of microbiological processes is vital for the design and control of microbe-based biotechnologies such as waste water treatment, petroleum oil recovery, and contaminant attenuation and remediation. Various models have been proposed to describe microbiological processes. This editorial article discusses the advantages and limitations of these modeling approaches, including traditional Monod-type models and derivatives, and recently developed constraint-based approaches. The article also outlines future directions for modeling research best suited to petroleum and environmental biotechnologies.
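
    As a concrete example of the traditional approach mentioned above, a minimal explicit-Euler sketch of Monod growth kinetics in a batch system (all rate constants and units are illustrative):

```python
import numpy as np

def monod_step(biomass, substrate, dt, mu_max=0.4, ks=2.0, yield_coeff=0.5, decay=0.01):
    """One explicit-Euler step of classic Monod kinetics: specific growth
    rate mu = mu_max * S / (Ks + S); substrate is consumed in proportion
    to growth through the yield coefficient."""
    mu = mu_max * substrate / (ks + substrate)
    biomass_next = biomass + dt * (mu - decay) * biomass
    substrate_next = max(substrate - dt * (mu / yield_coeff) * biomass, 0.0)
    return biomass_next, substrate_next

# Batch simulation (illustrative units: g/L and hours): biomass grows
# exponentially, then stalls once the substrate is exhausted.
b, s = 0.05, 10.0
traj = [(b, s)]
for _ in range(2000):          # 100 h at dt = 0.05 h
    b, s = monod_step(b, s, dt=0.05)
    traj.append((b, s))
```

    Constraint-based approaches replace the single lumped rate law above with an optimization over a metabolic network, which is where the two modeling families discussed in the editorial diverge.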

  12. Expert models and modeling processes associated with a computer-modeling tool

    NASA Astrophysics Data System (ADS)

    Zhang, Baohui; Liu, Xiufeng; Krajcik, Joseph S.

    2006-07-01

    Holding the premise that the development of expertise is a continuous process, this study concerns expert models and modeling processes associated with a modeling tool called Model-It. Five advanced Ph.D. students in environmental engineering and public health used Model-It to create and test models of water quality. Using think aloud technique and video recording, we captured their computer screen modeling activities and thinking processes. We also interviewed them the day following their modeling sessions to further probe the rationale of their modeling practices. We analyzed both the audio-video transcripts and the experts' models. We found the experts' modeling processes followed the linear sequence built in the modeling program with few instances of moving back and forth. They specified their goals up front and spent a long time thinking through an entire model before acting. They specified relationships with accurate and convincing evidence. Factors (i.e., variables) in expert models were clustered, and represented by specialized technical terms. Based on the above findings, we made suggestions for improving model-based science teaching and learning using Model-It.

  13. Steering operational synergies in terrestrial observation networks: opportunity for advancing Earth system dynamics modelling

    NASA Astrophysics Data System (ADS)

    Baatz, Roland; Sullivan, Pamela L.; Li, Li; Weintraub, Samantha R.; Loescher, Henry W.; Mirtl, Michael; Groffman, Peter M.; Wall, Diana H.; Young, Michael; White, Tim; Wen, Hang; Zacharias, Steffen; Kühn, Ingolf; Tang, Jianwu; Gaillardet, Jérôme; Braud, Isabelle; Flores, Alejandro N.; Kumar, Praveen; Lin, Henry; Ghezzehei, Teamrat; Jones, Julia; Gholz, Henry L.; Vereecken, Harry; Van Looy, Kris

    2018-05-01

    Advancing our understanding of Earth system dynamics (ESD) depends on the development of models and other analytical tools that apply physical, biological, and chemical data. This ambition to increase understanding and develop models of ESD based on site observations was the stimulus for creating the networks of Long-Term Ecological Research (LTER), Critical Zone Observatories (CZOs), and others. We organized a survey, the results of which identified pressing gaps in data availability from these networks, in particular for the future development and evaluation of models that represent ESD processes, and provide insights for improvement in both data collection and model integration. From this survey overview of data applications in the context of LTER and CZO research, we identified three challenges: (1) widen application of terrestrial observation network data in Earth system modelling, (2) develop integrated Earth system models that incorporate process representation and data of multiple disciplines, and (3) identify complementarity in measured variables and spatial extent, and promote synergies in the existing observational networks. These challenges lead to perspectives and recommendations for an improved dialogue between the observation networks and the ESD modelling community, including co-location of sites in the existing networks and further formalizing these recommendations among these communities. Developing these synergies will enable cross-site and cross-network comparison and synthesis studies, which will help produce insights around organizing principles, classifications, and general rules of coupling processes with environmental conditions.

  14. Aerosol Processing in Mixed-Phase Clouds in ECHAM5-HAM: Comparison of Single-Column Model Simulations to Observations

    NASA Astrophysics Data System (ADS)

    Hoose, C.; Lohmann, U.; Stier, P.; Verheggen, B.; Weingartner, E.; Herich, H.

    2007-12-01

    The global aerosol-climate model ECHAM5-HAM (Stier et al., 2005) has been extended by an explicit treatment of cloud-borne particles. Two additional modes for in-droplet and in-crystal particles are introduced, which are coupled to the number of cloud droplet and ice crystal concentrations simulated by the ECHAM5 double-moment cloud microphysics scheme (Lohmann et al., 2007). Transfer, production and removal of cloud-borne aerosol number and mass by cloud droplet activation, collision scavenging, aqueous-phase sulfate production, freezing, melting, evaporation, sublimation and precipitation formation are taken into account. The model performance is demonstrated and validated with observations of the evolution of total and interstitial aerosol concentrations and size distributions during three different mixed-phase cloud events at the alpine high-altitude research station Jungfraujoch (Switzerland) (Verheggen et al, 2007). Although the single-column simulations can not be compared one-to-one with the observations, the governing processes in the evolution of the cloud and aerosol parameters are captured qualitatively well. High scavenged fractions are found during the presence of liquid water, while the release of particles during the Bergeron-Findeisen process results in low scavenged fractions after cloud glaciation. The observed coexistence of liquid and ice, which might be related to cloud heterogeneity at subgrid scales, can only be simulated in the model when forcing non-equilibrium conditions. References: U. Lohmann et al., Cloud microphysics and aerosol indirect effects in the global climate model ECHAM5-HAM, Atmos. Chem. Phys. 7, 3425-3446 (2007) P. Stier et al., The aerosol-climate model ECHAM5-HAM, Atmos. Chem. Phys. 5, 1125-1156 (2005) B. Verheggen et al., Aerosol partitioning between the interstitial and the condensed phase in mixed-phase clouds, Accepted for publication in J. Geophys. Res. (2007)

  15. Model-checking techniques based on cumulative residuals.

    PubMed

    Lin, D Y; Wei, L J; Ying, Z

    2002-03-01

    Residuals have long been used for graphical and numerical examinations of the adequacy of regression models. Conventional residual analysis based on the plots of raw residuals or their smoothed curves is highly subjective, whereas most numerical goodness-of-fit tests provide little information about the nature of model misspecification. In this paper, we develop objective and informative model-checking techniques by taking the cumulative sums of residuals over certain coordinates (e.g., covariates or fitted values) or by considering some related aggregates of residuals, such as moving sums and moving averages. For a variety of statistical models and data structures, including generalized linear models with independent or dependent observations, the distributions of these stochastic processes under the assumed model can be approximated by the distributions of certain zero-mean Gaussian processes whose realizations can be easily generated by computer simulation. Each observed process can then be compared, both graphically and numerically, with a number of realizations from the Gaussian process. Such comparisons enable one to assess objectively whether a trend seen in a residual plot reflects model misspecification or natural variation. The proposed techniques are particularly useful in checking the functional form of a covariate and the link function. Illustrations with several medical studies are provided.
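
    A bare-bones sketch of the idea: cumulative sums of residuals ordered by a covariate, with the null distribution approximated by standard-normal multiplier perturbations of the residuals. The correction for estimated regression coefficients used in the paper is omitted here for brevity, and the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
x = rng.uniform(0.0, 1.0, n)
y = 1.0 + 2.0 * x ** 2 + rng.normal(0.0, 0.1, n)   # true relation is quadratic

# Fit a (misspecified) straight line and order the residuals by the covariate.
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
order = np.argsort(x)
w_obs = np.cumsum(resid[order])              # observed cumulative-sum process

# Approximate the null distribution of the supremum statistic by perturbing
# the residuals with standard-normal multipliers (the simulation device of
# Lin, Wei & Ying, simplified).
sup_obs = np.abs(w_obs).max()
sup_null = np.array([np.abs(np.cumsum(resid[order] * rng.normal(size=n))).max()
                     for _ in range(500)])
p_value = float(np.mean(sup_null >= sup_obs))
```

    A small p-value flags the systematic trend in the residual plot as misspecification of the functional form rather than natural variation, which is exactly the objective comparison the paper proposes.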

  16. Process modeling of an advanced NH₃ abatement and recycling technology in the ammonia-based CO₂ capture process.

    PubMed

    Li, Kangkang; Yu, Hai; Tade, Moses; Feron, Paul; Yu, Jingwen; Wang, Shujuan

    2014-06-17

    An advanced NH3 abatement and recycling process that makes extensive use of the waste heat in flue gas was proposed to solve the problems of ammonia slip, NH3 makeup, and flue gas cooling in the ammonia-based CO2 capture process. The rigorous rate-based model, RateFrac in Aspen Plus, was validated thermodynamically against experimental data from the open literature and kinetically against CSIRO pilot trials at Munmorah Power Station, Australia. After a thorough sensitivity analysis and process improvement, the NH3 recycling efficiency reached as high as 99.87%, and the NH3 exhaust concentration was only 15.4 ppmv. Most importantly, the energy consumption of the NH3 abatement and recycling system was only 59.34 kJ/kg CO2 of electricity. The evaluation of mass balance and temperature stability shows that this NH3 recovery process is technically effective and feasible. The process therefore holds promise for industrial application.

  17. Lunar-based Earth observation geometrical characteristics research

    NASA Astrophysics Data System (ADS)

    Ren, Yuanzhen; Liu, Guang; Ye, Hanlin; Guo, Huadong; Ding, Yixing; Chen, Zhaoning

    2016-07-01

    There are various platforms for carrying sensors to observe Earth, such as automobiles, aircraft and satellites. Here we focus on a new platform, the Moon, because of its longevity, stability and vast space. These advantages make it a potential next platform for observing Earth, enabling consistent and global measurements. In order to gain a better understanding of lunar-based Earth observation, we discuss its geometrical characteristics. At present there are no sensors on the Moon observing Earth, so no real experimental data are available; theoretical modeling and numerical calculation are therefore used in this paper. First, we construct an approximate geometrical model of lunar-based Earth observation which assumes that Earth and Moon are spheres. Next, we calculate the positions of the Sun, Earth and Moon based on the JPL ephemeris. With these position data and the geometrical model, we can determine the locations of the terminator and the substellar points. To determine their precise positions in the conventional terrestrial coordinate system, reference frame transformations are introduced as well. Using the relative positions of the Sun, Earth and Moon, we obtain the total coverage of lunar-based Earth optical observation. Furthermore, we calculate a more precise coverage that accounts for sensors placed at different positions on the Moon, which is influenced by its attitude parameters. Finally, different ephemeris data are compared in our research, and little difference is found.

  18. Gaussian process model for extrapolation of scattering observables for complex molecules: From benzene to benzonitrile

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cui, Jie; Krems, Roman V.; Li, Zhiying

    2015-10-21

    We consider a problem of extrapolating the collision properties of a large polyatomic molecule A–H to make predictions of the dynamical properties for another molecule related to A–H by the substitution of the H atom with a small molecular group X, without explicitly computing the potential energy surface for A–X. We assume that the effect of the −H → −X substitution is embodied in a multidimensional function with unknown parameters characterizing the change of the potential energy surface. We propose to apply the Gaussian Process model to determine the dependence of the dynamical observables on the unknown parameters. This can be used to produce an interval of the observable values which corresponds to physical variations of the potential parameters. We show that the Gaussian Process model combined with classical trajectory calculations can be used to obtain the dependence of the cross sections for collisions of C6H5CN with He on the unknown parameters describing the interaction of the He atom with the CN fragment of the molecule. The unknown parameters are then varied within physically reasonable ranges to produce a prediction uncertainty of the cross sections. The results are normalized to the cross sections for He–C6H6 collisions obtained from quantum scattering calculations in order to provide a prediction interval of the thermally averaged cross sections for collisions of C6H5CN with He.
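
    A minimal sketch of the underlying Gaussian Process regression step, here on a hypothetical one-dimensional potential parameter. The kernel choice, parameter values and observables below are invented for illustration, not the paper's He-C6H5CN calculations:

```python
import numpy as np

def rbf_kernel(a, b, length=0.2, amp=1.0):
    """Squared-exponential covariance between parameter values."""
    d = a[:, None] - b[None, :]
    return amp * np.exp(-0.5 * (d / length) ** 2)

def gp_predict(x_train, y_train, x_test, jitter=1e-8):
    """Standard GP regression: posterior mean and standard deviation of
    the observable at untested parameter values."""
    k = rbf_kernel(x_train, x_train) + jitter * np.eye(len(x_train))
    k_star = rbf_kernel(x_test, x_train)
    mean = k_star @ np.linalg.solve(k, y_train)
    cov = rbf_kernel(x_test, x_test) - k_star @ np.linalg.solve(k, k_star.T)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

# Hypothetical 1-D slice: a cross section as a smooth function of a single
# potential-scaling parameter (arbitrary units, not a real surface).
x_train = np.array([0.8, 0.9, 1.0, 1.1, 1.2])
y_train = np.sin(3.0 * x_train) + 2.0      # stand-in "computed" observables
x_test = np.linspace(0.8, 1.2, 41)
mean, std = gp_predict(x_train, y_train, x_test)
```

    The posterior standard deviation collapses at the computed parameter values and grows between them, which is what turns physically reasonable parameter ranges into a prediction interval for the observable.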

  19. Atmospheric Nitrogen Deposition to the Oceans: Observation- and Model-Based Estimates

    NASA Astrophysics Data System (ADS)

    Baker, Alex; Altieri, Katye; Okin, Greg; Dentener, Frank; Uematsu, Mitsuo; Kanakidou, Maria; Sarin, Manmohan; Duce, Robert; Galloway, Jim; Keene, Bill; Singh, Arvind; Zamora, Lauren; Lamarque, Jean-Francois; Hsu, Shih-Chieh

    2014-05-01

    The reactive nitrogen (Nr) burden of the atmosphere has been increased by a factor of 3-4 by anthropogenic activity since the industrial revolution. This has led to large increases in the deposition of nitrate and ammonium to the surface waters of the open ocean, particularly downwind of major human population centres, such as those in North America, Europe and Southeast Asia. In oligotrophic waters, this deposition has the potential to significantly impact marine productivity and the global carbon cycle. Global-scale understanding of N deposition to the oceans is reliant on our ability to produce effective models of reactive nitrogen emission, atmospheric chemistry, transport and deposition (including deposition to the land surface). The Atmospheric Chemistry and Climate Model Intercomparison Project (ACCMIP) recently completed a multi-model analysis of global N deposition, including comparisons to wet deposition observations from three regional networks in North America, Europe and Southeast Asia (Lamarque et al., Atmos. Chem. Phys., 13, 7977-8018, 2013). No similar datasets exist which would allow observation - model comparisons of wet deposition for the open oceans, because long-term wet deposition records are available for only a handful of remote island sites and rain collection over the open ocean itself is very difficult. In this work we attempt instead to use ~2600 observations of aerosol nitrate and ammonium concentrations, acquired chiefly from sampling aboard ships in the period 1995 - 2012, to assess the ACCMIP N deposition fields over the remote ocean. This database is non-uniformly distributed in time and space. We selected four ocean regions (the eastern North Atlantic, the South Atlantic, the northern Indian Ocean and northwest Pacific) where we considered the density and distribution of observational data is sufficient to provide effective comparison to the model ensemble. 
Two of these regions are adjacent to the land networks used in the ACCMIP

  20. An overview of current applications, challenges, and future trends in distributed process-based models in hydrology

    USGS Publications Warehouse

    Fatichi, Simone; Vivoni, Enrique R.; Odgen, Fred L; Ivanov, Valeriy Y; Mirus, Benjamin B.; Gochis, David; Downer, Charles W; Camporese, Matteo; Davison, Jason H; Ebel, Brian A.; Jones, Norm; Kim, Jongho; Mascaro, Giuseppe; Niswonger, Richard G.; Restrepo, Pedro; Rigon, Riccardo; Shen, Chaopeng; Sulis, Mauro; Tarboton, David

    2016-01-01

    Process-based hydrological models have a long history dating back to the 1960s. Criticized by some as over-parameterized, overly complex, and difficult to use, a more nuanced view is that these tools are necessary in many situations and, in a certain class of problems, they are the most appropriate type of hydrological model. This is especially the case in situations where knowledge of flow paths or distributed state variables and/or preservation of physical constraints is important. Examples of this include: spatiotemporal variability of soil moisture, groundwater flow and runoff generation, sediment and contaminant transport, or when feedbacks among various Earth’s system processes or understanding the impacts of climate non-stationarity are of primary concern. These are situations where process-based models excel and other models are unverifiable. This article presents this pragmatic view in the context of existing literature to justify the approach where applicable and necessary. We review how improvements in data availability, computational resources and algorithms have made detailed hydrological simulations a reality. Avenues for the future of process-based hydrological models are presented suggesting their use as virtual laboratories, for design purposes, and with a powerful treatment of uncertainty.

  1. Computational modeling of residual stress formation during the electron beam melting process for Inconel 718

    DOE PAGES

    Prabhakar, P.; Sames, William J.; Dehoff, Ryan R.; ...

    2015-03-28

    A computational modeling approach to simulate residual stress formation during the electron beam melting (EBM) process within additive manufacturing (AM) technologies for Inconel 718 is presented in this paper. The EBM process has demonstrated a high potential to fabricate components with complex geometries, but the resulting components are influenced by the thermal cycles observed during the manufacturing process. When processing nickel based superalloys, very high temperatures (approx. 1000 °C) are observed in the powder bed, base plate, and build. These high temperatures, when combined with substrate adherence, can result in warping of the base plate and affect the final component by causing defects. It is important to have an understanding of the thermo-mechanical response of the entire system, that is, its mechanical behavior towards thermal loading occurring during the EBM process prior to manufacturing a component. Therefore, computational models to predict the response of the system during the EBM process will aid in eliminating the undesired process conditions, a priori, in order to fabricate the optimum component. Such a comprehensive computational modeling approach is demonstrated to analyze warping of the base plate, stress and plastic strain accumulation within the material, and thermal cycles in the system during different stages of the EBM process.

  2. Managing Analysis Models in the Design Process

    NASA Technical Reports Server (NTRS)

    Briggs, Clark

    2006-01-01

    Design of large, complex space systems depends on significant model-based support for exploration of the design space. Integrated models predict system performance in mission-relevant terms given design descriptions and multiple physics-based numerical models. Both the design activities and the modeling activities warrant explicit process definitions and active process management to protect the project from excessive risk. Software and systems engineering processes have been formalized and similar formal process activities are under development for design engineering and integrated modeling. JPL is establishing a modeling process to define development and application of such system-level models.

  3. Flexible Description and Adaptive Processing of Earth Observation Data through the BigEarth Platform

    NASA Astrophysics Data System (ADS)

    Gorgan, Dorian; Bacu, Victor; Stefanut, Teodor; Nandra, Cosmin; Mihon, Danut

    2016-04-01

    of some Earth Observation oriented applications based on flexible description of processing, and adaptive and portable execution over Cloud infrastructure. Main references for further information:
    [1] BigEarth project, http://cgis.utcluj.ro/projects/bigearth
    [2] Gorgan, D., "Flexible and Adaptive Processing of Earth Observation Data over High Performance Computation Architectures", International Conference and Exhibition Satellite 2015, August 17-19, Houston, Texas, USA.
    [3] Mihon, D., Bacu, V., Colceriu, V., Gorgan, D., "Modeling of Earth Observation Use Cases through the KEOPS System", Proceedings of the Intelligent Computer Communication and Processing (ICCP), IEEE-Press, pp. 455-460, (2015).
    [4] Nandra, C., Gorgan, D., "Workflow Description Language for Defining Big Earth Data Processing Tasks", Proceedings of the Intelligent Computer Communication and Processing (ICCP), IEEE-Press, pp. 461-468, (2015).
    [5] Bacu, V., Stefan, T., Gorgan, D., "Adaptive Processing of Earth Observation Data on Cloud Infrastructures Based on Workflow Description", Proceedings of the Intelligent Computer Communication and Processing (ICCP), IEEE-Press, pp. 444-454, (2015).

  4. Simulating boreal forest carbon dynamics after stand-replacing fire disturbance: insights from a global process-based vegetation model

    USGS Publications Warehouse

    Yue, C.; Ciais, P.; Luyssaert, S.; Cadule, P.; Harden, J.; Randerson, J.; Bellassen, V.; Wang, T.; Piao, S.L.; Poulter, B.; Viovy, N.

    2013-01-01

    Stand-replacing fires are the dominant fire type in North American boreal forests. They leave a historical legacy of a mosaic landscape of forest cohorts of different ages. These forest age dynamics must be included in vegetation models to accurately quantify the role of fire in the historical and current regional forest carbon balance. The present study adapted the global process-based vegetation model ORCHIDEE to simulate the CO2 emissions from boreal forest fire and the subsequent recovery after a stand-replacing fire; the model represents postfire new cohort establishment, forest stand structure, and the self-thinning process. Simulation results are evaluated against observations of three clusters of postfire forest chronosequences in Canada and Alaska. The variables evaluated include fire carbon emissions, CO2 fluxes (gross primary production, total ecosystem respiration, and net ecosystem exchange), leaf area index, and biometric measurements (aboveground biomass carbon, forest floor carbon, woody debris carbon, stand individual density, stand basal area, and mean diameter at breast height). When forced by local climate and the atmospheric CO2 history at each chronosequence site, the model simulations generally match the observed CO2 fluxes and carbon stock data well, with a model-measurement root-mean-square deviation comparable with the measurement accuracy (for CO2 flux ~100 g C m−2 yr−1, for biomass carbon ~1000 g C m−2, and for soil carbon ~2000 g C m−2). We find that the current postfire forest carbon sink at the evaluation sites, as observed by chronosequence methods, is mainly due to a combination of the historical CO2 increase and forest succession. Climate change and variability during this period offset some of these expected carbon gains. The negative impacts of climate were a likely consequence of increasing water stress caused by significant temperature increases that were not matched by concurrent increases in precipitation. Our simulation

  5. Simulating boreal forest carbon dynamics after stand-replacing fire disturbance: insights from a global process-based vegetation model

    NASA Astrophysics Data System (ADS)

    Yue, C.; Ciais, P.; Luyssaert, S.; Cadule, P.; Harden, J.; Randerson, J.; Bellassen, V.; Wang, T.; Piao, S. L.; Poulter, B.; Viovy, N.

    2013-12-01

    Stand-replacing fires are the dominant fire type in North American boreal forests. They leave a historical legacy of a mosaic landscape of forest cohorts of different ages. These forest age dynamics must be included in vegetation models to accurately quantify the role of fire in the historical and current regional forest carbon balance. The present study adapted the global process-based vegetation model ORCHIDEE to simulate the CO2 emissions from boreal forest fire and the subsequent recovery after a stand-replacing fire; the model represents postfire new cohort establishment, forest stand structure, and the self-thinning process. Simulation results are evaluated against observations of three clusters of postfire forest chronosequences in Canada and Alaska. The variables evaluated include fire carbon emissions, CO2 fluxes (gross primary production, total ecosystem respiration, and net ecosystem exchange), leaf area index, and biometric measurements (aboveground biomass carbon, forest floor carbon, woody debris carbon, stand individual density, stand basal area, and mean diameter at breast height). When forced by local climate and the atmospheric CO2 history at each chronosequence site, the model simulations generally match the observed CO2 fluxes and carbon stock data well, with a model-measurement root-mean-square deviation comparable with the measurement accuracy (for CO2 flux ~100 g C m-2 yr-1, for biomass carbon ~1000 g C m-2, and for soil carbon ~2000 g C m-2). We find that the current postfire forest carbon sink at the evaluation sites, as observed by chronosequence methods, is mainly due to a combination of the historical CO2 increase and forest succession. Climate change and variability during this period offset some of these expected carbon gains. The negative impacts of climate were a likely consequence of increasing water stress caused by significant temperature increases that were not matched by concurrent increases in precipitation. Our simulation results

  6. Gaussian process-based surrogate modeling framework for process planning in laser powder-bed fusion additive manufacturing of 316L stainless steel

    DOE PAGES

    Tapia, Gustavo; Khairallah, Saad A.; Matthews, Manyalibo J.; ...

    2017-09-22

    Here, Laser Powder-Bed Fusion (L-PBF) metal-based additive manufacturing (AM) is complex and not fully understood. Successful processing for one material might not necessarily apply to a different material. This paper describes a workflow process that aims at creating a material data sheet standard describing regimes where the process can be expected to be robust. The procedure consists of building a Gaussian process-based surrogate model of the L-PBF process that predicts melt pool depth in single-track experiments given a laser power, scan speed, and laser beam size combination. The predictions are then mapped onto a power versus scan speed diagram delimiting the conduction-controlled from the keyhole-melting-controlled regimes. This statistical framework is shown to be robust even for cases where the experimental training data might be suboptimal in quality, provided appropriate physics-based filters are applied. Additionally, it is demonstrated that a high-fidelity simulation model of L-PBF can equally be used successfully for building a surrogate model, which is beneficial since simulations are becoming more efficient and are more practical for studying the response of different materials than re-tooling an AM machine for a new material powder.
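
The surrogate-modeling step can be sketched as a small Gaussian-process regression in closed form. This is a generic illustration, not the authors' code; the (laser power, scan speed, beam size) training points and melt-pool depths below are invented:

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0, variance=1.0):
    """Squared-exponential kernel between row-stacked input sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / length_scale ** 2)

def gp_predict(X_train, y_train, X_new, noise=1e-6, **kern):
    """Posterior mean and variance of a zero-mean GP regression."""
    K = rbf_kernel(X_train, X_train, **kern) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_new, X_train, **kern)
    mean = Ks @ np.linalg.solve(K, y_train)
    var = rbf_kernel(X_new, X_new, **kern).diagonal() \
        - np.einsum("ij,ji->i", Ks, np.linalg.solve(K, Ks.T))
    return mean, var

# invented single-track data: (laser power W, scan speed mm/s, beam size um)
X = np.array([[150., 500., 50.], [200., 500., 60.],
              [250., 1000., 70.], [300., 1500., 80.]])
y = np.array([40.0, 70.0, 55.0, 60.0])        # melt pool depth, um (invented)
Xs = (X - X.mean(axis=0)) / X.std(axis=0)     # standardise the inputs
mean, var = gp_predict(Xs, y - y.mean(), Xs, length_scale=1.5)
depth = mean + y.mean()                       # add the subtracted prior mean back
print(np.round(depth, 1))
```

At the training points the posterior mean reproduces the observed depths almost exactly; in practice one would predict on a dense power-speed grid to draw the conduction/keyhole map.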

  7. Gaussian process-based surrogate modeling framework for process planning in laser powder-bed fusion additive manufacturing of 316L stainless steel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tapia, Gustavo; Khairallah, Saad A.; Matthews, Manyalibo J.

    Here, Laser Powder-Bed Fusion (L-PBF) metal-based additive manufacturing (AM) is complex and not fully understood. Successful processing for one material might not necessarily apply to a different material. This paper describes a workflow process that aims at creating a material data sheet standard describing regimes where the process can be expected to be robust. The procedure consists of building a Gaussian process-based surrogate model of the L-PBF process that predicts melt pool depth in single-track experiments given a laser power, scan speed, and laser beam size combination. The predictions are then mapped onto a power versus scan speed diagram delimiting the conduction-controlled from the keyhole-melting-controlled regimes. This statistical framework is shown to be robust even for cases where the experimental training data might be suboptimal in quality, provided appropriate physics-based filters are applied. Additionally, it is demonstrated that a high-fidelity simulation model of L-PBF can equally be used successfully for building a surrogate model, which is beneficial since simulations are becoming more efficient and are more practical for studying the response of different materials than re-tooling an AM machine for a new material powder.

  8. A Model-based B2B (Batch to Batch) Control for An Industrial Batch Polymerization Process

    NASA Astrophysics Data System (ADS)

    Ogawa, Morimasa

    This paper gives an overview of a model-based B2B (batch-to-batch) control for an industrial batch polymerization process. In order to control the reaction temperature precisely, several methods based on a rigorous process dynamics model are employed at every design stage of the B2B control, such as modeling and parameter estimation of the reaction kinetics, which is an important part of the process dynamics model. The designed B2B control consists of the gain-scheduled I-PD/II2-PD control (I-PD with double integral control), feed-forward compensation at the batch start time, and model adaptation utilizing the results of the last batch operation. Throughout actual batch operations, the B2B control provides superior control performance compared with that of conventional control methods.
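
The I-PD structure mentioned above places the proportional and derivative actions on the measurement and only the integral action on the error, which avoids output kicks when the setpoint steps. A minimal discrete sketch against an invented first-order stand-in for the reactor temperature dynamics (the gains and plant are illustrative, not the industrial values):

```python
class IPD:
    """Discrete I-PD controller: integral on the error, P and D on the
    measurement only."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_y = 0.0, None

    def step(self, setpoint, y):
        self.integral += self.ki * (setpoint - y) * self.dt
        dydt = 0.0 if self.prev_y is None else (y - self.prev_y) / self.dt
        self.prev_y = y
        return self.integral - self.kp * y - self.kd * dydt

# first-order stand-in for reactor temperature: tau * dy/dt = -y + u
tau, dt, setpoint = 5.0, 0.1, 60.0
ctrl = IPD(kp=2.0, ki=1.0, kd=0.5, dt=dt)
y = 20.0
for _ in range(1000):                 # simulate 100 s
    u = ctrl.step(setpoint, y)
    y += dt * (-y + u) / tau
print(round(y, 2))                    # temperature settles at the setpoint
```

Because P and D act only on the measured temperature, a setpoint change enters the loop gradually through the integral term, a useful property when ramping a batch reactor between temperature stages.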

  9. Thermal Infrared Observations and Thermophysical Modeling of Phobos

    NASA Astrophysics Data System (ADS)

    Smith, Nathan Michael; Edwards, Christopher Scott; Mommert, Michael; Trilling, David E.; Glotch, Timothy

    2016-10-01

    Mars-observing spacecraft have the opportunity to study Phobos from Mars orbit, and have produced a sizeable record of observations using the same instruments that study the surface of the planet below. However, these observations are generally infrequent, acquired only rarely over each mission. Using observations gathered by Mars Global Surveyor's (MGS) Thermal Emission Spectrometer (TES), we can investigate the fine layer of regolith that blankets Phobos' surface, and characterize its thermal properties. The mapping of TES observations to footprints on the Phobos surface has not previously been undertaken, and must consider the orientation and position of both MGS and Phobos, and TES's pointing mirror angle. Approximately 300 fully resolved observations are available covering a significant subset of Phobos' surface at a variety of scales. The properties of the surface regolith, such as grain size, density, and conductivity, determine how heat is absorbed, transferred, and reradiated to space. Thermophysical modeling allows us to simulate these processes and predict, for a given set of assumed parameters, how the observed thermal infrared spectra will appear. By comparing models to observations, we can constrain the properties of the regolith, and see how these properties vary with depth, as well as regionally across the Phobos surface. These constraints are key to understanding how Phobos formed and evolved over time, which in turn will help inform the environment and processes that shaped the solar system as a whole. We have developed a thermophysical model of Phobos adapted from a model used for unresolved observations of asteroids. The model has been modified to integrate thermal infrared flux across each observed portion of Phobos. It will include the effects of surface roughness, temperature-dependent conductivity, as well as radiation scattered, reflected, and thermally emitted from the Martian surface. Combining this model with the newly-mapped TES
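
Two quantities central to such thermophysical models are the thermal inertia Γ = √(kρc) and the diurnal thermal skin depth d = √(kP/(πρc)), which together set how strongly and how deeply the surface temperature responds over a rotation period. A sketch with assumed, lunar-like regolith values (illustrative only, not TES-derived results):

```python
import math

def thermal_skin_depth(k, rho, c, period):
    """Diurnal thermal skin depth d = sqrt(k*P / (pi*rho*c)) in metres."""
    return math.sqrt(k * period / (math.pi * rho * c))

# assumed lunar-like regolith properties (illustrative, not TES-derived)
k, rho, c = 0.002, 1300.0, 600.0      # W m-1 K-1, kg m-3, J kg-1 K-1
P = 7.65 * 3600.0                     # Phobos rotation period, ~7.65 h, in s
gamma = math.sqrt(k * rho * c)        # thermal inertia, J m-2 K-1 s-1/2
print(round(gamma, 1), round(thermal_skin_depth(k, rho, c, P) * 1000, 2))
# → 39.5 4.74   (low thermal inertia; diurnal wave penetrates only a few mm)
```

A low Γ and millimetre-scale skin depth are why fine regolith cools rapidly after local sunset, which is exactly the behaviour the thermal infrared spectra constrain.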

  10. The Process Model of Group-Based Emotion: Integrating Intergroup Emotion and Emotion Regulation Perspectives.

    PubMed

    Goldenberg, Amit; Halperin, Eran; van Zomeren, Martijn; Gross, James J

    2016-05-01

    Scholars interested in emotion regulation have documented the different goals and strategies individuals have for regulating their emotions. However, little attention has been paid to the regulation of group-based emotions, which are based on individuals' self-categorization as a group member and occur in response to situations perceived as relevant for that group. We propose a model for examining group-based emotion regulation that integrates intergroup emotions theory and the process model of emotion regulation. This synergy expands intergroup emotion theory by facilitating further investigation of different goals (i.e., hedonic or instrumental) and strategies (e.g., situation selection and modification strategies) used to regulate group-based emotions. It also expands emotion regulation research by emphasizing the role of self-categorization (e.g., as an individual or a group member) in the emotional process. Finally, we discuss the promise of this theoretical synergy and suggest several directions for future research on group-based emotion regulation. © 2015 by the Society for Personality and Social Psychology, Inc.

  11. A Theoretical Model of Drumlin Formation Based on Observations at Múlajökull, Iceland

    NASA Astrophysics Data System (ADS)

    Iverson, N. R.; McCracken, R. G.; Zoet, L. K.; Benediktsson, Í. Ö.; Schomacker, A.; Johnson, M. D.; Woodard, J.

    2017-12-01

    The drumlin field at the surge-type glacier, Múlajökull, provides an unusual opportunity to build a model of drumlin formation based on field observations in a modern drumlin-forming environment. These observations indicate that surges deposit till layers that drape the glacier forefield, conform to drumlin surfaces, and are deposited in shear. Observations also indicate that erosion helps create drumlin relief, effective stresses in subglacial till are highest between drumlins, and during quiescent flow, crevasses on the glacier surface overlie drumlins while subglacial channels occupy intervening swales. In the model, we consider gentle undulations on the bed bounded by subglacial channels at low water pressure. During quiescent flow, slip of temperate ice across these undulations and basal water flow toward bounding channels create an effective stress distribution that maximizes till entrainment in ice on the heads and flanks of drumlins. Crevasses amplify this effect but are not necessary for it. During surges, effective stresses are uniformly low, and the bed shears pervasively. Vigorous basal melting during surges releases debris from ice and deposits it on the bed, with deposition augmented by transport in the deforming bed. As surge cycles progress, drumlins migrate downglacier and grow at increasing rates, due to positive feedbacks that depend on drumlin height. Drumlin growth can be accompanied by either net aggradation or erosion of the bed, and drumlin heights and stratigraphy generally correspond with observations. This model highlights that drumlin growth can reflect instabilities other than those of bed shear instability models, which require heuristic till transport assumptions.

  12. A Dirichlet process model for classifying and forecasting epidemic curves

    PubMed Central

    2014-01-01

    Background: A forecast can be defined as an endeavor to quantitatively estimate a future event or probabilities assigned to a future occurrence. Forecasting stochastic processes such as epidemics is challenging since there are several biological, behavioral, and environmental factors that influence the number of cases observed at each point during an epidemic. However, accurate forecasts of epidemics would impact timely and effective implementation of public health interventions. In this study, we introduce a Dirichlet process (DP) model for classifying and forecasting influenza epidemic curves.
    Methods: The DP model is a nonparametric Bayesian approach that enables the matching of current influenza activity to simulated and historical patterns, identifies epidemic curves different from those observed in the past and enables prediction of the expected epidemic peak time. The method was validated using simulated influenza epidemics from an individual-based model and the accuracy was compared to that of the tree-based classification technique, Random Forest (RF), which has been shown to achieve high accuracy in the early prediction of epidemic curves using a classification approach. We also applied the method to forecasting influenza outbreaks in the United States from 1997–2013 using influenza-like illness (ILI) data from the Centers for Disease Control and Prevention (CDC).
    Results: We made the following observations. First, the DP model performed as well as RF in identifying several of the simulated epidemics. Second, the DP model correctly forecasted the peak time several days in advance for most of the simulated epidemics. Third, the accuracy of identifying epidemics different from those already observed improved with additional data, as expected. Fourth, both methods correctly classified epidemics with higher reproduction numbers (R) with a higher accuracy compared to epidemics with lower R values. Lastly, in the classification of seasonal influenza epidemics
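
The clustering behaviour induced by a Dirichlet process prior can be illustrated through its Chinese Restaurant Process representation, under which the expected number of clusters (here, distinct epidemic-curve shapes) grows roughly as α log n. This is a generic sketch of the prior, not the authors' inference code:

```python
import random

def crp_assign(n, alpha, rng):
    """Seat n curves at 'tables' (clusters) under a Chinese Restaurant
    Process: curve i joins an existing cluster with probability proportional
    to its size, or opens a new cluster with weight alpha."""
    counts = []
    for i in range(n):
        r = rng.random() * (i + alpha)   # total weight of existing + new
        acc = 0.0
        for k, c in enumerate(counts):
            acc += c
            if r < acc:
                counts[k] += 1
                break
        else:
            counts.append(1)             # open a new cluster
    return counts

rng = random.Random(0)
clusters = crp_assign(500, alpha=2.0, rng=rng)
print(len(clusters), sum(clusters))      # number of clusters, total curves
```

The key modelling consequence is that the number of epidemic-curve classes is not fixed in advance: a curve unlike anything seen before can open its own cluster, which is how the DP model flags novel epidemics.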

  13. A Dirichlet process model for classifying and forecasting epidemic curves.

    PubMed

    Nsoesie, Elaine O; Leman, Scotland C; Marathe, Madhav V

    2014-01-09

    A forecast can be defined as an endeavor to quantitatively estimate a future event or probabilities assigned to a future occurrence. Forecasting stochastic processes such as epidemics is challenging since there are several biological, behavioral, and environmental factors that influence the number of cases observed at each point during an epidemic. However, accurate forecasts of epidemics would impact timely and effective implementation of public health interventions. In this study, we introduce a Dirichlet process (DP) model for classifying and forecasting influenza epidemic curves. The DP model is a nonparametric Bayesian approach that enables the matching of current influenza activity to simulated and historical patterns, identifies epidemic curves different from those observed in the past and enables prediction of the expected epidemic peak time. The method was validated using simulated influenza epidemics from an individual-based model and the accuracy was compared to that of the tree-based classification technique, Random Forest (RF), which has been shown to achieve high accuracy in the early prediction of epidemic curves using a classification approach. We also applied the method to forecasting influenza outbreaks in the United States from 1997-2013 using influenza-like illness (ILI) data from the Centers for Disease Control and Prevention (CDC). We made the following observations. First, the DP model performed as well as RF in identifying several of the simulated epidemics. Second, the DP model correctly forecasted the peak time several days in advance for most of the simulated epidemics. Third, the accuracy of identifying epidemics different from those already observed improved with additional data, as expected. Fourth, both methods correctly classified epidemics with higher reproduction numbers (R) with a higher accuracy compared to epidemics with lower R values. Lastly, in the classification of seasonal influenza epidemics based on ILI data from the CDC

  14. An Amorphous Model for Morphological Processing in Visual Comprehension Based on Naive Discriminative Learning

    ERIC Educational Resources Information Center

    Baayen, R. Harald; Milin, Petar; Durdevic, Dusica Filipovic; Hendrix, Peter; Marelli, Marco

    2011-01-01

    A 2-layer symbolic network model based on the equilibrium equations of the Rescorla-Wagner model (Danks, 2003) is proposed. The study first presents 2 experiments in Serbian, which reveal for sentential reading the inflectional paradigmatic effects previously observed by Milin, Filipovic Durdevic, and Moscoso del Prado Martin (2009) for unprimed…

  15. Observational Search for Cometary Aging Processes

    NASA Technical Reports Server (NTRS)

    Meech, Karen J.

    1997-01-01

    The scientific objectives of this study were (i) to search for physical differences in the behavior of the dynamically new comets (those entering the solar system for the first time from the Oort cloud) and the periodic comets, and (ii) to interpret these differences, if any, in terms of the physical and chemical nature of the comets and the evolutionary histories of the two comet groups. Because outer solar system comets may be direct remnants of the planetary formation processes, understanding both the physical characteristics of these bodies at the edge of the planet-forming zone and their activity at large heliocentric distances, r, will ultimately provide constraints on the planetary formation process both in our Solar System and in extra-solar planetary systems. A combination of new solar system models suggesting that the protoplanetary disk was relatively massive, so that comets could form at large distances from the sun (e.g., from the Uranus-Neptune region to the vicinity of the Kuiper belt), observations of activity in comets at large r, and laboratory experiments on low-temperature volatile condensation is dramatically changing our understanding of the chemical and physical conditions in the early solar nebula. In order to understand the physical processes driving the apparent large-r activity, and to address the question of possible physical and chemical differences between periodic, non-periodic, and Oort comets, the PI has been undertaking a long-term study of the behavior of a significant sample of these comets (approximately 50) over a wide range of r, to watch the development, disappearance, and changing morphology of the dust coma. The ultimate goal is to search for systematic physical differences between the comet classes by modelling the coma growth in terms of volatile-driven activity. The systematic observations for this have been ongoing since 1986, and have been obtained over the course of

  16. Observation and Modeling of Clear Air Turbulence (CAT) over Europe

    NASA Astrophysics Data System (ADS)

    Sprenger, M.; Mayoraz, L.; Stauch, V.; Sharman, B.; Polymeris, J.

    2012-04-01

    CAT represents a very relevant phenomenon for aviation safety. It can lead to passenger injuries, causes an increase in fuel consumption and, at severe intensity, can involve structural damage to the aircraft. The physical processes causing CAT are at present not fully understood. Moreover, because of its small scale, CAT cannot be resolved in numerical weather prediction (NWP) models. In this study, the physical processes related to CAT and its representation in NWP models are further investigated. First, 134 CAT events over Europe are extracted from a flight data monitoring (FDM) database run by the SWISS airline and containing over 100,000 flights. The location, time, and meteorological parameters along the turbulent spots are analysed. Furthermore, the 7-km NWP model run by the Swiss national weather service (MeteoSwiss) is used to calculate model-based CAT indices, e.g. the Richardson number, the Ellrod & Knapp turbulence index, and a complex/combined CAT index developed at NCAR. The CAT indices simulated with COSMO-7 are then compared to the observed CAT spots, allowing an assessment of the model's performance and its potential use in a CAT warning system. In a second step, the meteorological conditions associated with CAT are investigated. To this aim, CAT events are defined as coherent structures in space and in time, i.e. their dimension and life cycle are studied, in connection with jet streams and upper-level fronts. Finally, in a third step the predictability of CAT is assessed by comparing CAT index predictions based on different lead times of the NWP model COSMO-7
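
Of the indices named above, the Richardson number is the simplest: it compares static stability to the square of the vertical wind shear, with values below roughly 0.25 indicating possible shear (Kelvin-Helmholtz) instability. A sketch on an invented sounding, not FDM or COSMO-7 data:

```python
import numpy as np

g = 9.81  # m s-2

def richardson(theta, u, v, z):
    """Gradient Richardson number Ri = (g/theta) dtheta/dz / |dU/dz|^2."""
    dth = np.gradient(theta, z)
    shear2 = np.gradient(u, z) ** 2 + np.gradient(v, z) ** 2
    return (g / theta) * dth / np.maximum(shear2, 1e-12)

# invented sounding with a strong shear layer below a jet maximum at 10 km
z = np.array([8000.0, 9000.0, 10000.0, 11000.0, 12000.0])   # height, m
theta = np.array([310.0, 313.0, 316.0, 320.0, 325.0])       # potential temp., K
u = np.array([10.0, 30.0, 55.0, 40.0, 30.0])                # zonal wind, m s-1
v = np.zeros_like(u)
ri = richardson(theta, u, v, z)
print(np.round(ri, 2))   # Ri < 0.25 flags the strongly sheared layer
```

In an operational setting this profile calculation would be repeated at every model column and lead time, which is essentially how the gridded CAT index fields compared against the observed turbulence spots are built.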

  17. EvoBuild: A Quickstart Toolkit for Programming Agent-Based Models of Evolutionary Processes

    NASA Astrophysics Data System (ADS)

    Wagh, Aditi; Wilensky, Uri

    2018-04-01

    Extensive research has shown that one of the benefits of programming to learn about scientific phenomena is that it facilitates learning about the mechanisms underlying the phenomenon. However, using programming activities in classrooms is associated with costs such as requiring additional time to learn to program or students needing prior experience with programming. This paper presents a class of programming environments that we call quickstart: environments with a negligible threshold for entry into programming and a modest ceiling. We posit that such environments can provide the benefits of programming for learning without incurring the associated costs for novice programmers. To make this claim, we present a design-based research study conducted to compare programming models of evolutionary processes with a quickstart toolkit against exploring pre-built models of the same processes. The study was conducted in six seventh-grade science classes in two schools. Students in the programming condition used EvoBuild, a quickstart toolkit for programming agent-based models of evolutionary processes, to build their NetLogo models. Students in the exploration condition used pre-built NetLogo models. We demonstrate that although students came from a range of academic backgrounds without prior programming experience, and all students spent the same number of class periods on the activities (including the time students took to learn programming in this environment), EvoBuild students showed greater learning about evolutionary mechanisms. We discuss the implications of this work for design research on programming environments in K-12 science education.
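
In the same spirit, a minimal agent-based model of selection and mutation fits in a few lines of Python. This is a generic sketch, not an EvoBuild or NetLogo model: agents carry one heritable trait, fitness equals the trait value, and offspring mutate:

```python
import random

def evolve(pop_size=200, generations=100, mut=0.05, seed=1):
    """Agents carry one heritable trait in [0, 1]; fitness equals the trait.
    Each generation: fitness-proportional selection, then Gaussian mutation."""
    rng = random.Random(seed)
    pop = [rng.random() for _ in range(pop_size)]
    for _ in range(generations):
        pop = rng.choices(pop, weights=pop, k=pop_size)  # roulette-wheel selection
        pop = [min(1.0, max(0.0, t + rng.gauss(0.0, mut))) for t in pop]
    return pop

pop = evolve()
print(round(sum(pop) / len(pop), 2))  # mean trait rises well above its ~0.5 start
```

The pedagogical point such models make concrete is that directional change at the population level emerges from purely local rules: no agent "improves itself"; differential reproduction plus random mutation do all the work.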

  18. Update on the non-prewhitening model observer in computed tomography for the assessment of the adaptive statistical and model-based iterative reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Ott, Julien G.; Becce, Fabio; Monnin, Pascal; Schmidt, Sabine; Bochud, François O.; Verdun, Francis R.

    2014-08-01

    The state of the art in describing image quality in medical imaging is to assess the performance of an observer conducting a task of clinical interest. This can be done by using a model observer, leading to a figure of merit such as the signal-to-noise ratio (SNR). Using the non-prewhitening (NPW) model observer, we objectively characterised the evolution of its figure of merit under various acquisition conditions. The NPW model observer usually requires the modulation transfer function (MTF) as well as noise power spectra. However, although the computation of the MTF poses no problem when dealing with the traditional filtered back-projection (FBP) algorithm, this is not the case when using iterative reconstruction (IR) algorithms, such as adaptive statistical iterative reconstruction (ASIR) or model-based iterative reconstruction (MBIR). Given that the target transfer function (TTF) had already been shown to accurately express the system resolution even with non-linear algorithms, we decided to tune the NPW model observer, replacing the standard MTF by the TTF. The TTF was estimated using a custom-made phantom containing cylindrical inserts surrounded by water. The contrast differences between the inserts and water were plotted for each acquisition condition, and mathematical transformations were then performed, leading to the TTF. As expected, the first results showed a dependency of the TTF on image contrast and noise levels for both ASIR and MBIR. Moreover, FBP also proved to be dependent on contrast and noise when using the lung kernel. Those results were then introduced into the NPW model observer. We observed an enhancement of the SNR every time we switched from FBP to ASIR to MBIR. IR algorithms greatly improve image quality, especially in low-dose conditions. Based on our results, the use of MBIR could lead to further dose reduction in several clinical applications.
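
For discrete images, the NPW observer correlates the data with the expected signal itself, without prewhitening the noise, giving SNR² = (sᵀs)² / (sᵀKs) for a signal template s and noise covariance K. A toy one-dimensional sketch with an invented signal profile and noise model (not the frequency-domain TTF/NPS formulation used in the study):

```python
import numpy as np

def npw_snr(s, K):
    """SNR of the non-prewhitening observer: the template is the expected
    signal s, so SNR = (s.T s) / sqrt(s.T K s) for noise covariance K."""
    s = np.ravel(s)
    return (s @ s) / np.sqrt(s @ K @ s)

# invented 1-D example: Gaussian signal in exponentially correlated noise
x = np.linspace(-3.0, 3.0, 61)
s = np.exp(-x ** 2 / 2.0)
K = 0.04 * np.exp(-np.abs(x[:, None] - x[None, :]) / 0.5)
print(round(npw_snr(s, K), 2))
```

For white (uncorrelated) noise the NPW observer coincides with the ideal matched filter; its penalty relative to the ideal observer appears only when the noise is correlated, as it is in CT reconstructions.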

  19. CLIMLAB: a Python-based software toolkit for interactive, process-oriented climate modeling

    NASA Astrophysics Data System (ADS)

    Rose, B. E. J.

    2015-12-01

    Global climate is a complex emergent property of the rich interactions between simpler components of the climate system. We build scientific understanding of this system by breaking it down into component process models (e.g. radiation, large-scale dynamics, boundary layer turbulence), understanding each component, and putting them back together. Hands-on experience and freedom to tinker with climate models (whether simple or complex) is invaluable for building physical understanding. CLIMLAB is an open-ended software engine for interactive, process-oriented climate modeling. With CLIMLAB you can interactively mix and match model components, or combine simpler process models into a more comprehensive model. It was created primarily to support classroom activities, using hands-on modeling to teach fundamentals of climate science at both undergraduate and graduate levels. CLIMLAB is written in Python and ties in with the rich ecosystem of open-source scientific Python tools for numerics and graphics. The IPython notebook format provides an elegant medium for distributing interactive example code. I will give an overview of the current capabilities of CLIMLAB, the curriculum we have developed thus far, and plans for the future. Using CLIMLAB requires some basic Python coding skills. We consider this an educational asset, as we are targeting upper-level undergraduates and Python is an increasingly important language in STEM fields. However, CLIMLAB is well suited to be deployed as a computational back-end for a graphical gaming environment based on earth-system modeling.
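
The process-oriented idea, simple component processes stepped forward in time, can be sketched with a zero-dimensional energy-balance model in plain Python. This is a generic sketch, not the CLIMLAB API; the parameter values are textbook-style assumptions:

```python
# zero-dimensional energy-balance model: C dT/dt = ASR - OLR
SIGMA = 5.67e-8     # Stefan-Boltzmann constant, W m-2 K-4
S0 = 1365.0         # solar constant, W m-2
ALBEDO = 0.3        # planetary albedo
EPS = 0.61          # effective emissivity (crude greenhouse parameter)
C = 4.0e8           # heat capacity, J m-2 K-1 (roughly 100 m of ocean)

def step(T, dt):
    asr = S0 * (1.0 - ALBEDO) / 4.0   # absorbed shortwave radiation
    olr = EPS * SIGMA * T ** 4        # outgoing longwave radiation
    return T + dt * (asr - olr) / C

T = 255.0
dt = 10.0 * 86400.0                   # ten-day time step
for _ in range(2000):                 # integrate for ~55 years
    T = step(T, dt)
print(round(T, 1))                    # → 288.3
```

Starting from 255 K, the model relaxes to an equilibrium near 288 K, set by the balance of absorbed shortwave and emitted longwave radiation; swapping in a different radiation or albedo process is exactly the kind of mix-and-match experiment the abstract describes.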

  20. Quantum-like model of processing of information in the brain based on classical electromagnetic field.

    PubMed

    Khrennikov, Andrei

    2011-09-01

    We propose a model of quantum-like (QL) processing of mental information. This model is based on quantum information theory. However, in contrast to models of the "quantum physical brain" reducing mental activity (at least at the highest level) to quantum physical phenomena in the brain, our model matches well with the basic neuronal paradigm of cognitive science. QL information processing is based (surprisingly) on classical electromagnetic signals induced by joint activity of neurons. This novel approach to quantum information is based on a representation of quantum mechanics as a version of classical signal theory, which was recently elaborated by the author. The brain uses the QL representation (QLR) for working with abstract concepts; concrete images are described by classical information theory. The two processes, classical and QL, are performed in parallel. Moreover, information is actively transmitted from one representation to another. A QL concept, given in our model by a density operator, can generate a variety of concrete images given by temporal realizations of the corresponding (Gaussian) random signal. This signal has a covariance operator coinciding with the density operator encoding the abstract concept under consideration. The presence of various temporal scales in the brain plays a crucial role in the creation of the QLR in the brain. Moreover, in our model electromagnetic noise produced by neurons is a source of superstrong QL correlations between processes in different spatial domains in the brain; the binding problem is solved on the QL level, but with the aid of the classical background fluctuations. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  1. A refined 'standard' thermal model for asteroids based on observations of 1 Ceres and 2 Pallas

    NASA Technical Reports Server (NTRS)

    Lebofsky, Larry A.; Sykes, Mark V.; Tedesco, Edward F.; Veeder, Glenn J.; Matson, Dennis L.

    1986-01-01

    An analysis of ground-based thermal IR observations of 1 Ceres and 2 Pallas in light of their recently determined occultation diameters and small amplitude light curves has yielded a new value for the IR beaming parameter employed in the standard asteroid thermal emission model which is significantly lower than the previous one. When applied to the reduction of thermal IR observations of other asteroids, this new value is expected to yield model diameters closer to actual values. The present formulation incorporates the IAU magnitude convention for asteroids that employs zero-phase magnitudes, including the opposition effect.

  2. Striatal and Hippocampal Entropy and Recognition Signals in Category Learning: Simultaneous Processes Revealed by Model-Based fMRI

    PubMed Central

    Davis, Tyler; Love, Bradley C.; Preston, Alison R.

    2012-01-01

    Category learning is a complex phenomenon that engages multiple cognitive processes, many of which occur simultaneously and unfold dynamically over time. For example, as people encounter objects in the world, they simultaneously engage processes to determine their fit with current knowledge structures, gather new information about the objects, and adjust their representations to support behavior in future encounters. Many techniques that are available to understand the neural basis of category learning assume that the multiple processes that subserve it can be neatly separated between different trials of an experiment. Model-based functional magnetic resonance imaging offers a promising tool to separate multiple, simultaneously occurring processes and bring the analysis of neuroimaging data more in line with category learning’s dynamic and multifaceted nature. We use model-based imaging to explore the neural basis of recognition and entropy signals in the medial temporal lobe and striatum that are engaged while participants learn to categorize novel stimuli. Consistent with theories suggesting a role for the anterior hippocampus and ventral striatum in motivated learning in response to uncertainty, we find that activation in both regions correlates with a model-based measure of entropy. Simultaneously, separate subregions of the hippocampus and striatum exhibit activation correlated with a model-based recognition strength measure. Our results suggest that model-based analyses are exceptionally useful for extracting information about cognitive processes from neuroimaging data. Models provide a basis for identifying the multiple neural processes that contribute to behavior, and neuroimaging data can provide a powerful test bed for constraining and testing model predictions. PMID:22746951

  3. Ensemble-sensitivity Analysis Based Observation Targeting for Mesoscale Convection Forecasts and Factors Influencing Observation-Impact Prediction

    NASA Astrophysics Data System (ADS)

    Hill, A.; Weiss, C.; Ancell, B. C.

    2017-12-01

    The basic premise of observation targeting is that additional observations, when gathered and assimilated with a numerical weather prediction (NWP) model, will produce a more accurate forecast related to a specific phenomenon. Ensemble-sensitivity analysis (ESA; Ancell and Hakim 2007; Torn and Hakim 2008) is a tool capable of accurately estimating the proper location of targeted observations in areas that have initial model uncertainty and large error growth, as well as predicting the reduction of forecast variance due to the assimilated observation. ESA relates an ensemble of NWP model forecasts, specifically an ensemble of scalar forecast metrics, linearly to earlier model states. A thorough investigation is presented to determine how different factors of the forecast process impact our ability to successfully target new observations for mesoscale convection forecasts. Our primary goals for this work are to determine: (1) whether targeted observations hold more positive impact than non-targeted (i.e. randomly chosen) observations; (2) whether there are lead-time constraints to targeting for convection; (3) how inflation, localization, and the assimilation filter influence impact prediction and realized results; (4) whether there exist differences between targeted observations at the surface versus aloft; and (5) how physics errors and nonlinearity may augment observation impacts. Ten cases of dryline-initiated convection between 2011 and 2013 are simulated within a simplified OSSE framework and presented here. Ensemble simulations are produced from a cycling system that utilizes the Weather Research and Forecasting (WRF) model v3.8.1 within the Data Assimilation Research Testbed (DART). A "truth" (nature) simulation is produced by supplying a 3-km WRF run with GFS analyses and integrating the model forward 90 hours, from the beginning of ensemble initialization through the end of the forecast. Target locations for surface and radiosonde observations are computed 6, 12, and
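
    The core ESA calculation is a linear regression of an ensemble of scalar forecast metrics onto an earlier state variable. The sketch below uses synthetic ensemble data; the variance-reduction expression follows the commonly quoted ESA form (metric covariance squared over state variance plus observation error variance).

```python
import numpy as np

# Minimal ensemble-sensitivity sketch (after Ancell & Hakim 2007):
# regress an ensemble of scalar forecast metrics J onto an earlier model
# state variable x; the slope cov(J, x)/var(x) estimates dJ/dx.
rng = np.random.default_rng(0)
n_members = 50
x = rng.normal(0.0, 1.0, n_members)            # initial-state variable (synthetic)
J = 2.0 * x + rng.normal(0.0, 0.1, n_members)  # forecast metric per member

cov = np.cov(x, J, ddof=1)                     # 2x2 sample covariance matrix
sensitivity = cov[0, 1] / cov[0, 0]            # dJ/dx estimate, ~2.0 here

# Expected reduction in forecast-metric variance from assimilating an
# observation of x with error variance r (commonly quoted ESA form).
r = 0.25
var_reduction = cov[0, 1] ** 2 / (cov[0, 0] + r)
print(sensitivity, var_reduction)
```

    Targeting then amounts to evaluating this expected variance reduction at every candidate observation location and picking the maxima.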

  4. Drought Indicators Based on Model Assimilated GRACE Terrestrial Water Storage Observations

    NASA Technical Reports Server (NTRS)

    Houborg, Rasmus; Rodell, Matthew; Li, Bailing; Reichle, Rolf; Zaitchik, Benjamin F.

    2012-01-01

    The Gravity Recovery and Climate Experiment (GRACE) twin satellites observe time variations in Earth's gravity field which yield valuable information about changes in terrestrial water storage (TWS). GRACE is characterized by low spatial (greater than 150,000 square kilometers) and temporal (greater than 10 day) resolution but has the unique ability to sense water stored at all levels (including groundwater) systematically and continuously. The GRACE Data Assimilation System (GRACE-DAS), based on the Catchment Land Surface Model (CLSM), enhances the value of the GRACE water storage data by enabling spatial and temporal downscaling and vertical decomposition into moisture components (i.e. groundwater, soil moisture, snow), which individually are more useful for scientific applications. In this study, GRACE-DAS was applied to North America and GRACE-based drought indicators were developed as part of a larger effort that investigates the possibility of more comprehensive and objective identification of drought conditions by integrating spatially, temporally and vertically disaggregated GRACE data into the U.S. and North American Drought Monitors. Previously, the Drought Monitors lacked objective information on deep soil moisture and groundwater conditions, which are useful indicators of drought. Extensive datasets of groundwater storage from USGS monitoring wells and soil moisture from the Soil Climate Analysis Network (SCAN) were used to assess improvements in the hydrological modeling skill resulting from the assimilation of GRACE TWS data. The results point toward modest, but statistically significant, improvements in the hydrological modeling skill across major parts of the United States, highlighting the potential value of GRACE-assimilated water storage fields for improving drought detection.

  5. Online sequential Monte Carlo smoother for partially observed diffusion processes

    NASA Astrophysics Data System (ADS)

    Gloaguen, Pierre; Étienne, Marie-Pierre; Le Corff, Sylvain

    2018-12-01

    This paper introduces a new algorithm to approximate smoothed additive functionals of partially observed diffusion processes. The method relies on a new sequential Monte Carlo method which allows such approximations to be computed online, i.e., as the observations are received, with a computational complexity growing linearly with the number of Monte Carlo samples. The original algorithm cannot be used in the case of partially observed stochastic differential equations since the transition density of the latent data is usually unknown. We prove that it may be extended to partially observed continuous processes by replacing this unknown quantity with an unbiased estimator obtained, for instance, using general Poisson estimators. This estimator is proved to be consistent and its performance is illustrated using data from two models.
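
    The filtering skeleton such a smoother builds on can be sketched with a bootstrap particle filter for a partially observed Ornstein-Uhlenbeck diffusion. Note the assumptions: the latent transition is sampled with an Euler-Maruyama step, whereas the paper's algorithm instead replaces the unknown transition *density* with an unbiased (e.g. general Poisson) estimator; all parameters here are illustrative.

```python
import numpy as np

# Bootstrap particle filter for dX_t = -theta*X_t dt + sigma dW_t,
# observed as Y_k = X_{t_k} + Gaussian noise.
rng = np.random.default_rng(1)
theta, sigma, dt, obs_sd, N = 1.0, 0.5, 0.1, 0.2, 500

def propagate(particles):
    # Euler-Maruyama step for the latent diffusion (assumption; see lead-in)
    return particles - theta * particles * dt + sigma * np.sqrt(dt) * rng.normal(size=particles.shape)

def filter_step(particles, y):
    particles = propagate(particles)
    logw = -0.5 * ((y - particles) / obs_sd) ** 2   # Gaussian observation likelihood
    w = np.exp(logw - logw.max())
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)  # multinomial resampling
    return particles[idx]

# Synthetic data: one latent path and noisy observations of it
x, xs, ys = 1.0, [], []
for _ in range(50):
    x += -theta * x * dt + sigma * np.sqrt(dt) * rng.normal()
    xs.append(x)
    ys.append(x + obs_sd * rng.normal())

particles = rng.normal(1.0, 0.5, N)
for y in ys:
    particles = filter_step(particles, y)
print(particles.mean())  # filter mean tracks the final latent state
```

    The online smoother of the paper augments each particle with running statistics of the additive functional, so the memory cost stays constant as observations arrive.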

  6. Assessment of snow-dominated water resources: (Ir-)relevant scales for observation and modelling

    NASA Astrophysics Data System (ADS)

    Schaefli, Bettina; Ceperley, Natalie; Michelon, Anthony; Larsen, Joshua; Beria, Harsh

    2017-04-01

    High Alpine catchments play an essential role for many world regions since they 1) provide water resources to low lying and often relatively dry regions, 2) are important for hydropower production as a result of their high hydraulic heads, 3) offer relatively undisturbed habitat for fauna and flora and 4) provide a source of cold water often late into the summer season (due to snowmelt), which is essential for many downstream river ecosystems. However, the water balance of such high Alpine hydrological systems is often difficult to accurately estimate, in part because of seasonal to interannual accumulation of precipitation in the form of snow and ice and by relatively low but highly seasonal evapotranspiration rates. These processes are strongly driven by the topography and related vegetation patterns, by air temperature gradients, solar radiation and wind patterns. Based on selected examples, we will discuss how the spatial scale of these patterns dictates at which scales we can make reliable water balance assessments. Overall, this contribution will provide an overview of some of the key open questions in terms of observing and modelling the dominant hydrological processes in Alpine areas at the right scale. A particular focus will be on the observation and modelling of snow accumulation and melt processes, discussing in particular the usefulness of simple models versus fully physical models at different spatial scales and the role of observed data.

  7. Simulating Soil C Stock with the Process-based Model CQESTR

    NASA Astrophysics Data System (ADS)

    Gollany, H.; Liang, Y.; Rickman, R.; Albrecht, S.; Follett, R.; Wilhelm, W.; Novak, J.; Douglas, C.

    2009-04-01

    The prospect of storing carbon (C) in soil, as soil organic matter (SOM), provides an opportunity for agriculture to contribute to the reduction of carbon dioxide in the atmosphere while enhancing soil properties. Soil C models are useful for examining the complex interactions between crop, soil management practices and climate and their effects on long-term carbon storage or loss. The process-based carbon model CQESTR, pronounced ‘sequester,' was developed by USDA-ARS scientists at the Columbia Plateau Conservation Research Center, Pendleton, Oregon, USA. It computes the rate of biological decomposition of crop residues or organic amendments as they convert to SOM. CQESTR uses readily available field-scale data to assess long-term effects of cropping systems or crop residue removal on SOM accretion/loss in agricultural soil. Data inputs include weather, above- ground and below-ground biomass additions, N content of residues and amendments, soil properties, and management factors such as tillage and crop rotation. The model was calibrated using information from six long-term experiments across North America (Florence, SC, 19 yrs; Lincoln, NE, 26 yrs; Hoytville, OH, 31 yrs; Breton, AB, 60 yrs; Pendleton, OR, 76 yrs; and Columbia, MO, >100 yrs) having a range of soil properties and climate. CQESTR was validated using data from several additional long-term experiments (8 - 106 yrs) across North America having a range of SOM (7.3 - 57.9 g SOM/kg). Regression analysis of 306 pairs of predicted and measured SOM data under diverse climate, soil texture and drainage classes, and agronomic practices at 13 agricultural sites resulted in a linear relationship with an r2 of 0.95 (P < 0.0001) and a 95% confidence interval of 4.3 g SOM/kg. Estimated SOC values from CQESTR and IPCC (the Intergovernmental Panel on Climate Change) were compared to observed values in three relatively long-term experiments (20 - 24 years). At one site, CQESTR and IPCC estimates of SOC stocks were

  8. HESS Opinions: The need for process-based evaluation of large-domain hyper-resolution models

    NASA Astrophysics Data System (ADS)

    Melsen, Lieke A.; Teuling, Adriaan J.; Torfs, Paul J. J. F.; Uijlenhoet, Remko; Mizukami, Naoki; Clark, Martyn P.

    2016-03-01

    A meta-analysis on 192 peer-reviewed articles reporting on applications of the variable infiltration capacity (VIC) model in a distributed way reveals that the spatial resolution at which the model is applied has increased over the years, while the calibration and validation time interval has remained unchanged. We argue that the calibration and validation time interval should keep pace with the increase in spatial resolution in order to resolve the processes that are relevant at the applied spatial resolution. We identified six time concepts in hydrological models, which all impact the model results and conclusions. Process-based model evaluation is particularly relevant when models are applied at hyper-resolution, where stakeholders expect credible results both at a high spatial and temporal resolution.

  9. HESS Opinions: The need for process-based evaluation of large-domain hyper-resolution models

    NASA Astrophysics Data System (ADS)

    Melsen, L. A.; Teuling, A. J.; Torfs, P. J. J. F.; Uijlenhoet, R.; Mizukami, N.; Clark, M. P.

    2015-12-01

    A meta-analysis on 192 peer-reviewed articles reporting applications of the Variable Infiltration Capacity (VIC) model in a distributed way reveals that the spatial resolution at which the model is applied has increased over the years, while the calibration and validation time interval has remained unchanged. We argue that the calibration and validation time interval should keep pace with the increase in spatial resolution in order to resolve the processes that are relevant at the applied spatial resolution. We identified six time concepts in hydrological models, which all impact the model results and conclusions. Process-based model evaluation is particularly relevant when models are applied at hyper-resolution, where stakeholders expect credible results both at a high spatial and temporal resolution.

  10. Space Shuttle earth observations photography - Data listing process

    NASA Technical Reports Server (NTRS)

    Lulla, Kamlesh

    1992-01-01

    The data listing process for the electronic database of the Catalogs of Space Shuttle Earth Observations Photography is described. Similar data are recorded for each frame in each roll from the mission. At the end of each roll, a computer printout is checked for mistakes, glitches, and typographical errors. After the roll and frames have been corrected, the data listings are ready for transfer to the database and for development of the catalog.

  11. A process-based emission model of volatile organic compounds from silage sources on farms

    NASA Astrophysics Data System (ADS)

    Bonifacio, H. F.; Rotz, C. A.; Hafner, S. D.; Montes, F.; Cohen, M.; Mitloehner, F. M.

    2017-03-01

    Silage on dairy farms can emit large amounts of volatile organic compounds (VOCs), a precursor in the formation of tropospheric ozone. Because of the challenges associated with direct measurements, process-based modeling is another approach for estimating emissions of air pollutants from sources such as those from dairy farms. A process-based model for predicting VOC emissions from silage was developed and incorporated into the Integrated Farm System Model (IFSM, v. 4.3), a whole-farm simulation of crop, dairy, and beef production systems. The performance of the IFSM silage VOC emission model was evaluated using ethanol and methanol emissions measured from conventional silage piles (CSP), silage bags (SB), total mixed rations (TMR), and loose corn silage (LCS) at a commercial dairy farm in central California. With transport coefficients for ethanol refined using experimental data from our previous studies, the model performed well in simulating ethanol emission from CSP, TMR, and LCS; its lower performance for SB could be attributed to possible changes in face conditions of SB after silage removal that are not represented in the current model. For methanol emission, lack of experimental data for refinement likely caused the underprediction for CSP and SB whereas the overprediction observed for TMR can be explained as uncertainty in measurements. Despite these limitations, the model is a valuable tool for comparing silage management options and evaluating their relative effects on the overall performance, economics, and environmental impacts of farm production. As a component of IFSM, the silage VOC emission model was used to simulate a representative dairy farm in central California. The simulation showed most silage VOC emissions were from feed lying in feed lanes and not from the exposed face of silage storages. This suggests that mitigation efforts, particularly in areas prone to ozone non-attainment status, should focus on reducing emissions during feeding. 
For

  12. Tracking the critical offshore conditions leading to marine inundation via active learning of full-process based models

    NASA Astrophysics Data System (ADS)

    Rohmer, Jeremy; Idier, Deborah; Bulteau, Thomas; Paris, François

    2016-04-01

    From a risk management perspective, it can be of high interest to identify the critical set of offshore conditions that leads to inundation of key assets in the studied territory (e.g., assembly points, evacuation routes, hospitals, etc.). This inverse approach to risk assessment (Idier et al., NHESS, 2013) can be of primary importance either for the estimation of the coastal flood hazard return period or for constraining early warning networks based on hydro-meteorological forecasts or observations. However, full-process based models for coastal flooding simulation have a very large computational cost (typically several hours per simulation), which often limits the analysis to a few scenarios. Recently, it has been shown that meta-modelling approaches can efficiently handle this difficulty (e.g., Rohmer & Idier, NHESS, 2012). Yet, the full-process based models are expected to present strong non-linearities (non-regularities) or shocks (discontinuities), i.e. dynamics controlled by thresholds. For instance, in the case of a coastal defense, the dynamics are characterized first by a linear behavior of the waterline position (increasing with increasing offshore conditions), as long as there is no overtopping, and then by a very strong increase (as soon as the offshore conditions are energetic enough to lead to wave overtopping, and then overflow). Such behavior might make the training phase of the meta-model very tedious. In the present study, we propose to explore the feasibility of active learning techniques, aka semi-supervised machine learning, to track the set of critical conditions with a reduced number of long-running simulations. The basic idea relies on identifying the simulation scenarios which should both reduce the meta-model error and improve the prediction of the critical contour of interest. To overcome the afore-described difficulty related to non-regularity, we rely on Support Vector Machines, which have shown very high performance for structural reliability
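
    The active-learning loop can be illustrated in one dimension: each iteration buys one run of an "expensive" simulator at the point where the surrogate is most ambiguous. Everything here is an illustrative stand-in: the simulator, its threshold (0.62) and the logistic surrogate replace the full-process flooding model and SVM classifier of the study.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulator(x):            # expensive model (toy): does condition x flood?
    return float(x > 0.62)

candidates = np.linspace(0.0, 1.0, 201)
X = [0.0, 1.0]               # start from two extreme, already-run scenarios
y = [simulator(x) for x in X]

def fit_logistic(X, y, lr=5.0, steps=2000):
    # Plain gradient descent on 1-D logistic regression (surrogate model)
    w, b = 0.0, 0.0
    xs_, ys_ = np.asarray(X), np.asarray(y)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(w * xs_ + b)))
        err = p - ys_
        w -= lr * np.mean(err * xs_)
        b -= lr * np.mean(err)
    return w, b

for _ in range(8):           # each iteration buys one new simulator run
    w, b = fit_logistic(X, y)
    p = 1.0 / (1.0 + np.exp(-(w * candidates + b)))
    query = candidates[np.argmin(np.abs(p - 0.5))]   # most ambiguous candidate
    X.append(query)
    y.append(simulator(query))

w, b = fit_logistic(X, y)
print(-b / w)  # estimated critical condition, close to the true threshold
```

    With only ten simulator calls the estimated boundary homes in on the threshold, which is the motivation for applying the same idea to hours-long hydrodynamic runs.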

  13. Models for Temperature and Composition in Uranus from Spitzer, Herschel and Ground-Based Infrared through Millimeter Observations

    NASA Astrophysics Data System (ADS)

    Orton, G. S.; Fletcher, L. N.; Feuchtgruber, H.; Lellouch, E.; Moreno, R.; Encrenaz, T.; Hartogh, P.; Jarchow, C.; Swinyard, B.; Moses, J. I.; Burgdorf, M. J.; Hammel, H. B.; Line, M. R.; Sandell, G.; Dowell, C. D.

    2013-12-01

    Photometric and spectroscopic observations of Uranus were combined to create self-consistent models of its global-mean temperature profile, bulk composition, and vertical distribution of gases. These were derived from a suite of spacecraft and ground-based observations that includes the Spitzer IRS, and the Herschel HIFI, PACS and SPIRE instruments, together with ground-based observations from UKIRT and CSO. Observations of the collision-induced absorption of H2 have constrained the temperature structure in the troposphere; this was possible up to atmospheric pressures of ~2 bars. Temperatures in the stratosphere were constrained by H2 quadrupole line emission. We coupled the vertical distribution of CH4 in the stratosphere of Uranus with models for the vertical mixing in a way that is consistent with the mixing ratios of hydrocarbons whose abundances are influenced primarily by mixing rather than chemistry. Spitzer and Herschel data constrain the abundances of CH3, CH4, C2H2, C2H6, C3H4, C4H2, H2O and CO2. At millimeter wavelengths, there is evidence that an additional opacity source is required besides the H2 collision-induced absorption and the NH3 absorption needed to match the microwave spectrum; this can reasonably (but not uniquely) be attributed to H2S. These models will be made more mature by consideration of spatial variability from Voyager IRIS and more recent spatially resolved imaging and mapping from ground-based observatories. The model is of 'programmatic' interest because it serves as a calibration source for Herschel instruments, and it provides a starting point for planning future spacecraft investigations of the atmosphere of Uranus.

  14. Model and system learners, optimal process constructors and kinetic theory-based goal-oriented design: A new paradigm in materials and processes informatics

    NASA Astrophysics Data System (ADS)

    Abisset-Chavanne, Emmanuelle; Duval, Jean Louis; Cueto, Elias; Chinesta, Francisco

    2018-05-01

    Traditionally, Simulation-Based Engineering Sciences (SBES) have relied on the use of static data inputs (model parameters, initial or boundary conditions, … obtained from adequate experiments) to perform simulations. A new paradigm in the field of Applied Sciences and Engineering has emerged in the last decade. Dynamic Data-Driven Application Systems [9, 10, 11, 12, 22] allow the linkage of simulation tools with measurement devices for real-time control of simulations and applications, entailing the ability to dynamically incorporate additional data into an executing application and, in reverse, the ability of an application to dynamically steer the measurement process. It is in this context that traditional "digital twins" are giving rise to a new generation of goal-oriented data-driven application systems, also known as "hybrid twins", embracing models based on physics and models based exclusively on data adequately collected and assimilated to fill the gap between usual model predictions and measurements. Within this framework, new methodologies based on model learners, machine learning and kinetic goal-oriented design are defining a new paradigm in materials, processes and systems engineering.

  15. Improving Snow Modeling by Assimilating Observational Data Collected by Citizen Scientists

    NASA Astrophysics Data System (ADS)

    Crumley, R. L.; Hill, D. F.; Arendt, A. A.; Wikstrom Jones, K.; Wolken, G. J.; Setiawan, L.

    2017-12-01

    Modeling seasonal snow pack in alpine environments includes a multiplicity of challenges caused by a lack of spatially extensive and temporally continuous observational datasets. This is partially due to the difficulty of collecting measurements in harsh, remote environments where extreme gradients in topography exist, accompanied by large model domains and inclement weather. Engaging snow enthusiasts, snow professionals, and community members to participate in the process of data collection may address some of these challenges. In this study, we use SnowModel to estimate seasonal snow water equivalence (SWE) in the Thompson Pass region of Alaska while incorporating snow depth measurements collected by citizen scientists. We develop a modeling approach to assimilate hundreds of snow depth measurements from participants in the Community Snow Observations (CSO) project (www.communitysnowobs.org). The CSO project includes a mobile application where participants record and submit geo-located snow depth measurements while working and recreating in the study area. These snow depth measurements are randomly located within the model grid at irregular time intervals over the span of four months in the 2017 water year. This snow depth observation dataset is converted into a SWE dataset by employing an empirically-based, bulk density and SWE estimation method. We then assimilate this data using SnowAssim, a sub-model within SnowModel, to constrain the SWE output by the observed data. Multiple model runs are designed to represent an array of output scenarios during the assimilation process. An effort to present model output uncertainties is included, as well as quantification of the pre- and post-assimilation divergence in modeled SWE. Early results reveal pre-assimilation SWE estimations are consistently greater than the post-assimilation estimations, and the magnitude of divergence increases throughout the snow pack evolution period. 
This research has implications beyond the
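
    The depth-to-SWE step described above can be sketched as a one-liner. Hedged assumptions: the CSO workflow uses an empirical density model (e.g. Sturm et al. 2010, varying with day of year and climate class); the constant bulk density below is purely illustrative.

```python
# Convert a citizen-science snow depth to snow water equivalent (SWE)
# via an assumed bulk snow density.
WATER_DENSITY = 1000.0  # kg m^-3

def depth_to_swe(depth_m, bulk_density=300.0):
    """SWE in metres of water equivalent from snow depth and bulk density."""
    return depth_m * bulk_density / WATER_DENSITY

# A 1.5 m snowpack at 300 kg m^-3 holds 0.45 m of water equivalent.
print(depth_to_swe(1.5))  # → 0.45
```

    The resulting SWE values are what the assimilation step (SnowAssim) uses to constrain the modeled fields.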

  16. Orion Flight Test 1 Architecture: Observed Benefits of a Model Based Engineering Approach

    NASA Technical Reports Server (NTRS)

    Simpson, Kimberly A.; Sindiy, Oleg V.; McVittie, Thomas I.

    2012-01-01

    This paper details how a NASA-led team is using a model-based systems engineering approach to capture, analyze and communicate the end-to-end information system architecture supporting the first unmanned orbital flight of the Orion Multi-Purpose Crew Exploration Vehicle. Along with a brief overview of the approach and its products, the paper focuses on the observed program-level benefits, challenges, and lessons learned, all of which may be applied to improve systems engineering tasks for characteristically similar challenges.

  17. Using metagenomic and metatranscriptomic observations to test a thermodynamic-based model of community metabolic expression over time and space

    NASA Astrophysics Data System (ADS)

    Vallino, J. J.; Huber, J. A.

    2016-02-01

    Marine biogeochemistry is orchestrated by a complex and dynamic community of microorganisms that attempt to maximize their own fecundity through a combination of competition and cooperation. At a systems level, the community can be described as a distributed metabolic network, where different species contribute their own unique set of metabolic capabilities. Our current project attempts to understand the governing principles that describe amplification or attenuation of metabolic pathways within the network through a combination of modeling and metagenomic, metatranscriptomic and biogeochemical observations. We will describe and present results from our thermodynamic-based model that determines optimal pathway expression from available resources based on the principle of maximum entropy production (MEP); that is, based on the hypothesis that non-equilibrium systems organize to maximize energy dissipation. The MEP model currently predicts metabolic pathway expression over time, and one spatial dimension. Model predictions will be compared to biogeochemical observations and gene presence and expression from samples collected over time and space from a coastal meromictic basin (Siders Pond) located in Falmouth, MA, US. Siders Pond's permanent stratification, caused by occasional seawater intrusion, results in steep chemoclines and redox gradients, which supports both aerobic and anaerobic phototrophs as well as sulfur, Fe and Mn redox cycles. The diversity of metabolic capability and expression we have observed over depth makes it an ideal system to test our thermodynamic-based model.

  18. Model selection using cosmic chronometers with Gaussian Processes

    NASA Astrophysics Data System (ADS)

    Melia, Fulvio; Yennapureddy, Manoj K.

    2018-02-01

    The use of Gaussian Processes with a measurement of the cosmic expansion rate based solely on the observation of cosmic chronometers provides a completely cosmology-independent reconstruction of the Hubble parameter H(z) suitable for testing different models. The corresponding dispersion σH is smaller than ~9% over the entire redshift range (0 ≲ z ≲ 2.0) of the observations, rivaling many kinds of cosmological measurements available today. We use the reconstructed H(z) function to test six different cosmologies, and show that it favours the Rh=ct universe, which has only one free parameter (i.e., H0), over other models, including Planck ΛCDM. The parameters of the standard model may be re-optimized to improve the fits to the reconstructed H(z) function, but the results have smaller p-values than one finds with Rh=ct.
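
    A self-contained sketch of the kind of Gaussian-process regression used to reconstruct H(z): a squared-exponential kernel, noisy training points, and the standard posterior mean and variance. The redshifts, H(z) values, errors and hyperparameters below are synthetic stand-ins, not the published chronometer measurements.

```python
import numpy as np

def sq_exp_kernel(x1, x2, amp=100.0, length=1.0):
    # Squared-exponential covariance between two sets of inputs
    return amp**2 * np.exp(-0.5 * (x1[:, None] - x2[None, :]) ** 2 / length**2)

z_obs = np.array([0.1, 0.4, 0.9, 1.3, 1.75])         # redshifts (synthetic)
H_obs = np.array([69.0, 83.0, 117.0, 160.0, 202.0])  # H(z) values (synthetic)
H_err = np.full_like(H_obs, 8.0)                     # 1-sigma errors (synthetic)

K = sq_exp_kernel(z_obs, z_obs) + np.diag(H_err**2)  # kernel + noise variance
z_star = np.linspace(0.0, 2.0, 41)
K_star = sq_exp_kernel(z_star, z_obs)

alpha = np.linalg.solve(K, H_obs)
H_rec = K_star @ alpha                               # posterior mean of H(z)
var_rec = sq_exp_kernel(z_star, z_star).diagonal() - np.einsum(
    "ij,ji->i", K_star, np.linalg.solve(K, K_star.T))
print(H_rec[0], np.sqrt(var_rec[0]))  # reconstructed value at z=0 and its 1-sigma
```

    The reconstruction and its pointwise dispersion can then be compared directly against each cosmology's predicted H(z), with no model assumed in the fit itself.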

  19. Comparing niche- and process-based models to reduce prediction uncertainty in species range shifts under climate change.

    PubMed

    Morin, Xavier; Thuiller, Wilfried

    2009-05-01

    Obtaining reliable predictions of species range shifts under climate change is a crucial challenge for ecologists and stakeholders. At the continental scale, niche-based models have been widely used in the last 10 years to predict the potential impacts of climate change on species distributions all over the world, although these models do not include any mechanistic relationships. In contrast, species-specific, process-based predictions remain scarce at the continental scale. This is regrettable because to secure relevant and accurate predictions it is always desirable to compare predictions derived from different kinds of models applied independently to the same set of species and using the same raw data. Here we compare predictions of range shifts under climate change scenarios for 2100 derived from niche-based models with those of a process-based model for 15 North American boreal and temperate tree species. A general pattern emerged from our comparisons: niche-based models tend to predict a stronger level of extinction and a greater proportion of colonization than the process-based model. This result likely arises because niche-based models do not take phenotypic plasticity and local adaptation into account. Nevertheless, as the two kinds of models rely on different assumptions, their complementarity is revealed by common findings. Both modeling approaches highlight a major potential limitation on species tracking their climatic niche because of migration constraints and identify similar zones where species extirpation is likely. Such convergent predictions from models built on very different principles provide a useful way to offset uncertainties at the continental scale. This study shows that the use in concert of both approaches with their own caveats and advantages is crucial to obtain more robust results and that comparisons among models are needed in the near future to gain accuracy regarding predictions of range shifts under climate change.

  20. A Bayesian Approach to Evaluating Consistency between Climate Model Output and Observations

    NASA Astrophysics Data System (ADS)

    Braverman, A. J.; Cressie, N.; Teixeira, J.

    2010-12-01

    Like other scientific and engineering problems that involve physical modeling of complex systems, climate models can be evaluated and diagnosed by comparing their output to observations of similar quantities. Though the global remote sensing data record is relatively short by climate research standards, these data offer opportunities to evaluate model predictions in new ways. For example, remote sensing data are spatially and temporally dense enough to provide distributional information that goes beyond simple moments to allow quantification of temporal and spatial dependence structures. In this talk, we propose a new method for exploiting these rich data sets using a Bayesian paradigm. For a collection of climate models, we calculate posterior probabilities that its members best represent the physical system each seeks to reproduce. The posterior probability is based on the likelihood that a chosen summary statistic, computed from observations, would be obtained when the model's output is considered as a realization from a stochastic process. By exploring how posterior probabilities change with different statistics, we may paint a more quantitative and complete picture of the strengths and weaknesses of the models relative to the observations. We demonstrate our method using model output from the CMIP archive, and observations from NASA's Atmospheric Infrared Sounder.
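
    The Bayesian comparison idea can be sketched in a few lines: each model defines a sampling distribution for a summary statistic T, and the posterior probability of a model given the observed statistic is prior times likelihood, normalized over the collection. The Gaussian likelihoods and all numbers below are illustrative assumptions, not the paper's setup.

```python
from math import exp, sqrt, pi

models = {
    "model_A": (0.8, 0.2),   # (mean, sd) of statistic T under model A's output
    "model_B": (1.4, 0.3),
}
t_obs = 0.9                  # summary statistic computed from observations
prior = 1.0 / len(models)    # uniform prior over the model collection

def gaussian_pdf(x, mu, sd):
    return exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * sqrt(2 * pi))

unnorm = {m: prior * gaussian_pdf(t_obs, mu, sd) for m, (mu, sd) in models.items()}
total = sum(unnorm.values())
posterior = {m: w / total for m, w in unnorm.items()}
print(posterior)  # model_A is favoured: t_obs lies closer to its distribution
```

    Repeating the calculation for different choices of statistic (means, variances, dependence measures) is what paints the multi-faceted picture of model strengths and weaknesses.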

  1. Use of ebRIM-based CSW with sensor observation services for registry and discovery of remote-sensing observations

    NASA Astrophysics Data System (ADS)

    Chen, Nengcheng; Di, Liping; Yu, Genong; Gong, Jianya; Wei, Yaxing

    2009-02-01

    Recent advances in Sensor Web geospatial data capture, such as high-resolution satellite imagery and Web-ready data processing and modeling technologies, have led to the generation of large numbers of datasets from real-time or near-real-time observations and measurements. Finding which sensor or data complies with criteria such as specific times, locations, and scales has become a bottleneck for Sensor Web-based applications, especially for remote-sensing observations. In this paper, an architecture for integrating the Sensor Observation Service (SOS) with the Open Geospatial Consortium (OGC) Catalogue Service-Web profile (CSW) is put forward. The architecture consists of a distributed geospatial sensor observation service, a geospatial catalogue service based on the ebXML Registry Information Model (ebRIM), SOS search and registry middleware, and a geospatial sensor portal. The SOS search and registry middleware finds candidate SOS instances, generates data granule information, and inserts the records into the CSW. The contents and sequence of the services, the available observations, and the metadata of the observations registry are described. A prototype system is designed and implemented using the service middleware technology and a standard interface and protocol. The feasibility and the response time of registry and retrieval of observations are evaluated using a realistic Earth Observing-1 (EO-1) SOS scenario. Extracting information from the SOS requires the same execution time as record generation for the CSW. The average data-retrieval response time in SOS+CSW mode is 17.6% of that of the SOS-alone mode. The proposed architecture offers greater advantages for SOS search and observation-data retrieval than existing Sensor Web-enabled systems.

  2. Modeling the Hydrologic Processes of a Permeable Pavement ...

    EPA Pesticide Factsheets

    A permeable pavement system can capture stormwater to reduce runoff volume and flow rate, improve onsite groundwater recharge, and enhance pollutant controls within the site. A new unit process model for evaluating the hydrologic performance of a permeable pavement system has been developed in this study. The developed model can continuously simulate infiltration through the permeable pavement surface, exfiltration from the storage to the surrounding in situ soils, and clogging impacts on infiltration/exfiltration capacity at the pavement surface and the bottom of the subsurface storage unit. The exfiltration modeling component simulates vertical and horizontal exfiltration independently, based on Darcy’s formula with the Green-Ampt approximation. The model is parameterized with physically based quantities such as hydraulic conductivity, Manning’s friction parameters, saturated and field-capacity volumetric water contents, porosity, and density. The developed model was calibrated using high-frequency observed data. The modeled water depths match the observed values well (R2 = 0.90). The modeling results show that horizontal exfiltration through the side walls of the subsurface storage unit is a prevailing factor in determining the hydrologic performance of the system, especially where the storage unit has a long, narrow shape or a high risk of bottom compaction and clogging. This paper presents unit
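A minimal sketch of the storage-unit water balance follows, with vertical exfiltration from Darcy's formula plus a Green-Ampt-style wetting-front term and a unit-gradient horizontal flux through the wetted side walls. The formulation and every parameter value are illustrative assumptions, not the paper's actual model:

```python
def simulate_storage(depth0, inflow, k_bottom, k_side, psi, wet_front,
                     length, width, dt, nsteps):
    """Evolve water depth d (m) in a rectangular subsurface storage unit.

    Vertical exfiltration per unit bottom area follows a Green-Ampt-style
    Darcy flux, q_v = K_b * (d + psi + L_f) / L_f, where psi is the suction
    head and L_f an assumed wetting-front depth; horizontal exfiltration is
    a unit-gradient Darcy flux through the wetted side-wall area.
    """
    area = length * width
    perimeter = 2.0 * (length + width)
    d = depth0
    depths = [d]
    for _ in range(nsteps):
        if d > 0.0:
            q_vert = area * k_bottom * (d + psi + wet_front) / wet_front
            q_horiz = perimeter * d * k_side
        else:
            q_vert = q_horiz = 0.0
        # explicit-Euler water balance; depth cannot go negative
        d = max(0.0, d + dt * (inflow - q_vert - q_horiz) / area)
        depths.append(d)
    return depths

# Illustrative run: long, narrow storage (8 m x 1 m), no inflow, draining
depths = simulate_storage(depth0=0.5, inflow=0.0, k_bottom=1e-6,
                          k_side=5e-6, psi=0.1, wet_front=0.3,
                          length=8.0, width=1.0, dt=60.0, nsteps=500)
```

In this toy geometry the perimeter-to-area ratio is large, so the side-wall term dominates drainage, echoing the paper's finding for long, narrow storage units.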

  3. Progress on wave-ice interactions: satellite observations and model parameterizations

    NASA Astrophysics Data System (ADS)

    Ardhuin, Fabrice; Boutin, Guillaume; Dumont, Dany; Stopa, Justin; Girard-Ardhuin, Fanny; Accensi, Mickael

    2017-04-01

    In the open ocean, numerical wave models have their largest errors near sea ice and, until recently, virtually no wave data were available in the sea ice zone. Further, wave-ice interaction processes may play an important role in the Earth system. In particular, waves may break up an ice layer into floes, with significant impact on air-sea fluxes. With thinner Arctic ice, this process may contribute to the growing similarity between Arctic and Antarctic sea ice. In return, the ice has a strong damping effect on the waves that is highly variable and not well understood. Here we report progress on parameterizations of waves interacting with a single ice layer, as implemented in the WAVEWATCH III model (WW3 Development Group, 2016), based on a few in situ observations but extensive data derived from Synthetic Aperture Radars (SARs). Our parameterizations combine three processes. First, a parameterization for the energy-conserving scattering of waves by ice floes (assuming isotropic back-scatter), which has very little effect on dominant waves with periods larger than 7 s, consistent with the observed narrow directional spectra and short travel times. Second, we implemented a basal friction below the ice layer (Stopa et al., The Cryosphere, 2016). Third, we use a secondary creep associated with ice flexure (Cole et al., 1998) adapted to random waves. These three processes (scattering, friction and creep) are strongly dependent on the maximum floe size. We have thus included an estimation of the potential floe size based on an ice-flexure failure estimation adapted from Williams et al. (2013). This combination of dissipation and scattering is tested against measured patterns of wave height and directional spreading, and evidence of ice break-up, all obtained from SAR imagery (Ardhuin et al. 2017), and some in situ data (Collins et al. 2015).
The combination of creep and friction is required to reproduce a strong reduction in wave attenuation in broken ice as observed by Collins

  4. Error Detection Processes during Observational Learning

    ERIC Educational Resources Information Center

    Badets, Arnaud; Blandin, Yannick; Wright, David L.; Shea, Charles H.

    2006-01-01

    The purpose of this experiment was to determine whether a faded knowledge of results (KR) frequency during observation of a model's performance enhanced error detection capabilities. During the observation phase, participants observed a model performing a timing task and received KR about the model's performance on each trial or on one of two…

  5. Predictive modelling of Ketzin - CO2 arrival in the observation well

    NASA Astrophysics Data System (ADS)

    Kühn, M.; Class, H.; Frykman, P.; Kopp, A.; Nielsen, C. M.; Probst, P.

    2009-04-01

    The design of the Ketzin CO2 storage site allows testing of different modelling approaches, ranging from analytical approaches to finite element modelling. As three wells are drilled in an L-shape configuration, 3D geophysical observations (electrical resistivity, seismic imaging - for details see further presentations at EGU2009) allow the 4D evolution of the CO2 plume within the reservoir to be determined. Further information is available through smart casing technologies (DTS, ERT), conventional fluid sampling, and permanent gas sampling. As input parameters for the models, a high-resolution 3D seismic survey as well as core samples from all three wells at Ketzin, analysed in detail, were available. Logging data and laboratory experiments on rock samples provide further boundary conditions for the geological model. Hydraulic testing of all three wells gave further information about the complex hydraulic situation of the highly heterogeneous reservoir. Before CO2 injection started at the Ketzin site on 30 June 2008, every member of the CO2SINK project was asked to place a bet in a competition and predict when CO2 arrival at the observation well - 50 m away from the injection site - was to be expected. This allows for a double-blind study, the appraisal of different modelling strategies, and the improvement of modelling tools and strategies. The discussed estimates are based on three different numerical models: Eclipse100, Eclipse300 (CO2STORE) and MUFTE-UG were applied for predictive modelling. The geological models are based on all available geophysical and geological information. We present the results of this modelling exercise, discuss the differences among the models, and assess the capability of numerical simulation to estimate processes occurring during CO2 storage. The role of grid size on the precision of the modelled two-phase fluid flow in a layered reservoir is demonstrated, as a high-resolution model of the two-phase flow explains the observed arrival of the CO2 very

  6. Process-based modeling of temperature and water profiles in the seedling recruitment zone: Part I. Model validation

    USDA-ARS?s Scientific Manuscript database

    Process-based modeling provides detailed spatial and temporal information of the soil environment in the shallow seedling recruitment zone across field topography where measurements of soil temperature and water may not sufficiently describe the zone. Hourly temperature and water profiles within the...

  7. A hybrid agent-based approach for modeling microbiological systems.

    PubMed

    Guo, Zaiyi; Sloot, Peter M A; Tay, Joc Cing

    2008-11-21

    Models for systems biology commonly adopt Differential Equations or Agent-Based modeling approaches for simulating the processes as a whole. Models based on differential equations presuppose phenomenological intracellular behavioral mechanisms, while models based on Multi-Agent approach often use directly translated, and quantitatively less precise if-then logical rule constructs. We propose an extendible systems model based on a hybrid agent-based approach where biological cells are modeled as individuals (agents) while molecules are represented by quantities. This hybridization in entity representation entails a combined modeling strategy with agent-based behavioral rules and differential equations, thereby balancing the requirements of extendible model granularity with computational tractability. We demonstrate the efficacy of this approach with models of chemotaxis involving an assay of 10³ cells and 1.2×10⁶ molecules. The model produces cell migration patterns that are comparable to laboratory observations.
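The hybrid representation above — cells as discrete agents, molecules as continuous quantities updated by rate equations — can be illustrated with a deliberately tiny 1D chemotaxis sketch. The grid size, secretion/decay rates, diffusion smoothing, and movement rule are all invented for illustration and bear no relation to the paper's actual model:

```python
def run_chemotaxis(n_cells=5, n_sites=20, steps=50, dt=0.1,
                   source_site=19, secretion=1.0, decay=0.05):
    """Hybrid sketch: cells are agents on a 1D lattice; the chemoattractant
    is a per-site quantity obeying dC/dt = S - k*C (Euler-integrated),
    crudely smoothed so the gradient reaches distant cells."""
    conc = [0.0] * n_sites
    cells = [0] * n_cells                       # all agents start at site 0
    for _ in range(steps):
        # continuous part: update molecule quantities by the rate equation
        for i in range(n_sites):
            source = secretion if i == source_site else 0.0
            conc[i] += dt * (source - decay * conc[i])
        # simple three-point smoothing as a stand-in for diffusion
        conc = [conc[max(i - 1, 0)] / 4 + conc[i] / 2
                + conc[min(i + 1, n_sites - 1)] / 4
                for i in range(n_sites)]
        # agent part: each cell steps toward the higher local concentration
        for c in range(n_cells):
            pos = cells[c]
            left = conc[pos - 1] if pos > 0 else -1.0
            right = conc[pos + 1] if pos < n_sites - 1 else -1.0
            if right >= left:
                cells[c] = min(pos + 1, n_sites - 1)
            else:
                cells[c] = max(pos - 1, 0)
    return cells, conc

final_cells, final_conc = run_chemotaxis()
```

Even this caricature shows the division of labor: the molecule field is cheap to update as aggregate quantities, while individual-level rules govern the agents.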

  8. Bayesian model selection validates a biokinetic model for zirconium processing in humans

    PubMed Central

    2012-01-01

    Background In radiation protection, biokinetic models for zirconium processing are of crucial importance in dose estimation and further risk analysis for humans exposed to this radioactive substance. They provide limiting values of detrimental effects and build the basis for applications in internal dosimetry, the prediction for radioactive zirconium retention in various organs as well as retrospective dosimetry. Multi-compartmental models are the tool of choice for simulating the processing of zirconium. Although easily interpretable, determining the exact compartment structure and interaction mechanisms is generally daunting. In the context of observing the dynamics of multiple compartments, Bayesian methods provide efficient tools for model inference and selection. Results We are the first to apply a Markov chain Monte Carlo approach to compute Bayes factors for the evaluation of two competing models for zirconium processing in the human body after ingestion. Based on in vivo measurements of human plasma and urine levels we were able to show that a recently published model is superior to the standard model of the International Commission on Radiological Protection. The Bayes factors were estimated by means of the numerically stable thermodynamic integration in combination with a recently developed copula-based Metropolis-Hastings sampler. Conclusions In contrast to the standard model the novel model predicts lower accretion of zirconium in bones. This results in lower levels of noxious doses for exposed individuals. Moreover, the Bayesian approach allows for retrospective dose assessment, including credible intervals for the initially ingested zirconium, in a significantly more reliable fashion than previously possible. All methods presented here are readily applicable to many modeling tasks in systems biology. PMID:22863152
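Thermodynamic integration estimates a log marginal likelihood as the integral over an inverse-temperature ladder of the power-posterior expectation of the log likelihood. The sketch below replaces the paper's MCMC with exact sums on a discrete parameter grid, and the two "models", data value, and grids are all hypothetical:

```python
import math

def log_evidence_ti(log_lik, prior, betas):
    """log Z via thermodynamic integration on a discrete parameter grid:
    log Z = integral_0^1 E_beta[log L] d(beta), with the expectation taken
    under the power posterior p_beta proportional to prior * L**beta.
    Expectations are exact sums here; in practice they come from MCMC."""
    means = []
    for b in betas:
        w = [p * math.exp(b * ll) for p, ll in zip(prior, log_lik)]
        z = sum(w)
        means.append(sum(wi * ll for wi, ll in zip(w, log_lik)) / z)
    # trapezoidal rule over the inverse-temperature ladder
    return sum(0.5 * (means[i] + means[i + 1]) * (betas[i + 1] - betas[i])
               for i in range(len(betas) - 1))

def make_model(center):
    """Toy model: uniform prior on a grid around `center`, Gaussian
    likelihood of a single observation y = 0.4 (all values hypothetical)."""
    grid = [center + 0.05 * (k - 20) for k in range(41)]
    prior = [1.0 / len(grid)] * len(grid)
    log_lik = [-0.5 * (0.4 - th) ** 2 / 0.1 ** 2 for th in grid]
    return log_lik, prior

betas = [i / 50 for i in range(51)]
logZ_A = log_evidence_ti(*make_model(0.4), betas)   # prior brackets the data
logZ_B = log_evidence_ti(*make_model(2.0), betas)   # prior centred far away
log_bayes_factor = logZ_A - logZ_B                  # > 0 favours model A
```

The Bayes factor comes out strongly in favour of the model whose prior is consistent with the observation, mirroring how the paper's Bayes factors discriminate between competing biokinetic models.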

  9. Time series modeling by a regression approach based on a latent process.

    PubMed

    Chamroukhi, Faicel; Samé, Allou; Govaert, Gérard; Aknin, Patrice

    2009-01-01

    Time series are used in many domains, including finance, engineering, economics and bioinformatics, generally to represent the change of a measurement over time. Modeling techniques may then be used to give a synthetic representation of such data. A new approach for time series modeling is proposed in this paper. It consists of a regression model incorporating a discrete hidden logistic process, allowing smooth or abrupt switching among different polynomial regression models. The model parameters are estimated by the maximum likelihood method performed by a dedicated Expectation Maximization (EM) algorithm. The M step of the EM algorithm uses a multi-class Iterative Reweighted Least-Squares (IRLS) algorithm to estimate the hidden process parameters. To evaluate the proposed approach, an experimental study on simulated data and real world data was performed using two alternative approaches: a heteroskedastic piecewise regression model using a global optimization algorithm based on dynamic programming, and a Hidden Markov Regression Model whose parameters are estimated by the Baum-Welch algorithm. Finally, in the context of the remote monitoring of components of the French railway infrastructure, and more particularly the switch mechanism, the proposed approach has been applied to modeling and classifying time series representing the condition measurements acquired during switch operations.
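The hidden logistic process can be pictured as time-varying softmax weights that activate each polynomial regime; the slope of the logistic scores controls whether the switch is smooth or abrupt. The sketch below shows only this forward computation (not the EM/IRLS estimation), and all parameter values are invented for illustration:

```python
import math

def logistic_weights(t, w, b):
    """pi_k(t) = softmax_k(w_k * t + b_k): the hidden logistic process.
    Small |w_k| gives smooth transitions between regimes; large |w_k|
    gives abrupt switching."""
    scores = [wk * t + bk for wk, bk in zip(w, b)]
    m = max(scores)                        # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def mixture_mean(t, w, b, polys):
    """Model mean at time t: logistic-weighted sum of the per-regime
    polynomial means (polys holds coefficient lists, lowest degree first)."""
    pis = logistic_weights(t, w, b)
    means = [sum(c * t ** i for i, c in enumerate(coefs)) for coefs in polys]
    return sum(p * m for p, m in zip(pis, means))

# Two hypothetical regimes: constant 0 before t ~ 5, constant 10 after,
# with a sharp logistic transition at t = 5
w, b = [0.0, 8.0], [40.0, 0.0]
polys = [[0.0], [10.0]]
```

Here `mixture_mean` traces a step-like curve from 0 to 10 around t = 5; flattening the score slopes would turn the step into a gradual ramp.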

  10. One-dimension modeling on the parallel-plate ion extraction process based on a non-electron-equilibrium fluid model

    NASA Astrophysics Data System (ADS)

    Li, He-Ping; Chen, Jian; Guo, Heng; Jiang, Dong-Jun; Zhou, Ming-Sheng; Department of Engineering Physics Team

    2017-10-01

    Ion extraction from a plasma under an externally applied electric field involves multi-particle and multi-field interactions, and has wide applications in the fields of materials processing, etching, chemical analysis, etc. In order to develop high-efficiency ion extraction methods, it is indispensable to establish a feasible model to understand the non-equilibrium transport processes of the charged particles and the evolution of the space-charge sheath during the extraction process. Most previous studies of the ion extraction process are based on the electron-equilibrium fluid model, which assumes that the electrons are in a thermodynamic equilibrium state. However, this may lead to confusion by neglecting electron motion during the sheath formation process. In this study, a non-electron-equilibrium model is established to describe the transport of the charged particles in a parallel-plate ion extraction process. The numerical results show that the formation of the Child-Langmuir sheath is mainly caused by charge separation. Thus, the sheath shielding effect will be significantly weakened if the charge separation is suppressed during the extraction of the charged particles.
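The Child-Langmuir sheath mentioned above obeys the classical space-charge-limited current law for a planar gap. A quick numerical check can be written as follows; the argon-like ion mass, voltage, and gap width are illustrative, not values from the study:

```python
import math

EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
E_CHARGE = 1.602176634e-19   # elementary charge, C

def child_langmuir_j(voltage, gap, mass):
    """Space-charge-limited current density (A/m^2) across a planar gap:
    J = (4*eps0/9) * sqrt(2*q/m) * V**1.5 / d**2."""
    return (4.0 * EPS0 / 9.0) * math.sqrt(2.0 * E_CHARGE / mass) \
        * voltage ** 1.5 / gap ** 2

# Illustrative numbers: singly charged ion of mass 40 u (argon-like),
# 1 kV across a 1 cm gap
m_ion = 40 * 1.66053906660e-27
j = child_langmuir_j(1000.0, 0.01, m_ion)   # a few A/m^2
```

The V^{3/2} scaling is the signature of the space-charge limit: quadrupling the voltage multiplies the extractable current density by eight.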

  11. Modelling hydrological processes in mountainous permafrost basin in North-East of Russia

    NASA Astrophysics Data System (ADS)

    Makarieva, Olga; Lebedeva, Lyudmila; Nesterova, Natalia

    2017-04-01

    The hydrological processes of continuous permafrost regions, and projections of their changes in future, have received much attention in recent years. Such studies are limited by the availability of long-term joint observations of permafrost dynamics and river runoff, which would allow the mechanisms of interaction to be revealed, their historical dynamics to be tracked, and future changes to be projected. The Kolyma Water-Balance Station (KWBS), the Kontaktovy Creek watershed with an area of 22 km2, is situated in the zone of continuous permafrost in the upper reaches of the Kolyma River (Magadan district of Russia). The topography at the KWBS is mountainous, with elevations up to 1700 m. Permafrost thickness ranges from 100 to 400 m, with temperatures of -4...-6 °C. Detailed observations of river runoff, active-layer dynamics and water balance were carried out at the KWBS from 1948 to 1997. After that, permafrost studies ceased, but the runoff gauges have remained in use and now provide continuous observation series of up to 68 years. The hydrological processes at the KWBS are representative of the vast north-eastern region of Russia, where the standard observational network is very scarce. We aim to study and model the mechanisms of interaction between permafrost and runoff, including water flow paths in different landscapes of mountainous permafrost, based on the detailed historical data of the KWBS and the analysis of the stable-isotope composition of water samples collected at the KWBS in 2016. Mathematical modelling of soil temperature, active-layer properties and dynamics, flow formation and interactions between ground and surface water is performed by means of the Hydrograph model (Vinogradov et al. 2011, Semenova et al. 2013). The model algorithms combine process-based and conceptual approaches, which allows for maintaining a balance between the complexity of model design and the use of limited input information.
The method for modeling heat dynamics in soil was integrated into Hydrograph

  12. Shape models of asteroids based on lightcurve observations with BlueEye600 robotic observatory

    NASA Astrophysics Data System (ADS)

    Ďurech, Josef; Hanuš, Josef; Brož, Miroslav; Lehký, Martin; Behrend, Raoul; Antonini, Pierre; Charbonnel, Stephane; Crippa, Roberto; Dubreuil, Pierre; Farroni, Gino; Kober, Gilles; Lopez, Alain; Manzini, Federico; Oey, Julian; Poncy, Raymond; Rinner, Claudine; Roy, René

    2018-04-01

    We present physical models, i.e. convex shapes, directions of the rotation axis, and sidereal rotation periods, of 18 asteroids out of which 10 are new models and 8 are refined models based on much larger data sets than in previous work. The models were reconstructed by the lightcurve inversion method from archived publicly available lightcurves and our new observations with BlueEye600 robotic observatory. One of the new results is the shape model of asteroid (1663) van den Bos with the rotation period of 749 h, which makes it the slowest rotator with known shape. We describe our strategy for target selection that aims at fast production of new models using the enormous potential of already available photometry stored in public databases. We also briefly describe the control software and scheduler of the robotic observatory and we discuss the importance of building a database of asteroid models for studying asteroid physical properties in collisional families.

  13. 2D Process-based Microbialite Growth Model

    NASA Astrophysics Data System (ADS)

    Airo, A.; Smith, A.

    2007-12-01

    A 2D process-based microbialite growth model (MGM) has been developed that integrates the coupled effects of microbialite growth and sediment distribution within a two-dimensional cross-section of a subaqueous bedrock profile. Sediment transport is realized through particle erosion and deposition, which are a function of local wave energy computed on the basis of linear wave theory. Surface-normal microbialite growth is directly correlated with light intensity, which is computed for every point of the microbialite surface by using a Henyey-Greenstein-type relation for scattering and Beer's law for absorption in the water column. Shadowing effects by surrounding obstacles and/or overlying sediment are also considered. Sediment particles can be incorporated into the microbialite framework if growth occurs in the presence of sediment. The resulting meter-size microbialite constructs develop morphologies that correspond well to natural microbialites. Furthermore, changes in environmental factors such as light intensity, wave energy, and bedrock profile result in morphological variations of the microbialites that would be expected on the basis of the current understanding of microbialite growth and development.
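The two optical ingredients named above have standard closed forms: Beer's law exponential absorption and the Henyey-Greenstein phase function for anisotropic scattering. A minimal sketch (illustrative only; the MGM's actual coupling of these to the growth rule is not reproduced here):

```python
import math

def beer_lambert(i0, k_abs, depth):
    """Light intensity after absorption over `depth` of water column
    (Beer's law): I = I0 * exp(-k * z)."""
    return i0 * math.exp(-k_abs * depth)

def henyey_greenstein(cos_theta, g):
    """Henyey-Greenstein phase function p(theta), normalized so that its
    integral over the sphere is 1; the asymmetry parameter g in (-1, 1)
    sets the anisotropy (g > 0 = forward-peaked scattering)."""
    return (1.0 - g * g) / (
        4.0 * math.pi * (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5)
```

For a forward-peaked medium (g > 0), far more light continues downward than is scattered back, which is why deeper parts of a growth surface still receive appreciable scattered light in such models.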

  14. Crop monitoring & yield forecasting system based on Synthetic Aperture Radar (SAR) and process-based crop growth model: Development and validation in South and South East Asian Countries

    NASA Astrophysics Data System (ADS)

    Setiyono, T. D.

    2014-12-01

    Accurate and timely information on rice crop growth and yield helps governments and other stakeholders adapt their economic policies and enables relief organizations to better anticipate and coordinate relief efforts in the wake of a natural catastrophe. Such delivery of rice growth and yield information is made possible by regular Earth observation using space-borne Synthetic Aperture Radar (SAR) technology combined with a crop modeling approach to estimate yield. Radar-based remote sensing is capable of observing rice vegetation growth irrespective of cloud coverage, an important feature given that during flooding the sky is often cloud-covered. The system allows rapid damage assessment over the area of interest. Rice yield monitoring is based on a crop growth simulation and SAR-derived key information, particularly start of season and leaf growth rate. Results from pilot study sites in South and South East Asian countries suggest that incorporating SAR data into the crop model improves estimates of actual yields. Assimilating remote-sensing data into the crop model effectively captures the responses of rice crops to environmental conditions over large spatial coverage, which is otherwise practically impossible to achieve. Such improvement of actual yield estimates offers practical applications, for example in crop insurance programs. A process-based crop simulation model is used in the system to ensure that climate information is adequately captured and to enable mid-season yield forecasts.

  15. Robust analysis of semiparametric renewal process models

    PubMed Central

    Lin, Feng-Chang; Truong, Young K.; Fine, Jason P.

    2013-01-01

    Summary A rate model is proposed for a modulated renewal process comprising a single long sequence, where the covariate process may not capture the dependencies in the sequence as in standard intensity models. We consider partial likelihood-based inferences under a semiparametric multiplicative rate model, which has been widely studied in the context of independent and identical data. Under an intensity model, gap times in a single long sequence may be used naively in the partial likelihood with variance estimation utilizing the observed information matrix. Under a rate model, the gap times cannot be treated as independent and studying the partial likelihood is much more challenging. We employ a mixing condition in the application of limit theory for stationary sequences to obtain consistency and asymptotic normality. The estimator's variance is quite complicated owing to the unknown gap times dependence structure. We adapt block bootstrapping and cluster variance estimators to the partial likelihood. Simulation studies and an analysis of a semiparametric extension of a popular model for neural spike train data demonstrate the practical utility of the rate approach in comparison with the intensity approach. PMID:24550568
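The block-bootstrap variance estimator adapted above can be sketched for the simplest target, the sample mean of a dependent sequence; resampling contiguous blocks preserves the short-range dependence that an i.i.d. resample (or a naive observed-information variance) would destroy. The AR(1)-like series, block length, and replicate count below are illustrative:

```python
import random

def block_bootstrap_var(series, block_len, n_boot, seed=0):
    """Variance of the sample mean estimated by a circular block
    bootstrap: resample whole contiguous blocks (wrapping at the end),
    so within-block dependence survives in each replicate."""
    rng = random.Random(seed)
    n = len(series)
    n_blocks = (n + block_len - 1) // block_len
    means = []
    for _ in range(n_boot):
        resampled = []
        for _ in range(n_blocks):
            start = rng.randrange(n)
            resampled.extend(series[(start + j) % n] for j in range(block_len))
        resampled = resampled[:n]
        means.append(sum(resampled) / n)
    grand = sum(means) / n_boot
    return sum((m - grand) ** 2 for m in means) / (n_boot - 1)

# Positively correlated toy sequence (AR(1)-like, purely illustrative)
gen = random.Random(42)
x, data = 0.0, []
for _ in range(500):
    x = 0.8 * x + gen.gauss(0.0, 1.0)
    data.append(x)

var_block = block_bootstrap_var(data, block_len=25, n_boot=200)
var_iid = block_bootstrap_var(data, block_len=1, n_boot=200)   # ignores dependence
```

For positively correlated gap times the i.i.d. resample badly understates the variance, which is the practical motivation for the block and cluster estimators in the paper.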

  16. Simulating boreal forest carbon dynamics after stand-replacing fire disturbance: insights from a global process-based vegetation model

    NASA Astrophysics Data System (ADS)

    Yue, C.; Ciais, P.; Luyssaert, S.; Cadule, P.; Harden, J.; Randerson, J.; Bellassen, V.; Wang, T.; Piao, S. L.; Poulter, B.; Viovy, N.

    2013-04-01

    Stand-replacing fires are the dominant fire type in North American boreal forest and leave a historical legacy of a mosaic landscape of different-aged forest cohorts. To accurately quantify the role of fire in historical and current regional forest carbon balance using models, one needs to explicitly simulate the new forest cohort that is established after fire. The present study adapted the global process-based vegetation model ORCHIDEE to simulate boreal forest fire CO2 emissions and follow-up recovery after a stand-replacing fire, with representation of postfire new-cohort establishment, forest stand structure and the subsequent self-thinning process. Simulation results are evaluated against three clusters of postfire forest chronosequence observations in Canada and Alaska. Evaluation variables for simulated postfire carbon dynamics include: fire carbon emissions, CO2 fluxes (gross primary production, total ecosystem respiration and net ecosystem exchange), leaf area index (LAI), and biometric measurements (aboveground biomass carbon, forest floor carbon, woody debris carbon, stand individual density, stand basal area, and mean diameter at breast height). The model simulation results, when forced by local climate and the atmospheric CO2 history on each chronosequence site, generally match the observed CO2 fluxes and carbon stock data well, with a model-measurement root-mean-square deviation comparable with measurement accuracy (for CO2 flux ~100 g C m-2 yr-1, for biomass carbon ~1000 g C m-2 and for soil carbon ~2000 g C m-2). We find that the current postfire forest carbon sink on evaluation sites observed by chronosequence methods is mainly driven by the historical atmospheric CO2 increase when forests recover from fire disturbance. Historical climate generally exerts a negative effect, probably due to increasing water stress caused by significant temperature increase without sufficient increase in precipitation. Our simulation results demonstrate that a global

  17. Multi-model comparison on the effects of climate change on tree species in the eastern U.S.: results from an enhanced niche model and process-based ecosystem and landscape models

    Treesearch

    Louis R. Iverson; Frank R. Thompson; Stephen Matthews; Matthew Peters; Anantha Prasad; William D. Dijak; Jacob Fraser; Wen J. Wang; Brice Hanberry; Hong He; Maria Janowiak; Patricia Butler; Leslie Brandt; Chris Swanston

    2016-01-01

    Context. Species distribution models (SDM) establish statistical relationships between the current distribution of species and key attributes whereas process-based models simulate ecosystem and tree species dynamics based on representations of physical and biological processes. TreeAtlas, which uses DISTRIB SDM, and Linkages and LANDIS PRO, process...

  18. Moon-based Earth Observation for Large Scale Geoscience Phenomena

    NASA Astrophysics Data System (ADS)

    Guo, Huadong; Liu, Guang; Ding, Yixing

    2016-07-01

    The capability of Earth observation for large-global-scale natural phenomena needs to be improved, and new observing platforms are expected. We have studied the concept of the Moon as an Earth observation platform in recent years. Compared with man-made satellite platforms, Moon-based Earth observation can obtain multi-spherical, full-band, active and passive information, which offers the following advantages: a large observation range, variable view angles, long-term continuous observation, and an extra-long life cycle, with the characteristics of longevity, consistency, integrity, stability and uniqueness. Moon-based Earth observation is suitable for monitoring large-scale geoscience phenomena, including large-scale atmospheric change, large-scale ocean change, large-scale land-surface dynamic change, solid-Earth dynamic change, etc. For the purpose of establishing a Moon-based Earth observation platform, we plan to study the following five aspects: mechanisms and models of the macroscopic Earth-science phenomena observable from the Moon; optimization of sensor parameters and methods for Moon-based Earth observation; site selection and the environment of Moon-based Earth observation; the Moon-based Earth observation platform itself; and a fundamental scientific framework for Moon-based Earth observation.

  19. Ares Upper Stage Processes to Implement Model Based Design - Going Paperless

    NASA Technical Reports Server (NTRS)

    Gregory, Melanie

    2012-01-01

    Computer-Aided Design (CAD) has all but replaced the drafting board for design work. Increased productivity and accuracy should be natural outcomes of using CAD. Going from paper drawings only to paper drawings based on CAD models to CAD models and no drawings, or Model Based Design (MBD), is a natural progression in today's world. There are many advantages to MBD over traditional design methods. To make the most of those advantages, standards should be in place and the proper foundation should be laid prior to transitioning to MBD. However, without a full understanding of the implications of MBD and the proper control of the data, the advantages are greatly diminished. Transitioning from a paper design world to an electronic design world means re-thinking how information gets controlled at its origin and distributed from one point to another. It means design methodology is critical, especially for large projects. It means preparation of standardized parts and processes as well as strong communication between all parties in order to maximize the benefits of MBD.

  20. Linking Volcano Infrasound Observations to Conduit Processes for Vulcanian Eruptions

    NASA Astrophysics Data System (ADS)

    Watson, L. M.; Dunham, E. M.; Almquist, M.; Mattsson, K.; Ampong, K.

    2016-12-01

    Volcano infrasound observations have been used to infer a range of eruption parameters, such as volume flux and exit velocity, with the majority of work focused on subaerial processes. Here, we propose using infrasound observations to investigate the subsurface processes of the volcanic system. We develop a one-dimensional model of the volcanic system, coupling an unsteady conduit model to a description of a volcanic jet with sound waves generated by the expansion of the jet. The conduit model describes isothermal two-phase flow with no relative motion between the phases. We are currently working on including crystals and adding conservation of energy to the governing equations. The model captures the descent of the fragmentation front into the conduit and approaches a steady state solution with choked flow at the vent. The descending fragmentation front influences the time history of mass discharge from the vent, which is linked to the infrasound signal through the volcanic jet model. The jet model is coupled to the conduit by conservation of mass, momentum, and energy. We compare simulation results for a range of models of the volcanic jet, ranging in complexity from assuming conservation of volume, as has been done in some previous infrasound studies, to solving the Euler equations for the surrounding compressible atmosphere and accounting for entrainment. Our model is designed for short-lived, impulsive Vulcanian eruptions, such as those seen at Sakurajima Volcano, with activity triggered by a sudden drop in pressure at the top of the conduit. The intention is to compare the simulated signals to observations and to devise an inverse procedure to enable inversion for conduit properties.

  1. Model Uncertainty Quantification Methods For Data Assimilation In Partially Observed Multi-Scale Systems

    NASA Astrophysics Data System (ADS)

    Pathiraja, S. D.; van Leeuwen, P. J.

    2017-12-01

    Model Uncertainty Quantification remains one of the central challenges of effective Data Assimilation (DA) in complex, partially observed, non-linear systems. Stochastic parameterization methods have been proposed in recent years as a means of capturing the uncertainty associated with unresolved sub-grid-scale processes. Such approaches generally require some knowledge of the true sub-grid-scale process or rely on full observations of the larger-scale resolved process. We present a methodology for estimating the statistics of sub-grid-scale processes using only partial observations of the resolved process. It finds model-error realisations over a training period by minimizing their conditional variance, constrained by available observations. A key feature is that these realisations are binned, conditioned on the previous model state, during the minimization, allowing for the recovery of complex error structures. The efficacy of the approach is demonstrated through numerical experiments on the multi-scale Lorenz '96 model. We consider different parameterizations of the model with both small and large time-scale separations between slow and fast variables. Results are compared to two existing methods for accounting for model uncertainty in DA and shown to provide improved analyses and forecasts.
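The conditional-binning step can be isolated in a small sketch: given model-error realisations paired with the previous model state, bin the errors on that state to recover a state-dependent error statistic. How the realisations are obtained (the constrained minimization) is abstracted away, and the toy error structure is invented for illustration:

```python
def binned_error_stats(prev_states, errors, n_bins, lo, hi):
    """Mean model error per bin of the previous model state. Equal-width
    bins on [lo, hi); out-of-range states are clamped to the end bins."""
    sums = [0.0] * n_bins
    counts = [0] * n_bins
    width = (hi - lo) / n_bins
    for s, e in zip(prev_states, errors):
        k = min(n_bins - 1, max(0, int((s - lo) / width)))
        sums[k] += e
        counts[k] += 1
    return [sums[k] / counts[k] if counts[k] else 0.0 for k in range(n_bins)]

# Toy state-dependent error: the error grows linearly with the state,
# a structure the binning should recover (hypothetical numbers)
states = [i / 10 for i in range(100)]       # states spanning [0, 10)
errors = [0.2 * s for s in states]
stats = binned_error_stats(states, errors, n_bins=5, lo=0.0, hi=10.0)
```

The recovered bin means increase with the state, i.e. the binning has captured the state dependence that a single global error statistic would average away.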

  2. Forecasting sea cliff retreat in Southern California using process-based models and artificial neural networks

    NASA Astrophysics Data System (ADS)

    Limber, P. W.; Barnard, P.; Erikson, L. H.

    2016-02-01

    Modeling coastal geomorphic change over multi-decadal time and regional spatial scales (i.e. >20 km alongshore) is in high demand due to rising global sea levels and heavily populated coastal zones, but is challenging for several reasons: adequate geomorphic and oceanographic data often do not exist over the entire study area or time period; models can be too computationally expensive; and model uncertainty is high. In the absence of rich datasets and unlimited computer processing power, researchers are forced to leverage existing data, however sparse, and find analytical methods that minimize computation time without sacrificing (too much) model reliability. Machine learning techniques, such as artificial neural networks, can assimilate and efficiently extrapolate geomorphic model behavior over large areas. They can also facilitate ensemble model forecasts over a broad range of parameter space, which is useful when a paucity of observational data inhibits the constraint of model parameters. Here, we assimilate the behavior of two established process-based sea cliff erosion and retreat models into a neural network to forecast the impacts of sea level rise on sea cliff retreat in Southern California (~400 km) through the 21st century. Using inputs such as historical cliff retreat rates, mean wave power, and whether or not a beach is present, the neural network independently reproduces modeled sea cliff retreat as a function of sea level rise with a high degree of confidence (R2 > 0.9, mean squared error < 0.1 m yr-1). Results will continuously improve as more model scenarios are assimilated into the neural network, and more field data (i.e., cliff composition and rock hardness) becomes available to tune the cliff retreat models. Preliminary results suggest that sea level rise rates of 2 to 20 mm yr-1 during the next century could accelerate historical cliff retreat rates in Southern California by an average of 0.10 - 0.56 m yr-1.

  3. A geomorphology-based ANFIS model for multi-station modeling of rainfall-runoff process

    NASA Astrophysics Data System (ADS)

    Nourani, Vahid; Komasi, Mehdi

    2013-05-01

    This paper demonstrates the potential use of Artificial Intelligence (AI) techniques for predicting daily runoff at multiple gauging stations. Uncertainty and complexity of the rainfall-runoff process, due to its variability in space and time on the one hand and the lack of historical data on the other, cause difficulties in the spatiotemporal modeling of the process. In this paper, an Integrated Geomorphological Adaptive Neuro-Fuzzy Inference System (IGANFIS) model conjugated with the C-means clustering algorithm was used for rainfall-runoff modeling at multiple stations of the Eel River watershed, California. The proposed model can be used for predicting runoff at stations lacking data, or at any sub-basin within the watershed, because the spatial and temporal variables of the sub-basins are employed as the model inputs. This ability of the integrated model for spatiotemporal modeling of the process was examined through the cross-validation technique for a station. In this way, different ANFIS structures were trained using the Sugeno algorithm in order to estimate daily discharge values at different stations. In order to improve the model efficiency, the input data were then classified into clusters by means of the fuzzy C-means (FCM) method. The goodness-of-fit measures support the gainful use of the IGANFIS and FCM methods in spatiotemporal modeling of hydrological processes.
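The fuzzy C-means clustering step can be sketched in a few lines. This is an illustrative pure-Python version on 1-D toy data with hypothetical settings; the paper applies FCM to multidimensional ANFIS inputs:

```python
# Minimal fuzzy C-means (FCM): alternate between membership updates and
# membership-weighted center updates until (approximate) convergence.
def fcm(data, c=2, m=2.0, iters=100):
    """Cluster 1-D data into c fuzzy clusters; return centers, memberships."""
    centers = data[:c]                       # crude initialisation
    u = [[0.0] * c for _ in data]
    for _ in range(iters):
        # Update memberships from distances to the current centers.
        for i, x in enumerate(data):
            d = [abs(x - ck) or 1e-12 for ck in centers]
            for k in range(c):
                u[i][k] = 1.0 / sum((d[k] / dj) ** (2 / (m - 1)) for dj in d)
        # Update centers as membership-weighted means.
        centers = [
            sum(u[i][k] ** m * x for i, x in enumerate(data))
            / sum(u[i][k] ** m for i in range(len(data)))
            for k in range(c)
        ]
    return centers, u

data = [1.0, 1.2, 0.9, 1.1, 8.0, 8.3, 7.9, 8.1]
centers, u = fcm(data)
print(sorted(round(ck, 1) for ck in centers))   # two well-separated centers
```

Each point receives a degree of membership in every cluster rather than a hard label, which is what lets the clustered inputs feed fuzzy inference systems such as ANFIS.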

  4. Unifying Model-Based and Reactive Programming within a Model-Based Executive

    NASA Technical Reports Server (NTRS)

    Williams, Brian C.; Gupta, Vineet; Norvig, Peter (Technical Monitor)

    1999-01-01

    Real-time, model-based, deduction has recently emerged as a vital component in AI's tool box for developing highly autonomous reactive systems. Yet one of the current hurdles towards developing model-based reactive systems is the number of methods simultaneously employed, and their corresponding melange of programming and modeling languages. This paper offers an important step towards unification. We introduce RMPL, a rich modeling language that combines probabilistic, constraint-based modeling with reactive programming constructs, while offering a simple semantics in terms of hidden state Markov processes. We introduce probabilistic, hierarchical constraint automata (PHCA), which allow Markov processes to be expressed in a compact representation that preserves the modularity of RMPL programs. Finally, a model-based executive, called Reactive Burton, is described that exploits this compact encoding to perform efficient simulation, belief state update and control sequence generation.

  5. Models for Temperature and Composition in Uranus from Spitzer, Herschel and Ground-Based Infrared through Millimeter Observations

    NASA Astrophysics Data System (ADS)

    Orton, Glenn S.; Fletcher, Leigh N.; Feuchtgruber, Helmut; Lellouch, Emmanuel; Moreno, Raphael; Encrenaz, Therese; Hartogh, Paul; Jarchow, Christopher; Swinyard, Bruce; Cavalie, Thibault; Moses, Julianne; Burgdorf, Martin; Hammel, Heidi; Line, Michael; Mainzer, Amy K.; Hofstadter, Mark; Sandell, Goran H.; Dowell, C. Darren; Pantin, Eric; Fujiyoshi, Takuya

    2014-11-01

    Photometric and spectroscopic observations of Uranus in the thermal infrared were combined to create self-consistent models of its global-mean temperature profile and vertical distribution of gases. These were derived from a suite of observations from Spitzer and Herschel, together with ground-based observations from UKIRT, CSO, Gemini, VLT and Subaru. Observations of the collision-induced absorption and quadrupoles of H2 have constrained the temperature structure for pressures from nearly 2 bars down to 0.1 millibars. We coupled the vertical distribution of CH4 in the stratosphere of Uranus with models for the vertical mixing in such a way as to be consistent with the mixing ratios of hydrocarbons. Spitzer and Herschel data constrain the abundances of CH3, CH4, C2H2, C2H6, C3H4, C4H2, H2O and CO2. The Spitzer IRS data, in concert with photochemical models, show that the homopause is at much higher atmospheric pressures than for the other outer planets, with the predominant trace constituents for pressures lower than 30 µbar being H2O and CO2. The ratio of the oxygen-bearing molecules is consistent with exogenic origins in KBOs or comets. At millimeter wavelengths, there is evidence that an additional opacity source is required besides the H2 collision-induced absorption and the NH3 absorption needed to match the microwave spectrum; this can reasonably (but not uniquely) be attributed to H2S. This model is of ‘programmatic’ interest because it serves as a standard calibration source; the cross-comparison of its spectrum with model spectra for Mars and Neptune shows consistency to within 3%. Near equinox, the IRS spectra at different longitudes showed rotationally variable stratospheric emission that is consistent with a temperature anomaly ≤10 K near ~0.1-0.2 mbar. Spatial variability of tropospheric temperatures observed in ground-based observations from 2006 to 2011 is generally consistent with Voyager infrared (IRIS) results.

  6. Root traits explain observed tundra vegetation nitrogen uptake patterns: Implications for trait-based land models: Tundra N Uptake Model-Data Comparison

    DOE PAGES

    Zhu, Qing; Iversen, Colleen M.; Riley, William J.; ...

    2016-12-23

    Ongoing climate warming will likely perturb vertical distributions of nitrogen availability in tundra soils through enhancing nitrogen mineralization and releasing previously inaccessible nitrogen from frozen permafrost soil. However, arctic tundra responses to such changes are uncertain, because of a lack of vertically explicit nitrogen tracer experiments and untested hypotheses of root nitrogen uptake under the stress of microbial competition implemented in land models. We conducted a vertically explicit 15N tracer experiment for three dominant tundra species to quantify plant N uptake profiles. Then we applied a nutrient competition model (N-COM), which is being integrated into the ACME Land Model, to explain the observations. Observations using a 15N tracer showed that plant N uptake profiles were not consistently related to root biomass density profiles, which challenges the prevailing hypothesis that root density always exerts first-order control on N uptake. By considering essential root traits (e.g., biomass distribution and nutrient uptake kinetics) with an appropriate plant-microbe nutrient competition framework, our model reasonably reproduced the observed patterns of plant N uptake. Additionally, we show that previously applied nutrient competition hypotheses in Earth System Land Models fail to explain the diverse plant N uptake profiles we observed. These results cast doubt on current climate-scale model predictions of arctic plant responses to elevated nitrogen supply under a changing climate and highlight the importance of considering essential root traits in large-scale land models. Finally, we provided suggestions and a short synthesis of data availability for future trait-based land model development.

  8. Models for Temperature and Composition in Uranus from Spitzer, Herschel and Ground-Based Infrared through Millimeter Observations

    NASA Astrophysics Data System (ADS)

    Orton, Glenn; Fletcher, Leigh; Feuchtgruber, Helmut; Lellouch, Emmanuel; Moreno, Raphael; Hartogh, Paul; Jarchow, Christopher; Swinyard, Bruce; Moses, Julianne; Burgdorf, Martin; Hammel, Heidi; Line, Michael; Mainzer, Amy; Hofstadter, Mark; Sandell, Goran; Dowell, Charles

    2014-05-01

    Photometric and spectroscopic observations of Uranus were combined to create self-consistent models of its global-mean temperature profile, bulk composition, and vertical distribution of gases. These were derived from a suite of spacecraft and ground-based observations that includes the Spitzer IRS, and the Herschel HIFI, PACS and SPIRE instruments, together with ground-based observations from UKIRT and CSO. Observations of the collision-induced absorption of H2 have constrained the temperature structure in the troposphere; this was possible up to atmospheric pressures of ~2 bars. Temperatures in the stratosphere were constrained by H2 quadrupole line emission. We coupled the vertical distribution of CH4 in the stratosphere of Uranus with models for the vertical mixing in a way that is consistent with the mixing ratios of hydrocarbons whose abundances are influenced primarily by mixing rather than chemistry. Spitzer and Herschel data constrain the abundances of CH3, CH4, C2H2, C2H6, C3H4, C4H2, H2O and CO2. The Spitzer IRS data, in concert with photochemical models, show that the homopause is at much higher pressures than for the other outer planets, with the predominant trace constituents for pressures lower than 10 μbar being H2O and CO2. At millimeter wavelengths, there is evidence that an additional opacity source is required besides the H2 collision-induced absorption and the NH3 absorption needed to match the microwave spectrum; this can reasonably (but not uniquely) be attributed to H2S. These models will be made more mature by consideration of spatial variability from Voyager IRIS and more recent spatially resolved imaging and mapping from ground-based observatories. The model is of 'programmatic' interest because it serves as a calibration source for Herschel instruments, and it provides a starting point for planning future spacecraft investigations of the atmosphere of Uranus.

  9. Modelling Middle Infrared Thermal Imagery from Observed or Simulated Active Fire

    NASA Astrophysics Data System (ADS)

    Paugam, R.; Gastellu-Etchegorry, J. P.; Mell, W.; Johnston, J.; Filippi, J. B.

    2016-12-01

    The Fire Radiative Power (FRP) is used in the atmospheric and fire communities to estimate fire emissions. For example, the current version of the emission inventory GFAS uses FRP observations from the MODIS sensors to derive daily global distributions of fire emissions. Although the FRP product is widely accepted, most of its theoretical justification is still based on small-scale burns. When up-scaling to large fires, the effects of view angle, canopy cover, or smoke absorption are still unknown. To address these questions, we are building a system based on the DART radiative transfer model to simulate the middle infrared radiance emitted by a propagating fire front and transported through the surrounding scene of ambient vegetation and plume aerosols. The current version of the system was applied to fires ranging from 1 m2 to 7 ha. The 3D fire scene used as input to DART is made of the flame, the vegetation (burnt and unburnt), and the plume. It can be set up either from (i) a 3D physics-based model scene (i.e. WFDS, mainly applicable to small-scale burns), (ii) outputs of coupled 2D fire spread and atmospheric models (e.g. ForeFire-MesoNH), or (iii) thermal imagery observations (here plume effects are not considered). In the last two cases, as the complexity of the physical processes occurring in the flame (in particular soot formation and emission) is not resolved, the flame structure is parameterized with (a) temperature and soot concentration based on empirically derived profiles and (b) a 3D triangular hull interpolated at the fire front location. Once the 3D fire scene is set up, DART is used to render thermal imagery in the middle infrared. Using data collected from burns conducted at different scales, the modelled thermal imagery is compared against observations, and effects of view angle are discussed.

  10. Improved GSO Optimized ESN Soft-Sensor Model of Flotation Process Based on Multisource Heterogeneous Information Fusion

    PubMed Central

    Wang, Jie-sheng; Han, Shuang; Shen, Na-na

    2014-01-01

    For predicting the key technology indicators (concentrate grade and tailings recovery rate) of the flotation process, an echo state network (ESN) based fusion soft-sensor model optimized by the improved glowworm swarm optimization (GSO) algorithm is proposed. Firstly, the color features (saturation and brightness) and texture features (angular second moment, sum entropy, inertia moment, etc.) based on the grey-level co-occurrence matrix (GLCM) are adopted to describe the visual characteristics of the flotation froth image. Then the kernel principal component analysis (KPCA) method is used to reduce the dimensionality of the high-dimensional input vector composed of the flotation froth image characteristics and process data, and to extract the nonlinear principal components in order to reduce the ESN dimension and network complexity. The ESN soft-sensor model of the flotation process is optimized by the GSO algorithm with a congestion factor. Simulation results show that the model has better generalization and prediction accuracy, meeting the online soft-sensor requirements of real-time control in the flotation process. PMID:24982935
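The GLCM-based texture features mentioned above can be illustrated with a small sketch. The 4x4 quantised image is hypothetical, the co-occurrence matrix is computed for horizontal neighbours only, and the entropy shown is the plain matrix entropy (simpler than the paper's sum entropy feature):

```python
import math

image = [  # hypothetical 4x4 froth image quantised to grey levels 0..3
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 2, 2, 2],
    [2, 2, 3, 3],
]
levels = 4

# Grey-level co-occurrence matrix for horizontal neighbours (dx=1, dy=0):
# count how often grey level a is immediately left of grey level b.
glcm = [[0] * levels for _ in range(levels)]
for row in image:
    for a, b in zip(row, row[1:]):
        glcm[a][b] += 1

total = sum(sum(r) for r in glcm)
p = [[v / total for v in r] for r in glcm]       # normalised probabilities

asm = sum(v * v for r in p for v in r)           # angular second moment
entropy = -sum(v * math.log(v) for r in p for v in r if v > 0)
print(f"ASM={asm:.3f} entropy={entropy:.3f}")    # ASM=0.167 entropy=1.864
```

High ASM indicates a uniform texture (few dominant grey-level pairs), while high entropy indicates a disordered one; a full feature vector would aggregate several offsets and directions.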

  11. A Semiparametric Change-Point Regression Model for Longitudinal Observations.

    PubMed

    Xing, Haipeng; Ying, Zhiliang

    2012-12-01

    Many longitudinal studies involve relating an outcome process to a set of possibly time-varying covariates, giving rise to the usual regression models for longitudinal data. When the purpose of the study is to investigate the covariate effects when the experimental environment undergoes abrupt changes, or to locate the periods with different levels of covariate effects, a simple and easy-to-interpret approach is to introduce change-points in the regression coefficients. In this connection, we propose a semiparametric change-point regression model, in which the error process (stochastic component) is nonparametric, the baseline mean function (functional part) is completely unspecified, the observation times are allowed to be subject-specific, and the number, locations, and magnitudes of the change-points are unknown and need to be estimated. We further develop an estimation procedure that combines recent advances in semiparametric analysis based on counting process arguments with multiple change-point inference, and discuss its large-sample properties, including consistency and asymptotic normality, under suitable regularity conditions. Simulation results show that the proposed methods work well under a variety of scenarios. An application to a real data set is also given.
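The core idea of locating a change-point in a regression coefficient can be sketched with a toy least-squares scan. This is a hedged illustration on synthetic data with a single change-point, not the authors' counting-process-based estimator:

```python
import random

random.seed(0)
n = 200
x = [random.uniform(0, 1) for _ in range(n)]
# True regression coefficient jumps from 1.0 to 3.0 at t = 120.
y = [(1.0 if t < 120 else 3.0) * x[t] + random.gauss(0, 0.1) for t in range(n)]

def beta_hat(xs, ys):
    """Least-squares slope through the origin."""
    return sum(a * b for a, b in zip(xs, ys)) / sum(a * a for a in xs)

def sse(xs, ys):
    """Residual sum of squares of the through-origin fit."""
    b = beta_hat(xs, ys)
    return sum((v - b * u) ** 2 for u, v in zip(xs, ys))

# Scan candidate change-points and keep the split with minimum total SSE.
best_t = min(range(20, n - 20),
             key=lambda t: sse(x[:t], y[:t]) + sse(x[t:], y[t:]))
print(best_t)   # close to the true change-point at 120
```

Extending this to an unknown number of change-points requires penalising model complexity (e.g. an information criterion) and a more careful search, which is where the paper's inferential machinery comes in.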

  12. Deployment and Evaluation of an Observations Data Model

    NASA Astrophysics Data System (ADS)

    Horsburgh, J. S.; Tarboton, D. G.; Zaslavsky, I.; Maidment, D. R.; Valentine, D.

    2007-12-01

    Environmental observations are fundamental to hydrology and water resources, and the way these data are organized and manipulated either enables or inhibits the analyses that can be performed. The CUAHSI Hydrologic Information System project is developing information technology infrastructure to support hydrologic science. This includes an Observations Data Model (ODM) that provides a new and consistent format for the storage and retrieval of environmental observations in a relational database designed to facilitate integrated analysis of large datasets collected by multiple investigators. Within this data model, observations are stored with sufficient ancillary information (metadata) about the observations to allow them to be unambiguously interpreted and used, and to provide traceable heritage from raw measurements to useable information. The design is based upon a relational database model that exposes each single observation as a record, taking advantage of the capability in relational database systems for querying based upon data values and enabling cross-dimension data retrieval and analysis. This data model has been deployed, as part of the HIS Server, at the WATERS Network test bed observatories across the U.S., where it serves as a repository for real time data in the observatory information system. The ODM holds the data that is then made available to investigators and the public through web services and the Data Access System for Hydrology (DASH) map-based interface. In the WATERS Network test bed settings the ODM has been used to ingest, analyze and publish data from a variety of sources and disciplines. This paper will present an evaluation of the effectiveness of this initial deployment and the revisions that are being instituted to address shortcomings.
The ODM represents a new, systematic way for hydrologists, scientists, and engineers to organize and share their data and thereby facilitate a fuller integrated understanding of water resources based on
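The single-observation-per-record design described above can be sketched with an illustrative relational schema. The table and column names below are simplified stand-ins inspired by the ODM, not the official specification, and the site and values are hypothetical:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Sites     (SiteID INTEGER PRIMARY KEY, SiteName TEXT);
CREATE TABLE Variables (VariableID INTEGER PRIMARY KEY,
                        VariableName TEXT, Units TEXT);
-- One observation per record, keyed to its descriptive metadata.
CREATE TABLE DataValues (
    ValueID       INTEGER PRIMARY KEY,
    DataValue     REAL,
    LocalDateTime TEXT,
    SiteID        INTEGER REFERENCES Sites(SiteID),
    VariableID    INTEGER REFERENCES Variables(VariableID)
);
""")
con.execute("INSERT INTO Sites VALUES (1, 'Logan River')")
con.execute("INSERT INTO Variables VALUES (1, 'Discharge', 'm3/s')")
con.executemany("INSERT INTO DataValues VALUES (?, ?, ?, 1, 1)",
                [(1, 2.4, '2007-06-01T00:00'), (2, 3.1, '2007-06-01T01:00')])

# Value-based querying across dimensions: observations above a threshold,
# joined back to their site and variable metadata.
rows = con.execute("""
    SELECT s.SiteName, v.VariableName, d.DataValue
    FROM DataValues d
    JOIN Sites s USING (SiteID)
    JOIN Variables v USING (VariableID)
    WHERE d.DataValue > 3.0
""").fetchall()
print(rows)
```

Because every observation is a row with explicit metadata keys, queries can slice by value, time, site, or variable without any knowledge of how the data were originally collected.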

  13. Stimulating Scientific Reasoning with Drawing-Based Modeling

    NASA Astrophysics Data System (ADS)

    Heijnes, Dewi; van Joolingen, Wouter; Leenaars, Frank

    2018-02-01

    We investigate the way students' reasoning about evolution can be supported by drawing-based modeling. We modified the drawing-based modeling tool SimSketch to allow for modeling evolutionary processes. In three iterations of development and testing, students in lower secondary education worked on creating an evolutionary model. After each iteration, the user interface and instructions were adjusted based on students' remarks and the teacher's observations. Students' conversations were analyzed on reasoning complexity as a measurement of efficacy of the modeling tool and the instructions. These findings were also used to compose a set of recommendations for teachers and curriculum designers for using and constructing models in the classroom. Our findings suggest that to stimulate scientific reasoning in students working with a drawing-based modeling tool, instruction about the tool and the domain should be integrated. In creating models, a sufficient level of scaffolding is necessary. Without appropriate scaffolds, students are not able to create the model. With scaffolding that is too high, students may show reasoning that incorrectly assigns external causes to behavior in the model.

  14. CrowdWater - Can people observe what models need?

    NASA Astrophysics Data System (ADS)

    van Meerveld, I. H. J.; Seibert, J.; Vis, M.; Etter, S.; Strobl, B.

    2017-12-01

    CrowdWater (www.crowdwater.ch) is a citizen science project that explores the usefulness of crowd-sourced data for hydrological model calibration and prediction. Hydrological models are usually calibrated based on observed streamflow data but it is likely easier for people to estimate relative stream water levels, such as the water level above or below a rock, than streamflow. Relative stream water levels may, therefore, be a more suitable variable for citizen science projects than streamflow. In order to test this assumption, we held surveys near seven different-sized rivers in Switzerland and asked more than 450 volunteers to estimate the water level class based on a picture with a virtual staff gauge. The results show that people can generally estimate the relative water level well, although there were also a few outliers. We also asked the volunteers to estimate streamflow based on the stick method. The median estimated streamflow was close to the observed streamflow but the spread in the streamflow estimates was large and there were very large outliers, suggesting that crowd-based streamflow data is highly uncertain. In order to determine the potential value of water level class data for model calibration, we converted streamflow time series for 100 catchments in the US to stream level class time series and used these to calibrate the HBV model. The model was then validated using the streamflow data. The results of this modeling exercise show that stream level class data are useful for constraining a simple runoff model. Time series of only two stream level classes, e.g. above or below a rock in the stream, were already informative, especially when the class boundary was chosen towards the highest stream levels. There was hardly any improvement in model performance when more than five water level classes were used.
This suggests that if crowd-sourced stream level observations are available for otherwise ungauged catchments, these data can be used to constrain
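The conversion of a streamflow series into discrete stream level classes, as used in the modelling exercise above, can be sketched as follows. The flows and thresholds are illustrative; per the abstract, boundaries are most informative when placed towards the highest flows:

```python
streamflow = [0.8, 1.2, 0.5, 3.9, 7.2, 2.1, 0.9, 5.5, 1.7, 0.4]  # m3/s, synthetic

# Ascending class boundaries skewed towards high flows. One boundary gives
# two classes (e.g. "below / above the rock"); more boundaries, more classes.
boundaries = [2.0, 4.0, 6.0]

def level_class(q, bounds):
    """Return the 0-based class index of flow q given ascending boundaries."""
    return sum(q >= b for b in bounds)

classes = [level_class(q, boundaries) for q in streamflow]
print(classes)   # 0 = lowest class ... 3 = highest class
```

Calibration then scores a model on whether its simulated flows fall in the observed class at each time step, rather than on the flow values themselves.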

  15. A Rule-Based Modeling for the Description of Flexible and Self-healing Business Processes

    NASA Astrophysics Data System (ADS)

    Boukhebouze, Mohamed; Amghar, Youssef; Benharkat, Aïcha-Nabila; Maamar, Zakaria

    In this paper we discuss the importance of ensuring that business processes are at the same time robust and agile. To this end, we consider reviewing the way business processes are managed. For instance, we consider offering a flexible way to model processes so that changes in regulations are handled through some self-healing mechanisms. These changes may raise exceptions at run-time if not properly reflected in these processes. To this end we propose a new rule-based model that adopts ECA rules and is built upon formal tools. The business logic of a process can be summarized as a set of rules that implement an organization’s policies. Each business rule is formalized using our ECAPE formalism (Event-Condition-Action-Postcondition-Post event). This formalism allows translating a process into a graph of rules that is analyzed in terms of reliability and flexibility.

  16. Toward a model-based cognitive neuroscience of mind wandering.

    PubMed

    Hawkins, G E; Mittner, M; Boekel, W; Heathcote, A; Forstmann, B U

    2015-12-03

    People often "mind wander" during everyday tasks, temporarily losing track of time, place, or current task goals. In laboratory-based tasks, mind wandering is often associated with performance decrements in behavioral variables and changes in neural recordings. Such empirical associations provide descriptive accounts of mind wandering - how it affects ongoing task performance - but fail to provide true explanatory accounts - why it affects task performance. In this perspectives paper, we consider mind wandering as a neural state or process that affects the parameters of quantitative cognitive process models, which in turn affect observed behavioral performance. Our approach thus uses cognitive process models to bridge the explanatory divide between neural and behavioral data. We provide an overview of two general frameworks for developing a model-based cognitive neuroscience of mind wandering. The first approach uses neural data to segment observed performance into a discrete mixture of latent task-related and task-unrelated states, and the second regresses single-trial measures of neural activity onto structured trial-by-trial variation in the parameters of cognitive process models. We discuss the relative merits of the two approaches, and the research questions they can answer, and highlight that both approaches allow neural data to provide additional constraint on the parameters of cognitive models, which will lead to a more precise account of the effect of mind wandering on brain and behavior. We conclude by summarizing prospects for mind wandering as conceived within a model-based cognitive neuroscience framework, highlighting the opportunities for its continued study and the benefits that arise from using well-developed quantitative techniques to study abstract theoretical constructs. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  17. Modeling of yield and environmental impact categories in tea processing units based on artificial neural networks.

    PubMed

    Khanali, Majid; Mobli, Hossein; Hosseinzadeh-Bandbafha, Homa

    2017-12-01

    In this study, an artificial neural network (ANN) model was developed for predicting the yield and life cycle environmental impacts based on energy inputs required in processing of black tea, green tea, and oolong tea in Guilan province of Iran. A life cycle assessment (LCA) approach was used to investigate the environmental impact categories of processed tea based on the cradle to gate approach, i.e., from production of input materials using raw materials to the gate of tea processing units, i.e., packaged tea. Thus, all the tea processing operations such as withering, rolling, fermentation, drying, and packaging were considered in the analysis. The initial data were obtained from tea processing units while the required data about the background system was extracted from the EcoInvent 2.2 database. LCA results indicated that diesel fuel and corrugated paper box used in drying and packaging operations, respectively, were the main hotspots. The black tea processing unit caused the highest pollution among the three processing units. Three feed-forward back-propagation ANN models based on the Levenberg-Marquardt training algorithm with two hidden layers accompanied by sigmoid activation functions and a linear transfer function in the output layer were applied for the three types of processed tea. The neural networks were developed based on energy equivalents of eight different input parameters (energy equivalents of fresh tea leaves, human labor, diesel fuel, electricity, adhesive, carton, corrugated paper box, and transportation) and 11 output parameters (yield, global warming, abiotic depletion, acidification, eutrophication, ozone layer depletion, human toxicity, freshwater aquatic ecotoxicity, marine aquatic ecotoxicity, terrestrial ecotoxicity, and photochemical oxidation). The results showed that the developed ANN models with R2 values in the range of 0.878 to 0.990 had excellent performance in predicting all the output variables based on inputs. Energy consumption for

  18. Nonlinear Friction Compensation of Ball Screw Driven Stage Based on Variable Natural Length Spring Model and Disturbance Observer

    NASA Astrophysics Data System (ADS)

    Asaumi, Hiroyoshi; Fujimoto, Hiroshi

    Ball screw driven stages are used in industrial equipment such as machine tools and semiconductor manufacturing equipment. Fast and precise positioning is necessary to enhance the productivity and microfabrication capability of these systems. The rolling friction of the ball screw driven stage deteriorates positioning performance; therefore, a control system based on a friction model is necessary. In this paper, we propose the variable natural length spring model (VNLS model) as the friction model. The VNLS model is simple and easy to implement as a friction controller. Next, we propose the multi variable natural length spring model (MVNLS model), which can represent the friction characteristic of the stage precisely. Moreover, a control system based on the MVNLS model and a disturbance observer is proposed. Finally, simulation and experimental results show the advantages of the proposed method.
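The disturbance-observer component can be illustrated with a minimal conceptual sketch: a simplified 1-DOF stage with a Coulomb-like friction disturbance, where the observer low-pass filters the discrepancy between the applied force and the force implied by the measured acceleration. The plant, gains, and friction magnitude are stand-ins, not the paper's MVNLS-based design:

```python
# Plant: M*v' = u + d, with d an unknown friction disturbance.
# DOB:   d_hat is a first-order low-pass filtered estimate of (M*a - u);
#        in practice a would come from measured/estimated motion.
M, dt = 2.0, 1e-3          # mass [kg], time step [s]
g = 50.0                   # observer cut-off frequency [rad/s]

v = 0.0                    # stage velocity
d_hat = 0.0                # disturbance estimate
for _ in range(5000):
    d = -5.0 if v >= 0 else 5.0      # Coulomb-like friction opposing motion
    u = 10.0 - d_hat                 # command plus feedforward compensation
    a = (u + d) / M                  # plant acceleration
    d_hat += g * dt * ((M * a - u) - d_hat)   # observer update
    v += a * dt
print(round(d_hat, 2))   # estimate converges near the true disturbance
```

Feeding -d_hat back into the command cancels the friction force in steady state, which is the basic mechanism the paper combines with its more accurate MVNLS friction model.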

  19. Off-target model based OPC

    NASA Astrophysics Data System (ADS)

    Lu, Mark; Liang, Curtis; King, Dion; Melvin, Lawrence S., III

    2005-11-01

    Model-based Optical Proximity Correction (OPC) has become an indispensable tool for achieving wafer pattern to design fidelity at current manufacturing process nodes. Most model-based OPC is performed considering the nominal process condition, with limited consideration of through-process manufacturing robustness. This study examines the use of off-target process models - models that represent non-nominal process states such as would occur with a dose or focus variation - to understand and manipulate the final pattern correction into a more process-robust configuration. The study first examines and validates the procedure for generating an off-target model, then examines the quality of the off-target model. Once the off-target model is proven, it is used to demonstrate methods of generating process-robust corrections. The concepts are demonstrated using a 0.13 μm logic gate process. Preliminary indications show success in both off-target model production and process-robust corrections. With these off-target models as tools, mask production cycle times can be reduced.

  20. Self-optimisation and model-based design of experiments for developing a C-H activation flow process.

    PubMed

    Echtermeyer, Alexander; Amar, Yehia; Zakrzewski, Jacek; Lapkin, Alexei

    2017-01-01

    A recently described C(sp3)-H activation reaction to synthesise aziridines was used as a model reaction to demonstrate the methodology of developing a process model using model-based design of experiments (MBDoE) and self-optimisation approaches in flow. The two approaches are compared in terms of experimental efficiency. The self-optimisation approach required the fewest experiments to reach the specified objectives of cost and product yield, whereas the MBDoE approach enabled rapid generation of a process model.

  1. Changing ecophysiological processes and carbon budget in East Asian ecosystems under near-future changes in climate: implications for long-term monitoring from a process-based model.

    PubMed

    Ito, Akihiko

    2010-07-01

    Using a process-based model, I assessed how ecophysiological processes would respond to near-future global changes predicted by coupled atmosphere-ocean climate models. An ecosystem model, Vegetation Integrative SImulator for Trace gases (VISIT), was applied to four sites in East Asia (different types of forest in Takayama, Tomakomai, and Fujiyoshida, Japan, and an Alpine grassland in Qinghai, China) where observational flux data are available for model calibration. The climate models predicted +1-3 degrees C warming and slight change in annual precipitation by 2050 as a result of an increase in atmospheric CO2. Gross primary production (GPP) was estimated to increase substantially at each site because of improved efficiency in the use of water and radiation. Although increased respiration partly offset the GPP increase, the simulation showed that these ecosystems would act as net carbon sinks independent of disturbance-induced uptake for recovery. However, the carbon budget response relied strongly on nitrogen availability, such that photosynthetic down-regulation resulting from leaf nitrogen dilution largely decreased GPP. In relation to long-term monitoring, these results indicate that the impacts of global warming may be more evident in gross fluxes (e.g., photosynthesis and respiration) than in the net CO2 budget, because changes in these fluxes offset each other.
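
    The gross-primary-production response discussed above can be illustrated with a generic light-use-efficiency formulation. This is not the VISIT scheme, whose internals are not given here; the modifier functions and all parameter values are assumptions for illustration only.

```python
import math

# Generic light-use-efficiency sketch of gross primary production (GPP):
# GPP = eps_max * fAPAR * PAR * f(T) * f(CO2), with simple temperature and
# saturating CO2 modifiers of the kind the abstract discusses. This is an
# illustrative stand-in, not the VISIT model formulation.

def gpp(par, fapar, temp_c, co2_ppm, eps_max=1.8):
    """Gross primary production (g C m-2 d-1); parameters illustrative."""
    f_temp = math.exp(-((temp_c - 20.0) / 15.0) ** 2)   # optimum near 20 C
    f_co2 = co2_ppm / (co2_ppm + 300.0)                 # saturating response
    return eps_max * fapar * par * f_temp * f_co2

base = gpp(par=10.0, fapar=0.8, temp_c=15.0, co2_ppm=380.0)
warm_high_co2 = gpp(par=10.0, fapar=0.8, temp_c=17.0, co2_ppm=450.0)
```

    With these toy modifiers, modest warming plus elevated CO2 raises GPP, the qualitative behaviour the simulations report; a nitrogen-dilution term would act as an additional multiplier pulling GPP back down.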

  2. A novel process-based model of microbial growth: self-inhibition in Saccharomyces cerevisiae aerobic fed-batch cultures.

    PubMed

    Mazzoleni, Stefano; Landi, Carmine; Cartenì, Fabrizio; de Alteriis, Elisabetta; Giannino, Francesco; Paciello, Lucia; Parascandola, Palma

    2015-07-30

    Microbial population dynamics in bioreactors depend on both nutrient availability and changes in the growth environment. Research on the optimization of bioreactor yields is ongoing, focusing on increasing the maximum achievable cell density. A new process-based model is proposed to describe the aerobic growth of Saccharomyces cerevisiae cultured on glucose as carbon and energy source. The model considers the main metabolic routes of glucose assimilation (fermentation to ethanol and respiration) and the occurrence of inhibition due to the accumulation of both ethanol and other self-produced toxic compounds in the medium. Model simulations reproduced data from classic and new experiments of yeast growth in batch and fed-batch cultures. Model and experimental results showed that the growth decline observed in prolonged fed-batch cultures had to be ascribed to self-produced inhibitory compounds other than ethanol. The presented results clarify the dynamics of microbial growth under different feeding conditions and highlight the relevance of the negative feedback by self-produced inhibitory compounds on the maximum cell densities achieved in a bioreactor.

  3. The Role of Laboratory-Based Studies of the Physical and Biological Properties of Sea Ice in Supporting the Observation and Modeling of Ice Covered Seas

    NASA Astrophysics Data System (ADS)

    Light, B.; Krembs, C.

    2003-12-01

    Laboratory-based studies of the physical and biological properties of sea ice are an essential link between high-latitude field observations and existing numerical models. Such studies promote improved understanding of climatic variability and its impact on sea ice and the structure of ice-dependent marine ecosystems. Controlled laboratory experiments can help identify feedback mechanisms between physical and biological processes and their response to climate fluctuations. Climatically sensitive processes occurring between sea ice and the atmosphere and between sea ice and the ocean determine surface radiative energy fluxes and the transfer of nutrients and mass across these boundaries. Temporally and spatially highly resolved analyses of sea ice under controlled environmental conditions lend insight into the physics that drives these transfer processes. Techniques such as optical probing, thin-section photography, and microscopy can be used to conduct experiments on natural sea ice core samples and laboratory-grown ice. Such experiments yield insight into small-scale processes, from the microscopic to the meter scale, and can be powerful interdisciplinary tools for education and model parameterization development. Examples of laboratory investigations by the authors include observation of the response of sea ice microstructure to changes in temperature, assessment of the relationships between ice structure and the partitioning of solar radiation by first-year sea ice covers, observation of pore evolution and interfacial structure, and quantification of the production and impact of microbial metabolic products on the mechanical, optical, and textural characteristics of sea ice.

  4. Modelling a model?!! Prediction of observed and calculated daily pan evaporation in New Mexico, U.S.A.

    NASA Astrophysics Data System (ADS)

    Beriro, D. J.; Abrahart, R. J.; Nathanail, C. P.

    2012-04-01

    Data-driven modelling is most commonly used to develop predictive models that simulate natural processes. This paper, in contrast, uses Gene Expression Programming (GEP) to construct two alternative models of pan evaporation estimation by means of symbolic regression: a simulator, a model of a real-world process developed on observed records, and an emulator, an imitator of some other model developed on predicted outputs calculated by that source model. The solutions are compared and contrasted to determine whether any substantial differences exist between the two options. This analysis addresses recent arguments over the impact of using downloaded hydrological modelling datasets originating from different initial sources, i.e. observed or calculated. Such differences can easily be overlooked by modellers, resulting in a model of a model developed on estimations derived from deterministic empirical equations and producing exceptionally high goodness-of-fit. This paper uses different lines of evidence to evaluate model output and in so doing paves the way for a new protocol in machine learning applications. Transparent modelling tools such as symbolic regression offer huge potential for explaining stochastic processes; however, the basic tenets of data quality and recourse to first principles with regard to problem understanding should not be trivialised. GEP is found to be an effective tool for the prediction of observed and calculated pan evaporation, with results supported by an understanding of the records and of the natural processes concerned, evaluated using one-at-a-time response function sensitivity analysis. The results show that both architectures and response functions are very similar, implying that previously observed differences in goodness-of-fit can be explained by whether models are applied to observed or calculated data.
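
    The one-at-a-time sensitivity analysis mentioned above can be sketched as follows: perturb each input of a response function in turn and record the normalised output swing. The toy evaporation function is an illustrative stand-in, not the paper's GEP-evolved model.

```python
# One-at-a-time (OAT) sensitivity sketch: each input is perturbed by +/-10%
# around a base point while the others are held fixed. The toy evaporation
# response below is a hypothetical stand-in with made-up coefficients.

def toy_evap(temp_c, wind_ms, rel_hum):
    """Hypothetical pan-evaporation response (mm/day); not a real formula."""
    return max(0.0, 0.3 * temp_c + 0.8 * wind_ms) * (1.0 - rel_hum)

def oat_sensitivity(f, base, frac=0.1):
    y0 = f(**base)
    out = {}
    for name, val in base.items():
        lo = f(**{**base, name: val * (1 - frac)})
        hi = f(**{**base, name: val * (1 + frac)})
        out[name] = (hi - lo) / y0          # normalised output swing
    return out

sens = oat_sensitivity(toy_evap, {"temp_c": 25.0, "wind_ms": 2.0,
                                  "rel_hum": 0.5})
```

    Ranking the swings shows which inputs dominate the response, which is how such an analysis supports (or questions) the physical plausibility of a symbolic-regression model.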

  5. Modern meta-heuristics based on nonlinear physics processes: A review of models and design procedures

    NASA Astrophysics Data System (ADS)

    Salcedo-Sanz, S.

    2016-10-01

    Meta-heuristic algorithms are problem-solving methods which try to find good-enough solutions to very hard optimization problems, at a reasonable computation time, where classical approaches fail or cannot even be applied. Many existing meta-heuristic approaches are nature-inspired techniques, which work by simulating or modeling different natural processes in a computer. Historically, many of the most successful meta-heuristic approaches have had a biological inspiration, such as the evolutionary computation or swarm intelligence paradigms, but in the last few years new approaches based on the modeling of nonlinear physics processes have been proposed and applied with success. Nonlinear physics processes, modeled as optimization algorithms, are able to produce completely new search procedures, in many cases with extremely effective exploration capabilities, which are able to outperform existing optimization approaches. In this paper we review the most important optimization algorithms based on nonlinear physics, how they have been constructed from the specific modeling of a real phenomenon, and their novelty in comparison with alternative existing algorithms for optimization. We first review important concepts concerning optimization problems, search spaces and problem difficulty. Then, the usefulness of heuristic and meta-heuristic approaches for facing hard optimization problems is introduced, and some of the main existing classical versions of these algorithms are reviewed. The mathematical framework of different nonlinear physics processes is then introduced as a preparatory step to reviewing in detail the most important meta-heuristics based on them. A discussion on the novelty of these approaches, their main computational implementation and design issues, and the evaluation of a novel meta-heuristic based on Strange Attractors mutation will be carried out to complete the review of these techniques.
We also describe some of the most important application areas, in
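
    The archetype of a physics-inspired meta-heuristic is simulated annealing, which models thermal annealing with a Boltzmann acceptance rule. The minimal sketch below minimizes a multimodal test function; step size, cooling schedule, and iteration counts are arbitrary illustrative choices.

```python
import math
import random

# Minimal simulated annealing: downhill moves are always accepted, uphill
# moves with Boltzmann probability exp(-delta/T), and the "temperature" T is
# cooled geometrically. All tuning constants here are arbitrary choices.

def anneal(f, x0, t0=1.0, cooling=0.9995, steps=20000, seed=1):
    rng = random.Random(seed)
    x, fx, t = x0, f(x0), t0
    best_x, best_f = x, fx
    for _ in range(steps):
        cand = x + rng.gauss(0.0, 1.0)        # random neighbour
        fc = f(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling                          # geometric cooling schedule
    return best_x, best_f

# Rastrigin-like test function: many local minima, global minimum 0 at x = 0
rastrigin = lambda x: x * x - 10.0 * math.cos(2.0 * math.pi * x) + 10.0
x_best, f_best = anneal(rastrigin, x0=4.5)
```

    The early high-temperature phase supplies the exploration capability the review emphasizes; as the temperature drops, the search collapses into local refinement.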

  6. Benefit of Modeling the Observation Error in a Data Assimilation Framework Using Vegetation Information Obtained From Passive Based Microwave Data

    NASA Technical Reports Server (NTRS)

    Bolten, John D.; Mladenova, Iliana E.; Crow, Wade; De Jeu, Richard

    2016-01-01

    A primary operational goal of the United States Department of Agriculture (USDA) is to improve foreign market access for U.S. agricultural products. A large fraction of the crop condition assessment supporting this goal is based on satellite imagery and ground data analysis. The baseline soil moisture estimates that are currently used for this analysis are based on output from the modified Palmer two-layer soil moisture model, updated to assimilate near-real-time observations derived from the Soil Moisture Ocean Salinity (SMOS) satellite. The current data assimilation system is based on a 1-D Ensemble Kalman Filter approach, in which the observation error is modeled as a function of vegetation density; this allows errors in the soil moisture retrievals to be offset. The observation error is currently adjusted using Normalized Difference Vegetation Index (NDVI) climatology. In this paper we explore the possibility of utilizing microwave-based vegetation optical depth instead.
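
    The role of a vegetation-dependent observation error can be sketched with a toy 1-D ensemble Kalman filter update. The NDVI-to-error mapping and all numbers below are illustrative assumptions, not the operational USDA configuration.

```python
import random

# Toy 1-D ensemble Kalman filter (EnKF) soil-moisture update in which the
# observation-error variance grows with vegetation density. The mapping from
# NDVI to error variance is an illustrative assumption.

def enkf_update(ensemble, obs, ndvi, r_bare=0.02 ** 2, r_dense=0.06 ** 2,
                seed=0):
    rng = random.Random(seed)
    r = r_bare + ndvi * (r_dense - r_bare)  # denser canopy -> noisier retrieval
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)
    gain = var / (var + r)                  # Kalman gain, identity obs operator
    # perturbed-observation update: each member sees a noisy copy of the obs
    return [x + gain * (obs + rng.gauss(0.0, r ** 0.5) - x) for x in ensemble]

prior = [0.18, 0.22, 0.25, 0.20, 0.15]      # volumetric soil moisture ensemble
posterior = enkf_update(prior, obs=0.30, ndvi=0.2)
```

    Raising the NDVI inflates the observation-error variance, shrinks the gain, and leaves the analysis closer to the model forecast; that down-weighting over dense canopy is exactly the role the observation-error model plays in the assimilation system.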

  7. Disturbance observer based model predictive control for accurate atmospheric entry of spacecraft

    NASA Astrophysics Data System (ADS)

    Wu, Chao; Yang, Jun; Li, Shihua; Li, Qi; Guo, Lei

    2018-05-01

    Facing the complex aerodynamic environment of Mars atmosphere, a composite atmospheric entry trajectory tracking strategy is investigated in this paper. External disturbances, initial states uncertainties and aerodynamic parameters uncertainties are the main problems. The composite strategy is designed to solve these problems and improve the accuracy of Mars atmospheric entry. This strategy includes a model predictive control for optimized trajectory tracking performance, as well as a disturbance observer based feedforward compensation for external disturbances and uncertainties attenuation. 500-run Monte Carlo simulations show that the proposed composite control scheme achieves more precise Mars atmospheric entry (3.8 km parachute deployment point distribution error) than the baseline control scheme (8.4 km) and integral control scheme (5.8 km).
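
    The disturbance-observer idea in the composite strategy can be sketched on a scalar plant: estimate the unmodeled input from the model residual, then feed the estimate forward. The plant, gains, and observer form below are illustrative, not the paper's entry-guidance dynamics.

```python
# Disturbance-observer (DOB) sketch on a scalar plant x' = a*x + u + d with
# a constant unknown disturbance d. The observer low-passes the model
# residual and the controller cancels the estimate. All values illustrative.

def simulate(use_dob, d_true=0.5, a=-1.0, k=2.0, l_gain=20.0,
             dt=0.001, steps=6000):
    """Regulate x to zero; return final state and disturbance estimate."""
    x, d_hat = 0.0, 0.0
    for _ in range(steps):
        u = -k * x - (d_hat if use_dob else 0.0)   # feedback (+ feedforward)
        xdot = a * x + u + d_true
        # observer: low-pass the model residual, which equals the disturbance
        d_hat += l_gain * ((xdot - a * x - u) - d_hat) * dt
        x += xdot * dt
    return x, d_hat

x_plain, _ = simulate(False)     # proportional feedback alone: steady offset
x_comp, d_hat = simulate(True)   # DOB feedforward removes the offset
```

    Proportional feedback alone settles with a steady-state offset of d/(|a| + k); with the observer engaged the estimate converges to the true disturbance and the offset is cancelled, which is the attenuation role the composite scheme assigns to the DOB.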

  8. Observation- and Model-Based Estimates of Particulate Dry Nitrogen Deposition to the Oceans.

    PubMed

    Baker, Alex R; Kanakidou, Maria; Altieri, Katye E; Daskalakis, Nikos; Okin, Gregory S; Myriokefalitakis, Stelios; Dentener, Frank; Uematsu, Mitsuo; Sarin, Manmohan M; Duce, Robert A; Galloway, James N; Keene, William C; Singh, Arvind; Zamora, Lauren; Lamarque, Jean-Francois; Hsu, Shih-Chieh; Rohekar, Shital S; Prospero, Joseph M

    2017-01-01

    Anthropogenic nitrogen (N) emissions to the atmosphere have significantly increased the deposition of nitrate (NO3-) and ammonium (NH4+) to the surface waters of the open ocean, with potential impacts on marine productivity and the global carbon cycle. Global-scale understanding of the impacts of N deposition to the oceans relies on our ability to produce and validate models of nitrogen emission, atmospheric chemistry, transport and deposition. In this work, ~2900 observations of aerosol NO3- and NH4+ concentrations, acquired from sampling aboard ships in the period 1995-2012, are used to assess the performance of modelled N concentration and deposition fields over the remote ocean. Three ocean regions (the eastern tropical North Atlantic, the northern Indian Ocean and the northwest Pacific) were selected, in which the density and distribution of observational data were considered sufficient to provide effective comparison to model products. All of these study regions are affected by transport and deposition of mineral dust, which alters the deposition of N due to uptake of nitrogen oxides (NOx) on mineral surfaces. Assessment of the impacts of atmospheric N deposition on the ocean requires atmospheric chemical transport models to report deposition fluxes; however, these fluxes cannot be measured over the ocean. Modelling studies such as the Atmospheric Chemistry and Climate Model Intercomparison Project (ACCMIP), which only report deposition flux, are therefore very difficult to validate for dry deposition. Here the available observational data were averaged over a 5° × 5° grid and compared to ACCMIP dry deposition fluxes (ModDep) of oxidised N (NOy) and reduced N (NHx) and to the following parameters from the TM4-ECPL (TM4) model: ModDep for NOy, NHx and particulate NO3- and NH4+, and surface-level particulate NO3- and NH4+ concentrations. As a model ensemble, ACCMIP can be expected to be more robust than TM4, while TM4 gives

  9. Modeling and Simulation of Voids in Composite Tape Winding Process Based on Domain Superposition Technique

    NASA Astrophysics Data System (ADS)

    Deng, Bo; Shi, Yaoyao

    2017-11-01

    The tape winding technology is an effective way to fabricate rotationally symmetric composite parts. Nevertheless, some inevitable defects seriously influence the performance of winding products. One of the crucial ways to assess the quality of fiber-reinforced composite products is to examine their void content; significant improvement in a product's mechanical properties can be achieved by minimizing void defects. Two methods were applied in this study, finite element analysis and experimental testing, to investigate the mechanism of void formation in the composite tape winding process. Based on the theories of interlayer intimate contact and the Domain Superposition Technique (DST), a three-dimensional model of prepreg tape voids was built in SolidWorks. Thereafter, ABAQUS simulation software was used to simulate how the void content changes with pressure and temperature. Finally, a series of experiments was performed to determine the accuracy of the model-based predictions. The results showed that the model is effective for predicting the void content in the composite tape winding process.

  10. Process-based Cost Estimation for Ramjet/Scramjet Engines

    NASA Technical Reports Server (NTRS)

    Singh, Brijendra; Torres, Felix; Nesman, Miles; Reynolds, John

    2003-01-01

    Process-based cost estimation plays a key role in effecting cultural change that integrates distributed science, technology and engineering teams to rapidly create innovative and affordable products. Working together, NASA Glenn Research Center and Boeing Canoga Park have developed a methodology of process-based cost estimation that bridges high-level parametric models and detailed bottom-up estimation. The NASA GRC/Boeing CP process-based cost model provides a probabilistic structure of layered cost drivers. High-level inputs characterize mission requirements, system performance, and relevant economic factors. Design alternatives are extracted from a standard, product-specific work breakdown structure to pre-load lower-level cost driver inputs and generate the cost-risk analysis. As the product design progresses and matures, the lower-level, more detailed cost drivers can be re-accessed and the projected variation of input values narrowed, thereby generating a progressively more accurate estimate of cost-risk. Incorporated into the process-based cost model are techniques for decision analysis, specifically the analytic hierarchy process (AHP) and functional utility analysis. Design alternatives may then be evaluated not just on cost-risk, but also on user-defined performance and schedule criteria. This implementation of full trade-study support contributes significantly to the realization of the integrated development environment. The process-based cost estimation model generates development and manufacturing cost estimates. The development team plans to expand the manufacturing process base from approximately 80 manufacturing processes to over 250 processes. Operation and support cost modeling is also envisioned. Process-based estimation considers the materials, resources, and processes in establishing cost-risk and, rather than depending on weight as an input, actually estimates weight along with cost and schedule.
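
    The AHP step mentioned above can be sketched as deriving criteria weights from a pairwise comparison matrix via its principal eigenvector (here by power iteration). The comparison values are illustrative, not taken from the NASA GRC/Boeing CP model.

```python
# Analytic hierarchy process (AHP) sketch: criteria weights are the
# normalised principal eigenvector of a reciprocal pairwise comparison
# matrix, computed by power iteration. Matrix entries are illustrative.

def ahp_weights(pairwise, iters=100):
    n = len(pairwise)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(pairwise[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]              # renormalise each iteration
    return w

# hypothetical criteria: cost-risk vs performance vs schedule
matrix = [
    [1.0,     3.0,     5.0],
    [1.0 / 3, 1.0,     3.0],
    [1.0 / 5, 1.0 / 3, 1.0],
]
weights = ahp_weights(matrix)
```

    The resulting weights (roughly 0.64, 0.26, 0.11 for this matrix) would then score each design alternative across cost-risk, performance, and schedule in the trade study.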

  11. Improvements in GRACE Gravity Field Determination through Stochastic Observation Modeling

    NASA Astrophysics Data System (ADS)

    McCullough, C.; Bettadpur, S. V.

    2016-12-01

    Current unconstrained Release 05 GRACE gravity field solutions from the Center for Space Research (CSR RL05) assume random observation errors following an independent multivariate Gaussian distribution. This modeling of observations, a simplifying assumption, fails to account for long period, correlated errors arising from inadequacies in the background force models. Fully modeling the errors inherent in the observation equations, through the use of a full observation covariance (modeling colored noise), enables optimal combination of GPS and inter-satellite range-rate data and obviates the need for estimating kinematic empirical parameters during the solution process. Most importantly, fully modeling the observation errors drastically improves formal error estimates of the spherical harmonic coefficients, potentially enabling improved uncertainty quantification of scientific results derived from GRACE and optimizing combinations of GRACE with independent data sets and a priori constraints.
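
    The effect described here can be illustrated with a scalar example: the formal variance of a mean estimated from n correlated observations, computed from the full covariance. An AR(1) covariance stands in for the long-period correlated errors; it is an assumption for illustration, not the GRACE error model.

```python
# Formal variance of a weighted mean under a full observation covariance:
# Var = 1 / (ones^T C^-1 ones), with C_ij = sigma^2 * rho^|i-j| (AR(1)).
# The AR(1) structure is an illustrative stand-in for colored noise.

def invert(m):
    """Gauss-Jordan inversion of a small dense matrix (list of lists)."""
    n = len(m)
    a = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(m)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        p = a[col][col]
        a[col] = [v / p for v in a[col]]
        for r in range(n):
            if r != col:
                fac = a[r][col]
                a[r] = [v - fac * w for v, w in zip(a[r], a[col])]
    return [row[n:] for row in a]

def formal_variance_of_mean(n, sigma, rho):
    c = [[sigma ** 2 * rho ** abs(i - j) for j in range(n)] for i in range(n)]
    cinv = invert(c)
    info = sum(sum(row) for row in cinv)    # ones^T C^-1 ones
    return 1.0 / info

naive = formal_variance_of_mean(50, 1.0, 0.0)    # white noise: sigma^2 / n
colored = formal_variance_of_mean(50, 1.0, 0.8)  # correlated errors
```

    With rho = 0.8 the formal variance is several times the white-noise value, illustrating how assuming independent errors makes formal errors look unrealistically optimistic.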

  12. Model-Based Infrared Metrology for Advanced Technology Nodes and 300 mm Wafer Processing

    NASA Astrophysics Data System (ADS)

    Rosenthal, Peter A.; Duran, Carlos; Tower, Josh; Mazurenko, Alex; Mantz, Ulrich; Weidner, Peter; Kasic, Alexander

    2005-09-01

    The use of infrared spectroscopy for production semiconductor process monitoring has evolved recently from primarily unpatterned, i.e. blanket test wafer measurements in a limited historical application space of blanket epitaxial, BPSG, and FSG layers to new applications involving patterned product wafer measurements, and new measurement capabilities. Over the last several years, the semiconductor industry has adopted a new set of materials associated with copper/low-k interconnects, and new structures incorporating exotic materials including silicon germanium, SOI substrates and high aspect ratio trenches. The new device architectures and more chemically sophisticated materials have raised new process control and metrology challenges that are not addressed by current measurement technology. To address the challenges we have developed a new infrared metrology tool designed for emerging semiconductor production processes, in a package compatible with modern production and R&D environments. The tool incorporates recent advances in reflectance instrumentation including highly accurate signal processing, optimized reflectometry optics, and model-based calibration and analysis algorithms. To meet the production requirements of the modern automated fab, the measurement hardware has been integrated with a fully automated 300 mm platform incorporating front opening unified pod (FOUP) interfaces, automated pattern recognition and high throughput ultra clean robotics. The tool employs a suite of automated dispersion-model analysis algorithms capable of extracting a variety of layer properties from measured spectra. The new tool provides excellent measurement precision, tool matching, and a platform for deploying many new production and development applications. In this paper we will explore the use of model based infrared analysis as a tool for characterizing novel bottle capacitor structures employed in high density dynamic random access memory (DRAM) chips. 
We will explore

  13. Model-based analysis of pattern motion processing in mouse primary visual cortex

    PubMed Central

    Muir, Dylan R.; Roth, Morgane M.; Helmchen, Fritjof; Kampa, Björn M.

    2015-01-01

    Neurons in sensory areas of neocortex exhibit responses tuned to specific features of the environment. In visual cortex, information about features such as edges or textures with particular orientations must be integrated to recognize a visual scene or object. Connectivity studies in rodent cortex have revealed that neurons make specific connections within sub-networks sharing common input tuning. In principle, this sub-network architecture enables local cortical circuits to integrate sensory information. However, whether feature integration indeed occurs locally in rodent primary sensory areas has not been examined directly. We studied local integration of sensory features in primary visual cortex (V1) of the mouse by presenting drifting grating and plaid stimuli, while recording the activity of neuronal populations with two-photon calcium imaging. Using a Bayesian model-based analysis framework, we classified single-cell responses as being selective for either individual grating components or for moving plaid patterns. Rather than relying on trial-averaged responses, our model-based framework takes into account single-trial responses and can easily be extended to consider any number of arbitrary predictive models. Our analysis method was able to successfully classify significantly more responses than traditional partial correlation (PC) analysis, and provides a rigorous statistical framework to rank any number of models and reject poorly performing models. We also found a large proportion of cells that respond strongly to only one stimulus class. In addition, a quarter of selectively responding neurons had more complex responses that could not be explained by any simple integration model. Our results show that a broad range of pattern integration processes already take place at the level of V1. This diversity of integration is consistent with processing of visual inputs by local sub-networks within V1 that are tuned to combinations of sensory features.

  14. Understanding Transient Forcing with Plasma Instability Model, Ionospheric Propagation Model and GNSS Observations

    NASA Astrophysics Data System (ADS)

    Deshpande, K.; Zettergren, M. D.; Datta-Barua, S.

    2017-12-01

    Fluctuations in the Global Navigation Satellite Systems (GNSS) signals observed as amplitude and phase scintillations are produced by plasma density structures in the ionosphere. Phase scintillation events in particular occur due to structures at Fresnel scales, typically about 250 meters at ionospheric heights and GNSS frequency. Likely processes contributing to small-scale density structuring in auroral and polar regions include ionospheric gradient-drift instability (GDI) and Kelvin-Helmholtz instability (KHI), which result, generally, from magnetosphere-ionosphere interactions (e.g. reconnection) associated with cusp and auroral zone regions. Scintillation signals, ostensibly from either GDI or KHI, are frequently observed in the high latitude ionosphere and are potentially useful diagnostics of how energy from the transient forcing in the cusp or polar cap region cascades, via instabilities, to small scales. However, extracting quantitative details of instabilities leading to scintillation using GNSS data drastically benefits from both a model of the irregularities and a model of GNSS signal propagation through irregular media. This work uses a physics-based model of the generation of plasma density irregularities (GEMINI - Geospace Environment Model of Ion-Neutral Interactions) coupled to an ionospheric radio wave propagation model (SIGMA - Satellite-beacon Ionospheric-scintillation Global Model of the upper Atmosphere) to explore the cascade of density structures from medium to small (sub-kilometer) scales. Specifically, GEMINI-SIGMA is used to simulate expected scintillation from different instabilities during various stages of evolution to determine features of the scintillation that may be useful to studying ionospheric density structures. Furthermore we relate the instabilities producing GNSS scintillations to the transient space and time-dependent magnetospheric phenomena and further predict characteristics of scintillation in different geophysical

  15. Physics-based process modeling, reliability prediction, and design guidelines for flip-chip devices

    NASA Astrophysics Data System (ADS)

    Michaelides, Stylianos

    Flip Chip on Board (FCOB) and Chip-Scale Packages (CSPs) are relatively new technologies that are being increasingly used in the electronic packaging industry. Compared to the more widely used face-up wirebonding and TAB technologies, flip-chips and most CSPs provide the shortest possible leads, lower inductance, higher frequency, better noise control, higher density, greater input/output (I/O), smaller device footprint and lower profile. However, due to the short history and due to the introduction of several new electronic materials, designs, and processing conditions, very limited work has been done to understand the role of material, geometry, and processing parameters on the reliability of flip-chip devices. Also, with the ever-increasing complexity of semiconductor packages and with the continued reduction in time to market, it is too costly to wait until the later stages of design and testing to discover that the reliability is not satisfactory. The objective of the research is to develop integrated process-reliability models that will take into consideration the mechanics of assembly processes to be able to determine the reliability of face-down devices under thermal cycling and long-term temperature dwelling. The models incorporate the time and temperature-dependent constitutive behavior of various materials in the assembly to be able to predict failure modes such as die cracking and solder cracking. In addition, the models account for process-induced defects and macro-micro features of the assembly. Creep-fatigue and continuum-damage mechanics models for the solder interconnects and fracture-mechanics models for the die have been used to determine the reliability of the devices. The results predicted by the models have been successfully validated against experimental data. The validated models have been used to develop qualification and test procedures for implantable medical devices. In addition, the research has helped develop innovative face

  16. A model for memory systems based on processing modes rather than consciousness.

    PubMed

    Henke, Katharina

    2010-07-01

    Prominent models of human long-term memory distinguish between memory systems on the basis of whether learning and retrieval occur consciously or unconsciously. Episodic memory formation requires the rapid encoding of associations between different aspects of an event which, according to these models, depends on the hippocampus and on consciousness. However, recent evidence indicates that the hippocampus mediates rapid associative learning with and without consciousness in humans and animals, for long-term and short-term retention. Consciousness seems to be a poor criterion for differentiating between declarative (or explicit) and nondeclarative (or implicit) types of memory. A new model is therefore required in which memory systems are distinguished based on the processing operations involved rather than by consciousness.

  17. The effect of observational learning on students' performance, processes, and motivation in two creative domains.

    PubMed

    Groenendijk, Talita; Janssen, Tanja; Rijlaarsdam, Gert; van den Bergh, Huub

    2013-03-01

    Previous research has shown that observation can be effective for learning in various domains, for example, argumentative writing and mathematics. The question in this paper is whether observational learning can also be beneficial when learning to perform creative tasks in visual and verbal arts. We hypothesized that observation has a positive effect on performance, process, and motivation. We expected similarity in competence between the model and the observer to influence the effectiveness of observation. Sample: a total of 131 Dutch students (10th grade, 15 years old) participated. Two experiments were carried out (one for visual and one for verbal arts). Participants were randomly assigned to one of three conditions: two observational learning conditions and a control condition (learning by practising). The observational learning conditions differed in instructional focus (on the weaker or the more competent model of a pair to be observed). We found positive effects of observation on creative products, creative processes, and motivation in the visual domain. In the verbal domain, observation seemed to affect the creative process, but not the other variables. The model similarity hypothesis was not confirmed. Results suggest that observation may foster learning in creative domains, especially in the visual arts. © 2011 The British Psychological Society.

  18. Activated sludge model (ASM) based modelling of membrane bioreactor (MBR) processes: a critical review with special regard to MBR specificities.

    PubMed

    Fenu, A; Guglielmi, G; Jimenez, J; Spèrandio, M; Saroj, D; Lesjean, B; Brepols, C; Thoeye, C; Nopens, I

    2010-08-01

    Membrane bioreactors (MBRs) have been increasingly employed for municipal and industrial wastewater treatment in the last decade. Modelling efforts for such wastewater treatment systems have targeted both the biological processes (treatment quality) and various engineering aspects (cost-effective design and operation). The development of Activated Sludge Models (ASM) was an important evolution in the modelling of Conventional Activated Sludge (CAS) processes, and their use is now very well established. However, although they were initially developed to describe CAS processes, they have simply been transferred and applied to MBR processes. Recent studies on MBR biological processes have reported several crucial specificities: medium to very high sludge retention times, high mixed liquor concentration, accumulation of soluble microbial products (SMP) rejected by the membrane filtration step, and high aeration rates for scouring purposes. These aspects raise the question as to what extent the ASM framework is applicable to MBR processes. Several studies highlighting some of the aforementioned issues are scattered through the literature. Hence, through a concise and structured overview of past developments and the current state of the art in biological modelling of MBRs, this review explores ASM-based modelling applied to MBR processes. The work aims to synthesize previous studies and differentiate between unmodified and modified applications of ASM to MBR. Particular emphasis is placed on influent fractionation, biokinetics, and soluble microbial products (SMP)/exo-polymeric substances (EPS) modelling, and suggestions are put forward as to good modelling practice with regard to MBR modelling, both for end-users and academia. A last section highlights shortcomings and future needs for improved biological modelling of MBR processes. (c) 2010 Elsevier Ltd. All rights reserved.
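
    The biokinetic core shared by the ASM family can be sketched with a minimal Monod growth/decay system. Parameter values below are generic textbook magnitudes, not calibrated MBR values, and MBR-specific terms such as SMP/EPS dynamics are deliberately omitted.

```python
# Minimal ASM-style biokinetic sketch: Monod growth of heterotrophic biomass
# X on readily biodegradable substrate S with endogenous decay, integrated
# by forward Euler. Parameters are illustrative textbook magnitudes.

def asm_step(x, s, dt, mu_max=6.0, k_s=20.0, y=0.67, b=0.62):
    """One Euler step; rates in 1/d, concentrations in g COD/m3."""
    mu = mu_max * s / (k_s + s)        # Monod specific growth rate
    dx = (mu - b) * x                  # growth minus endogenous decay
    ds = -(mu / y) * x                 # substrate consumed for growth
    return x + dx * dt, s + ds * dt

x, s = 50.0, 200.0                     # initial biomass and substrate
for _ in range(1000):                  # one day at dt = 0.001 d
    x, s = asm_step(x, s, 0.001)
    s = max(s, 0.0)                    # clamp Euler overshoot at depletion
```

    The MBR-specific question raised in the review is essentially whether rate expressions of this form, and the associated influent fractionation, remain valid at the very high sludge ages and mixed liquor concentrations of MBRs.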

  19. LISP based simulation generators for modeling complex space processes

    NASA Technical Reports Server (NTRS)

    Tseng, Fan T.; Schroer, Bernard J.; Dwan, Wen-Shing

    1987-01-01

    The development of a simulation assistant for modeling discrete event processes is presented. Included are an overview of the system, a description of the simulation generators, and a sample process generated using the simulation assistant.

  20. Simple model of inhibition of chain-branching combustion processes

    NASA Astrophysics Data System (ADS)

    Babushok, Valeri I.; Gubernov, Vladimir V.; Minaev, Sergei S.; Miroshnichenko, Taisia P.

    2017-11-01

    A simple kinetic model has been suggested to describe the inhibition and extinction of flame propagation in reaction systems with chain-branching reactions typical of hydrocarbon systems. The model is based on the generalised model of the combustion process with a chain-branching reaction, combined with the one-stage reaction describing the thermal mode of flame propagation, with the addition of inhibition reaction steps. Inhibitor addition suppresses the radical overshoot in the flame and changes the reaction mode from chain-branching to a thermal mode of flame propagation. With increasing inhibitor concentration, a transition from the chain-branching mode to a straight-chain (non-branching) reaction is observed. The inhibition part of the model comprises a block of three reactions describing the influence of the inhibitor. Heat losses are incorporated into the model via Newtonian cooling. Flame extinction results from the decreased heat release of the inhibited reaction processes and the suppression of the radical overshoot, with a further decrease of the reaction rate due to the temperature decrease and mixture dilution. A comparison of the results of modelling laminar premixed methane/air flames inhibited by potassium bicarbonate (gas-phase model, detailed kinetic model) with results obtained using the suggested simple model is presented. Calculations with the detailed kinetic model demonstrate the following modes of the combustion process: (1) flame propagation with chain-branching reaction (with radical overshoot; inhibitor addition decreases the radical overshoot down to the equilibrium level); (2) saturation of the chemical influence of the inhibitor; and (3) transition to a thermal mode of flame propagation (non-branching chain mode of reaction). The suggested simple kinetic model qualitatively reproduces the modes of flame propagation with inhibitor addition observed using detailed kinetic models.
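
    The qualitative behaviour described above can be illustrated with a minimal sketch (not the authors' model): a single radical species grows by chain branching, is lost by termination, and is scavenged by an inhibitor. All rate constants and names below are invented for illustration.

```python
# Minimal chain-branching sketch (illustrative, not the paper's model):
# radicals grow by branching (consuming fuel), decay by termination,
# and are scavenged by an inhibitor; forward-Euler integration.
def radical_peak(inhibitor, k_branch=2.0, k_term=1.0, k_inhib=5.0,
                 fuel0=1.0, y0=1e-6, dt=1e-3, steps=40000):
    fuel, y, peak = fuel0, y0, y0
    for _ in range(steps):
        branch = k_branch * fuel * y        # chain branching, consumes fuel
        term = k_term * y                   # chain termination
        inhib = k_inhib * inhibitor * y     # inhibitor scavenges radicals
        y = max(y + dt * (branch - term - inhib), 0.0)
        fuel = max(fuel - dt * branch, 0.0)
        peak = max(peak, y)
    return peak

# Radical overshoot shrinks as inhibitor loading increases; at high
# loading the branching mode is suppressed entirely.
peaks = [radical_peak(c) for c in (0.0, 0.1, 0.3)]
assert peaks[0] > peaks[1] > peaks[2]
```

The monotone decrease of the radical peak with inhibitor loading is the qualitative signature the abstract describes; the detailed gas-phase chemistry is of course far richer.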

  1. Improved Analysis of Earth System Models and Observations using Simple Climate Models

    NASA Astrophysics Data System (ADS)

    Nadiga, B. T.; Urban, N. M.

    2016-12-01

    Earth system models (ESM) are the most comprehensive tools we have to study climate change and develop climate projections. However, the computational infrastructure required and the cost incurred in running such ESMs preclude direct use of such models in conjunction with a wide variety of tools that can further our understanding of climate. Here we are referring to tools that range from dynamical systems tools that give insight into underlying flow structure and topology, to tools from applied mathematics and statistics that are central to quantifying stability, sensitivity, uncertainty and predictability, to machine learning tools that are now being rapidly developed or improved. Our approach to facilitate the use of such models is to analyze output of ESM experiments (cf. CMIP) using a range of simpler models that consider integral balances of important quantities such as mass and/or energy in a Bayesian framework. We highlight the use of this approach in the context of the uptake of heat by the world oceans in the ongoing global warming. Indeed, since in excess of 90% of the anomalous radiative forcing due to greenhouse gas emissions is sequestered in the world oceans, the nature of ocean heat uptake crucially determines the surface warming that is realized (cf. climate sensitivity). Nevertheless, ESMs themselves are never run long enough to directly assess climate sensitivity. So, we consider a range of models based on integral balances--balances that have to be realized in all first-principles based models of the climate system including the most detailed state-of-the-art climate simulations. The models range from simple models of energy balance to those that consider dynamically important ocean processes such as the conveyor-belt circulation (Meridional Overturning Circulation, MOC), North Atlantic Deep Water (NADW) formation, Antarctic Circumpolar Current (ACC) and eddy mixing. Results from Bayesian analysis of such models using
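
    A minimal example of the kind of integral-balance model referred to above is a one-box energy balance in which ocean heat uptake is the storage term. This is an illustrative sketch, not the authors' models; the forcing, feedback, and heat-capacity values are assumptions chosen for plausibility.

```python
# One-box energy balance: C dT/dt = F - lam * T, where F is the radiative
# forcing (W m^-2), lam the climate feedback parameter (W m^-2 K^-1),
# and C an effective ocean heat capacity per unit area (J m^-2 K^-1).
# C dT/dt is the ocean heat uptake; equilibrium warming is F / lam.
def energy_balance(F=3.7, lam=1.2, C=3.2e8, years=500, dt_yr=0.1):
    sec_per_year = 3.15e7
    T = 0.0
    for _ in range(int(years / dt_yr)):
        T += dt_yr * sec_per_year * (F - lam * T) / C
    return T

T_eq = 3.7 / 1.2                     # equilibrium sensitivity F / lambda
assert abs(energy_balance() - T_eq) < 0.05
assert energy_balance(years=1) < T_eq   # transient warming lags equilibrium
```

The gap between the transient and equilibrium response is exactly what makes ocean heat uptake the key quantity the abstract singles out.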

  2. Model Stirrer Based on a Multi-Material Turntable for Microwave Processing Materials

    PubMed Central

    Ye, Jinghua; Hong, Tao; Wu, Yuanyuan; Wu, Li; Liao, Yinhong; Zhu, Huacheng; Yang, Yang; Huang, Kama

    2017-01-01

    Microwaves have been widely used in the treatment of materials, such as heating, drying, and sterilization. However, the heating in commonly used microwave applicators is usually uneven. In this paper, a novel multi-material turntable structure is proposed to improve the temperature uniformity in microwave ovens. Three customized turntables consisting of polyethylene (PE) and alumina, PE and aluminum, and alumina and aluminum are, respectively, utilized in a domestic microwave oven in simulation. During the heating process, the processed material is placed on a fixed Teflon bracket which covers the constantly rotating turntable. Experiments are conducted to measure the surface and point temperatures using an infrared thermal imaging camera and optical fibers. Simulated results are compared qualitatively with the measured ones, which validates the simulation models. Compared with turntables consisting of a single material, a 26%–47% increase in temperature uniformity from adopting the multi-material turntable can be observed for the microwave-processed materials. PMID:28772457

  3. Robust Observation Detection for Single Object Tracking: Deterministic and Probabilistic Patch-Based Approaches

    PubMed Central

    Zulkifley, Mohd Asyraf; Rawlinson, David; Moran, Bill

    2012-01-01

    In video analytics, robust observation detection is very important because the content of videos varies greatly, especially for tracking implementations. In contrast to the image processing field, the problems of blurring, moderate deformation, low-illumination surroundings, illumination change and homogeneous texture are commonly encountered in video analytics. Patch-Based Observation Detection (PBOD) is developed to improve detection robustness in complex scenes by fusing both feature- and template-based recognition methods. While feature-based detectors are more distinctive, matching between frames is best achieved using a collection of points, as in template-based detectors. Two methods of PBOD—the deterministic and probabilistic approaches—have been tested to find the best mode of detection. Both algorithms start by building comparison vectors at each detected point of interest. The vectors are matched to build candidate patches based on their respective coordinates. For the deterministic method, patch matching is done in a 2-level test where threshold-based position and size smoothing are applied to the patch with the highest correlation value. For the second approach, patch matching is done probabilistically by modelling the histograms of the patches with Poisson distributions for both RGB and HSV colour models. Then, maximum likelihood is applied for position smoothing while a Bayesian approach is applied for size smoothing. The results showed that probabilistic PBOD outperforms the deterministic approach with an average distance error of 10.03% compared with 21.03%. This algorithm is best implemented as a complement to other simpler detection methods due to its heavy processing requirements. PMID:23202226
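
    The probabilistic patch-matching step can be sketched as a Poisson log-likelihood score of a candidate patch histogram against a template. This is a hedged reconstruction of the idea only; the histogram values and function names are ours, not from the paper.

```python
import math

# Score a candidate patch by the Poisson log-likelihood of its colour
# histogram, treating the template histogram bins as rate parameters:
# log P(k | lam) = k*log(lam) - lam - log(k!), summed over bins.
def poisson_loglik(template_hist, patch_hist):
    ll = 0.0
    for lam, k in zip(template_hist, patch_hist):
        lam = max(lam, 1e-9)                   # guard against log(0)
        ll += k * math.log(lam) - lam - math.lgamma(k + 1)
    return ll

template = [12, 30, 8, 2]                      # e.g. a 4-bin hue histogram
close = [11, 29, 9, 3]                         # patch resembling the target
far = [2, 5, 30, 15]                           # background clutter
assert poisson_loglik(template, close) > poisson_loglik(template, far)
```

Maximum likelihood over candidate positions then amounts to picking the patch with the highest such score.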

  4. Dual processing model of medical decision-making.

    PubMed

    Djulbegovic, Benjamin; Hozo, Iztok; Beckstead, Jason; Tsalatsanis, Athanasios; Pauker, Stephen G

    2012-09-03

    Dual processing theory of human cognition postulates that reasoning and decision-making can be described as a function of both an intuitive, experiential, affective system (system I) and an analytical, deliberative (system II) processing system. To date no formal descriptive model of medical decision-making based on dual processing theory has been developed. Here we postulate such a model and apply it to a common clinical situation: whether treatment should be administered to a patient who may or may not have a disease. We developed a mathematical model in which we linked a recently proposed descriptive psychological model of cognition with the threshold model of medical decision-making and show how this approach can be used to better understand decision-making at the bedside and explain the widespread variation in treatments observed in clinical practice. We show that physicians' beliefs about whether to treat at higher (lower) probability levels compared with the prescriptive therapeutic thresholds obtained via system II processing are moderated by system I and by the ratio of benefits and harms as evaluated by both systems I and II. Under some conditions, the system I decision maker's threshold may drop dramatically below the expected utility threshold derived by system II. This can explain the overtreatment often seen in contemporary practice. The opposite can also occur, as in situations where empirical evidence is considered unreliable, or when the cognitive processes of decision-makers are biased through recent experience: the threshold will increase relative to the normative threshold value derived via system II using the expected utility threshold. This inclination toward higher diagnostic certainty may, in turn, explain the undertreatment also documented in current medical practice. We have developed the first dual processing model of medical decision-making that has potential to enrich the current medical decision-making field, which is still to the
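
    The system II part of the model is the classic expected-utility treatment threshold. The system I modulation below is an illustrative sketch in the spirit of the abstract, not the authors' exact formulation; the gamma parameter is our invention.

```python
# System II: the Pauker-Kassirer expected-utility treatment threshold.
# Treat when P(disease) exceeds H / (B + H), with B the net benefit of
# treating the diseased and H the net harm of treating the healthy.
def eu_threshold(benefit, harm):
    return harm / (benefit + harm)

# Illustrative system-I modulation (our construction, not the paper's):
# gamma > 1 mimics an intuitive pull toward treating (lower threshold,
# overtreatment); gamma < 1 mimics demanding more diagnostic certainty
# (higher threshold, undertreatment).
def dual_process_threshold(benefit, harm, gamma=1.0):
    t = eu_threshold(benefit, harm)
    return t / (t + gamma * (1 - t))

t2 = eu_threshold(benefit=9.0, harm=1.0)          # treat above P = 0.1
assert abs(t2 - 0.1) < 1e-12
assert dual_process_threshold(9.0, 1.0, gamma=2.0) < t2   # overtreatment
assert dual_process_threshold(9.0, 1.0, gamma=0.5) > t2   # undertreatment
```

The sketch reproduces the abstract's two regimes: the effective threshold drops below or rises above the normative system II value depending on the direction of the system I bias.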

  5. Dual processing model of medical decision-making

    PubMed Central

    2012-01-01

    Background Dual processing theory of human cognition postulates that reasoning and decision-making can be described as a function of both an intuitive, experiential, affective system (system I) and an analytical, deliberative (system II) processing system. To date no formal descriptive model of medical decision-making based on dual processing theory has been developed. Here we postulate such a model and apply it to a common clinical situation: whether treatment should be administered to a patient who may or may not have a disease. Methods We developed a mathematical model in which we linked a recently proposed descriptive psychological model of cognition with the threshold model of medical decision-making and show how this approach can be used to better understand decision-making at the bedside and explain the widespread variation in treatments observed in clinical practice. Results We show that physicians’ beliefs about whether to treat at higher (lower) probability levels compared with the prescriptive therapeutic thresholds obtained via system II processing are moderated by system I and by the ratio of benefits and harms as evaluated by both systems I and II. Under some conditions, the system I decision maker’s threshold may drop dramatically below the expected utility threshold derived by system II. This can explain the overtreatment often seen in contemporary practice. The opposite can also occur, as in situations where empirical evidence is considered unreliable, or when the cognitive processes of decision-makers are biased through recent experience: the threshold will increase relative to the normative threshold value derived via system II using the expected utility threshold. This inclination toward higher diagnostic certainty may, in turn, explain the undertreatment also documented in current medical practice. Conclusions We have developed the first dual processing model of medical decision-making that has potential to enrich the current medical

  6. Fingerprints of endogenous process on Europa through linear spectral modeling of ground-based observations (ESO/VLT/SINFONI)

    NASA Astrophysics Data System (ADS)

    Ligier, Nicolas; Carter, John; Poulet, François; Langevin, Yves; Dumas, Christophe; Gourgeot, Florian

    2016-04-01

    Jupiter's moon Europa harbors a very young surface dated, based on cratering rates, to 10-50 Myr (Zahnle et al. 1998, Pappalardo et al. 1999). This young age implies rapid surface recycling and reprocessing, partially engendered by a global salty subsurface liquid ocean that could result in tectonic activity (Schmidt et al. 2011, Kattenhorn et al. 2014) and active plumes (Roth et al. 2014). The surface of Europa should contain important clues about the composition of this subsurface briny ocean and about the potential presence of material of exobiological interest in it, reinforcing Europa as a major target of interest for upcoming space missions such as the ESA L-class mission JUICE. To investigate the composition of the surface of Europa, a global mapping campaign of the satellite was performed between October 2011 and January 2012 with the integral field spectrograph SINFONI on the Very Large Telescope (VLT) in Chile. The high spectral binning of this instrument (0.5 nm) is suitable for detecting any narrow mineral signature in the wavelength range 1.45-2.45 μm. The spatially resolved spectra we obtained over five epochs nearly cover the entire surface of Europa with a pixel scale of 12.5 by 25 mas (~35 by 70 km on Europa's surface), thus permitting a global-scale study. Until recently, a large majority of studies proposed only sulfate salts, along with sulfuric acid hydrate and water ice, to be present on Europa's surface. However, recent works based on Europa's surface coloration in the visible wavelength range and NIR spectral analysis support the hypothesis of the predominance of chlorine salts instead of sulfate salts (Hand & Carlson 2015, Fischer et al. 2015). Our linear spectral modeling supports this new hypothesis insofar as the use of Mg-bearing chlorine salts improved the fits regardless of the region. As expected, the distribution of sulfuric acid hydrate is correlated with the Iogenic sulfur ion implantation flux distribution (Hendrix et al

  7. Comparing cropland net primary production estimates from inventory, a satellite-based model, and a process-based model in the Midwest of the United States

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Zhengpeng; Liu, Shuguang; Tan, Zhengxi

    2014-04-01

    Accurately quantifying the spatial and temporal variability of net primary production (NPP) for croplands is essential to understand regional cropland carbon dynamics. We compared three NPP estimates for croplands in the Midwestern United States: inventory-based estimates using crop yield data from the U.S. Department of Agriculture (USDA) National Agricultural Statistics Service (NASS); estimates from the satellite-based Moderate Resolution Imaging Spectroradiometer (MODIS) NPP product; and estimates from the General Ensemble biogeochemical Modeling System (GEMS) process-based model. The three methods estimated mean NPP in the range of 469–687 g C m−2 yr−1 and total NPP in the range of 318–490 Tg C yr−1 for croplands in the Midwest in 2007 and 2008. The NPP estimates from crop yield data and the GEMS model showed the mean NPP for croplands was over 650 g C m−2 yr−1 while the MODIS NPP product estimated the mean NPP was less than 500 g C m−2 yr−1. MODIS NPP also showed very different spatial variability of the cropland NPP from the other two methods. We found these differences were mainly caused by the difference in the land cover data and the crop specific information used in the methods. Our study demonstrated that the detailed mapping of the temporal and spatial change of crop species is critical for estimating the spatial and temporal variability of cropland NPP. Finally, we suggest that high resolution land cover data with species-specific crop information should be used in satellite-based and process-based models to improve carbon estimates for croplands.

  8. Comparing cropland net primary production estimates from inventory, a satellite-based model, and a process-based model in the Midwest of the United States

    USGS Publications Warehouse

    Li, Zhengpeng; Liu, Shuguang; Tan, Zhengxi; Bliss, Norman B.; Young, Claudia J.; West, Tristram O.; Ogle, Stephen M.

    2014-01-01

    Accurately quantifying the spatial and temporal variability of net primary production (NPP) for croplands is essential to understand regional cropland carbon dynamics. We compared three NPP estimates for croplands in the Midwestern United States: inventory-based estimates using crop yield data from the U.S. Department of Agriculture (USDA) National Agricultural Statistics Service (NASS); estimates from the satellite-based Moderate Resolution Imaging Spectroradiometer (MODIS) NPP product; and estimates from the General Ensemble biogeochemical Modeling System (GEMS) process-based model. The three methods estimated mean NPP in the range of 469–687 g C m−2 yr−1and total NPP in the range of 318–490 Tg C yr−1 for croplands in the Midwest in 2007 and 2008. The NPP estimates from crop yield data and the GEMS model showed the mean NPP for croplands was over 650 g C m−2 yr−1 while the MODIS NPP product estimated the mean NPP was less than 500 g C m−2 yr−1. MODIS NPP also showed very different spatial variability of the cropland NPP from the other two methods. We found these differences were mainly caused by the difference in the land cover data and the crop specific information used in the methods. Our study demonstrated that the detailed mapping of the temporal and spatial change of crop species is critical for estimating the spatial and temporal variability of cropland NPP. We suggest that high resolution land cover data with species–specific crop information should be used in satellite-based and process-based models to improve carbon estimates for croplands.
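
    The inventory-based estimate works by converting reported crop yield to whole-plant carbon. A hedged sketch of that conversion follows; the coefficient values (harvest index, dry-matter and carbon fractions, root:shoot ratio) are illustrative defaults, not the paper's values.

```python
# Convert reported grain yield to an NPP estimate (g C m^-2 yr^-1):
# scale grain dry matter up to whole-shoot biomass via the harvest
# index, add belowground biomass, then convert dry matter to carbon.
def yield_to_npp(grain_yield_g_m2, harvest_index, dry_matter_frac=0.85,
                 carbon_frac=0.45, root_shoot=0.15):
    shoot_dm = grain_yield_g_m2 * dry_matter_frac / harvest_index
    total_dm = shoot_dm * (1.0 + root_shoot)   # add belowground biomass
    return total_dm * carbon_frac              # dry matter -> carbon

# Illustrative corn-like numbers; the result lands in the mean-NPP
# range reported above for the yield-based and GEMS estimates.
npp = yield_to_npp(grain_yield_g_m2=900.0, harvest_index=0.53)
assert 600 < npp < 800
```

Because every coefficient is crop-specific, misassigned crop types in the land cover data propagate directly into the NPP map, which is consistent with the paper's diagnosis of the inter-method differences.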

  9. Critical Review of NOAA's Observation Requirements Process

    NASA Astrophysics Data System (ADS)

    LaJoie, M.; Yapur, M.; Vo, T.; Templeton, A.; Bludis, D.

    2017-12-01

    NOAA's Observing Systems Council (NOSC) maintains a comprehensive database of user observation requirements. The requirements collection process engages NOAA subject matter experts to document and effectively communicate the specific environmental observation measurements (parameters and attributes) needed to produce operational products and pursue research objectives. Documenting user observation requirements in a structured and standardized framework enables NOAA to assess its needs across organizational lines in an impartial, objective, and transparent manner. This structure provides the foundation for selecting, designing, developing, and acquiring observing technologies, systems, and architectures; for budget and contract formulation and decision-making; and for assessing, in a repeatable fashion, the productivity, efficiency, and optimization of NOAA's observing system enterprise. User observation requirements are captured independently from observing technologies. Therefore, they can be addressed by a variety of current or expected observing capabilities and can be remapped flexibly to new and evolving technologies. NOAA's current inventory of user observation requirements was collected over a ten-year period, during which there have been many changes in policies, mission priorities, and funding levels. In light of these changes, the NOSC initiated a critical, in-depth review during 2017 to examine all aspects of user observation requirements and the associated processes. This presentation provides background on the NOAA requirements process, major milestones and outcomes of the critical review, and plans for evolving and connecting observing requirements processes in the next year.

  10. Model-based adaptive 3D sonar reconstruction in reverberating environments.

    PubMed

    Saucan, Augustin-Alexandru; Sintes, Christophe; Chonavel, Thierry; Caillec, Jean-Marc Le

    2015-10-01

    In this paper, we propose a novel model-based approach for 3D underwater scene reconstruction, i.e., bathymetry, for side scan sonar arrays in complex and highly reverberating environments like shallow water areas. The presence of multipath echoes and volume reverberation generates false depth estimates. To improve the resulting bathymetry, this paper proposes and develops an adaptive filter, based on several original geometrical models. This multimodel approach makes it possible to track and separate the direction of arrival trajectories of multiple echoes impinging the array. Echo tracking is perceived as a model-based processing stage, incorporating prior information on the temporal evolution of echoes in order to reject cluttered observations generated by interfering echoes. The results of the proposed filter on simulated and real sonar data showcase the clutter-free and regularized bathymetric reconstruction. Model validation is carried out with goodness of fit tests, and demonstrates the importance of model-based processing for bathymetry reconstruction.

  11. Qualitative comparison of air temperature trends based on NCAR/NCEP reanalysis, model simulations and aerological observation data

    NASA Astrophysics Data System (ADS)

    Rubinstein, K. G.; Khan, V. M.; Sterin, A. M.

    In the present study we discuss two points. The first relates to the applicability of reanalysis data for investigating long-term climate variability. We present results of a comparison of long-term air temperature trends for the troposphere and the lower stratosphere calculated using monthly averaged NCAR/NCEP reanalysis data on one hand and direct rawinsonde observations from 443 stations on the other. The trends and other statistical characteristics are calculated for two overlapping time periods, 1964 through 1998 and 1979 through 1998. These two intervals were chosen in order to examine the influence of satellite observations on the reanalysis data, given that most satellite data appeared after 1979. Vertical profiles of air temperature trends are also analyzed using the two types of data for different seasons. A special criterion is applied to evaluate the degree of coincidence in sign between the air temperature trends derived from the two types of data. Vertical sections of the linear trend averaged over 10-degree zones for both hemispheres are analyzed. It is shown that the two types of data exhibit good coincidence in terms of the trend sign for the lower and middle troposphere and lower stratosphere over areas well covered by the rawinsonde observation network. Significant differences in the air temperature trend values are observed near the land surface and in the tropopause layer. The absolute value of the cooling rate of the tropical lower stratosphere based on the rawinsonde data is larger than that based on the reanalysis data. The presence of a positive trend in the lower troposphere in the belt from ˜ 40N to ˜ 70N is evident in the two data sets. A comparative analysis of the trends for both periods of observation shows that introducing satellite information into the reanalysis data resulted in an increase in the number of stations where the signs of the trend derived from the two data sets coincide, especially in the
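
    The trend calculations referred to above reduce to ordinary least-squares slopes on monthly series, conventionally reported per decade. A minimal sketch (synthetic data and our function names, for illustration only):

```python
# Ordinary least-squares linear trend of a time series, returned in
# units per decade: slope = cov(t, v) / var(t), scaled by 10 years.
def trend_per_decade(t_years, values):
    n = len(t_years)
    mt = sum(t_years) / n
    mv = sum(values) / n
    slope = (sum((t - mt) * (v - mv) for t, v in zip(t_years, values))
             / sum((t - mt) ** 2 for t in t_years))
    return slope * 10.0                     # per year -> per decade

# 35 years of monthly samples with a built-in +0.2 deg/decade trend.
years = [1964 + i / 12 for i in range(420)]
series = [0.02 * (y - 1964) for y in years]
assert abs(trend_per_decade(years, series) - 0.2) < 1e-9
```

Comparing the sign of this slope between the reanalysis series and the collocated rawinsonde series, station by station and level by level, is the kind of sign-coincidence criterion the abstract describes.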

  12. Airport security inspection process model and optimization based on GSPN

    NASA Astrophysics Data System (ADS)

    Mao, Shuainan

    2018-04-01

    To improve the efficiency of the airport security inspection process, a Generalized Stochastic Petri Net (GSPN) is used to establish a model of the security inspection process. The model is used to analyze the bottleneck of the airport security inspection process. A solution to the bottleneck is given: adding a dedicated area where passengers remove outer clothing, together with an additional X-ray detector, can significantly improve efficiency and reduce waiting time.

  13. Modeling marine oily wastewater treatment by a probabilistic agent-based approach.

    PubMed

    Jing, Liang; Chen, Bing; Zhang, Baiyu; Ye, Xudong

    2018-02-01

    This study developed a novel probabilistic agent-based approach for modeling marine oily wastewater treatment processes. It begins by constructing a probability-based agent simulation model, followed by a global sensitivity analysis and a genetic algorithm-based calibration. The proposed modeling approach was tested through a case study of the removal of naphthalene from marine oily wastewater using UV irradiation. The removal of naphthalene was described by an agent-based simulation model using 8 types of agents and 11 reactions. Each reaction was governed by a probability parameter to determine its occurrence. The modeling results showed that the root mean square errors between modeled and observed removal rates were 8.73 and 11.03% for calibration and validation runs, respectively. Reaction competition was analyzed by comparing agent-based reaction probabilities, while agents' heterogeneity was visualized by plotting their real-time spatial distribution, showing a strong potential for reactor design and process optimization. Copyright © 2017 Elsevier Ltd. All rights reserved.
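
    The probability-governed reaction mechanism can be sketched as follows. The agent names, the probability value, and the update rule are our assumptions for illustration; the paper's calibrated model uses 8 agent types and 11 reactions.

```python
import random

# One simulation tick: every agent of a reactant type fires each
# candidate reaction with that reaction's assigned probability,
# converting it to the product type.
def step(counts, reactions, rng):
    # reactions: list of (reactant, product, probability) tuples
    for reactant, product, p in reactions:
        fired = sum(1 for _ in range(counts.get(reactant, 0))
                    if rng.random() < p)
        counts[reactant] -= fired
        counts[product] = counts.get(product, 0) + fired
    return counts

rng = random.Random(0)
counts = {"naphthalene": 1000, "degraded": 0}
reactions = [("naphthalene", "degraded", 0.05)]  # e.g. UV photolysis step
for _ in range(50):
    step(counts, reactions, rng)

assert counts["naphthalene"] + counts["degraded"] == 1000  # mass conserved
assert counts["degraded"] > 800   # ~92% expected after 50 ticks at p=0.05
```

With several reactions competing for the same reactant type, comparing their probabilities directly exposes reaction competition, which is the analysis the abstract highlights.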

  14. Toward a Tighter Coupling between Models and Observations of Arctic Energy Balance

    NASA Astrophysics Data System (ADS)

    L'Ecuyer, T. S.

    2016-12-01

    The Arctic climate is changing more rapidly than almost anywhere else on Earth owing to a number of unique feedbacks that locally amplify the effects of increased greenhouse gas concentrations. While the basic theory behind these feedback mechanisms has been known for a long time, current climate models still struggle to capture observed rates of sea ice decline and ice sheet melt. This may be explained, at least partially, by a lack of observational constraints on cloud and precipitation processes owing to the challenges of making sustained, high quality atmospheric measurements in this inhospitable region. This presentation will introduce a new multi-satellite, multi-model combined Arctic dataset for probing the state of the Arctic climate and documenting and improving prediction models. Recent satellite-based reconstructions of the Arctic energy budget and its annual cycle contained within this dataset will be used to demonstrate that many climate models exhibit significant biases in several key energy flows in the region. These biases, in turn, lead to discrepancies in both the magnitude and seasonality of the implied heat transport into the Arctic from lower latitudes. The potential impacts of these biases on the surface mass balance of the Greenland Ice Sheet will be explored. New estimates of downwelling radiative fluxes that explicitly account for the effects of super-cooled liquid water observed by new active satellite sensors will be used to drive a regional ice sheet model to assess the sensitivity of ice sheet dynamical processes to uncertainties in surface radiation balance.

  15. Inhomogeneous Poisson process rate function inference from dead-time limited observations.

    PubMed

    Verma, Gunjan; Drost, Robert J

    2017-05-01

    The estimation of an inhomogeneous Poisson process (IHPP) rate function from a set of process observations is an important problem arising in optical communications and a variety of other applications. However, because of practical limitations of detector technology, one is often only able to observe a corrupted version of the original process. In this paper, we consider how inference of the rate function is affected by dead time, a period of time after the detection of an event during which a sensor is insensitive to subsequent IHPP events. We propose a flexible nonparametric Bayesian approach to infer an IHPP rate function given dead-time limited process realizations. Simulation results illustrate the effectiveness of our inference approach and suggest its ability to extend the utility of existing sensor technology by permitting more accurate inference on signals whose observations are dead-time limited. We apply our inference algorithm to experimentally collected optical communications data, demonstrating the practical utility of our approach in the context of channel modeling and validation.
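
    The dead-time corruption the paper addresses can be reproduced by simulating an IHPP via thinning and then censoring events that fall within the dead time. This is an illustrative sketch; the rate function, dead-time value, and function names are invented.

```python
import math, random

# Simulate an inhomogeneous Poisson process by Lewis-Shedler thinning:
# draw candidates from a homogeneous process at rate_max, keep each
# with probability rate(t) / rate_max.
def simulate_ihpp(rate, rate_max, t_end, rng):
    t, events = 0.0, []
    while True:
        t += rng.expovariate(rate_max)        # homogeneous candidate gaps
        if t > t_end:
            return events
        if rng.random() < rate(t) / rate_max:  # thinning step
            events.append(t)

# Non-paralyzable dead time: after each detection the sensor is blind
# for tau, so closely spaced events are censored.
def apply_dead_time(events, tau):
    kept, last = [], -float("inf")
    for t in events:
        if t - last >= tau:                    # sensor is live again
            kept.append(t)
            last = t
    return kept

rng = random.Random(1)
rate = lambda t: 50.0 * (1 + math.sin(t))      # example rate function
events = simulate_ihpp(rate, 100.0, 20.0, rng)
observed = apply_dead_time(events, tau=0.02)
assert 0 < len(observed) <= len(events)        # dead time only censors
```

Inference on `observed` that ignores the censoring underestimates the rate precisely where it is highest, which is the bias the proposed Bayesian approach corrects.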

  16. Improved workflow modelling using role activity diagram-based modelling with application to a radiology service case study.

    PubMed

    Shukla, Nagesh; Keast, John E; Ceglarek, Darek

    2014-10-01

    The modelling of complex workflows is an important problem-solving technique within healthcare settings. However, currently most of the workflow models use a simplified flow chart of patient flow obtained using on-site observations, group-based debates and brainstorming sessions, together with historic patient data. This paper presents a systematic and semi-automatic methodology for knowledge acquisition with detailed process representation using sequential interviews of people in the key roles involved in the service delivery process. The proposed methodology allows the modelling of roles, interactions, actions, and decisions involved in the service delivery process. This approach is based on protocol generation and analysis techniques such as: (i) initial protocol generation based on qualitative interviews of radiology staff, (ii) extraction of key features of the service delivery process, (iii) discovering the relationships among the key features extracted, and, (iv) a graphical representation of the final structured model of the service delivery process. The methodology is demonstrated through a case study of a magnetic resonance (MR) scanning service-delivery process in the radiology department of a large hospital. A set of guidelines is also presented in this paper to visually analyze the resulting process model for identifying process vulnerabilities. A comparative analysis of different workflow models is also conducted. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  17. Observing Tsunamis in the Ionosphere Using Ground Based GPS Measurements

    NASA Technical Reports Server (NTRS)

    Galvan, D. A.; Komjathy, A.; Song, Y. Tony; Stephens, P.; Hickey, M. P.; Foster, J.

    2011-01-01

    Ground-based Global Positioning System (GPS) measurements of ionospheric Total Electron Content (TEC) show variations consistent with atmospheric internal gravity waves caused by ocean tsunamis following recent seismic events, including the Tohoku tsunami of March 11, 2011. We observe fluctuations correlated in time, space, and wave properties with this tsunami in TEC estimates processed using JPL's Global Ionospheric Mapping Software. These TEC estimates were band-pass filtered to remove ionospheric TEC variations with periods outside the typical range of internal gravity waves caused by tsunamis. Observable variations in TEC appear correlated with the Tohoku tsunami near the epicenter, at Hawaii, and near the west coast of North America. Disturbance magnitudes are 1-10% of the background TEC value. Observations near the epicenter are compared to estimates of expected tsunami-driven TEC variations produced by Embry Riddle Aeronautical University's Spectral Full Wave Model, an atmosphere-ionosphere coupling model, and found to be in good agreement. The potential exists to apply these detection techniques to real-time GPS TEC data, providing estimates of tsunami speed and amplitude that may be useful for future early warning systems.

  18. Characterization of the Sahelian-Sudan rainfall based on observations and regional climate models

    NASA Astrophysics Data System (ADS)

    Salih, Abubakr A. M.; Elagib, Nadir Ahmed; Tjernström, Michael; Zhang, Qiong

    2018-04-01

    The African Sahel region is known to be highly vulnerable to climate variability and change. We analyze rainfall in Sahelian Sudan in terms of the distribution of rain-days and amounts, and examine whether regional climate models can capture these rainfall features. Three regional models, namely the Regional Model (REMO), the Rossby Center Atmospheric Model (RCA) and the Regional Climate Model (RegCM4), are evaluated against gridded observations (Climate Research Unit, Tropical Rainfall Measuring Mission, and ERA-Interim reanalysis) and rain-gauge data from six arid and semi-arid weather stations across Sahelian Sudan over the period 1989 to 2008. Most of the observed rain-days are characterized by weak (0.1-1.0 mm/day) to moderate (> 1.0-10.0 mm/day) rainfall, with average frequencies of 18.5% and 48.0% of the total annual rain-days, respectively. Although very strong rainfall events (> 30.0 mm/day) occur rarely, they account for a large fraction of the total annual rainfall (28-42% across the stations). The performance of the models varies both spatially and temporally. RegCM4 most closely reproduces the observed annual rainfall cycle, especially for the more arid locations, but all three models fail to capture the strong rainfall events and hence underestimate their contribution to the total annual number of rain-days and rainfall amount. In an annual-average sense, however, excessive moderate rainfall in the models compensates for this underestimation. The present study uncovers some of the models' limitations in skillfully reproducing the observed climate over dry regions; it will aid model users in recognizing the uncertainties in model output and help the climate and hydrological modeling communities improve their models.
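    The rain-day classification used above reduces to a few lines of array arithmetic. The class edges follow the abstract; the sample rainfall values below are invented for illustration:

    ```python
    import numpy as np

    def rain_day_stats(daily_mm):
        """Rain-day class frequencies and the very-strong share of total rainfall."""
        r = np.asarray(daily_mm, dtype=float)
        rain = r[r >= 0.1]                      # rain-days only (>= 0.1 mm/day)
        n = rain.size
        freq = {
            "weak":        np.sum(rain <= 1.0) / n,                      # 0.1-1.0
            "moderate":    np.sum((rain > 1.0) & (rain <= 10.0)) / n,    # >1.0-10.0
            "strong":      np.sum((rain > 10.0) & (rain <= 30.0)) / n,   # >10-30
            "very_strong": np.sum(rain > 30.0) / n,                      # >30
        }
        share_very_strong = rain[rain > 30.0].sum() / rain.sum()
        return freq, share_very_strong

    freq, share = rain_day_stats([0.0, 0.5, 2.0, 0.3, 15.0, 45.0, 0.0, 8.0])
    ```

    In this toy record a single very strong day contributes well over half of the total rainfall, the same asymmetry the abstract reports for the Sudanese stations.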

  19. Evaluation of Multiclass Model Observers in PET LROC Studies

    NASA Astrophysics Data System (ADS)

    Gifford, H. C.; Kinahan, P. E.; Lartizien, C.; King, M. A.

    2007-02-01

    A localization ROC (LROC) study was conducted to evaluate nonprewhitening matched-filter (NPW) and channelized NPW (CNPW) versions of a multiclass model observer as predictors of human tumor-detection performance with PET images. Target localization is explicitly performed by these model observers. Tumors were placed in the liver, lungs, and background soft tissue of a mathematical phantom, and the data simulation modeled a full-3D acquisition mode. Reconstructions were performed with the FORE+AWOSEM algorithm. The LROC study measured observer performance with 2D images consisting of either coronal, sagittal, or transverse views of the same set of cases. Versions of the CNPW observer based on two previously published difference-of-Gaussian channel models demonstrated good quantitative agreement with human observers. One interpretation of these results treats the CNPW observer as a channelized Hotelling observer with implicit internal noise

  20. Supporting the operational use of process based hydrological models and NASA Earth Observations for use in land management and post-fire remediation through a Rapid Response Erosion Database (RRED).

    NASA Astrophysics Data System (ADS)

    Miller, M. E.; Elliot, W.; Billmire, M.; Robichaud, P. R.; Banach, D. M.

    2017-12-01

    We have built a Rapid Response Erosion Database (RRED, http://rred.mtri.org/rred/) for the continental United States to allow land managers to access properly formatted spatial model inputs for the Water Erosion Prediction Project (WEPP). Spatially-explicit process-based models like WEPP require spatial inputs that include digital elevation models (DEMs), soil, climate and land cover. The online database delivers either a 10m or 30m USGS DEM, land cover derived from the Landfire project, and soil data derived from SSURGO and STATSGO datasets. The spatial layers are projected into UTM coordinates and pre-registered for modeling. WEPP soil parameter files are also created along with linkage files to match both spatial land cover and soils data with the appropriate WEPP parameter files. Our goal is to make process-based models more accessible by preparing spatial inputs ahead of time allowing modelers to focus on addressing scenarios of concern. The database provides comprehensive support for post-fire hydrological modeling by allowing users to upload spatial soil burn severity maps, and within moments returns spatial model inputs. Rapid response is critical following natural disasters. After moderate and high severity wildfires, flooding, erosion, and debris flows are a major threat to life, property and municipal water supplies. Mitigation measures must be rapidly implemented if they are to be effective, but they are expensive and cannot be applied everywhere. Fire, runoff, and erosion risks also are highly heterogeneous in space, creating an urgent need for rapid, spatially-explicit assessment. The database has been used to help assess and plan remediation on over a dozen wildfires in the Western US. Future plans include expanding spatial coverage, improving model input data and supporting additional models. Our goal is to facilitate the use of the best possible datasets and models to support the conservation of soil and water.

  1. Advective transport observations with MODPATH-OBS--documentation of the MODPATH observation process

    USGS Publications Warehouse

    Hanson, R.T.; Kauffman, L.K.; Hill, M.C.; Dickinson, J.E.; Mehl, S.W.

    2013-01-01

    The MODPATH-OBS computer program described in this report is designed to calculate simulated equivalents for observations related to advective groundwater transport that can be represented in a quantitative way by using simulated particle-tracking data. The simulated equivalents supported by MODPATH-OBS are (1) distance from a source location at a defined time, or proximity to an observed location; (2) time of travel from an initial location to defined locations, areas, or volumes of the simulated system; (3) concentrations used to simulate groundwater age; and (4) percentages of water derived from contributing source areas. Although particle tracking only simulates the advective component of conservative transport, effects of non-conservative processes such as retardation can be approximated through manipulation of the effective-porosity value used to calculate velocity, based on the properties of selected conservative tracers. The program can also account for simple decay or production, but it cannot account for diffusion. Dispersion can be represented through direct simulation of subsurface heterogeneity and the use of many particles. MODPATH-OBS acts as a postprocessor to MODPATH, so the sequence of model runs generally required is MODFLOW, MODPATH, and MODPATH-OBS. The versions of MODFLOW and MODPATH that support the version of MODPATH-OBS presented in this report are MODFLOW-2005 or MODFLOW-LGR, and MODPATH-LGR. MODFLOW-LGR is derived from MODFLOW-2005 and supports local grid refinement. MODPATH-LGR is derived from MODPATH 5 and MODPATH 6; it supports the forward and backward tracking of particles through locally refined grids and provides the output needed for MODPATH-OBS. For a single grid and no observations, MODPATH-LGR results are equivalent to MODPATH 5.
MODPATH-LGR and MODPATH-OBS simulations can use nearly all of the capabilities of MODFLOW-2005 and MODFLOW-LGR; for example, simulations may be steady-state, transient, or a combination
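    As a rough illustration of how simulated equivalents such as (1) and (2) can be computed from particle-tracking output, the sketch below interpolates along a hypothetical particle track. It is not MODPATH-OBS code and omits grids, effective porosity, and observation weighting:

    ```python
    import numpy as np

    def travel_time_to_box(times, xs, ys, xlim, ylim):
        """Simulated equivalent (2): first recorded arrival time in a target box."""
        inside = ((xs >= xlim[0]) & (xs <= xlim[1]) &
                  (ys >= ylim[0]) & (ys <= ylim[1]))
        if not inside.any():
            return None
        return float(times[np.argmax(inside)])   # index of first True

    def distance_at_time(times, xs, ys, t_obs):
        """Simulated equivalent (1): distance from the source at time t_obs."""
        x = np.interp(t_obs, times, xs)          # linear interpolation along track
        y = np.interp(t_obs, times, ys)
        return float(np.hypot(x - xs[0], y - ys[0]))

    # A hypothetical particle track (times in years, coordinates in metres)
    t = np.array([0.0, 1.0, 2.0, 3.0])
    x = np.array([0.0, 10.0, 20.0, 30.0])
    y = np.array([0.0, 0.0, 0.0, 0.0])
    arrival = travel_time_to_box(t, x, y, (15.0, 25.0), (-1.0, 1.0))
    dist = distance_at_time(t, x, y, 1.5)
    ```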

  2. Modeling winter hydrological processes under differing climatic conditions: Modifying WEPP

    NASA Astrophysics Data System (ADS)

    Dun, Shuhui

    Water erosion is a serious and continuous environmental problem worldwide. In cold regions, soil freeze-thaw has great impacts on infiltration and erosion; rain or snowmelt on a thawing soil can cause severe water erosion. Of equal importance are snow accumulation and snowmelt, which can be the predominant hydrological processes in areas of mid- to high latitudes and in forested watersheds. Modelers must properly simulate winter processes to adequately represent the overall hydrological outcome and sediment and chemical transport in these areas. Adequate modeling of winter hydrology is presently lacking in water erosion models. Most of these models are based on the functional Universal Soil Loss Equation (USLE) or its revised forms, e.g., the Revised USLE (RUSLE). In RUSLE, a seasonally variable soil erodibility factor (K) is used to account for the effects of frozen and thawing soil. Yet the use of this factor requires observation data for calibration, and such a simplified approach cannot represent the complicated transient freeze-thaw processes and their impacts on surface runoff and erosion. The Water Erosion Prediction Project (WEPP) watershed model, physically-based erosion prediction software developed by the USDA-ARS, has seen numerous applications within and outside the US. WEPP simulates winter processes, including snow accumulation, snowmelt, and soil freeze-thaw, using an approach based on mass and energy conservation. However, previous studies showed the inadequacy of the winter routines in the WEPP model. Therefore, the objectives of this study were: (1) to adapt a modeling approach for winter hydrology based on mass and energy conservation, and to implement this approach into a physically-oriented hydrological model, such as WEPP; and (2) to assess this modeling approach through case applications under different geographic conditions.
A new winter routine was developed and its performance was evaluated by incorporating it into WEPP (v2008.9) and then applying WEPP to
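    As a much simpler stand-in for the mass- and energy-conservation winter routine described above, a degree-day sketch captures the basic bookkeeping of snow accumulation and melt; the melt factor and temperature threshold are illustrative assumptions, not WEPP parameters:

    ```python
    def simulate_snowpack(temps_c, precip_mm, melt_factor=3.0, t_melt=0.0):
        """Daily degree-day snowpack: accumulate below t_melt, melt above it.

        melt_factor is in mm of snow-water equivalent per degree-day (illustrative).
        """
        swe = 0.0                 # snow-water equivalent stored in the pack, mm
        melt_series = []
        for t, p in zip(temps_c, precip_mm):
            if t <= t_melt:
                swe += p                                   # precipitation falls as snow
                melt = 0.0
            else:
                melt = min(swe, melt_factor * (t - t_melt))  # cannot melt more than stored
                swe -= melt
            melt_series.append(melt)
        return swe, melt_series

    # Two snowfall days followed by a warm spell
    final_swe, melt = simulate_snowpack([-2, -5, 1, 4, 6], [5, 10, 0, 0, 0])
    ```

    An energy-balance routine replaces the single melt factor with explicit radiation, sensible/latent heat, and ground-heat terms, which is what distinguishes WEPP's approach from this sketch.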

  3. Mesoscopic Model — Advanced Simulation of Microforming Processes

    NASA Astrophysics Data System (ADS)

    Geißdörfer, Stefan; Engel, Ulf; Geiger, Manfred

    2007-04-01

    Continued miniaturization in many fields of forming technology implies the need for a better understanding of the effects occurring when scaling down from the conventional macroscopic scale to the microscale. At microscale, the material can no longer be regarded as a homogeneous continuum because only a few grains are present in the deformation zone. This leads to a change in material behaviour, resulting, among other effects, in a large scatter of forming results. Experiments have shown a correlation between the integral flow stress of the workpiece and the scatter of the process factors on the one hand and the mean grain size and its standard deviation on the other. Conventional FE-simulation of scaled-down processes cannot account for the observed size-effects, such as the actual reduction of the flow stress, the increasing scatter of the process factors and a local material flow different from that obtained for macroparts. For that reason, a new simulation model has been developed that takes all of these size-effects into account. The present paper deals with the theoretical background of the new mesoscopic model and its characteristics, such as synthetic grain structure generation and the calculation of micro material properties based on conventional material properties. The simulation model is verified by carrying out various experiments with different mean grain sizes and grain structures but the same geometrical dimensions of the workpiece.

  4. A Bayesian Network Based Global Sensitivity Analysis Method for Identifying Dominant Processes in a Multi-physics Model

    NASA Astrophysics Data System (ADS)

    Dai, H.; Chen, X.; Ye, M.; Song, X.; Zachara, J. M.

    2016-12-01

    Sensitivity analysis has been an important tool in groundwater modeling for identifying influential parameters. Among various sensitivity analysis methods, variance-based global sensitivity analysis has gained popularity for its model independence and its capability of providing accurate sensitivity measurements. However, the conventional variance-based method only considers the uncertainty contribution of single model parameters. In this research, we extended the variance-based method to consider more uncertainty sources and developed a new framework that allows flexible combinations of different uncertainty components. We decompose the uncertainty sources into a hierarchical three-layer structure: scenario, model, and parametric. Furthermore, each layer of uncertainty sources can contain multiple components. An uncertainty and sensitivity analysis framework was then constructed following this three-layer structure using a Bayesian network. Different uncertainty components are represented as uncertain nodes in this network. Through the framework, variance-based sensitivity analysis can be implemented with great flexibility, using different grouping strategies for uncertainty components. The variance-based sensitivity analysis is thus improved to investigate the importance of an extended range of uncertainty sources: scenario, model, and other combinations of uncertainty components that can represent key model system processes (e.g., the groundwater recharge process or the flow and reactive transport process). For test and demonstration purposes, the developed methodology was applied to a test case of real-world groundwater reactive transport modeling with various uncertainty sources. The results demonstrate that the new sensitivity analysis method is able to estimate accurate importance measurements for any uncertainty sources formed by different combinations of uncertainty components. The new methodology can
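    A minimal sketch of the conventional single-parameter variance-based analysis that the framework extends, using a pick-freeze (Sobol') estimator on a toy linear model; the model form and coefficients are assumptions for illustration only:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200_000
    a, b = 3.0, 1.0                      # toy model: Y = a*X1 + b*X2, Xi ~ U(0,1)
    x1 = rng.random(n)
    x2 = rng.random(n)
    x2_resampled = rng.random(n)

    y = a * x1 + b * x2
    y_pick = a * x1 + b * x2_resampled   # X1 "frozen", X2 resampled

    # First-order Sobol' index of X1; the analytic value is a^2/(a^2+b^2) = 0.9
    s1 = float(np.cov(y, y_pick)[0, 1] / np.var(y))
    ```

    The extension described in the abstract generalizes this idea by grouping uncertain nodes of the Bayesian network (scenario, model, parameter components) and computing variance contributions for each group rather than for single parameters.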

  5. The evolution of process-based hydrologic models: historical challenges and the collective quest for physical realism

    NASA Astrophysics Data System (ADS)

    Clark, Martyn P.; Bierkens, Marc F. P.; Samaniego, Luis; Woods, Ross A.; Uijlenhoet, Remko; Bennett, Katrina E.; Pauwels, Valentijn R. N.; Cai, Xitian; Wood, Andrew W.; Peters-Lidard, Christa D.

    2017-07-01

    The diversity in hydrologic models has historically led to great controversy on the correct approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. In this paper, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We illustrate how modeling advances have been made by groups using models of different type and complexity, and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.

  6. The evolution of process-based hydrologic models: historical challenges and the collective quest for physical realism

    NASA Astrophysics Data System (ADS)

    Clark, M. P.; Nijssen, B.; Wood, A.; Mizukami, N.; Newman, A. J.

    2017-12-01

    The diversity in hydrologic models has historically led to great controversy on the "correct" approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. In this paper, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We illustrate how modeling advances have been made by groups using models of different type and complexity, and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.

  7. Modeling cell adhesion and proliferation: a cellular-automata based approach.

    PubMed

    Vivas, J; Garzón-Alvarado, D; Cerrolaza, M

    Cell adhesion is a process that involves the interaction between the cell membrane and another surface, either a cell or a substrate. Unlike experimental tests, computer models can simulate processes and study the results of experiments in a shorter time and at lower cost. One of the tools used to simulate biological processes is the cellular automaton, a dynamic system that is discrete in both space and time. This work describes a computer model, based on cellular automata, of the adhesion and cell proliferation process, to predict the behavior of a cell population in suspension and adhered to a substrate. The values of the simulated system were obtained through experimental tests on fibroblast monolayer cultures. The results allow us to estimate the settling time of cells in culture as well as the adhesion and proliferation times. The change in cell morphology as adhesion over the contact surface progresses was also observed. The formation of the initial link between the cell and the substrate was observed after 100 min, with the cell on the substrate retaining its spherical morphology during the simulation. The cellular automata model developed is, however, a simplified representation of the steps in the adhesion process and the subsequent proliferation. A combined framework of experimental and computational simulation based on cellular automata was proposed to represent fibroblast adhesion on substrates and the macro-scale changes observed in the cell during the adhesion process. The approach proved to be simple and efficient.
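    A minimal cellular-automaton sketch of the proliferation step, assuming adhered cells divide into a random empty 4-neighbour with a fixed probability; the grid size and division rate are invented, not the fibroblast-calibrated values of the study:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def step(grid, p_div):
        """One CA update: each adhered cell (1) may divide into an empty 4-neighbour."""
        new = grid.copy()
        n_rows, n_cols = grid.shape
        for i, j in zip(*np.nonzero(grid)):
            if rng.random() < p_div:
                nbrs = [(i + di, j + dj)
                        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                        if 0 <= i + di < n_rows and 0 <= j + dj < n_cols
                        and new[i + di, j + dj] == 0]
                if nbrs:
                    new[nbrs[rng.integers(len(nbrs))]] = 1   # daughter cell
        return new

    grid = np.zeros((20, 20), dtype=int)
    grid[10, 10] = 1                       # one adhered cell seeds the colony
    for _ in range(30):
        grid = step(grid, p_div=0.5)
    population = int(grid.sum())
    ```

    The full model in the paper adds a settling/adhesion stage and morphology states on top of this kind of lattice update.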

  8. Developing a probability-based model of aquifer vulnerability in an agricultural region

    NASA Astrophysics Data System (ADS)

    Chen, Shih-Kai; Jang, Cheng-Shin; Peng, Yi-Huei

    2013-04-01

    Hydrogeological settings of aquifers strongly influence regional groundwater movement and pollution processes. Establishing a map of aquifer vulnerability is critical for planning a groundwater quality protection scheme. This study developed a novel probability-based DRASTIC model of aquifer vulnerability in the Choushui River alluvial fan, Taiwan, using indicator kriging, and determined various risk categories of contamination potential based on estimated vulnerability indexes. Categories and ratings of six parameters in the probability-based DRASTIC model were probabilistically characterized according to two parameter classification methods: selecting the maximum estimation probability and calculating an expected value. Moreover, the probability-based estimation and assessment gave excellent insight into propagating the uncertainty of parameters arising from limited observation data. To examine the capacity of the developed probability-based DRASTIC model to predict pollution, the medium, high, and very high risk categories of contamination potential were compared with observed nitrate-N exceeding 0.5 mg/L, which indicates anthropogenic groundwater pollution. The results reveal that the developed probability-based DRASTIC model is capable of predicting high nitrate-N groundwater pollution and characterizing parameter uncertainty via the probability estimation processes.
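    The expected-value rating step can be illustrated as follows; the weights, class probabilities, ratings, and three-parameter toy index are hypothetical, not the study's calibrated six-parameter DRASTIC values:

    ```python
    import numpy as np

    def expected_rating(probs, ratings):
        """Expected-value rating of one parameter from estimation probabilities."""
        return float(np.dot(probs, ratings))

    # Hypothetical weights and kriging-derived class probabilities:
    weights = [5, 4, 3]
    ratings = [
        expected_rating([0.2, 0.5, 0.3], [3, 5, 7]),  # e.g. depth-to-water classes
        expected_rating([0.6, 0.4], [4, 8]),          # e.g. net-recharge classes
        expected_rating([1.0], [6]),                  # e.g. soil media (certain)
    ]
    # Vulnerability index: weighted sum of expected ratings
    vulnerability_index = float(np.dot(weights, ratings))
    ```

    The alternative classification method mentioned above would instead take the rating of the class with the maximum estimation probability rather than the probability-weighted average.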

  9. Mathematical model of the silicon smelting process based on a pelletized charge from technogenic raw materials

    NASA Astrophysics Data System (ADS)

    Nemchinova, N. V.; Tyutrin, A. A.; Salov, V. M.

    2018-03-01

    The silicon production process in electric arc reduction furnaces (EAF) is studied by means of a mathematical model, using a pelletized charge as an additive to the standard charge. The results obtained with the model support analysis of the behavior of the charge components during melting and the achievement of optimal final parameters of the silicon production process. The authors propose using technogenic waste, in pelletized form, as a raw material for silicon production, with liquid glass and aluminum-production dust from electrostatic precipitators as binders. Mathematical modeling with the 'Selector' software package was used as the basis for the theoretical study. The model simulates four furnace temperature zones and a crystalline silicon phase (25 °C). Its main advantage is the ability to analyze the behavior of all burden materials (including the pelletized charge) in the carbothermic process, based on thermodynamic data on the probable interactions of the burden materials. The model accounts for 17 elements entering the furnace with raw materials, electrodes and air. The silicon melt obtained in the modeling contained 91.73 wt.% of the target product. The simulation showed that, with the proposed combined charge, silicon recovery reached 69.248%, in good agreement with practical data. The modeled chemical composition of crystalline silicon also agreed well with chemical analyses of real silicon samples. The work demonstrates the efficiency of mathematical modeling methods for studying the carbothermic silicon production process, with its complex interphase transformations and numerous intermediate compounds, when a pelletized charge is used as an additive to the traditional one.

  10. Mechanochemical models of processive molecular motors

    NASA Astrophysics Data System (ADS)

    Lan, Ganhui; Sun, Sean X.

    2012-05-01

    Motor proteins are the molecular engines powering the living cell. These nanometre-sized molecules convert chemical energy, both enthalpic and entropic, into useful mechanical work. High-resolution single-molecule experiments can now observe motor protein movement with increasing precision. The emerging data must be combined with structural and kinetic measurements to develop a quantitative mechanism. This article describes a modelling framework in which a quantitative understanding of motor behaviour can be developed from the protein structure. The framework is applied to myosin motors, with emphasis on how synchrony between motor domains gives rise to processive unidirectional movement. The modelling approach shows that the elasticity of protein domains is important in regulating motor function. Simple models of protein domain elasticity are presented. The framework can be generalized to other motor systems, or to an ensemble of motors such as in muscle contraction. Indeed, for hundreds of myosins, our framework reduces to the Huxley-Simmons description of muscle movement in the mean-field limit.
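    A minimal kinetic Monte Carlo sketch of processive stepping against a load, using a Bell-type load-dependent rate; all constants (attempt rate, load, distance parameter, step size) are illustrative assumptions, not fitted myosin parameters:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def run_motor(n_steps, k0=100.0, force_pn=2.0, d_nm=2.0, kT=4.1, step_nm=8.0):
        """Single motor taking n_steps forward steps with exponential dwell times.

        Bell-type load dependence: k = k0 * exp(-F * d / kT), with F in pN,
        d in nm, and kT in pN*nm (room temperature ~4.1 pN nm).
        """
        k = k0 * np.exp(-force_pn * d_nm / kT)      # load-reduced stepping rate, 1/s
        dwell = rng.exponential(1.0 / k, n_steps)   # dwell time before each step
        t = np.cumsum(dwell)
        x = step_nm * np.arange(1, n_steps + 1)     # position after each step, nm
        return t, x

    t, x = run_motor(1000)
    mean_velocity = x[-1] / t[-1]                   # nm/s
    ```

    The framework in the article goes further by coupling such chemical transitions to elastic strain between the two motor domains, so that the rates themselves depend on the conformation of the partner head.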

  11. Unraveling the sub-processes of selective attention: insights from dynamic modeling and continuous behavior.

    PubMed

    Frisch, Simon; Dshemuchadse, Maja; Görner, Max; Goschke, Thomas; Scherbaum, Stefan

    2015-11-01

    Selective attention biases information processing toward stimuli that are relevant for achieving our goals. However, the nature of this bias is under debate: Does it solely rely on the amplification of goal-relevant information or is there a need for additional inhibitory processes that selectively suppress currently distracting information? Here, we explored the processes underlying selective attention with a dynamic, modeling-based approach that focuses on the continuous evolution of behavior over time. We present two dynamic neural field models incorporating the diverging theoretical assumptions. Simulations with both models showed that they make similar predictions with regard to response times but differ markedly with regard to their continuous behavior. Human data observed via mouse tracking as a continuous measure of performance revealed evidence for the model solely based on amplification but no indication of persisting selective distracter inhibition.

  12. How Does Higher Frequency Monitoring Data Affect the Calibration of a Process-Based Water Quality Model?

    NASA Astrophysics Data System (ADS)

    Jackson-Blake, L.

    2014-12-01

    Process-based catchment water quality models are increasingly used as tools to inform land management. However, for such models to be reliable they need to be well calibrated and shown to reproduce key catchment processes. Calibration can be challenging for process-based models, which tend to be complex and highly parameterised. Calibrating a large number of parameters generally requires a large amount of monitoring data, but even in well-studied catchments, streams are often only sampled at a fortnightly or monthly frequency. The primary aim of this study was therefore to investigate how the quality and uncertainty of model simulations produced by one process-based catchment model, INCA-P (the INtegrated CAtchment model of Phosphorus dynamics), were improved by calibration to higher frequency water chemistry data. Two model calibrations were carried out for a small rural Scottish catchment: one using 18 months of daily total dissolved phosphorus (TDP) concentration data, another using a fortnightly dataset derived from the daily data. To aid comparability, calibrations were carried out automatically using the MCMC-DREAM algorithm. Using daily rather than fortnightly data resulted in improved simulation of the magnitude of peak TDP concentrations, in turn resulting in improved model performance statistics. Marginal posteriors were better constrained by the higher frequency data, resulting in a large reduction in parameter-related uncertainty in simulated TDP (the 95% credible interval decreased from 26 to 6 μg/l). The number of parameters that could be reliably auto-calibrated was lower for the fortnightly data, leading to the recommendation that parameters should not be varied spatially for models such as INCA-P unless there is solid evidence that this is appropriate, or there is a real need to do so for the model to fulfil its purpose. Secondary study aims were to highlight the subjective elements involved in auto-calibration and suggest practical improvements
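    The sampling-frequency effect can be illustrated by thinning a synthetic daily TDP record to fortnightly grab samples; the series below is invented and unrelated to the Scottish catchment data:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    days = np.arange(540)                               # ~18 months of daily samples
    baseflow = 10 + 3 * np.sin(2 * np.pi * days / 365)  # seasonal background, ug/l
    tdp_daily = baseflow.copy()
    storm_days = rng.choice(540, size=12, replace=False)
    tdp_daily[storm_days] += rng.uniform(20, 40, size=12)  # short-lived peaks

    tdp_fortnightly = tdp_daily[::14]       # what a fortnightly sampler would see
    peak_ratio = tdp_fortnightly.max() / tdp_daily.max()
    ```

    Because the peaks last only a day, a fortnightly sampler usually misses them, which is why calibrating to the thinned record under-constrains the parameters controlling peak TDP concentrations.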

  13. Modeling Web-Based Educational Systems: Process Design Teaching Model

    ERIC Educational Resources Information Center

    Rokou, Franca Pantano; Rokou, Elena; Rokos, Yannis

    2004-01-01

    Using modeling languages is essential to the construction of educational systems based on software engineering principles and methods. Furthermore, the instructional design is undoubtedly the cornerstone of the design and development of educational systems. Although several methodologies and languages have been proposed for the specification of…

  14. Assessing the hydrological impacts of Tropical Cyclones on the Carolinas: An observational and modeling based investigation

    NASA Astrophysics Data System (ADS)

    Leeper, R. D.; Prat, O. P.; Blanton, B. O.

    2012-12-01

    During the warm season, the Carolinas are particularly prone to tropical cyclone (TC) activity and can be impacted in many different ways depending on storm track. The coasts of the Carolinas are the most vulnerable areas, but particular situations (Frances and Ivan, 2004) affected communities far from the coasts (Prat and Nelson, 2012). Regardless of where landfall occurs, TCs are often associated with intense precipitation and strong winds triggering a variety of natural hazards (storm surge, flooding, landslides). The assessment of societal and environmental impacts of TCs requires a suite of observations. The scarcity of station coverage, sensor limitations, and rainfall retrieval uncertainties limit the ability to accurately assess the impact of extreme precipitation events. Therefore, numerical models such as the Weather Research and Forecasting (WRF) model can be valuable tools to investigate those impacts at regional and local scales and bridge gaps in the observations. The goal of this study is to investigate the impact of TCs across the Carolinas using both observational and modeling technologies, and to explore the usefulness of numerical methods in data-scarce regions. To fully assess TC impacts on the Carolinas' inhabitants, storms affecting both coastal and inland communities will be selected, and high-resolution WRF ensemble simulations will be generated from a suite of physics schemes for each TC to investigate its impact at finer scales. The ensemble member performance will be evaluated with respect to ground-based and satellite observations. Furthermore, results from the high-resolution WRF simulations, including average wind speed and sea level pressure, will be used with the ADCIRC storm-surge and wave model (Westerink et al., 2008) to simulate storm surge and waves along the Carolinas coast for TCs travelling along the coast or making landfall. This work aims to provide an assessment of the various types of impacts TCs can have

  15. Methane emissions from floodplains in the Amazon Basin: towards a process-based model for global applications

    NASA Astrophysics Data System (ADS)

    Ringeval, B.; Houweling, S.; van Bodegom, P. M.; Spahni, R.; van Beek, R.; Joos, F.; Röckmann, T.

    2013-10-01

    Tropical wetlands are estimated to represent about 50% of natural wetland emissions and to explain a large fraction of the observed CH4 variability on time scales ranging from glacial-interglacial cycles to the currently observed year-to-year variability. Despite their importance, however, tropical wetlands are poorly represented in global models aiming to predict global CH4 emissions. This study documents the first regional-scale, process-based model of CH4 emissions from tropical floodplains. The LPX-Bern Dynamic Global Vegetation Model (LPX hereafter) was modified to represent floodplain hydrology, vegetation and associated CH4 emissions. The extent of tropical floodplains was prescribed using output from the spatially-explicit hydrology model PCR-GLOBWB. We introduced new Plant Functional Types (PFTs) that explicitly represent floodplain vegetation. The PFT parameterizations were evaluated against available remote sensing datasets (GLC2000 land cover and MODIS Net Primary Productivity). Simulated CH4 flux densities were evaluated against field observations and regional flux inventories. Simulated CH4 emissions at Amazon Basin scale were compared to model simulations performed in the WETCHIMP intercomparison project. We found that LPX-simulated CH4 flux densities are in reasonable agreement with observations at the field scale, but with a tendency to overestimate the flux observed at specific sites. In addition, the model did not reproduce between-site variations or between-year variations within a site. Unfortunately, site information is too limited to confirm or rule out some model features. At the Amazon Basin scale, our results underline the large uncertainty in the magnitude of wetland CH4 emissions. In particular, uncertainties in floodplain extent (i.e., the difference between GLC2000 and PCR-GLOBWB output) modulate the simulated emissions by a factor of about 2. Our best estimates, using PCR-GLOBWB in combination with GLC2000, lead to simulated Amazon

  16. Combining Multi-Source Remotely Sensed Data and a Process-Based Model for Forest Aboveground Biomass Updating.

    PubMed

    Lu, Xiaoman; Zheng, Guang; Miller, Colton; Alvarado, Ernesto

    2017-09-08

    Monitoring and understanding the spatio-temporal variations of forest aboveground biomass (AGB) is a key basis to quantitatively assess the carbon sequestration capacity of a forest ecosystem. To map and update forest AGB in the Greater Khingan Mountains (GKM) of China, this work proposes a physical-based approach. Based on the baseline forest AGB from Landsat Enhanced Thematic Mapper Plus (ETM+) images in 2008, we dynamically updated the annual forest AGB from 2009 to 2012 by adding the annual AGB increment (ABI) obtained from the simulated daily and annual net primary productivity (NPP) using the Boreal Ecosystem Productivity Simulator (BEPS) model. The 2012 result was validated by both field- and aerial laser scanning (ALS)-based AGBs. The predicted forest AGB for 2012 estimated from the process-based model can explain 31% (n = 35, p < 0.05, RMSE = 2.20 kg/m²) and 85% (n = 100, p < 0.01, RMSE = 1.71 kg/m²) of variation in field- and ALS-based forest AGBs, respectively. However, due to the saturation of optical remote sensing-based spectral signals and contribution of understory vegetation, the BEPS-based AGB tended to underestimate/overestimate the AGB for dense/sparse forests. Generally, our results showed that the remotely sensed forest AGB estimates could serve as the initial carbon pool to parameterize the process-based model for NPP simulation, and the combination of the baseline forest AGB and BEPS model could effectively update the spatiotemporal distribution of forest AGB.
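    The updating scheme, a baseline AGB rolled forward with an annual NPP-derived increment, can be sketched as below; the allocation and turnover constants are illustrative assumptions, not BEPS parameters:

    ```python
    def update_agb(agb_baseline, annual_npp, alloc=0.5, turnover=0.02):
        """Roll baseline AGB forward: each year add an NPP-derived increment (ABI).

        alloc: fraction of NPP allocated to aboveground growth (assumed).
        turnover: fraction of standing AGB lost per year (assumed).
        """
        agb = [float(agb_baseline)]
        for npp in annual_npp:
            abi = alloc * npp - turnover * agb[-1]   # annual increment minus losses
            agb.append(agb[-1] + abi)
        return agb

    # kg C/m^2: a 2008 baseline plus four simulated annual NPP values (2009-2012)
    trajectory = update_agb(4.0, [0.6, 0.55, 0.65, 0.6])
    ```

    In the study, the baseline comes from the 2008 ETM+ map and the increments from BEPS-simulated NPP; this sketch only shows the bookkeeping of the annual update.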

  17. Combining Multi-Source Remotely Sensed Data and a Process-Based Model for Forest Aboveground Biomass Updating

    PubMed Central

    Lu, Xiaoman; Zheng, Guang; Miller, Colton

    2017-01-01

    Monitoring and understanding the spatio-temporal variations of forest aboveground biomass (AGB) is a key basis to quantitatively assess the carbon sequestration capacity of a forest ecosystem. To map and update forest AGB in the Greater Khingan Mountains (GKM) of China, this work proposes a physically based approach. Based on the baseline forest AGB from Landsat Enhanced Thematic Mapper Plus (ETM+) images in 2008, we dynamically updated the annual forest AGB from 2009 to 2012 by adding the annual AGB increment (ABI) obtained from the simulated daily and annual net primary productivity (NPP) using the Boreal Ecosystem Productivity Simulator (BEPS) model. The 2012 result was validated by both field- and aerial laser scanning (ALS)-based AGBs. The predicted forest AGB for 2012 estimated from the process-based model can explain 31% (n = 35, p < 0.05, RMSE = 2.20 kg/m²) and 85% (n = 100, p < 0.01, RMSE = 1.71 kg/m²) of variation in field- and ALS-based forest AGBs, respectively. However, due to the saturation of optical remote sensing-based spectral signals and contribution of understory vegetation, the BEPS-based AGB tended to underestimate/overestimate the AGB for dense/sparse forests. Generally, our results showed that the remotely sensed forest AGB estimates could serve as the initial carbon pool to parameterize the process-based model for NPP simulation, and the combination of the baseline forest AGB and BEPS model could effectively update the spatiotemporal distribution of forest AGB. PMID:28885556

  18. Forest Canopy Processes in a Regional Chemical Transport Model

    NASA Astrophysics Data System (ADS)

    Makar, Paul; Staebler, Ralf; Akingunola, Ayodeji; Zhang, Junhua; McLinden, Chris; Kharol, Shailesh; Moran, Michael; Robichaud, Alain; Zhang, Leiming; Stroud, Craig; Pabla, Balbir; Cheung, Philip

    2016-04-01

    Forest canopies have typically been absent or highly parameterized in regional chemical transport models. Some forest-related processes are often considered - for example, biogenic emissions from the forests are included as a flux lower boundary condition on vertical diffusion, as is deposition to vegetation. However, real forest canopies comprise a much more complicated set of processes, at scales below the "transport model-resolved scale" of vertical levels usually employed in regional transport models. Advective and diffusive transport within the forest canopy typically scale with the height of the canopy, and the former process tends to dominate over the latter. Emissions of biogenic hydrocarbons arise from the foliage, which may be located tens of metres above the surface, while emissions of biogenic nitric oxide from decaying plant matter are located at the surface - in contrast to the surface flux boundary condition usually employed in chemical transport models. Deposition, similarly, is usually parameterized as a flux boundary condition, but may be differentiated between fluxes to vegetation and fluxes to the surface when the canopy scale is considered. The chemical environment also changes within forest canopies: shading, temperature, and relative humidity changes with height within the canopy may influence chemical reaction rates. These processes have been observed in a host of measurement studies, and have been simulated using site-specific one-dimensional forest canopy models. Their influence on regional scale chemistry has been unknown, until now. In this work, we describe the results of the first attempt to include complex canopy processes within a regional chemical transport model (GEM-MACH). The original model core was subdivided into "canopy" and "non-canopy" subdomains. 
In the former, three additional near-surface layers based on spatially and seasonally varying satellite-derived canopy height and leaf area index were added to the original model

  19. Web-based Interactive Landform Simulation Model - Grand Canyon

    NASA Astrophysics Data System (ADS)

    Luo, W.; Pelletier, J. D.; Duffin, K.; Ormand, C. J.; Hung, W.; Iverson, E. A.; Shernoff, D.; Zhai, X.; Chowdary, A.

    2013-12-01

    Earth science educators need interactive tools to engage and enable students to better understand how Earth systems work over geologic time scales. The evolution of landforms is ripe for interactive, inquiry-based learning exercises because landforms exist all around us. The Web-based Interactive Landform Simulation Model - Grand Canyon (WILSIM-GC, http://serc.carleton.edu/landform/) is a continuation and upgrade of the simple cellular automata (CA) rule-based model (WILSIM-CA, http://www.niu.edu/landform/) that can be accessed from anywhere with an Internet connection. Major improvements in WILSIM-GC include adopting a physically based model and the latest Java technology. The physically based model is incorporated to illustrate the fluvial processes involved in land-sculpting pertaining to the development and evolution of one of the most famous landforms on Earth: the Grand Canyon. It is hoped that this focus on a famous and specific landscape will attract greater student interest and provide opportunities for students to learn not only how different processes interact to form the landform we observe today, but also how models and data are used together to enhance our understanding of the processes involved. The latest developments in Java technology (such as Java OpenGL for access to ubiquitous fast graphics hardware, Trusted Applet for file input and output, and multithreaded ability to take advantage of modern multi-core CPUs) are incorporated into building WILSIM-GC, and active, standards-aligned curriculum materials guided by educational psychology theory on science learning will be developed to accompany the model. This project is funded by the NSF TUES program.

  20. Modeling texture kinetics during thermal processing of potato products.

    PubMed

    Moyano, P C; Troncoso, E; Pedreschi, F

    2007-03-01

    A kinetic model based on 2 irreversible serial chemical reactions has been proposed to fit experimental data of texture changes during thermal processing of potato products. The model links dimensionless maximum force F*(MAX) with processing time. Experimental texture changes were obtained during frying of French fries and potato chips at different temperatures, while literature data for blanching/cooking of potato cubes have been considered. A satisfactory agreement between experimental and predicted values was observed, with root mean square values (RMSs) in the range of 4.7% to 16.4% for French fries and 16.7% to 29.3% for potato chips. In the case of blanching/cooking, the proposed model gave RMSs in the range of 1.2% to 17.6%, much better than the 6.2% to 44.0% obtained with the traditional 1st-order kinetics. The model likewise predicts the transition from softening to hardening of the tissue during frying.
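    The two irreversible serial reactions (A → B → C, first order with rates k1 and k2) admit a closed-form solution, which a sketch like the following can turn into a texture curve. The weights mapping species fractions to F*(MAX) are illustrative assumptions, not the paper's fitted values; they are chosen so the intermediate (softened) state has a low weight and the final (hardened) state a higher one, reproducing softening followed by hardening.

```python
import math

def consecutive_first_order(t, k1, k2, a0=1.0):
    """Species fractions for two irreversible serial reactions A -> B -> C."""
    a = a0 * math.exp(-k1 * t)
    if abs(k1 - k2) > 1e-12:
        b = a0 * k1 / (k2 - k1) * (math.exp(-k1 * t) - math.exp(-k2 * t))
    else:  # degenerate case k1 == k2
        b = a0 * k1 * t * math.exp(-k1 * t)
    c = a0 - a - b  # mass balance
    return a, b, c

def f_max_star(t, k1, k2, f_a=1.0, f_b=0.2, f_c=0.8):
    """Dimensionless maximum force as a weighted sum of species fractions.

    f_a, f_b, f_c are hypothetical weights: the soft intermediate B pulls
    the force down, the hardened product C brings it partway back up.
    """
    a, b, c = consecutive_first_order(t, k1, k2)
    return f_a * a + f_b * b + f_c * c
```

    With k1 = 1.0 and k2 = 0.5, F* starts at 1, dips while the soft intermediate accumulates, then recovers toward the hardened asymptote, which is the qualitative behavior the serial scheme is meant to capture.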

  1. Replacing climatological potential evapotranspiration estimates with dynamic satellite-based observations in operational hydrologic prediction models

    NASA Astrophysics Data System (ADS)

    Franz, K. J.; Bowman, A. L.; Hogue, T. S.; Kim, J.; Spies, R.

    2011-12-01

    In the face of a changing climate, growing populations, and increased human habitation in hydrologically risky locations, both short- and long-range planners increasingly require robust and reliable streamflow forecast information. Current operational forecasting utilizes watershed-scale, conceptual models driven by ground-based (commonly point-scale) observations of precipitation and temperature and climatological potential evapotranspiration (PET) estimates. The PET values are derived from historic pan evaporation observations and remain static from year-to-year. The need for regional dynamic PET values is vital for improved operational forecasting. With the advent of satellite remote sensing and the adoption of a more flexible operational forecast system by the National Weather Service, incorporation of advanced data products is now more feasible than in years past. In this study, we will test a previously developed satellite-derived PET product (UCLA MODIS-PET) in the National Weather Service forecast models and compare the model results to current methods. The UCLA MODIS-PET method is based on the Priestley-Taylor formulation, is driven with MODIS satellite products, and produces a daily, 250m PET estimate. The focus area is eight headwater basins in the upper Midwest U.S. There is a need to develop improved forecasting methods for this region that are able to account for climatic and landscape changes more readily and effectively than current methods. This region is highly flood prone yet sensitive to prolonged dry periods in late summer and early fall, and is characterized by a highly managed landscape, which has drastically altered the natural hydrologic cycle. Our goal is to improve model simulations, and thereby, the initial conditions prior to the start of a forecast through the use of PET values that better reflect actual watershed conditions. The forecast models are being tested in both distributed and lumped mode.
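    The Priestley-Taylor formulation that underlies the UCLA MODIS-PET product has the well-known core form PET = α · Δ/(Δ+γ) · (Rn − G)/λ. A minimal sketch of just that core, with standard constants (α = 1.26, γ ≈ 0.066 kPa/°C, λ ≈ 2.45 MJ/kg), is shown below; the actual product's satellite-driven inputs and 250 m gridding are not represented here.

```python
import math

def priestley_taylor_pet(t_air_c, rn, g=0.0, alpha=1.26):
    """Daily potential evapotranspiration (mm/day), Priestley-Taylor form.

    t_air_c : near-surface air temperature (deg C)
    rn, g   : net radiation and soil heat flux (MJ m-2 day-1)
    alpha   : Priestley-Taylor coefficient (1.26 is the classic value)
    """
    # Saturation vapour pressure (kPa) and slope of its curve (kPa/degC)
    es = 0.6108 * math.exp(17.27 * t_air_c / (t_air_c + 237.3))
    delta = 4098.0 * es / (t_air_c + 237.3) ** 2
    gamma = 0.066   # psychrometric constant, kPa/degC (approximate)
    lam = 2.45      # latent heat of vaporization, MJ/kg
    return alpha * (delta / (delta + gamma)) * (rn - g) / lam

# Usage: a warm, sunny day; result is on the order of 5-6 mm/day
pet_mm_day = priestley_taylor_pet(25.0, rn=15.0)
```

    Because Δ grows with temperature, the same radiation input yields a larger PET on warmer days, which is exactly the dynamic responsiveness the static climatological PET tables lack.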

  2. Social Models: Blueprints or Processes?

    ERIC Educational Resources Information Center

    Little, Graham R.

    1981-01-01

    Discusses the nature and implications of two different models for societal planning: (1) the problem-solving process approach based on Karl Popper; and (2) the goal-setting "blueprint" approach based on Karl Marx. (DC)

  3. A stochastic estimation procedure for intermittently-observed semi-Markov multistate models with back transitions.

    PubMed

    Aralis, Hilary; Brookmeyer, Ron

    2017-01-01

    Multistate models provide an important method for analyzing a wide range of life history processes including disease progression and patient recovery following medical intervention. Panel data consisting of the states occupied by an individual at a series of discrete time points are often used to estimate transition intensities of the underlying continuous-time process. When transition intensities depend on the time elapsed in the current state and back transitions between states are possible, this intermittent observation process presents difficulties in estimation due to intractability of the likelihood function. In this manuscript, we present an iterative stochastic expectation-maximization algorithm that relies on a simulation-based approximation to the likelihood function and implement this algorithm using rejection sampling. In a simulation study, we demonstrate the feasibility and performance of the proposed procedure. We then demonstrate application of the algorithm to a study of dementia, the Nun Study, consisting of intermittently-observed elderly subjects in one of four possible states corresponding to intact cognition, impaired cognition, dementia, and death. We show that the proposed stochastic expectation-maximization algorithm substantially reduces bias in model parameter estimates compared to an alternative approach used in the literature, minimal path estimation. We conclude that in estimating intermittently observed semi-Markov models, the proposed approach is a computationally feasible and accurate estimation procedure that leads to substantial improvements in back transition estimates.
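    The simulation-based approximation described above can be illustrated with a toy version of the rejection-sampling step: simulate complete semi-Markov paths (non-exponential sojourn times, back transitions allowed) and keep only those consistent with the states observed at the panel visit times. The transition structure, Weibull sojourns, and panel values below are invented for illustration; the full stochastic EM algorithm would additionally re-estimate parameters from the accepted paths at each iteration, which this sketch omits.

```python
import random

def simulate_path(trans, sojourn, start, horizon, rng):
    """One semi-Markov path as a list of (jump_time, state), up to `horizon`."""
    t, s = 0.0, start
    path = [(0.0, start)]
    while s in trans:                     # states missing from `trans` are absorbing
        t += sojourn[s](rng)              # state-specific (non-exponential) sojourn
        if t >= horizon:
            break
        probs = trans[s]
        s = rng.choices(list(probs), weights=list(probs.values()))[0]
        path.append((t, s))
    return path

def state_at(path, t):
    """State occupied at time t (last jump at or before t)."""
    current = path[0][1]
    for jump_t, jump_s in path:
        if jump_t <= t:
            current = jump_s
    return current

def rejection_sample(trans, sojourn, panel, n_accept, horizon, rng,
                     max_tries=100000):
    """Accept only paths that agree with the intermittent panel observations."""
    accepted = []
    for _ in range(max_tries):
        p = simulate_path(trans, sojourn, panel[0][1], horizon, rng)
        if all(state_at(p, t) == s for t, s in panel):
            accepted.append(p)
            if len(accepted) == n_accept:
                break
    return accepted

# Hypothetical 3-state example: 0 <-> 1 with a back transition, 2 absorbing
rng = random.Random(1)
trans = {0: {1: 1.0}, 1: {0: 0.6, 2: 0.4}}
sojourn = {0: lambda r: r.weibullvariate(2.0, 1.5),   # Weibull, not exponential
           1: lambda r: r.weibullvariate(1.0, 1.2)}
panel = [(0.0, 0), (1.0, 0), (3.0, 1)]                # states seen at visits
paths = rejection_sample(trans, sojourn, panel, 5, 4.0, rng)
```

    Rejection sampling is simple but can be wasteful when panel-consistent paths are rare, which is one reason the authors' full procedure matters for efficiency in practice.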

  4. Measurement-based reliability/performability models

    NASA Technical Reports Server (NTRS)

    Hsueh, Mei-Chen

    1987-01-01

    Measurement-based models based on real error-data collected on a multiprocessor system are described. Model development from the raw error-data to the estimation of cumulative reward is also described. A workload/reliability model is developed based on low-level error and resource usage data collected on an IBM 3081 system during its normal operation in order to evaluate the resource usage/error/recovery process in a large mainframe system. Thus, both normal and erroneous behavior of the system are modeled. The results provide an understanding of the different types of errors and recovery processes. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A sensitivity analysis is performed to investigate the significance of using a semi-Markov process, as opposed to a Markov process, to model the measured system.

  5. A Java-based fMRI processing pipeline evaluation system for assessment of univariate general linear model and multivariate canonical variate analysis-based pipelines.

    PubMed

    Zhang, Jing; Liang, Lichen; Anderson, Jon R; Gatewood, Lael; Rottenberg, David A; Strother, Stephen C

    2008-01-01

    As functional magnetic resonance imaging (fMRI) becomes widely used, the demands for evaluation of fMRI processing pipelines and validation of fMRI analysis results are increasing rapidly. The current NPAIRS package, an IDL-based fMRI processing pipeline evaluation framework, lacks system interoperability and the ability to evaluate general linear model (GLM)-based pipelines using prediction metrics. Thus, it cannot fully evaluate fMRI analytical software modules such as FSL.FEAT and NPAIRS.GLM. In order to overcome these limitations, a Java-based fMRI processing pipeline evaluation system was developed. It integrated YALE (a machine learning environment) into Fiswidgets (an fMRI software environment) to obtain system interoperability and applied an algorithm to measure GLM prediction accuracy. The results demonstrated that the system can evaluate fMRI processing pipelines with univariate GLM and multivariate canonical variates analysis (CVA)-based models on real fMRI data based on prediction accuracy (classification accuracy) and statistical parametric image (SPI) reproducibility. In addition, a preliminary study was performed where four fMRI processing pipelines with GLM and CVA modules such as FSL.FEAT and NPAIRS.CVA were evaluated with the system. The results indicated that (1) the system can compare different fMRI processing pipelines with heterogeneous models (NPAIRS.GLM, NPAIRS.CVA and FSL.FEAT) and rank their performance by automatic performance scoring, and (2) the rank of pipeline performance is highly dependent on the preprocessing operations. These results suggest that the system will be of value for the comparison, validation, standardization and optimization of functional neuroimaging software packages and fMRI processing pipelines.

  6. Southeast Atmosphere Studies: learning from model-observation syntheses

    NASA Astrophysics Data System (ADS)

    Mao, Jingqiu; Carlton, Annmarie; Cohen, Ronald C.; Brune, William H.; Brown, Steven S.; Wolfe, Glenn M.; Jimenez, Jose L.; Pye, Havala O. T.; Ng, Nga Lee; Xu, Lu; McNeill, V. Faye; Tsigaridis, Kostas; McDonald, Brian C.; Warneke, Carsten; Guenther, Alex; Alvarado, Matthew J.; de Gouw, Joost; Mickley, Loretta J.; Leibensperger, Eric M.; Mathur, Rohit; Nolte, Christopher G.; Portmann, Robert W.; Unger, Nadine; Tosca, Mika; Horowitz, Larry W.

    2018-02-01

    Concentrations of atmospheric trace species in the United States have changed dramatically over the past several decades in response to pollution control strategies, shifts in domestic energy policy and economics, and economic development (and resulting emission changes) elsewhere in the world. Reliable projections of the future atmosphere require models to not only accurately describe current atmospheric concentrations, but to do so by representing chemical, physical and biological processes with conceptual and quantitative fidelity. Only through incorporation of the processes controlling emissions and chemical mechanisms that represent the key transformations among reactive molecules can models reliably project the impacts of future policy, energy and climate scenarios. Efforts to properly identify and implement the fundamental and controlling mechanisms in atmospheric models benefit from intensive observation periods, during which collocated measurements of diverse, speciated chemicals in both the gas and condensed phases are obtained. The Southeast Atmosphere Studies (SAS, including SENEX, SOAS, NOMADSS and SEAC4RS) conducted during the summer of 2013 provided an unprecedented opportunity for the atmospheric modeling community to come together to evaluate, diagnose and improve the representation of fundamental climate and air quality processes in models of varying temporal and spatial scales. This paper is aimed at discussing progress in evaluating, diagnosing and improving air quality and climate modeling using comparisons to SAS observations as a guide to thinking about improvements to mechanisms and parameterizations in models. The effort focused primarily on model representation of fundamental atmospheric processes that are essential to the formation of ozone, secondary organic aerosol (SOA) and other trace species in the troposphere, with the ultimate goal of understanding the radiative impacts of these species in the southeast and elsewhere. Here we

  7. Southeast Atmosphere Studies: learning from model-observation syntheses

    PubMed Central

    Mao, Jingqiu; Carlton, Annmarie; Cohen, Ronald C.; Brune, William H.; Brown, Steven S.; Wolfe, Glenn M.; Jimenez, Jose L.; Pye, Havala O. T.; Ng, Nga Lee; Xu, Lu; McNeill, V. Faye; Tsigaridis, Kostas; McDonald, Brian C.; Warneke, Carsten; Guenther, Alex; Alvarado, Matthew J.; de Gouw, Joost; Mickley, Loretta J.; Leibensperger, Eric M.; Mathur, Rohit; Nolte, Christopher G.; Portmann, Robert W.; Unger, Nadine; Tosca, Mika; Horowitz, Larry W.

    2018-01-01

    Concentrations of atmospheric trace species in the United States have changed dramatically over the past several decades in response to pollution control strategies, shifts in domestic energy policy and economics, and economic development (and resulting emission changes) elsewhere in the world. Reliable projections of the future atmosphere require models to not only accurately describe current atmospheric concentrations, but to do so by representing chemical, physical and biological processes with conceptual and quantitative fidelity. Only through incorporation of the processes controlling emissions and chemical mechanisms that represent the key transformations among reactive molecules can models reliably project the impacts of future policy, energy and climate scenarios. Efforts to properly identify and implement the fundamental and controlling mechanisms in atmospheric models benefit from intensive observation periods, during which collocated measurements of diverse, speciated chemicals in both the gas and condensed phases are obtained. The Southeast Atmosphere Studies (SAS, including SENEX, SOAS, NOMADSS and SEAC4RS) conducted during the summer of 2013 provided an unprecedented opportunity for the atmospheric modeling community to come together to evaluate, diagnose and improve the representation of fundamental climate and air quality processes in models of varying temporal and spatial scales. This paper is aimed at discussing progress in evaluating, diagnosing and improving air quality and climate modeling using comparisons to SAS observations as a guide to thinking about improvements to mechanisms and parameterizations in models. The effort focused primarily on model representation of fundamental atmospheric processes that are essential to the formation of ozone, secondary organic aerosol (SOA) and other trace species in the troposphere, with the ultimate goal of understanding the radiative impacts of these species in the southeast and elsewhere. Here we

  8. Southeast Atmosphere Studies: Learning from Model-Observation Syntheses

    NASA Technical Reports Server (NTRS)

    Mao, Jingqiu; Carlton, Annmarie; Cohen, Ronald C.; Brune, William H.; Brown, Steven S.; Wolfe, Glenn M.; Jimenez, Jose L.; Pye, Havala O. T.; Ng, Nga Lee; Xu, Lu; hide

    2018-01-01

    Concentrations of atmospheric trace species in the United States have changed dramatically over the past several decades in response to pollution control strategies, shifts in domestic energy policy and economics, and economic development (and resulting emission changes) elsewhere in the world. Reliable projections of the future atmosphere require models to not only accurately describe current atmospheric concentrations, but to do so by representing chemical, physical and biological processes with conceptual and quantitative fidelity. Only through incorporation of the processes controlling emissions and chemical mechanisms that represent the key transformations among reactive molecules can models reliably project the impacts of future policy, energy and climate scenarios. Efforts to properly identify and implement the fundamental and controlling mechanisms in atmospheric models benefit from intensive observation periods, during which collocated measurements of diverse, speciated chemicals in both the gas and condensed phases are obtained. The Southeast Atmosphere Studies (SAS, including SENEX, SOAS, NOMADSS and SEAC4RS) conducted during the summer of 2013 provided an unprecedented opportunity for the atmospheric modeling community to come together to evaluate, diagnose and improve the representation of fundamental climate and air quality processes in models of varying temporal and spatial scales. This paper is aimed at discussing progress in evaluating, diagnosing and improving air quality and climate modeling using comparisons to SAS observations as a guide to thinking about improvements to mechanisms and parameterizations in models. The effort focused primarily on model representation of fundamental atmospheric processes that are essential to the formation of ozone, secondary organic aerosol (SOA) and other trace species in the troposphere, with the ultimate goal of understanding the radiative impacts of these species in the southeast and elsewhere. Here we

  9. [New paradigm for soil and water conservation: a method based on watershed process modeling and scenario analysis].

    PubMed

    Zhu, A-Xing; Chen, La-Jiao; Qin, Cheng-Zhi; Wang, Ping; Liu, Jun-Zhi; Li, Run-Kui; Cai, Qiang-Guo

    2012-07-01

    With the increasing severity of soil erosion, soil and water conservation has become an urgent concern for sustainable development. Small-watershed experimental observation is the traditional paradigm for soil and water control. However, the establishment of an experimental watershed usually takes a long time, and has the limitations of poor repeatability and high cost. Moreover, the popularization of results from an experimental watershed to other areas is limited by differences in watershed conditions. Therefore, it is not sufficient to rely completely on this old paradigm for soil and water loss control. Recently, scenario analysis based on watershed modeling has been introduced into watershed management; it can provide information about the effectiveness of different management practices based on quantitative simulation of watershed processes. Because of its merits such as low cost, short period, and high repeatability, scenario analysis shows great potential in aiding the development of watershed management strategy. This paper elaborated a new paradigm using watershed modeling and scenario analysis for soil and water conservation, illustrated this new paradigm through two cases of practical watershed management, and explored the future development of this new soil and water conservation paradigm.

  10. Geometric Model of Induction Heating Process of Iron-Based Sintered Materials

    NASA Astrophysics Data System (ADS)

    Semagina, Yu V.; Egorova, M. A.

    2018-03-01

    The article addresses the problem of constructing multivariable dependences from experimental data. A constructive method for solving it is presented in the form of equations of (n-1)-surface compartments of the extended Euclidean space E+n. The dimension of the space is taken to be equal to the sum of the number of parameters and factors of the model of the system being studied. The basis for building multivariable dependencies is a generalization to n-space of the approach used for surface compartments of 3D space. The surface is designed using the kinematic method, moving one geometric object along a certain trajectory. The proposed approach simplifies the construction of the multifactorial empirical dependencies that describe the process under investigation.

  11. Models of formation and some algorithms of hyperspectral image processing

    NASA Astrophysics Data System (ADS)

    Achmetov, R. N.; Stratilatov, N. R.; Yudakov, A. A.; Vezenov, V. I.; Eremeev, V. V.

    2014-12-01

    Algorithms and information technologies for processing Earth hyperspectral imagery are presented. Several new approaches are discussed. Peculiar properties of processing the hyperspectral imagery, such as a multifold reduction in signal-to-noise ratio, atmospheric distortions, access to spectral characteristics of every image point, and high dimensionality of data, were studied. Different measures of similarity between individual hyperspectral image points and the effect of additive uncorrelated noise on these measures were analyzed. It was shown that these measures are substantially affected by noise, and a new measure free of this disadvantage was proposed. The problem of detecting the observed scene object boundaries, based on comparing the spectral characteristics of image points, is considered. It was shown that contours are processed much better when spectral characteristics are used instead of energy brightness. A statistical approach to the correction of atmospheric distortions, which makes it possible to solve the stated problem based on analysis of a distorted image in contrast to analytical multiparametric models, was proposed. Several algorithms used to integrate spectral zonal images with data from other survey systems, which make it possible to image observed scene objects with a higher quality, are considered. Quality characteristics of hyperspectral data processing were proposed and studied.
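    One standard similarity measure of the kind discussed above is the spectral angle between two pixel spectra; unlike Euclidean distance it is invariant to a purely multiplicative brightness change. The paper's proposed noise-robust measure is not specified here, so the sketch below shows only this textbook example with a made-up reflectance spectrum.

```python
import math

def spectral_angle(x, y):
    """Angle (radians) between two pixel spectra; invariant to brightness scaling."""
    dot = sum(a * b for a, b in zip(x, y))
    norm = math.sqrt(sum(a * a for a in x)) * math.sqrt(sum(b * b for b in y))
    return math.acos(max(-1.0, min(1.0, dot / norm)))

def euclidean(x, y):
    """Plain Euclidean distance, sensitive to brightness scaling."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

spec = [0.12, 0.15, 0.31, 0.40, 0.38]   # hypothetical reflectance spectrum
brighter = [2.0 * v for v in spec]       # same material, doubled illumination
```

    For this pair the spectral angle is essentially zero while the Euclidean distance is large, which illustrates why the choice of similarity measure matters for boundary detection from spectral characteristics.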

  12. Advancing an Information Model for Environmental Observations

    NASA Astrophysics Data System (ADS)

    Horsburgh, J. S.; Aufdenkampe, A. K.; Hooper, R. P.; Lehnert, K. A.; Schreuders, K.; Tarboton, D. G.; Valentine, D. W.; Zaslavsky, I.

    2011-12-01

    Observational data are fundamental to hydrology and water resources, and the way they are organized, described, and shared either enables or inhibits the analyses that can be performed using the data. The CUAHSI Hydrologic Information System (HIS) project is developing cyberinfrastructure to support hydrologic science by enabling better access to hydrologic data. HIS is composed of three major components. HydroServer is a software stack for publishing time series of hydrologic observations on the Internet as well as geospatial data using standards-based web feature, map, and coverage services. HydroCatalog is a centralized facility that catalogs the data contents of individual HydroServers and enables search across them. HydroDesktop is a client application that interacts with both HydroServer and HydroCatalog to discover, download, visualize, and analyze hydrologic observations published on one or more HydroServers. All three components of HIS are founded upon an information model for hydrologic observations at stationary points that specifies the entities, relationships, constraints, rules, and semantics of the observational data and that supports its data services. Within this information model, observations are described with ancillary information (metadata) about the observations to allow them to be unambiguously interpreted and used, and to provide traceable heritage from raw measurements to useable information. Physical implementations of this information model include the Observations Data Model (ODM) for storing hydrologic observations, Water Markup Language (WaterML) for encoding observations for transmittal over the Internet, the HydroCatalog metadata catalog database, and the HydroDesktop data cache database. The CUAHSI HIS and this information model have now been in use for several years, and have been deployed across many different academic institutions as well as across several national agency data repositories. Additionally, components of the HIS
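    The information model described above (observations at stationary points, each carrying site, variable, and provenance metadata) can be sketched with a few record types. The field names and example values below are illustrative, loosely modeled on ODM-style entities, not the actual ODM or WaterML schemas.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Site:
    """Where the observation was made (a stationary point)."""
    code: str
    name: str
    latitude: float
    longitude: float

@dataclass
class Variable:
    """What was observed, with explicit units so values are unambiguous."""
    code: str
    name: str
    units: str

@dataclass
class DataValue:
    """One observation, linked to its site, variable, and processing level."""
    site: Site
    variable: Variable
    timestamp: datetime
    value: float
    qc_level: str = "0"   # traceable heritage: raw -> derived -> quality-controlled

# Hypothetical example record
site = Site("LBR-01", "Little Bear River gauge", 41.72, -111.95)
var = Variable("WT", "Water temperature", "degC")
obs = DataValue(site, var, datetime(2011, 7, 1, 12, 0), 18.4)
```

    Keeping the metadata on every value, rather than implicit in a file name, is what lets a catalog such as HydroCatalog search across servers and lets a client interpret downloaded values unambiguously.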

  13. A process-based emission model for volatile organic compounds from silage sources on farms

    USDA-ARS?s Scientific Manuscript database

    Silage on dairy farms can emit large amounts of volatile organic compounds (VOCs), a precursor in the formation of tropospheric ozone. Because of the challenges associated with direct measurements, process-based modeling is another approach for estimating emissions of air pollutants from sources suc...

  14. Process-Based Governance in Public Administrations Using Activity-Based Costing

    NASA Astrophysics Data System (ADS)

    Becker, Jörg; Bergener, Philipp; Räckers, Michael

    Decision- and policy-makers in public administrations currently lack the relevant information needed for effective governance. In Germany, the introduction of New Public Management and double-entry accounting gives public administrations the opportunity to use cost-centered accounting mechanisms to establish new governance mechanisms. Process modelling can be a useful instrument here, helping decision- and policy-makers in public administrations to structure their activities and capture relevant information. In combination with approaches like Activity-Based Costing, higher management levels can be supported with a sound data basis for effective governance. Therefore, the aim of this article is to combine the public-sector domain-specific process modelling method PICTURE with the concept of activity-based costing to support public administrations in process-based governance.

  15. Climate Process Team "Representing calving and iceberg dynamics in global climate models"

    NASA Astrophysics Data System (ADS)

    Sergienko, O. V.; Adcroft, A.; Amundson, J. M.; Bassis, J. N.; Hallberg, R.; Pollard, D.; Stearns, L. A.; Stern, A. A.

    2016-12-01

    Iceberg calving accounts for approximately 50% of the ice mass loss from the Greenland and Antarctic ice sheets. By changing a glacier's geometry, calving can also significantly perturb the glacier's stress-regime far upstream of the grounding line. This process can enhance discharge of ice across the grounding line. Once calved, icebergs drift into the open ocean where they melt, injecting freshwater to the ocean and affecting the large-scale ocean circulation. The spatial redistribution of the freshwater flux has a strong impact on sea-ice formation and its spatial variability. A Climate Process Team "Representing calving and iceberg dynamics in global climate models" was established in the fall of 2014. The major objectives of the CPT are: (1) develop parameterizations of calving processes that are suitable for continental-scale ice-sheet models that simulate the evolution of the Antarctic and Greenland ice sheets; (2) compile the data sets of the glaciological and oceanographic observations that are necessary to test, validate and constrain the developed parameterizations and models; (3) develop a physically based iceberg component for inclusion in the large-scale ocean circulation model. Several calving parameterizations suitable for various glaciological settings have been developed and implemented in a continental-scale ice sheet model. Simulations of the present-day Antarctic and Greenland ice sheets show that the ice-sheet geometric configurations (thickness and extent) are sensitive to the calving process. In order to guide the development as well as to test calving parameterizations, available observations (of various kinds) have been compiled and organized into a database. Monthly estimates of iceberg distribution around the coast of Greenland have been produced with a goal of constructing iceberg size distribution and probability functions for iceberg occurrence in particular regions. A physically based iceberg model component was used in a GFDL

  16. Automating the Processing of Earth Observation Data

    NASA Technical Reports Server (NTRS)

    Golden, Keith; Pang, Wan-Lin; Nemani, Ramakrishna; Votava, Petr

    2003-01-01

    NASA's vision for Earth science is to build a "sensor web": an adaptive array of heterogeneous satellites and other sensors that will track important events, such as storms, and provide real-time information about the state of the Earth to a wide variety of customers. Achieving this vision will require automation not only in the scheduling of the observations but also in the processing of the resulting data. To address this need, we are developing a planner-based agent to automatically generate and execute data-flow programs to produce the requested data products.

  17. Ground-based Observations and Atmospheric Modelling of Energetic Electron Precipitation Effects on Antarctic Mesospheric Chemistry

    NASA Astrophysics Data System (ADS)

    Newnham, D.; Clilverd, M. A.; Horne, R. B.; Rodger, C. J.; Seppälä, A.; Verronen, P. T.; Andersson, M. E.; Marsh, D. R.; Hendrickx, K.; Megner, L. S.; Kovacs, T.; Feng, W.; Plane, J. M. C.

    2016-12-01

    The effect of energetic electron precipitation (EEP) on the seasonal and diurnal abundances of nitric oxide (NO) and ozone in the Antarctic middle atmosphere during March 2013 to July 2014 is investigated. Geomagnetic storm activity during this period, close to solar maximum, was driven primarily by impulsive coronal mass ejections. Near-continuous ground-based atmospheric measurements have been made by a passive millimetre-wave radiometer deployed at Halley station (75°37'S, 26°14'W, L = 4.6), Antarctica. This location is directly under the region of radiation-belt EEP, at the extremity of magnetospheric substorm-driven EEP, and deep within the polar vortex during Austral winter. Superposed epoch analyses of the ground based data, together with NO observations made by the Solar Occultation For Ice Experiment (SOFIE) onboard the Aeronomy of Ice in the Mesosphere (AIM) satellite, show enhanced mesospheric NO following moderate geomagnetic storms (Dst ≤ -50 nT). Measurements by co-located 30 MHz riometers indicate simultaneous increases in ionisation at 75-90 km directly above Halley when Kp index ≥ 4. Direct NO production by EEP in the upper mesosphere, versus downward transport of NO from the lower thermosphere, is evaluated using a new version of the Whole Atmosphere Community Climate Model incorporating the full Sodankylä Ion Neutral Chemistry Model (WACCM SIC). Model ionization rates are derived from the Polar orbiting Operational Environmental Satellites (POES) second generation Space Environment Monitor (SEM 2) Medium Energy Proton and Electron Detector instrument (MEPED). The model data are compared with observations to quantify the impact of EEP on stratospheric and mesospheric odd nitrogen (NOx), odd hydrogen (HOx), and ozone.

  18. Process-based modeling of temperature and water profiles in the seedling recruitment zone: Part II. Seedling emergence timing

    USDA-ARS?s Scientific Manuscript database

    Predictions of seedling emergence timing for spring wheat are facilitated by process-based modeling of the microsite environment in the shallow seedling recruitment zone. Hourly temperature and water profiles within the recruitment zone for 60 days after planting were simulated from the process-base...

  19. Ground-based microwave radar and optical lidar signatures of volcanic ash plumes: models, observations and retrievals

    NASA Astrophysics Data System (ADS)

    Mereu, Luigi; Marzano, Frank; Mori, Saverio; Montopoli, Mario; Cimini, Domenico; Martucci, Giovanni

    2013-04-01

    The detection and quantitative retrieval of volcanic ash clouds is of significant interest due to their environmental, climatic, and socio-economic effects. Real-time monitoring of such phenomena is crucial, not least for the initialization of dispersion models. Satellite visible-infrared radiometric observations from geostationary platforms are usually exploited for long-range trajectory tracking and for measuring low-level eruptions. Their imagery is available every 15-30 minutes and suffers from relatively poor spatial resolution. Moreover, the field of view of geostationary radiometric measurements may be blocked by water and ice clouds at higher levels, and their overall utility is reduced at night. Ground-based microwave radars may represent an important tool to detect and, to a certain extent, mitigate the hazard from ash clouds. Ground-based weather radar systems can provide data for determining the ash volume, total mass, and height of eruption clouds. Methodological studies have recently investigated the possibility of using ground-based single-polarization and dual-polarization radar systems for the remote sensing of volcanic ash clouds. A microphysical characterization of volcanic ash was carried out in terms of dielectric properties, size distribution, and terminal fall speed, assuming spherically shaped particles. A prototype volcanic ash radar retrieval (VARR) algorithm for single-polarization systems was proposed and applied to S-band and C-band weather radar data. The sensitivity of ground-based radar measurements decreases with distance from the ash cloud, so that beyond about 50 kilometers fine ash may no longer be detected by microwave radars. In this respect, radar observations can be complementary to satellite, lidar, and aircraft observations. Active remote sensing retrieval from the ground, in terms of detection, estimation, and sensitivity, of volcanic ash plumes is not only dependent on the sensor specifications, but also on

  20. A Harris-Todaro Agent-Based Model to Rural-Urban Migration

    NASA Astrophysics Data System (ADS)

    Espíndola, Aquino L.; Silveira, Jaylson J.; Penna, T. J. P.

    2006-09-01

    The Harris-Todaro model of the rural-urban migration process is revisited under an agent-based approach. The migration of workers is interpreted as a process of social learning by imitation, formalized by a computational model. By simulating this model, we observe transitional dynamics with continuous growth of the urban fraction of the overall population toward an equilibrium. Such an equilibrium is characterized by stabilization of the rural-urban expected-wage differential (the generalized Harris-Todaro equilibrium condition), urban concentration, and urban unemployment. These classic results, obtained originally by Harris and Todaro, are emergent properties of our model.
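    The imitation dynamics described above can be illustrated with a toy sketch (not the authors' exact update rules): workers revise their sector toward the one offering the higher current expected wage, and the urban fraction settles where the expected urban wage (wage times employment probability) matches the rural wage. All parameter values below are hypothetical.

    ```python
    def simulate_migration(n=10000, steps=500, w_rural=1.0, w_urban=2.0,
                           urban_jobs=3000, urban0=2000):
        """Toy imitation dynamics: each step, a share of workers switches
        toward the sector with the higher current expected wage."""
        urban = urban0
        for _ in range(steps):
            p_employed = min(1.0, urban_jobs / urban)   # urban employment probability
            expected_urban = w_urban * p_employed
            if expected_urban > w_rural:                # urban looks better: migrate in
                urban = min(n, urban + max(1, (n - urban) // 50))
            elif expected_urban < w_rural:              # rural looks better: return
                urban = max(1, urban - max(1, urban // 50))
        return urban / n, w_urban * min(1.0, urban_jobs / urban)

    # Equilibrium condition: w_urban * jobs/urban == w_rural, i.e. urban* = 6000
    urban_fraction, expected_urban_wage = simulate_migration()
    ```

    The run converges near an urban fraction of 0.6 with the expected urban wage equalized to the rural wage, an illustration of the generalized Harris-Todaro equilibrium condition emerging from individual revisions.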

  1. The evolution of process-based hydrologic models: historical challenges and the collective quest for physical realism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clark, Martyn P.; Bierkens, Marc F. P.; Samaniego, Luis

    The diversity in hydrologic models has historically led to great controversy on the correct approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. Here, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We also illustrate how modeling advances have been made by groups using models of different type and complexity, and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.

  2. The evolution of process-based hydrologic models: historical challenges and the collective quest for physical realism

    DOE PAGES

    Clark, Martyn P.; Bierkens, Marc F. P.; Samaniego, Luis; ...

    2017-07-11

    The diversity in hydrologic models has historically led to great controversy on the correct approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. Here, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We also illustrate how modeling advances have been made by groups using models of different type and complexity, and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.

  3. A new process sensitivity index to identify important system processes under process model and parametric uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Heng; Ye, Ming; Walker, Anthony P.

    Hydrological models are always composed of multiple components that represent processes key to intended model applications. When a process can be simulated by multiple conceptual-mathematical models (process models), model uncertainty in representing the process arises. While global sensitivity analysis methods have been widely used for identifying important processes in hydrologic modeling, the existing methods consider only parametric uncertainty but ignore the model uncertainty for process representation. To address this problem, this study develops a new method to probe multimodel process sensitivity by integrating model averaging methods into the framework of variance-based global sensitivity analysis, given that the model averaging methods quantify both parametric and model uncertainty. A new process sensitivity index is derived as a metric of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and model parameters. For demonstration, the new index is used to evaluate the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is also simulated by two models with different parameterizations of hydraulic conductivity; each process model has its own random parameters. The new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.
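    As a rough illustration of the idea (not the authors' formulation), a variance-based process index can be estimated by Monte Carlo: sample a candidate model for each process together with its random parameters, then compare the variance of the conditional mean output to the total output variance. The toy "recharge" and "geology" models, weights, and parameter ranges below are all invented for illustration, and the index here conditions on the model choice only, a coarse proxy for the full index that also conditions on process parameters.

    ```python
    import random
    import statistics

    rng = random.Random(42)

    # Two candidate models per process (invented toy forms), equal prior weights.
    # "Recharge" converts precipitation P to recharge R; "geology" maps R to head.
    recharge_models = [
        lambda P, a: a * P,                  # linear fraction of precipitation
        lambda P, a: a * max(P - 2.0, 0.0),  # excess over a threshold
    ]
    geology_models = [
        lambda R, k: R / k,                  # homogeneous conductivity
        lambda R, k: R / (2.0 * k) + 1.0,    # alternative parameterization
    ]

    def sample_output(rm, gm):
        P = 10.0                    # fixed forcing
        a = rng.uniform(0.2, 0.4)   # random recharge parameter
        k = rng.uniform(0.5, 1.5)   # random conductivity parameter
        return geology_models[gm](recharge_models[rm](P, a), k)

    N = 20000
    draws = [(rng.randrange(2), rng.randrange(2)) for _ in range(N)]
    Y = [sample_output(rm, gm) for rm, gm in draws]
    var_y = statistics.pvariance(Y)

    def process_index(axis):
        """Var of E[Y | model choice for one process] over Var(Y)."""
        groups = {}
        for choice, y in zip(draws, Y):
            groups.setdefault(choice[axis], []).append(y)
        weights = [len(g) / N for g in groups.values()]
        means = [statistics.fmean(g) for g in groups.values()]
        mu = sum(w * m for w, m in zip(weights, means))
        return sum(w * (m - mu) ** 2 for w, m in zip(weights, means)) / var_y

    recharge_index = process_index(0)   # importance of the recharge model choice
    geology_index = process_index(1)    # importance of the geology model choice
    ```

    Each index lies in [0, 1]; the residual variance not attributed to either model choice comes from the within-model parameter uncertainty.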

  4. Latitudinally dependent Trimpi effects: Modeling and observations

    NASA Astrophysics Data System (ADS)

    Clilverd, Mark A.; Yeo, Richard F.; Nunn, David; Smith, Andy J.

    1999-09-01

    Modeling studies show that the exclusion of the propagating VLF wave from the ionospheric region results in the decline of Trimpi magnitude with patch altitude. In large models such as Long Wave Propagation Capability (LWPC) this exclusion does not occur inherently in the code, and high-altitude precipitation modeling can produce results that are not consistent with observations from ground-based experiments. The introduction to LWPC of realistic wave attenuation of the height gain functions in the ionosphere solves these computational problems. This work presents the first modeling of (Born) Trimpi scattering at long ranges, taking into account global inhomogeneities and continuous mode conversion along all paths, by employing the full conductivity perturbation matrix. The application of the more realistic height gain functions allows the prediction of decreasing Trimpi activity with increasing latitude, primarily through the mechanism of excluding the VLF wave from regions of high conductivity and scattering efficiency. Ground-based observations from Faraday and Rothera, Antarctica, in September and October 1995 of Trimpi occurring on the NPM (Hawaii) path provide data that are consistent with these predictions. Latitudinal variations in Trimpi occurrence near L=2.5, with a significant decrease of about 70% occurrence between L=2.4 and L=2.8, have been observed at higher L shell resolution than in previous studies (i.e., 2

  5. Evaluation of CNN as anthropomorphic model observer

    NASA Astrophysics Data System (ADS)

    Massanes, Francesc; Brankov, Jovan G.

    2017-03-01

    Model observers (MO) are widely used in medical imaging to act as surrogates of human observers in task-based image quality evaluation, frequently towards optimization of reconstruction algorithms. In this paper, we explore the use of convolutional neural networks (CNNs) as MOs. We compare the CNN MO to alternative MOs currently being proposed and used, such as the relevance vector machine based MO and the channelized Hotelling observer (CHO). As the success of CNNs, and of other deep learning approaches, is rooted in the availability of large data sets, which is rarely the case in task-performance evaluation of medical imaging systems, we evaluate CNN performance on both large and small training data sets.

  6. Mammalian cell culture process for monoclonal antibody production: nonlinear modelling and parameter estimation.

    PubMed

    Selişteanu, Dan; Șendrescu, Dorin; Georgeanu, Vlad; Roman, Monica

    2015-01-01

    Monoclonal antibodies (mAbs) are at present among the fastest-growing products of the pharmaceutical industry, with widespread applications in biochemistry, biology, and medicine. The operation of mAb production processes is predominantly based on empirical knowledge, with improvements achieved through trial-and-error experiments and precedent practices. The nonlinearity of these processes and the absence of suitable instrumentation require an enhanced modelling effort and modern kinetic parameter estimation strategies. The present work is dedicated to nonlinear dynamic modelling and parameter estimation for a mammalian cell culture process used for mAb production. Using a dynamical model of this class of processes, an optimization-based technique for estimating the kinetic parameters of the mammalian cell culture process model is developed. The estimation is achieved by minimizing an error function with a particle swarm optimization (PSO) algorithm. The proposed estimation approach is analyzed here using a particular model of mammalian cell culture as a case study, but it is generic for this class of bioprocesses. The case study shows that the proposed parameter estimation technique provides a more accurate simulation of the experimentally observed process behaviour than reported in previous studies.
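    A minimal sketch of this kind of PSO-based estimation, using an invented Monod-type kinetic law and synthetic "measurements" in place of the paper's actual cell-culture model and data:

    ```python
    import random

    def pso_minimize(error_fn, bounds, n_particles=30, iters=200,
                     w=0.7, c1=1.5, c2=1.5, seed=0):
        """Minimize error_fn over a box via a basic particle swarm."""
        rng = random.Random(seed)
        dim = len(bounds)
        pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
        vel = [[0.0] * dim for _ in range(n_particles)]
        pbest = [p[:] for p in pos]                  # each particle's best position
        pbest_val = [error_fn(p) for p in pos]
        g = min(range(n_particles), key=lambda i: pbest_val[i])
        gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm's best position
        for _ in range(iters):
            for i in range(n_particles):
                for d in range(dim):
                    r1, r2 = rng.random(), rng.random()
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * r1 * (pbest[i][d] - pos[i][d])
                                 + c2 * r2 * (gbest[d] - pos[i][d]))
                    pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                    bounds[d][1])
                val = error_fn(pos[i])
                if val < pbest_val[i]:
                    pbest[i], pbest_val[i] = pos[i][:], val
                    if val < gbest_val:
                        gbest, gbest_val = pos[i][:], val
        return gbest, gbest_val

    # Invented Monod-type kinetics standing in for the cell-culture model:
    # specific growth rate mu(S) = mu_max * S / (Ks + S)
    def simulate(params, substrates):
        mu_max, Ks = params
        return [mu_max * s / (Ks + s) for s in substrates]

    S = [0.5, 1.0, 2.0, 4.0, 8.0, 16.0]
    observed = simulate((0.8, 2.0), S)   # synthetic data from known parameters
    error = lambda p: sum((m - o) ** 2 for m, o in zip(simulate(p, S), observed))
    best, best_err = pso_minimize(error, [(0.1, 2.0), (0.1, 10.0)])
    ```

    With a smooth two-parameter objective like this, the swarm recovers (mu_max, Ks) close to the values used to generate the synthetic data; the real estimation problem is higher-dimensional and uses measured concentrations.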

  7. Mammalian Cell Culture Process for Monoclonal Antibody Production: Nonlinear Modelling and Parameter Estimation

    PubMed Central

    Selişteanu, Dan; Șendrescu, Dorin; Georgeanu, Vlad

    2015-01-01

    Monoclonal antibodies (mAbs) are at present among the fastest-growing products of the pharmaceutical industry, with widespread applications in biochemistry, biology, and medicine. The operation of mAb production processes is predominantly based on empirical knowledge, with improvements achieved through trial-and-error experiments and precedent practices. The nonlinearity of these processes and the absence of suitable instrumentation require an enhanced modelling effort and modern kinetic parameter estimation strategies. The present work is dedicated to nonlinear dynamic modelling and parameter estimation for a mammalian cell culture process used for mAb production. Using a dynamical model of this class of processes, an optimization-based technique for estimating the kinetic parameters of the mammalian cell culture process model is developed. The estimation is achieved by minimizing an error function with a particle swarm optimization (PSO) algorithm. The proposed estimation approach is analyzed here using a particular model of mammalian cell culture as a case study, but it is generic for this class of bioprocesses. The case study shows that the proposed parameter estimation technique provides a more accurate simulation of the experimentally observed process behaviour than reported in previous studies. PMID:25685797

  8. Model Based Predictive Control of Multivariable Hammerstein Processes with Fuzzy Logic Hypercube Interpolated Models

    PubMed Central

    Coelho, Antonio Augusto Rodrigues

    2016-01-01

    This paper introduces the Fuzzy Logic Hypercube Interpolator (FLHI) and demonstrates applications in control of multiple-input single-output (MISO) and multiple-input multiple-output (MIMO) processes with Hammerstein nonlinearities. FLHI consists of a Takagi-Sugeno fuzzy inference system where membership functions act as kernel functions of an interpolator. The conjunction of membership functions in a unitary hypercube space enables multivariable interpolation in N dimensions. Membership functions act as interpolation kernels, so the choice of membership functions determines the interpolation characteristics, allowing FLHI to behave as a nearest-neighbor, linear, cubic, spline or Lanczos interpolator, to name a few. The proposed interpolator is presented as a solution to the modeling problem of static nonlinearities since it is capable of modeling both a function and its inverse function. Three study cases from the literature are presented: a single-input single-output (SISO) system, a MISO system, and a MIMO system. Good results are obtained regarding performance metrics such as set-point tracking, control variation and robustness. Results demonstrate applicability of the proposed method in modeling Hammerstein nonlinearities and their inverse functions for implementation of an output compensator with Model Based Predictive Control (MBPC), in particular Dynamic Matrix Control (DMC). PMID:27657723
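    With triangular (hat) membership functions, the kernel-interpolation idea reduces to multilinear interpolation over the hypercube; a minimal 2-D sketch of the mechanism (the grid and sampled nonlinearity below are hypothetical, not from the paper):

    ```python
    def triangular_membership(x, grid):
        """Hat memberships over a sorted 1-D grid; weights sum to 1."""
        w = [0.0] * len(grid)
        if x <= grid[0]:
            w[0] = 1.0
            return w
        if x >= grid[-1]:
            w[-1] = 1.0
            return w
        for i in range(len(grid) - 1):
            if grid[i] <= x <= grid[i + 1]:
                t = (x - grid[i]) / (grid[i + 1] - grid[i])
                w[i], w[i + 1] = 1.0 - t, t
                break
        return w

    def flhi_interpolate(x, y, xg, yg, values):
        """2-D interpolation: conjunction (product) of per-axis memberships."""
        wx, wy = triangular_membership(x, xg), triangular_membership(y, yg)
        return sum(wx[i] * wy[j] * values[i][j]
                   for i in range(len(xg)) for j in range(len(yg)))

    # Hypothetical static nonlinearity f(x, y) = x * y sampled on a coarse grid
    xg = yg = [0.0, 0.5, 1.0]
    vals = [[xi * yj for yj in yg] for xi in xg]
    result = flhi_interpolate(0.25, 0.75, xg, yg, vals)  # bilinear in this case
    ```

    Swapping the hat kernels for cubic, spline, or Lanczos kernels changes the interpolation characteristics without altering the conjunction structure, which is the point of the FLHI formulation.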

  9. A process-based agricultural model for the irrigated agriculture sector in Alberta, Canada

    NASA Astrophysics Data System (ADS)

    Ammar, M. E.; Davies, E. G.

    2015-12-01

    Connections between land and water, irrigation, agricultural productivity and profitability, policy alternatives, and climate change and variability are complex, poorly understood, and unpredictable. Policy assessment for agriculture presents a large potential for development of broad-based simulation models that can aid assessment and quantification of policy alternatives over longer temporal scales. The Canadian irrigated agriculture sector is concentrated in Alberta, where it represents two thirds of the irrigated land-base in Canada and is the largest consumer of surface water. Despite interest in irrigation expansion, its potential in Alberta is uncertain given a constrained water supply, significant social and economic development and increasing demands for both land and water, and climate change. This paper therefore introduces a system dynamics model as a decision support tool to provide insights into irrigation expansion in Alberta, and into the trade-offs and risks associated with that expansion. It is intended to be used by a wide variety of users including researchers, policy analysts and planners, and irrigation managers. A process-based cropping system approach is at the core of the model and uses a water-driven crop growth mechanism described by AquaCrop. The tool goes beyond a representation of crop phenology and cropping systems by permitting assessment and quantification of the broader, long-term consequences of agricultural policies for Alberta's irrigation sector. It also encourages collaboration and provides a degree of transparency that gives confidence in simulation results. The paper focuses on the agricultural component of the systems model, describing the processes involved (soil water and nutrient balance; crop growth; and water, temperature, salinity, and nutrient stresses) and how other disciplines can be integrated to account for the effects of interactions and feedbacks in the whole system. In later stages, other components such as

  10. Evaluation of Gravitational Field Models Based on the Laser Range Observation of Low Earth Orbit Satellites

    NASA Astrophysics Data System (ADS)

    Hong-bo, Wang; Chang-yin, Zhao; Wei, Zhang; Jin-wei, Zhan; Sheng-xian, Yu

    2016-07-01

    The Earth gravitational field model is one of the most important dynamic models in satellite orbit computation. Several space gravity missions have achieved great success in recent years, prompting the publication of several gravitational field models. In this paper, two classical (JGM3, EGM96) and four recent (EIGEN-CHAMP05S, GGM03S, GOCE02S, EGM2008) models are evaluated by employing them in precision orbit determination (POD) and prediction. These calculations are performed using the laser ranging observations of four Low Earth Orbit (LEO) satellites: CHAMP, GFZ-1, GRACE-A, and SWARM-A. The residual error of the observations in POD is adopted to describe the accuracy of the six gravitational field models. The main results are as follows. (1) For the POD of LEOs, the accuracies of the 4 recent models are at the same level, and better than those of the 2 classical models. (2) Taking JGM3 as reference, the EGM96 model's accuracy is better in most situations, and the accuracies of the 4 recent models are improved by 12%-47% in POD and 63% in prediction, respectively. We also confirm that a model's accuracy in POD improves with increasing degree and order up to 70; beyond 70 the accuracy remains constant, implying that truncating the model's degree and order to 70 is sufficient to meet the requirements of LEO computation at centimeter precision.
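    The evaluation metric described, the residual error of observations in POD, amounts to an RMS comparison between models; a small sketch (the residual values below are placeholders, not the paper's results):

    ```python
    import math

    def residual_rms(observed, computed):
        """RMS of laser-range residuals (observed minus model-computed ranges)."""
        return math.sqrt(sum((o - c) ** 2 for o, c in zip(observed, computed))
                         / len(observed))

    def improvement_percent(rms_reference, rms_candidate):
        """Accuracy gain of a candidate gravity model relative to a reference."""
        return 100.0 * (rms_reference - rms_candidate) / rms_reference

    # Placeholder residuals (metres) for a reference model and a newer model
    ref_rms = residual_rms([0.00, 0.00, 0.00], [0.08, -0.06, 0.10])
    new_rms = residual_rms([0.00, 0.00, 0.00], [0.04, -0.03, 0.05])
    gain = improvement_percent(ref_rms, new_rms)
    ```

    Repeating this per model and per satellite arc is how a 12%-47% style improvement figure, taking one model as reference, can be tabulated.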

  11. Rules based process window OPC

    NASA Astrophysics Data System (ADS)

    O'Brien, Sean; Soper, Robert; Best, Shane; Mason, Mark

    2008-03-01

    As a preliminary step towards Model-Based Process Window OPC we have analyzed the impact of correcting post-OPC layouts using rules-based methods. Image processing on the Brion Tachyon was used to identify sites where the OPC model/recipe failed to generate an acceptable solution. A set of rules for 65nm active and poly were generated by classifying these failure sites. The rules were based upon segment runlengths, figure spaces, and adjacent figure widths. 2.1 million sites for active were corrected in a small chip (comparing the pre and post rules-based operations), and 59 million were found at poly. Tachyon analysis of the final reticle layout found weak-margin sites distinct from those sites repaired by rules-based corrections. For the active layer more than 75% of the sites corrected by rules would have printed without a defect, indicating that most rules-based cleanups degrade the lithographic pattern. Some sites were missed by the rules-based cleanups due to either bugs in the DRC software or gaps in the rules table. In the end, dramatic changes to the reticle prevented catastrophic lithography errors, but this method is far too blunt. A more subtle model-based procedure is needed, changing only those sites which have unsatisfactory lithographic margin.

  12. Iohexol degradation in wastewater and urine by UV-based Advanced Oxidation Processes (AOPs): Process modeling and by-products identification.

    PubMed

    Giannakis, Stefanos; Jovic, Milica; Gasilova, Natalia; Pastor Gelabert, Miquel; Schindelholz, Simon; Furbringer, Jean-Marie; Girault, Hubert; Pulgarin, César

    2017-06-15

    In this work, an Iodinated Contrast Medium (ICM), Iohexol, was subjected to treatment by 3 Advanced Oxidation Processes (AOPs): UV, UV/H2O2, and UV/H2O2/Fe2+. Water, wastewater, and urine were spiked with Iohexol in order to investigate the treatment efficiency of the AOPs. A tri-level approach was deployed to assess the efficacy of the UV-based AOPs. The treatment was heavily influenced by the UV transmittance and the organics content of the matrix: dilution and acidification improved the degradation, whereas increasing iron/H2O2 helped only moderately. Furthermore, optimization of the treatment conditions and modeling of the degradation were performed with step-wise constructed quadratic or product models, and the optimal operational regions were determined through desirability functions. Finally, global chemical parameters (COD, TOC, and UV-Vis absorbance) were followed in parallel with specific analyses to elucidate the degradation process of Iohexol by UV-based AOPs. Through HPLC/MS analysis, the degradation pathway and the effects of the operational parameters were monitored, attributing the respective modifications to the pathways. The addition of iron in the UV/H2O2 process introduced additional pathways beneficial for the removal of both Iohexol and organics from the matrix. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Impact of Asian Aerosols on Precipitation Over California: An Observational and Model Based Approach

    NASA Technical Reports Server (NTRS)

    Naeger, Aaron R.; Molthan, Andrew L.; Zavodsky, Bradley T.; Creamean, Jessie M.

    2015-01-01

    Dust and pollution emissions from Asia are often transported across the Pacific Ocean to the western United States. Therefore, it is essential to fully understand the impact of these aerosols on clouds and precipitation forming over the eastern Pacific and western United States, especially during atmospheric river events that account for up to half of California's annual precipitation and can lead to widespread flooding. In order for numerical modeling simulations to accurately represent the present and future regional climate of the western United States, we must account for the aerosol-cloud-precipitation interactions associated with Asian dust and pollution aerosols. Therefore, we have constructed a detailed study utilizing multi-sensor satellite observations, NOAA-led field campaign measurements, and targeted numerical modeling studies where Asian aerosols interacted with cloud and precipitation processes over the western United States. In particular, we utilize aerosol optical depth retrievals from the NASA Moderate Resolution Imaging Spectroradiometer (MODIS), NOAA Geostationary Operational Environmental Satellite (GOES-11), and Japan Meteorological Agency (JMA) Multi-functional Transport Satellite (MTSAT) to effectively detect and monitor the trans-Pacific transport of Asian dust and pollution. The aerosol optical depth (AOD) retrievals are assimilated into the Weather Research and Forecasting model coupled with Chemistry (WRF-Chem) to provide the model with an accurate representation of the aerosol spatial distribution across the Pacific. We conduct WRF-Chem model simulations of several cold-season atmospheric river events that interacted with Asian aerosols and brought significant precipitation over California during February-March 2011 when the NOAA CalWater field campaign was ongoing. 
The CalWater field campaign consisted of aircraft and surface measurements of aerosol and precipitation processes that help extensively validate our WRF

  14. Lagrangian Observations and Modeling of Marine Larvae

    NASA Astrophysics Data System (ADS)

    Paris, Claire B.; Irisson, Jean-Olivier

    2017-04-01

    Just within the past two decades, studies on the early-life history stages of marine organisms have led to new paradigms in population dynamics. Unlike passive plant seeds that are transported by the wind or by animals, marine larvae have motor and sensory capabilities. As a result, marine larvae have a tremendous capacity to actively influence their dispersal. This is continuously revealed as we develop new techniques to observe larvae in their natural environment and begin to understand their ability to detect cues throughout ontogeny, process the information, and use it to ride ocean currents and navigate their way back home, or to a place like home. We present innovative in situ and numerical modeling approaches developed to understand the underlying mechanisms of larval transport in the ocean. We describe a novel concept of a Lagrangian platform, the Drifting In Situ Chamber (DISC), designed to observe and quantify complex larval behaviors and their interactions with the pelagic environment. We give a brief history of larval ecology research with the DISC, showing that swimming is directional in most species, guided by cues as diverse as the position of the sun or the underwater soundscape, and even that (unlike humans!) larvae orient better and swim faster when moving as a group. The observed Lagrangian behaviors of individual larvae are directly implemented in the Connectivity Modeling System (CMS), an open source Lagrangian tracking application. Simulations help demonstrate the impact that larval behavior has compared to passive Lagrangian trajectories. These methodologies are already the basis of exciting findings and are promising tools for documenting and simulating the behavior of other small pelagic organisms, forecasting their migration in a changing ocean.

  15. A hybrid double-observer sightability model for aerial surveys

    USGS Publications Warehouse

    Griffin, Paul C.; Lubow, Bruce C.; Jenkins, Kurt J.; Vales, David J.; Moeller, Barbara J.; Reid, Mason; Happe, Patricia J.; Mccorquodale, Scott M.; Tirhi, Michelle J.; Schaberi, Jim P.; Beirne, Katherine

    2013-01-01

    Raw counts from aerial surveys make no correction for undetected animals and provide no estimate of precision with which to judge the utility of the counts. Sightability modeling and double-observer (DO) modeling are 2 commonly used approaches to account for detection bias and to estimate precision in aerial surveys. We developed a hybrid DO sightability model (model MH) that uses the strength of each approach to overcome the weakness in the other, for aerial surveys of elk (Cervus elaphus). The hybrid approach uses detection patterns of 2 independent observer pairs in a helicopter and telemetry-based detections of collared elk groups. Candidate MH models reflected hypotheses about effects of recorded covariates and unmodeled heterogeneity on the separate front-seat observer pair and back-seat observer pair detection probabilities. Group size and concealing vegetation cover strongly influenced detection probabilities. The pilot's previous experience participating in aerial surveys influenced detection by the front pair of observers if the elk group was on the pilot's side of the helicopter flight path. In 9 surveys in Mount Rainier National Park, the raw number of elk counted was approximately 80–93% of the abundance estimated by model MH. Uncorrected ratios of bulls per 100 cows generally were low compared to estimates adjusted for detection bias, but ratios of calves per 100 cows were comparable whether based on raw survey counts or adjusted estimates. The hybrid method was an improvement over commonly used alternatives, with improved precision compared to sightability modeling and reduced bias compared to DO modeling.
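    For the independent double-observer component, detection probabilities can be estimated Lincoln-Petersen style from tallies of groups seen by each observer pair; a minimal sketch with invented tallies (the published model MH additionally uses covariates and telemetry-based detections):

    ```python
    def double_observer_estimates(seen1_only, seen2_only, seen_both):
        """Lincoln-Petersen-style estimates for two independent observers."""
        p1 = seen_both / (seen_both + seen2_only)  # P(observer 1 detects | observer 2 did)
        p2 = seen_both / (seen_both + seen1_only)  # P(observer 2 detects | observer 1 did)
        p_pair = 1.0 - (1.0 - p1) * (1.0 - p2)     # P(at least one observer detects)
        total_seen = seen1_only + seen2_only + seen_both
        n_hat = total_seen / p_pair                # count corrected for detection bias
        return p1, p2, p_pair, n_hat

    # Invented tallies: groups seen by the front pair only, back pair only, and both
    p1, p2, p_pair, n_hat = double_observer_estimates(20, 10, 40)
    ```

    Here 70 groups are seen in total, but the corrected estimate is 75, illustrating why raw counts run roughly 80-93% of model-based abundance estimates in the surveys described.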

  16. Post-processing of multi-hydrologic model simulations for improved streamflow projections

    NASA Astrophysics Data System (ADS)

    Khajehei, Sepideh; Ahmadalipour, Ali; Moradkhani, Hamid

    2016-04-01

    Hydrologic model outputs are prone to bias and uncertainty due to knowledge deficiencies in models and data. Uncertainty in hydroclimatic projections arises from uncertainty in the hydrologic model as well as from the epistemic or aleatory uncertainties in GCM parameterization and development. This study is conducted to: 1) evaluate the recently developed multi-variate post-processing method for historical simulations and 2) assess the effect of post-processing on the uncertainty and reliability of future streamflow projections in both high-flow and low-flow conditions. The first objective is performed for the historical period of 1970-1999. Future streamflow projections are generated for 10 statistically downscaled GCMs from two widely used downscaling methods, Bias Corrected Statistically Downscaled (BCSD) and Multivariate Adaptive Constructed Analogs (MACA), over the period 2010-2099 for two representative concentration pathways, RCP4.5 and RCP8.5. Three semi-distributed hydrologic models were employed and calibrated at 1/16 degree latitude-longitude resolution for over 100 points across the Columbia River Basin (CRB) in the Pacific Northwest, USA. Streamflow outputs are post-processed through a Bayesian framework based on copula functions. The post-processing approach relies on a transfer function developed from the bivariate joint distribution between observations and simulations in the historical period. Results show that applying the post-processing technique leads to considerably higher accuracy in historical simulations and also reduces model uncertainty in future streamflow projections.
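    In its simplest empirical form, a transfer function between the simulated and observed distributions acts like quantile mapping: pass the raw simulated value through the simulated CDF and back through the inverse observed CDF. The sketch below is a deliberate simplification of the copula-based Bayesian approach, with invented historical records:

    ```python
    import bisect

    def empirical_cdf(sorted_sample, x):
        """P(X <= x) under the empirical distribution of a sorted sample."""
        return bisect.bisect_right(sorted_sample, x) / len(sorted_sample)

    def empirical_quantile(sorted_sample, q):
        """Inverse empirical CDF of a sorted sample."""
        idx = min(int(q * len(sorted_sample)), len(sorted_sample) - 1)
        return sorted_sample[idx]

    def post_process(sim_value, sim_hist, obs_hist):
        """Map a raw simulation through F_sim, then back through F_obs^-1."""
        sim_sorted, obs_sorted = sorted(sim_hist), sorted(obs_hist)
        return empirical_quantile(obs_sorted, empirical_cdf(sim_sorted, sim_value))

    # Invented historical flows: the model is biased high by a factor of two
    obs_hist = [float(v) for v in range(100, 200)]
    sim_hist = [2.0 * v for v in obs_hist]
    corrected = post_process(300.0, sim_hist, obs_hist)
    ```

    The copula formulation generalizes this by modeling the full joint dependence between observations and simulations rather than only their marginals, which is what allows it to condition the correction on the simulated value.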

  17. Monitoring growth condition of spring maize in Northeast China using a process-based model

    NASA Astrophysics Data System (ADS)

    Wang, Peijuan; Zhou, Yuyu; Huo, Zhiguo; Han, Lijuan; Qiu, Jianxiu; Tan, Yanjing; Liu, Dan

    2018-04-01

    Early and accurate assessment of the growth condition of spring maize, a major crop in China, is important for the national food security. This study used a process-based Remote-Sensing-Photosynthesis-Yield Estimation for Crops (RS-P-YEC) model, driven by satellite-derived leaf area index and ground-based meteorological observations, to simulate net primary productivity (NPP) of spring maize in Northeast China from the first ten-day (FTD) of May to the second ten-day (STD) of August during 2001-2014. The growth condition of spring maize in 2014 in Northeast China was monitored and evaluated spatially and temporally by comparison with 5- and 13-year averages, as well as 2009 and 2013. Results showed that NPP simulated by the RS-P-YEC model, with consideration of multi-scattered radiation inside the crop canopy, could reveal the growth condition of spring maize more reasonably than the Boreal Ecosystem Productivity Simulator. Moreover, NPP outperformed other commonly used vegetation indices (e.g., Normalized Difference Vegetation Index (NDVI) and Enhanced Vegetation Index (EVI)) for monitoring and evaluating the growth condition of spring maize. Compared with the 5- and 13-year averages, the growth condition of spring maize in 2014 was worse before the STD of June and after the FTD of August, and it was better from the third ten-day (TTD) of June to the TTD of July across Northeast China. Spatially, regions with slightly worse and worse growth conditions in the STD of August 2014 were concentrated mainly in central Northeast China, and they accounted for about half of the production area of spring maize in Northeast China. This study confirms that NPP is a good indicator for monitoring and evaluating growth condition because of its capacity to reflect the physiological characteristics of crops. 
Meanwhile, the RS-P-YEC model, driven by remote sensing and ground-based meteorological data, is effective for monitoring crop growth condition over large areas in near real time.

  18. Robust resolution enhancement optimization methods to process variations based on vector imaging model

    NASA Astrophysics Data System (ADS)

    Ma, Xu; Li, Yanqiu; Guo, Xuejia; Dong, Lisong

    2012-03-01

    Optical proximity correction (OPC) and phase shifting masks (PSM) are the most widely used resolution enhancement techniques (RET) in the semiconductor industry. Recently, a set of OPC and PSM optimization algorithms have been developed to solve the inverse lithography problem; however, they are designed only for the nominal imaging parameters, without sufficient attention to process variations due to aberrations, defocus, and dose variation. The effects of process variations in practical optical lithography systems become more pronounced as the critical dimension (CD) continues to shrink. In addition, lithography systems with larger NA (NA > 0.6) are now extensively used, rendering scalar imaging models inadequate to describe the vector nature of the electromagnetic field in current optical lithography systems. To tackle these problems, this paper focuses on developing gradient-based OPC and PSM optimization algorithms that are robust to process variations under a vector imaging model. To achieve this goal, an integrative and analytic vector imaging model is applied to formulate the optimization problem, where the effects of process variations are explicitly incorporated in the optimization framework. The steepest descent algorithm is used to optimize the mask iteratively. To improve the efficiency of the proposed algorithms, a set of algorithm acceleration techniques (AAT) are exploited during the optimization procedure.
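The steepest-descent mask-optimization loop can be illustrated on a deliberately tiny 1-D toy: the vector imaging model and process-variation terms of the paper are replaced here by a scalar blur, purely to show the gradient structure of inverse lithography (target, kernel, and step size are invented):

```python
# Hedged sketch: 1-D toy of gradient-based mask optimization. The "imaging
# model" is a symmetric scalar blur, not the paper's vector imaging model.

def blur(signal, kernel):
    half = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = i + k - half
            if 0 <= j < len(signal):
                acc += w * signal[j]
        out.append(acc)
    return out

def loss(mask, target, kernel):
    img = blur(mask, kernel)
    return sum((t - y) ** 2 for t, y in zip(target, img))

def grad(mask, target, kernel):
    # gradient of sum (t - K m)^2 w.r.t. m is -2 K^T (t - K m);
    # for a symmetric kernel, K^T acts as the same blur on the residual
    resid = [t - y for t, y in zip(target, blur(mask, kernel))]
    return [-2.0 * r for r in blur(resid, kernel)]

kernel = [0.25, 0.5, 0.25]                    # symmetric blur
target = [0.0] * 8 + [1.0] * 8 + [0.0] * 8    # desired printed pattern
mask = [0.5] * 24                             # initial mask guess
l0 = loss(mask, target, kernel)
for _ in range(200):                          # steepest descent iterations
    g = grad(mask, target, kernel)
    mask = [m - 0.2 * gi for m, gi in zip(mask, g)]
l1 = loss(mask, target, kernel)
print(l1 < l0)
```

The step size 0.2 is safely below the stability bound for this quadratic objective, so the pattern error decreases monotonically over the iterations.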

  19. Testing Strategies for Model-Based Development

    NASA Technical Reports Server (NTRS)

    Heimdahl, Mats P. E.; Whalen, Mike; Rajan, Ajitha; Miller, Steven P.

    2006-01-01

    This report presents an approach for testing artifacts generated in a model-based development process. This approach divides the traditional testing process into two parts: requirements-based testing (validation testing) which determines whether the model implements the high-level requirements and model-based testing (conformance testing) which determines whether the code generated from a model is behaviorally equivalent to the model. The goals of the two processes differ significantly and this report explores suitable testing metrics and automation strategies for each. To support requirements-based testing, we define novel objective requirements coverage metrics similar to existing specification and code coverage metrics. For model-based testing, we briefly describe automation strategies and examine the fault-finding capability of different structural coverage metrics using tests automatically generated from the model.
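The idea of a structural coverage metric can be made concrete with a toy instrumented model: the branch IDs, the model function, and the test suite below are all invented for illustration and are not from the report.

```python
# Hedged sketch: measuring branch coverage of a tiny, made-up model step
# function, in the spirit of comparing structural coverage metrics.

hits = set()

def model_step(mode, level):
    # each branch records its ID when taken
    if mode == "heat":
        hits.add("heat")
        if level > 10:
            hits.add("heat-high")
            return "limit"
        hits.add("heat-ok")
        return "on"
    hits.add("idle")
    return "off"

ALL_BRANCHES = {"heat", "heat-high", "heat-ok", "idle"}

tests = [("heat", 5), ("idle", 0)]
for mode, level in tests:
    model_step(mode, level)
coverage = len(hits) / len(ALL_BRANCHES)
print(coverage)   # 3 of 4 branches; "heat-high" is never exercised
```

A coverage report like this points directly at the missing test case (a high `level` in `heat` mode), which is how coverage metrics guide automated test generation.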

  20. Conveying Clinical Reasoning Based on Visual Observation via Eye-Movement Modelling Examples

    ERIC Educational Resources Information Center

    Jarodzka, Halszka; Balslev, Thomas; Holmqvist, Kenneth; Nystrom, Marcus; Scheiter, Katharina; Gerjets, Peter; Eika, Berit

    2012-01-01

    Complex perceptual tasks, like clinical reasoning based on visual observations of patients, require not only conceptual knowledge about diagnostic classes but also the skills to visually search for symptoms and interpret these observations. However, medical education so far has focused very little on how visual observation skills can be…

  1. A Grammar-based Approach for Modeling User Interactions and Generating Suggestions During the Data Exploration Process.

    PubMed

    Dabek, Filip; Caban, Jesus J

    2017-01-01

    Despite the recent popularity of visual analytics focusing on big data, little is known about how to support users who use visualization techniques to explore multi-dimensional datasets and accomplish specific tasks. Our lack of models that can assist end-users during the data exploration process has made it challenging to learn from the user's interactive and analytical process. The ability to model how a user interacts with a specific visualization technique and what difficulties they face is paramount in supporting individuals with discovering new patterns within their complex datasets. This paper introduces the notion of visualization systems that understand and model user interactions with the intent of guiding a user through a task, thereby enhancing visual data exploration. The challenges faced and the necessary future steps are discussed, and to provide a working example, a grammar-based model is presented that can learn from user interactions, determine the common patterns among a number of subjects using a K-Reversible algorithm, build a set of rules, and apply those rules in the form of suggestions to new users with the goal of guiding them along their visual analytic process. A formal evaluation study with 300 subjects was performed, showing that our grammar-based model is effective at capturing the interactive process followed by users and that further research in this area has the potential to positively impact how users interact with a visualization system.
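The paper's grammar induction uses a K-Reversible algorithm, which is not reproduced here; as a greatly simplified stand-in for "learn rules from interaction logs, then suggest next actions", here is a frequency-based next-action model over invented interaction sequences:

```python
# Hedged sketch: a simplified stand-in for the grammar-based suggestion model.
# Instead of K-Reversible grammar induction, it conditions next-action
# frequencies on the last k interactions and suggests the most common one.
from collections import Counter, defaultdict

def learn(sequences, k=1):
    rules = defaultdict(Counter)
    for seq in sequences:
        for i in range(k, len(seq)):
            rules[tuple(seq[i - k:i])][seq[i]] += 1
    return rules

def suggest(rules, history, k=1):
    ctx = tuple(history[-k:])
    if ctx not in rules:
        return None           # no rule learned for this context
    return rules[ctx].most_common(1)[0][0]

logs = [["filter", "zoom", "select"],
        ["filter", "zoom", "select"],
        ["filter", "pan", "select"]]
rules = learn(logs, k=1)
print(suggest(rules, ["filter"]))   # "zoom": seen twice after "filter"
```

A new user who has just filtered would be suggested the majority continuation observed in prior subjects, which is the basic mechanism the grammar-based model formalizes.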

  2. Introduction of the Notion of Differential Equations by Modelling Based Teaching

    ERIC Educational Resources Information Center

    Budinski, Natalija; Takaci, Djurdjica

    2011-01-01

    This paper proposes modelling based learning as a tool for learning and teaching mathematics. The example of modelling real world problems leading to the exponential function as the solution of differential equations is described, as well as the observations about students' activities during the process. The students were acquainted with the…

  3. Improving satellite-based PM2.5 estimates in China using Gaussian processes modeling in a Bayesian hierarchical setting.

    PubMed

    Yu, Wenxi; Liu, Yang; Ma, Zongwei; Bi, Jun

    2017-08-01

    Using satellite-based aerosol optical depth (AOD) measurements and statistical models to estimate ground-level PM2.5 is a promising way to fill the areas that are not covered by ground PM2.5 monitors. The statistical models used in previous studies are primarily Linear Mixed Effects (LME) and Geographically Weighted Regression (GWR) models. In this study, we developed a new regression model between PM2.5 and AOD using Gaussian processes in a Bayesian hierarchical setting. Gaussian processes model the stochastic nature of the spatial random effects, where the mean surface and the covariance function are specified. The spatial stochastic process is incorporated under the Bayesian hierarchical framework to explain the variation of PM2.5 concentrations together with other factors, such as AOD, spatial and non-spatial random effects. We evaluate the results of our model and compare them with those of other, conventional statistical models (GWR and LME) by within-sample model fitting and out-of-sample validation (cross validation, CV). The results show that our model possesses a CV result (R² = 0.81) that reflects higher accuracy than that of GWR and LME (0.74 and 0.48, respectively). Our results indicate that Gaussian process models have the potential to improve the accuracy of satellite-based PM2.5 estimates.
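The full Bayesian hierarchical model with spatial random effects is well beyond a snippet; what can be sketched is the Gaussian-process regression core, posterior mean only, with an RBF kernel, on an invented AOD-to-PM2.5 relation:

```python
# Hedged sketch: minimal Gaussian-process regression (posterior mean, RBF
# kernel). The paper's model adds a Bayesian hierarchy and spatial random
# effects, omitted here. Training data are invented.
import math

def rbf(a, b, length=1.0):
    return math.exp(-((a - b) ** 2) / (2 * length ** 2))

def solve(A, b):
    # Gauss-Jordan elimination with partial pivoting
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [u - f * v for u, v in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def gp_predict(x_train, y_train, x_new, noise=1e-6):
    K = [[rbf(a, b) + (noise if i == j else 0.0)
          for j, b in enumerate(x_train)] for i, a in enumerate(x_train)]
    alpha = solve(K, y_train)                      # K^-1 y
    k_star = [rbf(x_new, a) for a in x_train]
    return sum(w * ks for w, ks in zip(alpha, k_star))

x = [0.2, 0.5, 0.9]        # hypothetical AOD values
y = [20.0, 45.0, 80.0]     # hypothetical PM2.5 (ug/m3)
print(round(gp_predict(x, y, 0.5), 1))
```

With near-zero noise the posterior mean interpolates the training points, while the covariance function (here RBF) controls how estimates are smoothed between monitor locations.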

  4. Current Sensor Fault Diagnosis Based on a Sliding Mode Observer for PMSM Driven Systems

    PubMed Central

    Huang, Gang; Luo, Yi-Ping; Zhang, Chang-Fan; Huang, Yi-Shan; Zhao, Kai-Hui

    2015-01-01

    This paper proposes a current sensor fault detection method based on a sliding mode observer for the torque closed-loop control system of interior permanent magnet synchronous motors. First, a sliding mode observer based on the extended flux linkage is built to simplify the motor model, which effectively eliminates the phenomenon of salient poles and the dependence on the direct-axis inductance parameter, and can also be used for real-time calculation of feedback torque. Then a sliding mode current observer is constructed in αβ coordinates to generate the fault residuals of the phase current sensors. The method can accurately identify abrupt gain faults and slow-variation offset faults in faulty sensors in real time, and the generated residuals of the designed fault detection system are not affected by the unknown input. The structure of the observer and the theoretical derivation and stability proof are concise and simple. The RT-LAB real-time simulation is used to build a hardware-in-the-loop simulation model. The simulation and experimental results demonstrate the feasibility and effectiveness of the proposed method. PMID:25970258
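The residual-generation idea can be shown without the sliding-mode machinery: a stand-in "observer" predicts the expected phase current, and a fault is flagged when the residual crosses a threshold. The waveform, fault parameters, and threshold below are all invented:

```python
# Hedged sketch: residual-based current-sensor fault detection. A stand-in
# observer predicts the expected current; the paper's sliding-mode observer
# in the alpha-beta frame is replaced by an idealized sinusoid.
import math

def expected_current(t, amp=10.0, freq=50.0):
    return amp * math.sin(2.0 * math.pi * freq * t)

def detect_faults(samples, threshold=1.0):
    flags = []
    for t, measured in samples:
        residual = measured - expected_current(t)
        flags.append(abs(residual) > threshold)
    return flags

dt = 1e-4
healthy = [(k * dt, expected_current(k * dt)) for k in range(50)]
faulty = [(k * dt, 1.5 * expected_current(k * dt) + 2.0)   # gain + offset fault
          for k in range(50, 100)]
flags = detect_faults(healthy + faulty)
print(any(flags[:50]), all(flags[50:]))
```

The combined gain-and-offset fault keeps every faulty residual above the threshold in this window, so the detector separates the healthy and faulty segments cleanly.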

  5. A process-based model for cattle manure compost windrows: Model performance and application

    USDA-ARS?s Scientific Manuscript database

    A model was developed and incorporated in the Integrated Farm System Model (IFSM, v.4.3) that simulates important processes occurring during windrow composting of manure. The model, documented in an accompanying paper, predicts changes in windrow properties and conditions and the resulting emissions...

  6. Regional TEC dynamic modeling based on Slepian functions

    NASA Astrophysics Data System (ADS)

    Sharifi, Mohammad Ali; Farzaneh, Saeed

    2015-09-01

    In this work, the three-dimensional state of the ionosphere has been estimated by integrating spherical Slepian harmonic functions and the Kalman filter. The spherical Slepian harmonic functions have been used to establish the observation equations because of their properties in local modeling. Spherical harmonics are poor choices for representing or analyzing geophysical processes without perfect global coverage, but the Slepian functions afford spatial and spectral selectivity. The Kalman filter has been utilized to perform the parameter estimation owing to its suitable properties for processing GPS measurements in real-time mode. The proposed model has been applied to real data obtained from ground-based GPS observations across a portion of the IGS network in Europe. Results have been compared with the TECs estimated by the CODE, ESA, and IGS centers and the IRI-2012 model. The results indicate that the proposed model, which takes advantage of the Slepian basis and the Kalman filter, is efficient and allows for the generation of near-real-time regional TEC maps.
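The recursive-estimation half of this approach can be sketched in scalar form: a random-walk Kalman filter tracking one slowly varying TEC-like value. The Slepian basis expansion and real GPS geometry are omitted, and the measurement series below is invented:

```python
# Hedged sketch: scalar random-walk Kalman filter, the recursive estimation
# idea the paper combines with Slepian basis functions.

def kalman_1d(measurements, q=0.01, r=1.0, x0=0.0, p0=100.0):
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                    # predict: random-walk state
        k = p / (p + r)              # Kalman gain
        x = x + k * (z - x)          # update with measurement residual
        p = (1 - k) * p
        estimates.append(x)
    return estimates

obs = [10.2, 9.8, 10.1, 10.0, 9.9, 10.05]   # noisy TEC-like series (TECU)
est = kalman_1d(obs)
print(round(est[-1], 2))
```

The large initial variance `p0` makes the filter trust the first measurement almost entirely, after which the gain shrinks and the estimate settles near the underlying level; in the paper the same recursion runs over the Slepian coefficient vector instead of a scalar.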

  7. Observations and Models of the Lunar Sodium Exosphere 1988 - 1999

    NASA Technical Reports Server (NTRS)

    Killen, Rosemary; Sarantos, Menelaos; Hurley, Dana M.; Potter, Andrew E.; Morgan, Thomas H.; Farrell, William M.; Naidu, Shantanu

    2012-01-01

    Sodium in the lunar exosphere is easily observed from the Earth's surface due to its strong resonance emission lines in the visible region of the spectrum. Although sodium is a trace element, it is easily ejected from the surface by a number of processes. The variation of this exospheric constituent both spatially and temporally can help to constrain these sources and the loss processes and their timescales. Due to a revival of interest in the Moon and its volatiles, observations of the lunar exosphere obtained at the McMath-Pierce solar telescope in 1998 and 1999 have recently been reduced and analyzed. In addition, observations of the lunar sodium exosphere obtained with the Mt. Lemmon Lunar Coronagraph on Mt. Lemmon, Arizona, have also been published. We combine these new data with data previously published and reanalyzed by Sarantos et al. This comprehensive data set will be modeled using both a simple Chamberlain exosphere model and a comprehensive Monte Carlo model.

  8. Kidney transplantation process in Brazil represented in business process modeling notation.

    PubMed

    Peres Penteado, A; Molina Cohrs, F; Diniz Hummel, A; Erbs, J; Maciel, R F; Feijó Ortolani, C L; de Aguiar Roza, B; Torres Pisa, I

    2015-05-01

    Kidney transplantation is considered to be the best treatment for people with chronic kidney failure, because it improves the patients' quality of life and increases their length of survival compared with patients undergoing dialysis. The kidney transplantation process in Brazil is defined through laws, decrees, ordinances, and resolutions, but there is no visual representation of this process. The aim of this study was to analyze official documents to construct a representation of the kidney transplantation process in Brazil with the use of business process modeling notation (BPMN). The methodology for this study was based on an exploratory observational study, document analysis, and construction of process diagrams with the use of BPMN. Two rounds of validation by specialists were conducted. The result is a BPMN representation of the kidney transplantation process in Brazil. We analyzed 2 digital documents, which resulted in 2 processes with 45 total activities and events, 6 organizations involved, and 6 different stages of the process. The constructed representation makes it easier to understand the business rules of kidney transplantation and can be used by the health care professionals involved in the various activities within this process. Construction of a representation with language appropriate for the Brazilian lay public is underway. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. Control of variable speed variable pitch wind turbine based on a disturbance observer

    NASA Astrophysics Data System (ADS)

    Ren, Haijun; Lei, Xin

    2017-11-01

    In this paper, a novel sliding mode controller based on a disturbance observer (DOB) to optimize the efficiency of a variable speed variable pitch (VSVP) wind turbine is developed and analyzed. Due to the high nonlinearity of the VSVP system, the model is linearized to obtain the state-space model of the system. Then, a conventional sliding mode controller is designed and a DOB is added to estimate wind speed. The proposed control strategy can successfully deal with the random nature of wind speed, the nonlinearity of the VSVP system, the uncertainty of parameters, and external disturbances. By adding the observer to the sliding mode controller, the chattering produced by the sliding mode switching gain can be greatly reduced. The simulation results show the effectiveness and robustness of the proposed control system.

  10. Modelling morphology evolution during solidification of IPP in processing conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pantani, R.; De Santis, F.; Speranza, V.

    During polymer processing, crystallization takes place during or soon after flow. In most cases, the flow field dramatically influences both the crystallization kinetics and the crystal morphology. In turn, crystallinity and morphology affect product properties. Consequently, in the last decade, researchers have tried to identify the main parameters determining crystallinity and morphology evolution during solidification in processing conditions. In this work, we present an approach to model flow-induced crystallization (FIC) with the aim of predicting the morphology after processing. The approach is based on: interpretation of the FIC as the effect of molecular stretch on the thermodynamic crystallization temperature; modeling the molecular stretch evolution by means of a model that is simple and easy to implement in polymer processing simulation codes; identification of the effect of flow on nucleation density and spherulite growth rate by means of simple experiments; and determination of the conditions under which fibers form instead of spherulites. Model predictions reproduce most of the features of the final morphology observed in the samples after solidification.

  11. Transient high frequency signal estimation: A model-based processing approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnes, F.L.

    1985-03-22

    By utilizing the superposition property of linear systems, a method of estimating the incident signal from reflective nondispersive data is developed. One of the basic merits of this approach is that the reflections were removed by direct application of a Wiener-type estimation algorithm after the appropriate input was synthesized. The structure of the nondispersive signal model is well documented, and thus its credence is established. The model is stated, and more effort is devoted to practical methods of estimating the model parameters. Though a general approach was developed for obtaining the reflection weights, a simpler approach was employed here, since a fairly good reflection model is available. The technique essentially consists of calculating ratios of the autocorrelation function at lag zero and at the lag where the incident signal and first reflection coincide. We initially performed our processing procedure on a measurement of a single signal. Multiple applications of the processing procedure were required when we applied the reflection removal technique to a measurement containing information from the interaction of two physical phenomena. All processing was performed using SIG, an interactive signal processing package. One of the many consequences of using SIG was that repetitive operations were, for the most part, automated. A custom menu was designed to perform the deconvolution process.
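The autocorrelation-ratio idea can be demonstrated on synthetic data: for x[n] = s[n] + a*s[n-d] with white s, the ratio R(d)/R(0) equals a/(1+a^2), which can be inverted for the reflection weight a. The signal parameters below are invented:

```python
# Hedged sketch: estimating a reflection weight from the ratio of the
# autocorrelation at the reflection lag to the lag-zero value, for a
# synthetic incident-plus-echo signal x[n] = s[n] + a*s[n-d].
import random

random.seed(1)
n, d, a = 20000, 25, 0.5
s = [random.gauss(0.0, 1.0) for _ in range(n + d)]
x = [s[i + d] + a * s[i] for i in range(n)]       # incident plus delayed echo

r0 = sum(v * v for v in x) / n
rd = sum(x[i] * x[i + d] for i in range(n - d)) / (n - d)
ratio = rd / r0                                   # theory: a/(1+a^2) = 0.4

# invert a/(1+a^2) = ratio, taking the root with |a| < 1
disc = (1.0 - 4.0 * ratio * ratio) ** 0.5
a_hat = (1.0 - disc) / (2.0 * ratio)
print(round(ratio, 2), round(a_hat, 2))
```

With 20,000 samples the empirical ratio lands close to the theoretical 0.4, and the inversion recovers a reflection weight near the true 0.5; the deconvolution step described in the report would then use this weight to strip the echo.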

  12. A proposed adaptive step size perturbation and observation maximum power point tracking algorithm based on photovoltaic system modeling

    NASA Astrophysics Data System (ADS)

    Huang, Yu

    Solar energy has become one of the major alternative renewable energy options because of its abundance and accessibility. Owing to the intermittent nature of sunlight, Maximum Power Point Tracking (MPPT) techniques are in high demand when a Photovoltaic (PV) system is used to extract energy from it. This thesis proposes an advanced Perturbation and Observation (P&O) algorithm aimed at realistic operating circumstances. First, a practical PV system model is studied, determining the series and shunt resistances that are neglected in some research. Moreover, in the proposed algorithm, the duty ratio of a boost DC-DC converter is the object of the perturbation, deploying input impedance conversion to achieve working-voltage adjustment. Based on this control strategy, an adaptive duty-ratio step size P&O algorithm is proposed, with major modifications made for sharp insolation changes as well as low-insolation scenarios. Matlab/Simulink simulation of the PV model, the boost converter control strategy, and the various MPPT processes is conducted step by step. The proposed adaptive P&O algorithm is validated by the simulation results and detailed analysis of sharp insolation changes, low-insolation conditions, and continuous insolation variation.
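The baseline that the thesis improves on, fixed-step perturb-and-observe, is simple enough to sketch on a toy power curve; the adaptive step size and duty-ratio perturbation of the converter are omitted, and the quadratic curve below is invented:

```python
# Hedged sketch: fixed-step perturb-and-observe on a toy PV power curve
# P(v) = -(v - 17)^2 + 100 with its maximum at 17 V. The thesis perturbs
# the converter duty ratio with an adaptive step; this loop perturbs the
# operating voltage directly with a fixed step.

def pv_power(v):
    return -((v - 17.0) ** 2) + 100.0

def perturb_and_observe(v0=10.0, step=0.5, iters=60):
    v = v0
    p_prev = pv_power(v)
    direction = 1.0
    for _ in range(iters):
        v += direction * step          # perturb the operating point
        p = pv_power(v)
        if p < p_prev:                 # power dropped: reverse direction
            direction = -direction
        p_prev = p
    return v

v_final = perturb_and_observe()
print(round(v_final, 1))
```

The operating point climbs to the maximum power point and then oscillates within one step of it; that residual oscillation versus tracking speed trade-off is exactly what an adaptive step size addresses.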

  13. a Gaussian Process Based Multi-Person Interaction Model

    NASA Astrophysics Data System (ADS)

    Klinger, T.; Rottensteiner, F.; Heipke, C.

    2016-06-01

    Online multi-person tracking in image sequences is commonly guided by recursive filters, whose predictive models define the expected positions of future states. When a predictive model deviates too much from the true motion of a pedestrian, which is often the case in crowded scenes due to unpredicted accelerations, the data association is prone to fail. In this paper we propose a novel predictive model on the basis of Gaussian Process Regression. The model takes into account the motion of every tracked pedestrian in the scene and the prediction is executed with respect to the velocities of all interrelated persons. As shown by the experiments, the model is capable of yielding more plausible predictions even in the presence of mutual occlusions or missing measurements. The approach is evaluated on a publicly available benchmark and outperforms other state-of-the-art trackers.

  14. Modeling of cryoseismicity observed at the Fimbulisen Ice Shelf, East Antarctica

    NASA Astrophysics Data System (ADS)

    Hainzl, S.; Pirli, M.; Dahm, T.; Schweitzer, J.; Köhler, A.

    2017-12-01

    A source region of repetitive cryoseismic activity has been identified at the Fimbulisen ice shelf, in Dronning Maud Land, East Antarctica. The specific area is located at the outlet of the Jutulstraumen glacier, near the Kupol Moskovskij ice rise. A unique event catalog extending over 13 years, from 2003 to 2016, has been built based on waveform cross-correlation detectors and Hidden Markov Model classifiers. Phases of low seismicity rates alternate with intense activity intervals that exhibit a strong tidal modulation. We performed a detailed analysis and modeling of the more than 2000 events recorded between July and October 2013. The observations are characterized by a number of very clear signals: (i) the event rate follows both the neap-spring and the semi-diurnal ocean-tide cycles; (ii) recurrences have a characteristic time of approximately 8 minutes; (iii) magnitudes vary systematically on both short and long time scales; and (iv) the events migrate within short-time clusters. We use these observations to constrain the dynamic processes at work in this particular region of the Fimbulisen ice shelf. Our model assumes a local grounding of the ice shelf, where stick-slip motion occurs. We show that the observations can be reproduced considering the modulation of the Coulomb failure stress by ocean tides.

  15. Upscaling Empirically Based Conceptualisations to Model Tropical Dominant Hydrological Processes for Historical Land Use Change

    NASA Astrophysics Data System (ADS)

    Toohey, R.; Boll, J.; Brooks, E.; Jones, J.

    2009-12-01

    Surface runoff and percolation to ground water are two hydrological processes of concern to the Atlantic slope of Costa Rica because of their impacts on flooding and drinking water contamination. As per legislation, the Costa Rican Government funds land use management from the farm to the regional scale to improve or conserve hydrological ecosystem services. In this study, we examined how land use (e.g., forest, coffee, sugar cane, and pasture) affects hydrological response at the point, plot (1 m2), and the field scale (1-6ha) to empirically conceptualize the dominant hydrological processes in each land use. Using our field data, we upscaled these conceptual processes into a physically-based distributed hydrological model at the field, watershed (130 km2), and regional (1500 km2) scales. At the point and plot scales, the presence of macropores and large roots promoted greater vertical percolation and subsurface connectivity in the forest and coffee field sites. The lack of macropores and large roots, plus the addition of management artifacts (e.g., surface compaction and a plough layer), altered the dominant hydrological processes by increasing lateral flow and surface runoff in the pasture and sugar cane field sites. Macropores and topography were major influences on runoff generation at the field scale. Also at the field scale, antecedent moisture conditions suggest a threshold behavior as a temporal control on surface runoff generation. However, in this tropical climate with very intense rainstorms, annual surface runoff was less than 10% of annual precipitation at the field scale. Significant differences in soil and hydrological characteristics observed at the point and plot scales appear to have less significance when upscaled to the field scale. At the point and plot scales, percolation acted as the dominant hydrological process in this tropical environment. 
However, at the field scale for sugar cane and pasture sites, saturation-excess runoff increased as

  16. A Unimodal Model for Double Observer Distance Sampling Surveys.

    PubMed

    Becker, Earl F; Christ, Aaron M

    2015-01-01

    Distance sampling is a widely used method to estimate animal population size. Most distance sampling models utilize a monotonically decreasing detection function such as a half-normal. Recent advances in distance sampling modeling allow for the incorporation of covariates into the distance model, and the elimination of the assumption of perfect detection at some fixed distance (usually the transect line) with the use of double-observer models. The assumption of full observer independence in the double-observer model is problematic, but can be addressed by using the point independence assumption, which assumes there is one distance, the apex of the detection function, where the 2 observers are assumed independent. Aerially collected distance sampling data can have a unimodal shape and have been successfully modeled with a gamma detection function. Covariates in gamma detection models cause the apex of detection to shift depending upon covariate levels, making this model incompatible with the point independence assumption when using double-observer data. This paper reports a unimodal detection model based on a two-piece normal distribution that allows covariates, has only one apex, and is consistent with the point independence assumption when double-observer data are utilized. An aerial line-transect survey of black bears in Alaska illustrates how this method can be applied.
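The shape of a two-piece normal detection function with a single apex is easy to sketch: one scale governs the rise before the apex and a different scale the decline after it. The apex location and scales below are invented, and the covariate structure of the paper is omitted:

```python
# Hedged sketch: a two-piece normal detection function scaled so detection
# probability is 1 at its single apex, as in the unimodal double-observer
# distance model described above (covariates omitted).
import math

def two_piece_normal(x, apex=80.0, s_left=60.0, s_right=140.0):
    s = s_left if x < apex else s_right
    return math.exp(-((x - apex) ** 2) / (2.0 * s ** 2))

# detection peaks at the apex and declines on both sides, more slowly on
# the right because s_right > s_left
print(two_piece_normal(80.0),
      round(two_piece_normal(20.0), 3),
      round(two_piece_normal(140.0), 3))
```

Because the apex is a single point regardless of the scale parameters, covariates can alter `s_left` and `s_right` without moving the distance at which the two observers are assumed independent, which is the property the gamma detection function lacks.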

  17. A new process sensitivity index to identify important system processes under process model and parametric uncertainty

    DOE PAGES

    Dai, Heng; Ye, Ming; Walker, Anthony P.; ...

    2017-03-28

    A hydrological model consists of multiple process level submodels, and each submodel represents a process key to the operation of the simulated system. Global sensitivity analysis methods have been widely used to identify important processes for system model development and improvement. The existing methods of global sensitivity analysis only consider parametric uncertainty, and are not capable of handling model uncertainty caused by multiple process models that arise from competing hypotheses about one or more processes. To address this problem, this study develops a new method to probe model output sensitivity to competing process models by integrating model averaging methods with variance-based global sensitivity analysis. A process sensitivity index is derived as a single summary measure of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and their parameters. Here, for demonstration, the new index is used to assign importance to the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models of hydraulic conductivity. Each process model has its own random parameters. Finally, the new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.
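A stripped-down discrete version of the index can be computed by hand: two processes (recharge and geology), each with two competing models carrying prior weights, and a toy additive output in place of the reactive transport model. All weights and effect values below are invented, and the parametric-uncertainty layer of the paper is dropped:

```python
# Hedged sketch: a discrete analogue of the process sensitivity index. Each
# process has two competing models with prior weights; the index for a
# process is the share of output variance explained by its model choice.
from itertools import product

# (prior model weight, effect on the output) for each competing model
recharge = [(0.6, 1.0), (0.4, 3.0)]
geology = [(0.5, 0.0), (0.5, 0.2)]

# enumerate all model combinations of the toy additive system output
scenarios = [(wr * wg, r + g) for (wr, r), (wg, g) in product(recharge, geology)]
mean = sum(w * y for w, y in scenarios)
var_total = sum(w * (y - mean) ** 2 for w, y in scenarios)

def process_index(models):
    # variance of the conditional mean output across this process's models;
    # this equals the first-order effect because the toy output is additive
    mu = sum(w * e for w, e in models)
    return sum(w * (e - mu) ** 2 for w, e in models) / var_total

psi_recharge = process_index(recharge)
psi_geology = process_index(geology)
print(round(psi_recharge, 2), round(psi_geology, 2))
```

In this toy the recharge model choice dominates the output variance, so its index is near 1 and geology's near 0; in the paper the same decomposition additionally averages over each process model's random parameters.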

  18. A new process sensitivity index to identify important system processes under process model and parametric uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Heng; Ye, Ming; Walker, Anthony P.

    A hydrological model consists of multiple process level submodels, and each submodel represents a process key to the operation of the simulated system. Global sensitivity analysis methods have been widely used to identify important processes for system model development and improvement. The existing methods of global sensitivity analysis only consider parametric uncertainty, and are not capable of handling model uncertainty caused by multiple process models that arise from competing hypotheses about one or more processes. To address this problem, this study develops a new method to probe model output sensitivity to competing process models by integrating model averaging methods with variance-based global sensitivity analysis. A process sensitivity index is derived as a single summary measure of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and their parameters. Here, for demonstration, the new index is used to assign importance to the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models of hydraulic conductivity. Each process model has its own random parameters. Finally, the new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.

  19. Ensuring congruency in multiscale modeling: towards linking agent based and continuum biomechanical models of arterial adaptation.

    PubMed

    Hayenga, Heather N; Thorne, Bryan C; Peirce, Shayn M; Humphrey, Jay D

    2011-11-01

    There is a need to develop multiscale models of vascular adaptations to understand tissue-level manifestations of cellular level mechanisms. Continuum-based biomechanical models are well suited for relating blood pressures and flows to stress-mediated changes in geometry and properties, but less so for describing underlying mechanobiological processes. Discrete stochastic agent-based models are well suited for representing biological processes at a cellular level, but not for describing tissue-level mechanical changes. We present here a conceptually new approach to facilitate the coupling of continuum and agent-based models. Because of ubiquitous limitations in both the tissue- and cell-level data from which one derives constitutive relations for continuum models and rule-sets for agent-based models, we suggest that model verification should enforce congruency across scales. That is, multiscale model parameters initially determined from data sets representing different scales should be refined, when possible, to ensure that common outputs are consistent. Potential advantages of this approach are illustrated by comparing simulated aortic responses to a sustained increase in blood pressure predicted by continuum and agent-based models both before and after instituting a genetic algorithm to refine 16 objectively bounded model parameters. We show that congruency-based parameter refinement not only yielded increased consistency across scales, it also yielded predictions that are closer to in vivo observations.
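The congruency-enforcing refinement step can be caricatured with a tiny genetic algorithm: two toy model outputs, a "continuum" and an "agent-based" stand-in, share one bounded parameter, and the GA searches for the value at which their outputs agree. The model forms, bounds, and GA settings are all invented (the paper refines 16 parameters):

```python
# Hedged sketch: a tiny genetic algorithm refining one shared parameter so
# a toy "continuum" output and a toy "agent-based" output agree (congruency).
import random

def continuum(p):
    return 2.0 * p + 1.0

def agent_based(p):
    return p * p

def mismatch(p):
    # squared disagreement between the two scales' outputs
    return (continuum(p) - agent_based(p)) ** 2

random.seed(7)
pop = [random.uniform(0.0, 5.0) for _ in range(30)]
for _ in range(80):
    pop.sort(key=mismatch)
    parents = pop[:10]                                    # keep the best
    children = []
    while len(children) < 20:
        a, b = random.sample(parents, 2)
        child = 0.5 * (a + b) + random.gauss(0.0, 0.1)    # crossover + mutation
        children.append(min(max(child, 0.0), 5.0))        # respect bounds
    pop = parents + children
best = min(pop, key=mismatch)
print(round(best, 2))
```

The population converges toward the parameter value where the two model outputs coincide (here near 1 + sqrt(2)), mirroring how a GA over objectively bounded parameters can drive multiscale outputs toward consistency.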

  20. NASA Cold Land Processes Experiment (CLPX 2002/03): Ground-based and near-surface meteorological observations

    Treesearch

    Kelly Elder; Don Cline; Angus Goodbody; Paul Houser; Glen E. Liston; Larry Mahrt; Nick Rutter

    2009-01-01

    A short-term meteorological database has been developed for the Cold Land Processes Experiment (CLPX). This database includes meteorological observations from stations designed and deployed exclusively for CLPX as well as observations available from other sources located in the small regional study area (SRSA) in north-central Colorado. The measured weather parameters...

  1. Non-fragile observer-based output feedback control for polytopic uncertain system under distributed model predictive control approach

    NASA Astrophysics Data System (ADS)

    Zhu, Kaiqun; Song, Yan; Zhang, Sunjie; Zhong, Zhaozhun

    2017-07-01

    In this paper, a non-fragile observer-based output feedback control problem for the polytopic uncertain system under a distributed model predictive control (MPC) approach is discussed. By decomposing the global system into subsystems, the computational complexity is reduced and online design time is saved. Moreover, an observer-based output feedback control algorithm is proposed in the framework of distributed MPC to deal with the difficulty of obtaining state measurements. In this way, the presented observer-based output-feedback MPC strategy is more flexible and applicable in practice than the traditional state-feedback one. Furthermore, the non-fragility of the controller has been taken into consideration in favour of increasing the robustness of the polytopic uncertain system. A sufficient stability criterion is then presented using a Lyapunov-like functional approach; meanwhile, the corresponding control law and the upper bound of the quadratic cost function are derived by solving an optimisation problem subject to convex constraints. Finally, simulation examples are employed to show the effectiveness of the method.

  2. When Models and Observations Collide: Journeying towards an Integrated Snow Depth Product

    NASA Astrophysics Data System (ADS)

    Webster, M.; Petty, A.; Boisvert, L.; Markus, T.; Kurtz, N. T.; Kwok, R.; Perovich, D. K.

    2017-12-01

    Knowledge of snow depth is essential for assessing changes in sea ice mass balance due to snow's insulating and reflective properties. In remote sensing applications, the accuracy of sea ice thickness retrievals from altimetry crucially depends on snow depth. Despite the need for snow depth data, we currently lack continuous observations that capture the basin-scale snow depth distribution and its seasonal evolution. Recent in situ and remote sensing observations are sparse in space and time, and contain uncertainties, caveats, and/or biases that often require careful interpretation. Likewise, using model output for remote sensing applications is limited due to uncertainties in atmospheric forcing and different treatments of snow processes. Here, we summarize our efforts in bringing observational and model data together to develop an approach for an integrated snow depth product. We start with a snow budget model and incrementally incorporate snow processes to determine the effects on snow depth and to assess model sensitivity. We discuss lessons learned in model-observation integration and ideas for potential improvements to the treatment of snow in models.
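    The "start with a snow budget and add processes incrementally" idea can be sketched as follows; the process rates (constant sublimation, a blowing-snow loss fraction) and the synthetic snowfall forcing are placeholders for illustration, not values from any product.

    ```python
    import numpy as np

    # Minimal daily snow (water-equivalent) budget: accumulate solid
    # precipitation, then subtract simple process terms one at a time to
    # see how each added process changes the modeled snowpack.
    days = 120
    rng = np.random.default_rng(3)
    snowfall = np.where(rng.random(days) < 0.3,
                        rng.exponential(2.0, days), 0.0)   # mm SWE per day

    def simulate(sublimation=0.0, wind_loss_frac=0.0):
        swe, out = 0.0, np.empty(days)
        for d in range(days):
            swe += snowfall[d] * (1 - wind_loss_frac)   # accumulation - blowing snow
            swe = max(swe - sublimation, 0.0)           # constant sublimation sink
            out[d] = swe
        return out

    base      = simulate()
    plus_subl = simulate(sublimation=0.05)
    plus_wind = simulate(sublimation=0.05, wind_loss_frac=0.1)
    ```

    Comparing the three runs shows the marginal effect of each added process on end-of-season snow water equivalent, which is the essence of the sensitivity assessment described above.
    
    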

  3. Investigation of CO, C2H6 and aerosols over Eastern Canada during BORTAS 2011 using ground-based and satellite-based observations and model simulations

    NASA Astrophysics Data System (ADS)

    Griffin, Debora; Franklin, Jonathan; Parrington, Mark; Whaley, Cynthia; Hopper, Jason; Lesins, Glen; Tereszchuk, Keith; Walker, Kaley A.; Drummond, James R.; Palmer, Paul; Strong, Kimberly; Duck, Thomas J.; Abboud, Ihab; Dan, Lin; O'Neill, Norm; Clerbaux, Cathy; Coheur, Pierre; Bernath, Peter F.; Hyer, Edward; Kliever, Jenny

    2013-04-01

    We present the results of total column measurements of CO and C2H6 and aerosol optical depth (AOD) during the Quantifying the impact of BOReal forest fires on Tropospheric oxidants over the Atlantic using Aircraft and Satellites (BORTAS-B) campaign over Eastern Canada. Ground-based observations, using Fourier transform spectrometers (FTSs) and sun photometers, were carried out in July and August 2011. They were taken in Halifax, Nova Scotia, which is an ideal location to monitor the outflow of boreal fires from North America, and in Toronto, Ontario. Measurements of enhanced fine mode AOD were highly correlated with enhancements in coincident trace gas (CO and C2H6) observations between 19 and 21 July 2011, which is typical for a smoke plume event. In this study, we focus on the identification of the origin and the transport of this smoke plume. We use back-trajectories calculated by the Canadian Meteorological Centre (CMC) as well as FLEXPART forward-trajectories to demonstrate that the enhanced CO, C2H6 and fine mode AOD seen near Halifax and Toronto did originate from forest fires in Northwestern Ontario that occurred between 17 and 19 July 2011. In addition, total column measurements of CO from the satellite-borne Infrared Atmospheric Sounding Interferometer (IASI) have been used to trace the smoke plume and to confirm the origin of the CO enhancement. Furthermore, the emission ratio (ERC2H6-CO) and the emission factor (EFC2H6) of C2H6 (with respect to the CO emission) were estimated from these ground-based observations. The C2H6 emission results from boreal fires in Northwestern Ontario agree well with C2H6 emission measurements from other boreal regions, and are relatively high compared to other geographical regions. The ground-based CO and C2H6 observations were compared with output from the 3-D global chemical transport model GEOS-Chem, using the inventory of the Fire Locating And Monitoring of Burning Emissions (FLAMBE). Good agreement was found for
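    The emission-ratio and emission-factor estimates mentioned above follow a standard recipe that can be sketched as below. The column amounts and the assumed CO emission factor are illustrative placeholders, not the BORTAS measurements; only the conversion ER = slope(C2H6, CO) and EF_x = ER * (M_x / M_CO) * EF_CO is the point.

    ```python
    import numpy as np

    # Illustrative correlated plume-enhancement series (molecules cm^-2);
    # the values are synthetic, not the BORTAS-B data.
    co   = np.array([2.1, 2.6, 3.4, 4.0, 4.9]) * 1e18   # CO total columns
    c2h6 = np.array([1.1, 1.4, 1.9, 2.3, 2.8]) * 1e16   # C2H6 total columns

    # Emission ratio ER(C2H6/CO): slope of the C2H6 vs CO enhancements,
    # i.e. a least-squares line through the correlated plume points.
    er = np.polyfit(co, c2h6, 1)[0]

    # Emission factor of C2H6 relative to an assumed CO emission factor
    # (g per kg dry fuel; the CO figure here is only an assumed typical
    # boreal value, used to illustrate the molar-mass conversion).
    EF_CO = 127.0                   # assumed, g/kg
    M_C2H6, M_CO = 30.07, 28.01     # molar masses, g/mol
    EF_C2H6 = er * (M_C2H6 / M_CO) * EF_CO
    print(f"ER = {er:.4f} mol/mol, EF(C2H6) = {EF_C2H6:.2f} g/kg")
    ```

    The slope is taken on the enhancement-correlated points within the plume period, which is why the high CO-C2H6 correlation noted in the abstract matters for the estimate.
    
    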

  4. Electrophysiological models of neural processing.

    PubMed

    Nelson, Mark E

    2011-01-01

    The brain is an amazing information processing system that allows organisms to adaptively monitor and control complex dynamic interactions with their environment across multiple spatial and temporal scales. Mathematical modeling and computer simulation techniques have become essential tools in understanding diverse aspects of neural processing ranging from sub-millisecond temporal coding in the sound localization circuity of barn owls to long-term memory storage and retrieval in humans that can span decades. The processing capabilities of individual neurons lie at the core of these models, with the emphasis shifting upward and downward across different levels of biological organization depending on the nature of the questions being addressed. This review provides an introduction to the techniques for constructing biophysically based models of individual neurons and local networks. Topics include Hodgkin-Huxley-type models of macroscopic membrane currents, Markov models of individual ion-channel currents, compartmental models of neuronal morphology, and network models involving synaptic interactions among multiple neurons.
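    A minimal single-compartment Hodgkin-Huxley model of the kind the review introduces can be integrated with forward Euler in a few lines; the rate functions and parameters below are the standard squid-axon values, and the step-current amplitude is chosen arbitrarily to evoke spiking.

    ```python
    import numpy as np

    # Standard Hodgkin-Huxley squid-axon parameters.
    C_m = 1.0                            # membrane capacitance, uF/cm^2
    g_Na, g_K, g_L = 120.0, 36.0, 0.3    # max conductances, mS/cm^2
    E_Na, E_K, E_L = 50.0, -77.0, -54.4  # reversal potentials, mV

    def alpha_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
    def beta_n(V):  return 0.125 * np.exp(-(V + 65) / 80)
    def alpha_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
    def beta_m(V):  return 4.0 * np.exp(-(V + 65) / 18)
    def alpha_h(V): return 0.07 * np.exp(-(V + 65) / 20)
    def beta_h(V):  return 1.0 / (1 + np.exp(-(V + 35) / 10))

    dt, T = 0.01, 50.0                   # time step and duration, ms
    t = np.arange(0, T, dt)
    V = -65.0                            # resting potential, mV
    n, m, h = 0.317, 0.053, 0.596        # gating variables at rest
    Vs = np.empty(t.size)

    for i, ti in enumerate(t):
        I_ext = 10.0 if ti > 5.0 else 0.0      # step current, uA/cm^2
        I_Na = g_Na * m**3 * h * (V - E_Na)    # macroscopic membrane currents
        I_K  = g_K * n**4 * (V - E_K)
        I_L  = g_L * (V - E_L)
        V += dt * (I_ext - I_Na - I_K - I_L) / C_m   # forward-Euler update
        n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
        m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
        Vs[i] = V

    print(f"peak membrane potential: {Vs.max():.1f} mV")
    ```

    The same structure scales up naturally: Markov channel models replace the gating equations, and compartmental models couple many such units through axial currents.
    
    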

  5. Effect of river flow fluctuations on riparian vegetation dynamics: Processes and models

    NASA Astrophysics Data System (ADS)

    Vesipa, Riccardo; Camporeale, Carlo; Ridolfi, Luca

    2017-12-01

    Several decades of field observations, laboratory experiments and mathematical modeling have demonstrated that the riparian environment is a disturbance-driven ecosystem, and that the main source of disturbance is river flow fluctuations. The focus of the present work has been on the key role that flow fluctuations play in determining the abundance, zonation and species composition of patches of riparian vegetation. To this end, the scientific literature on the subject over the last 20 years has been reviewed. First, the most relevant ecological, morphological and chemical mechanisms induced by river flow fluctuations are described from a process-based perspective. The role of flow variability is discussed for the processes that affect the recruitment of vegetation, the vegetation during its adult life, and the morphological and nutrient dynamics occurring in the riparian habitat. Particular emphasis has been given to studies that were aimed at quantifying the effect of these processes on vegetation, and at linking them to the statistical characteristics of the river hydrology. Second, the advances made from a modeling point of view have been considered and discussed. The main models that have been developed to describe the dynamics of riparian vegetation have been presented. Different modeling approaches have been compared, and the corresponding advantages and drawbacks have been pointed out. Finally, attention has been paid to identifying the processes considered by the models, and these processes have been compared with those that have actually been observed or measured in field/laboratory studies.

  6. Conceptual Research of Lunar-based Earth Observation for Polar Glacier Motion

    NASA Astrophysics Data System (ADS)

    Ruan, Zhixing; Liu, Guang; Ding, Yixing

    2016-07-01

    The ice flow velocity of glaciers is important for estimating the polar ice sheet mass balance, and it is of great significance for studies of rising sea level against the background of global warming. So far, however, long-term, global measurements of these macro-scale motion processes of polar glaciers have hardly been achieved by Earth Observation (EO) techniques from the ground, aircraft or satellites in space. Facing the demand of space technology for large-scale global environmental change observation, especially changes in polar glaciers, this paper proposes a new concept: setting up sensors on the lunar surface and using the Moon as a platform for Earth observation, transmitting the data back to Earth. Lunar-based Earth observation, which enables the Earth's large-scale, continuous, long-term dynamic motions to be measured, is expected to provide a new solution to the problems mentioned above. According to the pattern and characteristics of polar glacier motion, we propose a comprehensive investigation of lunar-based Earth observation with synthetic aperture radar (SAR). Via theoretical modeling and experimental simulation inversion, intensive studies of lunar-based Earth observation of glacier motion in the polar regions will be implemented, including basic InSAR theory, InSAR observation modes, and optimization methods for their key parameters. This work will help to expand the EO technique system from space, and will contribute to establishing the theoretical foundation for global, long-term and continuous observation of glacier motion in the Antarctic and the Arctic.

  7. Calibration process of highly parameterized semi-distributed hydrological model

    NASA Astrophysics Data System (ADS)

    Vidmar, Andrej; Brilly, Mitja

    2017-04-01

    Hydrological phenomena take place in the hydrological system, which is governed by nature, and are essentially stochastic. These phenomena are unique, non-recurring, and changeable across space and time. Since any river basin, with its own natural characteristics, and any hydrological event therein are unique, calibration is a complex process that is not researched enough. Calibration is the procedure of determining model parameters that are not known well enough: the input and output variables and the mathematical model expressions are known, while some parameters are unknown and are determined by calibrating the model. The software used for hydrological modelling nowadays is equipped with sophisticated calibration algorithms that leave the modeller no possibility to manage the process, and the results are often not the best. We have therefore developed a procedure for an expert-driven calibration process. We use the HBV-light-CLI hydrological model, which has a command-line interface, and couple it with PEST, a parameter estimation tool widely used in groundwater modelling that can also be applied to surface waters. A calibration process managed directly by the expert, in proportion to the expert's knowledge, affects the outcome of the inversion procedure and achieves better results than if the procedure were left to the selected optimization algorithm alone. The first step is to properly define the spatial characteristics and structural design of the semi-distributed model, including all morphological and hydrological features such as karstic, alluvial and forest areas. This step requires the geological, meteorological, hydraulic and hydrological knowledge of the modeller. The second step is to set initial parameter values to their preferred values based on expert knowledge. In this step we also define all parameter and observation groups. Peak data are essential in the calibration process if we are mainly interested in flood events. Each sub-catchment in the model has its own observation group

  8. A framework for modeling scenario-based barrier island storm impacts

    USGS Publications Warehouse

    Mickey, Rangley; Long, Joseph W.; Dalyander, P. Soupy; Plant, Nathaniel G.; Thompson, David M.

    2018-01-01

    Methods for investigating the vulnerability of existing or proposed coastal features to storm impacts often rely on simplified parametric models or one-dimensional process-based modeling studies that focus on changes to a profile across a dune or barrier island. These simple studies tend to neglect the impacts to curvilinear or alongshore varying island planforms, influence of non-uniform nearshore hydrodynamics and sediment transport, irregular morphology of the offshore bathymetry, and impacts from low magnitude wave events (e.g. cold fronts). Presented here is a framework for simulating regionally specific, low and high magnitude scenario-based storm impacts to assess the alongshore variable vulnerabilities of a coastal feature. Storm scenarios based on historic hydrodynamic conditions were derived and simulated using the process-based morphologic evolution model XBeach. Model results show that the scenarios predicted similar patterns of erosion and overwash when compared to observed qualitative morphologic changes from recent storm events that were not included in the dataset used to build the scenarios. The framework model simulations were capable of predicting specific areas of vulnerability in the existing feature and the results illustrate how this storm vulnerability simulation framework could be used as a tool to help inform the decision-making process for scientists, engineers, and stakeholders involved in coastal zone management or restoration projects.

  9. The Australian electricity market's pre-dispatch process: Some observations on its efficiency using ordered probit model

    NASA Astrophysics Data System (ADS)

    Zainudin, Wan Nur Rahini Aznie; Becker, Ralf; Clements, Adam

    2015-12-01

    Many market participants in the Australian Electricity Market have cast doubt on whether the market's pre-dispatch process gives them good and timely quantity and price information. A previous study [11] observed a significant bias (mainly indicating that the pre-dispatch process tends to underestimate spot price outcomes), seasonality of the bias across seasons and/or trading periods, and changes in the bias across the years of the sample period (1999 to 2007). In the formal setting of an ordered probit model, we establish that some exogenous variables can explain increased probabilities of over- or under-prediction of the spot price. It transpires that meteorological data, expected pre-dispatch prices and information on past over- and under-predictions contribute significantly to explaining variation in these probabilities. The results allow us to conjecture that some of the bids and re-bids provided by electricity generators are not made in good faith.
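    An ordered probit of the kind used in the study can be sketched as follows. The regressors and coefficients below are synthetic placeholders (the actual study uses meteorological data, expected pre-dispatch prices and past prediction errors); the sketch only shows the latent-variable likelihood with jointly estimated cutpoints.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    rng = np.random.default_rng(2)

    # Synthetic stand-ins for two explanatory variables, e.g. a temperature
    # reading and the previous period's prediction error.
    n = 2000
    X = np.column_stack([rng.normal(25, 5, n), rng.normal(0, 1, n)])
    beta_true, cuts_true = np.array([0.08, 0.6]), np.array([1.0, 3.0])
    ystar = X @ beta_true + rng.normal(size=n)   # latent mispricing tendency
    y = np.digitize(ystar, cuts_true)            # 0=under, 1=accurate, 2=over

    def neg_loglik(params):
        """Ordered probit: P(y=j) = Phi(c_j - X b) - Phi(c_{j-1} - X b)."""
        b, c1, dc = params[:2], params[2], np.exp(params[3])  # dc > 0 keeps cuts ordered
        cuts = np.array([-np.inf, c1, c1 + dc, np.inf])
        xb = X @ b
        p = norm.cdf(cuts[y + 1] - xb) - norm.cdf(cuts[y] - xb)
        return -np.sum(np.log(np.clip(p, 1e-12, None)))

    fit = minimize(neg_loglik, x0=np.zeros(4), method="BFGS")
    b_hat = fit.x[:2]
    ```

    Fitted coefficients then translate directly into how each covariate shifts the probabilities of over- versus under-prediction, which is the quantity of interest in the abstract.
    
    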

  10. Comprehensive, Process-based Identification of Hydrologic Models using Satellite and In-situ Water Storage Data: A Multi-objective calibration Approach

    NASA Astrophysics Data System (ADS)

    Abdo Yassin, Fuad; Wheater, Howard; Razavi, Saman; Sapriza, Gonzalo; Davison, Bruce; Pietroniro, Alain

    2015-04-01

    The credible identification of vertical and horizontal hydrological components and their associated parameters is very challenging (if not impossible) when the model is constrained only to streamflow data, especially in regions where vertical processes significantly dominate horizontal processes. The prairie areas of the Saskatchewan River basin, a major water system in Canada, demonstrate such behavior: hydrologic connectivity and vertical fluxes are mainly controlled by the amount of surface and sub-surface water storage. In this study, we develop a framework for distributed hydrologic model identification and calibration that jointly constrains the model response (i.e., streamflows) and a set of model state variables (i.e., water storages) to observations. The framework is set up as a multi-objective optimization, where multiple performance criteria are defined and used to simultaneously evaluate the fidelity of the model to streamflow observations and to observed (estimated) changes of water storage in the gridded landscape over daily and monthly time scales. The time series of estimated changes in total water storage (including soil, canopy, snow and pond storages) used in this study were derived from an experimental study enhanced by information obtained from the GRACE satellite. We test this framework on the calibration of a Land Surface Scheme-Hydrology model, called MESH (Modélisation Environmentale Communautaire - Surface and Hydrology), for the Saskatchewan River basin. Pareto Archived Dynamically Dimensioned Search (PA-DDS) is used as the multi-objective optimization engine. The significance of the developed framework is demonstrated in comparison with the results obtained through a conventional calibration against streamflow observations alone. Incorporating water storage data into the model identification process can potentially constrain the posterior parameter space more comprehensively
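    The multi-objective selection step can be illustrated by a plain Pareto-dominance filter; the objective values below are made-up scores standing in for a streamflow-fit criterion and a storage-fit criterion (PA-DDS itself is considerably more elaborate than this filter).

    ```python
    import numpy as np

    # Hypothetical objective values for five candidate parameter sets, both
    # to be minimized: column 0 ~ streamflow misfit, column 1 ~ storage misfit.
    scores = np.array([[0.20, 1.4],
                       [0.15, 1.9],
                       [0.30, 1.1],
                       [0.25, 1.6],   # dominated by candidate 0
                       [0.18, 1.5]])

    def is_dominated(i, scores):
        """True if another candidate is <= in every objective and < in one."""
        others = np.delete(scores, i, axis=0)
        return bool(np.any(np.all(others <= scores[i], axis=1) &
                           np.any(others < scores[i], axis=1)))

    pareto_front = [i for i in range(len(scores)) if not is_dominated(i, scores)]
    print(pareto_front)
    ```

    The surviving non-dominated set is the trade-off front between fitting streamflows and fitting storage changes; a single-objective calibration would collapse it to one point and hide the trade-off.
    
    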

  11. A Comparative of business process modelling techniques

    NASA Astrophysics Data System (ADS)

    Tangkawarow, I. R. H. T.; Waworuntu, J.

    2016-04-01

    There are now many business process modelling techniques. This article investigates the differences between them: for each technique, the definition and structure are explained. The paper presents a comparative analysis of several popular business process modelling techniques. The comparative framework is based on two criteria: notation, and how the technique works when applied to Somerleyton Animal Park. The discussion of each technique ends with its advantages and disadvantages. The conclusion recommends business process modelling techniques that are easy to use, and serves as a basis for evaluating further modelling techniques.

  12. Multi-model study of mercury dispersion in the atmosphere: atmospheric processes and model evaluation

    NASA Astrophysics Data System (ADS)

    Travnikov, Oleg; Angot, Hélène; Artaxo, Paulo; Bencardino, Mariantonia; Bieser, Johannes; D'Amore, Francesco; Dastoor, Ashu; De Simone, Francesco; Diéguez, María del Carmen; Dommergue, Aurélien; Ebinghaus, Ralf; Feng, Xin Bin; Gencarelli, Christian N.; Hedgecock, Ian M.; Magand, Olivier; Martin, Lynwill; Matthias, Volker; Mashyanov, Nikolay; Pirrone, Nicola; Ramachandran, Ramesh; Read, Katie Alana; Ryjkov, Andrei; Selin, Noelle E.; Sena, Fabrizio; Song, Shaojie; Sprovieri, Francesca; Wip, Dennis; Wängberg, Ingvar; Yang, Xin

    2017-04-01

    Current understanding of mercury (Hg) behavior in the atmosphere contains significant gaps. Some key characteristics of Hg processes, including anthropogenic and geogenic emissions, atmospheric chemistry, and air-surface exchange, are still poorly known. This study provides a complex analysis of processes governing Hg fate in the atmosphere involving both measured data from ground-based sites and simulation results from chemical transport models. A variety of long-term measurements of gaseous elemental Hg (GEM) and reactive Hg (RM) concentration as well as Hg wet deposition flux have been compiled from different global and regional monitoring networks. Four contemporary global-scale transport models for Hg were used, both in their state-of-the-art configurations and for a number of numerical experiments to evaluate particular processes. Results of the model simulations were evaluated against measurements. As follows from the analysis, the interhemispheric GEM gradient is largely formed by the prevailing spatial distribution of anthropogenic emissions in the Northern Hemisphere. The contributions of natural and secondary emissions enhance the south-to-north gradient, but their effect is less significant. Atmospheric chemistry has a limited effect on the spatial distribution and temporal variation of GEM concentration in surface air. In contrast, RM air concentration and wet deposition are largely defined by oxidation chemistry. The Br oxidation mechanism can reproduce successfully the observed seasonal variation of the RM / GEM ratio in the near-surface layer, but it predicts a wet deposition maximum in spring instead of in summer as observed at monitoring sites in North America and Europe. Model runs with OH chemistry correctly simulate both the periods of maximum and minimum values and the amplitude of observed seasonal variation but shift the maximum RM / GEM ratios from spring to summer. O3 chemistry does not predict significant seasonal variation of Hg

  13. Predictive representations can link model-based reinforcement learning to model-free mechanisms.

    PubMed

    Russek, Evan M; Momennejad, Ida; Botvinick, Matthew M; Gershman, Samuel J; Daw, Nathaniel D

    2017-09-01

    Humans and animals are capable of evaluating actions by considering their long-run future rewards through a process described using model-based reinforcement learning (RL) algorithms. The mechanisms by which neural circuits perform the computations prescribed by model-based RL remain largely unknown; however, multiple lines of evidence suggest that neural circuits supporting model-based behavior are structurally homologous to and overlapping with those thought to carry out model-free temporal difference (TD) learning. Here, we lay out a family of approaches by which model-based computation may be built upon a core of TD learning. The foundation of this framework is the successor representation, a predictive state representation that, when combined with TD learning of value predictions, can produce a subset of the behaviors associated with model-based learning, while requiring less decision-time computation than dynamic programming. Using simulations, we delineate the precise behavioral capabilities enabled by evaluating actions using this approach, and compare them to those demonstrated by biological organisms. We then introduce two new algorithms that build upon the successor representation while progressively mitigating its limitations. Because this framework can account for the full range of observed putatively model-based behaviors while still utilizing a core TD framework, we suggest that it represents a neurally plausible family of mechanisms for model-based evaluation.
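    The successor representation and its TD(0) learning rule can be shown concretely on a toy chain; the three-state environment below is invented purely for illustration.

    ```python
    import numpy as np

    # Tiny deterministic chain: state 0 -> 1 -> 2 -> terminal, reward at 2.
    n_states, gamma, alpha = 3, 0.95, 0.1
    transitions = {0: 1, 1: 2, 2: None}
    reward = np.array([0.0, 0.0, 1.0])

    # Successor representation M[s, s'] ~ expected discounted future
    # occupancy of s' starting from s, learned with a TD(0)-style update.
    M = np.zeros((n_states, n_states))
    for _ in range(500):
        s = 0
        while s is not None:
            s_next = transitions[s]
            onehot = np.eye(n_states)[s]
            m_next = M[s_next] if s_next is not None else 0.0
            M[s] += alpha * (onehot + gamma * m_next - M[s])   # SR TD error
            s = s_next

    # Values follow from the learned predictive map by a single dot product,
    # giving model-based-like evaluation from a TD-learned quantity:
    V = M @ reward    # ~ [gamma^2, gamma, 1]
    ```

    The key property the abstract highlights is visible here: changing `reward` re-values every state instantly via `M @ reward`, without relearning `M`, yet `M` itself was learned by plain TD.
    
    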

  14. Predictive representations can link model-based reinforcement learning to model-free mechanisms

    PubMed Central

    Botvinick, Matthew M.

    2017-01-01

    Humans and animals are capable of evaluating actions by considering their long-run future rewards through a process described using model-based reinforcement learning (RL) algorithms. The mechanisms by which neural circuits perform the computations prescribed by model-based RL remain largely unknown; however, multiple lines of evidence suggest that neural circuits supporting model-based behavior are structurally homologous to and overlapping with those thought to carry out model-free temporal difference (TD) learning. Here, we lay out a family of approaches by which model-based computation may be built upon a core of TD learning. The foundation of this framework is the successor representation, a predictive state representation that, when combined with TD learning of value predictions, can produce a subset of the behaviors associated with model-based learning, while requiring less decision-time computation than dynamic programming. Using simulations, we delineate the precise behavioral capabilities enabled by evaluating actions using this approach, and compare them to those demonstrated by biological organisms. We then introduce two new algorithms that build upon the successor representation while progressively mitigating its limitations. Because this framework can account for the full range of observed putatively model-based behaviors while still utilizing a core TD framework, we suggest that it represents a neurally plausible family of mechanisms for model-based evaluation. PMID:28945743

  15. Development and Sensitivity Analysis of a Frost Risk model based primarily on freely distributed Earth Observation data

    NASA Astrophysics Data System (ADS)

    Louka, Panagiota; Petropoulos, George; Papanikolaou, Ioannis

    2015-04-01

    The ability to map the spatiotemporal distribution of extreme climatic conditions, such as frost, is a significant tool in successful agricultural management and decision making. Nowadays, with the development of Earth Observation (EO) technology, it is possible to obtain accurate, timely and cost-effective information on the spatiotemporal distribution of frost conditions, particularly over large and otherwise inaccessible areas. The present study aimed at developing and evaluating a frost risk prediction model, exploiting primarily EO data from the MODIS and ASTER sensors and ancillary ground observation data. For the evaluation of the model, a region in north-western Greece was selected as the test site and a detailed sensitivity analysis was implemented. The agreement between the model predictions and the observed (remotely sensed) frost frequency obtained by the MODIS sensor was evaluated thoroughly. Detailed comparisons of the model predictions were also performed against reference frost ground observations acquired from the Greek Agricultural Insurance Organization (ELGA) over a period of 10 years (2000-2010). Overall, the results evidenced the ability of the model to reproduce frost conditions reasonably well, following largely explainable patterns with respect to the study site and local weather characteristics. Implementation of the proposed frost risk model is based primarily on satellite imagery analysis that is nowadays provided globally at no cost. It is also straightforward and computationally inexpensive, requiring much less effort than, for example, field surveying. Finally, the method is adjustable and can potentially be integrated with other high-resolution data available from both commercial and non-commercial vendors. Keywords: Sensitivity analysis, frost risk mapping, GIS, remote sensing, MODIS, Greece

  16. Implementation of the nursing process in a health area: models and assessment structures used

    PubMed Central

    Huitzi-Egilegor, Joseba Xabier; Elorza-Puyadena, Maria Isabel; Urkia-Etxabe, Jose Maria; Asurabarrena-Iraola, Carmen

    2014-01-01

    OBJECTIVE: to analyze what nursing models and nursing assessment structures have been used in the implementation of the nursing process at the public and private centers in the health area Gipuzkoa (Basque Country). METHOD: a retrospective study was undertaken, based on the analysis of the nursing records used at the 158 centers studied. RESULTS: the Henderson model, Carpenito's bifocal structure, Gordon's assessment structure and the Resident Assessment Instrument Nursing Home 2.0 have been used as nursing models and assessment structures to implement the nursing process. At some centers, the selected model or assessment structure has varied over time. CONCLUSION: Henderson's model has been the most used to implement the nursing process. Furthermore, the trend is observed to complement or replace Henderson's model by nursing assessment structures. PMID:25493672

  17. Observation and modelling of the Fe XXI line profile observed by IRIS during the impulsive phase of flares

    NASA Astrophysics Data System (ADS)

    Polito, V.; Testa, P.; De Pontieu, B.; Allred, J. C.

    2017-12-01

    The observation of the high temperature (above 10 MK) Fe XXI 1354.1 A line with the Interface Region Imaging Spectrograph (IRIS) has provided significant insights into the chromospheric evaporation process in flares. In particular, the line is often observed to be completely blueshifted, in contrast to previous observations at lower spatial and spectral resolution, and in agreement with predictions from theoretical models. Interestingly, the line is also observed to be mostly symmetric and with a large excess above the thermal width. One popular interpretation for the excess broadening is given by assuming a superposition of flows from different loop strands. In this work, we perform a statistical analysis of Fe XXI line profiles observed by IRIS during the impulsive phase of flares and compare our results with hydrodynamic simulations of multi-thread flare loops performed with the 1D RADYN code. Our results indicate that the multi-thread models cannot easily reproduce the symmetry of the line and that some other physical process might need to be invoked in order to explain the observed profiles.

  18. Critical Evaluation of Prediction Models for Phosphorus Partition between CaO-based Slags and Iron-based Melts during Dephosphorization Processes

    NASA Astrophysics Data System (ADS)

    Yang, Xue-Min; Li, Jin-Yan; Chai, Guo-Ming; Duan, Dong-Ping; Zhang, Jian

    2016-08-01

    According to the experimental results of hot metal dephosphorization by CaO-based slags at a commercial-scale hot metal pretreatment station, 16 models of the equilibrium quotient k_P or phosphorus partition L_P between CaO-based slags and iron-based melts collected from the literature have been evaluated. The collected 16 models for predicting the equilibrium quotient k_P can be transferred to predict the phosphorus partition L_P. The results predicted by the 16 models cannot themselves serve as criteria for evaluating k_P or L_P, owing to the various forms or definitions of k_P and L_P. Thus, the measured phosphorus content [pct P] in the hot metal bath at the end point of the dephosphorization pretreatment process is applied as the fixed criterion for evaluating the collected 16 models. The collected 16 models can be described as linear functions of the form y = c0 + c1*x, in which the independent variable x represents the chemical composition of the slags, the intercept c0, including the constant term, describes the temperature effect and other unmentioned or implicit thermodynamic factors, and the slope c1 is regressed from the experimental results of k_P or L_P. Thus, a general approach to developing a thermodynamic model for predicting the equilibrium quotient k_P, the phosphorus partition L_P, or [pct P] in iron-based melts during the dephosphorization process is proposed by revising the constant term in the intercept c0 for the summarized 15 models, Suito's model (M3) excepted. The better models among the collected 16, with an ideal revising possibility or flexibility, have been selected and recommended. Compared with the results predicted by the revised 15 models and Suito's model (M3), the developed IMCT-L_P model coupled with the dephosphorization mechanism proposed by the present authors can be applied to accurately predict the phosphorus partition L_P, with the lowest mean deviation of log L_P, 2.33, as
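    The intercept-revision idea (keep a literature model's slope c1, re-fit only the constant term in c0 against the plant's end-point measurements) can be sketched with synthetic numbers; the data below are invented for illustration, not the station's measurements.

    ```python
    import numpy as np

    # Hypothetical dephosphorization data: a slag-composition variable x and
    # measured log L_P at the end point of the pretreatment process.
    x     = np.array([2.0, 2.4, 2.8, 3.2, 3.6])   # composition variable
    logLP = np.array([2.1, 2.6, 3.0, 3.5, 3.9])   # measured log L_P

    # A literature model y = c0 + c1*x with a fixed slope but a constant
    # term calibrated on other data systematically misses the plant data:
    c1 = 0.95
    c0_literature = 0.6
    pred_old = c0_literature + c1 * x

    # Revise only the constant term in c0 so the model matches the new
    # end-point measurements on average, leaving the slope untouched:
    c0_revised = np.mean(logLP - c1 * x)
    pred_new = c0_revised + c1 * x

    dev_old = np.mean(np.abs(logLP - pred_old))   # mean deviation before
    dev_new = np.mean(np.abs(logLP - pred_new))   # mean deviation after
    ```

    The revision leaves the composition dependence (the regressed slope) intact while absorbing temperature and other implicit thermodynamic effects into the constant term, which mirrors the evaluation strategy described above.
    
    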

  19. Modeling socio-cultural processes in network-centric environments

    NASA Astrophysics Data System (ADS)

    Santos, Eunice E.; Santos, Eugene, Jr.; Korah, John; George, Riya; Gu, Qi; Kim, Keumjoo; Li, Deqing; Russell, Jacob; Subramanian, Suresh

    2012-05-01

    The major focus in the field of modeling & simulation for network-centric environments has been on the physical layer, with simplifications made for the human-in-the-loop. However, the human element has a significant impact on the capabilities of network-centric systems. Taking into account the socio-behavioral aspects of processes such as team building and group decision-making is critical to realistically modeling and analyzing system performance. Modeling socio-cultural processes is a challenge because of the complexity of the networks, dynamism in the physical and social layers, feedback loops, and uncertainty in the modeling data. We propose an overarching framework to represent, model, and analyze various socio-cultural processes within network-centric environments. The key innovation in our methodology is to simultaneously model the dynamism in both the physical and social layers while providing functional mappings between them. We represent socio-cultural information such as friendships, professional relationships, and temperament by leveraging the Culturally Infused Social Network (CISN) framework. The notion of intent is used to relate the underlying socio-cultural factors to observed behavior. We will model intent using Bayesian Knowledge Bases (BKBs), a probabilistic reasoning network that can represent incomplete and uncertain socio-cultural information. We will leverage previous work on a network performance modeling framework called Network-Centric Operations Performance and Prediction (N-COPP) to incorporate dynamism in various aspects of the physical layer such as node mobility and transmission parameters. We validate our framework by simulating a suitable scenario, incorporating relevant factors, and providing analyses of the results.
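    The intent-inference step, relating an unobserved socio-cultural factor to observed behavior, can be illustrated in a much-simplified form by a single Bayes update. This sketch uses hypothetical probabilities and a single binary factor; it is not the BKB formalism itself, which handles incomplete and cyclic knowledge that a plain Bayes rule cannot.

```python
# Hypothetical prior: probability an actor holds a "collectivist" disposition,
# and the chance of observing cooperative behavior under each disposition
p_collectivist = 0.6
p_cooperate_given_coll = 0.8
p_cooperate_given_not = 0.3

# Marginal probability of observing cooperative behavior
p_cooperate = (p_cooperate_given_coll * p_collectivist
               + p_cooperate_given_not * (1 - p_collectivist))

# Bayes' rule: update belief in the cultural factor after observing cooperation
p_coll_given_coop = p_cooperate_given_coll * p_collectivist / p_cooperate
print(round(p_coll_given_coop, 3))
```

    Even this toy update shows the direction of the framework's reasoning: observed behavior raises or lowers the posterior weight on the socio-cultural factors presumed to drive intent.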

  20. Molecular modeling of the microstructure evolution during carbon fiber processing

    NASA Astrophysics Data System (ADS)

    Desai, Saaketh; Li, Chunyu; Shen, Tongtong; Strachan, Alejandro

    2017-12-01

    The rational design of carbon fibers with desired properties requires quantitative relationships between the processing conditions, microstructure, and resulting properties. We developed a molecular model that combines kinetic Monte Carlo and molecular dynamics techniques to predict the microstructure evolution during the processes of carbonization and graphitization of polyacrylonitrile (PAN)-based carbon fibers. The model accurately predicts the cross-sectional microstructure of the fibers with the molecular structure of the stabilized PAN fibers and physics-based chemical reaction rates as the only inputs. The resulting structures exhibit key features observed in electron microscopy studies such as curved graphitic sheets and hairpin structures. In addition, computed X-ray diffraction patterns are in good agreement with experiments. We predict the transverse moduli of the resulting fibers between 1 GPa and 5 GPa, in good agreement with experimental results for high-modulus fibers and slightly lower than those of high-strength fibers. The transverse modulus is governed by sliding between graphitic sheets, and the relatively low value for the predicted microstructures can be attributed to their perfect longitudinal texture. Finally, the simulations provide insight into the relationships between chemical kinetics and the final microstructure; we observe that high reaction rates result in porous structures with lower moduli.
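    The kinetic Monte Carlo component of such a model advances time by sampling stochastic reaction events in proportion to their rates. A generic Gillespie-style loop, with two hypothetical reaction channels and assumed rate constants standing in for the physics-based rates used in the actual model, might look like this:

```python
import math
import random

random.seed(42)

# Two hypothetical reaction channels during carbonization, e.g. ring
# cross-linking and gas elimination, with assumed rate constants
rates = {"crosslink": 1.0, "eliminate_gas": 0.4}
counts = {"crosslink": 0, "eliminate_gas": 0}

t, t_end = 0.0, 50.0
while True:
    total = sum(rates.values())
    # Waiting time to the next event is exponential with mean 1/total
    t += -math.log(random.random()) / total
    if t > t_end:
        break
    # Select which reaction fires, with probability proportional to its rate
    r = random.random() * total
    for name, k in rates.items():
        if r < k:
            counts[name] += 1
            break
        r -= k

print(counts)
```

    In a full simulation each event would also modify an atomistic configuration that is then relaxed with molecular dynamics; the sketch shows only the event-selection logic, which is what links the input reaction rates to the frequency of each structural change.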