Sample records for submodel system messy

  1. A new radiation infrastructure for the Modular Earth Submodel System (MESSy, based on version 2.51)

    NASA Astrophysics Data System (ADS)

    Dietmüller, Simone; Jöckel, Patrick; Tost, Holger; Kunze, Markus; Gellhorn, Catrin; Brinkop, Sabine; Frömming, Christine; Ponater, Michael; Steil, Benedikt; Lauer, Axel; Hendricks, Johannes

    2016-06-01

    The Modular Earth Submodel System (MESSy) provides an interface to couple submodels to a base model via a highly flexible data management facility (Jöckel et al., 2010). In the present paper we introduce the four new radiation-related submodels RAD, AEROPT, CLOUDOPT, and ORBIT. The submodel RAD (including the shortwave radiation scheme RAD_FUBRAD) simulates the radiative transfer, the submodel AEROPT calculates the aerosol optical properties, the submodel CLOUDOPT calculates the cloud optical properties, and the submodel ORBIT is responsible for Earth orbit calculations. These submodels are coupled via the standard MESSy infrastructure and are largely based on the original radiation scheme of the general circulation model ECHAM5, but expanded with additional features. These features comprise, among others, user-friendly and flexibly controllable (by namelists) online radiative forcing calculations by multiple diagnostic calls of the radiation routines. With this, it is now possible to calculate the radiative forcing (instantaneous as well as stratosphere-adjusted) of various greenhouse gases simultaneously in a single simulation, as well as the radiative forcing of cloud perturbations. Examples of online radiative forcing calculations with the ECHAM/MESSy Atmospheric Chemistry (EMAC) model are presented.
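
    The mechanism behind such multiple diagnostic calls can be illustrated with a minimal sketch. This is illustrative Python, not MESSy Fortran; the function names and the simplified logarithmic CO2 forcing fit (Myhre et al., 1998) are assumptions made for the example:

    ```python
    import math

    # Toy stand-in for a radiative-transfer call: returns a net downward
    # flux at the tropopause in W m-2 (simplified CO2 forcing fit).
    def radiation(co2_ppm):
        return 240.0 + 5.35 * math.log(co2_ppm / 278.0)

    def radiation_step(co2_now=400.0, co2_ref=278.0):
        flux = radiation(co2_now)        # prognostic call: feeds back into the model
        flux_ref = radiation(co2_ref)    # diagnostic call: recorded only, no feedback
        rf_instantaneous = flux - flux_ref
        return flux, rf_instantaneous    # second value: instantaneous radiative forcing

    print(radiation_step())              # approximately (241.95, 1.95)
    ```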

  2. Technical Note: The Modular Earth Submodel System (MESSy) - a new approach towards Earth System Modeling

    NASA Astrophysics Data System (ADS)

    Jöckel, P.; Sander, R.; Kerkweg, A.; Tost, H.; Lelieveld, J.

    2005-02-01

    The development of a comprehensive Earth System Model (ESM) to study the interactions between chemical, physical, and biological processes requires coupling of the different domains (land, ocean, atmosphere, ...). One strategy is to link existing domain-specific models with a universal coupler, i.e. an independent standalone program organizing the communication between other programs. In many cases, however, a much simpler approach is more feasible. We have developed the Modular Earth Submodel System (MESSy). It comprises (1) a modular interface structure to connect submodels to a base model, (2) an extendable set of such submodels for miscellaneous processes, and (3) a coding standard. MESSy is therefore not a coupler in the classical sense, but exchanges data between a base model and several submodels within one comprehensive executable. The internal complexity of the submodels is controllable in a transparent and user-friendly way. This provides remarkable new possibilities to study feedback mechanisms (by two-way coupling). Note that the MESSy and the coupler approach can be combined. For instance, an atmospheric model implemented according to the MESSy standard could easily be coupled to an ocean model by means of an external coupler. The vision is to ultimately form a comprehensive ESM which includes a large set of submodels, and a base model which contains only a central clock and runtime control. This can be reached stepwise, since each process can be included independently. Starting from an existing model, process submodels can be reimplemented according to the MESSy standard. This procedure guarantees the availability of a state-of-the-art model for scientific applications at any time of the development. In principle, MESSy can be implemented into any kind of model, either global or regional. So far, the MESSy concept has been applied to the general circulation model ECHAM5 and a number of process box models.
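
    The described architecture lends itself to a compact schematic. The sketch below is Python with hypothetical names (the real interface is a Fortran95 coding standard): a thin base model supplies the clock and runtime control, and independently switchable process submodels exchange data through one shared state:

    ```python
    # Schematic sketch of the MESSy idea (hypothetical API, not MESSy itself).
    class Submodel:
        name = "base"
        enabled = True                    # each process can be switched on/off
        def step(self, state, dt):
            raise NotImplementedError

    class Advection(Submodel):
        name = "advection"
        def step(self, state, dt):
            state["q"] *= 0.999           # placeholder for the real process operator

    class Chemistry(Submodel):
        name = "chemistry"
        def step(self, state, dt):
            state["q"] -= 1e-4 * dt * state["q"]

    def run(state, submodels, dt, nsteps):
        for _ in range(nsteps):           # central clock and runtime control
            for sm in submodels:
                if sm.enabled:
                    sm.step(state, dt)    # data exchange via the shared state
        return state

    print(run({"q": 1.0}, [Advection(), Chemistry()], dt=600.0, nsteps=3))
    ```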

  3. The 1-way on-line coupled model system MECO(n) - Part 4: Chemical evaluation (based on MESSy v2.52)

    NASA Astrophysics Data System (ADS)

    Mertens, Mariano; Kerkweg, Astrid; Jöckel, Patrick; Tost, Holger; Hofmann, Christiane

    2016-10-01

    For the first time, a simulation incorporating tropospheric and stratospheric chemistry using the newly developed MECO(n) model system is performed. MECO(n) is short for MESSy-fied ECHAM and COSMO models nested n times. It features an online coupling of the COSMO-CLM model, equipped with the Modular Earth Submodel System (MESSy) interface (called COSMO/MESSy), with the global atmospheric chemistry model ECHAM5/MESSy for Atmospheric Chemistry (EMAC). This online coupling allows a consistent model chain with respect to chemical and meteorological boundary conditions from the global scale down to the regional kilometre scale. A MECO(2) simulation incorporating one regional instance over Europe with 50 km resolution and one instance over Germany with 12 km resolution is conducted for the evaluation of MECO(n) with respect to tropospheric gas-phase chemistry. The main goal of this evaluation is to ensure that the chemistry-related MESSy submodels and the online coupling with respect to the chemistry are correctly implemented. This evaluation is a prerequisite for the further usage of MECO(n) in atmospheric chemistry-related studies. Results of EMAC and the two COSMO/MESSy instances are compared with satellite, ground-based and aircraft in situ observations, focusing on ozone, carbon monoxide and nitrogen dioxide. Further, the methane lifetimes in EMAC and the two COSMO/MESSy instances are analysed in view of the tropospheric oxidation capacity. From this evaluation, we conclude that the chemistry-related submodels and the online coupling with respect to the chemistry are correctly implemented. In comparison with observations, both EMAC and COSMO/MESSy show strengths and weaknesses. Especially in comparison to aircraft in situ observations, COSMO/MESSy shows very promising results. However, the amplitude of the diurnal cycle of ground-level ozone measurements is underestimated. Most of the differences between COSMO/MESSy and EMAC can be attributed to differences in the dynamics of both models, which are subject to further model developments.

  4. The generic MESSy submodel TENDENCY (v1.0) for process-based analyses in Earth system models

    NASA Astrophysics Data System (ADS)

    Eichinger, R.; Jöckel, P.

    2014-07-01

    The tendencies of prognostic variables in Earth system models are usually only accessible, e.g. for output, as a sum over all physical, dynamical and chemical processes at the end of one time integration step. Information about the contribution of individual processes to the total tendency is lost, if no special precautions are implemented. Knowledge of the individual contributions, however, can be important for tracking down specific mechanisms in the model system. We present the new MESSy (Modular Earth Submodel System) infrastructure submodel TENDENCY and use it exemplarily within the EMAC (ECHAM/MESSy Atmospheric Chemistry) model to trace process-based tendencies of prognostic variables. The main idea is the outsourcing of the tendency accounting for the state variables from the process operators (submodels) to the TENDENCY submodel itself. In this way, a record of the tendencies of all process-prognostic variable pairs can be stored. The selection of these pairs can be specified by the user, tailor-made for the desired application, in order to minimise memory requirements. Moreover, a standard interface allows the access to the individual process tendencies by other submodels, e.g. for on-line diagnostics or for additional parameterisations, which depend on individual process tendencies. An optional closure test assures the correct treatment of tendency accounting in all submodels and thus serves to reduce the model's susceptibility. TENDENCY is independent of the time integration scheme and therefore the concept is applicable to other model systems as well. Test simulations with TENDENCY show an increase of computing time for the EMAC model (in a setup without atmospheric chemistry) of 1.8 ± 1% due to the additional subroutine calls when using TENDENCY. Exemplary results reveal the dissolving mechanisms of the stratospheric tape recorder signal in height over time. The separation of the tendency of the specific humidity into the respective processes (large-scale clouds, convective clouds, large-scale advection, vertical diffusion and methane oxidation) shows that the upward propagating water vapour signal dissolves mainly because of the chemical and the advective contributions. The TENDENCY submodel is part of version 2.42 or later of MESSy.
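
    The accounting idea can be sketched compactly (Python with a hypothetical API, not the MESSy Fortran implementation): each process operator registers its tendency per variable with a central bookkeeper, and a closure test checks that the registered tendencies sum to the total:

    ```python
    import numpy as np

    # Sketch of process-based tendency bookkeeping with a closure test.
    class TendencyDiag:
        def __init__(self, variables):
            self.record = {v: {} for v in variables}   # variable -> process -> tendency

        def add(self, process, var, tendency):
            self.record[var][process] = tendency       # accounting outsourced here

        def closure_test(self, var, total_tendency, tol=1e-15):
            summed = sum(self.record[var].values())
            assert np.isclose(summed, total_tendency, atol=tol), \
                f"tendency budget for {var} does not close"

    diag = TendencyDiag(["q"])
    t_adv, t_conv, t_chem = 1.2e-6, -4.0e-7, 1.0e-7    # example process tendencies
    for name, t in [("advection", t_adv), ("convection", t_conv), ("chemistry", t_chem)]:
        diag.add(name, "q", t)
    diag.closure_test("q", total_tendency=t_adv + t_conv + t_chem)   # passes
    ```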

  5. The generic MESSy submodel TENDENCY (v1.0) for process-based analyses in Earth System Models

    NASA Astrophysics Data System (ADS)

    Eichinger, R.; Jöckel, P.

    2014-04-01

    The tendencies of prognostic variables in Earth System Models are usually only accessible, e.g., for output, as a sum over all physical, dynamical and chemical processes at the end of one time integration step. Information about the contribution of individual processes to the total tendency is lost, if no special precautions are implemented. Knowledge of the individual contributions, however, can be important for tracking down specific mechanisms in the model system. We present the new MESSy (Modular Earth Submodel System) infrastructure submodel TENDENCY and use it exemplarily within the EMAC (ECHAM/MESSy Atmospheric Chemistry) model to trace process-based tendencies of prognostic variables. The main idea is the outsourcing of the tendency accounting for the state variables from the process operators (submodels) to the TENDENCY submodel itself. In this way, a record of the tendencies of all process-prognostic variable pairs can be stored. The selection of these pairs can be specified by the user, tailor-made for the desired application, in order to minimise memory requirements. Moreover, a standard interface allows the access to the individual process tendencies by other submodels, e.g., for on-line diagnostics or for additional parameterisations, which depend on individual process tendencies. An optional closure test assures the correct treatment of tendency accounting in all submodels and thus serves to reduce the model's susceptibility. TENDENCY is independent of the time integration scheme and therefore applicable to other model systems as well. Test simulations with TENDENCY show an increase of computing time for the EMAC model (in a setup without atmospheric chemistry) of 1.8 ± 1% due to the additional subroutine calls when using TENDENCY. Exemplary results reveal the dissolving mechanisms of the stratospheric tape recorder signal in height over time. The separation of the tendency of the specific humidity into the respective processes (large-scale clouds, convective clouds, large-scale advection, vertical diffusion and methane oxidation) shows that the upward propagating water vapour signal dissolves mainly because of the chemical and the advective contributions. The TENDENCY submodel is part of version 2.42 or later of MESSy.

  6. Technical Note: Coupling of chemical processes with the Modular Earth Submodel System (MESSy) submodel TRACER

    NASA Astrophysics Data System (ADS)

    Jöckel, P.; Kerkweg, A.; Buchholz, J.; Tost, H.; Sander, R.; Pozzer, A.

    2007-11-01

    The implementation of processes related to chemistry into Earth System Models and their coupling within such systems requires the consistent description of the chemical species involved. We provide a tool (written in Fortran95) to structure and manage information about constituents, hereinafter referred to as tracers, namely the Modular Earth Submodel System (MESSy) generic (i.e., infrastructure) submodel TRACER. With TRACER it is possible to define a multitude of tracer sets, depending on the spatio-temporal representation (i.e., the grid structure) of the model. The required information about a specific chemical species is split into the static meta-information about the characteristics of the species, and its (generally in time and space variable) abundance in the corresponding representation. TRACER moreover includes two submodels. One is TRACER_FAMILY, an implementation of the tracer family concept. It distinguishes between two types: type-1 families are usually applied to handle strongly related tracers (e.g., fast equilibrating species) for a specific process (e.g., advection). In contrast to this, type-2 families are applied for tagging techniques, in which specific species are artificially decomposed and associated with additional information, in order to conserve the linear relationship between the family and its members. The second submodel is TRACER_PDEF, which corrects and budgets numerical negative overshoots that arise in many process implementations due to numerical limitations (limited precision, rounding errors). The submodel therefore guarantees the positive definiteness of the tracers and stabilises the integration scheme. As a by-product, it further provides a global tracer mass diagnostic. Last but not least, we present the submodel PTRAC for the definition of prognostic tracers via a Fortran95 namelist. TRACER with its submodels and PTRAC can readily be applied to a variety of models without further requirements. The code and documentation are included in the electronic supplement.

  7. The on-line coupled atmospheric chemistry model system MECO(n) - Part 5: Expanding the Multi-Model-Driver (MMD v2.0) for 2-way data exchange including data interpolation via GRID (v1.0)

    NASA Astrophysics Data System (ADS)

    Kerkweg, Astrid; Hofmann, Christiane; Jöckel, Patrick; Mertens, Mariano; Pante, Gregor

    2018-03-01

    As part of the Modular Earth Submodel System (MESSy), the Multi-Model-Driver (MMD v1.0) was developed to couple online the regional Consortium for Small-scale Modeling (COSMO) model into a driving model, which can be either the regional COSMO model or the global European Centre Hamburg general circulation model (ECHAM) (see Part 2 of the model documentation). The coupled system is called MECO(n), i.e., MESSy-fied ECHAM and COSMO models nested n times. In this article, which is part of the model documentation of the MECO(n) system, the second generation of MMD is introduced. MMD comprises the message-passing infrastructure required for the parallel execution (multiple programme multiple data, MPMD) of different models and the communication of the individual model instances, i.e. between the driving and the driven models. Initially, the MMD library was developed for a one-way coupling between the global chemistry-climate ECHAM/MESSy atmospheric chemistry (EMAC) model and an arbitrary number of (optionally cascaded) instances of the regional chemistry-climate model COSMO/MESSy. Thus, MMD (v1.0) provided only functions for unidirectional data transfer, i.e. from the larger-scale to the smaller-scale models. Soon, extended applications requiring data transfer from the small-scale model back to the larger-scale model became of interest. For instance, the original fields of the larger-scale model can directly be compared to the upscaled small-scale fields to analyse the improvements gained through the small-scale calculations, after the results are upscaled. Moreover, the fields originating from the two different models might be fed into the same diagnostic tool, e.g. the online calculation of the radiative forcing calculated consistently with the same radiation scheme. Last but not least, enabling the two-way data transfer between two models is the first important step on the way to a fully dynamical and chemical two-way coupling of the various model instances. In MMD (v1.0), interpolation between the base model grids is performed via the COSMO preprocessing tool INT2LM, which was implemented into the MMD submodel for online interpolation, specifically for mapping onto the rotated COSMO grid. A more flexible algorithm is required for the backward mapping. Thus, MMD (v2.0) uses the new MESSy submodel GRID for the generalised definition of arbitrary grids and for the transformation of data between them. In this article, we explain the basics of the MMD expansion and the newly developed generic MESSy submodel GRID (v1.0) and show some examples of the abovementioned applications.

  8. Technical Note: Coupling of chemical processes with the Modular Earth Submodel System (MESSy) submodel TRACER

    NASA Astrophysics Data System (ADS)

    Jöckel, P.; Kerkweg, A.; Buchholz-Dietsch, J.; Tost, H.; Sander, R.; Pozzer, A.

    2008-03-01

    The implementation of processes related to chemistry into Earth System Models and their coupling within such systems requires the consistent description of the chemical species involved. We provide a tool (written in Fortran95) to structure and manage information about constituents, hereinafter referred to as tracers, namely the Modular Earth Submodel System (MESSy) generic (i.e., infrastructure) submodel TRACER. With TRACER it is possible to define a multitude of tracer sets, depending on the spatio-temporal representation (i.e., the grid structure) of the model. The required information about a specific chemical species is split into the static meta-information about the characteristics of the species, and its (generally in time and space variable) abundance in the corresponding representation. TRACER moreover includes two submodels. One is TRACER_FAMILY, an implementation of the tracer family concept. It distinguishes between two types: type-1 families are usually applied to handle strongly related tracers (e.g., fast equilibrating species) for a specific process (e.g., advection). In contrast to this, type-2 families are applied for tagging techniques. Tagging means the artificial decomposition of one or more species into parts, which are additionally labelled (e.g., by the region of their primary emission) and then processed as the species itself. The type-2 family concept is designed to conserve the linear relationship between the family and its members. The second submodel is TRACER_PDEF, which corrects and budgets numerical negative overshoots that arise in many process implementations due to numerical limitations (e.g., rounding errors). The submodel therefore guarantees the positive definiteness of the tracers and stabilises the integration scheme. As a by-product, it further provides a global tracer mass diagnostic. Last but not least, we present the submodel PTRAC, which allows the definition of tracers via a Fortran95 namelist, as a complement to the standard tracer definition by application of the TRACER interface routines in the code. TRACER with its submodels and PTRAC can readily be applied to a variety of models without further requirements. The code and documentation are included in the electronic supplement.
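
    The TRACER_PDEF idea translates into a short sketch (Python with invented names and values; the actual submodel is Fortran95 and its correction algorithm differs in detail): negative overshoots are clipped, and the mass added by the correction is budgeted, which yields the global mass diagnostic as a by-product:

    ```python
    import numpy as np

    def enforce_positive_definite(tracer, air_mass):
        """Clip negative mixing ratios and budget the correction."""
        corrected = np.maximum(tracer, 0.0)                    # remove negative overshoots
        added_mass = np.sum((corrected - tracer) * air_mass)   # kg added by the fix
        total_mass = np.sum(corrected * air_mass)              # global tracer mass diagnostic
        return corrected, added_mass, total_mass

    tracer = np.array([1.0e-9, -2.0e-12, 5.0e-10])   # kg/kg, one negative overshoot
    air_mass = np.array([1.0e15, 1.2e15, 0.9e15])    # kg of air per grid box
    print(enforce_positive_definite(tracer, air_mass))
    ```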

  9. The MESSy Aerosol Submodel MADE3 (v2.0b): Description and a Box Model Test

    NASA Technical Reports Server (NTRS)

    Kaiser, J. C.; Hendricks, J.; Righi, M.; Riemer, N.; Zaveri, R. A.; Metzger, S.; Aquila, Valentina

    2014-01-01

    We introduce MADE3 (Modal Aerosol Dynamics model for Europe, adapted for global applications, 3rd generation), an aerosol dynamics submodel for application within the MESSy framework (Modular Earth Submodel System). MADE3 builds on the predecessor aerosol submodels MADE and MADE-in. Its main new features are the explicit representation of coarse particle interactions both with other particles and with condensable gases, and the inclusion of hydrochloric acid (HCl) / chloride (Cl) partitioning between the gas and condensed phases. The aerosol size distribution is represented in the new submodel as a superposition of nine lognormal modes: one for fully soluble particles, one for insoluble particles, and one for mixed particles in each of three size ranges (Aitken, accumulation, and coarse mode size ranges). In order to assess the performance of MADE3, we compare it to its predecessor MADE and to the much more detailed particle-resolved aerosol model PartMC-MOSAIC in a box model simulation of an idealized marine boundary layer test case. MADE3 and MADE results are very similar, except in the coarse mode, where the aerosol is dominated by sea spray particles. Cl is reduced in MADE3 with respect to MADE due to the HCl / Cl partitioning, which leads to Cl removal from the sea spray aerosol in our test case. Additionally, the aerosol nitrate concentration is higher in MADE3 due to the condensation of nitric acid on coarse particles. MADE3 and PartMC-MOSAIC show substantial differences in the fine particle size distributions (sizes below about 2 micrometers) that could be relevant when simulating climate effects on a global scale. Nevertheless, the agreement between MADE3 and PartMC-MOSAIC is very good when it comes to the coarse particle size distribution, and also in terms of aerosol composition. Considering these results and the well-established ability of MADE to reproduce observed aerosol loadings and composition, MADE3 seems suitable for application within a global model.
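
    The nine-mode representation follows the standard modal formalism; in our notation (not quoted from the paper), the number size distribution is the superposition

    ```latex
    % dN/dln(d): superposition of nine lognormal modes k, each with total
    % number N_k, median diameter d_k and geometric standard deviation sigma_k.
    \[
      \frac{dN}{d\ln d} \;=\; \sum_{k=1}^{9}
      \frac{N_k}{\sqrt{2\pi}\,\ln\sigma_k}\,
      \exp\!\left[-\frac{(\ln d-\ln d_k)^2}{2\ln^2\sigma_k}\right].
    \]
    ```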

  10. Earth System Chemistry integrated Modelling (ESCiMo) with the Modular Earth Submodel System (MESSy) version 2.51

    NASA Astrophysics Data System (ADS)

    Jöckel, Patrick; Tost, Holger; Pozzer, Andrea; Kunze, Markus; Kirner, Oliver; Brenninkmeijer, Carl A. M.; Brinkop, Sabine; Cai, Duy S.; Dyroff, Christoph; Eckstein, Johannes; Frank, Franziska; Garny, Hella; Gottschaldt, Klaus-Dirk; Graf, Phoebe; Grewe, Volker; Kerkweg, Astrid; Kern, Bastian; Matthes, Sigrun; Mertens, Mariano; Meul, Stefanie; Neumaier, Marco; Nützel, Matthias; Oberländer-Hayn, Sophie; Ruhnke, Roland; Runde, Theresa; Sander, Rolf; Scharffe, Dieter; Zahn, Andreas

    2016-03-01

    Three types of reference simulations, as recommended by the Chemistry-Climate Model Initiative (CCMI), have been performed with version 2.51 of the European Centre for Medium-Range Weather Forecasts - Hamburg (ECHAM)/Modular Earth Submodel System (MESSy) Atmospheric Chemistry (EMAC) model: hindcast simulations (1950-2011), hindcast simulations with specified dynamics (1979-2013), i.e. nudged towards ERA-Interim reanalysis data, and combined hindcast and projection simulations (1950-2100). The manuscript summarizes the updates of the model system and details the different model set-ups used, including the on-line calculated diagnostics. Simulations have been performed with two different nudging set-ups, with and without interactive tropospheric aerosol, and with and without a coupled ocean model. Two different vertical resolutions have been applied. The on-line calculated sources and sinks of reactive species are quantified and a first evaluation of the simulation results from a global perspective is provided as a quality check of the data. The focus is on the intercomparison of the different model set-ups. The simulation data will become publicly available via CCMI and the Climate and Environmental Retrieval and Archive (CERA) database of the German Climate Computing Centre (DKRZ). This manuscript is intended to serve as an extensive reference for further analyses of the Earth System Chemistry integrated Modelling (ESCiMo) simulations.

  11. Contribution of emissions to concentrations: the TAGGING 1.0 submodel based on the Modular Earth Submodel System (MESSy 2.52)

    NASA Astrophysics Data System (ADS)

    Grewe, Volker; Tsati, Eleni; Mertens, Mariano; Frömming, Christine; Jöckel, Patrick

    2017-07-01

    Questions such as "What is the contribution of road traffic emissions to climate change?" or "What is the impact of shipping emissions on local air quality?" require a quantification of the contribution of specific emission sectors to the concentration of radiatively active species and air-quality-related species, respectively. Here, we present a diagnostics package, implemented in the Modular Earth Submodel System (MESSy), which keeps track of the contribution of source categories (mainly emission sectors) to various concentrations. The diagnostics package is implemented as a submodel (TAGGING) of EMAC (European Centre for Medium-Range Weather Forecasts - Hamburg (ECHAM)/MESSy Atmospheric Chemistry). It determines the contributions of 10 different source categories to the concentration of ozone, nitrogen oxides, peroxyacetyl nitrate, carbon monoxide, non-methane hydrocarbons, hydroxyl, and hydroperoxyl radicals (= tagged tracers). The source categories are mainly emission sectors and some other sources for completeness. As emission sectors, road traffic, shipping, air traffic, anthropogenic non-traffic, biogenic, biomass burning, and lightning are considered. The submodel obtains information on the chemical reaction rates, online emissions, such as lightning, and wash-out rates. It then solves differential equations for the contribution of a source category to each of the seven tracers. This diagnostics package does not feed back to any other part of the model. For the first time, it takes into account chemically competing effects: for example, the competition between NOx, CO, and non-methane hydrocarbons (NMHCs) in the production and destruction of ozone. We show that the results are in line with results from other tagging schemes and provide plausibility checks for concentrations of trace gases, such as OH and HO2, which have not previously been tagged. The budgets of the tagged tracers, i.e. the contribution from individual source categories (mainly emission sectors) to, e.g., ozone, are only marginally sensitive to changes in model resolution, though the level of detail increases. A reduction in road traffic emissions by 5 % shows that the road traffic contribution to global tropospheric ozone is reduced by only 4 %, because the net ozone productivity increases. This 4 % reduction in road traffic tropospheric ozone corresponds to a reduction in total tropospheric ozone by ≈ 0.3 %, which is partly compensated by an increase in tropospheric ozone from other sources by 0.1 %, resulting in a reduction in total tropospheric ozone of ≈ 0.2 %. This compensating effect compares well with previous findings. The computational costs of the TAGGING submodel are low with respect to computing time, but a large number of additional tracers are required. The advantage of the tagging scheme is that, in one simulation and at every time step and grid point, information is available on the contribution of different emission sectors to the ozone budget, which can then be further used in upcoming studies to calculate the respective radiative forcing simultaneously.
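
    Conceptually, the tagging bookkeeping works as in the following sketch (Python; categories, rates and the simple proportional-loss closure are invented for illustration): production is attributed to its source category, loss removes from every category in proportion to its current share, and the contributions therefore always sum to the total concentration:

    ```python
    import numpy as np

    def tagging_step(contrib, production, total_loss, dt):
        """Advance per-category contributions by one time step."""
        total = contrib.sum()
        shares = contrib / total if total > 0 else np.zeros_like(contrib)
        return contrib + dt * (production - total_loss * shares)

    categories = ["road traffic", "shipping", "lightning"]
    contrib = np.array([10.0, 5.0, 15.0])     # e.g. ozone attributed per category
    production = np.array([0.8, 0.3, 0.9])    # category-resolved production rates
    loss = 1.5                                # total chemical loss rate
    contrib = tagging_step(contrib, production, loss, dt=0.5)
    print(dict(zip(categories, contrib.round(3))), "sum:", contrib.sum())
    ```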

  12. A diagnostic interface for the ICOsahedral Non-hydrostatic (ICON) modelling framework based on the Modular Earth Submodel System (MESSy v2.50)

    NASA Astrophysics Data System (ADS)

    Kern, Bastian; Jöckel, Patrick

    2016-10-01

    Numerical climate and weather models have advanced to finer scales, accompanied by large amounts of output data. The model systems hit the input and output (I/O) bottleneck of modern high-performance computing (HPC) systems. We aim to apply diagnostic methods online during the model simulation instead of applying them as a post-processing step to written output data, to reduce the amount of I/O. To include diagnostic tools into the model system, we implemented a standardised, easy-to-use interface based on the Modular Earth Submodel System (MESSy) into the ICOsahedral Non-hydrostatic (ICON) modelling framework. The integration of the diagnostic interface into the model system is briefly described. Furthermore, we present a prototype implementation of an advanced online diagnostic tool for the aggregation of model data onto a user-defined regular coarse grid. This diagnostic tool will be used to reduce the amount of model output in future simulations. Performance tests of the interface and of two different diagnostic tools show that the interface itself introduces no overhead in the form of additional runtime to the model system. The diagnostic tools, however, have a significant impact on the model system's runtime. This overhead strongly depends on the characteristics and implementation of the diagnostic tool. A diagnostic tool with high inter-process communication introduces large overhead, whereas the additional runtime of a diagnostic tool without inter-process communication is low. We briefly describe our efforts to reduce the additional runtime from the diagnostic tools, and present a brief analysis of memory consumption. Future work will focus on optimisation of the memory footprint and the I/O operations of the diagnostic interface.
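
    The aggregation diagnostic reduces to a scatter-average, as in this sketch (Python/NumPy; the interface is invented, and the unstructured ICON cells are represented only by their coordinates): fine-grid cell values are binned online into a user-defined regular coarse grid, so only the small coarse array needs to be written out:

    ```python
    import numpy as np

    def aggregate(values, lon, lat, nlon, nlat):
        """Mean of (unstructured) cell values per regular coarse-grid box."""
        i = np.clip(((lon + 180.0) / 360.0 * nlon).astype(int), 0, nlon - 1)
        j = np.clip(((lat + 90.0) / 180.0 * nlat).astype(int), 0, nlat - 1)
        s, n = np.zeros((nlat, nlon)), np.zeros((nlat, nlon))
        np.add.at(s, (j, i), values)          # sum of members per coarse box
        np.add.at(n, (j, i), 1.0)             # member count per coarse box
        return np.where(n > 0, s / np.maximum(n, 1.0), np.nan)

    rng = np.random.default_rng(0)
    lon, lat = rng.uniform(-180, 180, 10000), rng.uniform(-90, 90, 10000)
    print(aggregate(rng.normal(280.0, 5.0, 10000), lon, lat, 8, 4).shape)  # (4, 8)
    ```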

  13. Revised mineral dust emissions in the atmospheric chemistry-climate model EMAC (MESSy 2.52 DU_Astitha1 KKDU2017 patch)

    NASA Astrophysics Data System (ADS)

    Klingmüller, Klaus; Metzger, Swen; Abdelkader, Mohamed; Karydis, Vlassis A.; Stenchikov, Georgiy L.; Pozzer, Andrea; Lelieveld, Jos

    2018-03-01

    To improve the aeolian dust budget calculations with the global ECHAM/MESSy atmospheric chemistry-climate model (EMAC), which combines the Modular Earth Submodel System (MESSy) with the ECMWF/Hamburg (ECHAM) climate model developed at the Max Planck Institute for Meteorology in Hamburg based on a weather prediction model of the European Centre for Medium-Range Weather Forecasts (ECMWF), we have implemented new input data and updates of the emission scheme. The data set comprises land cover classification, vegetation, clay fraction and topography. It is based on up-to-date observations, which are crucial to account for the rapid changes of deserts and semi-arid regions in recent decades. The new Moderate Resolution Imaging Spectroradiometer (MODIS)-based land cover and vegetation data are time dependent, and the effect of long-term trends and variability of the relevant parameters is therefore considered by the emission scheme. All input data have a spatial resolution of at least 0.1° compared to 1° in the previous version, equipping the model for high-resolution simulations. We validate the updates by comparing the aerosol optical depth (AOD) at 550 nm wavelength from a 1-year simulation at T106 (about 1.1°) resolution with Aerosol Robotic Network (AERONET) and MODIS observations, the 10 µm dust AOD (DAOD) with Infrared Atmospheric Sounding Interferometer (IASI) retrievals, and dust concentration and deposition results with observations from the Aerosol Comparisons between Observations and Models (AeroCom) dust benchmark data set. The update significantly improves agreement with the observations and is therefore recommended to be used in future simulations.

  14. An "island" in the stratosphere - on the enhanced annual variation of water vapour in the middle and upper stratosphere in the southern tropics and subtropics

    NASA Astrophysics Data System (ADS)

    Lossow, Stefan; Garny, Hella; Jöckel, Patrick

    2017-09-01

    The amplitude of the annual variation in water vapour exhibits a distinct isolated maximum in the middle and upper stratosphere in the southern tropics and subtropics, peaking typically around 15° S in latitude and close to 3 hPa (approximately 40.5 km) in altitude. This enhanced annual variation is primarily related to the Brewer-Dobson circulation and hence also visible in other trace gases. So far this feature has not gained much attention in the literature, and the present work aims to give it more prominence. Using Envisat/MIPAS (Environmental Satellite/Michelson Interferometer for Passive Atmospheric Sounding) observations and ECHAM/MESSy (European Centre for Medium-Range Weather Forecasts Hamburg/Modular Earth Submodel System) Atmospheric Chemistry (EMAC) simulations, we provide a dedicated illustration and a full account of the reasons for this enhanced annual variation.

  15. Earth system modelling on system-level heterogeneous architectures: EMAC (version 2.42) on the Dynamical Exascale Entry Platform (DEEP)

    NASA Astrophysics Data System (ADS)

    Christou, Michalis; Christoudias, Theodoros; Morillo, Julián; Alvarez, Damian; Merx, Hendrik

    2016-09-01

    We examine an alternative approach to heterogeneous cluster computing in the many-core era for Earth system models, using the European Centre for Medium-Range Weather Forecasts Hamburg (ECHAM)/Modular Earth Submodel System (MESSy) Atmospheric Chemistry (EMAC) model as a pilot application on the Dynamical Exascale Entry Platform (DEEP). A set of interconnected autonomous coprocessors, called the Booster, complements a conventional HPC cluster and increases its computing performance, offering extra flexibility to expose multiple levels of parallelism and achieve better scalability. The EMAC model atmospheric chemistry code (Module Efficiently Calculating the Chemistry of the Atmosphere (MECCA)) was taskified with an offload mechanism implemented using OmpSs directives. The model was ported to the MareNostrum 3 supercomputer to allow testing with Intel Xeon Phi accelerators on a production-size machine. The changes proposed in this paper are expected to contribute to the eventual adoption of the Cluster-Booster division and Many Integrated Core (MIC) accelerated architectures in presently available implementations of Earth system models, towards exploiting the potential of a fully Exascale-capable platform.

  16. Impact of major volcanic eruptions on stratospheric water vapour

    NASA Astrophysics Data System (ADS)

    Löffler, Michael; Brinkop, Sabine; Jöckel, Patrick

    2016-05-01

    Volcanic eruptions can have a significant impact on the Earth's weather and climate system. Besides the subsequent tropospheric changes, the stratosphere is also influenced by large eruptions. Here, changes in stratospheric water vapour after the two major volcanic eruptions of El Chichón in Mexico in 1982 and Mount Pinatubo in the Philippines in 1991 are investigated with chemistry-climate model simulations. This study is based on two simulations with specified dynamics of the European Centre for Medium-Range Weather Forecasts Hamburg - Modular Earth Submodel System (ECHAM/MESSy) Atmospheric Chemistry (EMAC) model, performed within the Earth System Chemistry integrated Modelling (ESCiMo) project, of which only one includes the long-wave volcanic forcing through prescribed aerosol optical properties. The results show a significant increase in stratospheric water vapour induced by the eruptions, resulting from increased heating rates and the subsequent changes in stratospheric and tropopause temperatures in the tropics. The tropical vertical advection and the South Asian summer monsoon are identified as sources for the additional water vapour in the stratosphere. Additionally, volcanic influences on tropospheric water vapour and El Niño-Southern Oscillation (ENSO) are evident, if the long-wave forcing is strong enough. Our results are corroborated by additional sensitivity simulations of the Mount Pinatubo period with reduced nudging and reduced volcanic aerosol extinction.

  17. Evaluation of a Mineral Dust Simulation in the Atmospheric-Chemistry General Circulation Model-EMAC

    NASA Astrophysics Data System (ADS)

    Abdel Kader, M.; Astitha, M.; Lelieveld, J.

    2012-04-01

    This study presents an evaluation of the atmospheric mineral dust cycle in the Atmospheric Chemistry General Circulation Model (AC-GCM) using a newly developed dust emission scheme. The dust cycle, as an integral part of the Earth system, plays an important role in the Earth's energy balance in both direct and indirect ways. As an aerosol, it significantly impacts the absorption and scattering of radiation in the atmosphere and can modify the optical properties of clouds and snow/ice surfaces. In addition, dust contributes to a range of physical, chemical and bio-geological processes that interact with the cycles of carbon and water. While our knowledge of the dust cycle, its impacts and its interactions with the other global-scale bio-geochemical cycles has greatly advanced in the last decades, large uncertainties and knowledge gaps still exist. Improving the dust simulation in global models is essential to minimize the uncertainties in model results related to dust. In this study, the results are based on the ECHAM5 Modular Earth Submodel System (MESSy) AC-GCM simulations using T106L31 spectral resolution (about 120 km) with 31 vertical levels. The GMXe aerosol submodel is used to simulate the phase changes of the dust particles between soluble and insoluble modes. Dust emission, transport and deposition (wet and dry) are calculated on-line along with the meteorological parameters in every model time step. A preliminary evaluation of the dust concentration and deposition is presented based on ground observations from various campaigns, as well as an evaluation of the optical properties of dust using AERONET and satellite (MODIS and MISR) observations. Preliminary results show good agreement with observations for dust deposition and optical properties. In addition, the global dust emissions, load, deposition and lifetime are in good agreement with published results. The uncertainties in the dust cycle that contribute to the overall model performance will also be briefly discussed, as they are a subject of future work.

  18. AOD trends during 2001-2010 from observations and model simulations

    NASA Astrophysics Data System (ADS)

    Pozzer, Andrea; de Meij, Alexander; Yoon, Jongmin; Astitha, Marina

    2016-04-01

    The trend of aerosol optical depth (AOD) between 2001 and 2010 is estimated globally and regionally from remotely sensed observations by the MODIS (Moderate Resolution Imaging Spectroradiometer), MISR (Multi-angle Imaging SpectroRadiometer) and SeaWiFS (Sea-viewing Wide Field-of-view Sensor) satellite sensors. The resulting trends have been compared to model results from the EMAC (ECHAM5/MESSy Atmospheric Chemistry) model [1]. Although interannual variability is applied only to anthropogenic and biomass-burning emissions, the model is able to quantitatively reproduce the AOD trends as observed by MODIS, while some discrepancies are found when compared to MISR and SeaWiFS. An additional numerical simulation with the same model was performed, neglecting any temporal change in the emissions, i.e. with no interannual variability for any emission source. It is shown that the decreasing AOD trends over the US and Europe are due to the decrease in (anthropogenic) emissions. On the contrary, over the Sahara Desert and the Middle East region, the meteorological/dynamical changes in the last decade play a major role in driving the AOD trends. Further, over Southeast Asia, meteorology and emission changes are equally important in defining AOD trends [2]. Finally, decomposing the regional AOD trends into individual aerosol components reveals that the soluble components are the most dominant contributors to the total AOD, as their influence on the total AOD is enhanced by the aerosol water content. [1] Jöckel, P., Kerkweg, A., Pozzer, A., Sander, R., Tost, H., Riede, H., Baumgaertner, A., Gromov, S., and Kern, B.: Development cycle 2 of the Modular Earth Submodel System (MESSy2), Geosci. Model Dev., 3, 717-752, doi:10.5194/gmd-3-717-2010, 2010. [2] Pozzer, A., de Meij, A., Yoon, J., Tost, H., Georgoulias, A. K., and Astitha, M.: AOD trends during 2001-2010 from observations and model simulations, Atmos. Chem. Phys., 15, 5521-5535, doi:10.5194/acp-15-5521-2015, 2015.

  19. Modular Analysis of Automobile Exhaust Thermoelectric Power Generation System

    NASA Astrophysics Data System (ADS)

    Deng, Y. D.; Zhang, Y.; Su, C. Q.

    2015-06-01

    In this paper, an automobile exhaust thermoelectric power generation system is packaged into a model with its own operating principles. The inputs are the engine speed and power, and the output is the power generated by the system. The model is divided into two submodels. One is the inlet temperature submodel, and the other is the power generation submodel. An experimental data modeling method is adopted to construct the inlet temperature submodel, and a theoretical modeling method is adopted to construct the power generation submodel. After modeling, simulation is conducted under various engine operating conditions to determine the variation of the power generated by the system. Finally, the model is embedded into a Honda Insight vehicle model to explore the energy-saving effect of the system on the vehicle under Economic Commission for Europe and cyc-constant_60 driving cycles.

  20. Simulation Modeling of Advanced Pilot Training: The Effects of a New Aircraft Family of Systems

    DTIC Science & Technology

    2014-03-01

    baseline requirements disqualifies itself from further consideration. The next phase of competition involves adjusting the total proposed price by... submodels, the previous submodel provided the framework for the following submodel. This copy-and-paste process forced the model builder to inspect each

  21. Bulk measurements of messy chemistries are needed for a theory of the origins of life

    NASA Astrophysics Data System (ADS)

    Guttenberg, Nicholas; Virgo, Nathaniel; Chandru, Kuhan; Scharf, Caleb; Mamajanov, Irena

    2017-11-01

    A feature of many of the chemical systems plausibly involved in the origins of terrestrial life is that they are complex and messy, producing a wide range of compounds via a wide range of mechanisms. However, the fundamental behaviour of such systems is currently not well understood; we do not have the tools to make statistical predictions about such complex chemical networks. This is, in part, due to a lack of quantitative data from which such a theory could be built; specifically, functional measurements of messy chemical systems. Here, we propose that the pantheon of experimental approaches to the origins of life should be expanded to include the study of 'functional measurements': the direct study of bulk properties of chemical systems and their interactions with other compounds, the formation of structures and other behaviours, even in cases where the precise composition and mechanisms are unknown. This article is part of the themed issue 'Reconceptualizing the origins of life'.

  22. A system of recurrent neural networks for modularising, parameterising and dynamic analysis of cell signalling networks.

    PubMed

    Samarasinghe, S; Ling, H

    In this paper, we show how to extend our previously proposed novel continuous time Recurrent Neural Networks (RNN) approach, which retains the advantage of continuous dynamics offered by Ordinary Differential Equations (ODE) while enabling parameter estimation through adaptation, to larger signalling networks using a modular approach. Specifically, the signalling network is decomposed into several sub-models based on important temporal events in the network. Each sub-model is represented by the proposed RNN and trained using data generated from the corresponding ODE model. Trained sub-models are assembled into a whole-system RNN which is then subjected to systems dynamics and sensitivity analyses. The concept is illustrated by application to the G1/S transition in the cell cycle using the Iwamoto et al. (2008) ODE model. We decomposed the G1/S network into 3 sub-models: (i) E2F transcription factor release; (ii) E2F and CycE positive feedback loop for elevating cyclin levels; and (iii) E2F and CycA negative feedback to degrade E2F. The trained sub-models accurately represented system dynamics, and parameters were in good agreement with the ODE model. The whole-system RNN, however, revealed a couple of parameters contributing to compounding errors due to feedback and required refinement to sub-model 2. These were related to the reversible reaction between CycE/CDK2 and p27, its inhibitor. The revised whole-system RNN model very accurately matched the dynamics of the ODE system. Local sensitivity analysis of the whole-system model further revealed the most dominant influence of the above two parameters in perturbing the G1/S transition, giving support to a recent hypothesis that the release of the inhibitor p27 from the Cyc/CDK complex triggers cell cycle stage transition. To make the model useful in a practical setting, we modified each RNN sub-model with a time relay switch to facilitate larger-interval input data (≈ 20 min; the original model used data sampled every 30 s or less) and retrained them, which produced parameters and protein concentrations similar to the original RNN system. The results thus demonstrated the reliability of the proposed RNN method for modelling relatively large networks by modularisation for practical settings. Advantages of the method are its ability to represent accurate continuous system dynamics and the ease of: parameter estimation through training with data from a practical setting, model analysis (40% faster than ODE), fine-tuning parameters when more data are available, sub-model extension when new elements and/or interactions come to light, and model expansion with the addition of sub-models.

  23. An advanced method of contributing emissions to short-lived chemical species (OH and HO2): the TAGGING 1.1 submodel based on the Modular Earth Submodel System (MESSy 2.53)

    NASA Astrophysics Data System (ADS)

    Rieger, Vanessa S.; Mertens, Mariano; Grewe, Volker

    2018-06-01

    To mitigate the human impact on climate change, it is essential to determine the contribution of emissions to the concentration of trace gases. In particular, the source attribution of short-lived species such as OH and HO2 is important, as they play a crucial role in atmospheric chemistry. This study presents an advanced version of a tagging method for OH and HO2 (HOx) which attributes HOx concentrations to emissions. While the former version (V1.0) only considered 12 reactions in the troposphere, the new version (V1.1), presented here, takes 19 reactions in the troposphere into account. For the first time, the main chemical reactions for HOx chemistry in the stratosphere are also considered (27 reactions in total). To fully take into account the main HO2 source, the reaction of H and O2, the tagging of the H radical is introduced. In order to ensure the steady-state assumption, we introduce rest terms which balance the deviation of HOx production and loss. This closes the budget between the sum of all contributions and the total concentration. The contributions to OH and HO2 obtained by the advanced tagging method V1.1 deviate from V1.0 in certain source categories. For OH, major changes are found in the categories biomass burning, biogenic emissions and methane decomposition. For HO2, the contributions differ strongly in the categories biogenic emissions and methane decomposition. As HOx reacts with ozone (O3), carbon monoxide (CO), reactive nitrogen compounds (NOy), non-methane hydrocarbons (NMHCs) and peroxyacyl nitrates (PAN), the contributions to these species are also modified by the advanced HOx tagging method V1.1. The contributions to NOy, NMHC and PAN show only little change, whereas O3 from biogenic emissions and methane decomposition increases in the tropical troposphere. Variations for CO from biogenic emissions and biomass burning are only found in the Southern Hemisphere.
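
    In our notation (not quoted from the paper), the rest terms close the budget as

    ```latex
    % c_i: contribution of category i; P_i, L_i: tagged production and loss;
    % P, L: total production and loss; R_i: rest terms enforcing closure.
    \[
      \frac{dc_i}{dt} = P_i - L_i + R_i, \qquad
      \sum_i R_i = \Bigl(P - \sum_i P_i\Bigr) - \Bigl(L - \sum_i L_i\Bigr),
      \qquad \sum_i c_i = c_{\mathrm{tot}}.
    \]
    ```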

  24. A new adaptive multiple modelling approach for non-linear and non-stationary systems

    NASA Astrophysics Data System (ADS)

    Chen, Hao; Gong, Yu; Hong, Xia

    2016-07-01

    This paper proposes a novel adaptive multiple-modelling algorithm for non-linear and non-stationary systems. This simple modelling paradigm comprises K candidate sub-models which are all linear. With data available in an online fashion, the performance of all candidate sub-models is monitored based on the most recent data window, and the M best sub-models are selected from the K candidates. The weight coefficients of the selected sub-models are adapted via the recursive least squares (RLS) algorithm, while the coefficients of the remaining sub-models are unchanged. These M model predictions are then optimally combined to produce the multi-model output. We propose to minimise the mean square error based on a recent data window, and apply the sum-to-one constraint to the combination parameters, leading to a closed-form solution, so that maximal computational efficiency can be achieved. In addition, at each time step, the model prediction is chosen from either the resultant multiple model or the best sub-model, whichever is better. Simulation results are given in comparison with some typical alternatives, including the linear RLS algorithm and a number of online non-linear approaches, in terms of modelling performance and time consumption.
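
    The two ingredients translate into a short sketch (Python/NumPy, our notation and invented data): a recursive least squares update for one selected sub-model, and the closed-form sum-to-one combination obtained from the KKT conditions of the window MSE minimisation:

    ```python
    import numpy as np

    def rls_update(theta, P, x, y, lam=0.99):
        """One RLS step for weights theta with inverse-correlation matrix P."""
        Px = P @ x
        k = Px / (lam + x @ Px)               # gain vector
        theta = theta + k * (y - x @ theta)   # prediction-error correction
        P = (P - np.outer(k, Px)) / lam
        return theta, P

    def combine(Y, y):
        """Y: window x M sub-model predictions, y: targets.
        Minimise ||y - Y w||^2 subject to sum(w) = 1 (closed form)."""
        M = Y.shape[1]
        A = Y.T @ Y + 1e-8 * np.eye(M)        # regularised normal matrix
        Ab = np.linalg.solve(A, Y.T @ y)
        A1 = np.linalg.solve(A, np.ones(M))
        mu = (1.0 - np.ones(M) @ Ab) / (np.ones(M) @ A1)   # Lagrange multiplier
        return Ab + mu * A1                   # weights summing to one

    theta, P = rls_update(np.zeros(2), 100.0 * np.eye(2), np.array([1.0, 0.5]), 0.7)
    rng = np.random.default_rng(1)
    w = combine(rng.standard_normal((50, 3)), rng.standard_normal(50))
    print(w, w.sum())                         # sum(w) == 1 up to round-off
    ```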

  25. A structural model decomposition framework for systems health management

    NASA Astrophysics Data System (ADS)

    Roychoudhury, I.; Daigle, M.; Bregon, A.; Pulido, B.

    Systems health management (SHM) is an important set of technologies aimed at increasing system safety and reliability by detecting, isolating, and identifying faults; and predicting when the system reaches end of life (EOL), so that appropriate fault mitigation and recovery actions can be taken. Model-based SHM approaches typically make use of global, monolithic system models for online analysis, which results in a loss of scalability and efficiency for large-scale systems. Improvement in scalability and efficiency can be achieved by decomposing the system model into smaller local submodels and operating on these submodels instead. In this paper, the global system model is analyzed offline and structurally decomposed into local submodels. We define a common model decomposition framework for extracting submodels from the global model. This framework is then used to develop algorithms for solving model decomposition problems for the design of three separate SHM technologies, namely, estimation (which is useful for fault detection and identification), fault isolation, and EOL prediction. We solve these model decomposition problems using a three-tank system as a case study.

  26. A Structural Model Decomposition Framework for Systems Health Management

    NASA Technical Reports Server (NTRS)

    Roychoudhury, Indranil; Daigle, Matthew J.; Bregon, Anibal; Pulido, Belamino

    2013-01-01

    Systems health management (SHM) is an important set of technologies aimed at increasing system safety and reliability by detecting, isolating, and identifying faults; and predicting when the system reaches end of life (EOL), so that appropriate fault mitigation and recovery actions can be taken. Model-based SHM approaches typically make use of global, monolithic system models for online analysis, which results in a loss of scalability and efficiency for large-scale systems. Improvement in scalability and efficiency can be achieved by decomposing the system model into smaller local submodels and operating on these submodels instead. In this paper, the global system model is analyzed offline and structurally decomposed into local submodels. We define a common model decomposition framework for extracting submodels from the global model. This framework is then used to develop algorithms for solving model decomposition problems for the design of three separate SHM technologies, namely, estimation (which is useful for fault detection and identification), fault isolation, and EOL prediction. We solve these model decomposition problems using a three-tank system as a case study.
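
    The extraction step can be illustrated with a toy sketch (Python; the equation/variable structure is invented, loosely patterned on a three-tank layout): starting from the variable a residual is built on, the dependency structure is traversed backwards, stopping at measured variables, and only the equations reached are kept as the local submodel:

    ```python
    def extract_submodel(computes, depends, target, measured):
        """computes: variable -> equation; depends: equation -> input variables."""
        submodel, frontier = set(), [target]
        while frontier:
            var = frontier.pop()
            if var in measured or var not in computes:
                continue                      # measured inputs cut the traversal
            eq = computes[var]
            if eq not in submodel:
                submodel.add(eq)
                frontier.extend(depends[eq])
        return submodel

    # Toy structure: heights h1..h3, input u; h3 and u are measured.
    computes = {"h1": "e1", "h2": "e2", "h3": "e3"}
    depends = {"e1": ["u"], "e2": ["h1", "h3"], "e3": ["u"]}
    print(extract_submodel(computes, depends, "h2", measured={"h3", "u"}))
    # -> {'e1', 'e2'}: e3 is not needed to compute h2
    ```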

  27. Towards Operational Meteotsunami Early Warning System: the Adriatic Project MESSI

    NASA Astrophysics Data System (ADS)

    Vilibic, I.; Sepic, J.; Denamiel, C. L.; Mihanovic, H.; Muslim, S.; Tudor, M.; Ivankovic, D.; Jelavic, D.; Kovacevic, V.; Masce, T.; Dadic, V.; Gacic, M.; Horvath, K.; Monserrat, S.; Rabinovich, A.; Telisman-Prtenjak, M.

    2017-12-01

    A number of destructive meteotsunamis - atmospherically driven long ocean waves in the tsunami frequency band - occurred during the last decade throughout the world's oceans. Owing to the significant damage caused by these meteotsunamis, several scientific groups (occasionally in collaboration with public offices) have started developing meteotsunami warning systems. The creation of one such system was initiated in late 2015 within the MESSI (Meteotsunamis, destructive long ocean waves in the tsunami frequency band: from observations and simulations towards a warning system) project. The main goal of this project is to build a prototype of a meteotsunami warning system for the eastern Adriatic coast. The system will be based on real-time measurements, operational atmosphere and ocean modeling and a real-time decision-making process. The envisioned MESSI meteotsunami warning system consists of three modules: (1) a synoptic warning module, which will use the established correlation between forecasted synoptic fields and high-frequency sea level oscillations to provide qualitative meteotsunami forecasts for up to a week in advance, (2) a probabilistic premodeling prediction module, which will use the operational WRF-ROMS-ADCIRC modeling system and compare the forecast with an atlas of presimulations to obtain a probabilistic meteotsunami forecast for up to three days in advance, and (3) a real-time module, which is based on real-time tracking of the properties of air pressure disturbances (amplitude, speed, direction, period, ...) and their real-time comparison with the atlas of meteotsunami simulations. The system will be tested on recent meteotsunami events which were recorded in the MESSI area shortly after the operational meteotsunami network installation. Albeit complex, such a multilevel warning system has the potential to be adapted to most meteotsunami hot spots, simply by tuning the system parameters to the available atmospheric and ocean data.

  28. Development of the Transportation Revenue Estimator and Needs Determination System (TRENDS) forecasting model : MPO sub-models and maintenance.

    DOT National Transportation Integrated Search

    2011-11-01

    This report summarizes the technical work performed developing and incorporating Metropolitan Planning : Organization sub-models into the existing Texas Revenue Estimator and Needs Determination System : (TRENDS) model. Additionally, this report expl...

  29. The management submodel of the Wind Erosion Prediction System

    USDA-ARS?s Scientific Manuscript database

    The Wind Erosion Prediction System (WEPS) is a process-based, daily time-step, computer model that predicts soil erosion via simulation of the physical processes controlling wind erosion. WEPS is comprised of several individual modules (submodels) that reflect different sets of physical processes, ...

  30. Transportation Planning with Immune System Derived Approach

    NASA Astrophysics Data System (ADS)

    Sugiyama, Kenji; Yaji, Yasuhito; Ootsuki, John Takuya; Fujimoto, Yasutaka; Sekiguchi, Takashi

    This paper presents an immune-system-derived approach for planning the transportation of materials between manufacturing processes in a factory. Transportation operations are modeled by a Petri net and divided into submodels. Transportation orders are derived from the firing sequences of those submodels through a convergence calculation using the immune-system-derived excitation and suppression operations. A basic evaluation of this approach is conducted by simulation-based investigation.

  31. A Structural Model Decomposition Framework for Hybrid Systems Diagnosis

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew; Bregon, Anibal; Roychoudhury, Indranil

    2015-01-01

    Nowadays, a large number of practical systems in aerospace and industrial environments are best represented as hybrid systems that consist of discrete modes of behavior, each defined by a set of continuous dynamics. These hybrid dynamics make the on-line fault diagnosis task very challenging. In this work, we present a new modeling and diagnosis framework for hybrid systems. Models are composed from sets of user-defined components using a compositional modeling approach. Submodels for residual generation are then generated for a given mode, and reconfigured efficiently when the mode changes. Efficient reconfiguration is established by exploiting causality information within the hybrid system models. The submodels can then be used for fault diagnosis based on residual generation and analysis. We demonstrate the efficient causality reassignment, submodel reconfiguration, and residual generation for fault diagnosis using an electrical circuit case study.

  32. Messy Collaboration: Learning from a Learning Study

    ERIC Educational Resources Information Center

    Adamson, Bob; Walker, Elizabeth

    2011-01-01

    Messy collaboration refers to complexity, unpredictability and management dilemmas when educators work together. Such messiness was evident in a Hong Kong English Learning Study, a structured cyclical process in which teachers and researcher-participants from a teacher education institution work collaboratively on effective student learning. This…

  13. Modeling snail breeding in a bioregenerative life support system

    NASA Astrophysics Data System (ADS)

    Kovalev, V. S.; Manukovsky, N. S.; Tikhomirov, A. A.; Kolmakova, A. A.

    2015-07-01

    The discrete-time model of snail breeding consists of two sequentially linked submodels: "Stoichiometry" and "Population". In both submodels, a snail population is split into twelve age groups within one year of age. The first submodel is used to simulate the metabolism of a single snail in each age group via a stoichiometric equation; the second submodel is used to optimize the age structure and size of the snail population. The daily intake of snail meat by crew members is the guideline that specifies the required population productivity. The mass exchange of the snail unit, inhabited by land snails of Achatina fulica, is given as an outcome of step-by-step modeling. All simulations are performed using the Solver add-in of Excel 2007.

  14. Residue decomposition submodel of WEPS

    USDA-ARS?s Scientific Manuscript database

    The Residue Decomposition submodel of the Wind Erosion Prediction System (WEPS) simulates the decrease in crop residue biomass due to microbial activity. The decomposition process is modeled as a first-order reaction with temperature and moisture as driving variables. Decomposition is a function of ...
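
    As a rough illustration of the first-order formulation described above, the sketch below advances residue biomass with a decay constant scaled by dimensionless temperature and moisture coefficients. The rate constant, the limiting-factor combination rule, and all values are assumptions for illustration, not WEPS's actual coefficients.

```python
import math

def decompose_residue(mass, k_opt=0.016, temp_coeff=1.0, water_coeff=1.0, dt=1.0):
    """First-order residue decay for one daily time step.

    mass        : current residue biomass (kg/ha)
    k_opt       : optimum rate constant (1/day) - an illustrative value,
                  not WEPS's actual coefficient
    temp_coeff  : 0-1 temperature factor limiting microbial activity
    water_coeff : 0-1 moisture factor limiting microbial activity
    """
    # Assumed combination rule: the more limiting factor controls the rate
    # (WEPS's own formulation may differ).
    k_eff = k_opt * min(temp_coeff, water_coeff)
    return mass * math.exp(-k_eff * dt)

mass = 5000.0  # kg/ha of standing crop residue (illustrative)
for day in range(30):
    mass = decompose_residue(mass, temp_coeff=0.8, water_coeff=0.6)
print(f"residue remaining after 30 days: {mass:.0f} kg/ha")
```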

  15. A Review of Patti Lather's "Engaging Science Policy from the Side of the Messy"

    ERIC Educational Resources Information Center

    Rosser, Sue V.

    2011-01-01

    In "Engaging Science Policy from the Side of the Messy," Patti Lather explores the relationship between science and policy. In this review Rosser explores how Lather argues for the use of all forms of research to make policy that is democratic, complex and messy.

  16. High pressure common rail injection system modeling and control.

    PubMed

    Wang, H P; Zheng, D; Tian, Y

    2016-07-01

    In this paper, modeling and common-rail pressure control of a high pressure common rail injection system (HPCRIS) is presented. The proposed mathematical model of the HPCRIS, which contains three sub-systems - a high pressure pump sub-model, a common rail sub-model and an injector sub-model - is a relatively complicated nonlinear system. The mathematical model is validated using Matlab and a detailed virtual simulation environment. For the considered HPCRIS, an effective model-free controller, called the Extended State Observer-based intelligent Proportional Integral (ESO-based iPI) controller, is designed. The proposed method is composed mainly of the ESO observer and a time-delay-estimation-based iPI controller. Finally, to demonstrate its performance, the proposed ESO-based iPI controller is compared with a conventional PID controller and ADRC.
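
    To make the control structure concrete, here is a minimal sketch of a model-free iPI loop with an extended state observer, in the spirit of the approach described above: the plant is a toy first-order stand-in (not the paper's HPCRIS model), the ultra-local model is dy/dt = F + alpha*u, the ESO estimates the lumped unknown term F, and the iPI law cancels it. All gains and the plant itself are assumptions for illustration.

```python
import numpy as np

# Toy stand-in plant for rail-pressure dynamics (NOT the paper's HPCRIS model):
# dy/dt = f(y, t) + b*u, where f is unknown to the controller.
def plant_rhs(y, u, t):
    return -0.5 * y - 0.2 * np.sin(y) + 0.1 * np.cos(3.0 * t) + 2.0 * u

dt, T = 1e-3, 5.0
alpha = 2.0                 # assumed input gain of the ultra-local model dy/dt = F + alpha*u
beta1, beta2 = 200.0, 1e4   # hypothetical ESO gains (double observer pole at s = -100)
kp, ki = 8.0, 20.0          # hypothetical iPI gains
y_ref = 1.0                 # constant set point, so dy_ref/dt = 0

y = u = 0.0
z1 = z2 = 0.0               # ESO states: z1 tracks y, z2 tracks the lumped term F
integ = 0.0

for k in range(int(T / dt)):
    t = k * dt
    # Extended state observer: estimate the output and the unknown dynamics F
    e_obs = y - z1
    z1 += dt * (z2 + alpha * u + beta1 * e_obs)
    z2 += dt * (beta2 * e_obs)
    # Intelligent PI law: cancel the estimated F, then plain PI on the error
    e = y_ref - y
    integ += e * dt
    u = (-z2 + kp * e + ki * integ) / alpha
    # Plant integration (explicit Euler)
    y += dt * plant_rhs(y, u, t)

print(f"output after {T} s: y = {y:.3f} (set point {y_ref})")
```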

  17. Social regulation of emotion: messy layers

    PubMed Central

    Kappas, Arvid

    2013-01-01

    Emotions are evolved systems of intra- and interpersonal processes that are regulatory in nature, dealing mostly with issues of personal or social concern. They regulate social interaction and, by extension, the social sphere. In turn, processes in the social sphere regulate the emotions of individuals and groups. In other words, intrapersonal processes project into the interpersonal space and, inversely, interpersonal experiences deeply influence intrapersonal processes. Thus, I argue that the concepts of emotion generation and regulation should not be artificially separated. Similarly, interpersonal emotions should not be reduced to interacting systems of intraindividual processes. Instead, we can consider emotions at different social levels, ranging from dyads to large-scale e-communities. The interaction between these levels is complex and does not only involve influences from one level to the next. In this sense, the levels of emotion/regulation are messy and a challenge for empirical study. In this article, I discuss the concepts of emotion and regulation at different intra- and interpersonal levels. I extend the concept of auto-regulation of emotions (Kappas, 2008, 2011a,b) to social processes. Furthermore, I argue for the necessity of including mediated communication, particularly in cyberspace, in contemporary models of emotion/regulation. Lastly, I suggest the use of concepts from systems dynamics and complex systems to tackle the challenge of the “messy layers.” PMID:23424049

  18. Comprehensive silicon solar cell computer modeling

    NASA Technical Reports Server (NTRS)

    Lamorte, M. F.

    1984-01-01

    The development of an efficient, comprehensive Si solar cell modeling program capable of simulation accuracy within 5 percent is examined. A general investigation of computerized simulation is provided. Computer simulation programs are subdivided into a number of major tasks: (1) the analytical method used to represent the physical system; (2) the phenomena submodels that comprise the simulation of the system; (3) the coding of the analysis and the phenomena submodels; (4) a coding scheme that results in efficient use of the CPU so that CPU costs are low; and (5) a simulation program that is modularized with respect to the structures that may be analyzed, the addition and/or modification of phenomena submodels as new experimental data become available, and the addition of other photovoltaic materials.

  19. Comparison Between Numerically Simulated and Experimentally Measured Flowfield Quantities Behind a Pulsejet

    NASA Technical Reports Server (NTRS)

    Geng, Tao; Paxson, Daniel E.; Zheng, Fei; Kuznetsov, Andrey V.; Roberts, William L.

    2008-01-01

    Pulsed combustion is receiving renewed interest as a potential route to higher performance in air breathing propulsion systems. Pulsejets offer a simple experimental device with which to study unsteady combustion phenomena and validate simulations. Previous computational fluid dynamic (CFD) simulation work focused primarily on the pulsejet combustion and exhaust processes. This paper describes a new inlet sub-model which simulates the fluidic and mechanical operation of a valved pulsejet head. The governing equations for this sub-model are described. Sub-model validation is provided through comparisons of simulated and experimentally measured reed valve motion, and time averaged inlet mass flow rate. The updated pulsejet simulation, with the inlet sub-model implemented, is validated through comparison with experimentally measured combustion chamber pressure, inlet mass flow rate, operational frequency, and thrust. Additionally, the simulated pulsejet exhaust flowfield, which is dominated by a starting vortex ring, is compared with particle imaging velocimetry (PIV) measurements on the bases of velocity, vorticity, and vortex location. The results show good agreement between simulated and experimental data. The inlet sub-model is shown to be critical for the successful modeling of pulsejet operation. This sub-model correctly predicts both the inlet mass flow rate and its phase relationship with the combustion chamber pressure. As a result, the predicted pulsejet thrust agrees very well with experimental data.

  20. PVWatts Version 1 Technical Reference

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dobos, A. P.

    2013-10-01

    The NREL PVWatts(TM) calculator is a web application developed by the National Renewable Energy Laboratory (NREL) that estimates the electricity production of a grid-connected photovoltaic system based on a few simple inputs. PVWatts combines a number of sub-models to predict overall system performance, and makes several hidden assumptions about performance parameters. This technical reference details the individual sub-models, documents assumptions and hidden parameters, and explains the sequence of calculations that yield the final system performance estimation.

  1. Laser Induced Aluminum Surface Breakdown Model

    NASA Technical Reports Server (NTRS)

    Chen, Yen-Sen; Liu, Jiwen; Zhang, Sijun; Wang, Ten-See (Technical Monitor)

    2002-01-01

    Laser powered propulsion systems involve complex fluid dynamics, thermodynamics and radiative transfer processes. Based on an unstructured-grid, pressure-based computational aerothermodynamics platform, several sub-models describing such underlying physics as laser ray tracing and focusing, thermal non-equilibrium, plasma radiation and air spark ignition have been developed. The proposed work shall extend the numerical platform and existing sub-models to include the aluminum wall surface Inverse Bremsstrahlung (IB) effect, from which surface ablation and free-electron generation can be initiated without relying on the air spark ignition sub-model.

  2. Shaping planetary nebulae with jets in inclined triple stellar systems

    NASA Astrophysics Data System (ADS)

    Akashi, Muhammad; Soker, Noam

    2017-08-01

    We conduct three-dimensional hydrodynamical simulations of two opposite jets launched obliquely to the orbital plane around an asymptotic giant branch (AGB) star and within its dense wind, and demonstrate the formation of a 'messy' planetary nebula (PN), namely a PN lacking any type of symmetry (i.e. highly irregular). In building the initial conditions, we assume that a tight binary system orbits the AGB star and that the orbital plane of the tight binary system is inclined to the orbital plane of the binary system and the AGB star (the triple system plane). We further assume that the accreted mass on to the tight binary system forms an accretion disc around one of the stars and that the plane of the disc is tilted to the orbital plane of the triple system. The highly asymmetrical and filamentary structures that we obtain support the notion that messy PNe might be shaped by triple stellar systems.

  3. MessyBoard: Lowering the Cost of Communication and Making it More Enjoyable

    DTIC Science & Technology

    2005-05-02

    Figure 2.9: The MessyBoard main menu. ... synchronized in real time. Users add content to the board by using a menu or by dragging and dropping or cutting and pasting from other applications. ... MessyBoard also allows users to add objects to the space using a menu (Figure 2.9) that appears when the user clicks the right mouse button.

  4. Shaping planetary nebulae with jets in inclined triple stellar systems

    NASA Astrophysics Data System (ADS)

    Akashi, Muhammad; Soker, Noam

    2017-10-01

    We conduct three-dimensional hydrodynamical simulations of two opposite jets launched obliquely to the orbital plane around an asymptotic giant branch (AGB) star and within its dense wind, and demonstrate the formation of a `messy' planetary nebula (PN), namely, a PN lacking any type of symmetry (highly irregular). In building the initial conditions we assume that a tight binary system orbits the AGB star, and that the orbital plane of the tight binary system is inclined to the orbital plane of the binary system and the AGB star. We further assume that the accreted mass onto the tight binary system forms an accretion disk around one of the stars, and that the plane of the disk is in between the two orbital planes. The highly asymmetrical lobes that we obtain support the notion that messy PNe might be shaped by triple stellar systems.

  5. How much detail and accuracy is required in plant growth sub-models to address questions about optimal management strategies in agricultural systems?

    PubMed Central

    Renton, Michael

    2011-01-01

    Background and aims: Simulations that integrate sub-models of important biological processes can be used to ask questions about optimal management strategies in agricultural and ecological systems. Building sub-models with more detail and aiming for greater accuracy and realism may seem attractive, but is likely to be more expensive and time-consuming and result in more complicated models that lack transparency. This paper illustrates a general integrated approach for constructing models of agricultural and ecological systems that is based on the principle of starting simple and then directly testing for the need to add additional detail and complexity. Methodology: The approach is demonstrated using LUSO (Land Use Sequence Optimizer), an agricultural system analysis framework based on simulation and optimization. A simple sensitivity analysis and functional perturbation analysis is used to test to what extent LUSO's crop–weed competition sub-model affects the answers to a number of questions at the scale of the whole farming system regarding optimal land-use sequencing strategies and resulting profitability. Principal results: The need for accuracy in the crop–weed competition sub-model within LUSO depended to a small extent on the parameter being varied, but more importantly and interestingly on the type of question being addressed with the model. Only a small part of the crop–weed competition model actually affects the answers to these questions. Conclusions: This study illustrates an example application of the proposed integrated approach for constructing models of agricultural and ecological systems based on testing whether complexity needs to be added to address particular questions of interest. We conclude that this example clearly demonstrates the potential value of the general approach. Advantages of this approach include minimizing the costs and resources required for model construction, keeping models transparent and easy to analyse, and ensuring the model is well suited to address the question of interest. PMID:22476477

  6. Surveillance system and method having parameter estimation and operating mode partitioning

    NASA Technical Reports Server (NTRS)

    Bickford, Randall L. (Inventor)

    2005-01-01

    A system and method for monitoring an apparatus or process asset including creating a process model comprised of a plurality of process submodels each correlative to at least one training data subset partitioned from an unpartitioned training data set and each having an operating mode associated thereto; acquiring a set of observed signal data values from the asset; determining an operating mode of the asset for the set of observed signal data values; selecting a process submodel from the process model as a function of the determined operating mode of the asset; calculating a set of estimated signal data values from the selected process submodel for the determined operating mode; and determining asset status as a function of the calculated set of estimated signal data values for providing asset surveillance and/or control.

  7. PVWatts Version 5 Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dobos, A. P.

    2014-09-01

    The NREL PVWatts calculator is a web application developed by the National Renewable Energy Laboratory (NREL) that estimates the electricity production of a grid-connected photovoltaic system based on a few simple inputs. PVWatts combines a number of sub-models to predict overall system performance, and includes several built-in parameters that are hidden from the user. This technical reference describes the sub-models, documents assumptions and hidden parameters, and explains the sequence of calculations that yield the final system performance estimate. This reference is applicable to the significantly revised version of PVWatts released by NREL in 2014.
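
    A minimal sketch of the kind of sub-model chain PVWatts documents - irradiance scaling, temperature correction, and inverter efficiency with clipping - is given below. The coefficients are commonly cited defaults and the structure is simplified; consult the technical reference itself for the actual equations and parameter values.

```python
def pvwatts_like_power(poa, t_cell, pdc0, gamma=-0.0047, eta_inv_nom=0.96, ac_dc_ratio=1.1):
    """Sketch of a PVWatts-style sub-model chain (simplified; coefficients are
    commonly cited defaults, not taken verbatim from the reference).

    poa    : plane-of-array irradiance (W/m^2)
    t_cell : cell temperature (deg C)
    pdc0   : DC nameplate rating (W)
    gamma  : power temperature coefficient (1/deg C)
    """
    # Module sub-model: linear irradiance scaling with temperature correction
    pdc = (poa / 1000.0) * pdc0 * (1.0 + gamma * (t_cell - 25.0))
    # Inverter sub-model: nominal efficiency with simple clipping at the AC rating
    pac0 = pdc0 / ac_dc_ratio
    pac = min(pdc * eta_inv_nom, pac0)
    return max(pac, 0.0)

print(pvwatts_like_power(poa=800.0, t_cell=45.0, pdc0=4000.0))  # ~2.8 kW AC
```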

  8. Subspace Methods for Massive and Messy Data

    DTIC Science & Technology

    2017-07-12

    Subspace Methods for Massive and Messy Data. Award number: W911NF-14-1-0634; organization: University of Michigan - Ann Arbor. The views, opinions and/or findings contained in this report are those of the author(s) and should not...

  9. Oxidation Mechanisms of Toluene and Benzene

    NASA Technical Reports Server (NTRS)

    Bittker, David A.

    1995-01-01

    An expanded and improved version of a previously published benzene oxidation mechanism is presented and shown to model published experimental data fairly successfully. This benzene submodel is coupled to a modified version of a toluene oxidation submodel from the recent literature. This complete mechanism is shown to successfully model published experimental toluene oxidation data for a highly mixed flow reactor and for higher temperature ignition delay times in a shock tube. A comprehensive sensitivity analysis showing the most important reactions is presented for both the benzene and toluene reacting systems. The NASA Lewis toluene mechanism's modeling capability is found to be equivalent to that of the previously published mechanism which contains a somewhat different benzene submodel.

  10. Modelling Nitrogen Cycling in a Mariculture Ecosystem as a Tool to Evaluate its Outflow

    NASA Astrophysics Data System (ADS)

    Lefebvre, S.; Bacher, C.; Meuret, A.; Hussenot, J.

    2001-03-01

    A model was constructed to describe an intensive mariculture ecosystem growing sea bass (Dicentrarchus labrax), located in the salt marshes of the Fiers d'Ars Bay on the French Atlantic coast, in order to assess nitrogen cycling within the system and nitrogen outflow from the system. The land-based system was separated into three main compartments: a seawater reservoir, fish ponds and a lagoon (sedimentation pond). Three submodels were built for simulation purposes: (1) a hydrological submodel which simulated water exchange; (2) a fish growth and excretion bioenergetic submodel; and (3) a nitrogen compound transformation and loss submodel (i.e. ammonification, nitrification and assimilation processes). A two-year sampling period of nitrogen water quality concentrations and fish growth was used to validate the model. The model fitted the observations of dissolved nitrogen components, fish growth and water fluxes on a daily basis in all the compartments. The dissolved inorganic nitrogen ranged widely over time from 0.5 to 9 g N m⁻³ within the system, depending on seawater supply and water temperature, without affecting fish growth. Fish feed was the most important input of nitrogen into the system. The mean input of nitrogen in the feed was 205 kg N day⁻¹, of which 19% was retained by fish, 4% accumulated in the sediment and 61% flowed from the system as dissolved components. The farm represented about 25% of the total dissolved nitrogen export from the bay, although the farm surface area was 100 times smaller than that of the bay.
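
    The nitrogen budget reported above can be checked with a few lines of arithmetic; the script below reproduces the feed-nitrogen partitioning from the figures given in the abstract, with the unitemized remainder noted explicitly.

```python
feed_n = 205.0  # kg N/day entering the system in fish feed (value from the abstract)
fractions = {"fish retention": 0.19, "sediment accumulation": 0.04, "dissolved outflow": 0.61}

for path, f in fractions.items():
    print(f"{path}: {f * feed_n:.0f} kg N/day  ({f * feed_n * 365 / 1000:.1f} t N/yr)")

# The ~16% remainder is not itemized in the abstract (plausibly denitrification,
# volatilization and other losses).
rest = 1.0 - sum(fractions.values())
print(f"unaccounted: {rest * feed_n:.0f} kg N/day")
```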

  11. Group analysis of dynamics equations of self-gravitating polytropic gas

    NASA Astrophysics Data System (ADS)

    Klebanov, I.; Panov, A.; Ivanov, S.; Maslova, O.

    2018-06-01

    The Lie algebras admitted by the dynamics equations of self-gravitating gas for an arbitrary equation of state and a polytropic gas are calculated. A spherically symmetric submodel is constructed for the case of a polytropic gas. The Lie algebras and the optimal system of subalgebras for a spherically symmetric submodel are computed. An invariant solution describing the steady motion is obtained.

  12. Surveillance system and method having parameter estimation and operating mode partitioning

    NASA Technical Reports Server (NTRS)

    Bickford, Randall L. (Inventor)

    2003-01-01

    A system and method for monitoring an apparatus or process asset including partitioning an unpartitioned training data set into a plurality of training data subsets each having an operating mode associated thereto; creating a process model comprised of a plurality of process submodels each trained as a function of at least one of the training data subsets; acquiring a current set of observed signal data values from the asset; determining an operating mode of the asset for the current set of observed signal data values; selecting a process submodel from the process model as a function of the determined operating mode of the asset; calculating a current set of estimated signal data values from the selected process submodel for the determined operating mode; and outputting the calculated current set of estimated signal data values for providing asset surveillance and/or control.

  13. Design of disturbances control model at automotive company

    NASA Astrophysics Data System (ADS)

    Marie, I. A.; Sari, D. K.; Astuti, P.; Teorema, M.

    2017-12-01

    The discussion was conducted at PT. XYZ, which produces automotive components and motorcycle products. The company produced the X123-type cylinder head, a motor vehicle forming component. Disturbances in the production system have affected the company's performance in achieving its Key Performance Indicator (KPI) targets. Currently, the percentage of safety stock of cylinder head products is not in accordance with the control limits set by the company (60%-80%) and tends to exceed them, increasing inventory wastage. This study aims to identify the production system disturbances that occur in the manufacturing process of X123-type cylinder head products and to design a disturbance control model that yields control actions and a safety stock policy in accordance with the company's needs. The design stage was based on an existing Disturbance Control Model, customized to the company's needs for controlling production system disturbances. The designed control model consists of four sub-models: disturbance risk level, action status, disturbance control action, and safety stock determination. The model can assist the automotive company in deciding on disturbance control actions in the cylinder head production system while controlling the safety stock percentage.

  14. Hierarchical algorithms for modeling the ocean on hierarchical architectures

    NASA Astrophysics Data System (ADS)

    Hill, C. N.

    2012-12-01

    This presentation will describe an approach to using accelerator/co-processor technology that maps hierarchical, multi-scale modeling techniques to an underlying hierarchical hardware architecture. The focus of this work is on making effective use of both the CPU and accelerator/co-processor parts of a system for large-scale ocean modeling. In this work, a lower-resolution basin-scale ocean model is locally coupled to multiple "embedded", limited-area, higher-resolution sub-models. The higher-resolution models execute on co-processor/accelerator hardware and do not interact directly with other sub-models. The lower-resolution basin-scale model executes on the system CPU(s). The result is a multi-scale algorithm that aligns with hardware designs in the co-processor/accelerator space. We demonstrate this approach being used to substitute explicit process models for standard parameterizations. Code for our sub-models is implemented through a generic abstraction layer, so that we can target multiple accelerator architectures with different programming environments. We will present two application and implementation examples. One uses the CUDA programming environment and targets GPU hardware. This example employs a simple non-hydrostatic two-dimensional sub-model to represent vertical motion more accurately. The second example uses a highly threaded three-dimensional model at high resolution. This targets a MIC/Xeon Phi-like environment and uses sub-models as a way to explicitly compute sub-mesoscale terms. In both cases the accelerator/co-processor capability provides extra compute cycles that allow improved model fidelity for little or no extra wall-clock time cost.

  15. On extending parallelism to serial simulators

    NASA Technical Reports Server (NTRS)

    Nicol, David; Heidelberger, Philip

    1994-01-01

    This paper describes an approach to discrete event simulation modeling that appears to be effective for developing portable and efficient parallel execution of models of large distributed systems and communication networks. In this approach, the modeler develops submodels using an existing sequential simulation modeling tool, using the full expressive power of the tool. A set of modeling language extensions permits automatically synchronized communication between submodels; however, the automation requires that any such communication must take a nonzero amount of simulation time. Within this modeling paradigm, a variety of conservative synchronization protocols can transparently support conservative execution of submodels on potentially different processors. A specific implementation of this approach, U.P.S. (Utilitarian Parallel Simulator), is described, along with performance results on the Intel Paragon.
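
    The requirement that inter-submodel communication consume nonzero simulation time is what gives each channel a positive lookahead, which in turn lets a conservative protocol compute a safe execution horizon. A minimal sketch of that computation follows; the function, channel names and values are hypothetical, not U.P.S. code.

```python
def safe_execution_horizon(channel_clocks, lookahead):
    """Conservative synchronization sketch: a submodel may safely process all
    events with timestamps up to the minimum over its input channels of
    (channel clock + that channel's lookahead). The nonzero communication
    delay required by the modeling extensions is what keeps lookahead > 0.
    """
    return min(t + lookahead[ch] for ch, t in channel_clocks.items())

# Two upstream submodels; their messages take at least 2.0 and 0.5 time units
clocks = {"subA": 10.0, "subB": 12.5}
la = {"subA": 2.0, "subB": 0.5}
print(safe_execution_horizon(clocks, la))  # 12.0 -> process events up to t = 12
```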

  16. Cross-cultural validity of Morningness-Eveningness Stability Scale improved (MESSi) in Iran, Spain and Germany.

    PubMed

    Rahafar, Arash; Randler, Christoph; Díaz-Morales, Juan F; Kasaeian, Ali; Heidari, Zeinab

    2017-01-01

    The Morningness-Eveningness Stability Scale improved (MESSi) is a newly constructed measure for assessing circadian type and amplitude. In this study, we applied the measure to participants from three different countries: Germany, Spain and Iran. Confirmatory factor analysis (CFA) of the MESSi displayed mediocre fit in the three countries. Comparing increasingly stringent models using multigroup confirmatory factor analyses indicated at least partial measurement invariance (metric invariance) by country for the Morning Affect and Distinctness subscales. Age was positively related to Morning Affect (MA) and negatively related to Eveningness (EV) and Distinctness (DI). Men reported higher MA than women, whereas women reported higher DI than men. Regarding country effects, Iranian participants reported the highest MA compared to Spaniards and Germans, whereas Germans reported higher DI compared to Iranians and Spaniards. In conclusion, our study corroborates the validity and reliability of the MESSi across three countries with different geographical and cultural characteristics.

  17. Waste tyre pyrolysis: modelling of a moving bed reactor.

    PubMed

    Aylón, E; Fernández-Colino, A; Murillo, R; Grasa, G; Navarro, M V; García, T; Mastral, A M

    2010-12-01

    This paper describes the development of a new model for waste tyre pyrolysis in a moving bed reactor. The model comprises three different sub-models: a kinetic sub-model that predicts solid conversion in terms of reaction time and temperature, a heat transfer sub-model that calculates the temperature profile inside the particle and the energy flux from the surroundings to the tyre particles and, finally, a hydrodynamic sub-model that predicts the solid flow pattern inside the reactor. These three sub-models have been integrated in order to develop a comprehensive reactor model. Experimental results were obtained in a continuous moving bed reactor and used to validate model predictions, with good agreement achieved between the experimental and simulated results. In addition, a parametric study of the model was carried out, which showed that tyre particles heat up in a time clearly shorter than the average particle residence time inside the reactor. This fast particle heating, together with fast reaction kinetics, enables total solid conversion to be achieved in this system, in accordance with the predictive model.

  18. New earth system model for optical performance evaluation of space instruments.

    PubMed

    Ryu, Dongok; Kim, Sug-Whan; Breault, Robert P

    2017-03-06

    In this study, a new global earth system model is introduced for evaluating the optical performance of space instruments. Simultaneous imaging and spectroscopic results are provided using this global earth system model, with fully resolved spatial, spectral, and temporal coverage of the sub-models of the Earth. The sun sub-model is a Lambertian scattering sphere with a 6-h scale and 295 lines of solar spectral irradiance. The atmospheric sub-model has a 15-layer three-dimensional (3D) ellipsoid structure. The land sub-model uses spectral bidirectional reflectance distribution functions (BRDFs) defined by a semi-empirical parametric kernel model. The ocean is modeled with the ocean spectral albedo after subtracting the total integrated scattering of the sun-glint scatter model. A hypothetical two-mirror Cassegrain telescope with a 300-mm-diameter aperture and a 21.504 mm × 21.504 mm focal plane imaging instrument is designed. The simulated image results are compared with observational data from HRI-VIS measurements during the EPOXI mission for approximately 24 h from UTC Mar. 18, 2008. Next, the defocus mapping and edge spread function (ESF) measurement results show that the distance between the primary and secondary mirrors increases by 55.498 μm from the diffraction-limited condition. The shift of the focal plane is determined to be 5.813 mm shorter than that of the defocused focal plane, and this result is confirmed through the estimation of point spread function (PSF) measurements. This study shows that the earth system model combined with an instrument model is a powerful tool that can greatly help the development phase of instrument missions.

  19. Meteotsunamis, destructive tsunami-like waves: from observations and simulations towards a warning system (MESSI)

    NASA Astrophysics Data System (ADS)

    Sepic, Jadranka; Vilibic, Ivica

    2016-04-01

    Atmospherically generated tsunami-like waves, also known as meteotsunamis, pose a severe threat to exposed coastlines. Although not as destructive as ordinary tsunamis, meteotsunami waves several metres high can bring destruction, cause loss of human lives and raise panic. For that reason, MESSI, an integrative meteotsunami research and warning project, has been developed and is presented herein. The project has a threefold base: (1) research on atmosphere-ocean interaction, with focus on (i) source processes in the atmosphere, (ii) energy transfer to the ocean and (iii) growth of meteotsunami waves during propagation; (2) estimation of meteotsunami occurrence rates in past, present and future climate, and mapping of meteotsunami hazard; and (3) construction of a meteotsunami warning system prototype, the latter being the main objective of the project. Owing to the great frequency of meteotsunamis and to its complex bathymetry - varying from a shallow shelf in the north to deep pits in the south, with a number of funnel-shaped bays and harbours that substantially amplify incoming tsunami-like waves - the Adriatic, the northernmost of the Mediterranean seas, has been chosen as an ideal area for the realization of the MESSI project and the implementation of the warning system. The warning system will, however, be designed to allow for wider applicability and an easy transfer to other endangered locations. Its architecture will integrate several components: (1) real-time measurements of key oceanographic and atmospheric parameters, (2) coupled atmosphere-ocean models run in real-time (warning) mode, and (3) semi-automatic procedures and protocols for warning civil protection, local authorities and the public. The effectiveness of the warning system will be tested on historic events.

  20. On the hierarchy of partially invariant submodels of differential equations

    NASA Astrophysics Data System (ADS)

    Golovin, Sergey V.

    2008-07-01

    It is noted that the partially invariant solution (PIS) of differential equations in many cases can be represented as an invariant reduction of some PISs of the higher rank. This introduces a hierarchic structure in the set of all PISs of a given system of differential equations. An equivalence of the two-step and the direct ways of construction of PISs is proved. The hierarchy simplifies the process of enumeration and analysis of partially invariant submodels to the given system of differential equations. In this framework, the complete classification of regular partially invariant solutions of ideal MHD equations is given.

  1. MESSI: metabolic engineering target selection and best strain identification tool.

    PubMed

    Kang, Kang; Li, Jun; Lim, Boon Leong; Panagiotou, Gianni

    2015-01-01

    Metabolic engineering and synthetic biology are synergistically related fields for manipulating target pathways and designing microorganisms that can act as chemical factories. Saccharomyces cerevisiae's ideal bioprocessing traits make yeast a very attractive chemical factory for the production of fuels, pharmaceuticals and nutraceuticals, as well as a wide range of chemicals. However, future attempts at engineering S. cerevisiae's metabolism using synthetic biology need to move towards more integrative models that incorporate the high connectivity of metabolic pathways and regulatory processes and the interactions of genetic elements across those pathways and processes. To contribute in this direction, we have developed the Metabolic Engineering target Selection and best Strain Identification tool (MESSI), a web server for predicting efficient chassis and regulatory components for yeast bio-based production. The server provides an integrative platform for users to analyse ready-to-use public high-throughput metabolomic data, which are transformed into metabolic pathway activities for identifying the most efficient S. cerevisiae strain for the production of a compound of interest. As input, MESSI accepts metabolite KEGG IDs or pathway names. MESSI outputs a ranked list of S. cerevisiae strains based on aggregation algorithms. Furthermore, through a genome-wide association study of the metabolic pathway activities with the strains' natural variation, MESSI prioritizes genes and small variants as potential regulatory points and promising metabolic engineering targets. Users can choose various parameters in the whole process, such as (i) the weight and expectation of each metabolic pathway activity in the final ranking of the strains, (ii) the Weighted AddScore Fuse or Weighted Borda Fuse aggregation algorithm, (iii) the type of variants to be included, and (iv) variant sets at different biological levels. Database URL: http://sbb.hku.hk/MESSI/
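
    Of the two aggregation options, Weighted Borda Fuse is the simpler to illustrate: each ranked list awards positional points, scaled by a user weight, and candidates are sorted by total score. The sketch below is a generic weighted Borda aggregation under that reading, with hypothetical strain names and weights; it is not MESSI's actual implementation.

```python
def weighted_borda_fuse(rankings, weights):
    """Weighted Borda Fuse sketch: each ranked list awards positional points
    (best gets n, worst gets 1), scaled by a user-assigned weight; candidates
    are returned sorted by total score.

    rankings : list of lists, each ordering strain names best-first
               (e.g. one ranking per metabolic pathway activity)
    weights  : one weight per ranking (e.g. pathway importance)
    """
    scores = {}
    for ranking, w in zip(rankings, weights):
        n = len(ranking)
        for pos, strain in enumerate(ranking):
            scores[strain] = scores.get(strain, 0.0) + w * (n - pos)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical pathway-activity rankings of three S. cerevisiae strains
r1 = ["BY4741", "CEN.PK", "S288C"]   # e.g. ranked by glycolysis activity
r2 = ["CEN.PK", "BY4741", "S288C"]   # e.g. ranked by TCA cycle activity
print(weighted_borda_fuse([r1, r2], weights=[2.0, 1.0]))  # ['BY4741', 'CEN.PK', 'S288C']
```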

  2. Software life cycle dynamic simulation model: The organizational performance submodel

    NASA Technical Reports Server (NTRS)

    Tausworthe, Robert C.

    1985-01-01

    The submodel structure of a software life cycle dynamic simulation model is described. The software process is divided into seven phases, each with product, staff, and funding flows. The model is subdivided into an organizational response submodel, a management submodel, a management influence interface, and a model analyst interface. The concentration here is on the organizational response model, which simulates the performance characteristics of a software development subject to external and internal influences. These influences emanate from two sources: the model analyst interface, which configures the model to simulate the response of an implementing organization subject to its own internal influences, and the management submodel that exerts external dynamic control over the production process. A complete characterization is given of the organizational response submodel in the form of parameterized differential equations governing product, staffing, and funding levels. The parameter values and functions are allocated to the two interfaces.
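
    Tausworthe's actual equations are not reproduced in the abstract; a generic form consistent with the description (coupled product, staffing, and funding levels, with management and analyst influences entering as parameters) might look like

$$\frac{dP}{dt} = r\,\eta(t)\,S(t), \qquad \frac{dS}{dt} = h\bigl(F(t)\bigr) - a\,S(t), \qquad \frac{dF}{dt} = f_{\mathrm{alloc}}(t) - c\,S(t),$$

    where P is accumulated product, S is staffing level, F is remaining funding, r is productivity, η(t) is a management/organizational influence factor, h(·) is funding-driven hiring, a is attrition, and c is the burn rate per staff member. This is an illustrative reconstruction, not the report's parameterization.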

  3. Tailored Testing Theory and Practice: A Basic Model, Normal Ogive Submodels, and Tailored Testing Algorithms

    DTIC Science & Technology

    1983-08-01

    From a single common-factor model, the author derives the two- and three-parameter normal ogive ... functions as submodels. For both of these ... (NPRDC TR 83-32, August 1983).
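
    The normal ogive submodels referred to here are presumably the standard two- and three-parameter item response functions, which for item i take the form

$$P_i(\theta) = c_i + (1 - c_i)\,\Phi\bigl(a_i(\theta - b_i)\bigr), \qquad \Phi(x) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x} e^{-t^2/2}\,dt,$$

    with discrimination a_i, difficulty b_i, and guessing parameter c_i; the two-parameter submodel is the special case c_i = 0.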

  4. Patterns and Attributes in Vulnerable Children's Messy Play

    ERIC Educational Resources Information Center

    Gascoyne, Sue

    2017-01-01

    "Messy play" is often viewed as a purely ludic play activity in early childhood settings, however, some professionals in therapeutic contexts have understood it as potentially significant for supporting young children who have experienced trauma. This article reports on a study that explored how young children who have experienced trauma…

  5. Messy History vs. Neat History: Toward an Expanded View of Women in Graphic Design.

    ERIC Educational Resources Information Center

    Scotford, Martha

    1994-01-01

    Argues that a "messy history" approach is required to discover, study and include the variety of alternative approaches and activities that are often part of women designers' professional lives. Proposes a typology of roles played by women in graphic design for further research. (SR)

  6. Sub-Model Partial Least Squares for Improved Accuracy in Quantitative Laser Induced Breakdown Spectroscopy

    NASA Astrophysics Data System (ADS)

    Anderson, R. B.; Clegg, S. M.; Frydenvang, J.

    2015-12-01

    One of the primary challenges faced by the ChemCam instrument on the Curiosity Mars rover is developing a regression model that can accurately predict the composition of the wide range of target types encountered (basalts, calcium sulfate, feldspar, oxides, etc.). The original calibration used 69 rock standards to train a partial least squares (PLS) model for each major element. By expanding the suite of calibration samples to >400 targets spanning a wider range of compositions, the accuracy of the model was improved, but some targets with "extreme" compositions (e.g. pure minerals) were still poorly predicted. We have therefore developed a simple method, referred to as "submodel PLS", to improve the performance of PLS across a wide range of target compositions. In addition to generating a "full" (0-100 wt.%) PLS model for the element of interest, we also generate several overlapping submodels (e.g. for SiO2, we generate "low" (0-50 wt.%), "mid" (30-70 wt.%), and "high" (60-100 wt.%) models). The submodels are generally more accurate than the "full" model for samples within their range because they are able to adjust for matrix effects that are specific to that range. To predict the composition of an unknown target, we first predict the composition with the submodels and the "full" model. Then, based on the predicted composition from the "full" model, the appropriate submodel prediction can be used (e.g. if the full model predicts a low composition, use the "low" model result, which is likely to be more accurate). For samples with "full" predictions that occur in a region of overlap between submodels, the submodel predictions are "blended" using a simple linear weighted sum. The submodel PLS method shows improvements in most of the major elements predicted by ChemCam and reduces the occurrence of negative predictions for low wt.% targets. Submodel PLS is currently being used in conjunction with ICA regression for the major element compositions of ChemCam data.
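
    The routing-and-blending logic described above is simple enough to sketch directly: the full-range model's estimate selects the applicable submodel, and in an overlap region the two candidate predictions are combined with a linear weight. Ranges, weights and numbers below are illustrative, not the ChemCam pipeline.

```python
def blend_submodel_predictions(full_pred, submodels):
    """Submodel-PLS style routing and blending (a sketch of the scheme described
    above; details of the weighting are assumed, not taken from the mission code).

    full_pred : composition predicted by the full-range (0-100 wt.%) model
    submodels : list of (lo, hi, pred) tuples - each submodel's trained range
                and its own prediction for this spectrum
    """
    # The full-model estimate selects which submodel range(s) apply
    hits = [(lo, hi, p) for lo, hi, p in submodels if lo <= full_pred <= hi]
    if len(hits) == 1:
        return hits[0][2]
    # In an overlap region, linearly ramp between the two submodel predictions
    (lo1, hi1, p1), (lo2, hi2, p2) = sorted(hits)[:2]
    w = (full_pred - lo2) / (hi1 - lo2)   # 0 at the start of the overlap, 1 at its end
    return (1.0 - w) * p1 + w * p2

subs = [(0, 50, 22.0), (30, 70, 48.0), (60, 100, 75.0)]  # e.g. SiO2 wt.% submodels
print(blend_submodel_predictions(full_pred=40.0, submodels=subs))  # -> 35.0
```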

  7. Seismic damage analysis of the outlet piers of arch dams using the finite element sub-model method

    NASA Astrophysics Data System (ADS)

    Song, Liangfeng; Wu, Mingxin; Wang, Jinting; Xu, Yanjie

    2016-09-01

    This study aims to analyze seismic damage of reinforced outlet piers of arch dams by the nonlinear finite element (FE) sub-model method. First, the dam-foundation system is modeled and analyzed, in which the effects of infinite foundation, contraction joints, and nonlinear concrete are taken into account. The detailed structures of the outlet pier are then simulated with a refined FE model in the sub-model analysis. In this way the damage mechanism of the plain (unreinforced) outlet pier is analyzed, and the effects of two reinforcement measures (i.e., post-tensioned anchor cables and reinforcing bar) on the dynamic damage to the outlet pier are investigated comprehensively. Results show that the plain pier is damaged severely by strong earthquakes while implementation of post-tensioned anchor cables strengthens the pier effectively. In addition, radiation damping strongly alleviates seismic damage to the piers.

  8. Meaning Making with Motion Is Messy: Developing a STEM Learning Community

    ERIC Educational Resources Information Center

    LópezLeiva, Carlos; Roberts-Harris, Deborah; von Toll, Elizabeth

    2016-01-01

    Through a collaborative effort between a sixth-grade teacher and two university faculty, we designed an integrated unit to learn about motion and we learned that an integrated teaching and learning experience about motion is MESSY (i.e., it includes movement, engagement, social interactions, spontaneity, yikes, and yippees!). We engaged in a…

  9. Messy Design: Organic Planning for Blended Learning

    ERIC Educational Resources Information Center

    Rankin, Andrea; Luzeckyj, Ann; Haggis, Jane; Gare, Callum

    2016-01-01

    In this paper we argue that a messy design process does not mitigate against sharing and transfer of artefacts across educational domains. In fact, such a process can aid in developing a model for learning and teaching that is reusable and authentic. We describe the planning and design of an integrated and interactive blended learning environment…

  10. An investigation of messy genetic algorithms

    NASA Technical Reports Server (NTRS)

    Goldberg, David E.; Deb, Kalyanmoy; Korb, Bradley

    1990-01-01

    Genetic algorithms (GAs) are search procedures based on the mechanics of natural selection and natural genetics. They combine the use of string codings or artificial chromosomes and populations with the selective and juxtapositional power of reproduction and recombination to motivate a surprisingly powerful search heuristic in many problems. Despite their empirical success, there has been a long-standing objection to the use of GAs in arbitrarily difficult problems. To address this, a new approach was launched, and results on a 30-bit, order-three deceptive problem were obtained using a new type of genetic algorithm called a messy genetic algorithm (mGA). Messy genetic algorithms combine the use of variable-length strings, a two-phase selection scheme, and messy genetic operators to effect a solution to the fixed-coding problem of standard simple GAs. The results of the study of mGAs in problems with nonuniform subfunction scale and size are presented. The mGA approach is summarized, both its operation and the theory of its use. Experiments on problems of varying scale, varying building-block size, and combined varying scale and size are presented.
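
    The variable-length encoding is the distinctive mechanic: a messy chromosome is a list of (locus, allele) pairs, so a locus may appear several times (overspecification, resolved first-come-first-served) or not at all (underspecification, filled in from a competitive template). A minimal sketch of expression plus the cut and splice operators, following that published description:

```python
import random

def express(chromosome, template):
    """Decode a messy chromosome of (locus, allele) pairs against a template."""
    bits = list(template)
    seen = set()
    for locus, allele in chromosome:
        if locus not in seen:          # first-come-first-served for duplicates
            bits[locus] = allele
            seen.add(locus)
    return bits

def cut(chromosome):
    """'Cut' messy operator: split a chromosome at a random point."""
    point = random.randrange(1, len(chromosome))
    return chromosome[:point], chromosome[point:]

def splice(a, b):
    """'Splice' messy operator: concatenate two chromosome pieces."""
    return a + b

template = [0, 0, 0, 0, 0]            # competitive template (all zeros here)
chrom = [(2, 1), (0, 1), (2, 0)]      # overspecifies locus 2, omits loci 1, 3, 4
print(express(chrom, template))       # -> [1, 0, 1, 0, 0]

left, right = cut(chrom)
print(express(splice(right, left), template))  # -> [1, 0, 0, 0, 0]: (2, 0) now wins
```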

  11. Large liquid rocket engine transient performance simulation system

    NASA Technical Reports Server (NTRS)

    Mason, J. R.; Southwick, R. D.

    1989-01-01

    Phase 1 of the Rocket Engine Transient Simulation (ROCETS) program consists of seven technical tasks: architecture; system requirements; component and submodel requirements; submodel implementation; component implementation; submodel testing and verification; and subsystem testing and verification. These tasks were completed. Phase 2 of ROCETS consists of two technical tasks: Technology Test Bed Engine (TTBE) model data generation; and system testing and verification. During this period, specific coding of the system processors was begun and the engineering representations of Phase 1 were expanded to produce a simple model of the TTBE. As the code was completed, some minor modifications to the system architecture, centering on the global variable common, GLOBVAR, were necessary to increase processor efficiency. The engineering modules completed during Phase 2 are listed: INJTOO - main injector; MCHBOO - main chamber; NOZLOO - nozzle thrust calculations; PBRNOO - preburner; PIPE02 - compressible flow without inertia; PUMPOO - polytropic pump; ROTROO - rotor torque balance/speed derivative; and TURBOO - turbine. Detailed documentation of these modules is in the Appendix. In addition to the engineering modules, several submodules were also completed. These submodules include combustion properties, component performance characteristics (maps), and specific utilities. Specific coding was begun on the system configuration processor; all functions necessary for multiple-module operation were completed, but the SOLVER implementation is still under development. The Verification Checkout Facility (VCF) allows interactive comparison of module results to stored data as well as providing an intermediate checkout of the processor code. After validation using the VCF, the engineering modules and submodules were used to build a simple TTBE model.

  12. A Real-Life Case Study of Audit Interactions--Resolving Messy, Complex Problems

    ERIC Educational Resources Information Center

    Beattie, Vivien; Fearnley, Stella; Hines, Tony

    2012-01-01

    Real-life accounting and auditing problems are often complex and messy, requiring the synthesis of technical knowledge in addition to the application of generic skills. To help students acquire the necessary skills to deal with these problems effectively, educators have called for the use of case-based methods. Cases based on real situations (such…

  13. On Least Squares Fitting Nonlinear Submodels.

    ERIC Educational Resources Information Center

    Bechtel, Gordon G.

    Three simplifying conditions are given for obtaining least squares (LS) estimates for a nonlinear submodel of a linear model. If these are satisfied, and if the subset of nonlinear parameters may be LS fit to the corresponding LS estimates of the linear model, then one attains the desired LS estimates for the entire submodel. Two illustrative…

  14. Vibration analysis of rotor systems using reduced subsystem models

    NASA Technical Reports Server (NTRS)

    Fan, Uei-Jiun; Noah, Sherif T.

    1989-01-01

    A general impedance method using reduced submodels has been developed for the linear dynamic analysis of rotor systems. Formulated in terms of either modal or physical coordinates of the subsystems, the method enables imbalance responses at specific locations of the rotor systems to be efficiently determined from a small number of 'master' degrees of freedom. To demonstrate the capability of this impedance approach, the Space Shuttle Main Engine high-pressure oxygen turbopump has been investigated to determine the bearing loads due to imbalance. Based on the same formulation, an eigenvalue analysis has been performed to study the system stability. A small 5-DOF model has been utilized to illustrate the application of the method to eigenvalue analysis. Because of its inherent characteristics of allowing formulation of reduced submodels, the impedance method can significantly increase the computational speed.

  15. Planetary nebula progenitors that swallow binary systems

    NASA Astrophysics Data System (ADS)

    Soker, Noam

    2016-01-01

    I propose that some irregular, messy planetary nebulae (PNe) owe their morphologies to triple-stellar evolution, where tight binary systems evolve inside and/or on the outskirts of the envelope of asymptotic giant branch (AGB) stars. In some cases the tight binary system can survive; in others it is destroyed. The tight binary system might break up, with one star leaving the system. In an alternative evolution, one of the stars of the broken-up tight binary system falls towards the AGB envelope with low specific angular momentum and drowns in the envelope. In a different type of destruction process, the drag inside the AGB envelope causes the tight binary system to merge. This releases gravitational energy within the AGB envelope, leading to a very asymmetrical envelope ejection, with an irregular and messy PN as a descendant. The evolution of the triple-stellar system can proceed as a full common envelope evolution or as a grazing envelope evolution. Both before and after destruction (if destruction takes place), the system might launch pairs of opposite jets. One pronounced signature of triple-stellar evolution might be a large departure from an axisymmetrical morphology in the descendant PN. I estimate that about one in eight non-spherical PNe is shaped by one of these triple-stellar evolutionary routes.

  16. Reliability of the Matson Evaluation of Social Skills with Youngsters (MESSY) for Children with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Matson, Johnny L.; Horovitz, Max; Mahan, Sara; Fodstad, Jill

    2013-01-01

    The purpose of this paper was to update the psychometrics of the "Matson Evaluation of Social Skills for Youngsters" ("MESSY") with children with Autism Spectrum Disorders (ASD), specifically with respect to internal consistency, split-half reliability, and inter-rater reliability. In Study 1, 114 children with ASD (Autistic Disorder, Asperger's…

  17. Evaluation of joint probability density function models for turbulent nonpremixed combustion with complex chemistry

    NASA Technical Reports Server (NTRS)

    Smith, N. S. A.; Frolov, S. M.; Bowman, C. T.

    1996-01-01

    Two types of mixing sub-models are evaluated in connection with a joint-scalar probability density function method for turbulent nonpremixed combustion. Model calculations are made and compared to simulation results for homogeneously distributed methane-air reaction zones mixing and reacting in decaying turbulence within a two-dimensional enclosed domain. The comparison is arranged to ensure that both the simulation and model calculations a) make use of exactly the same chemical mechanism, b) do not involve non-unity Lewis number transport of species, and c) are free from radiation loss. The modified Curl mixing sub-model was found to provide superior predictive accuracy over the simple relaxation-to-mean sub-model in the case studied. Accuracy to within 10-20% was found for global means of major species and temperature; however, nitric oxide prediction accuracy was lower and highly dependent on the choice of mixing sub-model. Both mixing sub-models were found to produce non-physical mixing behavior for mixture fractions removed from the immediate reaction zone. A suggestion for a further modified Curl mixing sub-model is made in connection with earlier work done in the field.
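
    For readers unfamiliar with the two sub-models: relaxation-to-mean (often called IEM) moves every particle toward the local mean composition, while modified Curl mixes randomly selected particle pairs toward their pair mean by a random extent, conserving the ensemble mean while decaying the variance. Below is a sketch of one modified Curl step over a particle ensemble; the pairing fraction is an assumed stand-in for the turbulent mixing frequency.

```python
import numpy as np

rng = np.random.default_rng(42)

def modified_curl_mix(phi, frac=0.2):
    """One modified-Curl mixing step over a particle ensemble (sketch).

    phi  : (N, n_scalars) array of particle compositions
    frac : fraction of particles paired and mixed this step (assumed parameter;
           in practice it is tied to the turbulent mixing frequency)
    """
    n = len(phi)
    n_pairs = int(frac * n / 2)
    idx = rng.permutation(n)[: 2 * n_pairs].reshape(n_pairs, 2)
    a = rng.uniform(0.0, 1.0, size=(n_pairs, 1))  # random mixing extent per pair
    i, j = idx[:, 0], idx[:, 1]
    mean = 0.5 * (phi[i] + phi[j])
    phi[i] += a * (mean - phi[i])   # each partner relaxes toward the pair mean
    phi[j] += a * (mean - phi[j])   # by the same random fraction a
    return phi

# Toy check: a bimodal mixture-fraction population homogenizes, mean preserved
phi = np.concatenate([np.zeros((500, 1)), np.ones((500, 1))])
for _ in range(200):
    phi = modified_curl_mix(phi)
print(phi.mean(), phi.std())   # mean stays ~0.5 while the variance decays
```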

  18. Kinematic and diabatic vertical velocity climatologies from a chemistry climate model

    NASA Astrophysics Data System (ADS)

    Marinke Hoppe, Charlotte; Ploeger, Felix; Konopka, Paul; Müller, Rolf

    2016-05-01

    The representation of vertical velocity in chemistry climate models is a key element for the representation of the large-scale Brewer-Dobson circulation in the stratosphere. Here, we diagnose and compare the kinematic and diabatic vertical velocities in the ECHAM/Modular Earth Submodel System (MESSy) Atmospheric Chemistry (EMAC) model. The calculation of kinematic vertical velocity is based on the continuity equation, whereas diabatic vertical velocity is computed using diabatic heating rates. Annual and monthly zonal mean climatologies of vertical velocity from a 10-year simulation are provided for both kinematic and diabatic vertical velocity representations. In general, both vertical velocity patterns show the main features of the stratospheric circulation, namely, upwelling at low latitudes and downwelling at high latitudes. The main difference in the vertical velocity pattern is a more uniform structure for diabatic and a noisier structure for kinematic vertical velocity. Diabatic vertical velocities show higher absolute values both in the upwelling branch in the inner tropics and in the downwelling regions in the polar vortices. Further, there is a latitudinal shift of the tropical upwelling branch in boreal summer between the two vertical velocity representations with the tropical upwelling region in the diabatic representation shifted southward compared to the kinematic case. Furthermore, we present mean age of air climatologies from two transport schemes in EMAC using these different vertical velocities and analyze the impact of residual circulation and mixing processes on the age of air. The age of air distributions show a hemispheric difference pattern in the stratosphere with younger air in the Southern Hemisphere and older air in the Northern Hemisphere using the transport scheme with diabatic vertical velocities. Further, the age of air climatology from the transport scheme using diabatic vertical velocities shows a younger mean age of air in the inner tropical upwelling branch and an older mean age in the extratropical tropopause region.
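
    Schematically, the two representations correspond to

$$\frac{\partial \omega_{\mathrm{kin}}}{\partial p} = -\nabla_p \cdot \mathbf{v} \qquad \text{(kinematic: integrate the continuity equation in pressure coordinates)},$$

$$\dot{\theta} = \frac{Q}{c_p}\left(\frac{p_0}{p}\right)^{R/c_p}, \qquad w_{\mathrm{diab}} \approx \frac{\dot{\theta}}{\partial \theta / \partial z} \qquad \text{(diabatic: cross-isentropic velocity from the heating rate } Q\text{)};$$

    the kinematic velocity thus inherits the noise of the divergence field, while the diabatic velocity inherits the smoothness of the heating rates, consistent with the structures described above.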

  19. Jane Jacobs and ‘The Need for Aged Buildings’: Neighbourhood Historical Development Pace and Community Social Relations

    EPA Science Inventory

    Jacobs argued that grand planning schemes intending to redevelop large swaths of a city according to a central theoretical framework fail because planners do not understand that healthy cities are organic, spontaneous, messy, complex systems that result from evolutionary proces...

  20. Effects of real time control of sewer systems on treatment plant performance and receiving water quality.

    PubMed

    Frehmann, T; Niemann, A; Ustohal, P; Geiger, W F

    2002-01-01

    Four individual mathematical submodels simulating different subsystems of urban drainage were coupled into an integral model. The submodels (for surface runoff, flow in the sewer system, the wastewater treatment plant and the receiving water) were calibrated on the basis of field data measured in an existing urban catchment investigation. Three different strategies for controlling the discharge in the sewer network were defined and implemented in the integral model. The impact of these control measures was quantified by representative immission state parameters of the receiving water. The results reveal that the effect of a control measure may be ambivalent, depending on the considered component of a complex drainage system. Furthermore, it is demonstrated that the drainage system in the catchment investigation can be considerably optimised towards environmental protection and operational efficiency if an appropriate real-time control on the integral scale is applied.

  1. An integrated draft gear model with the consideration of wagon body structural characteristics

    NASA Astrophysics Data System (ADS)

    Chang, Gao; Liangliang, Yang; Weihua, Ma; Min, Zhang; Shihui, Luo

    2018-03-01

    With increasing railway wagon axle loads and growing marshalling quantities, problems caused by the impact and vibration of vehicles are increasingly serious, leading to damage of vehicle structures and components. In order to improve the reliability of the longitudinal connection model for vehicle impact tests, a new railway wagon longitudinal connection model, based on the characteristics of longitudinal force transmission of vehicles and their parts, was developed to simulate and analyse vehicle impact tests. In this model, carbodies and bogies are simplified to a particle system that can vibrate in the longitudinal direction, corresponding to a stiffness-damping vibration system. The model consists of three sub-models: a coupler and draft gear sub-model, a centre plate sub-model, and a carbody structure sub-model. Compared with conventional draft gear models, the new model considers the geometrical and mechanical relations of friction draft gears and adds the sticking, sliding and impact behaviours between centre plate and centre bowl. In addition, virtual springs between discrete carbodies are built to describe the structural deformation of the carbody. A longitudinal dynamics computation program based on vehicle impact tests was implemented for simulation. Comparisons and analyses of the train dynamics outputs and vehicle impact tests were conducted. Simulation results indicate that the new wagon longitudinal connection model provides a practical application environment for wagons, and that the simulated vehicle impact test outputs agree with those of field tests. The new model can also be used to study the longitudinal vibrations of different vehicles, of carbody and bogie, and of the carbody itself.
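
    The "discrete carbodies plus virtual springs" idea reduces, in its simplest form, to a chain of lumped masses joined by spring-damper elements, with one wagon striking the stationary rest. The sketch below uses linear elements and illustrative parameter values; the paper's coupler/draft gear and centre plate sub-models would replace the linear law with their nonlinear force characteristics.

```python
import numpy as np

def impact_chain(n=4, m=2.0e4, k=2.0e7, c=5.0e4, v0=2.0, dt=1e-4, steps=20000):
    """Chain of n lumped masses (kg) joined by linear spring-damper elements
    (N/m, N.s/m); the first mass strikes the stationary rest at v0 (m/s).
    Returns the peak inter-mass force. All parameter values are illustrative."""
    x = np.zeros(n)              # longitudinal displacements (m)
    v = np.zeros(n)
    v[0] = v0                    # striking vehicle
    f_max = 0.0
    for _ in range(steps):
        dx = x[:-1] - x[1:]      # relative displacement across each connection
        dv = v[:-1] - v[1:]
        f = k * dx + c * dv      # linear stand-in for the nonlinear draft gear law
        a = np.zeros(n)
        a[:-1] -= f / m          # reaction on the leading partner
        a[1:] += f / m           # action on the trailing partner
        v += a * dt
        x += v * dt
        f_max = max(f_max, float(np.abs(f).max()))
    return f_max

print(f"peak longitudinal force ~ {impact_chain() / 1e3:.0f} kN")
```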

  2. The mGA1.0: A common LISP implementation of a messy genetic algorithm

    NASA Technical Reports Server (NTRS)

    Goldberg, David E.; Kerzic, Travis

    1990-01-01

    Genetic algorithms (GAs) are finding increased application in difficult search, optimization, and machine learning problems in science and engineering. Increasing demands are being placed on algorithm performance, and the remaining challenges of genetic algorithm theory and practice are becoming increasingly unavoidable. Perhaps the most difficult of these challenges is the so-called linkage problem. Messy GAs were created to overcome the linkage problem of simple genetic algorithms by combining variable-length strings, gene expression, messy operators, and a nonhomogeneous phasing of evolutionary processing. Results on a number of difficult deceptive test functions are encouraging, with the mGA always finding global optima in a polynomial number of function evaluations. Theoretical and empirical studies are continuing, and a first version of a messy GA is ready for testing by others. A Common LISP implementation called mGA1.0 is documented and related to the basic principles and operators developed by Goldberg et al. (1989, 1990). Although the code was prepared with care, it is not a general-purpose code, only a research version. Important data structures and global variables are described. Thereafter brief function descriptions are given, and sample input data are presented together with sample program output. A source listing with comments is also included.
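
    The messy-GA mechanics summarized here, variable-length (locus, allele) strings, expression against a competitive template, and cut-and-splice operators, can be caricatured briefly. The sketch below is in Python rather than the documented Common LISP, and substitutes OneMax for the deceptive test functions; it illustrates the operators, not a port of mGA1.0.

      # Toy rendering of messy-GA mechanics: variable-length chromosomes of
      # (locus, allele) genes, first-come expression for over-specified loci,
      # a competitive template for under-specified ones, and cut-and-splice
      # in place of crossover. OneMax stands in for deceptive test functions.
      import random

      L = 8
      template = [0] * L                     # competitive template (assumed)

      def express(chrom):
          """Decode a messy chromosome into a fully specified bit string."""
          bits, seen = list(template), set()
          for locus, allele in chrom:        # first occurrence of a locus wins
              if locus not in seen:
                  bits[locus] = allele
                  seen.add(locus)
          return bits

      def fitness(chrom):
          return sum(express(chrom))         # OneMax stand-in

      def cut_splice(a, b):
          """Cut each parent at a random point, splice the pieces crosswise."""
          ca, cb = random.randint(0, len(a)), random.randint(0, len(b))
          return a[:ca] + b[cb:], b[:cb] + a[ca:]

      pop = [[(random.randrange(L), random.randint(0, 1)) for _ in range(4)]
             for _ in range(30)]
      for _ in range(200):
          a, b = random.sample(sorted(pop, key=fitness, reverse=True)[:10], 2)
          pop.sort(key=fitness)
          pop[0], pop[1] = cut_splice(a, b)  # children replace the two worst
      print("best expressed string:", express(max(pop, key=fitness)))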

  3. Measurement and modeling of advanced coal conversion processes, Volume II

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Solomon, P.R.; Serio, M.A.; Hamblen, D.G.

    1993-06-01

    A two-dimensional, steady-state model for describing a variety of reactive and nonreactive flows, including pulverized coal combustion and gasification, is presented. The model, referred to as 93-PCGC-2, is applicable to cylindrical, axisymmetric systems. Turbulence is accounted for in both the fluid mechanics equations and the combustion scheme. Radiation from gases, walls, and particles is taken into account using a discrete ordinates method. The particle phase is modeled in a Lagrangian framework, such that mean paths of particle groups are followed. A new coal-general devolatilization submodel (FG-DVC) with coal swelling and char reactivity submodels has been added.

  4. Preparing the EPIC Model for Evaluating Bioenergy Production Systems: A Test of the Denitrification Submodel using a Long-Term Dataset

    NASA Astrophysics Data System (ADS)

    Manowitz, D. H.; Schwab, D. E.; Izaurralde, R. C.

    2010-12-01

    As bioenergy production continues to increase, it is important to be able to predict not only the crop yields that are expected from future production, but also the various environmental impacts that will accompany it. Therefore, models that can be used to make such predictions must be validated against as many of these agricultural outputs as possible. The Environmental Policy Integrated Climate (EPIC) model is a widely used and tested model for simulating many agricultural ecosystem processes including plant growth, crop yield, carbon and nutrient cycling, wind and water erosion, runoff, leaching, as well as changes in soil physical and chemical properties. This model has undergone many improvements, including the addition of a process-based denitrification submodel. Here we evaluate the performance of EPIC in its ability to simulate nitrous oxide (N2O) fluxes and related variables as observed in selected treatments of the Long-Term Ecological Research (LTER) cropping systems study at Kellogg Biological Station (KBS). We will provide a brief description of the EPIC model in the context of bioenergy production, describe the denitrification submodel, and compare simulated and observed values of crop yields, N2O emissions, soil carbon dynamics, and soil moisture.

  5. A review of unmanned aircraft system ground risk models

    NASA Astrophysics Data System (ADS)

    Washington, Achim; Clothier, Reece A.; Silva, Jose

    2017-11-01

    There is much effort being directed towards the development of safety regulations for unmanned aircraft systems (UAS). National airworthiness authorities have advocated the adoption of a risk-based approach, whereby regulations are driven by the outcomes of a systematic process to assess and manage identified safety risks. Subsequently, models characterising the primary hazards associated with UAS operations have now become critical to the development of regulations and in turn, to the future of the industry. Key to the development of airworthiness regulations for UAS is a comprehensive understanding of the risks UAS operations pose to people and property on the ground. A comprehensive review of the literature identified 33 different models (and component sub models) used to estimate ground risk posed by UAS. These models comprise failure, impact location, recovery, stress, exposure, incident stress and harm sub-models. The underlying assumptions and treatment of uncertainties in each of these sub-models differ significantly between models, which can have a significant impact on the development of regulations. This paper reviews the state-of-the-art in research into UAS ground risk modelling, discusses how the various sub-models relate to the different components of the regulation, and explores how model-uncertainties potentially impact the development of regulations for UAS.
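
    The generic structure of such models, a chain of multiplied sub-model terms, can be made concrete with a toy calculation. Every number in the sketch below is invented for illustration; the reviewed models differ precisely in how each factor is derived.

      # Hedged numerical sketch of the generic sub-model chain the review
      # describes: expected ground fatalities as a product of failure,
      # impact-location, exposure, incident-stress (sheltering) and harm
      # terms. Every number below is invented for illustration.
      failure_rate = 1e-4        # failure sub-model: failures per flight hour
      p_populated = 0.3          # impact-location sub-model
      density = 5e-4             # exposure sub-model: people per m2
      lethal_area = 10.0         # m2 affected by the falling aircraft
      shelter_factor = 0.5       # incident-stress sub-model: shielding
      p_fatal_given_hit = 0.9    # harm sub-model

      fatalities_per_hour = (failure_rate * p_populated * density *
                             lethal_area * (1.0 - shelter_factor) *
                             p_fatal_given_hit)
      print(f"expected fatalities per flight hour: {fatalities_per_hour:.2e}")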

  6. Detailed model for practical pulverized coal furnaces and gasifiers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, P.J.; Smoot, L.D.

    1989-08-01

    This study has been supported by a consortium of nine industrial and governmental sponsors. Work was initiated on May 1, 1985 and completed August 31, 1989. The central objective of this work was to develop, evaluate and apply a practical combustion model for utility boilers, industrial furnaces and gasifiers. Key accomplishments have included: development of an advanced first-generation computer model for combustion in three-dimensional furnaces; development of a new first-generation fouling and slagging submodel; detailed evaluation of an existing NOx submodel; development and evaluation of an improved radiation submodel; preparation and distribution of a three-volume final report: (a) Volume 1: General Technical Report; (b) Volume 2: PCGC-3 User's Manual; (c) Volume 3: Data Book for Evaluation of Three-Dimensional Combustion Models; and organization of a user's workshop on the three-dimensional code. The furnace computer model developed under this study requires further development before it can be applied generally to all applications; however, it can be used now by specialists for many specific applications, including non-combusting systems and combusting gaseous systems. A new combustion center was organized and work was initiated to continue the important research effort initiated by this study. 212 refs., 72 figs., 38 tabs.

  7. A messy reality: an analysis of New Zealand's elective surgery scoring system via media sources, 2000–2006

    PubMed Central

    Derrett, Sarah; Cousins, Kim; Gauld, Robin

    2013-01-01

    Waiting lists for elective procedures are a characteristic feature of tax-funded universal health systems. New Zealand has gained a reputation for its ‘booking system’ for waiting list management, introduced in the early 1990s. The New Zealand system uses criteria to ‘score’ and then ‘book’ qualifying patients for surgery. This article aims to (i) describe key issues focused on by the media, (ii) identify local strategies and (iii) present evidence of variation. Newspaper sources were searched (2000–2006). A total of 1199 booking system stories were identified. Findings demonstrate, from a national system perspective, the extraordinarily difficult nature of maintaining overall control and coordination. Equity and national consistency are affected when hospitals respond to local pressure by reducing access to elective treatment. Findings suggest that central government probably needs to be closely involved in local-level management and policy adjustments; that through the study period the New Zealand system appears to have been largely out of the control of government; and that governments elsewhere may need to be cautious when considering developing similar systems. Developing and implementing scoring and booking systems may always be a ‘messy reality’, with unintended consequences and with regional differences in service management and access thrown into stark relief. Copyright © 2012 John Wiley & Sons, Ltd. PMID:22815091

  8. Convergence experiments with a hydrodynamic model of Port Royal Sound, South Carolina

    USGS Publications Warehouse

    Lee, J.K.; Schaffranek, R.W.; Baltzer, R.A.

    1989-01-01

    A two-dimensional, depth-averaged, finite-difference flow/transport model, SIM2D, is being used to simulate tidal circulation and transport in the Port Royal Sound, South Carolina, estuarine system. Models of a subregion of the Port Royal Sound system have been derived from an earlier-developed model of the entire system having a grid size of 600 ft. The submodels were implemented with grid sizes of 600, 300, and 150 ft in order to determine the effects of changes in grid size on computed flows in the subregion, which is characterized by narrow channels and extensive tidal flats that flood and dewater with each rise and fall of the tide. Tidal amplitudes changed by less than 5 percent as the grid size was decreased. Simulations were performed with the 300-foot submodel for time steps of 60, 30, and 15 s. Study results are discussed.

  9. On the sub-model errors of a generalized one-way coupling scheme for linking models at different scales

    NASA Astrophysics Data System (ADS)

    Zeng, Jicai; Zha, Yuanyuan; Zhang, Yonggen; Shi, Liangsheng; Zhu, Yan; Yang, Jinzhong

    2017-11-01

    Multi-scale modeling of localized groundwater flow problems in a large-scale aquifer has been extensively investigated in the context of the cost-benefit trade-off. An alternative is to couple parent and child models with different spatial and temporal scales, which may result in non-trivial sub-model errors in the local areas of interest. Basically, such errors in the child models originate from deficiencies in the coupling method, as well as from inadequacies in the spatial and temporal discretizations of the parent and child models. In this study, we investigate the sub-model errors within a generalized one-way coupling scheme, chosen for its numerical stability and efficiency, which enables more flexibility in choosing sub-models. To couple the models at different scales, the head solution at the parent scale is delivered downward onto the child boundary nodes by means of spatial and temporal head interpolation. The efficiency of the coupled model is improved either by refining the grid or time step size in the parent and child models, or by carefully locating the sub-model boundary nodes. The temporal truncation errors in the sub-models can be significantly reduced by an adaptive local time-stepping scheme. The generalized one-way coupling scheme is promising for handling multi-scale groundwater flow problems with complex stresses and heterogeneity.
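
    The central coupling step, delivering the parent head solution onto the child boundary nodes by interpolation in time and space, can be shown in a few lines. The sketch below uses hypothetical grids and head values.

      # Minimal sketch of the parent-to-child hand-off described above:
      # coarse-model heads are interpolated linearly in time, then in space,
      # onto the child model's boundary nodes. Grids and head values are
      # hypothetical; only numpy's interpolation is used.
      import numpy as np

      parent_x = np.linspace(0.0, 1000.0, 11)          # parent nodes (m)
      parent_t = np.array([0.0, 86400.0])              # parent time levels (s)
      parent_head = np.array([np.linspace(10.0, 8.0, 11),    # heads at t0
                              np.linspace(9.5, 7.8, 11)])    # heads at t1

      child_nodes = np.array([180.0, 230.0, 280.0])    # child boundary nodes
      child_time = 43200.0                             # child step, mid-interval

      w = (child_time - parent_t[0]) / (parent_t[1] - parent_t[0])
      head_now = (1.0 - w) * parent_head[0] + w * parent_head[1]   # in time
      child_bc = np.interp(child_nodes, parent_x, head_now)        # in space
      print("child boundary heads (m):", child_bc.round(3))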

  10. The economic efficiency of conservation measures for amphibians in organic farming--results from bio-economic modelling.

    PubMed

    Schuler, Johannes; Sattler, Claudia; Helmecke, Angela; Zander, Peter; Uthes, Sandra; Bachinger, Johann; Stein-Bachinger, Karin

    2013-01-15

    This paper presents a whole-farm bio-economic modelling approach for the assessment and optimisation of amphibian conservation conditions, applied to the example of a large-scale organic farm in North-Eastern Germany. The assessment focuses mainly on habitat quality as affected by conservation measures such as specifically adapted crop production activities (CPAs) and in-field buffer strips for the European tree frog (Hyla arborea), considering also interrelations with other amphibian species (i.e. common spadefoot toad (Pelobates fuscus), fire-bellied toad (Bombina bombina)). The aim of the approach is to understand, analyse and optimise the relationships between the ecological and economic performance of an organic farming system, based on the expectation that amphibians are affected differently by different CPAs. The modelling system consists of a set of different sub-models that generate a farm model on the basis of environmentally evaluated CPAs. A crop-rotation sub-model provides a set of agronomically sustainable crop rotations that ensure an overall sufficient nitrogen supply and control weed, pest and disease infestations. An economic sub-model calculates the gross margins for each possible CPA, including costs of inputs such as labour and machinery. The conservation effects of the CPAs are assessed with an ecological sub-model that evaluates the potential negative or positive effect that each work step of a CPA has on amphibians. A mathematical programming sub-model calculates the optimal farm organisation, taking into account the limiting factors of the farm (e.g. labour, land) as well as ecological improvements. In sequential model runs, the habitat quality is improved by the model while the highest possible gross margin is still achieved. The results indicate that the model can be used to show the scope of action that a farmer has to improve habitat quality by reducing damage to amphibian populations on the farm's land during agricultural activities. Depending on the level of habitat quality that is aimed at, different measures may provide the most efficient solution. Lower levels of conservation can be achieved with low-cost adapted CPAs, such as an increased cutting height, reduced sowing density and grubbing instead of ploughing. Higher levels of conservation require, e.g., grassland-like managed buffer strips around ponds in sensitive areas, which incur much higher on-farm conservation costs. Copyright © 2012 Elsevier Ltd. All rights reserved.
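
    The role of the mathematical programming sub-model can be miniaturized as a linear program: maximize gross margin over CPA areas subject to land and labour limits and a minimum habitat-quality score. All coefficients in the sketch below are invented, and a plain linear program is only a stand-in for the paper's full farm model.

      # Stylized stand-in for the mathematical programming sub-model: choose
      # hectares of crop production activities (CPAs) to maximise gross
      # margin under land and labour limits plus a minimum habitat-quality
      # score from the ecological sub-model. All coefficients are invented.
      from scipy.optimize import linprog

      # CPAs: conventional cereal, amphibian-adapted cereal, buffer strip
      gross_margin = [450.0, 380.0, -120.0]     # EUR per ha
      habitat_score = [0.1, 0.6, 1.0]           # ecological sub-model output
      labour = [8.0, 9.0, 2.0]                  # hours per ha

      res = linprog(
          c=[-g for g in gross_margin],          # linprog minimises
          A_ub=[labour, [-s for s in habitat_score]],
          b_ub=[9000.0, -300.0],                 # labour cap; habitat >= 300
          A_eq=[[1.0, 1.0, 1.0]],
          b_eq=[1000.0],                         # 1000 ha must be allocated
          bounds=[(0.0, None)] * 3,
      )
      print(dict(zip(["conventional", "adapted", "buffer"], res.x.round(1))),
            "gross margin:", round(-res.fun))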

  11. Field testing of thermal canopy models in a spruce-fir forest

    NASA Technical Reports Server (NTRS)

    1990-01-01

    Recent advances in remote sensing technology allow the use of the thermal infrared region to gain information about vegetative surfaces. Extending existing models to account for thermal radiance transfers within rough forest canopies is of paramount importance, because the processes of interest in the physical climate system and biogeochemical cycles are thermally mediated. Model validation experiments were conducted at a well-established boreal forest/northern hardwood forest ecotone research site located in central Maine. Data were collected to allow spatial and temporal validation of thermal models. Emphasis was placed primarily upon enhancing submodels of stomatal behavior, and secondarily upon enhancing boundary layer resistance submodels and accounting for thermal storage in soil and vegetation.

  12. A model and numerical method for compressible flows with capillary effects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmidmayer, Kevin, E-mail: kevin.schmidmayer@univ-amu.fr; Petitpas, Fabien, E-mail: fabien.petitpas@univ-amu.fr; Daniel, Eric, E-mail: eric.daniel@univ-amu.fr

    2017-04-01

    A new model for interface problems with capillary effects in compressible fluids is presented together with a specific numerical method to treat capillary flows and pressure wave propagation. This new multiphase model is in agreement with the physical principles of conservation and respects the second law of thermodynamics. A new numerical method is also proposed where the global system of equations is split into several submodels. Each submodel is hyperbolic or weakly hyperbolic and can be solved with an adequate numerical method. This method is tested and validated through comparisons with analytical solutions (Laplace law) and with experimental results on droplet breakup induced by a shock wave.
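
    The Laplace-law validation mentioned above has a simple analytical target, sketched below with made-up values.

      # The Laplace-law check reduces to the capillary pressure jump across
      # a curved interface; a trivial reference computation follows.
      def laplace_pressure_jump(sigma, radius, spherical=True):
          """2*sigma/R across a droplet surface, sigma/R for a 2D interface."""
          return (2.0 if spherical else 1.0) * sigma / radius

      # water droplet: sigma = 0.072 N/m, R = 1 mm -> jump of 144 Pa
      print(laplace_pressure_jump(0.072, 1e-3))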

  13. Leading as Emotional Management Work in High Risk Times: The Counterintuitive Impulses of Performativity and Passion

    ERIC Educational Resources Information Center

    Blackmore, Jill

    2004-01-01

    This paper explores, through a case study of educational restructuring in Victoria, Australia, how school leaders in a public education system in Australia mediate reform discourses emphasizing managerial and market accountability and the emotional and messy work of teaching and leading. These accountability exercises were often seen by teachers…

  14. Validation of the MESSi among adult workers and young students: General health and personality correlates.

    PubMed

    Díaz-Morales, Juan F; Randler, Christoph; Arrona-Palacios, Arturo; Adan, Ana

    2017-01-01

    The aim of this study was to provide validity evidence for the Spanish version of the Morningness-Eveningness-Stability Scale - improved (MESSi), a novel assessment of circadian typology which considers the subjective phase and amplitude via morning affect (MA), eveningness (EV) and distinctness (DI; subjective amplitude) sub-scales. Convergent validity of the MESSi with the reduced Morningness-Eveningness Questionnaire (rMEQ) and relationships with the General Health Questionnaire (GHQ-12) and sensitivity to reward and punishment (SR and SP) were analyzed. Two different Spanish samples, young undergraduate students (n = 891, 18-30 years) and adult workers (n = 577, 31-65 years), participated in this study. Exploratory structural equation modeling (ESEM) of the MESSi displayed acceptable fit for a three-factor measurement model. Percentiles of the MA, EV and DI sub-scales were obtained for students and adults. The MESSi showed good convergent validity with the rMEQ scores, with higher correlation coefficients for the MA and EV sub-scales and a lower one for DI. In both young students and adult workers, MA was negatively related to the GHQ-12 and SP, but the percentage of explained variance (6% and 3%) was lower than for the positive correlations between DI, the GHQ-12 and SP (20% and 13%). Morning types presented higher MA and lower EV scores than the other two typologies among both students and adult workers, whereas differences in DI were found only among students (lowest in evening types). Candidates for psychological symptoms and mental disorders ("true cases"), under the clinical cut-off criteria of the GHQ-12, showed lower MA and higher DI among students, whereas only DI was higher for "true cases" among adults. These results support the view that subjective amplitude is a factor related to, but also differentiated from, morningness-eveningness (the preferred time for a certain activity). The measure of amplitude might be more important than circadian phase for health consequences.

  15. Decomposition of timed automata for solving scheduling problems

    NASA Astrophysics Data System (ADS)

    Nishi, Tatsushi; Wakatake, Masato

    2014-03-01

    A decomposition algorithm for scheduling problems based on a timed automata (TA) model is proposed. The problem is represented as an optimal state transition problem for the TA. The model comprises the parallel composition of submodels such as jobs and resources. The procedure of the proposed methodology can be divided into two steps. The first step is to decompose the TA model into several submodels by using a decomposability condition. The second step is to combine the individual solutions of the subproblems for the decomposed submodels by the penalty function method. A feasible solution for the entire model is derived through iteratively solving the subproblem for each submodel. The proposed methodology is applied to flowshop and jobshop scheduling problems. Computational experiments demonstrate the effectiveness of the proposed algorithm compared with a conventional TA scheduling algorithm without decomposition.
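
    The two-step procedure, decompose into per-job submodels and then coordinate their solutions through a penalty on resource conflicts, can be mimicked on a toy problem. The sketch below is not the authors' timed-automata formulation; it only illustrates penalty-based coordination for two jobs sharing one machine.

      # Toy illustration of penalty-based coordination: each job submodel
      # re-optimises its own start time against the other jobs' current
      # schedules, with the penalty on machine conflicts raised until the
      # combined schedule is conflict-free.
      def overlap(s1, d1, s2, d2):
          return max(0, min(s1 + d1, s2 + d2) - max(s1, s2))

      durations = {"J1": 3, "J2": 2}
      start = {"J1": 0, "J2": 0}         # both initially claim the machine

      for penalty in [1, 2, 4, 8, 16]:
          for job, dur in durations.items():
              others = [(start[o], durations[o]) for o in durations if o != job]
              # job submodel: minimise completion time plus penalised conflicts
              start[job] = min(
                  range(10),
                  key=lambda s: s + dur + penalty * sum(
                      overlap(s, dur, so, do) for so, do in others))
          if overlap(start["J1"], durations["J1"],
                     start["J2"], durations["J2"]) == 0:
              break
      print(start)    # e.g. {'J1': 2, 'J2': 0}: a feasible combined schedule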

  16. The application of dam break monitoring based on BJ-2 images

    NASA Astrophysics Data System (ADS)

    Cui, Yan; Li, Suju; Wu, Wei; Liu, Ming

    2018-03-01

    Flood is one of the major disasters in China. During the flood season, intense, wide-ranging rainstorms strike the eastern part of China, where the flood control capacity of some rivers is low, so flood disasters are abrupt and cause heavy direct economic losses. In this paper, based on high-spatio-temporal-resolution BJ-2 remote sensing data, reference images, the 30-meter Global Land Cover Dataset (GlobeLand30) and basic geographic data, a dam-break monitoring model is formed which includes a BJ-2 data processing sub-model, a flood inundation range monitoring sub-model, a dam-break change monitoring sub-model and a crop inundation monitoring sub-model. A case analysis in Poyang County, Jiangxi Province, on 20 June 2016 shows that the model has a high precision and can monitor the flood inundation range, the crop inundation range and the breach.

  17. A multiphysics 3D model of tissue growth under interstitial perfusion in a tissue-engineering bioreactor.

    PubMed

    Nava, Michele M; Raimondi, Manuela T; Pietrabissa, Riccardo

    2013-11-01

    The main challenge in engineered cartilage consists in understanding and controlling the growth process towards a functional tissue. Mathematical and computational modelling can help in the optimal design of the bioreactor configuration and in a quantitative understanding of important culture parameters. In this work, we present a multiphysics computational model for the prediction of cartilage tissue growth in an interstitial perfusion bioreactor. The model consists of two separate but coupled sub-models, one two-dimensional (2D) and one three-dimensional (3D). These sub-models account for the hydrodynamic microenvironment imposed by the bioreactor, using the Navier-Stokes and mass transport equations, as well as for the biomass growth. The biomass, treated as a phase comprising cells and the synthesised extracellular matrix, has been modelled using a moving boundary approach. In particular, the boundary at the fluid-biomass interface moves with a velocity depending on the local oxygen concentration and viscous stress. We show that the 2D sub-model systematically overestimates the predicted parameters, such as oxygen concentration and wall shear stress, with respect to the 3D sub-model, and thus also the tissue growth, which directly depends on these parameters. This implies that further predictive models for tissue growth should take the three-dimensionality of the problem into account for any scaffold microarchitecture.

  18. ME(SSY)**2: Monte Carlo Code for Star Cluster Simulations

    NASA Astrophysics Data System (ADS)

    Freitag, Marc Dewi

    2013-02-01

    ME(SSY)**2 stands for “Monte-carlo Experiments with Spherically SYmmetric Stellar SYstems." This code simulates the long-term evolution of spherical clusters of stars; it was devised specifically to treat dense galactic nuclei. It is based on the pioneering Monte Carlo scheme proposed by Hénon in the 1970s and includes all relevant physical ingredients (2-body relaxation, stellar mass spectrum, collisions, tidal disruption, ...). It is basically a Monte Carlo resolution of the Fokker-Planck equation. It can cope with any stellar mass spectrum or velocity distribution. Being a particle-based method, it also allows one to take stellar collisions into account in a very realistic way. This unique code, featuring the most important physical processes, allows million-particle simulations spanning a Hubble time in a few CPU days on standard personal computers, and provides a wealth of data rivaled only by N-body simulations. The current version of the software requires the use of routines from "Numerical Recipes in Fortran 77" (http://www.nrbook.com/a/bookfpdf.php).

  19. Defect, Kinetics and Heat Transfer of CdTe Bridgman Growth without Wall Contact

    NASA Technical Reports Server (NTRS)

    Larson, D. J., Jr.; Zhang, H.

    2003-01-01

    A detached growth mechanism has been proposed, similar to that proposed by Duffar et al., and used to study the current detached growth system. From numerical results, we can conclude that detached growth is more likely to appear if the growth and wetting angles are large and the meniscus is flat. The detached thickness depends on the growth angle, the wetting angle, and the gap width and shape of the fins. The model can also explain why detached growth does not happen for metals, in which the growth angle is almost zero. Since the growth angle of CdZnTe cannot be changed, to promote detached growth the number density of the fins should be low and the wetting angle should be high. Also, a much smaller fin gap width should be used in the ground experiment, where the detached gap width is much smaller. The shape of the fins has a minor influence on detached growth. An integrated numerical model for detached solidification has been developed, combining a global heat transfer sub-model and a wall contact sub-model. The global heat transfer sub-model accounts for heat and mass transfer in the multiphase system, convection in the melt, macro-segregation, and interface dynamics. The location and dynamics of the solidification interface are accurately tracked by a multizone adaptive grid generation scheme. The wall contact sub-model accounts for the meniscus dynamics at the three-phase boundary. Simulations have been performed for crystal growth in a conventional ampoule and a specially designed ampoule to understand the benefits of detached solidification and its impact on crystalline structural quality, e.g., stoichiometry, macro-segregation, and stress. From the simulation results, both the Grashof and Marangoni numbers have significant effects on the shape of the growth front, the Zn concentration distribution, and radial segregation. The integrated model can be used in designing apparatus and determining the optimal geometry for detached solidification in space and on the ground.

  20. A Mathematical Model for an Educational System.

    ERIC Educational Resources Information Center

    McReynolds, William Peter

    The document's contents are divided into (1) the basic flow model of an educational system and its application to the secondary school system of Ontario and (2) a group of interrelated submodels that describe the entrance to higher education in considerably finer detail. In the first section, the principal variable of the model--the transition…

  1. A new MRI land surface model HAL

    NASA Astrophysics Data System (ADS)

    Hosaka, M.

    2011-12-01

    A land surface model, HAL, has been newly developed for MRI-ESM1. It is used for the CMIP simulations. HAL consists of three submodels in the current version: SiByl (vegetation), SNOWA (snow) and SOILA (soil). It also contains a land coupler, LCUP, which connects the submodels to an atmospheric model. The vegetation submodel SiByl has surface vegetation processes similar to JMA/SiB (Sato et al. 1987, Hirai et al. 2007). SiByl has 2 vegetation layers (canopy and grass) and calculates heat, moisture, and momentum fluxes between the land surface and the atmosphere. The snow submodel SNOWA can have any number of snow layers; the maximum is set to 8 for the CMIP5 experiments. Temperature, SWE, density, grain size and the aerosol deposition contents of each layer are predicted. The snow properties, including the grain size, are predicted by snow metamorphism processes (Niwano et al., 2011), and the snow albedo is diagnosed from the aerosol mixing ratio, the snow properties and the temperature (Aoki et al., 2011). The soil submodel SOILA can also have any number of soil layers, and is composed of 14 soil layers in the CMIP5 experiments. The temperature of each layer is predicted by solving heat conduction equations. The soil moisture is predicted by solving the Darcy equation, in which the hydraulic conductivity depends on the soil moisture. The land coupler LCUP is designed to enable complicated constructions of the submodels: HAL can include several competing submodels (precise, detailed ones and simpler ones), and they can run in the same simulation. LCUP enables a 2-step model validation, in which at the first step we compare the results of the detailed submodels directly with in-situ observations, and at the second step we compare them with those of the simpler submodels. When the performance of the detailed submodels is good, we can improve the simpler ones by using the detailed ones as reference models.

  2. Generalized mathematical model of red muds’ thickener of alumina production

    NASA Astrophysics Data System (ADS)

    Fedorova, E. R.; Vinogradova, A. A.

    2018-03-01

    The article describes the construction principle of a generalized mathematical model of the red mud thickener in alumina production. The model consists of sub-models of the flocculation zone containing the solid-fraction feed slurry, the free-fall and cramped (hindered) sedimentation zones (effective sedimentation zones), and the bleaching (clarification) zone. The generalized mathematical model of the thickener allows predicting the content of the solid fraction in the condensed product and in the upper discharge. The sub-model of solid-phase aggregation allows one to calculate the average size of the flocs created during the flocculation process in the feedwell. The sub-model of the free-fall and cramped sedimentation zone allows one to calculate the concentration profile, taking into account the variable cross-sectional area of the thickener. The sub-model of the bleaching zone is constructed on the basis of Kynch's sedimentation theory, supplemented by correction factors.

  3. Surveillance system and method having an operating mode partitioned fault classification model

    NASA Technical Reports Server (NTRS)

    Bickford, Randall L. (Inventor)

    2005-01-01

    A system and method which partitions a parameter estimation model, a fault detection model, and a fault classification model for a process surveillance scheme into two or more coordinated submodels together providing improved diagnostic decision making for at least one determined operating mode of an asset.

  4. A Distributed Approach to System-Level Prognostics

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Bregon, Anibal; Roychoudhury, Indranil

    2012-01-01

    Prognostics, which deals with predicting the remaining useful life of components, subsystems, and systems, is a key technology for systems health management that leads to improved safety and reliability with reduced costs. The prognostics problem is often approached from a component-centric view. However, in most cases, it is not specifically component lifetimes that are important, but, rather, the lifetimes of the systems in which these components reside. The system-level prognostics problem can be quite difficult due to the increased scale and scope of the prognostics problem and the relative lack of scalability and efficiency of typical prognostics approaches. In order to address these issues, we develop a distributed solution to the system-level prognostics problem, based on the concept of structural model decomposition. The system model is decomposed into independent submodels. Independent local prognostics subproblems are then formed based on these local submodels, resulting in a scalable, efficient, and flexible distributed approach to the system-level prognostics problem. We provide a formulation of the system-level prognostics problem and demonstrate the approach on a four-wheeled rover simulation testbed. The results show that the system-level prognostics problem can be accurately and efficiently solved in a distributed fashion.
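
    The distributed scheme can be caricatured in a few lines: each submodel produced by the decomposition predicts a local remaining useful life, and the system-level prediction is their minimum. The degradation models and numbers below are invented placeholders.

      # Schematic of the distributed idea: after structural decomposition,
      # each submodel predicts a local remaining useful life (RUL), and the
      # system RUL is the minimum over the local predictions. Degradation
      # rates and thresholds are invented placeholders.
      def local_rul(damage, rate, threshold=1.0):
          """Linear-degradation submodel: hours until damage crosses threshold."""
          return max(threshold - damage, 0.0) / rate

      local_predictions = {
          "battery":     local_rul(damage=0.30, rate=0.010),
          "left motor":  local_rul(damage=0.50, rate=0.004),
          "right motor": local_rul(damage=0.60, rate=0.012),
      }
      system_rul = min(local_predictions.values())
      print(local_predictions, "-> system RUL:", round(system_rul, 1), "h")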

  5. OISI dynamic end-to-end modeling tool

    NASA Astrophysics Data System (ADS)

    Kersten, Michael; Weidler, Alexander; Wilhelm, Rainer; Johann, Ulrich A.; Szerdahelyi, Laszlo

    2000-07-01

    The OISI dynamic end-to-end modeling tool is tailored to end-to-end modeling and dynamic simulation of Earth- and space-based actively controlled optical instruments, such as optical stellar interferometers. 'End-to-end modeling' denotes the feature that the overall model comprises, besides optical sub-models, also structural, sensor, actuator, controller and disturbance sub-models influencing the optical transmission, so that the system-level instrument performance under disturbances and active optics can be simulated. This tool has been developed to support performance analysis and prediction as well as control loop design and fine-tuning for OISI, Germany's preparatory program for optical/infrared spaceborne interferometry initiated in 1994 by Dornier Satellitensysteme GmbH in Friedrichshafen.

  6. Evaluating environmental and economic consequences of alternative pest management strategies: results of modeling workshops

    USGS Publications Warehouse

    Johnson, Richard L.; Andrews, Austin K.; Auble, Gregor T.L.; Ellison, Richard A.; Hamilton, David B.; Roelle, James E.; McNamee, Peter J.

    1983-01-01

    The model conceptualized at the first workshop simulates the effect of corn agroecosystem decisions on crop production, economic returns, and environmental indicators. The model is composed of five interacting submodels: 1) a Production Strategies submodel which makes decisions concerning tillage, planting, fertilizer and pesticide applications, and harvest; 2) a Hydrology/Chemical Transport submodel which represents soil hydrology, erosion, and concentrations of fertilizers and pesticides in the soil, runoff, surface waters, and percolation; 3) a Vegetation submodel which simulates growth of agricultural crops (corn and soybeans) and weeds; 4) a Pests submodel which calculates pest population levels and resulting crop damage; and 5) an Environmental Effects submodel which calculates indicators of potential fish kills, human health effects, and wildlife habitat. The most persistent data gaps encountered in quantifying the model were coefficients to relate environmental consequences to alternative pest management strategies. While the model developed in the project is not yet accurate enough to be used for real-world decisions about the use of pesticides on corn, it does contain the basic structure upon which such a model could be built. More importantly at this stage of development, the project has shown that very complex systems can be modeled in short periods of time and that the process of building such models increases understanding among disciplinary specialists and between diverse institutional interests. This process can be useful to EPA as the agency cooperates with other institutions to meet its responsibilities in less costly ways. Activities at the second 2 1/2-day workshop included a review of the model, incorporation of necessary corrections, simulation of policy scenarios, and examination of techniques to address remaining institutional conflicts. Participants were divided into three groups representing environmental, production or industry, and regulatory interests. Each group developed scenarios that would be most appealing to their particular interest and the scenarios were simulated by the agroecosystem computer model. Negotiators from each of the interest groups decided whether a hypothetical herbicide should be relabeled and if certain restrictions should be imposed on its use. Other participants functioned as experts and consultants on caucus teams. A solution to the hypothetical problem was successfully negotiated. Workshop participants and project staff agreed that the model and processes developed during the project should be used in training students, extension specialists, farmers, researchers, and chemical producers in collaborative problem solving methods. More productive research can be planned, and more realistic models of complex systems can be built in this way. More importantly, greater trust of decisionmakers in computer models, better understanding by technical experts about disciplines other than their own, and improved cooperation between institutional interests can be achieved. This trust, understanding, and cooperation are critical ingredients in solving problems that are too complex to be resolved by independent disciplinary activity and unilateral decision authority.

  7. Towards a comprehensive framework for cosimulation of dynamic models with an emphasis on time stepping

    NASA Astrophysics Data System (ADS)

    Hoepfer, Matthias

    Over the last two decades, computer modeling and simulation have evolved as the tools of choice for the design and engineering of dynamic systems. With increased system complexities, modeling and simulation become essential enablers for the design of new systems. Some of the advantages that modeling and simulation-based system design allows for are the replacement of physical tests to ensure product performance, reliability and quality, the shortening of design cycles due to the reduced need for physical prototyping, the design for mission scenarios, the invoking of currently non-existing technologies, and the reduction of technological and financial risks. Traditionally, dynamic systems are modeled in a monolithic way. Such monolithic models include all the data, relations and equations necessary to represent the underlying system. With increased complexity of these models, the monolithic model approach reaches certain limits regarding, for example, model handling and maintenance. Furthermore, while the available computer power has been steadily increasing according to Moore's Law (a doubling of computational power roughly every two years), the ever-increasing complexities of new models have negated the increased resources available. Lastly, modern systems and design processes are interdisciplinary, making it necessary for models to be flexible enough to incorporate different modeling and design approaches. A solution that bypasses the shortcomings of monolithic models is co-simulation. In a very general sense, co-simulation addresses the issue of linking together different dynamic sub-models into a model which represents the overall, integrated dynamic system. It is therefore an important enabler for the design of interdisciplinary, interconnected, highly complex dynamic systems. While a basic co-simulation setup can be very easy, complications can arise when sub-models display behaviors such as algebraic loops, singularities, or constraints. This work frames the co-simulation approach to modeling and simulation. It lays out the general approach to dynamic system co-simulation, and gives a comprehensive overview of what co-simulation is and what it is not. It creates a taxonomy of the requirements and limits of co-simulation, and the issues arising with co-simulating sub-models. Possible solutions towards resolving the stated problems are investigated to a certain depth. A particular focus is given to the issue of time stepping. It will be shown that for dynamic models, the selection of the simulation time step is a crucial issue with respect to computational expense, simulation accuracy, and error control. The reasons for this are discussed in depth, and a time stepping algorithm for co-simulation with unknown dynamic sub-models is proposed. Motivations and suggestions for the further treatment of selected issues are presented.
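
    The time-stepping concern at the end of this abstract can be made concrete with a minimal co-simulation loop: two sub-models exchange coupling variables only at macro steps, and the macro step size adapts to an error estimate. The sketch below uses a stand-in pair of first-order systems, not the dissertation's models.

      # Sketch of an explicit (Jacobi-type) co-simulation macro-step loop
      # with adaptive stepping: sub-models integrate independently over each
      # macro step, exchange coupling variables only at communication points,
      # and the step is halved when a step-doubling error estimate is too
      # large. The coupled pair of leaky first-order systems is a stand-in.
      import math

      def advance(x, u, h, a=1.0):
          """Sub-model dx/dt = -a*x + u, solved exactly for frozen input u."""
          return x * math.exp(-a * h) + (u / a) * (1.0 - math.exp(-a * h))

      def macro_step(x1, x2, h):
          """Each sub-model sees the other's output frozen at step start."""
          return advance(x1, u=x2, h=h), advance(x2, u=x1, h=h)

      t, h, x1, x2, tol = 0.0, 0.25, 1.0, 0.0, 1e-3
      while t < 5.0:
          coarse = macro_step(x1, x2, h)
          fine = macro_step(*macro_step(x1, x2, h / 2), h / 2)
          err = max(abs(c - f) for c, f in zip(coarse, fine))
          if err > tol and h > 1e-3:
              h /= 2                    # reject: coupling error too large
              continue
          x1, x2, t = fine[0], fine[1], t + h
          if err < tol / 10.0:
              h = min(2.0 * h, 0.25)    # accept and cautiously grow the step
      print(round(t, 2), round(x1, 4), round(x2, 4))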

  8. Prediction of settled water turbidity and optimal coagulant dosage in drinking water treatment plant using a hybrid model of k-means clustering and adaptive neuro-fuzzy inference system

    NASA Astrophysics Data System (ADS)

    Kim, Chan Moon; Parnichkun, Manukid

    2017-11-01

    Coagulation is an important process in drinking water treatment to attain acceptable treated water quality. However, the determination of the coagulant dosage is still a challenging task for operators, because coagulation is a nonlinear and complicated process. Feedback control to achieve the desired treated water quality is difficult due to the lengthy process time. In this research, a hybrid of k-means clustering and adaptive neuro-fuzzy inference system (k-means-ANFIS) is proposed for settled water turbidity prediction and optimal coagulant dosage determination using full-scale historical data. To build a model that adapts well to the different process states of the influent water, raw water quality data are classified into four clusters according to their properties by a k-means clustering technique. The sub-models are developed individually on the basis of each clustered data set. Results reveal that the sub-models constructed by the hybrid k-means-ANFIS perform better than not only a single ANFIS model but also seasonal models based on artificial neural networks (ANN). The final model, consisting of sub-models, shows more accurate and consistent prediction ability than a single ANFIS model and a single ANN model on all five evaluation indices. Therefore, the hybrid k-means-ANFIS model can be employed as a robust tool for managing both treated water quality and production costs simultaneously.
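
    The hybrid structure is straightforward to outline: cluster the raw-water samples, then fit one sub-model per cluster and route predictions by cluster membership. In the sketch below, a plain linear regressor stands in for the ANFIS stage (scikit-learn provides none), and the data are synthetic.

      # Outline of the hybrid scheme with an ordinary regressor standing in
      # for the ANFIS stage: raw-water samples are clustered by k-means and
      # one sub-model is fitted per cluster; prediction routes each new
      # sample to its cluster's sub-model. The data here are synthetic.
      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(0)
      X = rng.random((400, 4))                  # raw-water quality features
      y = X @ np.array([0.5, 1.2, -0.3, 0.8]) + rng.normal(0, 0.05, 400)

      km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
      submodels = {c: LinearRegression().fit(X[km.labels_ == c],
                                             y[km.labels_ == c])
                   for c in range(4)}

      X_new = rng.random((5, 4))
      y_pred = [submodels[c].predict(x[None, :])[0]
                for c, x in zip(km.predict(X_new), X_new)]
      print(np.round(y_pred, 3))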

  9. A distributed snow-evolution modeling system (SnowModel)

    Treesearch

    Glen E. Liston; Kelly Elder

    2006-01-01

    SnowModel is a spatially distributed snow-evolution modeling system designed for application in landscapes, climates, and conditions where snow occurs. It is an aggregation of four submodels: MicroMet defines meteorological forcing conditions, EnBal calculates surface energy exchanges, SnowPack simulates snow depth and water-equivalent evolution, and SnowTran-3D...

  10. A review and update of the Virginia Department of Transportation cash flow forecasting model.

    DOT National Transportation Integrated Search

    1996-01-01

    This report details the research done to review and update components of the VDOT cash flow forecasting model. Specifically, the study updated the monthly factors submodel used to predict payments on construction contracts. For the other submodel rev...

  11. BPS sectors of the Skyrme model and their non-BPS extensions

    NASA Astrophysics Data System (ADS)

    Adam, C.; Foster, D.; Krusch, S.; Wereszczynski, A.

    2018-02-01

    Two recently found coupled Bogomol'nyi-Prasad-Sommerfield (BPS) submodels of the Skyrme model are further analyzed. First, we provide a geometrical formulation of the submodels in terms of the eigenvalues of the strain tensor. Second, we study their thermodynamical properties and show that the mean-field equations of state coincide at high pressure and read p = ρ̄/3. We also provide evidence that matter described by the first BPS submodel has some similarity with a Bose-Einstein condensate. Moreover, we show that extending the second submodel to a non-BPS model by including certain additional terms of the full Skyrme model does not spoil the respective ansatz, leading to an ordinary differential equation for the profile of the Skyrmion, for any value of the topological charge. This allows for an almost analytical description of the properties of Skyrmions in this model. In particular, we analytically study the breaking and restoration of the BPS property. Finally, we provide an explanation of the success of the rational map ansatz.

  12. Second Generation Crop Yield Models Review

    NASA Technical Reports Server (NTRS)

    Hodges, T. (Principal Investigator)

    1982-01-01

    Second generation yield models, including crop growth simulation models and plant process models, may be suitable for large area crop yield forecasting in the yield model development project. Subjective and objective criteria for model selection are defined and models which might be selected are reviewed. Models may be selected to provide submodels as input to other models; for further development and testing; or for immediate testing as forecasting tools. A plant process model may range in complexity from several dozen submodels simulating (1) energy, carbohydrates, and minerals; (2) change in biomass of various organs; and (3) initiation and development of plant organs, to a few submodels simulating key physiological processes. The most complex models cannot be used directly in large area forecasting but may provide submodels which can be simplified for inclusion into simpler plant process models. Both published and unpublished models which may be used for development or testing are reviewed. Several other models, currently under development, may become available at a later date.

  13. Results of a modeling workshop concerning economic and environmental trends and concomitant resource management issues in the Mobile Bay area

    USGS Publications Warehouse

    Hamilton, David B.; Andrews, Austin K.; Auble, Gregor T.; Ellison, Richard A.; Johnson, Richard A.; Roelle, James E.; Staley, Michael J.

    1982-01-01

    During the past decade, the southern regions of the U.S. have experienced rapid change which is expected to continue into the foreseeable future. Growth in population, industry, and resource development has been attributed to a variety of advantages such as an abundant and inexpensive labor force, a mild climate, and the availability of energy, water, land, and other natural resources. While this growth has many benefits for the region, it also creates the potential for increased air, water, and solid waste pollution, and modification of natural habitats. A workshop was convened to consider the Mobile Bay area as a site-specific case of growth and its environmental consequences in the southern region. The objectives of the modeling workshop were to: (1) identify major factors of economic development as they relate to growth in the area over the immediate and longer term; (2) identify major environmental and resource management issues associated with this expected growth; and (3) identify and characterize the complex interrelationships among economic and environmental factors. This report summarizes the activities and results of a modeling workshop concerning economic growth and concomitant resource management issues in the Mobile Bay area. The workshop was organized around construction of a simulation model representing the relationships between a series of actions and indicators identified by participants. The workshop model had five major components. An Industry Submodel generated scenarios of growth in several industrial and transportation sectors. A Human Population/Economy Submodel calculated human population and economic variables in response to employment opportunities. A Land Use/Air Quality Submodel tabulated changes in land use, shoreline use, and air quality. A Water Submodel calculated indicators of water quality and quantity for fresh surface water, ground water, and Mobile Bay based on discharge information provided by the Industry and Human Population/Economy Submodels. Finally, a Fish Submodel calculated indicators of habitat quality for finfish and shellfish, utilizing information on water quality and wetlands acreage. The workshop was successful in identifying many of the critical interrelations between components of the Mobile area system. Not all of those interactions, such as the feedback of air quality as a limitation on development, could be incorporated into the workshop model because of the model's broad spatial scale and because of uncertainties or data gaps. Thus, the value of the modeling workshop was in the areas outlined below, rather than in the predictive power of the initial model developed at the workshop. First, participants developed a holistic perspective on the interactions which will determine future economic and environmental trends within the Mobile Bay area. Potential environmental consequences and limitations to growth identified at the workshop included: shoreline and water access; water quality of Mobile Bay; finfish and shellfish habitat quality with respect to dissolved oxygen and coliforms; air quality; and acreage of critical wetland habitat. Second, the model's requirements for specific, quantitative information stimulated supporting analyses, such as economic input-output calculations, which provide additional insight into the Mobile Bay area system. Third, the perspective of the Mobile area as an interacting system was developed in an open, cooperative forum which may provide a foundation for conflict resolution based on common understanding. Finally, the identification of model limitations and uncertainties should be useful in guiding the efficient allocation of future research effort.

  14. Performance Impact of Deflagration to Detonation Transition Enhancing Obstacles

    NASA Technical Reports Server (NTRS)

    Paxson, Daniel E.; Schauer, Frederick; Hopper, David

    2012-01-01

    A sub-model is developed to account for the drag and heat transfer enhancement resulting from the deflagration-to-detonation transition (DDT) inducing obstacles commonly used in pulse detonation engines (PDE). The sub-model is incorporated as a source term in a time-accurate, quasi-one-dimensional, CFD-based PDE simulation. The simulation and sub-model are then validated through comparison with a particular experiment in which limited DDT obstacle parameters were varied. The simulation is then used to examine the relative contributions of drag and heat transfer to the reduced thrust which is observed. It is found that heat transfer is far more significant than aerodynamic drag in this particular experiment.
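
    A source-term sub-model of this kind plausibly takes the following shape; the coefficients and blockage scaling below are assumptions for illustration, not the paper's calibrated correlations.

      # Plausible shape of such a source-term sub-model (the paper's exact
      # correlations are not reproduced): obstacle drag and convective heat
      # transfer per unit volume for a quasi-one-dimensional momentum and
      # energy balance. Cd, h and the blockage scaling are assumptions.
      def obstacle_sources(rho, u, T, T_wall, blockage, D, Cd=1.2, h=500.0):
          """Return (momentum, energy) sources per unit volume.

          rho [kg/m3], u [m/s], T and T_wall [K], blockage = blocked area
          fraction of the tube, D = tube diameter [m].
          """
          area_per_vol = (4.0 / D) * (1.0 + blockage)   # augmented wetted area
          s_momentum = -0.5 * rho * u * abs(u) * Cd * blockage * area_per_vol
          s_energy = h * area_per_vol * (T_wall - T)    # convective exchange
          return s_momentum, s_energy

      print(obstacle_sources(rho=1.2, u=300.0, T=1500.0, T_wall=600.0,
                             blockage=0.4, D=0.05))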

  15. Distance measurement based on light field geometry and ray tracing.

    PubMed

    Chen, Yanqin; Jin, Xin; Dai, Qionghai

    2017-01-09

    In this paper, we propose a geometric optical model to measure the distances of object planes in a light field image. The proposed geometric optical model is composed of two sub-models based on ray tracing: an object space model and an image space model. The two theoretical sub-models are derived for on-axis point light sources. In the object space model, light rays propagate into the main lens and refract inside it following the refraction theorem. In the image space model, light rays exit from emission positions on the main lens and subsequently impinge on the image sensor with different imaging diameters. The relationships between the imaging diameters of objects and their corresponding emission positions on the main lens are investigated by utilizing refocusing and the similar-triangle principle. By combining the two sub-models and tracing light rays back to the object space, the relationships between objects' imaging diameters and the corresponding distances of the object planes are derived. The performance of the proposed geometric optical model is compared with existing approaches using different configurations of hand-held plenoptic 1.0 cameras, and real experiments are conducted using a preliminary imaging system. Results demonstrate that the proposed model outperforms existing approaches in terms of accuracy and exhibits good performance over a general imaging range.
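
    The similar-triangles core of the method can be miniaturized for an ordinary thin lens: the blur diameter of a defocused point relates its object distance to the sensor position, and can be inverted for distance. The plenoptic-specific geometry is omitted and all values below are illustrative.

      # Much-reduced version of the geometric idea, for an ordinary thin
      # lens rather than a plenoptic camera: the blur ("imaging") diameter
      # of a defocused point follows from similar triangles, so a measured
      # diameter can be inverted for object distance.
      def blur_diameter(z, f, aperture, sensor_dist):
          """Blur-spot diameter of a point at object distance z."""
          z_img = f * z / (z - f)              # thin lens: 1/z + 1/z_img = 1/f
          return aperture * abs(sensor_dist - z_img) / z_img

      def distance_from_blur(b, f, aperture, sensor_dist, beyond_focus=True):
          """Invert blur_diameter for z on the chosen side of best focus."""
          sign = 1.0 if beyond_focus else -1.0  # far objects image nearer the lens
          z_img = sensor_dist / (1.0 + sign * b / aperture)
          return f * z_img / (z_img - f)

      f, A, s = 0.05, 0.02, 0.052              # 50 mm lens focused near 1.3 m
      b = blur_diameter(2.0, f, A, s)          # point source at 2 m
      print(b, distance_from_blur(b, f, A, s)) # recovers ~2.0 m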

  16. On looking into the Black Box: Prospects and Limits in the Search for Mental Models

    DTIC Science & Technology

    1985-05-01

    particularly in terms of the ways in which humans understand systems. Norman [1983] characterizes this understanding as messy, sloppy, incomplete, and...Kleinman, et al., 1971]). However, for tasks involving only monitoring [Smallwood, 1967; Sheridan, 1970], especially when apparent discontinuities...Norman [1983] uses the word "conceptualization" to characterize researchers’ models of humans’ mental models. This characterization serves to

  17. A generic model for estimating biomass accumulation and greenhouse gas emissions from perennial crops

    NASA Astrophysics Data System (ADS)

    Ledo, Alicia; Heathcote, Richard; Hastings, Astley; Smith, Pete; Hillier, Jonathan

    2017-04-01

    Agriculture is essential to sustain humankind but is, at the same time, a substantial emitter of greenhouse gases (GHG). With a rising global population, providing a secure food and energy supply through agriculture is one of the main human challenges. At the same time, agriculture is the only sector with significant potential for negative emissions, through the sequestration of carbon and offsetting via the supply of feedstock for energy production. Perennial crops accumulate carbon during their lifetime and enhance soil organic carbon via root senescence and decomposition. However, inconsistency in accounting for this stored biomass undermines efforts to assess the benefits of such cropping systems when applied at scale. A consequence of this exclusion is that efforts to manage this important carbon stock are neglected. Detailed information on the carbon balance is crucial to identify the main processes responsible for greenhouse gas emissions and to develop strategic mitigation programs. Perennial crop systems represent 30% of the area of global crop systems, too large a share to be ignored. Furthermore, they have a major standing in both the bioenergy and global food industries. In this study, we first present a generic model to calculate the carbon balance and GHG emissions from perennial crops, covering both food and bioenergy crops. The model is composed of two simple process-based sub-models, covering perennial grasses and other perennial woody plants. The first is a generic individual-based sub-model (IBM) covering crops in which the yield is the fruit and the plant biomass is an unharvested residue. Trees, shrubs and climbers fall into this category. The second is a generic area-based sub-model (ABM) covering perennial grasses, in which the harvested part includes some of the plant parts in which carbon storage is accounted for. Most second-generation perennial bioenergy crops fall into this category. Both generic sub-models presented in this paper can be parametrized for different crops. Quantifying CO2 capture by plants, biomass accumulation and changes in soil carbon is key in evaluating the impacts of perennial crops in life cycle analysis. We then use this model to illustrate the importance of biomass in the overall GHG estimation from four important perennial crops - sugarcane, Miscanthus, coffee, and apples - which were chosen to cover tropical and temperate regions, trees and grasses, and energy and food supply.

  18. Creating Security System Models Using SNAP-PC.

    DTIC Science & Technology

    1987-05-01

    SNAP was originally developed in the late 1970s by Pritsker & Associates, Inc., for Sandia...systems. The other was to simplify the simulation process so that a person knowledgeable in security planning who had little experience in...simulation techniques could use simulation in his evaluation of security systems. SNAP-PC was developed by Pritsker & Associates, Inc., for Sandia with

  19. Towards a complete physically based forecast model for underwater noise related to impact pile driving.

    PubMed

    Fricke, Moritz B; Rolfes, Raimund

    2015-03-01

    An approach for the prediction of underwater noise caused by impact pile driving is described and validated based on in situ measurements. The model is divided into three sub-models. The first sub-model, based on the finite element method, is used to describe the vibration of the pile and the resulting acoustic radiation into the surrounding water and soil column. The mechanical excitation of the pile by the piling hammer is estimated by the second sub-model using an analytical approach which takes the large vertical dimension of the ram into account. The third sub-model is based on the split-step Padé solution of the parabolic equation and targets the long-range propagation up to 20 km. In order to prescribe realistic environmental properties for the validation, a geoacoustic model is derived from spatially averaged geological information about the investigation area. Although it can be concluded from the validation that the model and the underlying assumptions are appropriate, there are some deviations between modeled and measured results. Possible explanations for the observed errors are discussed.

  20. Evaluation of a hybrid kinetics/mixing-controlled combustion model for turbulent premixed and diffusion combustion using KIVA-II

    NASA Technical Reports Server (NTRS)

    Nguyen, H. Lee; Wey, Ming-Jyh

    1990-01-01

    Two-dimensional calculations were made of spark-ignited premixed-charge combustion and direct-injection stratified-charge combustion in gasoline-fueled piston engines. Results were obtained using either a kinetics-controlled combustion submodel governed by a four-step global chemical reaction or a hybrid laminar-kinetics/mixing-controlled combustion submodel that accounts for laminar kinetics and turbulent mixing effects. The numerical solutions were obtained with the KIVA-2 computer code, which uses a kinetics-controlled combustion submodel governed by a four-step global chemical reaction (i.e., it assumes that the mixing time is smaller than the chemical time). A hybrid laminar/mixing-controlled combustion submodel was implemented into KIVA-2. In this model, chemical species approach their thermodynamic equilibrium at a rate that is a combination of the turbulent-mixing time and the chemical-kinetics time. The combination is formed in such a way that the longer of the two times has more influence on the conversion rate and the energy release. An additional element of the model is that the laminar-flame kinetics strongly influence the early flame development following ignition.
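
    The abstract states the hybrid rule verbally: species relax toward equilibrium on a timescale combining the chemical and turbulent-mixing times, with the longer time dominating. A minimal sketch of one common way to write that rule (an assumed additive form, not necessarily the exact KIVA-2 expression):

    ```python
    # Assumed additive form of the hybrid rule (illustrative; not necessarily
    # the exact KIVA-2 expression): the longer timescale dominates the rate.
    def hybrid_rate(Y, Y_eq, tau_chem, tau_mix):
        """dY/dt for a species mass fraction Y relaxing toward equilibrium Y_eq."""
        return (Y_eq - Y) / (tau_chem + tau_mix)

    # Mixing-limited example: tau_mix >> tau_chem, so mixing controls the rate.
    print(hybrid_rate(Y=0.05, Y_eq=0.0, tau_chem=1e-5, tau_mix=1e-3))
    ```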

  2. Comparative simulation of a fluidised bed reformer using industrial process simulators

    NASA Astrophysics Data System (ADS)

    Bashiri, Hamed; Sotudeh-Gharebagh, Rahmat; Sarvar-Amini, Amin; Haghtalab, Ali; Mostoufi, Navid

    2016-08-01

    A simulation model is developed with commercial simulators in order to predict the performance of a fluidised bed reformer. As many physical and chemical phenomena take place in the reformer, two sub-models (a hydrodynamic and a reaction sub-model) are needed. The hydrodynamic sub-model is based on the dynamic two-phase model and the reaction sub-model is derived from the literature. In the overall model, the bed is divided into several sections. In each section, the flow of the gas is treated as plug flow through the bubble phase and as perfectly mixed through the emulsion phase. Experimental data from the literature were used to validate the model. Close agreement was found between the experimental data and the models implemented in both ASPEN Plus (ASPEN PLUS 2004 ©) and HYSYS (ASPEN HYSYS 2004 ©) for sectionings of the reactor ranging from one to four sections, and the experimental conversion lay between the one- and four-section predictions, as expected. The model proposed in this work can be used as a framework for developing more complicated models of non-ideal reactors inside process simulators.
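
    A toy sketch of the sectioning idea under strong simplifications (first-order kinetics, a fixed bubble-phase flow fraction, and no interphase mass transfer, none of which is claimed of the actual dynamic two-phase model):

    ```python
    # Toy sketch of the sectioned two-phase idea: first-order reaction, fixed
    # bubble-phase flow fraction, no interphase mass transfer (all
    # simplifications relative to the actual dynamic two-phase model).
    import math

    def outlet(C_in, n_sections, k, tau_total, bubble_frac):
        """Outlet concentration: bubble gas in plug flow, emulsion gas perfectly
        mixed, streams re-blended between sections."""
        C, tau = C_in, tau_total / n_sections
        for _ in range(n_sections):
            C_bub = C * math.exp(-k * tau)       # plug flow through the bubbles
            C_emu = C / (1.0 + k * tau)          # perfectly mixed emulsion (CSTR)
            C = bubble_frac * C_bub + (1.0 - bubble_frac) * C_emu
        return C

    for n in (1, 2, 4):                          # conversion vs. sectioning
        print(n, round(1.0 - outlet(1.0, n, k=2.0, tau_total=0.5, bubble_frac=0.6), 3))
    ```

    Even in this toy form, the predicted conversion shifts with the number of sections, which is why an observed conversion can be bracketed by the one- and four-section cases.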

  3. FEAST: sensitive local alignment with multiple rates of evolution.

    PubMed

    Hudek, Alexander K; Brown, Daniel G

    2011-01-01

    We present a pairwise local aligner, FEAST, which uses two new techniques: a sensitive extension algorithm for identifying homologous subsequences, and a descriptive probabilistic alignment model. We also present a new procedure for training alignment parameters and apply it to the human and mouse genomes, producing a better parameter set for these sequences. Our extension algorithm identifies homologous subsequences by considering all evolutionary histories. It has higher maximum sensitivity than Viterbi extensions and balances specificity better. We model alignments with several submodels, each with unique statistical properties, describing strongly similar and weakly similar regions of homologous DNA. Training parameters using two submodels produces superior alignments, even when we align with only the parameters from the weaker submodel. Our extension algorithm combined with our new parameter set achieves a sensitivity of 0.59 on synthetic tests. In contrast, LASTZ with default settings achieves a sensitivity of 0.35 at the same false positive rate. Using the weak submodel as parameters for LASTZ increases its sensitivity to 0.59, but with high error. FEAST is available at http://monod.uwaterloo.ca/feast/.

  4. Accommodating permafrost in contaminant transport modeling, a preliminary approach to modify the TREECS modeling tools

    NASA Astrophysics Data System (ADS)

    Ryder, J. L.; Dortch, M. S.; Johnson, B. E.

    2017-12-01

    Efforts are underway to adapt TREECS (Training Range Environmental Evaluation and Characterization System) for use in arctic or subarctic conditions, where the extent and duration of snowpack and frozen ground may influence the development and concentration of contaminant plumes. TREECS is a multi-media model designed to aid facility managers in the long-term stewardship of Army properties. TREECS includes sub-models for mass loading, soil, the vadose zone, aquifers, and stream transport. Potential changes to the sub-models to improve the ability to model contaminant transport in areas with permafrost include accurately representing the dissolution of contaminants over a wider range of temperatures, estimating snow depth and ablation for both hydrologic and thermal conditions, determining the ground freeze/thaw state and an average active-layer depth, a more precise method for estimating the vertical transport time to the water table, and a soil interflow routine that adapts to permafrost conditions. In this presentation we show three sub-model comparisons: 1) the National Weather Service SNOW-17 model versus the current TREECS snowmelt routines for input hydrology, 2) a Continuous Frozen Ground Index (CFGI) model versus the Geophysical Institute Permafrost Lab model (GIPL 1.0) for determining active-layer depth and summer season length, and 3) HYDRUS-1D versus the current TREECS vadose zone model for transport to the water table. The performance versus input needs, assumptions, and limitations of each approach, as well as the physical-system uncertainties, will also be discussed.

  5. Constraints on global oceanic emissions of N2O from observations and models

    NASA Astrophysics Data System (ADS)

    Buitenhuis, Erik T.; Suntharalingam, Parvadha; Le Quéré, Corinne

    2018-04-01

    We estimate the global ocean N2O flux to the atmosphere and its confidence interval using a statistical method based on model perturbation simulations and their fit to a database of ΔpN2O observations (n = 6136). We evaluate two submodels of N2O production. The first submodel splits N2O production into oxic and hypoxic pathways following previous publications. The second submodel explicitly represents the redox transformations of N that lead to N2O production (nitrification and hypoxic denitrification) and N2O consumption (suboxic denitrification), and is presented here for the first time. We perturb both submodels by modifying the key parameters of the N2O cycling pathways (nitrification rates; NH4+ uptake; N2O yields under oxic, hypoxic and suboxic conditions) and determine a set of optimal model parameters by minimisation of a cost function against four databases of N cycle observations. The resulting estimates of the global oceanic N2O flux are 2.4 ± 0.8 and 2.5 ± 0.8 Tg N yr-1 for the two N2O submodels. These estimates suggest that the currently available observations of surface ΔpN2O constrain the global N2O flux to a narrower range than the large spread of results presented in the latest IPCC report.
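
    The estimation strategy (perturb pathway parameters, run the model, minimise a cost against observations) can be illustrated with a deliberately tiny stand-in; the two-pathway "model", the data, and all numbers below are synthetic:

    ```python
    # Tiny stand-in for the estimation strategy: perturb pathway parameters and
    # minimise a least-squares cost. The "model", the data, and all numbers are
    # synthetic; the real cost uses four databases of N-cycle observations.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    o2 = rng.uniform(0.0, 1.0, 200)            # stand-in oxygen levels

    def model_dpn2o(params, o2):
        oxic_yield, hypoxic_yield = params     # the two production pathways
        return oxic_yield * o2 + hypoxic_yield * (1.0 - o2) ** 3

    obs = model_dpn2o((1.0, 4.0), o2) + rng.normal(0.0, 0.2, o2.size)

    def cost(params):
        """Sum of squared misfits between modelled and 'observed' dpN2O."""
        return np.sum((model_dpn2o(params, o2) - obs) ** 2)

    fit = minimize(cost, x0=[0.5, 1.0])
    print(fit.x)   # recovered yields; the flux estimate would follow from these
    ```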

  6. Multiscale simulation of a prescribed fire event in the New Jersey Pine Barrens using ARPS-CANOPY

    Treesearch

    Michael T. Kiefer; Warren E. Heilman; Shiyuan Zhong; Joseph J. Charney; Xindi Bian; Nicholas S. Skowronski; John L. Hom; Kenneth L. Clark; Matthew Patterson; Michael R. Gallagher

    2014-01-01

    Smoke prediction products are one of the tools used by land management personnel for decision making regarding prescribed fires. This study documents the application to a prescribed fire of a smoke prediction system that employs ARPS-CANOPY, a modified version of the Advanced Regional Prediction System (ARPS) model containing a canopy submodel, as the meteorological...

  7. Programming Coup D’Oeil: The Impact of Decision Making Technology in Operational Warfare

    DTIC Science & Technology

    2010-05-03

    system will never be a complete substitute for the personal judgment of the operational commander. Computers exist wholly in the scientific realm, in... a binary world that is defined through mathematical, logical, and scientific terms, and where everything is represented through the lenses of an... equation. War, on the other hand, is a messy and unpredictable business, where events happen for no reason despite giving every scientific indication

  8. DAYCENT AND ITS LAND SURFACE SUBMODEL: DESCRIPTION AND TESTING. (R824993)

    EPA Science Inventory

    A land surface submodel was developed for the daily version of the CENTURY ecosystem model (DAYCENT). The goal of DAYCENT, to simulate soil N2O, NOx, and CH4 fluxes for terrestrial ecosystems, determined the structure and ...

  9. Aerospace applications of SINDA/FLUINT at the Johnson Space Center

    NASA Technical Reports Server (NTRS)

    Ewert, Michael K.; Bellmore, Phillip E.; Andish, Kambiz K.; Keller, John R.

    1992-01-01

    SINDA/FLUINT has been found to be a versatile code for modeling aerospace systems involving single- or two-phase fluid flow and all modes of heat transfer. Several applications of SINDA/FLUINT are described in this paper. SINDA/FLUINT is being used extensively to model the single-phase water loops and the two-phase ammonia loops of the Space Station Freedom active thermal control system (ATCS). These models range from large integrated system models with multiple submodels to very detailed subsystem models. An integrated Space Station ATCS model has been created with ten submodels representing five water loops, three ammonia loops, a Freon loop, and a thermal submodel representing the air loop. The model, which has approximately 800 FLUINT lumps and 300 thermal nodes, is used to determine the interaction between the multiple fluid loops which comprise the Space Station ATCS. Several detailed models of the flow-through radiator subsystem of the Space Station ATCS have been developed. One model, which has approximately 70 FLUINT lumps and 340 thermal nodes, provides a representation of the ATCS low-temperature radiator array with two fluid loops connected only by conduction through the radiator face sheet. The detailed models are used to determine parameters such as radiator fluid return temperature, fin efficiency, flow distribution, and total heat rejection for the baseline design as well as proposed alternate designs. SINDA/FLUINT has also been used as a design tool for several systems using pressurized gases. One model examined the pressurization and depressurization of the Space Station airlock under a variety of operating conditions, including convection with the side walls and internal cooling. Another model predicted the performance of a new generation of manned maneuvering units; it included high-pressure gas depressurization, internal heat transfer, and supersonic thruster equations. The results of both models were used to size components such as the heaters and gas bottles, and also to point to areas where hardware testing was needed.

  10. COST MODEL FOR LARGE URBAN SCHOOLS.

    ERIC Educational Resources Information Center

    O'BRIEN, RICHARD J.

    This document contains a cost submodel of an urban educational system. The model requires that the pupil population and the proposed school building are known. The cost elements are: (1) construction costs of new plants, (2) acquisition and development costs of building sites, (3) current operating expenses of the proposed school, (4) pupil…

  11. A Liver-Centric Multiscale Modeling Framework for Xenobiotics.

    PubMed

    Sluka, James P; Fu, Xiao; Swat, Maciej; Belmonte, Julio M; Cosmanescu, Alin; Clendenon, Sherry G; Wambaugh, John F; Glazier, James A

    2016-01-01

    We describe a multi-scale, liver-centric in silico modeling framework for acetaminophen pharmacology and metabolism. We focus on a computational model to characterize whole-body uptake and clearance, liver transport, and phase I and phase II metabolism. We do this by incorporating sub-models that span three scales: Physiologically Based Pharmacokinetic (PBPK) modeling of acetaminophen uptake and distribution at the whole-body level, cell and blood-flow modeling at the tissue/organ level, and metabolism at the sub-cellular level. We have used standard modeling modalities at each of the three scales. In particular, we have used the Systems Biology Markup Language (SBML) to create both the whole-body and sub-cellular scales. Our modeling approach allows us to run the individual sub-models separately and to easily exchange models at a particular scale without the need to extensively rework the sub-models at other scales. In addition, the use of SBML greatly facilitates the inclusion of biological annotations directly in the model code. The model was calibrated using human in vivo data for acetaminophen and its sulfate and glucuronide metabolites. We then carried out extensive parameter sensitivity studies, including the pairwise interaction of parameters. We also simulated population variation of exposure and sensitivity to acetaminophen. Our modeling framework can be extended to the prediction of liver toxicity following acetaminophen overdose, or used as a general-purpose pharmacokinetic model for xenobiotics.
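
    As a flavour of the whole-body scale only, here is a minimal PBPK-style sketch: first-order gut absorption into a central compartment with parallel sulfation and glucuronidation clearances. All rate constants are invented, and the paper's SBML model is far more detailed:

    ```python
    # Minimal PBPK-style sketch: first-order gut absorption into a central
    # compartment with parallel sulfation and glucuronidation clearances.
    # All rate constants are invented; the paper's SBML model is far richer.
    import numpy as np

    def simulate(dose_mg, ka=1.2, k_sulf=0.25, k_gluc=0.45, dt=0.01, t_end=12.0):
        """Return (time, plasma, sulfate, glucuronide) amounts in mg per step."""
        n = int(t_end / dt)
        gut, central, sulf, gluc = dose_mg, 0.0, 0.0, 0.0
        out = np.zeros((n, 4))
        for step in range(n):
            absorbed = ka * gut * dt
            to_sulf = k_sulf * central * dt
            to_gluc = k_gluc * central * dt
            gut -= absorbed
            central += absorbed - to_sulf - to_gluc
            sulf += to_sulf
            gluc += to_gluc
            out[step] = (step * dt, central, sulf, gluc)
        return out

    traj = simulate(1000.0)              # 1 g oral dose
    print(traj[traj[:, 1].argmax()])     # time (h) and size of the plasma peak
    ```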

  13. A spatial simulation model of hydrology and vegetation dynamics in semi-permanent prairie wetlands

    USGS Publications Warehouse

    Poiani, Karen A.; Johnson, W. Carter

    1993-01-01

    The objective of this study was to construct a spatial simulation model of the vegetation dynamics in semi-permanent prairie wetlands. A hydrologic submodel estimated water levels based on precipitation, runoff, and potential evapotranspiration. A vegetation submodel calculated the amount and distribution of emergent cover and open water using a geographic information system. The response of vegetation to water-level changes was based on seed bank composition, seedling recruitment and establishment, and plant survivorship. The model was developed and tested using data from the Cottonwood Lake study site in North Dakota. Data from semi-permanent wetland P1 were used to calibrate the model, and data from a second wetland, P4, were used to evaluate model performance. Simulation results were compared with observed water levels from 1979 through 1989. Test results showed that differences between calculated and observed water levels were within 10 cm 75% of the time. Open water over the past decade ranged from 0 to 7% in wetland P4 and from 0 to 8% in the model simulations. Several model parameters, including evapotranspiration and the timing of seedling germination, could be improved with more complex techniques or relatively minor adjustments. Despite these differences, the model adequately represented the vegetation dynamics of prairie wetlands and can be used to examine wetland response to natural or human-induced climate change.
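
    The hydrologic submodel is described as driving water levels with precipitation, runoff, and potential evapotranspiration. A schematic bucket-style sketch of that balance (the form, units, and numbers are assumed, not the published submodel):

    ```python
    # Schematic monthly water-balance sketch of the hydrologic submodel idea;
    # the form and the numbers are assumed, not the published submodel.
    def simulate_levels(precip, runoff, pet, level0=0.0):
        """Water level (cm) driven by precipitation, runoff inflow, and PET."""
        levels, level = [], level0
        for p, r, e in zip(precip, runoff, pet):
            level = max(0.0, level + p + r - e)  # stage cannot drop below a dry bed
            levels.append(level)
        return levels

    print(simulate_levels(precip=[6, 4, 2, 1], runoff=[3, 1, 0, 0], pet=[2, 5, 9, 8]))
    ```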

  14. Demographic, social, and economic effects on Mexican causes of death in 1990.

    PubMed

    Pick, J B; Butler, E W

    1998-01-01

    This study examined spatial geographic patterns of cause of death and 28 demographic and socioeconomic influences on causes of death for the 31 Mexican states plus the Federal District in 1990. Mortality data were obtained from the state death registration system and are age-standardized. The 28 socioeconomic variables were obtained from census records. The analysis included two submodels: one with all 28 socioeconomic variables in a stepwise regression, and one with each of the 4 groups of factors. The conceptual model is based on epidemiological transition theory and empirical findings, with 4 stages in mortality decline. Effects are grouped as demographic; sociocultural; economic prosperity; and housing, health, and crime factors. Findings indicate that cancer and cardiovascular disease were strongly correlated and consistently high in border areas as well as in the Federal District and Jalisco. Respiratory mortality had higher values in the Federal District, Puebla and surrounding states, and Jalisco. The standardized total mortality rate was inversely associated with underemployment only in simple correlations. All cause-specific mortality rates were associated with individual factors: respiratory mortality was linked with the manufacturing workforce, while cardiovascular and cancer mortality were associated with socioeconomic factors. In submodel I, cause-specific mortality was predicted by crowding, housing characteristics, marriage and divorce, and the manufacturing workforce. In submodel II, the economic group factors had the strongest model fits, explaining 33-60% of the variance (R2). Hypothesized effects were only partially validated.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glaser, D.; Connolly, J.; Berghoffen, A.

    The resident bald eagles of the lower Columbia River have lower productivity and higher contaminant levels than other bald eagles of the Pacific Northwest. The primary population stressors are believed to be habitat loss, human disturbance, p,p′-DDE, PCBs, dioxins and furans. The primary effect of habitat loss is to reduce the carrying capacity of the region for nesting sites, and the primary effects of human disturbance and contamination by organic compounds are to reduce productivity. The purpose of this study was to quantitatively evaluate the effects of all of these potential stressors on the bald eagle population dynamics. A model of the population dynamics was developed. The model structure includes a physiologically-based toxicokinetic (PBTK) submodel to estimate the degree of contamination, which is linked via a toxicology submodel to a population dynamics submodel. The PBTK submodel is time-variable, incorporating species-specific bioenergetics, as well as contaminant assimilation and excretion rates for each compound of interest. Calculated body burdens and egg concentrations for each compound account for spatial and temporal variations in feeding habits and prey contaminant levels. The population submodel includes fecundity and survival information, as well as a limit to the number of breeding pairs (carrying capacity) and a population of non-breeding subadults and adults (floaters). Model simulations are performed in a Monte Carlo framework. Results include estimates of the persistence, resistance and resilience of the population: the probability of extinction, the relationship between magnitude of stress and change in population size, and the time course of recovery of a population following a reduction in stress.
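
    A bare-bones sketch of the population submodel's Monte Carlo framing follows; all demographic rates and the crude recruitment rule below are invented placeholders, not the study's values:

    ```python
    # Bare-bones Monte Carlo population sketch; all demographic rates and the
    # crude recruitment rule are invented placeholders.
    import numpy as np

    rng = np.random.default_rng(1)

    def simulate(years=50, pairs0=30, capacity=40, fecundity=0.8, survival=0.88):
        pairs = pairs0
        for _ in range(years):
            young = rng.poisson(fecundity * pairs)   # productivity (stress lowers it)
            recruits = young // 4                    # crude subadult recruitment
            pairs = min(capacity, int(survival * pairs) + recruits)
            if pairs == 0:
                return 0
        return pairs

    runs = [simulate(fecundity=0.3) for _ in range(1000)]   # a stressed scenario
    print("P(extinction) ~", sum(r == 0 for r in runs) / len(runs))
    ```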

  16. Design and model for the giant magnetostrictive actuator used on an electronic controlled injector

    NASA Astrophysics Data System (ADS)

    Xue, Guangming; Zhang, Peilin; He, Zhongbo; Li, Ben; Rong, Ce

    2017-05-01

    Giant magnetostrictive actuators (GMAs) are promising candidates for driving electronic controlled injectors, as giant magnetostrictive material (GMM) offers large output, fast response, and high operating stability. To meet the driving requirement of the injector, the GMA should produce a maximal shortening displacement when energized. An unbiased GMA with a 'T'-shaped output rod is designed to reach this target. Furthermore, an open-hold-fall driving voltage is applied to the actuator coil to accelerate the response of the coil current. The actuator displacement is modeled by sequentially establishing sub-models of the coil current, the magnetic field within the GMM rod, the magnetization, and the magnetostrictive strain. Two modifications are made to improve the model's accuracy. First, because the model fails to compute the transient-state response precisely, dead-zone and delay links are embedded into the coil-current sub-model. Second, as the magnetization and magnetostrictive-strain sub-models only influence the shape of the transient-state response, a linear magnetostrictive strain-magnetic field sub-model is introduced. Experimental results show that the modified model with the linear magnetostrictive-strain expression predicts the actuator displacement quite effectively.
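
    The displacement model is a cascade of sub-models: drive voltage to coil current (with the added dead-zone and delay links), current to field, and field to strain via the linear law. A simplified sketch of that cascade with made-up constants:

    ```python
    # Simplified cascade with made-up constants: voltage -> current (first-order
    # lag with dead-zone and delay links) -> field -> linear strain -> displacement.
    import numpy as np

    def displacement(v_drive, dt=1e-5, tau=4e-4, R=2.0, dead_zone=0.05,
                     delay_steps=8, turns_per_m=8000.0, d_lin=1.2e-8, rod_len=0.08):
        i, hist, out = 0.0, [0.0] * delay_steps, []
        for v in v_drive:
            v_eff = 0.0 if abs(v) < dead_zone else v  # dead-zone link
            i += dt / tau * (v_eff / R - i)           # first-order current lag
            hist.append(i)
            H = turns_per_m * hist.pop(0)             # delay link, then solenoid field
            out.append(d_lin * H * rod_len)           # linear strain-field law x length
        return np.array(out)

    # Open-hold-fall drive: high opening voltage, lower holding voltage, then zero.
    v = np.concatenate([np.full(300, 24.0), np.full(300, 6.0), np.zeros(300)])
    print(displacement(v).max())                      # peak output displacement (m)
    ```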

  17. Combined electrochemical, heat generation, and thermal model for large prismatic lithium-ion batteries in real-time applications

    NASA Astrophysics Data System (ADS)

    Farag, Mohammed; Sweity, Haitham; Fleckenstein, Matthias; Habibi, Saeid

    2017-08-01

    Real-time prediction of the battery's core temperature and terminal voltage is crucial for an accurate battery management system. In this paper, a combined electrochemical, heat generation, and thermal model is developed for large prismatic cells. The proposed model consists of three sub-models, an electrochemical model, a heat generation model, and a thermal model, which are coupled together in an iterative fashion through physicochemical temperature-dependent parameters. The proposed parameterization cycles identify the sub-models' parameters separately by exciting the battery under isothermal and non-isothermal operating conditions. The proposed combined model structure shows accurate terminal voltage and core temperature prediction at various operating conditions while maintaining a simple mathematical structure, making it ideal for real-time BMS applications. Finally, the model is validated against both isothermal and non-isothermal drive cycles, covering a broad range of C-rates and temperatures (-25 °C to 45 °C).
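
    The iterative coupling through temperature-dependent parameters can be sketched as a fixed-point loop. This is a lumped, steady caricature with invented constants; the paper's sub-models are far richer:

    ```python
    # Fixed-point sketch of the iterative coupling; all constants are invented
    # and each sub-model is reduced to a single lumped, steady relation.
    def coupled_step(current_A, T_core, T_amb=25.0, R0=1.5e-3, alpha=-0.004,
                     R_th=2.0, tol=1e-6):
        for _ in range(50):
            R = R0 * (1.0 + alpha * (T_core - 25.0))  # electrical sub-model: R(T)
            q = current_A ** 2 * R                    # heat-generation sub-model
            T_new = T_amb + q * R_th                  # lumped thermal sub-model
            if abs(T_new - T_core) < tol:             # loop until self-consistent
                break
            T_core = T_new
        return T_core, current_A * R                  # core temp, ohmic voltage drop

    print(coupled_step(current_A=100.0, T_core=25.0))
    ```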

  18. Where Do Messy Planetary Nebulae Come From?

    NASA Astrophysics Data System (ADS)

    Kohler, Susanna

    2017-03-01

    If you examined images of planetary nebulae, you would find that many of them have an appearance too messy to be accounted for in the standard model of how planetary nebulae form. So what causes these structures?

    A Range of Looks

    At the end of a star's lifetime, in the red-giant phase, strong stellar winds can expel the outer layers of the star. The hot, luminous core then radiates in the ultraviolet, ionizing the gas of the ejected stellar layers and causing them to shine as a brightly colored planetary nebula for a few tens of thousands of years. Planetary nebulae come in a wide variety of morphologies. Some are approximately spherical, but others can be elliptical, bipolar, quadrupolar, or even more complex. It has been suggested that non-spherical planetary nebulae might be shaped by the presence of a second star in a binary system with the source of the nebula, but even this scenario should still produce a structure with axial or mirror symmetry. A pair of scientists from Technion Israel Institute of Technology, Ealeal Bear and Noam Soker, argue that planetary nebulae with especially messy morphologies, those without clear axial or point symmetries, may instead have been shaped by an interacting triple stellar system.

    Departures from Symmetry

    To examine this possibility more closely, Bear and Soker look at a sample of thousands of planetary nebulae and qualitatively classify each into one of four categories, based on the degree to which they show signs of having been shaped by a triple stellar progenitor. The primary signs the authors look for are: Symmetries: if a planetary nebula has a strong axisymmetric or point-symmetric structure (i.e., it is bipolar, elliptical, spherical, etc.), it was likely not shaped by a triple progenitor; if clear symmetries are missing, however, or if there is a departure from symmetry in specific regions, the morphology may have been shaped by the presence of stars in a close triple system. Interaction with the interstellar medium: some asymmetries, especially local ones, can be explained by interaction of the planetary nebula with the interstellar medium, so the authors look for signs of such an interaction, which decreases the likelihood that a triple stellar system need be involved to produce the morphology we observe.

    Influential Trios

    From the images in two planetary-nebula catalogs, the Planetary Nebula Image Catalog and the HASH catalog, Bear and Soker find that 275 and 372 planetary nebulae, respectively, are categorizable. By assigning crude probabilities to their categories, the authors estimate that the total fraction of planetary nebulae shaped by three stars in a close system is around 13-21%. The authors argue that in some cases all three stars might survive. This means that we may be able to find direct evidence of these triple stellar systems lying in the hearts of especially messy planetary nebulae.

    [Figure captions, Bear and Soker 2017: nebulae with a low probability of triple-star shaping are mostly symmetric, with only slight, labeled departures explainable by instabilities or interaction with the interstellar medium; candidate nebulae show some deviations from symmetry together with signs of interstellar-medium interaction; nebulae extremely likely to have been shaped by a triple system show strong departures from symmetry and no signs of interstellar-medium interaction.]

    Citation: Ealeal Bear and Noam Soker 2017 ApJL 837 L10. doi:10.3847/2041-8213/aa611c

  19. Modelling Per Capita Water Demand Change to Support System Planning

    NASA Astrophysics Data System (ADS)

    Garcia, M. E.; Islam, S.

    2016-12-01

    Water utilities have a number of levers to influence customer water usage. These include levers to proactively slow demand growth over time, such as building and landscape codes, as well as levers to decrease demand quickly in response to water stress, including price increases, education campaigns, water restrictions, and incentive programs. Even actions aimed at short-term reductions can result in long-term declines in water usage when substantial changes are made in water efficiency, as with incentives for fixture replacement or turf removal, or in usage patterns, as with permanent lawn-watering restrictions. Demand change is therefore linked to hydrological conditions and to the effects of past management decisions - both typically included in water supply planning models. Yet demand is typically incorporated exogenously using scenarios, or endogenously using only price, even though utilities also use rules and incentives issued in response to water stress, and codes specifying standards for new construction, to influence water usage. Explicitly including these policy levers in planning models enables concurrent testing of infrastructure and policy strategies and illuminates interactions between the two. The City of Las Vegas is used as a case study to develop and demonstrate this modeling approach. First, a statistical analysis of system data was employed to rule out alternative hypotheses for the per capita demand decrease, such as changes in population density and economic structure. Next, four demand sub-models were developed, including one baseline model in which demand is a function of price only. The sub-models were then calibrated and tested using monthly data from 1997 to 2012. Finally, the best-performing sub-model was integrated with a full supply and demand model. The results highlight the importance of both modeling water demand dynamics endogenously and taking a broader view of the variables influencing demand change.
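
    For the baseline sub-model in which demand depends on price alone, a constant-elasticity form is a common choice. A minimal sketch with an illustrative elasticity (not the calibrated Las Vegas values):

    ```python
    # Constant-elasticity baseline demand sub-model (price-only case); the
    # elasticity and reference values are illustrative, not the calibrated ones.
    def per_capita_demand(price, base_demand=250.0, base_price=2.0, elasticity=-0.35):
        """Litres per capita per day as a function of the water price."""
        return base_demand * (price / base_price) ** elasticity

    for p in (2.0, 3.0, 4.0):
        print(p, round(per_capita_demand(p), 1))
    ```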

  20. Performance of the SWEEP model affected by estimates of threshold friction velocity

    USDA-ARS?s Scientific Manuscript database

    The Wind Erosion Prediction System (WEPS) is a process-based model and needs to be verified under a broad range of climatic, soil, and management conditions. Occasional failure of the WEPS erosion submodel (Single-event Wind Erosion Evaluation Program or SWEEP) to simulate erosion in the Columbia Pl...

  1. Comparison of measured and simulated friction velocity and threshold friction velocity using SWEEP

    USDA-ARS?s Scientific Manuscript database

    The Wind Erosion Prediction System (WEPS) was developed by the USDA Agricultural Research Service as a tool to predict wind erosion and assess the influence of control practices on windblown soil loss. Occasional failure of the WEPS erosion submodel (SWEEP) to simulate erosion in the Columbia Platea...

  2. Validation of SWEEP for contrasting agricultural land use types in the Tarim Basin

    USDA-ARS?s Scientific Manuscript database

    In order to aid in identifying land management practices with the potential to control soil erosion, models such as the Wind Erosion Prediction System (WEPS) have been developed to assess soil erosion. The objective of this study was to test the performance of the WEPS erosion submodel (the Single-e...

  3. How Good and Useful Are Air Pollution Models?

    ERIC Educational Resources Information Center

    Environmental Science and Technology, 1973

    1973-01-01

    The Regional Air Pollution Study (RAPS) to be conducted in St. Louis, is the largest air monitoring program of the Environmental Protection Agency. A key segment will be the collection of a data base on which this system of mathematical models can be tested and upon which submodels can be validated. (BL)

  4. Tidal Mixing Box Submodel for Tampa Bay: Calibration of Tidal Exchange Flows with the Parameter Estimation Tool (PEST)

    EPA Science Inventory

    In the mid-1990s the Tampa Bay Estuary Program proposed a nutrient reduction strategy focused on improving water clarity to promote seagrass expansion within Tampa Bay. A System Dynamics Model is being developed to evaluate spatially and temporally explicit impacts of nutrient r...

  5. Nonlinear aeroacoustic characterization of Helmholtz resonators with a local-linear neuro-fuzzy network model

    NASA Astrophysics Data System (ADS)

    Förner, K.; Polifke, W.

    2017-10-01

    The nonlinear acoustic behavior of Helmholtz resonators is characterized by a data-based reduced-order model, which is obtained by a combination of high-resolution CFD simulation and system identification. It is shown that even in the nonlinear regime, a linear model is capable of describing the reflection behavior at a particular amplitude with quantitative accuracy. This observation motivates the choice of a local-linear model structure for this study, which consists of a network of parallel linear submodels. A so-called fuzzy-neuron layer distributes the input signal over the linear submodels, depending on the root mean square of the particle velocity at the resonator surface. The resulting model structure is referred to as a local-linear neuro-fuzzy network. System identification techniques are used to estimate the free parameters of this model from training data. The training data are generated by CFD simulations of the resonator, with persistent acoustic excitation over a wide range of frequencies and sound pressure levels. The estimated nonlinear, reduced-order models show good agreement with CFD and experimental data over a wide range of amplitudes for several test cases.
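
    Structurally, the network blends parallel linear submodels with validity weights driven by the rms particle velocity. A compact static sketch of that blending (the identified submodels in the paper are dynamic; the centres, widths, and gains here are invented):

    ```python
    # Static sketch of the local-linear blending; the identified submodels in
    # the paper are dynamic, and all centres, widths, and gains here are invented.
    import numpy as np

    centers = np.array([0.1, 1.0, 10.0])  # operating points in u_rms (m/s)
    widths = np.array([0.3, 1.0, 5.0])
    gains = np.array([0.9, 0.6, 0.3])     # one "reflection gain" per linear submodel

    def reflected(u_in, u_rms):
        """Blend parallel linear submodels with normalised Gaussian validity
        functions of the rms particle velocity (the fuzzy-neuron layer)."""
        w = np.exp(-0.5 * ((u_rms - centers) / widths) ** 2)
        return u_in * np.dot(w / w.sum(), gains)

    for u_rms in (0.1, 1.0, 10.0):        # amplitude-dependent reflection
        print(u_rms, reflected(1.0, u_rms))
    ```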

  6. The spike trains of inhibited pacemaker neurons seen through the magnifying glass of nonlinear analyses.

    PubMed

    Segundo, J P; Sugihara, G; Dixon, P; Stiber, M; Bersier, L F

    1998-12-01

    This communication describes the new information that may be obtained by applying nonlinear analytical techniques to neurobiological time-series. Specifically, we consider the sequence of interspike intervals Ti (the "timing") of trains recorded from synaptically inhibited crayfish pacemaker neurons. As reported earlier, different postsynaptic spike train forms (sets of timings with shared properties) are generated by varying the average rate and/or pattern (implying interval dispersions and sequences) of presynaptic spike trains. When the presynaptic train is Poisson (independent exponentially distributed intervals), the form is "Poisson-driven" (unperturbed and lengthened intervals succeed each other irregularly). When presynaptic trains are pacemaker (intervals practically equal), forms are either "p:q locked" (intervals repeat periodically), "intermittent" (mostly almost locked but disrupted irregularly), "phase walk throughs" (intermittencies with briefer regular portions), or "messy" (difficult to predict or describe succinctly). Messy trains are either "erratic" (some intervals natural and others lengthened irregularly) or "stammerings" (intervals are integral multiples of presynaptic intervals). The individual spike train forms were analysed using attractor reconstruction methods based on the lagged coordinates provided by successive intervals from the time-series Ti. Numerous models were evaluated in terms of their predictive performance by a trial-and-error procedure: the most successful model was taken as best reflecting the true nature of the system's attractor. Each form was characterized in terms of its dimensionality, nonlinearity and predictability. (1) The dimensionality of the underlying dynamical attractor was estimated by the minimum number of variables (coordinates Ti) required to model acceptably the system's dynamics, i.e. by the system's degrees of freedom. Each model tested was based on a different number of Ti; the smallest number whose predictions were judged successful provided the best integer approximation of the attractor's true dimension (not necessarily an integer). Dimensionalities from three to five provided acceptable fits. (2) The degree of nonlinearity was estimated by: (i) comparing the correlations between experimental results and data from linear and nonlinear models, and (ii) tuning model nonlinearity via a distance-weighting function and identifying the either local or global neighborhood size. Lockings were compatible with linear models and stammerings were marginal; nonlinear models were best for Poisson-driven, intermittent and erratic forms. (3) Finally, prediction accuracy was plotted against increasingly long sequences of intervals forecast: the accuracies for Poisson-driven, locked and stammering forms were invariant, revealing irregularities due to uncorrelated noise, but those of intermittent and messy erratic forms decayed rapidly, indicating an underlying deterministic process. The excellent reconstructions possible for messy erratic and for some intermittent forms are especially significant because of their relatively low dimensionality (around 4), high degree of nonlinearity and prediction decay with time. This is characteristic of chaotic systems, and provides evidence that nonlinear couplings between relatively few variables are the major source of the apparent complexity seen in these cases. 
This demonstration of different dimensions, degrees of nonlinearity and predictabilities provides rigorous support for the categorization of different synaptically driven discharge forms proposed earlier on the basis of more heuristic criteria. This has significant implications. (1) It demonstrates that heterogeneous postsynaptic forms can indeed be induced by manipulating a few presynaptic variables. (2) Each presynaptic timing induces a form with characteristic dimensionality, thus breaking up the preparation into subsystems such that the physical variables in each operate as one
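
    The attractor-reconstruction step, lagged-coordinate embedding followed by local prediction, is a standard technique and easy to sketch (generic method, not the authors' code):

    ```python
    # Lagged-coordinate embedding with nearest-neighbour forecasting (generic).
    import numpy as np

    def embed(T, dim):
        """Rows are lagged coordinate vectors (T_i, ..., T_{i+dim-1})."""
        return np.array([T[i:i + dim] for i in range(len(T) - dim)])

    def predict_next(T, dim=4, k=5):
        """Forecast the next interval from the k nearest neighbours in embedding space."""
        X = embed(T, dim)
        dists = np.linalg.norm(X[:-1] - X[-1], axis=1)
        nn = np.argsort(dists)[:k]
        return np.mean([T[i + dim] for i in nn])  # average the neighbours' successors

    # Toy series: a noisy locked timing (alternating intervals) forecasts well.
    rng = np.random.default_rng(2)
    T = np.tile([10.0, 14.0], 100) + rng.normal(0.0, 0.1, 200)
    print(predict_next(T))
    ```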

  7. Modeling biotic uptake by periphyton and transient hyporrheic storage of nitrate in a natural stream

    USGS Publications Warehouse

    Kim, Brian K.A.; Jackman, Alan P.; Triska, Frank J.

    1992-01-01

    To a convection-dispersion hydrologic transport model we coupled a transient storage submodel (Bencala, 1984) and a biotic uptake submodel based on Michaelis-Menten kinetics (Kim et al., 1990). Our purpose was threefold: (1) to simulate nitrate retention in response to changes in load in a third-order stream, (2) to differentiate biotic versus hydrologic factors in nitrate retention, and (3) to produce a research tool whose properties are consistent with laboratory and field observations. Hydrodynamic parameters were fitted from chloride concentrations during a 20-day chloride-nitrate coinjection (Bencala, 1984), and biotic uptake kinetics were based on flume studies by Kim et al. (1990) and Triska et al. (1983). Nitrate concentration from the 20-day coinjection experiment served as a base for model validation. The complete transport-retention model reasonably predicted the observed nitrate concentration, whereas simulations lacking either the transient storage submodel or the biotic uptake submodel predicted it poorly. Model simulations indicated that transient storage in channel and hyporrheic interstices dominated nitrate retention within the first 24 hours, whereas biotic uptake dominated thereafter. A sawtooth function for Vmax ranging from 0.10 to 0.17 μg NO3-N s−1 gAFDM−1 (grams ash-free dry mass) slightly underpredicted nitrate retention in simulations of 2-7 days. This result was reasonable since uptake by other nitrate-demanding processes was not included. The model demonstrated how ecosystem retention is an interaction between physical and biotic processes, and supports the validity of coupling separate hydrodynamic and reactive submodels to established solute transport models in biological studies of fluvial ecosystems.
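
    The biotic uptake term follows Michaelis-Menten kinetics with the Vmax range quoted above. A schematic single-reach sketch of storage exchange plus uptake (the real model is a convection-dispersion PDE; the exchange rate, storage-area ratio, and biomass density here are illustrative):

    ```python
    # Schematic single-reach sketch: transient storage exchange plus
    # Michaelis-Menten uptake. The real model is a convection-dispersion PDE;
    # alpha, the storage-area ratio, and the biomass density are illustrative.
    def step(C, C_s, dt, alpha=1e-4, A_ratio=0.3, Vmax=0.14, Km=80.0, biomass=5.0):
        """Advance channel (C) and storage-zone (C_s) NO3-N concentrations (ug/L).
        Vmax is in ug NO3-N s-1 gAFDM-1 (the 0.10-0.17 range quoted above);
        biomass is gAFDM per litre of channel water (invented)."""
        uptake = Vmax * biomass * C / (Km + C)   # Michaelis-Menten biotic uptake
        exchange = alpha * (C_s - C)             # transient storage exchange
        return C + dt * (exchange - uptake), C_s - dt * exchange / A_ratio

    C, C_s = 200.0, 50.0
    for _ in range(3600):          # one hour with 1-s Euler steps
        C, C_s = step(C, C_s, dt=1.0)
    print(round(C, 2), round(C_s, 2))
    ```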

  8. The MIT Integrated Global System Model: A facility for Assessing and Communicating Climate Change Uncertainty (Invited)

    NASA Astrophysics Data System (ADS)

    Prinn, R. G.

    2013-12-01

    The world is facing major challenges that create tensions between human development and environmental sustenance. In facing these challenges, computer models are invaluable tools for addressing the need for probabilistic approaches to forecasting. To illustrate this, I use the MIT Integrated Global System Model framework (IGSM; http://globalchange.mit.edu ). The IGSM consists of a set of coupled sub-models of global economic and technological development and the resultant emissions, and of the physical, dynamical, and chemical processes in the atmosphere, land, ocean, and ecosystems (natural and managed). Some of the sub-models have both complex and simplified versions available, with the choice of which version to use being guided by the questions being addressed. Some sub-models (e.g. urban air pollution) are reduced forms of complex ones, created by probabilistic collocation with polynomial chaos bases. Given the significant uncertainties in the model components, it is highly desirable that forecasts be probabilistic. We achieve this by running 400-member ensembles (Latin hypercube sampling) with different choices for key uncertain variables and processes within the human and natural system model components (pdfs of inputs estimated by model-observation comparisons, literature surveys, or expert elicitation). The IGSM has recently been used for probabilistic forecasts of climate, each using 400-member ensembles: one ensemble assumes no explicit climate mitigation policy and the others assume increasingly stringent policies involving stabilization of greenhouse gases at various levels. These forecasts indicate clearly that the greatest effect of these policies is to lower the probability of extreme changes. The value of such probability analyses for policy decision-making lies in their ability to compare relative (not just absolute) risks of various policies, which are less affected by the earth system model uncertainties. Given the uncertainties in forecasts, it is also clear that we need to evaluate policies based on their ability to lower risk, and to re-evaluate decisions over time as new knowledge is gained. Reference: R. G. Prinn, Development and Application of Earth System Models, Proceedings of the National Academy of Sciences, June 15, 2012, http://www.pnas.org/cgi/doi/10.1073/pnas.1107470109.
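
    The 400-member ensembles are built by Latin hypercube sampling. A minimal generic sampler of that kind (the technique, not the IGSM machinery):

    ```python
    # Minimal Latin hypercube sampler (generic technique, not IGSM code): each
    # of the 400 members draws every uncertain input from a different stratum.
    import numpy as np

    def latin_hypercube(n_members, n_params, rng=None):
        rng = rng or np.random.default_rng(0)
        u = np.empty((n_members, n_params))
        for j in range(n_params):
            # One random draw inside each of n_members equal strata, with a
            # random pairing of strata across parameters via permutation.
            u[:, j] = (rng.permutation(n_members) + rng.random(n_members)) / n_members
        return u

    ensemble = latin_hypercube(400, 5)  # e.g. 5 uncertain climate/economy inputs
    print(ensemble.shape, ensemble[:, 0].min(), ensemble[:, 0].max())
    ```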

  9. Modelling the root system architecture of Poaceae. Can we simulate integrated traits from morphological parameters of growth and branching?

    PubMed

    Pagès, Loïc; Picon-Cochard, Catherine

    2014-10-01

    Our objective was to calibrate a model of the root system architecture on several Poaceae species and to assess its value to simulate several 'integrated' traits measured at the root system level: specific root length (SRL), maximum root depth and root mass. We used the model ArchiSimple, made up of sub-models that represent and combine the basic developmental processes, and an experiment on 13 perennial grassland Poaceae species grown in 1.5-m-deep containers and sampled at two different dates after planting (80 and 120 d). Model parameters were estimated almost independently using small samples of the root systems taken at both dates. The relationships obtained for calibration validated the sub-models, and showed species effects on the parameter values. The simulations of integrated traits were relatively correct for SRL and were good for root depth and root mass at the two dates. We obtained some systematic discrepancies that were related to the slight decline of root growth in the last period of the experiment. Because the model allowed correct predictions on a large set of Poaceae species without global fitting, we consider that it is a suitable tool for linking root traits at different organisation levels. © 2014 INRA. New Phytologist © 2014 New Phytologist Trust.

  10. Dike/Drift Interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    E. Gaffiney

    2004-11-23

    This report presents and documents the model components and analyses that represent potential processes associated with propagation of a magma-filled crack (dike) migrating upward toward the surface, intersection of the dike with repository drifts, flow of magma in the drifts, and post-magma-emplacement effects on repository performance. The processes that describe upward migration of a dike and magma flow down the drift are referred to as the dike intrusion submodel. The post-magma emplacement processes are referred to as the post-intrusion submodel. Collectively, these submodels are referred to as a conceptual model for dike/drift interaction. The model components and analyses of the dike/drift interaction conceptual model provide the technical basis for assessing the potential impacts of an igneous intrusion on repository performance, including those features, events, and processes (FEPs) related to dike/drift interaction (Section 6.1).

  11. Cost-effectiveness of raising HDL cholesterol by adding prolonged-release nicotinic acid to statin therapy in the secondary prevention setting: a French perspective.

    PubMed

    Roze, S; Ferrières, J; Bruckert, E; Van Ganse, E; Chapman, M J; Liens, D; Renaudin, C

    2007-11-01

    To evaluate the cost-effectiveness of raising high-density lipoprotein cholesterol (HDL-C) with add-on nicotinic acid in statin-treated patients with coronary heart disease (CHD) and low HDL-C, from the French healthcare system perspective. Computer simulation economic modelling incorporating two decision-analytic submodels was used. The first submodel generated a cohort of 2000 patients and simulated lipid changes using baseline characteristics and treatment effects from the ARterial Biology for the Investigation of the Treatment Effects of Reducing cholesterol (ARBITER 2) study. Prolonged-release (PR) nicotinic acid (1 g/day) was added in patients with HDL-C < 40 mg/dl (1.03 mmol/l) on statin alone. The second submodel used standard Markov techniques to evaluate long-term clinical and economic outcomes based on Framingham risk estimates. Direct medical costs were accounted for from a third-party payer perspective (2004 euros) and discounted by 3%. Addition of PR nicotinic acid to statin therapy resulted in substantial health gain and increased life expectancy, at a cost well within the threshold (< 50,000 euros per life-year gained) considered good value for money in Western Europe. Raising HDL-C by adding PR nicotinic acid to statin therapy in CHD patients was cost-effective in France at a level considered to represent good value for money by reimbursement authorities in Europe. This strategy was highly cost-effective in CHD patients with type 2 diabetes.
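
    The second submodel uses standard Markov cohort techniques. A stylised sketch of such a model with 3% discounting (the states, transition probabilities, and costs below are invented, not the study's Framingham-based values):

    ```python
    # Stylised Markov cohort sketch: structure only. Transition probabilities,
    # costs, and the horizon are invented, not the study's Framingham-based values.
    import numpy as np

    P = np.array([[0.92, 0.05, 0.03],   # from CHD-stable
                  [0.00, 0.90, 0.10],   # from post-MI
                  [0.00, 0.00, 1.00]])  # dead (absorbing)
    annual_cost = np.array([800.0, 3500.0, 0.0])  # euros per person-year in state
    cohort = np.array([2000.0, 0.0, 0.0])         # simulated cohort, as in submodel 1

    total_cost = life_years = 0.0
    for year in range(40):
        disc = 1.03 ** -year                      # 3% discounting, as in the study
        total_cost += disc * (cohort @ annual_cost)
        life_years += disc * cohort[:2].sum()
        cohort = cohort @ P                       # advance the cohort one cycle
    print(total_cost / life_years)                # discounted euros per life-year
    ```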

  12. Maximizing the Potential of the Special Operations Forces and General Purpose Forces

    DTIC Science & Technology

    2014-05-22

    States (Washington, DC: Government Printing Office, 2013), I-15). 8 David Tucker and Christopher J. Lamb, United States Special Operations Forces... functionally it was a messy misapplication of forces that got the job done in a highly inefficient manner. 59 Mets, 114. 60 Chris Lamb, "Belief Systems and... Taking out the planes was a standoff operation, a job for a three-man team equipped with AT-4s (shoulder-fired rockets) and machine guns,' he

  13. Analysis of Composite Skin-Stiffener Debond Specimens Using a Shell/3D Modeling Technique and Submodeling

    NASA Technical Reports Server (NTRS)

    O'Brien, T. Kevin (Technical Monitor); Krueger, Ronald; Minguet, Pierre J.

    2004-01-01

    The application of a shell/3D modeling technique for the simulation of skin/stringer debond in a specimen subjected to tension and three-point bending was studied. The global structure was modeled with shell elements. A local three-dimensional model, extending to about three specimen thicknesses on either side of the delamination front was used to model the details of the damaged section. Computed total strain energy release rates and mixed-mode ratios obtained from shell/3D simulations were in good agreement with results obtained from full solid models. The good correlation of the results demonstrated the effectiveness of the shell/3D modeling technique for the investigation of skin/stiffener separation due to delamination in the adherents. In addition, the application of the submodeling technique for the simulation of skin/stringer debond was also studied. Global models made of shell elements and solid elements were studied. Solid elements were used for local submodels, which extended between three and six specimen thicknesses on either side of the delamination front to model the details of the damaged section. Computed total strain energy release rates and mixed-mode ratios obtained from the simulations using the submodeling technique were not in agreement with results obtained from full solid models.

  14. A hybrid system identification methodology for wireless structural health monitoring systems based on dynamic substructuring

    NASA Astrophysics Data System (ADS)

    Dragos, Kosmas; Smarsly, Kay

    2016-04-01

    System identification has been employed in numerous structural health monitoring (SHM) applications. Traditional system identification methods usually rely on centralized processing of structural response data to extract information on structural parameters. However, in wireless SHM systems the centralized processing of structural response data introduces a significant communication bottleneck. Exploiting the merits of decentralization and on-board processing power of wireless SHM systems, many system identification methods have been successfully implemented in wireless sensor networks. While several system identification approaches for wireless SHM systems have been proposed, little attention has been paid to obtaining information on the physical parameters (e.g. stiffness, damping) of the monitored structure. This paper presents a hybrid system identification methodology suitable for wireless sensor networks based on the principles of component mode synthesis (dynamic substructuring). A numerical model of the monitored structure is embedded into the wireless sensor nodes in a distributed manner, i.e. the entire model is segmented into sub-models, each embedded into one sensor node corresponding to the substructure the sensor node is assigned to. The parameters of each sub-model are estimated by extracting local mode shapes and by applying the equations of the Craig-Bampton method on dynamic substructuring. The proposed methodology is validated in a laboratory test conducted on a four-story frame structure to demonstrate the ability of the methodology to yield accurate estimates of stiffness parameters. Finally, the test results are discussed and an outlook on future research directions is provided.
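
    The reduction at the heart of the methodology is Craig-Bampton component mode synthesis: keep the interface degrees of freedom plus a few fixed-interface modes per substructure. A compact generic implementation (the paper estimates the sub-model parameters from measured local mode shapes; this sketch shows only the reduction itself):

    ```python
    # Compact Craig-Bampton reduction (generic method; the paper derives the
    # per-substructure parameters from measured local mode shapes).
    import numpy as np
    from scipy.linalg import eigh

    def craig_bampton(M, K, boundary, n_modes):
        """Reduce (M, K) to boundary DOFs plus n_modes fixed-interface modes."""
        n = M.shape[0]
        b = np.asarray(boundary)
        i = np.setdiff1d(np.arange(n), b)
        Kii, Kib = K[np.ix_(i, i)], K[np.ix_(i, b)]
        Mii = M[np.ix_(i, i)]
        psi = -np.linalg.solve(Kii, Kib)          # static constraint modes
        _, phi = eigh(Kii, Mii)                   # fixed-interface normal modes
        phi = phi[:, :n_modes]
        cols_m = np.arange(n_modes)
        cols_b = np.arange(n_modes, n_modes + b.size)
        T = np.zeros((n, n_modes + b.size))       # Craig-Bampton transformation
        T[np.ix_(i, cols_m)] = phi
        T[np.ix_(i, cols_b)] = psi
        T[np.ix_(b, cols_b)] = np.eye(b.size)
        return T.T @ M @ T, T.T @ K @ T

    # 4-DOF spring-mass chain; the last DOF is the substructure interface.
    K = np.array([[2., -1, 0, 0], [-1, 2, -1, 0], [0, -1, 2, -1], [0, 0, -1, 1]])
    M = np.eye(4)
    Mr, Kr = craig_bampton(M, K, boundary=[3], n_modes=2)
    print(np.sqrt(eigh(Kr, Mr, eigvals_only=True)))  # approximate natural frequencies
    ```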

  15. Examining responses of ecosystem carbon exchange to environmental changes using a particle filtering method

    NASA Astrophysics Data System (ADS)

    Yokozawa, M.

    2017-12-01

    Attention has been paid to agricultural fields, where ecosystem carbon exchange can be regulated by water management and residue treatments. However, little is known about the dynamic responses of these ecosystems to environmental changes. In this study we examined the responses of ecosystem carbon exchange in a paddy field, where CO2 emissions from microbial decomposition of organic matter are suppressed and CH4 is instead emitted under the flooded conditions of the rice growing season, with CO2 emission following in the fallow season after harvest. We conducted a model-data fusion analysis to examine the response of cropland-atmosphere carbon exchange to environmental variation. The model consists of two sub-models: a paddy rice growth sub-model and a soil decomposition sub-model. The crop growth sub-model mimics rice plant growth processes, including the formation of reproductive organs as well as leaf expansion; the soil decomposition sub-model simulates the decomposition of soil organic carbon. Assimilating data on the time changes in CO2 flux measured by the eddy covariance method, rice plant biomass, LAI, and final yield into the model, the parameters were calibrated using a stochastic optimization algorithm with a particle filter. The particle filter, one of the Monte Carlo filters, enables us to evaluate time changes in parameters based on the data observed up to a given time and to make predictions of the system. Iterative filtering and prediction with changing parameters and/or boundary conditions yield the time changes in the parameters governing crop production as well as carbon exchange. In this study, we focused on the parameters related to crop production and soil carbon storage. As a result, the calibrated model with the estimated parameters could accurately predict the NEE flux in subsequent years. The temperature sensitivities (Q10) of the decomposition rate of soil organic carbon (SOC) were estimated as 1.4 for the non-cultivation period and 2.9 for the cultivation period (submerged soil conditions in the flooding season). This suggests that the response of ecosystem carbon exchange differs because the SOC decomposition process is sensitive to environmental variation during the paddy rice cultivation period.
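
    The calibration engine is a bootstrap particle filter. A minimal generic sketch tracking a single slowly varying parameter from noisy flux-like observations (the toy observation model and all numbers are invented; the two regimes merely echo the 1.4 versus 2.9 Q10 contrast reported above):

    ```python
    # Minimal bootstrap particle filter tracking one slowly varying parameter
    # from noisy observations. The observation model (flux = k) and all numbers
    # are invented; the study filters a coupled crop-growth/SOC model.
    import numpy as np

    rng = np.random.default_rng(4)

    def particle_filter(obs, n=500, proc_sd=0.05, obs_sd=0.2):
        particles = rng.uniform(0.5, 3.0, n)       # prior over the parameter
        means = []
        for y in obs:
            particles += rng.normal(0.0, proc_sd, n)            # evolve (random walk)
            w = np.exp(-0.5 * ((y - particles) / obs_sd) ** 2)  # likelihood weights
            w /= w.sum()
            particles = particles[rng.choice(n, n, p=w)]        # resample
            means.append(particles.mean())
        return np.array(means)

    # Two regimes, echoing the 1.4 (fallow) vs 2.9 (flooded) Q10 contrast above:
    true_k = np.concatenate([np.full(50, 1.4), np.full(50, 2.9)])
    obs = true_k + rng.normal(0.0, 0.2, true_k.size)
    print(particle_filter(obs)[[0, 49, 99]])       # the estimate tracks the shift
    ```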

  16. Validation and Calibration of Nuclear Thermal Hydraulics Multiscale Multiphysics Models - Subcooled Flow Boiling Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anh Bui; Nam Dinh; Brian Williams

    In addition to the validation data plan, development of advanced techniques for calibration and validation of complex multiscale, multiphysics nuclear reactor simulation codes is a main objective of the CASL VUQ plan. Advanced modeling of LWR systems normally involves a range of physico-chemical models describing multiple interacting phenomena, such as thermal hydraulics, reactor physics, coolant chemistry, etc., which occur over a wide range of spatial and temporal scales. To a large extent, the accuracy of (and uncertainty in) overall model predictions is determined by the correctness of various sub-models, which are not conservation-law based, but empirically derived from measurement data. Such sub-models normally require extensive calibration before the models can be applied to the analysis of real reactor problems. This work demonstrates a case study of calibration of a common model of subcooled flow boiling, which is an important multiscale, multiphysics phenomenon in LWR thermal hydraulics. The calibration process is based on a new strategy of model-data integration, in which all sub-models are simultaneously analyzed and calibrated using multiple sets of data of different types. Specifically, both data on large-scale distributions of void fraction and fluid temperature and data on the small-scale physics of wall evaporation were used simultaneously in this work's calibration. In a departure from the traditional (or common-sense) practice of tuning/calibrating complex models, a modern calibration technique based on statistical modeling and Bayesian inference was employed, which allowed simultaneous calibration of multiple sub-models (and related parameters) using different datasets. Quality of data (relevancy, scalability, and uncertainty) could be taken into consideration in the calibration process. This work presents a step forward in the development and realization of the "CIPS Validation Data Plan" at the Consortium for Advanced Simulation of LWRs to enable quantitative assessment of the CASL modeling of the Crud-Induced Power Shift (CIPS) phenomenon, in particular, and the CASL advanced predictive capabilities, in general. This report is prepared for the Department of Energy's Consortium for Advanced Simulation of LWRs program's VUQ Focus Area.

  17. Modeling concentration patterns of agricultural and urban micropollutants in surface waters in catchment of mixed land use

    NASA Astrophysics Data System (ADS)

    Stamm, C.; Scheidegger, R.; Bader, H. P.

    2012-04-01

    Organic micropollutants detected in surface waters can originate from agricultural and urban sources. Depending on the use of the compounds, the temporal loss patterns vary substantially. Therefore, models that simulate water quality in watersheds of mixed land use have to account for all relevant sources. We present here simulation results of a transport model that describes the dynamics of several biocidal compounds as well as the behaviour of human pharmaceuticals. The model consists of the sub-model Rexpo, which simulates the transfer of the compounds from the point of application to the stream in a semi-lumped manner. The river sub-model, which is programmed in the Aquasim software, describes the fate of the compounds in the stream. Both sub-models are process-based. The Rexpo sub-model was calibrated at the scale of a small catchment of 25 km2, which is inhabited by about 12,000 people. Based on the resulting model parameters, the loss dynamics of two herbicides (atrazine, isoproturon) and a compound of mixed urban and agricultural use (diuron) were predicted for two nested catchments of 212 and 1696 km2, respectively. The model output was compared to observed time series of concentrations and loads obtained for the entire year 2009. Additionally, the fate of two pharmaceuticals with constant input (carbamazepine, diclofenac) was simulated to improve the understanding of possible degradation processes. The simulated loads and concentrations of the biocidal compounds differed by a factor of 2 to 3 from the observations. In general, the seasonal patterns were well captured by the model. However, a detailed analysis of the seasonality revealed substantial input uncertainty in the application of the compounds. The model results also demonstrated that the semi-lumped approach of the Rexpo sub-model was sufficient for the dynamics of rain-driven losses of biocidal compounds. Only for simulating the photolytic degradation of diclofenac in the stream was the detailed representation of in-stream routing essential. Overall, the study demonstrated that the simulation of micropollutants at the watershed scale can be strongly hampered by input uncertainty regarding the use of the chemicals. Under such conditions, the level of process representation in the Rexpo sub-model is superfluous. For practical applications, one should address the question of how to simplify the approach while still maintaining the essential parts.

  18. Dynamics of melt crystal interface and thermal stresses in rotational Bridgman crystal growth process

    NASA Astrophysics Data System (ADS)

    Ma, Ronghui; Zhang, Hui; Larson, David J.; Mandal, Krishna C.

    2004-05-01

    The growth process of potassium bromide (KBr) single crystals in a vertical Bridgman furnace has been studied numerically using an integrated model that combines formulations of global heat transfer and thermal elastic stresses. The global heat transfer sub-model accounts for conduction, convection and interface movement in the multiphase system. Using the elastic stress sub-model, thermal stresses in the growing crystal caused by the non-uniform temperature distribution are predicted. Special attention is directed to the interaction between the crystal and the ampoule. The global temperature distribution in the furnace, the flow pattern in the melt and the interface shapes are presented. We also investigate the effects of natural convection and rotational forced convection on the shape of the growth fronts. Furthermore, the state of the thermal stresses in the crystal is studied to understand the plastic deformation mechanisms during the cooling process. The influence of the wall contact on thermal stresses is also addressed.

  19. Electro-thermo-optical simulation of vertical-cavity surface-emitting lasers

    NASA Astrophysics Data System (ADS)

    Smagley, Vladimir Anatolievich

    A three-dimensional electro-thermal simulator based on the double-layer approximation for the active region was coupled to optical gain and optical field numerical simulators to provide a self-consistent steady-state solution of VCSEL current-voltage and current-output power characteristics. A methodology of VCSEL modeling was established and applied to model a standard 850-nm VCSEL based on a GaAs active region and a novel intracavity-contacted 400-nm GaN-based VCSEL. Results of the GaAs VCSEL simulation were in good agreement with experiment. Correlations between current injection and radiative mode profiles were observed. Physical sub-models of transport, optical gain and cavity optical field were developed. Carrier transport through the DBRs was studied. The problem of optical fields in the VCSEL cavity was treated numerically by the effective frequency method. All the sub-models were connected through a spatially inhomogeneous rate-equation system. It was shown that the conventional uncoupled analysis of each separate physical phenomenon would be insufficient to describe VCSEL operation.

  20. Thermal Model Development for Ares I-X

    NASA Technical Reports Server (NTRS)

    Amundsen, Ruth M.; DelCorso, Joe

    2008-01-01

    Thermal analysis for the Ares I-X vehicle has involved extensive thermal model integration, since thermal models of vehicle elements came from several different NASA and industry organizations. Many valuable lessons were learned in terms of model integration and validation. Modeling practices such as submodel, analysis group and symbol naming were standardized to facilitate the later model integration. Upfront coordination of coordinate systems, timelines, units, symbols and case scenarios was very helpful in minimizing integration rework. A process for model integration was developed that included pre-integration runs and basic checks of both models, and a step-by-step process to efficiently integrate one model into another. Model logic was used extensively to create scenarios and timelines for avionics and air flow activation. Efficient methods of model restart between case scenarios were developed. Standardization of software version and even compiler version between organizations was found to be essential. An automated method for applying aeroheating to the full integrated vehicle model, including submodels developed by other organizations, was developed.

  1. A global model for steady state and transient S.I. engine heat transfer studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bohac, S.V.; Assanis, D.N.; Baker, D.M.

    1996-09-01

    A global, systems-level model which characterizes the thermal behavior of internal combustion engines is described in this paper. Based on resistor-capacitor thermal networks, either steady-state or transient thermal simulations can be performed. A two-zone, quasi-dimensional spark-ignition engine simulation is used to determine in-cylinder gas temperature and convection coefficients. Engine heat fluxes and component temperatures can subsequently be predicted from specification of general engine dimensions, materials, and operating conditions. Emphasis has been placed on minimizing the number of model inputs and keeping them as simple as possible to make the model practical and useful as an early design tool. The success of the global model depends on properly scaling the general engine inputs to accurately model engine heat flow paths across families of engine designs. The development and validation of suitable, scalable submodels is described in detail in this paper. Simulation sub-models and overall system predictions are validated with data from two spark ignition engines. Several sensitivity studies are performed to determine the most significant heat transfer paths within the engine and exhaust system. Overall, it has been shown that the model is a powerful tool in predicting steady-state heat rejection and component temperatures, as well as transient component temperatures.
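
    A resistor-capacitor thermal network of the kind described here reduces to a small system of ODEs. The sketch below advances such a network with an explicit Euler step; the node layout, conductances, and heat inputs are hypothetical placeholders rather than the paper's engine network.

    ```python
    import numpy as np

    def step_rc_network(T, C, G, Q, dt):
        """Advance a lumped resistor-capacitor thermal network one explicit Euler step.
        T: node temperatures [K]; C: node heat capacities [J/K];
        G: conductance matrix [W/K], G[i][j] = 1/R between nodes i and j;
        Q: external heat input per node [W] (e.g. in-cylinder convection)."""
        n = len(T)
        dTdt = np.zeros(n)
        for i in range(n):
            exchange = sum(G[i][j] * (T[j] - T[i]) for j in range(n) if j != i)
            dTdt[i] = (exchange + Q[i]) / C[i]
        return T + dt * dTdt
    ```

    Marching this step with time-varying Q gives a transient simulation, while iterating until dT/dt vanishes recovers the steady-state solution, matching the two modes the abstract mentions.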

  2. Validation and Sensitivity Analysis of a New Atmosphere-Soil-Vegetation Model.

    NASA Astrophysics Data System (ADS)

    Nagai, Haruyasu

    2002-02-01

    This paper describes the details, validation, and sensitivity analysis of a new atmosphere-soil-vegetation model. The model consists of one-dimensional multilayer submodels for atmosphere, soil, and vegetation, and radiation schemes for the transmission of solar and longwave radiation in the canopy. The atmosphere submodel solves prognostic equations for horizontal wind components, potential temperature, specific humidity, fog water, and turbulence statistics by using a second-order closure model. The soil submodel calculates the transport of heat, liquid water, and water vapor. The vegetation submodel evaluates the heat and water budget on the leaf surface and the downward liquid water flux. The model performance was tested using measured data from the Cooperative Atmosphere-Surface Exchange Study (CASES). Calculated ground surface fluxes were mainly compared with observations at a winter wheat field, concerning the diurnal variation and the change over the 32 days of the first CASES field program in 1997 (CASES-97). The measured surface fluxes did not satisfy the energy balance, so the sensible and latent heat fluxes obtained by the eddy correlation method were corrected. By using options of the solar radiation scheme, which addresses the effect of the direct solar radiation component, the calculated albedo agreed well with the observations. Some sensitivity analyses of model settings were also performed. Model calculations of surface fluxes and surface temperature were in good agreement with measurements as a whole.

  3. Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models

    USGS Publications Warehouse

    Anderson, Ryan; Clegg, Samuel M.; Frydenvang, Jens; Wiens, Roger C.; McLennan, Scott M.; Morris, Richard V.; Ehlmann, Bethany L.; Dyar, M. Darby

    2017-01-01

    Accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibration methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element’s emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple “sub-model” method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then “blending” these “sub-models” into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. The sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.
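
    A minimal sketch of the blending idea, assuming scikit-learn's PLSRegression and hypothetical composition ranges; the actual ChemCam sub-model ranges, component counts, and blending weights differ.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    # Hypothetical composition ranges (wt.% of one element) defining the sub-models
    RANGES = {"low": (0, 15), "mid": (5, 50), "high": (30, 100)}

    def train_submodels(spectra, compositions, n_components=7):
        """Train a full-range PLS model plus one PLS model per composition range."""
        models = {"full": PLSRegression(n_components).fit(spectra, compositions)}
        for name, (lo, hi) in RANGES.items():
            mask = (compositions >= lo) & (compositions <= hi)
            models[name] = PLSRegression(n_components).fit(spectra[mask], compositions[mask])
        return models

    def blended_predict(models, spectrum):
        """Predict with the full-range model first, then combine the sub-models
        whose training ranges contain that initial estimate."""
        x = spectrum.reshape(1, -1)
        first = float(models["full"].predict(x))
        in_range = [float(models[m].predict(x))
                    for m, (lo, hi) in RANGES.items() if lo <= first <= hi]
        return float(np.mean(in_range)) if in_range else first
    ```

    A weighted blend across overlapping ranges, rather than the plain average used here, is a natural refinement of this routing principle.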

  4. SBML Level 3 package: Hierarchical Model Composition, Version 1 Release 3

    PubMed Central

    Smith, Lucian P.; Hucka, Michael; Hoops, Stefan; Finney, Andrew; Ginkel, Martin; Myers, Chris J.; Moraru, Ion; Liebermeister, Wolfram

    2017-01-01

    Summary: Constructing a model in a hierarchical fashion is a natural approach to managing model complexity, and offers additional opportunities such as the potential to re-use model components. The SBML Level 3 Version 1 Core specification does not directly provide a mechanism for defining hierarchical models, but it does provide a mechanism for SBML packages to extend the Core specification and add additional syntactical constructs. The SBML Hierarchical Model Composition package for SBML Level 3 adds the necessary features to SBML to support hierarchical modeling. The package enables a modeler to include submodels within an enclosing SBML model, delete unneeded or redundant elements of that submodel, replace elements of that submodel with elements of the containing model, and replace elements of the containing model with elements of the submodel. In addition, the package defines an optional “port” construct, allowing a model to be defined with suggested interfaces between hierarchical components; modelers can choose to use these interfaces, but they are not required to do so and can still interact directly with model elements if they so choose. Finally, the SBML Hierarchical Model Composition package is defined in such a way that a hierarchical model can be “flattened” to an equivalent, non-hierarchical version that uses only plain SBML constructs, thus enabling software tools that do not yet support hierarchy to nevertheless work with SBML hierarchical models. PMID:26528566

  5. A COST-EFFECTIVENESS MODEL FOR THE ANALYSIS OF TITLE I ESEA PROJECT PROPOSALS, PART I-VII.

    ERIC Educational Resources Information Center

    ABT, CLARK C.

    Seven separate reports describe an overview of a cost-effectiveness model and five submodels for evaluating the effectiveness of Elementary and Secondary Education Act Title I proposals. The design for the model attempts a quantitative description of education systems which may be programmed as a computer simulation to indicate the impact of a Title I…

  6. Model behavior and sensitivity in an application of the cohesive bed component of the community sediment transport modeling system for the York River estuary, VA, USA

    USGS Publications Warehouse

    Fall, Kelsey A.; Harris, Courtney K.; Friedrichs, Carl T.; Rinehimer, J. Paul; Sherwood, Christopher R.

    2014-01-01

    The Community Sediment Transport Modeling System (CSTMS) cohesive bed sub-model that accounts for erosion, deposition, consolidation, and swelling was implemented in a three-dimensional domain to represent the York River estuary, Virginia. The objectives of this paper are to (1) describe the application of the three-dimensional hydrodynamic York Cohesive Bed Model, (2) compare calculations to observations, and (3) investigate sensitivities of the cohesive bed sub-model to user-defined parameters. Model results for summer 2007 showed good agreement with tidal-phase averaged estimates of sediment concentration, bed stress, and current velocity derived from Acoustic Doppler Velocimeter (ADV) field measurements. An important step in implementing the cohesive bed model was specification of both the initial and equilibrium critical shear stress profiles, in addition to choosing other parameters like the consolidation and swelling timescales. This model promises to be a useful tool for investigating the fundamental controls on bed erodibility and settling velocity in the York River, a classical muddy estuary, provided that appropriate data exists to inform the choice of model parameters.

  7. Product shipping information using graceful labeling on undirected tree graph approach

    NASA Astrophysics Data System (ADS)

    Kuan, Yoong Kooi; Ghani, Ahmad Termimi Ab

    2017-08-01

    Product shipping information is the information related to an ordered product that is ready to be shipped to a foreign customer's company; it serves as irrefutable proof in black and white, delivered to the local manufacturer by e-mail. This messy and unordered list of information is stored in e-mail folders by the people in charge, and these folders do not collate the information properly. In this paper, an algorithm is therefore proposed to rearrange the messy information from a path graph structure into a concise caterpillar graph that achieves a graceful labeling. The final gracefully labeled caterpillar graph contains the full list of information together with its numbering, which helps people retrieve the information quickly for the shipping arrangement procedure.
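
    Graceful labeling is easiest to see on the path case that the paper starts from: with m edges, drawing vertex labels alternately from the low and high ends of 0..m makes the edge differences take the values m, m-1, ..., 1 exactly once. A minimal sketch follows; the caterpillar construction in the paper itself is more involved.

    ```python
    def graceful_path_labels(n_vertices):
        """Gracefully label a path v0-v1-...-v(n-1). Vertex labels alternate
        between the low end (0, 1, 2, ...) and the high end (m, m-1, ...) of
        0..m, where m = n_vertices - 1 is the number of edges, so the absolute
        differences along the path hit each value m, m-1, ..., 1 exactly once."""
        m = n_vertices - 1
        low, high = 0, m
        labels = []
        for i in range(n_vertices):
            if i % 2 == 0:
                labels.append(low)
                low += 1
            else:
                labels.append(high)
                high -= 1
        return labels

    # Example: six shipping records arranged on a path
    labels = graceful_path_labels(6)                          # [0, 5, 1, 4, 2, 3]
    edges = [abs(a - b) for a, b in zip(labels, labels[1:])]  # [5, 4, 3, 2, 1]
    ```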

  8. System-Wide Water Resources Program Nutrient Sub-Model (SWWRP-NSM) Version 1.1

    DTIC Science & Technology

    2008-09-01

    species including crops, native grasses, and trees. The process descriptions utilize a single plant growth model to simulate all types of land covers... characteristics: • Multi-species, multi-phase, and multi-reaction system • Fast (equilibrium-based) and slow (non-equilibrium-based or rate-based)... • Transformation and loading of N and P species in the overland flow • Simulation of the N and P cycle in the water column (both overland and

  9. Distributed Prognostics based on Structural Model Decomposition

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Bregon, Anibal; Roychoudhury, I.

    2014-01-01

    Within systems health management, prognostics focuses on predicting the remaining useful life of a system. In the model-based prognostics paradigm, physics-based models are constructed that describe the operation of a system and how it fails. Such approaches consist of an estimation phase, in which the health state of the system is first identified, and a prediction phase, in which the health state is projected forward in time to determine the end of life. Centralized solutions to these problems are often computationally expensive, do not scale well as the size of the system grows, and introduce a single point of failure. In this paper, we propose a novel distributed model-based prognostics scheme that formally describes how to decompose both the estimation and prediction problems into independent local subproblems whose solutions may be easily composed into a global solution. The decomposition of the prognostics problem is achieved through structural decomposition of the underlying models. The decomposition algorithm creates from the global system model a set of local submodels suitable for prognostics. Independent local estimation and prediction problems are formed based on these local submodels, resulting in a scalable distributed prognostics approach that allows the local subproblems to be solved in parallel, thus offering increases in computational efficiency. Using a centrifugal pump as a case study, we perform a number of simulation-based experiments to demonstrate the distributed approach, compare the performance with a centralized approach, and establish its scalability. Index Terms: model-based prognostics, distributed prognostics, structural model decomposition

  10. Oscillons in a perturbed signum-Gordon model

    NASA Astrophysics Data System (ADS)

    Klimas, P.; Streibel, J. S.; Wereszczynski, A.; Zakrzewski, W. J.

    2018-04-01

    We study various properties of a perturbed signum-Gordon model, which has been obtained through the dimensional reduction of the so-called `first BPS submodel of the Skyrme model'. This study is motivated by the observation that the first BPS submodel of the Skyrme model may be partially responsible for the good qualities of the rational map ansatz approximation to the solutions of the Skyrme model. We investigate the existence, stability and various properties of oscillons and other time-dependent states in this perturbed signum-Gordon model.

  11. A mixture of sparse coding models explaining properties of face neurons related to holistic and parts-based processing

    PubMed Central

    2017-01-01

    Experimental studies have revealed evidence of both parts-based and holistic representations of objects and faces in the primate visual system. However, it is still a mystery how such seemingly contradictory types of processing can coexist within a single system. Here, we propose a novel theory called mixture of sparse coding models, inspired by the formation of category-specific subregions in the inferotemporal (IT) cortex. We developed a hierarchical network that constructed a mixture of two sparse coding submodels on top of a simple Gabor analysis. The submodels were each trained with face or non-face object images, which resulted in separate representations of facial parts and object parts. Importantly, evoked neural activities were modeled by Bayesian inference, which had a top-down explaining-away effect that enabled recognition of an individual part to depend strongly on the category of the whole input. We show that this explaining-away effect was indeed crucial for the units in the face submodel to exhibit significant selectivity to face images over object images in a similar way to actual face-selective neurons in the macaque IT cortex. Furthermore, the model explained, qualitatively and quantitatively, several tuning properties to facial features found in the middle patch of face processing in IT as documented by Freiwald, Tsao, and Livingstone (2009). These included, in particular, tuning to only a small number of facial features that were often related to geometrically large parts like face outline and hair, preference and anti-preference of extreme facial features (e.g., very large/small inter-eye distance), and reduction of the gain of feature tuning for partial face stimuli compared to whole face stimuli. Thus, we hypothesize that the coding principle of facial features in the middle patch of face processing in the macaque IT cortex may be closely related to mixture of sparse coding models. PMID:28742816

  12. A mixture of sparse coding models explaining properties of face neurons related to holistic and parts-based processing.

    PubMed

    Hosoya, Haruo; Hyvärinen, Aapo

    2017-07-01

    Experimental studies have revealed evidence of both parts-based and holistic representations of objects and faces in the primate visual system. However, it is still a mystery how such seemingly contradictory types of processing can coexist within a single system. Here, we propose a novel theory called mixture of sparse coding models, inspired by the formation of category-specific subregions in the inferotemporal (IT) cortex. We developed a hierarchical network that constructed a mixture of two sparse coding submodels on top of a simple Gabor analysis. The submodels were each trained with face or non-face object images, which resulted in separate representations of facial parts and object parts. Importantly, evoked neural activities were modeled by Bayesian inference, which had a top-down explaining-away effect that enabled recognition of an individual part to depend strongly on the category of the whole input. We show that this explaining-away effect was indeed crucial for the units in the face submodel to exhibit significant selectivity to face images over object images in a similar way to actual face-selective neurons in the macaque IT cortex. Furthermore, the model explained, qualitatively and quantitatively, several tuning properties to facial features found in the middle patch of face processing in IT as documented by Freiwald, Tsao, and Livingstone (2009). These included, in particular, tuning to only a small number of facial features that were often related to geometrically large parts like face outline and hair, preference and anti-preference of extreme facial features (e.g., very large/small inter-eye distance), and reduction of the gain of feature tuning for partial face stimuli compared to whole face stimuli. Thus, we hypothesize that the coding principle of facial features in the middle patch of face processing in the macaque IT cortex may be closely related to mixture of sparse coding models.

  13. Spring hydrograph simulation of karstic aquifers: Impacts of variable recharge area, intermediate storage and memory effects

    NASA Astrophysics Data System (ADS)

    Hosseini, Seiyed Mossa; Ataie-Ashtiani, Behzad; Simmons, Craig T.

    2017-09-01

    A simple conceptual rainfall-runoff model is proposed for the estimation of groundwater balance components in complex karst aquifers. In the proposed model, the effects of the memory length of the different karst flow systems (base flow, intermediate flow, and quick flow) and of the time variation of the recharge area (RA) during a hydrological year were investigated. The model consists of three sub-models used to simulate the daily spring discharge: soil moisture balance (SMB), epikarst balance (EPB), and groundwater balance (GWB). The SMB and EPB sub-models utilize the mass conservation equation to compute the variation of moisture storage in the soil cover and epikarst, respectively. The GWB sub-model computes the spring discharge hydrograph through three parallel linear reservoirs for base flow, intermediate flow, and quick flow. Three antecedent recharge indices are defined and embedded in the model structure to deal with the memory effect of the three karst flow systems on antecedent recharge flow. The Sasan karst aquifer, located in the semi-arid region of south-west Iran and covered by a continuous long-term (21-year) record of daily meteorological and discharge data, is used to describe the model calibration and validation procedures. The effects of temporal variations of the RA of karst formations during the hydrological year, namely time-invariant RA, two RAs (winter and summer), four RAs (seasonal), and twelve RAs (monthly), are assessed to determine their impact on model efficiency. Results indicated that the proposed model with monthly-variant RA is able to reproduce acceptable simulation results based on the modified Kling-Gupta efficiency (KGE = -0.83). The results of a density-based global sensitivity analysis for the dry (June to September) and wet (October to May) periods reveal the dominant influence of RA (with sensitivity indices of 0.89 and 0.93, respectively) in spring discharge simulation. The sensitivity of the simulated spring discharge to the memory effect of the different karst formations is greater during the dry period than during the wet period. In addition, the results reveal the important role of the intermediate-flow system in the hydrological modeling of karst systems during the wet period. Precise estimation of groundwater budgets, for better decision making regarding water supplies from complex karst systems with long memory effects, can be considerably improved by use of the proposed model.
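
    The three-parallel-linear-reservoir core of the GWB sub-model can be sketched in a few lines. The recession constants, recharge fractions, and explicit Euler time stepping below are illustrative choices, not the calibrated Sasan values.

    ```python
    import numpy as np

    def spring_discharge(recharge, k_base=0.01, k_inter=0.1, k_quick=0.5,
                         f_base=0.5, f_inter=0.3, f_quick=0.2, dt=1.0):
        """Route daily recharge through three parallel linear reservoirs
        (base, intermediate, and quick flow). Each reservoir obeys
        dS/dt = f*R - k*S and discharges Q = k*S; the spring hydrograph is
        the sum of the three. k [1/day] and f [-] are hypothetical."""
        S = np.zeros(3)
        k = np.array([k_base, k_inter, k_quick])
        f = np.array([f_base, f_inter, f_quick])
        Q = []
        for R in recharge:
            S += dt * (f * R - k * S)   # explicit Euler water balance per reservoir
            Q.append(float(np.sum(k * S)))
        return np.array(Q)
    ```

    The widely separated recession constants are what produce the long base-flow memory and short quick-flow memory that the antecedent recharge indices in the paper are designed to capture.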

  14. Monte-Carlo-based uncertainty propagation with hierarchical models—a case study in dynamic torque

    NASA Astrophysics Data System (ADS)

    Klaus, Leonard; Eichstädt, Sascha

    2018-04-01

    For a dynamic calibration, a torque transducer is described by a mechanical model, and the corresponding model parameters are to be identified from measurement data. A measuring device for the primary calibration of dynamic torque, and a corresponding model-based calibration approach, have recently been developed at PTB. The complete mechanical model of the calibration set-up is very complex, and involves several calibration steps—making a straightforward implementation of a Monte Carlo uncertainty evaluation tedious. With this in mind, we here propose to separate the complete model into sub-models, with each sub-model being treated with individual experiments and analysis. The uncertainty evaluation for the overall model then has to combine the information from the sub-models in line with Supplement 2 of the Guide to the Expression of Uncertainty in Measurement. In this contribution, we demonstrate how to carry this out using the Monte Carlo method. The uncertainty evaluation involves various input quantities of different origin and the solution of a numerical optimisation problem.
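
    A minimal sketch of the Monte Carlo propagation step, assuming two hypothetical sub-model results (a torsional stiffness and a mass moment of inertia with Gaussian uncertainties) feeding a simple combined quantity; the real transducer model and the sub-model output distributions are more involved.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical sub-model results: each sub-model analysis yields an estimate
    # and a standard uncertainty for one parameter of the transducer model.
    stiffness = rng.normal(2.4e4, 3.0e2, 100_000)    # N*m/rad, from sub-model A
    inertia = rng.normal(1.6e-3, 2.0e-5, 100_000)    # kg*m^2, from sub-model B

    # Combined quantity, e.g. the torsional resonance frequency f = sqrt(k/J)/(2*pi)
    f_res = np.sqrt(stiffness / inertia) / (2.0 * np.pi)

    estimate = np.mean(f_res)
    std_unc = np.std(f_res, ddof=1)
    ci_low, ci_high = np.percentile(f_res, [2.5, 97.5])  # 95 % coverage interval
    ```

    Propagating the sub-model samples jointly through the combined model, rather than summing variances analytically, is exactly the Supplement 2 style of evaluation the abstract refers to, and it handles the nonlinearity of the combined quantity automatically.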

  15. Experiments and Model Development for the Investigation of Sooting and Radiation Effects in Microgravity Droplet Combustion

    NASA Technical Reports Server (NTRS)

    Choi, Mun Young; Yozgatligil, Ahmet; Dryer, Frederick L.; Kazakov, Andrei; Dobashi, Ritsu

    2001-01-01

    Today, despite efforts to develop and utilize natural gas and renewable energy sources, nearly 97% of the energy used for transportation is derived from combustion of liquid fuels, principally derived from petroleum. While society continues to rely on liquid petroleum-based fuels as a major energy source in spite of their finite supply, it is of paramount importance to maximize the efficiency and minimize the environmental impact of the devices that burn these fuels. The development of improved energy conversion systems, having higher efficiencies and lower emissions, is central to meeting both local and regional air quality standards. This development requires improvements in computational design tools for applied energy conversion systems, which in turn requires more robust sub-model components for combustion chemistry, transport, energy transport (including radiation), and pollutant emissions (soot formation and burnout). The study of isolated droplet burning as a unidimensional, time dependent model diffusion flame system facilitates extensions of these mechanisms to include fuel molecular sizes and pollutants typical of conventional and alternative liquid fuels used in the transportation sector. Because of the simplified geometry, sub-model components from the most detailed to those reduced to sizes compatible for use in multi-dimensional, time dependent applied models can be developed, compared and validated against experimental diffusion flame processes, and tested against one another. Based on observations in microgravity experiments on droplet combustion, it appears that the formation and lingering presence of soot within the fuel-rich region of isolated droplets can modify the burning rate, flame structure and extinction, soot aerosol properties, and the effective thermophysical properties. These observations led to the belief that perhaps one of the most important outstanding contributions of microgravity droplet combustion is the observation that in the absence of asymmetrical forced and natural convection, a soot shell is formed between the droplet surface and the flame, exerting an influence on the droplet combustion response far greater than previously recognized. The effects of soot on droplet burning parameters, including burning rate, soot shell dynamics, flame structure, and extinction phenomena provide significant testing parameters for studying the structure and coupling of soot models with other sub-model components.

  16. Accretion physics: It's not U, it's B

    NASA Astrophysics Data System (ADS)

    Miller, Jon

    2017-03-01

    Black holes grow by accreting mass, but the process is messy and redistributes gas and energy into their environments. New evidence shows that magnetic processes mediate both the accretion and ejection of matter.

  17. No More Green Thumbs!

    ERIC Educational Resources Information Center

    Bland, Judith A.

    1977-01-01

    An alternative method of bacterial spore staining using malachite green is described. This technique is designed to save time and expense by a less messy procedure. Advantages and adaptations of the technique are also given. (MR)

  18. Non-uniform overland flow-infiltration model for roadside swales

    NASA Astrophysics Data System (ADS)

    García-Serrana, María; Gulliver, John S.; Nieber, John L.

    2017-09-01

    There is a need to quantify the hydrologic performance of vegetated roadside swales (drainage ditches) as stormwater control measures (SCMs). To quantify their infiltration performance in both the side slope and the channel of the swale, a model has been developed that couples a Green-Ampt-Mein-Larson (GAML) infiltration submodel with kinematic wave submodels for both overland flow down the side slope and open channel flow in the ditch. The coupled GAML and overland flow submodels have been validated using data collected in twelve simulated runoff tests on three different highways located in the Minneapolis-St. Paul metropolitan area, MN. The percentage of the total water infiltrated into the side slope is considerably greater than into the channel. Thus, for the typical design found on highways, the side slope of a roadside swale is the main component contributing to the loss of runoff by infiltration, and the channel primarily conveys the water that runs off the side slope. Finally, as demonstrated in field observations and the model, the fraction of the runoff/rainfall infiltrated (Vi∗) into the roadside swale appears to increase with a dimensionless saturated hydraulic conductivity (Ks∗), which is a function of the saturated hydraulic conductivity, rainfall intensity, and dimensions of the swale and contributing road surface. For design purposes, the relationship between Vi∗ and Ks∗ can provide a rough estimate of the fraction of runoff/rainfall infiltrated from the few essential parameters that appear to dominate the results.
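
    The GAML infiltration piece is compact enough to sketch. Below, infiltration is rainfall-limited before ponding and capacity-limited after, per Mein-Larson; the soil parameters are hypothetical, and the coupling to the kinematic wave submodels is omitted.

    ```python
    import numpy as np

    def green_ampt_rate(F, Ks, psi, dtheta):
        """Green-Ampt infiltration capacity [mm/h] given cumulative infiltration
        F [mm], saturated conductivity Ks [mm/h], wetting-front suction psi [mm],
        and moisture deficit dtheta [-]."""
        return Ks * (1.0 + psi * dtheta / max(F, 1e-6))

    def infiltrate(rain, Ks=10.0, psi=110.0, dtheta=0.3, dt=1.0 / 60.0):
        """Mein-Larson extension: before ponding, infiltration equals the rainfall
        supply; after ponding it is capacity-limited. `rain` holds intensities
        [mm/h] per time step of length dt [h]."""
        F, f_actual = 0.0, []
        for r in rain:
            cap = green_ampt_rate(F, Ks, psi, dtheta)
            f = min(r, cap)          # ponding check: supply- vs capacity-limited
            F += f * dt
            f_actual.append(f)
        return np.array(f_actual)
    ```

    In the full model, the non-infiltrated excess r - f would feed the kinematic wave routing down the side slope and into the channel.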

  19. Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Ryan B.; Clegg, Samuel M.; Frydenvang, Jens

    We report that accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibration methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element’s emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple “sub-model” method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then “blending” these “sub-models” into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. Lastly, the sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.

  20. Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models

    DOE PAGES

    Anderson, Ryan B.; Clegg, Samuel M.; Frydenvang, Jens; ...

    2016-12-15

    We report that accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibration methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element’s emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple “sub-model” method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then “blending” these “sub-models” into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. Lastly, the sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.

  1. Three-dimensional multi-scale model of deformable platelets adhesion to vessel wall in blood flow

    PubMed Central

    Wu, Ziheng; Xu, Zhiliang; Kim, Oleg; Alber, Mark

    2014-01-01

    When a blood vessel ruptures or gets inflamed, the human body responds by rapidly forming a clot to restrict the loss of blood. Platelet aggregation at the injury site of the blood vessel, occurring via platelet–platelet adhesion, tethering and rolling on the injured endothelium, is a critical initial step in blood clot formation. A novel three-dimensional multi-scale model is introduced and used in this paper to simulate receptor-mediated adhesion of deformable platelets at the site of vascular injury under different shear rates of blood flow. The novelty of the model lies in a new approach to coupling submodels at three biological scales crucial for early clot formation: a novel hybrid cell membrane submodel to represent the physiological elastic properties of a platelet, a stochastic receptor–ligand binding submodel to describe cell adhesion kinetics, and a lattice Boltzmann submodel for simulating blood flow. Implementing the model on a GPU cluster significantly improved simulation performance. Predictive model simulations revealed that platelet deformation, interactions between platelets in the vicinity of the vessel wall, and the number of functional GPIbα platelet receptors played significant roles in platelet adhesion to the injury site. Variation of the number of functional GPIbα platelet receptors as well as changes in platelet stiffness can represent the effects of specific drugs reducing or enhancing platelet activity. Therefore, predictive simulations can improve the search for new drug targets and help to make the treatment of thrombosis patient-specific. PMID:24982253

  2. NASIS data base management system: IBM 360 TSS implementation. Volume 4: Program design specifications

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The design specifications for the programs and modules within the NASA Aerospace Safety Information System (NASIS) are presented. The purpose of the design specifications is to standardize the preparation of the specifications and to guide the program design. Each major functional module within the system is a separate entity for documentation purposes. The design specifications contain a description of, and specifications for, all detail processing which occurs in the module. Sub-models, reference tables, and data sets which are common to several modules are documented separately.

  3. Competency-Based Medical Education and the Ghost of Kuhn: Reflections on the Messy and Meaningful Work of Transformation.

    PubMed

    Holmboe, Eric S

    2018-03-01

    The transition, if not transformation, to outcomes-based medical education likely represents a paradigm shift struggling to be realized. Paradigm shifts are messy and difficult but ultimately meaningful if done successfully. This struggle has engendered tension and disagreements, with many of these disagreements cast as either-or polarities. There is little disagreement, however, that the health care system is not effectively achieving the triple aim for all patients. Much of the tension and polarity revolve around how more effectively to prepare students and residents to work in and help change a complex health care system. Competencies were an initial attempt to facilitate this shift by creating frameworks of essential abilities needed by physicians. However, implementation of competencies has proven to be difficult. Entrustable professional activities (EPAs) in undergraduate and graduate medical education and Milestones in graduate medical education are recent concepts being tried and studied as approaches to guide the shift to outcomes. Their primary purpose is to help facilitate implementation of an outcomes-based approach by creating shared mental models of the competencies, which in turn can help to improve curricula and assessment. Understanding whether and how EPAs and Milestones effectively facilitate the shift to outcomes has been and will continue to be an iterative and ongoing reflective process across the entire medical education community using lessons from implementation and complexity science. In this Invited Commentary, the author reflects on what got the community to this point and some sources of tension involved in the struggle to move to outcomes-based education.

  4. A new process sensitivity index to identify important system processes under process model and parametric uncertainty

    DOE PAGES

    Dai, Heng; Ye, Ming; Walker, Anthony P.; ...

    2017-03-28

    A hydrological model consists of multiple process level submodels, and each submodel represents a process key to the operation of the simulated system. Global sensitivity analysis methods have been widely used to identify important processes for system model development and improvement. The existing methods of global sensitivity analysis only consider parametric uncertainty, and are not capable of handling model uncertainty caused by multiple process models that arise from competing hypotheses about one or more processes. To address this problem, this study develops a new method to probe model output sensitivity to competing process models by integrating model averaging methods with variance-based global sensitivity analysis. A process sensitivity index is derived as a single summary measure of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and their parameters. Here, for demonstration, the new index is used to assign importance to the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models of hydraulic conductivity. Each process model has its own random parameters. Finally, the new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.
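
    A small Monte Carlo sketch of the grouped-factor idea on a toy two-process system. The recharge and geology models, their equal prior weights, and the parameter ranges are all hypothetical, and the paper derives the index analytically through model averaging rather than the crude binning estimator used here.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Two alternative models per process (hypothetical forms and coefficients)
    recharge_models = [lambda p: 0.20 * p, lambda p: 0.05 * p ** 1.3]
    geology_models = [lambda r, k: k * r, lambda r, k: k * r + 50.0]

    N = 50_000
    rc = rng.choice(2, N)                   # recharge model choice (weights 0.5/0.5)
    gc = rng.choice(2, N)                   # geology model choice (weights 0.5/0.5)
    precip = rng.uniform(400.0, 800.0, N)   # recharge-model parameter
    K = rng.lognormal(1.0, 0.3, N)          # geology-model parameter

    recharge = np.where(rc == 0, recharge_models[0](precip), recharge_models[1](precip))
    head = np.where(gc == 0, geology_models[0](recharge, K), geology_models[1](recharge, K))

    # Process sensitivity of the geology process: treat its model choice and its
    # parameter jointly as one grouped factor, then take the first-order variance
    # ratio Var(E[y | group]) / Var(y), estimated here by quantile binning of K.
    total_var = head.var()
    bins = np.digitize(K, np.quantile(K, np.linspace(0, 1, 11)[1:-1]))
    groups = gc * 10 + bins
    uniq = np.unique(groups)
    cond_means = np.array([head[groups == g].mean() for g in uniq])
    weights = np.array([(groups == g).mean() for g in uniq])
    PS_geology = np.sum(weights * (cond_means - head.mean()) ** 2) / total_var
    ```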

  5. A new process sensitivity index to identify important system processes under process model and parametric uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Heng; Ye, Ming; Walker, Anthony P.

    A hydrological model consists of multiple process level submodels, and each submodel represents a process key to the operation of the simulated system. Global sensitivity analysis methods have been widely used to identify important processes for system model development and improvement. The existing methods of global sensitivity analysis only consider parametric uncertainty, and are not capable of handling model uncertainty caused by multiple process models that arise from competing hypotheses about one or more processes. To address this problem, this study develops a new method to probe model output sensitivity to competing process models by integrating model averaging methods with variance-based global sensitivity analysis. A process sensitivity index is derived as a single summary measure of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and their parameters. Here, for demonstration, the new index is used to assign importance to the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models of hydraulic conductivity. Each process model has its own random parameters. Finally, the new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.

  6. Building generic anatomical models using virtual model cutting and iterative registration.

    PubMed

    Xiao, Mei; Soh, Jung; Meruvia-Pastor, Oscar; Schmidt, Eric; Hallgrímsson, Benedikt; Sensen, Christoph W

    2010-02-08

    Using 3D generic models to statistically analyze trends in biological structure changes is an important tool in morphometrics research. Therefore, 3D generic models built for a range of populations are in high demand. However, due to the complexity of biological structures and the limited views of them that medical images can offer, it is still an exceptionally difficult task to quickly and accurately create 3D generic models (a model is a 3D graphical representation of a biological structure) based on medical image stacks (a stack is an ordered collection of 2D images). We show that the creation of a generic model that captures spatial information exploitable in statistical analyses is facilitated by coupling our generalized segmentation method to existing automatic image registration algorithms. The method of creating generic 3D models consists of the following processing steps: (i) scanning subjects to obtain image stacks; (ii) creating individual 3D models from the stacks; (iii) interactively extracting a sub-volume by cutting each model to generate the sub-model of interest; (iv) creating image stacks that contain only the information pertaining to the sub-models; (v) iteratively registering the corresponding new 2D image stacks; (vi) averaging the newly created sub-models based on intensity to produce the generic model from all the individual sub-models. After several registration procedures are applied to the image stacks, we can create averaged image stacks with sharp boundaries. The averaged 3D model created from those image stacks is very close to the average representation of the population. The image registration time varies depending on the image size and the desired accuracy of the registration. Both volumetric data and a surface model for the generic 3D model are created at the final step. Our method is very flexible and easy to use, such that anyone can use image stacks to create models and retrieve a sub-region from them with ease. The Java-based implementation allows our method to be used on various visualization systems, including personal computers, workstations, computers equipped with stereo displays, and even virtual reality rooms such as the CAVE Automated Virtual Environment. The technique allows biologists to build generic 3D models of their interest quickly and accurately.

  7. A model-data fusion analysis for examining the response of carbon exchange to environmental variation in crop field

    NASA Astrophysics Data System (ADS)

    Yokozawa, M.; Sakurai, G.; Ono, K.; Mano, M.; Miyata, A.

    2011-12-01

    Agricultural activities (cultivating crops, managing soil, harvesting, and post-harvest treatments) are not only affected by the surrounding environment but also change the environment in return. Changes in the environment (temperature, radiation, and precipitation) bring changes in crop productivity. On the other hand, the status of the crops, i.e. their growth and phenological stage, changes the exchange of energy, H2O and CO2 between the crop vegetation surface and the atmosphere. Achieving stable agricultural harvests, reducing greenhouse gas (GHG) emissions, and enhancing carbon sequestration in soil are preferable as a win-win activity. We conducted a model-data fusion analysis to examine the response of cropland-atmosphere carbon exchange to environmental variation. The model consists of two sub-models: a paddy rice growth sub-model and a soil decomposition sub-model. The crop growth sub-model mimics rice plant growth processes, including the formation of reproductive organs as well as leaf expansion. The soil decomposition sub-model simulates the decomposition of soil organic carbon. Assimilating data on the time changes in CO2 flux measured by the eddy covariance method, rice plant biomass, LAI, and the final yield, the model parameters were calibrated using a stochastic optimization algorithm with a particle filter. The particle filter, one of the Monte Carlo filters, enables us to evaluate time changes in parameters from the data observed up to a given time and to make predictions of the system. Iterative filtering and prediction with changing parameters and/or boundary conditions yield the time changes in the parameters governing crop production as well as carbon exchange. In this paper, we applied the model-data fusion analysis to two datasets from paddy rice field sites in Japan: one with only a single rice cultivation, and one with a single rice and wheat cultivation. We focused on the parameters related to crop production as well as soil carbon storage. As a result, the calibrated model with estimated parameters could accurately predict the NEE flux in the subsequent years. The temperature sensitivities (Q10) of the decomposition rate of soil organic carbon (SOC) were obtained as 1.4 for the no-cultivation period and 2.9 for the cultivation period (submerged soil conditions).

  8. Comprehensive model for predicting elemental composition of coal pyrolysis products

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richards, Andrew P.; Shutt, Tim; Fletcher, Thomas H.

    Large-scale coal combustion simulations depend heavily on the accuracy and utility of the physical submodels used to describe the various physical behaviors of the system. Coal combustion simulations depend on the particle physics to predict product compositions, temperatures, energy outputs, and other useful information. The focus of this paper is to improve the accuracy of devolatilization submodels, to be used in conjunction with other particle physics models. Many large simulations today rely on inaccurate assumptions about particle compositions, including the assumption that the volatiles released during pyrolysis have the same elemental composition as the char particle. Another common assumption is that the char particle can be approximated by pure carbon. These assumptions lead to inaccuracies in the overall simulation. There are many factors that influence pyrolysis product composition, including parent coal composition, pyrolysis conditions (including particle temperature history and heating rate), and others. All of these factors are incorporated into the correlations to predict the elemental composition of the major pyrolysis products, including coal tar, char, and light gases.

  9. Drift-Scale THC Seepage Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    C.R. Bryan

    The purpose of this report (REV04) is to document the thermal-hydrologic-chemical (THC) seepage model, which simulates the composition of waters that could potentially seep into emplacement drifts, and the composition of the gas phase. The THC seepage model is processed and abstracted for use in the total system performance assessment (TSPA) for the license application (LA). This report has been developed in accordance with "Technical Work Plan for: Near-Field Environment and Transport: Coupled Processes (Mountain-Scale TH/THC/THM, Drift-Scale THC Seepage, and Post-Processing Analysis for THC Seepage) Report Integration" (BSC 2005 [DIRS 172761]). The technical work plan (TWP) describes planning information pertaining to the technical scope, content, and management of this report. The plan for validation of the models documented in this report is given in Section 2.2.2, "Model Validation for the DS THC Seepage Model," of the TWP. The TWP (Section 3.2.2) identifies Acceptance Criteria 1 to 4 for "Quantity and Chemistry of Water Contacting Engineered Barriers and Waste Forms" (NRC 2003 [DIRS 163274]) as being applicable to this report; however, at variance with the TWP, Acceptance Criterion 5 has also been determined to be applicable, and is addressed, along with the other Acceptance Criteria, in Section 4.2 of this report. Also, three FEPs not listed in the TWP (2.2.10.01.0A, 2.2.10.06.0A, and 2.2.11.02.0A) are partially addressed in this report, and have been added to the list of excluded FEPs in Table 6.1-2. This report has been developed in accordance with LP-SIII.10Q-BSC, "Models". This report documents the THC seepage model and a derivative used for validation, the Drift Scale Test (DST) THC submodel. The THC seepage model is a drift-scale process model for predicting the composition of gas and water that could enter waste emplacement drifts and the effects of mineral alteration on flow in rocks surrounding drifts. The DST THC submodel uses a drift-scale process model relying on the same conceptual model and many of the same input data (i.e., physical, hydrologic, thermodynamic, and kinetic) as the THC seepage model. The DST THC submodel is the primary means for validating the THC seepage model. The DST THC submodel compares predicted water and gas compositions, and mineral alteration patterns, with observed data from the DST. These models provide the framework to evaluate THC coupled processes at the drift scale, predict flow and transport behavior for specified thermal-loading conditions, and predict the evolution of mineral alteration and fluid chemistry around potential waste emplacement drifts. The DST THC submodel is used solely for the validation of the THC seepage model and is not used for calibration to measured data.

  10. Variability of Phenology and Fluxes of Water and Carbon with Observed and Simulated Soil Moisture in the Ent Terrestrial Biosphere Model (Ent TBM Version 1.0.1.0.0)

    NASA Technical Reports Server (NTRS)

    Kim, Y.; Moorcroft, P. R.; Aleinov, Igor; Puma, M. J.; Kiang, N. Y.

    2015-01-01

    The Ent Terrestrial Biosphere Model (Ent TBM) is a mixed-canopy dynamic global vegetation model developed specifically for coupling with land surface hydrology and general circulation models (GCMs). This study describes the leaf phenology submodel implemented in the Ent TBM version 1.0.1.0.0 coupled to the carbon allocation scheme of the Ecosystem Demography (ED) model. The phenology submodel adopts a combination of responses to temperature (growing degree days and frost hardening), soil moisture (linearity of stress with relative saturation) and radiation (light length). Growth of leaves, sapwood, fine roots, stem wood and coarse roots is updated on a daily basis. We evaluate the performance in reproducing observed leaf seasonal growth as well as water and carbon fluxes for four plant functional types at five Fluxnet sites, with both observed and prognostic hydrology, and observed and prognostic seasonal leaf area index. The phenology submodel is able to capture the timing and magnitude of leaf-out and senescence for temperate broadleaf deciduous forest (Harvard Forest and Morgan-Monroe State Forest, US), C3 annual grassland (Vaira Ranch, US) and California oak savanna (Tonzi Ranch, US). For evergreen needleleaf forest (Hyytiälä, Finland), the phenology submodel captures the effect of frost hardening of photosynthetic capacity on seasonal fluxes and leaf area. We address the importance of customizing parameter sets of vegetation soil moisture stress response to the particular land surface hydrology scheme. We identify model deficiencies that reveal important dynamics and parameter needs.
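
    The growing-degree-day ingredient of the phenology submodel is simple to sketch. The base temperature and leaf-out threshold below are illustrative only; Ent combines GDD with soil-moisture, frost-hardening, and light-length criteria that are omitted here.

    ```python
    def growing_degree_days(tmin, tmax, t_base=5.0):
        """Accumulate growing degree days from daily min/max air temperature [C]."""
        gdd, series = 0.0, []
        for lo, hi in zip(tmin, tmax):
            gdd += max(0.0, 0.5 * (lo + hi) - t_base)
            series.append(gdd)
        return series

    def leafout_day(tmin, tmax, gdd_threshold=100.0):
        """Index of the day on which cumulative GDD first crosses the hypothetical
        leaf-out threshold; returns None if the threshold is never reached."""
        for day, g in enumerate(growing_degree_days(tmin, tmax)):
            if g >= gdd_threshold:
                return day
        return None
    ```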

  11. Simulation and optimization model for irrigation planning and management

    NASA Astrophysics Data System (ADS)

    Kuo, Sheng-Feng; Liu, Chen-Wuing

    2003-10-01

    A simulation and optimization model was developed and applied to an irrigated area in Delta, Utah to optimize the economic benefit, simulate the water demand, and search for the related crop area percentages under specified water supply and planted area constraints. The user interface model begins with the weather generation submodel, which produces daily weather data based on long-term monthly average and standard deviation data from Delta, Utah. To simulate the daily crop water demand and relative crop yield for seven crops in two command areas, the information provided by this submodel was applied to the on-farm irrigation scheduling submodel. Furthermore, to optimize the project benefit by searching for the best allocation of planted crop areas given the constraints of the projected water supply, the results were employed in the genetic algorithm submodel. Optimal planning for the 394.6-ha area of the Delta irrigation project is projected to produce the maximum economic benefit: projected profit equals US$113,826 and projected water demand equals 3.03 × 10^6 m3. The area percentages of crops within the UCA#2 command area are 70.1%, 19% and 10.9% for alfalfa, barley and corn, respectively, and within the UCA#4 command area are 41.5%, 38.9%, 14.4% and 5.2% for alfalfa, barley, corn and wheat, respectively. As this model can plan irrigation application depths and allocate crop areas for optimal economic benefit, it can be applied to many irrigation projects.
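
    A toy version of the genetic algorithm submodel's search, with hypothetical per-crop profits, water demands, and a penalty enforcing the water supply constraint; the real model evaluates candidate allocations through the irrigation scheduling submodel rather than fixed coefficients.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical per-hectare profits [US$/ha] and seasonal water demands [m3/ha]
    profit = np.array([900.0, 400.0, 600.0, 500.0])   # alfalfa, barley, corn, wheat
    water = np.array([9000.0, 4000.0, 7000.0, 5000.0])
    AREA, WATER_CAP = 394.6, 3.0e6                    # ha, m3

    def fitness(frac):
        """Profit of an area-fraction vector, penalized (with a weight chosen to
        dominate profit) when the implied water demand exceeds the supply."""
        frac = frac / frac.sum()                      # normalize to 100 % of area
        demand = AREA * np.dot(frac, water)
        penalty = max(0.0, demand - WATER_CAP) * 1.0
        return AREA * np.dot(frac, profit) - penalty

    pop = rng.random((60, 4))                         # initial population of fractions
    for generation in range(200):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-30:]]       # truncation selection
        kids = []
        for _ in range(30):
            a, b = parents[rng.choice(30, 2, replace=False)]
            child = np.where(rng.random(4) < 0.5, a, b)   # uniform crossover
            child += rng.normal(0.0, 0.02, 4)             # Gaussian mutation
            kids.append(np.clip(child, 1e-3, None))
        pop = np.vstack([parents, kids])

    best = pop[np.argmax([fitness(ind) for ind in pop])]
    best_frac = best / best.sum()                     # optimized crop area shares
    ```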

  12. A Decision Mixture Model-Based Method for Inshore Ship Detection Using High-Resolution Remote Sensing Images

    PubMed Central

    Bi, Fukun; Chen, Jing; Zhuang, Yin; Bian, Mingming; Zhang, Qingjun

    2017-01-01

    With the rapid development of optical remote sensing satellites, ship detection and identification based on large-scale remote sensing images has become a significant maritime research topic. Compared with traditional ocean-going vessel detection, inshore ship detection has received increasing attention in harbor dynamic surveillance and maritime management. However, because the harbor environment is complex, gray information and texture features between docked ships and their connected dock regions are indistinguishable, most of the popular detection methods are limited by their calculation efficiency and detection accuracy. In this paper, a novel hierarchical method that combines an efficient candidate scanning strategy and an accurate candidate identification mixture model is presented for inshore ship detection in complex harbor areas. First, in the candidate region extraction phase, an omnidirectional intersected two-dimension scanning (OITDS) strategy is designed to rapidly extract candidate regions from the land-water segmented images. In the candidate region identification phase, a decision mixture model (DMM) is proposed to identify real ships from candidate objects. Specifically, to improve the robustness regarding the diversity of ships, a deformable part model (DPM) was employed to train a key part sub-model and a whole ship sub-model. Furthermore, to improve the identification accuracy, a surrounding correlation context sub-model is built. Finally, to increase the accuracy of candidate region identification, these three sub-models are integrated into the proposed DMM. Experiments were performed on numerous large-scale harbor remote sensing images, and the results showed that the proposed method has high detection accuracy and rapid computational efficiency. PMID:28640236

  13. A Decision Mixture Model-Based Method for Inshore Ship Detection Using High-Resolution Remote Sensing Images.

    PubMed

    Bi, Fukun; Chen, Jing; Zhuang, Yin; Bian, Mingming; Zhang, Qingjun

    2017-06-22

    With the rapid development of optical remote sensing satellites, ship detection and identification based on large-scale remote sensing images has become a significant maritime research topic. Compared with traditional ocean-going vessel detection, inshore ship detection has received increasing attention in harbor dynamic surveillance and maritime management. However, because the harbor environment is complex, gray information and texture features between docked ships and their connected dock regions are indistinguishable, most of the popular detection methods are limited by their calculation efficiency and detection accuracy. In this paper, a novel hierarchical method that combines an efficient candidate scanning strategy and an accurate candidate identification mixture model is presented for inshore ship detection in complex harbor areas. First, in the candidate region extraction phase, an omnidirectional intersected two-dimension scanning (OITDS) strategy is designed to rapidly extract candidate regions from the land-water segmented images. In the candidate region identification phase, a decision mixture model (DMM) is proposed to identify real ships from candidate objects. Specifically, to improve the robustness regarding the diversity of ships, a deformable part model (DPM) was employed to train a key part sub-model and a whole ship sub-model. Furthermore, to improve the identification accuracy, a surrounding correlation context sub-model is built. Finally, to increase the accuracy of candidate region identification, these three sub-models are integrated into the proposed DMM. Experiments were performed on numerous large-scale harbor remote sensing images, and the results showed that the proposed method has high detection accuracy and rapid computational efficiency.

  14. Variability of phenology and fluxes of water and carbon with observed and simulated soil moisture in the Ent Terrestrial Biosphere Model (Ent TBM version 1.0.1.0.0)

    NASA Astrophysics Data System (ADS)

    Kim, Y.; Moorcroft, P. R.; Aleinov, I.; Puma, M. J.; Kiang, N. Y.

    2015-12-01

    The Ent Terrestrial Biosphere Model (Ent TBM) is a mixed-canopy dynamic global vegetation model developed specifically for coupling with land surface hydrology and general circulation models (GCMs). This study describes the leaf phenology submodel implemented in the Ent TBM version 1.0.1.0.0 coupled to the carbon allocation scheme of the Ecosystem Demography (ED) model. The phenology submodel adopts a combination of responses to temperature (growing degree days and frost hardening), soil moisture (linearity of stress with relative saturation) and radiation (light length). Growth of leaves, sapwood, fine roots, stem wood and coarse roots is updated on a daily basis. We evaluate the performance in reproducing observed leaf seasonal growth as well as water and carbon fluxes for four plant functional types at five Fluxnet sites, with both observed and prognostic hydrology, and observed and prognostic seasonal leaf area index. The phenology submodel is able to capture the timing and magnitude of leaf-out and senescence for temperate broadleaf deciduous forest (Harvard Forest and Morgan-Monroe State Forest, US), C3 annual grassland (Vaira Ranch, US) and California oak savanna (Tonzi Ranch, US). For evergreen needleleaf forest (Hyytiäla, Finland), the phenology submodel captures the effect of frost hardening of photosynthetic capacity on seasonal fluxes and leaf area. We address the importance of customizing parameter sets of vegetation soil moisture stress response to the particular land surface hydrology scheme. We identify model deficiencies that reveal important dynamics and parameter needs.

  15. Software for Engineering Simulations of a Spacecraft

    NASA Technical Reports Server (NTRS)

    Shireman, Kirk; McSwain, Gene; McCormick, Bernell; Fardelos, Panayiotis

    2005-01-01

    Spacecraft Engineering Simulation II (SES II) is a C-language computer program for simulating diverse aspects of operation of a spacecraft characterized by either three or six degrees of freedom. A functional model in SES can include a trajectory flight plan; a submodel of a flight computer running navigational and flight-control software; and submodels of the environment, the dynamics of the spacecraft, and sensor inputs and outputs. SES II features a modular, object-oriented programming style. SES II supports event-based simulations, which, in turn, create an easily adaptable simulation environment in which many different types of trajectories can be simulated by use of the same software. The simulation output consists largely of flight data. SES II can be used to perform optimization and Monte Carlo dispersion simulations. It can also be used to perform simulations for multiple spacecraft. In addition to its generic simulation capabilities, SES offers special capabilities for space-shuttle simulations: for this purpose, it incorporates submodels of the space-shuttle dynamics and a C-language version of the guidance, navigation, and control components of the space-shuttle flight software.

  16. Parallelization of fine-scale computation in Agile Multiscale Modelling Methodology

    NASA Astrophysics Data System (ADS)

    Macioł, Piotr; Michalik, Kazimierz

    2016-10-01

    Nowadays, multiscale modelling of material behavior is an extensively developed area. An important obstacle against its wide application is high computational demands. Among others, the parallelization of multiscale computations is a promising solution. Heterogeneous multiscale models are good candidates for parallelization, since communication between sub-models is limited. In this paper, the possibility of parallelization of multiscale models based on Agile Multiscale Methodology framework is discussed. A sequential, FEM based macroscopic model has been combined with concurrently computed fine-scale models, employing a MatCalc thermodynamic simulator. The main issues, being investigated in this work are: (i) the speed-up of multiscale models with special focus on fine-scale computations and (ii) on decreasing the quality of computations enforced by parallel execution. Speed-up has been evaluated on the basis of Amdahl's law equations. The problem of `delay error', rising from the parallel execution of fine scale sub-models, controlled by the sequential macroscopic sub-model is discussed. Some technical aspects of combining third-party commercial modelling software with an in-house multiscale framework and a MPI library are also discussed.

  17. Hedgehogs and foxes (and a bear)

    NASA Astrophysics Data System (ADS)

    Gibb, Bruce

    2017-02-01

    The chemical universe is big. Really big. You just won't believe how vastly, hugely, mind-bogglingly big it is. Bruce Gibb reminds us that it's somewhat messy too, and so we succeed by recognizing the limits of our knowledge.

  18. A Vote for 'Messiness'.

    ERIC Educational Resources Information Center

    Sizer, Theodore R.

    1985-01-01

    A fictional letter "written" in 2005 laments the success of educational movements that, in the name of productivity, have placed increasing authority over curriculum in the hands of central government, leading to a homogenization of thinking and a dulling of creativity. (PGD)

  19. Simulation of a dust episode over Eastern Mediterranean using a high-resolution atmospheric chemistry general circulation model

    NASA Astrophysics Data System (ADS)

    Abdel Kader, Mohamed; Zittis, Georgios; Astitha, Marina; Lelieveld, Jos; Tymvios, Fillipos

    2013-04-01

    An extended episode of low visibility took place over the Eastern Mediterranean in late September 2011, caused by a strong increase in dust concentrations, analyzed from observations of PM10 (Particulate Matter with <10μm in diameter). A high-resolution version of the atmospheric chemistry general circulation model EMAC (ECHAM5/Messy2.41 Atmospheric Chemistry) was used to simulate the emissions, transport and deposition of airborne desert dust. The model configuration involves the spectral resolution of T255 (0.5°, ~50Km) and 31 vertical levels in the troposphere and lower stratosphere. The model was nudged towards ERA40 reanalysis data to represent the actual meteorological conditions. The dust emissions were calculated online at each model time step and the aerosol microphysics using the GMXe submodel (Global Modal-aerosol eXtension). The model includes a sulphur chemistry mechanism to simulate the transformation of the dust particles from the insoluble (at emission) to soluble modes, which promotes dust removal by precipitation. The model successfully reproduces the dust distribution according to observations by the MODIS satellite instruments and ground-based AERONET stations. The PM10 concentration is also compared with in-situ measurements over Cyprus, resulting in good agreement. The model results show two subsequent dust events originating from the Negev and Sahara deserts. The first dust event resulted from the transport of dust from the Sahara on the 21st of September and lasted only briefly (hours) as the dust particles were efficiently removed by precipitation simulated by the model and observed by the TRMM (Tropical Rainfall Measuring Mission) satellites. The second event resulted from dust transport from the Negev desert to the Eastern Mediterranean during the period 26th - 30th September with a peak concentration at 2500m elevation. This event lasted for four days and diminished due to dry deposition. The observed reduced visibility over Cyprus resulted from the sedimentation of dust originating from the Negev, followed by dry deposition at the surface. The dust particles were both pristine and polluted (sulphate coated), and we evaluate the role of mixing in the duration and extent of the episodes.

  20. GPU-accelerated atmospheric chemical kinetics in the ECHAM/MESSy (EMAC) Earth system model (version 2.52)

    NASA Astrophysics Data System (ADS)

    Alvanos, Michail; Christoudias, Theodoros

    2017-10-01

    This paper presents an application of GPU accelerators in Earth system modeling. We focus on atmospheric chemical kinetics, one of the most computationally intensive tasks in climate-chemistry model simulations. We developed a software package that automatically generates CUDA kernels to numerically integrate atmospheric chemical kinetics in the global climate model ECHAM/MESSy Atmospheric Chemistry (EMAC), used to study climate change and air quality scenarios. A source-to-source compiler outputs a CUDA-compatible kernel by parsing the FORTRAN code generated by the Kinetic PreProcessor (KPP) general analysis tool. All Rosenbrock methods that are available in the KPP numerical library are supported.Performance evaluation, using Fermi and Pascal CUDA-enabled GPU accelerators, shows achieved speed-ups of 4. 5 × and 20. 4 × , respectively, of the kernel execution time. A node-to-node real-world production performance comparison shows a 1. 75 × speed-up over the non-accelerated application using the KPP three-stage Rosenbrock solver. We provide a detailed description of the code optimizations used to improve the performance including memory optimizations, control code simplification, and reduction of idle time. The accuracy and correctness of the accelerated implementation are evaluated by comparing to the CPU-only code of the application. The median relative difference is found to be less than 0.000000001 % when comparing the output of the accelerated kernel the CPU-only code.The approach followed, including the computational workload division, and the developed GPU solver code can potentially be used as the basis for hardware acceleration of numerous geoscientific models that rely on KPP for atmospheric chemical kinetics applications.

  1. IABP timing and ventricular performance--comparison between a compliant and a stiffer aorta: a hybrid model study including baroreflex.

    PubMed

    Fresiello, Libera; Khir, Ashraf W; Di Molfetta, Arianna; Kozarski, Maciej; Ferrari, Gianfranco

    2013-11-01

    The aim of this study was to investigate the effects of the intra aortic balloon pump (IABP) and of aortic compliance on left ventricular performance, including the effects of baroreflex control.
 The study was conducted using a hybrid cardiovascular simulator, including a computational cardiovascular sub-model, a hydraulic sub-model of the descending aorta, and a baroreflex computational sub-model. A 40 cc balloon was inserted into a rubber tube component of the hydraulic sub-model. A comparative analysis was conducted for two aortic compliances (C1 = 2.4 and C2 = 1.43 cm3/mmHg, corresponding to an aortic pulse pressure of 23 mmHg and 35 mmHg, respectively), driving the balloon for different trigger timings.
 Under C1 conditions, the IABP induced higher effects on baroreflex activity (decrement of sympathetic efferent activity: 10% for C1 and 14.7% for C2) and ventricular performance (increment of cardiac output (CO): 3.7% for C1 and 5.2% for C2, increment of endocardial viability ratio (EVR): 24.8% for C1 and 55% for C2). The best balloon timing was different for C1 and C2: inflation trigger timing (from the dicrotic notch) -0.09 s for C1 and -0.04 s for C2, inflation duration 0.25 s for C1 and 0.2 s for C2.
 Early inflation ensures better EVR, CO, and an increment of the afferent nerve activity, hence causing peripheral resistance and heart rate to decrease. The best balloon timing depends on aortic compliance, thus suggesting the need for a therapy tailored to the specific conditions of individual patients.

  2. Numerical Investigation of the Effect of C/O Mole Ratio on the Performance of Rotary Hearth Furnace Using a Combined Model

    NASA Astrophysics Data System (ADS)

    Liu, Ying; Wen, Zhi; Lou, Guofeng; Li, Zhi; Yong, Haiquan; Feng, Xiaohong

    2014-12-01

    In a rotary hearth furnace (RHF) the direct reduction of composite pellets and processes of heat and mass transfer as well as combustion in the chamber of RHF influence each other. These mutual interactions should be considered when an accurate model of RHF is established. This paper provides a combined model that incorporates two sub-models to investigate the effects of C/O mole ratio in the feed pellets on the reduction kinetics and heat and mass transfer as well as combustion processes in the chamber of a pilot-scale RHF. One of the sub-models is established to describe the direct reduction process of composite pellets on the hearth of RHF. Heat and mass transfer within the pellet, chemical reactions, and radiative heat transfer from furnace walls and combustion gas to the surface of the pellet are considered in the model. The other sub-model is used to simulate gas flow and combustion process in the chamber of RHF by using commercial CFD software, FLUENT. The two sub-models were linked through boundary conditions and heat, mass sources. Cases for pellets with different C/O mole ratio were calculated by the combined model. The calculation results showed that the degree of metallization, the total amounts of carbon monoxide escaping from the pellet, and heat absorbed by chemical reactions within the pellet as well as CO and CO2 concentrations in the furnace increase with the increase of C/O mole ratio ranging from 0.6 to 1.0, when calculation conditions are the same except for C/O molar ratio. Carbon content in the pellet has little influence on temperature distribution in the furnace under the same calculation conditions except for C/O mole ratio in the feed pellets.

  3. Development of a flocculation sub-model for a 3-D CFD model based on rectangular settling tanks.

    PubMed

    Gong, M; Xanthos, S; Ramalingam, K; Fillos, J; Beckmann, K; Deur, A; McCorquodale, J A

    2011-01-01

    To assess performance and evaluate alternatives to improve the efficiency of rectangular Gould II type final settling tanks (FSTs), New York City Department of Environmental Protection and City College of NY developed a 3D computer model depicting the actual structural configuration of the tanks and the current and proposed hydraulic and solids loading rates. Fluent 6.3.26™ was the base platform for the computational fluid dynamics (CFD) model, for which sub-models of the SS settling characteristics, turbulence, flocculation and rheology were incorporated. This was supplemented by field and bench scale experiments to quantify the coefficients integral to the sub-models. The 3D model developed can be used to consider different baffle arrangements, sludge withdrawal mechanisms and loading alternatives to the FSTs. Flocculation in the front half of the rectangular tank especially in the region before and after the inlet baffle is one of the vital parameters that influences the capture efficiency of SS. Flocculation could be further improved by capturing medium and small size particles by creating an additional zone with an in-tank baffle. This was one of the methods that was adopted in optimizing the performance of the tank where the CCNY 3D CFD model was used to locate the in-tank baffle position. This paper describes the development of the flocculation sub-model and the relationship of the flocculation coefficients in the known Parker equation to the initial mixed liquor suspended solids (MLSS) concentration X0. A new modified equation is proposed removing the dependency of the breakup coefficient to the initial value of X0 based on preliminary data using normal and low concentration mixed liquor suspended solids values in flocculation experiments performed.

  4. Correlation models for waste tank sludges and slurries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mahoney, L.A.; Trent, D.S.

    This report presents the results of work conducted to support the TEMPEST computer modeling under the Flammable Gas Program (FGP) and to further the comprehension of the physical processes occurring in the Hanford waste tanks. The end products of this task are correlation models (sets of algorithms) that can be added to the TEMPEST computer code to improve the reliability of its simulation of the physical processes that occur in Hanford tanks. The correlation models can be used to augment, not only the TEMPEST code, but other computer codes that can simulate sludge motion and flammable gas retention. This reportmore » presents the correlation models, also termed submodels, that have been developed to date. The submodel-development process is an ongoing effort designed to increase our understanding of sludge behavior and improve our ability to realistically simulate the sludge fluid characteristics that have an impact on safety analysis. The effort has employed both literature searches and data correlation to provide an encyclopedia of tank waste properties in forms that are relatively easy to use in modeling waste behavior. These properties submodels will be used in other tasks to simulate waste behavior in the tanks. Density, viscosity, yield strength, surface tension, heat capacity, thermal conductivity, salt solubility, and ammonia and water vapor pressures were compiled for solutions and suspensions of sodium nitrate and other salts (where data were available), and the data were correlated by linear regression. In addition, data for simulated Hanford waste tank supernatant were correlated to provide density, solubility, surface tension, and vapor pressure submodels for multi-component solutions containing sodium hydroxide, sodium nitrate, sodium nitrite, and sodium aluminate.« less

  5. Biomimicry of quorum sensing using bacterial lifecycle model.

    PubMed

    Niu, Ben; Wang, Hong; Duan, Qiqi; Li, Li

    2013-01-01

    Recent microbiologic studies have shown that quorum sensing mechanisms, which serve as one of the fundamental requirements for bacterial survival, exist widely in bacterial intra- and inter-species cell-cell communication. Many simulation models, inspired by the social behavior of natural organisms, are presented to provide new approaches for solving realistic optimization problems. Most of these simulation models follow population-based modelling approaches, where all the individuals are updated according to the same rules. Therefore, it is difficult to maintain the diversity of the population. In this paper, we present a computational model termed LCM-QS, which simulates the bacterial quorum-sensing (QS) mechanism using an individual-based modelling approach under the framework of Agent-Environment-Rule (AER) scheme, i.e. bacterial lifecycle model (LCM). LCM-QS model can be classified into three main sub-models: chemotaxis with QS sub-model, reproduction and elimination sub-model and migration sub-model. The proposed model is used to not only imitate the bacterial evolution process at the single-cell level, but also concentrate on the study of bacterial macroscopic behaviour. Comparative experiments under four different scenarios have been conducted in an artificial 3-D environment with nutrients and noxious distribution. Detailed study on bacterial chemotatic processes with quorum sensing and without quorum sensing are compared. By using quorum sensing mechanisms, artificial bacteria working together can find the nutrient concentration (or global optimum) quickly in the artificial environment. Biomimicry of quorum sensing mechanisms using the lifecycle model allows the artificial bacteria endowed with the communication abilities, which are essential to obtain more valuable information to guide their search cooperatively towards the preferred nutrient concentrations. It can also provide an inspiration for designing new swarm intelligence optimization algorithms, which can be used for solving the real-world problems.

  6. Biomimicry of quorum sensing using bacterial lifecycle model

    PubMed Central

    2013-01-01

    Background Recent microbiologic studies have shown that quorum sensing mechanisms, which serve as one of the fundamental requirements for bacterial survival, exist widely in bacterial intra- and inter-species cell-cell communication. Many simulation models, inspired by the social behavior of natural organisms, are presented to provide new approaches for solving realistic optimization problems. Most of these simulation models follow population-based modelling approaches, where all the individuals are updated according to the same rules. Therefore, it is difficult to maintain the diversity of the population. Results In this paper, we present a computational model termed LCM-QS, which simulates the bacterial quorum-sensing (QS) mechanism using an individual-based modelling approach under the framework of Agent-Environment-Rule (AER) scheme, i.e. bacterial lifecycle model (LCM). LCM-QS model can be classified into three main sub-models: chemotaxis with QS sub-model, reproduction and elimination sub-model and migration sub-model. The proposed model is used to not only imitate the bacterial evolution process at the single-cell level, but also concentrate on the study of bacterial macroscopic behaviour. Comparative experiments under four different scenarios have been conducted in an artificial 3-D environment with nutrients and noxious distribution. Detailed study on bacterial chemotatic processes with quorum sensing and without quorum sensing are compared. By using quorum sensing mechanisms, artificial bacteria working together can find the nutrient concentration (or global optimum) quickly in the artificial environment. Conclusions Biomimicry of quorum sensing mechanisms using the lifecycle model allows the artificial bacteria endowed with the communication abilities, which are essential to obtain more valuable information to guide their search cooperatively towards the preferred nutrient concentrations. It can also provide an inspiration for designing new swarm intelligence optimization algorithms, which can be used for solving the real-world problems. PMID:23815296

  7. Can a Reaction's Environment Program its Outcome, and Does it Matter?

    NASA Astrophysics Data System (ADS)

    Surman, A. J.; Rodriguez-Garcia, M.; Abul-Haija, Y.; Cooper, G. J. T.; Donkers, K.; Planchat i Barbarà, J. M.; Kube, J.; Mullin, M.; Hezwani, M.; Cronin, L.

    2017-07-01

    Where most eschew reactions producing complex mixtures (‘tar') and prefer to plan ‘clean' syntheses, we embrace complexity. We show that environments can steer ‘messy' reactions, and ask if this can yield significant difference in structure and function.

  8. The 2003 HBR list. Breakthrough ideas for tomorrow's business agenda.

    PubMed

    2003-04-01

    International conflict. Bear markets. Corporate scandals. The events of this past year have prompted intense soul-searching in many quarters and led us, in this year's list of the best business ideas, to reassess some of the most basic assumptions about strategy, organizations, and leadership. We began by reconsidering the role of the leader. Whether the boss is a hero or villain, discussions of leadership focus almost exclusively on the CEO. But attention also needs to be paid to the other people who make organizations work, not only to the corporate boards that oversee CEOs but to the followers--to their responsibilities, their power, and their obligation not to follow flawed leaders. And we considered the fate of soft issues, like emotional intelligence, in hard times. It's tempting to dismiss them when your employees will do anything just to keep their jobs. But hard times are good times to employ such tools on yourself. They can arm you with the self-awareness you need to understand, anticipate, and outwit your enemies. Where tools may fail, an attitude adjustment may be what's needed. Despite valiant efforts to lead change and eliminate inefficiencies, organizations stay messy. Perhaps it's better to learn to live with messiness and even focus on its benefits, one of which may be growth. Not the meteoric, effortless illusion we indulged in during the 1990s, but significant gains nonetheless. These can come when managers embrace messiness not just within their organizations but along the boundaries of the firm, blurring the line between their own core assets and functions and those of other companies. There's growth potential, too, in considering the company as a portfolio of opportunities--but only if managers can sell off poorly performing business units as easily as they've been shedding ailing stocks of late.

  9. A derivation of the Cramer-Rao lower bound of euclidean parameters under equality constraints via score function

    NASA Astrophysics Data System (ADS)

    Susyanto, Nanang

    2017-12-01

    We propose a simple derivation of the Cramer-Rao Lower Bound (CRLB) of parameters under equality constraints from the CRLB without constraints in regular parametric models. When a regular parametric model and an equality constraint of the parameter are given, a parametric submodel can be defined by restricting the parameter under that constraint. The tangent space of this submodel is then computed with the help of the implicit function theorem. Finally, the score function of the restricted parameter is obtained by projecting the efficient influence function of the unrestricted parameter on the appropriate inner product spaces.

  10. Comprehensive Model of Single Particle Pulverized Coal Combustion Extended to Oxy-Coal Conditions

    DOE PAGES

    Holland, Troy; Fletcher, Thomas H.

    2017-02-22

    Oxy-fired coal combustion is a promising potential carbon capture technology. Predictive CFD simulations are valuable tools in evaluating and deploying oxy-fuel and other carbon capture technologies either as retrofit technologies or for new construction. But, accurate predictive simulations require physically realistic submodels with low computational requirements. In particular, comprehensive char oxidation and gasification models have been developed that describe multiple reaction and diffusion processes. Our work extends a comprehensive char conversion code (CCK), which treats surface oxidation and gasification reactions as well as processes such as film diffusion, pore diffusion, ash encapsulation, and annealing. In this work several submodels inmore » the CCK code were updated with more realistic physics or otherwise extended to function in oxy-coal conditions. Improved submodels include the annealing model, the swelling model, the mode of burning parameter, and the kinetic model, as well as the addition of the chemical percolation devolatilization (CPD) model. We compare our results of the char combustion model to oxy-coal data, and further compared to parallel data sets near conventional conditions. A potential method to apply the detailed code in CFD work is given.« less

  11. Comprehensive Model of Single Particle Pulverized Coal Combustion Extended to Oxy-Coal Conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holland, Troy; Fletcher, Thomas H.

    Oxy-fired coal combustion is a promising potential carbon capture technology. Predictive CFD simulations are valuable tools in evaluating and deploying oxy-fuel and other carbon capture technologies either as retrofit technologies or for new construction. But, accurate predictive simulations require physically realistic submodels with low computational requirements. In particular, comprehensive char oxidation and gasification models have been developed that describe multiple reaction and diffusion processes. Our work extends a comprehensive char conversion code (CCK), which treats surface oxidation and gasification reactions as well as processes such as film diffusion, pore diffusion, ash encapsulation, and annealing. In this work several submodels inmore » the CCK code were updated with more realistic physics or otherwise extended to function in oxy-coal conditions. Improved submodels include the annealing model, the swelling model, the mode of burning parameter, and the kinetic model, as well as the addition of the chemical percolation devolatilization (CPD) model. We compare our results of the char combustion model to oxy-coal data, and further compared to parallel data sets near conventional conditions. A potential method to apply the detailed code in CFD work is given.« less

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gur, Sourav; Frantziskonis, George N.; Univ. of Arizona, Tucson, AZ

    Here, we report results from a numerical study of multi-time-scale bistable dynamics for CO oxidation on a catalytic surface in a flowing, well-mixed gas stream. The problem is posed in terms of surface and gas-phase submodels that dynamically interact in the presence of stochastic perturbations, reflecting the impact of molecular-scale fluctuations on the surface and turbulence in the gas. Wavelet-based methods are used to encode and characterize the temporal dynamics produced by each submodel and detect the onset of sudden state shifts (bifurcations) caused by nonlinear kinetics. When impending state shifts are detected, a more accurate but computationally expensive integrationmore » scheme can be used. This appears to make it possible, at least in some cases, to decrease the net computational burden associated with simulating multi-time-scale, nonlinear reacting systems by limiting the amount of time in which the more expensive integration schemes are required. Critical to achieving this is being able to detect unstable temporal transitions such as the bistable shifts in the example problem considered here. Lastly, our results indicate that a unique wavelet-based algorithm based on the Lipschitz exponent is capable of making such detections, even under noisy conditions, and may find applications in critical transition detection problems beyond catalysis.« less

  13. The impact of SciDAC on US climate change research and the IPCCAR4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wehner, Michael

    2005-07-08

    SciDAC has invested heavily in climate change research. We offer a candid opinion as to the impact of the DOE laboratories' SciDAC projects on the upcoming Fourth Assessment Report of the Intergovernmental Panel on Climate Change. As a result of the direct importance of climate change to society, climate change research is highly coordinated at the international level. The Intergovernmental Panel on Climate Change (IPCC) is charged with providing regular reports on the state of climate change research to government policymakers. These reports are the product of thousands of scientists efforts. A series of reviews involving both scientists and policymakersmore » make them among the most reviewed documents produced in any scientific field. The high profile of these reports acts a driver to many researchers in the climate sciences. The Fourth Assessment Report (AR4) is scheduled to be released in 2007. SciDAC sponsored research has enabled the United States climate modeling community to make significant contributions to this report. Two large multi-Laboratory SciDAC projects are directly relevant to the activities of the IPCC. The first, entitled ''Collaborative Design and Development of the Community Climate System Model for Terascale Computers'', has made important software contributions to the recently released third version of the Community Climate System Model (CCSM3.0) developed at the National Center for Atmospheric Research. This is a multi-institutional project involving Los Alamos National Laboratory, Oak Ridge National Laboratory, Lawrence Berkeley National Laboratory, Pacific Northwest National Laboratory, Argonne National Laboratory, Lawrence Livermore National Laboratory and the National Center for Atmospheric Research. The original principal investigators were Robert Malone and John B. Drake. The current principal investigators are Phil Jones and John B. Drake. The second project, entitled ''Earth System Grid II: Turning Climate Datasets into Community Resources'' aims to facilitate the distribution of the copious amounts of data produced by coupled climate model integrations to the general scientific community. This is also a multi-institutional project involving Argonne National Laboratory, Oak Ridge National Laboratory, Lawrence Berkeley National Laboratory, Lawrence Livermore National Laboratory and the National Center for Atmospheric Research. The principal investigators are Ian Foster, Don Middleton and Dean Williams. Perhaps most significant among the activities of the ''Collaborative Design'', project was the development of an efficient multi-processor coupling package. CCSM3.0 is an extraordinarily complicated physics code. The fully coupled model consists of separate submodels of the atmosphere, ocean, sea ice and land. In addition, comprehensive biogeochemistry and atmospheric chemistry submodels are under intensive current development. Each of these submodels is a large and sophisticated program in its own right. Furthermore, in the coupled model, each of the submodels, including the coupler, is a separate multiprocessor executable program. The coupler package must efficiently coordinate the communication as well as interpolate or aggregate information between these programs. This regridding function is necessary because each major subsystem (air, water or surface) is allowed to have its own independent grid.« less

  14. Prediction of climate change in Brunei Darussalam using statistical downscaling model

    NASA Astrophysics Data System (ADS)

    Hasan, Dk. Siti Nurul Ain binti Pg. Ali; Ratnayake, Uditha; Shams, Shahriar; Nayan, Zuliana Binti Hj; Rahman, Ena Kartina Abdul

    2017-06-01

    Climate is changing and evidence suggests that the impact of climate change would influence our everyday lives, including agriculture, built environment, energy management, food security and water resources. Brunei Darussalam located within the heart of Borneo will be affected both in terms of precipitation and temperature. Therefore, it is crucial to comprehend and assess how important climate indicators like temperature and precipitation are expected to vary in the future in order to minimise its impact. This study assesses the application of a statistical downscaling model (SDSM) for downscaling General Circulation Model (GCM) results for maximum and minimum temperatures along with precipitation in Brunei Darussalam. It investigates future climate changes based on numerous scenarios using Hadley Centre Coupled Model, version 3 (HadCM3), Canadian Earth System Model (CanESM2) and third-generation Coupled Global Climate Model (CGCM3) outputs. The SDSM outputs were improved with the implementation of bias correction and also using a monthly sub-model instead of an annual sub-model. The outcomes of this assessment show that monthly sub-model performed better than the annual sub-model. This study indicates a satisfactory applicability for generation of maximum temperatures, minimum temperatures and precipitation for future periods of 2017-2046 and 2047-2076. All considered models and the scenarios were consistent in predicting increasing trend of maximum temperature, increasing trend of minimum temperature and decreasing trend of precipitations. Maximum overall trend of Tmax was also observed for CanESM2 with Representative Concentration Pathways (RCP) 8.5 scenario. The increasing trend is 0.014 °C per year. Accordingly, by 2076, the highest prediction of average maximum temperatures is that it will increase by 1.4 °C. The same model predicts an increasing trend of Tmin of 0.004 °C per year, while the highest trend is seen under CGCM3-A2 scenario which is 0.009 °C per year. The highest change predicted for the Tmin is therefore 0.9 °C by 2076. The precipitation showed a maximum trend of decrease of 12.7 mm year. It is also seen in the output using CanESM2 data that precipitation will be more chaotic with some reaching 4800 mm per year and also producing low rainfall about 1800 mm per year. All GCMs considered are consistent in predicting it is very likely that Brunei is expected to experience more warming as well as less frequent precipitation events but with a possibility of intensified and drastically high rainfalls in the future.

  15. The messy truth about weight loss

    USDA-ARS?s Scientific Manuscript database

    The prevalence of obesity continues to rise worldwide, and today 37% of Americans are obese and an additional 34% are overweight. The metabolic effects of obesity are known to severely increase the risk of all major noncommunicable diseases including type 2 diabetes, heart disease, stroke, and sever...

  16. Measuring the Success of Warrior Transition Units

    DTIC Science & Technology

    2009-04-30

    overworked case managers.”1 They described patients and family members who were frustrated with the “messy bureaucratic battlefield”2 of Walter Reed...on every Warrior that includes an analysis of suicide risk, violence towards others, medication use, falls, driving, alcohol, non-prescribed drug use

  17. Building a Learning Organization.

    ERIC Educational Resources Information Center

    Mohr, Nancy; Dichter, Alan

    2001-01-01

    Faculties must pass through several stages when becoming learning organizations: the honeymoon, conflict, confusion, messy, scary, and mature-group stages. Mature school communities have learned to view power differently, make learning more meaningful for students, and model a just and democratic society. Consensus is the starting point. (MLH)

  18. Benzylpiperazine: "A messy drug".

    PubMed

    Katz, D P; Deruiter, J; Bhattacharya, D; Ahuja, M; Bhattacharya, S; Clark, C R; Suppiramaniam, V; Dhanasekaran, M

    2016-07-01

    Designer drugs are synthetic structural analogues/congeners of controlled substances with slightly modified chemical structures intended to mimic the pharmacological effects of known drugs of abuse so as to evade drug classification. Benzylpiperazine (BZP), a piperazine derivative, elevates synaptic dopamine and serotonin levels producing stimulatory and hallucinogenic effects, respectively, similar to the well-known drug of abuse, methylenedioxymethamphetamine (MDMA). Furthermore, BZP augments the release of norepinephrine by inhibiting presynaptic autoreceptors, therefore, BZP is a "messy drug" due to its multifaceted regulation of synaptic monoamine neurotransmitters. Initially, pharmaceutical companies used BZP as a therapeutic drug for the treatment of various disease states, but due to its contraindications and abuse potential it was withdrawn from the market. BZP imparts predominately sympathomimetic effects accompanied by serious cardiovascular implications. Addictive properties of BZP include behavioral sensitization, cross sensitization, conditioned place preference and repeated self-administration. Additional testing of piperazine derived drugs is needed due to a scarcity of toxicological data and widely abuse worldwide. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  19. Principal process analysis of biological models.

    PubMed

    Casagranda, Stefano; Touzeau, Suzanne; Ropers, Delphine; Gouzé, Jean-Luc

    2018-06-14

    Understanding the dynamical behaviour of biological systems is challenged by their large number of components and interactions. While efforts have been made in this direction to reduce model complexity, they often prove insufficient to grasp which and when model processes play a crucial role. Answering these questions is fundamental to unravel the functioning of living organisms. We design a method for dealing with model complexity, based on the analysis of dynamical models by means of Principal Process Analysis. We apply the method to a well-known model of circadian rhythms in mammals. The knowledge of the system trajectories allows us to decompose the system dynamics into processes that are active or inactive with respect to a certain threshold value. Process activities are graphically represented by Boolean and Dynamical Process Maps. We detect model processes that are always inactive, or inactive on some time interval. Eliminating these processes reduces the complex dynamics of the original model to the much simpler dynamics of the core processes, in a succession of sub-models that are easier to analyse. We quantify by means of global relative errors the extent to which the simplified models reproduce the main features of the original system dynamics and apply global sensitivity analysis to test the influence of model parameters on the errors. The results obtained prove the robustness of the method. The analysis of the sub-model dynamics allows us to identify the source of circadian oscillations. We find that the negative feedback loop involving proteins PER, CRY, CLOCK-BMAL1 is the main oscillator, in agreement with previous modelling and experimental studies. In conclusion, Principal Process Analysis is a simple-to-use method, which constitutes an additional and useful tool for analysing the complex dynamical behaviour of biological systems.

  20. Environmental systems and management activities on the Kennedy Space Center, Merritt Island, Florida: results of a modeling workshop

    USGS Publications Warehouse

    Hamilton, David B.; Andrews, Austin K.; Auble, Gregor T.; Ellison, Richard A.; Farmer, Adrian H.; Roelle, James E.

    1985-01-01

    In the early 1960's, the National Aeronautics and Space Administration (NASA) began purchasing 140,000 acres on Merritt Island, Florida, in order to develop a center for space exploration. Most of this land was acquired to provide a safety and security buffer around NASA facilities. NASA, as the managing agency for the Kennedy Space Center (KSC), is responsible for preventing or controlling environmental pollution from the Federal facilities and activities at the Space Center and is committed to use all practicable means to protect and enhance the quality of the surrounding environment. The Merritt Island National Wildlife Refuge was established in 1963 when management authority for undeveloped lands at KSC was transferred to the U.S. Fish and Wildlife Service. In addition to manage for 11 Federally-listed threatened and endangered species and other resident and migratory fish and wildlife populations, the Refuge has comanagement responsibility for 19,000 acres of mosquito control impoundments and 2,500 acres of citrus groves. The Canaveral National Seashore was developed in 1975 when management of a portion of the coastal lands was transferred from NASA to the National Park Service. This multiagency jurisdiction on Merritt Island has resulted in a complex management environment. The modeling workshop described in this report was conducted May 21-25, 1984, at the Kennedy Space Center to: (1) enhance communication among the agencies with management responsibilities on Merritt Island; (2) integrate available information concerning the development, management, and ecology of Merritt Island; and (3) identify key research and monitoring needs associated with the management and use of the island's resources. The workshop was structured around the formulation of a model that would simulate primary management and use activities on Merritt Island and their effects on upland, impoundment, and estuarine vegetation and associated wildlife. The simulation model is composed of four connected submodels. The Uplands submodel calculates changes in acres and structural components of vegetation communities resulting from succession, fire, facilities development, and shuttle launch depositions, as well as the quantity and quality of surface runoff and aquifer input to an impoundment and an estuary. The Impoundment submodel next determines water quality and quantity and changes in vegetation resulting from water level manipulation and prescribed burning. The Estuary submodel than determines water quality parameters and acres of seagrass beds. Finally, the Wildlife submodel calculates habitat suitability indices for key species of interest, based on vegetation conditions in the uplands and impoundments and on several hydrologic parameters. The model represents a hypothetical management unit with 2,500 acres of uplands, a 600-acre impoundment, and a 1,500-acre section of estuary. Two management scenarios were run to analyze model behavior. The scenarios differ in the frequency of shuttle launches and prescribed burning, the extent of facilities development, the amount of land disposed waste material applied, and the nature and timing of impoundment water level control. Early in a model development project, the process of building the model is usually of greater benefit than the model itself. The model building process stimulates interaction among agencies, assists in integrating existing information, and helps identify research needs. 
These benefits usually accrue even in the absence of real predictive power in the resulting model. Open communication occurs among the Federal, State, and local agencies involved with activities on Merritt Island and the agencies have a cooperative working relationship. The workshop provided an opportunity for all of these agencies to meet at one time and have focused discussions on the key environmental and multiagency resource management issues. The workshop framework helped to integrate information and assumptions from a number of disciplines and agencies. This integration occurred in the computer simulation model and among workshop participants as submodel linkages were developed and scenario results discussed. A number of research needs were identified at the workshop during the model building and testing exercises and associated discussions. These needs were based on the informed judgement of researchers and managers familiar with Merritt Island or similar areas, rather than on a comprehensive literature review of sensitivity analysis of the preliminary model developed at the workshop. Some of the needs can be addressed by interpreting the results of completed studies from similar geographic areas as they relate to Merritt Island, while other will require additional research studies on Merritt Island. Major research needs associated with the Upland submodel include behavior of the near-surface aquifer, factors limiting slash pine regeneration, frequency and effects of natural fire on various cover types, cumulative effects of shuttle launches, and fate in upland soils of nitrogen and phosphorous from land applied waste material. Key Impoundment submodel needs include documentation of vegetation changes in response to altered water depth, salinity, and nutrient concentrations and better specification of the functional characteristics of impoundments as chemical filters. Important information gaps identified in the Estuary submodel include a more complete analysis of factors contributing to phytoplankton abundance, evaluation of sources of turbidity other than phytoplankton, and identification and quantification of factors limiting seagrass distribution. Primary research needs associated with the Wildlife submodel include a survey of breeding habitat, production data, and harvest data for mottled ducks; data on the emigration and immigration of juvenile mullet (and other transient fish) in the impoundment; the contribution of various seagrasses to habitat requirements of sea trout; and the effects of dissolved oxygen on survival of juvenile sea trout. Ideally, the modeling workshop process is iterative in nature. Periods between workshops are used for research, data collection, and model refinement. Each workshop integrates information collected since the last workshop and produces a more credible model that is more useful in evaluating management alternatives. Participants felt that continued application of this process would help provide ongoing integration and communication among agencies and would allow each agency's planning and management activities to be viewed within the context of an overall assessment.

  1. Modeling snail breeding in Bioregenerative Life Support System

    NASA Astrophysics Data System (ADS)

    Kovalev, Vladimir; Tikhomirov, Alexander A.; Nickolay Manukovsky, D..

    It is known that snail meat is a high quality food that is rich in protein. Hence, heliciculture or land snail farming spreads worldwide because it is a profitable business. The possibility to use the snails of Helix pomatia in Biological Life Support System (BLSS) was studied by Japanese Researches. In that study land snails were considered to be producers of animal protein. Also, snail breeding was an important part of waste processing, because snails were capable to eat the inedible plant biomass. As opposed to the agricultural snail farming, heliciculture in BLSS should be more carefully planned. The purpose of our work was to develop a model for snail breeding in BLSS that can predict mass flow rates in and out of snail facility. There are three linked parts in the model called “Stoichiometry”, “Population” and “Mass balance”, which are used in turn. Snail population is divided into 12 age groups from oviposition to one year. In the submodel “Stoichiometry” the individual snail growth and metabolism in each of 12 age groups are described with stoichiometry equations. Reactants are written on the left side of the equations, while products are written on the right side. Stoichiometry formulas of reactants and products consist of four chemical elements: C, H, O, N. The reactants are feed and oxygen, products are carbon dioxide, metabolic water, snail meat, shell, feces, slime and eggs. If formulas of substances in the stoichiometry equations are substituted with their molar masses, then stoichiometry equations are transformed to the equations of molar mass balance. To get the real mass balance of individual snail growth and metabolism one should multiply the value of each molar mass in the equations on the scale parameter, which is the ratio between mass of monthly consumed feed and molar mass of feed. Mass of monthly consumed feed and stoichiometry coefficients of formulas of meat, shell, feces, slime and eggs should be determined experimentally. An age structure and size of snail population are optimized on the base of individual growth and metabolic characteristics with the help of the second submodel "Population". In this simulation a daily amount of snail meat consumed by crewmembers is a guideline which specifies population productivity. Also, the daily amount of snail meat may have an optional value. Prescribed population characteristics are used in the third submodel "Mass balance" to equalize input and output mass flow rates of snail facility. In this submodel we add a water and ash to the organic masses of feed, meat, feces, shell and eggs. Moreover, masses of calcium carbonate and potable water are added to the left side of mass balance equations. Mass of calcium carbonate is distributed among shell, feces and eggs. Summarizing the twelve equations for each snail age, we get the mass balance equation for the snail facility. All simulations are performed by using Solver Add-In for Excel 2007.

  2. Risk assessment of flood disaster and forewarning model at different spatial-temporal scales

    NASA Astrophysics Data System (ADS)

    Zhao, Jun; Jin, Juliang; Xu, Jinchao; Guo, Qizhong; Hang, Qingfeng; Chen, Yaqian

    2018-05-01

    Aiming at reducing losses from flood disaster, risk assessment of flood disaster and forewarning model is studied. The model is built upon risk indices in flood disaster system, proceeding from the whole structure and its parts at different spatial-temporal scales. In this study, on the one hand, it mainly establishes the long-term forewarning model for the surface area with three levels of prediction, evaluation, and forewarning. The method of structure-adaptive back-propagation neural network on peak identification is used to simulate indices in prediction sub-model. Set pair analysis is employed to calculate the connection degrees of a single index, comprehensive index, and systematic risk through the multivariate connection number, and the comprehensive assessment is made by assessment matrixes in evaluation sub-model. The comparison judging method is adopted to divide warning degree of flood disaster on risk assessment comprehensive index with forewarning standards in forewarning sub-model and then the long-term local conditions for proposing planning schemes. On the other hand, it mainly sets up the real-time forewarning model for the spot, which introduces the real-time correction technique of Kalman filter based on hydrological model with forewarning index, and then the real-time local conditions for presenting an emergency plan. This study takes Tunxi area, Huangshan City of China, as an example. After risk assessment and forewarning model establishment and application for flood disaster at different spatial-temporal scales between the actual and simulated data from 1989 to 2008, forewarning results show that the development trend for flood disaster risk remains a decline on the whole from 2009 to 2013, despite the rise in 2011. At the macroscopic level, project and non-project measures are advanced, while at the microcosmic level, the time, place, and method are listed. It suggests that the proposed model is feasible with theory and application, thus offering a way for assessing and forewarning flood disaster risk.

  3. Scaling theory of topological phase transitions

    NASA Astrophysics Data System (ADS)

    Chen, Wei

    2016-02-01

    Topologically ordered systems are characterized by topological invariants that are often calculated from the momentum space integration of a certain function that represents the curvature of the many-body state. The curvature function may be the Berry curvature, the Berry connection, or another quantity, depending on the system. Akin to stretching a messy string to reveal the number of knots it contains, a scaling procedure is proposed for the curvature function in inversion-symmetric systems, from which the topological phase transition can be identified from the flow of the driving energy parameters that control the topology (hopping, chemical potential, etc.) under scaling. For an infinitesimal scaling operation, one obtains the renormalization group (RG) equations for the driving energy parameters. A length scale defined from the curvature function near the gap-closing momentum is suggested to characterize the scale invariance at critical points and fixed points, and displays a universal critical behavior in a variety of systems examined.
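
    The scaling idea can be made concrete with a toy model. The sketch below assumes a Lorentzian curvature function F(k, M) = M / (M^2 + k^2), whose k-integral gives a sign(M) invariant; demanding F(k0, M') = F(k0 + dk, M) at the high-symmetry point k0 = 0 yields a flow that runs away from the critical point M = 0. This illustrates the procedure generically and is not one of the paper's specific systems.

      # Toy scaling procedure on a Lorentzian curvature function
      # F(k, M) = M / (M**2 + k**2); the invariant int F dk / pi = sign(M).
      # One RG step finds M' with F(0, M') = F(dk, M), here in closed form:
      # M' = M + dk**2 / M, so dM/dl ~ 1/M near the critical point M = 0.
      def rg_step(M, dk=0.05):
          return (M**2 + dk**2) / M

      for M0 in (0.4, -0.4):          # start on both sides of the transition
          M = M0
          for _ in range(50):
              M = rg_step(M)
          print(f"M0 = {M0:+.1f} flows to M = {M:+.2f}")  # away from M = 0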

  4. Ooey, Gooey, Fish Guts

    ERIC Educational Resources Information Center

    Timmons, Maryellen

    2004-01-01

    Fish dissections are a great way to introduce the concepts of food webs, predator-prey relationships, and ecosystems, but these labs are expensive, messy, smelly, and require a lot of supervision because of the tools involved. The author has developed an inexpensive, safe, and clean alternative where students "dissect" simulated fish…

  5. Psychological Readiness and Motor Skills Needed for Toilet Training

    MedlinePlus

    ... as dirty or messy will only make your child feel bad), but you can certainly say positive things about how good it feels and smells to be clean and dry. Your goal is to strengthen your child’s awareness of the feeling of needing to go ...

  6. The Mud Center: Recapturing Childhood.

    ERIC Educational Resources Information Center

    Jensen, Becky J.; Bullard, Julie A.

    2002-01-01

    Describes a Montana child development center's creation of an area in which children could enjoy messy, creative, sensory experiences playing with mud and a wide variety of outdoor props. Discusses how mud play contributed to young children's emerging interests and provided opportunities for expressing creativity, enhancing fine motor skills, and…

  7. Colleague to Colleague: Deepening Instructional Practice

    ERIC Educational Resources Information Center

    Gullen, Kristine; Chaffee, Martin

    2012-01-01

    Collaborative dialogue about instructional practices is essential to the growth of the education profession. To determine what effective instruction is and how to improve their own instructional practice, educators must clarify and publicly state their beliefs about instruction, teaching, and leadership. This is messy and complex work, and to…

  8. "People Are Messy": Complex Narratives of Supervising New Professionals in Student Affairs

    ERIC Educational Resources Information Center

    Davis, Tiffany J.; Cooper, Diane L.

    2017-01-01

    This study explored how supervisors in student affairs narrate their experiences of supervising new professionals. Utilizing narrative inquiry methodology, data were obtained through in-depth interviews of 13 supervisors and analyzed using thematic and narrative analysis methods. Implications for graduate preparation programs, professional…

  9. A Formula for Factoring.

    ERIC Educational Resources Information Center

    Roebuck, Kay I. Meeks

    1997-01-01

    Suggests use of the quadratic formula to build understanding that connections between factors and solutions to equations work both ways. Making use of natural connections among concepts allows students to work more efficiently. Presents four sample problems showing the roots of equations. Messy quadratic equations with rational roots can be solved…

  10. Rhizomatic Mapping: Spaces for Learning in Higher Education

    ERIC Educational Resources Information Center

    Grellier, Jane

    2013-01-01

    Philosopher Gilles Deleuze and psychoanalyst Felix Guattari's figuration of the rhizome describes structures that are non-hierarchical and open-ended. Rhizomatic analyses are increasingly being adopted in educational research to challenge traditional power structures, give voice to those previously unheard and open issues in messy but authentic…

  11. In Praise of Messy Data

    ERIC Educational Resources Information Center

    Gould, Roy; Sunbury, Susan; Dussault, Mary

    2014-01-01

    The "Next-Generation Science Standards" emphasize the importance of teaching the practices of science alongside content ideas and crosscutting concepts (NGSS Lead States 2013). Chief among these practices is the ability to gather, assess, analyze, and interpret data. Authentic inquiry near the leading-edge of science offers a wonderful…

  12. Enhancements to the Economic Impact Forecast System (EIFS).

    DTIC Science & Technology

    1984-04-01

    The economic submodel is appropriately classified as an export base model that jointly determines... [The remainder of this scanned record is table residue, listing validation cases (Washington 1963, Utah 1963, New Mexico 1960, Kansas 1965, ...) and regression coefficients that cannot be reliably reconstructed.]

  13. ICT in health care: sociotechnical approaches.

    PubMed

    Berg, M; Aarts, J; van der Lei, J

    2003-01-01

    The importance of the social sciences for medical informatics is increasingly recognized. As ICT requires interaction with people and thereby inevitably affects them, understanding ICT requires a focus on the interrelation between technology and its social environment. Sociotechnical approaches increase our understanding of how ICT applications are developed, introduced and become a part of social practices. Sociotechnical approaches share several starting points: 1) they see health care work as a social, 'real life' phenomenon, which may seem 'messy' at first, but which is guided by a practical rationality that can only be overlooked at a high price (i.e. failed systems); 2) they see technological innovation as a social process in which organizations are deeply affected; 3) through in-depth, formative evaluation, they can help improve system design and implementation.

  14. The Application of Neutron Transport Green's Functions to Threat Scenario Simulation

    NASA Astrophysics Data System (ADS)

    Thoreson, Gregory G.; Schneider, Erich A.; Armstrong, Hirotatsu; van der Hoeven, Christopher A.

    2015-02-01

    Radiation detectors provide deterrence and defense against nuclear smuggling attempts by scanning vehicles, ships, and pedestrians for radioactive material. Understanding detector performance is crucial to developing novel technologies, architectures, and alarm algorithms. Detection can be modeled through radiation transport simulations; however, modeling a spanning set of threat scenarios over the full transport phase-space is computationally challenging. Previous research has demonstrated that Green's functions can simulate photon detector signals by decomposing the scenario space into independently simulated submodels. This paper presents decomposition methods for neutron and time-dependent transport. As a result, neutron detector signals produced from full forward transport simulations can be efficiently reconstructed by sequential application of submodel response functions.
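
    A hedged sketch of the reconstruction idea: if each submodel's response is precomputed over discrete energy bins, a full scenario signal follows from composing the responses rather than re-running transport. The two-stage decomposition (source through cargo, then detector) and all numbers below are illustrative, not the paper's submodels.

      # Compose precomputed submodel response functions, here as discrete
      # energy-bin transfer matrices; the decomposition is illustrative.
      import numpy as np

      rng = np.random.default_rng(0)
      n_bins = 16                              # energy groups
      source = rng.random(n_bins)              # emitted neutron spectrum

      # R[i, j] = probability a particle leaving bin j arrives in bin i.
      def random_response(n):
          R = rng.random((n, n))
          return R / R.sum(axis=0)             # column-normalized transfer

      R_cargo = random_response(n_bins)        # transport through cargo
      R_detector = random_response(n_bins)     # detector response

      # Full scenario signal without re-running transport: compose responses.
      signal = R_detector @ (R_cargo @ source)
      print(signal.round(3))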

  15. Strategic Planning in an Educational Development Centre: Motivation, Management, and Messiness

    ERIC Educational Resources Information Center

    Albon, Simon P.; Iqbal, Isabeau; Pearson, Marion L.

    2016-01-01

    Strategic planning in universities is frequently positioned as vital for clarifying future directions, providing a coherent basis for decision-making, establishing priorities, and improving organizational performance. Models for successful strategic planning abound and often present the process as linear and straightforward. In this essay, we…

  16. Here's Another Nice Mess: Using Video in Reflective Dialogue Research Method

    ERIC Educational Resources Information Center

    Hepplewhite, K.

    2014-01-01

    This account discusses "reflective dialogues", a process utilising video to re-examine in-action decision-making with theatre practitioners who operate in community contexts. The reflexive discussions combine with observation, text and digital documentation to offer a sometimes "messy" (from Schön 1987) dynamic to the research…

  17. Increasing On-Task Performance for Students with ADHD

    ERIC Educational Resources Information Center

    Fowler, Mary

    2010-01-01

    Inattention and/or impulsivity and hyperactivity are the core symptoms of attention deficit hyperactivity disorder (ADHD). In the day-to-day grind of teaching, when problems emerge, the teachers' best intentions and sensitivities are tested. Fidgety, loud, disorganized, disruptive, hurried, careless, and off-task behavior coupled with messy,…

  18. Balloon Sculpture

    ERIC Educational Resources Information Center

    Warwick, James F.

    1976-01-01

    For the adventurous teacher and student there is an alternative to the often messy mixing, pouring, casting, cutting, scoring and sanding of plaster of Paris for casting or sculptural projects. Balloon sculpture, devised, designed and shown here by a sculptor/teacher, is an eye appealing sculptural form and holds a strong interest for students.…

  19. Critical Exchange: Religion and Schooling in Conversation

    ERIC Educational Resources Information Center

    Stern, Julian

    2017-01-01

    Given the complex and messy contexts of schooling, conversations between religion and schooling can be "admitted" as examples of the sort of situated conversation that goes beyond the "false necessity" of universal state-controlled school-based education. There are distinct claims to be made about religion and schooling in…

  20. Autonomy, Perfectionism and the Justification of Education

    ERIC Educational Resources Information Center

    Drerup, Johannes

    2015-01-01

    This paper is concerned with the practical importance of different forms of paternalism for educational theory and practice. Contrary to the traditional treatment of paternalism as a sometimes necessary and rather messy aspect of educational practices, I demonstrate that paternalism is to be regarded as an "indigenous concept" (Herbart)…

  1. Approaching messy problems: strategies for environmental analysis

    Treesearch

    L. M. Reid; R. R. Ziemer; T. E. Lisle

    1996-01-01

    Environmental problems are never neatly defined. Instead, each is a tangle of interacting processes whose manifestation and interpretation are warped by the vagaries of time, weather, expectation, and economics. Each problem involves livelihoods, values, and numerous specialized disciplines. Nevertheless, federal agencies in the Pacific Northwest have been given the...

  2. The "Post-Post Period" and Environmental Education Research

    ERIC Educational Resources Information Center

    McKenzie, Marcia

    2005-01-01

    Described as "post-experimental" and of the "post-post period," the current moment in social science research is typified by multi-voiced texts, researcher reflexivity, cultural criticism, and experimental works; characteristics in keeping with post-structurally informed understandings of social science research as contingent, evolving and messy.…

  3. Not-So-Messy Hands-On Science.

    ERIC Educational Resources Information Center

    Bryan, Denise; Denty, Amy

    2002-01-01

    Presents four elementary hands-on science activities that highlight animal adaptation (how birds' beaks are adapted to suit their habitats), the water cycle (how nature cleans rainwater that seeps into the ground), aquatic ecosystems (changes over time in an aquatic habitat), and animal habitats (all living beings' need for food, water, shelter,…

  4. On Teaching a Fractured Macroeconomics: Thoughts

    ERIC Educational Resources Information Center

    Salemi, Michael K.

    1987-01-01

    Discusses Galbraith's (see SO516713) three major points, 1) that the Joint Council's "Framework" should not hide the fact that macroeconomics is messy and political; 2) that the emphasis in the "Framework" is misplaced; and 3) that in certain areas, such as aggregate supply and demand, it is wrong. (JDH)

  5. The Messy Business of Democracy.

    ERIC Educational Resources Information Center

    Harrington-Lueker, Donna

    1993-01-01

    Reviews eight books that address the issues of democratic governance. Includes a historic look at democratic leadership; a history of multicultural America; and the view that the humanities, as taught in school, have become deeply politicized. Other topics are a historical defense of bilingual education, population changes and educational policy,…

  6. Connecting to the Messy Reality

    ERIC Educational Resources Information Center

    Teitel, Lee

    2009-01-01

    The Massachusetts story is about persistence. Instead of jumping from one fad to the next, the Massachusetts Association of School Superintendents (MASS) developed a comprehensive and focused plan and stuck to it for several years. It is clearly a story about people--the trust and connections that developed among networks of superintendents that…

  7. Opportunity Knocks

    ERIC Educational Resources Information Center

    Hoskins, Barbara

    2009-01-01

    Many people are finding themselves in the middle of a messy muddle these days. Faced with budget cuts and the challenge of extending the reach of the institution to increase enrollments, people are in the classic do-more-with-less situation. In this article, the author discusses how distance-education programs can rapidly grow with limited…

  8. Difficult Dialogues about Service Learning: Embrace the Messiness

    ERIC Educational Resources Information Center

    Hui, S. Mei-Yen

    2009-01-01

    When she was graduate coordinator for the Office of Community Service-Learning's Alternative Breaks (AB) program at the University of Maryland-College Park, the author had the privilege of working with undergraduate student trip leaders as they researched, planned, and coordinated weeklong service-learning immersion trips in which students would…

  9. Real Talk, Real Teaching

    ERIC Educational Resources Information Center

    Nichols, Maria

    2014-01-01

    What happens in classrooms when we create the time and space for authentic talk about texts? Extended, collaborative conversations that allow understanding to unfold over time can be messy and dynamic. As students wrestle with complex texts and ideas, talk can become lively--and predictable problems can arise. In this article, Marie Nichols uses…

  10. Transitioning from Expository Laboratory Experiments to Course-Based Undergraduate Research in General Chemistry

    ERIC Educational Resources Information Center

    Clark, Ted M.; Ricciardo, Rebecca; Weaver, Tyler

    2016-01-01

    General chemistry courses predominantly use expository experiments that shape student expectations of what a laboratory activity entails. Shifting within a semester to course-based undergraduate research activities that include greater decision-making, collaborative work, and "messy" real-world data necessitates a change in student…

  11. Comparison and Item Analysis of the MESSY for Autistic and Normal Children.

    ERIC Educational Resources Information Center

    Matson, Johnny L.; And Others

    1991-01-01

    Social skills and levels of inappropriate assertiveness/impulsiveness were assessed using the Matson Evaluation of Social Skills with Youngsters, with 17 autistic children (ages 2-21) and 17 matched nonautistic children. Significant differences in both appropriate and inappropriate social behaviors were found. Results suggest the importance of…

  12. Digital Downsides: Exploring University Students' Negative Engagements with Digital Technology

    ERIC Educational Resources Information Center

    Selwyn, Neil

    2016-01-01

    Digital technologies are now an integral feature of university study. As such, academic research has tended to concentrate on the potential of digital technologies to support, extend and even "enhance" student learning. This paper, in contrast, explores the rather more messy realities of students' engagements with digital technology. In…

  13. Problems as Possibilities: Problem-Based Learning for K-12 Education.

    ERIC Educational Resources Information Center

    Torp, Linda; Sage, Sara

    Problem-based learning (PBL) is an experiential form of learning centered around the collaborative investigation and resolution of "messy, real-world" problems. This book offers opportunities to learn about problem-based learning from the perspectives of teachers, students, parents, administrators, and curriculum developers. Chapter 1 tells…

  14. Educational Reform Implementation: A Co-Constructed Process. Research Report 5.

    ERIC Educational Resources Information Center

    Datnow, Amanda; Hubbard, Lea; Mehan, Hugh

    This research report argues for viewing the complex, often messy process of school reform implementation as a "conditional matrix" coupled with qualitative research. As illustration, two studies (of six reform efforts in one county and of implementation of an untracking program in Kentucky) are reported. Preliminary analysis reveals that…

  15. Attitude determination using an adaptive multiple model filtering Scheme

    NASA Technical Reports Server (NTRS)

    Lam, Quang; Ray, Surendra N.

    1995-01-01

    Attitude determination has been a permanent topic of active research and a long-standing interest for spacecraft system designers. Its role is to provide a reference for controls, such as pointing directional antennas or solar panels, stabilizing the spacecraft, or maneuvering the spacecraft to a new orbit. The Least Squares Estimation (LSE) technique was used to provide attitude determination for Nimbus 6 and Nimbus G. Despite its poor performance (in terms of estimation accuracy), LSE was considered an effective and practical approach to meet the urgent needs and requirements of the 1970s. One reason for the poor performance of the LSE scheme is the lack of dynamic filtering or 'compensation': the scheme is based entirely on the measurements, and no attempt is made to model the dynamic equations of motion of the spacecraft. We propose an adaptive filtering approach which employs a bank of Kalman filters to perform robust attitude estimation. The proposed approach, whose architecture is depicted, is based on the interacting multiple model design framework to handle unknown system noise characteristics or statistics. The concept employs a bank of Kalman filters, one submodel per operating condition; however, instead of using fixed values for the system noise statistics of each submodel, as the traditional multiple model approach does, we use an on-line dynamic system noise identifier to identify the system noise level (statistics) and update the filter noise statistics using 'live' information from the sensor model. The noise identifier, whose architecture is also shown, is implemented using an advanced system identifier. To ensure robust performance, the system identifier is further reinforced by a learning system, implemented in the outer loop using neural networks, to identify other unknown quantities such as spacecraft dynamics parameters, gyro biases, dynamic disturbances, or environmental variations.
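
    As a hedged illustration of the multiple-model idea only (the paper's full architecture adds an on-line noise identifier and a neural-network outer loop), the sketch below runs a bank of two scalar Kalman filters with different hypothesized process-noise levels and weights them by measurement likelihood. All values are illustrative.

      # Bank of two scalar Kalman filters with different process noise q;
      # weights updated from measurement likelihoods (a simplified stand-in
      # for the adaptive noise identification described in the abstract).
      import numpy as np

      def kf_step(x, P, z, q, r=0.1):
          P = P + q                          # predict (identity dynamics)
          S = P + r                          # innovation variance
          K = P / S                          # Kalman gain
          innov = z - x
          lik = np.exp(-0.5 * innov**2 / S) / np.sqrt(2 * np.pi * S)
          return x + K * innov, (1 - K) * P, lik

      rng = np.random.default_rng(1)
      truth, zs = 0.0, []
      for _ in range(200):                   # truth: random walk with q = 0.5
          truth += rng.normal(0.0, np.sqrt(0.5))
          zs.append(truth + rng.normal(0.0, np.sqrt(0.1)))

      models = [{"q": 0.01, "x": 0.0, "P": 1.0},
                {"q": 0.5, "x": 0.0, "P": 1.0}]
      w = np.array([0.5, 0.5])
      for z in zs:
          liks = []
          for m in models:
              m["x"], m["P"], lik = kf_step(m["x"], m["P"], z, m["q"])
              liks.append(lik)
          w *= np.array(liks)
          w /= w.sum()
      print(f"posterior model weights: {w.round(3)}")  # should favor q = 0.5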

  16. Complex systems approach to scientific publication and peer-review system: development of an agent-based model calibrated with empirical journal data.

    PubMed

    Kovanis, Michail; Porcher, Raphaël; Ravaud, Philippe; Trinquart, Ludovic

    Scientific peer-review and publication systems incur a huge burden in terms of costs and time. Innovative alternatives have been proposed to improve the systems, but assessing their impact in experimental studies is not feasible at a systemic level. We developed an agent-based model by adopting a unified view of peer review and publication systems and calibrating it with empirical journal data in the biomedical and life sciences. We modeled researchers, research manuscripts, and scientific journals as agents. Researchers were characterized by their scientific level and resources, manuscripts by their scientific value, and journals by their reputation and acceptance or rejection thresholds. These state variables were used in submodels for various processes such as production of articles, submissions to target journals, in-house and external peer review, and resubmissions. We collected data for a sample of biomedical and life sciences journals regarding acceptance rates, resubmission patterns, and total number of published articles. We adjusted submodel parameters so that the agent-based model outputs fit these empirical data. We simulated 105 journals, 25,000 researchers and 410,000 manuscripts over 10 years. A mean of 33,600 articles were published per year; 19% of submitted manuscripts remained unpublished. The mean acceptance rate was 21% after external peer review, and the rejection rate was 32% after in-house review; 15% of publications resulted from the first submission, 47% from the second, and 20% from the third. All decisions in the model were mainly driven by scientific value, whereas journal targeting and persistence in resubmission determined whether a manuscript would be published or abandoned after one or many rejections. This agent-based model may help in better understanding the determinants of the scientific publication and peer-review systems. It may also help in assessing and identifying the most promising alternative systems of peer review.
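
    A minimal sketch of the submission-resubmission cascade such a model simulates: manuscripts carry a scientific value, journals an acceptance threshold, and authors resubmit down a target list after rejection. All parameters are illustrative placeholders, not the calibrated values from the study.

      # Toy submission-resubmission cascade; numbers are illustrative.
      import random

      random.seed(42)
      journals = sorted((random.uniform(0.2, 0.9) for _ in range(20)),
                        reverse=True)         # acceptance thresholds, high first

      def submit(value, noise=0.15, max_tries=5):
          """Return the 1-based accepting submission, or 0 if abandoned."""
          targets = [t for t in journals if t <= value + noise] or journals[-1:]
          for attempt, threshold in enumerate(targets[:max_tries], start=1):
              review_score = value + random.gauss(0, noise)  # noisy peer review
              if review_score >= threshold:
                  return attempt
          return 0                                           # abandoned

      outcomes = [submit(random.random()) for _ in range(10_000)]
      published = [o for o in outcomes if o]
      print(f"unpublished: {outcomes.count(0) / len(outcomes):.0%}")
      print(f"accepted on 1st/2nd submission: "
            f"{published.count(1) / len(published):.0%} / "
            f"{published.count(2) / len(published):.0%}")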

  17. Attitude determination using an adaptive multiple model filtering Scheme

    NASA Astrophysics Data System (ADS)

    Lam, Quang; Ray, Surendra N.

    1995-05-01

    Attitude determination has been a permanent topic of active research and a long-standing interest for spacecraft system designers. Its role is to provide a reference for controls, such as pointing directional antennas or solar panels, stabilizing the spacecraft, or maneuvering the spacecraft to a new orbit. The Least Squares Estimation (LSE) technique was used to provide attitude determination for Nimbus 6 and Nimbus G. Despite its poor performance (in terms of estimation accuracy), LSE was considered an effective and practical approach to meet the urgent needs and requirements of the 1970s. One reason for the poor performance of the LSE scheme is the lack of dynamic filtering or 'compensation': the scheme is based entirely on the measurements, and no attempt is made to model the dynamic equations of motion of the spacecraft. We propose an adaptive filtering approach which employs a bank of Kalman filters to perform robust attitude estimation. The proposed approach, whose architecture is depicted, is based on the interacting multiple model design framework to handle unknown system noise characteristics or statistics. The concept employs a bank of Kalman filters, one submodel per operating condition; however, instead of using fixed values for the system noise statistics of each submodel, as the traditional multiple model approach does, we use an on-line dynamic system noise identifier to identify the system noise level (statistics) and update the filter noise statistics using 'live' information from the sensor model. The noise identifier, whose architecture is also shown, is implemented using an advanced system identifier. To ensure robust performance, the system identifier is further reinforced by a learning system, implemented in the outer loop using neural networks, to identify other unknown quantities such as spacecraft dynamics parameters, gyro biases, dynamic disturbances, or environmental variations.

  18. Regional and climate forcing on forage fish and apex predators in the California Current: new insights from a fully coupled ecosystem model.

    NASA Astrophysics Data System (ADS)

    Fiechter, J.; Rose, K.; Curchitser, E. N.; Huckstadt, L. A.; Costa, D. P.; Hedstrom, K.

    2016-12-01

    A fully coupled ecosystem model is used to describe the impact of regional and climate variability on changes in abundance and distribution of forage fish and apex predators in the California Current Large Marine Ecosystem. The ecosystem model consists of a biogeochemical submodel (NEMURO) embedded in a regional ocean circulation submodel (ROMS), both coupled with a multi-species individual-based submodel for two forage fish species (sardine and anchovy) and one apex predator (California sea lion). Sardine and anchovy are specifically included in the model as they exhibit significant interannual and decadal variability in population abundances and are commonly found in the diet of California sea lions. Output from the model demonstrates how regional-scale (i.e., upwelling intensity) and basin-scale (i.e., PDO and ENSO signals) physical processes control species distributions and predator-prey interactions on interannual time scales. The results also illustrate how variability in environmental conditions leads to the formation of seasonal hotspots where prey and predator spatially overlap. While specifically focused on sardine, anchovy, and sea lions, the modeling framework presented here can provide new insights into the physical and biological mechanisms controlling trophic interactions in the California Current, or other regions where similar end-to-end ecosystem models may be implemented.

  19. A microscale three-dimensional urban energy balance model for studying surface temperatures

    NASA Astrophysics Data System (ADS)

    Krayenhoff, E. Scott; Voogt, James A.

    2007-06-01

    A microscale three-dimensional (3-D) urban energy balance model, Temperatures of Urban Facets in 3-D (TUF-3D), is developed to predict urban surface temperatures for a variety of surface geometries and properties, weather conditions, and solar angles. The surface is composed of plane-parallel facets: roofs, walls, and streets, which are further sub-divided into identical square patches, resulting in a 3-D raster-type model geometry. The model code is structured into radiation, conduction and convection sub-models. The radiation sub-model uses the radiosity approach and accounts for multiple reflections and shading of direct solar radiation. Conduction is solved by finite differencing of the heat conduction equation, and convection is modelled by empirically relating patch heat transfer coefficients to the momentum forcing and the building morphology. The radiation and conduction sub-models are tested individually against measurements, and the complete model is tested against full-scale urban surface temperature and energy balance observations. Modelled surface temperatures perform well at both the facet-average and the sub-facet scales given the precision of the observations and the uncertainties in the model inputs. The model has several potential applications, such as the calculation of radiative loads, and the investigation of effective thermal anisotropy (when combined with a sensor-view model).
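
    To make the conduction sub-model concrete, the sketch below applies explicit finite differencing of the 1-D heat equation through a single facet (e.g. a roof cross-section). The material properties, layering, and forcing are illustrative placeholders, not TUF-3D's values.

      # Explicit finite-difference conduction through a facet; illustrative.
      import numpy as np

      alpha = 7e-7         # thermal diffusivity, m^2/s (concrete-like)
      dx, dt = 0.02, 60.0  # 2 cm layers, 60 s steps (dt < dx**2/(2*alpha))
      n_layers, n_steps = 10, 24 * 60

      T = np.full(n_layers, 290.0)            # initial temperature profile, K
      for step in range(n_steps):
          T[0] = 290.0 + 15.0 * np.sin(2 * np.pi * step / n_steps)  # skin forcing
          T[-1] = 293.0                       # inner boundary: indoor air
          T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])

      print(T.round(2))                       # profile after one forced day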

  20. Three-dimensional simulation of the motion of a single particle under a simulated turbulent velocity field

    NASA Astrophysics Data System (ADS)

    Moreno-Casas, P. A.; Bombardelli, F. A.

    2015-12-01

    A 3D Lagrangian particle tracking model is coupled to a 3D channel velocity field to simulate the motion of a single sediment particle moving in saltation mode. The turbulent field is a high-resolution, three-dimensional velocity field that reproduces bypass transition to turbulence on a flat plate due to free-stream turbulence passing above the plate. In order to reduce computational costs, a decoupled approach is used, i.e., the turbulent flow is simulated independently from the tracking model and then used to feed the 3D Lagrangian particle model. The simulations are carried out using the point-particle approach. The particle tracking model contains three sub-models, namely particle free-flight, post-collision velocity, and bed representation sub-models. The free-flight sub-model considers the action of the following forces: submerged weight, non-linear drag, lift, virtual mass, Magnus, and Basset forces. The model also includes the effect of particle angular velocity. The post-collision velocities are obtained by applying conservation of angular and linear momentum. The complete model was validated with experimental results from the literature within the sand range. Results for particle velocity time series and distributions of particle turbulent intensities are presented.
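
    A hedged sketch of the free-flight idea, reduced to submerged weight plus non-linear drag in a frozen uniform fluid velocity (the full sub-model also carries lift, virtual mass, Magnus and Basset forces and a turbulent field). Particle and fluid properties are illustrative.

      # Single-hop free flight of a sand grain: submerged weight + drag only.
      import numpy as np

      d, rho_p, rho_f, nu = 1e-3, 2650.0, 1000.0, 1e-6  # grain in water
      g = np.array([0.0, 0.0, -9.81])
      m = rho_p * np.pi * d**3 / 6

      def drag(v_rel):
          speed = np.linalg.norm(v_rel)
          if speed == 0.0:
              return np.zeros(3)
          Re = speed * d / nu
          Cd = 24 / Re * (1 + 0.15 * Re**0.687)   # Schiller-Naumann fit
          return -0.5 * rho_f * Cd * np.pi * d**2 / 4 * speed * v_rel

      x, v = np.zeros(3), np.array([0.5, 0.0, 0.2])  # launched off the bed
      u_fluid = np.array([1.0, 0.0, 0.0])            # frozen fluid velocity
      dt = 1e-4
      while x[2] >= 0.0:                             # integrate to bed impact
          F = m * g * (1 - rho_f / rho_p) + drag(v - u_fluid)
          v += F / m * dt
          x += v * dt
      print(f"hop length = {x[0]:.4f} m")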

  1. A teleonomic model describing performance (body, milk and intake) during growth and over repeated reproductive cycles throughout the lifespan of dairy cattle. 1. Trajectories of life function priorities and genetic scaling.

    PubMed

    Martin, O; Sauvant, D

    2010-12-01

    The prediction of the control of nutrient partitioning, particularly energy, is a major issue in modelling dairy cattle performance. The proportions of energy channelled to physiological functions (growth, maintenance, gestation and lactation) change as the animal ages and reproduces, and according to its genotype and nutritional environment. This is the first of two papers describing a teleonomic model of individual performance during growth and over repeated reproductive cycles throughout the lifespan of dairy cattle. The conceptual framework is based on the coupling of a regulating sub-model, providing teleonomic drives, to govern the work of an operating sub-model scaled with genetic parameters. The regulating sub-model describes the dynamic partitioning of a female mammal's priority between life functions targeted to growth (G), ageing (A), balance of body reserves (R) and nutrient supply of the unborn (U), newborn (N) and suckling (S) calf. The so-called GARUNS dynamic pattern defines a trajectory of relative priorities, goal-directed towards the survival of the individual for the continuation of the species. The operating sub-model describes changes in body weight (BW) and composition, foetal growth, milk yield and composition, and food intake in dairy cows throughout their lifespan, that is, during growth, over successive reproductive cycles and through ageing. This dynamic pattern of performance defines a reference trajectory of a cow under normal husbandry conditions and feed regimen. Genetic parameters are incorporated in the model to scale individual performance and simulate differences within and between breeds. The model was calibrated for dairy cows with literature data, and evaluated by comparison with simulations from previously published empirical equations of BW, body condition score, milk yield and composition, and feed intake. This evaluation showed that the model adequately simulates these production variables throughout the lifespan and across a range of dairy cattle genotypes.

  2. An integrated model for assessing both crop productivity and agricultural water resources at a large scale

    NASA Astrophysics Data System (ADS)

    Okada, M.; Sakurai, G.; Iizumi, T.; Yokozawa, M.

    2012-12-01

    Agricultural production utilizes regional resources (e.g. river water and ground water) as well as local resources (e.g. temperature, rainfall, solar energy). Future climate changes and increasing demand due to population growth and economic development would strongly affect the availability of water resources for agricultural production. While many studies have assessed the impacts of climate change on agriculture, few studies dynamically account for changes in both water resources and crop production. This study proposes an integrated model for assessing both crop productivity and agricultural water resources at a large scale. Irrigation management in response to subseasonal variability in weather and crop response also varies for each region and each crop; to deal with such variations, we used the Markov chain Monte Carlo technique to quantify region-specific parameters associated with crop growth and irrigation water estimation. We coupled a large-scale crop model (Sakurai et al. 2012) with a global water resources model, H08 (Hanasaki et al. 2008). The integrated model consists of five sub-models for the following processes: land surface, crop growth, river routing, reservoir operation, and anthropogenic water withdrawal. The land surface sub-model is based on a watershed hydrology model, SWAT (Neitsch et al. 2009). Surface and subsurface runoff simulated by the land surface sub-model is input to the river routing sub-model of the H08 model. A part of the regional water resources available for agriculture, simulated by the H08 model, is input as irrigation water to the land surface sub-model. The timing and amount of irrigation water are simulated at a daily step. The integrated model reproduced the observed streamflow in an individual watershed, and it accurately reproduced the trends and interannual variations of crop yields. To demonstrate the usefulness of the integrated model, we compared two types of assessment of climate change impacts on crop productivity in a watershed: the first carried out with the large-scale crop model alone, the second with the integrated crop and H08 model. The former projected that changes in temperature and precipitation due to future climate change would increase water stress in crops. The latter, however, projected that the increasing amount of agricultural water resources in the watershed would supply sufficient water for irrigation and consequently reduce the water stress. The integrated model demonstrates the importance of taking water circulation in the watershed into account when predicting regional crop production.
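
    A hedged sketch of the daily coupling loop described here: land-surface runoff feeds river storage, irrigation is withdrawn from that storage, and crop water stress depends on what could actually be supplied. All quantities and parameters are illustrative placeholders.

      # Toy daily coupling of water-resources and crop sub-models.
      import random

      random.seed(7)
      river_storage = 50.0            # mm equivalent over the irrigated area
      yield_index = 1.0

      for day in range(120):          # one growing season
          rain = max(0.0, random.gauss(2.0, 4.0))      # mm/day
          river_storage += 0.3 * rain                  # land surface + routing

          demand = max(0.0, 5.0 - rain)                # crop irrigation demand
          withdrawal = min(demand, river_storage)
          river_storage -= withdrawal                  # withdrawal sub-model

          supplied = rain + withdrawal
          stress = max(0.0, 1.0 - supplied / 5.0)      # crop-growth sub-model
          yield_index *= 1.0 - 0.01 * stress           # stress days cost yield

      print(f"relative yield: {yield_index:.2f}, "
            f"storage left: {river_storage:.0f} mm")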

  3. The Use of Social Ecological Hotspots Mapping: Co-Developing Adaptation Strategies for Resource Management by Communities and Policy Makers

    NASA Astrophysics Data System (ADS)

    Alessa, L.

    2014-12-01

    Ultimately, adaptation is based on a set of trade-offs rather than optimal conditions, something that is rarely seen in messy social ecological systems (SES). In this talk, we discuss the role of spatial hot-spot mapping using social and biophysical data to understand the feedbacks in SES. We review the types of data needed, their means of acquisition and the analytic methods involved. In addition, we outline the challenges faced in co-developing this type of inquiry based on lessons learned from several long-term programs. Finally, we present the utility of SES hotspots in developing adaptation strategies on the ground by communities and policy makers.

  4. Demonstration of a fully-coupled end-to-end model for small pelagic fish using sardine and anchovy in the California Current

    NASA Astrophysics Data System (ADS)

    Rose, Kenneth A.; Fiechter, Jerome; Curchitser, Enrique N.; Hedstrom, Kate; Bernal, Miguel; Creekmore, Sean; Haynie, Alan; Ito, Shin-ichi; Lluch-Cota, Salvador; Megrey, Bernard A.; Edwards, Chris A.; Checkley, Dave; Koslow, Tony; McClatchie, Sam; Werner, Francisco; MacCall, Alec; Agostini, Vera

    2015-11-01

    We describe and document an end-to-end model of anchovy and sardine population dynamics in the California Current as a proof of principle that such coupled models can be developed and implemented. The end-to-end model is 3-dimensional, time-varying, and multispecies, and consists of four coupled submodels: hydrodynamics, Eulerian nutrient-phytoplankton-zooplankton (NPZ), an individual-based full life cycle anchovy and sardine submodel, and an agent-based fishing fleet submodel. A predator roughly mimicking albacore was included as individuals that consumed anchovy and sardine. All submodels were coded within the ROMS open-source community model, used the same spatial grid resolution, and were solved simultaneously to allow for possible feedbacks among the submodels. We used a super-individual approach and solved the coupled models on a distributed-memory parallel computer, both of which created challenging but resolvable bookkeeping issues. The anchovy and sardine growth, mortality, reproduction, and movement, and the fishing fleet submodel, were each calibrated using simplified grids before being inserted into the full end-to-end model. A historical simulation of 1959-2008 was performed, and the latter 45 years were analyzed. Sea surface height (SSH) and sea surface temperature (SST) for the historical simulation showed strong horizontal gradients and multi-year scale temporal oscillations related to various climate indices (PDO, NPGO), and both showed responses to ENSO variability. Simulated total phytoplankton was lower during strong El Nino events and higher for the strong 1999 La Nina event. The three zooplankton groups generally corresponded to the spatial and temporal variation in simulated total phytoplankton. Simulated biomasses of anchovy and sardine were within the historical range of observed biomasses, but predicted biomasses showed much less inter-annual variation. Anomalies of annual biomasses of anchovy and sardine showed a switch in the mid-1990s from anchovy to sardine dominance. Simulated average weights- and lengths-at-age did not vary much across decades, and movement patterns showed anchovy located close to the coast while sardine were more dispersed and farther offshore. Albacore predation on anchovy and sardine was concentrated near the coast in two pockets, near the Monterey Bay area and equatorward of Cape Mendocino. Predation mortality from fishing boats was concentrated where sardine age-1 and older individuals were located close to one of the five ports. We demonstrated that it is feasible to perform multi-decadal simulations of a fully-coupled end-to-end model, and that this can be done for a model that follows individual fish and boats on the same 3-dimensional grid as the hydrodynamics. Our focus here was on proof of principle, and our results showed that we solved the major technical, bookkeeping, and computational issues. We discuss the next steps to increase computational speed and to include important biological differences between anchovy and sardine. In a companion paper (Fiechter et al., 2015), we further analyze the historical simulation in the context of the various hypotheses that have been proposed to explain the sardine and anchovy cycles.
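
    A hedged sketch of the super-individual bookkeeping the authors mention: one model agent stands for many identical fish, so mortality shrinks the number represented (the "worth") while growth updates the shared traits. Rates and values below are illustrative only.

      # Super-individual bookkeeping, reduced to its essentials.
      from dataclasses import dataclass

      @dataclass
      class SuperIndividual:
          weight_g: float      # traits shared by all fish it represents
          worth: float         # how many real fish this agent stands for

          def daily_update(self, growth_g, mortality_per_day):
              self.weight_g += growth_g               # growth acts on traits
              self.worth *= 1.0 - mortality_per_day   # mortality acts on worth

      school = [SuperIndividual(weight_g=5.0, worth=1e6) for _ in range(3)]
      for day in range(365):
          for fish in school:
              fish.daily_update(growth_g=0.05, mortality_per_day=0.002)

      total_biomass_t = sum(f.weight_g * f.worth for f in school) / 1e6
      print(f"biomass after one year: {total_biomass_t:.1f} t")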

  5. A Theoretical Math Model for Projecting AIS3+ Thoracic Injury for Belted Occupants in Frontal Impact.

    PubMed

    Laituri, Tony R; Sullivan, Donald; Sullivan, Kaye; Prasad, Priya

    2004-11-01

    A theoretical math model was created to assess the net effect of aging populations versus evolving system designs from the standpoint of thoracic injury potential. The model was used to project the next twenty-five years of thoracic injuries in Canada. The choice of Canada was topical because rulemaking for CMVSS 208 had recently been proposed. The study was limited to properly belted, front-outboard, adult occupants in 11-1 o'clock frontal crashes, and only AIS3+ thoracic injury potential was considered. The research consisted of four steps. First, sub-models were developed and integrated. The sub-models covered numerous real-world effects, including population growth, crash involvement, fleet penetration of various systems (via system introduction, vehicle production, and vehicle attrition), and attendant injury risk estimation. Second, existing NASS data were used to estimate the number of AIS3+ chest-injured drivers in Canada in 2001, which served as data for model validation. Third, the projection model was correlated favorably with the 2001 field estimate. Finally, for the scenario that 2004-2030 model-year systems would perform like 2000-2003 model-year systems, a projection was made to estimate the long-term effect of eliminating designs that would not comply with the proposed CMVSS 208. The 2006-2030 projection result for this scenario was that 764 occupants would benefit from the proposed regulation. This projection was considered conservative because future innovation was not considered and, to date, the fleet's average chest deflections have been decreasing. The model also predicted that, through 2016, the effect of improving system performance would be more influential than the population-aging effect; thereafter, the population-aging effect would somewhat counteract the effect of improving system performance. This theoretical math model can provide insights for both designers and rule makers.
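
    A hedged sketch of how such sub-models combine arithmetically: expected AIS3+ chest injuries are summed over age groups and restraint-system generations. Every number below is a placeholder, not a value from the paper.

      # Expected-injury roll-up across age groups and system generations.
      age_groups = {                 # (occupants in crashes, baseline risk)
          "16-35": (40_000, 0.004),
          "36-55": (35_000, 0.007),
          "56+":   (25_000, 0.015),
      }
      fleet_share = {"older system": 0.6, "newer system": 0.4}  # penetration
      risk_scale = {"older system": 1.0, "newer system": 0.8}   # system effect

      expected = sum(
          n * base_risk * share * risk_scale[system]
          for n, base_risk in age_groups.values()
          for system, share in fleet_share.items()
      )
      print(f"expected AIS3+ chest injuries: {expected:.0f}")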

  6. Hybrid model analysis of intra-aortic balloon pump performance as a function of ventricular and circulatory parameters.

    PubMed

    Ferrari, Gianfranco; Khir, Ashraf W; Fresiello, Libera; Di Molfetta, Arianna; Kozarski, Maciej

    2011-09-01

    We investigated the effects of the intra-aortic balloon pump (IABP) on endocardial viability ratio (EVR), cardiac output (CO), end-systolic (V(es)) and end-diastolic (V(ed)) ventricular volumes, total coronary blood flow (TCBF), and ventricular energetics (external work [EW], pressure-volume area [PVA]) under different ventricular (E(max) and diastolic stiffness) and circulatory (arterial compliance) parameters. We derived a hybrid model from a computational model; the hybrid model is based on merging computational and hydraulic submodels. The lumped-parameter computational submodel consists of left and right hearts and systemic, pulmonary, and coronary circulations. The hydraulic submodel includes part of the systemic arterial circulation, essentially a silicone rubber tube representing the aorta, which contains a 40-mL IAB. EVR, CO, V(es), V(ed), TCBF, and ventricular energetics (EW, PVA) were analyzed over ranges of left ventricular E(max) (0.3-0.5-1 mm Hg/cm(3)), diastolic stiffness V(stiffness) (≈0.08 and ≈0.3 mm Hg/cm(3), obtained by changing the diastolic stiffness constant), and systemic arterial compliance (1.8-2.5 cm(3)/mm Hg). All experiments compared the selected variables before and during IABP assistance. Increasing E(max) from 0.5 to 2 mm Hg/cm(3) resulted in IABP assistance producing lower percentage changes in the selected variables. Changes in ventricular diastolic stiffness strongly influence both the absolute value of EVR and its variations during IABP assistance (71 and 65% for lower and higher arterial compliance, respectively). V(ed) and V(es) changes are rather small but higher for lower E(max) and higher V(stiffness). Lower E(max) and higher V(stiffness) resulted in higher TCBF and CO during IABP assistance (∼35 and 10%, respectively). The use of this hybrid model allows for testing real devices in realistic, stable, and repeatable circulatory conditions. Specifically, the presented results show that IABP performance depends, at least in part, on left ventricular filling, ejection characteristics, and arterial compliance. It is possible in this way to simulate patient-specific conditions and predict IABP performance at different values of the circulatory or ventricular parameters. Further work is required to model heart recovery conditions, baroreceptor controls, and physiological feedbacks.
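
    As a toy lumped-parameter illustration of counterpulsation (not the hybrid model itself, which couples a physical silicone aorta), the sketch below drives a two-element windkessel with a systolic inflow and shifts 40 cm^3 of balloon volume in and out during diastole. All parameters are illustrative.

      # Two-element windkessel with instantaneous balloon volume shifts.
      C, R = 1.8, 1.0         # compliance (cm^3/mmHg), resistance (mmHg.s/cm^3)
      dt, T = 0.001, 0.8      # time step (s), cardiac period (s)
      P, trace = 80.0, []

      for step in range(int(10 * T / dt)):           # simulate 10 beats
          t = (step * dt) % T
          q_in = 300.0 if t < 0.3 * T else 0.0       # systolic inflow, cm^3/s
          q_balloon = 0.0
          if abs(t - 0.40 * T) < dt / 2:
              q_balloon = +40.0 / dt                 # inflate in early diastole
          if abs(t - 0.95 * T) < dt / 2:
              q_balloon = -40.0 / dt                 # deflate before systole
          P += dt * (q_in + q_balloon - P / R) / C   # windkessel update
          trace.append(P)

      last = trace[-int(T / dt):]
      print(f"peak/trough over final beat: {max(last):.0f}/{min(last):.0f} mmHg")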

  7. Application of Wavelet-Based Methods for Accelerating Multi-Time-Scale Simulation of Bistable Heterogeneous Catalysis

    DOE PAGES

    Gur, Sourav; Frantziskonis, George N.; ...

    2017-02-16

    Here, we report results from a numerical study of multi-time-scale bistable dynamics for CO oxidation on a catalytic surface in a flowing, well-mixed gas stream. The problem is posed in terms of surface and gas-phase submodels that dynamically interact in the presence of stochastic perturbations, reflecting the impact of molecular-scale fluctuations on the surface and turbulence in the gas. Wavelet-based methods are used to encode and characterize the temporal dynamics produced by each submodel and detect the onset of sudden state shifts (bifurcations) caused by nonlinear kinetics. When impending state shifts are detected, a more accurate but computationally expensive integration scheme can be used. This appears to make it possible, at least in some cases, to decrease the net computational burden associated with simulating multi-time-scale, nonlinear reacting systems by limiting the amount of time in which the more expensive integration schemes are required. Critical to achieving this is being able to detect unstable temporal transitions such as the bistable shifts in the example problem considered here. Lastly, our results indicate that a unique wavelet-based algorithm based on the Lipschitz exponent is capable of making such detections, even under noisy conditions, and may find applications in critical transition detection problems beyond catalysis.
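
    A hedged sketch of the detection idea: the response to a Haar-type wavelet peaks where a noisy signal jumps between bistable branches. This is a simplified stand-in for the paper's Lipschitz-exponent algorithm, with illustrative data.

      # Haar-type wavelet response flags a sudden state shift in noisy data.
      import numpy as np

      rng = np.random.default_rng(3)
      n = 1024
      signal = np.where(np.arange(n) < 700, 0.2, 0.9)  # bistable state shift
      signal = signal + rng.normal(0, 0.03, n)         # stochastic perturbations

      kernel = np.concatenate([np.ones(8), -np.ones(8)]) / 4.0  # Haar, scale 16
      detail = np.convolve(signal, kernel, mode="same")

      k = int(np.argmax(np.abs(detail)))
      print(f"state shift detected near sample {k} (true shift at 700)")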

  8. An eco-epidemiological system with infected prey and predator subject to the weak Allee effect.

    PubMed

    Sasmal, Sourav Kumar; Chattopadhyay, Joydev

    2013-12-01

    In this article, we propose a general prey–predator model with disease in the prey, with both prey and predator subject to weak Allee effects. We make the following assumptions: (i) infected prey competes for resources but does not contribute to reproduction; and (ii) in comparison to consumption of susceptible prey, consumption of infected prey contributes less, or negatively, to the growth of the predator. Based on these assumptions, we provide basic dynamic properties for the full model and the corresponding submodels with and without the Allee effects. By comparing the disease-free submodels (susceptible prey–predator model) with and without the Allee effects, we conclude that the Allee effects can create or destroy interior attractors. This enables us to obtain the complete dynamics of the full model and conclude that the model has either one attractor (only susceptible prey survives, or susceptible and infected prey coexist) or two attractors (bistability between susceptible-prey-only and susceptible prey–predator coexistence, or between susceptible–infected prey coexistence and susceptible prey–predator coexistence). The model does not support coexistence of susceptible prey, infected prey, and predator, which is caused by the assumption that the infected population contributes less, or is harmful, to the growth of the predator in comparison to consumption of susceptible prey.
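
    A hedged toy of the disease-free submodel with a weak Allee effect in prey growth (the multiplicative factor s / (s + a)); the functional forms and parameters below are illustrative, not the paper's.

      # Susceptible prey (s) and predator (p) with a weak Allee effect.
      from scipy.integrate import solve_ivp

      r, K, a = 1.0, 1.0, 0.2      # prey growth, capacity, Allee constant
      b, c, d = 2.0, 1.0, 0.5      # predation, conversion, predator death

      def rhs(t, y):
          s, p = y
          ds = r * s * (1 - s / K) * s / (s + a) - b * s * p
          dp = c * b * s * p - d * p
          return [ds, dp]

      sol = solve_ivp(rhs, (0, 200), [0.5, 0.1], rtol=1e-8)
      print(f"final state: prey = {sol.y[0, -1]:.3f}, "
            f"predator = {sol.y[1, -1]:.3f}")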

  9. An Integrated Framework for Model-Based Distributed Diagnosis and Prognosis

    NASA Technical Reports Server (NTRS)

    Bregon, Anibal; Daigle, Matthew J.; Roychoudhury, Indranil

    2012-01-01

    Diagnosis and prognosis are necessary tasks for system reconfiguration and fault-adaptive control in complex systems. Diagnosis consists of detection, isolation and identification of faults, while prognosis consists of prediction of the remaining useful life of systems. This paper presents a novel integrated framework for model-based distributed diagnosis and prognosis, where system decomposition is used to enable the diagnosis and prognosis tasks to be performed in a distributed way. We show how different submodels can be automatically constructed to solve the local diagnosis and prognosis problems. We illustrate our approach using a simulated four-wheeled rover for different fault scenarios. Our experiments show that our approach correctly performs distributed fault diagnosis and prognosis in an efficient and robust manner.

  10. The development and preliminary application of an invariant coupled diffusion and chemistry model

    NASA Technical Reports Server (NTRS)

    Hilst, G. R.; Donaldson, C. duP.; Teske, M.; Contiliano, R.; Freiberg, J.

    1973-01-01

    In many real-world pollution chemistry problems, the rate of reaction may be greatly affected by unmixedness. An approximate closure scheme has been developed for a chemical kinetic submodel which conforms to the principles of invariant modeling and which accounts for the effects of inhomogeneous mixing over a wide range of conditions. This submodel has been coupled successfully with invariant turbulence and diffusion models, permitting calculation of the two-dimensional diffusion of two isothermally reacting chemical species. The initial calculations indicate that the ozone reactions in the wake of stratospheric aircraft will be substantially affected by the rate of diffusion of ozone into the wake and, in the early wake, by unmixedness.
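
    A hedged numerical illustration of why unmixedness matters: for a second-order reaction the true mean rate is k(<c1><c2> + <c1'c2'>), so ignoring the covariance of imperfectly mixed species over-predicts consumption. Concentration fields and values below are illustrative.

      # Mean-field vs. unmixedness-aware mean reaction rate.
      import numpy as np

      rng = np.random.default_rng(5)
      k = 1.0
      # Species injected in alternating pockets: strongly anti-correlated.
      c1 = np.clip(rng.normal(1.0, 0.6, 100_000), 0, None)
      c2 = np.clip(2.0 - c1 + rng.normal(0, 0.1, 100_000), 0, None)

      mean_field_rate = k * c1.mean() * c2.mean()   # assumes perfect mixing
      true_mean_rate = k * (c1 * c2).mean()         # includes covariance term
      print(f"perfectly mixed: {mean_field_rate:.3f}, "
            f"with unmixedness: {true_mean_rate:.3f}")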

  11. Dealing with Messiness and Uncertainty in Practitioner Research: The Nature of Participatory Action Research

    ERIC Educational Resources Information Center

    Goodnough, Karen

    2008-01-01

    This article reports on the experiences and perceptions of K-12 teachers as they engaged in a participatory action research (PAR) project, "Science Across the Curriculum." Although the experiences and professional learning of two of the project participants are highlighted, the challenges that all participants experienced as they conceptualized…

  12. Making a Mess of Academic Work: Experience, Purpose and Identity

    ERIC Educational Resources Information Center

    Malcolm, Janice; Zukas, Miriam

    2009-01-01

    Within the policy discourse of academic work, teaching, research and administration are seen as discrete elements of practice. We explore the assumptions evident in this "official story" and contrast it with the messy experience of academic work, drawing upon empirical studies and conceptualisations from our own research and from recent…

  13. Coaching as Caring (The Smiling Gallery): Accessing Hidden Knowledge

    ERIC Educational Resources Information Center

    Jones, Robyn L.

    2009-01-01

    Background: Recent research into coaching has been critical of much previous work, particularly in terms of the tendency to paint a rather unproblematic portrayal of the activity. The criticism has focussed on the erroneous supposition that method can be substituted for individuals, thus giving a synthetic account of a most messy of jobs.…

  14. Connected Learning in the Library as a Product of Hacking, Making, Social Diversity and Messiness

    ERIC Educational Resources Information Center

    Bilandzic, Mark

    2016-01-01

    Learning is most effective when intrinsically motivated through personal interest, and situated in a supportive socio-cultural context. This paper reports on findings from a study that explored implications for design of interactive learning environments through 18 months of ethnographic observations of people's interactions at "Hack The…

  15. Critical Exchange: Avoiding Schooling Taboos--A Reply to Rasmussen

    ERIC Educational Resources Information Center

    Stern, Julian

    2017-01-01

    This paper is a reply to Mary L. Rasumussen's paper, "Critical Exchange: Religion and Schooling: What Should Their Relationship Be?" In this paper, Julian Stern makes three claims: (1) the need for a messy conversation; (2) the need to include "whole" people in schools; and (3) the need to consider existentially significant…

  16. Training Interdisciplinary "Wicked Problem" Solvers: Applying Lessons from HERO in Community-Based Research Experiences for Undergraduates

    ERIC Educational Resources Information Center

    Cantor, Alida; DeLauer, Verna; Martin, Deborah; Rogan, John

    2015-01-01

    Management of "wicked problems", messy real-world problems that defy resolution, requires thinkers who can transcend disciplinary boundaries, work collaboratively, and handle complexity and obstacles. This paper explores how educators can train undergraduates in these skills through applied community-based research, using the example of…

  17. Beyond Technology, How to Spark Kids' Passions

    ERIC Educational Resources Information Center

    Barseghian, Tina

    2012-01-01

    Helping kids find their passion outside the confines of standardized curriculum and testing can be a messy endeavor, but worth the challenge. Marc Prensky, author of "BRAIN GAIN: Technology and the Quest for Digital Wisdom," said that, rather than finding different ways for everyone to do the same curriculum, educators need to allow individual…

  18. The Slide-Lecture: An Alternative to Chalkdust?

    ERIC Educational Resources Information Center

    Wilkins, S. A.

    Many instructors teaching large survey courses use the chalkboard to aid their lectures in spite of the waste of class time in writing and erasing, the clutter and confusion that may result, and the messiness of chalkdust. As an alternative, the slide-lecture method has been used for several years at Bossier Community College in teaching…

  19. Moving to the Center: Disorientation and Intention

    ERIC Educational Resources Information Center

    Wilson, Maja; Niemczyk, Michael

    2008-01-01

    Maja Wilson and Michael Niemczyk advocate turning away from mandated writing toward learning environments that honor the messy, inner life of the writer. They explain the importance of disorientation in that it unsettles but nurtures the emerging intention of student writers, and they stress the need to return our attention to the heart and center…

  20. Tourism and wilderness: dancing with the messy monster

    Treesearch

    Ralf Buckley

    2000-01-01

    Currently, tourism offers one of the best prospects for conserving remaining areas of unprotected wilderness in most parts of the world. Tourism produces environmental impacts, and in heavily-visited protected areas these impacts may be a significant threat to conservation values and a major management issue; along with other anthropogenic impacts such as weeds, pests...

  1. A Methodological Self-Study of Quantitizing: Negotiating Meaning and Revealing Multiplicity

    ERIC Educational Resources Information Center

    Seltzer-Kelly, Deborah; Westwood, Sean J.; Pena-Guzman, David M.

    2012-01-01

    This inquiry developed during the process of "quantitizing" qualitative data the authors had gathered for a mixed methods curriculum efficacy study. Rather than providing the intended rigor to their data coding process, their use of an intercoder reliability metric prompted their investigation of the multiplicity and messiness that, as they…

  2. Smooth Transfer: A Once Mundane Administrative Issue Re-Emerges as a Key Tool for Equity

    ERIC Educational Resources Information Center

    Purcell, Francesca B.

    2006-01-01

    Undergraduate transfer is a messy and too-often frustrating part of college for faculty, staff and, above all, the students themselves. Students are discouraged by unclear and complicated curriculum requirements. Faculty are reluctant to accept courses from another institution and question the preparedness of transfer students. Advisors are…

  3. Flexibly Global? Performing Culture and Identity in an Age of Uncertainty

    ERIC Educational Resources Information Center

    Giardina, Michael D.

    2009-01-01

    Presented as a symbolic interactive messy performance text, Michael Giardina sutures himself into and through the landscape of global social relations, including his own interpretive interactions of disconnection and reconnection with place, home, and nation. In so doing, and in these collages of lived textuality, he examines the complex,…

  4. Generative Insights from the Eleanor Chelimsky Forum on Evaluation Theory and Practice

    ERIC Educational Resources Information Center

    Leviton, Laura C.

    2014-01-01

    Both speakers at the Eleanor Chelimsky Forum on Theory and Practice in Evaluation pointed out the complexity and messiness of evaluation practice, and thus potential limits on theory and generalizable knowledge. The concept of reflective practice offers one way forward to build evaluation theory. Building generalizable knowledge about practice…

  5. Good Teaching Is a Conversation

    ERIC Educational Resources Information Center

    Shakespear, Eileen

    2008-01-01

    Teaching is a messy profession; the details of teachers' work are tightly woven into the myriad ways of humans. Even with uncertainty, teachers do improve their practice. In this article, the author describes how she moved her thinking forward over 35 years as an increasingly experienced teacher. Like many CES teachers, the ways that she has…

  6. Let's Get Messy!: Exploring Sensory and Art Activities with Infants and Toddlers

    ERIC Educational Resources Information Center

    Schwarz, Trudi; Luckenbill, Julia

    2012-01-01

    Infant/toddler teachers take a child-centered, emergent approach, meaning that they observe the children at play, ask themselves what the children are interested in learning, and design developmentally appropriate curricula to meet and extend those interests. This curriculum development technique leads to "possibilities for the child to develop deeper…

  7. Messy, Butch, and Queer: LGBTQ Youth and the School-to-Prison Pipeline

    ERIC Educational Resources Information Center

    Snapp, Shannon D.; Hoenig, Jennifer M.; Fields, Amanda; Russell, Stephen T.

    2015-01-01

    Emerging evidence suggests that lesbian, gay, bisexual, transgender, queer, and questioning (LGBTQ) youth experience disparate treatment in schools that may result in criminal sanctions. In an effort to understand the pathways that push youth out of schools, we conducted focus groups with youth (n = 31) from Arizona, California, and Georgia, and…

  8. Learning To Love the Swamp: Reshaping Education for Public Service.

    ERIC Educational Resources Information Center

    Schall, Ellen

    1996-01-01

    The world of public service is compared to a swamp in which important, complex, and messy problems are addressed, and it is argued that graduate and professional education must be reshaped to produce leaders who can make sense of current challenges. Education that is more experiential, behavioral, interactive, and collectively oriented is…

  9. Messy world: managing dynamic landscape.

    Treesearch

    Sally Duncan

    1999-01-01

    What lessons does historical disturbance hold for the management of future landscapes? Fred Swanson, a researcher at the Pacific Northwest Research Station and John Cissel, research liaison for the Willamette NF, are members of a team of scientists and land managers who are examining the way we think about and manage landscapes. The team found that past...

  10. The European Union and the Comprehensive Civil-Military Approach in Euro-Atlantic Security: Matching Reality to Rhetoric

    DTIC Science & Technology

    2010-01-01

    aspects below the political PSC level. This practically guaranteed incoherence and disunity as an institutional inheritance. Second, and equally as...central point of emphasis that “defense and diplomacy are no longer discrete choices . . . but must complement one another throughout the messy process

  11. Emergence of the Politics of Education Field: Making Sense of the Messy Center.

    ERIC Educational Resources Information Center

    Scribner, Jay D.; Maxcy, Brendan; Aleman, Enrique

    2003-01-01

    Places the evolution of the politics of education field in historical context and introduces a framework for understanding how three theoretical streams--micropolitics, political culture, and neoinstitutionalism--emerged as the behavioralist movement receded. Argues that the field has been advancing by means of integrative and aggregative drives that…

  12. Language Planning and Student Experiences: Intention, Rhetoric and Implementation

    ERIC Educational Resources Information Center

    Lo Bianco, Joseph; Aliani, Renata

    2013-01-01

    This book is a timely comparison of the divergent worlds of policy implementation and policy ambition: the messy, often contradictory here-and-now reality of languages in schools, and the sharp-edged, shiny, future-oriented representation of languages in policy. Two deep-rooted tendencies in Australian political and social life, multiculturalism…

  13. Understanding, Embracing and Reflecting upon the Messiness of Doctoral Fieldwork

    ERIC Educational Resources Information Center

    Naveed, Arif; Sakata, Nozomi; Kefallinou, Anthoula; Young, Sara; Anand, Kusha

    2017-01-01

    This Forum issue discusses the centrality of fieldwork in doctoral research. The inevitability of researchers' influence, and of the values that become apparent during and after their fieldwork, calls for a high degree of reflexivity. Since standard methodology textbooks do not provide sufficient guidance on addressing such challenges, doctoral researchers go…

  14. Troubling Whiteness: Music Education and the "Messiness" of Equity Work

    ERIC Educational Resources Information Center

    Hess, Juliet

    2018-01-01

    At the elementary level, White, female music teachers largely populate music education. In the diverse schools of Toronto in Canada, teachers navigate their White subjectivities in a range of ways. My research examines the discourses, philosophies, and practices of four White, female elementary music educators who have striven to challenge…

  15. Considering Materiality in Educational Policy: Messy Objects and Multiple Reals

    ERIC Educational Resources Information Center

    Fenwick, Tara; Edwards, Richard

    2011-01-01

    Educational analysts need new ways to engage with policy processes in a networked world of complex transnational connections. In this discussion, Tara Fenwick and Richard Edwards argue for a greater focus on materiality in educational policy as a way to trace the heterogeneous interactions and precarious linkages that enact policy as complex…

  16. Messy Problems and Lay Audiences: Teaching Critical Thinking within the Finance Curriculum

    ERIC Educational Resources Information Center

    Carrithers, David; Ling, Teresa; Bean, John C.

    2008-01-01

    This article investigates the critical thinking difficulties of finance majors when asked to address ill-structured finance problems. The authors build on previous research in which they asked students to analyze an ill-structured investment problem and recommend a course of action. The results revealed numerous critical thinking weaknesses,…

  17. Necessarily Cumbersome, Messy, and Slow: Community Collaborative Work within Art Institutions

    ERIC Educational Resources Information Center

    Filipovic, Yaël

    2013-01-01

    Building relationships and community collaborations--especially on an institutional level--is a slow and long-term process. These types of innovative, experimental, and long-term collaborations with community organizations and groups often lead art institutions to reflect on the value and place of their institutional structures when engaging in…

  18. Teaching Machines to Think Fuzzy

    ERIC Educational Resources Information Center

    Technology Teacher, 2004

    2004-01-01

    Fuzzy logic programming makes computers behave in more human-like ways: they can reason through messy situations, make smart decisions, and control things much as people do. Fuzzy logic has been used to control subway trains, elevators, washing machines, microwave ovens, and cars. Pretty much all the human has to do is push one…
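
    The control idea sketched in this record is easy to make concrete. Below is a minimal, illustrative fuzzy controller: a washing machine picks a wash time from a 0-100 "dirtiness" input using triangular membership functions and a weighted-average defuzzification. The set boundaries and times are invented for illustration, not taken from the article.

    ```python
    # Minimal fuzzy-logic controller sketch: a washing machine chooses wash
    # time from "dirtiness" via triangular memberships and a centroid-like
    # weighted average. All set names and numbers are illustrative.

    def tri(x, a, b, c):
        """Triangular membership: rises a->b, falls b->c."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def wash_time(dirtiness):
        # Degree of membership in each fuzzy set (dirtiness on a 0-100 scale).
        low = tri(dirtiness, -1, 0, 50)
        medium = tri(dirtiness, 0, 50, 100)
        high = tri(dirtiness, 50, 100, 101)
        # Each rule maps a fuzzy set to a representative wash time (minutes);
        # the crisp output is the membership-weighted average.
        weights = [low, medium, high]
        times = [15.0, 40.0, 75.0]
        return sum(w * t for w, t in zip(weights, times)) / sum(weights)

    print(wash_time(20))  # mildly dirty -> short-ish cycle (25 min here)
    print(wash_time(85))  # very dirty  -> long cycle (64.5 min here)
    ```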

  19. Moral Functioning: Navigating the Messy Landscape of Values in Finnish Preschools

    ERIC Educational Resources Information Center

    Puroila, Anna-Maija; Haho, Annu

    2017-01-01

    This article employs a narrative approach to explore educators' moral functioning in Finnish preschools. Our study is theoretically inspired by notions drawn from feminist and sociocultural studies, according to which education is understood as an entirely moral phenomenon. Within a holistic framework, moral functioning is understood as a concept…

  20. Adapting nurse competence to future patient needs using Checkland's Soft Systems Methodology.

    PubMed

    Železnik, Danica; Kokol, Peter; Blažun Vošner, Helena

    2017-01-01

    New emerging technologies, health globalization, demographic change, new healthcare paradigms, advances in healthcare delivery and social networking will change the needs of patients in the future and consequently will require that new knowledge, competence and skill sets be acquired by nurses. Checkland's Soft Systems Methodology, focusing on the enriched CATWOE and PQR elements of the root definitions, combined with our own developed "Too much - Too little constraint" approach was used to devise impending knowledge, competence and skill sets. The analysis revealed ten needs among patients of the future, 63 constraints and 18 knowledge, competence and skill sets for the future nurse. The completed study showed that SSM is an appropriate tool for high level structuring of a "messy" real-world problem situation to meet prospective nursing challenges. Copyright © 2016 Elsevier Ltd. All rights reserved.
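
    As a rough illustration of the structuring devices this study names, the sketch below holds an SSM root definition as data, with the PQR statement (do P by Q in order to achieve R) and the CATWOE elements as fields. The nursing-specific content is hypothetical, not quoted from the paper.

    ```python
    # Sketch of an SSM root definition as a data structure, following the
    # CATWOE and PQR devices the study uses. Field contents are illustrative.

    from dataclasses import dataclass, field

    @dataclass
    class RootDefinition:
        P: str  # what to do
        Q: str  # how to do it
        R: str  # why: what it achieves
        customers: str
        actors: str
        transformation: str
        worldview: str
        owners: str
        environment: str
        constraints: list[str] = field(default_factory=list)

    rd = RootDefinition(
        P="develop future nurse competences",
        Q="mapping patient needs to knowledge and skill sets",
        R="care that keeps pace with technological and demographic change",
        customers="future patients",
        actors="nurses, educators",
        transformation="today's curricula -> competence profiles for future needs",
        worldview="nursing must anticipate, not follow, changes in care",
        owners="nursing schools and health authorities",
        environment="regulation, budgets, technology",
        constraints=["too much specialisation", "too little informatics training"],
    )
    print(rd.P, "-", rd.R)
    ```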

  1. Counter-intuitive quasi-periodic motion in the autonomous vibration of cracked Timoshenko beams

    NASA Astrophysics Data System (ADS)

    Brandon, J. A.; Abraham, O. N. L.

    1995-08-01

    The time domain behaviour of a cracked Timoshenko beam is constructed by alternation of two linear models corresponding to the open and closed condition of the crack. It might be expected that a response which is composed of the alternation of two systems with different properties would extinguish the periodicities of the constituent sub-models. The numerical studies presented illustrate the perpetuation of these features without showing any evidence for the creation of periodicities based on a common assumption of the mean period of a bilinear model.
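
    A toy, single-degree-of-freedom analogue of the paper's alternation idea can make the mechanism concrete: a "breathing crack" oscillator that switches between full and reduced stiffness with the sign of the displacement. The paper alternates two full Timoshenko beam models; the parameters below are illustrative only.

    ```python
    # Toy analogue of a breathing crack: the system alternates between two
    # linear oscillators (crack open = reduced stiffness, crack closed =
    # full stiffness) depending on the sign of the displacement.

    import math

    m, k_closed, k_open, c = 1.0, 1.0, 0.7, 0.01
    dt, steps = 0.01, 20000
    x, v = 1.0, 0.0
    trace = []
    for _ in range(steps):
        k = k_open if x > 0 else k_closed   # crack opens on positive deflection
        a = (-c * v - k * x) / m
        v += a * dt                          # semi-implicit Euler integration
        x += v * dt
        trace.append(x)

    # The response stays near-periodic with an effective bilinear period,
    # rather than losing the sub-model periodicities (the paper's finding).
    T_eff = math.pi / math.sqrt(k_open / m) + math.pi / math.sqrt(k_closed / m)
    print("effective bilinear period ~", T_eff)
    print("late-time amplitude ~", max(trace[-2000:]))
    ```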

  2. Mechanochemical endovenous ablation versus radiofrequency ablation in the treatment of primary small saphenous vein insufficiency (MESSI trial): study protocol for a randomized controlled trial.

    PubMed

    Boersma, Doeke; van Eekeren, Ramon R J P; Kelder, Hans J C; Werson, Debora A B; Holewijn, Suzanne; Schreve, Michiel A; Reijnen, Michel M P J; de Vries, Jean Paul P M

    2014-10-29

    Minimally invasive endothermal techniques, for example, radiofrequency ablation (RFA), have revolutionized the treatment of insufficient truncal veins and are associated with an excellent outcome. The use of thermal energy requires the instillation of tumescent anesthesia around the vein. Mechanochemical endovenous ablation (MOCA™) combines mechanical endothelial damage, using a rotating wire, with simultaneous infusion of a liquid sclerosant. Tumescent anesthesia is not required as no heat is used. Prospective studies using MOCA™ in both great and small saphenous veins showed good anatomical and clinical results with fast postoperative recovery. The MESSI trial (Mechanochemical Endovenous ablation versus radiofrequency ablation in the treatment of primary Small Saphenous vein Insufficiency) is a multicenter randomized controlled trial in which a total of 160 patients will be randomized (1:1) to MOCA™ or RFA. Consecutive patients with primary small saphenous vein incompetence, who meet the eligibility criteria, will be invited to participate in this trial. The primary endpoint is anatomic success, defined as occlusion of the treated vein as objectified with duplex ultrasonography at 1 year of follow-up. Secondary endpoints are post-procedural pain, initial technical success, clinical success, complications, and the duration of the procedure. Initial technical success is defined as the ability to position the device adequately, treat the vein as planned, and occlude the treated vein directly after the procedure, as confirmed by duplex ultrasonography. Clinical success is defined as an objective improvement of clinical outcome after treatment, measured with the Venous Clinical Severity Score (VCSS). Power analyses were conducted for anatomical success and post-procedural pain. Both groups will be evaluated on an intention-to-treat principle. The hypothesis of the MESSI trial is that the anatomic success rate of MOCA™ is not inferior to that of RFA. The second hypothesis is that post-procedural pain is significantly less after MOCA than after RFA. Trial registration: NTR4613, registered 28 May 2014.

  3. Test of multi-object exoplanet search spectral interferometer

    NASA Astrophysics Data System (ADS)

    Zhang, Kai; Wang, Liang; Jiang, Haijiao; Zhu, Yongtian; Hou, Yonghui; Dai, Songxin; Tang, Jin; Tang, Zhen; Zeng, Yizhong; Chen, Yi; Wang, Lei; Hu, Zhongwen

    2014-07-01

    Exoplanet detection, a highlight of current astronomy, will remain a central piece of the astronomical and astrophysical puzzle, alongside dark energy, dark matter, the early universe, black holes, galactic evolution, and so on. At present, most detected exoplanets are confirmed through the radial velocity and transit methods. The Guo Shoujing Telescope, well known as LAMOST, is an advanced multi-object spectral survey telescope equipped with 4000 fibers and 16 low-resolution fiber spectrographs. To explore its potential in different astronomical activities, a new radial velocity method named Externally Dispersed Interferometry (EDI) is applied to exoplanet detection by combining a fixed-delay interferometer with the existing spectrograph in its medium-resolution mode (R = 5,000-10,000). This technology enhances the radial velocity accuracy of an existing spectrograph by installing a fixed-delay interferometer in front of it: the fixed delay produces an interference spectrum whose phase is highly sensitive to the Doppler effect. The resulting system, the Multi-object Exoplanet Search Spectral Interferometer (MESSI), is composed of a pair of multi-fiber coupling sockets, a remotely controlled iodine subsystem, a multi-object fixed-delay interferometer, and the existing spectrograph. It covers 500 to 550 nm and simultaneously observes up to 21 stars. Although MESSI is at present an experimental instrument, this paper demonstrates how it explores an effective way to build such a system under the existing conditions of LAMOST and to reach the expected performance for multi-object exoplanet detection, with emphasis on instrument stability and its special data reduction. In laboratory tests, the temperature inside the instrument chamber was stable within +/-0.5 degrees Celsius over 12 hours, and the direct instrumental stability, without further observational correction, was equivalent to +/-50 m/s every 20 minutes.
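
    The sensitivity gain that EDI exploits fits in one line: a fixed delay tau converts a Doppler frequency shift dnu into a fringe phase shift dphi = 2*pi*tau*dnu. The sketch below evaluates this with assumed numbers (a ~1 cm delay, 525 nm light), not MESSI design values.

    ```python
    # Back-of-envelope sketch of the EDI principle: a fixed interferometer
    # delay turns a tiny Doppler frequency shift into a measurable fringe
    # phase shift. All numbers are illustrative.

    import math

    C = 2.998e8            # speed of light, m/s
    LAM = 525e-9           # mid-band wavelength within 500-550 nm, m
    NU = C / LAM           # optical frequency, Hz
    TAU = 3.3e-11          # fixed delay (~1 cm of optical path), s

    def fringe_phase_shift(v_radial):
        dnu = NU * v_radial / C          # Doppler shift of the stellar line
        return 2 * math.pi * TAU * dnu   # resulting phase shift, radians

    for v in (1.0, 10.0, 50.0):          # stellar radial velocities, m/s
        print(f"{v:5.1f} m/s -> {fringe_phase_shift(v):.2e} rad")
    ```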

  4. [Estimation model for daily transpiration of greenhouse muskmelon in its vegetative growth period].

    PubMed

    Zhang, Da-Long; Li, Jian-Ming; Wu, Pu-Te; Li, Wei-Li; Zhao, Zhi-Hua; Xu, Fei; Li, Jun

    2013-07-01

    To develop an estimation method for muskmelon transpiration in greenhouses, an estimation model for the daily transpiration of greenhouse muskmelon in its vegetative growth period was established, based on greenhouse environmental parameters, muskmelon growth and development parameters, and soil moisture parameters. To suit the specific greenhouse environment, the aerodynamic term in the Penman-Monteith equation was modified, and a greenhouse environmental sub-model suitable for calculating the reference crop evapotranspiration in a greenhouse was derived. The crop factor sub-model was established with the leaf area index as the independent variable, taking the form of a linear function. The soil moisture sub-model was established with the relative effective soil moisture content as the independent variable, taking the form of a logarithmic function. With interval sowing, the model parameters were estimated and analyzed using measurement data from different sowing dates within a year. The prediction accuracy of the model for sufficient irrigation and water-saving irrigation was verified against measurement data at relative soil moisture contents of 80%, 70%, and 60%; the mean relative errors were 11.5%, 16.2%, and 16.9%, respectively. The model is a useful exploration of applying the Penman-Monteith equation to greenhouse environments and water-saving irrigation, with good application prospects and value for wider adoption.
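
    The abstract fixes the functional forms (a linear crop factor in leaf area index, a logarithmic soil-moisture factor, and a modified Penman-Monteith reference evapotranspiration) but not the fitted coefficients. The sketch below wires those forms together with placeholder constants.

    ```python
    # Three-factor structure of the paper's model: daily transpiration =
    # (greenhouse reference ET) x (linear crop factor in LAI) x (logarithmic
    # soil-moisture factor). Coefficients are placeholders, not fitted values.

    import math

    def crop_factor(lai, a=0.35, b=0.05):
        """Linear sub-model in leaf area index (coefficients assumed)."""
        return a * lai + b

    def soil_factor(rel_moisture, c=0.45, d=1.0):
        """Logarithmic sub-model in relative effective soil moisture (0-1]."""
        return max(0.0, min(1.0, c * math.log(rel_moisture) + d))

    def daily_transpiration(et0_mm, lai, rel_moisture):
        return et0_mm * crop_factor(lai) * soil_factor(rel_moisture)

    # Sufficient vs water-saving irrigation at the same canopy size:
    print(daily_transpiration(et0_mm=3.2, lai=1.8, rel_moisture=0.8))
    print(daily_transpiration(et0_mm=3.2, lai=1.8, rel_moisture=0.6))
    ```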

  5. 3D micro-crack propagation simulation at enamel/adhesive interface using FE submodeling and element death techniques.

    PubMed

    Liu, Heng-Liang; Lin, Chun-Li; Sun, Ming-Tsung; Chang, Yen-Hsiang

    2010-06-01

    This study investigates micro-crack propagation at the enamel/adhesive interface using finite element (FE) submodeling and element death techniques. A three-dimensional (3D) FE macro-model of the enamel/adhesive/ceramic subjected to shear bond testing was generated and analyzed. A 3D micro-model with interfacial bonding structure was constructed at the upper enamel/adhesive interface where the stress concentration was found from the macro-model results. The morphology of this interfacial bonding structure (i.e., resin tag) was assigned based on resin tag geometry and enamel rod arrangement from a scanning electron microscopy micrograph. The boundary conditions for the micro-model were determined from the macro-model results. A custom iterative code combined with the element death technique was used to calculate the micro-crack propagation. Parallel experiments were performed to validate this FE simulation. The stress concentration within the adhesive occurred mainly at the upper corner near the enamel/adhesive interface and the resin tag base. A simulated fracture path was found at the resin tag base along the enamel/adhesive interface. A morphological observation of the fracture patterns obtained from in vitro testing corresponded with the simulation results. This study shows that the FE submodeling and element death techniques could be used to simulate the 3D micro-stress pattern and the crack propagation noted at the enamel/adhesive interface.
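
    The iterative scheme is easy to show in skeleton form: solve, deactivate any interface element whose stress exceeds the bond strength, then re-solve until no further element fails. The stand-in stress "solver" below is random; in the study it is the 3D FE micro-model.

    ```python
    # Schematic of the iterative "element death" loop: solve, remove failed
    # interface elements, repeat. solve_stresses() stands in for the FE
    # solver; this is a structural sketch, not the authors' code.

    import random

    random.seed(1)

    def solve_stresses(active):
        # Placeholder for the FE solve: one stress value per active element.
        return {e: random.uniform(0, 60) for e in active}

    def propagate_crack(elements, strength=50.0, max_iters=100):
        active, dead = set(elements), []
        for _ in range(max_iters):
            stresses = solve_stresses(active)
            failed = {e for e, s in stresses.items() if s > strength}
            if not failed:
                break                      # crack arrested: nothing fails
            active -= failed               # "element death": leave the mesh
            dead.extend(sorted(failed))
        return dead                        # the simulated fracture path

    print(propagate_crack(range(20)))
    ```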

  6. Impact of influent data frequency and model structure on the quality of WWTP model calibration and uncertainty.

    PubMed

    Cierkens, Katrijn; Plano, Salvatore; Benedetti, Lorenzo; Weijers, Stefan; de Jonge, Jarno; Nopens, Ingmar

    2012-01-01

    Application of activated sludge models (ASMs) to full-scale wastewater treatment plants (WWTPs) is still hampered by the problem of model calibration of these over-parameterised models. This either requires expert knowledge or global methods that explore a large parameter space. However, a better balance in structure between the submodels (ASM, hydraulic, aeration, etc.) and improved quality of influent data result in much smaller calibration efforts. In this contribution, a methodology is proposed that links data frequency and model structure to calibration quality and output uncertainty. It is composed of defining the model structure, the input data, an automated calibration, confidence interval computation and uncertainty propagation to the model output. Apart from the last step, the methodology is applied to an existing WWTP using three models differing only in the aeration submodel. A sensitivity analysis was performed on all models, allowing the ranking of the most important parameters to select in the subsequent calibration step. The aeration submodel proved very important to get good NH4 predictions. Finally, the impact of data frequency was explored. Lowering the frequency resulted in larger deviations of parameter estimates from their default values and larger confidence intervals. Autocorrelation due to high frequency calibration data has an opposite effect on the confidence intervals. The proposed methodology opens doors to facilitate and improve calibration efforts and to design measurement campaigns.
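
    The automated calibration step reduces to fitting the few top-ranked parameters to measured effluent series. A minimal sketch, with a stand-in exponential "model" in place of the ASM/hydraulic/aeration models:

    ```python
    # Sketch of the calibration step: fit only the sensitivity-ranked
    # parameters to measured data by least squares. run_model() is a toy
    # stand-in for the plant model.

    import numpy as np
    from scipy.optimize import least_squares

    np.random.seed(0)
    t = np.linspace(0, 10, 50)                 # days
    measured_nh4 = 5.0 * np.exp(-0.4 * t) + np.random.normal(0, 0.1, t.size)

    def run_model(params, t):
        n0, k = params                         # e.g. initial NH4 and a rate
        return n0 * np.exp(-k * t)

    def residuals(params):
        return run_model(params, t) - measured_nh4

    fit = least_squares(residuals, x0=[4.0, 0.2])
    print("calibrated parameters:", fit.x)
    # Confidence intervals can then be approximated from the Jacobian
    # (fit.jac), which is where data frequency and autocorrelation enter.
    ```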

  7. Effects of flood control alternatives on fish and wildlife resources of the Malheur-Harney lakes basin

    USGS Publications Warehouse

    Hamilton, David B.; Auble, Gregor T.; Ellison, Richard A.; Roelle, James E.

    1985-01-01

    Malheur Lake is the largest freshwater marsh in the western contiguous United States and is one of the main management units of the Malheur National Wildlife Refuge in southeastern Oregon. The marsh provides excellent waterfowl production habitat as well as vital migration habitats for birds in the Pacific flyway. Water shortages have typically been a problem in this semiarid area; however, record snowfalls and cool summers have recently caused Malheur Lake to rise to its highest level in recorded history. This has resulted in the loss of approximately 57,000 acres of important wildlife habitat as well as extensive flooding of local ranches, roads, and railroad lines. Because of the importance of the Refuge, any water management plan for the Malheur-Harney Lakes Basin needs to consider the impact of management alternatives on the hydrology of Malheur Lake. The facilitated modeling workshop described in this report was conducted January 14-18, 1985, under the joint sponsorship of the Portland Ecological Services Field Office and the Malheur National Wildlife Refuge, Region 1, U.S. Fish and Wildlife Service (FWS). The Portland Field Office is responsible for FWS reporting requirements on Federal water resource projects while the Refuge staff has management responsibility for much of the land affected by high water levels in the Malheur-Harney Lakes Basin. The primary objective of the workshop was to begin gathering and analyzing information concerning potential fish and wildlife impacts, needs, and opportunities associated with proposed U.S. Army Corps of Engineers (COE) flood control alternatives for Malheur Lake. The workshop was structured around the formulation of a computer model that would simulate the hydrologic effects of the various alternatives and any concomitant changes in vegetation communities and wildlife use patterns. The simulation model is composed of three connected submodels. The Hydrology submodel calculates changes in lake volume, elevation, and surface area, as well as changes in water quality, that result from the proposed water management projects (upstream storage, upstream diversions, drainage canals) and the no action alternative. The Vegetation submodel determines associated changes in the areal extent of wetland and upland vegetation communities. Finally, the Wildlife submodel calculates indices of abundance or habitat suitability for colonial nesting birds (great egret, double-crested cormorant, white-faced ibis), greater sandhill crane, diving ducks, tundra swan, dabbling ducks, and Canada goose based on hydrologic and vegetation conditions. The model represents the Malheur-Harney Lakes Basin, but provides water quantity and quality indicators associated with additional flows that might occur in the Malheur River Basin. Several management scenarios, representing various flood control alternatives and assumptions concerning future runoff, were run to analyze model behavior. Scenario results are not intended as an analysis of all potential management actions or assumptions concerning future runoff. Rather, they demonstrate the type of analysis that could be conducted if the model was sufficiently refined and tested. Early in a model development project, the process of building the model is usually of greater benefit than the model itself. The model building process stimulates interaction among agencies, assists in integrating existing information, and helps identify research needs.
These benefits usually accrue even in the absence of real predictive power in the resulting model. This workshop initiated interaction among the primary State and Federal resource and development agencies in a nonadversarial forum. The exchange of information and expertise among agencies provided the FWS with the best information currently available for use in the Planning Aid Letter it will develop at the Reconnaissance stage of the COE study. If the COE subsequently initiates a Feasibility Study, this information will be refined further and will aid the FWS in preparing its Coordination Act Report on any flood control alternative proposed by the COE. The model building and testing process also helped identify model limitations and more general information needs that should be evaluated for further study prior to preparation of an FWS Coordination Act Report. Major needs associated with the Hydrology submodel include a more detailed representation of hydrologic units (separately considering Harney Lake, Mud Lake, and Malheur Lake, or the three hydrological units within Malheur Lake, rather than a combined lake system) and explicit representation of groundwater storage and discharge in water budget calculations. A better representation of the hydrological units will require more detailed topographic data for the basin, capacity-elevation and elevation-surface area curves for each unit, and better water flow data between the units. Additional water quality parameters and constraints on proposed canal operation due to conditions in the Malheur River might also be added. Key Vegetation submodel needs include fine-tuning existing vegetation relationships in the model and adding relationships to address the influence of historical conditions on vegetation development, effects of very rapid changes in lake level, effects of wildlife populations (e.g., carp, muskrat), responses of vegetation to habitat management actions (e.g., haying, grazing, burning), and better representation of sago pondweed dynamics. A complementary geographic information system might also be developed for spatial analyses. Major needs that should be evaluated for the Wildlife submodel include the addition of other wildlife species that have important effects on habitat on the Refuge (e.g., carp, muskrat) and consideration of additional life-cycle requisites and controlling variables for species presently in the model. Some of these limitations could perhaps be overcome if historical data on habitat conditions were developed to use with historical data on wildlife populations.
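
    The three-submodel structure described above can be summarized as a yearly loop passing lake state from Hydrology to Vegetation to Wildlife. All functional forms and numbers below are placeholders, not the workshop's relations:

    ```python
    # Structural sketch of the workshop model: Hydrology -> Vegetation ->
    # Wildlife, chained once per simulated year. Everything is illustrative.

    def hydrology(volume, inflow, evap_rate=0.15, diversion=0.0):
        volume = max(0.0, volume + inflow - diversion - evap_rate * volume)
        elevation = 4090 + 4.0e-6 * volume      # crude stage-volume stand-in
        return volume, elevation

    def vegetation(elevation, flooded_threshold=4095.0):
        # Areal extent of emergent marsh shrinks as rising water floods it.
        return max(0.0, 1.0 - max(0.0, elevation - flooded_threshold) / 5.0)

    def wildlife(marsh_fraction):
        return {"nesting_birds": marsh_fraction ** 2,  # suitability, 0-1
                "dabbling_ducks": marsh_fraction}

    volume = 1.0e6
    for year, inflow in enumerate([3e5, 6e5, 9e5, 2e5], start=1):
        volume, elev = hydrology(volume, inflow)
        marsh = vegetation(elev)
        print(year, round(elev, 2), round(marsh, 2), wildlife(marsh))
    ```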

  8. Examining Text Complexity in the Early Grades

    ERIC Educational Resources Information Center

    Fitzgerald, Jill; Elmore, Jeff; Hiebert, Elfrieda H.; Koons, Heather H.; Bowen, Kimberly; Sanford-Moore, Eleanor E.; Stenner, A. Jackson

    2016-01-01

    The Common Core raises the stature of texts to new heights, creating a hubbub. The fuss is especially messy at the early grades, where children are expected to read more complex texts than in the past. But early-grades teachers have been given little actionable guidance about text complexity. The authors recently examined early-grades texts to…

  9. Evolution of Humans: Understanding the Nature and Methods of Science through Cooperative Learning

    ERIC Educational Resources Information Center

    Lee, Yeung Chung

    2011-01-01

    This article describes the use of an enquiry-based approach to the study of human evolution in a practical context, integrating role-playing, jigsaw cooperative learning and scientific argumentation. The activity seeks to unravel the evolutionary relationships of five hominids and one ape from rather "messy" evidence. This approach enhanced the…

  10. Methods and Strategies: It's Child's Play

    ERIC Educational Resources Information Center

    Leach, Jenay Sharp

    2012-01-01

    After a few years of teaching high school physics to juniors and seniors, the author decided it was time for a new challenge. That was when she serendipitously saw the opening for an elementary "science resource teacher." Teaching elementary school was going to be messy, in every possible sense. The author realized that what worked for the big…

  11. From Theory to Practice: A Path through Imperfection--Lessons from Evaluation of Policy Oriented Environmental Health Research in Belgium

    ERIC Educational Resources Information Center

    Keune, Hans; Morrens, Bert; Loots, Ilse; Springael, Johan

    2011-01-01

    Background: Dealing with complex issues by definition bears the burden of imperfection. Whatever comforting theoretical concepts may promise, real-life complexity will take its messy toll once one travels from conceptual ambition to real-life practice. We specifically reflect on the social scientific contribution to these inter- and…

  12. The Edge of Messy: Interplays of Daily Storytelling and Grand Narratives in Teacher Learning

    ERIC Educational Resources Information Center

    Selland, Makenzie K.

    2017-01-01

    This paper examines the interplay of daily storytelling and societal narratives of teaching in one student teacher's experience. Drawing on narrative and post-structural theories, I conducted a case study using narrative inquiry and ethnographic methods to examine the moment-to-moment storytelling of one student teacher across a range of teaching…

  13. Loving and Hating Mathematics: Challenging the Myths of Mathematical Life

    ERIC Educational Resources Information Center

    Hersh, Reuben; John-Steiner, Vera

    2010-01-01

    Mathematics is often thought of as the coldest expression of pure reason. But few subjects provoke hotter emotions--and inspire more love and hatred--than mathematics. And although math is frequently idealized as floating above the messiness of human life, its story is nothing if not human; often, it is all too human. "Loving and Hating…

  14. Problem-Based Learning in the Life Science Classroom, K-12

    ERIC Educational Resources Information Center

    McConnell, Tom; Parker, Joyce; Eberhardt, Janet

    2016-01-01

    "Problem-Based Learning in the Life Science Classroom, K-12" offers a great new way to ignite your creativity. Authors Tom McConnell, Joyce Parker, and Janet Eberhardt show you how to engage students with scenarios that represent real-world science in all its messy, thought-provoking glory. The scenarios prompt K-12 learners to immerse…

  15. Learning, Action and Solutions in Action Learning: Investigation of Facilitation Practice Using the Concept of Living Theories

    ERIC Educational Resources Information Center

    Sanyal, Chandana

    2018-01-01

    This paper explores the practice of action learning (AL) facilitation in supporting AL set members to address their 'messy' problems through a self-reflexive approach using the concept of 'living theory' [Whitehead, J., and J. McNiff. 2006. "Action Research Living Theory." London: Sage]. The facilitation practice is investigated through…

  16. Missing Stories: The Messy Processes, Multifaceted Risks, & Multiple Roles of Critical Ethnographers

    ERIC Educational Resources Information Center

    Howard, Joy; Thompson, Candace; Nash, Kindel; Rodriguez, Sophia

    2016-01-01

    In this article, four critical ethnographers reflect on dilemmas that arose during individual research projects. We grappled with the question: What does critical ethnography require from us as we work to represent stories that emerge in contexts where students and/or teachers have been marginalized? After engaging in a three-year process of…

  17. Why Does Well

    ERIC Educational Resources Information Center

    Sartorius, Tara Cady

    2010-01-01

    There is something disappointing about life. It is messy and out of control. It seems the more one tries to put life in order, the more ordering there is to do. The more one seeks explanations, the more confusing things become. Life's an impossible task. Maybe one should just give up. Or, then again, one might as well keep trying. It's this…

  18. "I Could Never Have Learned This in a Lecture": Transformative Learning in Rural Health Education

    ERIC Educational Resources Information Center

    Prout, Sarah; Lin, Ivan; Nattabi, Barbara; Green, Charmaine

    2014-01-01

    Health indicators for rural populations in Australia continue to lag behind those of urban populations and particularly for Indigenous populations who make up a large proportion of people living in rural and remote Australia. Preparation of health practitioners who are adequately prepared to face the "messy swamps" of rural health…

  19. A History of Black and Brown: Chicana/o-African American Cultural and Political Relations

    ERIC Educational Resources Information Center

    Alvarez, Luis; Widener, Daniel

    2008-01-01

    Rather than assume that ethnicity or race necessarily marks the edges of one's culture or politics, the contributors to this dossier highlight the messy, blurry, and often contradictory relationships that arise when Chicana/os and African Americans engage one another. The essays explore the complicated mix of cooperation and conflict that…

  20. Solutions of the chemical kinetic equations for initially inhomogeneous mixtures.

    NASA Technical Reports Server (NTRS)

    Hilst, G. R.

    1973-01-01

    Following the recent discussions by O'Brien (1971) and Donaldson and Hilst (1972) of the effects of inhomogeneous mixing and turbulent diffusion on simple chemical reaction rates, the present report provides a more extensive analysis of when inhomogeneous mixing has a significant effect on chemical reaction rates. The analysis is then extended to the development of an approximate chemical sub-model which provides much improved predictions of chemical reaction rates over a wide range of inhomogeneities and pathological distributions of the concentrations of the reacting chemical species. In particular, the development of an approximate representation of the third-order correlations of the joint concentration fluctuations permits closure of the chemical sub-model at the level of the second-order moments of these fluctuations and the mean concentrations.
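
    The core of the closure problem described here can be stated compactly. For a second-order reaction A + B -> products, with generic notation (not copied from the report):

    ```latex
    % Mean reaction rate in an inhomogeneously mixed fluid.
    \begin{align}
      \overline{\dot{c}_A} &= -k\,\overline{c_A c_B}
        = -k\left(\overline{c_A}\,\overline{c_B}
          + \overline{c_A' c_B'}\right),
    \intertext{so the mean rate differs from the rate of the means whenever
    the covariance $\overline{c_A' c_B'}$ is nonzero. Closing the model at
    second order requires approximating the third-order correlations that
    appear in the covariance equation, schematically}
      \overline{c_A' c_A' c_B'} &\approx
        f\!\left(\overline{c_A' c_B'},\, \overline{c_A'^2},\,
        \overline{c_B'^2},\, \overline{c_A},\, \overline{c_B}\right).
    \end{align}
    ```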

  1. An acoustic-convective splitting-based approach for the Kapila two-phase flow model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eikelder, M.F.P. ten, E-mail: m.f.p.teneikelder@tudelft.nl; Eindhoven University of Technology, Department of Mathematics and Computer Science, P.O. Box 513, 5600 MB Eindhoven; Daude, F.

    In this paper we propose a new acoustic-convective splitting-based numerical scheme for the Kapila five-equation two-phase flow model. The splitting operator decouples the acoustic waves and convective waves. The resulting two submodels are alternately numerically solved to approximate the solution of the entire model. The Lagrangian form of the acoustic submodel is numerically solved using an HLLC-type Riemann solver whereas the convective part is approximated with an upwind scheme. The result is a simple method which allows for a general equation of state. Numerical computations are performed for standard two-phase shock tube problems. A comparison is made with a non-splitting approach. The results are in good agreement with reference results and exact solutions.
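
    The alternating structure of such a splitting scheme is simple to show on a toy advection-relaxation equation: one sub-step applies an upwind convective update, the next integrates the remaining term exactly. Only the splitting skeleton below mirrors the paper; the Kapila model and the HLLC solver are not reproduced.

    ```python
    # Minimal Lie (alternating) operator-splitting sketch for
    # du/dt = -a du/dx - r u: an upwind "convective" sub-step followed by
    # an exact "relaxation" sub-step, alternated each time step.

    import numpy as np

    nx, a, r = 100, 1.0, 0.5
    dx = 1.0 / nx
    dt = 0.5 * dx / a                      # CFL-limited step
    u = np.exp(-200 * (np.linspace(0, 1, nx) - 0.3) ** 2)

    def convective_step(u):
        # First-order upwind scheme for a > 0 (periodic domain).
        return u - a * dt / dx * (u - np.roll(u, 1))

    def relaxation_step(u):
        # The second sub-model, integrated exactly over dt.
        return u * np.exp(-r * dt)

    for _ in range(100):                   # alternate the two sub-models
        u = relaxation_step(convective_step(u))
    print("peak after splitting:", u.max())
    ```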

  2. Study on a multi-delay spectral interferometry for stellar radial velocity measurement

    NASA Astrophysics Data System (ADS)

    Zhang, Kai; Jiang, Haijiao; Tang, Jin; Ji, Hangxin; Zhu, Yongtian; Wang, Liang

    2014-08-01

    High-accuracy radial velocity measurement is not only one of the most important methods for detecting Earth-like exoplanets, but also one of the main developing fields of astronomical observation technology. Externally dispersed interferometry (EDI) generates a particular kind of interference spectrum by combining a fixed-delay interferometer with a medium-resolution spectrograph, effectively enhancing radial velocity accuracy several-fold. After observational success with a single fixed delay, a further study on multi-delay interferometry was gradually developed, and the associated instrumentation performs even more impressively in the near-infrared band. Multiple delays give wider coverage from low to high frequencies in the Fourier domain, and therefore higher accuracy in radial velocity measurement. To study this new technology and verify its feasibility at the Guo Shoujing Telescope (LAMOST), an experimental instrument with a single fixed delay, named MESSI, has been built and tested at our laboratory, and the experimental study on multi-delay spectral interferometry reported here is under way as well. This multi-delay experimental system is designed following the similar TEDI instrument at Palomar Observatory and the preliminary test results of MESSI. Because a LAMOST spectrograph already exists at the laboratory, the design of the multi-delay interferometer dominates our work. It is generally composed of three parts: science optics, phase-stabilizing optics, and delay-calibrating optics. To switch between fixed delays smoothly during observation, the delay-calibrating optics uses polychromatic interferometry to achieve high repeatability of the switching motion. Although this metrology is based in theory on white-light interferometry, it differs in that it integrates interference signals obtained independently at different monochromatic wavelengths, in order to avoid the dispersion error caused by a broad band at large optical path difference (OPD).

  3. BBIS: Beacon Bus Information System

    NASA Astrophysics Data System (ADS)

    Kasim, Shahreen; Hafit, Hanayanti; Pei Juin, Kong; Afizah Afif, Zehan; Hashim, Rathiah; Ruslai, Husni; Jahidin, Kamaruzzaman; Syafwan Arshad, Mohammad

    2016-11-01

    A lack of bus information (for example, bus timetables, bus status, and messy advertisements on the bulletin board at the bus stop) makes a negative impression on tourists. A real-time bus information bulletin board therefore provides all the information needed, so that passengers can save time searching for it. Supported on Android and iOS, the Beacon Bus Information System (BBIS) provides bus information for the Batu Pahat and Kluang area. BBIS is a system that implements physical web technology and interaction on demand. It is built on a Backend-as-a-Service cloud solution, with the Firebase non-relational database as the data-persistence backend, syncing with user clients in real time. People walk through the bus stop with a smart device and do not require any application. Bluetooth beacons are used to achieve the smart device's best data-sharing performance. IntelliJ IDEA 15 was one of the tools used to develop the BBIS system; its integrated development environment (IDE) support for multiple languages across the front end and backend helped speed up the integration process.
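
    The real-time record such a bulletin board syncs can be pictured with a small sketch; the field names and the in-memory store below are assumptions, not the project's actual Firebase schema or SDK calls.

    ```python
    # Illustrative shape of a real-time bus record as BBIS might sync it
    # through a cloud backend. The in-memory "database" stands in for the
    # cloud store; the schema is hypothetical.

    import json
    import time

    database = {}

    def publish(route_id, status, eta_min):
        database[f"buses/{route_id}"] = {
            "route": route_id,
            "status": status,                  # e.g. "on-time", "delayed"
            "eta_minutes": eta_min,
            "updated": time.strftime("%H:%M:%S"),
        }

    def bulletin(stop_routes):
        # What a beacon-triggered page at the bus stop would render.
        return json.dumps([database[f"buses/{r}"] for r in stop_routes
                           if f"buses/{r}" in database], indent=2)

    publish("BatuPahat-Kluang-1", "on-time", 7)
    publish("BatuPahat-Kluang-2", "delayed", 18)
    print(bulletin(["BatuPahat-Kluang-1", "BatuPahat-Kluang-2"]))
    ```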

  4. Modeling the Impact of Energy and Water Prices on Reservoir and Aquifer Management

    NASA Astrophysics Data System (ADS)

    Dale, L. L.; Vicuna, S.; Faybishenko, B.

    2008-12-01

    Climate change and policies to limit carbon emissions are likely to increase energy and water scarcity and raise prices. These price impacts affect the way that reservoirs and aquifers should be managed to maximize the value of water and energy outputs. In this paper, we use a model of storage in a specific region to illustrate how energy and water prices affect optimal reservoir and aquifer management. We evaluate reservoir-aquifer water management in the Merced water basin in California, applying an optimization model of storage benefits associated with different management options and input prices. The model includes two submodels: (a) a monthly nonlinear submodel for optimization of conjunctive energy/water use and (b) an inter-annual stochastic dynamic programming submodel used for determining an operating rule matrix that maximizes system benefits for given economic and hydrologic conditions. The model input parameters include annual inflows, initial storage, crop water demands, crop prices, and electricity prices. The model is used to determine changes in net energy generation and water delivery, and associated changes in water storage levels, caused by changes in water and energy output prices. For the scenario of water/energy tradeoffs for a pure reservoir (with no groundwater use), we illustrate the tradeoff between agricultural water use and hydropower generation (MWh) for different energy/agriculture price ratios. The analysis is divided into four steps. The first and second steps describe these price impacts on reservoirs and aquifers, respectively. The third step covers price impacts on conjunctive reservoir and aquifer management. The fourth step describes price impacts on reservoir and aquifer storage in the more common historical situation, when these facilities are managed separately. The study indicates that optimal reservoir and aquifer storage levels are a positive function of the energy-to-water price ratio. The study also concludes that conjunctive use of a reservoir and an aquifer tends to force convergence, in the long-term multiyear average, of groundwater and reservoir storage heads. The results of this study can be used for developing an efficient strategy for managing energy and water resources in different regions across a broad range of climatic, agricultural, and economic scenarios.
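
    The inter-annual sub-model's logic, stripped to a toy, is value iteration over a storage grid where release revenue is split between irrigation and head-dependent hydropower. Deterministic inflow and all numbers below are illustrative, not Merced basin values.

    ```python
    # Toy analogue of the inter-annual dynamic programming sub-model:
    # higher energy prices reward holding water for head, which the value
    # iteration trades off against immediate release revenue.

    import numpy as np

    STORAGE = np.arange(0, 11)      # storage states, arbitrary units
    INFLOW = 3
    P_WATER = 1.0
    GAMMA = 0.9                     # annual discount factor

    def reward(release, s, p_energy):
        # Hydropower value grows with head, proxied here by storage s.
        return P_WATER * release + p_energy * release * s / 10.0

    def optimal_release(p_energy, sweeps=200):
        V = np.zeros(STORAGE.size)
        for _ in range(sweeps):                       # value iteration
            V = np.array([max(reward(rel, s, p_energy)
                              + GAMMA * V[min(s + INFLOW - rel, 10)]
                              for rel in range(s + INFLOW + 1))
                          for s in STORAGE])
        s = 5                                         # query a mid state
        return max(range(s + INFLOW + 1),
                   key=lambda rel: reward(rel, s, p_energy)
                   + GAMMA * V[min(s + INFLOW - rel, 10)])

    for pe in (0.5, 2.0, 5.0):
        print("energy/water price ratio", pe,
              "-> optimal release", optimal_release(pe))
    ```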

  5. A Model for Dissolution of Lime in Steelmaking Slags

    NASA Astrophysics Data System (ADS)

    Sarkar, Rahul; Roy, Ushasi; Ghosh, Dinabandhu

    2016-08-01

    In a previous study by Sarkar et al. (Metall. Mater. Trans. B 46B:961 2015), a dynamic model of LD steelmaking was developed. The prediction of the previous model (Sarkar et al. in Metall. Mater. Trans. B 46B:961 2015) for the bath (metal) composition matched well with the plant data (Cicutti et al. in Proceedings of 6th International Conference on Molten Slags, Fluxes and Salts, Stockholm City, 2000). However, with respect to the slag composition, the prediction was not satisfactory. The current study aims to improve upon the previous model (Sarkar et al. in Metall. Mater. Trans. B 46B:961 2015) by incorporating a lime dissolution submodel into the earlier one. From the industrial point of view, an understanding of lime dissolution kinetics is important to meet the ever-increasing demand for producing low-P steel at a low basicity. In the current study, three-step kinetics for lime dissolution is hypothesized on the assumption that a solid layer of 2CaO·SiO2 should form around the unreacted core of the lime. From the available experimental data, it seems improbable that the observed kinetics should be controlled singly by any one kinetic step. Accordingly, a general, mixed control model has been proposed to calculate the dissolution rate of the lime under varying slag compositions and temperatures. First, the rate equation for each of the three rate-controlling steps has been derived, for three different lime geometries. Next, the rate equation for the mixed control kinetics has been derived and solved to find the dissolution rate. The model predictions have been validated by means of the experimental data available in the literature. In addition, the effects of the process conditions on the dissolution rate have been studied, and compared with the experimental results wherever possible. Incorporation of this submodel into the earlier global model (Sarkar et al. in Metall. Mater. Trans. B 46B:961 2015) enables the prediction of the lime dissolution rate in the dynamic system of LD steelmaking. In addition, with the inclusion of this submodel, significant improvement in the prediction of the slag composition during the main blow period has been observed.
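
    The mixed-control idea translates into resistances in series around a shrinking lime core: liquid boundary layer, 2CaO·SiO2 product layer, and interfacial reaction. The sketch below uses assumed parameter values, not the paper's fitted constants, to show how the controlling step shifts as the core shrinks.

    ```python
    # Shrinking-core sketch with three series resistances (spherical
    # geometry). Parameters are illustrative orders of magnitude only.

    import numpy as np

    def flux(r_core, r0, dC, k_m=1e-4, D_p=1e-10, k_s=1e-5):
        R_film = 1.0 / (k_m * r0**2)                    # boundary layer
        R_layer = (r0 - r_core) / (D_p * r0 * r_core)   # product layer
        R_rxn = 1.0 / (k_s * r_core**2)                 # interface reaction
        J = 4 * np.pi * dC / (R_film + R_layer + R_rxn)
        return J, (R_film, R_layer, R_rxn)

    r0 = 5e-3                                           # 5 mm lime particle
    for frac in (1.0, 0.8, 0.5, 0.2):
        J, R = flux(frac * r0, r0, dC=20.0)
        step = ("film", "product layer", "reaction")[int(np.argmax(R))]
        print(f"core at {frac:.0%}: flux {J:.2e} mol/s, {step}-controlled")
    ```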

  6. Modeling homeorhetic trajectories of milk component yields, body composition and dry-matter intake in dairy cows: Influence of parity, milk production potential and breed.

    PubMed

    Daniel, J B; Friggens, N C; van Laar, H; Ingvartsen, K L; Sauvant, D

    2018-06-01

    The control of nutrient partitioning is complex and affected by many factors, among them physiological state and production potential. Therefore, the current model aims to provide for dairy cows a dynamic framework to predict a consistent set of reference performance patterns (milk component yields, body composition change, dry-matter intake) sensitive to physiological status across a range of milk production potentials (within and between breeds). Flows and partition of net energy toward maintenance, growth, gestation, body reserves and milk components are described in the model. The structure of the model is characterized by two sub-models, a regulating sub-model of homeorhetic control which sets dynamic partitioning rules along the lactation, and an operating sub-model that translates this into animal performance. The regulating sub-model describes lactation as the result of three driving forces: (1) use of previously acquired resources through mobilization, (2) acquisition of new resources with a priority of partition towards milk and (3) subsequent use of resources towards body reserves gain. The dynamics of these three driving forces were adjusted separately for fat (milk and body), protein (milk and body) and lactose (milk). Milk yield is predicted from lactose and protein yields with an empirical equation developed from literature data. The model predicts desired dry-matter intake as an outcome of net energy requirements for a given dietary net energy content. The parameters controlling milk component yields and body composition changes were calibrated using two data sets in which the diet was the same for all animals. Weekly data from Holstein dairy cows was used to calibrate the model within-breed across milk production potentials. A second data set was used to evaluate the model and to calibrate it for breed differences (Holstein, Danish Red and Jersey) on the mobilization/reconstitution of body composition and on the yield of individual milk components. These calibrations showed that the model framework was able to adequately simulate milk yield, milk component yields, body composition changes and dry-matter intake throughout lactation for primiparous and multiparous cows differing in their production level.
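
    A shape-only sketch of the regulating sub-model's three driving forces, using assumed exponential forms and constants purely for illustration:

    ```python
    # Three driving forces over a lactation: declining mobilization of
    # reserves, rising-then-easing priority to milk, and late-lactation
    # reconstitution of reserves. All forms and constants are assumptions.

    import math

    def mobilization(week):           # previously acquired resources -> milk
        return math.exp(-week / 6.0)

    def acquisition(week):            # new intake partitioned toward milk
        return (1 - math.exp(-week / 4.0)) * math.exp(-week / 40.0)

    def reconstitution(week):         # resources redirected to body gain
        return 1 - math.exp(-max(0, week - 15) / 20.0)

    for wk in (1, 5, 10, 20, 40):
        print(wk, round(mobilization(wk), 2), round(acquisition(wk), 2),
              round(reconstitution(wk), 2))
    ```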

  7. Modelling and predicting the spatial distribution of tree root density in heterogeneous forest ecosystems

    PubMed Central

    Mao, Zhun; Saint-André, Laurent; Bourrier, Franck; Stokes, Alexia; Cordonnier, Thomas

    2015-01-01

    Background and Aims In mountain ecosystems, predicting root density in three dimensions (3-D) is highly challenging due to the spatial heterogeneity of forest communities. This study presents a simple and semi-mechanistic model, named ChaMRoots, that predicts root interception density (RID, number of roots per square metre). ChaMRoots hypothesizes that RID at a given point is affected by the presence of roots from surrounding trees forming a polygon shape. Methods The model comprises three sub-models for predicting: (1) the spatial heterogeneity – RID of the finest roots in the top soil layer as a function of tree basal area at breast height, and the distance between the tree and a given point; (2) the diameter spectrum – the distribution of RID as a function of root diameter up to 50 mm; and (3) the vertical profile – the distribution of RID as a function of soil depth. The RID data used for fitting the model were measured in two uneven-aged mountain forest ecosystems in the French Alps. These sites differ in tree density and species composition. Key Results In general, the validation of each sub-model indicated that all sub-models of ChaMRoots achieved good fits. The model achieved a highly satisfactory compromise between the number of aerial input parameters and the fit to the observed data. Conclusions The semi-mechanistic ChaMRoots model focuses on the spatial distribution of root density at the tree cluster scale, in contrast to the majority of published root models, which function at the level of the individual. Based on easy-to-measure characteristics, simple forest inventory protocols and three sub-models, it achieves a good compromise between the complexity of the case study area and that of the global model structure. ChaMRoots can be easily coupled with spatially explicit individual-based forest dynamics models and thus provides a highly transferable approach for modelling 3-D root spatial distribution in complex forest ecosystems. PMID:26173892
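
    Sub-model (1) can be caricatured as a sum of tree contributions that grow with basal area and decay with distance; the power-law kernel and coefficients below are assumptions, since the paper fits its own form to the Alpine data.

    ```python
    # Sketch of sub-model (1): root interception density at a point as a
    # sum of contributions from surrounding trees. Kernel and coefficients
    # are illustrative.

    import math

    def rid(point, trees, alpha=120.0, beta=1.5):
        x, y = point
        total = 0.0
        for tx, ty, basal_area in trees:              # basal area in m^2
            d = max(0.3, math.hypot(x - tx, y - ty))  # clamp near the trunk
            total += alpha * basal_area / d**beta
        return total                                  # roots per m^2

    trees = [(0.0, 0.0, 0.12), (4.0, 1.0, 0.30), (9.0, 5.0, 0.05)]
    for p in [(1.0, 0.0), (3.0, 1.0), (7.0, 4.0)]:
        print(p, round(rid(p, trees), 1))
    ```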

  8. Critical diversity: Divided or united states of social coordination

    PubMed Central

    Kelso, J. A. Scott; Tognoli, Emmanuelle

    2018-01-01

    Much of our knowledge of coordination comes from studies of simple, dyadic systems or systems containing large numbers of components. The huge gap ‘in between’ is seldom addressed, empirically or theoretically. We introduce a new paradigm to study the coordination dynamics of such intermediate-sized ensembles with the goal of identifying key mechanisms of interaction. Rhythmic coordination was studied in ensembles of eight people, with differences in movement frequency (‘diversity’) manipulated within the ensemble. Quantitative change in diversity led to qualitative changes in coordination, a critical value separating régimes of integration and segregation between groups. Metastable and multifrequency coordination between participants enabled communication across segregated groups within the ensemble, without destroying overall order. These novel findings reveal key factors underlying coordination in ensemble sizes previously considered too complicated or 'messy' for systematic study and supply future theoretical/computational models with new empirical checkpoints. PMID:29617371

  9. Soft Systems Methodology

    NASA Astrophysics Data System (ADS)

    Checkland, Peter; Poulter, John

    Soft systems methodology (SSM) is an approach for tackling problematical, messy situations of all kinds. It is an action-oriented process of inquiry into problematic situations in which users learn their way from finding out about the situation, to taking action to improve it. The learning emerges via an organised process in which the situation is explored using a set of models of purposeful action (each built to encapsulate a single worldview) as intellectual devices, or tools, to inform and structure discussion about a situation and how it might be improved. This paper, written by the original developer Peter Checkland and practitioner John Poulter, gives a clear and concise account of the approach that covers SSM's specific techniques, the learning cycle process of the methodology and the craft skills which practitioners develop. This concise but theoretically robust account nevertheless includes the fundamental concepts, techniques, core tenets described through a wide range of settings.

  10. Globally-Applicable Predictive Wildfire Model: A Temporal-Spatial GIS-Based Risk Analysis Using Data-Driven Fuzzy Logic Functions

    NASA Astrophysics Data System (ADS)

    van den Dool, G.

    2017-11-01

    This study (van den Dool, 2017) is a proof of concept for a global predictive wildfire model, in which the temporal-spatial characteristics of wildfires are placed in a Geographical Information System (GIS) and the risk analysis is based on data-driven fuzzy logic functions. The data sources used in this model are available as global datasets, but are subdivided into three pilot areas: North America (California/Nevada), Europe (Spain), and Asia (Mongolia), and are downscaled to the highest resolution (3 arc-seconds). The GIS is constructed around three themes: topography, fuel availability, and climate. From the topographical data, six derived sub-themes are created and converted to a fuzzy membership based on catchment area statistics. The fuel availability score is a composite of four data layers: land cover, wood loads, biomass, and biovolumes. As input for the climatological sub-model, reanalysed daily averaged weather-related data are used, accumulated to a global weekly time window (to account for the uncertainty within the climatological model); this forms the temporal component of the model. The final product is a wildfire risk score (from 0 to 1) by week, representing the average wildfire risk in an area. To compute the potential wildfire risk, the sub-models are combined using a Multi-Criteria Approach, and the model results are validated against the area under the Receiver Operating Characteristic curve.
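
    The combination step can be sketched as fuzzy scores per theme aggregated by weights; the membership shapes and weights below are illustrative, not the study's fitted, data-driven functions.

    ```python
    # Sketch of the risk combination: each theme (topography, fuel, climate)
    # contributes a fuzzy score in [0, 1]; a weighted multi-criteria
    # aggregation yields the weekly wildfire risk. All shapes/weights assumed.

    def fuzzy_linear(x, lo, hi):
        """0 below lo, 1 above hi, linear in between."""
        return min(1.0, max(0.0, (x - lo) / (hi - lo)))

    def wildfire_risk(slope_deg, fuel_load, dryness,
                      weights=(0.2, 0.35, 0.45)):
        scores = (fuzzy_linear(slope_deg, 0, 35),     # topography theme
                  fuzzy_linear(fuel_load, 0.5, 4.0),  # fuel availability
                  fuzzy_linear(dryness, 0.3, 0.9))    # weekly climate theme
        return sum(w * s for w, s in zip(weights, scores))

    print(wildfire_risk(slope_deg=20, fuel_load=3.0, dryness=0.8))  # high
    print(wildfire_risk(slope_deg=5, fuel_load=1.0, dryness=0.4))   # low
    ```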

  11. An assessment of CFD-based wall heat transfer models in piston engines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sircar, Arpan; Paul, Chandan; Ferreyro-Fernandez, Sebastian

    The lack of accurate submodels for in-cylinder heat transfer has been identified as a key shortcoming in developing truly predictive, physics-based computational fluid dynamics (CFD) models that can be used to develop combustion systems for advanced high-efficiency, low-emissions engines. Only recently have experimental methods become available that enable accurate near-wall measurements to enhance simulation capability via advancing models. Initial results show crank-angle dependent discrepancies with respect to previously used boundary-layer models of up to 100%. However, available experimental data is quite sparse (only a few data points on engine walls) and limited (available measurements are those of heat flux only). Predictive submodels are needed for medium-resolution ("engineering") LES and for unsteady Reynolds-averaged simulations (URANS). Recently, some research groups have performed DNS studies on engine-relevant conditions using simple geometries. These provide very useful data for benchmarking wall heat transfer models under such conditions. Further, a number of new and more sophisticated models have also become available in the literature which account for these engine-like conditions. Some of these have been incorporated, while others of a more complex nature, which include solving additional partial differential equations (PDEs) within the thin boundary layer near the wall, are underway. These models will then be tested against the available DNS/experimental data in both SI (spark-ignition) and CI (compression-ignition) engines.

  12. Modelling of environmental impacts of solid waste landfilling within the life-cycle analysis program EASEWASTE.

    PubMed

    Kirkeby, Janus T; Birgisdottir, Harpa; Bhander, Gurbakash Singh; Hauschild, Michael; Christensen, Thomas H

    2007-01-01

    A new computer-based life-cycle assessment model (EASEWASTE) has been developed to evaluate resource and environmental consequences of solid waste management systems. This paper describes the landfilling sub-model used in the life-cycle assessment program EASEWASTE, and examines some of the implications of this sub-model. All quantities and concentrations of leachate and landfill gas can be modified by the user in order to bring them in agreement with the actual landfill that is assessed by the model. All emissions, except the generation of landfill gas, are process specific. The landfill gas generation is calculated on the basis of organic matter in the landfilled waste. A landfill assessment example is provided. For this example, the normalised environmental effects of landfill gas on global warming and photochemical smog are much greater than the environmental effects for landfill leachate or for landfill construction. A sensitivity analysis for this example indicates that the overall environmental impact is sensitive to the gas collection efficiency and the use of the gas, but not to the amount of leachate generated, or the amount of soil or liner material used in construction. The landfill model can be used for evaluating different technologies with different liners, gas and leachate collection efficiencies, and to compare the environmental consequences of landfilling with alternative waste treatment options such as incineration or anaerobic digestion.
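
    The gas sub-model's logic is essentially first-order decay of the degradable carbon in the landfilled waste, with only the collected fraction credited. A minimal sketch with typical textbook constants (not EASEWASTE defaults):

    ```python
    # First-order decay sketch of landfill gas generation and collection.
    # Constants are typical textbook values, purely for illustration.

    import math

    def gas_generated(doc_kg, k=0.08, years=30):
        """Gas generated each year from first-order decay of the carbon."""
        return [doc_kg * (math.exp(-k * t) - math.exp(-k * (t + 1)))
                for t in range(years)]

    doc = 120.0                   # kg degradable organic carbon per Mg waste
    yearly = gas_generated(doc)
    collected = [0.6 * g for g in yearly]     # 60% collection efficiency
    flared_or_used = sum(collected)
    escaped = sum(yearly) - flared_or_used    # drives the global-warming score
    print(round(flared_or_used, 1), round(escaped, 1))
    ```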

  13. A consistent simulation of oxygen isotope mass-independent fractionation (MIF) in CO and O3 using AC-GCM EMAC

    NASA Astrophysics Data System (ADS)

    Gromov, Sergey; Jöckel, Patrick; Brenninkmeijer, Carl A. M.

    2015-04-01

    We present the most consistent estimate of the atmospheric distribution of the oxygen mass-independent fractionation (MIF) of carbon monoxide (Δ17O(CO) = (δ17O(CO) + 1)/(δ18O(CO) + 1)^β − 1, β = 0.528, V-SMOW scale) inferred using the ECHAM/MESSy Atmospheric Chemistry (EMAC) model (Jöckel et al., 2010). Although the MIF of CO is largely determined by its removal reaction with OH, implementing a comprehensive chemistry scheme and detailed surface emissions in EMAC allows us to single out the smaller MIF inputs due to oxygen from ozone and other atmospheric oxygen reservoirs. The model shows that less than 2% of CO molecules inherit their oxygen atoms from O3 (mostly via ozonolysis reactions), which translates into an additional +0.60‰ in the average tropospheric Δ17O(CO) value. The remaining non-MIF oxygen (from water and atmospheric O2) offsets this input by −0.24‰. The chemical kinetics of alkene ozonolysis (viz. the yield of CO per reacted O3 and the transfer of O atoms to CO) simulated in EMAC are in good agreement with the laboratory studies of Röckmann et al. (1998a). This also pertains to the inferred OH sink-induced effective tropospheric MIF of +(4.3±0.2)‰, compared with the +(4.1±0.3)‰ estimated by Röckmann et al. (1998b). The explicitly simulated tropospheric Δ17O(O3) value in EMAC averages +30.4‰ with small variation, consistent with expectations from laboratory data. In contrast, the most recent observations of tropospheric ozone MIF (Vicars and Savarino, 2014) suggest +25‰ as the most representative value, which renders the simulated MIF input from O3 to CO potentially overestimated by ~20%. The EMAC-simulated δ18O(O3), however, agrees well with observational data, whilst sensitivity studies confirm a non-negligible increase in atmospheric δ18O(CO) due to the input of O3 oxygen to CO. A pronounced CO enrichment in heavy oxygen is expected in the stratosphere via the reaction of methane with O(1D), provided that the latter inherits the isotope composition of O3. Although their variation is slightly underestimated, the simulated surface seasonal cycles of Δ17O(CO) are in very good agreement with observations in the Northern Hemisphere (NH). For the Southern Hemisphere (SH), where observations of CO MIF are not available to date, the model predicts a substantially higher average and smaller variation of Δ17O(CO). Finally, EMAC ascertains that the boundary-layer sink-effective 13C and 18O enrichments of CO correlate tightly with the Δ17O(CO) signal, indicating that the latter can be used as a measure of the chemical age of CO, i.e. its exposure to OH. Moreover, the MIF of CO constitutes a tool for inferring the actual (i.e. not modified by sink fractionation) isotope composition of its sources. References: Jöckel, P., Kerkweg, A., Pozzer, A., Sander, R., Tost, H., Riede, H., Baumgaertner, A., Gromov, S., and Kern, B.: Development cycle 2 of the Modular Earth Submodel System (MESSy2), Geosci. Model Dev., 3, 717-752, doi: 10.5194/gmd-3-717-2010, 2010. Röckmann, T., Brenninkmeijer, C. A. M., Neeb, P., and Crutzen, P. J.: Ozonolysis of nonmethane hydrocarbons as a source of the observed mass independent oxygen isotope enrichment in tropospheric CO, J. Geophys. Res. Atmos., 103, 1463-1470, doi: 10.1029/97JD02929, 1998a. Röckmann, T., Brenninkmeijer, C. A. M., Saueressig, G., Bergamaschi, P., Crowley, J. N., Fischer, H., and Crutzen, P. J.: Mass-independent oxygen isotope fractionation in atmospheric CO as a result of the reaction CO+OH, Science, 281, 544-546, doi: 10.1126/science.281.5376.544, 1998b. Vicars, W. C. and Savarino, J.: Quantitative constraints on the 17O-excess (Δ17O) signature of surface ozone: Ambient measurements from 50° N to 50° S using the nitrite-coated filter technique, Geochim. Cosmochim. Acta, 135, 270-287, doi: 10.1016/j.gca.2014.03.023, 2014.
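
    For concreteness, the Δ17O definition quoted above is directly computable; a minimal Python sketch follows (the delta values are hypothetical inputs, not results from the paper):

      # Minimal sketch: triple-oxygen-isotope excess (cap-delta) from delta values.
      # Hypothetical delta values in per mil on the V-SMOW scale; beta = 0.528 as above.

      BETA = 0.528

      def cap_delta_17O(delta17_permil: float, delta18_permil: float, beta: float = BETA) -> float:
          """Return the 17O excess in per mil: ((1+d17)/(1+d18)^beta - 1) * 1000."""
          d17 = delta17_permil / 1000.0  # convert from per mil to fractional notation
          d18 = delta18_permil / 1000.0
          return ((1.0 + d17) / (1.0 + d18) ** beta - 1.0) * 1000.0

      # Hypothetical CO isotope deltas (per mil, V-SMOW):
      print(round(cap_delta_17O(7.5, 5.0), 3))  # positive value indicates mass-independent enrichment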

  14. The Purpose of Mess in Action Research: Building Rigour though a Messy Turn

    ERIC Educational Resources Information Center

    Cook, Tina

    2009-01-01

    Mess and rigour might appear to be strange bedfellows. This paper argues that the purpose of mess is to facilitate a turn towards new constructions of knowing that lead to transformation in practice (an action turn). Engaging in action research--research that can disturb both individual and communally held notions of knowledge for practice--will…

  15. Engaging with and Moving on from Participatory Research: A Personal Reflection

    ERIC Educational Resources Information Center

    Gristy, Cath

    2015-01-01

    In this paper, I respond to the call to articulate experiences of the messy realities of participatory research. I reflect on my engagement and struggle with the realities and ethics of a piece of case study research, which set out with a participatory approach. The project involved a group of young people from an isolated rural community who…

  16. Messy but Meaningful: Exploring the Transition to Reform-Based Pedagogy with Teachers of Mathematics and Coordinators in Ontario, Canada

    ERIC Educational Resources Information Center

    Jarvis, Daniel

    2016-01-01

    The RE4MUL8 Project involved the creation of an online/mobile resource for Intermediate Division (Grade 7 and 8) teachers of mathematics. This resource showcases video documentaries of seven key mathematics topic lessons (fractions, integers, proportional reasoning, composite shapes and solids, solving equations, and patterning and algebraic…

  17. Food table on ISS

    NASA Image and Video Library

    2015-04-08

    ISS043E091650 (04/08/2015) --- A view of the food table located in the Russian Zvezda service module on the International Space Station taken by Expedition 43 Flight Engineer Scott Kelly. Assorted food, drink and condiment packets are visible. Kelly tweeted this image along with the comment: "Looks messy, but it's functional. Our #food table on the @space station. What's for breakfast? #YearInSpace".

  18. Including Families and Communities in Urban Education. Issues in the Research, Theory, Policy, and Practice of Urban Education

    ERIC Educational Resources Information Center

    Hands, Catherine, Ed.; Hubbard, Lea, Ed.

    2011-01-01

    The work of school, family and community partnerships is complex and messy and demands a thoughtful and deep investigation. Currently, parent and community involvement does not draw on school reform and educational change literature and conversely the school change literature often ignores the crucial role that communities play in educational…

  19. A nondestructive trap for Dendroctonus frontalis Zimmerman (Coleoptera: Scolytidae)

    Treesearch

    John C. Moser; LLoyd E. Browne

    1978-01-01

    The bucket trap is a lightweight device for capturing southern pine beetles in flight and retaining them either alive or dead for later examination. It is not messy like the sticky trap and not cumbersome like conventional live traps. Placing the bucket against a vertical silhouette increases the number of beetles caught. Few nontarget insects are captured except for...

  20. From Latin Americans to Latinos: Latin American Immigration in US: The Unwanted Children

    ERIC Educational Resources Information Center

    Moraña, Ana

    2007-01-01

    It is my understanding that Latin American immigrants in the United States, during the contested process of becoming Latinos (US citizens or the offspring of Latin Americans born in US) are for the most part socially portrayed as unwanted, messy children who need to be educated before they can become American citizens. Whether they can be called…

  1. A Professor's Fortunate Suggestion: An Essay on the Transformational Power of Interpretive Epistemologies

    ERIC Educational Resources Information Center

    Diversi, Marcelo

    2007-01-01

    This is an essay about the transformative power of interpretive epistemologies for those who come into the social sciences seeking meaning in the messiness of human experience. It is about how an epistemological exercise gave the author symbolic tools to become sentient of his father's subjective oppressive force in his life--with the man living…

  2. "I Keep Me Safe." Risk and Resilience in Children with Messy Lives

    ERIC Educational Resources Information Center

    Wright, Travis

    2013-01-01

    Though we do our best to protect children from life's underbelly, bad things happen. Hurricanes, school shootings, divorce, exploding crime rates, economic downturns, child abuse, and acts of terror have become reality for many. Sadly, students are not immune from the chaos that often results. If a child worries that he is not safe or thinks…

  3. Authenticity in Education: From Narcissism and Freedom to the Messy Interplay of Self-Exploration and Acceptable Tension

    ERIC Educational Resources Information Center

    Thompson, Merlin B.

    2015-01-01

    The problem with authenticity--the idea of being "true to one's self"--is that its somewhat checkered reputation garners a complete range of favorable and unfavorable reactions. In educational settings, authenticity is lauded as one of the top two traits students desire in their teachers. Yet, authenticity is criticized for its tendency…

  4. What's the Technology For? Teacher Attention and Pedagogical Goals in a Modeling-Focused Professional Development Workshop

    ERIC Educational Resources Information Center

    Wilkerson, Michelle Hoda; Andrews, Chelsea; Shaban, Yara; Laina, Vasiliki; Gravel, Brian E.

    2016-01-01

    This paper explores the role that technology can play in engaging pre-service teachers with the iterative, "messy" nature of model-based inquiry. Over the course of 5 weeks, 11 pre-service teachers worked in groups to construct models of diffusion using a computational animation and simulation toolkit, and designed lesson plans for the…

  5. The Messy Nature of Science: Famous Scientists Can Help Clear Up

    ERIC Educational Resources Information Center

    Sinclair, Alex; Strachan, Amy

    2016-01-01

    Having embraced the inclusion of evolution in the National Curriculum for primary science in England and briefly bemoaned the omission of any physics in key stage 1 (ages 5-7), it was time to focus on the biggest change, that of working scientifically. While the authors were aware of the non-statutory suggestions to study famous scientists such as…

  6. On the Problem of Theorising: An Insider Account of Research Practice

    ERIC Educational Resources Information Center

    Clegg, Sue

    2012-01-01

    This article addresses the problem of how we theorise in writing about higher education through a reconstruction and interrogation of my previous work. I argue that theorising is messy, incomplete and a non-reductive process. Using C. Wright Mills' notion of craft and Val Hey's insights into theorists who come to haunt us, I retrace the steps…

  7. Of Groomers and Tour Guides: The Role of Writing in the Fellowships Office

    ERIC Educational Resources Information Center

    Bickford, Leslie

    2017-01-01

    Making writing less scary for students and focusing on the messy, recursive nature of writing helps students use the writing process to bring forth the thoughts that might otherwise not find their way into essays. Students who revisit their writing also revisit their thinking and are empowered to cultivate and articulate that thinking in clearer…

  8. Problem-Based Learning in the Earth and Space Science Classroom, K-12

    ERIC Educational Resources Information Center

    McConnell, Tom J.; Parker, Joyce; Eberhardt, Janet

    2017-01-01

    If you've ever asked yourself whether problem-based learning (PBL) can bring new life to both your teaching and your students' learning, here's your answer: Yes. This all-in-one guide will help you engage your students in scenarios that represent real-world science in all its messy, thought-provoking glory. The scenarios will prompt K-12 students…

  9. Circles in the Sand: Challenge and Reinforcement of Gender Stereotypes in a Literacy Programme in Sudan

    ERIC Educational Resources Information Center

    Greany, Kate

    2008-01-01

    Participatory literacy programmes in developing countries are often seen as an important tool for women's empowerment and equality. This article problematises the way in which evaluation of progress towards these goals is couched in a linear trajectory, and often fails to uncover the messy reality of women's negotiations to achieve their own aims.…

  10. The "Messy" Business of Academic Developers Leading Other Academic Developers: Critical Reflection on a Curriculum Realignment Exercise

    ERIC Educational Resources Information Center

    Thomas, Sharon; Cordiner, Moira

    2014-01-01

    Little has been written about academic developers (ADs) working in teams leading other ADs. This paper chronicles the experience of a group of ADs in one Australian university working on a curriculum realignment exercise. Unexpectedly the dominant theme in participants' reflections was group dynamics, not the process. We were confronted by…

  11. Confronting Ambiguity, Anarchy, and Crisis in Institutional Research: Using Student Unit Record Databases in Extra-Institutional Research

    ERIC Educational Resources Information Center

    Musoba, Glenda D.; Gross, Jacob P. K.; Hossler, Don

    2008-01-01

    The world of policymaking on the campuses of colleges and universities is messy, ambiguous, and contested. In this complex environment, which Kingdon (2003) has aptly called a "policy soup," the role of institutional research is often not only to provide answers to existing policy questions but to produce information to help transform…

  12. Stranger in a Strange Land: The Undergraduate in the Academic Library--A Collaborative Pedagogy for Undergraduate Research

    ERIC Educational Resources Information Center

    Bankert, Dabney A.; Van Vuuren, Melissa S.

    2008-01-01

    This is not another tired lament for a Golden Age when all students were brilliantly prepared for college, but rather an elaboration of a central pedagogical reality the authors had each separately faced--it is not easy to teach the complex set of skills subsumed under the heading "research," that organic, contingent, messy, recursive…

  13. History and Generality of Aquatox

    EPA Pesticide Factsheets

    Aquatic fate, toxicology, and ecosystem submodels were coupled to “close the loop,” representing both direct and indirect effects. The model is also a platform to which other environmental stressors may be added for extensive analysis.

  14. 40 CFR 87.42 - Production report to EPA.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...: (1) The type of engine (turbofan, turboprop, etc.) and complete sub-model name, including any.... Specify thrust in kW for turboprop engines. You may omit the following items specified in Part III...

  15. Vibration and acoustic frequency spectra for industrial process modeling using selective fusion multi-condition samples and multi-source features

    NASA Astrophysics Data System (ADS)

    Tang, Jian; Qiao, Junfei; Wu, ZhiWei; Chai, Tianyou; Zhang, Jian; Yu, Wen

    2018-01-01

    Frequency spectral data of mechanical vibration and acoustic signals relate to difficult-to-measure production quality and quantity parameters of complex industrial processes. A selective ensemble (SEN) algorithm can be used to build a soft sensor model of these process parameters by selectively fusing valued information from different perspectives. However, a combination of several optimized ensemble sub-models with SEN cannot guarantee the best prediction model. In this study, we use several techniques to construct a data-driven model of industrial process parameters from mechanical vibration and acoustic frequency spectra, based on the selective fusion of multi-condition samples and multi-source features. A multi-layer SEN (MLSEN) strategy is used to simulate the cognitive process of a domain expert. A genetic algorithm and kernel partial least squares are used to construct the inside-layer SEN sub-models, one for each mechanical vibration and acoustic frequency spectral feature subset. Branch-and-bound and adaptive weighted fusion algorithms are integrated to select and combine the outputs of the inside-layer SEN sub-models. Then, the outside-layer SEN is constructed. Thus, "sub-sampling training examples"-based and "manipulating input features"-based ensemble construction methods are integrated, thereby realizing selective information fusion based on multi-condition history samples and multi-source input features. This novel approach is applied to a laboratory-scale ball mill grinding process. A comparison with other methods indicates that the proposed MLSEN approach effectively models mechanical vibration and acoustic signals.
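
    To make the fusion step concrete, here is a toy Python sketch of the selective-ensemble idea: pick the subset of sub-model predictions that minimizes validation error, then fuse them with inverse-error weights. Exhaustive subset enumeration stands in for the paper's branch-and-bound search, and all data below are synthetic:

      # Toy sketch of selective ensemble (SEN) fusion: pick the subset of sub-model
      # predictions that minimizes validation RMSE, then fuse with inverse-error weights.
      # Exhaustive enumeration stands in for branch-and-bound; data are synthetic.
      from itertools import combinations
      import numpy as np

      rng = np.random.default_rng(0)
      y_true = rng.normal(size=200)                      # validation target
      # Five hypothetical sub-models: target plus noise of varying quality.
      preds = [y_true + rng.normal(scale=s, size=200) for s in (0.3, 0.5, 0.8, 1.2, 2.0)]

      def rmse(y, yhat):
          return float(np.sqrt(np.mean((y - yhat) ** 2)))

      def fuse(subset):
          """Adaptive weighted fusion: weights proportional to 1/RMSE of each member."""
          w = np.array([1.0 / rmse(y_true, preds[i]) for i in subset])
          w /= w.sum()
          return sum(wi * preds[i] for wi, i in zip(w, subset))

      best = min(
          (s for r in range(1, len(preds) + 1) for s in combinations(range(len(preds)), r)),
          key=lambda s: rmse(y_true, fuse(s)),
      )
      print("selected sub-models:", best, "fused RMSE:", round(rmse(y_true, fuse(best)), 3))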

  16. The Logic of Reachability

    NASA Technical Reports Server (NTRS)

    Smith, David E.; Jonsson, Ari K.; Clancy, Daniel (Technical Monitor)

    2001-01-01

    In recent years, Graphplan style reachability analysis and mutual exclusion reasoning have been used in many high performance planning systems. While numerous refinements and extensions have been developed, the basic plan graph structure and reasoning mechanisms used in these systems are tied to the very simple STRIPS model of action. In 1999, Smith and Weld generalized the Graphplan methods for reachability and mutex reasoning to allow actions to have differing durations. However, the representation of actions still has some severe limitations that prevent the use of these techniques for many real-world planning systems. In this paper, we 1) separate the logic of reachability from the particular representation and inference methods used in Graphplan, and 2) extend the notions of reachability and mutual exclusion to more general notions of time and action. As it turns out, the general rules for mutual exclusion reasoning take on a remarkably clean and simple form. However, practical instantiations of them turn out to be messy, and require that we make representation and reasoning choices.
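
    As a concrete anchor for the STRIPS-era baseline the paper generalizes from, here is a minimal Python sketch of the classical pairwise mutex test (inconsistent effects and interference); the action encoding and the two example actions are hypothetical, not taken from the paper:

      # Minimal sketch of STRIPS-style pairwise action mutexes as used in Graphplan:
      # two actions are mutex if one deletes the other's preconditions or add effects.
      # The action encoding (pre/add/del as sets of literals) is hypothetical.

      def mutex(a, b):
          """Inconsistent effects or interference between two STRIPS actions."""
          inconsistent_effects = bool(a["del"] & b["add"]) or bool(b["del"] & a["add"])
          interference = bool(a["del"] & b["pre"]) or bool(b["del"] & a["pre"])
          return inconsistent_effects or interference

      load = {"pre": {"at_truck_A"}, "add": {"in_truck"}, "del": {"at_pkg_A"}}
      drive = {"pre": {"at_truck_A"}, "add": {"at_truck_B"}, "del": {"at_truck_A"}}
      print(mutex(load, drive))  # True: drive deletes load's precondition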

  17. 40 CFR 87.42 - Production report to EPA.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... the following information for each sub-model: (1) The type of engine (turbofan, turboprop, etc.) and... report CO2 emissions. Specify thrust in kW for turboprop engines. You may omit the following items...

  18. 40 CFR 87.42 - Production report to EPA.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... the following information for each sub-model: (1) The type of engine (turbofan, turboprop, etc.) and... report CO2 emissions. Specify thrust in kW for turboprop engines. You may omit the following items...

  19. Ghost Hunting with Lollies, Chess and Lego: Appreciating the "Messy" Complexity (and Costs) of Doing Difficult Research in Education

    ERIC Educational Resources Information Center

    Graham, Linda J.; Buckley, Linda

    2014-01-01

    This paper contributes to conversations about the funding and quality of education research. The paper proceeds in two parts. Part I sets the context by presenting an historical analysis of funding allocations made to Education research through the ARC's Discovery projects scheme between the years 2002 and 2014, and compares these trends to…

  20. What Success Really Looks Like: Bright Spots and Blots Define a Career Trajectory

    ERIC Educational Resources Information Center

    Abrams, Jennifer

    2015-01-01

    The image of success not being a straight, upward arrow but a big, messy scribbled blob defines the career trajectory of the author perfectly. Her work has moved forward, pushed upward, and stretched further, but it has not been a smooth and easy path. In this article, an education consultant reflects on her career highs and lows as she shifts…

  1. The Messiness of Language Socialization in Reading Groups: Participation in and Resistance to the Values of Essayist Literacy

    ERIC Educational Resources Information Center

    Poole, Deborah

    2008-01-01

    This paper focuses on the process of literacy socialization in several 5th grade reading groups. Through close analysis of spoken interaction, which centers on a heavily illustrated, non-fiction text, the paper proposes that these reading groups can be seen as complex sites of socialization to the values associated with essayist literacy (i.e.,…

  2. "Did We Learn English or What?": A Study Abroad Student in the UK Carrying and Crossing Boundaries in Out-of-Class Communication

    ERIC Educational Resources Information Center

    Badwan, Khawla M.

    2017-01-01

    Language educators in many parts of the world are torn between preparing language learners to pass language proficiency tests and trying to let their classrooms reflect the messiness of out-of-class communication. Because testing is "an activity which perhaps more than any other dictates what is taught" (Hall, 2014, p. 379), helping…

  3. Improved Method Of Bending Concentric Pipes

    NASA Technical Reports Server (NTRS)

    Schroeder, James E.

    1995-01-01

    Proposed method for bending two concentric pipes simultaneously while maintaining void between them replaces present tedious, messy, and labor-intensive method. Array of rubber tubes inserted in gap between concentric pipes. Tubes then inflated with relatively incompressible liquid to fill gap. Enables bending to be done faster and more cleanly, and amenable to automation of significant portion of bending process on computer numerically controlled (CNC) tube-bending machinery.

  4. The Process Is Just Messy: A Historical Perspective on Adoption of Innovations

    ERIC Educational Resources Information Center

    Orrill, Chandra Hawley

    2016-01-01

    Suppose you were an alien trying to understand how people in the United States feel about the Common Core School Standards for Mathematics (CCSSM). You could look at the Internet, mass media, YouTube and all of the other venues available. Walking away from them, you would be very confused about whether the U.S. loves or hates the CCSSM, whether…

  5. Assumptions, Emotions, and Interpretations as Ethical Moments: Navigating a Small-Scale Cross-Cultural Online Interviewing Study

    ERIC Educational Resources Information Center

    Frisoli, Paul St. John

    2010-01-01

    In this paper, I map important "messy" elements that I learned from my five-month small-scale research project, one that was designed around pivotal works on online social research. I used computers and the Internet with Minan, a young man living in Guinea, West Africa, in order to examine his perceptions surrounding the value of these…

  6. Do We Really Know What the Problems Are? A Messy Conversation about Pedagogical Questions and the Scholarship of Teaching and Learning

    ERIC Educational Resources Information Center

    Tsao, Ting Man

    2010-01-01

    This article presents the author's critique of Jennifer A. Rich's essay titled "How Do We See What We See? Pedagogical Lacunae and Their Pitfalls in the Classroom." Rather than adopting the classic "how I solved a problem" narrative, Rich relentlessly focuses on her questions and problems--their contexts, their complexity, other questions to which…

  7. Heat Stable Enzymes from Thermophiles

    DTIC Science & Technology

    1998-02-01

    final product and is somewhat messy to work with. Therefore, alternatives were tested. However, no combination of corn syrup, alternative sugars and... Cloning of alkaline phosphatase gene and production of high specific activity enzyme: cloning into E. coli and expression of high activity... JKR209, into an alternative, better producing organism.

  8. Implementing and Sustaining School Improvement. Principal's Research Review: Supporting the Principal's Data-Informed Decisions. Vol. 6, No. 2

    ERIC Educational Resources Information Center

    Protheroe, Nancy

    2011-01-01

    School improvement can be a complex, messy business. At its most basic, school improvement is change--change that might require people to abandon long-held beliefs and practices, shift roles, and learn new skills. Kilgore and Reynolds (2011) suggested that successful change requires that people change their perceptions as well as their actions.…

  9. Estimation of saltation emission in the Kubuqi Desert, North China.

    PubMed

    Du, Heqiang; Xue, Xian; Wang, Tao

    2014-05-01

    The Kubuqi Desert suffers from severe wind erosion hazards. Every year, a large mass of aeolian sand is blown into the Ten Tributaries, which are tributaries of the Yellow River. To estimate the quantity of aeolian sediment blown into the Ten Tributaries from the Kubuqi Desert, it is necessary to simulate the desert's saltation processes. A saltation submodel of the IWEMS (Integrated Wind-Erosion Modeling System), together with its accompanying RS (Remote Sensing) and GIS (Geographic Information System) methods, was used to model saltation emissions in the Kubuqi Desert. To calibrate the saltation submodel, the frontal area of vegetation, soil moisture, wind velocity, and saltation sediment were observed synchronously at several points in 2011 and 2012. In this study, the BEACH (Bridge Event And Continuous Hydrological) model was introduced to simulate daily soil moisture. Using the surface parameters (frontal area of vegetation and soil moisture) along with the wind velocities and saltation sediments observed at these points, the saltation model was calibrated and validated. To reduce simulation error, a subdaily wind velocity program, WINDGEN, was introduced to simulate the hourly wind velocity of the Kubuqi Desert. By incorporating the simulated hourly wind velocity and the model variables, the saltation emission of the Kubuqi Desert was modeled. The model results show that the total sediment flow rate was 1-30.99 tons/m over the last 10 years (2001-2010). Saltation emission mainly occurs in the north-central part of the Kubuqi Desert in winter and spring. Integrating the wind directions, the quantity of aeolian sediment deposited in the Ten Tributaries was estimated. Compared with data observed by the local government and hydrometric stations, our estimate is reasonable. Copyright © 2014 Elsevier B.V. All rights reserved.
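
    The abstract does not reproduce the IWEMS equations; as a hedged illustration of the kind of relation a saltation submodel encodes, here is a Python sketch of an Owen-type streamwise flux that switches on above a threshold friction velocity. The coefficient and threshold are placeholders, not calibrated IWEMS values:

      # Hedged sketch of an Owen-type streamwise saltation flux, a common form in
      # wind-erosion models (not necessarily the exact IWEMS formulation):
      #   Q = c * (rho_air / g) * u*^3 * (1 - u*t^2 / u*^2)   for u* > u*t, else 0.
      RHO_AIR = 1.23   # air density, kg m^-3
      G = 9.81         # gravitational acceleration, m s^-2

      def saltation_flux(u_star, u_star_t=0.25, c=1.0):
          """Streamwise saltation flux (kg m^-1 s^-1); u_star_t and c are placeholders."""
          if u_star <= u_star_t:
              return 0.0
          return c * (RHO_AIR / G) * u_star**3 * (1.0 - (u_star_t / u_star) ** 2)

      for u in (0.2, 0.3, 0.5):
          print(u, round(saltation_flux(u), 4))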

  10. Development of a physiologically based pharmacokinetic model for inhalation of jet fuels in the rat.

    PubMed

    Martin, Sheppard A; Campbell, Jerry L; Tremblay, Raphael T; Fisher, Jeffrey W

    2012-01-01

    The pharmacokinetic behavior of the majority of jet fuel constituents has not been previously described in the framework of a physiologically based pharmacokinetic (PBPK) model for inhalation exposure. Toxic effects have been reported in multiple organ systems, though exposure methods varied across studies, utilizing either vaporized or aerosolized fuels. The purpose of this work was to assess the pharmacokinetics of aerosolized and vaporized fuels, and to develop a PBPK model capable of describing both types of exposures. To support model development, n-tetradecane and n-octane exposures were conducted at 89 mg/m³ aerosol+vapor and 1000-5000 ppm vapor, respectively. Exposures to JP-8 and S-8 were conducted at ~900-1000 mg/m³, and at ~200 mg/m³ to a 50:50 blend of both fuels. Sub-models were developed to assess the behavior of representative constituents and grouped unquantified constituents, termed "lumps", accounting for the remaining fuel mass. The sub-models were combined into the first PBPK model for petroleum and synthetic jet fuels. Inhalation of hydrocarbon vapors was described with simple gas-exchange assumptions for uptake and exhalation. For aerosol droplets, systemic uptake occurred in the thoracic region. Visceral tissues were described using perfusion- and diffusion-limited equations. The model described kinetics at multiple fuel concentrations, utilizing a chemical "lumping" strategy to estimate parameters for fractions of speciated and unspeciated hydrocarbons and to gauge metabolic interactions. The model more accurately simulated aromatic and lower molecular weight (MW) n-alkanes than some higher MW chemicals. Metabolic interactions were more pronounced at high (~2700-1000 mg/m³) concentrations. This research represents the most detailed assessment of fuel pharmacokinetics to date.
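
    A minimal Python sketch of the standard perfusion-limited tissue balance that PBPK models of this kind build on, dCt/dt = (Qt/Vt)(Ca − Ct/P); all parameter values are illustrative placeholders, not the paper's fitted fuel constants:

      # Minimal sketch of a perfusion-limited PBPK tissue compartment:
      # dCt/dt = (Qt/Vt) * (Ca - Ct/P), integrated by explicit Euler.
      # Parameter values are illustrative placeholders.
      import numpy as np

      Q_t, V_t, P = 0.9, 4.0, 5.0       # tissue blood flow (L/h), volume (L), partition coeff.
      C_art = 1.0                        # constant arterial concentration (mg/L) during exposure
      dt, t_end = 0.01, 10.0             # Euler step (h) and simulation horizon (h)

      C_t = 0.0
      for _ in np.arange(0.0, t_end, dt):
          C_t += dt * (Q_t / V_t) * (C_art - C_t / P)

      print(round(C_t, 3), "mg/L; approaches Ca*P =", C_art * P, "at steady state")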

  11. Validation of a spatial model used to locate fish spawning reef construction sites in the St. Clair–Detroit River system

    USGS Publications Warehouse

    Fischer, Jason L.; Bennion, David; Roseman, Edward F.; Manny, Bruce A.

    2015-01-01

    Lake sturgeon (Acipenser fulvescens) populations have suffered precipitous declines in the St. Clair–Detroit River system, following the removal of gravel spawning substrates and overfishing in the late 1800s to mid-1900s. To assist the remediation of lake sturgeon spawning habitat, three hydrodynamic models were integrated into a spatial model to identify areas in two large rivers, where water velocities were appropriate for the restoration of lake sturgeon spawning habitat. Here we use water velocity data collected with an acoustic Doppler current profiler (ADCP) to assess the ability of the spatial model and its sub-models to correctly identify areas where water velocities were deemed suitable for restoration of fish spawning habitat. ArcMap 10.1 was used to create raster grids of water velocity data from model estimates and ADCP measurements which were compared to determine the percentage of cells similarly classified as unsuitable, suitable, or ideal for fish spawning habitat remediation. The spatial model categorized 65% of the raster cells the same as depth-averaged water velocity measurements from the ADCP and 72% of the raster cells the same as surface water velocity measurements from the ADCP. Sub-models focused on depth-averaged velocities categorized the greatest percentage of cells similar to ADCP measurements where 74% and 76% of cells were the same as depth-averaged water velocity measurements. Our results indicate that integrating depth-averaged and surface water velocity hydrodynamic models may have biased the spatial model and overestimated suitable spawning habitat. A model solely integrating depth-averaged velocity models could improve identification of areas suitable for restoration of fish spawning habitat.

  12. A Distributed Snow Evolution Modeling System (SnowModel)

    NASA Astrophysics Data System (ADS)

    Liston, G. E.; Elder, K.

    2004-12-01

    A spatially distributed snow-evolution modeling system (SnowModel) has been specifically designed to be applicable over a wide range of snow landscapes, climates, and conditions. To reach this goal, SnowModel is composed of four sub-models: MicroMet defines the meteorological forcing conditions, EnBal calculates surface energy exchanges, SnowMass simulates snow depth and water-equivalent evolution, and SnowTran-3D accounts for snow redistribution by wind. While other distributed snow models exist, SnowModel is unique in that it includes a well-tested blowing-snow sub-model (SnowTran-3D) for application in windy arctic, alpine, and prairie environments where snowdrifts are common. These environments comprise 68% of the seasonally snow-covered Northern Hemisphere land surface. SnowModel also accounts for snow processes occurring in forested environments (e.g., canopy interception related processes). SnowModel is designed to simulate snow-related physical processes occurring at spatial scales of 5-m and greater, and temporal scales of 1-hour and greater. These include: accumulation from precipitation; wind redistribution and sublimation; loading, unloading, and sublimation within forest canopies; snow-density evolution; and snowpack ripening and melt. To enhance its wide applicability, SnowModel includes the physical calculations required to simulate snow evolution within each of the global snow classes defined by Sturm et al. (1995), e.g., tundra, taiga, alpine, prairie, maritime, and ephemeral snow covers. The three, 25-km by 25-km, Cold Land Processes Experiment (CLPX) mesoscale study areas (MSAs: Fraser, North Park, and Rabbit Ears) are used as SnowModel simulation examples to highlight model strengths, weaknesses, and features in forested, semi-forested, alpine, and shrubland environments.

  13. POEM: PESTICIDE ORCHARD ECOSYSTEM MODEL

    EPA Science Inventory

    The Pesticide Orchard Ecosystem Model (POEM) is a mathematical model of organophosphate pesticide movement in an apple orchard ecosystem. In addition, submodels of invertebrate population dynamics are included. The fate model allows the user to select the pesticide, its applicatio...

  14. Electrical engineering unit for the reactive power control of the load bus at the voltage instability

    NASA Astrophysics Data System (ADS)

    Kotenev, A. V.; Kotenev, V. I.; Kochetkov, V. V.; Elkin, D. A.

    2018-01-01

    To reduce reactive power control errors and to decrease the voltage sags in the electric power system caused by the starting of asynchronous motors, a mathematical model of the load bus was developed. The model was built up of sub-models of the following elements: a transformer, a transmission line, synchronous and asynchronous loads, and a capacitor bank. It represents the automatic reactive power control system, taking into account the electromagnetic processes of asynchronous motor starting and the reactive power changes of the electric power system elements caused by voltage fluctuations. Active power/time and reactive power/time characteristics were obtained based on the recommended procedure for calculating the equivalent electric circuit parameters. The derived automatic reactive power control system was shown to eliminate the voltage sags in the electric power system caused by the starting of asynchronous motors.

  15. Understanding the Impacts of Climate Change and Land Use Dynamics Using a Fully Coupled Hydrologic Feedback Model between Surface and Subsurface Systems

    NASA Astrophysics Data System (ADS)

    Park, C.; Lee, J.; Koo, M.

    2011-12-01

    Climate is the most critical driving force of the Earth's hydrologic system. Since the industrial revolution, the impacts of anthropogenic activities on the Earth's environment have expanded and accelerated. In particular, the global emission of carbon dioxide into the atmosphere is known to have significantly increased temperature and affected the hydrologic system. Many hydrologists have contributed to studies of climate change impacts on the hydrologic system since the Intergovernmental Panel on Climate Change (IPCC) was created in 1988. Among the many components of the hydrologic system, groundwater and its response to climate change and anthropogenic activities are not fully understood, owing to the complexity of subsurface conditions between the surface and the groundwater table. A new spatio-temporal hydrologic model has been developed to estimate the impacts of climate change and land use dynamics on groundwater. The model consists of two sub-models: a surface model and a subsurface model. The surface model involves three surface processes: interception, runoff, and evapotranspiration; the subsurface model likewise involves three subsurface processes: soil moisture balance, recharge, and groundwater flow. The surface model requires various input data, including land use, soil types, vegetation types, topographical elevations, and meteorological data. It simulates daily hydrological processes for rainfall interception, surface runoff varied by land use change and crop growth, and evapotranspiration controlled by the soil moisture balance. The daily soil moisture balance is the key element linking the two sub-models, as it calculates infiltration and groundwater recharge while accounting for the time-delay routing through the vadose zone down to the groundwater table. MODFLOW is adopted to simulate groundwater flow and its interaction with surface water components. The model is flexible enough to add new sub-models or modify existing ones, as it is developed in an object-oriented language, Python. The model can also easily be localized by simple modification of soil and crop properties. The application of the model after calibration was successful, and the results showed a reliable water balance and interaction between the surface and subsurface hydrologic systems.
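
    A toy Python version of the daily bucket soil-moisture balance that links the two sub-models (recharge spills to the groundwater model once storage exceeds field capacity); the parameters and forcing are invented for illustration:

      # Toy daily soil-moisture bucket linking the surface and subsurface sub-models:
      # recharge is the excess over field capacity after infiltration and ET.
      # Parameters and forcing are invented for illustration only.
      field_capacity = 120.0   # mm of plant-available storage
      soil = 80.0              # initial storage (mm)

      daily = [  # (infiltration mm/d, actual evapotranspiration mm/d) - synthetic forcing
          (10.0, 3.0), (0.0, 4.0), (35.0, 2.0), (60.0, 2.5), (5.0, 4.0),
      ]

      for day, (infil, aet) in enumerate(daily, start=1):
          soil = max(soil + infil - aet, 0.0)
          recharge = max(soil - field_capacity, 0.0)  # passed to the groundwater model
          soil -= recharge
          print(f"day {day}: storage={soil:6.1f} mm, recharge={recharge:5.1f} mm")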

  16. Desertification in the south Junggar Basin, 2000-2009: Part II. Model development and trend analysis

    NASA Astrophysics Data System (ADS)

    Jiang, Miao; Lin, Yi

    2018-07-01

    The substantial objective of desertification monitoring is to derive its development trend, which facilitates making policies in advance to handle its potential influences. Aiming at this goal, previous studies have proposed a large number of remote sensing (RS) based methods to retrieve multifold indicators, as reviewed in Part I. However, most of these indicators, each capable of characterizing only a single aspect of land attributes (e.g., albedo quantifying land surface reflectivity), cannot show a full picture of desertification processes, and few comprehensive RS-based models have been published. To fill this gap, this Part II was dedicated to developing an RS information model for comprehensively characterizing desertification and deriving its trend, based on the indicators retrieved in Part I for the same case of the south Junggar Basin, China over the last decade (2000-2009). The proposed model was designed with three dominant component modules, i.e., the vegetation-relevant sub-model, the soil-relevant sub-model, and the water-relevant sub-model, which synthesize all of the retrieved indicators to reflect the processes of desertification integrally; based on the model-output indices, the desertification trends were derived using the least absolute deviation fitting algorithm. Tests indicated that the proposed model worked and that the study area showed different development tendencies for different desertification levels. Overall, this Part II established a new comprehensive RS information model for desertification risk assessment and trend derivation, and the whole study comprising Part I and Part II advanced a relatively standard framework for RS-based desertification monitoring.
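
    The trend-derivation step is standard enough to sketch in Python: a least-absolute-deviation (L1) line fit via iteratively reweighted least squares, which resists the outliers that plague per-pixel index series. The annual index series below is synthetic:

      # Sketch of least-absolute-deviation (L1) trend fitting via iteratively
      # reweighted least squares (IRLS); the annual index series is synthetic.
      import numpy as np

      rng = np.random.default_rng(1)
      years = np.arange(2000, 2010, dtype=float)
      index = 0.8 - 0.03 * (years - 2000) + rng.normal(scale=0.02, size=years.size)
      index[3] += 0.3  # one gross outlier: the L1 fit should shrug it off

      X = np.column_stack([np.ones_like(years), years - 2000])
      beta = np.linalg.lstsq(X, index, rcond=None)[0]  # ordinary least-squares start
      for _ in range(50):
          r = np.abs(index - X @ beta)
          w = 1.0 / np.maximum(r, 1e-6)                # IRLS weights for the L1 loss
          W = np.sqrt(w)[:, None]
          beta = np.linalg.lstsq(W * X, np.sqrt(w) * index, rcond=None)[0]

      print("intercept, slope per year:", np.round(beta, 4))  # slope ~ -0.03 despite outlier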

  17. On operational flood forecasting system involving 1D/2D coupled hydraulic model and data assimilation

    NASA Astrophysics Data System (ADS)

    Barthélémy, S.; Ricci, S.; Morel, T.; Goutal, N.; Le Pape, E.; Zaoui, F.

    2018-07-01

    In the context of hydrodynamic modeling, the use of 2D models is adapted in areas where the flow is not mono-dimensional (confluence zones, flood plains). Nonetheless the lack of field data and the computational cost constraints limit the extensive use of 2D models for operational flood forecasting. Multi-dimensional coupling offers a solution with 1D models where the flow is mono-dimensional and with local 2D models where needed. This solution allows for the representation of complex processes in 2D models, while the simulated hydraulic state is significantly better than that of the full 1D model. In this study, coupling is implemented between three 1D sub-models and a local 2D model for a confluence on the Adour river (France). A Schwarz algorithm is implemented to guarantee the continuity of the variables at the 1D/2D interfaces while in situ observations are assimilated in the 1D sub-models to improve results and forecasts in operational mode as carried out by the French flood forecasting services. An implementation of the coupling and data assimilation (DA) solution with domain decomposition and task/data parallelism is proposed so that it is compatible with operational constraints. The coupling with the 2D model improves the simulated hydraulic state compared to a global 1D model, and DA improves results in 1D and 2D areas.
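
    As an illustration of the interface-reconciliation idea (not the authors' hydraulic implementation), here is a toy Python alternating Schwarz iteration on a 1D Laplace problem: two overlapping sub-domain solves exchange interface values until they agree:

      # Minimal sketch of an alternating Schwarz iteration on a 1D Laplace toy:
      # two overlapping subdomains exchange interface values until they agree,
      # mirroring how 1D/2D interface continuity is enforced (hydraulics omitted).
      import numpy as np

      def solve_laplace(n, left, right):
          """Exact 1D Laplace solve on n+1 nodes with Dirichlet ends: a linear profile."""
          return np.linspace(left, right, n + 1)

      # Global problem: u'' = 0 on [0, 1], u(0) = 0, u(1) = 1 (exact solution u = x).
      # Subdomains [0, 0.6] and [0.4, 1] overlap; each uses 60 intervals.
      uL_iface, uR_iface = 0.0, 1.0   # guesses for u(0.6) and u(0.4)
      for sweep in range(200):
          uL = solve_laplace(60, 0.0, uL_iface)   # left solve, right BC at x = 0.6
          uR = solve_laplace(60, uR_iface, 1.0)   # right solve, left BC at x = 0.4
          new_R = uL[40]   # left solution sampled at x = 0.4 (node 40 of 60 on [0, 0.6])
          new_L = uR[20]   # right solution sampled at x = 0.6 (node 20 of 60 on [0.4, 1])
          if abs(new_R - uR_iface) + abs(new_L - uL_iface) < 1e-10:
              break
          uR_iface, uL_iface = new_R, new_L

      print(f"interfaces agreed after {sweep} sweeps: u(0.4)≈{uR_iface:.6f}, u(0.6)≈{uL_iface:.6f}")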

  18. Development of a regional groundwater flow model for the area of the Idaho National Engineering Laboratory, Eastern Snake River Plain Aquifer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCarthy, J.M.; Arnett, R.C.; Neupauer, R.M.

    This report documents a study conducted to develop a regional groundwater flow model for the Eastern Snake River Plain Aquifer in the area of the Idaho National Engineering Laboratory. The model was developed to support Waste Area Group 10, Operable Unit 10-04 groundwater flow and transport studies. The products of this study are this report and a set of computational tools designed to numerically model the regional groundwater flow in the Eastern Snake River Plain aquifer. The objective of developing the current model was to create a tool for defining the regional groundwater flow at the INEL. The model was developed to (a) support future transport modeling for WAG 10-04 by providing the regional groundwater flow information needed for the WAG 10-04 risk assessment, (b) define the regional groundwater flow setting for modeling groundwater contaminant transport at the scale of the individual WAGs, (c) provide a tool for improving the understanding of the groundwater flow system below the INEL, and (d) consolidate the existing regional groundwater modeling information into one usable model. The current model is appropriate for defining the regional flow setting for flow submodels as well as for hypothesis testing to better understand the regional groundwater flow in the area of the INEL. The scale of the submodels must be chosen based on the accuracy required for the study.

  19. Ocean color modeling: Parameterization and interpretation

    NASA Astrophysics Data System (ADS)

    Feng, Hui

    The ocean color as observed near the water surface is determined mainly by dissolved and particulate substances, known as "optically-active constituents," in the upper water column. The goal of ocean color modeling is to interpret an ocean color spectrum quantitatively to estimate the suite of optically-active constituents near the surface. In recent years, ocean color modeling efforts have centered on three major optically-active constituents: chlorophyll concentration, colored dissolved organic matter, and scattering particulates. Many challenges remain in this arena. This thesis addresses some critical issues in ocean color modeling. In chapter one, an extensive literature survey on ocean color modeling is given. A general ocean color model is presented to identify critical candidate uncertainty sources in modeling ocean color. The goal of this thesis study is then defined, along with some specific objectives. Finally, a general overview of the dissertation is presented, relating each of the follow-up chapters to the relevant objectives. In chapter two, a general approach is presented to quantify constituent concentration retrieval errors induced by uncertainties in the inherent optical property (IOP) submodels of a semi-analytical forward model. Chlorophyll concentrations are retrieved by inverting a forward model with nonlinear IOPs. The study demonstrates how uncertainties in individual IOP submodels influence the accuracy of the chlorophyll concentration retrieval at different chlorophyll concentration levels. The important finding of this study is that precise knowledge of the spectral shapes of the IOP submodels is critical for accurate chlorophyll retrieval, suggesting that an improvement in retrieval accuracy requires precise spectral IOP measurements. In chapter three, three distinct inversion techniques, namely nonlinear optimization (NLO), principal component analysis (PCA), and artificial neural networks (ANN), are compared to assess their performance in retrieving optically-active constituents for a complex nonlinear bio-optical system simulated by a semi-analytical ocean color model. A well-designed simulation scheme was implemented to simulate waters of different bio-optical complexity, and the three inversion methods were then applied to these simulated datasets for performance evaluation. In chapter four, an approach is presented for optimally parameterizing an irradiance reflectance model on the basis of a bio-optical dataset collected at 45 stations in Tokyo Bay and nearby regions between 1982 and 1984. (Abstract shortened by UMI.)
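
    The NLO inversion route can be sketched compactly in Python: define a toy two-parameter forward reflectance model and recover the constituents with scipy.optimize.least_squares. The forward model below is invented for illustration and is not the thesis's semi-analytical model:

      # Sketch of ocean-color inversion by nonlinear optimization: fit a toy
      # two-parameter forward reflectance model to an observed spectrum.
      # The forward model is invented for illustration.
      import numpy as np
      from scipy.optimize import least_squares

      wavelengths = np.array([412.0, 443.0, 490.0, 510.0, 555.0])  # nm

      def forward(params, wl):
          """Toy reflectance bb/(a+bb): Gaussian chl absorption, exponential CDOM."""
          chl, cdom = params
          a = (0.06 * chl**0.65 * np.exp(-((wl - 443.0) / 60.0) ** 2)
               + cdom * np.exp(-0.015 * (wl - 443.0)) + 0.01)
          bb = 0.005
          return bb / (a + bb)

      truth = np.array([2.0, 0.05])                    # "true" chl (mg m^-3), cdom (m^-1)
      rng = np.random.default_rng(2)
      observed = forward(truth, wavelengths) * (1 + rng.normal(scale=0.01, size=5))

      fit = least_squares(lambda p: forward(p, wavelengths) - observed,
                          x0=[0.5, 0.01], bounds=([0.01, 0.0], [50.0, 1.0]))
      print("retrieved chl, cdom:", np.round(fit.x, 3))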

  20. POLUTE. Forest Air Pollutant Uptake Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murphy, C.E. Jr.; Sinclair, T.R.

    1992-02-13

    POLUTE is a computer model designed to estimate the uptake of air pollutants by forests. The model utilizes submodels to describe atmospheric diffusion immediately above and within the canopy, and into the sink areas within or on the trees. The program implementing the model is general and can be used, with only minor changes, for any gaseous pollutant. The model provides an estimate describing the response of the vegetation-atmosphere system to the environment as related to three types of processes: atmospheric diffusion, diffusion near and inside the absorbing plant, and the physical and chemical processes at the sink on or within the plant.

  1. Evaluating operational vacuum for landfill biogas extraction.

    PubMed

    Fabbricino, Massimiliano

    2007-01-01

    This manuscript proposes a practical methodology for estimating the operational vacuum for landfill biogas extraction from municipal landfills. The procedure is based on two sub-models which simulate landfill gas production from organic waste decomposition and distribution of gas pressure and gas movement induced by suction at a blower station. The two models are coupled in a single mass balance equation, obtaining a relationship between the operational vacuum and the amount of landfill gas that can be extracted from an assigned system of vertical wells. To better illustrate the procedure, it is applied to a case study, where a good agreement between simulated and measured data, within +/- 30%, is obtained.
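
    A toy Python version of the coupling: first-order-decay gas production (a LandGEM-style submodel) balanced against a linear vacuum-flow well characteristic, with the single mass balance solved for the operational vacuum. All parameter values are placeholders, not the paper's calibration:

      # Toy coupling of the two sub-models: first-order-decay gas production
      # balanced against a linear well characteristic Q = k * vacuum, then the
      # single mass balance is solved for the operational vacuum.
      import math

      k_decay, L0 = 0.05, 100.0          # 1/yr decay constant; m^3 CH4 per ton of waste
      tonnages = [(5.0e4, 10), (8.0e4, 6), (6.0e4, 2)]  # (tons deposited, age in years)

      production = sum(m * L0 * k_decay * math.exp(-k_decay * age) for m, age in tonnages)
      production /= 8760.0               # m^3/yr -> m^3/h

      n_wells, k_well = 20, 1.2          # wells; m^3/h extracted per kPa of applied vacuum
      capture_target = 0.75              # fraction of generated gas to capture

      vacuum = capture_target * production / (n_wells * k_well)
      print(f"production ≈ {production:.1f} m^3/h; operational vacuum ≈ {vacuum:.2f} kPa/well")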

  2. Closed-form solutions of performability. [modeling of a degradable buffer/multiprocessor system

    NASA Technical Reports Server (NTRS)

    Meyer, J. F.

    1981-01-01

    Methods which yield closed form performability solutions for continuous valued variables are developed. The models are similar to those employed in performance modeling (i.e., Markovian queueing models) but are extended so as to account for variations in structure due to faults. In particular, the modeling of a degradable buffer/multiprocessor system is considered whose performance Y is the (normalized) average throughput rate realized during a bounded interval of time. To avoid known difficulties associated with exact transient solutions, an approximate decomposition of the model is employed permitting certain submodels to be solved in equilibrium. These solutions are then incorporated in a model with fewer transient states and by solving the latter, a closed form solution of the system's performability is obtained. In conclusion, some applications of this solution are discussed and illustrated, including an example of design optimization.
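
    The "solve certain submodels in equilibrium" step can be illustrated with a minimal Python sketch: compute the stationary distribution of a small continuous-time Markov structure submodel from its generator matrix. The three states and rates below are hypothetical:

      # Sketch of solving a submodel in equilibrium: stationary distribution of a
      # tiny continuous-time Markov chain (3 structure states) from its generator Q.
      # Rates are hypothetical; states could be e.g. {2 processors up, 1 up, 0 up}.
      import numpy as np

      lam, mu = 0.01, 0.5   # failure and repair rates (per hour), placeholders
      Q = np.array([
          [-2 * lam, 2 * lam, 0.0],
          [mu, -(mu + lam), lam],
          [0.0, mu, -mu],
      ])

      # Solve pi Q = 0 with sum(pi) = 1 by replacing one balance equation.
      A = np.vstack([Q.T[:-1], np.ones(3)])
      b = np.array([0.0, 0.0, 1.0])
      pi = np.linalg.solve(A, b)
      print("stationary state probabilities:", np.round(pi, 5))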

  3. Changing the Way We Assess Leadership

    DTIC Science & Technology

    1997-01-01

    article is twofold. The first is to present a theory of leadership for the circumstances described above. The second is to provide managers with a... between management and leadership. While both management and leadership are necessary, the change and complexity associated with the future demand that... the leadership role takes precedence over the management role. This concept of managerial leadership in an environment full of surprising, novel, messy

  4. Project Parapluie: A User Generated Shelter Design for the Recreation of School-Age Children in a Montreal Housing Project.

    ERIC Educational Resources Information Center

    Teasdale, Pierre

    In many Canadian multi-family residential environments there is often a lack of sheltered space for recreation, other than the dwelling unit, where children can play in inclement weather and engage in those kinds of activities which usually cannot take place in the home. (Such activities include those that are too noisy, too messy, too large to…

  5. Self-descriptions on LinkedIn: Recruitment or friendship identity?

    PubMed

    Garcia, Danilo; Cloninger, Kevin M; Granjard, Alexandre; Molander-Söderholm, Kristian; Amato, Clara; Sikström, Sverker

    2018-04-26

    We used quantitative semantics to find clusters of words in LinkedIn users' self-descriptions to an employer or a friend. Some of these clusters discriminated between worker and friend conditions (e.g., flexible vs. caring) and between LinkedIn users with high and low education (e.g., analytical vs. messy). © 2018 The Institute of Psychology, Chinese Academy of Sciences and John Wiley & Sons Australia, Ltd.

  6. Generalization and Parallelization of Messy Genetic Algorithms and Communication in Parallel Genetic Algorithms.

    DTIC Science & Technology

    1992-12-01

    Dynamics and Free Energy Perturbation Methods." Reviews in Computational Chemistry, edited by Kenny B. Lipkowitz and Donald B. Boyd, chapter 8, 295-320... atomic motions during annealing, allows the search to probabilistically move in a locally non-optimal direction. The probability of doing so is... Network processors communicate via communication links. This type of communication is generally very slow relative to other processor activities

  7. Getting back to the rough ground: deception and 'social living'.

    PubMed

    Reddy, Vasudevi

    2007-04-29

    At the heart of the social intelligence hypothesis is the central role of 'social living'. But living is messy and psychologists generally seek to avoid this mess in the interests of getting clean data and cleaner logical explanations. The study of deception as intelligent action is a good example of the dangers of such avoidance. We still do not have a full picture of the development of deceptive actions in human infants and toddlers or an explanation of why it emerges. This paper applies Byrne & Whiten's functional taxonomy of tactical deception to the social behaviour of human infants and toddlers using data from three previous studies. The data include a variety of acts, such as teasing, pretending, distracting and concealing, which are not typically considered in relation to human deception. This functional analysis shows the onset of non-verbal deceptive acts to be surprisingly early. Infants and toddlers seem to be able to communicate false information (about themselves, about shared meanings and about events) as early as true information. It is argued that the development of deception must be a fundamentally social and communicative process and that if we are to understand why deception emerges at all, the scientist needs to get 'back to the rough ground' as Wittgenstein called it and explore the messy social lives in which it develops.

  8. Simulating physiological interactions in a hybrid system of mathematical models.

    PubMed

    Kretschmer, Jörn; Haunsberger, Thomas; Drost, Erick; Koch, Edmund; Möller, Knut

    2014-12-01

    Mathematical models can be deployed to simulate physiological processes of the human organism. Exploiting these simulations, the reactions of a patient to changes in the therapy regime can be predicted. Based on these predictions, medical decision support systems (MDSS) can help optimize medical therapy. An MDSS designed to support mechanical ventilation in critically ill patients should not only consider respiratory mechanics but also other systems of the human organism, such as gas exchange or blood circulation. A specially designed framework allows combining three model families (respiratory mechanics, cardiovascular dynamics, and gas exchange) to predict the outcome of a therapy setting. Elements of the three model families are dynamically combined to form a complex model system with interacting submodels. Tests revealed that complex model combinations are not computationally feasible. In most patients, cardiovascular physiology could be simulated by simplified models, decreasing computational costs. Thus, a simplified cardiovascular model that is able to reproduce basic physiological behavior is introduced. This model consists purely of difference equations and does not require special algorithms to be solved numerically. It is based on a beat-to-beat model which has been extended to react to the intrathoracic pressure levels that are present during mechanical ventilation. The introduced reaction to intrathoracic pressure levels, as found during mechanical ventilation, has been tuned to mimic the behavior of a complex 19-compartment model. Tests revealed that the model represents general system behavior closely comparable to that of the 19-compartment model. Blood pressures were calculated with a maximum deviation of 1.8% in systolic pressure and 3.5% in diastolic pressure, leading to a simulation error of 0.3% in cardiac output. The gas exchange submodel, which reacts to changes in cardiac output, showed a resulting deviation of less than 0.1%. Therefore, the proposed model is usable in combinations where the cardiovascular simulation does not have to be detailed. Computing costs were decreased dramatically, by a factor of 186, compared to a model combination employing the 19-compartment model.
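
    A toy beat-to-beat difference-equation model in the spirit described (not the paper's tuned model): a per-beat Windkessel-like pressure update in Python with an additive intrathoracic-pressure term standing in for mechanical ventilation. All constants are invented for illustration:

      # Toy beat-to-beat difference equations: systolic pressure rises by SV/C plus a
      # partial transmission of intrathoracic pressure; diastole decays over one beat.
      # All constants are invented placeholders.
      import math

      C, R = 1.2, 1.1          # arterial compliance (mL/mmHg), resistance (mmHg*s/mL)
      SV, T = 70.0, 0.8        # stroke volume (mL) and beat period (s)

      def next_beat(dia_prev, p_thoracic):
          """One beat: systolic = diastolic + SV/C (+ airway term); exponential diastole."""
          sys_p = dia_prev + SV / C + 0.5 * p_thoracic
          dia_p = sys_p * math.exp(-T / (R * C))
          return sys_p, dia_p

      dia = 70.0
      for beat in range(5):
          p_th = 8.0 if beat >= 2 else 0.0   # ventilation raises intrathoracic pressure
          sys_p, dia = next_beat(dia, p_th)
          print(f"beat {beat}: systolic {sys_p:6.1f}, diastolic {dia:6.1f} mmHg")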

  9. Seismic‐hazard forecast for 2016 including induced and natural earthquakes in the central and eastern United States

    USGS Publications Warehouse

    Petersen, Mark D.; Mueller, Charles; Moschetti, Morgan P.; Hoover, Susan M.; Llenos, Andrea L.; Ellsworth, William L.; Michael, Andrew J.; Rubinstein, Justin L.; McGarr, Arthur F.; Rukstales, Kenneth S.

    2016-01-01

    The U.S. Geological Survey (USGS) has produced a one‐year (2016) probabilistic seismic‐hazard assessment for the central and eastern United States (CEUS) that includes contributions from both induced and natural earthquakes that are constructed with probabilistic methods using alternative data and inputs. This hazard assessment builds on our 2016 final model (Petersen et al., 2016) by adding sensitivity studies, illustrating hazard in new ways, incorporating new population data, and discussing potential improvements. The model considers short‐term seismic activity rates (primarily 2014–2015) and assumes that the activity rates will remain stationary over short time intervals. The final model considers different ways of categorizing induced and natural earthquakes by incorporating two equally weighted earthquake rate submodels that are composed of alternative earthquake inputs for catalog duration, smoothing parameters, maximum magnitudes, and ground‐motion models. These alternatives represent uncertainties on how we calculate earthquake occurrence and the diversity of opinion within the science community. In this article, we also test sensitivity to the minimum moment magnitude between M 4 and M 4.7 and the choice of applying a declustered catalog with b=1.0 rather than the full catalog with b=1.3. We incorporate two earthquake rate submodels: in the informed submodel we classify earthquakes as induced or natural, and in the adaptive submodel we do not differentiate. The alternative submodel hazard maps both depict high hazard and these are combined in the final model. Results depict several ground‐shaking measures as well as intensity and include maps showing a high‐hazard level (1% probability of exceedance in 1 year or greater). Ground motions reach 0.6g horizontal peak ground acceleration (PGA) in north‐central Oklahoma and southern Kansas, and about 0.2g PGA in the Raton basin of Colorado and New Mexico, in central Arkansas, and in north‐central Texas near Dallas–Fort Worth. The chance of having levels of ground motions corresponding to modified Mercalli intensity (MMI) VI or greater earthquake shaking is 2%–12% per year in north‐central Oklahoma and southern Kansas and New Madrid similar to the chance of damage at sites in high‐hazard portions of California caused by natural earthquakes. Hazard is also significant in the Raton basin of Colorado/New Mexico; north‐central Arkansas; Dallas–Fort Worth, Texas; and in a few other areas. Hazard probabilities are much lower (by about half or more) for exceeding MMI VII or VIII. Hazard is 3‐ to 10‐fold higher near some areas of active‐induced earthquakes than in the 2014 USGS National Seismic Hazard Model (NSHM), which did not consider induced earthquakes. This study in conjunction with the LandScan TM Database (2013) indicates that about 8 million people live in areas of active injection wells that have a greater than 1% chance of experiencing damaging ground shaking (MMI≥VI) in 2016. The final model has high uncertainty, and engineers, regulators, and industry should use these assessments cautiously to make informed decisions on mitigating the potential effects of induced and natural earthquakes.
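
    A minimal Python sketch of the submodel-combination arithmetic (not USGS code, and the rates are hypothetical): average the annual exceedance rates of two equally weighted rate submodels, then convert to a one-year exceedance probability under a Poisson assumption:

      # Minimal sketch of combining equally weighted earthquake-rate submodels:
      # average annual exceedance rates, then convert to a 1-year exceedance
      # probability via P = 1 - exp(-rate * t). Rates are hypothetical.
      import math

      pga_levels = [0.1, 0.2, 0.4, 0.6]            # g
      rate_informed = [0.08, 0.03, 0.008, 0.002]   # annual exceedance rates, submodel 1
      rate_adaptive = [0.12, 0.05, 0.015, 0.004]   # annual exceedance rates, submodel 2
      weights = (0.5, 0.5)

      for pga, r1, r2 in zip(pga_levels, rate_informed, rate_adaptive):
          rate = weights[0] * r1 + weights[1] * r2
          p_1yr = 1.0 - math.exp(-rate)            # Poisson assumption over 1 year
          print(f"PGA {pga:.1f} g: combined rate {rate:.4f}/yr, 1-yr P(exceed) {p_1yr:.3%}")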

  10. A MULTILAYER BIOCHEMICAL DRY DEPOSITION MODEL 1. MODEL FORMULATION

    EPA Science Inventory

    A multilayer biochemical dry deposition model has been developed based on the NOAA Multilayer Model (MLM) to study gaseous exchanges between the soil, plants, and the atmosphere. Most of the parameterizations and submodels have been updated or replaced. The numerical integration ...

  11. Automated Analysis of Renewable Energy Datasets ('EE/RE Data Mining')

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bush, Brian; Elmore, Ryan; Getman, Dan

    This poster illustrates methods to substantially improve the understanding of renewable energy data sets and the depth and efficiency of their analysis through the application of statistical learning methods ('data mining') in the intelligent processing of these often large and messy information sources. The six examples apply methods for anomaly detection, data cleansing, and pattern mining to time-series data (measurements from metering points in buildings) and spatiotemporal data (renewable energy resource datasets).

  12. Pyrolized biochar for heavy metal adsorption

    EPA Pesticide Factsheets

    Removal of copper and lead metal ions from water using pyrolized plant materials. Method can be used to develop a low cost point-of-use device for cleaning contaminated water. This dataset is associated with the following publication: DeMessie, B., E. Sahle-Demessie, and G. Sorial. Cleaning Water Contaminated With Heavy Metal Ions Using Pyrolyzed Banana Peel Adsorbents. Separation Science and Technology. Marcel Dekker Incorporated, New York, NY, USA, 50(16): 2448-2457, (2015).

  13. Anti-C.Diff Pill

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, Anand

    The anti-C.diff pill is being developed for patients who are at high risk for Clostridium difficile, a bacterium that causes diarrhea and serious intestinal conditions like colitis. C. diff is deadly, killing over 30,000 people a year in the United States and costing an average of $42,000 per treatment. The most common treatment is a fecal transplant (FT). The anti-C.diff pill replaces the invasive and messy FT procedure.

  14. A New Perspective on the Foraging Ecology of Apex Predators in the California Current: Results from a Fully Coupled Ecosystem Model

    NASA Astrophysics Data System (ADS)

    Fiechter, J.; Huckstadt, L. A.; Rose, K.; Costa, D. P.; Curchitser, E. N.; Hedstrom, K.; Edwards, C. A.; Moore, A. M.

    2016-02-01

    Results from a fully coupled end-to-end ecosystem model for the California Current Large Marine Ecosystem are used to describe the impact of environmental variability on the foraging ecology of its most abundant apex predator, the California sea lion (Zalophus californianus). The ecosystem model consists of a biogeochemical submodel embedded in a regional ocean circulation submodel, both coupled with a multi-species individual-based submodel for forage fish (sardine and anchovy) and California sea lions. For sea lions, bioenergetics and behavioral attributes are specified using available TOPP (Tagging Of Pacific Predators) data on their foraging patterns and diet in the California Current. Sardine and anchovy are explicitly included in the model as they represent important prey sources for California sea lions and exhibit significant interannual and decadal variability in population abundances. Output from a 20-year run (1989-2008) of the model demonstrates how different physical and biological processes control habitat utilization and foraging success of California sea lions on interannual time scales. A principal component analysis of sea lion foraging patterns indicates that the first mode of variability is alongshore and tied to sardine availability, while the second mode is cross-shore and associated with coastal upwelling intensity (a behavior consistent with male sea lion tracking data collected in 2004 vs. 2005). The results also illustrate how variability in environmental conditions and forage fish distribution affects sea lions' feeding success. While specifically focusing on the foraging ecology of sea lions, our modeling framework has the ability to provide new and unique perspectives on trophic interactions in the California Current, or other regions where similar end-to-end ecosystem models may be implemented.

  15. A new seasonal-deciduous spring phenology submodel in the Community Land Model 4.5: impacts on carbon and water cycling under future climate scenarios.

    PubMed

    Chen, Min; Melaas, Eli K; Gray, Josh M; Friedl, Mark A; Richardson, Andrew D

    2016-11-01

    A spring phenology model that combines photoperiod with accumulated heating and chilling to predict spring leaf-out dates is optimized using PhenoCam observations and coupled into the Community Land Model (CLM) 4.5. In head-to-head comparison (using satellite data from 2003 to 2013 for validation) for model grid cells over the Northern Hemisphere deciduous broadleaf forests (5.5 million km²), we found that the revised model substantially outperformed the standard CLM seasonal-deciduous spring phenology submodel at both coarse (0.9 × 1.25°) and fine (1 km) scales. The revised model also does a better job of representing recent (decadal) phenological trends observed globally by MODIS, as well as long-term trends (1950-2014) in the PEP725 European phenology dataset. Moreover, forward model runs suggested a stronger advancement (up to 11 days) of spring leaf-out by the end of the 21st century for the revised model. Trends toward earlier advancement are predicted for deciduous forests across the whole Northern Hemisphere boreal and temperate deciduous forest region for the revised model, whereas the standard model predicts earlier leaf-out in colder regions, but later leaf-out in warmer regions, and no trend globally. The earlier spring leaf-out predicted by the revised model resulted in enhanced gross primary production (up to 0.6 Pg C yr⁻¹) and evapotranspiration (up to 24 mm yr⁻¹) when results were integrated across the study region. These results suggest that the standard seasonal-deciduous submodel in CLM should be reconsidered, otherwise substantial errors in predictions of key land-atmosphere interactions and feedbacks may result. © 2016 John Wiley & Sons Ltd.
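
    The combination of photoperiod, accumulated forcing, and chilling described above can be sketched as follows. The functional form and every parameter value here are illustrative only, not the calibrated CLM/PhenoCam model.

    ```python
    import numpy as np

    def leafout_day(tmean, daylength, t_base=5.0, t_chill=0.0,
                    photo_min=11.0, a=600.0, b=-0.01):
        """Toy spring phenology model: leaf-out occurs when accumulated
        growing-degree forcing exceeds a threshold that declines with
        accumulated chilling, and only once daylength passes photo_min.
        tmean, daylength: daily arrays starting 1 January."""
        chill = np.cumsum(tmean < t_chill)                  # chilling days
        force = np.cumsum(np.maximum(tmean - t_base, 0.0))  # degree-days
        threshold = a * np.exp(b * chill)                   # chilling relaxes it
        ok = (force >= threshold) & (daylength >= photo_min)
        return int(np.argmax(ok)) + 1 if ok.any() else None  # day of year

    days = np.arange(365)
    tmean = -5 + 20 * np.sin(2 * np.pi * (days - 80) / 365)   # synthetic climate
    daylen = 12 + 4 * np.sin(2 * np.pi * (days - 80) / 365)
    print(leafout_day(tmean, daylen))
    ```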

  16. Quantifying the uncertainty of nonpoint source attribution in distributed water quality models: A Bayesian assessment of SWAT's sediment export predictions

    NASA Astrophysics Data System (ADS)

    Wellen, Christopher; Arhonditsis, George B.; Long, Tanya; Boyd, Duncan

    2014-11-01

    Spatially distributed nonpoint source watershed models are essential tools to estimate the magnitude and sources of diffuse pollution. However, little work has been undertaken to understand the sources and ramifications of the uncertainty involved in their use. In this study we conduct the first Bayesian uncertainty analysis of the water quality components of the SWAT model, one of the most commonly used distributed nonpoint source models. Working in Southern Ontario, we apply three Bayesian configurations for calibrating SWAT to Redhill Creek, an urban catchment, and Grindstone Creek, an agricultural one. We answer four interrelated questions: can SWAT determine suspended sediment sources with confidence when end-of-basin data is used for calibration? How does uncertainty propagate from the discharge submodel to the suspended sediment submodels? Do the estimated sediment sources vary when different calibration approaches are used? Can we combine the knowledge gained from different calibration approaches? We show that: (i) despite reasonable fit at the basin outlet, the simulated sediment sources are subject to uncertainty sufficient to undermine the typical approach of reliance on a single, best fit simulation; (ii) more than a third of the uncertainty of sediment load predictions may stem from the discharge submodel; (iii) estimated sediment sources do vary significantly across the three statistical configurations of model calibration despite end-of-basin predictions being virtually identical; and (iv) Bayesian model averaging is an approach that can synthesize predictions when a number of adequate distributed models make divergent source apportionments. We conclude with recommendations for future research to reduce the uncertainty encountered when using distributed nonpoint source models for source apportionment.
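
    Point (iv), Bayesian model averaging across calibration configurations, amounts to weighting each configuration's source apportionment by its normalized model evidence. A minimal sketch, with all numbers hypothetical:

    ```python
    import numpy as np

    # Hypothetical sediment-source fractions (urban, agricultural, channel)
    # estimated by three SWAT calibration configurations, plus each
    # configuration's log marginal likelihood (all values invented).
    sources = np.array([[0.55, 0.30, 0.15],
                        [0.35, 0.45, 0.20],
                        [0.45, 0.25, 0.30]])
    log_evidence = np.array([-120.4, -121.0, -123.7])

    w = np.exp(log_evidence - log_evidence.max())
    w /= w.sum()                   # posterior model weights
    bma_sources = w @ sources      # model-averaged apportionment
    print(w.round(3), bma_sources.round(3))
    ```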

  17. Holomorphic solutions of the susy Grassmannian σ-model and gauge invariance

    NASA Astrophysics Data System (ADS)

    Hussin, V.; Lafrance, M.; Yurduşen, İ.; Zakrzewski, W. J.

    2018-05-01

    We study the gauge invariance of the supersymmetric Grassmannian sigma model. It is richer than that of its purely bosonic submodel, and we show how to use it in order to reduce some constant curvature holomorphic solutions of the model to simpler expressions.

  18. BACTERIAL MORTALITY DUE TO SOLAR RADIATION, COMPARING EXPERIMENTAL AND STATISTICAL EVIDENCE

    EPA Science Inventory

    Many researchers report that sunlight is a primary stressor of beach indicator bacteria. Some water quality models include code that quantifies the effect of radiation on bacterial decay. For example, the EPA Visual Plumes model includes two coliform and one enterococcus submodel...

  19. PSO-MISMO modeling strategy for multistep-ahead time series prediction.

    PubMed

    Bao, Yukun; Xiong, Tao; Hu, Zhongyi

    2014-05-01

    Multistep-ahead time series prediction is one of the most challenging research topics in the field of time series modeling and prediction, and is continually under research. Recently, the multiple-input several multiple-outputs (MISMO) modeling strategy has been proposed as a promising alternative for multistep-ahead time series prediction, exhibiting advantages compared with the two currently dominating strategies, the iterated and the direct strategies. Built on the established MISMO strategy, this paper proposes a particle swarm optimization (PSO)-based MISMO modeling strategy, which is capable of determining the number of sub-models in a self-adaptive mode, with varying prediction horizons. Rather than deriving crisp divides with equal-sized prediction horizons from the established MISMO, the proposed PSO-MISMO strategy, implemented with neural networks, employs a heuristic to create flexible divides with varying sizes of prediction horizons and to generate corresponding sub-models, providing considerable flexibility in model construction. The approach has been validated with simulated and real datasets.
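
    A bare sketch of the MISMO divide-and-fit structure follows, with ordinary least-squares sub-models standing in for the neural networks and the PSO search over divide sizes omitted; all names are ours, not the authors'.

    ```python
    import numpy as np

    def fit_mismo(series, n_lags, divides):
        """Fit one least-squares sub-model per output chunk.
        divides: chunk sizes summing to the prediction horizon, e.g. [2, 2, 2].
        (In PSO-MISMO these sizes are what the swarm optimizes.)"""
        H = sum(divides)
        X, Y = [], []
        for t in range(n_lags, len(series) - H + 1):
            X.append(series[t - n_lags:t])
            Y.append(series[t:t + H])
        X, Y = np.asarray(X), np.asarray(Y)
        models, start = [], 0
        for size in divides:
            coef, *_ = np.linalg.lstsq(X, Y[:, start:start + size], rcond=None)
            models.append((start, size, coef))
            start += size
        return models

    def predict(models, last_window):
        out = np.empty(sum(size for _, size, _ in models))
        for start, size, coef in models:
            out[start:start + size] = last_window @ coef
        return out

    s = np.sin(np.linspace(0, 20, 300))
    m = fit_mismo(s, n_lags=10, divides=[2, 2, 2])
    print(predict(m, s[-10:]))      # 6-step-ahead forecast
    ```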

  20. A Joint Model for Longitudinal Measurements and Survival Data in the Presence of Multiple Failure Types

    PubMed Central

    Elashoff, Robert M.; Li, Gang; Li, Ning

    2009-01-01

    In this article we study a joint model for longitudinal measurements and competing risks survival data. Our joint model provides a flexible approach to handle possible nonignorable missing data in the longitudinal measurements due to dropout. It is also an extension of previous joint models with a single failure type, offering a possible way to model informatively censored events as a competing risk. Our model consists of a linear mixed effects submodel for the longitudinal outcome and a proportional cause-specific hazards frailty submodel (Prentice et al., 1978, Biometrics 34, 541-554) for the competing risks survival data, linked together by some latent random effects. We propose to obtain the maximum likelihood estimates of the parameters by an expectation maximization (EM) algorithm and to estimate their standard errors using a profile likelihood method. The developed method works well in our simulation studies and is applied to a clinical trial for scleroderma lung disease. PMID:18162112
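
    In generic notation (ours; the paper's exact specification may differ), the two linked submodels can be written as:

    ```latex
    \begin{align*}
    Y_i(t) &= \mathbf{X}_i(t)^{\top}\boldsymbol{\beta}
              + \mathbf{Z}_i(t)^{\top}\mathbf{b}_i + \varepsilon_i(t),
              \qquad \varepsilon_i(t) \sim N(0,\sigma^{2}),\\
    \lambda_{ik}(t) &= \lambda_{0k}(t)\,
              \exp\!\bigl(\mathbf{W}_i^{\top}\boldsymbol{\gamma}_k + \nu_k u_i\bigr),
              \qquad k = 1,\dots,K,
    \end{align*}
    ```

    where the random effects b_i and a latent frailty u_i associated with them appear in both submodels; this shared latent structure is what ties dropout and competing-risk events to the longitudinal trajectory.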

  1. Preliminary evaluation of a lake whitefish (Coregonus clupeaformis) bioenergetics model

    USGS Publications Warehouse

    Madenjian, Charles P.; Pothoven, Steven A.; Schneeberger, Philip J.; O'Connor, Daniel V.; Brandt, Stephen B.

    2005-01-01

    We conducted a preliminary evaluation of a lake whitefish (Coregonus clupeaformis) bioenergetics model by applying the model to size-at-age data for lake whitefish from northern Lake Michigan. We then compared estimates of gross growth efficiency (GGE) from our bioenergetics model with previously published estimates of GGE for bloater (C. hoyi) in Lake Michigan and for lake whitefish in Quebec. According to our model, the GGE of Lake Michigan lake whitefish decreased from 0.075 to 0.02 as age increased from 2 to 5 years. In contrast, the GGE of lake whitefish in Quebec inland waters decreased from 0.12 to 0.05 for the same ages. When our swimming-speed submodel was replaced with a submodel that had been used for lake trout (Salvelinus namaycush) in Lake Michigan and an observed predator energy density for Lake Michigan lake whitefish was employed, our model predicted that the GGE of Lake Michigan lake whitefish decreased from 0.12 to 0.04 as age increased from 2 to 5 years.

  2. Reactive solute transport in streams: 1. Development of an equilibrium- based model

    USGS Publications Warehouse

    Runkel, Robert L.; Bencala, Kenneth E.; Broshears, Robert E.; Chapra, Steven C.

    1996-01-01

    An equilibrium-based solute transport model is developed for the simulation of trace metal fate and transport in streams. The model is formed by coupling a solute transport model with a chemical equilibrium submodel based on MINTEQ. The solute transport model considers the physical processes of advection, dispersion, lateral inflow, and transient storage, while the equilibrium submodel considers the speciation and complexation of aqueous species, precipitation/dissolution and sorption. Within the model, reactions in the water column may result in the formation of solid phases (precipitates and sorbed species) that are subject to downstream transport and settling processes. Solid phases on the streambed may also interact with the water column through dissolution and sorption/desorption reactions. Consideration of both mobile (water-borne) and immobile (streambed) solid phases requires a unique set of governing differential equations and solution techniques that are developed herein. The partial differential equations describing physical transport and the algebraic equations describing chemical equilibria are coupled using the sequential iteration approach.
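
    The transport component of such a model typically has an advection-dispersion form with lateral inflow and transient storage; a representative OTIS-like statement (not necessarily the paper's exact equations) is:

    ```latex
    \begin{align*}
    \frac{\partial C}{\partial t} &=
        -\frac{Q}{A}\frac{\partial C}{\partial x}
        + \frac{1}{A}\frac{\partial}{\partial x}\!\left(AD\frac{\partial C}{\partial x}\right)
        + \frac{q_L}{A}\left(C_L - C\right)
        + \alpha\left(C_S - C\right) + r_C ,\\
    \frac{\partial C_S}{\partial t} &=
        \alpha\,\frac{A}{A_S}\left(C - C_S\right) + r_{C_S} ,
    \end{align*}
    ```

    where C and C_S are main-channel and storage-zone concentrations, Q the discharge, A and A_S the channel and storage cross-sectional areas, D the dispersion coefficient, q_L and C_L the lateral inflow rate and concentration, α the storage exchange coefficient, and the r terms the chemical source/sink contributions supplied by the equilibrium submodel.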

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kress, Joel David

    The development and scale-up of cost-effective carbon capture processes is of paramount importance to enable the widespread deployment of these technologies to significantly reduce greenhouse gas emissions. The U.S. Department of Energy initiated the Carbon Capture Simulation Initiative (CCSI) in 2011 with the goal of developing a computational toolset that would enable industry to more effectively identify, design, scale up, operate, and optimize promising concepts. The first half of the presentation will introduce the CCSI Toolset, consisting of basic data submodels, steady-state and dynamic process models, process optimization and uncertainty quantification tools, an advanced dynamic process control framework, and high-resolution filtered computational fluid dynamics (CFD) submodels. The second half of the presentation will describe a high-fidelity model of a mesoporous silica supported, polyethylenimine (PEI)-impregnated solid sorbent for CO2 capture. The sorbent model includes a detailed treatment of transport and amine-CO2-H2O interactions based on quantum chemistry calculations. Using a Bayesian approach for uncertainty quantification, we calibrate the sorbent model to thermogravimetric (TGA) data.

  4. Neural network submodel as an abstraction tool: relating network performance to combat outcome

    NASA Astrophysics Data System (ADS)

    Jablunovsky, Greg; Dorman, Clark; Yaworsky, Paul S.

    2000-06-01

    Simulation of Command and Control (C2) networks has historically emphasized individual system performance with little architectural context or credible linkage to 'bottom-line' measures of combat outcomes. Renewed interest in modeling C2 effects and relationships stems from emerging network-intensive operational concepts. This demands improved methods to span the analytical hierarchy between C2 system performance models and theater-level models. Neural network technology offers a modeling approach that can abstract the essential behavior of higher resolution C2 models within a campaign simulation. The proposed methodology uses off-line learning of the relationships between network state and campaign-impacting performance of a complex C2 architecture, and then approximation of that performance as a time-varying parameter in an aggregated simulation. Ultimately, this abstraction tool offers an increased fidelity of C2 system simulation that captures dynamic network dependencies within a campaign context.
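
    The proposed abstraction amounts to fitting a fast statistical surrogate to input-output pairs generated off-line by the high-resolution C2 model, then querying the surrogate inside the campaign simulation. A minimal sketch, with a synthetic stand-in for that model (all names and data hypothetical):

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Hypothetical training data exported off-line from a high-resolution C2
    # model: network-state features -> campaign-impacting performance score.
    rng = np.random.default_rng(2)
    state = rng.random((2000, 4))   # e.g. connectivity, latency, load, attrition
    perf = 1.0 / (1.0 + np.exp(-(2 * state[:, 0] - 3 * state[:, 1] + state[:, 2])))

    surrogate = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000,
                             random_state=0).fit(state, perf)

    # Inside the campaign simulation, the cheap surrogate replaces the
    # expensive model as a time-varying C2 performance parameter:
    current_state = np.array([[0.8, 0.2, 0.5, 0.1]])
    print(surrogate.predict(current_state))
    ```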

  5. An operations and command system for the Extreme Ultraviolet Explorer

    NASA Technical Reports Server (NTRS)

    Muscettola, Nicola; Korsmeyer, David J.; Olson, Eric C.; Wong, Gary

    1994-01-01

    About 40% of the budget of a scientific spacecraft mission is usually consumed by Mission Operations & Data Analysis (MO&DA) with MO driving these costs. In the current practice, MO is separated from spacecraft design and comes in focus relatively late in the mission life cycle. As a result, spacecraft may be designed that are very difficult to operate. NASA centers have extensive MO expertise but often lessons learned in one mission are not exploited for other parallel or future missions. A significant reduction of MO costs is essential to ensure a continuing and growing access to space for the scientific community. We are addressing some of these issues with a highly automated payload operations and command system for an existing mission, the Extreme Ultraviolet Explorer (EUVE). EUVE is currently operated jointly by the Goddard Space Flight Center (GSFC), responsible for spacecraft operations, and the Center for Extreme Ultraviolet Astrophysics (CEA) of the University of California, Berkeley, which controls the telescopes and scientific instruments aboard the satellite. The new automated system is being developed by a team including personnel from the NASA Ames Research Center (ARC), the Jet Propulsion Laboratory (JPL) and the Center for EUV Astrophysics (CEA). An important goal of the project is to provide AI-based technology that can be easily operated by nonspecialists in AI. Another important goal is the reusability of the techniques for other missions. Models of the EUVE spacecraft need to be built both for planning/scheduling and for monitoring. In both cases, our modeling tools allow the assembly of a spacecraft model from separate sub-models of the various spacecraft subsystems. These sub-models are reusable; therefore, building mission operations systems for another small satellite mission will require choosing pre-existing modules, reparametrizing them with respect to the actual satellite telemetry information, and reassembling them in a new model. We briefly describe the EUVE mission and indicate why it is particularly suitable for the task. Then we briefly outline our current work in mission planning/scheduling and spacecraft and instrument health monitoring.

  6. The Stirring of Oceanic Crust in the Mantle: How it Changes with Time?

    NASA Astrophysics Data System (ADS)

    McNamara, A. K.; Li, M.

    2017-12-01

    The Large Low Shear Velocity Provinces (LLSVPs) beneath Africa and the Pacific are sizeable seismic anomalies in the lower mantle that likely play a key role in global mantle convection. Unfortunately, we do not know what they are, and hypotheses include thermal megaplumes, plume clusters, primordial piles, thermochemical superplumes, and large accumulations of ancient, subducted oceanic crust. Discovering which of these is the cause of LLSVPs will provide fundamental understanding of the nature of global-scale mantle convection. Here, we focus on two of the possibilities: primordial piles and accumulations of subducted oceanic crust. In previous work, it seemed clear that each provides a distinguishably different morphology: primordial piles are clearly defined entities with sharp edges and tops, whereas accumulations of oceanic crust appear quite messy and have fuzzy, advective boundaries, particularly at their tops. Therefore, it was thought that by performing seismic studies that define the tops of LLSVPs, we could distinguish between these possibilities. Here, we ask the following question: Can piles formed by ancient oceanic crust eventually "clean themselves up" and evolve into structures that more closely resemble what we think primordial piles should look like at the present day? We present geodynamics work that demonstrates that this is indeed the case. The driving mechanism is a thinning of oceanic crust through time (as the mantle cools, there is less melt at ridges, and therefore, crust is thinner). We find that in the early, hotter Earth, if crust is on the order of 20-30 km thick, it will accumulate into messy piles at the base of the mantle. Once crust thins below a critical thickness, it will stop accumulating and be stirred into the background mantle instead. Once crust stops accumulating in the lower mantle, the pre-existing messy piles begin to sharpen into well-defined piles with sharp edges and tops. Furthermore, we find that this process leads to a characteristically different thermal evolution, in which the upper mantle cools more rapidly during the accumulation phase, and then heats up again afterwards. In conclusion, we find that the seismic detection of sharp edges on LLSVPs cannot be used to exclude accumulation of oceanic crust as a possible cause of LLSVPs.

  7. Modeling of Heat Transfer in Rooms in the Modelica "Buildings" Library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wetter, Michael; Zuo, Wangda; Nouidui, Thierry Stephane

    This paper describes the implementation of the room heat transfer model in the free open-source Modelica "Buildings" library. The model can be used as a single room or to compose a multizone building model. We discuss how the model is decomposed into submodels for the individual heat transfer phenomena. We also discuss the main physical assumptions. The room model can be parameterized to use different modeling assumptions, leading to linear or non-linear differential algebraic systems of equations. We present numerical experiments that show how these assumptions affect computing time and accuracy for selected cases of the ANSI/ASHRAE Standard 140-2007 envelope validation tests.

  8. CFD studies on biomass thermochemical conversion.

    PubMed

    Wang, Yiqun; Yan, Lifeng

    2008-06-01

    Thermochemical conversion of biomass offers an efficient and economical process to provide gaseous, liquid and solid fuels and to prepare chemicals derived from biomass. Computational fluid dynamics (CFD) modeling applications on biomass thermochemical processes help to optimize the design and operation of thermochemical reactors. Recent progress in numerical techniques and computing efficiency has advanced CFD as a widely used approach to provide efficient design solutions in industry. This paper introduces the fundamentals involved in developing a CFD solution. Mathematical equations governing the fluid flow, heat and mass transfer, and chemical reactions in thermochemical systems are described, and sub-models for individual processes are presented. It provides a review of various applications of CFD in the biomass thermochemical process field.
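
    The governing equations referred to here can be summarized by the generic scalar-transport form that CFD codes discretise; a representative statement (not tied to any particular solver) is:

    ```latex
    \frac{\partial(\rho\phi)}{\partial t}
      + \nabla\cdot\left(\rho\,\mathbf{u}\,\phi\right)
      = \nabla\cdot\left(\Gamma_{\phi}\,\nabla\phi\right) + S_{\phi},
    ```

    where choosing φ = 1, u, h, or Y_k recovers the continuity, momentum, energy, and species equations respectively, and Γ_φ and S_φ are the effective diffusivity and the source term (e.g. heat release from pyrolysis or gasification reactions).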

  9. Multi-model approach to characterize human handwriting motion.

    PubMed

    Chihi, I; Abdelkrim, A; Benrejeb, M

    2016-02-01

    This paper deals with characterization and modelling of human handwriting motion from two forearm muscle activity signals, called electromyography signals (EMG). In this work, an experimental approach was used to record the coordinates of a pen tip moving on the (x, y) plane and EMG signals during the handwriting act. The main purpose is to design a new mathematical model which characterizes this biological process. Based on a multi-model approach, this system was originally developed to generate letters and geometric forms written by different writers. A Recursive Least Squares algorithm is used to estimate the parameters of each sub-model of the multi-model basis. Simulations show good agreement between predicted results and the recorded data.
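
    The Recursive Least Squares estimator used for each sub-model is compact enough to sketch; this is the textbook RLS recursion with a forgetting factor, not the authors' exact implementation.

    ```python
    import numpy as np

    def rls(X, y, lam=0.99, delta=100.0):
        """Recursive least squares with forgetting factor lam.
        X: (n_samples, n_features) regressors (e.g. lagged EMG features),
        y: (n_samples,) target (e.g. pen-tip x or y coordinate)."""
        n, d = X.shape
        theta = np.zeros(d)            # parameter estimate
        P = delta * np.eye(d)          # inverse covariance
        for k in range(n):
            x = X[k]
            Px = P @ x
            g = Px / (lam + x @ Px)    # gain vector
            theta = theta + g * (y[k] - x @ theta)
            P = (P - np.outer(g, Px)) / lam
        return theta

    # Example: recover known coefficients from noisy data.
    rng = np.random.default_rng(3)
    X = rng.random((500, 3))
    y = X @ np.array([1.5, -0.7, 0.3]) + 0.01 * rng.standard_normal(500)
    print(rls(X, y).round(3))
    ```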

  10. CFD Studies on Biomass Thermochemical Conversion

    PubMed Central

    Wang, Yiqun; Yan, Lifeng

    2008-01-01

    Thermochemical conversion of biomass offers an efficient and economical process to provide gaseous, liquid and solid fuels and to prepare chemicals derived from biomass. Computational fluid dynamics (CFD) modeling applications on biomass thermochemical processes help to optimize the design and operation of thermochemical reactors. Recent progress in numerical techniques and computing efficiency has advanced CFD as a widely used approach to provide efficient design solutions in industry. This paper introduces the fundamentals involved in developing a CFD solution. Mathematical equations governing the fluid flow, heat and mass transfer, and chemical reactions in thermochemical systems are described, and sub-models for individual processes are presented. It provides a review of various applications of CFD in the biomass thermochemical process field. PMID:19325848

  11. POLUTE; forest air pollutant uptake model. [IBM360,370; CSMP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murphy, C.E.

    POLUTE is a computer model designed to estimate the uptake of air pollutants by forests. The model utilizes submodels to describe atmospheric diffusion immediately above and within the canopy, and into the sink areas within or on the trees. The program implementing the model is general and can be used, with only minor changes, for any gaseous pollutant. The model provides an estimate describing the response of the vegetation-atmosphere system to the environment as related to three types of processes: atmospheric diffusion, diffusion near and inside the absorbing plant, and the physical and chemical processes at the sink on or within the plant. IBM360,370; CSMP; OS/370.

  12. The induced electric field due to a current transient

    NASA Astrophysics Data System (ADS)

    Beck, Y.; Braunstein, A.; Frankental, S.

    2007-05-01

    Calculations and measurements of the electric fields induced by a lightning strike are important for understanding the phenomenon and developing effective protection systems. In this paper, a novel approach to the calculation of the electric fields due to lightning strikes, using a relativistic approach, is presented. This approach is based on a known current wave-pair model representing the lightning current wave. The model presented is one that describes the lightning current wave, either at the first stage of the descending charge wave from the cloud or at the later stage of the return stroke. The electric fields computed are cylindrically symmetric. A simplified method for the calculation of the electric field is achieved by using special relativity theory and relativistic considerations. The proposed approach, described in this paper, is based on simple expressions (by applying Coulomb's law) compared with the much more complicated partial differential equations based on Maxwell's equations. A straightforward method of calculating the electric field due to a lightning strike, modelled as a negative-positive (NP) wave-pair, is obtained by using special relativity theory to calculate the 'velocity field' and relativistic concepts to calculate the 'acceleration field'. These fields are the basic elements required for calculating the total field resulting from the current wave-pair model. Moreover, a modified, simpler method using sub-models is presented. The sub-models are filaments of either static charges or charges at constant velocity only. Combining these simple sub-models yields the total wave-pair model. The results fully agree with those obtained by solving Maxwell's equations for the discussed problem.
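
    For orientation, the 'velocity field' of a point charge in uniform motion has a standard closed form (a textbook result, quoted here rather than taken from the paper):

    ```latex
    \mathbf{E}_{\mathrm{vel}}
      = \frac{q}{4\pi\varepsilon_0}\,
        \frac{1-\beta^{2}}{\bigl(1-\beta^{2}\sin^{2}\theta\bigr)^{3/2}}\,
        \frac{\hat{\mathbf{r}}}{r^{2}},
    \qquad \beta = \frac{v}{c},
    ```

    where θ is the angle between the velocity and the line from the charge's present position to the field point; the 'acceleration field' adds a 1/r radiation term that vanishes for charges at constant velocity.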

  13. Linking landscape characteristics to local grizzly bear abundance using multiple detection methods in a hierarchical model

    USGS Publications Warehouse

    Graves, T.A.; Kendall, Katherine C.; Royle, J. Andrew; Stetz, J.B.; Macleod, A.C.

    2011-01-01

    Few studies link habitat to grizzly bear Ursus arctos abundance and these have not accounted for the variation in detection or spatial autocorrelation. We collected and genotyped bear hair in and around Glacier National Park in northwestern Montana during the summer of 2000. We developed a hierarchical Markov chain Monte Carlo model that extends the existing occupancy and count models by accounting for (1) spatially explicit variables that we hypothesized might influence abundance; (2) separate sub-models of detection probability for two distinct sampling methods (hair traps and rub trees) targeting different segments of the population; (3) covariates to explain variation in each sub-model of detection; (4) a conditional autoregressive term to account for spatial autocorrelation; (5) weights to identify most important variables. Road density and per cent mesic habitat best explained variation in female grizzly bear abundance; spatial autocorrelation was not supported. More female bears were predicted in places with lower road density and with more mesic habitat. Detection rates of females increased with rub tree sampling effort. Road density best explained variation in male grizzly bear abundance and spatial autocorrelation was supported. More male bears were predicted in areas of low road density. Detection rates of males increased with rub tree and hair trap sampling effort and decreased over the sampling period. We provide a new method to (1) incorporate multiple detection methods into hierarchical models of abundance; (2) determine whether spatial autocorrelation should be included in final models. Our results suggest that the influence of landscape variables is consistent between habitat selection and abundance in this system.

  14. Modeling of the Contaminated Sediment in the Erft River

    NASA Astrophysics Data System (ADS)

    Hu, Wei; Westrich, Bernhard; Rode, Michael

    2010-05-01

    Sediment transport processes play an important role in surface water systems, coupled with rainfall-runoff and contaminant transport. Pollutants like heavy metals, adsorbed mainly by fine sediment particles, can be deposited, eroded, or transported further downstream. When toxic pollutants that were deposited earlier and covered by cleaner sediment are remobilized by large flow events such as floods, they pose a hidden threat to human health and the environment. In the Erft River, due to past mining activities, the impact of heavy metal release from the tributary Veybach on downstream water and sediment quality is significant. Recent measurements show a decreasing concentration trend of heavy metals in the river bed sediment downstream of the Veybach. The one-dimensional hydrodynamic model COSMOS is used to model the complicated water flow, sediment erosion, deposition, and contaminant mixing and transport in the mainstream of the Erft River. It is based on a finite-difference formulation and consists of a one-dimensional, unsteady sub-model of flow and transport, coupled with a sub-model of the layered sediment bed. The model accounts for the following governing physical-chemical processes: convective and dispersive transport, turbulent mixing at the deposited sediment surface, deposition, consolidation, aging and erosion of sediment, adsorption-desorption of pollutants to suspended particles, and losses of pollutants due to decay or volatilization. The results reproduce well the decreasing profile of the pollutant concentration in the river bed sediment. Further modeling will analyze the influence of the mixing process at the water-riverbed interface on contaminant transport, and the impact of hydrological scenarios on the remobilization of stored pollutants and its negative consequences for the river basin.

  15. A Caveat Note on Tuning in the Development of Coupled Climate Models

    NASA Astrophysics Data System (ADS)

    Dommenget, Dietmar; Rezny, Michael

    2018-01-01

    State-of-the-art coupled general circulation models (CGCMs) have substantial errors in their simulations of climate. In particular, these errors can lead to large uncertainties in the simulated climate response (both globally and regionally) to a doubling of CO2. Currently, tuning of the parameterization schemes in CGCMs is a significant part of the development process. It is not clear whether such tuning actually improves models. The tuning process is (in general) neither documented nor reproducible. Alternative methods such as flux correction are not used, nor is it clear whether such methods would perform better. In this study, ensembles of perturbed-physics experiments are performed with the Globally Resolved Energy Balance (GREB) model to test the impact of tuning. The work illustrates that tuning has, on average, limited skill given the complexity of the system, the limited computing resources, and the limited observations available to optimize parameters. While tuning may improve model performance (such as reproducing observed past climate), it will not get closer to the "true" physics, nor will it significantly improve future climate change projections. Tuning will introduce artificial compensating error interactions between submodels that will hamper further model development. By contrast, flux corrections perform well in most, but not all, aspects. A main advantage of flux correction is that it is much cheaper, simpler, and more transparent, and it does not introduce artificial error interactions between submodels. These GREB model experiments should be considered a pilot study to motivate further CGCM studies that address the issues of model tuning.

  16. THE LAKE MICHIGAN MASS BALANCE PROJECT: QUALITY ASSURANCE PLAN FOR MATHEMATICAL MODELLING

    EPA Science Inventory

    This report documents the quality assurance process for the development and application of the Lake Michigan Mass Balance Models. The scope includes the overall modeling framework as well as the specific submodels that are linked to form a comprehensive synthesis of physical, che...

  17. Flexible Energy Scheduling Tool for Integrating Variable Generation | Grid

    Science.gov Websites

    FESTIV consists of security-constrained unit commitment, security-constrained economic dispatch, and automatic generation control sub-models, so that different sub-model resolutions and operating strategies can be explored. FESTIV produces not only economic metrics but also…

  18. On the Development of Spray Submodels Based on Droplet Size Moments

    NASA Astrophysics Data System (ADS)

    Beck, J. C.; Watkins, A. P.

    2002-11-01

    Hitherto, all polydisperse spray models have been based on discretising the liquid flow field into groups of equally sized droplets. The authors have recently developed a spray model that captures the full polydisperse nature of the spray flow without using droplet size classes (Beck, 2000, Ph.D. thesis, UMIST; Beck and Watkins, 2001, Proc. R. Soc. London A). The parameters used to describe the distribution of droplet sizes are the moments of the droplet size distribution function. Transport equations are written for the two moments which represent the liquid mass and surface area, and two more moments representing the sum of drop radii and droplet number are approximated via use of a presumed distribution function, which is allowed to vary in space and time. The velocities to be used in the two transport equations are obtained by defining moment-average quantities and constructing further transport equations for the relevant moment-average velocities. An equation for the energy of the liquid phase and standard gas phase equations, including a k-ɛ turbulence model, are also solved. All the equations are solved in an Eulerian framework using the finite-volume approach, and the phases are coupled through source terms. Effects such as interphase drag, droplet breakup, and droplet-droplet collisions are also captured through the use of source terms. The development of the submodels to describe these effects is the subject of this paper. All the source terms for the hydrodynamics of the spray are derived in this paper in terms of the four moments of the droplet size distribution in order to find the net effect on the whole spray flow field. The development of similar submodels to describe heat and mass transfer effects between the phases is the subject of a further paper (Beck and Watkins, 2001, J. Heat Fluid Flow). The model has been applied to a wide variety of different sprays, including high-pressure diesel sprays, wide-angle solid-cone water sprays, hollow-cone sprays, and evaporating sprays. The comparisons of the results with experimental data show that the model performs well. The interphase drag model, along with the model for the turbulent dispersion of the liquid, produces excellent agreement in the spray penetration results, and the moment-average velocity approach gives good radial distributions of droplet size, showing the capability of the model to predict polydisperse behaviour. Good submodel performance results in droplet breakup, collisions, and evaporation effects (see Beck and Watkins, 2001, J. Heat Fluid Flow) also being captured successfully.
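
    The four transported moments referred to above can be stated compactly; in the usual notation (ours):

    ```latex
    Q_{\gamma} = \int_{0}^{\infty} r^{\gamma}\, n(r)\, \mathrm{d}r,
    \qquad \gamma = 0,1,2,3,
    \qquad r_{32} = \frac{Q_{3}}{Q_{2}},
    ```

    so Q_3 and Q_2 are proportional to the liquid volume (mass) and surface area, Q_1 and Q_0 are the sum of drop radii and the droplet number, and ratios such as the Sauter mean radius r_32 characterize the local size distribution.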

  19. The Messiness of Meaning Making: Examining the Affordances of the Digital Space as a Mentoring and Tutoring Space for the Acquisition of Academic Literacy

    ERIC Educational Resources Information Center

    Arend, Moeain; Hunma, Aditi; Hutchings, Catherine; Nomdo, Gideon

    2017-01-01

    Having incorporated a digital aspect to our academic literacy course, and having monitored this over the last three years, we have come to believe that online mentoring can serve as an essential form of tutoring and mentoring. Our study is located in the field of New Literacy Studies and examines the affordances of a digital space in a first year…

  20. STS-46 aft flight deck payload station "Marsha's workstation" aboard OV-104

    NASA Image and Video Library

    2012-11-19

    STS046-01-024 (31 July-8 Aug 1992) --- This area on the Space Shuttle Atlantis' flight deck forward port side was referred to as "Marsha's (Ivins) work station" by fellow crew members who good-naturedly kidded the mission specialist and who usually added various descriptive modifiers such as "messy" or "cluttered". Food, cameras, camera gear, cassettes, cable, flight text material and other paraphernalia can be seen in the area, just behind the commander's station.

  1. Diffusion of synthetic biology: a challenge to biosafety.

    PubMed

    Schmidt, Markus

    2008-06-01

    One of the main aims of synthetic biology is to make biology easier to engineer. Major efforts in synthetic biology are made to develop a toolbox to design biological systems without having to go through a massive research and technology process. With this "de-skilling" agenda, synthetic biology might finally unleash the full potential of biotechnology and spark a wave of innovation, as more and more people have the necessary skills to engineer biology. But this ultimate domestication of biology could easily lead to unprecedented safety challenges that need to be addressed: more and more people outside the traditional biotechnology community will create self-replicating machines (life) for civil and defence applications, "biohackers" will engineer new life forms at their kitchen table; and illicit substances will be produced synthetically and much cheaper. Such a scenario is a messy and dangerous one, and we need to think about appropriate safety standards now.

  2. Computational Design of Clusters for Catalysis

    NASA Astrophysics Data System (ADS)

    Jimenez-Izal, Elisa; Alexandrova, Anastassia N.

    2018-04-01

    When small clusters are studied in chemical physics or physical chemistry, one perhaps thinks of the fundamental aspects of cluster electronic structure, or precision spectroscopy in ultracold molecular beams. However, small clusters are also of interest in catalysis, where the cold ground state or an isolated cluster may not even be the right starting point. Instead, the big question is: What happens to cluster-based catalysts under real conditions of catalysis, such as high temperature and coverage with reagents? Myriads of metastable cluster states become accessible, the entire system is dynamic, and catalysis may be driven by rare sites present only under those conditions. Activity, selectivity, and stability are highly dependent on size, composition, shape, support, and environment. To probe and master cluster catalysis, sophisticated tools are being developed for precision synthesis, operando measurements, and multiscale modeling. This review intends to tell the messy story of clusters in catalysis.

  3. An Approach to Quantitative Fiscal Planning. Phase I Report.

    ERIC Educational Resources Information Center

    Gaylord, Thomas A.

    The development of time-series revenue projections for University of Alaska Budget Request Units (BRUs) is described. Fiscal planning modes in higher education are reviewed, along with the attributes of judgmental, time-series, and causal forecasting techniques. The following six submodels comprise the necessary dimensions of the comprehensive…

  4. SOME QUANTITATIVE ASPECTS OF THE INSTRUCTIONAL PROCESS.

    ERIC Educational Resources Information Center

    GAVIN, WILLIAM J.; SPITZER, MURRAY

    THE DATA FROM THE SEVERAL STUDIES ANALYZED IN THIS REPORT HAVE BEEN COLLECTED AS PART OF AN ON-GOING EFFORT TO IMPLEMENT THE ABT ASSOCIATES' EDUCATION COST EFFECTIVENESS INSTRUCTIONAL PROCESS SUBMODEL, WHICH IS DEVELOPING TECHNIQUES TO EVALUATE THE QUANTITATIVE, CAUSE-AND-EFFECT RELATIONSHIP BETWEEN THE INSTRUCTIONAL PROCESS AND SCHOLASTIC…

  5. A Self Consistent RF Discharge, Plasma Chemistry and Surface Model for Plasma Enhanced Chemical Vapor Deposition

    DTIC Science & Technology

    1988-06-30

    consists of three submodels for the electron kinetics, plasma chemistry, and surface deposition kinetics for a-Si:H deposited from radio frequency… Keywords: plasma enhanced chemical vapor deposition, amorphous silicon, modeling, electron kinetics, plasma chemistry, deposition kinetics, RF discharge, silane, film properties, silicon.

  6. SEAGRASS STRESS RESPONSE MODEL: THE IMPORTANCE OF LIGHT, TEMPERATURE, SEDIMENTATION AND GEOCHEMISTRY

    EPA Science Inventory

    Our objective is to define interactions between seagrass and water-column and sediment stressors. The model was developed and optimized for sediments in Thalassia testudinum seagrass beds of Lower Laguna Madre, Texas, USA and is composed of a plant sub-model and a sediment diagen...

  7. N+3 Aircraft Concept Designs and Trade Studies. Volume 2; Appendices-Design Methodologies for Aerodynamics, Structures, Weight, and Thermodynamic Cycles

    NASA Technical Reports Server (NTRS)

    Greitzer, E. M.; Bonnefoy, P. A.; delaRosaBlanco, E.; Dorbian, C. S.; Drela, M.; Hall, D. K.; Hansman, R. J.; Hileman, J. I.; Liebeck, R. H.; Lovegren, J.; hide

    2010-01-01

    Appendices A to F present the theory behind the TASOPT methodology and code. Appendix A describes the bulk of the formulation, while Appendices B to F develop the major sub-models for the engine, fuselage drag, BLI accounting, etc.

  8. REGIONAL-SCALE (1000 KM) MODEL OF PHOTOCHEMICAL AIR POLLUTION. PART 2. INPUT PROCESSOR NETWORK DESIGN

    EPA Science Inventory

    Detailed specifications are given for a network of data processors and submodels that can generate the parameter fields required by the regional oxidant model formulated in Part 1 of this report. Operations performed by the processor network include simulation of the motion and d...

  9. Design of Adaptive Organizations for Effects Based Operations

    DTIC Science & Technology

    2009-04-04

    … Activities. The economic and infrastructure sub-model included nodes for each of the main essential services: water, electricity, sewage, health, and…

  10. Injury Profile SIMulator, a Qualitative Aggregative Modelling Framework to Predict Crop Injury Profile as a Function of Cropping Practices, and the Abiotic and Biotic Environment. I. Conceptual Bases

    PubMed Central

    Aubertot, Jean-Noël; Robin, Marie-Hélène

    2013-01-01

    The limitation of damage caused by pests (plant pathogens, weeds, and animal pests) in any agricultural crop requires integrated management strategies. Although significant efforts have been made to i) develop, and to a lesser extent ii) combine genetic, biological, cultural, physical and chemical control methods in Integrated Pest Management (IPM) strategies (vertical integration), there is a need for tools to help manage Injury Profiles (horizontal integration). Farmers design cropping systems according to their goals, knowledge, cognition and perception of socio-economic and technological drivers as well as their physical, biological, and chemical environment. In return, a given cropping system, in a given production situation will exhibit a unique injury profile, defined as a dynamic vector of the main injuries affecting the crop. This simple description of agroecosystems has been used to develop IPSIM (Injury Profile SIMulator), a modelling framework to predict injury profiles as a function of cropping practices, abiotic and biotic environment. Due to the tremendous complexity of agroecosystems, a simple holistic aggregative approach was chosen instead of attempting to couple detailed models. This paper describes the conceptual bases of IPSIM, an aggregative hierarchical framework and a method to help specify IPSIM for a given crop. A companion paper presents a proof of concept of the proposed approach for a single disease of a major crop (eyespot on wheat). In the future, IPSIM could be used as a tool to help design ex-ante IPM strategies at the field scale if coupled with a damage sub-model, and a multicriteria sub-model that assesses the social, environmental, and economic performances of simulated agroecosystems. In addition, IPSIM could also be used to help make diagnoses on commercial fields. It is important to point out that the presented concepts are not crop- or pest-specific and that IPSIM can be used on any crop. PMID:24019908

  11. The Paleoclimate Uncertainty Cascade: Tracking Proxy Errors Via Proxy System Models.

    NASA Astrophysics Data System (ADS)

    Emile-Geay, J.; Dee, S. G.; Evans, M. N.; Adkins, J. F.

    2014-12-01

    Paleoclimatic observations are, by nature, imperfect recorders of climate variables. Empirical approaches to their calibration are challenged by the presence of multiple sources of uncertainty, which may confound the interpretation of signals and the identifiability of the noise. In this talk, I will demonstrate the utility of proxy system models (PSMs, Evans et al, 2013, 10.1016/j.quascirev.2013.05.024) to quantify the impact of all known sources of uncertainty. PSMs explicitly encode the mechanistic knowledge of the physical, chemical, biological and geological processes from which paleoclimatic observations arise. PSMs may be divided into sensor, archive and observation components, all of which may conspire to obscure climate signals in actual paleo-observations. As an example, we couple a PSM for the δ18O of speleothem calcite to an isotope-enabled climate model (Dee et al, submitted) to analyze the potential of this measurement as a proxy for precipitation amount. A simple soil/karst model (Partin et al, 2013, 10.1130/G34718.1) is used as sensor model, while a hiatus-permitting chronological model (Haslett & Parnell, 2008, 10.1111/j.1467-9876.2008.00623.x) is used as part of the observation model. This subdivision allows us to explicitly model the transformation from precipitation amount to speleothem calcite δ18O as a multi-stage process via a physical and chemical sensor model, and a stochastic archive model. By illustrating the PSM's behavior within the context of the climate simulations, we show how estimates of climate variability may be affected by each submodel's transformation of the signal. By specifying idealized climate signals (periodic vs. episodic, slow vs. fast) to the PSM, we investigate how frequency and amplitude patterns are modulated by sensor and archive submodels. To the extent that the PSM and the climate models are representative of real world processes, the results may help us more accurately interpret existing paleodata, characterize their uncertainties, and design sampling strategies that exploit their strengths while mitigating their weaknesses.
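
    A toy version of the sensor stage makes the described signal modulation concrete; the exponential mixing filter below is an illustrative stand-in for a well-mixed soil/karst reservoir, not the published Partin et al. formulation.

    ```python
    import numpy as np

    def karst_filter(d18o_precip, tau=6.0):
        """Toy sensor model: a well-mixed soil/karst reservoir smooths the
        precipitation d18O signal with an exponential impulse response of
        mean residence time tau (months)."""
        n = len(d18o_precip)
        h = np.exp(-np.arange(n) / tau)
        h /= h.sum()
        return np.convolve(d18o_precip, h)[:n]   # causal convolution

    rng = np.random.default_rng(4)
    months = np.arange(240)
    precip_d18o = -6 + 2 * np.sin(2 * np.pi * months / 12) \
                  + 0.5 * rng.standard_normal(240)
    dripwater_d18o = karst_filter(precip_d18o)
    # The archive damps high-frequency variance relative to the input:
    print(precip_d18o.std().round(2), dripwater_d18o.std().round(2))
    ```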

  12. Injury Profile SIMulator, a qualitative aggregative modelling framework to predict crop injury profile as a function of cropping practices, and the abiotic and biotic environment. I. Conceptual bases.

    PubMed

    Aubertot, Jean-Noël; Robin, Marie-Hélène

    2013-01-01

    The limitation of damage caused by pests (plant pathogens, weeds, and animal pests) in any agricultural crop requires integrated management strategies. Although significant efforts have been made to i) develop, and to a lesser extent ii) combine genetic, biological, cultural, physical and chemical control methods in Integrated Pest Management (IPM) strategies (vertical integration), there is a need for tools to help manage Injury Profiles (horizontal integration). Farmers design cropping systems according to their goals, knowledge, cognition and perception of socio-economic and technological drivers as well as their physical, biological, and chemical environment. In return, a given cropping system, in a given production situation will exhibit a unique injury profile, defined as a dynamic vector of the main injuries affecting the crop. This simple description of agroecosystems has been used to develop IPSIM (Injury Profile SIMulator), a modelling framework to predict injury profiles as a function of cropping practices, abiotic and biotic environment. Due to the tremendous complexity of agroecosystems, a simple holistic aggregative approach was chosen instead of attempting to couple detailed models. This paper describes the conceptual bases of IPSIM, an aggregative hierarchical framework and a method to help specify IPSIM for a given crop. A companion paper presents a proof of concept of the proposed approach for a single disease of a major crop (eyespot on wheat). In the future, IPSIM could be used as a tool to help design ex-ante IPM strategies at the field scale if coupled with a damage sub-model, and a multicriteria sub-model that assesses the social, environmental, and economic performances of simulated agroecosystems. In addition, IPSIM could also be used to help make diagnoses on commercial fields. It is important to point out that the presented concepts are not crop- or pest-specific and that IPSIM can be used on any crop.

  13. Open Software Tools Applied to Jordan's National Multi-Agent Water Management Model

    NASA Astrophysics Data System (ADS)

    Knox, Stephen; Meier, Philipp; Harou, Julien; Yoon, Jim; Selby, Philip; Lachaut, Thibaut; Klassert, Christian; Avisse, Nicolas; Khadem, Majed; Tilmant, Amaury; Gorelick, Steven

    2016-04-01

    Jordan is the fourth most water scarce country in the world, where demand exceeds supply in a politically and demographically unstable context. The Jordan Water Project (JWP) aims to perform policy evaluation by modelling the hydrology, economics, and governance of Jordan's water resource system. The multidisciplinary nature of the project requires a modelling software system capable of integrating submodels from multiple disciplines into a single decision making process and communicating results to stakeholders. This requires a tool for building an integrated model and a system where diverse data sets can be managed and visualised. The integrated Jordan model is built using Pynsim, an open-source multi-agent simulation framework implemented in Python. Pynsim operates on network structures of nodes and links and supports institutional hierarchies, where an institution represents a grouping of nodes, links or other institutions. At each time step, code within each node, link and institution can be executed independently, allowing for their fully autonomous behaviour. Additionally, engines (sub-models) perform actions over the entire network or on a subset of the network, such as taking a decision on a set of nodes. Pynsim is modular in design, allowing distinct modules to be modified easily without affecting others. Data management and visualisation is performed using Hydra (www.hydraplatform.org), an open software platform allowing users to manage network structure and data. The Hydra data manager connects to Pynsim, providing the necessary input parameters for the integrated model. By providing a high-level portal to the model, Hydra removes a barrier between the users of the model (researchers, stakeholders, planners, etc.) and the model itself, allowing them to manage data, run the model and visualise results all through a single user interface. Pynsim's ability to represent institutional hierarchies, inter-network communication and the separation of node, link and institutional logic from higher-level processes (engines) suits JWP's requirements. The use of Hydra Platform and Pynsim helps make complex customised models such as the JWP model easier to run and manage with international groups of researchers.
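
    The node/link/institution/engine pattern described above can be sketched generically; the classes below illustrate the idea only and are not the actual Pynsim API.

    ```python
    # Generic sketch of the node/institution/engine pattern; all names ours.
    class Node:
        def __init__(self, name):
            self.name, self.storage = name, 0.0
        def step(self, t):          # autonomous per-timestep behaviour
            pass

    class Reservoir(Node):
        def step(self, t):
            self.storage += 10.0    # stub inflow

    class Institution:
        """A grouping of nodes that an engine can act on as a whole."""
        def __init__(self, name, nodes):
            self.name, self.nodes = name, nodes

    class AllocationEngine:
        """An engine (sub-model) taking a decision over a set of nodes."""
        def run(self, institution, t):
            total = sum(n.storage for n in institution.nodes)
            for n in institution.nodes:          # trivial equal-share rule
                n.storage = total / len(institution.nodes)

    nodes = [Reservoir("R1"), Reservoir("R2")]
    ministry = Institution("water_ministry", nodes)
    engine = AllocationEngine()
    for t in range(3):                           # the simulation time loop
        for n in nodes:
            n.step(t)                            # autonomous node behaviour
        engine.run(ministry, t)                  # institution-level decision
    print([n.storage for n in nodes])
    ```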

  14. Model Verification and Validation Concepts for a Probabilistic Fracture Assessment Model to Predict Cracking of Knife Edge Seals in the Space Shuttle Main Engine High Pressure Oxidizer

    NASA Technical Reports Server (NTRS)

    Pai, Shantaram S.; Riha, David S.

    2013-01-01

    Physics-based models are routinely used to predict the performance of engineered systems to make decisions such as when to retire system components, how to extend the life of an aging system, or if a new design will be safe or available. Model verification and validation (V&V) is a process to establish credibility in model predictions. Ideally, carefully controlled validation experiments will be designed and performed to validate models or submodels. In reality, time and cost constraints limit experiments and even model development. This paper describes elements of model V&V during the development and application of a probabilistic fracture assessment model to predict cracking in space shuttle main engine high-pressure oxidizer turbopump knife-edge seals. The objective of this effort was to assess the probability of initiating and growing a crack to a specified failure length in specific flight units for different usage and inspection scenarios. The probabilistic fracture assessment model developed in this investigation combined a series of submodels describing the usage, temperature history, flutter tendencies, tooth stresses and numbers of cycles, fatigue cracking, nondestructive inspection, and finally the probability of failure. The analysis accounted for unit-to-unit variations in temperature, flutter limit state, flutter stress magnitude, and fatigue life properties. The investigation focused on the calculation of relative risk rather than absolute risk between the usage scenarios. Verification predictions were first performed for three units with known usage and cracking histories to establish credibility in the model predictions. Then, numerous predictions were performed for an assortment of operating units that had flown recently or that were projected for future flights. Calculations were performed using two NASA-developed software tools: NESSUS(Registered Trademark) for the probabilistic analysis, and NASGRO(Registered Trademark) for the fracture mechanics analysis. The goal of these predictions was to provide additional information to guide decisions on the potential of reusing existing and installed units prior to the new design certification.
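
    The final step of such a submodel chain, turning sampled unit-to-unit variability into a probability of failure, can be illustrated with a bare Monte Carlo estimate. All distributions and numbers below are invented, and the real analysis used NESSUS and NASGRO rather than anything this simple.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    N = 200_000

    # Hypothetical unit-to-unit variability (distributions are made up):
    initial_crack = rng.lognormal(mean=np.log(0.05), sigma=0.3, size=N)   # mm
    growth_per_cycle = rng.lognormal(mean=np.log(2e-5), sigma=0.5, size=N)
    cycles = rng.normal(50_000, 5_000, size=N)
    critical_length = 1.0   # mm, the specified failure length

    final_length = initial_crack + growth_per_cycle * cycles
    p_fail = np.mean(final_length >= critical_length)
    print(f"P(failure) ~ {p_fail:.4f}")
    ```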

  15. Dynamical Lorentz symmetry breaking in 3D and charge fractionalization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Charneski, B.; Gomes, M.; Silva, A. J. da

    2009-03-15

    We analyze the breaking of Lorentz invariance in a 3D model of fermion fields self-coupled through four-fermion interactions. The low-energy limit of the theory contains various submodels which are similar to those used in the study of graphene or in the description of irrational charge fractionalization.

  16. MCFire model technical description

    Treesearch

    David R. Conklin; James M. Lenihan; Dominique Bachelet; Ronald P. Neilson; John B. Kim

    2016-01-01

    MCFire is a computer program that simulates the occurrence and effects of wildfire on natural vegetation, as a submodel within the MC1 dynamic global vegetation model. This report is a technical description of the algorithms and parameter values used in MCFire, intended to encapsulate its design and features at a higher level that is more conceptual than the level...

  17. Probability model for analyzing fire management alternatives: theory and structure

    Treesearch

    Frederick W. Bratten

    1982-01-01

    A theoretical probability model has been developed for analyzing program alternatives in fire management. It includes submodels or modules for predicting probabilities of fire behavior, fire occurrence, fire suppression, effects of fire on land resources, and financial effects of fire. Generalized "fire management situations" are used to represent actual fire...

  18. Modeling Effects of Annealing on Coal Char Reactivity to O2 and CO2, Based on Preparation Conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holland, Troy; Bhat, Sham; Marcy, Peter

    Oxy-fired coal combustion is a promising potential carbon capture technology. Predictive computational fluid dynamics (CFD) simulations are valuable tools in evaluating and deploying oxyfuel and other carbon capture technologies, either as retrofit technologies or for new construction. However, accurate predictive combustor simulations require physically realistic submodels with low computational requirements. A recent sensitivity analysis of a detailed char conversion model (Char Conversion Kinetics (CCK)) found thermal annealing to be an extremely sensitive submodel. In the present work, further analysis of the previous annealing model revealed significant disagreement with numerous datasets from experiments performed after that annealing model was developed. The annealing model was accordingly extended to reflect experimentally observed reactivity loss due to the thermal annealing of a variety of coals under diverse char preparation conditions. The model extension was informed by a Bayesian calibration analysis. In addition, since oxyfuel conditions include extraordinarily high levels of CO2, the development of a first-ever CO2 reactivity loss model due to annealing is presented.

  19. Modeling Effects of Annealing on Coal Char Reactivity to O2 and CO2, Based on Preparation Conditions

    DOE PAGES

    Holland, Troy; Bhat, Sham; Marcy, Peter; ...

    2017-08-25

    Oxy-fired coal combustion is a promising potential carbon capture technology. Predictive computational fluid dynamics (CFD) simulations are valuable tools in evaluating and deploying oxyfuel and other carbon capture technologies, either as retrofit technologies or for new construction. However, accurate predictive combustor simulations require physically realistic submodels with low computational requirements. A recent sensitivity analysis of a detailed char conversion model (Char Conversion Kinetics (CCK)) found thermal annealing to be an extremely sensitive submodel. In the present work, further analysis of the previous annealing model revealed significant disagreement with numerous datasets from experiments performed after that annealing model was developed. The annealing model was accordingly extended to reflect experimentally observed reactivity loss due to the thermal annealing of a variety of coals under diverse char preparation conditions. The model extension was informed by a Bayesian calibration analysis. In addition, since oxyfuel conditions include extraordinarily high levels of CO2, the development of a first-ever CO2 reactivity loss model due to annealing is presented.

  20. Study of vapor flow into a capillary acquisition device. [for cryogenic rocket propellants

    NASA Technical Reports Server (NTRS)

    Dodge, F. T.; Bowles, E. B.

    1982-01-01

    An analytical model was developed that prescribes the conditions for vapor flow through the window screen of a start basket. Several original submodels were developed as part of this model. The submodels interrelate such phenomena as the effect of internal evaporation of the liquid, the bubble point change of a screen in the presence of wicking, the conditions for drying out of a screen through a combination of evaporation and pressure difference, the vapor inflow rate across a wet screen as a function of pressure difference, and the effect on wicking of a difference between the static pressure of the liquid reservoir and the surrounding vapor. Most of these interrelations were verified by a series of separate effects tests, which were also used to determine certain empirical constants in the models. The equations of the model were solved numerically for typical start basket designs, and a simplified start basket was constructed to verify the predictions, using both volatile and nonvolatile test liquids. The test results verified the trends predicted by the model.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Yongxi

    We propose an integrated modeling framework to optimally locate wireless charging facilities along a highway corridor to provide sufficient in-motion charging. The integrated model consists of a master Infrastructure Planning Model that determines the best locations, coupled with two sub-models that explicitly capture energy consumption and charging, and the interactions between electric vehicle and wireless charging technologies, the geometrics of highway corridors, speed, and auxiliary systems. The model is implemented in an illustrative case study of a highway corridor of Interstate 5 in Oregon. We found that the cost of establishing the charging lane is sensitive to, and increases with, the speed to be achieved. Through sensitivity analyses, we gain a better understanding of the extent of the impacts of highway geometric characteristics and battery capacity on charging lane design.
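
    The energy-consumption/charging interaction described above can be illustrated with a deliberately simple greedy sketch: discretize the corridor, track the battery state of charge, and electrify a segment whenever the reserve would otherwise be violated. All names and parameter values below (consumption rate, charging power, reserve fraction) are illustrative assumptions, not the study's optimization model, which is a master planning model with sub-models.

      # Minimal greedy sketch of in-motion charging lane placement along a corridor.
      # All parameters are illustrative assumptions, not values from the study.

      def place_charging_segments(length_km, seg_km=1.0, batt_kwh=24.0,
                                  use_kwh_per_km=0.2, charge_kwh_per_km=0.35,
                                  reserve_frac=0.2):
          """Return indices of corridor segments that get a charging lane."""
          n = int(length_km / seg_km)
          soc = batt_kwh                      # start fully charged
          reserve = reserve_frac * batt_kwh
          lanes = []
          for i in range(n):
              net = -use_kwh_per_km * seg_km  # driving drains the battery
              # If skipping this segment would break the reserve, electrify it.
              if soc + net < reserve:
                  net += charge_kwh_per_km * seg_km
                  lanes.append(i)
              soc = min(batt_kwh, soc + net)
          return lanes

      lanes = place_charging_segments(length_km=200.0)
      print(f"{len(lanes)} of 200 segments electrified:", lanes[:10], "...")

    A real formulation would replace the greedy rule with an optimization over installation cost, and make consumption speed- and grade-dependent, which is where the reported sensitivity to design speed and geometry enters.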

  2. Simulation of comprehensive chemistry and atmospheric methane lifetime in the LGM with EMAC

    NASA Astrophysics Data System (ADS)

    Gromov, Sergey; Steil, Benedikt

    2017-04-01

    Past records of atmospheric methane (CH4) abundance/isotope composition may provide substantial insight into C exchanges in the Earth System (ES). When simulated in climate models, CH4 helps to identify transitions in climate parameters via the triggering of its different (natural) sources, provided that its sinks are adequately represented in the model. The latter are still a matter of large uncertainty in studies focussing on the interpretation of CH4 evolution throughout the Last Glacial Maximum (LGM), judging by the reported span of tropospheric CH4 lifetimes (λ) of 3-16 yr [1-4]. In this study, we attempt to: (i) deliver the most adequate estimate of the LGM atmospheric sink of CH4 in the EMAC AC-GCM [5] equipped with a comprehensive representation of atmospheric chemistry [6], (ii) reveal the ES and CH4 emission parameters that are most influential for λ, and (iii) based on these findings, suggest a parameterisation for λ that may be consistently used in climate models. In pursuing (i), we have tuned the EMAC model for simulating the LGM atmospheric chemistry state, including a careful revisiting of the trace gas emissions from the biosphere, the biomass burning/lightning source, etc. The latter affect the key simulated component bound up with λ, viz. the abundance and distribution of the hydroxyl radical (OH) which, upon reacting with CH4, constitutes its main tropospheric sink. Our preliminary findings suggest that OH is buffered in the atmosphere in a fashion similar to the preindustrial climate, which is in line with recent studies employing comprehensive chemistry mechanisms (e.g., [3]). The analysis in (ii) suggests that tropospheric λ values may be qualitatively described as a convolution of values typical for zonal domains with high and low photolytic recycling rates (i.e. tropics and extra-tropics), as in the latter the zonal-average λ value depends on the CH4 emission strength. We further use the extensive diagnostics in EMAC to infer the sensitivity of zonal OH to changes in various components of the ES, e.g. in stratospheric O3 input and dynamics. Finally, we discuss the potential set of parameters required for an efficient λ and/or OH parameterisation in models dealing with (transient) climate simulations. References: 1. Fischer, H., et al.: Changing boreal methane sources and constant biomass burning during the last termination, Nature, 452, 864-867, doi:10.1038/nature06825, 2008. 2. Kaplan, J. O., Folberth, G., and Hauglustaine, D. A.: Role of methane and biogenic volatile organic compound sources in late glacial and Holocene fluctuations of atmospheric methane concentrations, Global Biogeochemical Cycles, 20, doi:10.1029/2005GB002590, 2006. 3. Murray, L. T., et al.: Factors controlling variability in the oxidative capacity of the troposphere since the Last Glacial Maximum, Atmos. Chem. Phys., 14, 3589-3622, doi:10.5194/acp-14-3589-2014, 2014. 4. Valdes, P. J., Beerling, D. J., and Johnson, C. E.: The ice age methane budget, Geophysical Research Letters, 32, doi:10.1029/2004GL021004, 2005. 5. Jöckel, P., et al.: Development cycle 2 of the Modular Earth Submodel System (MESSy2), Geosci. Model Dev., 3, 717-752, doi:10.5194/gmd-3-717-2010, 2010. 6. Lelieveld, J., et al.: Global tropospheric hydroxyl distribution, budget and reactivity, Atmos. Chem. Phys., 16, 12477-12493, doi:10.5194/acp-16-12477-2016, 2016.
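
    For orientation, the lifetime diagnostic λ discussed here is usually defined by the standard burden-over-loss convention (the abstract does not spell out its exact diagnostic, so this is the conventional form, not necessarily the one used in the study):

      \lambda_{\mathrm{CH_4}}^{\mathrm{OH}} \;=\;
        \frac{\int_{\mathrm{trop}} [\mathrm{CH_4}]\, dV}
             {\int_{\mathrm{trop}} k_{\mathrm{OH+CH_4}}(T)\,[\mathrm{OH}]\,[\mathrm{CH_4}]\, dV}

    where k_{OH+CH4}(T) is the temperature-dependent rate coefficient of the CH4 + OH reaction; for a single grid cell this reduces to λ = 1 / (k(T)[OH]), which is why the simulated OH distribution controls the lifetime.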

  3. A modular BLSS simulation model

    NASA Technical Reports Server (NTRS)

    Rummel, John D.; Volk, Tyler

    1987-01-01

    A bioregenerative life support system (BLSS) for extraterrestrial use will be faced with coordination problems more acute than those in any ecosystem found on Earth. A related problem in BLSS design is providing an interface between the various life support processors, one that will allow for their coordination while still allowing for system expansion. A modular model is presented of a BLSS that interfaces system processors only with the material storage reservoirs, allowing those reservoirs to act as the principal buffers in the system and thus minimizing difficulties with processor coordination. The modular nature of the model allows independent development of the detailed submodels that exist within the model framework. Using this model, BLSS dynamics were investigated under normal conditions and under various failure modes. Partial and complete failures of various components, such as the waste processors or the plants themselves, drive transient responses in the model system, allowing the examination of the effectiveness of the system reservoirs as buffers. The results from simulations help to determine control strategies and BLSS design requirements. An evolved version could be used as an interactive control aid in a future BLSS.

  4. An outburst powered by the merging of two stars inside the envelope of a giant

    NASA Astrophysics Data System (ADS)

    Hillel, Shlomi; Schreier, Ron; Soker, Noam

    2017-11-01

    We conduct 3D hydrodynamical simulations of energy deposition into the envelope of a red giant star as a result of the merger of two close main sequence stars or brown dwarfs, and show that the outcome is a highly non-spherical outflow. Such a violent interaction of a triple stellar system can explain the formation of `messy', i.e. lacking any kind of symmetry, planetary nebulae and similar nebulae around evolved stars. We do not simulate the merging process, but simply assume that after the tight binary system enters the envelope of the giant star the interaction with the envelope causes the two components, stars or brown dwarfs, to merge and liberate gravitational energy. We deposit the energy over a time period of about 9 h, which is about 1 per cent of the orbital period of the merger product around the centre of the giant star. The ejection of the fast hot gas and its collision with previously ejected mass are very likely to lead to a transient event, i.e. an intermediate luminosity optical transient.

  5. A single-station empirical model for TEC over the Antarctic Peninsula using GPS-TEC data

    NASA Astrophysics Data System (ADS)

    Feng, Jiandi; Wang, Zhengtao; Jiang, Weiping; Zhao, Zhenzhen; Zhang, Bingbing

    2017-02-01

    Compared with regional or global total electron content (TEC) empirical models, single-station TEC empirical models may exhibit higher accuracy in describing TEC spatial and temporal variations for a single station. In this paper, a new single-station empirical TEC model, called SSM-month, for the O'Higgins Station in the Antarctic Peninsula is proposed using Global Positioning System (GPS)-TEC data from 01 January 2004 to 30 June 2015. The diurnal variation of TEC at the O'Higgins Station may have different features in different months, sometimes even opposite forms, because of ionospheric phenomena such as the Mid-latitude Summer Nighttime Anomaly (MSNA). To avoid the influence of these differing diurnal variations, the concept of monthly modeling is proposed in this study. The SSM-month model, which is established by month (including 12 submodels that correspond to the 12 months), can effectively describe the diurnal variation of TEC in different months. Each submodel of the SSM-month model exhibits good agreement with the GPS-TEC input data. Overall, the SSM-month model fits the input data with a bias of 0.03 TECU (total electron content unit, 1 TECU = 10^16 el m^-2) and a standard deviation of 2.78 TECU. This model, which benefits from the monthly modeling method, can effectively describe the MSNA phenomenon without implementing any modeling correction. TEC data derived from Center for Orbit Determination in Europe global ionosphere maps (CODE GIMs), International Reference Ionosphere 2012 (IRI2012), and NeQuick are compared with the SSM-month model in the years 2001 and 2015-2016. Results show that the SSM-month model exhibits good consistency with CODE GIMs, better than that of IRI2012 and NeQuick, at the O'Higgins Station on the test days.
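
    The monthly-modeling idea, one fitted submodel per calendar month, can be sketched with a toy diurnal-harmonic fit. The harmonic functional form, the two-term truncation, and the synthetic data below are assumptions for illustration only; they are not the SSM-month model's actual basis functions.

      # Illustrative sketch of "monthly modeling": fit one diurnal-harmonic
      # submodel per calendar month. Model form and data are assumed, not
      # taken from the paper.
      import numpy as np

      def fit_month_submodel(hour, tec, n_harmonics=2):
          """Least-squares fit of TEC(t) = a0 + sum_k [a_k cos + b_k sin](2 pi k t / 24)."""
          cols = [np.ones_like(hour)]
          for k in range(1, n_harmonics + 1):
              w = 2.0 * np.pi * k * hour / 24.0
              cols += [np.cos(w), np.sin(w)]
          X = np.column_stack(cols)
          coef, *_ = np.linalg.lstsq(X, tec, rcond=None)
          return coef, X

      # Synthetic demo: one month of hourly GPS-TEC with an afternoon peak.
      rng = np.random.default_rng(0)
      hours = np.tile(np.arange(24.0), 30)
      tec = 10 + 4 * np.cos(2 * np.pi * (hours - 14) / 24) + rng.normal(0, 1, hours.size)
      coef, X = fit_month_submodel(hours, tec)
      resid = tec - X @ coef
      print(f"bias={resid.mean():.3f} TECU, std={resid.std():.2f} TECU")

    Fitting each month separately is what lets opposite diurnal shapes (e.g. MSNA summer nights) coexist without a global correction term.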

  6. A physiologically based pharmacokinetic model for atrazine and its main metabolites in the adult male C57BL/6 mouse

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin Zhoumeng; Interdisciplinary Toxicology Program, University of Georgia, Athens, GA 30602; Fisher, Jeffrey W.

    Atrazine (ATR) is a chlorotriazine herbicide that is widely used and relatively persistent in the environment. In laboratory rodents, excessive exposure to ATR is detrimental to the reproductive, immune, and nervous systems. To better understand the toxicokinetics of ATR and to fill the need for a mouse model, a physiologically based pharmacokinetic (PBPK) model for ATR and its main chlorotriazine metabolites (Cl-TRIs) desethyl atrazine (DE), desisopropyl atrazine (DIP), and didealkyl atrazine (DACT) was developed for the adult male C57BL/6 mouse. Taking advantage of all relevant and recently made available mouse-specific data, a flow-limited PBPK model was constructed. The ATR and DACT sub-models included blood, brain, liver, kidney, and richly and slowly perfused tissue compartments, as well as plasma protein binding and red blood cell binding, whereas the DE and DIP sub-models were constructed as simple five-compartment models. The model adequately simulated plasma levels of ATR and Cl-TRIs and urinary dosimetry of Cl-TRIs at four single oral dose levels (250, 125, 25, and 5 mg/kg). Additionally, the model adequately described the dose dependency of brain and liver ATR and DACT concentrations. Cumulative urinary DACT amounts were accurately predicted across a wide dose range, suggesting the model's potential use for extrapolation to human exposures by performing reverse dosimetry. The model was validated using previously reported data for plasma ATR and DACT in mice and rats. Overall, besides being the first mouse PBPK model for ATR and its Cl-TRIs, this model, by analogy, provides insights into tissue dosimetry for rats. The model could be used in tissue dosimetry prediction and as an aid in the exposure assessment for this widely used herbicide.

  7. Computational Combustion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Westbrook, C K; Mizobuchi, Y; Poinsot, T J

    2004-08-26

    Progress in the field of computational combustion over the past 50 years is reviewed. Particular attention is given to those classes of models that are common to most system modeling efforts, including fluid dynamics, chemical kinetics, liquid sprays, and turbulent flame models. The developments in combustion modeling are placed into the time-dependent context of the accompanying exponential growth in computer capabilities and Moore's Law. Superimposed on this steady growth, the occasional sudden advances in modeling capabilities are identified and their impacts are discussed. Integration of submodels into system models for spark-ignition, diesel, and homogeneous-charge compression-ignition engines, surface and catalytic combustion, pulse combustion, and detonations is described. Finally, the current state of combustion modeling is illustrated by descriptions of a very large jet lifted 3D turbulent hydrogen flame with direct numerical simulation and 3D large eddy simulations of practical gas burner combustion devices.

  8. A Decision Making Methodology in Support of the Business Rules Lifecycle

    NASA Technical Reports Server (NTRS)

    Wild, Christopher; Rosca, Daniela

    1998-01-01

    The business rules that underlie an enterprise emerge as a new category of system requirements that represent decisions about how to run the business, and which are characterized by their business orientation and their propensity for change. In this report, we introduce a decision-making methodology which addresses several aspects of the business rules lifecycle: acquisition, deployment and evolution. We describe a meta-model for representing business rules in terms of an enterprise model, together with a decision support submodel for reasoning about and deriving the rules. The possibility of automated lifecycle assistance is demonstrated in terms of the automatic extraction of business rules from the decision structure. A system based on the meta-model has been implemented, including the extraction algorithm. This is the final report for Daniela Rosca's PhD fellowship. It describes the work we have done over the past year, current research and the list of publications associated with her thesis topic.

  9. A virtual data language and system for scientific workflow management in data grid environments

    NASA Astrophysics Data System (ADS)

    Zhao, Yong

    With advances in scientific instrumentation and simulation, scientific data is growing fast in both size and analysis complexity. So-called Data Grids aim to provide high performance, distributed data analysis infrastructure for data-intensive sciences, where scientists distributed worldwide need to extract information from large collections of data, and to share both data products and the resources needed to produce and store them. However, the description, composition, and execution of even logically simple scientific workflows are often complicated by the need to deal with "messy" issues like heterogeneous storage formats and ad-hoc file system structures. We show how these difficulties can be overcome via a typed workflow notation called virtual data language, within which issues of physical representation are cleanly separated from logical typing, and by the implementation of this notation within the context of a powerful virtual data system that supports distributed execution. The resulting language and system are capable of expressing complex workflows in a simple compact form, enacting those workflows in distributed environments, monitoring and recording the execution processes, and tracing the derivation history of data products. We describe the motivation, design, implementation, and evaluation of the virtual data language and system, and the application of the virtual data paradigm in various science disciplines, including astronomy, cognitive neuroscience.

  10. Sequential optimization of a terrestrial biosphere model constrained by multiple satellite based products

    NASA Astrophysics Data System (ADS)

    Ichii, K.; Kondo, M.; Wang, W.; Hashimoto, H.; Nemani, R. R.

    2012-12-01

    Various satellite-based spatial products, such as evapotranspiration (ET) and gross primary productivity (GPP), are now produced by integration of ground and satellite observations. Effective use of these multiple satellite-based products in terrestrial biosphere models is an important step toward a better understanding of terrestrial carbon and water cycles. However, due to the complexity of terrestrial biosphere models with a large number of model parameters, the application of these spatial data sets in terrestrial biosphere models is difficult. In this study, we established an effective but simple framework to refine a terrestrial biosphere model, Biome-BGC, using multiple satellite-based products as constraints. We tested the framework in the monsoon Asia region covered by AsiaFlux observations. The framework is based on hierarchical analysis (Wang et al. 2009) with model parameter optimization constrained by satellite-based spatial data. The Biome-BGC model is separated into several tiers to minimize the freedom of model parameter selection and maximize independence from the whole model. For example, the snow sub-model is optimized first using the MODIS snow cover product, followed by the soil-water sub-model optimized by satellite-based ET (estimated by an empirical upscaling method, Support Vector Regression (SVR); Yang et al. 2007), the photosynthesis model optimized by satellite-based GPP (based on the SVR method), and the respiration and residual carbon cycle models optimized by biomass data. As a result of an initial assessment, we found that most of the default sub-models (e.g. snow, water cycle, and carbon cycle) showed large deviations from remote sensing observations. However, these biases were removed by applying the proposed framework. For example, gross primary productivity was initially underestimated in boreal and temperate forests and overestimated in tropical forests, but the parameter optimization scheme successfully reduced these biases. Our analysis shows that terrestrial carbon and water cycle simulations in monsoon Asia were greatly improved, and that the use of multiple satellite observations with this framework is an effective way to improve terrestrial biosphere models.
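
    The tiered calibration can be pictured as a cascade: each sub-model's parameters are fit against its own satellite constraint, then frozen before the next tier is fit. The sketch below is a generic stand-in with toy one-parameter sub-models, not Biome-BGC itself; the 1-D bounded search and the toy decay curves are assumptions chosen to keep the example small.

      # Schematic of tiered calibration: optimize each submodel against its own
      # constraint while earlier tiers stay frozen. Toy stand-ins, not Biome-BGC.
      import numpy as np
      from scipy.optimize import minimize_scalar

      def calibrate_tiers(tiers, frozen=None):
          frozen = dict(frozen or {})
          for name, simulate, observed in tiers:
              # 1-D search per tier keeps the example small; real tiers are multi-D.
              loss = lambda p: np.mean((simulate(p, frozen) - observed) ** 2)
              res = minimize_scalar(loss, bounds=(0.0, 10.0), method="bounded")
              frozen[name] = res.x        # freeze before moving down the cascade
          return frozen

      # Toy tiers: a snow melt factor, then a soil-water term that reuses it.
      t = np.arange(10.0)
      snow_obs = np.exp(-0.7 * t)
      et_obs = 2.0 * np.exp(-0.7 * t)
      tiers = [
          ("melt",  lambda p, f: np.exp(-p * t),             snow_obs),
          ("etmax", lambda p, f: p * np.exp(-f["melt"] * t), et_obs),
      ]
      print(calibrate_tiers(tiers))   # recovers ~{'melt': 0.7, 'etmax': 2.0}

    Freezing each tier before fitting the next is exactly what limits parameter freedom and keeps the satellite constraints from compensating for each other.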

  11. Impact of 3-D orographic gravity wave parameterisation on stratosphere dynamics

    NASA Astrophysics Data System (ADS)

    Eichinger, Roland; Garny, Hella; Cai, Duy; Jöckel, Patrick

    2017-04-01

    Stratosphere dynamics are strongly influenced by gravity waves (GWs) propagating upwards from the troposphere. Some of these GWs are generated through flow over small-scale orography and cannot be resolved by common general circulation models (GCMs). Due to computational model designs, their parameterisation usually follows a one-dimensional columnar approach that, among other simplifications, neglects the horizontal propagation of GWs on their way up into the middle atmosphere. This causes contradictions between models and observations in the location and strength of the GW drag exerted through wave dissipation and, as a consequence, also in the stratospheric mean flow. In the EMAC (ECHAM MESSy Atmospheric Chemistry) model, we have found this deficiency to cause a too weak Antarctic polar vortex, which directly impacts stratospheric temperatures and thereby the chemical reactions that determine ozone depletion. For this reason, we adapt a three-dimensional parameterisation for orographic GWs, which had been implemented and tested in the MIROC GCM, to the MESSy coding standard. This computationally light scheme can then be used in a modular and flexible way in a cascade of model setups, from an idealised version for conceptual process analyses to full climate-chemistry simulations for quantitative investigations. This model enhancement can help to reconcile models and observations in the wave drag forcing itself and, in consequence, also in Brewer-Dobson circulation trends across recent decades. Furthermore, uncertainties in weather and climate predictions as well as in future ozone projections can be reduced.

  12. Mastery in Goal Scoring, T-Pattern Detection, and Polar Coordinate Analysis of Motor Skills Used by Lionel Messi and Cristiano Ronaldo

    PubMed Central

    Castañer, Marta; Barreira, Daniel; Camerino, Oleguer; Anguera, M. Teresa; Fernandes, Tiago; Hileno, Raúl

    2017-01-01

    Research in soccer has traditionally given more weight to players' technical and tactical skills, but few studies have analyzed the motor skills that underpin specific motor actions. The objective of this study was to investigate the style of play of the world's top soccer players, Cristiano Ronaldo and Lionel Messi, and how they use their motor skills in attacking actions that result in a goal. We used and improved the easy-to-use observation instrument (OSMOS-soccer player) with 9 criteria, each one expanded to build 50 categories. Associations between these categories were investigated by T-pattern detection and polar coordinate analysis. T-pattern analysis detects temporal structures of complex behavioral sequences composed of simpler or directly distinguishable events within specified observation periods (time point series). Polar coordinate analysis involves the application of a complex procedure to provide a vector map of interrelated behaviors obtained from prospective and retrospective sequential analysis. The T-patterns showed that for both players the combined criteria were mainly between the different aspects of motor skills, namely the use of lower limbs, contact with the ball using the outside of the foot, locomotion, body orientation with respect to the opponent goal line, and the criteria of technical actions and the right midfield. Polar coordinate analysis detected significant associations between the same criteria included in the T-patterns as well as the criteria of turning the body, numerical equality with no pressure, and relative numerical superiority. PMID:28553245

  13. Mastery in Goal Scoring, T-Pattern Detection, and Polar Coordinate Analysis of Motor Skills Used by Lionel Messi and Cristiano Ronaldo.

    PubMed

    Castañer, Marta; Barreira, Daniel; Camerino, Oleguer; Anguera, M Teresa; Fernandes, Tiago; Hileno, Raúl

    2017-01-01

    Research in soccer has traditionally given more weight to players' technical and tactical skills, but few studies have analyzed the motor skills that underpin specific motor actions. The objective of this study was to investigate the style of play of the world's top soccer players, Cristiano Ronaldo and Lionel Messi, and how they use their motor skills in attacking actions that result in a goal. We used and improved the easy-to-use observation instrument (OSMOS-soccer player) with 9 criteria, each one expanded to build 50 categories. Associations between these categories were investigated by T-pattern detection and polar coordinate analysis. T-pattern analysis detects temporal structures of complex behavioral sequences composed of simpler or directly distinguishable events within specified observation periods (time point series). Polar coordinate analysis involves the application of a complex procedure to provide a vector map of interrelated behaviors obtained from prospective and retrospective sequential analysis. The T-patterns showed that for both players the combined criteria were mainly between the different aspects of motor skills, namely the use of lower limbs, contact with the ball using the outside of the foot, locomotion, body orientation with respect to the opponent goal line, and the criteria of technical actions and the right midfield. Polar coordinate analysis detected significant associations between the same criteria included in the T-patterns as well as the criteria of turning the body, numerical equality with no pressure, and relative numerical superiority.

  14. Analysing biomass torrefaction supply chain costs.

    PubMed

    Svanberg, Martin; Olofsson, Ingemar; Flodén, Jonas; Nordin, Anders

    2013-08-01

    The objective of the present work was to develop a techno-economic system model to evaluate how logistics and production parameters affect torrefaction supply chain costs under Swedish conditions. The model consists of four sub-models: (1) the supply system, (2) a complete energy and mass balance of drying, torrefaction and densification, (3) investment and operating costs of a greenfield, stand-alone torrefaction pellet plant, and (4) the distribution system to the gate of an end user. The results show that the torrefaction supply chain reaps significant economies of scale up to a plant size of about 150-200 kiloton dry substance per year (ktonDS/year), at which the total supply chain cost amounts to 31.8 euro per megawatt hour based on lower heating value (€/MWhLHV). Important parameters affecting total cost are the amount of available biomass, the biomass premium, logistics equipment, biomass moisture content, drying technology, torrefaction mass yield, and torrefaction plant capital expenditures (CAPEX). Copyright © 2013 Elsevier Ltd. All rights reserved.
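
    The economies-of-scale shape, fixed costs per MWh falling with plant size while biomass haul distance grows with the catchment radius, can be sketched in a few lines. Every coefficient below (energy content, CAPEX scaling exponent, annuity factor, haul cost, premium) is an illustrative assumption, not the paper's Swedish data.

      # Back-of-envelope sketch of supply-chain economies of scale.
      # All coefficients are illustrative assumptions.
      import numpy as np

      def cost_per_mwh(size_kton):
          size = np.asarray(size_kton, dtype=float)
          energy_mwh = size * 1e3 * 5.0          # ~5 MWh per ton dry substance (assumed)
          capex = 40e6 * (size / 100.0) ** 0.65  # power-law CAPEX scaling (assumed)
          annual_capex = capex * 0.1             # annuity factor (assumed)
          haul = 4.0 + 0.5 * np.sqrt(size)       # EUR/MWh; radius grows with sqrt(area)
          return annual_capex / energy_mwh + haul + 15.0  # + feedstock premium (assumed)

      for s in (50, 100, 150, 200, 300):
          print(f"{s:4d} ktonDS/yr -> {cost_per_mwh(s):5.1f} EUR/MWh")

    The falling CAPEX-per-MWh term and the rising haul term pull in opposite directions, which is why the curve flattens around some intermediate plant size rather than declining indefinitely.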

  15. TKKMOD: A computer simulation program for an integrated wind diesel system. Version 1.0: Document and user guide

    NASA Astrophysics Data System (ADS)

    Manninen, L. M.

    1993-12-01

    The document describes TKKMOD, a simulation model developed at Helsinki University of Technology for a specific wind-diesel system layout, with special emphasis on the battery submodel and its use in simulation. The model has been included into the European wind-diesel modeling software package WDLTOOLS under the CEC JOULE project 'Engineering Design Tools for Wind-Diesel Systems' (JOUR-0078). WDLTOOLS serves as the user interface and processes the input and output data of different logistic simulation models developed by the project participants. TKKMOD cannot be run without this shell. The report only describes the simulation principles and model specific parameters of TKKMOD and gives model specific user instructions. The input and output data processing performed outside this model is described in the documentation of the shell. The simulation model is utilized for calculation of long-term performance of the reference system configuration for given wind and load conditions. The main results are energy flows, losses in the system components, diesel fuel consumption, and the number of diesel engine starts.

  16. New submodel for watershed-scale simulations of fecal bacteria fate and transport at agricultural and pasture lands

    USDA-ARS?s Scientific Manuscript database

    Microbial contamination of waters is a critical public health issue. Watershed-scale, process-based modeling of bacteria fate and transport (F&T) has proven to be a useful tool for predicting microbial water quality and evaluating management practices. The objective of this work is...

  17. Aerothermal modeling program, phase 1

    NASA Technical Reports Server (NTRS)

    Srinivasan, R.; Reynolds, R.; Ball, I.; Berry, R.; Johnson, K.; Mongia, H.

    1983-01-01

    Aerothermal submodels used in analytical combustor models are analyzed. The models described include turbulence and scalar transport, gaseous fuel combustion, spray evaporation/combustion, soot formation and oxidation, and radiation. The computational scheme is discussed in relation to boundary conditions and convergence criteria. Also presented are the data base for benchmark-quality test cases and an analysis of simple flows.

  18. Improving longleaf pine mortality predictions in the Southern Variant of the Forest Vegetation Simulator

    Treesearch

    R. Justin DeRose; John D. Shaw; Giorgio Vacchiano; James N. Long

    2008-01-01

    The Southern Variant of the Forest Vegetation Simulator (FVS-SN) is made up of individual submodels that predict tree growth, recruitment and mortality. Forest managers on Ft. Bragg, North Carolina, discovered biologically unrealistic longleaf pine (Pinus palustris) size-density predictions at large diameters when using FVS-SN to project red-cockaded...

  19. Air traffic simulation in chemistry-climate model EMAC 2.41: AirTraf 1.0

    NASA Astrophysics Data System (ADS)

    Yamashita, Hiroshi; Grewe, Volker; Jöckel, Patrick; Linke, Florian; Schaefer, Martin; Sasaki, Daisuke

    2016-09-01

    Mobility is becoming more and more important to society, and hence air transportation is expected to grow further over the next decades. Reducing the anthropogenic climate impact of aviation emissions and building a climate-friendly air transportation system are required for a sustainable development of commercial aviation. Climate-optimized routing, which avoids climate-sensitive regions by re-routing horizontally and vertically, is an important measure for climate impact reduction. The idea includes a number of different routing strategies (routing options) and shows great potential for such reductions. To evaluate this, the impact of not only CO2 but also non-CO2 emissions must be considered. CO2 is a long-lived gas, while non-CO2 emissions are short-lived and inhomogeneously distributed. This study introduces AirTraf (version 1.0), which performs global air traffic simulations, including effects of local weather conditions on the emissions. AirTraf was developed as a new submodel of the ECHAM5/MESSy Atmospheric Chemistry (EMAC) model. Air traffic information comprises Eurocontrol's Base of Aircraft Data (BADA Revision 3.9) and International Civil Aviation Organization (ICAO) engine performance data. Fuel use and emissions are calculated by the total energy model based on the BADA methodology and the Deutsches Zentrum für Luft- und Raumfahrt (DLR) fuel flow method. The flight trajectory optimization is performed by a genetic algorithm (GA) with respect to a selected routing option. In the model development phase, benchmark tests were performed for the great-circle and flight-time routing options. The first test showed that the great-circle calculations were accurate to -0.004 %, compared with those calculated by the Movable Type script. The second test showed that the optimal solution found by the algorithm converged sufficiently to the theoretical true optimal solution; the difference in flight time between the two solutions is less than 0.01 %. The dependence of the optimal solutions on the initial set of solutions (called the population) was analyzed and the influence was small (around 0.01 %). The trade-off between the accuracy of GA optimizations and computational costs is clarified, and appropriate population and generation (one GA iteration) sizing is discussed. The results showed that a large reduction in the number of function evaluations, of around 90 %, can be achieved with only a small decrease in accuracy of less than 0.1 %. Finally, AirTraf simulations are demonstrated with the great-circle and flight-time routing options for a typical winter day. A set of 103 trans-Atlantic flight plans was used, assuming an Airbus A330-301 aircraft. The results confirmed that AirTraf simulates the air traffic properly for the two routing options. In addition, the GA successfully found the time-optimal flight trajectories for the 103 airport pairs, taking local weather conditions into account. The consistency check for the AirTraf simulations confirmed that the calculated flight time, fuel consumption, NOx emission index and aircraft weights show good agreement with reference data.
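
    The first benchmark above compares computed great-circle routes against an independent calculation. A minimal check of that kind is the haversine distance plus a still-air flight time; the airport coordinates and the 870 km/h cruise speed below are illustrative assumptions, and AirTraf itself optimizes through the actual wind field rather than assuming still air.

      # Great-circle distance and still-air flight time for one trans-Atlantic
      # pair (coordinates and cruise speed assumed for illustration).
      import math

      def great_circle_km(lat1, lon1, lat2, lon2, r_earth_km=6371.0):
          p1, p2 = math.radians(lat1), math.radians(lat2)
          dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
          a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
          return 2 * r_earth_km * math.asin(math.sqrt(a))

      d = great_circle_km(40.64, -73.78, 51.47, -0.46)   # JFK -> LHR (approx.)
      print(f"distance {d:.0f} km, flight time {d / 870:.2f} h at 870 km/h")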

  20. Army Research Office’s ARO in Review 2014.The Annual Historical Record of the Army Research Laboratory’s Army Research Office (ARO) Programs and Funding Activities

    DTIC Science & Technology

    2015-07-01

    [Fragmentary full-text snippets: a TEM figure caption (1T-TaS2 CDW discommensuration network; main panel showing nonlinear resistivity and current slip at large bias), a note on wind- and insect-dispersed pollen, and program listings including "...for Massive and Messy Data" and "Fluid-Structure Interaction Simulation of Gas Turbine" (Professor Yuri Bazilevs, University of California - San Diego).]

  1. Trace gas variability within the Asian monsoon anticyclone on intraseasonal and interannual timescales

    NASA Astrophysics Data System (ADS)

    Nützel, Matthias; Dameris, Martin; Fierli, Federico; Stiller, Gabriele; Garny, Hella; Jöckel, Patrick

    2016-04-01

    The Asian monsoon and the associated monsoon anticyclone have the potential to substantially influence the composition of the UTLS (upper troposphere/lower stratosphere) and hence global climate. Here we study the variability of the Asian summer monsoon anticyclone in the UTLS on intraseasonal and interannual timescales using results from long-term simulations performed with the CCM EMAC (ECHAM5/MESSy Atmospheric Chemistry). In particular, we focus on specified-dynamics simulations (Newtonian relaxation to ERA-Interim data) covering the period 1980-2013, which were performed within the ESCiMo (Earth System Chemistry integrated Modelling) project (Jöckel et al., GMDD, 2015). Our main focus lies on the variability of the anticyclone's strength (in terms of potential vorticity, geopotential and circulation) and the variability in trace gas signatures (O3, H2O) within the anticyclone. To support our findings, we also include observations from satellites (MIPAS, MLS). Our work is linked to the EU StratoClim campaign in 2016.

  2. Automated Blazar Light Curves Using Machine Learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Spencer James

    Every night in a remote clearing called Fenton Hill, high in the Jemez Mountains of central New Mexico, a bank of robotically controlled telescopes tilt their lenses to the sky for another round of observation through digital imaging. Los Alamos National Laboratory's Thinking Telescopes project is watching for celestial transients, including high-power cosmic flashes called blazars, and like all science, it can be messy work. To keep the project clicking along, Los Alamos scientists routinely install equipment upgrades, maintain the site, and refine the sophisticated machine-learning computer programs that process those images and extract useful data from them. Each week the system amasses 100,000 digital images of the heavens, some of which are compromised by clouds, wind gusts, focus problems, and so on. For a graduate student at the Lab taking a year's break between master's and Ph.D. studies, working with state-of-the-art autonomous telescopes that can make fundamental discoveries feels light years beyond the classroom.

  3. The Fire and Fuels Extension to the Forest Vegetation Simulator

    Treesearch

    Elizabeth Reinhardt; Nicholas L. Crookston

    2003-01-01

    The Fire and Fuels Extension (FFE) to the Forest Vegetation Simulator (FVS) simulates fuel dynamics and potential fire behavior over time, in the context of stand development and management. Existing models of fire behavior and fire effects were added to FVS to form this extension. New submodels representing snag and fuel dynamics were created to complete the linkages...

  4. Modeling the spatially dynamic distribution of humans in the Oregon (USA) coast range.

    Treesearch

    Jeffrey D. Kline; David L. Azuma; Alissa Moses

    2003-01-01

    A common approach to land use change analyses in multidisciplinary landscape-level studies is to delineate discrete forest and non-forest or urban and non-urban land use categories to serve as inputs into sets of integrated sub-models describing socioeconomic and ecological processes. Such discrete land use categories, however, may be inappropriate when the...

  5. The development of estimated methodology for interfacial adhesion of semiconductor coatings having an enormous mismatch extent

    NASA Astrophysics Data System (ADS)

    Lee, Chang-Chun; Huang, Pei-Chen

    2018-05-01

    The long-term reliability of multi-stacked coatings under bending or rolling loads is a severe challenge for extending the lifespan of such structures. In addition, the adhesive strength between dissimilar materials is regarded as the major mechanical reliability concern among multi-stacked films. However, the significant scale mismatch, from several nanometers to micrometers, among the multi-stacked coatings causes numerical accuracy and convergence issues in fracture-based simulation approaches. For these reasons, this study proposed an FEA-based multi-level submodeling and multi-point constraint (MPC) technique to overcome the foregoing scale-mismatch issue. The results indicated that a suitable choice of the first- and second-order submodeling regions can achieve a small error of 1.27% compared with the experimental result while significantly reducing the mesh density and computing time. Moreover, the MPC method adopted in the FEA simulation showed only a 0.54% error when the boundary of the selected local region was away from the critical region of concern, following the Saint-Venant principle. In this investigation, two FEA-based approaches were used to overcome the evident scale-mismatch issue when the adhesive strengths of micro- and nano-scale multi-stacked coatings were taken into account.

  6. An optimal strategy for functional mapping of dynamic trait loci.

    PubMed

    Jin, Tianbo; Li, Jiahan; Guo, Ying; Zhou, Xiaojing; Yang, Runqing; Wu, Rongling

    2010-02-01

    As an emerging powerful approach for mapping quantitative trait loci (QTLs) responsible for dynamic traits, functional mapping models the time-dependent mean vector with biologically meaningful equations and is likely to generate biologically relevant and interpretable results. Given the autocorrelated nature of a dynamic trait, functional mapping requires models for the structure of the covariance matrix. In this article, we provide a comprehensive set of approaches for modelling the covariance structure and incorporate each of these approaches into the framework of functional mapping. Bayesian information criterion (BIC) values are used as the model selection criterion to choose the optimal combination of submodels for the mean vector and covariance structure. In an example of leaf age growth from a rice molecular genetics project, the best submodel combination was found to be the Gaussian model for the correlation structure, a power equation of order 1 for the variance, and the power curve for the mean vector. Under this combination, several significant QTLs for leaf age growth trajectories were detected on different chromosomes. Our model can be well used to study the genetic architecture of dynamic traits of agricultural value.
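
    The submodel-selection step, scoring candidate covariance structures by BIC under a common mean curve, can be sketched as follows. The logistic growth curve, the AR(1) parameterization, the fixed (unfitted) covariance parameters, and the synthetic data are generic illustrations, not the article's exact models.

      # Sketch of BIC-based selection among covariance submodels for a
      # longitudinal trait. Generic illustration, not the article's models.
      import numpy as np
      from scipy.stats import multivariate_normal

      def bic(loglik, n_params, n_obs):
          return -2.0 * loglik + n_params * np.log(n_obs)

      def ar1_cov(sigma2, rho, T):
          idx = np.arange(T)
          return sigma2 * rho ** np.abs(np.subtract.outer(idx, idx))

      # Synthetic longitudinal data: logistic mean curve + AR(1) noise.
      rng = np.random.default_rng(1)
      T, n = 8, 60
      t = np.arange(T, dtype=float)
      mean = 10.0 / (1.0 + np.exp(-(t - 3.0)))
      y = rng.multivariate_normal(mean, ar1_cov(1.0, 0.6, T), size=n)

      candidates = {
          "iid":   (np.eye(T), 1),              # sigma2 only
          "AR(1)": (ar1_cov(1.0, 0.6, T), 2),   # sigma2, rho (fixed here, not fit)
      }
      for name, (cov, k) in candidates.items():
          ll = multivariate_normal(mean, cov).logpdf(y).sum()
          print(f"{name:6s} BIC = {bic(ll, k + 3, y.size):.1f}")  # +3 mean params

    In a full analysis each candidate's parameters would be maximized before scoring; the point here is only the mechanics of trading likelihood against parameter count across mean/covariance combinations.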

  7. Eddy-resolving 1/10° model of the World Ocean

    NASA Astrophysics Data System (ADS)

    Ibrayev, R. A.; Khabeev, R. N.; Ushakov, K. V.

    2012-02-01

    The first results of simulating the intra-annual variability of the World Ocean circulation with an eddy-resolving model are considered. For this purpose, a model of the World Ocean with a 1/10° horizontal resolution and 49 vertical levels was developed (a 1/10 × 1/10 × 49 model of the World Ocean). This model is based on the traditional system of three-dimensional equations of large-scale ocean dynamics and boundary conditions with an explicit allowance for water fluxes at the free surface of the ocean. The equations are written in a tripolar coordinate system. The numerical method is based on the separation of the barotropic and baroclinic components of the solution. Discretization in time is implemented using explicit schemes allowing effective parallelization for a large number of processors. The model uses sub-models of the atmospheric boundary layer and of sea-ice thermodynamics. The model of the World Ocean was developed at the Institute of Numerical Mathematics of the Russian Academy of Sciences (INM RAS) and the P.P. Shirshov Institute of Oceanology (IO RAS). The formulation of the problem of simulating the intra-annual variability of the thermohydrodynamic processes of the World Ocean and the parameterizations that were used are considered. In the numerical experiment, the temporal evolution of the atmospheric forcing is determined by the normal annual cycle according to the conditions of the international Coordinated Ocean-Ice Reference Experiment (CORE-I). The calculation was carried out on a multiprocessor computer with distributed memory; 1601 computational cores were used. The presented analysis demonstrates that the obtained results compare quite satisfactorily with those obtained by other eddy-resolving models of the global ocean. The analysis of the model solution is largely descriptive; a detailed analysis of the results is to be presented in subsequent works. This experiment is a significant first step in developing the eddy-resolving model of the World Ocean.

  8. Gaskets for low-energy houses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harr, D.

    1986-05-01

    New materials and techniques make it easier to build today's tight, energy-efficient homes. One system that has won many converts recently is the airtight drywall approach (ADA). ADA relies heavily on the use of foam gasketing. In the ADA building system, nearly all joints (sill to foundation, band joist, wall plate to subfloor, and drywall to frame) are gasketed with foam tapes. The combination of gaskets, drywall, and caulk creates an airtight envelope. Foam gasketing tape is well suited for many of these joints because it is clean, economical, and easy to apply. The right gasket will maintain the seal even if the joint moves, and won't squeeze out of the joint under compression. Caulk, on the other hand, is messy, hard to apply, and squeezes out of the joint under compression. Saturated urethanes are elastic sealants that always recover, even after being completely compressed. Some saturated urethanes recover faster than others, depending on what saturant is used, but all exert a force to recover because they are urethanes. In the construction industry, where gaskets are likely to be buried permanently within the framework, saturated urethane foam gaskets really make sense.

  9. East Europe Report

    DTIC Science & Technology

    1987-02-25

    definite solution from among those admissible, specified in the form of a vector of criteria and weights. The set of criteria can be different depending on...was developed in stages, by aggregating an increasing number of submodels (i.e. partial models) prepared by different teams. Nor can its present...entities operating under different constraints and towards different objectives. These models and various combinations thereof can also be useful to

  10. Spray combustion model improvement study, 1

    NASA Technical Reports Server (NTRS)

    Chen, C. P.; Kim, Y. M.; Shang, H. M.

    1993-01-01

    This study involves the development of numerical and physical modeling in spray combustion. These modeling efforts aim mainly to improve the physical submodels of turbulence, combustion, atomization, dense-spray effects, and group vaporization. The present mathematical formulation can be easily implemented in any time-marching multiple pressure correction methodology, such as the MAST code. A sequence of validation cases includes nonevaporating, evaporating, and burning dense sprays.

  11. Inventory-based sensitivity analysis of the Large Tree Diameter Growth Submodel of the Southern Variant of the FVS

    Treesearch

    Giorgio Vacchiano; John D. Shaw; R. Justin DeRose; James N. Long

    2008-01-01

    Diameter increment is an important variable in modeling tree growth. Most facets of predicted tree development are dependent in part on diameter or diameter increment, the most commonly measured stand variable. The behavior of the Forest Vegetation Simulator (FVS) largely relies on the performance of the diameter increment model and the subsequent use of predicted dbh...

  12. Validation of an intermediate-complexity model for simulating marine biogeochemistry under anoxic conditions in the modern Black Sea

    NASA Astrophysics Data System (ADS)

    Romaniello, Stephen J.; Derry, Louis A.

    2010-08-01

    We test the ability of a new 1-D intermediate-complexity box model (ICBM) that includes process-based C, N, P, O, and S biogeochemistry to simulate profiles and fluxes of biogeochemically reactive species across a wide range of ocean redox states. The ICBM was developed to simulate whole ocean processes for paleoceanographic applications and has been tested with data from the modern global ocean. Here we adapt the circulation submodel of the ICBM to simulate water mass exchange and eddy diffusion processes in the Black Sea but make only very minor changes to the biogeochemical submodel. We force the model with estimated natural and anthropogenic inputs of tracers and nutrients to the Black Sea and compare the results of the simulations to modern observations. Ventilation of the Black Sea is modeled by depth-dependent entrainment of Cold Intermediate Layer water into Bosphorus plume water and subsequent intrusion into deep layers. The simulated profiles of circulation tracers θ, salinity, CFC-12, and radiocarbon agree well with available data, suggesting that the model does a reasonable job of representing physical exchange. Vertical profiles of biogeochemically active components are in good overall agreement with observations. The lack of trace metal (Mn and Fe) cycling in the model results in some discrepancies between the simulated profiles and observation across the suboxic zone; however, the overall redox balance is not sensitive to this difference. We compare modeled basin-wide biogeochemical fluxes to available estimates, but in a number of cases uncertainties in modern budgets limit our ability to test the model rigorously. In agreement with earlier work we find that fixed N losses via thiodenitrification are likely a major pathway in the Black Sea N cycle. Overall, the same biogeochemical submodel used to simulate the modern global ocean appears to perform well in simulating Black Sea processes without requiring significant modification. The ability of a single model to perform across a wide range of redox states is an important prerequisite for applying the ICBM to deep time paleoceanographic problems. The model source code is available as MATLAB™ 7 m-files provided as auxiliary material.
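
    A toy analogue of the circulation/biogeochemistry split described above: a stack of boxes exchanging a tracer by eddy diffusion (the "circulation submodel") with a fixed surface value, plus a constant consumption term (a stand-in for the biogeochemical submodel) that drives the deep boxes anoxic. Geometry, diffusivity, and the respiration rate below are illustrative assumptions, not the ICBM's values.

      # Minimal 1-D box-model sketch: diffusive exchange + O2 consumption.
      # All rates and geometry are illustrative assumptions.
      import numpy as np

      n_box, dz = 20, 100.0          # 20 boxes x 100 m
      k_mix = 1e-5 * 3.15e7          # eddy diffusivity, m^2/s -> m^2/yr
      o2 = np.full(n_box, 300.0)     # mmol/m^3
      consume = 15.0                 # mmol/m^3/yr respiration sink (assumed)

      dt = 0.01                      # yr
      for _ in range(int(2000 / dt)):
          flux = k_mix * np.diff(o2) / dz       # exchange between adjacent boxes
          o2[:-1] += dt * flux / dz
          o2[1:]  -= dt * flux / dz
          o2 -= dt * consume
          o2[0] = 300.0                         # surface box held at saturation
          np.clip(o2, 0.0, None, out=o2)

      print("depth of first anoxic box:", np.argmax(o2 <= 0.0) * dz, "m")

    The steady profile emerges from the competition between downward diffusive supply and consumption, the same balance that sets the depth of the suboxic zone in far richer models.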

  13. Disturbance Distance: Using a process based ecosystem model to estimate and map potential thresholds in disturbance rates that would give rise to fundamentally altered ecosystems

    NASA Astrophysics Data System (ADS)

    Dolan, K. A.; Hurtt, G. C.; Fisk, J.; Flanagan, S.; LePage, Y.; Sahajpal, R.

    2014-12-01

    Disturbance plays a critical role in shaping the structure and function of forested ecosystems as well as the ecosystem services they provide, including but not limited to carbon storage, biodiversity habitat, water quality and flow, and land-atmosphere exchanges of energy and water. As recent studies highlight novel disturbance regimes resulting from pollution, invasive pests, and climate change, there is a need to include these alterations in predictions of future forest function and structure. The Ecosystem Demography (ED) model is a mechanistic model of forest ecosystem dynamics in which individual-based forest dynamics can be efficiently implemented over regional to global scales due to advanced scaling methods. We utilize ED to characterize the sensitivity of potential vegetation structure and function to changes in rates of density-independent mortality. The disturbance rate within ED can be altered either directly or through the development of sub-models. Disturbance sub-models in ED currently include fire, land use, and hurricanes. We use a tiered approach to understand the sensitivity of North American ecosystems to changes in background density-independent mortality. Our first analyses were conducted at half-degree spatial resolution with a constant rate of disturbance in space and time, which was altered between runs. Annual climate was held constant at the site level and the land use and fire sub-models were turned off. Results showed an ~30% increase in non-forest area across the US when disturbance rates were changed from 0.6% a year to 1.2% a year, and a more than 3.5-fold increase in non-forest area when disturbance rates doubled again from 1.2% to 2.4%. Continued runs altered natural background disturbance rates with the existing fire and hurricane sub-models turned on, as well as historic and future land use. By quantifying differences between model outputs that characterize carbon-cycle-related ecosystem structure and function across the US, we are identifying areas and characteristics that display higher sensitivity to changes in disturbance rates.

  14. Application of the environmentally sensitive forest growth and mortality submodel, ESGM, for estimating the historic and future forest carbon budget for the Sooke Lake Watershed, British Columbia.

    NASA Astrophysics Data System (ADS)

    Trofymow, J. A.; Hember, R.; Smiley, B. P.; Morken, S.; Kurz, W. A.

    2016-12-01

    Forest resource managers require knowledge of how natural disturbances, harvest, land-use change, and climate change affect carbon (C) budgets of complex landscapes. In this study, a retrospective (1911-2012) forest C budget for the 8500 ha Sooke Lake watershed was developed based on forest inventories, disturbance, and stream monitoring data using the Canadian Forest Service's spatially explicit Generic Carbon Budget Model (GCBM). This standard version of GCBM uses species-specific volume-over-age curves and site indices to determine tree growth, and thus does not explicitly account for environmental factors (climate, CO2, N deposition) that may affect trees and net ecosystem production (NEP). Therefore, a new submodel, ESGM, was developed for GCBM; it uses empirical equations to account for the influences of 8 environmental factors on tree growth and mortality, based on analysis of multi-decadal data from 19,777 field plots in western North America. Annual environmental variables were prepared (1910-2012) for input to GCBMesgm, and temperature effects on decay rates were turned on in the GCBM soil submodel. In response to fires, harvesting, planting, and deforestation for drinking water reservoir expansions, the standard GCBM run showed that over 100 years (1911, 1940, 1991, 2012) aboveground biomass C (262, 189, 148, 177 MgC/ha) and NEP (0.6, -1.3, 0.8, 2.3 MgC/ha/yr) declined and then increased as harvest and deforestation ceased in 2002. From 1.5-6.5% of terrestrial humified soil C losses (30,640 MgC/100 yrs) were estimated to have been exported as dissolved organic carbon. Assuming no future disturbances, the standard GCBM run indicates NEP will peak at 2.64 MgC/ha/yr in 2024 and biomass C will reach 1910 levels by 2075. Comparisons will be made between standard GCBM and GCBMesgm runs of the C budget for the historic period and for future climate scenarios (baseline, RCP4.5 and RCP8.5) from the CanESM2 GCM, to explore the potential implications of environmental change for future watershed management.

  15. Eye on the Sky

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Spencer

    Every night in a remote clearing called Fenton Hill high in the Jemez Mountains of central New Mexico, a bank of robotically controlled telescopes tilt their lenses to the sky for another round of observation through digital imaging. Los Alamos National Laboratory’s Thinking Telescopes project is watching for celestial transients including high-power cosmic flashes called blazars, and like all science, it can be messy work. But for a graduate student at the Lab taking a year’s break between master’s and Ph.D. studies, working with these state-of-the-art autonomous telescopes that can make fundamental discoveries feels light years beyond the classroom.

  16. Real coded genetic algorithm for fuzzy time series prediction

    NASA Astrophysics Data System (ADS)

    Jain, Shilpa; Bisht, Dinesh C. S.; Singh, Phool; Mathpal, Prakash C.

    2017-10-01

    Genetic Algorithms (GAs) form a subset of evolutionary computing, a rapidly growing area of Artificial Intelligence (AI). Some variants of GA are binary GA, real GA, messy GA, micro GA, saw-tooth GA, and differential evolution. This research article presents a real-coded GA for predicting enrollments at the University of Alabama. The University of Alabama enrollment data form a fuzzy time series. Here, fuzzy logic is used to predict enrollments and a genetic algorithm optimizes the fuzzy intervals. Results are compared with those of other published works and found satisfactory, indicating that real-coded GAs are fast and accurate.
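
    A hedged sketch of the core idea, a real-coded GA tuning fuzzy-interval boundaries: the forecast rule below is deliberately naive (predict the midpoint of the interval the previous value fell in), whereas the article's fuzzy-logical-relationship model is richer; the enrollment series, population size, and mutation settings are all synthetic assumptions.

      # Real-coded GA tuning fuzzy-interval cut-points for a naive
      # fuzzy-time-series forecaster. Data and GA settings are assumed.
      import numpy as np

      rng = np.random.default_rng(2)
      series = 13000 + np.cumsum(rng.normal(120, 260, 22))   # synthetic "enrollments"
      lo, hi = series.min() - 500, series.max() + 500

      def forecast_mse(cuts):
          edges = np.concatenate(([lo], np.clip(np.sort(cuts), lo, hi), [hi]))
          mids = 0.5 * (edges[:-1] + edges[1:])
          idx = np.searchsorted(edges, series[:-1], side="right") - 1
          return np.mean((mids[np.clip(idx, 0, len(mids) - 1)] - series[1:]) ** 2)

      # Real coding: blend (arithmetic) crossover + Gaussian mutation, elitism.
      pop = rng.uniform(lo, hi, size=(40, 6))                # 6 interior cut-points
      for gen in range(200):
          fit = np.array([forecast_mse(ind) for ind in pop])
          parents = pop[np.argsort(fit)[:20]]                # truncation selection
          a = parents[rng.integers(0, 20, 40)]
          b = parents[rng.integers(0, 20, 40)]
          w = rng.uniform(0, 1, (40, 1))
          pop = w * a + (1 - w) * b                          # blend crossover
          pop += rng.normal(0, 50, pop.shape) * (rng.random(pop.shape) < 0.2)
          pop[0] = parents[0]                                # keep the elite
      print("best RMSE:", forecast_mse(pop[0]) ** 0.5)

    Working directly on real-valued cut-points is what distinguishes the real-coded variant from a binary GA, which would have to encode and decode the boundaries as bit strings.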

  17. Towards a new theory of practice for community health psychology.

    PubMed

    Nolas, Sevasti-Melissa

    2014-01-01

    The article sets out the value of theorizing collective action from a social science perspective that engages with the messy actuality of practice. It argues that community health psychology relies on an abstract version of Paulo Freire's earlier writing, the Pedagogy of the Oppressed, which provides scholar-activists with a 'map' approach to collective action. The article revisits Freire's later work, the Pedagogy of Hope, and argues for the importance of developing a 'journey' approach to collective action. Theories of practice are discussed for their value in theorizing such journeys, and in bringing maps (intentions) and journeys (actuality) closer together.

  18. Exit Presentation

    NASA Technical Reports Server (NTRS)

    Melone, Kate

    2016-01-01

    Skills Acquired: Tensile Testing: Prepare materials and set up tensile tests; Collect and interpret (messy) data. Outgassing Testing: Understand TML (Total Mass Loss) and CVCM (Collected Volatile Condensable Material); Collaboration with other NASA centers. Z2 (NASA's Prototype Space Suit Development) Support: Hands-on building of mockups of components; Analyze data; Work with others, understanding what both parties need in order to make a run successful. LCVG (Liquid Cooling and Ventilation Garment) Flush and Purge Console: Both formal design and design review process; How to determine which components to use - flow calculations, pressure ratings, size, etc.; Hazard Analysis; How to make design tradeoffs.

  19. Why shared decision making is not good enough: lessons from patients.

    PubMed

    Olthuis, Gert; Leget, Carlo; Grypdonck, Mieke

    2014-07-01

    A closer look at the lived illness experiences of medical professionals themselves shows that shared decision making is in need of a logic of care. This paper underlines that medical decision making inevitably takes place in a messy and uncertain context in which sharing responsibilities may impose a considerable burden on patients. A better understanding of patients' lived experiences enables healthcare professionals to attune to what individual patients deem important in their lives. This will contribute to making medical decisions in a good and caring manner, taking into account the lived experience of being ill.

  20. Clean it up: motivating a 13-year-old boy to pick up his room.

    PubMed

    James, Helene M; Luyben, Paul D

    2009-01-01

    The purpose of this study was to reduce the messiness in a 13-year-old boy's room. Previous research indicated that contingent access to an activity reinforcer such as computer time might well provide the motivation to do what the participant had steadfastly refused to do in the past. The data show that relative to baseline there was a substantial decrease in the number of objects out-of-place once the contingency was in effect, although limitations in the design preclude absolute confidence that the intervention produced the reductions observed.

  1. Analysis of messy data with heteroscedastic in mean models

    NASA Astrophysics Data System (ADS)

    Trianasari, Nurvita; Sumarni, Cucu

    2016-02-01

    In data analysis we often face data that fail to meet the usual assumptions; such data are commonly called messy data. The problem typically arises from outliers, which bias the estimates. There are three approaches to analyzing messy data: standard analysis, data transformation, and non-standard methods. Simulations were conducted to compare the performance of three test procedures for comparing means when the variances are not homogeneous. For each scenario, 500 data sets were generated, and the mean comparisons were carried out with three methods: the Welch test, mixed models, and the Welch test on ranks (Welch-r), with data generation done in R version 3.1.2. Based on the simulation results, all three methods can be used in both the normal and the non-normal homoscedastic cases, and all three work very well on balanced or unbalanced data when the homogeneity-of-variance assumption is not violated. For balanced data, the three methods still perform excellently when that assumption is violated, provided the degree of heterogeneity is high: the power of all tests stays above 90 percent, with the Welch method (98.4%) and the Welch-r method (97.8%) performing best. For unbalanced data, the Welch method performs very well when variances are positively paired with group sizes (98.2% power), the mixed models method performs very well for negative pairing, and the Welch-r method works very well in both cases. When the heterogeneity of variance is very high, however, the power of all methods decreases, especially for mixed models; the methods that still work well enough (power above 50%) on balanced data are Welch-r (62.6%) and Welch (58.6%). On unbalanced data, Welch-r works well enough for both highly heterogeneous positive and negative pairings (68.8% and 51% power, respectively), Welch performs well enough only for highly heterogeneous positive pairing (64.8% power), and mixed models perform well for highly heterogeneous negative pairing (54.6% power). In general, when variances are not homogeneous, the Welch method applied to ranked data (Welch-r) performs better than the other methods.
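
    The two best-performing procedures are straightforward to reproduce in outline. The Python sketch below (the simulated two-group data are illustrative, not the paper's 500-replicate design) runs a Welch test and its rank-transformed variant (Welch-r) on heteroscedastic samples using SciPy.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        # Two groups with unequal variances (the heteroscedastic "messy" case)
        a = rng.normal(10.0, 1.0, size=25)
        b = rng.normal(10.8, 3.0, size=40)

        # Welch test: t-test without the equal-variance assumption
        t_w, p_w = stats.ttest_ind(a, b, equal_var=False)

        # Welch-r: apply the Welch test to the rank-transformed pooled data
        ranks = stats.rankdata(np.concatenate([a, b]))
        t_r, p_r = stats.ttest_ind(ranks[:len(a)], ranks[len(a):], equal_var=False)

        print(f"Welch:   t={t_w:.3f}, p={p_w:.4f}")
        print(f"Welch-r: t={t_r:.3f}, p={p_r:.4f}")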

  2. Thermal management methods for compact high power LED arrays

    NASA Astrophysics Data System (ADS)

    Christensen, Adam; Ha, Minseok; Graham, Samuel

    2007-09-01

    The package- and system-level temperature distributions of a high-power (>1 W) light emitting diode (LED) array have been investigated using numerical heat flow models. For this analysis, a thermal resistor network model was combined with a 3D finite element submodel of an LED structure to predict system- and die-level temperatures. The impacts of LED array density, LED power density, and active versus passive cooling methods on device operation were calculated. To help understand the role of the various thermal resistances in cooling such compact arrays, the thermal resistance network was analyzed to estimate the contributions from the materials as well as from the active and passive cooling schemes. Thermal stresses and residual stresses in the die were also calculated based on the power dissipation and convection heat transfer coefficients. Results show that the thermal stresses in the GaN layer are compressive, which can impact the band gap and performance of the LEDs.
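
    In its simplest series form, a thermal resistor network of the kind combined here with the finite element submodel reduces to summing the stage resistances and multiplying by the dissipated power. The Python sketch below uses illustrative resistance values, not the paper's; swapping the passive heat-sink resistance for a smaller one mimics the active-versus-passive cooling comparison.

        # Series thermal resistance network for one LED (illustrative values, K/W)
        resistances = {
            "junction_to_slug": 8.0,
            "die_attach": 2.5,
            "board": 4.0,
            "heat_sink_passive": 20.0,  # use a smaller value to model active cooling
        }

        power_w = 1.0        # dissipated power per LED (>1 W class device)
        t_ambient_c = 25.0

        r_total = sum(resistances.values())
        t_junction = t_ambient_c + power_w * r_total
        print(f"R_total = {r_total:.1f} K/W -> T_junction = {t_junction:.1f} C")

        # Contribution of each stage to the total temperature rise
        for name, r in resistances.items():
            print(f"  {name}: {power_w * r:.1f} K")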

  3. Modeling Soot Oxidation and Gasification with Bayesian Statistics

    DOE PAGES

    Josephson, Alexander J.; Gaffin, Neal D.; Smith, Sean T.; ...

    2017-08-22

    This paper presents a statistical method for model calibration using data collected from the literature. The method is used to calibrate parameters for global models of soot consumption in combustion systems. This consumption is broken into two different submodels: one for oxidation, where soot particles are attacked by certain oxidizing agents, and one for gasification, where soot particles are attacked by H2O or CO2 molecules. Rate data were collected from 19 studies in the literature and evaluated using Bayesian statistics to calibrate the model parameters. Bayesian statistics are valued for their ability to quantify uncertainty in modeling. The calibrated consumption model with quantified uncertainty is presented here along with a discussion of the associated implications. The oxidation results are found to be consistent with previous studies. Significant variation is found in the CO2 gasification rates.
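
    The calibration strategy can be illustrated with a toy example: a random-walk Metropolis sampler recovering the parameters of a single Arrhenius-form rate from noisy synthetic "literature" data. This Python sketch is a minimal stand-in with flat box priors and a fixed noise level; it does not reproduce the paper's global consumption submodels or priors.

        import numpy as np

        rng = np.random.default_rng(1)

        # Synthetic rate data: k = A * exp(-Ea / (R * T)), with scatter
        R = 8.314
        A_true, Ea_true = 1.0e5, 1.2e5
        T = rng.uniform(1200.0, 2000.0, size=40)
        log_k_obs = np.log(A_true) - Ea_true / (R * T) + rng.normal(0, 0.3, size=T.size)

        def log_post(theta):
            """Log-posterior: flat box priors plus a Gaussian likelihood."""
            log_A, Ea = theta
            if not (0.0 < log_A < 20.0 and 1.0e4 < Ea < 5.0e5):
                return -np.inf
            resid = log_k_obs - (log_A - Ea / (R * T))
            return -0.5 * np.sum((resid / 0.3) ** 2)

        # Random-walk Metropolis sampler
        theta = np.array([10.0, 1.0e5])
        lp = log_post(theta)
        chain = []
        for _ in range(20000):
            prop = theta + rng.normal(0, [0.05, 2.0e3])
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:
                theta, lp = prop, lp_prop
            chain.append(theta.copy())

        chain = np.array(chain[5000:])       # discard burn-in
        print("posterior mean log_A, Ea:", chain.mean(axis=0))
        print("posterior std  log_A, Ea:", chain.std(axis=0))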

  4. Modeling Soot Oxidation and Gasification with Bayesian Statistics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Josephson, Alexander J.; Gaffin, Neal D.; Smith, Sean T.

    This paper presents a statistical method for model calibration using data collected from the literature. The method is used to calibrate parameters for global models of soot consumption in combustion systems. This consumption is broken into two different submodels: one for oxidation, where soot particles are attacked by certain oxidizing agents, and one for gasification, where soot particles are attacked by H2O or CO2 molecules. Rate data were collected from 19 studies in the literature and evaluated using Bayesian statistics to calibrate the model parameters. Bayesian statistics are valued for their ability to quantify uncertainty in modeling. The calibrated consumption model with quantified uncertainty is presented here along with a discussion of the associated implications. The oxidation results are found to be consistent with previous studies. Significant variation is found in the CO2 gasification rates.

  5. Rosen's (M,R) system as an X-machine.

    PubMed

    Palmer, Michael L; Williams, Richard A; Gatherer, Derek

    2016-11-07

    Robert Rosen's (M,R) system is an abstract biological network architecture that is allegedly both irreducible to sub-models of its component states and non-computable on a Turing machine. (M,R) stands as an obstacle to both reductionist and mechanistic presentations of systems biology, principally due to its self-referential structure. If (M,R) has the properties claimed for it, computational systems biology will not be possible, or at best will be a science of approximate simulations rather than accurate models. Several attempts have been made, at both empirical and theoretical levels, to disprove this assertion by instantiating (M,R) in software architectures. So far, these efforts have been inconclusive. In this paper, we attempt to demonstrate why - by showing how both finite state machine and stream X-machine formal architectures fail to capture the self-referential requirements of (M,R). We then show that a solution may be found in communicating X-machines, which remove self-reference using parallel computation, and then synthesise such machine architectures with object-orientation to create a formal basis for future software instantiations of (M,R) systems. Copyright © 2016 Elsevier Ltd. All rights reserved.
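
    For readers unfamiliar with the formalism, a stream X-machine is a finite control whose transitions apply processing functions of type (memory, input) -> (output, memory). The toy machine below (Python; illustrative, and emphatically not an (M,R) instantiation) shows the basic structure that the paper argues cannot capture (M,R)'s self-reference.

        # Minimal stream X-machine sketch: a finite control over a memory M, where
        # each transition applies a processing function (memory, input) -> (output, memory).
        # The machine itself is a toy, not Rosen's (M,R) construction.

        def inc(mem, sym):    # processing function: accumulate into memory
            return f"acc={mem + sym}", mem + sym

        def reset(mem, sym):  # processing function: reset the memory
            return "reset", 0

        # state -> list of (input guard, processing function, next state)
        transitions = {
            "counting": [
                (lambda s: isinstance(s, int), inc, "counting"),
                (lambda s: s == "R", reset, "counting"),
            ],
        }

        def run(stream, state="counting", memory=0):
            outputs = []
            for sym in stream:
                for guard, phi, nxt in transitions[state]:
                    if guard(sym):
                        out, memory = phi(memory, sym)
                        outputs.append(out)
                        state = nxt
                        break
                else:
                    raise ValueError(f"no transition for {sym!r} in {state}")
            return outputs, memory

        print(run([1, 2, 3, "R", 5]))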

  6. Pressurization System Modeling for a Generic Bimese Two-Stage-to-Orbit Reusable Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Mazurkivich, Pete; Chandler, Frank; Nguyen, Han

    2005-01-01

    A pressurization system model was developed for a generic bimese Two-Stage-to-Orbit Reusable Launch Vehicle using a crossfeed system and operating with densified propellants. The model was based on the pressurization system model for a crossfeed subscale water test article and was validated with test data obtained from the test article. The model consists of the liquid oxygen and liquid hydrogen pressurization models, each made up of two submodels, the Booster and Orbiter tank pressurization models. The tanks are controlled within a 0.2-psi band and pressurized on the ground with ambient helium and autogenously in flight with gaseous oxygen and gaseous hydrogen. A 15-psi pressure difference is maintained between the Booster and Orbiter tanks to ensure crossfeed check valve closure before Booster separation. The analysis uses an ascent trajectory generated for a generic bimese vehicle and a tank configuration based on the Space Shuttle External Tank. It determines the flow rates required to pressurize the tanks on the ground and in flight, and demonstrates the model's capability to analyze the pressurization system performance of a full-scale bimese vehicle with densified propellants.
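
    The control logic described, a fixed band around a setpoint plus a constant Booster/Orbiter offset, can be sketched as simple hysteresis control. Only the 0.2-psi band and the 15-psi offset below come from the abstract; the setpoint value and function names are illustrative.

        # Bang-bang tank pressurization within a fixed control band (a sketch)
        def regulate(pressure_psi, setpoint_psi, band_psi=0.2, valve_open=False):
            """Open the pressurant valve below the band, close it above."""
            if pressure_psi < setpoint_psi - band_psi / 2:
                return True
            if pressure_psi > setpoint_psi + band_psi / 2:
                return False
            return valve_open   # inside the band: hold the current valve state

        orbiter_setpoint = 35.0                        # illustrative tank pressure, psia
        booster_setpoint = orbiter_setpoint + 15.0     # keeps crossfeed check valves seated

        print(regulate(34.85, orbiter_setpoint))       # True  -> pressurize
        print(regulate(35.15, orbiter_setpoint))       # False -> hold/vent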

  7. Pattern-oriented modelling: a ‘multi-scope’ for predictive systems ecology

    PubMed Central

    Grimm, Volker; Railsback, Steven F.

    2012-01-01

    Modern ecology recognizes that modelling systems across scales and at multiple levels—especially to link population and ecosystem dynamics to individual adaptive behaviour—is essential for making the science predictive. ‘Pattern-oriented modelling’ (POM) is a strategy for doing just this. POM is the multi-criteria design, selection and calibration of models of complex systems. POM starts with identifying a set of patterns observed at multiple scales and levels that characterize a system with respect to the particular problem being modelled; a model from which the patterns emerge should contain the right mechanisms to address the problem. These patterns are then used to (i) determine what scales, entities, variables and processes the model needs, (ii) test and select submodels to represent key low-level processes such as adaptive behaviour, and (iii) find useful parameter values during calibration. Patterns are already often used in these ways, but a mini-review of applications of POM confirms that making the selection and use of patterns more explicit and rigorous can facilitate the development of models with the right level of complexity to understand ecological systems and predict their response to novel conditions. PMID:22144392

  8. The TEF modeling and analysis approach to advance thermionic space power technology

    NASA Astrophysics Data System (ADS)

    Marshall, Albert C.

    1997-01-01

    Thermionic space power systems have been proposed as advanced power sources for future space missions that require electrical power levels significantly above the capabilities of current space power systems. The Defense Special Weapons Agency's (DSWA) Thermionic Evaluation Facility (TEF) is carrying out both experimental and analytical research to advance thermionic space power technology to meet this expected need. A Modeling and Analysis (M&A) project has been created at the TEF to develop analysis tools, evaluate concepts, and guide research. M&A activities are closely linked to the TEF experimental program, providing experiment support and using experimental data to validate models. A planning exercise has been completed for the M&A project, and a strategy for implementation was developed. All M&A activities will build on a framework provided by a system performance model for a baseline Thermionic Fuel Element (TFE) concept. The system model is composed of sub-models for each of the system components and sub-systems. Additional thermionic component options and model improvements will continue to be incorporated in the basic system model during the course of the program. All tasks are organized into four focus areas: 1) system models, 2) thermionic research, 3) alternative concepts, and 4) documentation and integration. The M&A project will provide a solid framework for future thermionic system development.

  9. A two-stage storage routing model for green roof runoff detention.

    PubMed

    Vesuviano, Gianni; Sonnenwald, Fred; Stovin, Virginia

    2014-01-01

    Green roofs have been adopted in urban drainage systems to control the total quantity and volumetric flow rate of runoff. Modern green roof designs are multi-layered, their main components being vegetation, substrate and, in almost all cases, a separate drainage layer. Most current hydrological models of green roofs combine the modelling of the separate layers into a single process; these models have limited predictive capability for roofs not sharing the same design. An adaptable, generic, two-stage model for a system consisting of a granular substrate over a hard plastic 'egg box'-style drainage layer and fibrous protection mat is presented. The substrate and drainage layer/protection mat are modelled separately by previously verified sub-models. Controlled storm events are applied to a green roof system in a rainfall simulator. The time-series modelled runoff is compared to the monitored runoff for each storm event. The modelled runoff profiles are accurate (mean Rt² = 0.971), but further characterization of the substrate component is required for the model to be generically applicable to other roof configurations with different substrates.
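
    A two-stage routing model of this general shape can be sketched by chaining two reservoirs, with the substrate stage feeding the drainage-layer stage. The linear-reservoir law used below (S = kQ) is a stand-in for the paper's previously verified sub-models, and the storm and k values are illustrative.

        import numpy as np

        def linear_reservoir(inflow, k, dt=60.0):
            """Route an inflow series (m3/s) through S = k*Q with explicit steps."""
            storage, out = 0.0, []
            for q_in in inflow:
                storage += (q_in - storage / k) * dt   # continuity: dS/dt = I - Q
                out.append(storage / k)                # linear outflow law Q = S/k
            return np.array(out)

        # Two-stage model: substrate reservoir feeding the drainage-layer reservoir.
        # k values (s) are illustrative, not the paper's fitted parameters.
        rain = np.r_[np.full(15, 0.002), np.zeros(45)]   # 15-minute block storm
        runoff = linear_reservoir(linear_reservoir(rain, k=900.0), k=300.0)
        print(f"peak in: {rain.max():.4f}  peak out: {runoff.max():.4f} m3/s")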

  10. A Qualitative Model of Human Interaction with Complex Dynamic Systems

    NASA Technical Reports Server (NTRS)

    Hess, Ronald A.

    1987-01-01

    A qualitative model describing human interaction with complex dynamic systems is developed. The model is hierarchical in nature and consists of three parts: a behavior generator, an internal model, and a sensory information processor. The behavior generator is responsible for action decomposition, turning higher level goals or missions into physical action at the human-machine interface. The internal model is an internal representation of the environment which the human is assumed to possess and is divided into four submodel categories. The sensory information processor is responsible for sensory composition. All three parts of the model act in concert to allow anticipatory behavior on the part of the human in goal-directed interaction with dynamic systems. Human workload and error are interpreted in this framework, and the familiar example of an automobile commute is used to illustrate the nature of the activity in the three model elements. Finally, with the qualitative model as a guide, verbal protocols from a manned simulation study of a helicopter instrument landing task are analyzed with particular emphasis on the effect of automation on human-machine performance.

  11. A qualitative model of human interaction with complex dynamic systems

    NASA Technical Reports Server (NTRS)

    Hess, Ronald A.

    1987-01-01

    A qualitative model describing human interaction with complex dynamic systems is developed. The model is hierarchical in nature and consists of three parts: a behavior generator, an internal model, and a sensory information processor. The behavior generator is responsible for action decomposition, turning higher level goals or missions into physical action at the human-machine interface. The internal model is an internal representation of the environment which the human is assumed to possess and is divided into four submodel categories. The sensory information processor is responsible for sensory composition. All three parts of the model act in concert to allow anticipatory behavior on the part of the human in goal-directed interaction with dynamic systems. Human workload and error are interpreted in this framework, and the familiar example of an automobile commute is used to illustrate the nature of the activity in the three model elements. Finally, with the qualitative model as a guide, verbal protocols from a manned simulation study of a helicopter instrument landing task are analyzed with particular emphasis on the effect of automation on human-machine performance.

  12. Combining Ecosystem Observations, Manipulations and the Data Assimilation Research Testbed to Provide New Insights into Community Land Model Performance

    NASA Astrophysics Data System (ADS)

    Foster, A.; Armstrong, A. H.; Shuman, J. K.; Ranson, K.; Shugart, H. H., Jr.; Rogers, B. M.; Goetz, S. J.

    2016-12-01

    Global temperatures have increased about 0.2°C per decade since 1979, and the high latitudes are warming faster than the rest of the globe. Climate change within Alaska is likely to bring about increased drought and longer fire seasons, as well as increases in the severity and frequency of fires. These changes in disturbance regimes and their associated effects on ecosystem C stocks, including permafrost, may lead to a positive feedback to further climate warming. As of now, it is uncertain how vegetation will respond to ongoing climate change, and the addition of disturbance effects leads to even more complicated and varied scenarios. Through ecological modeling, we have the capacity to examine forest processes at multiple temporal and spatial scales, allowing for the testing of complex interactions between vegetation, climate, and disturbances. The University of Virginia Forest Model Enhanced (UVAFME) is an individual tree-based forest model that has been updated for use in interior boreal Alaska, with a new permafrost model and updated fire simulation. These updated submodels allow for feedback between soils, vegetation, and fire severity through fuels tracking and impact of depth of burn on permafrost dynamics. We present these updated submodels as well as calibration and validation of UVAFME to the Yukon River Basin in Alaska, with comparisons to inventory data. We also present initial findings from simulations of potential future forest biomass, structure, and species composition across the Yukon River Basin under expected changes in precipitation, temperature, and disturbances. We predict changing climate and the associated impacts on wildfire and permafrost dynamics will result in shifts in biomass and species composition across the region, with potential for further feedback to the climate-vegetation-disturbance system. These simulations advance our understanding of the possible futures for the Alaskan boreal forest, which is a valuable part of the global carbon budget.

  13. Multi-Scale Modeling of Boreal Forest Vegetation Growth Under the Influence of Permafrost and Wildfire Interactions

    NASA Astrophysics Data System (ADS)

    Foster, A.; Armstrong, A. H.; Shuman, J. K.; Ranson, K.; Shugart, H. H., Jr.; Rogers, B. M.; Goetz, S. J.

    2017-12-01

    Global temperatures have increased about 0.2°C per decade since 1979, and the high latitudes are warming faster than the rest of the globe. Climate change within Alaska is likely to bring about increased drought and longer fire seasons, as well as increases in the severity and frequency of fires. These changes in disturbance regimes and their associated effects on ecosystem C stocks, including permafrost, may lead to a positive feedback to further climate warming. As of now, it is uncertain how vegetation will respond to ongoing climate change, and the addition of disturbance effects leads to even more complicated and varied scenarios. Through ecological modeling, we have the capacity to examine forest processes at multiple temporal and spatial scales, allowing for the testing of complex interactions between vegetation, climate, and disturbances. The University of Virginia Forest Model Enhanced (UVAFME) is an individual tree-based forest model that has been updated for use in interior boreal Alaska, with a new permafrost model and updated fire simulation. These updated submodels allow for feedback between soils, vegetation, and fire severity through fuels tracking and impact of depth of burn on permafrost dynamics. We present these updated submodels as well as calibration and validation of UVAFME to the Yukon River Basin in Alaska, with comparisons to inventory data. We also present initial findings from simulations of potential future forest biomass, structure, and species composition across the Yukon River Basin under expected changes in precipitation, temperature, and disturbances. We predict changing climate and the associated impacts on wildfire and permafrost dynamics will result in shifts in biomass and species composition across the region, with potential for further feedback to the climate-vegetation-disturbance system. These simulations advance our understanding of the possible futures for the Alaskan boreal forest, which is a valuable part of the global carbon budget.

  14. A model for estimating air-pollutant uptake by forests: calculation of absorption of sulfur dioxide from dispersed sources

    Treesearch

    C. E. Murphy, Jr.; T. R. Sinclair; K. R. Knoerr

    1977-01-01

    The computer model presented in this paper is designed to estimate the uptake of air pollutants by forests. The model utilizes submodels to describe atmospheric diffusion immediately above and within the canopy, and into the sink areas within or on the trees. The program implementing the model is general and can be used with only minor changes for any gaseous pollutant...

  15. Calibration and Propagation of Uncertainty for Independence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holland, Troy Michael; Kress, Joel David; Bhat, Kabekode Ghanasham

    This document reports on progress and methods for the calibration and uncertainty quantification of the Independence model developed at UT Austin. The Independence model is an advanced thermodynamic and process model framework for piperazine solutions as a high-performance CO2 capture solvent. Progress is presented within the CCSI standard basic data model inference framework. Recent work has largely focused on the thermodynamic submodels of Independence.

  16. DairyWise, a whole-farm dairy model.

    PubMed

    Schils, R L M; de Haan, M H A; Hemmer, J G A; van den Pol-van Dasselaar, A; de Boer, J A; Evers, A G; Holshof, G; van Middelkoop, J C; Zom, R L G

    2007-11-01

    A whole-farm dairy model was developed and evaluated. The DairyWise model is an empirical model that simulated technical, environmental, and financial processes on a dairy farm. The central component is the FeedSupply model that balanced the herd requirements, as generated by the DairyHerd model, and the supply of homegrown feeds, as generated by the crop models for grassland and corn silage. The output of the FeedSupply model was used as input for several technical, environmental, and economic submodels. The submodels simulated a range of farm aspects such as nitrogen and phosphorus cycling, nitrate leaching, ammonia emissions, greenhouse gas emissions, energy use, and a financial farm budget. The final output was a farm plan describing all material and nutrient flows and the consequences for the environment and economy. Evaluation of DairyWise was performed with 2 data sets consisting of 29 dairy farms. The evaluation showed that DairyWise was able to simulate gross margin, concentrate intake, nitrogen surplus, nitrate concentration in ground water, and crop yields. The variance accounted for ranged from 37 to 84%, and the mean differences between modeled and observed values varied between -5 and +3% per set of farms. We conclude that DairyWise is a powerful tool for integrated scenario development and evaluation for scientists, policy makers, extension workers, teachers, and farmers.
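
    The FeedSupply balancing step can be caricatured as a simple energy budget: homegrown feed is consumed first and any deficit is met by purchased concentrate. All names and numbers in the Python sketch below are illustrative, not DairyWise's.

        # Toy feed-supply balance in the spirit of the FeedSupply component:
        # meet the herd's energy requirement from homegrown feed first, then
        # purchase concentrate for any deficit. All numbers are illustrative.
        herd_requirement = 620_000        # MJ net energy per year
        homegrown = {"grass_silage": 400_000, "corn_silage": 150_000}  # MJ available

        supplied = sum(homegrown.values())
        deficit = max(0.0, herd_requirement - supplied)

        concentrate_energy = 7.0          # MJ per kg concentrate (illustrative)
        concentrate_kg = deficit / concentrate_energy
        print(f"homegrown: {supplied} MJ, deficit: {deficit} MJ "
              f"-> buy {concentrate_kg:.0f} kg concentrate")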

  17. Constraining 3-PG with a new δ13C submodel: a test using the δ13C of tree rings.

    PubMed

    Wei, Liang; Marshall, John D; Link, Timothy E; Kavanagh, Kathleen L; DU, Enhao; Pangle, Robert E; Gag, Peter J; Ubierna, Nerea

    2014-01-01

    A semi-mechanistic forest growth model, 3-PG (Physiological Principles Predicting Growth), was extended to calculate δ13C in tree rings. The δ13C estimates were based on the model's existing description of carbon assimilation and canopy conductance. The model was tested in two ~80-year-old natural stands of Abies grandis (grand fir) in northern Idaho. We used as many independent measurements as possible to parameterize the model. Measured parameters included quantum yield, specific leaf area, soil water content and litterfall rate. Predictions were compared with measurements of transpiration by sap flux, stem biomass, tree diameter growth, leaf area index and δ13C. Sensitivity analysis showed that the model's predictions of δ13C were sensitive to key parameters controlling carbon assimilation and canopy conductance, which would have allowed it to fail had the model been parameterized or programmed incorrectly. Instead, the simulated δ13C of tree rings was no different from measurements (P > 0.05). The δ13C submodel provides a convenient means of constraining parameter space and avoiding model artefacts. This δ13C test may be applied to any forest growth model that includes realistic simulations of carbon assimilation and transpiration. © 2013 John Wiley & Sons Ltd.
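
    The link from assimilation and conductance to tree-ring δ13C can be illustrated with the standard simple-form discrimination model (a textbook relation, not quoted from the paper's submodel): Fick's law gives ci from A and g, discrimination follows from ci/ca, and the plant signature follows from the air signature.

        # Hedged sketch of a Farquhar-type delta13C calculation from assimilation (A)
        # and canopy conductance (g), the two quantities 3-PG already tracks.
        a_frac, b_frac = 4.4, 27.0     # diffusion / carboxylation fractionation, permil
        delta_air = -8.0               # delta13C of atmospheric CO2, permil
        ca = 400.0                     # ambient CO2, ppm

        def delta13c(assim_umol, g_co2_mol):
            ci = ca - assim_umol / g_co2_mol              # Fick's law: A = g * (ca - ci)
            disc = a_frac + (b_frac - a_frac) * ci / ca   # simple-form discrimination
            return (delta_air - disc) / (1.0 + disc / 1000.0)

        print(f"{delta13c(assim_umol=12.0, g_co2_mol=0.1):.2f} permil")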

  18. Integrating GIS, cellular automata, and genetic algorithm in urban spatial optimization: a case study of Lanzhou

    NASA Astrophysics Data System (ADS)

    Xu, Xibao; Zhang, Jianming; Zhou, Xiaojian

    2006-10-01

    This paper presents a model integrating GIS, cellular automata (CA) and genetic algorithm (GA) in urban spatial optimization. The model involves three objectives: maximizing land-use efficiency, maximizing urban spatial harmony, and achieving an appropriate proportion of each land-use type. The CA submodel is designed with a standard Moore neighborhood and three transition rules to maximize land-use efficiency and urban spatial harmony, according to land-use suitability and a spatial harmony index. The GA submodel is designed with four constraints and seven steps (encoding, initialization, fitness calculation, selection, crossover, mutation, and elitism) to maximize urban spatial harmony and achieve an appropriate proportion of each land-use type. GIS is used to prepare the input data sets for the model and to perform spatial analysis on the results, while the CA and GA, programmed in Matlab 7 and loosely coupled with the GIS, are integrated to optimize the urban spatial structure. Lanzhou, a typical valley-basin city undergoing fast urban development, is chosen as the case study. Finally, a detailed analysis and evaluation of the spatial optimization are presented; the model proves to be a powerful tool for optimizing urban spatial structure and a useful supplement for urban planning and policy-making.
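
    A constrained CA update with a standard 3x3 Moore neighborhood, of the general kind described, can be sketched as below. The weighting of suitability against neighborhood support, the threshold, and the adjacency constraint are illustrative, not the paper's three calibrated transition rules.

        import numpy as np

        rng = np.random.default_rng(0)
        size = 50
        urban = rng.random((size, size)) < 0.05      # sparse initial urban seed
        suitability = rng.random((size, size))       # e.g., derived from GIS layers

        def step(urban, suitability, threshold=0.25):
            """One CA step: cells urbanize if suitability plus Moore-neighborhood
            support exceeds a threshold; growth requires an urban neighbor."""
            nxt = urban.copy()
            for i in range(1, size - 1):
                for j in range(1, size - 1):
                    if urban[i, j]:
                        continue
                    moore = int(urban[i-1:i+2, j-1:j+2].sum())  # center is non-urban
                    if moore == 0:
                        continue
                    score = 0.5 * suitability[i, j] + 0.5 * moore / 8.0
                    if score > threshold:
                        nxt[i, j] = True
            return nxt

        for _ in range(10):
            urban = step(urban, suitability)
        print("urban cells:", int(urban.sum()))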

  19. Numerical Simulation on Hydrodynamics and Combustion in a Circulating Fluidized Bed under O2/CO2 and Air Atmospheres

    NASA Astrophysics Data System (ADS)

    Zhou, W.; Zhao, C. S.; Duan, L. B.; Qu, C. R.; Lu, J. Y.; Chen, X. P.

    Oxy-fuel circulating fluidized bed (CFB) combustion technology is in the early stage of development for carbon capture and storage (CCS). Numerical simulation is helpful for better understanding the combustion process and will be significant for CFB scale-up. In this paper, a computational fluid dynamics (CFD) model was employed to simulate the hydrodynamics of gas-solid flow in a CFB riser based on the Eulerian-Granular multiphase model. The cold model predicted the main features of the complex gas-solid flow, including cluster formation of the solid phase along the walls and a flow structure of up-flow in the core and downward flow in the annular region. Furthermore, coal devolatilization, char combustion, and heat transfer were considered by coupling semi-empirical sub-models with the CFD model to establish a comprehensive model. The gas compositions and temperature profiles were predicted, and the outlet gas fractions were validated against experimental data from air combustion. With the experimentally validated model, the concentration and temperature distributions in O2/CO2 combustion were predicted. The model is useful for the further development of a comprehensive model including more sub-models, such as pollutant emissions, and for better understanding the combustion process in the furnace.

  20. Simulating wind and marine hydrokinetic turbines with actuator lines in RANS and LES

    NASA Astrophysics Data System (ADS)

    Bachant, Peter; Wosnik, Martin

    2015-11-01

    As wind and marine hydrokinetic (MHK) turbine designs mature, focus is shifting towards improving turbine array layouts for maximizing overall power output, i.e., minimizing wake interference for axial-flow or horizontal-axis turbines, or taking advantage of constructive wake interaction for cross-flow or vertical-axis turbines. Towards this goal, an actuator line model (ALM) was developed to provide a computationally feasible method for simulating full turbine arrays inside Navier-Stokes models. The ALM predicts turbine loading with the blade element method combined with sub-models for dynamic stall and flow curvature. The open-source software is written as an extension library for the OpenFOAM CFD package, which allows the ALM body force to be applied to its standard RANS and LES solvers. Turbine forcing is also applied to volume of fluid (VOF) models, e.g., for predicting free surface effects on submerged MHK devices. An additional sub-model is considered for injecting turbulence model scalar quantities based on actuator line element loading. Results are presented for the simulation of performance and wake dynamics of axial- and cross-flow turbines and compared with moderate Reynolds number experiments and body-fitted mesh, blade-resolving CFD. Work supported by NSF-CBET grant 1150797.
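
    The blade element method at the core of the ALM computes element loads from the local relative velocity and angle of attack. The sketch below shows that kernel for a single actuator-line element, with a thin-airfoil lift slope and a crude drag polar standing in for tabulated foil data and the dynamic stall and flow curvature sub-models.

        import numpy as np

        # Blade-element force sketch for one actuator-line element of a turbine
        # blade. Geometry and coefficient models are illustrative only.
        rho = 1000.0                 # water, kg/m3 (the MHK case)
        chord, span = 0.14, 0.05     # element chord and spanwise width, m

        def element_force(u_rel, alpha_rad):
            """Lift/drag per element from relative velocity and angle of attack."""
            cl = 2.0 * np.pi * alpha_rad        # thin-airfoil lift slope (pre-stall)
            cd = 0.01 + 0.05 * alpha_rad ** 2   # crude drag polar
            q = 0.5 * rho * u_rel ** 2 * chord * span
            return q * cl, q * cd               # lift, drag in N

        lift, drag = element_force(u_rel=3.0, alpha_rad=np.deg2rad(6.0))
        print(f"lift = {lift:.1f} N, drag = {drag:.2f} N")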
