Sample records for complexation models SCMs

  1. Simple Climate Model Evaluation Using Impulse Response Tests

    NASA Astrophysics Data System (ADS)

    Schwarber, A.; Hartin, C.; Smith, S. J.

    2017-12-01

    Simple climate models (SCMs) are central tools used to incorporate climate responses into human-Earth system modeling. SCMs are computationally inexpensive, making them an ideal tool for a variety of analyses, including consideration of uncertainty. Despite their wide use, many SCMs lack rigorous testing of their fundamental responses to perturbations. Here, following recommendations of a recent National Academy of Sciences report, we compare several SCMs (Hector-deoclim, MAGICC 5.3, MAGICC 6.0, and the IPCC AR5 impulse response function) to diagnose model behavior and understand the fundamental system responses within each model. We conduct stylized perturbations (emissions and forcing/concentration) of three different chemical species: CO2, CH4, and BC. We find that all four models respond similarly in terms of overall shape; however, there are important differences in the timing and magnitude of the responses. For example, the response to a BC pulse differs among the models over the first 20 years after the pulse, a finding that is due to differences in model structure. Such perturbation experiments are difficult to conduct in complex models because of internal model noise, making a direct comparison with simple models challenging. We can, however, compare the simplified model response from a 4xCO2 step experiment to the same stylized experiment carried out by CMIP5 models, thereby testing the ability of SCMs to emulate complex model results. This work allows an assessment of how well current understanding of Earth system responses is incorporated into multi-model frameworks by way of simple climate models.
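
    A minimal sketch of the stylized pulse experiment described above, assuming a two-timescale impulse response function (IRF) of the AR5 form; the coefficients and timescales below are illustrative placeholders, not those of any model compared here.

      import numpy as np

      # AR5-style temperature IRF: two decay timescales (values are
      # illustrative placeholders, not the published AR5 parameters).
      c = (0.6, 0.4)        # K per (W m^-2), partitioned over two modes
      tau = (8.0, 400.0)    # response timescales, years

      def temperature_response(forcing, dt=1.0):
          """Convolve a radiative-forcing series (W m^-2) with the IRF."""
          t = np.arange(len(forcing)) * dt
          irf = sum(ci / ti * np.exp(-t / ti) for ci, ti in zip(c, tau))
          return np.convolve(forcing, irf)[: len(forcing)] * dt

      # Stylized experiment: a one-year unit forcing pulse, analogous to
      # the BC pulse tests discussed in the abstract.
      forcing = np.zeros(100)
      forcing[0] = 1.0
      dT = temperature_response(forcing)
      print(f"peak warming {dT.max():.3f} K in year {dT.argmax()}")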

  2. Surface complexation modeling for predicting solid phase arsenic concentrations in the sediments of the Mississippi River Valley alluvial aquifer, Arkansas, USA

    USGS Publications Warehouse

    Sharif, M.S.U.; Davis, R.K.; Steele, K.F.; Kim, B.; Hays, P.D.; Kresse, T.M.; Fazio, J.A.

    2011-01-01

    The potential health impact of As in drinking water supply systems in the Mississippi River Valley alluvial aquifer in the state of Arkansas, USA is significant. In this context it is important to understand the occurrence, distribution and mobilization of As in the Mississippi River Valley alluvial aquifer. Application of surface complexation models (SCMs) to predict the sorption behavior of As onto hydrous Fe oxides (HFO) in the laboratory has increased in the last decade. However, the application of SCMs to predict the sorption of As in natural sediments has not often been reported, and such applications are greatly constrained by the lack of site-specific model parameters. Attempts have been made to use SCMs with a component additivity (CA) approach, which accounts for the relative abundances of pure phases in natural sediments and adds the SCM parameters individually for each phase. Although few reliable and internally consistent sorption databases related to HFO exist, the use of SCMs with laboratory-derived sorption databases to predict the mobility of As in natural sediments has increased. This study is an attempt to evaluate the ability of SCMs, implemented in the geochemical code PHREEQC, to predict solid-phase As in the sediments of the Mississippi River Valley alluvial aquifer in Arkansas. The SCM option of the double-layer model (DLM) was simulated using ferrihydrite and goethite as sorbents quantified from chemical extractions, calculated surface-site densities, published surface properties, and published laboratory-derived sorption constants for the sorbents. The model results are satisfactory for shallow wells (10.6 m below ground surface), where the redox condition is relatively oxic or mildly suboxic. However, for the deep alluvial aquifer (21-36.6 m below ground surface), where the redox condition is suboxic to anoxic, the model results are unsatisfactory. © 2011 Elsevier Ltd.
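
    A minimal sketch of the component-additivity (CA) idea described above, reduced to a one-site Langmuir-style mass-action law per sorbent; the site densities, solid loadings, and binding constants are illustrative placeholders, not the published DLM parameters, and a real application would solve the full double-layer model in PHREEQC.

      # Component additivity in miniature: each pure-phase sorbent
      # contributes sites independently; binding follows a Langmuir
      # isotherm. All constants below are illustrative assumptions.
      sorbents = {
          "ferrihydrite": {"sites_mol_per_g": 2e-3, "mass_g_per_L": 0.05, "logK": 6.0},
          "goethite":     {"sites_mol_per_g": 4e-4, "mass_g_per_L": 0.50, "logK": 5.2},
      }

      def sorbed_arsenic(c_aq):
          """Total sorbed As (mol/L) at free aqueous concentration c_aq (mol/L)."""
          total = 0.0
          for p in sorbents.values():
              site_total = p["sites_mol_per_g"] * p["mass_g_per_L"]
              K = 10.0 ** p["logK"]
              total += site_total * K * c_aq / (1.0 + K * c_aq)
          return total

      for c in (1e-8, 1e-7, 1e-6):
          print(f"free As {c:.0e} M -> sorbed {sorbed_arsenic(c):.2e} mol/L")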

  3. Applying ADLs to Assess Emerging Industry Specifications for Dynamic Discovery of Ad Hoc Network Services

    DTIC Science & Technology

    2001-01-31

    function of Jini, UPnP, SLP, Bluetooth, and HAVi • Projected specific UML models for Jini, UPnP, and SLP • Developed a Rapide model of Jini... is used by all JINI entities in directed-discovery mode. It is part of the SCM_Discovery module. Sends unicast messages to SCMs on list of... SCMs to be discovered until all SCMs are found. Receives updates from SCM DB of discovered SCMs and removes SCMs accordingly. NOTE

  4. Evaluation of Precipitation Simulated by Seven SCMs against the ARM Observations at the SGP Site

    NASA Technical Reports Server (NTRS)

    Song, Hua; Lin, Wuyin; Lin, Yanluan; Wolf, Audrey B.; Neggers, Roel; Donner, Leo J.; Del Genio, Anthony D.; Liu, Yangang

    2013-01-01

    This study evaluates the performance of seven single-column models (SCMs) by comparing simulated surface precipitation with observations at the Atmospheric Radiation Measurement Program Southern Great Plains (SGP) site from January 1999 to December 2001. Results show that although most SCMs can reproduce the observed precipitation reasonably well, there are significant and interesting differences in their details. In the cold season, the model-observation differences in the frequency and mean intensity of rain events tend to compensate each other for most SCMs. In the warm season, most SCMs produce more rain events in daytime than in nighttime, whereas the observations have more rain events in nighttime. The mean intensities of rain events in these SCMs are much stronger in daytime, but weaker in nighttime, than the observations. The higher frequency of rain events during warm-season daytime in most SCMs is related to the fact that most SCMs produce a spurious precipitation peak in the regime of weak vertical motion but abundant moisture. The models also show distinct biases between nighttime and daytime in simulating significant rain events. In nighttime, all the SCMs have a lower frequency of moderate-to-strong rain events than the observations for both seasons. In daytime, most SCMs have a higher frequency of moderate-to-strong rain events than the observations, especially in the warm season. Further analysis reveals distinct meteorological backgrounds for large underestimation and overestimation events: the former occur in strong ascending regimes with negative low-level horizontal heat and moisture advection, whereas the latter occur in weak or moderate ascending regimes with positive low-level horizontal heat and moisture advection.
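
    A minimal sketch of the frequency and mean-intensity decomposition used in this kind of evaluation, run on a synthetic hourly series standing in for the SGP observations; the rain threshold and the day/night split are assumptions.

      import numpy as np

      rng = np.random.default_rng(0)
      # Synthetic hourly precipitation (mm/h) for one year.
      n = 24 * 365
      precip = np.where(rng.random(n) < 0.05, rng.exponential(2.0, n), 0.0)
      hour = np.arange(n) % 24

      def freq_and_intensity(p, mask, threshold=0.1):
          """Frequency of rain events and their mean intensity within a mask."""
          raining = (p > threshold) & mask
          freq = raining.sum() / mask.sum()
          mean_int = p[raining].mean() if raining.any() else 0.0
          return freq, mean_int

      day = (hour >= 6) & (hour < 18)
      for label, mask in (("daytime", day), ("nighttime", ~day)):
          f, i = freq_and_intensity(precip, mask)
          print(f"{label}: frequency {f:.3f}, mean intensity {i:.2f} mm/h")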

  5. Modelling the ability of source control measures to reduce inundation risk in a community-scale urban drainage system

    NASA Astrophysics Data System (ADS)

    Mei, Chao; Liu, Jiahong; Wang, Hao; Shao, Weiwei; Xia, Lin; Xiang, Chenyao; Zhou, Jinjun

    2018-06-01

    Urban inundation is a serious challenge that increasingly confronts the residents of many cities, as well as policymakers, in the context of rapid urbanization and climate change worldwide. In recent years, source control measures (SCMs) such as green roofs, permeable pavements, rain gardens, and vegetative swales have been implemented to address flood inundation in urban settings and have proven to be cost-effective and sustainable. To investigate the ability of SCMs to reduce inundation in a community-scale urban drainage system, a dynamic rainfall-runoff model of such a system was developed based on SWMM. SCM implementation scenarios were modelled under six design rainstorm events with return periods ranging from 2 to 100 years, and inundation risks of the drainage system were evaluated before and after the proposed implementation of SCMs using a risk-evaluation method based on SWMM and the analytic hierarchy process (AHP). Results show that SCM implementation significantly reduced the hydrological indexes related to inundation risk: reduction rates of average flow, peak flow, and total flooded volume of the drainage system ranged over 28.1-72.1%, 19.0-69.2%, and 33.9-56.0%, respectively, across the six rainfall events. Correspondingly, the inundation risks of the drainage system were significantly reduced after SCM implementation, with risk values falling below 0.2 when the rainfall return period was less than 10 years. The simulation results confirm the effectiveness of SCMs in mitigating inundation and quantify their potential to reduce inundation risks in the urban drainage system, providing scientific references for implementing SCMs for inundation control in the study area.
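
    A minimal sketch of the analytic hierarchy process (AHP) step mentioned above: criterion weights are the normalized principal eigenvector of a pairwise-comparison matrix, with a consistency check. The matrix and the criteria it compares are illustrative assumptions, not the study's values.

      import numpy as np

      # Pairwise-comparison matrix for three hypothetical risk criteria,
      # e.g. flooded volume vs. peak flow vs. flood duration (assumed).
      A = np.array([[1.0, 3.0, 5.0],
                    [1/3, 1.0, 2.0],
                    [1/5, 1/2, 1.0]])

      eigvals, eigvecs = np.linalg.eig(A)
      k = np.argmax(eigvals.real)
      w = np.abs(eigvecs[:, k].real)
      w /= w.sum()                    # principal eigenvector -> AHP weights

      # Consistency ratio (random index RI = 0.58 for a 3x3 matrix).
      CI = (eigvals.real[k] - len(A)) / (len(A) - 1)
      print("weights:", np.round(w, 3), " CR:", round(CI / 0.58, 3))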

  6. Predicting Bacteria Removal by Enhanced Stormwater Control Measures (SCMs) at the Watershed Scale

    NASA Astrophysics Data System (ADS)

    Wolfand, J.; Bell, C. D.; Boehm, A. B.; Hogue, T. S.; Luthy, R. G.

    2017-12-01

    Urban stormwater is a major cause of water quality impairment, resulting in surface waters that fail to meet water quality standards and support their designated uses. Fecal indicator bacteria are present in high concentrations in stormwater and are strictly regulated in receiving waters; yet their fate and transport in urban stormwater are poorly understood. Stormwater control measures (SCMs) are often used to treat, infiltrate, and release urban runoff, but field measurements show that the removal of bacteria by these structural solutions is limited (median log removal = 0.24, n = 370). Researchers have therefore looked to improve bacterial removal by enhancing SCMs through alterations in flow regimes or the addition of geomedia such as biochar. The present research seeks to develop a model to predict removal of fecal indicator bacteria by enhanced SCMs at the watershed scale in a semi-arid climate. Using the highly developed Ballona Creek watershed (290 km2) in Los Angeles County as a case study, a hydrologic model is coupled with a stochastic water quality model to predict E. coli concentration near the outfall of Ballona Creek into Santa Monica Bay. The hydrologic model was developed using EPA SWMM, calibrated for flow over water years 1998-2006 (NSE = 0.94; R2 = 0.94), and validated over water years 2007-2015 (NSE = 0.90; R2 = 0.93). The bacterial loading model was then linked to EPA SUSTAIN and an SCM bacterial-removal script to simulate log removal of bacteria by various SCMs and predict bacterial concentrations in Ballona Creek. Preliminary results suggest small enhancements to SCMs that improve bacterial removal (<0.5 log removal) may offer large benefits to surface water quality and enable communities such as Los Angeles to meet their regulatory requirements.
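
    A minimal sketch of the stochastic log-removal step described above; the influent distribution and the spread around the field median of 0.24 log removal are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(42)
      n = 10_000

      # Influent E. coli (MPN/100 mL); the lognormal spread is assumed.
      influent = rng.lognormal(mean=np.log(5e3), sigma=1.0, size=n)

      def effluent(log_removal_median):
          """Apply a stochastic SCM log removal to the influent ensemble."""
          lr = rng.normal(loc=log_removal_median, scale=0.2, size=n).clip(min=0)
          return influent / 10.0 ** lr

      for lrm in (0.24, 0.74):   # field median vs. a 0.5-log enhancement
          out = effluent(lrm)
          print(f"log removal {lrm}: median effluent {np.median(out):.0f} MPN/100 mL")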

  7. To what extent can green infrastructure mitigate downstream flooding in a peri-urban catchment?

    NASA Astrophysics Data System (ADS)

    Schubert, J. E.; Burns, M.; Sanders, B. F.; Fletcher, T.

    2016-12-01

    In this research, we couple an urban hydrologic model (MUSIC, eWater, AUS) with a fine-resolution 2D hydrodynamic model (BreZo, UC Irvine, USA) to test to what extent retrofitting an urban watershed with stormwater control measures (SCMs) can propagate flood management benefits downstream. Our study site is the peri-urban Little Stringybark Creek (LSC) catchment in eastern Melbourne, AUS, with an area of 4.5 km2 and a connected impervious area of 9%. Urban development is mainly limited to the upper 2 km2 of the catchment. Since 2009 the LSC catchment has been the subject of a large-scale experiment aiming to restore more natural flow by implementing over 300 SCMs, such as rain tanks and infiltration trenches, so that runoff from 50% of connected impervious areas is now intercepted by some form of SCM. For our study we calibrated the hydrologic and hydraulic models based on current catchment conditions, then developed models representing alternative SCM scenarios, including a complete lack of SCMs versus a full implementation of SCMs. Flow in the hydrologic/hydraulic models is forced using a range of synthetic rainfall events with annual exceedance probabilities (AEPs) between 63% and 1% and durations from 10 min to 24 hr. Metrics of SCM efficacy in changing the flood regime include flood depths and extents, flow intensity (m2/s), flood duration, and the critical storm duration leading to maximum flood conditions. Results indicate that across the range of AEPs tested and for storm durations of 3 hours or less, current SCM conditions reduce downstream flooded area on average by 29%, while a full implementation of SCMs would reduce downstream flooded area on average by 91%. A full implementation of SCMs could also lower maximum flow intensities by 83% on average, reducing damage potential to structures in the flow path and increasing the ability of vehicles to evacuate flooded streets. We also found that for storm durations longer than 3 hours, the capacity of the SCMs to retain rainfall runoff volumes is much decreased, with a full implementation of SCMs reducing flooded area by only 8% and flow intensity by 5.5%. Therefore, additional measures are required for downstream flood hazard mitigation from long-duration events.

  8. Use of X-ray diffraction to quantify amorphous supplementary cementitious materials in anhydrous and hydrated blended cements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snellings, R., E-mail: ruben.snellings@epfl.ch; Salze, A.; Scrivener, K.L., E-mail: karen.scrivener@epfl.ch

    2014-10-15

    The content of individual amorphous supplementary cementitious materials (SCMs) in anhydrous and hydrated blended cements was quantified by the PONKCS [1] X-ray diffraction (XRD) method. The analytical precision and accuracy of the method were assessed through comparison to a series of mixes of known phase composition and of increasing complexity. A 2σ precision smaller than 2–3 wt.% and an accuracy better than 2 wt.% were achieved for SCMs in mixes with quartz, anhydrous Portland cement, and hydrated Portland cement. The extent of reaction of SCMs in hydrating binders measured by XRD was 1) internally consistent as confirmed through the standard addition method and 2) showed a linear correlation to the cumulative heat release as measured independently by isothermal conduction calorimetry. The advantages, limitations and applicability of the method are discussed with reference to existing methods that measure the degree of reaction of SCMs in blended cements.
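
    A minimal sketch of the second consistency check described above: regress the XRD-derived degree of SCM reaction against the cumulative calorimetric heat and report the linear correlation. All numbers are synthetic stand-ins.

      import numpy as np

      # Degree of SCM reaction from XRD (%) and cumulative heat from
      # isothermal calorimetry (J/g binder) at matching ages (synthetic).
      alpha_xrd = np.array([5.0, 12.0, 20.0, 31.0, 40.0])
      heat = np.array([18.0, 45.0, 71.0, 112.0, 146.0])

      slope, intercept = np.polyfit(heat, alpha_xrd, 1)
      r = np.corrcoef(heat, alpha_xrd)[0, 1]
      print(f"alpha = {slope:.3f} * Q + {intercept:.2f},  r = {r:.3f}")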

  9. Assessing Robustness Properties in Dynamic Discovery of Ad Hoc Network Services (Briefing Charts)

    DTIC Science & Technology

    2001-10-04

    JINI entities in directed-discovery mode. It is part of the SCM_Discovery module. Sends unicast messages to SCMs on list of SCMs to be... discovered until all SCMs are found. Receives updates from SCM DB of discovered SCMs and removes SCMs accordingly. NOTE: Failure and... For All (SM, SD, SCM): (SM, SD) IsElementOf SCM registered-services (CC1) implies SCM IsElementOf SM discovered-SCMs For All

  10. Development of a Model of Nitrogen Cycling in Stormwater Control Measures and Application of the Model at the Watershed Scale

    NASA Astrophysics Data System (ADS)

    Bell, C.; Tague, C.; McMillan, S. K.

    2016-12-01

    Stormwater control measures (SCMs) create ecosystems in urban watersheds that store water and promote nitrogen (N) retention and removal. This work used computer modeling at two spatial scales (the individual SCM and the watershed) to quantify how SCMs affect runoff and nitrogen export in urban watersheds. First, routines that simulate the dynamic hydrologic and water quality processes of an individual wet-pond SCM were developed and applied to quantify N processing under different environmental and design scenarios. Results showed that deeper SCMs have greater inorganic N removal efficiencies because they have a larger stored volume of relatively N-depleted water, and therefore a greater capacity to dilute relatively N-rich inflow. N removal by the SCM was more sensitive to this design parameter than to variations in air temperature, inflow N concentrations, and inflow volume. Next, these SCM model routines were used to simulate processes in a suburban watershed in Charlotte, NC with 16 SCMs. The watershed configuration was varied to simulate runoff under different scenarios of impervious-surface connectivity to SCMs, with the goal of developing a simple predictive relationship between watershed condition and N loads. We used unmitigated imperviousness (UI), the percent of the impervious area that is unmitigated by SCMs, to quantify watershed condition. Results showed that as SCM mitigation decreased, or as UI increased from 3% to 15%, runoff ratios and loads of nitrite and total dissolved N increased by 26% (21-32%), 14% (3-26%) and 13% (2-25%), respectively. The relationship between these response variables and UI was linear, which indicates that mitigation of any impervious surfaces will result in proportional reductions. However, the range of UI included in this study is on the low end for urban watersheds, and future work will assess the behavior of this relationship at higher total imperviousness (TI) and UI levels.
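
    A back-of-envelope reading of the linear UI-load relationship reported above, using only the endpoint values quoted in the abstract (the interpolation itself assumes the linearity the study reports).

      import numpy as np

      # From the abstract: UI rising from 3% to 15% increased the total
      # dissolved N load by ~13% relative to the UI = 3% baseline.
      ui = np.array([3.0, 15.0])
      tdn_increase = np.array([0.0, 13.0])   # percent change

      slope = np.diff(tdn_increase)[0] / np.diff(ui)[0]
      # Linearity implies each percentage point of UI mitigated removes
      # roughly this share of the load.
      print(f"~{slope:.2f}% TDN load change per percentage point of UI")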

  11. Intercomparison of methods of coupling between convection and large-scale circulation: 2. Comparison over nonuniform surface conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daleu, C. L.; Plant, R. S.; Woolnough, S. J.

    As part of an international intercomparison project, the weak temperature gradient (WTG) and damped gravity wave (DGW) methods are used to parameterize large-scale dynamics in a set of cloud-resolving models (CRMs) and single column models (SCMs). The WTG or DGW method is implemented using a configuration that couples a model to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. We investigated the sensitivity of each model to changes in SST, given a fixed reference state. We performed a systematic comparison of the WTG and DGW methods in different models, and a systematic comparison of the behavior of those models using the WTG method and the DGW method. The sensitivity to the SST depends on both the large-scale parameterization method and the choice of the cloud model. In general, SCMs display a wider range of behaviors than CRMs. All CRMs using either the WTG or DGW method show an increase of precipitation with SST, while SCMs show sensitivities which are not always monotonic. CRMs using either the WTG or DGW method show a similar relationship between mean precipitation rate and column-relative humidity, while SCMs exhibit a much wider range of behaviors. DGW simulations produce large-scale velocity profiles which are smoother and less top-heavy compared to those produced by the WTG simulations. Lastly, these large-scale parameterization methods provide a useful tool to identify the impact of parameterization differences on model behavior in the presence of two-way feedback between convection and the large-scale circulation.

  12. Single-Case Research Methods: History and Suitability for a Psychological Science in Need of Alternatives.

    PubMed

    Hurtado-Parrado, Camilo; López-López, Wilson

    2015-09-01

    This paper presents a historical and conceptual analysis of a group of research strategies known as the Single-Case Methods (SCMs). First, we present an overview of the SCMs, their history, and their major proponents. We will argue that the philosophical roots of SCMs can be found in the ideas of authors who recognized the importance of understanding both the generality and individuality of psychological functioning. Second, we will discuss the influence that the natural sciences' attitude toward measurement and experimentation has had on SCMs. Although this influence can be traced back to the early days of experimental psychology, during which incipient forms of SCMs appeared, SCMs reached full development during the subsequent advent of Behavior Analysis (BA). Third, we will show that despite the success of SCMs in BA and other (mainly applied) disciplines, these designs are currently not prominent in psychology. More importantly, they have been neglected as a possible alternative to one of the mainstream approaches in psychology, the Null Hypothesis Significance Testing (NHST), despite serious controversies about the limitations of this prevailing method. Our thesis throughout this section will be that SCMs should be considered as an alternative to NHST because many of the recommendations for improving the use of significance testing (Wilkinson & the TFSI, 1999) are main characteristics of SCMs. The paper finishes with a discussion of a number of the possible reasons why SCMs have been neglected.

  13. Intercomparison of methods of coupling between convection and large-scale circulation: 2. Comparison over nonuniform surface conditions

    DOE PAGES

    Daleu, C. L.; Plant, R. S.; Woolnough, S. J.; ...

    2016-03-18

    As part of an international intercomparison project, the weak temperature gradient (WTG) and damped gravity wave (DGW) methods are used to parameterize large-scale dynamics in a set of cloud-resolving models (CRMs) and single column models (SCMs). The WTG or DGW method is implemented using a configuration that couples a model to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. We investigated the sensitivity of each model to changes in SST, given a fixed reference state. We performed a systematic comparison of the WTG and DGW methods in different models, and a systematic comparison of the behavior of those models using the WTG method and the DGW method. The sensitivity to the SST depends on both the large-scale parameterization method and the choice of the cloud model. In general, SCMs display a wider range of behaviors than CRMs. All CRMs using either the WTG or DGW method show an increase of precipitation with SST, while SCMs show sensitivities which are not always monotonic. CRMs using either the WTG or DGW method show a similar relationship between mean precipitation rate and column-relative humidity, while SCMs exhibit a much wider range of behaviors. DGW simulations produce large-scale velocity profiles which are smoother and less top-heavy compared to those produced by the WTG simulations. Lastly, these large-scale parameterization methods provide a useful tool to identify the impact of parameterization differences on model behavior in the presence of two-way feedback between convection and the large-scale circulation.

  14. Intercomparison of methods of coupling between convection and large‐scale circulation: 2. Comparison over nonuniform surface conditions

    PubMed Central

    Daleu, C. L.; Plant, R. S.; Woolnough, S. J.; Sessions, S.; Herman, M. J.; Sobel, A.; Wang, S.; Kim, D.; Cheng, A.; Bellon, G.; Peyrille, P.; Ferry, F.; Siebesma, P.; van Ulft, L.

    2016-01-01

    As part of an international intercomparison project, the weak temperature gradient (WTG) and damped gravity wave (DGW) methods are used to parameterize large‐scale dynamics in a set of cloud‐resolving models (CRMs) and single column models (SCMs). The WTG or DGW method is implemented using a configuration that couples a model to a reference state defined with profiles obtained from the same model in radiative‐convective equilibrium. We investigated the sensitivity of each model to changes in SST, given a fixed reference state. We performed a systematic comparison of the WTG and DGW methods in different models, and a systematic comparison of the behavior of those models using the WTG method and the DGW method. The sensitivity to the SST depends on both the large‐scale parameterization method and the choice of the cloud model. In general, SCMs display a wider range of behaviors than CRMs. All CRMs using either the WTG or DGW method show an increase of precipitation with SST, while SCMs show sensitivities which are not always monotonic. CRMs using either the WTG or DGW method show a similar relationship between mean precipitation rate and column‐relative humidity, while SCMs exhibit a much wider range of behaviors. DGW simulations produce large‐scale velocity profiles which are smoother and less top‐heavy compared to those produced by the WTG simulations. These large‐scale parameterization methods provide a useful tool to identify the impact of parameterization differences on model behavior in the presence of two‐way feedback between convection and the large‐scale circulation. PMID:27642501

  15. Assessing the State-of-the-Art in Dynamic Discovery of Ad Hoc Network Services

    DTIC Science & Technology

    2001-07-18

    directed-discovery mode. It is part of the SCM_Discovery module. Sends unicast messages to SCMs on list of SCMs to be discovered until all... SCMs are found. Receives updates from SCM DB of discovered SCMs and removes SCMs accordingly. NOTE: Failure and recovery behavior are not... ALLFindService10 SM4 GROUP1GroupJoin10 SCM1 SM4LinkFail5 SM4NodeFail5 Parameters Command Time Topology Scenario Execute with Rapide For All (SM, SD, SCM

  16. Hydrologic response to stormwater control measures in urban watersheds

    NASA Astrophysics Data System (ADS)

    Bell, Colin D.; McMillan, Sara K.; Clinton, Sandra M.; Jefferson, Anne J.

    2016-10-01

    Stormwater control measures (SCMs) are designed to mitigate deleterious effects of urbanization on river networks, but our ability to predict the cumulative effect of multiple SCMs at watershed scales is limited. The most widely used metric to quantify impacts of urban development, total imperviousness (TI), does not contain information about the extent of stormwater control. We analyzed the discharge records of 16 urban watersheds in Charlotte, NC spanning a range of TI (4.1-54%) and area mitigated with SCMs (1.3-89%). We then tested multiple watershed metrics that quantify the degree of urban impact and SCM mitigation to determine which best predicted hydrologic response across sites. At the event time scale, linear models showed TI to be the best predictor of both peak unit discharge and rainfall-runoff ratios across a range of storm sizes. TI was also a strong driver of both a watershed's capacity to buffer small (e.g., 1-10 mm) rain events, and the relationship between peak discharge and precipitation once that buffering capacity is exceeded. Metrics containing information about SCMs did not appear as primary predictors of event hydrologic response, suggesting that the level of SCM mitigation in many urban watersheds is insufficient to influence hydrologic response. Over annual timescales, impervious surfaces unmitigated by SCMs and tree coverage were best correlated with streamflow flashiness and water yield, respectively. The shift in controls from the event scale to the annual scale has important implications for water resource management, suggesting that overall limitation of watershed imperviousness rather than partial mitigation by SCMs may be necessary to alleviate the hydrologic impacts of urbanization.
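
    A minimal sketch of the metric-screening step described above, with synthetic data standing in for the 16 study watersheds: regress an event response on each candidate metric and compare R-squared. The variable ranges follow the abstract; the response model is invented.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 16                                  # one row per watershed
      ti = rng.uniform(4.1, 54.0, n)          # total imperviousness, %
      scm_area = rng.uniform(1.3, 89.0, n)    # % of area mitigated by SCMs
      peak_q = 0.05 * ti + rng.normal(0, 0.3, n)   # synthetic response

      def r_squared(x, y):
          """R^2 of an ordinary least-squares line y ~ x."""
          yhat = np.polyval(np.polyfit(x, y, 1), x)
          return 1.0 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

      for name, x in (("TI", ti), ("SCM-mitigated area", scm_area)):
          print(f"{name}: R^2 = {r_squared(x, peak_q):.2f}")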

  17. Intercomparison of methods of coupling between convection and large-scale circulation. 1. Comparison over uniform surface conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daleu, C. L.; Plant, R. S.; Woolnough, S. J.

    Here, as part of an international intercomparison project, a set of single-column models (SCMs) and cloud-resolving models (CRMs) are run under the weak-temperature gradient (WTG) method and the damped gravity wave (DGW) method. For each model, the implementation of the WTG or DGW method involves a simulated column which is coupled to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. The simulated column has the same surface conditions as the reference state and is initialized with profiles from the reference state. We performed systematic comparison of the behavior of different models under a consistent implementation of the WTG method and the DGW method and systematic comparison of the WTG and DGW methods in models with different physics and numerics. CRMs and SCMs produce a variety of behaviors under both WTG and DGW methods. Some of the models reproduce the reference state while others sustain a large-scale circulation which results in either substantially lower or higher precipitation compared to the value of the reference state. CRMs show a fairly linear relationship between precipitation and circulation strength. SCMs display a wider range of behaviors than CRMs. Some SCMs under the WTG method produce zero precipitation. Within an individual SCM, a DGW simulation and a corresponding WTG simulation can produce different signed circulation. When initialized with a dry troposphere, DGW simulations always result in a precipitating equilibrium state. The greatest sensitivities to the initial moisture conditions occur for multiple stable equilibria in some WTG simulations, corresponding to either a dry equilibrium state when initialized as dry or a precipitating equilibrium state when initialized as moist. Multiple equilibria are seen in more WTG simulations for higher SST. In some models, the existence of multiple equilibria is sensitive to some parameters in the WTG calculations.

  18. Is there hope for multi-site complexation modeling?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bickmore, Barry R.; Rosso, Kevin M.; Mitchell, S. C.

    2006-06-06

    It has been shown here that the standard formulation of the MUSIC model does not deliver the molecular-scale insight into oxide surface reactions that it promises. The model does not properly divide long-range electrostatic and short-range contributions to acid-base reaction energies, and it does not treat solvation in a physically realistic manner. However, even if the current MUSIC model does not succeed in its ambitions, its ambitions are still reasonable. It was a pioneering attempt in that Hiemstra and coworkers recognized that intrinsic equilibrium constants, where the effects of long-range electrostatic effects have been removed, must be theoretically constrained prior to model fitting if there is to be any hope of obtaining molecular-scale insights from SCMs. We have also shown, on the other hand, that it may be premature to dismiss all valence-based models of acidity. Not only can some such models accurately predict intrinsic acidity constants, but they can also now be linked to the results of molecular dynamics simulations of solvated systems. Significant challenges remain for those interested in creating SCMs that are accurate at the molecular scale. It will only be after all model parameters can be predicted from theory, and the models validated against titration data, that we will be able to begin to have some confidence that we really are adequately describing the chemical systems in question.

  19. CGILS: Results from the First Phase of an International Project to Understand the Physical Mechanisms of Low Cloud Feedbacks in Single Column Models

    NASA Technical Reports Server (NTRS)

    Zhang, Minghua; Bretherton, Christopher S.; Blossey, Peter N.; Austin, Phillip H.; Bacmeister, Julio T.; Bony, Sandrine; Brient, Florent; Cheedela, Suvarchal K.; Cheng, Anning; DelGenio, Anthony; ...

    2013-01-01

    CGILS, the CFMIP-GASS Intercomparison of Large-Eddy Simulations (LES) and Single-Column Models (SCMs), investigates the mechanisms of cloud feedback in SCMs and LES models under an idealized climate change perturbation. This paper describes the CGILS results from 15 SCMs and 8 LES models. Three cloud regimes over the subtropical oceans are studied: shallow cumulus, cumulus under stratocumulus, and well-mixed coastal stratus/stratocumulus. In the stratocumulus and coastal stratus regimes, SCMs without activated shallow convection generally simulated negative cloud feedbacks, while models with active shallow convection generally simulated positive cloud feedbacks. In the shallow cumulus regime, this relationship is less clear, likely due to changes in cloud depth, lateral mixing, and precipitation, or a combination of these. The majority of LES models simulated negative cloud feedback in the well-mixed coastal stratus/stratocumulus regime, and positive feedback in the shallow cumulus and stratocumulus regimes. A general framework is provided to interpret the SCM results: in a warmer climate, the moistening rate of the cloudy layer associated with the surface-based turbulence parameterization is enhanced; together with weaker large-scale subsidence, this causes negative cloud feedback. In contrast, in the warmer climate, the drying rate associated with the shallow convection scheme is enhanced, causing positive cloud feedback. These mechanisms are summarized as the "NESTS" negative cloud feedback and the "SCOPE" positive cloud feedback (Negative feedback from Surface Turbulence under weaker Subsidence; Shallow Convection PositivE feedback), with the net cloud feedback depending on how the two opposing effects counteract each other. The LES results are consistent with these interpretations.

  20. Security credentials management system (SCMS) design and analysis for the connected vehicle system : draft.

    DOT National Transportation Integrated Search

    2013-12-27

    This report presents an analysis by Booz Allen Hamilton (Booz Allen) of the technical design for the Security Credentials Management System (SCMS) intended to support communications security for the connected vehicle system. The SCMS technical design...

  1. Surface-Cross-Linked Micelles as Multifunctionalized Organic Nanoparticles for Controlled Release, Light Harvesting, and Catalysis

    PubMed Central

    2016-01-01

    Surfactant micelles are dynamic entities with a rapid exchange of monomers. By “clicking” tripropargylammonium-containing surfactants with diazide cross-linkers, we obtained surface-cross-linked micelles (SCMs) that could be multifunctionalized for different applications. They triggered membrane fusion through tunable electrostatic interactions with lipid bilayers. Antenna chromophores could be installed on them to create artificial light-harvesting complexes with efficient energy migration among tens to hundreds of chromophores. When cleavable cross-linkers were used, the SCMs could break apart in response to redox or pH signals, ejecting entrapped contents quickly as a result of built-in electrostatic stress. They served as caged surfactants whose surface activity was turned on by environmental stimuli. They crossed cell membranes readily. Encapsulated fluorophores showed enhanced photophysical properties including improved quantum yields and greatly expanded Stokes shifts. Catalytic groups could be installed on the surface or in the interior, covalently attached or physically entrapped. As enzyme mimics, the SCMs enabled rational engineering of the microenvironment around the catalysts to afford activity and selectivity not possible with conventional catalysts. PMID:27181610

  2. Performance of Service-Discovery Architectures in Response to Node Failures

    DTIC Science & Technology

    2003-06-01

    cache manager (SCM). Multiple SCMs can be used to mitigate the effect of SCM failure. In both architectures, service discovery occurs passively, via... employed, the SCM operates as an intermediary, matching advertised SDs of SMs to SD requirements provided by SUs. In this study, each SM manages one SP... architecture in our experimental topology: with 12 SMs, one SU, and up to three SCMs. To animate our three-party model, we chose discovery behaviors from the

  3. Intercomparison of the capabilities of simplified climate models to project the effects of aviation CO2 on climate

    NASA Astrophysics Data System (ADS)

    Khodayari, Arezoo; Wuebbles, Donald J.; Olsen, Seth C.; Fuglestvedt, Jan S.; Berntsen, Terje; Lund, Marianne T.; Waitz, Ian; Wolfe, Philip; Forster, Piers M.; Meinshausen, Malte; Lee, David S.; Lim, Ling L.

    2013-08-01

    This study evaluates the capabilities of the carbon cycle and energy balance treatments, relative to the effect of aviation CO2 emissions on climate, in several existing simplified climate models (SCMs) that are either being used or could be used for evaluating the effects of aviation on climate. Since these models are used in policy-related analyses, it is important that their capabilities represent the state of understanding of the science. We compare the Aviation Environmental Portfolio Management Tool (APMT) Impacts climate model, two models used at the Center for International Climate and Environmental Research-Oslo (CICERO-1 and CICERO-2), the Integrated Science Assessment Model (ISAM) as described in Jain et al. (1994), the simple linear climate response model LinClim, and the Model for the Assessment of Greenhouse-gas Induced Climate Change version 6 (MAGICC6). In this paper we select scenarios to illustrate the behavior of the carbon cycle and energy balance models in these SCMs. This study is not intended to determine the absolute and likely range of the expected climate response in these models but to highlight specific features in model representations of the carbon cycle and energy balance that need to be carefully considered in studies of aviation effects on climate. The results suggest that carbon cycle models that use linear impulse-response functions (IRFs) in combination with separate equations describing air-sea and air-biosphere exchange of CO2 can account for the dominant nonlinearities in the climate system that would otherwise not be captured by an IRF alone, and hence produce a close representation of more complex carbon cycle models. Moreover, the results suggest that an energy balance model with a 2-box ocean sub-model and an IRF tuned to reproduce the response of coupled Earth system models produces a close representation of the globally averaged temperature response of more complex energy balance models.
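
    The linear impulse-response-function structure discussed above can be written generically (a sketch in generic notation; the number of modes n, the fractions a_i, and the timescales tau_i are model-specific): the atmospheric CO2 perturbation is the convolution of emissions E with a multi-exponential response function,

      \Delta C(t) = \int_0^{t} E(t')\, G_C(t - t')\, \mathrm{d}t',
      \qquad
      G_C(t) = a_0 + \sum_{i=1}^{n} a_i\, e^{-t/\tau_i},
      \qquad
      \sum_{i=0}^{n} a_i = 1,

    and the temperature response follows from a second convolution of the resulting radiative forcing with the energy-balance IRF.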

  4. Intercomparison of methods of coupling between convection and large-scale circulation. 1. Comparison over uniform surface conditions

    DOE PAGES

    Daleu, C. L.; Plant, R. S.; Woolnough, S. J.; ...

    2015-10-24

    Here, as part of an international intercomparison project, a set of single-column models (SCMs) and cloud-resolving models (CRMs) are run under the weak-temperature gradient (WTG) method and the damped gravity wave (DGW) method. For each model, the implementation of the WTG or DGW method involves a simulated column which is coupled to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. The simulated column has the same surface conditions as the reference state and is initialized with profiles from the reference state. We performed systematic comparison of the behavior of different models under a consistent implementation of the WTG method and the DGW method and systematic comparison of the WTG and DGW methods in models with different physics and numerics. CRMs and SCMs produce a variety of behaviors under both WTG and DGW methods. Some of the models reproduce the reference state while others sustain a large-scale circulation which results in either substantially lower or higher precipitation compared to the value of the reference state. CRMs show a fairly linear relationship between precipitation and circulation strength. SCMs display a wider range of behaviors than CRMs. Some SCMs under the WTG method produce zero precipitation. Within an individual SCM, a DGW simulation and a corresponding WTG simulation can produce different signed circulation. When initialized with a dry troposphere, DGW simulations always result in a precipitating equilibrium state. The greatest sensitivities to the initial moisture conditions occur for multiple stable equilibria in some WTG simulations, corresponding to either a dry equilibrium state when initialized as dry or a precipitating equilibrium state when initialized as moist. Multiple equilibria are seen in more WTG simulations for higher SST. In some models, the existence of multiple equilibria is sensitive to some parameters in the WTG calculations.
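
    In schematic form (generic notation; the participating models differ in the details of their implementations), the WTG method diagnoses the large-scale vertical velocity by relaxing the simulated potential temperature toward the reference profile over a timescale tau, while the DGW method obtains the pressure velocity from a damped gravity-wave equation driven by the virtual temperature anomaly:

      w_{\mathrm{WTG}}(z) = \frac{\theta(z) - \theta_{\mathrm{ref}}(z)}{\tau\, \partial\theta_{\mathrm{ref}}/\partial z},
      \qquad
      \frac{\partial^2 (\epsilon\,\omega)}{\partial p^2} = \frac{k^2 R}{p}\left(T_v - T_{v,\mathrm{ref}}\right),

    where epsilon is a damping rate, k a horizontal wavenumber, and R the gas constant for dry air; the diagnosed vertical velocity supplies the large-scale advective forcing applied to the simulated column.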

  5. Thermodynamic analysis of Bacillus subtilis endospore protonation using isothermal titration calorimetry

    NASA Astrophysics Data System (ADS)

    Harrold, Zoë R.; Gorman-Lewis, Drew

    2013-05-01

    Bacterial proton and metal adsorption reactions have the capacity to affect metal speciation and transport in aqueous environments. We coupled potentiometric titration and isothermal titration calorimetry (ITC) analyses to study Bacillus subtilis spore-proton adsorption. We modeled the potentiometric data using four- and five-site non-electrostatic surface complexation models (NE-SCMs). Heats of spore surface protonation from coupled ITC analyses were used to determine site-specific enthalpies of protonation based on the NE-SCMs. The five-site model resulted in a substantially better fit to the heats of protonation but did not significantly improve the potentiometric titration fit. The improvement observed in the five-site protonation heat model suggests the presence of a highly exothermic protonation reaction circa pH 7 that cannot be resolved in the less sensitive potentiometric data. From the log Ks and enthalpies we calculated the corresponding site-specific entropies. Log Ks and site concentrations describing spore surface protonation are statistically equivalent to B. subtilis cell surface protonation constants. Spore surface protonation enthalpies, however, are more exothermic relative to cell-based adsorption, suggesting a different bonding environment. The thermodynamic parameters defined in this study provide insight into molecular-scale spore-surface protonation reactions. Coupled ITC and potentiometric titrations can reveal highly exothermic, and possibly endothermic, adsorption reactions that are overshadowed in potentiometric models alone. The spore-proton adsorption NE-SCMs derived in this study provide a framework for future metal adsorption studies.
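
    The site-specific entropy calculation described above follows from standard thermodynamic relations: with a protonation constant K from the potentiometric NE-SCM fit and the protonation enthalpy from ITC,

      \Delta G^{\circ} = -RT\ln K = \Delta H^{\circ} - T\,\Delta S^{\circ}
      \quad\Longrightarrow\quad
      \Delta S^{\circ} = \frac{\Delta H^{\circ} + RT\ln K}{T},

    so each site's entropy is fixed once its log K and enthalpy are known.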

  6. Stimuli-responsive cross-linked micelles for on-demand drug delivery against cancers

    PubMed Central

    Li, Yuanpei; Xiao, Kai; Zhu, Wei; Deng, Wenbin; Lam, Kit S.

    2013-01-01

    Stimuli-responsive cross-linked micelles (SCMs) represent an ideal nanocarrier system for drug delivery against cancers. SCMs exhibit superior structural stability compared to their non-crosslinked counterparts, and these nanocarriers are therefore able to minimize premature drug release during blood circulation. The introduction of environmentally sensitive crosslinkers or assembly units makes SCMs responsive to single or multiple stimuli present in the local tumor microenvironment or to exogenously applied stimuli. In these instances, the payload drug is released almost exclusively in cancerous tissue or cancer cells upon accumulation via the enhanced permeability and retention effect or receptor-mediated endocytosis. In this review, we highlight recent advances in the development of SCMs for cancer therapy. We also introduce the latest biophysical techniques, such as electron paramagnetic resonance (EPR) spectroscopy and fluorescence resonance energy transfer (FRET), for the characterization of the interactions between SCMs and blood proteins. PMID:24060922

  7. Surface Complexation Modeling of U(VI) Adsorption onto Savannah River Site Sediments

    NASA Astrophysics Data System (ADS)

    Dong, W.; Wan, J.; Tokunaga, T. K.; Denham, M.; Davis, J.; Hubbard, S. S.

    2011-12-01

    The Savannah River Site (SRS) was a U.S. Department of Energy facility for plutonium production during the Cold War. Waste plumes containing low-level radioactivity and acidic waste solutions were discharged to a series of unlined seepage basins in the F-Area of the SRS from 1955 to 1988. Although the site has undergone many years of active remediation, the groundwater remains acidic, and the concentrations of U and other radionuclides are still significantly higher than their Maximum Contaminant Levels (MCLs). The objective of this effort is to understand and predict U(VI) mobility in acidic waste plumes by developing surface complexation models (SCMs). Laboratory batch experiments were conducted to evaluate U adsorption behavior over the pH range of 3.0 to 9.5. Ten sorbent samples were selected, including six contaminated sediment samples from three boreholes drilled within the plume and along the groundwater flow direction, two uncontaminated (pristine) sediment samples from a borehole outside of the plume, and two reference minerals, goethite and kaolinite (identified as the dominant minerals in the clay-size fraction of the F-Area sediments). The results show that goethite and kaolinite largely control U partitioning behavior. In comparison with the pristine sediment, U(VI) adsorption onto contaminated sediments exhibits adsorption edges shifted toward lower pH by about 1.0 unit (e.g., from pH≈4.5 to pH≈3.5). We developed an SCM-based component additivity (CA) approach, which can successfully predict U(VI) adsorption onto uncontaminated SRS sediments. However, application of the same SCM-based CA approach to contaminated sediments resulted in underestimates of U(VI) adsorption under acidic pH conditions. The model sensitivity analyses indicate that goethite and kaolinite surfaces both contributed to U(VI) adsorption under acidic pH conditions. In particular, the exchange sites of clay minerals might play an important role in adsorption of U(VI) at pH < 5.0. These results suggest that the contaminated sediments might either contain other more reactive clay minerals such as smectite, or that the long-term acid-leaching process might have altered the surface reactivity of the original sediments. Further studies are needed to identify more reactive mineral facies and understand the effects of acid leaching on the surface reactivity of the sediments.

  8. The Ulysses spacecraft control and monitoring concepts and realities

    NASA Technical Reports Server (NTRS)

    Hamer, Paul; Angold, Nigel

    1993-01-01

    Ulysses is a joint ESA-NASA mission whose primary purpose is to make scientific measurements of the Sun outside the plane of the ecliptic. The delay in launching Ulysses, due to the Challenger disaster, meant that the hardware on which the Spacecraft Control and Monitoring System (SCMS) resides was becoming obsolete, and it was decided to convert SCMS to run on a DEC/VAX machine under VMS. The paper will cover the spacecraft, the conversion, the converted SCMS, problems found, and the upgrades implemented as solutions. It will also discuss the future for, and enhancements already made to, the converted SCMS.

  9. Comparison of sediment and nutrient export and runoff characteristics from watersheds with centralized versus distributed stormwater management

    USGS Publications Warehouse

    Hopkins, Kristina G.; Loperfido, J.V.; Craig, Laura S.; Noe, Gregory; Hogan, Dianna

    2017-01-01

    Stormwater control measures (SCMs) are used to retain stormwater and pollutants. SCMs have traditionally been installed in a centralized manner using detention to mitigate peak flows. Recently, distributed SCM networks that treat runoff near the source have been increasingly utilized. The aim of this study was to evaluate differences among watersheds that vary in SCM arrangement by assessing differences in baseflow nutrient (NOx-N and PO4−) concentrations and fluxes, stormflow export of suspended sediments and particulate phosphorus (PP), and runoff characteristics. A paired watershed approach was used to compare export between 2004 and 2016 from one forested watershed (For-MD), one suburban watershed with centralized SCMs (Cent-MD), and one suburban watershed with distributed SCMs (Dist-MD). Results indicated baseflow nitrate (NOx-N) concentrations typically exceeded 1 mg-N/L in all watersheds and were highest in Dist-MD. Over the last 10 years in Dist-MD, nitrate concentrations in both stream baseflow and in a groundwater well declined as land use shifted from agriculture to suburban. Baseflow nitrate export temporarily increased during the construction phase of SCM development in Dist-MD. This temporary pulse of nitrate may be attributed to the conversion of sediment control facilities to SCMs and increased subsurface flushing as infiltration SCMs came on line. During storm flow, Dist-MD tended to have less runoff and lower maximum specific discharge than Cent-MD for small events (<1.3 cm), but runoff responses became increasingly similar to Cent-MD with increasing precipitation (>1.3 cm). Mass export estimated during paired storm events indicated Dist-MD exported 30% less sediment and 31% more PP than Cent-MD. For large precipitation events, export of sediment and PP was similar among all three watersheds. Results suggest that distributed SCMs can reduce runoff and sediment loads during small rain events compared to centralized SCMs, but these differences become less evident for large events when peak discharge likely leads to substantial bank erosion.

  10. The effect of particle size distribution on the design of urban stormwater control measures

    USGS Publications Warehouse

    Selbig, William R.; Fienen, Michael N.; Horwatich, Judy A.; Bannerman, Roger T.

    2016-01-01

    An urban pollutant loading model was used to demonstrate how incorrect assumptions on the particle size distribution (PSD) in urban runoff can alter the design characteristics of stormwater control measures (SCMs) used to remove solids in stormwater. Field-measured PSD, although highly variable, is generally coarser than the widely-accepted PSD characterized by the Nationwide Urban Runoff Program (NURP). PSDs can be predicted based on environmental surrogate data. There were no appreciable differences in predicted PSD when grouped by season. Model simulations of a wet detention pond and catch basin showed a much smaller surface area is needed to achieve the same level of solids removal using the median value of field-measured PSD as compared to NURP PSD. Therefore, SCMs that used the NURP PSD in the design process could be unnecessarily oversized. The median of measured PSDs, although more site-specific than NURP PSDs, could still misrepresent the efficiency of an SCM because it may not adequately capture the variability of individual runoff events. Future pollutant loading models may account for this variability through regression with environmental surrogates, but until then, without proper site characterization, the adoption of a single PSD to represent all runoff conditions may result in SCMs that are under- or over-sized, rendering them ineffective or unnecessarily costly.
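
    A minimal sketch of why the assumed PSD matters for sizing: under Stokes' law the settling velocity scales with the square of particle diameter, so a coarser field-measured PSD settles far faster than a fine NURP-like PSD and needs less pond surface area. The particle density and water properties below are assumptions.

      # Stokes settling velocity for small particles in water near 20 C.
      g = 9.81        # gravitational acceleration, m/s^2
      rho_p = 2650.0  # particle density, kg/m^3 (quartz-like, assumed)
      rho_w = 998.0   # water density, kg/m^3
      mu = 1.0e-3     # dynamic viscosity, Pa s

      def settling_velocity(d_um):
          """Settling velocity (m/h) for a particle of diameter d_um (micrometres)."""
          d = d_um * 1e-6
          return (g * (rho_p - rho_w) * d ** 2 / (18.0 * mu)) * 3600.0

      for d in (10, 50, 100):   # finer (NURP-like) to coarser (field-measured)
          print(f"{d:>3} um particle settles ~{settling_velocity(d):.2f} m/h")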

  11. Psychometric properties of the Social Comparison Motives Scale.

    PubMed

    Tigges, Beth Baldwin

    2009-01-01

    This article describes the 19-item Social Comparison Motives Scale (SCMS), a measure of adolescents' motives for social comparison related to pregnancy. Dimensions and items were developed based on adolescent focus groups. The instrument was reviewed for content validity, pilot tested, and administered to 431 adolescents aged 14-18 years. Principal axis factor analysis with oblique rotation supported five dimensions. Convergent and discriminant validity were demonstrated by moderate correlations (r = .50) between the SCMS and the Iowa-Netherlands Comparison Orientation Measure and low correlations (r = .15) between the SCMS and the Rosenberg Self-Esteem Scale. Cronbach's alphas were .91 overall and .71 to .85 for the subscales. The SCMS demonstrated reliability and validity as a measure of adolescents' motives for comparing themselves with others about pregnancy.
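
    A minimal sketch of the reliability statistic reported above, computed on synthetic item scores; the data-generating model is an assumption for illustration.

      import numpy as np

      def cronbach_alpha(items):
          """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
          items = np.asarray(items, dtype=float)
          k = items.shape[1]
          item_vars = items.var(axis=0, ddof=1).sum()
          total_var = items.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1.0 - item_vars / total_var)

      rng = np.random.default_rng(0)
      latent = rng.normal(size=(431, 1))     # shared trait across 19 items
      scores = latent + rng.normal(scale=0.8, size=(431, 19))
      print(f"alpha = {cronbach_alpha(scores):.2f}")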

  12. A Case Study on Nitrogen Uptake and Denitrification in a ...

    EPA Pesticide Factsheets

    Restoring urban infrastructure and managing the nitrogen cycle represent emerging challenges for urban water quality. We investigated whether stormwater control measures (SCMs), a form of green infrastructure, integrated into restored and degraded urban stream networks can influence watershed nitrogen loads. We hypothesized that hydrologically connected floodplains and SCMs are “hot spots” for nitrogen removal through denitrification because they have ample organic carbon, low dissolved oxygen levels, and extended hydrologic residence times. We tested this hypothesis by comparing nitrogen retention metrics in two urban stream networks (one restored and one urban degraded) that each contain SCMs, and a forested reference watershed at the Baltimore Long-Term Ecological Research site. We used an urban watershed continuum approach which included sampling over both space and time with a combination of: (1) longitudinal reach-scale mass balances of nitrogen and carbon conducted over 2 years during baseflow and storms (n = 24 sampling dates × 15 stream reaches = 360) and (2) 15N push–pull tracer experiments to measure in situ denitrification in SCMs and floodplain features (n = 72). The SCMs consisted of inline wetlands installed below a storm drain outfall at one urban site (restored Spring Branch) and a wetland/wet pond configured in an oxbow design to receive water during high flow events at another highly urbanized site (Gwynns Run). The SCMs significantly d

  13. Redox-sensitive shell-crosslinked polypeptide-block-polysaccharide micelles for efficient intracellular anticancer drug delivery.

    PubMed

    Zhang, Aiping; Zhang, Zhe; Shi, Fenghua; Xiao, Chunsheng; Ding, Jianxun; Zhuang, Xiuli; He, Chaoliang; Chen, Li; Chen, Xuesi

    2013-09-01

    Redox-responsive SCMs based on amphiphilic PBLG-b-dextran with good biocompatibility are synthesized and used for efficient intracellular drug delivery. The molecular structures and characteristics of the SCMs are characterized by 1H NMR, FT-IR, TEM, and DLS. The hydrodynamic radius of the SCMs increases gradually in PBS owing to cleavage of the disulfide bonds in the micellar shell in the presence of GSH. The encapsulation efficiency and release kinetics of DOX are investigated. The fastest DOX release is observed under intracellular-mimicking reductive environments. An MTT assay demonstrates that DOX-loaded SCMs show higher cellular proliferation inhibition against GSH-OEt-pretreated HeLa and HepG2 cells than against non-pretreated and BSO-pretreated ones. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Comparison of sediment and nutrient export and runoff characteristics from watersheds with centralized versus distributed stormwater management.

    PubMed

    Hopkins, Kristina G; Loperfido, J V; Craig, Laura S; Noe, Gregory B; Hogan, Dianna M

    2017-12-01

    Stormwater control measures (SCMs) are used to retain stormwater and pollutants. SCMs have traditionally been installed in a centralized manner using detention to mitigate peak flows. Recently, distributed SCM networks that treat runoff near the source have been increasingly utilized. The aim of this study was to evaluate differences among watersheds that vary in SCM arrangement by assessing differences in baseflow nutrient (NOx-N and PO4−) concentrations and fluxes, stormflow export of suspended sediments and particulate phosphorus (PP), and runoff characteristics. A paired watershed approach was used to compare export between 2004 and 2016 from one forested watershed (For-MD), one suburban watershed with centralized SCMs (Cent-MD), and one suburban watershed with distributed SCMs (Dist-MD). Results indicated baseflow nitrate (NOx-N) concentrations typically exceeded 1 mg-N/L in all watersheds and were highest in Dist-MD. Over the last 10 years in Dist-MD, nitrate concentrations in both stream baseflow and in a groundwater well declined as land use shifted from agriculture to suburban. Baseflow nitrate export temporarily increased during the construction phase of SCM development in Dist-MD. This temporary pulse of nitrate may be attributed to the conversion of sediment control facilities to SCMs and increased subsurface flushing as infiltration SCMs came on line. During storm flow, Dist-MD tended to have less runoff and lower maximum specific discharge than Cent-MD for small events (<1.3 cm), but runoff responses became increasingly similar to Cent-MD with increasing precipitation (>1.3 cm). Mass export estimated during paired storm events indicated Dist-MD exported 30% less sediment and 31% more PP than Cent-MD. For large precipitation events, export of sediment and PP was similar among all three watersheds. Results suggest that distributed SCMs can reduce runoff and sediment loads during small rain events compared to centralized SCMs, but these differences become less evident for large events when peak discharge likely leads to substantial bank erosion. Published by Elsevier Ltd.

  15. SCM Forcing Data Derived from NWP Analyses

    DOE Data Explorer

    Jakob, Christian

    2008-01-15

    Forcing data, suitable for use with single column models (SCMs) and cloud resolving models (CRMs), have been derived from NWP analyses for the ARM (Atmospheric Radiation Measurement) Tropical Western Pacific (TWP) sites of Manus Island and Nauru.

  16. Performance analysis of three-dimensional-triple-level cell and two-dimensional-multi-level cell NAND flash hybrid solid-state drives

    NASA Astrophysics Data System (ADS)

    Sakaki, Yukiya; Yamada, Tomoaki; Matsui, Chihiro; Yamaga, Yusuke; Takeuchi, Ken

    2018-04-01

    In order to improve the performance of solid-state drives (SSDs), hybrid SSDs have been proposed. Hybrid SSDs combine two or more types of NAND flash memory, or NAND flash memory with storage-class memories (SCMs). However, hybrid SSDs adopting SCMs cost more than NAND-flash-only SSDs because of the high bit cost of SCMs. This paper proposes a unique hybrid SSD combining two-dimensional (2D) horizontal multi-level cell (MLC) and three-dimensional (3D) vertical triple-level cell (TLC) NAND flash memories to achieve higher cost-performance. The 2D-MLC/3D-TLC hybrid SSD achieves up to 31% higher performance than the conventional 2D-MLC/2D-TLC hybrid SSD. The factors behind the performance difference between the proposed and conventional hybrid SSDs are analyzed by varying the block size, read/write/erase latencies, and write unit of the 3D-TLC NAND flash memory, by means of a transaction-level modeling simulator.

  17. Green infrastructure monitoring in Camden, NJ

    EPA Science Inventory

    The Camden County Municipal Utilities Authority (CCMUA) installed green infrastructure Stormwater Control Measures (SCMs) at multiple locations around the city of Camden, NJ. The SCMs include raised downspout planter boxes, rain gardens, and cisterns. The cisterns capture water ...

  18. Histidine-functionalized water-soluble nanoparticles for biomimetic nucleophilic/general-base catalysis under acidic conditions.

    PubMed

    Chadha, Geetika; Zhao, Yan

    2013-10-21

    Cross-linking the micelles of 4-dodecyloxybenzyltripropargylammonium bromide by 1,4-diazidobutane-2,3-diol in the presence of azide-functionalized imidazole derivatives yielded surface-cross-linked micelles (SCMs) with imidazole groups on the surface. The resulting water-soluble nanoparticles were found, by fluorescence spectroscopy, to contain hydrophobic binding sites. The imidazole groups promoted the photo-deprotonation of 2-naphthol at pH 6 and catalyzed the hydrolysis of p-nitrophenylacetate (PNPA) in aqueous solution at pH ≥ 4. Although the overall hydrolysis rate slowed down with decreasing solution pH, the catalytic effect of the imidazole became stronger because the reactions catalyzed by unfunctionalized SCMs slowed down much more. The unusual ability of the imidazole–SCMs to catalyze the hydrolysis of PNPA under acidic conditions was attributed to the local hydrophobicity and the positive nature of the SCMs.

  19. First year results on cistern monitoring in Camden, NJ

    EPA Science Inventory

    The Camden County Municipal Utilities Authority (CCMUA) installed green infrastructure Stormwater Control Measures (SCMs) at multiple locations around the city of Camden, NJ. The SCMs include raised downspout planter boxes, rain gardens, and cisterns. The cisterns capture water ...

  20. Laboratory investigation of nanomaterials to improve the permeability and strength of concrete.

    DOT National Transportation Integrated Search

    2010-02-01

    Concretes containing various supplementary cementitious materials (SCMs) such as silica fume, fly ash, and slag have improved properties. Nanomaterials (a nanometer, nm, is 10^-9 m), new SCMs with possible applications in concrete, have the smallest p...

  1. Cistern Performance for Stormwater Management in Camden, NJ - abstract

    EPA Science Inventory

    The Camden County Municipal Utilities Authority (CCMUA) installed different types of green infrastructure Stormwater Control Measures (SCMs) at locations around the city of Camden, NJ. The installed SCMs include cisterns. Cisterns provide a cost effective approach to reduce st...

  2. Cistern Performance for Stormwater Management in Camden, NJ - presentation

    EPA Science Inventory

    The Camden County Municipal Utilities Authority (CCMUA) installed different types of green infrastructure Stormwater Control Measures (SCMs) at locations around the city of Camden, NJ. The installed SCMs include cisterns. Cisterns provide a cost effective approach to reduce st...

  3. Admixture compatibility of alternative supplementary cementitious materials for pavement and structural concrete.

    DOT National Transportation Integrated Search

    2014-08-01

    The objectives of this research project were: (1) to gain a better understanding about the interaction among alternative SCMS and : chemical admixtures in Portland cement mixtures; and (2) to facilitate implementation of alternative SCMs in transport...

  4. First year update on green infrastructure monitoring in Camden, NJ

    EPA Science Inventory

    The Camden County Municipal Utilities Authority (CCMUA) installed green infrastructure Stormwater Control Measures (SCMs) at multiple locations around the city of Camden, NJ. The SCMs include raised downspout planter boxes, rain gardens, and cisterns. The cisterns capture water ...

  5. Process optimization of helium cryo plant operation for SST-1 superconducting magnet system

    NASA Astrophysics Data System (ADS)

    Panchal, P.; Panchal, R.; Patel, R.; Mahesuriya, G.; Sonara, D.; Srikanth G, L. N.; Garg, A.; Christian, D.; Bairagi, N.; Sharma, R.; Patel, K.; Shah, P.; Nimavat, H.; Purwar, G.; Patel, J.; Tanna, V.; Pradhan, S.

    2017-02-01

    Several plasma discharge campaigns have been carried out in the steady-state superconducting tokamak (SST-1). SST-1 has toroidal field (TF) and poloidal field (PF) superconducting magnet systems (SCMS). The TF coil system is cooled to 4.5 - 4.8 K at 1.5 - 1.7 bar(a) under two-phase flow conditions using a 1.3 kW helium cryo plant. Experience revealed that the PF coils demand higher pressure heads, even at lower temperatures, in comparison to the TF coils because of their longer hydraulic path lengths. Thermal runaway is observed within the PF coils because a single common control valve serves all PF coils in a distribution system of non-uniform lengths. Thus it has been routine practice to stop the cooling of the PF path and continue only TF cooling at an SCMS inlet temperature of ˜ 14 K. In order to achieve uniform cooldown, a different control logic was adopted to stabilize the cryogenic system: the SCMS are cooled down to 80 K at a constant inlet pressure of 9 bar(a), and after authorization of turbines A/B, the SCMS inlet pressure is gradually controlled by the refrigeration J-T valve to achieve a stable operating window for the cryo system. This paper presents this process optimization of cryo plant operation for the SST-1 SCMS.

  6. Phosphorus retention in stormwater control structures across streamflow in urban and suburban watersheds

    EPA Science Inventory

    Recent studies have shown that stormwater control measures (SCMs) are less effective at retaining phosphorus (P) than nitrogen. We compared P retention between two urban/suburban SCMs and their adjacent restored stream reaches at the Baltimore Long-Term Ecological Study (LTER) s...

  7. Attenuation of copper in runoff from copper roofing materials by two stormwater control measures.

    PubMed

    LaBarre, William J; Ownby, David R; Lev, Steven M; Rader, Kevin J; Casey, Ryan E

    2016-01-01

    Concerns have been raised over diffuse and non-point sources of metals including releases from copper (Cu) roofs during storm events. A picnic shelter with a partitioned Cu roof was constructed with two types of stormwater control measures (SCMs), bioretention planter boxes and biofiltration swales, to evaluate the ability of the SCMs to attenuate Cu in stormwater runoff from the roof. Cu was measured as it entered the SCMs from the roof as influent as well as after it left the SCMs as effluent. Samples from twenty-six storms were collected with flow-weighted composite sampling. Samples from seven storms were collected with discrete sampling. Total Cu in composite samples of the influent waters ranged from 306 to 2863 μg L(-1) and had a median concentration of 1087 μg L(-1). Total Cu in the effluent from the planter boxes ranged from 28 to 141 μg L(-1), with a median of 66 μg L(-1). Total Cu in effluent from the swales ranged from 7 to 51 μg L(-1) with a median of 28 μg L(-1). Attenuation in the planter boxes ranged from 85 to 99% with a median of 94% by concentration and in the swales ranged from 93 to 99% with a median of 99%. As the roof aged, discrete storm events showed a pronounced first-flush effect of Cu in SCM influent but this was less pronounced in the planter outlets. Stormwater retention time in the media varied with antecedent conditions, stormwater intensity and volume with median values from 6.6 to 73.5 min. Based on local conditions, a previously-published Cu weathering model gave a predicted Cu runoff rate of 2.02 g m(-2) yr(-1). The measured rate based on stormwater sampling was 2.16 g m(-2) yr(-1). Overall, both SCMs were highly successful at retaining and preventing offsite transport of Cu from Cu roof runoff. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. SCM-Forcing Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Shaocheng; Tang, Shuaiqi; Zhang, Yunyan

    2016-07-01

    Single-Column Model (SCM) Forcing Data are derived from the ARM facility observational data using the constrained variational analysis approach (Zhang and Lin 1997 and Zhang et al., 2001). The resulting products include both the large-scale forcing terms and the evaluation fields, which can be used for driving the SCMs and Cloud Resolving Models (CRMs) and validating model simulations.

  9. Systematic Analysis of Splice-Site-Creating Mutations in Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jayasinghe, Reyka G.; Cao, Song; Gao, Qingsong

    For the past decade, cancer genomic studies have focused on mutations leading to splice-site disruption, overlooking those having splice-creating potential. Here, we applied a bioinformatic tool, MiSplice, for the large-scale discovery of splice-site-creating mutations (SCMs) across 8,656 TCGA tumors. We report 1,964 originally mis-annotated mutations having clear evidence of creating alternative splice junctions. TP53 and GATA3 have 26 and 18 SCMs, respectively, and ATRX has 5 from lower-grade gliomas. Mutations in 11 genes, including PARP1, BRCA1, and BAP1, were experimentally validated for splice-site-creating function. Notably, we found that neoantigens induced by SCMs are likely several-fold more immunogenic compared to missense mutations, exemplified by the recurrent GATA3 SCM. Further, high expression of PD-1 and PD-L1 was observed in tumors with SCMs, suggesting candidates for immune blockade therapy. Finally, our work highlights the importance of integrating DNA and RNA data for understanding the functional and the clinical implications of mutations in human diseases.

  10. Systematic Analysis of Splice-Site-Creating Mutations in Cancer.

    PubMed

    Jayasinghe, Reyka G; Cao, Song; Gao, Qingsong; Wendl, Michael C; Vo, Nam Sy; Reynolds, Sheila M; Zhao, Yanyan; Climente-González, Héctor; Chai, Shengjie; Wang, Fang; Varghese, Rajees; Huang, Mo; Liang, Wen-Wei; Wyczalkowski, Matthew A; Sengupta, Sohini; Li, Zhi; Payne, Samuel H; Fenyö, David; Miner, Jeffrey H; Walter, Matthew J; Vincent, Benjamin; Eyras, Eduardo; Chen, Ken; Shmulevich, Ilya; Chen, Feng; Ding, Li

    2018-04-03

    For the past decade, cancer genomic studies have focused on mutations leading to splice-site disruption, overlooking those having splice-creating potential. Here, we applied a bioinformatic tool, MiSplice, for the large-scale discovery of splice-site-creating mutations (SCMs) across 8,656 TCGA tumors. We report 1,964 originally mis-annotated mutations having clear evidence of creating alternative splice junctions. TP53 and GATA3 have 26 and 18 SCMs, respectively, and ATRX has 5 from lower-grade gliomas. Mutations in 11 genes, including PARP1, BRCA1, and BAP1, were experimentally validated for splice-site-creating function. Notably, we found that neoantigens induced by SCMs are likely several-fold more immunogenic compared to missense mutations, exemplified by the recurrent GATA3 SCM. Further, high expression of PD-1 and PD-L1 was observed in tumors with SCMs, suggesting candidates for immune blockade therapy. Our work highlights the importance of integrating DNA and RNA data for understanding the functional and the clinical implications of mutations in human diseases. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  11. Systematic Analysis of Splice-Site-Creating Mutations in Cancer

    DOE PAGES

    Jayasinghe, Reyka G.; Cao, Song; Gao, Qingsong; ...

    2018-04-05

    For the past decade, cancer genomic studies have focused on mutations leading to splice-site disruption, overlooking those having splice-creating potential. Here, we applied a bioinformatic tool, MiSplice, for the large-scale discovery of splice-site-creating mutations (SCMs) across 8,656 TCGA tumors. We report 1,964 originally mis-annotated mutations having clear evidence of creating alternative splice junctions. TP53 and GATA3 have 26 and 18 SCMs, respectively, and ATRX has 5 from lower-grade gliomas. Mutations in 11 genes, including PARP1, BRCA1, and BAP1, were experimentally validated for splice-site-creating function. Notably, we found that neoantigens induced by SCMs are likely several-fold more immunogenic compared to missense mutations, exemplified by the recurrent GATA3 SCM. Further, high expression of PD-1 and PD-L1 was observed in tumors with SCMs, suggesting candidates for immune blockade therapy. Finally, our work highlights the importance of integrating DNA and RNA data for understanding the functional and the clinical implications of mutations in human diseases.

  12. Influence of variable chemical conditions on EDTA-enhanced transport of metal ions in mildly acidic groundwater

    USGS Publications Warehouse

    Kent, D.B.; Davis, J.A.; Joye, J.L.; Curtis, G.P.

    2008-01-01

    Adsorption of Ni and Pb on aquifer sediments from Cape Cod, Massachusetts, USA increased with increasing pH and metal-ion concentration. Adsorption could be described quantitatively using a semi-mechanistic surface complexation model (SCM), in which adsorption is described using chemical reactions between metal ions and adsorption sites. Equilibrium reactive transport simulations incorporating the SCMs, formation of metal-ion-EDTA complexes, and either Fe(III)-oxyhydroxide solubility or Zn desorption from sediments identified important factors responsible for trends observed during transport experiments conducted with EDTA complexes of Ni, Zn, and Pb in the Cape Cod aquifer. Dissociation of Pb-EDTA by Fe(III) is more favorable than Ni-EDTA because of differences in Ni- and Pb-adsorption to the sediments. Dissociation of Ni-EDTA becomes more favorable with decreasing Ni-EDTA concentration and decreasing pH. In contrast to Ni, Pb-EDTA can be dissociated by Zn desorbed from the aquifer sediments. Variability in adsorbed Zn concentrations has a large impact on Pb-EDTA dissociation.

  13. Evaluation of Intercomparisons of Four Different Types of Model Simulating TWP-ICE

    NASA Technical Reports Server (NTRS)

    Petch, Jon; Hill, Adrian; Davies, Laura; Fridlind, Ann; Jakob, Christian; Lin, Yanluan; Xie, Shaocheng; Zhu, Ping

    2013-01-01

    Four model intercomparisons were run and evaluated using the TWP-ICE field campaign, each involving different types of atmospheric model. Here we highlight what can be learnt from having single-column model (SCM), cloud-resolving model (CRM), global atmosphere model (GAM) and limited-area model (LAM) intercomparisons all based around the same field campaign. We also make recommendations for anyone planning further large multi-model intercomparisons to ensure they are of maximum value to the model development community. CRMs tended to match observations better than other model types, although there were exceptions such as outgoing long-wave radiation. All SCMs developed large temperature and moisture biases and performed worse than other model types for many diagnostics. The GAMs produced a delayed and significantly reduced peak in domain-average rain rate when compared to the observations. While it was shown that this was in part due to the analysis used to drive these models, the LAMs were also driven by this analysis and did not have the problem to the same extent. Based on differences between the models with parametrized convection (SCMs and GAMs) and those without (CRMs and LAMs), we speculate that having explicit convection helps to constrain liquid water whereas the ice contents are controlled more by the representation of the microphysics.

  14. Standardization of the Self Control and Self-Management Skills Scale (SCMS) on the Student of University of Najran

    ERIC Educational Resources Information Center

    Al-Smadi, Marwan Saleh; Bani-Abduh, Yahya Mohammed

    2017-01-01

    This study aimed to standardize the Self-Control and Self-Management Skills scale (SCMS; Mezo, 2009) on students at the University of Najran and to identify the psychometric properties of the scale in an Arab environment (Najran University students) through a number of procedures (establishing the validity and reliability of the scale) and to obtain the Arabic…

  15. Adaptation of Self-Control and Self-Management Scale (SCMS) into Turkish Culture: A Study on Reliability and Validity

    ERIC Educational Resources Information Center

    Ercoskun, Muhammet Hanifi

    2016-01-01

    The aim of this study is to adapt the Self-Control and Self-Management Scale (SCMS), developed by Mezo, into Turkish and to test it with respect to gender and academic achievement variables. The scale was translated from English into Turkish for linguistic validity and then back-translated into English. The original and…

  16. Stormwater management network effectiveness and implications for urban watershed function: A critical review

    USGS Publications Warehouse

    Jefferson, Anne J.; Bhaskar, Aditi S.; Hopkins, Kristina G.; Fanelli, Rosemary; Avellaneda, Pedro M.; McMillan, Sara K.

    2017-01-01

    Deleterious effects of urban stormwater are widely recognized. In several countries, regulations have been put into place to improve the conditions of receiving water bodies, but planning and engineering of stormwater control is typically carried out at smaller scales. Quantifying cumulative effectiveness of many stormwater control measures on a watershed scale is critical to understanding how small-scale practices translate to urban river health. We review 100 empirical and modelling studies of stormwater management effectiveness at the watershed scale in diverse physiographic settings. Effects of networks with stormwater control measures (SCMs) that promote infiltration and harvest have been more intensively studied than have detention-based SCM networks. Studies of peak flows and flow volumes are common, whereas baseflow, groundwater recharge, and evapotranspiration have received comparatively little attention. Export of nutrients and suspended sediments has been the primary water quality focus in the United States, whereas metals, particularly those associated with sediments, have received greater attention in Europe and Australia. Often, quantifying cumulative effects of stormwater management is complicated by needing to separate its signal from the signal of urbanization itself, innate watershed characteristics that lead to a range of hydrologic and water quality responses, and the varying functions of multiple types of SCMs. Biases in the geographic distribution of study areas, and in the size and impervious surface cover of the watersheds studied, also limit our understanding of responses. We propose hysteretic trajectories for how watershed function responds to increasing imperviousness and stormwater management. Even where impervious area is treated with SCMs, watershed function may not be restored to its predevelopment condition because of the lack of treatment of all stormwater generated from impervious surfaces; non-additive effects of individual SCMs; and persistence of urban effects beyond impervious surfaces. In most cases, pollutant load decreases largely result from run-off reductions rather than lowered solute or particulate concentrations. Understanding interactions between natural and built landscapes, including stormwater management strategies, is critical for successfully managing detrimental impacts of stormwater at the watershed scale.

  17. Enhancing the Trajectory Generation of a Stair-Climbing Mobility System

    PubMed Central

    Chocoteco, Jose Abel

    2017-01-01

    Recent advances in mobile robotic technologies have enabled significant progress to be made in the development of Stair-Climbing Mobility Systems (SCMSs) for people with mobility impairments and limitations. These devices are mainly characterized by their ability to negotiate those architectural barriers associated with climbing stairs (curbs, ramps, etc.). The development of advanced trajectory generators with which to surpass such architectural barriers is one of the most important aspects of SCMSs, and it has not yet been appropriately exploited. These advanced trajectory generators have a considerable influence on the time invested in the stair climbing process and on passenger comfort and, consequently, provide people with physical disabilities with greater independence and a higher quality of life. In this paper, we propose a new nonlinear trajectory generator for an SCMS. This generator balances the stair-climbing time and the user’s comfort and includes the most important constraints inherent to the system behavior: the geometry of the architectural barrier, the reconfigurable nature of the SCMS (discontinuous states), SCMS state-transition diagrams, comfort restrictions and physical limitations as regards the actuators, speed and acceleration. The SCMS was tested on a real two-step staircase using different time-comfort combinations and different climbing strategies to verify the effectiveness and the robustness of the proposed approach.

  18. Single-Chain Magnets Based on Octacyanotungstate with the Highest Energy Barriers for Cyanide Compounds.

    PubMed

    Wei, Rong-Min; Cao, Fan; Li, Jing; Yang, Li; Han, Yuan; Zhang, Xiu-Ling; Zhang, Zaichao; Wang, Xin-Yi; Song, You

    2016-04-13

    By introducing large counter cations as spacers, two isolated 3,3-ladder compounds, (Ph4P)[Co(II)(3-Mepy)2.7(H2O)0.3W(V)(CN)8] · 0.6H2O (1) and (Ph4As)[Co(II)(3-Mepy)3W(V)(CN)8] (2; 3-Mepy = 3-methylpyridine), were synthesized and characterized. Static and dynamic magnetic characterization reveals that compounds 1 and 2 both behave as single-chain magnets (SCMs) with very high energy barriers: 252(9) K for 1 and 224(7) K for 2. These two compounds display the highest relaxation barriers for cyano-bridged SCMs and are surpassed only by two cobalt(II)-radical compounds among all SCMs. Meanwhile, large coercive fields of 26.2 kOe (1) and 22.6 kOe (2) were observed at 1.8 K.
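
    For context, energy barriers such as those quoted for 1 and 2 are, for SCMs, conventionally extracted from an Arrhenius fit of the temperature-dependent magnetic relaxation time. The abstract does not detail the fitting procedure, so the relation below is stated only as standard background, not as the authors' specific analysis:

```latex
% Thermally activated (Arrhenius) relaxation of a single-chain magnet:
% \Delta_\tau / k_B is the energy barrier in kelvin; \tau_0 is the attempt time.
% The barrier is the slope of ln(tau) versus 1/T.
\tau(T) = \tau_0 \exp\!\left(\frac{\Delta_\tau}{k_\mathrm{B} T}\right)
\qquad\Longleftrightarrow\qquad
\ln\tau = \ln\tau_0 + \frac{\Delta_\tau}{k_\mathrm{B}} \cdot \frac{1}{T}
```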

  19. Indicators of Arctic Sea Ice Bistability in Climate Model Simulations and Observations

    DTIC Science & Technology

    2014-09-30

    ultimately developed a novel mathematical method to solve the system of equations involving the addition of a numerical “ghost” layer, as described in the...balance models (EBMs) and (ii) seasonally-varying single-column models (SCMs). As described in Approach item #1, we developed an idealized model that...includes both latitudinal and seasonal variations (Fig. 1). The model reduces to a standard EBM or SCM as limiting cases in the parameter space, thus

  20. Bayesian Evaluation of Dynamical Soil Carbon Models Using Soil Carbon Flux Data

    NASA Astrophysics Data System (ADS)

    Xie, H. W.; Romero-Olivares, A.; Guindani, M.; Allison, S. D.

    2017-12-01

    2016 was Earth's hottest year in the modern temperature record and the third consecutive record-breaking year. As the planet continues to warm, temperature-induced changes in respiration rates of soil microbes could reduce the amount of carbon sequestered in the soil organic carbon (SOC) pool, one of the largest terrestrial stores of carbon. This would accelerate temperature increases. In order to predict the future size of the SOC pool, mathematical soil carbon models (SCMs) describing interactions between the biosphere and atmosphere are needed. SCMs must be validated before they can be chosen for predictive use. In this study, we check two SCMs called CON and AWB for consistency with observed data using Bayesian goodness-of-fit testing that can be used in the future to compare other models. We compare the fit of the models to longitudinal soil respiration data from a meta-analysis of soil heating experiments using a family of Bayesian goodness-of-fit metrics called information criteria (ICs), including the Widely Applicable Information Criterion (WAIC), the Leave-One-Out Information Criterion (LOOIC), and the Log Pseudo Marginal Likelihood (LPML). These ICs take the entire posterior distribution into account, rather than just a single fitted line. A lower WAIC and LOOIC and a larger LPML indicate a better fit. We compare AWB and CON with fixed steady-state model pool sizes. At equivalent SOC, dissolved organic carbon, and microbial pool sizes, CON always outperforms AWB quantitatively by all three ICs used. AWB improves monotonically in fit as we reduce the SOC steady-state pool size while fixing all other pool sizes, and the same is almost true for CON. The AWB model with the lowest SOC is the best-performing AWB model, while the CON model with the second-lowest SOC is the best-performing CON model. We observe that AWB displays more changes in slope sign and qualitatively more adaptive dynamics, which prevents AWB from being fully ruled out for predictive use, but based on the ICs, CON is clearly the superior model for fitting the data. Hence, we demonstrate that Bayesian goodness-of-fit testing with information criteria helps us rigorously determine the consistency of models with data. Models that demonstrate their consistency with multiple data sets under our approach can then be selected for further refinement.
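
    As a rough illustration of how such criteria are computed, the sketch below evaluates WAIC and LPML from a matrix of pointwise posterior log-likelihoods. This is a generic implementation of the standard formulas, not the authors' code; the arrays `ll_con` and `ll_awb` are hypothetical stand-ins for posterior draws from the CON and AWB fits.

```python
import numpy as np
from scipy.special import logsumexp

def waic(log_lik):
    """WAIC (deviance scale) from a (draws, data points) log-likelihood matrix:
    -2 * (lppd - p_waic), where lppd sums log posterior-mean likelihoods and
    p_waic sums posterior variances of the pointwise log-likelihoods."""
    S = log_lik.shape[0]
    lppd = np.sum(logsumexp(log_lik, axis=0) - np.log(S))
    p_waic = np.sum(np.var(log_lik, axis=0, ddof=1))
    return -2.0 * (lppd - p_waic)

def lpml(log_lik):
    """Log pseudo marginal likelihood: sum of log conditional predictive
    ordinates, each the harmonic mean of the pointwise likelihoods."""
    S = log_lik.shape[0]
    log_cpo = np.log(S) - logsumexp(-log_lik, axis=0)
    return np.sum(log_cpo)

# Hypothetical draws: lower WAIC and higher LPML indicate the better fit.
rng = np.random.default_rng(0)
ll_con = rng.normal(-1.0, 0.1, size=(2000, 50))  # stand-in for CON
ll_awb = rng.normal(-1.2, 0.1, size=(2000, 50))  # stand-in for AWB
print(waic(ll_con) < waic(ll_awb), lpml(ll_con) > lpml(ll_awb))
```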

  1. Bioretention storm water control measures decrease the toxicity of copper roof runoff.

    PubMed

    LaBarre, William J; Ownby, David R; Rader, Kevin J; Lev, Steven M; Casey, Ryan E

    2017-06-01

    The present study evaluated the ability of 2 different bioretention storm water control measures (SCMs), planter boxes and swales, to decrease the toxicity of sheet copper (Cu) roofing runoff to Daphnia magna. The present study quantified changes in storm water chemistry as it passed through the bioretention systems and utilized the biotic ligand model (BLM) to assess whether the observed D. magna toxicity could be predicted by variations found in water chemistry. Laboratory toxicity tests were performed using select storm samples with D. magna cultured under low ionic strength conditions that were appropriate for the low ionic strength of the storm water samples being tested. The SCMs decreased toxicity of Cu roof runoff in both the BLM results and the storm water bioassays. Water exiting the SCMs was substantially higher than influent runoff in pH, ions, alkalinity, and dissolved organic carbon and substantially lower in total and dissolved Cu. Daphnids experienced complete mortality in untreated runoff from the Cu roof (the SCM influent); however, for planter and swale effluents, survival averaged 86% and 95%, respectively. The present study demonstrated that conventional bioretention practices, including planter boxes and swales, are capable of decreasing the risk of adverse effects from sheet Cu roof runoff to receiving systems, even before considering dilution of effluents in those receiving systems and associated further reductions in copper bioavailability. Environ Toxicol Chem 2017;36:1680-1688. © 2016 SETAC.

  2. Ground Fluidization Promotes Rapid Running of a Lightweight Robot

    DTIC Science & Technology

    2013-01-01

    SCMs) (Wood et al., 2008) have enabled the development of small, lightweight robots (∼10 cm, ∼20 g) (Hoover et al., 2010; Birkmeyer et al., 2009) such...communicated to the controller through a Bluetooth wireless interface. 2.1.2. Model granular media We used 3.0 ± 0.2 mm diameter glass particles (density

  3. Unbiased and robust quantification of synchronization between spikes and local field potential.

    PubMed

    Li, Zhaohui; Cui, Dong; Li, Xiaoli

    2016-08-30

    In neuroscience, relating the spiking activity of individual neurons to the local field potential (LFP) of neural ensembles is an increasingly useful approach for studying rhythmic neuronal synchronization. Many methods have been proposed to measure the strength of the association between spikes and rhythms in LFP recordings, and most existing measures are dependent upon the total number of spikes. In the present work, we introduce a robust approach for quantifying spike-LFP synchronization which performs reliably for limited samples of data. The measure is termed spike-triggered correlation matrix synchronization (SCMS); it treats the LFP segments centered on each spike as multi-channel signals and computes a spike-LFP synchronization index by constructing a correlation matrix. Simulations based on artificial data show that the SCMS output is nearly independent of sample size, a property of crucial importance when making comparisons between different experimental conditions. When applied to actual neuronal data recorded from the monkey primary visual cortex, the spike-LFP synchronization strength shows orientation selectivity to drifting gratings. Numerical simulations further show that, in comparison with another unbiased method, pairwise phase consistency (PPC), the proposed SCMS behaves better for noisy spike trains. This study demonstrates the basic idea and calculation procedure of the SCMS method. Given its unbiasedness and robustness, the measure is well suited to characterizing synchronization between spike trains and rhythms present in the LFP. Copyright © 2016 Elsevier B.V. All rights reserved.
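
    A minimal sketch of the idea, assuming the synchronization index is taken as the normalized largest eigenvalue of the spike-triggered correlation matrix (the published SCMS index may differ in detail); the function name `scms_index` and the toy signal are illustrative only:

```python
import numpy as np

def scms_index(lfp, spike_idx, half_win):
    """Spike-triggered correlation matrix synchronization (sketch).
    Collect an LFP window around each spike, build the pairwise correlation
    matrix of the windows, and map its largest eigenvalue onto [0, 1]."""
    segs = np.array([lfp[i - half_win:i + half_win + 1]
                     for i in spike_idx
                     if half_win <= i < len(lfp) - half_win])
    C = np.corrcoef(segs)                # n_spikes x n_spikes matrix
    lam_max = np.linalg.eigvalsh(C)[-1]  # largest eigenvalue
    return (lam_max - 1.0) / (C.shape[0] - 1.0)  # 0: no locking, 1: perfect

# Toy check: spikes locked to a 10 Hz rhythm give a high index.
fs = 1000
t = np.arange(0, 10, 1 / fs)
lfp = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
spikes = np.arange(100, 9900, 100)       # one spike per rhythm cycle
print(scms_index(lfp, spikes, half_win=50))
```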

  4. Excellently guarded materials against UV and oxygen in the surfactant molecular complex crystal matrix

    NASA Astrophysics Data System (ADS)

    Ichikawa, Haruyo; Iimura, Nahoko; Hirata, Hirotaka

    2000-07-01

    Crystalline surfactant molecular complexes (SCMs) formed between quaternary ammonium cationic surfactants such as CTAB and various additives provide excellent protection against UV light and oxygen for the additive materials occluded in the complex crystal matrix. The effects of UV and oxygen were followed through the absorption decay of the additive chromophores, comparing naked additive specimens with those in the complexed state. From the decay profiles, rate constants and half-lives were estimated under the assumption that the photolysis and oxidation processes follow first-order kinetics. The results offer promising prospects for extending the shelf life of many materials, above all medicinal drugs: the values obtained demonstrate markedly suppressed decay rates and greatly extended half-lives in the complexed state.
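
    For illustration, the half-life estimation described above reduces to fitting a first-order decay and applying t1/2 = ln 2 / k. The sketch below uses hypothetical absorbance readings, not data from the study:

```python
import numpy as np

def first_order_fit(t, absorbance):
    """Fit ln(A/A0) = -k*t by least squares; return (k, half-life)."""
    y = np.log(absorbance / absorbance[0])
    k = -np.polyfit(t, y, 1)[0]          # negative slope of linearized decay
    return k, np.log(2.0) / k

# Hypothetical readings for a naked vs. complexed specimen (hours of UV):
t = np.array([0.0, 2.0, 4.0, 8.0, 16.0])
naked = 0.80 * np.exp(-0.15 * t)
complexed = 0.80 * np.exp(-0.01 * t)
for label, A in [("naked", naked), ("complexed", complexed)]:
    k, t_half = first_order_fit(t, A)
    print(f"{label}: k = {k:.3f} /h, t1/2 = {t_half:.1f} h")
```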

  5. Design and reliability analysis of high-speed and continuous data recording system based on disk array

    NASA Astrophysics Data System (ADS)

    Jiang, Changlong; Ma, Cheng; He, Ning; Zhang, Xugang; Wang, Chongyang; Jia, Huibo

    2002-12-01

    In many real-time fields a sustained high-speed data recording system is required. This paper proposes a high-speed, sustained data recording system based on a complex RAID 3+0 architecture. The system consists of an Array Controller Module (ACM), String Controller Modules (SCMs), and a Main Controller Module (MCM). The ACM, implemented in an FPGA chip, splits the high-speed incoming data stream into several lower-speed streams and synchronously generates one parity stream; on reading, it inversely recovers the original data stream. The SCMs record the lower-speed streams from the ACM onto SCSI disk drives. Within each SCM, dual-page buffering is adopted to match speeds and satisfy the need for sustained recording. The MCM monitors the whole system and controls the ACM and SCMs to realize data striping, reconstruction, and recovery. A method for determining the system scale is presented. Finally, two new schemes, Floating Parity Group (FPG) and full 2D-Parity Group (full 2D-PG), are proposed to improve system reliability and are compared with the Traditional Parity Group (TPG). Given its high reliability, this recording system can be used conveniently in many areas of data recording, storage, playback, and remote backup.
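
    A toy sketch of the RAID-3-style splitting attributed to the ACM: byte-interleave the input into n lower-rate streams, generate an XOR parity stream, and rebuild any one lost stream from the survivors. The function names and the 4-way split are illustrative assumptions, not the paper's implementation:

```python
from functools import reduce
from operator import xor

def stripe(data: bytes, n: int):
    """Byte-interleave a stream into n lower-rate streams plus one XOR
    parity stream (the RAID-3-style split performed by the ACM)."""
    data += b"\x00" * ((-len(data)) % n)          # pad to a full stripe
    streams = [data[i::n] for i in range(n)]
    parity = bytes(reduce(xor, col) for col in zip(*streams))
    return streams, parity

def recover(streams, parity, lost: int):
    """Rebuild one missing stream by XOR-ing parity with the survivors."""
    survivors = [s for i, s in enumerate(streams) if i != lost] + [parity]
    return bytes(reduce(xor, col) for col in zip(*survivors))

streams, parity = stripe(b"sustained high-speed recording", 4)
assert recover(streams, parity, lost=2) == streams[2]
```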

  6. Sense and Respond Logistics: Integrating Prediction, Responsiveness, and Control Capabilities

    DTIC Science & Technology

    2006-01-01

    logistics; SAR: sense and respond; SCM: Supply Chain Management; SCN: Supply Chain Network; SIDA: sense, interpret, decide, act; SOS: source of supply; TCN...commodity supply chain management (SCM), will have WS-SCMs that focus on integrating information for a particular MDS. In the remainder of this...developed applications of ABMs for SCM. Applications of Agents and Agent-Based Modeling: Agents have been used in telecommunications, e-commerce

  7. Surface complexation modeling of Cu(II) adsorption on mixtures of hydrous ferric oxide and kaolinite

    PubMed Central

    Lund, Tracy J; Koretsky, Carla M; Landry, Christopher J; Schaller, Melinda S; Das, Soumya

    2008-01-01

    Background The application of surface complexation models (SCMs) to natural sediments and soils is hindered by a lack of consistent models and data for large suites of metals and minerals of interest. Furthermore, the surface complexation approach has mostly been developed and tested for single-solid systems. Few studies have extended the SCM approach to systems containing multiple solids. Results Cu adsorption was measured on pure hydrous ferric oxide (HFO), pure kaolinite (from two sources), and in systems containing mixtures of HFO and kaolinite over a wide range of pH, ionic strength, sorbate/sorbent ratios, and, for the mixed-solid systems, a range of kaolinite/HFO ratios. Cu adsorption data measured for the HFO and kaolinite systems were used to derive diffuse layer surface complexation models (DLMs) describing Cu adsorption. Cu adsorption on HFO is reasonably well described using a 1-site or 2-site DLM. Adsorption of Cu on kaolinite could be described using a simple 1-site DLM with formation of a monodentate Cu complex on a variable charge surface site. However, for consistency with models derived for weaker-sorbing cations, a 2-site DLM with a variable charge and a permanent charge site was also developed. Conclusion Component additivity predictions of speciation in mixed mineral systems based on DLM parameters derived for the pure mineral systems were in good agreement with measured data. Discrepancies between the model predictions and measured data were similar to those observed for the calibrated pure mineral systems. The results suggest that quantifying specific interactions between HFO and kaolinite in speciation models may not be necessary. However, before the component additivity approach can be applied to natural sediments and soils, the effects of aging must be further studied and methods must be developed to estimate reactive surface areas of solid constituents in natural samples. PMID:18783619
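
    Conceptually, a component additivity prediction solves one mole balance in which the sites of both minerals compete for the same dissolved Cu. The sketch below shows that balance with hypothetical conditional constants at a fixed pH; a calibrated DLM would additionally correct the constants for surface electrostatics, which is omitted here:

```python
from scipy.optimize import brentq

# Hypothetical conditional constants for >SOH + Cu2+ <=> >SOCu+ + H+ at pH 6;
# the values and site densities below are illustrative, not fitted parameters.
K = {"HFO": 10.0**1.0, "kaolinite": 10.0**-0.5}
site_tot = {"HFO": 1e-4, "kaolinite": 5e-5}   # mol sites/L, scaled by solid mass
Cu_tot, H = 1e-5, 10.0**-6.0                  # total Cu (mol/L) and [H+] at pH 6

def sorbed(mineral, cu_free):
    """Cu bound to one mineral's sites at a given free Cu2+ concentration."""
    x = K[mineral] * cu_free / H              # mass-action occupancy ratio
    return site_tot[mineral] * x / (1.0 + x)

def residual(cu_free):
    """Mole balance: free Cu plus Cu sorbed on both minerals minus total Cu."""
    return cu_free + sum(sorbed(m, cu_free) for m in K) - Cu_tot

cu_free = brentq(residual, 1e-18, Cu_tot)     # one combined equilibrium solve
print("free Cu:", cu_free, {m: sorbed(m, cu_free) for m in K})
```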

  8. Surface speciation of yttrium and neodymium sorbed on rutile: Interpretations using the charge distribution model.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ridley, Moira K.; Hiemstra, Tjisse; Machesky, Michael L.

    2012-01-01

    The adsorption of Y3+ and Nd3+ onto rutile has been evaluated over a wide range of pH (3-11) and surface loading conditions, as well as at two ionic strengths (0.03 and 0.3 m), and temperatures (25 and 50 °C). The experimental results reveal the same adsorption behavior for the two trivalent ions onto the rutile surface, with Nd3+ first adsorbing at slightly lower pH values. The adsorption of both Y3+ and Nd3+ commences at pH values below the pHznpc of rutile. The experimental results were evaluated using a charge distribution (CD) and multisite complexation (MUSIC) model, and a Basic Stern layer description of the electric double layer (EDL). The coordination geometry of possible surface complexes was constrained by molecular-level information obtained from X-ray standing wave measurements and molecular dynamics (MD) simulation studies. X-ray standing wave measurements showed an inner-sphere tetradentate complex for Y3+ adsorption onto the (110) rutile surface (Zhang et al., 2004b). The MD simulation studies suggest additional bidentate complexes may form. The CD values for all surface species were calculated based on a bond valence interpretation of the surface complexes identified by X-ray and MD. The calculated CD values were corrected for the effect of dipole orientation of interfacial water. At low pH, the tetradentate complex provided excellent fits to the Y3+ and Nd3+ experimental data. The experimental and surface complexation modeling results show a strong pH dependence, and suggest that the tetradentate surface species hydrolyze with increasing pH. Furthermore, with increased surface loading of Y3+ on rutile the tetradentate binding mode was augmented by a hydrolyzed-bidentate Y3+ surface complex. Collectively, the experimental and surface complexation modeling results demonstrate that solution chemistry and surface loading impact Y3+ surface speciation. The approach taken of incorporating molecular-scale information into surface complexation models (SCMs) should aid in elucidating a fundamental understanding of ion-adsorption reactions.

  9. Surface speciation of yttrium and neodymium sorbed on rutile: Interpretations using the charge distribution model

    NASA Astrophysics Data System (ADS)

    Ridley, Moira K.; Hiemstra, Tjisse; Machesky, Michael L.; Wesolowski, David J.; van Riemsdijk, Willem H.

    2012-10-01

    The adsorption of Y3+ and Nd3+ onto rutile has been evaluated over a wide range of pH (3-11) and surface loading conditions, as well as at two ionic strengths (0.03 and 0.3 m), and temperatures (25 and 50 °C). The experimental results reveal the same adsorption behavior for the two trivalent ions onto the rutile surface, with Nd3+ first adsorbing at slightly lower pH values. The adsorption of both Y3+ and Nd3+ commences at pH values below the pHznpc of rutile. The experimental results were evaluated using a charge distribution (CD) and multisite complexation (MUSIC) model, and a Basic Stern layer description of the electric double layer (EDL). The coordination geometry of possible surface complexes was constrained by molecular-level information obtained from X-ray standing wave measurements and molecular dynamics (MD) simulation studies. X-ray standing wave measurements showed an inner-sphere tetradentate complex for Y3+ adsorption onto the (1 1 0) rutile surface (Zhang et al., 2004b). The MD simulation studies suggest additional bidentate complexes may form. The CD values for all surface species were calculated based on a bond valence interpretation of the surface complexes identified by X-ray and MD. The calculated CD values were corrected for the effect of dipole orientation of interfacial water. At low pH, the tetradentate complex provided excellent fits to the Y3+ and Nd3+ experimental data. The experimental and surface complexation modeling results show a strong pH dependence, and suggest that the tetradentate surface species hydrolyze with increasing pH. Furthermore, with increased surface loading of Y3+ on rutile the tetradentate binding mode was augmented by a hydrolyzed-bidentate Y3+ surface complex. Collectively, the experimental and surface complexation modeling results demonstrate that solution chemistry and surface loading impact Y3+ surface speciation. The approach taken of incorporating molecular-scale information into surface complexation models (SCMs) should aid in elucidating a fundamental understanding of ion-adsorption reactions.

  10. Confronting Models with Data: The GEWEX Cloud Systems Study

    NASA Technical Reports Server (NTRS)

    Randall, David; Curry, Judith; Duynkerke, Peter; Krueger, Steven; Moncrieff, Mitchell; Ryan, Brian; Starr, David OC.; Miller, Martin; Rossow, William; Tselioudis, George

    2002-01-01

    The GEWEX Cloud System Study (GCSS; GEWEX is the Global Energy and Water Cycle Experiment) was organized to promote development of improved parameterizations of cloud systems for use in climate and numerical weather prediction models, with an emphasis on the climate applications. The strategy of GCSS is to use two distinct kinds of models to analyze and understand observations of the behavior of several different types of clouds systems. Cloud-system-resolving models (CSRMs) have high enough spatial and temporal resolutions to represent individual cloud elements, but cover a wide enough range of space and time scales to permit statistical analysis of simulated cloud systems. Results from CSRMs are compared with detailed observations, representing specific cases based on field experiments, and also with statistical composites obtained from satellite and meteorological analyses. Single-column models (SCMs) are the surgically extracted column physics of atmospheric general circulation models. SCMs are used to test cloud parameterizations in an un-coupled mode, by comparison with field data and statistical composites. In the original GCSS strategy, data is collected in various field programs and provided to the CSRM Community, which uses the data to "certify" the CSRMs as reliable tools for the simulation of particular cloud regimes, and then uses the CSRMs to develop parameterizations, which are provided to the GCM Community. We report here the results of a re-thinking of the scientific strategy of GCSS, which takes into account the practical issues that arise in confronting models with data. The main elements of the proposed new strategy are a more active role for the large-scale modeling community, and an explicit recognition of the importance of data integration.

  11. Climatic and Landscape Controls on Storage Capacity of Urban Stormwater Control Measures (SCMs): Implications for Stormwater-Stream Connectivity

    NASA Astrophysics Data System (ADS)

    Fanelli, R. M.; Prestegaard, K. L.; Palmer, M.

    2015-12-01

    Urbanization alters watershed hydrological processes; impervious surfaces increase runoff generation, while storm sewer networks increase connectivity between runoff sources and streams. Stormwater control measures (SCMs) that enhance stormwater infiltration have been proposed to mitigate these effects by functioning as stormwater sinks. Regenerative stormwater conveyance structures (RSCs) are an example of infiltration-based SCMs that are placed between storm sewer outfalls and perennial stream networks. Given their location, RSCs act as critical nodes that regulate stormwater-stream connectivity. Therefore, the storage capacity of an RSC structure may exert a major control on the frequency, duration, and magnitude of these connections. This project examined both hydrogeological and hydro-climatic factors that could influence storage capacity of RSC structures. We selected three headwater (5-48 ha) urban watersheds near Annapolis, Maryland, USA. Each watershed is drained by first-order perennial streams and has been implemented with an RSC structure. We conducted high-frequency precipitation and stream stage monitoring below the outlet of each RSC structure for a 1-year period. We also instrumented one of the RSC structures with groundwater wells to monitor changes in subsurface storage over time. Using these data, we 1) identified rainfall thresholds for RSC storage capacity exceedance; 2) quantified the frequency and duration of connectivity when the storage capacity of each RSC was exceeded; and 3) evaluated both event-scale and seasonal changes in groundwater levels within the RSC structure. Precipitation characteristics and antecedent precipitation indices influenced the frequency and duration of stormwater-stream connections. We hypothesize that both infiltration limitations and storage limitations of the RSCs contributed to the temporal patterns we observed in stormwater-stream connectivity. We also observed reduced storage potential as contributing area and percent impervious cover increased. Overall, the efficacy of urban SCMs for mitigating the impacts of urbanization and reducing stormwater-stream connectivity is dependent on both climate and the landscape context in which they are placed.
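
    As a simple illustration of the threshold analysis described above, the sketch below flags storm events whose precipitation depth exceeds an assumed storage-capacity threshold; the event depths and the 12 mm threshold are hypothetical, not values from the study:

```python
import numpy as np

def connectivity_stats(event_depth_mm, threshold_mm):
    """Count storm events whose depth exceeds the RSC storage-capacity
    threshold, i.e. events expected to spill and connect to the stream."""
    depths = np.asarray(event_depth_mm, dtype=float)
    exceed = depths > threshold_mm
    return int(exceed.sum()), float(exceed.mean())

# Hypothetical event depths (mm) over a monitoring period:
n_connected, frequency = connectivity_stats([3, 25, 7, 14, 2, 40, 9],
                                            threshold_mm=12.0)
print(n_connected, round(frequency, 2))   # 3 connecting events, frequency 0.43
```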

  12. Evaluation of ternary cementitious combinations : tech summary.

    DOT National Transportation Integrated Search

    2012-02-01

    Portland cement concrete (PCC) is the world's most versatile and widely used construction material. Modern concrete consists of six main ingredients: coarse aggregate, sand, portland cement, supplementary cementitious materials (SCMs), chemical admi...

  13. Role of competing ions in the mobilization of arsenic in groundwater of Bengal Basin: insight from surface complexation modeling.

    PubMed

    Biswas, Ashis; Gustafsson, Jon Petter; Neidhardt, Harald; Halder, Dipti; Kundu, Amit K; Chatterjee, Debashis; Berner, Zsolt; Bhattacharya, Prosun

    2014-05-15

    This study assesses the role of competing ions in the mobilization of arsenic (As) by surface complexation modeling of the temporal variability of As in groundwater. The potential use of two different surface complexation models (SCMs), developed for ferrihydrite and goethite, has been explored to account for the temporal variation of As(III) and As(V) concentrations monitored in shallow groundwater of the Bengal Basin over a period of 20 months. The SCM for ferrihydrite appears to be the better predictor of the observed variation in both As(III) and As(V) concentrations in the study sites. It is estimated that among the competing ions, PO4(3-) is the major competitor of As(III) and As(V) adsorption onto Fe oxyhydroxide, and the competition ability decreases in the order PO4(3-) ≫ Fe(II) > H4SiO4 = HCO3(-). It is further revealed that a small change in pH can also have a significant effect on the mobility of As(III) and As(V) in the aquifers. A decrease in pH increases the concentration of As(III), whereas it decreases the As(V) concentration and vice versa. The present study suggests that the reductive dissolution of Fe oxyhydroxide alone cannot explain the observed high As concentration in groundwater of the Bengal Basin. This study supports the view that the reductive dissolution of Fe oxyhydroxide followed by competitive sorption reactions with the aquifer sediment are the processes responsible for As enrichment in groundwater. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Catchment-scale stormwater management via economic incentives – An overview and lessons-learned

    USGS Publications Warehouse

    Schuster, W.; Garmestani, A.S.; Green, O.O.; Rhea, l.K.; Roy, Allison; Thurston, H.W.; Myers, Baden Robert; Beecham, Simon; Lucke, Terry; Boogaard, Floris

    2013-01-01

    Long-term field studies of the effectiveness and sustainability of decentralized stormwater management are rare. From 2005 to 2011, we tested an incentive-based approach to citizen participation in stormwater management in the Shepherd Creek catchment, located in Cincinnati, OH, USA. Hydrologic, biological, and water quality data were characterized in a baseline monitoring effort from 2005 to 2007. Reverse auctions held successively in 2007 and 2008 engaged citizens to voluntarily bid on stormwater control measures (SCMs); successful bids led to implementation of SCMs, which enhanced catchment detention capacity. We tested for attributes of sustainability (co-consideration of social, economic, and environmental (hydrologic, soils, aquatic biology) aspects) and summarize lessons learned. Our results and outcomes provide a basis for planning future field studies that more fully determine the effectiveness of stormwater management in terms of sustainability.

  15. Evaluation of ternary cementitious combinations : research project capsule.

    DOT National Transportation Integrated Search

    2009-03-01

    PROBLEM: Many entities currently use fly ash, slag, and other supplementary cementitious materials (SCMs) in Portland cement concrete (PCC) pavement and structures. Although the body of knowledge is limited, several states are currently using ternary...

  16. Laboratory investigation of the use of volcanic ash in concrete : final report.

    DOT National Transportation Integrated Search

    2016-09-01

    Supplementary cementitious materials (SCMs) are commonly used in KDOT concrete pavements and bridge decks to improve strength and permeability characteristics. The supplementary cementitious materials allowed under current KDOT specifications are...

  17. Laboratory investigation of the use of volcanic ash in concrete : technical summary.

    DOT National Transportation Integrated Search

    2016-09-01

    Supplementary cementitious materials (SCMs) are commonly used in KDOT concrete pavements and bridge decks to improve strength and permeability characteristics. The supplementary cementitious materials allowed under current KDOT specifications a...

  18. Reducing cement content in concrete mixtures : [research brief].

    DOT National Transportation Integrated Search

    2011-12-01

    Concrete mixtures contain crushed rock or gravel, and sand, bound together by Portland cement in combination with supplemental cementitious materials (SCMs), which harden through a chemical reaction with water. Portland cement is the most costly comp...

  19. New concrete mixtures turn waste into quality roads : fact sheet.

    DOT National Transportation Integrated Search

    2011-11-01

    Many entities currently use fly ash, slag, and other supplementary cementitious materials (SCMs) in Portland cement concrete (PCC) pavement and structures. Although the body of knowledge is limited, several states are currently using ternary cementit...

  20. Optimising the laboratory supply chain: The key to effective laboratory services

    PubMed Central

    Williams, Jason; Smith, Peter; Kuritsky, Joel

    2014-01-01

    Background The Supply Chain Management System (SCMS) is a contract managed under the Partnership for Supply Chain Management (PFSCM) consortium by the United States Agency for International Development (USAID). SCMS procures commodities for programmes supported by the US President’s Emergency Plan for AIDS Relief (PEPFAR). From 2005 to mid-2012, PEPFAR, through SCMS, spent approximately $384 million on non-pharmaceutical commodities. Of this, an estimated $90 million was used to purchase flow cytometry technology, largely for flow cytometry platforms and reagents. Objectives The purpose of this paper is to highlight the cost differences between low, medium and high utilisation rates of common CD4 testing instruments that have been procured through PEPFAR funding. Method A scale of costs per test as a function of test volume through the machine was calculated for the two most common CD4 testing machines used in HIV programmes: Becton Dickinson (BD) FACSCount™ and BD FACSCalibur™. Instrument utilisation data collected at the facility level in three selected countries were then used to calculate the onsite cost-per-test experienced in each country. Results Cost analyses indicated that a target of at least 40% utilisation for FACSCount™ and 15% utilisation for FACSCalibur™, respectively, closely approach maximal per-test cost efficiency. The average utilisation rate for CD4 testing instruments varies widely by country, level of laboratory and partner (0%-68%). Conclusion Our analysis indicates that, because cost-per-test is related inversely to sample throughput, the underutilisation of flow cytometry machines is resulting in an increase in average cost-per-test for many instruments. PMID:29043175
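
    The inverse relation between cost-per-test and throughput can be illustrated with a two-term cost model: a fixed instrument cost amortized over annual test volume plus a marginal reagent cost per test. The figures below are hypothetical, for illustration only, not PEPFAR/SCMS prices:

```python
def cost_per_test(annual_instrument_cost, reagent_cost_per_test, tests_per_year):
    """Average cost-per-test: amortized fixed cost plus marginal reagent cost.
    The fixed term shrinks as utilization (annual test volume) rises."""
    return annual_instrument_cost / tests_per_year + reagent_cost_per_test

# Hypothetical figures: $20,000/yr amortized instrument cost, $4 reagents/test.
for volume in (500, 2000, 8000):
    print(f"{volume:>5} tests/yr -> ${cost_per_test(20000.0, 4.0, volume):.2f}/test")
```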

  1. Influence of Three Permeable Pavement Surfaces on Nitrogen Treatment

    EPA Science Inventory

    Nitrogen is a stressor of concern in many nutrient sensitive watersheds often associated with algal blooms and resulting fish kills. Communities are increasingly installing green infrastructure stormwater control measures (SCMs) to reduce pollutant loads associated with stormwat...

  2. Retrofitting with bioretention and a swale to treat bridge deck stormwater runoff.

    DOT National Transportation Integrated Search

    2010-07-28

    Stormwater runoff from roadways is a source of surface water pollution in North Carolina. The North Carolina Department of Transportation (NCDOT) is required to implement stormwater control measures (SCMs) in the linear environment. NCDOT has specifi...

  3. 0-6717 : investigation of alternative supplementary cementing materials (SCMs) : [project summary].

    DOT National Transportation Integrated Search

    2014-08-01

    In Texas, Class F fly ash is extensively used as a supplementary cementing material (SCM) because of its ability to control thermal cracking in mass concrete and to mitigate deleterious expansions in concrete from alkali-silica reaction (AS...

  4. Effects of stormwater management and stream restoration on watershed nitrogen retention

    EPA Science Inventory

    Restoring urban infrastructure and managing the nitrogen cycle represent emerging challenges for urban water quality. We investigated whether stormwater control measures (SCMs), a form of green infrastructure, integrated into restored and degraded urban stream networks can influ...

  5. Research Update from EPA Permeable Parking Lot in Edison, NJ

    EPA Science Inventory

    Communities are increasingly installing green infrastructure stormwater control measures (SCMs) to reduce pollutant loads associated with stormwater runoff. Permeable pavement is a SCM that has limited research on working-scale, side-by-side performance of different pavement sur...

  6. Development of performance properties of ternary mixtures : laboratory study on concrete.

    DOT National Transportation Integrated Search

    2011-03-01

    This research project is a comprehensive study of how supplementary cementitious materials (SCMs) can be used to improve the performance of concrete mixtures. This report summarizes the findings of the Laboratory Study on Concrete phase of this w...

  7. Comparison of Cirrus Cloud Models: A Project of the GEWEX Cloud System Study (GCSS) Working Group on Cirrus Cloud Systems

    NASA Technical Reports Server (NTRS)

    Starr, David O'C.; Benedetti, Angela; Boehm, Matt; Brown, Philip R. A.; Gierens, Klaus M.; Girard, Eric; Giraud, Vincent; Jakob, Christian; Jensen, Eric

    2000-01-01

    The GEWEX Cloud System Study (GCSS, GEWEX is the Global Energy and Water Cycle Experiment) is a community activity aiming to promote development of improved cloud parameterizations for application in the large-scale general circulation models (GCMs) used for climate research and for numerical weather prediction. The GCSS strategy is founded upon the use of cloud-system models (CSMs). These are "process" models with sufficient spatial and temporal resolution to represent individual cloud elements, but spanning a wide range of space and time scales to enable statistical analysis of simulated cloud systems. GCSS also employs single-column versions of the parametric cloud models (SCMs) used in GCMs. GCSS has working groups on boundary-layer clouds, cirrus clouds, extratropical layer cloud systems, precipitating deep convective cloud systems, and polar clouds.

  8. Design and performance of crack-free environmentally friendly concrete "crack-free eco-crete".

    DOT National Transportation Integrated Search

    2014-08-01

    High-performance concrete (HPC) is characterized by high content of cement and supplementary cementitious materials (SCMs). Using high binder content, low water-to-cementitious material ratio (w/cm), and various chemical admixtures in the HPC can r...

  9. Principles for urban stormwater management to protect stream ecosystems

    USGS Publications Warehouse

    Walsh, Christopher J.; Booth, Derek B.; Burns, Matthew J.; Fletcher, Tim D.; Hale, Rebecca L.; Hoang, Lan N.; Livingston, Grant; Rippy, Megan A.; Roy, Allison; Scoggins, Mateo; Wallace, Angela

    2016-01-01

    Urban stormwater runoff is a critical source of degradation to stream ecosystems globally. Despite broad appreciation by stream ecologists of negative effects of stormwater runoff, stormwater management objectives still typically center on flood and pollution mitigation without an explicit focus on altered hydrology. Resulting management approaches are unlikely to protect the ecological structure and function of streams adequately. We present critical elements of stormwater management necessary for protecting stream ecosystems through 5 principles intended to be broadly applicable to all urban landscapes that drain to a receiving stream: 1) the ecosystems to be protected and a target ecological state should be explicitly identified; 2) the postdevelopment balance of evapotranspiration, stream flow, and infiltration should mimic the predevelopment balance, which typically requires keeping significant runoff volume from reaching the stream; 3) stormwater control measures (SCMs) should deliver flow regimes that mimic the predevelopment regime in quality and quantity; 4) SCMs should have capacity to store rain events for all storms that would not have produced widespread surface runoff in a predevelopment state, thereby avoiding increased frequency of disturbance to biota; and 5) SCMs should be applied to all impervious surfaces in the catchment of the target stream. These principles present a range of technical and social challenges. Existing infrastructural, institutional, or governance contexts often prevent application of the principles to the degree necessary to achieve effective protection or restoration, but significant potential exists for multiple co-benefits from SCM technologies (e.g., water supply and climate-change adaptation) that may remove barriers to implementation. Our set of ideal principles for stream protection is intended as a guide for innovators who seek to develop new approaches to stormwater management rather than accept seemingly insurmountable historical constraints, which guarantee future, ongoing degradation.

  10. A Case Study on Nitrogen Uptake and Denitrification in a Restored Urban Stream in Baltimore, Maryland

    EPA Science Inventory

    Restoring urban infrastructure and managing the nitrogen cycle represent emerging challenges for urban water quality. We investigated whether stormwater control measures (SCMs), a form of green infrastructure, integrated into restored and degraded urban stream networks can influe...

  11. Internal hydrological mechanism of permeable pavement and interaction with subsurface water

    EPA Science Inventory

    Many communities are implementing green infrastructure stormwater control measures (SCMs) in urban environments across the U.S. to mimic pre-urban, natural hydrology more closely. Permeable pavement is one SCM infrastructure that has been commonly selected for both new and retro...

  12. Adopted Methodology for Cool-Down of SST-1 Superconducting Magnet System: Operational Experience with the Helium Refrigerator

    NASA Astrophysics Data System (ADS)

    Sahu, A. K.; Sarkar, B.; Panchal, P.; Tank, J.; Bhattacharya, R.; Panchal, R.; Tanna, V. L.; Patel, R.; Shukla, P.; Patel, J. C.; Singh, M.; Sonara, D.; Sharma, R.; Duggar, R.; Saxena, Y. C.

    2008-03-01

    The 1.3 kW at 4.5 K helium refrigerator/liquefier (HRL) was commissioned in 2003. The HRL was operated in its different modes as per the functional requirements of the experiments. The superconducting magnet system (SCMS) of SST-1 was successfully cooled down to 4.5 K. The actual loads were different from the originally predicted boundary conditions, and an adjustment in the thermodynamic balance of the refrigerator was necessary. This led to enhanced capacity, which was achieved without any additional hardware. The control system for the HRL was tuned to achieve a stable thermodynamic balance while keeping the turbines' operating parameters at optimized conditions. An extra mass flow rate requirement was met by exploiting the margin available with the compressor station. The methodology adopted to modify the capacity of the HRL, the safety precautions, and the experience of the SCMS cool down to 4.5 K are discussed.

  13. Tomographic Imaging on Distributed Unattended Ground Sensor Arrays

    DTIC Science & Technology

    2002-05-14

    communication, the recently released Bluetooth standard warrants investigation into its usefulness on ground sensors. Although not as powerful or as fast...NTSC," June 2001, http://archive.ncsa.uiuc.edu/SCMS/training/general/details/ntsc.html [14] Techfest, "PCI local bus technical summary," 1999, http

  14. Comparison of Cirrus Cloud Models: A Project of the GEWEX Cloud System Study (GCSS) Working Group on Cirrus Cloud Systems

    NASA Technical Reports Server (NTRS)

    Starr, David O'C.; Benedetti, Angela; Boehm, Matt; Brown, Philip R. A.; Gierens, Klaus M.; Girard, Eric; Giraud, Vincent; Jakob, Christian; Jensen, Eric; Khvorostyanov, Vitaly; et al.

    2000-01-01

    The GEWEX Cloud System Study (GCSS, GEWEX is the Global Energy and Water Cycle Experiment) is a community activity aiming to promote development of improved cloud parameterizations for application in the large-scale general circulation models (GCMs) used for climate research and for numerical weather prediction (Browning et al., 1994). The GCSS strategy is founded upon the use of cloud-system models (CSMs). These are "process" models with sufficient spatial and temporal resolution to represent individual cloud elements, but spanning a wide range of space and time scales to enable statistical analysis of simulated cloud systems. GCSS also employs single-column versions of the parametric cloud models (SCMs) used in GCMs. GCSS has working groups on boundary-layer clouds, cirrus clouds, extratropical layer cloud systems, precipitating deep convective cloud systems, and polar clouds.

  15. Evaluation of permeable friction course (PFC), roadside filter strips, dry swales, and wetland swales for treatment of highway stormwater runoff.

    DOT National Transportation Integrated Search

    2011-01-07

    Stormwater runoff from roadways is a source of surface water pollution in North Carolina. The North Carolina Department of Transportation (NCDOT) is required to implement stormwater control measures (SCMs) in the linear environment. NCDOT has specifi...

  16. Deicer scaling resistance of concrete mixtures containing slag cement. Phase 2 : evaluation of different laboratory scaling test methods.

    DOT National Transportation Integrated Search

    2012-07-01

    With the use of supplementary cementing materials (SCMs) in concrete mixtures, salt scaling tests such as ASTM C672 have been found to be overly aggressive and do not correlate well with field scaling performance. The reasons for this are thought to be b...

  17. Extending Student Learning Opportunities in a 6-8 Middle School

    ERIC Educational Resources Information Center

    Waggoner, Christine; Cline, Lisa

    2006-01-01

    In 2004, South Charlotte Middle School (SCMS), Charlotte, North Carolina, was named "A School to Watch" by the National Forum to Accelerate Middle Grades Reform. One of the program components cited as highly successful by the visiting committee representing the Forum was the provision of an enrichment period called the ninth block. Ninth…

  18. EPA’s Summary Report of the Collaborative Green Infrastructure Pilot Project for the Middle Blue River in Kansas City, MO

    EPA Science Inventory

    The United States Environmental Protection Agency evaluated the performance of a hybrid green-gray infrastructure pilot project installed into the Marlborough Neighborhood by the Kansas City Water Services Department. Kansas City installed 135 vegetated SCMs, 24,290 square feet o...

  19. Long-Term and Seismic Performance of Concrete-Filled Steel Tube Columns with Conventional and High-Volume SCM Concrete

    DOT National Transportation Integrated Search

    2012-06-01

    Production of Portland cement for concrete is a major source of CO2 emissions. Concrete can be made more sustainable by replacing a large volume of the cement with supplementary cementitious materials (SCMs) such as fly ash and slag. The amount of ceme...

  20. Investigation of Self Consolidating Concrete Containing High Volume of Supplementary Cementitious Materials and Recycled Asphalt Pavement Aggregates

    NASA Astrophysics Data System (ADS)

    Patibandla, Varun chowdary

    The use of sustainable technologies such as supplementary cementitious materials (SCMs) and/or recycled materials is expected to positively affect the performance of concrete mixtures. However, it is important to study and qualify such mixtures, and to check whether the required specifications of their intended application are met, before they can be implemented in practice. This study presents the results of a laboratory investigation of self-consolidating concrete (SCC) containing sustainable technologies. A total of twelve concrete mixtures were prepared with various combinations of fly ash, slag, and recycled asphalt pavement (RAP). The mixtures were divided into three groups with a constant water-to-cementitious materials ratio of 0.37, based on the RAP content: 0, 25, and 50% of the coarse aggregate replaced by RAP. All mixtures were prepared to achieve a target slump flow equal to or higher than 500 mm (24 in.). A control mixture for each group was prepared with 100% portland cement, whereas all other mixtures were designed to have up to 70% of the portland cement replaced by a combination of SCMs such as Class C fly ash and granulated blast furnace slag. The fresh concrete properties investigated in this study include flowability, deformability, filling capacity, and resistance to segregation. In addition, the compressive strength at 3, 14, and 28 days, the tensile strength, and the unrestrained shrinkage up to 80 days were also investigated. As expected, the inclusion of the sustainable technologies affected both fresh and hardened concrete properties. Analysis of the experimental data indicated that the inclusion of RAP not only reduced the ultimate strength but also affected the rate of compressive strength development. Moreover, several mixtures satisfied compressive strength requirements for pavements and bridges, including mixtures with relatively high percentages of SCMs and RAP. Based on the results obtained in this study, it is not recommended to replace more than 25% of the coarse aggregate in SCC with RAP.

  1. A surface complexation model of YREE sorption on Ulva lactuca in 0.05-5.0 M NaCl solutions

    NASA Astrophysics Data System (ADS)

    Zoll, Alison M.; Schijf, Johan

    2012-11-01

    We present distribution coefficients, log iKS, for the sorption of yttrium and the rare earth elements (YREEs) on BCR-279, a dehydrated tissue homogenate of a marine macroalga, Ulva lactuca, resembling materials featured in chemical engineering studies aimed at designing renewable biosorbents. Sorption experiments were conducted in NaCl solutions of different ionic strength (0.05, 0.5, and 5.0 M) at T = 25 °C over the pH range 2.7-8.5. Distribution coefficients based on separation of the dissolved and particulate phase by conventional filtration (<0.22 μm) were corrected for the effect of colloid-bound YREEs (>3 kDa) using an existing pH-dependent model. Colloid-corrected values were renormalized to free-cation concentrations by accounting for YREE hydrolysis and chloride complexation. At each ionic strength, the pH dependence of the renormalized values is accurately described with a non-electrostatic surface complexation model (SCM) that incorporates YREE binding to three monoprotic functional groups, previously characterized by alkalimetric titration, as well as binding of YREE-hydroxide complexes (MOH2+) to the least acidic one (pKa ∼ 9.5). In non-linear regressions of the distribution coefficients as a function of pH, each pKa was fixed at its reported value, while stability constants of the four YREE surface complexes were used as adjustable parameters. Data for a single fresh U. lactuca specimen in 0.5 M NaCl show generally the same pH-dependent behavior but a lower degree of sorption and were excluded from the regressions. Good linear free-energy relations (LFERs) between stability constants of the YREE-acetate and YREE-hydroxide solution complex and surface complexes with the first and third functional group, respectively, support their prior tentative identifications as carboxyl and phenol. A similar confirmation for the second group is precluded by insufficient knowledge of the stability of YREE-phosphate complexes and a perceived lack of YREE binding in 0.05 M NaCl; this issue awaits further study. The results indicate that SCMs can be successfully applied to sorbents as daunting as marine organic matter. Despite remnant challenges, for instance resolving the contributions of individual groups to the aggregate sorption signal, our approach helps formalize seaweed’s avowed promise as an ideal biomonitor or biofilter of metal pollution in environments ranging from freshwaters to brines by uncovering what chemical mechanisms underlie its pronounced affinity for YREEs and other surface-reactive elements.
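
    For orientation, a minimal non-electrostatic surface complexation scheme of the kind described above can be written as follows; the generic site labels and the restriction to 1:1 surface complexes are illustrative assumptions rather than the paper's exact parameterization (which additionally includes binding of MOH2+ to the least acidic site):

        \[
        \equiv\!\mathrm{L}_i\mathrm{H} \;\rightleftharpoons\; \equiv\!\mathrm{L}_i^- + \mathrm{H}^+,
        \qquad
        K_{a,i} = \frac{[\equiv\!\mathrm{L}_i^-]\,[\mathrm{H}^+]}{[\equiv\!\mathrm{L}_i\mathrm{H}]},
        \]
        \[
        \equiv\!\mathrm{L}_i^- + \mathrm{M}^{3+} \;\rightleftharpoons\; \equiv\!\mathrm{L}_i\mathrm{M}^{2+},
        \qquad
        K_i^{M} = \frac{[\equiv\!\mathrm{L}_i\mathrm{M}^{2+}]}{[\equiv\!\mathrm{L}_i^-]\,[\mathrm{M}^{3+}]}.
        \]

    The free-cation-normalized distribution coefficient then follows as a pH-dependent sum over the binding sites, which is the quantity fitted in the non-linear regressions with each pKa held fixed.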

  2. Three-dimensional constrained variational analysis: Approach and application to analysis of atmospheric diabatic heating and derivative fields during an ARM SGP intensive observational period

    NASA Astrophysics Data System (ADS)

    Tang, Shuaiqi; Zhang, Minghua

    2015-08-01

    Atmospheric vertical velocities and advective tendencies are essential large-scale forcing data to drive single-column models (SCMs), cloud-resolving models (CRMs), and large-eddy simulations (LESs). However, they cannot be directly measured from field measurements or easily calculated with great accuracy. In the Atmospheric Radiation Measurement Program (ARM), a constrained variational algorithm (1-D constrained variational analysis (1DCVA)) has been used to derive large-scale forcing data over a sounding network domain with the aid of flux measurements at the surface and top of the atmosphere (TOA). The 1DCVA algorithm is now extended into three dimensions (3DCVA) along with other improvements to calculate gridded large-scale forcing data, diabatic heating sources (Q1), and moisture sinks (Q2). Results are presented for a midlatitude cyclone case study on 3 March 2000 at the ARM Southern Great Plains site. These results are used to evaluate the diabatic heating fields in the available products such as Rapid Update Cycle, ERA-Interim, National Centers for Environmental Prediction Climate Forecast System Reanalysis, Modern-Era Retrospective Analysis for Research and Applications, Japanese 55-year Reanalysis, and North American Regional Reanalysis. We show that although the analysis/reanalysis generally captures the atmospheric state of the cyclone, their biases in the derivative terms (Q1 and Q2) at regional scale of a few hundred kilometers are large and all analyses/reanalyses tend to underestimate the subgrid-scale upward transport of moist static energy in the lower troposphere. The 3DCVA-gridded large-scale forcing data are physically consistent with the spatial distribution of surface and TOA measurements of radiation, precipitation, latent and sensible heat fluxes, and clouds that are better suited to force SCMs, CRMs, and LESs. Possible applications of the 3DCVA are discussed.
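
    For reference, the diabatic heating source and apparent moisture sink analyzed here are conventionally defined through Yanai-style budget residuals; the notation below is generic and may differ from the paper's:

        \[
        Q_1 \equiv \frac{\partial \bar{s}}{\partial t} + \bar{\mathbf{V}}\cdot\nabla\bar{s} + \bar{\omega}\,\frac{\partial \bar{s}}{\partial p},
        \qquad
        Q_2 \equiv -L\left(\frac{\partial \bar{q}}{\partial t} + \bar{\mathbf{V}}\cdot\nabla\bar{q} + \bar{\omega}\,\frac{\partial \bar{q}}{\partial p}\right),
        \]

    where s = c_p T + gz is the dry static energy, q is the specific humidity, L is the latent heat of vaporization, and overbars denote grid-scale averages. Both quantities are residuals of the analyzed large-scale forcing fields, which is why biases in the forcing map directly into biases in Q1 and Q2.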

  3. Alignment of Information Systems with Supply Chains: Impacts on Supply Chain Performance and Organizational Performance

    ERIC Educational Resources Information Center

    Qrunfleh, Sufian M.

    2010-01-01

    Over the past decade, an important focus of researchers has been on supply chain management (SCM), as many organizations believe that effective SCM is the key to building and sustaining competitive advantage for their products/services. To manage the supply chain, companies need to adopt an SCM strategy (SCMS) and implement appropriate SCM…

  4. Implementation of Network Leader Sponsored Supply Chain Management Systems: A Case Study of Supplier IT Business Value

    ERIC Educational Resources Information Center

    Miller, Mark S.

    2010-01-01

    This qualitative multiple-case study was conducted to explore and understand how the implementation of a required relationship-specific supply chain management system (SCMS) dictated by the network leader within a supplier network affects a supplier organization. The study, in a very broad sense, attempted to research the current validity of how the…

  5. Commissioning and Operational Experience with 1 kW Class Helium Refrigerator/Liquefier for SST-1

    NASA Astrophysics Data System (ADS)

    Dhard, C. P.; Sarkar, B.; Misra, Ruchi; Sahu, A. K.; Tanna, V. L.; Tank, J.; Panchal, P.; Patel, J. C.; Phadke, G. D.; Saxena, Y. C.

    2004-06-01

    The helium refrigerator/liquefier (R/L) for the Steady State Superconducting Tokamak (SST-1) has been developed with very stringent specifications for the different operational modes. The total refrigeration capacity is 650 W at 4.5 K, with a liquefaction capacity of 200 l/h. A cold circulation pump is used for the forced-flow cooling of the magnet system (SCMS) with 300 g/s of supercritical helium (SHe). The R/L has also been designed to absorb a 200 W transient heat load from the SCMS. The plant consists of a compressor station, oil removal system, on-line purifier, Main Control Dewar (MCD) with associated heat exchangers, cold circulation pump, and warm gas management system. An Integrated Flow Control and Distribution System (IFDCS) has been designed, fabricated, and installed for distribution of SHe in the toroidal and poloidal field coils as well as liquid helium for cooling of 10 pairs of current leads. A SCADA-based control system using PLCs has been designed for the R/L as well as the IFDCS. The R/L has been commissioned and the required parameters were achieved, conforming to the process. All the test results and commissioning experiences are discussed in this paper.

  6. A simple model of the effect of ocean ventilation on ocean heat uptake

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nadiga, Balasubramanya T.; Urban, Nathan Mark

    Presentation includes slides on: Earth System Models vs. Simple Climate Models; A Popular SCM: Energy Balance Model of Anomalies; On calibrating against one ESM experiment, the SCM correctly captures that ESM's surface warming response with other forcings; Multi-Model Analysis: Multiple ESMs, Single SCM; Posterior Distributions of ECS; However, In Excess of 90% of TOA Energy Imbalance is Sequestered in the World Oceans; Heat Storage in the Two Layer Model; Including TOA Rad. Imbalance and Ocean Heat in Calibration Improves Repr., but Significant Errors Persist; Improved Vertical Resolution Does Not Fix Problem; A Series of Expts. Confirms That Anomaly-Diffusing Models Cannot Properly Represent Ocean Heat Uptake; Physics of the Thermocline; Outcropping Isopycnals and Horizontally-Averaged Layers; Local interactions between outcropping isopycnals lead to non-local interactions between horizontally-averaged layers; Both Surface Warming and Ocean Heat are Well Represented With Just 4 Layers; A Series of Expts. Confirms That When Non-Local Interactions are Allowed, the SCMs Can Represent Both Surface Warming and Ocean Heat Uptake; and Summary and Conclusions.
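
    The "Two Layer Model" named in the slide titles is, in its standard form, the two-layer energy balance model; the following is a sketch of that standard form, not necessarily the exact variant used in the presentation:

        \[
        C\,\frac{dT}{dt} = F(t) - \lambda T - \gamma\,(T - T_D),
        \qquad
        C_D\,\frac{dT_D}{dt} = \gamma\,(T - T_D),
        \]

    where T and T_D are the surface and deep-ocean temperature anomalies, C and C_D the corresponding heat capacities, F the radiative forcing, λ the climate feedback parameter, and γ the inter-layer heat exchange coefficient. The presentation's argument is that such vertically local (anomaly-diffusing) structures can fit surface warming well yet misrepresent ocean heat uptake unless non-local interactions between layers are allowed.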

  7. Domain walls in single-chain magnets

    NASA Astrophysics Data System (ADS)

    Pianet, Vivien; Urdampilleta, Matias; Colin, Thierry; Clérac, Rodolphe; Coulon, Claude

    2017-12-01

    The topology and creation energy of domain walls in different magnetic chains (called Single-Chain Magnets or SCMs) are discussed. As these domain walls, which can be seen as "defects", are known to control both the static and dynamic properties of these one-dimensional systems, their study and understanding are necessary first steps before a deeper discussion of SCM properties at finite temperature. The starting point of the paper is the simple regular ferromagnetic chain, for which the characteristics of the domain walls are well known. Two cases are then discussed: (i) the "mixed chains", in which isotropic and anisotropic classical spins alternate, and (ii) the so-called "canted chains", where two different easy-axis directions are present. In particular, we show that "strictly narrow" domain walls no longer exist in these more complex cases, while a cascade of phase transitions is found for canted chains as the canting angle approaches 45°. The consequences for thermodynamic properties are briefly discussed in the last part of the paper.

  8. Evaluation of ternary blended cements for use in transportation concrete structures

    NASA Astrophysics Data System (ADS)

    Gilliland, Amanda Louise

    This thesis investigates the use of ternary blended cement concrete mixtures for transportation structures. The study documents technical properties of three concrete mixtures used in federally funded transportation projects in Utah, Kansas, and Michigan that used ternary blended cement concrete mixtures. Data were also collected from laboratory trial batches of ternary blended cement concrete mixtures with mixture designs similar to those of the field projects. The study presents the technical, economic, and environmental advantages of ternary blended cement mixtures. Different barriers of implementation for using ternary blended cement concrete mixtures in transportation projects are addressed. It was concluded that there are no technical, economic, or environmental barriers that exist when using most ternary blended cement concrete mixtures. The technical performance of the ternary blended concrete mixtures that were studied was always better than ordinary portland cement concrete mixtures. The ternary blended cements showed increased durability against chloride ion penetration, alkali silica reaction, and reaction to sulfates. These blends also had less linear shrinkage than ordinary portland cement concrete and met all strength requirements. The increased durability would likely reduce life cycle costs associated with concrete pavement and concrete bridge decks. The initial cost of ternary mixtures can be higher or lower than ordinary portland cement, depending on the supplementary cementitious materials used. Ternary blended cement concrete mixtures produce less carbon dioxide emissions than ordinary portland cement mixtures. This reduces the carbon footprint of construction projects. The barriers associated with implementing ternary blended cement concrete for transportation projects are not significant. Supplying fly ash returns any investment costs for the ready mix plant, including silos and other associated equipment. State specifications can make designing ternary blended cements more acceptable by eliminating arbitrary limitations for supplementary cementitious materials (SCMs) use and changing to performance-based standards. Performance-based standards require trial batching of concrete mixture designs, which can be used to optimize ternary combinations of portland cement and SCMs. States should be aware of various SCMs that are appropriate for the project type and its environment.

  9. Combined Circumferential and Longitudinal Left Ventricular Systolic Dysfunction in Patients with Rheumatoid Arthritis without Overt Cardiac Disease.

    PubMed

    Cioffi, Giovanni; Viapiana, Ombretta; Ognibeni, Federica; Dalbeni, Andrea; Gatti, Davide; Mazzone, Carmine; Faganello, Giorgio; Di Lenarda, Andrea; Adami, Silvano; Rossini, Maurizio

    2016-07-01

    Patients with rheumatoid arthritis have an increased risk for cardiovascular disease. Because of accelerated atherosclerosis and changes in left ventricular (LV) geometry, circumferential and longitudinal (C&L) LV systolic function may be impaired in these patients despite preserved LV ejection fraction. The aim of this study was to determine the prevalence of and factors associated with combined C&L LV systolic dysfunction (LVSD) in patients with rheumatoid arthritis. One hundred ninety-eight outpatients with rheumatoid arthritis without overt cardiac disease were prospectively analyzed from January through June 2014 and compared with 198 matched control subjects. C&L systolic function was evaluated by stress-corrected midwall shortening (sc-MS) and tissue Doppler mitral annular peak systolic velocity (S'). Combined C&L LVSD was defined as sc-MS <86.5% and S' <9.0 cm/sec (the 10th percentiles of sc-MS and S' derived in 132 healthy subjects). Combined C&L LVSD was detected in 56 patients (28%) and was associated with LV mass (odds ratio, 1.03; 95% CI, 1.01-1.06; P = .04) and concentric LV geometry (odds ratio, 2.76; 95% CI, 1.07-7.15; P = .03). By multiple logistic regression analysis, rheumatoid arthritis emerged as an independent predictor of combined C&L LVSD (odds ratio, 2.57; 95% CI, 1.06-6.25). The relationship between sc-MS and S' was statistically significant in the subgroup of 142 patients without combined C&L LVSD (r = 0.40, P < .001), with the best fit given by a linear function (sc-MS = 58.1 + 3.34 × peak S'; r² = 0.19, P < .0001); no such relationship was present in patients with combined C&L LVSD. Combined C&L LVSD is detectable in about one fourth of patients with asymptomatic rheumatoid arthritis and is associated with LV concentric remodeling and hypertrophy. Rheumatoid arthritis predicts this worrisome condition, which may explain the increased risk for cardiovascular events in these patients. The aim of this "notice of clarification" is to analyze briefly the similarities and to underline the differences between the current article (defined as "paper J") and a separate article entitled "Prevalence and Factors Associated with Subclinical Left Ventricular Systolic Dysfunction Evaluated by Mid-Wall Mechanics in Rheumatoid Arthritis" (defined as "paper E"), which was written several months before paper J and recently accepted for publication by the journal "Echocardiography" (Cioffi et al. http://dx.doi.org/10.1111/echo.13186). We wish to explain more clearly how the manuscript described in paper J relates to paper E and the context in which it ought to be considered. Data in both papers were derived from the same prospective database, so it would be questionable if the number of enrolled patients and/or their clinical, laboratory, or echocardiographic characteristics differed. Accordingly, both papers reported that 198 patients with rheumatoid arthritis (RA) were considered, and their characteristics were identical because they were the same subjects (a circumstance that is common, and indeed unavoidable, among studies in which patients are recruited from the same database). These are the similarities between the papers. In paper E, which was written several months before paper J, we focused on the prevalence and factors associated with impaired circumferential left ventricular (LV) systolic function measured as mid-wall shortening (corrected for circumferential end-systolic stress). We found that 110 patients (56% of the whole population) demonstrated this feature.
Thus, these 110 patients were the object of the study described in paper E, in which we specifically analyzed the factors associated with the impairment of stress-corrected mid-wall shortening (sc-MS). The conclusions of that paper were: (i) subclinical LV systolic dysfunction (LVSD), as measured by sc-MS, is detectable in more than half of the RA population without overt cardiac disease; (ii) RA per se is associated with LVSD; and (iii) in RA patients, only LV relative wall thickness was associated with impaired sc-MS based upon multivariate logistic regression analysis. In contrast, in paper J we focused on the prevalence and factors associated with combined impairment of circumferential and longitudinal shortening (C&L) in 198 asymptomatic patients with RA. We found that 56 patients (28% of the whole population) presented this feature. Thus, these 56 patients were analyzed in detail in that study, as well as the factors associated with the combined impairment of C&L shortening. In paper J, we evaluated sc-MS as an indicator of circumferential systolic LV shortening, and we also determined the average of tissue Doppler measures of maximal systolic mitral annular velocity at four different sampling sites (S') as an indicator of longitudinal LV systolic shortening. This approach clearly demonstrates that in paper J we analyzed data derived from the tissue Doppler analysis, which were not considered at all in paper E. The investigation described in paper J yielded several original and clinically relevant findings. In patients with RA: (i) the condition of combined C&L left ventricular systolic dysfunction (LVSD) is frequent; (ii) these patients have clinical and laboratory characteristics comparable to those of patients without combined C&L LVSD, but exhibit marked concentric LV geometry and increased LV mass, a phenotype that can be considered a model of compensated asymptomatic chronic heart failure; (iii) RA is an independent factor associated with combined C&L LVSD; (iv) no relationship between indexes of circumferential and longitudinal function exists in patients with combined C&L LVSD, while the relationship is statistically significant and positive when the subgroup of patients without combined C&L LVSD is considered, with the best fit given by a linear function. All these findings are unique to paper J and are not presented (indeed, they could not have been) in paper E. It is clear that, starting from the same 198 patients included in the database, different subgroups of patients were selected and analyzed in the two papers (they had different echocardiographic characteristics) and, consequently, different factors emerged from the statistical analyses as covariates associated with the different phenotypes of LVSD considered. Importantly, both papers E and J had a very long gestation because the reviewers for the different journals found several important issues that needed to be addressed: many changes were proposed and much additional information was required, particularly by the reviewers of paper E. In this context, although paper E was written well before paper J, the two manuscripts were accepted at the same time (we received the letters of acceptance within a couple of weeks of each other). Thus, the uncertainty about the fate of both manuscripts made it very difficult (if not impossible) to cite either of them in the other, and afterward we simply did not think about this point anymore.
Of note, the idea of including longitudinal function in the analysis therefore came well after the revision process for paper E had begun, and was in some ways inspired by a reviewer's comment. That is why we did not put both findings in the same paper. We think that our explanations provide the broad audience of the journal with a perspective of transparency and reflect our respect for the readers' right to understand how the work described in paper J relates to other work by our research group. Giovanni Cioffi, on behalf of all co-authors: Ombretta Viapiana, Federica Ognibeni, Andrea Dalbeni, Davide Gatti, Carmine Mazzone, Giorgio Faganello, Andrea Di Lenarda, Silvano Adami, and Maurizio Rossini. Copyright © 2016 American Society of Echocardiography. Published by Elsevier Inc. All rights reserved.

  10. Continuous Evaluation of Fast Processes in Climate Models Using ARM Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Zhijin; Sha, Feng; Liu, Yangang

    2016-02-02

    This five-year award supports the project “Continuous Evaluation of Fast Processes in Climate Models Using ARM Measurements (FASTER)”. The goal of this project is to produce accurate, consistent and comprehensive data sets for initializing both single-column models (SCMs) and cloud-resolving models (CRMs) using data assimilation. A multi-scale three-dimensional variational data assimilation scheme (MS-3DVAR) has been implemented. This MS-3DVAR system is built on top of WRF/GSI. The Community Gridpoint Statistical Interpolation (GSI) system is an operational data assimilation system at the National Centers for Environmental Prediction (NCEP) and has been implemented in the Weather Research and Forecast (WRF) model. This MS-3DVAR is further enhanced by the incorporation of a land surface 3DVAR scheme and a comprehensive aerosol 3DVAR scheme. The data assimilation implementation focuses on the ARM SGP region. ARM measurements are assimilated along with other available satellite and radar data. Reanalyses are then generated for a few selected periods of time. This comprehensive data assimilation system has also been employed for other ARM-related applications.

  11. Cistern Performance for Stormwater Management in Camden ...

    EPA Pesticide Factsheets

    The Camden County Municipal Utilities Authority (CCMUA) installed different types of green infrastructure stormwater control measures (SCMs) at locations around the city of Camden, NJ. The installed SCMs include cisterns. Cisterns provide a cost-effective approach to reduce stormwater runoff volume and peak discharge. The collected water can be used as a substitute for potable water in some applications. This presentation focuses on five cisterns that were monitored as part of a capture and use system at community gardens. The cisterns capture water from existing rooftops or shade structures installed by CCMUA as part of the project. Cistern volumes varied from 305 gallons to 1,100 gallons based on the available roof area. Water level was monitored at 10-minute intervals using pressure transducers, and rainfall was recorded using tipping-bucket rain gauges. Soil moisture was monitored near the root zone using frequency domain reflectometers buried under selected plants. These data were analyzed to better understand the supply and demand relationship. Cisterns were sampled at 6- to 8-week intervals through the growing season for determination of microorganism, nutrient, and metal concentrations. The analyses detected antimony, arsenic, barium, copper, lead, manganese, nickel, vanadium, and zinc. Concentrations of all these metals were below the water quality criteria recommended for irrigation in the EPA guidelines for water reuse. The total nitrogen and phosphorous concen

  12. Paper 5643 - Role of Maintenance in the Performance of Stormwater Control Measures

    NASA Astrophysics Data System (ADS)

    Hunt, W. F., III; Merriman, L.; Winston, R.; Brown, R. A.

    2014-12-01

    Stormwater control measures are required by jurisdictions across the USA and internationally to treat runoff quantity and quality. Like any anthropogenic device, these systems must be maintained. However, often once a system has been constructed, it is neglected: either assumed to work in perpetuity or (more likely) simply forgotten. Recent research on multiple stormwater practices illustrates the pitfalls of neglecting certain practices, while highlighting other SCMs that are resilient despite lack of care. The focus of this presentation is to highlight three often-used SCMs, constructed stormwater wetlands, bioretention, and permeable pavement, describing each SCM's failure modes. The degree to which water quality and hydrologic mitigation function is lost will be presented for each practice. Moreover, design and construction guidance will be provided so that the exposure to failure mechanisms is limited for each practice. Of the three practices, their resilience to failure appears to be (in descending order): constructed stormwater wetlands, bioretention, and permeable pavement. One key to the former two practices' robustness seems to be the role of vegetation, which helps heal the "wounds" of neglect. Because constructed stormwater wetlands do not rely upon filtration, they tend to be slightly less prone to failure than bioretention (which is a filtration-based SCM).

  13. A method for combining search coil and fluxgate magnetometer data to reveal finer structures in reconnection physics

    NASA Astrophysics Data System (ADS)

    Argall, M. R.; Caide, A.; Chen, L.; Torbert, R. B.

    2012-12-01

    Magnetometers have been used to measure terrestrial and extraterrestrial magnetic fields in space exploration ever since Sputnik 3. Modern space missions, such as Cluster, RBSP, and MMS, incorporate both search coil magnetometers (SCMs) and fluxgate magnetometers (FGMs) in their instrument suites: FGMs work well at low frequencies, while SCMs perform better at high frequencies. In analyzing the noise floor of these instruments, a cross-over region is apparent around 0.3-1.5 Hz. The satellite separation of MMS and the average speeds of field convection and plasma flows at the subsolar magnetopause make this a crucial range for the upcoming MMS mission. The method presented here combines the signals from the SCM and FGM by taking a weighted average of both in this frequency range in order to draw out key features, such as narrow current sheet structures, that would otherwise not be visible. The technique is applied to burst-mode Cluster data for reported magnetopause and magnetotail reconnection events to demonstrate the power of the combined data. This technique is also applied to data from the EMFISIS instrument on the RBSP mission. The authors acknowledge and thank the FGM and STAFF teams for the use of their data from the Cluster Active Archive.
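
    A minimal sketch of the kind of weighted spectral average described above, assuming both series have been calibrated to common units and resampled to a common rate; the linear ramp and the default 0.3-1.5 Hz band edges are illustrative placeholders, not the mission teams' published merging filter:

        import numpy as np

        def merge_fgm_scm(b_fgm, b_scm, fs, f_lo=0.3, f_hi=1.5):
            """Blend FGM (accurate at low f) and SCM (accurate at high f) data.

            Below f_lo the FGM spectrum is kept, above f_hi the SCM spectrum
            is kept, and across the noise-floor crossover the two spectra are
            mixed with a linear weight.
            """
            n = len(b_fgm)
            freqs = np.fft.rfftfreq(n, d=1.0 / fs)
            spec_fgm = np.fft.rfft(b_fgm)
            spec_scm = np.fft.rfft(b_scm)
            # w = 1 -> pure FGM, w = 0 -> pure SCM; linear ramp in between.
            w = np.clip((f_hi - freqs) / (f_hi - f_lo), 0.0, 1.0)
            return np.fft.irfft(w * spec_fgm + (1.0 - w) * spec_scm, n=n)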

  14. Acid–base bifunctional shell cross-linked micelle nanoreactor for one-pot tandem reaction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Li-Chen; Lu, Jie; Weck, Marcus

    Shell cross-linked micelles (SCMs) containing acid sites in the shell and base sites in the core are prepared from amphiphilic poly(2-oxazoline) triblock copolymers. These materials are utilized as two-chamber nanoreactors for a prototypical acid-base bifunctional tandem deacetalization-nitroaldol reaction. Furthermore, the acid and base sites are localized in different regions of the micelle, allowing the two steps in the reaction sequence to largely proceed in separate compartments, akin to the compartmentalization that occurs in biological systems.

  15. Robust Averaging of Covariances for EEG Recordings Classification in Motor Imagery Brain-Computer Interfaces.

    PubMed

    Uehara, Takashi; Sartori, Matteo; Tanaka, Toshihisa; Fiori, Simone

    2017-06-01

    The estimation of covariance matrices is of prime importance for analyzing the distribution of multivariate signals. In motor imagery-based brain-computer interfaces (MI-BCI), covariance matrices play a central role in the extraction of features from recorded electroencephalograms (EEGs); therefore, correctly estimating covariance is crucial for EEG classification. This letter discusses algorithms to average sample covariance matrices (SCMs) for the selection of the reference matrix in tangent space mapping (TSM)-based MI-BCI. Tangent space mapping is a powerful method of feature extraction and strongly depends on the selection of a reference covariance matrix. In general, the observed signals may include outliers; therefore, taking the geometric mean of SCMs as the reference matrix may not be the best choice. In order to deal with the effects of outliers, robust estimators have to be used. In particular, we discuss and test the use of geometric medians and trimmed averages (defined on the basis of several metrics) as robust estimators. The main idea behind trimmed averages is to eliminate data that exhibit the largest distance from the average covariance calculated on the basis of all available data. The results of the experiments show that while the geometric medians show little difference from conventional methods in terms of classification accuracy on electroencephalographic recordings, the trimmed averages yield significant improvement for all subjects.
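
    As a concrete illustration of the trimmed-average idea, here is a minimal sketch using the Euclidean (Frobenius) metric; the letter evaluates several metrics, including Riemannian ones, so this is one simple instance rather than the authors' full method:

        import numpy as np

        def trimmed_average_scm(scms, trim_frac=0.1):
            """Average SCMs after discarding the trim_frac of matrices that
            lie farthest (Frobenius distance) from the plain average.

            scms: array of shape (n_trials, n_channels, n_channels).
            """
            scms = np.asarray(scms)
            mean = scms.mean(axis=0)
            dists = np.linalg.norm(scms - mean, axis=(1, 2))
            keep = max(1, int(np.ceil(len(scms) * (1.0 - trim_frac))))
            idx = np.argsort(dists)[:keep]  # keep the closest matrices
            return scms[idx].mean(axis=0)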

  16. Cool Down Experiences with the SST-1 Helium Cryogenics System before and after Current Feeders System Modification

    NASA Astrophysics Data System (ADS)

    Patel, R.; Panchal, P.; Panchal, R.; Tank, J.; Mahesuriya, G.; Sonara, D.; Srikanth, G. L. N.; Garg, A.; Bairagi, N.; Christian, D.; Patel, K.; Shah, P.; Nimavat, H.; Sharma, R.; Patel, J. C.; Gupta, N. C.; Prasad, U.; Sharma, A. N.; Tanna, V. L.; Pradhan, S.

    The SST-1 machine comprises a superconducting magnet system (SCMS), which includes the TF and PF magnets. In order to charge the SCMS, superconducting current feeders consisting of SC feeders and vapor-cooled current leads (VCCLs) are needed. All 10 (+/-) pairs of VCCLs for the TF and PF systems have been installed. While conducting the initial engineering validation of the SST-1 machine, our prime objective was to produce circular plasma using only the TF system. During SST-1 campaigns I to VI, we had to stop the PF magnet cooling in order to obtain cryo-stable conditions for current charging of the TF magnet system; in that case, the cooling of the PF current leads is not essential. It was also observed that after aborting the PF system cooling, there was a limited experimental window of TF operation. Therefore, in the recent SST-1 campaign VII, we removed the PF current leads (9 pairs) and kept only a single (+/-) pair of the 10,000 A rated VCCLs to realize the charging of the TF system for an extended window of operation. We have observed better cryogenic stability in the TF magnets after the modifications in the CFS. In this paper, we report the comparison of the cool down performance for SST-1 machine operation before and after the modifications of the current feeders system.

  17. Acid–base bifunctional shell cross-linked micelle nanoreactor for one-pot tandem reaction

    DOE PAGES

    Lee, Li-Chen; Lu, Jie; Weck, Marcus; ...

    2015-12-29

    Shell cross-linked micelles (SCMs) containing acid sites in the shell and base sites in the core are prepared from amphiphilic poly(2-oxazoline) triblock copolymers. These materials are utilized as two-chamber nanoreactors for a prototypical acid-base bifunctional tandem deacetalization-nitroaldol reaction. Furthermore, the acid and base sites are localized in different regions of the micelle, allowing the two steps in the reaction sequence to largely proceed in separate compartments, akin to the compartmentalization that occurs in biological systems.

  18. Switching single chain magnet behavior via photoinduced bidirectional metal-to-metal charge transfer

    PubMed Central

    Jiang, Wenjing; Jiao, Chengqi; Meng, Yinshan; Zhao, Liang; Liu, Qiang

    2017-01-01

    The preparation of single-chain magnets (SCMs) with photo-switchable bistable states is essential for the development of high-density photo-recording devices. However, the reversible switching of the SCM behavior upon light irradiation is a formidable challenge. Here we report a well-isolated double zigzag chain {[Fe(bpy)(CN)4]2[Co(phpy)2]}·2H2O (bpy = 2,2′-bipyridine, phpy = 4-phenylpyridine), which exhibits reversible redox reactions with interconversion between FeIII(LS)(μ-CN)CoII(HS)(μ-NC)FeIII(LS) (LS = low spin, HS = high spin) and FeIII(LS)(μ-CN)CoIII(LS)(μ-NC)FeII(LS) linkages under alternating irradiation with 808 and 532 nm lasers. The bidirectional photo-induced metal-to-metal charge transfer results in significant changes of anisotropy and intrachain magnetic interactions, reversibly switching the SCM behavior. The SCM behavior switched on by light irradiation at 808 nm can be reversibly switched off by irradiation at 532 nm. The results provide an additional and independent way to control the bistable states of SCMs by switching in the 0 → 1 → 0 sequence, with potential applications in high-density storage and molecular switches. PMID:29629126

  19. An ensemble constrained variational analysis of atmospheric forcing data and its application to evaluate clouds in CAM5: Ensemble 3DCVA and Its Application

    DOE PAGES

    Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng

    2016-01-05

    Large-scale atmospheric forcing data can greatly impact the simulations of atmospheric process models including Large Eddy Simulations (LES), Cloud Resolving Models (CRMs) and Single-Column Models (SCMs), and impact the development of physical parameterizations in global climate models. This study describes the development of an ensemble variationally constrained objective analysis of atmospheric large-scale forcing data and its application to evaluate the cloud biases in the Community Atmospheric Model (CAM5). Sensitivities of the variational objective analysis to background data, error covariance matrix and constraint variables are described and used to quantify the uncertainties in the large-scale forcing data. Application of the ensemble forcing in the CAM5 SCM during the March 2000 intensive operational period (IOP) at the Southern Great Plains (SGP) site of the Atmospheric Radiation Measurement (ARM) program shows systematic biases in the model simulations that cannot be explained by the uncertainty of large-scale forcing data, which points to the deficiencies of physical parameterizations. The SCM is shown to overestimate high clouds and underestimate low clouds. These biases are found to also exist in the global simulation of CAM5 when it is compared with satellite data.

  1. Theory based interventions for caries related sugar intake in adults: systematic review.

    PubMed

    Al Rawahi, Said Hartih; Asimakopoulou, Koula; Newton, Jonathon Timothy

    2017-07-25

    Theories of behaviour change are essential in the design of effective behaviour change strategies. No studies have assessed the effectiveness of interventions based on psychological theories to reduce sugar intake related to dental caries. The study assessed the effect of interventions based on Social Cognition Models (SCMs) on sugar intake in adults, when compared with educational interventions or no intervention. A range of study designs was considered: systematic reviews with or without meta-analyses; randomised controlled trials; controlled clinical trials; and before-and-after studies of interventions based on Social Cognition Models aimed at dietary intake of sugar in adults. The Cochrane Oral Health Group's Trials Register (2015), MEDLINE (from 1966 to September 2015), EMBASE (from 1980 to September 2015), and PsycINFO (from 1966 to September 2015) were searched. No article met the full eligibility criteria for the current systematic review, so no articles were included. There is a need for more clinical trials to assess the effectiveness of interventions based on psychological theory in reducing dietary sugar intake among adults. PROSPERO: CRD42015026357.

  2. Generalized Oseen transformation for and enhancement of Bragg characteristics of electro-optic structurally chiral materials

    NASA Astrophysics Data System (ADS)

    Lakhtakia, Akhlesh

    2006-05-01

    The Oseen transformation is generalized to define a non-electro-optic structurally chiral material, wherein propagation along the axis of chirality is equivalent to that in an electro-optic SCM with local 4¯2m point group symmetry. This generalization shows that the exploitation of the Pockels effect amounts to an enhancement of the effective local birefringence, which in turn can enhance the characteristics of the circular Bragg phenomenon. Electro-optic SCMs can therefore serve as efficient and electrically controllable circular- and elliptical-polarization rejection filters.

  3. Durability of pulp fiber-cement composites

    NASA Astrophysics Data System (ADS)

    Mohr, Benjamin J.

    Wood pulp fibers are a unique reinforcing material as they are non-hazardous, renewable, and readily available at relatively low cost compared to other commercially available fibers. Today, pulp fiber-cement composites can be found in products such as extruded non-pressure pipes and non-structural building materials, mainly thin-sheet products. Although natural fibers have been used historically to reinforce various building materials, little scientific effort has been devoted to the examination of natural fibers to reinforce engineering materials until recently. The need for this type of fundamental research has been emphasized by widespread awareness of moisture-related failures of some engineered materials; these failures have led to the filing of national- and state-level class action lawsuits against several manufacturers. Thus, if pulp fiber-cement composites are to be used for exterior structural applications, the effects of cyclical wet/dry (rain/heat) exposure on performance must be known. Pulp fiber-cement composites have been tested in flexure to examine the progression of strength and toughness degradation. Based on scanning electron microscopy (SEM), environmental scanning electron microscopy (ESEM), and energy dispersive spectroscopy (EDS), a three-part model describing the mechanisms of progressive degradation has been proposed: (1) initial fiber-cement/fiber interlayer debonding, (2) reprecipitation of crystalline and amorphous ettringite within the void space at the former fiber-cement interface, and (3) fiber embrittlement due to reprecipitation of calcium hydroxide filling the spaces within the fiber cell wall structure. Finally, partial portland cement replacement with various supplementary cementitious materials (SCMs) has been investigated as a means of mitigating kraft pulp fiber-cement composite mechanical property degradation (i.e., strength and toughness losses) during wet/dry cycling. SCMs have been found to be effective in mitigating composite degradation through several processes, including a reduction in the calcium hydroxide content, stabilization of monosulfate by maintaining pore solution pH, and a decrease in ettringite reprecipitation accomplished by increased binding of aluminum in calcium aluminate phases and calcium in the calcium silicate hydrate (C-S-H) phase.

  4. Stormwater Runoff and Water Quality Modeling in Urban Maryland

    NASA Astrophysics Data System (ADS)

    Wang, J.; Forman, B. A.; Natarajan, P.; Davis, A.

    2015-12-01

    Urbanization significantly affects storm water runoff through the creation of new impervious surfaces such as highways, parking lots, and rooftops. Such changes can adversely impact the downstream receiving water bodies in terms of physical, chemical, and biological conditions. In order to mitigate the effects of urbanization on downstream water bodies, stormwater control measures (SCMs) have been widely used (e.g., infiltration basins, bioswales). A suite of observations from an infiltration basin installed adjacent to a highway in urban Maryland was used to evaluate stormwater runoff attenuation and pollutant removal rates at the well-instrumented SCM study site. In this study, the Storm Water Management Model (SWMM) was used to simulate the performance of the SCM. An automatic, split-sample calibration framework was developed to improve SWMM performance efficiency. The results indicate SWMM can accurately reproduce the hydraulic response of the SCM (in terms of reproducing measured inflow and outflow) during synoptic scale storm events lasting more than one day, but is less accurate during storm events lasting only a few hours. Similar results were found for a suite of modeled (and observed) water quality constituents, including suspended sediment, metals, N, P, and chloride.
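
    The split-sample calibration idea can be sketched as follows; run_model is a hypothetical hook wrapping a SWMM simulation, events pairs rainfall forcing with observed outflow, and the exhaustive search over candidate parameter sets stands in for whatever optimizer the study actually used:

        import numpy as np

        def nse(obs, sim):
            """Nash-Sutcliffe efficiency: 1 is perfect; <0 is worse than the mean."""
            obs, sim = np.asarray(obs, float), np.asarray(sim, float)
            return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

        def split_sample_calibrate(run_model, events, param_sets, split=0.5):
            """Calibrate on the first part of the event record, validate on the rest.

            events: chronological list of (forcing, observed_outflow) pairs.
            """
            k = int(len(events) * split)
            calib, valid = events[:k], events[k:]

            def mean_nse(params, subset):
                return float(np.mean([nse(obs, run_model(params, forcing))
                                      for forcing, obs in subset]))

            best = max(param_sets, key=lambda p: mean_nse(p, calib))
            return best, mean_nse(best, valid)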

  5. Offline GCSS Intercomparison of Cloud-Radiation Interaction and Surface Fluxes

    NASA Technical Reports Server (NTRS)

    Tao, W.-K.; Johnson, D.; Krueger, S.; Zulauf, M.; Donner, L.; Seman, C.; Petch, J.; Gregory, J.

    2004-01-01

    Simulations of deep tropical clouds by both cloud-resolving models (CRMs) and single-column models (SCMs) in the GEWEX Cloud System Study (GCSS) Working Group 4 (WG4; Precipitating Convective Cloud Systems), Case 2 (19-27 December 1992, TOGA-COARE IFA) have produced large differences in the mean heating and moistening rates (-1 to -5 K and -2 to 2 g/kg, respectively). Since the large-scale advective temperature and moisture "forcing" are prescribed for this case, a closer examination of two of the remaining external types of "forcing", namely radiative heating and air/sea heat and moisture transfer, is warranted. This paper examines the radiation and surface flux parameterizations currently used in the cloud models participating in GCSS WG4 by executing the models "offline" for one time step (12 s) for a prescribed atmospheric state, then examining the surface and radiation fluxes from each model. The dynamic, thermodynamic, and microphysical fields are provided by the GCE-derived model output for Case 2 during a period of very active deep convection (westerly wind burst). The surface and radiation fluxes produced by the models are then divided into prescribed convective, stratiform, and clear regions in order to examine the role that clouds play in the flux parameterizations. The results suggest that the differences between the models are attributable more to the surface flux parameterizations than to the radiation schemes.

  6. Can We Use Single-Column Models for Understanding the Boundary Layer Cloud-Climate Feedback?

    NASA Astrophysics Data System (ADS)

    Dal Gesso, S.; Neggers, R. A. J.

    2018-02-01

    This study explores how to drive Single-Column Models (SCMs) with existing data sets of General Circulation Model (GCM) outputs, with the aim of studying the boundary layer cloud response to climate change in the marine subtropical trade wind regime. The EC-EARTH SCM is driven with the large-scale tendencies and boundary conditions as derived from two different data sets, consisting of high-frequency outputs of GCM simulations. SCM simulations are performed near Barbados Cloud Observatory in the dry season (January-April), when fair-weather cumulus is the dominant low-cloud regime. This climate regime is characterized by a near equilibrium in the free troposphere between the long-wave radiative cooling and the large-scale advection of warm air. In the SCM, this equilibrium is ensured by scaling the monthly mean dynamical tendency of temperature and humidity such that it balances that of the model physics in the free troposphere. In this setup, the high-frequency variability in the forcing is maintained, and the boundary layer physics acts freely. This technique yields representative cloud amount and structure in the SCM for the current climate. Furthermore, the cloud response to a sea surface warming of 4 K as produced by the SCM is consistent with that of the forcing GCM.
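
    One way to write the balancing constraint described above, in our notation rather than the paper's: let A(t) be the high-frequency dynamical tendency of temperature or humidity and P the corresponding tendency from the model physics; the forcing actually applied is

        \[
        \widetilde{A}(t) = \alpha\,A(t),
        \qquad
        \alpha = -\,\frac{\overline{P}}{\overline{A}},
        \]

    where overbars denote monthly means at free-tropospheric levels, so that the scaled dynamical tendency balances the physics there on the monthly mean, while the sub-monthly variability of the forcing is preserved and the boundary layer physics acts freely.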

  7. Evaluation of Cirrus Cloud Simulations Using ARM Data - Development of a Case Study Data Set

    NASA Technical Reports Server (NTRS)

    Starr, David O'C.; Demoz, Belay; Lare, Andrew; Poellot, Michael; Sassen, Kenneth; Heymsfield, Andrew; Brown, Philip; Mace, Jay; Einaudi, Franco (Technical Monitor)

    2001-01-01

    Cloud-resolving models (CRMs) provide an effective linkage in terms of parameters and scales between observations and the parametric treatments of clouds in global climate models (GCMs). They also represent the best understanding of the physical processes acting to determine cloud system lifecycle. The goal of this project is to improve state-of-the-art CRMs used for studies of cirrus clouds and to establish a relative calibration with GCMs through comparisons among CRMs, single column model (SCM) versions of the GCMs, and observations. This project will compare and evaluate a variety of CRMs and SCMs, under the auspices of the GEWEX Cloud Systems Study (GCSS) Working Group on Cirrus Cloud Systems (WG2), using ARM data acquired at the Southern Great Plains (SGP) site. This poster will report on progress in developing a suitable WG2 case study data set based on the September 26, 1996 ARM IOP case - the Hurricane Nora outflow case. The environmental data (input) will be described as well as the wealth of validating cloud observations. We plan to also show results of preliminary simulations. The science questions to be addressed derive significantly from results of the GCSS WG2 cloud model comparison projects, which will be briefly summarized.

  8. A modified ASTM C1012 procedure for qualifying blended cements containing limestone and SCMs for use in sulfate-rich environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barcelo, Laurent (Lafarge Centre de Recherche, 95 rue du Montmurier, 38291 St Quentin Fallavier; E-mail: laurent.barcelo@lafarge.com); Gartner, Ellis

    2014-09-15

    Blended Portland cements containing up to 15% limestone have recently been introduced into Canada and the USA. These cements were initially not allowed for use in sulfate environments but this restriction has been lifted in the Canadian cement specification, provided that the “limestone cement” includes sufficient SCM and that it passes a modified version of the CSA A3004-C8 (equivalent to ASTM C1012) test procedure run at a low temperature (5 °C). This new procedure is proposed as a means of predicting the risk of the thaumasite form of sulfate attack in concretes containing limestone cements. The goal of the present study was to better understand how this approach works both in practice and in theory. Results from three different laboratories utilizing the CSA A3004-C8 test procedure are compared and analyzed, while also taking into account the results of thermodynamic modeling and of thaumasite formation experiments conducted in dilute suspensions.

  9. Human Adenovirus Core Protein V Is Targeted by the Host SUMOylation Machinery To Limit Essential Viral Functions.

    PubMed

    Freudenberger, Nora; Meyer, Tina; Groitl, Peter; Dobner, Thomas; Schreiner, Sabrina

    2018-02-15

    Human adenoviruses (HAdV) are nonenveloped viruses containing a linear, double-stranded DNA genome surrounded by an icosahedral capsid. To allow proper viral replication, the genome is imported through the nuclear pore complex associated with viral core proteins. Until now, the role of these incoming virion proteins during the early phase of infection was poorly understood. The core protein V is speculated to bridge the core and the surrounding capsid. It binds the genome in a sequence-independent manner and localizes in the nucleus of infected cells, accumulating at nucleoli. Here, we show that protein V contains conserved SUMO conjugation motifs (SCMs). Mutation of these consensus motifs resulted in reduced SUMOylation of the protein; thus, protein V represents a novel target of the host SUMOylation machinery. To understand the role of protein V SUMO posttranslational modification during productive HAdV infection, we generated a replication-competent HAdV with SCM mutations within the protein V coding sequence. Phenotypic analyses revealed that these SCM mutations are beneficial for adenoviral replication. Blocking protein V SUMOylation at specific sites shifts the onset of viral DNA replication to earlier time points during infection and promotes viral gene expression. Simultaneously, the altered kinetics within the viral life cycle are accompanied by more efficient proteasomal degradation of host determinants and increased virus progeny production than that observed during wild-type infection. Taken together, our studies show that protein V SUMOylation reduces virus growth; hence, protein V SUMOylation represents an important novel aspect of the host antiviral strategy to limit virus replication and thereby points to potential intervention strategies. IMPORTANCE Many decades of research have revealed that HAdV structural proteins promote viral entry and mainly physical stability of the viral genome in the capsid. Our work over the last years showed that this concept needs expansion as the functions are more diverse. We showed that capsid protein VI regulates the antiviral response by modulation of the transcription factor Daxx during infection. Moreover, core protein VII interacts with SPOC1 restriction factor, which is beneficial for efficient viral gene expression. Here, we were able to show that core protein V also represents a novel substrate of the host SUMOylation machinery and contains several conserved SCMs; mutation of these consensus motifs reduced SUMOylation of the protein. Unexpectedly, we observed that introducing these mutations into HAdV promotes adenoviral replication. In conclusion, we offer novel insights into adenovirus core proteins and provide evidence that SUMOylation of HAdV factors regulates replication efficiency. Copyright © 2018 American Society for Microbiology.
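
    For readers unfamiliar with SCMs in this sequence sense, the canonical SUMO-conjugation consensus is psi-K-x-E/D, where psi is a large hydrophobic residue and the lysine (K) is the SUMO acceptor. The toy scan below only illustrates the pattern; it is not the authors' analysis pipeline, and the example sequence is invented.

      # Toy scan for the canonical SUMO-conjugation consensus motif psi-K-x-E/D.
      import re

      SCM_PATTERN = re.compile(r"[VILMF]K.[ED]")

      def find_scms(protein_seq: str):
          """Yield (position, motif) for each consensus match; the lysine at
          offset +1 in the motif is the predicted SUMO acceptor site."""
          for m in SCM_PATTERN.finditer(protein_seq.upper()):
              yield m.start() + 2, m.group()  # 1-based position of the lysine

      print(list(find_scms("MALKTEVKHEIKSE")))  # hypothetical sequence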

  10. The trans-species core SELF: the emergence of active cultural and neuro-ecological agents through self-related processing within subcortical-cortical midline networks.

    PubMed

    Panksepp, Jaak; Northoff, Georg

    2009-03-01

    The nature of "the self" has been one of the central problems in philosophy and more recently in neuroscience. This raises various questions: (i) Can we attribute a self to animals? (ii) Do animals and humans share certain aspects of their core selves, yielding a trans-species concept of self? (iii) What are the neural processes that underlie a possible trans-species concept of self? (iv) What are the developmental aspects and do they result in various levels of self-representation? Drawing on recent literature from both human and animal research, we suggest a trans-species concept of self that is based upon what has been called a "core-self," which can be described by self-related processing (SRP) as a specific mode of interaction between organism and environment. When we refer to specific neural networks, we will here refer to the underlying system as the "core-SELF." The core-SELF provides primordial neural coordinates that represent organisms as living creatures: at the lowest level this elaborates interoceptive states along with raw emotional feelings (i.e., the intentions in action of a primordial core-SELF) while higher medial cortical levels facilitate affective-cognitive integration (yielding a fully-developed nomothetic core-self). Developmentally, SRP allows stimuli from the environment to be related and linked to organismic needs, signaled and processed within core-self structures within subcortical-cortical midline structures (SCMS) that provide the foundation for epigenetic emergence of ecologically framed, higher idiographic forms of selfhood across different individuals within a species. These functions ultimately operate as a coordinated network. We postulate that core SRP operates automatically, is deeply affective, and is developmentally and epigenetically connected to sensory-motor and higher cognitive abilities. This core-self is mediated by SCMS, embedded in visceral and instinctual representations of the body that are well integrated with basic attentional, emotional and motivational functions that are apparently shared between humans, non-human mammals, and perhaps in a proto-SELF form, other vertebrates. Such a trans-species concept of organismic coherence is thoroughly biological and affective at the lowest levels of a complex neural network, and culturally and ecologically molded at higher levels of neural processing. It allows organisms to selectively adapt to and integrate with physical and social environments. Such a psychobiologically universal, but environmentally diversified, concept may promote novel trans-species studies of the core-self across mammalian species.

  11. Characterizing particle-scale equilibrium adsorption and kinetics of uranium(VI) desorption from U-contaminated sediments

    USGS Publications Warehouse

    Stoliker, Deborah L.; Liu, Chongxuan; Kent, Douglas B.; Zachara, John M.

    2013-01-01

    Rates of U(VI) release from individual dry-sieved size fractions of a field-aggregated, field-contaminated composite sediment from the seasonally saturated lower vadose zone of the Hanford 300-Area were examined in flow-through reactors to maintain quasi-constant chemical conditions. The principal source of variability in equilibrium U(VI) adsorption properties of the various size fractions was the impact of variable chemistry on adsorption. This source of variability was represented using surface complexation models (SCMs) with different stoichiometric coefficients with respect to hydrogen ion and carbonate concentrations for the different size fractions. A reactive transport model incorporating equilibrium expressions for cation exchange and calcite dissolution, along with rate expressions for aerobic respiration and silica dissolution, described the temporal evolution of solute concentrations observed during the flow-through reactor experiments. Kinetic U(VI) desorption was well described using a multirate SCM with an assumed lognormal distribution for the mass-transfer rate coefficients. The estimated mean and standard deviation of the rate coefficients were the same for all <2 mm size fractions but differed for the 2–8 mm size fraction. Micropore volumes, assessed using t-plots to analyze N2 desorption data, were also the same for all dry-sieved <2 mm size fractions, indicating a link between micropore volumes and mass-transfer rate properties. Pore volumes for dry-sieved size fractions exceeded values for the corresponding wet-sieved fractions. We hypothesize that repeated field wetting and drying cycles lead to the formation of aggregates and/or coatings containing (micro)pore networks which provided an additional mass-transfer resistance over that associated with individual particles. The 2–8 mm fraction exhibited a larger average and standard deviation in the distribution of mass-transfer rate coefficients, possibly caused by the abundance of microporous basaltic rock fragments.
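
    The multirate formulation lends itself to a compact numerical sketch: discretize the lognormal distribution of rate coefficients into equal-probability bins and sum the resulting first-order decays. The discretization scheme and parameter values below are illustrative, not the study's fitted values.

      # Multirate first-order desorption with lognormally distributed rate
      # coefficients (a sketch of the general technique described above).
      import numpy as np
      from scipy.stats import lognorm

      def multirate_fraction_remaining(t, mu, sigma, n_bins=100):
          """Fraction of exchangeable U(VI) still sorbed at time t.

          mu, sigma: mean and standard deviation of ln(k), with k in 1/time.
          The lognormal density is split into n_bins equal-probability bins.
          """
          p = (np.arange(n_bins) + 0.5) / n_bins          # bin midpoints in probability
          k = lognorm.ppf(p, s=sigma, scale=np.exp(mu))   # one rate coefficient per bin
          # each bin decays first-order; average over the equal-weight bins
          return np.mean(np.exp(-np.outer(np.atleast_1d(t), k)), axis=1)

      print(multirate_fraction_remaining([0.0, 1.0, 10.0], mu=-2.0, sigma=1.0))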

  12. Elucidating the role of surface passivating ligand structural parameters in hole wave function delocalization in semiconductor cluster molecules.

    PubMed

    Teunis, Meghan B; Nagaraju, Mulpuri; Dutta, Poulami; Pu, Jingzhi; Muhoberac, Barry B; Sardar, Rajesh; Agarwal, Mangilal

    2017-09-28

    This article describes the mechanisms underlying electronic interactions between surface passivating ligands and (CdSe)34 semiconductor cluster molecules (SCMs) that facilitate band-gap engineering through the delocalization of hole wave functions without altering their inorganic core. We show here both experimentally and through density functional theory calculations that the expansion of the hole wave function beyond the SCM boundary into the ligand monolayer depends not only on the pre-binding energetic alignment of interfacial orbitals between the SCM and surface passivating ligands but is also strongly influenced by definable ligand structural parameters such as the extent of their π-conjugation [π-delocalization energy; pyrene (Py), anthracene (Anth), naphthalene (Naph), and phenyl (Ph)], binding mode [dithiocarbamate (DTC, -NH-CS2^-), carboxylate (-COO^-), and amine (-NH2)], and binding head group [-SH, -SeH, and -TeH]. We observe an unprecedentedly large ∼650 meV red-shift in the lowest energy optical absorption band of (CdSe)34 SCMs upon passivating their surface with Py-DTC ligands and the trend is found to be Ph- < Naph- < Anth- < Py-DTC. This shift is reversible upon removal of Py-DTC by triethylphosphine gold(I) chloride treatment at room temperature. Furthermore, we performed temperature-dependent (80-300 K) photoluminescence lifetime measurements, which show longer lifetime at lower temperature, suggesting a strong influence of hole wave function delocalization rather than carrier trapping and/or phonon-mediated relaxation. Taken together, knowledge of how ligands electronically interact with the SCM surface is crucial to semiconductor nanomaterial research in general because it allows the tuning of electronic properties of nanomaterials for better charge separation and enhanced charge transfer, which in turn will increase optoelectronic device and photocatalytic efficiencies.

  13. Investigating the dependence of SCM simulated precipitation and clouds on the spatial scale of large-scale forcing at SGP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng

    Large-scale forcing data, such as vertical velocity and advective tendencies, are required to drive single-column models (SCMs), cloud-resolving models, and large-eddy simulations. Previous studies suggest that some errors of these model simulations could be attributed to the lack of spatial variability in the specified domain-mean large-scale forcing. This study investigates the spatial variability of the forcing and explores its impact on SCM simulated precipitation and clouds. A gridded large-scale forcing dataset from the March 2000 Cloud Intensive Operational Period at the Atmospheric Radiation Measurement program's Southern Great Plains site is used for analysis and to drive the single-column version of the Community Atmospheric Model Version 5 (SCAM5). When the gridded forcing data show large spatial variability, such as during a frontal passage, SCAM5 with the domain-mean forcing is not able to capture the convective systems that are partly located in the domain or that only occupy part of the domain. This problem has been largely reduced by using the gridded forcing data, which allows running SCAM5 in each subcolumn and then averaging the results within the domain. This is because the subcolumns have a better chance to capture the timing of the frontal propagation and the small-scale systems. As a result, other potential uses of the gridded forcing data, such as understanding and testing scale-aware parameterizations, are also discussed.

  14. Defining reactive sites on hydrated mineral surfaces: Rhombohedral carbonate minerals

    NASA Astrophysics Data System (ADS)

    Villegas-Jiménez, Adrián; Mucci, Alfonso; Pokrovsky, Oleg S.; Schott, Jacques

    2009-08-01

    Despite the success of surface complexation models (SCMs) to interpret the adsorptive properties of mineral surfaces, their construct is sometimes incompatible with fundamental chemical and/or physical constraints, and thus casts doubt on the physical-chemical significance of the derived model parameters. In this paper, we address the definition of primary surface sites (i.e., adsorption units) at hydrated carbonate mineral surfaces and discuss its implications for the formulation and calibration of surface equilibria for these minerals. Given the abundance of experimental and theoretical information on the structural properties of the hydrated (10.4) cleavage calcite surface, this mineral was chosen for a detailed theoretical analysis of critical issues relevant to the definition of primary surface sites. Accordingly, a single, generic charge-neutral surface site (≡CaCO3·H2O^0) is defined for this mineral whereupon mass-action expressions describing adsorption equilibria were formulated. The one-site scheme, analogous to previously postulated descriptions of metal oxide surfaces, allows for a simple, yet realistic, molecular representation of surface reactions and provides a generalized reference state suitable for the calculation of sorption equilibria for rhombohedral carbonate minerals via Law of Mass Action (LMA) and Gibbs Energy Minimization (GEM) approaches. The one-site scheme is extended to other rhombohedral carbonate minerals and tested against published experimental data for magnesite and dolomite in aqueous solutions. A simplified SCM based on this scheme can successfully reproduce surface charge, reasonably simulate the electrokinetic behavior of these minerals, and predict surface speciation agreeing with available spectroscopic data. According to this model, a truly amphoteric behavior is displayed by these surfaces across the pH scale but at circum-neutral pH (5.8-8.2) and relatively high ΣCO2 (≥1 mM), proton/bicarbonate co-adsorption becomes important and leads to the formation of a charge-neutral H2CO3-like surface species which may largely account for the surface charge-buffering behavior and the relatively wide range of pH values of isoelectric points (pH_iep) reported in the literature for these minerals.

  15. Evaluation of Cirrus Cloud Simulations using ARM Data-Development of Case Study Data Set

    NASA Technical Reports Server (NTRS)

    Starr, David O'C.; Demoz, Belay; Wang, Yansen; Lin, Ruei-Fong; Lare, Andrew; Mace, Jay; Poellot, Michael; Sassen, Kenneth; Brown, Philip

    2002-01-01

    Cloud-resolving models (CRMs) are being increasingly used to develop parametric treatments of clouds and related processes for use in global climate models (GCMs). CRMs represent the integrated knowledge of the physical processes acting to determine cloud system lifecycle and are well matched to typical observational data in terms of physical parameters/measurables and scale-resolved physical processes. Thus, they are suitable for direct comparison to field observations for model validation and improvement. The goal of this project is to improve state-of-the-art CRMs used for studies of cirrus clouds and to establish a relative calibration with GCMs through comparisons among CRMs, single column model (SCM) versions of the GCMs, and observations. The objective is to compare and evaluate a variety of CRMs and SCMs, under the auspices of the GEWEX Cloud Systems Study (GCSS) Working Group on Cirrus Cloud Systems (WG2), using ARM data acquired at the Southern Great Plains (SGP) site. This poster will report on progress in developing a suitable WG2 case study data set based on the September 26, 1996 ARM IOP case - the Hurricane Nora outflow case. Progress in assessing cloud and other environmental conditions will be described. Results of preliminary simulations using a regional cloud system model (MM5) and a CRM will be discussed. Focal science questions for the model comparison are strongly based on results of the idealized GCSS WG2 cirrus cloud model comparison projects (Idealized Cirrus Cloud Model Comparison Project and Cirrus Parcel Model Comparison Project), which will also be briefly summarized.

  16. Spiking cortical model based non-local means method for despeckling multiframe optical coherence tomography data

    NASA Astrophysics Data System (ADS)

    Gu, Yameng; Zhang, Xuming

    2017-05-01

    Optical coherence tomography (OCT) images are severely degraded by speckle noise. Existing methods for despeckling multiframe OCT data cannot deliver sufficient speckle suppression while preserving image details well. To address this problem, the spiking cortical model (SCM) based non-local means (NLM) method has been proposed in this letter. In the proposed method, the considered frame and two neighboring frames are input into three SCMs to generate the temporal series of pulse outputs. The normalized moment of inertia (NMI) of the considered patches in the pulse outputs is extracted to represent the rotational and scaling invariant features of the corresponding patches in each frame. The pixel similarity is computed based on the Euclidean distance between the NMI features and used as the weight. Each pixel in the considered frame is restored by the weighted averaging of all pixels in the pre-defined search window in the three frames. Experiments on the real multiframe OCT data of the pig eye demonstrate the advantage of the proposed method over the frame averaging method, the multiscale sparsity based tomographic denoising method, the wavelet-based method and the traditional NLM method in terms of visual inspection and objective metrics such as signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), equivalent number of looks (ENL) and cross-correlation (XCOR).
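
    The weighted-averaging core of such an NLM scheme can be sketched as follows, with a scalar per-pixel feature map standing in for the NMI features extracted from the SCM pulse outputs. This is a schematic toy, not the authors' implementation.

      # Schematic non-local means across frames, driven by per-pixel features.
      import numpy as np

      def nlm_filter(frames, features, search_radius=5, h=0.1):
          """frames, features: (n_frames, H, W) arrays; returns denoised centre frame."""
          n, height, width = frames.shape
          c = n // 2  # index of the considered frame
          out = np.zeros((height, width))
          for i in range(height):
              for j in range(width):
                  i0, i1 = max(0, i - search_radius), min(height, i + search_radius + 1)
                  j0, j1 = max(0, j - search_radius), min(width, j + search_radius + 1)
                  # Euclidean feature distance over the search window, all frames
                  d2 = (features[:, i0:i1, j0:j1] - features[c, i, j]) ** 2
                  w = np.exp(-d2 / (h * h))          # similarity weights
                  out[i, j] = np.sum(w * frames[:, i0:i1, j0:j1]) / np.sum(w)
          return out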

  17. Magneto-structural correlations in a family of Fe(II)Re(IV)(CN)2 single-chain magnets: density functional theory and ab initio calculations.

    PubMed

    Zhang, Yi-Quan; Luo, Cheng-Lin; Wu, Xin-Bao; Wang, Bing-Wu; Gao, Song

    2014-04-07

    Until now, the expressions of the anisotropic energy barriers Δξ and ΔA in terms of the uniaxial magnetic anisotropy D, the intrachain coupling strength J, and the high-spin ground state S for single-chain magnets (SCMs) in the intermediate region between the Ising and the Heisenberg limits were unknown. To explore this relationship, we used density functional theory and ab initio methods to obtain expressions of Δξ and ΔA in terms of D, J, and S for six R4Fe(II)-Re(IV)Cl4(CN)2 (R = diethylformamide (1), dibutylformamide (2), dimethylformamide (3), dimethylbutyramide (4), dimethylpropionamide (5), and diethylacetamide (6)) SCMs in the intermediate region. The ΔA value for compounds 1-3 was very similar to the magnetic anisotropy energy of a single Fe(II), while the value of Δξ was predicted using the exchange interaction of Fe(II) with the neighboring Re(IV), which could be expressed as 2J·S_Re·S_Fe. Similar to compounds 1-3, the anisotropy energy barrier ΔA of compounds 4 and 5 was also equal to (D_i − E_i)·S_Fe^2, but the correlation energy Δξ was closely equal to 2J·S_Re·S_Fe·(cos 98.4° − cos 180°) due to the reversal of the spins on the opposite Fe(II). For compound 6, one unit cell of Re(IV)Fe(II) was regarded as a domain wall since it had two different Re(IV)-Fe(II) couplings. Thus, the Δξ of compound 6 was expressed as 4J″·S_Re1Fe1·S_Re2Fe2, where J″ was the coupling constant of the neighboring unit cells Re1Fe1 and Re2Fe2, and ΔA was equal to the anisotropic energy barrier of one domain wall, given by D_Re1Fe1·(S_Re1Fe1^2 − 1/4).
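
    Set in consistent notation, the barrier relations quoted above read (as reconstructed from the abstract):

      % Energy-barrier relations for the three groups of compounds
      \begin{align*}
      \text{compounds 1--3:}\quad & \Delta_A = (D_i - E_i)\, S_{\mathrm{Fe}}^{2},
        \qquad \Delta_\xi = 2 J S_{\mathrm{Re}} S_{\mathrm{Fe}},\\
      \text{compounds 4, 5:}\quad & \Delta_A = (D_i - E_i)\, S_{\mathrm{Fe}}^{2},
        \qquad \Delta_\xi = 2 J S_{\mathrm{Re}} S_{\mathrm{Fe}}
        \left(\cos 98.4^{\circ} - \cos 180^{\circ}\right),\\
      \text{compound 6:}\quad & \Delta_A = D_{\mathrm{Re1Fe1}}
        \left(S_{\mathrm{Re1Fe1}}^{2} - \tfrac{1}{4}\right),
        \qquad \Delta_\xi = 4 J'' S_{\mathrm{Re1Fe1}} S_{\mathrm{Re2Fe2}}.
      \end{align*}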

  18. Non-uniform overland flow-infiltration model for roadside swales

    NASA Astrophysics Data System (ADS)

    García-Serrana, María; Gulliver, John S.; Nieber, John L.

    2017-09-01

    There is a need to quantify the hydrologic performance of vegetated roadside swales (drainage ditches) as stormwater control measures (SCMs). To quantify their infiltration performance in both the side slope and the channel of the swale, a model has been developed for coupling a Green-Ampt-Mein-Larson (GAML) infiltration submodel with kinematic wave submodels for both overland flow down the side slope and open channel flow for flow in the ditch. The coupled GAML submodel and overland flow submodel has been validated using data collected in twelve simulated runoff tests in three different highways located in the Minneapolis-St. Paul metropolitan area, MN. The percentage of the total water infiltrated into the side slope is considerably greater than into the channel. Thus, the side slope of a roadside swale is the main component contributing to the loss of runoff by infiltration and the channel primarily conveys the water that runs off the side slope, for the typical design found in highways. Finally, as demonstrated in field observations and the model, the fraction of the runoff/rainfall infiltrated (Vi∗) into the roadside swale appears to increase with a dimensionless saturated hydraulic conductivity (Ks∗), which is a function of the saturated hydraulic conductivity, rainfall intensity, and dimensions of the swale and contributing road surface. For design purposes, the relationship between Vi∗ and Ks∗ can provide a rough estimate of the fraction of runoff/rainfall infiltrated with the few essential parameters that appear to dominate the results.
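
    The GAML infiltration submodel builds on the Green-Ampt capacity relation f = Ks*(1 + psi*dtheta/F). The sketch below advances it explicitly under constant rainfall with illustrative parameters; it is not the authors' coupled swale model, which also routes overland and channel flow.

      # Minimal Green-Ampt infiltration update (sketch, explicit time stepping).
      import numpy as np

      def green_ampt_rate(F, Ks, psi, d_theta):
          """Infiltration capacity f = Ks * (1 + psi*d_theta/F), F = cumulative depth."""
          return Ks * (1.0 + psi * d_theta / max(F, 1e-9))

      def simulate(rain_rate, dt, Ks, psi, d_theta, n_steps):
          """Return cumulative infiltration over time under constant rainfall."""
          F, series = 1e-6, []
          for _ in range(n_steps):
              f = green_ampt_rate(F, Ks, psi, d_theta)
              F += min(f, rain_rate) * dt   # infiltration limited by supply
              series.append(F)
          return np.array(series)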

  19. Development of the Large-Scale Forcing Data to Support MC3E Cloud Modeling Studies

    NASA Astrophysics Data System (ADS)

    Xie, S.; Zhang, Y.

    2011-12-01

    The large-scale forcing fields (e.g., vertical velocity and advective tendencies) are required to run single-column and cloud-resolving models (SCMs/CRMs), which are the two key modeling frameworks widely used to link field data to climate model developments. In this study, we use an advanced objective analysis approach to derive the required forcing data from the soundings collected by the Midlatitude Continental Convective Cloud Experiment (MC3E) in support of its cloud modeling studies. MC3E is the latest major field campaign, conducted during the period 22 April 2011 to 06 June 2011 in south-central Oklahoma through a joint effort between the DOE ARM program and the NASA Global Precipitation Measurement Program. One of its primary goals is to provide a comprehensive dataset that can be used to describe the large-scale environment of convective cloud systems and evaluate model cumulus parameterizations. The objective analysis used in this study is the constrained variational analysis method. A unique feature of this approach is the use of domain-averaged surface and top-of-the-atmosphere (TOA) observations (e.g., precipitation and radiative and turbulent fluxes) as constraints to adjust atmospheric state variables from soundings by the smallest possible amount to conserve column-integrated mass, moisture, and static energy, so that the final analysis data are dynamically and thermodynamically consistent. To address potential uncertainties in the surface observations, an ensemble forcing dataset will be developed. Multi-scale forcing will also be created for simulating convective systems at various scales. At the meeting, we will provide more details about the forcing development and present some preliminary analysis of the characteristics of the large-scale forcing structures for several selected convective systems observed during MC3E.
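
    The "smallest possible adjustment" step has a familiar mathematical core: equality-constrained least squares. The sketch below shows only that core; the operational analysis additionally weights variables by error estimates and handles nonlinearity, both omitted here.

      # Minimize ||x - x0||^2 subject to A x = b (linear budget constraints on
      # column-integrated mass, moisture, and static energy).
      import numpy as np

      def variational_adjust(x0, A, b):
          """Return x closest to x0 (Euclidean norm) satisfying A @ x = b."""
          residual = b - A @ x0
          correction = A.T @ np.linalg.solve(A @ A.T, residual)
          return x0 + correction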

  20. Investigating the dependence of SCM simulated precipitation and clouds on the spatial scale of large-scale forcing at SGP

    DOE PAGES

    Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng

    2017-08-05

    Large-scale forcing data, such as vertical velocity and advective tendencies, are required to drive single-column models (SCMs), cloud-resolving models, and large-eddy simulations. Previous studies suggest that some errors of these model simulations could be attributed to the lack of spatial variability in the specified domain-mean large-scale forcing. This study investigates the spatial variability of the forcing and explores its impact on SCM simulated precipitation and clouds. A gridded large-scale forcing dataset from the March 2000 Cloud Intensive Operational Period at the Atmospheric Radiation Measurement program's Southern Great Plains site is used for analysis and to drive the single-column version of the Community Atmospheric Model Version 5 (SCAM5). When the gridded forcing data show large spatial variability, such as during a frontal passage, SCAM5 with the domain-mean forcing is not able to capture the convective systems that are partly located in the domain or that only occupy part of the domain. This problem has been largely reduced by using the gridded forcing data, which allows running SCAM5 in each subcolumn and then averaging the results within the domain. This is because the subcolumns have a better chance to capture the timing of the frontal propagation and the small-scale systems. As a result, other potential uses of the gridded forcing data, such as understanding and testing scale-aware parameterizations, are also discussed.
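
    The subcolumn strategy reduces to a simple pattern, sketched below with a placeholder run_scm function (the real workflow drives SCAM5 with each grid column's forcing and averages its diagnostics over the domain).

      # Run the SCM once per forcing grid column, then average the outputs.
      import numpy as np

      def run_domain(forcing_columns, run_scm):
          """forcing_columns: iterable of per-gridpoint forcing datasets."""
          results = [run_scm(f) for f in forcing_columns]   # independent subcolumns
          return np.mean(results, axis=0)                   # domain-mean diagnostics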

  1. A review on carbonation study in concrete

    NASA Astrophysics Data System (ADS)

    Venkat Rao, N.; Meena, T.

    2017-11-01

    In this paper the authors review carbonation studies, carbonation being a vital durability property of concrete. One of the major causes of deterioration and destruction of concrete is carbonation. The mechanism of carbonation involves the penetration of carbon dioxide (CO2) into the concrete pore system, which lowers the pH around the reinforcement and initiates the corrosion process. The paper also endeavours to elucidate the importance, process, and chemistry of carbonation, and how various parameters such as water/cement ratio, curing, depth of concrete cover, admixtures, grade of concrete, strength of concrete, porosity, and permeability affect carbonation in concrete. The role of Supplementary Cementitious Materials (SCMs) such as Ground Granulated Blast Furnace Slag (GGBS) and Silica Fume (SF) is also reviewed, along with the influence of depth of carbonation.

  2. Historical Perspective of Split Cord Malformations: A Tale of Two Cords.

    PubMed

    Saker, Erfanul; Loukas, Marios; Fisahn, Christian; Oskouian, Rod J; Tubbs, R Shane

    2017-01-01

    Our appreciation and understanding of what is now known as the split cord malformation (SCM) have a long history. The oldest known example of SCM is from roughly AD 100. Other isolated examples can be found in the large body of work of the pathologists of the 1800s, where the SCMs were found incidentally during autopsies. SCM has a rich history and has intrigued physicians for over 200 years. Many well-known figures from the past such as Chiari and von Recklinghausen, both pathologists, made early postmortem descriptions of SCM. With the advent of MRI, these pathological embryological derailments can now often be detected and appreciated early and during life. Our understanding and ability to treat these congenital malformations as well as the terminology used to describe them have changed over the last several decades. © 2016 S. Karger AG, Basel.

  3. Piezoresistive Soft Condensed Matter Sensor for Body-Mounted Vital Function Applications

    PubMed Central

    Melnykowycz, Mark; Tschudin, Michael; Clemens, Frank

    2016-01-01

    A soft condensed matter sensor (SCMS) designed to measure strains on the human body is presented. The hybrid material, based on carbon black (CB) and a thermoplastic elastomer (TPE), was bonded to a textile elastic band and used as a sensor on the human wrist to measure hand motion by detecting the movement of tendons in the wrist. Additionally, it was able to track the blood pulse wave of a person, allowing for the determination of pulse wave peaks corresponding to the systolic and diastolic blood pressures in order to calculate the heart rate. Sensor characterization was done using mechanical cycle testing, and the band sensor achieved a gauge factor of 4–6.3 while displaying low signal relaxation when held at a given strain level. Near-linear signal performance was displayed when loading to successively higher strain levels up to 50% strain. PMID:26959025

  4. Piezoresistive Soft Condensed Matter Sensor for Body-Mounted Vital Function Applications.

    PubMed

    Melnykowycz, Mark; Tschudin, Michael; Clemens, Frank

    2016-03-04

    A soft condensed matter sensor (SCMS) designed to measure strains on the human body is presented. The hybrid material, based on carbon black (CB) and a thermoplastic elastomer (TPE), was bonded to a textile elastic band and used as a sensor on the human wrist to measure hand motion by detecting the movement of tendons in the wrist. Additionally, it was able to track the blood pulse wave of a person, allowing for the determination of pulse wave peaks corresponding to the systolic and diastolic blood pressures in order to calculate the heart rate. Sensor characterization was done using mechanical cycle testing, and the band sensor achieved a gauge factor of 4-6.3 while displaying low signal relaxation when held at a given strain level. Near-linear signal performance was displayed when loading to successively higher strain levels up to 50% strain.
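
    The reported gauge factor is the slope of relative resistance change versus strain, GF = (ΔR/R0)/ε. The snippet below computes it from a toy loading record; the data shown are invented, not the paper's measurements.

      # Gauge factor as the least-squares slope of dR/R0 versus strain.
      import numpy as np

      def gauge_factor(resistance, strain, r0=None):
          """Fit GF from paired resistance (ohm) and strain (dimensionless) records."""
          resistance = np.asarray(resistance, dtype=float)
          r0 = r0 if r0 is not None else resistance[0]
          dr_rel = (resistance - r0) / r0
          return np.polyfit(np.asarray(strain, dtype=float), dr_rel, 1)[0]

      print(gauge_factor([100.0, 102.5, 105.1], [0.0, 0.005, 0.010]))  # ~5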

  5. Design and Performance of McRas in SCMs and GEOS I/II GCMs

    NASA Technical Reports Server (NTRS)

    Sud, Yogesh C.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    The design of a prognostic cloud scheme named McRAS (Microphysics of clouds with Relaxed Arakawa-Schubert Scheme) for general circulation models (GCMs) will be discussed. McRAS distinguishes three types of clouds: (1) convective, (2) stratiform, and (3) boundary-layer types. The convective clouds transform and merge into stratiform clouds on an hourly time-scale, while the boundary-layer clouds merge into the stratiform clouds instantly. The cloud condensate converts into precipitation following the auto-conversion equations of Sundqvist, which contain a parametric adaptation for the Bergeron-Findeisen process of ice crystal growth and collection of cloud condensate by precipitation. All clouds convect, advect, as well as diffuse both horizontally and vertically with fully interactive cloud microphysics throughout the life-cycle of the cloud, while the optical properties of clouds are derived from the statistical distribution of hydrometeors and idealized cloud geometry. An evaluation of McRAS in a single column model (SCM) with the GATE Phase III and ARM CART datasets has shown that, together with the rest of the model physics, McRAS can simulate the observed temperature, humidity, and precipitation without many systematic errors. The time history and time-mean in-cloud water and ice distribution, fractional cloudiness, cloud optical thickness, origin of precipitation in the convective anvils and towers, and the convective updraft and downdraft velocities and mass fluxes all show realistic behavior. Performance of McRAS in the GEOS II GCM shows several satisfactory features, but some of the remaining deficiencies suggest the need for additional research involving convective triggers and inhibitors, provision for a continuously detraining updraft, a realistic scheme for cumulus gravity wave drag, and refinements to physical conditions for ascertaining the cloud detrainment level.
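
    The Sundqvist-type auto-conversion referenced above has the general form P = c0*q_c*[1 − exp(−(q_c/q_crit)^2)]. The sketch below shows this base form with illustrative coefficients; McRAS's parametric Bergeron-Findeisen adaptation is omitted.

      # Sundqvist-type autoconversion of cloud condensate to precipitation
      # (coefficients illustrative, not McRAS's tuned values).
      import numpy as np

      def sundqvist_autoconversion(q_cloud, c0=1e-4, q_crit=3e-4):
          """Precipitation production rate (kg/kg/s) from condensate q_cloud (kg/kg)."""
          return c0 * q_cloud * (1.0 - np.exp(-((q_cloud / q_crit) ** 2)))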

  6. Evaluation of a single column model at the Southern Great Plains climate research facility

    NASA Astrophysics Data System (ADS)

    Kennedy, Aaron D.

    Despite recent advancements in global climate modeling, models produce a large range of climate sensitivities for the Earth. This range of sensitivities results in part from uncertainties in modeling clouds. To understand and to improve cloud parameterizations in Global Climate Models (GCMs), simulations should be evaluated using observations of clouds. Detailed studies can be conducted at Atmospheric Radiation Measurements (ARM) sites which provide adequate observations and forcing for Single Column Model (SCM) studies. Unfortunately, forcing for SCMs is sparse and not available for many locations or times. This study had two main goals: (1) evaluate clouds from the GISS Model E AR5 SCM at the ARM Southern Great Plains site and (2) determine whether reanalysis-based forcing was feasible at this location. To accomplish these goals, multiple model runs were conducted from 1999--2008 using forcing provided by ARM and forcing developed from the North American Regional Reanalysis (NARR). To better understand cloud biases and differences in the forcings, atmospheric states were classified using Self Organizing Maps (SOMs). Although model simulations had many similarities with the observations, there were several noticeable biases. Deep clouds had a negative bias year-round and this was attributed to clouds being too thin during frontal systems and a lack of convection during the spring and summer. These results were consistent regardless of the forcing used. During August, SCM simulations had a positive bias for low clouds. This bias varied with the forcing suggesting that part of the problem was tied to errors in the forcing. NARR forcing had many favorable characteristics when compared to ARM observations and forcing. In particular, temperature and wind information were more accurate than ARM when compared to balloon soundings. During the cool season, NARR forcing produced results similar to ARM with reasonable precipitation and a similar cloud field. Although NARR vertical velocities were weaker than ARM during the convective season, these simulations were able to capture the majority of convective events. The limiting factor for NARR was humidity biases in the upper troposphere during the summer months. Prior to releasing this forcing to the modeling community, this issue must be investigated further.

  7. The optimization of concrete mixtures for use in highway applications

    NASA Astrophysics Data System (ADS)

    Moini, Mohamadreza

    Portland cement concrete is the most used commodity in the world after water. A major part of civil and transportation infrastructure, including bridges, roadway pavements, dams, and buildings, is made of concrete. In addition, concrete durability is often a major concern. In 2013, the American Society of Civil Engineers (ASCE) estimated that an annual investment of $170 billion in roads and $20.5 billion in bridges is needed to substantially improve the condition of this infrastructure. The same article reports that one-third of America's major roads are in poor or mediocre condition [1]. However, portland cement production is associated with approximately one cubic meter of carbon dioxide emissions. Indeed, the proper and systematic design of concrete mixtures for highway applications is essential, as concrete pavements represent up to 60% of interstate highway systems with heavier traffic loads. Combined principles of materials science and engineering can provide adequate methods and tools to facilitate concrete design and improve the existing specifications. In the same manner, durability must be addressed in the design and enhancement of long-term performance. Concrete used for highway pavement applications has a low cement content and can be placed at low slump. However, further reduction of cement content (e.g., below the current Wisconsin Department of Transportation specifications of 315-338 kg/m3 (530-570 lb/yd3) for mainstream concrete pavements and 335 kg/m3 (565 lb/yd3) for bridge substructures and superstructures) requires delicate design of the mixture to maintain the expected workability, overall performance, and long-term durability in the field. The design includes, but is not limited to, the optimization of aggregates, supplementary cementitious materials (SCMs), and chemical and air-entraining admixtures. This research investigated various theoretical and experimental methods of aggregate optimization applicable to the reduction of cement content. The research enabled a further reduction of cement content to 250 kg/m3 (420 lb/yd3), as required for the design of sustainable concrete pavements. It demonstrated that aggregate packing can be used in multiple ways as a tool to optimize aggregate assemblies and achieve the optimal particle size distribution of aggregate blends. The SCMs and air-entraining admixtures were selected to comply with existing WisDOT performance requirements, and the chemical admixtures were selected in a separate optimization study excluded from this thesis. The performance of different concrete mixtures was evaluated for fresh properties, strength development, and compressive and flexural strength from 1 to 360 days. The methods and tools discussed in this research are applicable to, but not limited to, concrete pavement applications. Current concrete proportioning standards, such as ACI 211 or the current WisDOT roadway standard specifications (Part 5: Structures, Section 501: Concrete), have limited or no recommendations, methods, or guidelines on aggregate optimization, the use of ternary aggregate blends (such as those used in the asphalt industry), the optimization of SCMs (e.g., class F and C fly ash, slag, metakaolin, silica fume), modern superplasticizers (such as polycarboxylate ether, PCE), or air-entraining admixtures.
    This research has demonstrated that the optimization of concrete mixture proportions can be achieved by the use and proper selection of optimal aggregate blends, resulting in a 12% to 35% reduction of cement content and more than a 50% enhancement of performance. To prove the proposed concrete proportioning method, the following steps were performed:
    • The experimental aggregate packing was investigated using northern and southern sources of aggregates from Wisconsin;
    • The theoretical aggregate packing models were utilized and the results were compared with experiments;
    • Multiple aggregate optimization methods (e.g., optimal grading, coarseness chart) were studied and compared to aggregate packing results and the performance of the experimental concrete mixtures;
    • Optimal aggregate blends were selected and used for concrete mixtures;
    • The optimal dosages of admixtures were selected for three types of plasticizing and superplasticizing admixtures based on a separately conducted study;
    • The SCM dosages were selected based on current WisDOT specifications;
    • The optimal air-entraining admixture dosage was investigated based on the performance of preliminary concrete mixtures;
    • Finally, optimal concrete mixtures were tested for fresh properties, compressive strength development, and modulus of rupture at early ages (1 day) and ultimate ages (360 days);
    • Durability performance indicators for the optimal concrete mixtures were also tested: resistance to rapid chloride permeability (RCP) at 30 and 90 days and resistance to rapid freezing and thawing at 56 days.
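
    As a flavor of the "optimal grading" methods mentioned above, the sketch below fits non-negative blend fractions to a 0.45-power target gradation by constrained least squares. The sieve data and target choice are invented for illustration and do not come from the thesis.

      # Fit blend fractions (non-negative, summing to 1) so the combined
      # gradation tracks a 0.45-power target curve.
      import numpy as np
      from scipy.optimize import minimize

      sieves = np.array([25.0, 19.0, 12.5, 9.5, 4.75, 2.36, 0.6, 0.075])  # mm
      target = (sieves / sieves.max()) ** 0.45 * 100.0    # target percent passing
      passing = np.array([                                 # one row per aggregate
          [100, 95, 60, 35, 5, 2, 1, 0],                  # coarse
          [100, 100, 98, 90, 55, 30, 8, 2],               # intermediate
          [100, 100, 100, 100, 97, 85, 40, 5],            # fine
      ], dtype=float)

      def misfit(w):
          return np.sum((w @ passing - target) ** 2)

      res = minimize(misfit, x0=np.full(3, 1 / 3),
                     bounds=[(0, 1)] * 3,
                     constraints={"type": "eq", "fun": lambda w: w.sum() - 1})
      print(res.x)  # optimal blend fractions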

  8. Intercomparison Project on Parameterizations of Large-Scale Dynamics for Simulations of Tropical Convection

    NASA Astrophysics Data System (ADS)

    Sobel, A. H.; Wang, S.; Bellon, G.; Sessions, S. L.; Woolnough, S.

    2013-12-01

    Parameterizations of large-scale dynamics have been developed in the past decade for studying the interaction between tropical convection and large-scale dynamics, based on our physical understanding of the tropical atmosphere. A principal advantage of these methods is that they offer a pathway to attack the key question of what controls large-scale variations of tropical deep convection. These methods have been used with both single column models (SCMs) and cloud-resolving models (CRMs) to study the interaction of deep convection with several kinds of environmental forcings. While much has been learned from these efforts, different groups' efforts are somewhat hard to compare. Different models, different versions of the large-scale parameterization methods, and experimental designs that differ in other ways are used. It is not obvious which choices are consequential to the scientific conclusions drawn and which are not. The methods have matured to the point that there is value in an intercomparison project. In this context, the Global Atmospheric Systems Study - Weak Temperature Gradient (GASS-WTG) project was proposed at the Pan-GASS meeting in September 2012. The weak temperature gradient approximation is one method to parameterize large-scale dynamics, and is used in the project name for historical reasons and simplicity, but another method, the damped gravity wave (DGW) method, will also be used in the project. The goal of the GASS-WTG project is to develop community understanding of the parameterization methods currently in use. Their strengths, weaknesses, and functionality in models with different physics and numerics will be explored in detail, and their utility to improve our understanding of tropical weather and climate phenomena will be further evaluated. This presentation will introduce the intercomparison project, including background, goals, and overview of the proposed experimental design. Interested groups will be invited to join (it will not be too late), and preliminary results will be presented.

  9. Erythrocyte-like hollow carbon capsules and their application in proton exchange membrane fuel cells.

    PubMed

    Kim, Jung Ho; Yu, Jong-Sung

    2010-12-14

    Hierarchical nanostructured erythrocyte-like hollow carbon (EHC) with a hollow hemispherical macroporous core of ca. 230 nm in diameter and a 30-40 nm thick mesoporous shell was synthesized and explored as a cathode catalyst support in a proton exchange membrane fuel cell (PEMFC). The morphology control of EHC was successfully achieved using a solid core/mesoporous shell (SCMS) silica template and different styrene/furfuryl alcohol mixture compositions by a nanocasting method. The EHC-supported Pt (20 wt%) cathodes prepared have demonstrated markedly enhanced catalytic activity towards oxygen reduction reactions (ORRs) and greatly improved PEMFC polarization performance compared to carbon black Vulcan XC-72 (VC)-supported ones, probably due to the superb structural characteristics of the EHC such as uniform size, well-developed porosity, large specific surface area and pore volume. In particular, Pt/EHC cathodes exhibited ca. 30-60% higher ORR activity than a commercial Johnson Matthey Pt catalyst at a low catalyst loading of 0.2 mg Pt cm^-2.

  10. On the Interaction between Superabsorbent Hydrogels and Cementitious Materials

    NASA Astrophysics Data System (ADS)

    Farzanian, Khashayar

    Autogenous shrinkage induced cracking is a major concern in high performance concretes (HPC), which are produced with low water to cement ratios. Internal curing to maintain high relative humidity in HPC with the use of an internal water reservoir has proven effective in mitigating autogenous shrinkage in HPC. Superabsorbent polymers (SAP), or hydrogels, have received increasing attention as an internal curing agent in recent years. A key advantage of SAP is its versatility in size distribution and absorption/desorption characteristics, which allow it to be adapted to specific mix designs. Understanding the behavior of superabsorbent hydrogels in cementitious materials is critical for accurate design of internal curing. The primary goal of this study is to fundamentally understand the interaction between superabsorbent hydrogels and cementitious materials. In the first step, the effect of chemical and mechanical conditions on the absorption of hydrogels is investigated. In the second step, the desorption of hydrogels in contact with porous cementitious materials is examined to aid in understanding the mechanisms of water release from superabsorbent hydrogels (SAP) into cementitious materials. The dependence of hydrogel desorption on the microstructure of cementitious materials and relative humidity is studied. It is shown that the capillary forces developed at the interface between the hydrogel and cementitious materials increased the desorption of the hydrogels. The size of hydrogels is shown to influence desorption, beyond the known size dependence of bulk diffusion, through debonding from the cementitious matrix, thereby decreasing the effect of the Laplace pressure on desorption. In the third step, the desorption of hydrogels synthesized with varied chemical compositions in cementitious materials is investigated. The absorption, chemical structure, and mechanical response of hydrogels swollen in a cement mixture are studied. The effect of the capillary forces on the desorption of hydrogels is investigated in relation to the chemical composition of the hydrogels. In the second set of experiments of this part, the behavior of the hydrogels in a hydrating cement paste is monitored by tracking the size and morphology evolution of hydrogels interacting with the cement paste matrix. It is shown that changes in the surface characteristics of hydrogels as a result of interactions with the pore solution and cement particles can affect the desorption rate of hydrogels in contact with a porous cementitious material. Scanning electron microscopic (SEM) examination demonstrates two different desorption modes with distinct morphologies of hydrogels depending on the chemical composition of the hydrogels. The effect of the interfacial bonding between the hydrogels and the cementitious matrix and its relation to the desorption is illustrated. The desorption of hydrogels with different chemical compositions in blended cement mixtures containing different supplementary cementitious materials (SCMs), such as slag, fly ash, silica fume, and two types of glass powders, is examined. The absorption/desorption kinetics of hydrogels in different hydrating blended cement mixtures are monitored by freeze drying the samples at different times. The surface characteristics of different hydrogels after interaction with the pore solution, cement particles, and SCM particles are examined and their relation to interfacial bonding is illustrated.
It is shown that different SCMs can cause distinct changes in interfacial bonding. The understanding of hydrogel behavior in cementitious materials helps with accurate mixture design for internal curing. The kinetics of desorption is crucial for the purpose of internal curing. The understanding of release mechanisms and of the change in hydrogel morphology is important for self-healing and self-sealing applications. Two major contributions of this research are (1) showing the effect of capillary forces developed at the interface between the cementitious matrix and the hydrogel, which can increase the rate of desorption dramatically, and (2) illustrating the chemo-physical interaction of the cement pore solution and hydrating particles with hydrogels, which can affect the interfacial bonding between hydrogel and cement. These two main contributions will be useful for understanding the absorption and desorption behavior of hydrogels in cementitious materials. Two main strengths of the experimental procedures of this research are (1) the use of in-house synthesis of hydrogels, which permits establishing a link between the chemical composition of hydrogels and their behavior in cementitious materials, and (2) the use of freeze drying, for the first time, to monitor the behavior of hydrogels interacting with a hydrating cementitious matrix.

  11. Near-Infrared Monitoring of Volatiles in Frozen Lunar Simulants While Drilling

    NASA Technical Reports Server (NTRS)

    Roush, Ted L.; Colaprete, Anthony; Elphic, Richard C.; Forgione, Joshua; White, Bruce; McMurray, Robert; Cook, Amanda M.; Bielawski, Richard; Fritzler, Erin L.; Thompson, Sarah J.

    2016-01-01

    In Situ Resource Utilization (ISRU) focuses on using local resources for mission consumables. The approach can reduce mission cost and risk. Lunar polar volatiles, e.g. water ice, have been detected via remote sensing measurements and represent a potential resource for both humans and propellant. The exact nature of the horizontal and depth distribution of the ice remains to be documented in situ. NASA's Resource Prospector mission (RP) is intended to investigate the polar volatiles using a rover, drill, and the RESOLVE science package. RP component level hardware is undergoing testing in relevant lunar conditions (cryovacuum). In March 2015 a series of drilling tests were undertaken using the Honeybee Robotics RP Drill, Near-Infrared Volatile Spectrometer System (NIRVSS), and sample capture mechanisms (SCM) inside a 'dirty' thermal vacuum chamber at the NASA Glenn Research Center. The goal of these tests was to investigate the ability of NIRVSS to monitor volatiles during drilling activities and assess delivery of soil sample transfer to the SCMs in order to elucidate the concept of operations associated with this regolith sampling method.

  12. Identification and Characterization of Mutations Affecting Sporulation in Saccharomyces Cerevisiae

    PubMed Central

    Smith, L. M.; Robbins, L. G.; Kennedy, A.; Magee, P. T.

    1988-01-01

    Mutations affecting the synthesis of the sporulation amyloglucosidase were isolated in a homothallic strain of Saccharomyces cerevisiae, SCMS7-1. Two were found, both of which were deficient in sporulation at 34°. One, SL484, sporulated to 50% normal levels at 30° but less than 5% at 34° or 22°. The other, SL641, failed to sporulate at any temperature. Both mutants were blocked before premeiotic DNA synthesis, and both complemented spo1, spo3, and spo7. Genetic analysis of the mutation in SL484 indicated linkage to TRP5 and placed the gene 10 map units from TRP5 on chromosome VII. A plasmid containing an insert which complements the mutation in SL484 fails to complement SL641. We therefore conclude that these two mutations are in separate genes and we propose to call these genes SPO17 and SPO18. These two genes are (with SPO7, SPO8, and SPO9) among the earliest identified in the sporulation pathway and may interact directly with the positive and negative regulators RME and IME. PMID:3147221

  13. SCM Paste Samples Exposed To Aggressive Solutions. Cementitious Barriers Partnership

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foster, T.

    This report summarizes experimental work performed by SIMCO Technologies Inc. (SIMCO) as part of the Cementitious Barriers Partnership (CBP) project. The test series followed an experimental program dedicated to the study of ordinary Portland cement (OPC) hydrated cement pastes exposed to aggressive solutions. In the present study, the scope is extended to hydrated cement pastes incorporating supplementary cementitious materials (SCMs) such as fly ash and ground granulated blast furnace slag (GGBFS). Also, the range of aggressive contact solutions was expanded. The experimental program aimed at testing aggressive contact solutions that more closely mimic the chemical composition of saltstone pore solution. Five different solutions, some of which incorporated high levels of carbonate and nitrate, were placed in contact with four different hydrated cement paste mixes. In all solutions, 150 mmol/L of SO4^2- (14 400 ppm) was present. The solutions included different pH conditions and different sodium contents. Two paste mixes were equivalent to the Vault 1/4 and Vault 2 concrete mixes used at SRS in storage structures. Two additional paste mixes, cast at the same water-to-cement ratio and using the same cements but without SCMs, were also tested. The damage evolution in samples was monitored using ultrasonic pulse velocity (UPV) and mass measurements. After three and twelve months of exposure, samples were taken out of the solution containers and analyzed in migration tests and porosity measurements. Globally, the results were in line with the previous study and confirmed that high pH may limit the formation of some deleterious phases like gypsum. In this case, ettringite may form but is not necessarily associated with damage. However, the high concentration of sodium may be associated with the formation of an AFm-like mineral called U-phase. The most significant evidence of damage was all associated with the Vault 2 paste analog. This material proved very sensitive to high pH. All measurement techniques used to monitor and evaluate damage to samples indicated significant alterations to this mix when immersed in contact solutions containing sodium hydroxide. It was hypothesized that the low cement content, combined with the high silica content coming from silica fume, fly ash, and GGBFS, led to the presence of unreacted silica. It is possible that the pozzolanic reaction of these SCMs could not be activated due to the low alkali content, a direct consequence of the low cement content. In this scenario, the material ends up having a lot of silica available to react upon contact with sodium hydroxide, possibly forming a gel that may be similar to the gel formed in alkali-silica reactions. This scenario needs further experimental confirmation, but it may well explain the poor behavior of mix PV2 in the presence of NaOH.

  14. Stormwater infiltration and the 'urban karst' - A review

    NASA Astrophysics Data System (ADS)

    Bonneau, Jeremie; Fletcher, Tim D.; Costelloe, Justin F.; Burns, Matthew J.

    2017-09-01

    The covering of native soils with impervious surfaces (e.g. roofs, roads, and pavement) prevents infiltration of rainfall into the ground, resulting in increased surface runoff and decreased groundwater recharge. When this excess water is managed using stormwater drainage systems, the flow and water quality regimes of urban streams are severely altered, leading to the degradation of their ecosystems. Urban stream restoration requires alternative approaches to stormwater management which aim to restore the flow regime towards pre-development conditions. The practice of stormwater infiltration, achieved using a range of stormwater source-control measures (SCMs), is central to restoring baseflow. Despite this, little is known about what happens to the infiltrated water. Current knowledge about the impact of stormwater infiltration on flow regimes was reviewed. Infiltration systems were found to be efficient at attenuating high-flow hydrology (reducing peak magnitudes and frequencies) at a range of scales (parcel, streetscape, catchment). Several modelling studies predict a positive impact of stormwater infiltration on baseflow, and empirical evidence is emerging, but the fate of infiltrated stormwater remains unclear. It is not known how infiltrated water travels along the subsurface pathways that characterise the urban environment, in particular the 'urban karst', which results from networks of human-made subsurface pathways, e.g. stormwater and sanitary sewer pipes and associated high-permeability trenches. Seepage of groundwater into and around such pipes is possible, meaning some infiltrated stormwater could travel along artificial pathways. The catchment-scale ability of infiltration systems to restore groundwater recharge and baseflow is thus ambiguous. Further understanding of the fate of infiltrated stormwater is required to ensure infiltration systems deliver optimal outcomes for waterway flow regimes.

  15. Compressive strength of concrete by partial replacement of cement with metakaolin

    NASA Astrophysics Data System (ADS)

    Ganesh, Y. S. V.; Durgaiyya, P.; Shivanarayana, Ch.; Prasad, D. S. V.

    2017-07-01

    Metakaolin, or calcined kaolin, is another type of pozzolan, produced by calcination, that can replace silica fume as an alternative material. Supplementary cementitious materials have been widely used in concrete all over the world due to their economic and environmental benefits and have therefore drawn much attention in recent years. Mineral admixtures such as fly ash, rice husk ash, and silica fume are the more commonly used SCMs; they help in obtaining both higher performance and economy. Metakaolin is one such non-conventional material that can be utilized beneficially in the construction industry. This paper presents the results of an experimental investigation carried out to assess the suitability of metakaolin in the production of concrete, specifically its effect on compressive strength. The referral concrete M30 was made using 43 grade OPC, and the other mixes were prepared by replacing part of the OPC with metakaolin at levels of 5%, 10%, 15% and 20% (by weight). The results, which indicate the effect of replacing cement with metakaolin on concrete, are presented to draw useful conclusions.

  16. Towards a Scalable Group Vehicle-based Security System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carter, Jason M

    2016-01-01

    In August 2014, the National Highway Traffic Safety Administration (NHTSA) proposed new rulemaking to require V2V communication in light vehicles. To establish trust in the basic safety messages (BSMs) that are exchanged by vehicles to improve driver safety, a vehicle public key infrastructure (VPKI) is required. We outline a system where a group or groups of vehicles manage and generate their own BSM signing keys and authenticating certificates -- a Vehicle-Based Security System (VBSS). Based on our preliminary examination, we assert the mechanisms exist to implement a VBSS that supports V2V communications; however, maintaining uniform trust throughout the system while protecting individual privacy does require reliance on nascent group signature technology, which may require a significant amount of communication overhead for trust maintenance. To better evaluate the VBSS approach, we compare it to the proposed Security Credential Management System (SCMS) in four major areas: bootstrapping, pseudonym provisioning, BSM signing and authentication, and revocation. System scale, driver privacy, and the distribution and dynamics of participants make designing an effective VPKI an interesting and challenging problem; no clear-cut strategy exists to satisfy the security and privacy expectations in a highly efficient way. More work is needed in VPKI research, so the life-saving promise of V2V technology can be achieved.

  17. Field data collection, analysis, and adaptive management of green infrastructure in the urban water cycle in Cleveland and Columbus, OH

    NASA Astrophysics Data System (ADS)

    Darner, R.; Shuster, W.

    2016-12-01

    Expansion of the urban environment alters the landscape and creates challenges for how cities deal with energy and water. Large volumes of stormwater in areas with combined sanitary and stormwater systems present one such challenge. Managing the water as near to the source as possible creates an environment that allows more infiltration and evapotranspiration. Stormwater control measures (SCMs) associated with this type of development, often called green infrastructure, include rain gardens, pervious or porous pavements, bioswales, green or blue roofs, and others. In this presentation, we examine the hydrology of green infrastructure in urban sewersheds in Cleveland and Columbus, OH. We present the need for data throughout the water cycle and the challenges of collecting field data at a small scale (a single rain garden instrumented to measure inflows, outflow, weather, soil moisture, and groundwater levels) and at a macro scale (a project including low-cost rain gardens, highly engineered rain gardens, groundwater wells, weather stations, soil moisture sensors, and combined sewer flow monitoring). Results will include quantifying the effectiveness of SCMs in intercepting stormwater for different precipitation event sizes. The small-scale deployment analysis will demonstrate the role of active adaptive management in ongoing optimization over multiple years of data collection.

  18. EPA's Summary Report of the Collaborative Green ...

    EPA Pesticide Factsheets

    The United States Environmental Protection Agency evaluated the performance of a hybrid green-gray infrastructure pilot project installed in the Marlborough Neighborhood by the Kansas City Water Services Department. Kansas City installed 135 vegetated SCMs, 24,290 square feet of porous or permeable pavement, and 292,000 gallons of underground storage in the residential neighborhood, which drained 54% of the roughly 100-acre study area. Independently, both the Environmental Protection Agency and Kansas City determined that the combined green-gray infrastructure reduced runoff volume in the combined sewer by approximately 30% when post-installation conditions were compared with pre-installation conditions. Comparing the outlet of one bioretention measure with its inlet, the average drop in concentration was 52% +/-34% for total suspended solids, 51% +/-33% for suspended solids concentration, and 37% +/-22% for turbidity (nephelometric turbidity units); the median particle diameter decreased by 21% +/-59%. There was only one storm in which nitrate and phosphate could be compared; the nitrate concentration was reduced by 52% and phosphate by 57%. All analyzed influent samples were non-detect for lead and zinc (<50 ug/L). Greater than 50% of the total copper concentrations were in the dissolved form. Fecal coliform concentrations were unexpectedly high, often above the upper detection limit of 6 million most probable number (MPN).

  19. Fatigue Analysis Before and After Shaker Exercise: Physiologic Tool for Exercise Design

    PubMed Central

    White, Kevin T.; Easterling, Caryn; Roberts, Niles; Shaker, Reza

    2016-01-01

    Recent studies suggest that the Shaker exercise induces fatigue in the upper esophageal sphincter (UES) opening muscles and sternocleidomastoid (SCM), with the SCMs fatiguing earliest. The aim of this study was to measure fatigue induced by the isometric portion of the Shaker exercise by measuring the rate of change in the median frequency (MF rate) of the power spectral density (PSD) function, which is interpreted as proportional to the rate of fatigue, from surface electromyography (EMG) of suprahyoid (SHM), infrahyoid (IHM), and SCM. EMG data compared fatigue-related changes from 20-, 40-, and 60-s isometric hold durations of the Shaker exercise. We found that fatigue-related changes were manifested during the 20-s hold. The findings confirm that the SCM fatigues initially and as fast as or faster than the SHM and IHM. In addition, upon completion of the exercise protocol, the SCM had a decreased MF rate, implying improved fatigue resistance, while the SHM and IHM showed increased MF rates, implying that these muscles increased their fatiguing effort. We conclude that the Shaker exercise initially leads to increased fatigue resistance of the SCM, after which the exercise loads the less fatigue-resistant SHM and IHM, potentiating the therapeutic effect of the Shaker exercise regimen with continued exercise performance. PMID:18369673
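    The median-frequency statistic behind the MF rate is straightforward to compute from surface EMG. A minimal sketch follows (not the authors' code; the sampling rate, window length, and synthetic test signal are assumptions):

      import numpy as np
      from scipy.signal import welch

      def median_frequency(x, fs):
          """Frequency below which half of the total PSD power lies."""
          f, pxx = welch(x, fs=fs, nperseg=min(len(x), 1024))
          cumulative = np.cumsum(pxx)
          return f[np.searchsorted(cumulative, 0.5 * cumulative[-1])]

      def mf_rate(emg, fs, win_s=1.0):
          """Slope of median frequency over time (Hz/s); a negative slope
          is commonly read as progressive muscle fatigue."""
          n = int(win_s * fs)
          mfs = [median_frequency(emg[i:i + n], fs)
                 for i in range(0, len(emg) - n + 1, n)]
          t = np.arange(len(mfs)) * win_s
          slope, _ = np.polyfit(t, mfs, 1)
          return slope

      # Toy signal: a downward-drifting chirp plus noise over 20 s.
      fs = 1000
      t = np.arange(0, 20, 1 / fs)
      rng = np.random.default_rng(0)
      emg = np.sin(2 * np.pi * (80 - 2 * t) * t) + 0.5 * rng.standard_normal(t.size)
      print(f"MF rate: {mf_rate(emg, fs):.2f} Hz/s")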

  20. Feasibility Tests on Concrete with Very-High-Volume Supplementary Cementitious Materials

    PubMed Central

    Yang, Keun-Hyeok; Jeon, Yong-Su

    2014-01-01

    The objective of this study is to examine the compressive strength and durability of very high-volume SCM concrete. The 36 prepared concrete specimens were classified into two groups according to their designed 28-day compressive strength. For the high-volume SCM mixes, the FA level was fixed at a weight ratio of 0.4 and the GGBS level varied between weight ratios of 0.3 and 0.5, which resulted in 70–90% replacement of OPC. To enhance the compressive strength of very high-volume SCM concrete at an early age, the unit water content was controlled to be less than 150 kg/m3, and a specially modified polycarboxylate-based water-reducing agent was added. Test results showed that as the SCM ratio (R_SCM) increased, the strength gain ratio at an early age relative to the 28-day strength tended to decrease, whereas that at a long-term age increased up to an R_SCM of 0.8, beyond which it decreased. In addition, the beneficial effect of SCMs on the freezing-and-thawing and chloride resistances of the concrete decreased at an R_SCM of 0.9. Hence, it is recommended that R_SCM be restricted to less than 0.8–0.85 in order to obtain a consistently positive influence on the compressive strength and durability of SCM concrete. PMID:25162049

  1. An aposymbiotic primary coral polyp counteracts acidification by active pH regulation

    NASA Astrophysics Data System (ADS)

    Ohno, Yoshikazu; Iguchi, Akira; Shinzato, Chuya; Inoue, Mayuri; Suzuki, Atsushi; Sakai, Kazuhiko; Nakamura, Takashi

    2017-01-01

    Corals build their skeletons using an extracellular calcifying fluid located at the tissue-skeleton interface. However, the mechanism by which corals control the transport of calcium and other ions from seawater, and the mechanism of constant alkalization of the calcifying fluid, are largely unknown. To address these questions, we performed direct pH imaging at calcification sites (the subcalicoblastic medium, SCM) to visualize active pH upregulation in live aposymbiotic primary coral polyps treated with HCl-acidified seawater. Active alkalization was observed in all individuals using a vital staining method, while the movement of HPTS and Alexa Fluor to the SCM suggests that certain ions such as H+ could diffuse to the SCM via a paracellular pathway. Among our observations, we discovered acid-induced oscillations in the pH of the SCM (pH_SCM) in 24% of polyps examined. In addition, we discovered acid-induced pH up-regulation waves in 21% of polyps examined, which propagated among SCMs after exposure to acidified seawater. Our results show that corals can regulate pH_SCM more dynamically than was previously believed. These observations will have important implications for determining how corals regulate pH_SCM during calcification. We propose that corals can sense ambient seawater pH via their innate pH-sensitive systems and regulate pH_SCM using several unknown pH-regulating ion transporters that coordinate with multicellular signaling occurring in coral tissue.

  2. Volcano-related materials in concretes: a comprehensive review.

    PubMed

    Cai, Gaochuang; Noguchi, Takafumi; Degée, Hervé; Zhao, Jun; Kitagaki, Ryoma

    2016-04-01

    Massive amounts of volcano-related materials (VRMs), generally including volcanic ash (VA), volcanic pumice (VP), and volcanic tuff (VT), are erupted from volcanoes and affect the natural environment and human health worldwide. Considering the pozzolanic activity and mechanical characteristics of these materials, civil engineers have proposed using them as supplementary cementitious materials (SCMs) or artificial/natural aggregates in low-carbon, environmentally friendly concrete. The utilization of VRMs in concretes has attracted increasing attention from the research community. Through a literature review, this paper comprehensively presents the properties of VRMs and VRM concretes (VRMCs), including the physical and chemical properties of raw VRMs and VRMCs, and the fresh, microstructural, and mechanical properties of VRMCs. In addition, considering environmental impacts and the development of long-term properties, the durability and stability of VRMCs are also summarized. The former covers the resistance of VRMCs to aggressive environmental actions such as chloride, sulfate, seawater, and freezing-thawing; the latter mainly includes the fatigue, creep, heat-insulating, and expansion properties of VRMCs. This study should help promote sustainability in the concrete industry, protect the natural environment, and reduce the impacts of volcanic disasters. Based on this review, some main conclusions are discussed and important recommendations regarding future research on the application of VRMs in concrete are provided.

  3. On the Way to Appropriate Model Complexity

    NASA Astrophysics Data System (ADS)

    Höge, M.

    2016-12-01

    When statistical models are used to represent natural phenomena they are often too simple or too complex; this much is known. But what exactly is model complexity? Among many other definitions, the complexity of a model can be conceptualized as a measure of statistical dependence between observations and parameters (Van der Linde, 2014). However, several issues remain when working with model complexity: a unique definition for model complexity is missing. Assuming a definition is accepted, how can model complexity be quantified? And how can quantified complexity be used to improve modeling? Generally defined, "complexity is a measure of the information needed to specify the relationships between the elements of organized systems" (Bawden & Robinson, 2015). The complexity of a system changes as the knowledge about the system changes. For models this means that complexity is not a static concept: with more data or higher spatio-temporal resolution of parameters, the complexity of a model changes. There are essentially three categories into which all commonly used complexity measures can be classified: (1) an explicit representation of model complexity as "degrees of freedom" of a model, e.g. the effective number of parameters; (2) model complexity as code length, a.k.a. "Kolmogorov complexity": the longer the shortest model code, the higher its complexity (e.g. in bits); (3) complexity defined via the information entropy of parametric or predictive uncertainty. Preliminary results show that Bayes' theorem allows for incorporating all parts of the non-static concept of model complexity, such as data quality and quantity or parametric uncertainty. Therefore, we test how different approaches for measuring model complexity perform in comparison to a fully Bayesian model selection procedure. Ultimately, we want to find a measure that helps to assess the most appropriate model.
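    Category (3), complexity via the information entropy of predictive uncertainty, can be illustrated in a few lines. A hedged sketch (the Gaussian predictive samples and the histogram entropy estimator are illustrative choices, not the author's method):

      import numpy as np

      def predictive_entropy(samples, bins=50):
          """Plug-in estimate of differential entropy (nats) of a predictive
          distribution, from Monte Carlo samples via a density histogram."""
          dens, edges = np.histogram(samples, bins=bins, density=True)
          widths = np.diff(edges)
          nz = dens > 0
          return -np.sum(dens[nz] * widths[nz] * np.log(dens[nz]))

      rng = np.random.default_rng(1)
      tight = rng.normal(0.0, 0.5, 10_000)    # precise prediction: low entropy
      diffuse = rng.normal(0.0, 2.0, 10_000)  # uncertain prediction: high entropy
      print(round(predictive_entropy(tight), 2), round(predictive_entropy(diffuse), 2))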

  4. Starch Biosynthesis during Pollen Maturation Is Associated with Altered Patterns of Gene Expression in Maize1

    PubMed Central

    Datta, Rupali; Chamusco, Karen C.; Chourey, Prem S.

    2002-01-01

    Starch biosynthesis during pollen maturation is not well understood in terms of the genes/proteins and intracellular controls that regulate it in developing pollen. We studied two specific developmental stages: "early," characterized by the lack of starch, before or during pollen mitosis I; and "late," an actively starch-filling post-pollen mitosis I phase, in S-type cytoplasmic male-sterile (S-CMS) and two related male-fertile genotypes. The male-fertile starch-positive, but not the CMS starch-deficient, genotypes showed changes in the expression patterns of a large number of genes during this metabolic transition. In addition to a battery of housekeeping genes of carbohydrate metabolism, we observed changes in hexose transporter, plasma membrane H+-ATPase, ZmMADS1, and 14-3-3 proteins. The reduction or deficiency of 14-3-3 protein levels in all three major cellular sites (amyloplasts [starch], mitochondria, and cytosol) in male-sterile relative to male-fertile genotypes is of potential interest because of interorganellar communication in this CMS system. Further, the levels of hexose sugars were significantly reduced in male-sterile as compared with male-fertile tissues, not only at the "early" and "late" stages but also at an earlier point during meiosis. Collectively, these data suggest that the combined effects of reduced sugars and their reduced flux into starch biosynthesis, along with a strong possibility of an altered redox passage, may lead to the observed temporal changes in gene expression and ultimately to pollen sterility. PMID:12481048

  5. Shi-style cervical manipulations for cervical radiculopathy

    PubMed Central

    Cui, Xue-jun; Yao, Min; Ye, Xiu-lan; Wang, Ping; Zhong, Wei-hong; Zhang, Rui-chun; Li, Hui-ying; Hu, Zhi-jun; Tang, Zhan-ying; Wang, Wei-min; Qiao, Wei-ping; Sun, Yue-li; Li, Jun; Gao, Yang; Shi, Qi; Wang, Yongjun

    2017-01-01

    Background: There is a lack of high-quality evidence supporting the use of manipulation therapy for patients with cervical radiculopathy (CR). This study aimed to evaluate the effectiveness of Shi-style cervical manipulations (SCMs) versus mechanical cervical traction (MCT) for CR. Methods: This was a randomized, open-label, controlled trial carried out at 5 hospitals in patients with CR of at least 2 weeks' duration and neck pain. The patients received 6 treatments of SCM (n = 179) or MCT (n = 180) over 2 weeks. The primary outcome was participant-rated disability (neck disability index), measured 2 weeks after randomization. The secondary outcomes were participant-rated pain (visual analog scale) and health-related quality of life (36-Item Short Form Health Survey [SF-36]). Assessments were performed before, during, and after (2, 4, 12, and 24 weeks) the intervention. Results: After 2 weeks of treatment, the SCM group showed a greater improvement in participant-rated disability compared with the control group (P = .018). The SCM group also reported less disability than the control group (P < .001) during the 26-week follow-up, with the difference particularly pronounced at 6 months (mean −28.91 ± 16.43, P < .001). Significant improvements in SF-36 were noted in both groups after 2 weeks of treatment, but there were no differences between the 2 groups. Conclusion: SCM could be a better option than MCT for the treatment of CR-related pain and disability. PMID:28767566

  6. The modern trends in space electromagnetic instrumentation

    NASA Astrophysics Data System (ADS)

    Korepanov, V. E.

    Future trends in the development of experimental plasma physics in outer space demand ever more exact and sophisticated scientific instrumentation. Moreover, the situation is complicated by the constant reduction of financial support for scientific research, even in leading countries. This has resulted in the development of mini-, micro- and nanosatellites with low price and short preparation time and, consequently, has prompted the creation of a new generation of scientific instruments with reduced weight and power consumption but improved metrological parameters. The recent state of development of electromagnetic (EM) sensors for microsatellites is reported. For flux-gate magnetometers (FGM), the reduction of weight and power consumption was achieved not only through the use of new electronic components but also through the development of a new operation mode. Scientific and technological study allowed FGM noise to be decreased: the typical noise figure is now about 10 picotesla rms at 1 Hz, and the record is below 1 picotesla. A super-light version of the search-coil magnetometer (SCM) was created as the result of intensive research. These new SCMs can cover about six decades of operational frequency band with an upper limit of ~1 MHz and a noise level of a few femtotesla, at a total weight of about 75 grams including electronics. A new instrument, the wave probe (WP), which combines three independent sensors in one body (an SCM, a split Langmuir probe, and an electric potential sensor), was created. The developed theory confirms that the WP can directly measure the wave vector components in space plasmas.

  7. Elements of complexity in subsurface modeling, exemplified with three case studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freedman, Vicky L.; Truex, Michael J.; Rockhold, Mark

    2017-04-03

    There are complexity elements to consider when applying subsurface flow and transport models to support environmental analyses. Modelers balance the benefits and costs of modeling along the spectrum of complexity, taking into account the attributes of more simple models (e.g., lower cost, faster execution, easier to explain, less mechanistic) and the attributes of more complex models (higher cost, slower execution, harder to explain, more mechanistic and technically defensible). In this paper, modeling complexity is examined with respect to considering this balance. The discussion of modeling complexity is organized into three primary elements: 1) modeling approach, 2) description of process, and 3) description of heterogeneity. Three examples are used to examine these complexity elements. Two of the examples use simulations generated from a complex model to develop simpler models for efficient use in model applications. The first example is designed to support performance evaluation of soil vapor extraction remediation in terms of groundwater protection. The second example investigates the importance of simulating different categories of geochemical reactions for carbon sequestration and selecting appropriate simplifications for use in evaluating sequestration scenarios. In the third example, the modeling history for a uranium-contaminated site demonstrates that conservative parameter estimates were inadequate surrogates for complex, critical processes and there is discussion on the selection of more appropriate model complexity for this application. All three examples highlight how complexity considerations are essential to create scientifically defensible models that achieve a balance between model simplification and complexity.

  8. Elements of complexity in subsurface modeling, exemplified with three case studies

    NASA Astrophysics Data System (ADS)

    Freedman, Vicky L.; Truex, Michael J.; Rockhold, Mark L.; Bacon, Diana H.; Freshley, Mark D.; Wellman, Dawn M.

    2017-09-01

    There are complexity elements to consider when applying subsurface flow and transport models to support environmental analyses. Modelers balance the benefits and costs of modeling along the spectrum of complexity, taking into account the attributes of more simple models (e.g., lower cost, faster execution, easier to explain, less mechanistic) and the attributes of more complex models (higher cost, slower execution, harder to explain, more mechanistic and technically defensible). In this report, modeling complexity is examined with respect to considering this balance. The discussion of modeling complexity is organized into three primary elements: (1) modeling approach, (2) description of process, and (3) description of heterogeneity. Three examples are used to examine these complexity elements. Two of the examples use simulations generated from a complex model to develop simpler models for efficient use in model applications. The first example is designed to support performance evaluation of soil-vapor-extraction remediation in terms of groundwater protection. The second example investigates the importance of simulating different categories of geochemical reactions for carbon sequestration and selecting appropriate simplifications for use in evaluating sequestration scenarios. In the third example, the modeling history for a uranium-contaminated site demonstrates that conservative parameter estimates were inadequate surrogates for complex, critical processes and there is discussion on the selection of more appropriate model complexity for this application. All three examples highlight how complexity considerations are essential to create scientifically defensible models that achieve a balance between model simplification and complexity.

  9. Hydrological model parameter dimensionality is a weak measure of prediction uncertainty

    NASA Astrophysics Data System (ADS)

    Pande, S.; Arkesteijn, L.; Savenije, H.; Bastidas, L. A.

    2015-04-01

    This paper shows that the instability of a hydrological system representation in response to different pieces of information, and the associated prediction uncertainty, is a function of model complexity. After demonstrating the connection between unstable model representation and model complexity, complexity is analyzed in a step-by-step manner. This is done by measuring differences between simulations of a model under different realizations of input forcings. Algorithms are then suggested to estimate model complexity. Model complexities of two model structures, SAC-SMA (Sacramento Soil Moisture Accounting) and its simplified version SIXPAR (Six Parameter Model), are computed on resampled input data sets from basins that span the continental US. The model complexities for SIXPAR are estimated for various parameter ranges. It is shown that the complexity of SIXPAR increases with lower storage capacity and/or higher recession coefficients. Thus it is argued that a conceptually simple model structure, such as SIXPAR, can for certain parameter ranges be more complex than an intuitively more complex model structure, such as SAC-SMA. We therefore contend that the magnitudes of feasible model parameters influence the complexity of the model selection problem just as parameter dimensionality (number of parameters) does, and that parameter dimensionality is an incomplete indicator of the stability of hydrological model selection and prediction problems.
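    Measuring complexity through differences between simulations under resampled forcings can be sketched with a toy model. A hedged version (a single linear reservoir stands in for SIXPAR/SAC-SMA, and the bootstrap spread statistic is an illustrative choice, not the paper's algorithm):

      import numpy as np

      def linear_reservoir(precip, k):
          """Toy rainfall-runoff model: a single store drained at rate k."""
          storage, runoff = 0.0, np.empty_like(precip)
          for i, p in enumerate(precip):
              storage += p
              runoff[i] = k * storage
              storage -= runoff[i]
          return runoff

      def complexity(model, forcing, n_resamples=200, seed=0):
          """Mean spread of simulated output across bootstrap-resampled
          forcings: the more the simulation shifts with the input
          realization, the less stable (more complex) the representation."""
          rng = np.random.default_rng(seed)
          sims = np.array([model(rng.choice(forcing, forcing.size, replace=True))
                           for _ in range(n_resamples)])
          return float(np.std(sims, axis=0).mean())

      precip = np.random.default_rng(1).exponential(2.0, 365)
      for k in (0.1, 0.9):   # higher recession coefficient -> larger spread
          print(k, round(complexity(lambda p: linear_reservoir(p, k), precip), 3))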

  10. 40 CFR 80.49 - Fuels to be used in augmenting the complex emission model through vehicle testing.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... complex emission model through vehicle testing. 80.49 Section 80.49 Protection of Environment... Reformulated Gasoline § 80.49 Fuels to be used in augmenting the complex emission model through vehicle testing... augmenting the complex emission model with a parameter not currently included in the complex emission model...

  11. 40 CFR 80.49 - Fuels to be used in augmenting the complex emission model through vehicle testing.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... complex emission model through vehicle testing. 80.49 Section 80.49 Protection of Environment... Reformulated Gasoline § 80.49 Fuels to be used in augmenting the complex emission model through vehicle testing... augmenting the complex emission model with a parameter not currently included in the complex emission model...

  12. 40 CFR 80.49 - Fuels to be used in augmenting the complex emission model through vehicle testing.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... complex emission model through vehicle testing. 80.49 Section 80.49 Protection of Environment... Reformulated Gasoline § 80.49 Fuels to be used in augmenting the complex emission model through vehicle testing... augmenting the complex emission model with a parameter not currently included in the complex emission model...

  13. 40 CFR 80.49 - Fuels to be used in augmenting the complex emission model through vehicle testing.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... complex emission model through vehicle testing. 80.49 Section 80.49 Protection of Environment... Reformulated Gasoline § 80.49 Fuels to be used in augmenting the complex emission model through vehicle testing... augmenting the complex emission model with a parameter not currently included in the complex emission model...

  14. Clinical Complexity in Medicine: A Measurement Model of Task and Patient Complexity.

    PubMed

    Islam, R; Weir, C; Del Fiol, G

    2016-01-01

    Complexity in medicine needs to be reduced to simple components in a way that is comprehensible to researchers and clinicians. Few studies in the current literature propose a measurement model that addresses both task and patient complexity in medicine. The objective of this paper is to develop an integrated approach to understand and measure clinical complexity by incorporating both task and patient complexity components focusing on the infectious disease domain. The measurement model was adapted and modified for the healthcare domain. Three clinical infectious disease teams were observed, audio-recorded and transcribed. Each team included an infectious diseases expert, one infectious diseases fellow, one physician assistant and one pharmacy resident fellow. The transcripts were parsed and the authors independently coded complexity attributes. This baseline measurement model of clinical complexity was modified in an initial set of coding processes and further validated in a consensus-based iterative process that included several meetings and email discussions by three clinical experts from diverse backgrounds from the Department of Biomedical Informatics at the University of Utah. Inter-rater reliability was calculated using Cohen's kappa. The proposed clinical complexity model consists of two separate components. The first is a clinical task complexity model with 13 clinical complexity-contributing factors and 7 dimensions. The second is the patient complexity model with 11 complexity-contributing factors and 5 dimensions. The measurement model for complexity encompassing both task and patient complexity will be a valuable resource for future researchers and industry to measure and understand complexity in healthcare.
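    Cohen's kappa, used here for inter-rater reliability, corrects raw agreement for chance agreement: kappa = (p_o - p_e) / (1 - p_e). A minimal sketch (the two coders' label lists are invented for illustration):

      import numpy as np

      def cohens_kappa(rater_a, rater_b):
          """Cohen's kappa: chance-corrected agreement between two raters."""
          labels = sorted(set(rater_a) | set(rater_b))
          idx = {lab: i for i, lab in enumerate(labels)}
          confusion = np.zeros((len(labels), len(labels)))
          for a, b in zip(rater_a, rater_b):
              confusion[idx[a], idx[b]] += 1
          n = confusion.sum()
          p_o = np.trace(confusion) / n                       # observed agreement
          p_e = (confusion.sum(0) @ confusion.sum(1)) / n**2  # chance agreement
          return (p_o - p_e) / (1 - p_e)

      # Hypothetical codes assigned to 8 transcript segments by two coders.
      a = ["task", "task", "patient", "task", "patient", "patient", "task", "task"]
      b = ["task", "patient", "patient", "task", "patient", "task", "task", "task"]
      print(f"kappa = {cohens_kappa(a, b):.2f}")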

  15. Are more complex physiological models of forest ecosystems better choices for plot and regional predictions?

    Treesearch

    Wenchi Jin; Hong S. He; Frank R. Thompson

    2016-01-01

    Process-based forest ecosystem models vary from simple physiological, to complex physiological, to hybrid empirical-physiological models. Previous studies indicate that complex models provide the best prediction at plot scale with a temporal extent of less than 10 years; however, it is largely untested whether complex models outperform the other two types of models...

  16. A Systematic Review of Conceptual Frameworks of Medical Complexity and New Model Development.

    PubMed

    Zullig, Leah L; Whitson, Heather E; Hastings, Susan N; Beadles, Chris; Kravchenko, Julia; Akushevich, Igor; Maciejewski, Matthew L

    2016-03-01

    Patient complexity is often operationalized by counting multiple chronic conditions (MCC) without considering contextual factors that can affect patient risk for adverse outcomes. Our objective was to develop a conceptual model of complexity addressing gaps identified in a review of published conceptual models. We searched for English-language MEDLINE papers published between 1 January 2004 and 16 January 2014. Two reviewers independently evaluated abstracts and all authors contributed to the development of the conceptual model in an iterative process. From 1606 identified abstracts, six conceptual models were selected. One additional model was identified through reference review. Each model had strengths, but several constructs were not fully considered: 1) contextual factors; 2) dynamics of complexity; 3) patients' preferences; 4) acute health shocks; and 5) resilience. Our Cycle of Complexity model illustrates relationships between acute shocks and medical events, healthcare access and utilization, workload and capacity, and patient preferences in the context of interpersonal, organizational, and community factors. This model may inform studies on the etiology of and changes in complexity, the relationship between complexity and patient outcomes, and intervention development to improve modifiable elements of complex patients.

  17. Evaluation of the RT-LAMP and LAMP methods for detection of Mycobacterium tuberculosis.

    PubMed

    Wu, Dandan; Kang, Jiwen; Li, Baosheng; Sun, Dianxing

    2018-05-01

    The current methods for detecting Mycobacterium tuberculosis (Mtb) are not clinically optimal. Standard culture methods (SCMs) are slow, costly, or unreliable, and loop-mediated isothermal amplification (LAMP) cannot differentiate live Mtb. This study compared reverse transcription (RT)-LAMP, LAMP, and an SCM for detecting Mtb. A first experiment tested the sensitivity and specificity of primers for 9 species of Mycobacterium (H37Rv, M. intracellulare, M. marinum, M. kansasii, M. avium, M. flavescens, M. smegmatis, M. fortuitum, and M. chelonae) and 3 non-Mycobacterium species (Staphylococcus aureus, Pseudomonas aeruginosa, and Klebsiella pneumoniae). A second experiment tested sputum specimens for the presence of Mtb, from 100 patients with tuberculosis (clinical) and from 22 patients without tuberculosis (control), using Roche solid culture (SCM), LAMP, and RT-LAMP. In the clinical samples, the rates of positivity for Mtb with the SCM, LAMP, and RT-LAMP methods were 88%, 92%, and 100%, respectively. The difference in detection rate was significant between RT-LAMP and the SCM, but RT-LAMP and LAMP were comparable. In the control group, the detection rates were nil and the specificities similar for all three methods. The sensitivity of RT-LAMP was ~10-fold higher than that of LAMP for detecting Mtb. Unlike LAMP, RT-LAMP could identify viable bacteria and was able to detect a single copy of Mtb. Among the SCM, LAMP, and RT-LAMP, the last is the most suitable for wide use in lower-level hospitals and clinics in China for detecting Mtb in sputum samples. © 2017 Wiley Periodicals, Inc.

  18. Single Case Method in Psychology: How to Improve as a Possible Methodology in Quantitative Research.

    PubMed

    Krause-Kjær, Elisa; Nedergaard, Jensine I

    2015-09-01

    Awareness of including the Single-Case Method (SCM) as a possible methodology in quantitative research in the field of psychology has been argued to be useful, e.g., by Hurtado-Parrado and López-López (IPBS: Integrative Psychological & Behavioral Science, 49:2, 2015). Their article introduces a historical and conceptual analysis of SCMs and proposes changing the often prevailing tendency to neglect SCM as an alternative to Null Hypothesis Significance Testing (NHST). This article contributes by shedding new light on SCM as an equally important methodology in psychology. The intention of the present article is to elaborate this point of view further by discussing one of the most fundamental requirements and main characteristics of SCM regarding temporality: that "…performance is assessed continuously over time and under different conditions…" (Hurtado-Parrado and López-López, IPBS: Integrative Psychological & Behavioral Science, 49:2, 2015). When defining principles for particular units of analysis, both synchronic (spatial) and diachronic (temporal) elements should be incorporated. In this article, misunderstandings of the SCM are adduced, and temporality is described in order to propose how the SCM could have greater usability in psychological research. It is further discussed how to implement SCM in psychological methodology. It is suggested that one solution might be to reconsider the notion of time in psychological research to cover more than a control variable, and in this respect also to include the notion of time as an irreversible unity within life.

  19. What do we gain from simplicity versus complexity in species distribution models?

    USGS Publications Warehouse

    Merow, Cory; Smith, Matthew J.; Edwards, Thomas C.; Guisan, Antoine; McMahon, Sean M.; Normand, Signe; Thuiller, Wilfried; Wuest, Rafael O.; Zimmermann, Niklaus E.; Elith, Jane

    2014-01-01

    Species distribution models (SDMs) are widely used to explain and predict species ranges and environmental niches. They are most commonly constructed by inferring species' occurrence–environment relationships using statistical and machine-learning methods. The variety of methods that can be used to construct SDMs (e.g. generalized linear/additive models, tree-based models, maximum entropy, etc.), and the variety of ways that such models can be implemented, permits substantial flexibility in SDM complexity. Building models with an appropriate amount of complexity for the study objectives is critical for robust inference. We characterize complexity as the shape of the inferred occurrence–environment relationships and the number of parameters used to describe them, and search for insights into whether additional complexity is informative or superfluous. By building ‘under fit’ models, having insufficient flexibility to describe observed occurrence–environment relationships, we risk misunderstanding the factors shaping species distributions. By building ‘over fit’ models, with excessive flexibility, we risk inadvertently ascribing pattern to noise or building opaque models. However, model selection can be challenging, especially when comparing models constructed under different modeling approaches. Here we argue for a more pragmatic approach: researchers should constrain the complexity of their models based on study objective, attributes of the data, and an understanding of how these interact with the underlying biological processes. We discuss guidelines for balancing under fitting with over fitting and consequently how complexity affects decisions made during model building. Although some generalities are possible, our discussion reflects differences in opinions that favor simpler versus more complex models. We conclude that combining insights from both simple and complex SDM building approaches best advances our knowledge of current and future species ranges.
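    The under-fit/over-fit trade-off described here can be made concrete with a one-dimensional stand-in for an occurrence-environment relationship. A hedged sketch (polynomial degree stands in for SDM complexity; the gradient and response data are synthetic):

      import numpy as np

      rng = np.random.default_rng(42)
      env = np.sort(rng.uniform(-2, 2, 60))                  # environmental gradient
      occ = np.exp(-env**2) + rng.normal(0, 0.1, env.size)   # unimodal response + noise

      train, test = np.arange(0, 60, 2), np.arange(1, 60, 2) # interleaved split
      for degree in (1, 4, 12):                              # under-, well-, over-fit
          coefs = np.polyfit(env[train], occ[train], degree)
          err = lambda idx: np.sqrt(np.mean((np.polyval(coefs, env[idx]) - occ[idx])**2))
          print(f"degree {degree:2d}: train RMSE {err(train):.3f}, test RMSE {err(test):.3f}")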

  20. Volterra representation enables modeling of complex synaptic nonlinear dynamics in large-scale simulations.

    PubMed

    Hu, Eric Y; Bouteiller, Jean-Marie C; Song, Dong; Baudry, Michel; Berger, Theodore W

    2015-01-01

    Chemical synapses comprise a wide collection of intricate signaling pathways involving complex dynamics. These mechanisms are often reduced to simple spikes or exponential representations in order to enable computer simulations at higher spatial levels of complexity. However, these representations cannot capture important nonlinear dynamics found in synaptic transmission. Here, we propose an input-output (IO) synapse model capable of generating complex nonlinear dynamics while maintaining low computational complexity. This IO synapse model is an extension of a detailed mechanistic glutamatergic synapse model, capturing the input-output relationships of the mechanistic model using the Volterra functional power series. We demonstrate that the IO synapse model successfully tracks the nonlinear dynamics of the synapse up to the third order with high accuracy. We also evaluate the accuracy of the IO synapse model at different input frequencies and compare its performance with that of kinetic models in compartmental neuron models. Our results demonstrate that the IO synapse model efficiently replicates the complex nonlinear dynamics represented in the original mechanistic model, and they provide a method to replicate complex and diverse synaptic transmission within neuron network simulations.
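    The Volterra functional power series that the IO model is built on can be sketched in discrete time. A minimal second-order version (the kernels below are arbitrary illustrative choices, not the fitted synapse kernels from the paper):

      import numpy as np

      def volterra_second_order(x, h1, h2):
          """y[n] = sum_k h1[k] x[n-k] + sum_{k1,k2} h2[k1,k2] x[n-k1] x[n-k2]."""
          m = len(h1)
          y = np.zeros(len(x))
          xp = np.concatenate([np.zeros(m - 1), x])  # zero-pad the past
          for n in range(len(x)):
              window = xp[n:n + m][::-1]             # x[n], x[n-1], ..., x[n-m+1]
              y[n] = h1 @ window + window @ h2 @ window
          return y

      m = 20
      k = np.arange(m)
      h1 = np.exp(-k / 5.0)             # first-order (linear) memory kernel
      h2 = -0.05 * np.outer(h1, h1)     # weak suppressive second-order kernel
      spikes = (np.random.default_rng(0).random(200) < 0.05).astype(float)
      print(volterra_second_order(spikes, h1, h2)[:10])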

  1. Volterra representation enables modeling of complex synaptic nonlinear dynamics in large-scale simulations

    PubMed Central

    Hu, Eric Y.; Bouteiller, Jean-Marie C.; Song, Dong; Baudry, Michel; Berger, Theodore W.

    2015-01-01

    Chemical synapses comprise a wide collection of intricate signaling pathways involving complex dynamics. These mechanisms are often reduced to simple spikes or exponential representations in order to enable computer simulations at higher spatial levels of complexity. However, these representations cannot capture important nonlinear dynamics found in synaptic transmission. Here, we propose an input-output (IO) synapse model capable of generating complex nonlinear dynamics while maintaining low computational complexity. This IO synapse model is an extension of a detailed mechanistic glutamatergic synapse model, capturing the input-output relationships of the mechanistic model using the Volterra functional power series. We demonstrate that the IO synapse model successfully tracks the nonlinear dynamics of the synapse up to the third order with high accuracy. We also evaluate the accuracy of the IO synapse model at different input frequencies and compare its performance with that of kinetic models in compartmental neuron models. Our results demonstrate that the IO synapse model efficiently replicates the complex nonlinear dynamics represented in the original mechanistic model, and they provide a method to replicate complex and diverse synaptic transmission within neuron network simulations. PMID:26441622

  2. A Novel BA Complex Network Model on Color Template Matching

    PubMed Central

    Han, Risheng; Yue, Guangxue; Ding, Hui

    2014-01-01

    A novel BA complex network model of color space is proposed based on the two fundamental rules of the BA scale-free network model: growth and preferential attachment. The scale-free characteristic of color space is discovered by analyzing the evolving process of the template's color distribution. The template's BA complex network model can then be used to select important color pixels, which have a much larger effect than other color pixels in the matching process. The proposed BA complex network model of color space can be easily integrated into many traditional template matching algorithms, such as SSD-based and SAD-based matching. Experiments show that the performance of color template matching can be improved using the proposed algorithm. To the best of our knowledge, this is the first study of how to model the color space of images using a proper complex network model and apply the complex network model to template matching. PMID:25243235
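    The two BA rules, growth and preferential attachment, fit in a short generator. A hedged sketch of a standard BA graph builder (how the paper maps color pixels onto nodes is its own contribution and is not reproduced here):

      import random
      from collections import Counter

      def barabasi_albert(n, m, seed=0):
          """Grow a scale-free graph: each of the n - m incoming nodes links to
          m existing nodes picked with probability proportional to degree."""
          rng = random.Random(seed)
          edges = []
          targets = list(range(m))     # first new node attaches to the m seed nodes
          repeated = []                # node list in which degree = multiplicity
          for new_node in range(m, n):
              edges.extend((new_node, t) for t in targets)
              repeated.extend(targets)
              repeated.extend([new_node] * m)
              targets = set()          # sample m distinct, degree-weighted targets
              while len(targets) < m:
                  targets.add(rng.choice(repeated))
              targets = list(targets)
          return edges

      degree = Counter(v for edge in barabasi_albert(2000, 2) for v in edge)
      print(degree.most_common(5))     # a few heavy hubs: the scale-free signature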

  3. A novel BA complex network model on color template matching.

    PubMed

    Han, Risheng; Shen, Shigen; Yue, Guangxue; Ding, Hui

    2014-01-01

    A novel BA complex network model of color space is proposed based on the two fundamental rules of the BA scale-free network model: growth and preferential attachment. The scale-free characteristic of color space is discovered by analyzing the evolving process of the template's color distribution. The template's BA complex network model can then be used to select important color pixels, which have a much larger effect than other color pixels in the matching process. The proposed BA complex network model of color space can be easily integrated into many traditional template matching algorithms, such as SSD-based and SAD-based matching. Experiments show that the performance of color template matching can be improved using the proposed algorithm. To the best of our knowledge, this is the first study of how to model the color space of images using a proper complex network model and apply the complex network model to template matching.

  4. Application of 3D Laser Scanning Technology in Complex Rock Foundation Design

    NASA Astrophysics Data System (ADS)

    Junjie, Ma; Dan, Lu; Zhilong, Liu

    2017-12-01

    Taking the complex landform of the Tanxi Mountain Landscape Bridge as an example, the application of 3D laser scanning technology to the mapping of complex rock foundations is studied in this paper. A set of 3D laser scanning techniques is assembled and several key engineering problems are solved. The first is 3D laser scanning of complex landforms: 3D laser scanning is used to obtain a complete 3D point cloud data model of the complex landform, and the detailed and accurate surveying and mapping results decrease the measuring time and the number of supplementary surveys. The second is 3D collaborative modeling of the complex landform: a 3D model of the complex landform is established from the 3D point cloud data model, the superstructure foundation model is introduced for 3D collaborative design, the optimal design plan is selected, and construction progress is accelerated. The last is finite-element analysis of the complex landform foundation: the 3D model of the complex landform is imported into ANSYS to build a finite element model for calculating the anti-sliding stability of the rock, providing a basis for foundation design and construction.

  5. Are Model Transferability And Complexity Antithetical? Insights From Validation of a Variable-Complexity Empirical Snow Model in Space and Time

    NASA Astrophysics Data System (ADS)

    Lute, A. C.; Luce, Charles H.

    2017-11-01

    The related challenges of predictions in ungauged basins and predictions in ungauged climates point to the need to develop environmental models that are transferable across both space and time. Hydrologic modeling has historically focused on modeling one or only a few basins using highly parameterized conceptual or physically based models. However, model parameters and structures have been shown to change significantly when calibrated to new basins or time periods, suggesting that model complexity and model transferability may be antithetical. Empirical space-for-time models provide a framework within which to assess model transferability and any tradeoff with model complexity. Using 497 SNOTEL sites in the western U.S., we develop space-for-time models of April 1 SWE and snow residence time based on mean winter temperature and cumulative winter precipitation. The transferability of the models to new conditions (in both space and time) is assessed using non-random cross-validation tests, with consideration of the influence of model complexity on transferability. As others have noted, the algorithmic empirical models transfer best when minimal extrapolation in input variables is required. Temporal split-sample validations use pseudoreplicated samples, resulting in the selection of overly complex models, which has implications for the design of hydrologic model validation tests. Finally, we show that low- to moderate-complexity models transfer most successfully to new conditions in space and time, providing empirical confirmation of the parsimony principle.
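    The non-random cross-validation used to probe transferability can be illustrated by holding out one end of the climate gradient, so the test demands extrapolation. A toy sketch (the linear SWE model and the synthetic site data are assumptions, not the SNOTEL data):

      import numpy as np

      rng = np.random.default_rng(7)
      n = 400                                    # synthetic SNOTEL-like sites
      temp = rng.uniform(-10, 4, n)              # mean winter temperature (deg C)
      precip = rng.uniform(200, 2000, n)         # cumulative winter precipitation (mm)
      swe = 0.45 * precip - 6.0 * temp**2 + rng.normal(0, 40, n)  # nonlinear truth

      X = np.column_stack([np.ones(n), temp, precip])   # linear SWE model

      def rmse(train, test):
          beta, *_ = np.linalg.lstsq(X[train], swe[train], rcond=None)
          return np.sqrt(np.mean((X[test] @ beta - swe[test]) ** 2))

      shuffled = rng.permutation(n)              # random split: interpolation only
      print("random CV RMSE:    ", round(rmse(shuffled[: n // 2], shuffled[n // 2:]), 1))

      order = np.argsort(temp)                   # non-random: train cold, test warm
      print("extrapolation RMSE:", round(rmse(order[: n // 2], order[n // 2:]), 1))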

  6. Teacher Modeling Using Complex Informational Texts

    ERIC Educational Resources Information Center

    Fisher, Douglas; Frey, Nancy

    2015-01-01

    Modeling in complex texts requires that teachers analyze the text for factors of qualitative complexity and then design lessons that introduce students to that complexity. In addition, teachers can model the disciplinary nature of content area texts as well as word solving and comprehension strategies. Included is a planning guide for think aloud.

  7. Predicting protein complexes using a supervised learning method combined with local structural information.

    PubMed

    Dong, Yadong; Sun, Yongqi; Qin, Chao

    2018-01-01

    The existing protein complex detection methods can be broadly divided into two categories: unsupervised and supervised learning methods. Most of the unsupervised learning methods assume that protein complexes are in dense regions of protein-protein interaction (PPI) networks even though many true complexes are not dense subgraphs. Supervised learning methods utilize the informative properties of known complexes; they often extract features from existing complexes and then use the features to train a classification model. The trained model is used to guide the search process for new complexes. However, insufficient extracted features, noise in the PPI data and the incompleteness of complex data make the classification model imprecise. Consequently, the classification model is not sufficient for guiding the detection of complexes. Therefore, we propose a new robust score function that combines the classification model with local structural information. Based on the score function, we provide a search method that works both forwards and backwards. The results from experiments on six benchmark PPI datasets and three protein complex datasets show that our approach can achieve better performance compared with the state-of-the-art supervised, semi-supervised and unsupervised methods for protein complex detection, occasionally significantly outperforming such methods.
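    The proposed combination of a trained classifier with local structural information can be expressed as a weighted score over a candidate subgraph. A hedged sketch (the logistic scorer, the density feature, and the blending weight are placeholders for the paper's actual model):

      import numpy as np

      def subgraph_density(nodes, adj):
          """Edge density of the induced subgraph: |E| / (|V| choose 2)."""
          nodes = list(nodes)
          k = len(nodes)
          if k < 2:
              return 0.0
          e = sum(adj[u, v] for i, u in enumerate(nodes) for v in nodes[i + 1:])
          return e / (k * (k - 1) / 2)

      def complex_score(nodes, adj, weights, alpha=0.5):
          """Blend a (placeholder) logistic classifier's probability that the
          candidate is a true complex with the candidate's local edge density."""
          feats = np.array([len(nodes), subgraph_density(nodes, adj)])
          prob = 1.0 / (1.0 + np.exp(-feats @ weights))
          return alpha * prob + (1 - alpha) * subgraph_density(nodes, adj)

      # Tiny toy PPI network: nodes 0-2 form a triangle, node 3 hangs off it.
      adj = np.array([[0, 1, 1, 0],
                      [1, 0, 1, 0],
                      [1, 1, 0, 1],
                      [0, 0, 1, 0]])
      w = np.array([0.1, 3.0])   # invented weights standing in for a trained model
      print(complex_score([0, 1, 2], adj, w))     # dense triangle scores high
      print(complex_score([0, 1, 2, 3], adj, w))  # pendant node lowers the score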

  8. Computer-aided molecular modeling techniques for predicting the stability of drug cyclodextrin inclusion complexes in aqueous solutions

    NASA Astrophysics Data System (ADS)

    Faucci, Maria Teresa; Melani, Fabrizio; Mura, Paola

    2002-06-01

    Molecular modeling was used to investigate factors influencing complex formation between cyclodextrins and guest molecules and to predict complex stability through a theoretical model based on the search for a correlation between experimental stability constants (Ks) and theoretical parameters describing complexation (docking energy, host-guest contact surfaces, intermolecular interaction fields), calculated from complex structures at a conformational energy minimum obtained through stochastic methods based on molecular dynamics simulations. Naproxen, ibuprofen, ketoprofen and ibuproxam were used as model drug molecules. Multiple regression analysis allowed identification of the factors significant for complex stability. A mathematical model (r = 0.897) related log Ks to the complex docking energy and the lipophilic molecular fields of the cyclodextrin and the drug.
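    The regression step can be reproduced in outline with ordinary least squares. A sketch with invented descriptor values (the study's actual data and coefficients are not given in the abstract and are not reproduced here):

      import numpy as np

      # Hypothetical descriptors for four drug/cyclodextrin complexes:
      # columns = docking energy (kcal/mol), lipophilic field score.
      X = np.array([[-28.1, 310.0],
                    [-25.4, 295.0],
                    [-30.2, 340.0],
                    [-26.8, 305.0]])
      log_ks = np.array([2.41, 2.05, 2.78, 2.22])   # invented experimental log Ks

      A = np.column_stack([np.ones(len(X)), X])     # add intercept column
      coefs, *_ = np.linalg.lstsq(A, log_ks, rcond=None)
      pred = A @ coefs
      r = np.corrcoef(pred, log_ks)[0, 1]
      print("coefficients:", coefs.round(3), " r =", round(r, 3))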

  9. Research on complex 3D tree modeling based on L-system

    NASA Astrophysics Data System (ADS)

    Gang, Chen; Bin, Chen; Yuming, Liu; Hui, Li

    2018-03-01

    The L-system, as a fractal iterative system, can simulate complex geometric patterns. Based on field observation data of trees and the knowledge of forestry experts, this paper extracted modeling constraint rules and obtained an L-system rule set. Using self-developed L-system modeling software, the L-system rule set was parsed to generate complex 3D tree models. The results showed that the geometric modeling method based on L-systems can describe the morphological structure of complex trees and generate 3D tree models.
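    The core of any L-system pipeline is iterative string rewriting; the 3D geometry is produced afterwards by interpreting the string with a turtle, with bracket symbols pushing and popping branch state. A minimal sketch of the rewriting step (this bracketed rule set is a textbook example, not the paper's forestry-derived rules):

      def expand(axiom, rules, iterations):
          """Iteratively rewrite every symbol by its production rule."""
          s = axiom
          for _ in range(iterations):
              s = "".join(rules.get(c, c) for c in s)
          return s

      # A classic bracketed tree grammar: F = grow, +/- = turn, [ ] = branch.
      rules = {"X": "F[+X][-X]FX", "F": "FF"}
      tree_string = expand("X", rules, 4)
      print(tree_string[:60], "... length:", len(tree_string))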

  10. Evaluation of protein-protein docking model structures using all-atom molecular dynamics simulations combined with the solution theory in the energy representation

    NASA Astrophysics Data System (ADS)

    Takemura, Kazuhiro; Guo, Hao; Sakuraba, Shun; Matubayasi, Nobuyuki; Kitao, Akio

    2012-12-01

    We propose a method to evaluate binding free energy differences among distinct protein-protein complex model structures through all-atom molecular dynamics simulations in explicit water using the solution theory in the energy representation. Complex model structures are generated from a pair of monomeric structures using the rigid-body docking program ZDOCK. After structure refinement by side chain optimization and all-atom molecular dynamics simulations in explicit water, complex models are evaluated based on the sum of their conformational and solvation free energies, the latter calculated from the energy distribution functions obtained from relatively short molecular dynamics simulations of the complex in water and of pure water based on the solution theory in the energy representation. We examined protein-protein complex model structures of two protein-protein complex systems, bovine trypsin/CMTI-1 squash inhibitor (PDB ID: 1PPE) and RNase SA/barstar (PDB ID: 1AY7), for which both complex and monomer structures were determined experimentally. For each system, we calculated the energies for the crystal complex structure and twelve generated model structures, including the model most similar to the crystal structure and models very different from it. In both systems, the sum of the conformational and solvation free energies tended to be lower for structures similar to the crystal. We concluded that our energy calculation method is useful for selecting low energy complex models similar to the crystal structure from among a set of generated models.

  11. Evaluation of protein-protein docking model structures using all-atom molecular dynamics simulations combined with the solution theory in the energy representation.

    PubMed

    Takemura, Kazuhiro; Guo, Hao; Sakuraba, Shun; Matubayasi, Nobuyuki; Kitao, Akio

    2012-12-07

    We propose a method to evaluate binding free energy differences among distinct protein-protein complex model structures through all-atom molecular dynamics simulations in explicit water using the solution theory in the energy representation. Complex model structures are generated from a pair of monomeric structures using the rigid-body docking program ZDOCK. After structure refinement by side chain optimization and all-atom molecular dynamics simulations in explicit water, complex models are evaluated based on the sum of their conformational and solvation free energies, the latter calculated from the energy distribution functions obtained from relatively short molecular dynamics simulations of the complex in water and of pure water based on the solution theory in the energy representation. We examined protein-protein complex model structures of two protein-protein complex systems, bovine trypsin/CMTI-1 squash inhibitor (PDB ID: 1PPE) and RNase SA/barstar (PDB ID: 1AY7), for which both complex and monomer structures were determined experimentally. For each system, we calculated the energies for the crystal complex structure and twelve generated model structures, including the model most similar to the crystal structure and models very different from it. In both systems, the sum of the conformational and solvation free energies tended to be lower for structures similar to the crystal. We concluded that our energy calculation method is useful for selecting low energy complex models similar to the crystal structure from among a set of generated models.

  12. Mathematic modeling of complex aquifer: Evian Natural Mineral Water case study considering lumped and distributed models.

    NASA Astrophysics Data System (ADS)

    Henriot, abel; Blavoux, bernard; Travi, yves; Lachassagne, patrick; Beon, olivier; Dewandel, benoit; Ladouche, bernard

    2013-04-01

    The Evian Natural Mineral Water (NMW) aquifer is a highly heterogeneous complex of Quaternary glacial deposits composed of three main units, from bottom to top: - The "Inferior Complex", mainly composed of basal, impermeable till lying on the Alpine rocks. It outcrops only at the higher altitudes but is known at depth through drill holes. - The "Gavot Plateau Complex", an interstratified complex of mainly basal and lateral till up to 400 m thick. It outcrops at altitudes above approximately 850 m a.m.s.l. and up to 1200 m a.m.s.l. over a 30 km² area, and is the main known recharge area of the hydromineral system. - The "Terminal Complex", from which the Evian NMW emerges at 410 m a.m.s.l. It is composed of sand and gravel Kame terraces that allow water to flow from the deep, permeable layers of the "Gavot Plateau Complex" to the "Terminal Complex". A thick, impermeable terminal till caps and seals the system, so the aquifer is confined in its downstream area. Because of the heterogeneity and complexity of this hydrosystem, distributed modeling tools are difficult to implement at the whole-system scale: important hypotheses would have to be made about geometry, hydraulic properties, and boundary conditions, for example, and extrapolation would no doubt lead to unacceptable errors. Consequently, a modeling strategy is being developed that also improves the conceptual model of the hydrosystem. Lumped models, mainly based on tritium time series, allow the whole hydrosystem to be modeled by combining in series an exponential model (superficial aquifers of the "Gavot Plateau Complex"), a dispersive model (the Gavot Plateau interstratified complex), and a piston flow model (sand and gravel of the Kame terraces), with mean transit times of 8, 60, and 2.5 years, respectively. These models provide insight into the governing parameters of the whole mineral aquifer. They help improve the current conceptual model and are to be refined with other environmental tracers such as CFCs and SF6. A deterministic approach (distributed flow and transport model) is performed at the scale of the Terminal Complex. The geometry of the system is quite well known from drill holes, and the aquifer properties are known from the processing of hydraulic head data and the interpretation of pumping tests. A multidisciplinary approach (hydrodynamics, hydrochemistry, geology, isotopes) for the recharge area (Gavot Plateau Complex) aims to better constrain the upstream boundary of the distributed model. Moreover, a perfect-tracer modeling approach strongly constrains the fitting of this distributed model. The result is a high-resolution conceptual model leading to a future operational management tool for the aquifer.
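    Combining lumped models in series amounts to convolving the tracer input with each unit's transit-time distribution (TTD); convolving the TTDs themselves gives the whole-system response. A sketch under stated assumptions (the tritium input is synthetic, and the dispersive unit is approximated here by an exponential TTD for brevity):

      import numpy as np

      dt = 0.1                                   # time step (years)
      t = np.arange(0, 200, dt)

      def exponential_ttd(tau):
          g = np.exp(-t / tau) / tau
          return g / (g.sum() * dt)              # normalize to unit area

      def piston_ttd(tau):
          g = np.zeros_like(t)
          g[int(tau / dt)] = 1.0 / dt            # pure delay of tau years
          return g

      # Series combination: 8 y exponential (superficial aquifers), 60 y
      # dispersive mid-section (approximated as exponential), 2.5 y piston
      # flow (Kame terraces). Convolving unit TTDs gives the system TTD.
      g = exponential_ttd(8.0)
      for unit in (exponential_ttd(60.0), piston_ttd(2.5)):
          g = np.convolve(g, unit)[: len(t)] * dt

      tritium_in = np.where((t > 50) & (t < 55), 100.0, 5.0)   # synthetic pulse
      tritium_out = np.convolve(tritium_in, g)[: len(t)] * dt
      print("peak in:", tritium_in.max(), " peak out:", tritium_out.max().round(2))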

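    The lumped-parameter approach described above can be made concrete with a short convolution sketch: the tritium output of units connected in series is the input history convolved with each unit's transit-time distribution, with radioactive decay applied over the transit time. The mean transit times follow the abstract (8, 60 and 2.5 years); the input curve is invented, and the 60-year dispersive unit is replaced by a second exponential model purely to keep the sketch short.

```python
import numpy as np

LAMBDA = np.log(2) / 12.32          # tritium decay constant (1/yr)
dt = 0.1
t = np.arange(0.0, 200.0, dt)       # simulation time axis (years)

# Toy tritium input history (TU): background plus a bomb-peak-like pulse.
c_in = 10.0 + 90.0 * np.exp(-0.5 * ((t - 20.0) / 3.0) ** 2)

def exponential_ttd(tau, T):
    """Exponential transit-time distribution with mean transit time T."""
    return np.exp(-tau / T) / T

g1 = exponential_ttd(t, 8.0)        # superficial aquifers (8 yr)
g2 = exponential_ttd(t, 60.0)       # stand-in for the dispersive unit (60 yr)
decay = np.exp(-LAMBDA * t)         # decay over transit time

# Series coupling = successive convolutions; multiplying each kernel by the
# decay factor is exact because exp(-L*t1)*exp(-L*t2) = exp(-L*(t1+t2)).
out = np.convolve(c_in, g1 * decay)[: t.size] * dt
out = np.convolve(out, g2 * decay)[: t.size] * dt
# Piston flow is a pure 2.5 yr delay, plus decay over that delay.
out = np.interp(t - 2.5, t, out, left=0.0) * np.exp(-LAMBDA * 2.5)

print(f"peak output: {out.max():.1f} TU at t = {t[out.argmax()]:.1f} yr")
```
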
  13. Intelligent Decisions Need Intelligent Choice of Models and Data - a Bayesian Justifiability Analysis for Models with Vastly Different Complexity

    NASA Astrophysics Data System (ADS)

    Nowak, W.; Schöniger, A.; Wöhling, T.; Illman, W. A.

    2016-12-01

    Model-based decision support requires justifiable models with good predictive capabilities. This, in turn, calls for a fine adjustment between predictive accuracy (small systematic model bias, achievable with rather complex models) and predictive precision (small predictive uncertainties, achievable with simpler models with fewer parameters). The implied complexity/simplicity trade-off depends on the availability of informative data for calibration. If such data are not available, additional data collection can be planned through optimal experimental design. We present a model justifiability analysis that can compare models of vastly different complexity. It rests on Bayesian model averaging (BMA) to investigate the complexity/performance trade-off as a function of data availability. We then disentangle the complexity component from the performance component by replacing the actually observed data with realizations of synthetic data predicted by the models. This results in a "model confusion matrix". Based on this matrix, the modeler can identify the maximum model complexity that can be justified by the available (or planned) amount and type of data. As a by-product, the matrix quantifies model (dis-)similarity. We apply this analysis to aquifer characterization via hydraulic tomography, comparing four models with vastly different numbers of parameters (from a homogeneous model to geostatistical random fields). Using subsets of hydraulic tomography data, we determine model justifiability as a function of data set size. The test case shows that a geostatistical parameterization requires a substantial amount of hydraulic tomography data to be justified, while a zonation-based model can be justified with more limited data set sizes. The actual model performance (as opposed to model justifiability), however, depends strongly on the quality of prior geological information.

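    A hedged sketch of the "model confusion matrix" idea: synthetic data are generated from each candidate model, all candidates are refitted to each synthetic data set, and BIC-based weights approximate the Bayesian model-averaging weights. Polynomials of increasing degree stand in for models of increasing complexity; the paper's models (homogeneous through geostatistical) are far richer.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 25)
degrees = [0, 1, 2, 5]              # candidate "complexities"
sigma = 0.05                        # observation error

def fit_bic(y, deg):
    """BIC of a least-squares polynomial fit under Gaussian errors."""
    coef = np.polyfit(x, y, deg)
    rss = np.sum((np.polyval(coef, x) - y) ** 2)
    n, k = x.size, deg + 1
    return n * np.log(rss / n) + k * np.log(n)

confusion = np.zeros((len(degrees), len(degrees)))
for i, d_true in enumerate(degrees):
    for _ in range(200):                       # synthetic replicates
        coef = rng.normal(size=d_true + 1)
        y = np.polyval(coef, x) + rng.normal(0, sigma, x.size)
        bics = np.array([fit_bic(y, d) for d in degrees])
        w = np.exp(-0.5 * (bics - bics.min()))
        confusion[i] += w / w.sum()            # approximate BMA weights
confusion /= 200
print(np.round(confusion, 2))   # rows: data-generating model; cols: selected
```
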
  14. Complex systems as lenses on learning and teaching

    NASA Astrophysics Data System (ADS)

    Hurford, Andrew C.

    From metaphors to mathematized models, the complexity sciences are changing the ways disciplines view their worlds, and ideas borrowed from complexity are increasingly being used to structure conversations and guide research on teaching and learning. The purpose of this corpus of research is to further those conversations and to extend complex systems ideas, theories, and modeling to curricula and to research on learning and teaching. A review of the literatures of learning and of complexity science, and a discussion of the intersections between those disciplines, are provided. The work reported represents an evolving model of learning qua complex system, an evolution that is the result of iterative cycles of design research. One of the signatures of complex systems is the presence of scale invariance, and this line of research furnishes empirical evidence of scale-invariant behaviors in the activity of learners engaged in participatory simulations. A discussion of possible causes for these behaviors, and for chaotic phase transitions in human learning, favors real-time optimization of decision-making as the means of producing them. Beyond theoretical development and modeling, this work includes the development of teaching activities intended to introduce pre-service mathematics and science teachers to complex systems. While some of the learning goals for these activities focused on introducing complex systems as a content area, complex systems were also used to frame perspectives on learning. Results from scoring rubrics and interview responses illustrate attributes of the proposed model of complex systems learning and show how these pre-service teachers made sense of the ideas. Correspondences between established theories of learning and a complex adaptive systems model of learning are identified and made explicit, and a means of using complex systems ideas to design instruction is offered. A fundamental assumption of this research is that complex systems ideas and understandings can be appropriated from more complexity-developed disciplines and put to use in modeling and building increasingly productive understandings of learning and teaching.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campbell, Philip LaRoche

    At the end of his life, Stephen Jay Kline, longtime professor of mechanical engineering at Stanford University, completed a book on how to address complex systems. The title of the book is 'Conceptual Foundations of Multi-Disciplinary Thinking' (1995), but the topic of the book is systems. Kline first establishes certain limits that are characteristic of our conscious minds. Kline then establishes a complexity measure for systems and uses that complexity measure to develop a hierarchy of systems. Kline then argues that our minds, due to their characteristic limitations, are unable to model the complex systems in that hierarchy. Computers are of no help to us here. Our attempts at modeling these complex systems are based on the way we successfully model some simple systems, in particular, 'inert, naturally-occurring' objects and processes, such as what is the focus of physics. But complex systems overwhelm such attempts. As a result, the best we can do in working with these complex systems is to use a heuristic, what Kline calls the 'Guideline for Complex Systems.' Kline documents the problems that have developed due to 'oversimple' system models and from the inappropriate application of a system model from one domain to another. One prominent such problem is the Procrustean attempt to make the disciplines that deal with complex systems be 'physics-like.' Physics deals with simple systems, not complex ones, using Kline's complexity measure. The models that physics has developed are inappropriate for complex systems. Kline documents a number of the wasteful and dangerous fallacies of this type.

  16. On the dangers of model complexity without ecological justification in species distribution modeling

    Treesearch

    David M. Bell; Daniel R. Schlaepfer

    2016-01-01

    Although biogeographic patterns are the product of complex ecological processes, the increasing complexity of correlative species distribution models (SDMs) is not always motivated by ecological theory, but by model fit. The validity of model projections, such as shifts in a species' climatic niche, becomes questionable particularly during extrapolations, such as for...

  17. A Peep into the Uncertainty-Complexity-Relevance Modeling Trilemma through Global Sensitivity and Uncertainty Analysis

    NASA Astrophysics Data System (ADS)

    Munoz-Carpena, R.; Muller, S. J.; Chu, M.; Kiker, G. A.; Perz, S. G.

    2014-12-01

    Model complexity resulting from the need to integrate environmental system components cannot be overstated. In particular, additional emphasis is urgently needed on rational approaches to guide decision making through the uncertainties surrounding the integrated system across decision-relevant scales. However, in spite of the difficulties that the consideration of modeling uncertainty presents for the decision process, it should not be avoided, or the value of and science behind the models will be undermined. These two issues, i.e., the need for coupled models that can answer the pertinent questions and the need for models that do so with sufficient certainty, are the key indicators of a model's relevance. Model relevance is inextricably linked with model complexity. Although model complexity has grown greatly in recent years, there has been little work to rigorously characterize the threshold of relevance in integrated and complex models. Formally assessing the relevance of a model in the face of increasing complexity would be valuable because there is growing unease among developers and users of complex models about the cumulative effects of various sources of uncertainty on model outputs. In particular, this issue has prompted doubt over whether the considerable effort going into further elaborating complex models will in fact yield the expected payback. New approaches have been proposed recently to evaluate the uncertainty-complexity-relevance modeling trilemma (Muller, Muñoz-Carpena and Kiker, 2011) by incorporating state-of-the-art global sensitivity and uncertainty analysis (GSA/UA) in every step of model development, so as to quantify not only the uncertainty introduced by the addition of new environmental components, but also the effect that these new components have on existing components (interactions, non-linear responses). Outputs from the analysis can also be used to quantify system resilience (stability, alternative states, thresholds or tipping points) in the face of environmental and anthropogenic change (Perz, Muñoz-Carpena, Kiker and Holt, 2013), and, through Monte Carlo mapping of potential management activities onto the most important factors or processes, to steer the system towards behavioral (desirable) outcomes (Chu-Agor, Muñoz-Carpena et al., 2012).

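    As one concrete form of the GSA/UA machinery referred to above, the sketch below computes variance-based Sobol sensitivity indices with the SALib package, using the standard Ishigami test function as a stand-in for an integrated environmental model. The function and bounds are illustrative only.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["x1", "x2", "x3"],
    "bounds": [[-np.pi, np.pi]] * 3,
}

X = saltelli.sample(problem, 1024)            # (N*(2D+2), D) sample matrix
Y = np.sin(X[:, 0]) + 7 * np.sin(X[:, 1]) ** 2 \
    + 0.1 * X[:, 2] ** 4 * np.sin(X[:, 0])    # Ishigami test function

Si = sobol.analyze(problem, Y)                # first- and total-order indices
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name}: first-order={s1:.2f}  total-order={st:.2f}")
```
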
  18. Everyday value conflicts and integrative complexity of thought.

    PubMed

    Myyry, Liisa

    2002-12-01

    This study examined the value pluralism model in everyday value conflicts, and the effect of issue context on complexity of thought. In line with the cognitive manager model, we hypothesized that respondents would attain a higher level of integrative complexity on personal issues than on professional and general issues. We also explored the relations of integrative complexity to value priorities, measured by the Schwartz Value Survey, and to emotional empathy. The value pluralism model was not supported by the data, collected from 126 university students of social science, business and technology. The cognitive manager model was partially confirmed by the data from females but not from males. Concerning value priorities, more complex respondents had higher regard for self-transcendence values, and less complex respondents for self-enhancement values. Emotional empathy was also significantly related to the complexity score.

  19. Accuracy of travel time distribution (TTD) models as affected by TTD complexity, observation errors, and model and tracer selection

    USGS Publications Warehouse

    Green, Christopher T.; Zhang, Yong; Jurgens, Bryant C.; Starn, J. Jeffrey; Landon, Matthew K.

    2014-01-01

    Analytical models of the travel time distribution (TTD) from a source area to a sample location are often used to estimate groundwater ages and solute concentration trends. The accuracies of these models are not well known for geologically complex aquifers. In this study, synthetic datasets were used to quantify the accuracy of four analytical TTD models as affected by TTD complexity, observation errors, model selection, and tracer selection. Synthetic TTDs and tracer data were generated from existing numerical models with complex hydrofacies distributions for one public-supply well and 14 monitoring wells in the Central Valley, California. Analytical TTD models were calibrated to synthetic tracer data, and prediction errors were determined for estimates of TTDs and conservative tracer (NO3−) concentrations. Analytical models included a new, scale-dependent dispersivity model (SDM) for two-dimensional transport from the water table to a well, and three other established analytical models. The relative influence of the error sources (TTD complexity, observation error, model selection, and tracer selection) depended on the type of prediction. Geological complexity gave rise to complex TTDs in monitoring wells that strongly affected errors of the estimated TTDs. However, prediction errors for NO3− and median age depended more on tracer concentration errors. The SDM tended to give the most accurate estimates of the vertical velocity and other predictions, although TTD model selection had minor effects overall. Adding tracers improved predictions if the new tracers had different input histories. Studies using TTD models should focus on the factors that most strongly affect the desired predictions.

  20. Challenges in Developing Models Describing Complex Soil Systems

    NASA Astrophysics Data System (ADS)

    Simunek, J.; Jacques, D.

    2014-12-01

    Quantitative mechanistic models that consider basic physical, mechanical, chemical, and biological processes have the potential to be powerful tools for integrating our understanding of complex soil systems, and the soil science community has often called for models that would include a large number of these diverse processes. However, once attempts have been made to develop such models, the response from the community has not always been enthusiastic, especially once it was discovered that such models are necessarily highly complex: they require a large number of parameters, not all of which can be easily (or at all) measured and/or identified and which are often associated with large uncertainties, and they demand from their users deep knowledge of most or all of the implemented physical, mechanical, chemical and biological processes. The real, or perceived, complexity of these models then discourages users from applying them even to relatively simple problems for which they would be perfectly adequate. Due to the nonlinear nature and chemical/biological complexity of soil systems, it is also virtually impossible to verify these types of models analytically, raising doubts about their applicability. Code intercomparison, which is then likely the most suitable method to assess code capabilities and model performance, requires the existence of multiple models of similar or overlapping capabilities, which may not always exist. It is thus a challenge not only to develop models describing complex soil systems, but also to persuade the soil science community to use them. As a result, complex quantitative mechanistic models remain an underutilized tool in soil science research. We will demonstrate some of the challenges discussed above using our own efforts to develop quantitative mechanistic models (such as HP1/2) for complex soil systems.

  1. Cachexia induces head and neck changes in locally advanced oropharyngeal carcinoma during definitive cisplatin and image-guided volumetric-modulated arc radiation therapy.

    PubMed

    Mazzola, R; Ricchetti, F; Fiorentino, A; Di Paola, G; Fersino, S; Giaj Levra, N; Ruggieri, R; Alongi, F

    2016-06-01

    Cancer cachexia is a syndrome characterized by weight loss (WL) and sarcopenia. The aim of the study was to assess the impact of cachexia on head and neck changes during definitive cisplatin and image-guided volumetric-modulated arc radiation therapy in a series of locally advanced oropharyngeal cancers. Volume variations of the sternocleidomastoid muscle (SCM) were considered a surrogate of muscle changes related to sarcopenia. Two head and neck diameters, encompassing the cranial limits of the II and III nodal levels (defined as 'head diameter' and 'neck diameter', respectively), were measured. All parameters were defined retrospectively by means of on-board cone beam computed tomography images at the 1st-8th, 15th-22nd and last fractions (fx) of radiotherapy (RT). Cachexia was defined as WL >5% during treatment. The analysis correlated the parameter changes with three WL ranges: <5, 5-9 and >10%. Thirty patients were evaluated. One hundred and fifty contoured SCMs and three hundred diameters were collected. Median WL was 6.5% (range, 0-16%). The most significant SCM shrinkage was recorded at the 15th fx (mean 1.6 cc) and was related to WL 5-9% and WL >10% (P = 0.001). For the 'head diameter', the peak reduction was recorded at the 15th fx (mean 8 mm) and was statistically correlated with WL >10% (P = 0.001). The peak reduction in 'neck diameter' was registered at the 22nd fx (mean 6 mm), with a gradual reduction until the end of treatment for WL >5%. In a homogeneous cohort of patients, the present study quantified the impact of cachexia on head and neck changes. These data could provide adaptive RT implications for further investigations.

  2. Piezoresistive Carbon-based Hybrid Sensor for Body-Mounted Biomedical Applications

    NASA Astrophysics Data System (ADS)

    Melnykowycz, M.; Tschudin, M.; Clemens, F.

    2017-02-01

    For body-mounted sensor applications, the evolution of soft condensed matter sensor (SCMS) materials offers conformability and enables mechanical compliance between the body surface and the sensing mechanism. A piezoresistive hybrid sensor with a compliant meta-material sub-structure provides a way to engineer the sensor's physical design through modification of the mechanical properties of the compliant structure. A piezoresistive fiber sensor was produced by combining a thermoplastic elastomer (TPE) matrix with carbon black (CB) particles in a 1:1 mass ratio. The feedstock was extruded in monofilament fiber form (diameter of 300 microns), resulting in a highly stretchable sensor (strain range up to 100%) with a linear resistance response. The soft condensed matter sensor was integrated into a hybrid design comprising a 3D-printed metamaterial structure combined with a soft silicone. An auxetic unit cell (with negative Poisson's ratio) was chosen for the design in order to combine with the soft silicone, which exhibits a high Poisson's ratio. The hybrid sensor design was subjected to mechanical tensile testing up to 50% strain (with gauge factor calculation for sensor performance), and then utilized for strain-based sensing applications on the body, including gesture recognition and vital-function monitoring such as blood pulse-wave and breath monitoring. A 10-gesture Natural User Interface (NUI) test protocol was utilized to show the effectiveness of a single wrist-mounted sensor in identifying discrete gestures, including finger and hand motions chosen specifically for Human Computer Interaction (HCI) applications. The blood pulse-wave signal was monitored with the hand at rest, in a wrist-mounted configuration. In addition, different breathing patterns were investigated, including normal breathing and coughing, using belt- and chest-mounted configurations.

  3. On the maximum-entropy/autoregressive modeling of time series

    NASA Technical Reports Server (NTRS)

    Chao, B. F.

    1984-01-01

    The autoregressive (AR) model of a random process is interpreted in light of Prony's relation, which relates a complex-conjugate pair of poles of the AR process in the z-plane (or the z domain), on the one hand, to the complex frequency of one complex harmonic function in the time domain, on the other. Thus the AR model of a time series is one that models the time series as a linear combination of complex harmonic functions, which include pure sinusoids and real exponentials as special cases. An AR model is completely determined by its z-domain pole configuration. The maximum-entropy/autoregressive (ME/AR) spectrum, defined on the unit circle of the z-plane (or the frequency domain), is nothing but a convenient, but ambiguous, visual representation. It is asserted that the position and shape of a spectral peak are determined by the corresponding complex frequency, and that the height of the spectral peak contains little information about the complex amplitude of the complex harmonic functions.

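    The pole-frequency correspondence described above can be demonstrated in a few lines: fit an AR(2) model to a noisy sinusoid by least squares and recover the frequency from the angle of the complex-conjugate pole pair in the z-plane. Signal parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
fs, f0 = 100.0, 12.5                       # sampling and signal frequency (Hz)
t = np.arange(500) / fs
x = np.sin(2 * np.pi * f0 * t) + 0.05 * rng.normal(size=t.size)

# AR(2): x[n] = a1*x[n-1] + a2*x[n-2] + e[n], fitted by least squares.
A = np.column_stack([x[1:-1], x[:-2]])
a1, a2 = np.linalg.lstsq(A, x[2:], rcond=None)[0]

poles = np.roots([1.0, -a1, -a2])          # z-plane poles of the AR model
f_est = np.abs(np.angle(poles[0])) * fs / (2 * np.pi)
print(f"estimated frequency: {f_est:.2f} Hz (true {f0})")
```
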
  4. Nonlinear complexity of random visibility graph and Lempel-Ziv on multitype range-intensity interacting financial dynamics

    NASA Astrophysics Data System (ADS)

    Zhang, Yali; Wang, Jun

    2017-09-01

    In an attempt to investigate the nonlinear complex evolution of financial dynamics, a new financial price model, the multitype range-intensity contact (MRIC) financial model, is developed based on the multitype range-intensity interacting contact system, in which the interaction and transmission of different types of investment attitudes in a stock market are simulated by virus spreading. Two new random visibility graph (VG) based analyses and the Lempel-Ziv complexity (LZC) are applied to study the complex behaviors of the return time series and the corresponding randomly sorted series. The VG method comes from complex network theory, and LZC is a non-parametric measure of complexity reflecting the rate of new-pattern generation in a series. In this work, real stock market indices are studied comparatively against simulation data from the proposed model. The numerical empirical study shows similar complexity behaviors between the model and the real markets, confirming that the financial model is reasonable to some extent.

  5. Rainfall runoff modelling of the Upper Ganga and Brahmaputra basins using PERSiST.

    PubMed

    Futter, M N; Whitehead, P G; Sarkar, S; Rodda, H; Crossman, J

    2015-06-01

    There are ongoing discussions about the appropriate level of complexity and the sources of uncertainty in rainfall-runoff models. Simulations for operational hydrology, flood forecasting or nutrient transport all warrant different levels of complexity in the modelling approach. More complex model structures are appropriate for simulations of land-cover-dependent nutrient transport, while more parsimonious model structures may be adequate for runoff simulation. The appropriate level of complexity also depends on data availability. Here, we use PERSiST, a simple, semi-distributed dynamic rainfall-runoff modelling toolkit, to simulate flows in the Upper Ganges and Brahmaputra rivers. We present two sets of simulations driven by single time series of daily precipitation and temperature, using simple (A) and complex (B) model structures based on uniform and hydrochemically relevant land covers, respectively. Models were compared based on ensembles of Bayesian Information Criterion (BIC) statistics. Equifinality was observed for parameters but not for model structures. Model performance was better for the more complex (B) structural representations than for the parsimonious model structures, showing that structural uncertainty is more important than parameter uncertainty; the ensembles of BIC statistics, however, suggested that neither structural representation was preferable in a statistical sense. The simulations presented here confirm that relatively simple models with limited data requirements can be used to credibly simulate flows and the water balance components needed for nutrient flux modelling in large, data-poor basins.

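    A minimal sketch of the comparison criterion used above: the Bayesian Information Criterion (BIC) trades goodness of fit against parameter count, BIC = n ln(RSS/n) + k ln(n) under Gaussian errors, with lower values preferred. The "flows" and candidate structures below are synthetic stand-ins, not PERSiST output.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 365
q_obs = 5 + 2 * np.sin(np.arange(n) * 2 * np.pi / 365) + rng.normal(0, 0.5, n)

def bic(q_sim, k):
    """BIC for a Gaussian-error fit with k parameters."""
    rss = np.sum((q_obs - q_sim) ** 2)
    return n * np.log(rss / n) + k * np.log(n)

q_simple  = np.full(n, q_obs.mean())                        # structure A: k=1
q_complex = 5 + 2 * np.sin(np.arange(n) * 2 * np.pi / 365)  # structure B: k=3

print(f"BIC simple : {bic(q_simple, 1):8.1f}")
print(f"BIC complex: {bic(q_complex, 3):8.1f}   (lower is preferred)")
```
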
  6. Assessing Complexity in Learning Outcomes--A Comparison between the SOLO Taxonomy and the Model of Hierarchical Complexity

    ERIC Educational Resources Information Center

    Stålne, Kristian; Kjellström, Sofia; Utriainen, Jukka

    2016-01-01

    An important aspect of higher education is to educate students who can manage complex relationships and solve complex problems. Teachers need to be able to evaluate course content with regard to complexity, as well as evaluate students' ability to assimilate complex content and express it in the form of a learning outcome. One model for evaluating…

  7. Multifaceted Modelling of Complex Business Enterprises

    PubMed Central

    2015-01-01

    We formalise and present a new generic multifaceted complex system approach for modelling complex business enterprises. Our method has a strong focus on integrating the various data types available in an enterprise, which represent the diverse perspectives of the various stakeholders. We explain the challenges faced and define a novel approach to converting diverse data types into usable Bayesian probability forms. The data types that can be integrated include historical data, survey data, management planning data, expert knowledge and incomplete data. The structural complexities of the complex system modelling process, based on various decision contexts, are also explained along with a solution. This new application of complex system models as a management tool for decision making is demonstrated using a railway transport case study. The case study demonstrates how the new approach can be utilised to develop a customised decision support model for a specific enterprise. Various decision scenarios are also provided to illustrate the versatility of the decision model at different phases of enterprise operations such as planning and control. PMID:26247591

  8. Multifaceted Modelling of Complex Business Enterprises.

    PubMed

    Chakraborty, Subrata; Mengersen, Kerrie; Fidge, Colin; Ma, Lin; Lassen, David

    2015-01-01

    We formalise and present a new generic multifaceted complex system approach for modelling complex business enterprises. Our method has a strong focus on integrating the various data types available in an enterprise, which represent the diverse perspectives of the various stakeholders. We explain the challenges faced and define a novel approach to converting diverse data types into usable Bayesian probability forms. The data types that can be integrated include historical data, survey data, management planning data, expert knowledge and incomplete data. The structural complexities of the complex system modelling process, based on various decision contexts, are also explained along with a solution. This new application of complex system models as a management tool for decision making is demonstrated using a railway transport case study. The case study demonstrates how the new approach can be utilised to develop a customised decision support model for a specific enterprise. Various decision scenarios are also provided to illustrate the versatility of the decision model at different phases of enterprise operations such as planning and control.

  9. Nonlinear complexity behaviors of agent-based 3D Potts financial dynamics with random environments

    NASA Astrophysics Data System (ADS)

    Xing, Yani; Wang, Jun

    2018-02-01

    A new microscopic 3D Potts interaction financial price model is established in this work to investigate the nonlinear complexity behaviors of stock markets. The 3D Potts model, which extends the 2D Potts model to three dimensions, is a cubic lattice model describing the interaction behavior among agents. In order to explore the complexity of real financial markets and of the 3D Potts financial model, a new random coarse-grained Lempel-Ziv complexity is proposed for series such as the price returns, the price volatilities, and the random time d-returns. The composite multiscale entropy (CMSE) method is then applied to the intrinsic mode functions (IMFs) and the corresponding shuffled data to study the complexity behaviors. The empirical results indicate similar complexity behaviors between the model and the real stock market indices, suggesting that the 3D financial model is feasible.

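    For concreteness, the sketch below implements one common dictionary-parsing variant of Lempel-Ziv complexity and applies it to the signs of simulated returns. This is a generic illustration of the complexity measure named above, not the paper's random coarse-grained LZC.

```python
import numpy as np

def lz_complexity(s: str) -> int:
    """Number of distinct phrases in a sequential dictionary parsing of s."""
    phrases, ind, inc = set(), 0, 1
    while ind + inc <= len(s):
        phrase = s[ind:ind + inc]
        if phrase in phrases:
            inc += 1            # extend: phrase seen before
        else:
            phrases.add(phrase) # new phrase: restart after it
            ind += inc
            inc = 1
    return len(phrases)

rng = np.random.default_rng(2)
returns = rng.normal(size=2000)
symbols = "".join("1" if r > 0 else "0" for r in returns)  # binarized returns
print("LZC of random return signs:", lz_complexity(symbols))
```
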
  10. Syntactic Complexity as an Aspect of Text Complexity

    ERIC Educational Resources Information Center

    Frantz, Roger S.; Starr, Laura E.; Bailey, Alison L.

    2015-01-01

    Students' ability to read complex texts is emphasized in the Common Core State Standards (CCSS) for English Language Arts and Literacy. The standards propose a three-part model for measuring text complexity. Although the model presents a robust means for determining text complexity based on a variety of features inherent to a text as well as…

  11. Cumulative complexity: a functional, patient-centered model of patient complexity can improve research and practice.

    PubMed

    Shippee, Nathan D; Shah, Nilay D; May, Carl R; Mair, Frances S; Montori, Victor M

    2012-10-01

    To design a functional, patient-centered model of patient complexity with practical applicability to analytic design and clinical practice. Existing literature on patient complexity has mainly identified its components descriptively and in isolation, lacking clarity as to their combined functions in disrupting care or to how complexity changes over time. The authors developed a cumulative complexity model, which integrates existing literature and emphasizes how clinical and social factors accumulate and interact to complicate patient care. A narrative literature review is used to explicate the model. The model emphasizes a core, patient-level mechanism whereby complicating factors impact care and outcomes: the balance between patient workload of demands and patient capacity to address demands. Workload encompasses the demands on the patient's time and energy, including demands of treatment, self-care, and life in general. Capacity concerns the ability to handle work (e.g., functional morbidity, financial/social resources, literacy). Workload-capacity imbalances comprise the mechanism driving patient complexity. Treatment and illness burdens serve as feedback loops, linking negative outcomes to further imbalances, such that complexity may accumulate over time. With its components largely supported by existing literature, the model has implications for analytic design, clinical epidemiology, and clinical practice. Copyright © 2012 Elsevier Inc. All rights reserved.

  12. Reassessing Geophysical Models of the Bushveld Complex in 3D

    NASA Astrophysics Data System (ADS)

    Cole, J.; Webb, S. J.; Finn, C.

    2012-12-01

    Conceptual geophysical models of the Bushveld Igneous Complex show three possible geometries for its mafic component: 1) separate intrusions with vertical feeders for the eastern and western lobes (Cousins, 1959); 2) separate dipping sheets for the two lobes (Du Plessis and Kleywegt, 1987); 3) a single saucer-shaped unit connected at depth in the central part between the two lobes (Cawthorn et al., 1998). Model three incorporates isostatic adjustment of the crust in response to the weight of the dense mafic material. The model was corroborated by results of a broadband seismic array over southern Africa, known as the Southern African Seismic Experiment (SASE) (Nguuri et al., 2001; Webb et al., 2004). This new information about the crustal thickness only became available in the last decade and could not be considered in the earlier models. Nevertheless, there is still ongoing debate as to which model is correct. All of the models published up to now have been built in 2 or 2.5 dimensions, which is not well suited to modelling the complex geometry of the Bushveld intrusion. 3D modelling takes into account the effects of variations in geometry and in the geophysical properties of lithologies in a fully three-dimensional sense, and these effects alter the shape and amplitude of the calculated fields. The main question is how the new knowledge of the increased crustal thickness, as well as the complexity of the Bushveld Complex, will impact the gravity fields calculated for the existing conceptual models when modelling in 3D. The three published geophysical models were remodelled using full 3D potential-field modelling software, including the crustal thickness obtained from the SASE. The aim was not to construct very detailed models, but to test the existing conceptual models in an equally conceptual way. First, a specific 2D model was recreated in 3D, without crustal thickening, to establish the difference between 2D and 3D results. Then the thicker crust was added. Including the less dense, thicker crust underneath the Bushveld Complex necessitates the presence of dense material in the central area between the eastern and western lobes. The simplest way to achieve this is to model the mafic component of the Bushveld Complex as a single intrusion, similar to what the first students of the Bushveld Complex suggested. Conceptual models are by definition simplified versions of the real situation, and the geometry of the Bushveld Complex is expected to be much more intricate. References: Cawthorn, R.G., Cooper, G.R.J., Webb, S.J. (1998). Connectivity between the western and eastern limbs of the Bushveld Complex. S Afr J Geol, 101, 291-298. Cousins, C.A. (1959). The structure of the mafic portion of the Bushveld Igneous Complex. Trans Geol Soc S Afr, 62, 179-189. Du Plessis, A., Kleywegt, R.J. (1987). A dipping sheet model for the mafic lobes of the Bushveld Complex. S Afr J Geol, 90, 1-6. Nguuri, T.K., Gore, J., James, D.E., Webb, S.J., Wright, C., Zengeni, T.G., Gwavava, O., Snoke, J.A. and Kaapvaal Seismic Group. (2001). Crustal structure beneath southern Africa and its implications for the formation and evolution of the Kaapvaal and Zimbabwe cratons. Geoph Res Lett, 28, 2501-2504. Webb, S.J., Cawthorn, R.G., Nguuri, T., James, D. (2004). Gravity modelling of Bushveld Complex connectivity supported by Southern African Seismic Experiment results. S Afr J Geol, 107, 207-218.

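    A toy forward model conveys the flavor of such conceptual tests: below, the vertical gravity anomaly of buried spheres stands in for full 3D potential-field modelling, comparing a "two separate lobes" geometry against a "single connected body at depth". All geometries and density contrasts are invented for illustration, not taken from the published models.

```python
import numpy as np

G = 6.674e-11  # gravitational constant (m^3 kg^-1 s^-2)

def gz_sphere(x, x0, depth, radius, drho):
    """Vertical gravity anomaly (mGal) of a buried sphere along a profile."""
    dm = (4.0 / 3.0) * np.pi * radius**3 * drho     # excess mass (kg)
    r2 = (x - x0) ** 2 + depth**2
    return 1e5 * G * dm * depth / r2**1.5           # 1 m/s^2 = 1e5 mGal

x = np.linspace(-200e3, 200e3, 401)                 # surface profile (m)
# "two separate lobes" vs "single connected body at depth":
two_lobes = (gz_sphere(x, -60e3, 8e3, 5e3, 300)
             + gz_sphere(x, 60e3, 8e3, 5e3, 300))
connected = gz_sphere(x, 0.0, 15e3, 9e3, 300)

print(f"max anomaly, two lobes: {two_lobes.max():.2f} mGal")
print(f"max anomaly, connected: {connected.max():.2f} mGal")
```
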
  13. A simple model clarifies the complicated relationships of complex networks

    PubMed Central

    Zheng, Bojin; Wu, Hongrun; Kuang, Li; Qin, Jun; Du, Wenhua; Wang, Jianmin; Li, Deyi

    2014-01-01

    Real-world networks such as the Internet and the WWW share many common traits. Hundreds of models have been proposed to characterize these traits and thereby understand such networks. Because different models use very different mechanisms, it is widely believed that these traits originate from different causes. However, we find that a simple model based on optimisation can produce many of the traits, including scale-free, small-world, ultra-small-world, Delta-distribution, compact, fractal, regular and random networks. Moreover, by revising the proposed model, community-structure networks can be generated. With this model and its revised versions, the complicated relationships of complex networks are illustrated. The model brings a new universal perspective to the understanding of complex networks and provides a universal method for modeling complex networks from the viewpoint of optimisation. PMID:25160506

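    A sketch of growth-by-optimisation in this spirit: each new node attaches to the existing node that minimises a weighted sum of geometric distance and centrality (hops to the root), a classic trade-off heuristic. This is a generic illustration; the paper's actual objective function may differ.

```python
import numpy as np

rng = np.random.default_rng(3)
alpha = 10.0                            # weight of geometric distance
n = 500
pos = rng.random((n, 2))                # random node positions in the unit square
parent = np.full(n, -1)
hops = np.zeros(n, dtype=int)           # hop distance to the root (node 0)

for i in range(1, n):
    d = np.linalg.norm(pos[:i] - pos[i], axis=1)
    cost = alpha * d + hops[:i]         # trade-off objective
    j = int(np.argmin(cost))            # optimal attachment point
    parent[i] = j
    hops[i] = hops[j] + 1

# Degree = number of children, plus one for the edge to the parent.
degree = np.bincount(parent[1:], minlength=n) + (parent >= 0)
print("max degree:", degree.max(), " mean hops:", round(hops.mean(), 2))
```
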
  14. Pattern-oriented modeling of agent-based complex systems: Lessons from ecology

    USGS Publications Warehouse

    Grimm, Volker; Revilla, Eloy; Berger, Uta; Jeltsch, Florian; Mooij, Wolf M.; Railsback, Steven F.; Thulke, Hans-Hermann; Weiner, Jacob; Wiegand, Thorsten; DeAngelis, Donald L.

    2005-01-01

    Agent-based complex systems are dynamic networks of many interacting agents; examples include ecosystems, financial markets, and cities. The search for general principles underlying the internal organization of such systems often uses bottom-up simulation models such as cellular automata and agent-based models. No general framework for designing, testing, and analyzing bottom-up models has yet been established, but recent advances in ecological modeling have come together in a general strategy we call pattern-oriented modeling. This strategy provides a unifying framework for decoding the internal organization of agent-based complex systems and may lead toward unifying algorithmic theories of the relation between adaptive behavior and system complexity.

  15. Pattern-Oriented Modeling of Agent-Based Complex Systems: Lessons from Ecology

    NASA Astrophysics Data System (ADS)

    Grimm, Volker; Revilla, Eloy; Berger, Uta; Jeltsch, Florian; Mooij, Wolf M.; Railsback, Steven F.; Thulke, Hans-Hermann; Weiner, Jacob; Wiegand, Thorsten; DeAngelis, Donald L.

    2005-11-01

    Agent-based complex systems are dynamic networks of many interacting agents; examples include ecosystems, financial markets, and cities. The search for general principles underlying the internal organization of such systems often uses bottom-up simulation models such as cellular automata and agent-based models. No general framework for designing, testing, and analyzing bottom-up models has yet been established, but recent advances in ecological modeling have come together in a general strategy we call pattern-oriented modeling. This strategy provides a unifying framework for decoding the internal organization of agent-based complex systems and may lead toward unifying algorithmic theories of the relation between adaptive behavior and system complexity.

  16. Epidemic threshold of the susceptible-infected-susceptible model on complex networks

    NASA Astrophysics Data System (ADS)

    Lee, Hyun Keun; Shim, Pyoung-Seop; Noh, Jae Dong

    2013-06-01

    We demonstrate that the susceptible-infected-susceptible (SIS) model on complex networks can have an inactive Griffiths phase characterized by a slow relaxation dynamics. It contrasts with the mean-field theoretical prediction that the SIS model on complex networks is active at any nonzero infection rate. The dynamic fluctuation of infected nodes, ignored in the mean field approach, is responsible for the inactive phase. It is proposed that the question whether the epidemic threshold of the SIS model on complex networks is zero or not can be resolved by the percolation threshold in a model where nodes are occupied in degree-descending order. Our arguments are supported by the numerical studies on scale-free network models.

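    A minimal discrete-time SIS simulation on a scale-free network makes the model concrete; the paper's Griffiths-phase analysis relies on much larger systems and careful finite-size scaling, so the rates and sizes below are purely illustrative.

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(4)
g = nx.barabasi_albert_graph(2000, 3, seed=4)   # scale-free substrate
beta, mu = 0.05, 0.2                 # infection / recovery probability per step

infected = set(rng.choice(g.number_of_nodes(), 20, replace=False))
for step in range(200):
    new_inf, recov = set(), set()
    for i in infected:
        if rng.random() < mu:
            recov.add(i)                          # recovery S <- I
        for j in g.neighbors(i):
            if j not in infected and rng.random() < beta:
                new_inf.add(j)                    # infection I -> neighbor
    infected = (infected | new_inf) - recov
    if not infected:
        break                                     # absorbing (inactive) state
print(f"prevalence after {step + 1} steps: {len(infected) / 2000:.3f}")
```
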
  17. Questions regarding the predictive value of one evolved complex adaptive system for a second: exemplified by the SOD1 mouse.

    PubMed

    Greek, Ray; Hansen, Lawrence A

    2013-11-01

    We surveyed the scientific literature regarding amyotrophic lateral sclerosis, the SOD1 mouse model, complex adaptive systems, evolution, drug development, animal models, and philosophy of science in an attempt to analyze the SOD1 mouse model of amyotrophic lateral sclerosis in the context of evolved complex adaptive systems. Humans and animals are examples of evolved complex adaptive systems. It is difficult to predict the outcome of perturbations to such systems because of the characteristics of complex systems. Modeling even one complex adaptive system in order to predict outcomes from perturbations is difficult. Predicting outcomes in one evolved complex adaptive system based on outcomes from a second, especially when the perturbation occurs at higher levels of organization, is even more problematic. Using animal models to predict human outcomes of perturbations such as disease and drugs should therefore have a very low predictive value. We present empirical evidence confirming this and suggest a theory to explain the phenomenon. We analyze the SOD1 mouse model of amyotrophic lateral sclerosis to illustrate this position. Copyright © 2013 The Authors. Published by Elsevier Ltd. All rights reserved.

  18. Modelling the evolution of complex conductivity during calcite precipitation on glass beads

    NASA Astrophysics Data System (ADS)

    Leroy, Philippe; Li, Shuai; Jougnot, Damien; Revil, André; Wu, Yuxin

    2017-04-01

    When pH and alkalinity increase, calcite frequently precipitates and hence modifies the petrophysical properties of porous media. The complex conductivity method can be used to directly monitor calcite precipitation in porous media because it is sensitive to the evolution of the mineralogy, pore structure and its connectivity. We have developed a mechanistic grain polarization model considering the electrochemical polarization of the Stern and diffuse layers surrounding calcite particles. Our complex conductivity model depends on the surface charge density of the Stern layer and on the electrical potential at the onset of the diffuse layer, which are computed using a basic Stern model of the calcite/water interface. The complex conductivity measurements of Wu et al. on a column packed with glass beads where calcite precipitation occurs are reproduced by our surface complexation and complex conductivity models. The evolution of the size and shape of calcite particles during the calcite precipitation experiment is estimated by our complex conductivity model. At the early stage of the calcite precipitation experiment, modelled particle sizes increase and calcite particles flatten with time because calcite crystals nucleate at the surface of glass beads and grow into larger calcite grains. At the later stage of the calcite precipitation experiment, modelled sizes and cementation exponents of calcite particles decrease with time because large calcite grains aggregate over multiple glass beads and only small calcite crystals polarize.

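    For readers unfamiliar with the observable, the sketch below evaluates a generic Cole-Cole spectrum, a common phenomenological description of complex conductivity. The paper itself uses a mechanistic Stern/diffuse-layer polarisation model, so this is only an illustration of the quantity being monitored; all parameter values are invented.

```python
import numpy as np

sigma_0, sigma_inf = 1.0e-2, 1.2e-2   # DC and high-frequency conductivity (S/m)
tau, c = 0.1, 0.6                     # relaxation time (s), Cole-Cole exponent

f = np.logspace(-2, 4, 61)            # frequency (Hz)
w = 2 * np.pi * f
# Cole-Cole: sigma(w) -> sigma_0 as w -> 0 and sigma_inf as w -> infinity.
sigma = sigma_inf + (sigma_0 - sigma_inf) / (1 + (1j * w * tau) ** c)

phase_mrad = 1e3 * np.angle(sigma)    # phase shift in milliradians
print(f"peak phase magnitude: {np.abs(phase_mrad).max():.1f} mrad "
      f"at f = {f[np.abs(phase_mrad).argmax()]:.2f} Hz")
```
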
  19. Musculoskeletal modelling of human ankle complex: Estimation of ankle joint moments.

    PubMed

    Jamwal, Prashant K; Hussain, Shahid; Tsoi, Yun Ho; Ghayesh, Mergen H; Xie, Sheng Quan

    2017-05-01

    A musculoskeletal model of the ankle complex is vital for enhancing the understanding of neuro-mechanical control of ankle motions, diagnosing ankle disorders and assessing subsequent treatments. Motions at the human ankle and foot, however, are complex due to simultaneous movements at two joints, the ankle joint and the subtalar joint. The musculoskeletal elements of the ankle complex, such as ligaments, muscles and tendons, have intricate arrangements and exhibit transient and nonlinear behaviour. This paper develops a musculoskeletal model of the ankle complex that takes the biaxial ankle structure into account. The model provides estimates of the overall mechanical characteristics (motion and moments) of the ankle complex through consideration of the forces applied along ligaments and muscle-tendon units. The dynamics of the ankle complex and its surrounding ligaments and muscle-tendon units are modelled and formulated into a state-space model to facilitate simulations. A graphical user interface was also developed during this research to incorporate visual anatomical information by converting it into quantitative coordinate information. Validation of the ankle model was carried out by comparing its outputs with those published in the literature as well as with experimental data obtained from an existing parallel ankle rehabilitation robot. Qualitative agreement was observed between the model and the measured data for both passive and active ankle motions in terms of displacements and moments. Copyright © 2017 Elsevier Ltd. All rights reserved.

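    A minimal state-space sketch of single-joint dynamics in the spirit of the formulation above: I*theta'' = tau - b*theta' - k*theta, written as x' = Ax + Bu with x = [angle, angular velocity] and integrated by forward Euler. Parameter values are illustrative, not identified ankle properties.

```python
import numpy as np

I, b, k = 0.05, 0.5, 5.0           # inertia, damping, stiffness (SI-like units)
A = np.array([[0.0, 1.0], [-k / I, -b / I]])
B = np.array([0.0, 1.0 / I])

dt, x = 0.001, np.zeros(2)         # time step (s), state [angle, velocity]
for step in range(2000):           # 2 s of forward-Euler integration
    tau = 1.0 if step < 1000 else 0.0      # step moment input (N*m)
    x = x + dt * (A @ x + B * tau)

print(f"angle after 2 s: {x[0]:.3f} rad, velocity: {x[1]:.3f} rad/s")
```
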
  20. Impact of gastrectomy procedural complexity on surgical outcomes and hospital comparisons.

    PubMed

    Mohanty, Sanjay; Paruch, Jennifer; Bilimoria, Karl Y; Cohen, Mark; Strong, Vivian E; Weber, Sharon M

    2015-08-01

    Most risk adjustment approaches adjust for patient comorbidities and the primary procedure. However, procedures done at the same time as the index case may increase operative risk and merit inclusion in adjustment models for fair hospital comparisons. Our objectives were to evaluate the impact of surgical complexity on postoperative outcomes and hospital comparisons in gastric cancer surgery. Patients who underwent gastric resection for cancer were identified from a large clinical dataset. Procedure complexity was characterized using secondary procedure CPT codes and work relative value units (RVUs). Regression models were developed to evaluate the association between complexity variables and outcomes. The impact of complexity adjustment on model performance and hospital comparisons was examined. Among 3,467 patients who underwent gastrectomy for adenocarcinoma, 2,171 operations were distal and 1,296 total. A secondary procedure was reported for 33% of distal gastrectomies and 59% of total gastrectomies. Six of 10 secondary procedures were associated with adverse outcomes. For example, patients who underwent a synchronous bowel resection had a higher risk of mortality (odds ratio [OR], 2.14; 95% CI, 1.07-4.29) and reoperation (OR, 2.09; 95% CI, 1.26-3.47). Model performance was slightly better for nearly all outcomes with complexity adjustment (mortality c-statistics: standard model, 0.853; secondary procedure model, 0.858; RVU model, 0.855). Hospital ranking did not change substantially after complexity adjustment. Surgical complexity variables are associated with adverse outcomes in gastrectomy, but complexity adjustment does not affect hospital rankings appreciably. Copyright © 2015 Elsevier Inc. All rights reserved.

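    The model comparison reported above can be sketched as follows: fit a risk-adjustment model with and without a procedure-complexity covariate (work RVUs here) and compare discrimination via the c-statistic, which equals the ROC AUC. The data are simulated and the effect sizes arbitrary, not the study's estimates.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
n = 5000
age = rng.normal(65, 10, n)
rvu = rng.gamma(2.0, 5.0, n)                 # synthetic secondary-procedure RVUs
logit = -7 + 0.05 * age + 0.04 * rvu         # arbitrary "true" risk model
death = rng.random(n) < 1 / (1 + np.exp(-logit))

base = LogisticRegression().fit(age.reshape(-1, 1), death)
full = LogisticRegression().fit(np.column_stack([age, rvu]), death)

auc_base = roc_auc_score(death, base.predict_proba(age.reshape(-1, 1))[:, 1])
auc_full = roc_auc_score(death,
                         full.predict_proba(np.column_stack([age, rvu]))[:, 1])
print(f"c-statistic: standard {auc_base:.3f} vs complexity-adjusted {auc_full:.3f}")
```
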
  1. Calibration of Complex Subsurface Reaction Models Using a Surrogate-Model Approach

    EPA Science Inventory

    Application of model assessment techniques to complex subsurface reaction models involves numerous difficulties, including non-trivial model selection, parameter non-uniqueness, and excessive computational burden. To overcome these difficulties, this study introduces SAMM (Simult...

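    A generic surrogate-calibration sketch in the spirit of the approach above: a Gaussian-process surrogate is trained on a few runs of an "expensive" model, and the parameter is then calibrated against the cheap surrogate. The expensive model here is a stand-in function; SAMM itself is considerably more elaborate.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from sklearn.gaussian_process import GaussianProcessRegressor

def expensive_model(theta):
    """Stand-in for a costly subsurface reaction simulation."""
    return np.exp(-theta)

obs = expensive_model(1.3) + 0.001           # noisy "field observation"

# Train the surrogate on a handful of expensive runs.
theta_train = np.linspace(0.0, 3.0, 8).reshape(-1, 1)
y_train = np.array([expensive_model(t[0]) for t in theta_train])
gp = GaussianProcessRegressor().fit(theta_train, y_train)

# Calibrate on the cheap surrogate instead of the expensive model.
misfit = lambda th: float((gp.predict(np.array([[th]]))[0] - obs) ** 2)
best = minimize_scalar(misfit, bounds=(0.0, 3.0), method="bounded")
print(f"calibrated theta ~ {best.x:.2f} (true value 1.3)")
```
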
  2. Foundations for Streaming Model Transformations by Complex Event Processing.

    PubMed

    Dávid, István; Ráth, István; Varró, Dániel

    2018-01-01

    Streaming model transformations represent a novel class of transformations to manipulate models whose elements are continuously produced or modified in high volume and with a rapid rate of change. Executing streaming transformations requires efficient techniques to recognize activated transformation rules over a live model and a potentially infinite stream of events. In this paper, we propose foundations of streaming model transformations by innovatively integrating incremental model query, complex event processing (CEP) and reactive (event-driven) transformation techniques. Complex event processing makes it possible to identify relevant patterns and sequences of events over an event stream. Our approach enables event streams to include model change events, which are automatically and continuously populated by incremental model queries. Furthermore, a reactive rule engine carries out transformations on identified complex event patterns. We provide an integrated domain-specific language with precise semantics for capturing complex event patterns and streaming transformations, together with an execution engine, all of which is now part of the Viatra reactive transformation framework. We demonstrate the feasibility of our approach with two case studies: one in an advanced model engineering workflow, and one in the context of on-the-fly gesture recognition.

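    To make the event-pattern idea tangible, the toy matcher below flags a "create followed by delete of the same element" pattern within a sliding window of a model-change event stream. It is a deliberately minimal illustration; Viatra's CEP language and semantics are far richer, and the event format here is invented.

```python
from collections import deque

WINDOW = 5                      # look-back window, in events
recent = deque(maxlen=WINDOW)   # most recent events seen

def on_event(event):
    """Flag create->delete sequences on the same element within the window."""
    kind, elem = event
    if kind == "delete":
        if any(k == "create" and e == elem for k, e in recent):
            print(f"pattern matched: create -> delete on {elem}")
    recent.append(event)

stream = [("create", "n1"), ("update", "n1"), ("delete", "n1"),
          ("create", "n2"), ("delete", "n3")]
for ev in stream:
    on_event(ev)                # prints once, for element n1
```
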
  3. The effects of numerical-model complexity and observation type on estimated porosity values

    USGS Publications Warehouse

    Starn, Jeffrey; Bagtzoglou, Amvrossios C.; Green, Christopher T.

    2015-01-01

    The relative merits of model complexity and of the types of observations employed in model calibration are compared. An existing groundwater flow model of the Salt Lake Valley, Utah (USA), is adapted for advective transport simulation, and effective porosity is adjusted until simulated tritium concentrations match concentrations in samples from wells. Two calibration approaches are used: a "complex" highly parameterized porosity field and a "simple" parsimonious model of porosity distribution. The use of an atmospheric tracer (tritium in this case) and of apparent ages (from tritium/helium) in model calibration is also discussed. Of the models tested, the complex model (with tritium concentrations and tritium/helium apparent ages) performs best. Although the tritium breakthrough curves simulated by the complex and simple models are generally similar, and there is value in the simple model, the complex model is supported by a more realistic porosity distribution and a greater number of estimable parameters. Culling the best-quality data did not lead to better calibration, possibly because of processes and aquifer characteristics that are not simulated. Despite many factors that contribute to shortcomings of both the models and the data, useful information is obtained from all the models evaluated. Although any particular prediction of tritium breakthrough may have large errors, overall the models mimic observed trends.

  4. Application of surface complexation models to anion adsorption by natural materials

    USDA-ARS?s Scientific Manuscript database

    Various chemical models of ion adsorption will be presented and discussed. Chemical models, such as surface complexation models, provide a molecular description of anion adsorption reactions using an equilibrium approach. Two such models, the constant capacitance model and the triple layer model w...

  5. Modeling of protein binary complexes using structural mass spectrometry data

    PubMed Central

    Kamal, J.K. Amisha; Chance, Mark R.

    2008-01-01

    In this article, we describe a general approach to modeling the structure of binary protein complexes using structural mass spectrometry data combined with molecular docking. In the first step, hydroxyl-radical-mediated oxidative protein footprinting is used to identify residues that experience conformational reorganization due to binding or that participate in the binding interface. In the second step, a three-dimensional atomic structure of the complex is derived by computational modeling. Homology modeling approaches are used to define the structures of the individual proteins if footprinting detects significant conformational reorganization as a function of complex formation. A three-dimensional model of the complex is constructed from these binary partners using the ClusPro program, which is composed of docking, energy filtering, and clustering steps. Footprinting data are used to incorporate constraints (positive and/or negative) in the docking step, and are also used to decide the type of energy filter (electrostatics or desolvation) in the subsequent energy-filtering step. Using this approach, we examine the structure of a number of binary complexes of monomeric actin and compare the results to crystallographic data. Based on docking alone, a number of competing models with widely varying structures are observed, one of which is likely to agree with crystallographic data. When the docking steps are guided by footprinting data, accurate models emerge as top scoring. We demonstrate this method with the actin/gelsolin segment-1 complex. We also provide a structural model for the actin/cofilin complex, which does not have a crystal or NMR structure, using this approach. PMID:18042684

  6. Brainlab: A Python Toolkit to Aid in the Design, Simulation, and Analysis of Spiking Neural Networks with the NeoCortical Simulator.

    PubMed

    Drewes, Rich; Zou, Quan; Goodman, Philip H

    2009-01-01

    Neuroscience modeling experiments often involve multiple complex neural network and cell model variants, complex input stimuli and input protocols, followed by complex data analysis. Coordinating all this complexity becomes a central difficulty for the experimenter. The Python programming language, along with its extensive library packages, has emerged as a leading "glue" tool for managing all sorts of complex programmatic tasks. This paper describes a toolkit called Brainlab, written in Python, that leverages Python's strengths for the task of managing the general complexity of neuroscience modeling experiments. Brainlab was also designed to overcome the major difficulties of working with the NCS (NeoCortical Simulator) environment in particular. Brainlab is an integrated model-building, experimentation, and data analysis environment for the powerful parallel spiking neural network simulator system NCS.

  7. Brainlab: A Python Toolkit to Aid in the Design, Simulation, and Analysis of Spiking Neural Networks with the NeoCortical Simulator

    PubMed Central

    Drewes, Rich; Zou, Quan; Goodman, Philip H.

    2008-01-01

    Neuroscience modeling experiments often involve multiple complex neural network and cell model variants, complex input stimuli and input protocols, followed by complex data analysis. Coordinating all this complexity becomes a central difficulty for the experimenter. The Python programming language, along with its extensive library packages, has emerged as a leading “glue” tool for managing all sorts of complex programmatic tasks. This paper describes a toolkit called Brainlab, written in Python, that leverages Python's strengths for the task of managing the general complexity of neuroscience modeling experiments. Brainlab was also designed to overcome the major difficulties of working with the NCS (NeoCortical Simulator) environment in particular. Brainlab is an integrated model-building, experimentation, and data analysis environment for the powerful parallel spiking neural network simulator system NCS. PMID:19506707

  8. Stochastic simulation of multiscale complex systems with PISKaS: A rule-based approach.

    PubMed

    Perez-Acle, Tomas; Fuenzalida, Ignacio; Martin, Alberto J M; Santibañez, Rodrigo; Avaria, Rodrigo; Bernardin, Alejandro; Bustos, Alvaro M; Garrido, Daniel; Dushoff, Jonathan; Liu, James H

    2018-03-29

    Computational simulation is a widely employed methodology to study the dynamic behavior of complex systems. Although common approaches are based either on ordinary differential equations or stochastic differential equations, these techniques make several assumptions which, when it comes to biological processes, could often lead to unrealistic models. Among others, model approaches based on differential equations entangle kinetics and causality, fail when complexity increases, separate knowledge from models, and assume that the average behavior of the population encompasses any individual deviation. To overcome these limitations, simulations based on the Stochastic Simulation Algorithm (SSA) appear as a suitable approach to model complex biological systems. In this work, we review three different models executed in PISKaS: a rule-based framework to produce multiscale stochastic simulations of complex systems. These models span multiple time and spatial scales, ranging from gene regulation up to game theory. In the first example, we describe a model of the core regulatory network of gene expression in Escherichia coli, highlighting the continuous model improvement capacities of PISKaS. The second example describes a hypothetical outbreak of the Ebola virus occurring in a compartmentalized environment resembling cities and highways. Finally, in the last example, we illustrate a stochastic model of the prisoner's dilemma, a common approach from the social sciences describing complex interactions involving trust within human populations. As a whole, these models demonstrate the capabilities of PISKaS, providing fertile scenarios in which to explore the dynamics of complex systems. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

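    The core of the SSA is compact enough to sketch directly: draw an exponential waiting time from the total propensity, then pick a reaction in proportion to its rate. The birth-death gene-expression toy model below only illustrates the loop; PISKaS applies the same algorithm to rule-based, spatially compartmentalised systems.

```python
import numpy as np

rng = np.random.default_rng(6)
k, g = 10.0, 0.5          # production and degradation rate constants
t, t_end, m = 0.0, 100.0, 0

while t < t_end:
    rates = np.array([k, g * m])          # propensities: 0 -> mRNA, mRNA -> 0
    total = rates.sum()
    t += rng.exponential(1.0 / total)     # time to the next reaction
    if rng.random() < rates[0] / total:
        m += 1                            # production event
    else:
        m -= 1                            # degradation event

print(f"mRNA copies at t ~ 100: {m} (theoretical mean k/g = {k / g})")
```
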
  9. A multi-element cosmological model with a complex space-time topology

    NASA Astrophysics Data System (ADS)

    Kardashev, N. S.; Lipatova, L. N.; Novikov, I. D.; Shatskiy, A. A.

    2015-02-01

    Wormhole models with a complex topology having one entrance and two exits into the same space-time of another universe are considered, as well as models with two entrances from the same space-time and one exit to another universe. These models are used to build a model of a multi-sheeted universe (a multi-element model of the "Multiverse") with a complex topology. Spherical symmetry is assumed in all the models. A Reissner-Nordström black-hole model having no singularity beyond the horizon is constructed. The strength of the central singularity of the black hole is analyzed.

  10. History matching of a complex epidemiological model of human immunodeficiency virus transmission by using variance emulation.

    PubMed

    Andrianakis, I; Vernon, I; McCreesh, N; McKinley, T J; Oakley, J E; Nsubuga, R N; Goldstein, M; White, R G

    2017-08-01

    Complex stochastic models are commonplace in epidemiology, but their utility depends on their calibration to empirical data. History matching is a (pre)calibration method that has been applied successfully to complex deterministic models. In this work, we adapt history matching to stochastic models, by emulating the variance in the model outputs, and therefore accounting for its dependence on the model's input values. The method proposed is applied to a real complex epidemiological model of human immunodeficiency virus in Uganda with 22 inputs and 18 outputs, and is found to increase the efficiency of history matching, requiring 70% of the time and 43% fewer simulator evaluations compared with a previous variant of the method. The insight gained into the structure of the human immunodeficiency virus model, and the constraints placed on it, are then discussed.

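    The core history-matching test is easy to state: an input x is implausible when I(x) = |z - E[f(x)]| / sqrt(total variance) exceeds a cutoff (3 by convention), where the total variance sums emulator, model-discrepancy and observation terms; in the stochastic extension described above, the emulated simulator variance enters as well. The numbers below are invented for illustration.

```python
import numpy as np

def implausibility(z, em_mean, em_var, model_disc_var, obs_var):
    """History-matching implausibility of a candidate input."""
    return np.abs(z - em_mean) / np.sqrt(em_var + model_disc_var + obs_var)

z = 0.23                                   # an observed output, e.g. prevalence
candidates = {"x_a": (0.21, 1e-4),         # (emulator mean, emulator variance)
              "x_b": (0.35, 4e-4)}
for name, (mu, var) in candidates.items():
    I = implausibility(z, mu, var, model_disc_var=1e-4, obs_var=1e-4)
    verdict = "retained" if I < 3 else "ruled out"
    print(f"{name}: I = {I:.1f} -> {verdict}")
```
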
  11. Acquisition of Complex Systemic Thinking: Mental Models of Evolution

    ERIC Educational Resources Information Center

    d'Apollonia, Sylvia T.; Charles, Elizabeth S.; Boyd, Gary M.

    2004-01-01

    We investigated the impact of introducing college students to complex adaptive systems on their subsequent mental models of evolution compared to those of students taught in the same manner but with no reference to complex systems. The students' mental models (derived from similarity ratings of 12 evolutionary terms using the pathfinder algorithm)…

  12. Designing an Educational Game with Ten Steps to Complex Learning

    ERIC Educational Resources Information Center

    Enfield, Jacob

    2012-01-01

    Few instructional design (ID) models exist which are specific for developing educational games. Moreover, those extant ID models have not been rigorously evaluated. No ID models were found which focus on educational games with complex learning objectives. "Ten Steps to Complex Learning" (TSCL) is based on the four component instructional…

  13. Designing novel cellulase systems through agent-based modeling and global sensitivity analysis.

    PubMed

    Apte, Advait A; Senger, Ryan S; Fong, Stephen S

    2014-01-01

    Experimental techniques allow engineering of biological systems to modify functionality; however, there still remains a need to develop tools to prioritize targets for modification. In this study, agent-based modeling (ABM) was used to build stochastic models of complexed and non-complexed cellulose hydrolysis, including enzymatic mechanisms for endoglucanase, exoglucanase, and β-glucosidase activity. Modeling results were consistent with experimental observations of higher efficiency in complexed systems than non-complexed systems and established relationships between specific cellulolytic mechanisms and overall efficiency. Global sensitivity analysis (GSA) of model results identified key parameters for improving overall cellulose hydrolysis efficiency including: (1) the cellulase half-life, (2) the exoglucanase activity, and (3) the cellulase composition. Overall, the following parameters were found to significantly influence cellulose consumption in a consolidated bioprocess (CBP): (1) the glucose uptake rate of the culture, (2) the bacterial cell concentration, and (3) the nature of the cellulase enzyme system (complexed or non-complexed). Broadly, these results demonstrate the utility of combining modeling and sensitivity analysis to identify key parameters and/or targets for experimental improvement.
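
    A brute-force version of the variance-based global sensitivity analysis named in the record can be sketched as follows; the cellulose-hydrolysis ABM itself is not reproduced, so a simple analytic stand-in function with three inputs takes its place.

        import numpy as np

        # Brute-force first-order Sobol index: S_i = Var(E[Y | X_i]) / Var(Y).
        # The ABM is not reproduced; `model` is an arbitrary stand-in with three
        # inputs (loosely: half-life, exoglucanase activity, composition).
        def model(x):
            return 2.0 * x[..., 0] + x[..., 1] ** 2 + 0.1 * x[..., 2]

        def first_order_sobol(model, dim, n_outer=200, n_inner=200, seed=0):
            rng = np.random.default_rng(seed)
            base = rng.random((n_outer * n_inner, dim))
            var_total = model(base).var()
            indices = []
            for i in range(dim):
                xi = rng.random(n_outer)                 # fixed values of X_i
                x = rng.random((n_outer, n_inner, dim))  # resample the other inputs
                x[..., i] = xi[:, None]
                cond_means = model(x).mean(axis=1)       # E[Y | X_i = xi]
                indices.append(cond_means.var() / var_total)
            return indices

        print(first_order_sobol(model, dim=3))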

  14. Designing novel cellulase systems through agent-based modeling and global sensitivity analysis

    PubMed Central

    Apte, Advait A; Senger, Ryan S; Fong, Stephen S

    2014-01-01

    Experimental techniques allow engineering of biological systems to modify functionality; however, there still remains a need to develop tools to prioritize targets for modification. In this study, agent-based modeling (ABM) was used to build stochastic models of complexed and non-complexed cellulose hydrolysis, including enzymatic mechanisms for endoglucanase, exoglucanase, and β-glucosidase activity. Modeling results were consistent with experimental observations of higher efficiency in complexed systems than non-complexed systems and established relationships between specific cellulolytic mechanisms and overall efficiency. Global sensitivity analysis (GSA) of model results identified key parameters for improving overall cellulose hydrolysis efficiency including: (1) the cellulase half-life, (2) the exoglucanase activity, and (3) the cellulase composition. Overall, the following parameters were found to significantly influence cellulose consumption in a consolidated bioprocess (CBP): (1) the glucose uptake rate of the culture, (2) the bacterial cell concentration, and (3) the nature of the cellulase enzyme system (complexed or non-complexed). Broadly, these results demonstrate the utility of combining modeling and sensitivity analysis to identify key parameters and/or targets for experimental improvement. PMID:24830736

  15. Predictive model of complexity in early palliative care: a cohort of advanced cancer patients (PALCOM study).

    PubMed

    Tuca, Albert; Gómez-Martínez, Mónica; Prat, Aleix

    2018-01-01

    The model of early palliative care (PC) integrated in oncology is based on shared care from diagnosis to the end of life and is mainly focused on patients with greater complexity. However, there is no definition of, or tools to evaluate, PC complexity. The objectives of the study were to identify the factors influencing determination of the level of complexity, propose predictive models, and build a PC complexity scale. We performed a prospective, observational, multicenter study in a cohort of advanced cancer patients with an estimated prognosis ≤ 6 months. An ad hoc structured evaluation including socio-demographic and clinical data, symptom burden, functional and cognitive status, psychosocial problems, and existential-ethical dilemmas was recorded systematically. According to this multidimensional evaluation, investigators classified patients as having high, medium, or low palliative complexity, associated with the need for basic or specialized PC. Logistic regression was used to identify the variables influencing determination of the level of PC complexity and to explore predictive models. We included 324 patients; 41% were classified as having high PC complexity and 42.9% as medium, both levels being associated with specialized PC. The variables influencing determination of PC complexity were as follows: high symptom burden (OR 3.19, 95% CI 1.72-6.17), difficult pain (OR 2.81, 95% CI 1.64-4.9), functional status (OR 0.99, 95% CI 0.98-0.9), and social-ethical-existential risk factors (OR 3.11, 95% CI 1.73-5.77). Logistic analysis of the variables allowed construction of a complexity model and structured scales (PALCOM 1 and 2) with high predictive value (AUC ROC 76%). This study provides a new model and tools to assess complexity in palliative care, which may be very useful for managing referral to specialized PC services and agreeing on the intensity of their intervention in a model of early shared care integrated in oncology.
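
    The logistic-regression workflow described here is straightforward to reproduce in outline. The sketch below fits such a model and reports odds ratios and AUC on synthetic stand-in data; the predictor names echo the PALCOM variables, but all values and coefficients are invented.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        # Synthetic stand-in data: 324 patients, four predictors loosely named
        # after the PALCOM variables (values and coefficients are invented).
        rng = np.random.default_rng(42)
        n = 324
        X = np.column_stack([
            rng.integers(0, 2, n),     # high symptom burden (yes/no)
            rng.integers(0, 2, n),     # difficult pain (yes/no)
            rng.uniform(30, 100, n),   # functional status score
            rng.integers(0, 2, n),     # social-ethical-existential risk factors
        ])
        logit = 1.2 * X[:, 0] + 1.0 * X[:, 1] - 0.03 * X[:, 2] + 1.1 * X[:, 3]
        y = rng.random(n) < 1 / (1 + np.exp(-logit))   # simulated complexity label

        clf = LogisticRegression().fit(X, y)
        print("odds ratios:", np.exp(clf.coef_).round(2))
        print("AUC:", round(roc_auc_score(y, clf.predict_proba(X)[:, 1]), 3))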

  16. On Using Meta-Modeling and Multi-Modeling to Address Complex Problems

    ERIC Educational Resources Information Center

    Abu Jbara, Ahmed

    2013-01-01

    Models, created using different modeling techniques, usually serve different purposes and provide unique insights. While each modeling technique might be capable of answering specific questions, complex problems require multiple models interoperating to complement/supplement each other; we call this Multi-Modeling. To address the syntactic and…

  17. Bursting Transition Dynamics Within the Pre-Bötzinger Complex

    NASA Astrophysics Data System (ADS)

    Duan, Lixia; Chen, Xi; Tang, Xuhui; Su, Jianzhong

    The pre-Bötzinger complex of the mammalian brain stem plays a crucial role in the generation of respiratory rhythms. Neurons within the pre-Bötzinger complex have been found experimentally to yield different firing activities. In this paper, we study the spiking and bursting activities related to respiratory rhythms in the pre-Bötzinger complex based on a mathematical model proposed by Butera. After deriving a one-dimensional first-return map from the dynamical characteristics of the differential equations, we investigate the different bursting patterns of pre-Bötzinger complex neurons and obtain conditions for the transitions between them. These analytical results are verified through numerical simulations. We conclude that the one-dimensional map reproduces the rhythmic patterns of the Butera model and can be used as a simpler modeling tool to study fast-slow models such as the pre-Bötzinger complex neural circuit.

  18. Template-based structure modeling of protein-protein interactions

    PubMed Central

    Szilagyi, Andras; Zhang, Yang

    2014-01-01

    The structure of protein-protein complexes can be constructed by using the known structure of other protein complexes as a template. The complex structure templates are generally detected either by homology-based sequence alignments or, given the structure of monomer components, by structure-based comparisons. Critical improvements have been made in recent years by utilizing interface recognition and by recombining monomer and complex template libraries. Encouraging progress has also been witnessed in genome-wide applications of template-based modeling, with modeling accuracy comparable to high-throughput experimental data. Nevertheless, bottlenecks exist due to the incompleteness of the protein-protein complex structure library and the lack of methods for distant homologous template identification and full-length complex structure refinement. PMID:24721449

  19. The Naïve Overfitting Index Selection (NOIS): A new method to optimize model complexity for hyperspectral data

    NASA Astrophysics Data System (ADS)

    Rocha, Alby D.; Groen, Thomas A.; Skidmore, Andrew K.; Darvishzadeh, Roshanak; Willemen, Louise

    2017-11-01

    The growing number of narrow spectral bands in hyperspectral remote sensing improves the capacity to describe and predict biological processes in ecosystems. But it also poses a challenge for fitting empirical models based on such high-dimensional data, which often contain correlated and noisy predictors. As the sample sizes used to train and validate empirical models do not seem to be increasing at the same rate, overfitting has become a serious concern. Overly complex models lead to overfitting by capturing more than the underlying relationship, fitting random noise in the data. Many regression techniques claim to overcome these problems by using different strategies to constrain complexity, such as limiting the number of terms in the model, creating latent variables, or shrinking parameter coefficients. This paper proposes a new method, named Naïve Overfitting Index Selection (NOIS), which makes use of artificially generated spectra to quantify the relative model overfitting and to select an optimal model complexity supported by the data. The robustness of this new method is assessed by comparing it to a traditional model selection based on cross-validation. The optimal model complexity is determined for seven different regression techniques, including partial least squares regression, support vector machines, artificial neural networks and tree-based regressions, using five hyperspectral datasets. The NOIS method selects less complex models, which present accuracies similar to the cross-validation method. The NOIS method reduces the chance of overfitting, thereby avoiding models that present accurate predictions that are only valid for the data used and too complex to make inferences about the underlying process.
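
    The core NOIS idea, using artificial spectra to expose overfitting, can be caricatured in a few lines: regress a response on purely random "spectra" and watch the apparent training fit grow with model complexity. The procedure below is a simplified illustration, not the paper's exact algorithm.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        # Caricature of the NOIS idea: regress a real-valued response on purely
        # artificial (random) "spectra"; any apparent training fit is overfitting.
        # The index here is simply training R^2 vs. model complexity.
        rng = np.random.default_rng(0)
        n_samples, n_bands = 50, 200
        fake_spectra = rng.normal(size=(n_samples, n_bands))  # artificial spectra
        y = rng.normal(size=n_samples)                        # response unrelated to X

        for n_comp in (1, 2, 5, 10, 20):
            pls = PLSRegression(n_components=n_comp).fit(fake_spectra, y)
            print(f"{n_comp:2d} components -> apparent R^2 = {pls.score(fake_spectra, y):.2f}")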

  20. Modeling the Internet of Things, Self-Organizing and Other Complex Adaptive Communication Networks: A Cognitive Agent-Based Computing Approach.

    PubMed

    Laghari, Samreen; Niazi, Muaz A

    2016-01-01

    Computer networks have a tendency to grow at an unprecedented scale. Modern networks involve not only computers but also a wide variety of other interconnected devices, ranging from mobile phones to other household items fitted with sensors. This vision of the "Internet of Things" (IoT) implies an inherent difficulty in modeling problems. It is practically impossible to implement and test all scenarios for large-scale and complex adaptive communication networks as part of Complex Adaptive Communication Networks and Environments (CACOONS). The goal of this study is to explore the use of agent-based modeling as part of the Cognitive Agent-based Computing (CABC) framework to model a complex communication network problem. We use Exploratory Agent-based Modeling (EABM), as part of the CABC framework, to develop an autonomous multi-agent architecture for managing carbon footprint in a corporate network. To evaluate the application of complexity in practical scenarios, we have also introduced a company-defined computer usage policy. The experiments demonstrated two important results: first, a CABC-based modeling approach such as agent-based modeling can be an effective approach to modeling complex problems in the domain of IoT; second, the specific problem of managing the carbon footprint can be solved using a multi-agent system approach.

  1. Modeling and complexity of stochastic interacting Lévy type financial price dynamics

    NASA Astrophysics Data System (ADS)

    Wang, Yiduan; Zheng, Shenzhou; Zhang, Wei; Wang, Jun; Wang, Guochao

    2018-06-01

    In an attempt to reproduce and investigate the nonlinear dynamics of security markets, a novel nonlinear random interacting price dynamics, considered as a Lévy type process, is developed and investigated by combining lattice-oriented percolation and Potts dynamics, which describe the instinctive random fluctuation and the fluctuation caused by the spread of the investors' trading attitudes, respectively. To better understand the fluctuation complexity properties of the proposed model, complexity analyses of the random logarithmic price returns and the corresponding volatility series are performed, including power-law distribution, Lempel-Ziv complexity and fractional sample entropy. In order to verify the rationality of the proposed model, corresponding studies of actual security market datasets are also carried out for comparison. The empirical results reveal that this financial price model can reproduce some important complexity features of actual security markets to some extent. The complexity of returns decreases as the parameters γ1 and β increase; furthermore, the volatility series exhibit lower complexity than the return series.
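
    Of the complexity measures listed, Lempel-Ziv complexity is the easiest to reproduce. A common implementation (the Kaspar-Schuster variant of LZ76) applied to a sign-binarized return series might look as follows; the binarization choice is an assumption, not taken from the record.

        import numpy as np

        def lz76(sequence):
            """Kaspar-Schuster (1987) Lempel-Ziv complexity of a symbol sequence."""
            s = list(sequence)
            n = len(s)
            i, k, l = 0, 1, 1
            k_max, c = 1, 1
            while True:
                if s[i + k - 1] == s[l + k - 1]:
                    k += 1
                    if l + k > n:
                        c += 1
                        break
                else:
                    k_max = max(k, k_max)
                    i += 1
                    if i == l:          # exhausted all starting points: new component
                        c += 1
                        l += k_max
                        if l + 1 > n:
                            break
                        i, k, k_max = 0, 1, 1
                    else:
                        k = 1
            return c

        # Binarize a toy return series by sign (one common choice).
        rng = np.random.default_rng(1)
        returns = rng.standard_normal(2000)
        symbols = (returns > 0).astype(int)
        print("LZ complexity:", lz76(symbols))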

  2. Regulation of the protein-conducting channel by a bound ribosome

    PubMed Central

    Gumbart, James; Trabuco, Leonardo G.; Schreiner, Eduard; Villa, Elizabeth; Schulten, Klaus

    2009-01-01

    During protein synthesis, it is often necessary for the ribosome to form a complex with a membrane-bound channel, the SecY/Sec61 complex, in order to translocate nascent proteins across a cellular membrane. Structural data on the ribosome-channel complex are currently limited to low-resolution cryo-electron microscopy maps, including one showing a bacterial ribosome bound to a monomeric SecY complex. Using that map along with available atomic-level models of the ribosome and SecY, we have determined, through molecular dynamics flexible fitting (MDFF), an atomic-resolution model of the ribosome-channel complex. We characterized computationally the sites of ribosome-SecY interaction within the complex and determined the effect of ribosome binding on the SecY channel. We also constructed a model of a ribosome in complex with a SecY dimer by adding a second copy of SecY to the MDFF-derived model. The study involved 2.7-million-atom simulations totaling nearly 50 ns. PMID:19913480

  3. Are our dynamic water quality models too complex? A comparison of a new parsimonious phosphorus model, SimplyP, and INCA-P

    NASA Astrophysics Data System (ADS)

    Jackson-Blake, L. A.; Sample, J. E.; Wade, A. J.; Helliwell, R. C.; Skeffington, R. A.

    2017-07-01

    Catchment-scale water quality models are increasingly popular tools for exploring the potential effects of land management, land use change and climate change on water quality. However, the dynamic, catchment-scale nutrient models in common usage are complex, with many uncertain parameters requiring calibration, limiting their usability and robustness. A key question is whether this complexity is justified. To explore this, we developed a parsimonious phosphorus model, SimplyP, incorporating a rainfall-runoff model and a biogeochemical model able to simulate daily streamflow, suspended sediment, and particulate and dissolved phosphorus dynamics. The model's complexity was compared to one popular nutrient model, INCA-P, and the performance of the two models was compared in a small rural catchment in northeast Scotland. For three land use classes, less than six SimplyP parameters must be determined through calibration, the rest may be based on measurements, while INCA-P has around 40 unmeasurable parameters. Despite substantially simpler process-representation, SimplyP performed comparably to INCA-P in both calibration and validation and produced similar long-term projections in response to changes in land management. Results support the hypothesis that INCA-P is overly complex for the study catchment. We hope our findings will help prompt wider model comparison exercises, as well as debate among the water quality modeling community as to whether today's models are fit for purpose. Simpler models such as SimplyP have the potential to be useful management and research tools, building blocks for future model development (prototype code is freely available), or benchmarks against which more complex models could be evaluated.
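
    To make the notion of a parsimonious model concrete, the sketch below implements a single linear-reservoir rainfall-runoff bucket with two parameters. It mimics the parsimonious style attributed to SimplyP but does not reproduce SimplyP's actual equations.

        import numpy as np

        # One-bucket linear-reservoir rainfall-runoff sketch (not SimplyP itself):
        # dS/dt = P - E - Q with Q = S / tau; daily explicit time-stepping.
        def bucket_model(precip, pet, tau=10.0, s0=50.0):
            s, flows = s0, []
            for p, e in zip(precip, pet):
                q = s / tau                       # linear-reservoir outflow (mm/day)
                s = max(s + p - e - q, 0.0)       # water balance, storage floored at 0
                flows.append(q)
            return np.array(flows)

        rng = np.random.default_rng(7)
        precip = rng.exponential(3.0, 365)        # synthetic daily rainfall (mm)
        pet = np.full(365, 1.5)                   # constant potential ET (mm/day)
        q = bucket_model(precip, pet)
        print(f"mean simulated flow: {q.mean():.2f} mm/day")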

  4. 2.5D complex resistivity modeling and inversion using unstructured grids

    NASA Astrophysics Data System (ADS)

    Xu, Kaijun; Sun, Jie

    2016-04-01

    The complex resistivity characteristics of rocks and ores have long been recognized. The Cole-Cole model (CCM) is generally used to describe complex resistivity, and it has been shown that the electrical anomaly of a geologic body can be quantitatively estimated from CCM parameters such as the direct resistivity (ρ0), chargeability (m), time constant (τ) and frequency dependence (c). It is therefore very important to obtain the complex parameters of a geologic body. Because it is difficult to approximate complex structures and terrain with a traditional rectangular grid, we use an adaptive finite-element algorithm on unstructured grids for forward modeling of frequency-domain 2.5D complex resistivity and implement the conjugate gradient algorithm for the inversion, in order to enhance the numerical accuracy and rationality of modeling and inversion. An adaptive finite-element method is applied to solve the 2.5D complex resistivity forward problem for a horizontal electric dipole source. First, the CCM is introduced into Maxwell's equations to calculate the complex resistivity electromagnetic fields. Next, the pseudo-delta function is used to distribute the electric dipole source. The electromagnetic fields are then expressed in terms of the primary fields caused by the layered structure and the secondary fields caused by the anomalous conductivity of inhomogeneities. Finally, we calculate the electromagnetic field response of complex geoelectric structures such as anticlines, synclines and faults. The modeling results show that adaptive finite-element methods can automatically improve mesh generation and simulate complex geoelectric models using unstructured grids. The 2.5D complex resistivity inversion is implemented with the conjugate gradient algorithm, which does not require forming the sensitivity matrix explicitly but only the products of the sensitivity matrix, or its transpose, with a vector. In addition, the inversion target zones are discretized with fine grids and the background zones with coarse grids, which reduces the number of inversion cells and is very helpful for improving computational efficiency. The inversion results verify the validity and stability of the conjugate gradient inversion algorithm. The theoretical calculations indicate that modeling and inversion of 2.5D complex resistivity using unstructured grids are feasible. Unstructured grids can improve the accuracy of modeling, but inversion with a large number of grid cells is extremely time-consuming, so parallel computation of the inversion is necessary. Acknowledgments: We acknowledge the support of the National Natural Science Foundation of China (41304094).
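
    The Cole-Cole model referred to above has a standard closed form, ρ(ω) = ρ0[1 − m(1 − 1/(1 + (iωτ)^c))], which translates directly into code; the parameter values below are illustrative.

        import numpy as np

        # Cole-Cole complex resistivity with direct resistivity rho0,
        # chargeability m, time constant tau and frequency dependence c.
        # Parameter values are illustrative placeholders.
        def cole_cole(freq_hz, rho0=100.0, m=0.3, tau=0.01, c=0.5):
            omega = 2 * np.pi * np.asarray(freq_hz)
            return rho0 * (1 - m * (1 - 1 / (1 + (1j * omega * tau) ** c)))

        freqs = np.logspace(-2, 4, 7)   # 0.01 Hz .. 10 kHz
        for f, r in zip(freqs, cole_cole(freqs)):
            print(f"{f:10.2f} Hz  |rho| = {abs(r):7.2f}  "
                  f"phase = {np.angle(r, deg=True):6.2f} deg")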

  5. Complex networks generated by the Penna bit-string model: Emergence of small-world and assortative mixing

    NASA Astrophysics Data System (ADS)

    Li, Chunguang; Maini, Philip K.

    2005-10-01

    The Penna bit-string model successfully encompasses many phenomena of population evolution, including inheritance, mutation, evolution, and aging. If we consider social interactions among individuals in the Penna model, the population will form a complex network. In this paper, we first modify the Verhulst factor to control only the birth rate, and introduce activity-based preferential reproduction of offspring in the Penna model. The social interactions among individuals are generated by both inheritance and activity-based preferential increase. Then we study the properties of the complex network generated by the modified Penna model. We find that the resulting complex network has a small-world effect and the assortative mixing property.
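
    A minimal, single-population version of the Penna bit-string model (without the network and activity-based extensions the record introduces) can be written compactly; note that, in line with the record's modification, the Verhulst factor below limits only the birth rate.

        import random

        # Minimal Penna bit-string aging model. Genome: 32 bits; bit a set means
        # a deleterious mutation expressed from age a onward. Death occurs when
        # the number of expressed mutations reaches T. Parameters are illustrative.
        GENOME_BITS, T, R, MUT, N_MAX = 32, 3, 8, 2, 10_000

        def step(pop, rng):
            survivors, newborns = [], []
            verhulst = 1.0 - len(pop) / N_MAX      # limits only the birth rate
            for genome, age in pop:
                age += 1
                if age >= GENOME_BITS:
                    continue
                expressed = bin(genome & ((1 << age) - 1)).count("1")
                if expressed >= T:
                    continue                        # death by accumulated mutations
                survivors.append((genome, age))
                if age >= R and rng.random() < verhulst:
                    child = genome
                    for _ in range(MUT):            # MUT new mutations per birth
                        child |= 1 << rng.randrange(GENOME_BITS)
                    newborns.append((child, 0))
            return survivors + newborns

        rng = random.Random(0)
        pop = [(0, 0)] * 1000
        for t in range(200):
            pop = step(pop, rng)
        print("population after 200 steps:", len(pop))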

  6. Using SEM to Analyze Complex Survey Data: A Comparison between Design-Based Single-Level and Model-Based Multilevel Approaches

    ERIC Educational Resources Information Center

    Wu, Jiun-Yu; Kwok, Oi-man

    2012-01-01

    Both ad-hoc robust sandwich standard error estimators (design-based approach) and multilevel analysis (model-based approach) are commonly used for analyzing complex survey data with nonindependent observations. Although these 2 approaches perform equally well on analyzing complex survey data with equal between- and within-level model structures…

  7. Contingency Detection in a Complex World: A Developmental Model and Implications for Atypical Development

    ERIC Educational Resources Information Center

    Northrup, Jessie Bolz

    2017-01-01

    The present article proposes a new developmental model of how young infants adapt and respond to complex contingencies in their environment, and how this influences development. The model proposes that typically developing infants adjust to an increasingly complex environment in ways that make it easier for them to allocate limited attentional…

  8. Scientific and technical complex for modeling, researching and testing of rocket-space vehicles’ electric power installations

    NASA Astrophysics Data System (ADS)

    Bezruchko, Konstantin; Davidov, Albert

    2009-01-01

    This article describes the scientific and technical complex for modeling, researching and testing rocket-space vehicles' electric power installations that was created in the Power Source Laboratory of the National Aerospace University "KhAI". This scientific and technical complex makes it possible to replace full-scale tests with model tests and to reduce the financial and time costs of modeling, researching and testing rocket-space vehicles' power installations. Using the complex, the problems of designing and researching rocket-space vehicles' power installations can be solved efficiently; it also supports experimental research into physical processes and tests of solar and chemical batteries of rocket-space complexes and space vehicles. The scientific and technical complex also allows accelerated testing, diagnostics, lifetime control and restoration of chemical accumulators for rocket-space vehicles' power supply systems.

  9. Food-web complexity emerging from ecological dynamics on adaptive networks.

    PubMed

    Garcia-Domingo, Josep L; Saldaña, Joan

    2007-08-21

    Food webs are complex networks describing trophic interactions in ecological communities. Since Robert May's seminal work on random structured food webs, the complexity-stability debate has been a central issue in ecology: does network complexity increase or decrease food-web persistence? A multi-species predator-prey model incorporating adaptive predation shows that the action of ecological dynamics on the topology of a food web (whose initial configuration is generated either by the cascade model or by the niche model) renders, when a significant fraction of adaptive predators is present, hyperbolic complexity-persistence relationships similar to those observed in empirical food webs. It is also shown that the apparent positive relation between complexity and persistence in food webs generated under the cascade model, which has been pointed out in previous papers, disappears when the final connectance is used instead of the initial one to explain species persistence.
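
    The niche model mentioned as an initial configuration has a compact generative recipe (Williams & Martinez 2000), sketched below; the species count and target connectance are arbitrary choices.

        import numpy as np

        # Niche model: each species gets a niche value n_i ~ U(0,1), a feeding
        # range r_i = x * n_i with x ~ Beta(1, b), and a range center
        # c_i ~ U(r_i/2, n_i); species i eats every j whose n_j lies in its range.
        def niche_model(n_species=30, connectance=0.15, seed=0):
            rng = np.random.default_rng(seed)
            b = 1.0 / (2.0 * connectance) - 1.0      # so that E[x] = 2C
            n = np.sort(rng.random(n_species))
            r = rng.beta(1.0, b, n_species) * n
            c = rng.uniform(r / 2.0, n)
            adjacency = (n[None, :] >= (c - r / 2.0)[:, None]) & \
                        (n[None, :] <= (c + r / 2.0)[:, None])
            return adjacency

        A = niche_model()
        print("realized connectance:", A.sum() / A.size)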

  10. An overview of structurally complex network-based modeling of public opinion in the “We the Media” era

    NASA Astrophysics Data System (ADS)

    Wang, Guanghui; Wang, Yufei; Liu, Yijun; Chi, Yuxue

    2018-05-01

    As the transmission of public opinion on the Internet in the “We the Media” era tends to be supraterritorial, concealed and complex, the traditional “point-to-surface” transmission of information has been transformed into “point-to-point” reciprocal transmission. A foundation for studies of the evolution of public opinion and its transmission on the Internet in the “We the Media” era can be laid by converting the massive amounts of fragmented information on public opinion that exists on “We the Media” platforms into structurally complex networks of information. This paper describes studies of structurally complex network-based modeling of public opinion on the Internet in the “We the Media” era from the perspective of the development and evolution of complex networks. The progress that has been made in research projects relevant to the structural modeling of public opinion on the Internet is comprehensively summarized. The review considers aspects such as regular grid-based modeling of the rules that describe the propagation of public opinion on the Internet in the “We the Media” era, social network modeling, dynamic network modeling, and supernetwork modeling. Moreover, an outlook for future studies that address complex network-based modeling of public opinion on the Internet is put forward as a summary from the perspective of modeling conducted using the techniques mentioned above.

  11. Using iMCFA to Perform the CFA, Multilevel CFA, and Maximum Model for Analyzing Complex Survey Data.

    PubMed

    Wu, Jiun-Yu; Lee, Yuan-Hsuan; Lin, John J H

    2018-01-01

    To construct CFA, MCFA, and maximum MCFA with LISREL v.8 and below, we provide iMCFA (integrated Multilevel Confirmatory Analysis) to examine the potential multilevel factorial structure in complex survey data. Modeling multilevel structure for complex survey data is complicated because building a multilevel model is not an infallible statistical strategy unless the hypothesized model is close to the real data structure. Methodologists have suggested using different modeling techniques to investigate the potential multilevel structure of survey data. Using iMCFA, researchers can visually set the between- and within-level factorial structure to fit MCFA, CFA and/or MAX MCFA models for complex survey data. iMCFA can then yield between- and within-level variance-covariance matrices, calculate intraclass correlations, perform the analyses and generate the outputs for the respective models. The summary of the analytical outputs from LISREL is gathered and tabulated for further model comparison and interpretation. iMCFA also provides LISREL syntax of the different models for researchers' future use. An empirical and a simulated multilevel dataset, with complex and simple structures in the within or between level, were used to illustrate the usability and effectiveness of the iMCFA procedure for analyzing complex survey data. The analytic results of iMCFA using Muthén's limited information estimator were compared with those of Mplus using Full Information Maximum Likelihood regarding the effectiveness of different estimation methods.

  12. Robust Decision Making: The Cognitive and Computational Modeling of Team Problem Solving for Decision Making under Complex and Dynamic Conditions

    DTIC Science & Technology

    2015-07-14

    AFRL-OSR-VA-TR-2015-0202. Robust Decision Making: The Cognitive and Computational Modeling of Team Problem Solving for Decision Making Under Complex and Dynamic Conditions (grant FA9550-12-1…). The effort models team functioning as teams solve complex problems and proposes the means to improve the performance of teams under changing or adversarial conditions.

  13. Surface complexation modeling

    USDA-ARS?s Scientific Manuscript database

    Adsorption-desorption reactions are important processes that affect the transport of contaminants in the environment. Surface complexation models are chemical models that can account for the effects of variable chemical conditions, such as pH, on adsorption reactions. These models define specific ...
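
    As a concrete illustration of the mass-action core of such models, the sketch below computes a pH-dependent sorption edge for a single-site, non-electrostatic surface complexation reaction; the constants are hypothetical, and real surface complexation models add electrostatic correction terms omitted here.

        import numpy as np

        # Non-electrostatic, single-site sketch: SOH + M2+ <=> SOM+ + H+ with
        # mass-action constant K. With sites in large excess over the metal,
        # the sorbed fraction is f = K*[SOH]/[H+] / (1 + K*[SOH]/[H+]).
        # log_K and the site concentration are hypothetical, not fitted values.
        log_K = -2.0          # mass-action constant (hypothetical)
        sites = 1e-3          # free surface site concentration, mol/L (hypothetical)

        for pH in range(3, 9):
            h = 10.0 ** (-pH)
            ratio = (10.0 ** log_K) * sites / h
            f = ratio / (1.0 + ratio)
            print(f"pH {pH}: fraction sorbed = {f:.3f}")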

  14. Research on application of intelligent computation based LUCC model in urbanization process

    NASA Astrophysics Data System (ADS)

    Chen, Zemin

    2007-06-01

    Global change study is an interdisciplinary and comprehensive research activity with international cooperation that arose in the 1980s. The interaction between land use and cover change (LUCC), as a research field at the crossing of natural and social science, has become one of the core subjects of global change study as well as its front edge and focus. It is necessary to study land use and cover change in the urbanization process and to build an analog model of urbanization in order to describe, simulate and analyze the dynamic behaviors of urban development change and to understand the basic characteristics and rules of the urbanization process. This has positive practical and theoretical significance for formulating urban and regional sustainable development strategies. The effect of urbanization on land use and cover change is mainly embodied in changes to the quantity structure and space structure of urban space, and the LUCC model of the urbanization process has become an important research subject of urban geography and urban planning. In this paper, building on previous research achievements, the author systematically analyzes research on land use/cover change in the urbanization process using the theories of complexity science and intelligent computation; builds a model for simulating and forecasting the dynamic evolution of urban land use and cover change on the basis of the cellular automata (CA) model of complexity science and multi-agent theory; and expands the Markov model, the traditional CA model and the agent model, introducing complexity science and intelligent computation theory into LUCC research to build an intelligent-computation-based LUCC model for analog research on land use and cover change in urbanization, together with case studies. The concrete contents are as follows: 1. Complexity of LUCC research in the urbanization process. The urbanization process is analyzed in combination with the contents of complexity science and the conception of complexity features to reveal the complexity of LUCC research in the urbanization process. The urban space system is a complex economic and cultural phenomenon as well as a social process; it is the comprehensive characterization of urban society, economy and culture, and a complex space system formed by society, economy and nature. It has dissipative structure characteristics such as openness, dynamics, self-organization and non-equilibrium. Traditional models cannot simulate these social, economic and natural driving forces of LUCC, including the main feedback relations from LUCC to the driving forces. 2. Establishment of an extended Markov model for LUCC analog research in the urbanization process. First, traditional LUCC research models are used to compute the rate of regional land use change by calculating the dynamic degree, exploitation degree and consumption degree of land use; then the theory of fuzzy sets is used to rewrite the traditional Markov model, a land use structure transfer matrix is established, the dynamic change and development trend of land use are forecast and analyzed, and noticeable problems and corresponding measures in the urbanization process are presented according to the research results. 3. Application of intelligent computation and complexity science methods in the LUCC analog model of the urbanization process. On the basis of a detailed elaboration of the theory and models of LUCC research in the urbanization process, the problems of existing models used in LUCC research are analyzed (namely, the difficulty of resolving many complexity phenomena in the complex urban space system), and possible structural forms of LUCC analog research are discussed in combination with the theories of intelligent computation and complexity science. Application analyses are performed on the BP artificial neural network and genetic algorithms of intelligent computation and on the CA model and MAS technology of complexity science; their theoretical origins and characteristics are discussed in detail, their feasibility for LUCC analog research is elaborated, and improvement methods and measures for the existing problems of this kind of model are brought forward. 4. Establishment of a LUCC analog model of the urbanization process based on the theories of intelligent computation and complexity science. Based on the research on the above-mentioned BP artificial neural network, genetic algorithms, CA model and multi-agent technology, improvement methods and application assumptions for their extension to geography are put forward; a LUCC analog model of the urbanization process based on the CA model and the agent model is built; the learning mechanism of the BP artificial neural network is combined with fuzzy logic reasoning, with rules expressed by explicit formulas and the initial rules amended through self-study; and the network structure of the LUCC analog model and the methods and procedures for the model parameters are optimized with genetic algorithms. In this paper, complexity science theory and methods are introduced into LUCC analog research, and a LUCC analog model based on the CA model and MAS theory is presented. Meanwhile, the traditional Markov model is extended, and the theory of fuzzy sets is introduced into data screening and parameter amendment of the improved model to improve the accuracy and feasibility of the Markov model in research on land use/cover change.
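
    The Markov component of such LUCC models reduces to propagating a land-use share vector with a transition matrix; a minimal sketch with hypothetical classes and transition probabilities follows.

        import numpy as np

        # Markov projection of land-use structure: the share vector is propagated
        # with a transition matrix P, state(t+1) = state(t) @ P. Classes and
        # probabilities are hypothetical, not calibrated values.
        classes = ["cropland", "built-up", "forest"]
        P = np.array([[0.85, 0.12, 0.03],      # cropland -> ...
                      [0.02, 0.97, 0.01],      # built-up -> ...
                      [0.05, 0.03, 0.92]])     # forest   -> ...
        assert np.allclose(P.sum(axis=1), 1.0)

        state = np.array([0.50, 0.20, 0.30])   # current land-use shares
        for year in range(1, 4):
            state = state @ P
            print(f"step {year}:", dict(zip(classes, state.round(3))))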

  15. A Primer for Model Selection: The Decisive Role of Model Complexity

    NASA Astrophysics Data System (ADS)

    Höge, Marvin; Wöhling, Thomas; Nowak, Wolfgang

    2018-03-01

    Selecting a "best" model among several competing candidate models poses an often encountered problem in water resources modeling (and other disciplines which employ models). For a modeler, the best model fulfills a certain purpose best (e.g., flood prediction), which is typically assessed by comparing model simulations to data (e.g., stream flow). Model selection methods find the "best" trade-off between good fit with data and model complexity. In this context, the interpretations of model complexity implied by different model selection methods are crucial, because they represent different underlying goals of modeling. Over the last decades, numerous model selection criteria have been proposed, but modelers who primarily want to apply a model selection criterion often face a lack of guidance for choosing the right criterion that matches their goal. We propose a classification scheme for model selection criteria that helps to find the right criterion for a specific goal, i.e., which employs the correct complexity interpretation. We identify four model selection classes which seek to achieve high predictive density, low predictive error, high model probability, or shortest compression of data. These goals can be achieved by following either nonconsistent or consistent model selection and by either incorporating a Bayesian parameter prior or not. We allocate commonly used criteria to these four classes, analyze how they represent model complexity and what this means for the model selection task. Finally, we provide guidance on choosing the right type of criteria for specific model selection tasks. (A quick guide through all key points is given at the end of the introduction.)
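
    Two of the most common criteria, AIC (aimed at predictive density, nonconsistent) and BIC (aimed at model probability, consistent), are easy to compare directly; the sketch below scores polynomial models of increasing complexity on synthetic data under a Gaussian error model.

        import numpy as np

        # Compare polynomial models with AIC and BIC under a Gaussian error model:
        # AIC = n*ln(RSS/n) + 2k,  BIC = n*ln(RSS/n) + k*ln(n),
        # where k counts fitted parameters. Data are synthetic.
        rng = np.random.default_rng(3)
        n = 60
        x = np.linspace(0, 1, n)
        y = 1.0 + 2.0 * x - 1.5 * x ** 2 + rng.normal(0, 0.1, n)  # true degree: 2

        for degree in range(1, 6):
            coeffs = np.polyfit(x, y, degree)
            rss = np.sum((y - np.polyval(coeffs, x)) ** 2)
            k = degree + 1
            aic = n * np.log(rss / n) + 2 * k
            bic = n * np.log(rss / n) + k * np.log(n)
            print(f"degree {degree}: AIC = {aic:7.1f}  BIC = {bic:7.1f}")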

  16. Process consistency in models: The importance of system signatures, expert knowledge, and process complexity

    NASA Astrophysics Data System (ADS)

    Hrachowitz, M.; Fovet, O.; Ruiz, L.; Euser, T.; Gharari, S.; Nijzink, R.; Freer, J.; Savenije, H. H. G.; Gascuel-Odoux, C.

    2014-09-01

    Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus, ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study, the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by four calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce a suite of hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by "prior constraints," inferred from expert knowledge to ensure a model which behaves well with respect to the modeler's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model setup exhibited increased performance in the independent test period and skill to better reproduce all tested signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if counter-balanced by prior constraints, can significantly increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge-driven strategy of constraining models.

  17. Statistical Techniques Complement UML When Developing Domain Models of Complex Dynamical Biosystems.

    PubMed

    Williams, Richard A; Timmis, Jon; Qwarnstrom, Eva E

    2016-01-01

    Computational modelling and simulation is increasingly being used to complement traditional wet-lab techniques when investigating the mechanistic behaviours of complex biological systems. In order to ensure computational models are fit for purpose, it is essential that the abstracted view of biology captured in the computational model is clearly and unambiguously defined within a conceptual model of the biological domain (a domain model) that acts to accurately represent the biological system and to document the functional requirements for the resultant computational model. We present a domain model of the IL-1 stimulated NF-κB signalling pathway, which unambiguously defines the spatial, temporal and stochastic requirements for our future computational model. Through the development of this model, we observe that, in isolation, UML is not sufficient for the purpose of creating a domain model, and that a number of descriptive and multivariate statistical techniques provide complementary perspectives, in particular when modelling the heterogeneity of dynamics at the single-cell level. We believe this approach of using UML to define the structure and interactions within a complex system, along with statistics to define the stochastic and dynamic nature of complex systems, is crucial for ensuring that conceptual models of complex dynamical biosystems, which are developed using UML, are fit for purpose, and unambiguously define the functional requirements for the resultant computational model.

  18. Statistical Techniques Complement UML When Developing Domain Models of Complex Dynamical Biosystems

    PubMed Central

    Timmis, Jon; Qwarnstrom, Eva E.

    2016-01-01

    Computational modelling and simulation is increasingly being used to complement traditional wet-lab techniques when investigating the mechanistic behaviours of complex biological systems. In order to ensure computational models are fit for purpose, it is essential that the abstracted view of biology captured in the computational model is clearly and unambiguously defined within a conceptual model of the biological domain (a domain model) that acts to accurately represent the biological system and to document the functional requirements for the resultant computational model. We present a domain model of the IL-1 stimulated NF-κB signalling pathway, which unambiguously defines the spatial, temporal and stochastic requirements for our future computational model. Through the development of this model, we observe that, in isolation, UML is not sufficient for the purpose of creating a domain model, and that a number of descriptive and multivariate statistical techniques provide complementary perspectives, in particular when modelling the heterogeneity of dynamics at the single-cell level. We believe this approach of using UML to define the structure and interactions within a complex system, along with statistics to define the stochastic and dynamic nature of complex systems, is crucial for ensuring that conceptual models of complex dynamical biosystems, which are developed using UML, are fit for purpose, and unambiguously define the functional requirements for the resultant computational model. PMID:27571414

  19. Network model of bilateral power markets based on complex networks

    NASA Astrophysics Data System (ADS)

    Wu, Yang; Liu, Junyong; Li, Furong; Yan, Zhanxin; Zhang, Li

    2014-06-01

    The bilateral power transaction (BPT) mode has become a typical market organization with the restructuring of the electric power industry, and a proper model that can capture its characteristics is urgently needed. However, such a model has been lacking because of this market organization's complexity. As a promising approach to modeling complex systems, complex networks provide a sound theoretical framework for developing a proper simulation model. In this paper, a complex network model of the BPT market is proposed. In this model, a price advantage mechanism is a precondition. Unlike general commodity transactions, both the financial layer and the physical layer are considered in the model. Through simulation analysis, the feasibility and validity of the model are verified. At the same time, some typical statistical features of the BPT network are identified: the degree distribution follows a power law, the clustering coefficient is low and the average path length is relatively long. Moreover, the topological stability of the BPT network is tested. The results show that the network displays topological robustness to random market members' failures while it is fragile against deliberate attacks, and the network can resist cascading failure to some extent. These features are helpful for decision making and risk management in BPT markets.
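
    The statistics cited for the BPT network (degree distribution, clustering coefficient, average path length) are routine to compute with networkx; since the record's network generator is not reproduced here, a Barabási-Albert graph stands in as a network with a power-law degree distribution.

        import networkx as nx

        # The BPT network generator is not reproduced; a Barabasi-Albert graph
        # stands in as a network whose degree distribution follows a power law,
        # so the cited statistics can be computed in the same way.
        G = nx.barabasi_albert_graph(n=500, m=2, seed=0)

        print("average clustering coefficient:", round(nx.average_clustering(G), 3))
        print("average shortest path length:", round(nx.average_shortest_path_length(G), 3))
        degree_counts = nx.degree_histogram(G)      # index = degree, value = count
        print("maximum degree:", max(dict(G.degree()).values()))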

  20. Modeling uranium(VI) adsorption onto montmorillonite under varying carbonate concentrations: A surface complexation model accounting for the spillover effect on surface potential

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tournassat, C.; Tinnacher, R. M.; Grangeon, S.

    The prediction of U(VI) adsorption onto montmorillonite clay is confounded by the complexities of: (1) the montmorillonite structure in terms of adsorption sites on basal and edge surfaces, and the complex interactions between the electrical double layers at these surfaces, and (2) U(VI) solution speciation, which can include cationic, anionic and neutral species. Previous U(VI)-montmorillonite adsorption and modeling studies have typically expanded classical surface complexation modeling approaches, initially developed for simple oxides, to include both cation exchange and surface complexation reactions. However, previous models have not taken into account the unique characteristics of electrostatic surface potentials that occur at montmorillonite edge sites, where the electrostatic surface potential of basal plane cation exchange sites influences the surface potential of neighboring edge sites ('spillover' effect).

  1. Modeling uranium(VI) adsorption onto montmorillonite under varying carbonate concentrations: A surface complexation model accounting for the spillover effect on surface potential

    DOE PAGES

    Tournassat, C.; Tinnacher, R. M.; Grangeon, S.; ...

    2017-10-06

    The prediction of U(VI) adsorption onto montmorillonite clay is confounded by the complexities of: (1) the montmorillonite structure in terms of adsorption sites on basal and edge surfaces, and the complex interactions between the electrical double layers at these surfaces, and (2) U(VI) solution speciation, which can include cationic, anionic and neutral species. Previous U(VI)-montmorillonite adsorption and modeling studies have typically expanded classical surface complexation modeling approaches, initially developed for simple oxides, to include both cation exchange and surface complexation reactions. However, previous models have not taken into account the unique characteristics of electrostatic surface potentials that occur at montmorillonite edge sites, where the electrostatic surface potential of basal plane cation exchange sites influences the surface potential of neighboring edge sites ('spillover' effect).

  2. Impact of earthquake source complexity and land elevation data resolution on tsunami hazard assessment and fatality estimation

    NASA Astrophysics Data System (ADS)

    Muhammad, Ario; Goda, Katsuichiro

    2018-03-01

    This study investigates the impact of model complexity in source characterization and digital elevation model (DEM) resolution on the accuracy of tsunami hazard assessment and fatality estimation through a case study in Padang, Indonesia. Two types of earthquake source models, i.e. complex and uniform slip models, are adopted by considering three resolutions of DEMs, i.e. 150 m, 50 m, and 10 m. For each of the three grid resolutions, 300 complex source models are generated using new statistical prediction models of earthquake source parameters developed from extensive finite-fault models of past subduction earthquakes, whilst 100 uniform slip models are constructed with variable fault geometry without slip heterogeneity. The results highlight that significant changes to tsunami hazard and fatality estimates are observed with regard to earthquake source complexity and grid resolution. Coarse resolution (i.e. 150 m) leads to inaccurate tsunami hazard prediction and fatality estimation, whilst 50-m and 10-m resolutions produce similar results. However, velocity and momentum flux are sensitive to the grid resolution and hence, at least 10-m grid resolution needs to be implemented when considering flow-based parameters for tsunami hazard and risk assessments. In addition, the results indicate that the tsunami hazard parameters and fatality number are more sensitive to the complexity of earthquake source characterization than the grid resolution. Thus, the uniform models are not recommended for probabilistic tsunami hazard and risk assessments. Finally, the findings confirm that uncertainties of tsunami hazard level and fatality in terms of depth, velocity and momentum flux can be captured and visualized through the complex source modeling approach. From tsunami risk management perspectives, this indeed creates big data, which are useful for making effective and robust decisions.

  3. Sparkle model for AM1 calculation of lanthanide complexes: improved parameters for europium.

    PubMed

    Rocha, Gerd B; Freire, Ricardo O; Da Costa, Nivan B; De Sá, Gilberto F; Simas, Alfredo M

    2004-04-05

    In the present work, we sought to improve our sparkle model for the calculation of lanthanide complexes, SMLC, in various ways: (i) inclusion of the europium atomic mass, (ii) reparametrization of the model within AM1 from a new response function including all distances of the coordination polyhedron for tris(acetylacetonate)(1,10-phenanthroline) europium(III), (iii) implementation of the model in the software package MOPAC93r2, and (iv) inclusion of spherical Gaussian functions in the expression which computes the core-core repulsion energy. The parametrization results indicate that SMLC II is superior to the previous version of the model because Gaussian functions proved essential for a better description of the geometries of the complexes. In order to validate our parametrization, we carried out calculations on 96 europium(III) complexes, selected from the Cambridge Structural Database 2003, and compared our predicted ground state geometries with the experimental ones. Our results show that this new parametrization of the SMLC model, with the inclusion of spherical Gaussian functions in the core-core repulsion energy, is more capable of predicting the Eu-ligand distances than the previous version. The unsigned mean error for all Eu-L interatomic distances in all 96 complexes, which for the original SMLC is 0.3564 Å, is lowered to 0.1993 Å when the model is parametrized with the inclusion of two Gaussian functions. Our results also indicate that this model is more applicable to europium complexes with beta-diketone ligands. We conclude that this improved model can be considered a powerful tool for the study of lanthanide complexes and their applications, such as the modeling of light conversion molecular devices.

  4. The practical use of simplicity in developing ground water models

    USGS Publications Warehouse

    Hill, M.C.

    2006-01-01

    The advantages of starting with simple models and building complexity slowly can be significant in the development of ground water models. In many circumstances, simpler models are characterized by fewer defined parameters and shorter execution times. In this work, the number of parameters is used as the primary measure of simplicity and complexity; the advantages of shorter execution times also are considered. The ideas are presented in the context of constructing ground water models but are applicable to many fields. Simplicity first is put in perspective as part of the entire modeling process using 14 guidelines for effective model calibration. It is noted that neither very simple nor very complex models generally produce the most accurate predictions and that determining the appropriate level of complexity is an ill-defined process. It is suggested that a thorough evaluation of observation errors is essential to model development. Finally, specific ways are discussed to design useful ground water models that have fewer parameters and shorter execution times.

  5. Complexity and demographic explanations of cumulative culture.

    PubMed

    Querbes, Adrien; Vaesen, Krist; Houkes, Wybo

    2014-01-01

    Formal models have linked prehistoric and historical instances of technological change (e.g., the Upper Paleolithic transition, cultural loss in Holocene Tasmania, scientific progress since the late nineteenth century) to demographic change. According to these models, cumulation of technological complexity is inhibited by decreasing population levels and favoured by increasing ones. Here we show that these findings are contingent on how complexity is defined: demography plays a much more limited role in sustaining cumulative culture when formal models deploy Herbert Simon's definition of complexity rather than the particular definitions of complexity hitherto assumed. Given that currently available empirical evidence does not allow discriminating proper from improper definitions of complexity, our robustness analyses put into question the force of recent demographic explanations of particular episodes of cultural change.

  6. Statistical Analysis of Complexity Generators for Cost Estimation

    NASA Technical Reports Server (NTRS)

    Rowell, Ginger Holmes

    1999-01-01

    Predicting the cost of cutting edge new technologies involved with spacecraft hardware can be quite complicated. A new feature of the NASA Air Force Cost Model (NAFCOM), called the Complexity Generator, is being developed to model the complexity factors that drive the cost of space hardware. This parametric approach is also designed to account for the differences in cost, based on factors that are unique to each system and subsystem. The cost driver categories included in this model are weight, inheritance from previous missions, technical complexity, and management factors. This paper explains the Complexity Generator framework, the statistical methods used to select the best model within this framework, and the procedures used to find the region of predictability and the prediction intervals for the cost of a mission.

  7. Unified analysis of ensemble and single-complex optical spectral data from light-harvesting complex-2 chromoproteins for gaining deeper insight into bacterial photosynthesis

    NASA Astrophysics Data System (ADS)

    Pajusalu, Mihkel; Kunz, Ralf; Rätsep, Margus; Timpmann, Kõu; Köhler, Jürgen; Freiberg, Arvi

    2015-11-01

    Bacterial light-harvesting pigment-protein complexes are very efficient at converting photons into excitons and transferring them to reaction centers, where the energy is stored in a chemical form. Optical properties of the complexes are known to change significantly in time and also vary from one complex to another; therefore, a detailed understanding of the variations on the level of single complexes and how they accumulate into effects that can be seen on the macroscopic scale is required. While experimental and theoretical methods exist to study the spectral properties of light-harvesting complexes on both individual complex and bulk ensemble levels, they have been developed largely independently of each other. To fill this gap, we simultaneously analyze experimental low-temperature single-complex and bulk ensemble optical spectra of the light-harvesting complex-2 (LH2) chromoproteins from the photosynthetic bacterium Rhodopseudomonas acidophila in order to find a unique theoretical model consistent with both experimental situations. The model, which satisfies most of the observations, combines strong exciton-phonon coupling with significant disorder, characteristic of the proteins. We establish a detailed disorder model that, in addition to containing a C2-symmetrical modulation of the site energies, distinguishes between static intercomplex and slow conformational intracomplex disorders. The model evaluations also verify that, despite best efforts, the single-LH2-complex measurements performed so far may be biased toward complexes with higher Huang-Rhys factors.

  8. Unified analysis of ensemble and single-complex optical spectral data from light-harvesting complex-2 chromoproteins for gaining deeper insight into bacterial photosynthesis.

    PubMed

    Pajusalu, Mihkel; Kunz, Ralf; Rätsep, Margus; Timpmann, Kõu; Köhler, Jürgen; Freiberg, Arvi

    2015-01-01

    Bacterial light-harvesting pigment-protein complexes are very efficient at converting photons into excitons and transferring them to reaction centers, where the energy is stored in a chemical form. Optical properties of the complexes are known to change significantly in time and also vary from one complex to another; therefore, a detailed understanding of the variations on the level of single complexes and how they accumulate into effects that can be seen on the macroscopic scale is required. While experimental and theoretical methods exist to study the spectral properties of light-harvesting complexes on both individual complex and bulk ensemble levels, they have been developed largely independently of each other. To fill this gap, we simultaneously analyze experimental low-temperature single-complex and bulk ensemble optical spectra of the light-harvesting complex-2 (LH2) chromoproteins from the photosynthetic bacterium Rhodopseudomonas acidophila in order to find a unique theoretical model consistent with both experimental situations. The model, which satisfies most of the observations, combines strong exciton-phonon coupling with significant disorder, characteristic of the proteins. We establish a detailed disorder model that, in addition to containing a C2-symmetrical modulation of the site energies, distinguishes between static intercomplex and slow conformational intracomplex disorders. The model evaluations also verify that, despite best efforts, the single-LH2-complex measurements performed so far may be biased toward complexes with higher Huang-Rhys factors.

  9. Complex Rotation Quantum Dynamic Neural Networks (CRQDNN) using Complex Quantum Neuron (CQN): Applications to time series prediction.

    PubMed

    Cui, Yiqian; Shi, Junyou; Wang, Zili

    2015-11-01

    Quantum Neural Network (QNN) models have attracted great attention because they introduce a new neural computing approach based on quantum entanglement. However, existing QNN models rely mainly on real-valued quantum operations, so the potential of quantum entanglement is not fully exploited. In this paper, we propose a novel quantum neuron model called the Complex Quantum Neuron (CQN) that realizes deeper quantum entanglement. We also propose a hybrid network model, Complex Rotation Quantum Dynamic Neural Networks (CRQDNN), built from CQNs. CRQDNN is a three-layer model containing both CQNs and classical neurons. An infinite impulse response (IIR) filter is embedded in the network to provide the memory needed to process time-series inputs, and the Levenberg-Marquardt (LM) algorithm is used for fast parameter learning. The network model is applied to time-series prediction in two case studies: chaotic time-series prediction and electronic remaining-useful-life (RUL) prediction. Copyright © 2015 Elsevier Ltd. All rights reserved.
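
    The exact CQN formulation is not reproduced in this record, so the following is a minimal sketch of a rotation-based complex-valued neuron, under stated assumptions: the phase encoding, the rotation weights theta, and the magnitude/phase readout are all illustrative choices, not the authors' implementation.

```python
import numpy as np

def complex_rotation_neuron(x, theta, bias=0j):
    """Toy complex-valued neuron: real inputs are phase-encoded on the
    unit circle, each weight rotates its input's phase, and the complex
    sum is read out as magnitude and phase. Illustrative only; the CQN
    described in the paper realizes deeper quantum entanglement."""
    z = np.exp(1j * x)                 # phase-encode the real inputs
    rotated = z * np.exp(1j * theta)   # rotate each phasor by its weight
    s = rotated.sum() + bias           # complex aggregation
    return np.abs(s), np.angle(s)

mag, phase = complex_rotation_neuron(np.array([0.1, 0.5, 0.9]),
                                     np.array([0.3, -0.2, 1.0]))
```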

  10. Turbulence modeling needs of commercial CFD codes: Complex flows in the aerospace and automotive industries

    NASA Technical Reports Server (NTRS)

    Befrui, Bizhan A.

    1995-01-01

    This viewgraph presentation discusses the following: STAR-CD computational features; STAR-CD turbulence models; common features of industrial complex flows; industry-specific CFD development requirements; applications and experiences of industrial complex flows, including flow in rotating disc cavities, diffusion hole film cooling, internal blade cooling, and external car aerodynamics; and conclusions on turbulence modeling needs.

  11. Variable Complexity Optimization of Composite Structures

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.

    2002-01-01

    The use of several levels of modeling in design has been dubbed variable complexity modeling. The work under the grant focused on developing variable complexity modeling strategies with emphasis on response surface techniques. Applications included design of stiffened composite plates for improved damage tolerance, the use of response surfaces for fitting weights obtained by structural optimization, and design against uncertainty using response surface techniques.

  12. Bim Automation: Advanced Modeling Generative Process for Complex Structures

    NASA Astrophysics Data System (ADS)

    Banfi, F.; Fai, S.; Brumana, R.

    2017-08-01

    The complexity of modern and historic structures, characterised by complex forms and morphological and typological variability, is one of the greatest challenges for building information modelling (BIM). Generating complex parametric models requires new scientific knowledge of new digital technologies, which help to store a vast quantity of information over the life cycle of buildings (LCB). The latest parametric applications do not provide advanced tools for this, making the generation of such models time-consuming. This paper presents a method capable of processing and creating complex parametric Building Information Models (BIM) with Non-Uniform Rational Basis Splines (NURBS) at multiple levels of detail (Mixed and Reverse LoD), based on accurate 3D photogrammetric and laser scanning surveys. Complex 3D elements are transferred into parametric BIM software and finite element applications (BIM to FEA) using specific exchange formats and new modelling tools. The proposed approach has been applied to different case studies: the BIM of a modern structure in the courtyard of the West Block on Parliament Hill in Ottawa (Ontario) and the BIM of Castel Masegra in Sondrio (Italy), encouraging the dissemination and interaction of scientific results without losing information during the generative process.

  13. Balancing model complexity and measurements in hydrology

    NASA Astrophysics Data System (ADS)

    Van De Giesen, N.; Schoups, G.; Weijs, S. V.

    2012-12-01

    The Data Processing Inequality implies that hydrological modeling can only reduce, and never increase, the amount of information available in the original data used to formulate and calibrate hydrological models: I(X;Z(Y)) ≤ I(X;Y). Still, hydrologists around the world seem quite content building models for "their" watersheds to move our discipline forward. Hydrological models tend to have a hybrid character with respect to the underlying physics. Most models make use of some well-established physical principles, such as mass and energy balances. One could argue that such principles are based on many observations, and therefore add data. These physical principles, however, are applied to hydrological models that often contain concepts with no direct counterpart in the observable physical universe, such as "buckets" or "reservoirs" that fill up and empty out over time. These not-so-physical concepts are more like the Artificial Neural Networks and Support Vector Machines of the Artificial Intelligence (AI) community. Within AI, it was quickly realized that by increasing model complexity one can fit essentially any dataset, and that complexity must therefore be controlled in order to predict unseen events. The more data are available to train or calibrate the model, the more complex it can be. Many complexity-control approaches exist in AI, with Solomonoff inductive inference one of the first formal approaches, the Akaike Information Criterion the most popular, and Statistical Learning Theory arguably the most comprehensive practical approach. In hydrology, complexity control has hardly been used so far. There are a number of reasons for that lack of interest, the more valid of which are presented here. First, there are no readily available complexity measures for our models. Second, some unrealistic simplifications of the underlying complex physics tend to have a smoothing effect on possible model outcomes, thereby preventing the most obvious results of over-fitting. Third, dependence within and between time series poses an additional analytical problem. Finally, there are arguments to be made that the often-discussed "equifinality" in hydrological models is simply a different manifestation of the lack of complexity control. In turn, this points toward a general idea, quite popular in sciences other than hydrology, that additional data gathering is a good way to increase the information content of our descriptions of hydrological reality.
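
    Of the complexity-control devices named above, the Akaike Information Criterion is the easiest to show concretely. The sketch below is a minimal illustration assuming i.i.d. Gaussian errors, under which AIC reduces to 2k + n ln(RSS/n); it is not tied to any particular hydrological model.

```python
import numpy as np

def aic(residuals, n_params):
    """AIC = 2k + n*ln(RSS/n) under i.i.d. Gaussian errors.
    Lower is better; the 2k term is the complexity penalty."""
    n = len(residuals)
    rss = np.sum(np.square(residuals))
    return 2 * n_params + n * np.log(rss / n)

# A degree-5 polynomial fits the noise better than a line,
# but AIC should still prefer the (true) linear model.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + rng.normal(0.0, 0.1, 50)
for degree in (1, 5):
    residuals = y - np.polyval(np.polyfit(x, y, degree), x)
    print(degree, aic(residuals, degree + 1))
```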

  14. Modelos estereoquimicos na quimica de coordenacao e organometalica de lantanideos e actinideos: aplicacoes a complexos de torio (iv) com boratos de polipirazolilo (Stereochemical models in lanthanide and actinide coordination and organometallic chemistry: Applications to thorium (IV) complexes with polypyrazolylborates). Doctoral thesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    de Almeida, J.C.M.

    1990-01-01

    A detailed analysis is made of two stereochemical models commonly used in lanthanide and actinide coordination and organometallic chemistry: Li Xing-fu's Cone Packing Model and K. N. Raymond's Ionic Model. Corrections are introduced in the first model as a basis to discuss the stability and structure of known complexes. A Steric Coordination Number, based on the solid angle, is defined for the second model to correlate metal-ligand distances in complexes with the ionic radii of the elements and to assign effective radii to the ligands, related to the donating power of the coordinating atoms. As an application of the models, the syntheses and characterizations of thorium(IV) complexes with the polypyrazolylborates (HBPz3)⁻¹ and (HB(3,5-Me2Pz)3)⁻¹ and with alkoxides, aryloxides, carboxylates, amides, thiolates, alkyls and cyclopentadienyl are described and their stabilities discussed. The geometries of the complexes in the solid state and in solution are discussed, and a mechanism is proposed to explain the fluxionality in solution of the complexes with (HBPz3)⁻¹.

  15. Routine Discovery of Complex Genetic Models using Genetic Algorithms

    PubMed Central

    Moore, Jason H.; Hahn, Lance W.; Ritchie, Marylyn D.; Thornton, Tricia A.; White, Bill C.

    2010-01-01

    Simulation studies are useful in various disciplines for a number of reasons including the development and evaluation of new computational and statistical methods. This is particularly true in human genetics and genetic epidemiology where new analytical methods are needed for the detection and characterization of disease susceptibility genes whose effects are complex, nonlinear, and partially or solely dependent on the effects of other genes (i.e. epistasis or gene-gene interaction). Despite this need, the development of complex genetic models that can be used to simulate data is not always intuitive. In fact, only a few such models have been published. We have previously developed a genetic algorithm approach to discovering complex genetic models in which two single nucleotide polymorphisms (SNPs) influence disease risk solely through nonlinear interactions. In this paper, we extend this approach for the discovery of high-order epistasis models involving three to five SNPs. We demonstrate that the genetic algorithm is capable of routinely discovering interesting high-order epistasis models in which each SNP influences risk of disease only through interactions with the other SNPs in the model. This study opens the door for routine simulation of complex gene-gene interactions among SNPs for the development and evaluation of new statistical and computational approaches for identifying common, complex multifactorial disease susceptibility genes. PMID:20948983

  16. Post-closure biosphere assessment modelling: comparison of complex and more stylised approaches.

    PubMed

    Walke, Russell C; Kirchner, Gerald; Xu, Shulan; Dverstorp, Björn

    2015-10-01

    Geological disposal facilities are the preferred option for high-level radioactive waste, due to their potential to provide isolation from the surface environment (biosphere) on very long timescales. Assessments need to strike a balance between stylised models and more complex approaches that draw more extensively on site-specific information. This paper explores the relative merits of complex versus more stylised biosphere models in the context of a site-specific assessment. The more complex biosphere modelling approach was developed by the Swedish Nuclear Fuel and Waste Management Co (SKB) for the Forsmark candidate site for a spent nuclear fuel repository in Sweden. SKB's approach is built on a landscape development model, whereby radionuclide releases to distinct hydrological basins/sub-catchments (termed 'objects') are represented as they evolve through land rise and climate change. Each of the seventeen objects is represented with more than 80 site-specific parameters, about 22 of which are time-dependent, resulting in over 5000 input values per object. The more stylised biosphere models developed for this study represent releases to individual ecosystems without environmental change and include the most plausible transport processes. In the context of regulatory review of the landscape modelling approach adopted in the SR-Site assessment in Sweden, the more stylised representation has helped to build understanding of the more complex modelling approaches by providing bounding results, checking the reasonableness of the more complex modelling, highlighting uncertainties introduced through conceptual assumptions and helping to quantify the conservatisms involved. The more stylised biosphere models are also shown to be capable of reproducing the results of more complex approaches. A major recommendation is that biosphere assessments need to justify the degree of complexity in modelling approaches as well as simplifying and conservative assumptions. In light of the uncertainties concerning the biosphere on very long timescales, stylised biosphere models are shown to provide a useful point of reference in themselves and remain a valuable tool for nuclear waste disposal licensing procedures. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Managing Complex Interoperability Solutions using Model-Driven Architecture

    DTIC Science & Technology

    2011-06-01

    such as Oracle or MySQL. Each data model for a specific RDBMS is a distinct PSM. Or the system may want to exchange information with other C2... reduced number of transformations, e.g., from an RDBMS physical schema to the corresponding SQL script needed to instantiate the tables in a relational... importance of models. In engineering, a model serves several purposes: 1. It presents an abstract view of a complex system or of a complex information...

  18. Modeling the Internet of Things, Self-Organizing and Other Complex Adaptive Communication Networks: A Cognitive Agent-Based Computing Approach

    PubMed Central

    2016-01-01

    Background: Computer networks have a tendency to grow at an unprecedented scale. Modern networks involve not only computers but also a wide variety of other interconnected devices, ranging from mobile phones to household items fitted with sensors. This vision of the "Internet of Things" (IoT) implies an inherent difficulty in modeling problems. Purpose: It is practically impossible to implement and test all scenarios for large-scale and complex adaptive communication networks, here termed Complex Adaptive Communication Networks and Environments (CACOONS). The goal of this study is to explore the use of Agent-based Modeling as part of the Cognitive Agent-based Computing (CABC) framework to model a complex communication network problem. Method: We use Exploratory Agent-based Modeling (EABM), as part of the CABC framework, to develop an autonomous multi-agent architecture for managing carbon footprint in a corporate network. To evaluate the application of complexity in practical scenarios, we also introduce a company-defined computer usage policy. Results: The experiments demonstrated two important results. First, a CABC-based modeling approach such as Agent-based Modeling can be effective for modeling complex problems in the IoT domain. Second, the specific problem of managing the carbon footprint can be solved using a multi-agent system approach. PMID:26812235

  19. Complexity reduction of biochemical rate expressions.

    PubMed

    Schmidt, Henning; Madsen, Mads F; Danø, Sune; Cedersund, Gunnar

    2008-03-15

    The current trend in dynamical modelling of biochemical systems is to construct more and more mechanistically detailed and thus complex models. The complexity is reflected in the number of dynamic state variables and parameters, as well as in the complexity of the kinetic rate expressions. However, a greater level of complexity, or level of detail, does not necessarily imply better models, or a better understanding of the underlying processes. Data often does not contain enough information to discriminate between different model hypotheses, and such overparameterization makes it hard to establish the validity of the various parts of the model. Consequently, there is an increasing demand for model reduction methods. We present a new reduction method that reduces complex rational rate expressions, such as those often used to describe enzymatic reactions. The method is a novel term-based identifiability analysis, which is easy to use and allows for user-specified reductions of individual rate expressions in complete models. The method is one of the first methods to meet the classical engineering objective of improved parameter identifiability without losing the systems biology demand of preserved biochemical interpretation. The method has been implemented in the Systems Biology Toolbox 2 for MATLAB, which is freely available from http://www.sbtoolbox2.org. The Supplementary Material contains scripts that show how to use it by applying the method to the example models, discussed in this article.

  20. Parameterization and Sensitivity Analysis of a Complex Simulation Model for Mosquito Population Dynamics, Dengue Transmission, and Their Control

    PubMed Central

    Ellis, Alicia M.; Garcia, Andres J.; Focks, Dana A.; Morrison, Amy C.; Scott, Thomas W.

    2011-01-01

    Models can be useful tools for understanding the dynamics and control of mosquito-borne disease. More detailed models may be more realistic and better suited for understanding local disease dynamics; however, evaluating model suitability, accuracy, and performance becomes increasingly difficult with greater model complexity. Sensitivity analysis is a technique that permits exploration of complex models by evaluating the sensitivity of the model to changes in parameters. Here, we present results of sensitivity analyses of two interrelated complex simulation models of mosquito population dynamics and dengue transmission. We found that dengue transmission may be influenced most by survival in each life stage of the mosquito, mosquito biting behavior, and duration of the infectious period in humans. The importance of these biological processes for vector-borne disease models and the overwhelming lack of knowledge about them make acquisition of relevant field data on these biological processes a top research priority. PMID:21813844

  1. Simplified process model discovery based on role-oriented genetic mining.

    PubMed

    Zhao, Weidong; Liu, Xi; Dai, Weihui

    2014-01-01

    Process mining is the automated acquisition of process models from event logs. Although many process mining techniques have been developed, most of them are based on control flow. Meanwhile, existing role-oriented process mining methods focus on the correctness and integrity of roles while ignoring the role complexity of the process model, which directly impacts the understandability and quality of the model. To address these problems, we propose a genetic programming approach to mine simplified process models. Using a new metric of process complexity in terms of roles as the fitness function, we can find simpler process models; a sketch of such a fitness function is given below. The new role complexity metric is designed from role cohesion and coupling and is applied to discover roles in process models. Moreover, the higher fitness derived from the role complexity metric also provides a guideline for redesigning process models. Finally, we conduct a case study and experiments which show, by comparison with related studies, that the proposed method is more effective for streamlining the process.
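
    The record does not give the role complexity metric itself, so the sketch below is a hypothetical stand-in: a fitness that rewards handovers staying inside a role (cohesion) and penalizes fragmentation into many roles (a proxy for coupling). It shows the general shape such a fitness function could take inside a genetic miner, not the paper's actual metric.

```python
def role_fitness(roles, handovers, lam=0.05):
    """Hypothetical fitness for a candidate role assignment.

    roles:     dict mapping activity -> role id
    handovers: iterable of (activity_a, activity_b) pairs from the log
    """
    total = 0
    within = 0
    for a, b in handovers:
        total += 1
        within += roles[a] == roles[b]   # handover stays inside a role
    cohesion = within / total if total else 0.0
    return cohesion - lam * len(set(roles.values()))  # penalize many roles
```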

  2. Emulating a System Dynamics Model with Agent-Based Models: A Methodological Case Study in Simulation of Diabetes Progression

    DOE PAGES

    Schryver, Jack; Nutaro, James; Shankar, Mallikarjun

    2015-10-30

    An agent-based simulation model hierarchy emulating disease states and behaviors critical to the progression of type 2 diabetes was designed and implemented in the DEVS framework. The models are translations of basic elements of an established system dynamics model of diabetes. In this hierarchy, a model that mimics diabetes progression over an aggregated U.S. population was disaggregated and reconstructed bottom-up at the individual (agent) level. Four levels of model complexity were defined in order to systematically evaluate which parameters are needed to mimic the outputs of the system dynamics model. The four estimated models attempted to replicate the stock counts representing disease states in the system dynamics model, while estimating the impacts of an elderliness factor, an obesity factor and health-related behavioral parameters. Health-related behavior was modeled as a simple realization of the Theory of Planned Behavior, a joint function of individual attitude and the diffusion of social norms spreading over each agent's social network. Although the most complex agent-based simulation model contained 31 adjustable parameters, all models were considerably less complex than the system dynamics model, which required numerous time-series inputs to make its predictions. All three elaborations of the baseline model provided significantly improved fits to the output of the system dynamics model. The performances of the baseline agent-based model and its extensions illustrate a promising approach for translating complex system dynamics models into agent-based alternatives that are both conceptually simpler and capable of capturing the main effects of complex local agent-agent interactions.

  4. Comparing an annual and daily time-step model for predicting field-scale phosphorus loss

    USDA-ARS?s Scientific Manuscript database

    Numerous models exist for describing phosphorus (P) losses from agricultural fields. The complexity of these models varies considerably ranging from simple empirically-based annual time-step models to more complex process-based daily time step models. While better accuracy is often assumed with more...

  5. Updating the debate on model complexity

    USGS Publications Warehouse

    Simmons, Craig T.; Hunt, Randall J.

    2012-01-01

    As scientists who are trying to understand a complex natural world that cannot be fully characterized in the field, how can we best inform the society in which we live? This founding context was addressed in a special session, “Complexity in Modeling: How Much is Too Much?” convened at the 2011 Geological Society of America Annual Meeting. The session had a variety of thought-provoking presentations—ranging from philosophy to cost-benefit analyses—and provided some areas of broad agreement that were not evident in discussions of the topic in 1998 (Hunt and Zheng, 1999). The session began with a short introduction during which model complexity was framed borrowing from an economic concept, the Law of Diminishing Returns, and an example of enjoyment derived by eating ice cream. Initially, there is increasing satisfaction gained from eating more ice cream, to a point where the gain in satisfaction starts to decrease, ending at a point when the eater sees no value in eating more ice cream. A traditional view of model complexity is similar—understanding gained from modeling can actually decrease if models become unnecessarily complex. However, oversimplified models—those that omit important aspects of the problem needed to make a good prediction—can also limit and confound our understanding. Thus, the goal of all modeling is to find the “sweet spot” of model sophistication—regardless of whether complexity was added sequentially to an overly simple model or collapsed from an initial highly parameterized framework that uses mathematics and statistics to attain an optimum (e.g., Hunt et al., 2007). Thus, holistic parsimony is attained, incorporating “as simple as possible,” as well as the equally important corollary “but no simpler.”

  6. ADAM: analysis of discrete models of biological systems using computer algebra.

    PubMed

    Hinkelmann, Franziska; Brandon, Madison; Guang, Bonny; McNeill, Rustin; Blekherman, Grigoriy; Veliz-Cuba, Alan; Laubenbacher, Reinhard

    2011-07-20

    Many biological systems are modeled qualitatively with discrete models, such as probabilistic Boolean networks, logical models, Petri nets, and agent-based models, to gain a better understanding of them. The computational complexity to analyze the complete dynamics of these models grows exponentially in the number of variables, which impedes working with complex models. There exist software tools to analyze discrete models, but they either lack the algorithmic functionality to analyze complex models deterministically or they are inaccessible to many users as they require understanding the underlying algorithm and implementation, do not have a graphical user interface, or are hard to install. Efficient analysis methods that are accessible to modelers and easy to use are needed. We propose a method for efficiently identifying attractors and introduce the web-based tool Analysis of Dynamic Algebraic Models (ADAM), which provides this and other analysis methods for discrete models. ADAM converts several discrete model types automatically into polynomial dynamical systems and analyzes their dynamics using tools from computer algebra. Specifically, we propose a method to identify attractors of a discrete model that is equivalent to solving a system of polynomial equations, a long-studied problem in computer algebra. Based on extensive experimentation with both discrete models arising in systems biology and randomly generated networks, we found that the algebraic algorithms presented in this manuscript are fast for systems with the structure maintained by most biological systems, namely sparseness and robustness. For a large set of published complex discrete models, ADAM identified the attractors in less than one second. Discrete modeling techniques are a useful tool for analyzing complex biological systems and there is a need in the biological community for accessible efficient analysis tools. ADAM provides analysis methods based on mathematical algorithms as a web-based tool for several different input formats, and it makes analysis of complex models accessible to a larger community, as it is platform independent as a web-service and does not require understanding of the underlying mathematics.
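
    ADAM's attractor search operates on polynomial systems over GF(2), but the object it computes is easy to state for steady states: the fixed points of a Boolean network. A brute-force sketch follows; enumeration is feasible only for small networks, which is precisely the limitation the algebraic approach avoids.

```python
from itertools import product

def fixed_points(update_fns):
    """A state is a steady state iff every update function returns the
    state unchanged. Brute force over all 2^n states."""
    n = len(update_fns)
    return [s for s in product((0, 1), repeat=n)
            if tuple(f(s) for f in update_fns) == s]

# Two-gene mutual repression (a toggle switch): x' = not y, y' = not x.
toggle = [lambda s: 1 - s[1], lambda s: 1 - s[0]]
print(fixed_points(toggle))   # [(0, 1), (1, 0)]
```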

  7. Moving alcohol prevention research forward-Part I: introducing a complex systems paradigm.

    PubMed

    Apostolopoulos, Yorghos; Lemke, Michael K; Barry, Adam E; Lich, Kristen Hassmiller

    2018-02-01

    The drinking environment is a complex system consisting of a number of heterogeneous, evolving and interacting components, which exhibit circular causality and emergent properties. These characteristics reduce the efficacy of commonly used research approaches, which typically do not account for the underlying dynamic complexity of alcohol consumption and the interdependent nature of diverse factors influencing misuse over time. We use alcohol misuse among college students in the United States as an example for framing our argument for a complex systems paradigm. A complex systems paradigm, grounded in socio-ecological and complex systems theories and computational modeling and simulation, is introduced. Theoretical, conceptual, methodological and analytical underpinnings of this paradigm are described in the context of college drinking prevention research. The proposed complex systems paradigm can transcend limitations of traditional approaches, thereby fostering new directions in alcohol prevention research. By conceptualizing student alcohol misuse as a complex adaptive system, computational modeling and simulation methodologies and analytical techniques can be used. Moreover, use of participatory model-building approaches to generate simulation models can further increase stakeholder buy-in, understanding and policymaking. A complex systems paradigm for research into alcohol misuse can provide a holistic understanding of the underlying drinking environment and its long-term trajectory, which can elucidate high-leverage preventive interventions. © 2017 Society for the Study of Addiction.

  8. Model development and validation of geometrically complex eddy current coils using finite element methods

    NASA Astrophysics Data System (ADS)

    Brown, Alexander; Eviston, Connor

    2017-02-01

    Multiple FEM models of complex eddy current coil geometries were created and validated to calculate the change of impedance due to the presence of a notch. Realistic simulations of eddy current inspections are required for model-assisted probability of detection (MAPOD) studies, inversion algorithms, experimental verification, and tailored probe design for NDE applications. An FEM solver was chosen to model complex real-world situations, including varying probe dimensions and orientations along with complex probe geometries. This will also enable the creation of a probe model library database with variable parameters. Verification and validation were performed using other commercially available eddy current modeling software as well as experimentally collected benchmark data. Data analysis and comparison showed that the created models correctly captured the probe-conductor interactions and accurately calculated the change in impedance in several experimental scenarios with acceptable error. The promising results enabled the start of an eddy current probe model library, giving experimenters easy access to powerful parameter-based eddy current models for other project applications.

  9. Complexity and Demographic Explanations of Cumulative Culture

    PubMed Central

    Querbes, Adrien; Vaesen, Krist; Houkes, Wybo

    2014-01-01

    Formal models have linked prehistoric and historical instances of technological change (e.g., the Upper Paleolithic transition, cultural loss in Holocene Tasmania, scientific progress since the late nineteenth century) to demographic change. According to these models, cumulation of technological complexity is inhibited by decreasing, and favoured by increasing, population levels. Here we show that these findings are contingent on how complexity is defined: demography plays a much more limited role in sustaining cumulative culture when formal models deploy Herbert Simon's definition of complexity rather than the particular definitions of complexity hitherto assumed. Given that currently available empirical evidence does not afford discriminating proper from improper definitions of complexity, our robustness analyses put into question the force of recent demographic explanations of particular episodes of cultural change. PMID:25048625

  10. Measuring case-mix complexity of tertiary care hospitals using DRGs.

    PubMed

    Park, Hayoung; Shin, Youngsoo

    2004-02-01

    The objectives of the study were to develop a model that measures and evaluates the case-mix complexity of tertiary care hospitals, and to examine the characteristics of such a model. Physician panels defined three classes of case complexity and assigned disease categories, represented by Adjacent Diagnosis Related Groups (ADRGs), to one of the three classes. Three types of scores, indicating the proportions of inpatients in each case complexity class standardized by the proportions at the national level, were defined to measure the case-mix complexity of a hospital. Discharge information for about 10% of inpatient episodes at 85 hospitals with more than 400 beds, together with their input structure and research and education activity, was used to evaluate the case-mix complexity model. The results show its power to identify hospitals with the expected functions of tertiary care hospitals, i.e. resource-intensive care, an expensive input structure, and high levels of research and education activity.

  11. Theoretical Modeling and Electromagnetic Response of Complex Metamaterials

    DTIC Science & Technology

    2017-03-06

    Final report AFRL-AFOSR-VA-TR-2017-0042, Andrea Alù, University of Texas at Austin, November 2016. The reported results include work based on parity-time symmetric metasurfaces and various advances in electromagnetic and acoustic theory and applications.

  12. The value and cost of complexity in predictive modelling: role of tissue anisotropic conductivity and fibre tracts in neuromodulation

    NASA Astrophysics Data System (ADS)

    Salman Shahid, Syed; Bikson, Marom; Salman, Humaira; Wen, Peng; Ahfock, Tony

    2014-06-01

    Objectives. Computational methods are increasingly used to optimize transcranial direct current stimulation (tDCS) dose strategies, and yet the complexity of existing approaches limits their clinical access. Since predictive modelling indicates the relevance of subject/pathology based data and hence the need for subject-specific modelling, the incremental clinical value of increasingly complex modelling methods must be balanced against the computational and clinical time and costs. For example, the incorporation of multiple tissue layers and measured diffusion tensor (DTI) based conductivity estimates increases model precision, but at the cost of clinical and computational resources. Costs related to such complexities aggregate when considering individual optimization and the myriad of potential montages. Here, rather than considering if additional details change current-flow prediction, we consider when added complexities influence clinical decisions. Approach. Towards developing quantitative and qualitative metrics of the value/cost associated with computational model complexity, we considered field distributions generated by two 4 × 1 high-definition montages (m1 = 4 × 1 HD montage with anode at C3 and m2 = 4 × 1 HD montage with anode at C1) and a single conventional (m3 = C3-Fp2) tDCS electrode montage. We evaluated statistical methods, including residual error (RE) and the relative difference measure (RDM), to consider the clinical impact and utility of increased complexity, namely the influence of skull, muscle and brain anisotropic conductivities in a volume conductor model. Main results. Anisotropy modulated current flow in a montage- and region-dependent manner. However, significant statistical changes, produced within montage by anisotropy, did not change qualitative peak and topographic comparisons across montages. Thus, for the examples analysed, the clinical decision on which dose to select would not be altered by the omission of anisotropic brain conductivity. Significance. The results illustrate the need to rationally balance the role of model complexity, such as anisotropy, in detailed current flow analysis versus its value in clinical dose design. However, when extending our analysis to include axonal polarization, the results provide presumably clinically meaningful information. Hence the importance of model complexity may be more relevant with cellular-level predictions of neuromodulation.

  13. Quantifying uncertainty in high-resolution coupled hydrodynamic-ecosystem models

    NASA Astrophysics Data System (ADS)

    Allen, J. I.; Somerfield, P. J.; Gilbert, F. J.

    2007-01-01

    Marine ecosystem models are becoming increasingly complex and sophisticated, and are being used to estimate the effects of future changes in the earth system with a view to informing important policy decisions. Despite their potential importance, far too little attention is generally paid to model errors and the extent to which model outputs actually relate to real-world processes. With the increasing complexity of the models themselves comes an increasing complexity among model results. If we are to develop useful modelling tools for the marine environment, we need to be able to understand and quantify the uncertainties inherent in the simulations. Analysing errors within highly multivariate model outputs, and relating them to even more complex and multivariate observational data, are not trivial tasks. Here we describe the application of a series of techniques, including a 2-stage self-organising map (SOM), non-parametric multivariate analysis, and error statistics, to a complex spatio-temporal model run for the period 1988-1989 in the Southern North Sea, coinciding with the North Sea Project, which collected a wealth of observational data. We use model output, large spatio-temporally resolved data sets and a combination of methodologies (SOM, MDS, uncertainty metrics) to simplify the problem and to provide tractable information on model performance. The use of a SOM as a clustering tool allows us to simplify the dimensions of the problem, while the use of MDS on independent data grouped according to the SOM classification allows us to validate the SOM. The combination of classification and uncertainty metrics allows us to pinpoint the variables and associated processes which require attention in each region. We recommend the use of this combination of techniques for simplifying complex comparisons of model outputs with real data, and for analysis of error distributions.

  14. Evidence for complex contagion models of social contagion from observational data

    PubMed Central

    Sprague, Daniel A.

    2017-01-01

    Social influence can lead to behavioural ‘fads’ that are briefly popular and quickly die out. Various models have been proposed for these phenomena, but empirical evidence of their accuracy as real-world predictive tools has so far been absent. Here we find that a ‘complex contagion’ model accurately describes the spread of behaviours driven by online sharing. We found that standard, ‘simple’, contagion often fails to capture both the rapid spread and the long tails of popularity seen in real fads, where our complex contagion model succeeds. Complex contagion also has predictive power: it successfully predicted the peak time and duration of the ALS Icebucket Challenge. The fast spread and longer duration of fads driven by complex contagion has important implications for activities such as publicity campaigns and charity drives. PMID:28686719
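
    The fitted model itself is not reproduced in this record, but the distinction it draws can be illustrated with the standard threshold version of complex contagion, in which adoption requires reinforcement from several neighbours rather than a single exposure. A minimal sketch, with the threshold value an assumption:

```python
def complex_contagion(neighbors, seeds, threshold=2, steps=20):
    """Threshold dynamics: a node adopts once at least `threshold` of its
    neighbours have adopted (simple contagion is the threshold=1 case).

    neighbors: dict mapping node -> list of neighbouring nodes
    seeds:     initially active nodes
    """
    active = set(seeds)
    for _ in range(steps):
        newly = {v for v in neighbors if v not in active
                 and sum(u in active for u in neighbors[v]) >= threshold}
        if not newly:
            break
        active |= newly
    return active
```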

  15. XML Encoding of Features Describing Rule-Based Modeling of Reaction Networks with Multi-Component Molecular Complexes

    PubMed Central

    Blinov, Michael L.; Moraru, Ion I.

    2011-01-01

    Multi-state molecules and multi-component complexes are commonly involved in cellular signaling. Accounting for molecules that have multiple potential states, such as a protein that may be phosphorylated on multiple residues, and molecules that combine to form heterogeneous complexes located among multiple compartments, generates an effect of combinatorial complexity. Models involving relatively few signaling molecules can include thousands of distinct chemical species. Several software tools (StochSim, BioNetGen) are already available to deal with combinatorial complexity. Such tools need information standards if models are to be shared, jointly evaluated and developed. Here we discuss XML conventions that can be adopted for modeling biochemical reaction networks described by user-specified reaction rules. These could form a basis for possible future extensions of the Systems Biology Markup Language (SBML). PMID:21464833

  16. FLAME: A platform for high performance computing of complex systems, applied for three case studies

    DOE PAGES

    Kiran, Mariam; Bicak, Mesude; Maleki-Dizaji, Saeedeh; ...

    2011-01-01

    FLAME allows complex models to be automatically parallelised on High Performance Computing (HPC) grids, enabling large numbers of agents to be simulated over short periods of time. Modellers are otherwise hindered by the complexity of porting models to parallel platforms and by the time taken to run large simulations on a single machine, both of which FLAME overcomes. Three case studies from different disciplines were modelled using FLAME and are presented along with their performance results on a grid.

  17. A Complex Systems Model Approach to Quantified Mineral Resource Appraisal

    USGS Publications Warehouse

    Gettings, M.E.; Bultman, M.W.; Fisher, F.S.

    2004-01-01

    For federal and state land management agencies, mineral resource appraisal has evolved from value-based to outcome-based procedures wherein the consequences of resource development are compared with those of other management options. Complex systems modeling is proposed as a general framework in which to build models that can evaluate outcomes. Three frequently used methods of mineral resource appraisal (subjective probabilistic estimates, weights of evidence modeling, and fuzzy logic modeling) are discussed to obtain insight into methods of incorporating complexity into mineral resource appraisal models. Fuzzy logic and weights of evidence are most easily utilized in complex systems models. A fundamental product of new appraisals is the production of reusable, accessible databases and methodologies so that appraisals can easily be repeated with new or refined data. The data are representations of complex systems and must be so regarded if all of their information content is to be utilized. The proposed generalized model framework is applicable to mineral assessment and other geoscience problems. We begin with a (fuzzy) cognitive map using (+1,0,-1) values for the links and evaluate the map for various scenarios to obtain a ranking of the importance of various links. Fieldwork and modeling studies identify important links and help identify unanticipated links. Next, the links are given membership functions in accordance with the data. Finally, processes are associated with the links; ideally, the controlling physical and chemical events and equations are found for each link. After calibration and testing, this complex systems model is used for predictions under various scenarios.
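
    The first step described, evaluating a (+1, 0, -1) cognitive map under scenarios, amounts to iterating a concept-state vector through the signed link matrix with a squashing function. A minimal sketch; the logistic squashing and the three-concept example are illustrative assumptions, not the authors' specific map:

```python
import numpy as np

def run_fcm(W, state, steps=50):
    """Iterate a fuzzy cognitive map: W[i, j] is the influence of concept
    i on concept j ((+1, 0, -1) initially, fuzzy weights after
    calibration); the logistic function keeps activations in (0, 1)."""
    for _ in range(steps):
        state = 1.0 / (1.0 + np.exp(-(state @ W)))
    return state

# Toy three-concept map with one negative feedback link.
W = np.array([[ 0, 1, 0],
              [ 0, 0, 1],
              [-1, 0, 0]])
print(run_fcm(W, np.array([0.9, 0.1, 0.1])))
```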

  18. Turbulence spectra in the noise source regions of the flow around complex surfaces

    NASA Technical Reports Server (NTRS)

    Olsen, W. A.; Boldman, D. R.

    1983-01-01

    The complex turbulent flow around three complex surfaces was measured in detail with a hot wire. The measured data include extensive spatial surveys of the mean velocity and turbulence intensity, and measurements of the turbulence spectra and scale length at many locations. This report completes the publication of the turbulence data by summarizing the turbulence spectra measured within the noise source regions of the flow. The results suggest some useful simplifications for modeling the very complex turbulent flow around complex surfaces in aeroacoustic predictive models. The turbulence spectra also show that noise data from scale models of moderate size can be accurately scaled up to full size.

  19. Effect of shoulder model complexity in upper-body kinematics analysis of the golf swing.

    PubMed

    Bourgain, M; Hybois, S; Thoreux, P; Rouillon, O; Rouch, P; Sauret, C

    2018-06-25

    The golf swing is a complex full body movement during which the spine and shoulders are highly involved. In order to determine shoulder kinematics during this movement, multibody kinematics optimization (MKO) can be recommended to limit the effect of the soft tissue artifact and to avoid joint dislocations or bone penetration in reconstructed kinematics. Classically, in golf biomechanics research, the shoulder is represented by a 3 degrees-of-freedom model representing the glenohumeral joint. More complex and physiological models are already provided in the scientific literature. Particularly, the model used in this study was a full body model and also described motions of clavicles and scapulae. This study aimed at quantifying the effect of utilizing a more complex and physiological shoulder model when studying the golf swing. Results obtained on 20 golfers showed that a more complex and physiologically-accurate model can more efficiently track experimental markers, which resulted in differences in joint kinematics. Hence, the model with 3 degrees-of-freedom between the humerus and the thorax may be inadequate when combined with MKO and a more physiological model would be beneficial. Finally, results would also be improved through a subject-specific approach for the determination of the segment lengths. Copyright © 2018 Elsevier Ltd. All rights reserved.

  20. Abstraction and model evaluation in category learning.

    PubMed

    Vanpaemel, Wolf; Storms, Gert

    2010-05-01

    Thirty previously published data sets, from seminal category learning tasks, are reanalyzed using the varying abstraction model (VAM). Unlike a prototype-versus-exemplar analysis, which focuses on extreme levels of abstraction only, a VAM analysis also considers the possibility of partial abstraction. Whereas most data sets support no abstraction when only the extreme possibilities are considered, we show that evidence for abstraction can be provided using the broader view on abstraction provided by the VAM. The present results generalize earlier demonstrations of partial abstraction (Vanpaemel & Storms, 2008), in which only a small number of data sets was analyzed. Following the dominant modus operandi in category learning research, Vanpaemel and Storms evaluated the models on their best fit, a practice known to ignore the complexity of the models under consideration. In the present study, in contrast, model evaluation relies not only on the maximum likelihood, but also on the marginal likelihood, which is sensitive to model complexity. Finally, using a large recovery study, it is demonstrated that, across the 30 data sets, complexity differences between the models in the VAM family are small. This indicates that a (computationally challenging) complexity-sensitive model evaluation method is uncalled for, and that the use of a (computationally straightforward) complexity-insensitive model evaluation method is justified.

  1. Discovering Link Communities in Complex Networks by an Integer Programming Model and a Genetic Algorithm

    PubMed Central

    Li, Zhenping; Zhang, Xiang-Sun; Wang, Rui-Sheng; Liu, Hongwei; Zhang, Shihua

    2013-01-01

    Identification of communities in complex networks is an important topic and issue in many fields such as sociology, biology, and computer science. Communities are often defined as groups of related nodes or links that correspond to functional subunits in the corresponding complex systems. While most conventional approaches have focused on discovering communities of nodes, some recent studies start partitioning links to find overlapping communities straightforwardly. In this paper, we propose a new quantity function for link community identification in complex networks. Based on this quantity function we formulate the link community partition problem into an integer programming model which allows us to partition a complex network into overlapping communities. We further propose a genetic algorithm for link community detection which can partition a network into overlapping communities without knowing the number of communities. We test our model and algorithm on both artificial networks and real-world networks. The results demonstrate that the model and algorithm are efficient in detecting overlapping community structure in complex networks. PMID:24386268

  2. The Effect of Sensor Performance on Safe Minefield Transit

    DTIC Science & Technology

    2002-12-01

    the results of the simpler model are not good approximations of the results obtained with the more complex model, suggesting that even greater complexity in maneuver modeling may be desirable for some purposes.

  3. Some Approaches to Modeling Complex Information Systems.

    ERIC Educational Resources Information Center

    Rao, V. Venkata; Zunde, Pranas

    1982-01-01

    Brief discussion of state-of-the-art of modeling complex information systems distinguishes between macrolevel and microlevel modeling of such systems. Network layout and hierarchical system models, simulation, information acquisition and dissemination, databases and information storage, and operating systems are described and assessed. Thirty-four…

  4. Development of structural model of adaptive training complex in ergatic systems for professional use

    NASA Astrophysics Data System (ADS)

    Obukhov, A. D.; Dedov, D. L.; Arkhipov, A. E.

    2018-03-01

    The article considers the structural model of the adaptive training complex (ATC), which reflects the interrelations between the hardware, software and mathematical models of the ATC and describes the processes in this subject area. A description of the main components of the software and hardware complex, their interaction and their functioning within the common system is given. The article also briefly describes the mathematical models of personnel activity, the technical system and the influences whose interactions formalize the regularities of ATC functioning. Studying the main objects of training complexes and the connections between them will make practical implementation of ATC in ergatic systems for professional use possible.

  5. Assessment of wear dependence parameters in complex model of cutting tool wear

    NASA Astrophysics Data System (ADS)

    Antsev, A. V.; Pasko, N. I.; Antseva, N. V.

    2018-03-01

    This paper addresses the wear dependence of the generic efficient life period of cutting tools, treated as an aggregate of the law of tool wear rate distribution and the dependence of that law's parameters on the cutting mode, factoring in randomness as exemplified by the complex model of wear. The complex model of wear takes into account the variance of cutting properties within one batch of tools, the variance in machinability within one batch of workpieces, and the stochastic nature of the wear process itself. A technique for assessing the wear dependence parameters in a complex model of cutting tool wear is provided, supported by a numerical example.

  6. An egalitarian network model for the emergence of simple and complex cells in visual cortex

    PubMed Central

    Tao, Louis; Shelley, Michael; McLaughlin, David; Shapley, Robert

    2004-01-01

    We explain how simple and complex cells arise in a large-scale neuronal network model of the primary visual cortex of the macaque. Our model consists of ≈4,000 integrate-and-fire, conductance-based point neurons, representing the cells in a small, 1-mm2 patch of an input layer of the primary visual cortex. In the model the local connections are isotropic and nonspecific, and convergent input from the lateral geniculate nucleus confers cortical cells with orientation and spatial phase preference. The balance between lateral connections and lateral geniculate nucleus drive determines whether individual neurons in this recurrent circuit are simple or complex. The model reproduces qualitatively the experimentally observed distributions of both extracellular and intracellular measures of simple and complex response. PMID:14695891
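
    The model's building block, a conductance-based integrate-and-fire point neuron, can be sketched in a few lines. The parameter values below are generic placeholders, not those of the macaque V1 model:

```python
import numpy as np

def lif_step(v, g_exc, g_inh, dt=0.1, v_rest=0.0, v_thresh=1.0,
             e_exc=4.67, e_inh=-0.67, tau=20.0):
    """One Euler step of dv/dt = -(v - v_rest)/tau - g_e (v - E_e)
    - g_i (v - E_i); conductances pull the voltage toward their
    reversal potentials, and crossing threshold resets the neuron."""
    dv = (-(v - v_rest) / tau
          - g_exc * (v - e_exc)
          - g_inh * (v - e_inh)) * dt
    v = v + dv
    spiked = v >= v_thresh
    return np.where(spiked, v_rest, v), spiked
```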

  7. Comparing an annual and daily time-step model for predicting field-scale P loss

    USDA-ARS?s Scientific Manuscript database

    Several models with varying degrees of complexity are available for describing P movement through the landscape. The complexity of these models is dependent on the amount of data required by the model, the number of model parameters needed to be estimated, the theoretical rigor of the governing equa...

  8. EVALUATING PREDICTIVE ERRORS OF A COMPLEX ENVIRONMENTAL MODEL USING A GENERAL LINEAR MODEL AND LEAST SQUARE MEANS

    EPA Science Inventory

    A General Linear Model (GLM) was used to evaluate the deviation of predicted values from expected values for a complex environmental model. For this demonstration, we used the default level interface of the Regional Mercury Cycling Model (R-MCM) to simulate epilimnetic total mer...

  9. Modeling complexity in engineered infrastructure system: Water distribution network as an example

    NASA Astrophysics Data System (ADS)

    Zeng, Fang; Li, Xiang; Li, Ke

    2017-02-01

    The complex topology and adaptive behavior of infrastructure systems are driven both by self-organization of demand and by rigid engineering solutions. Engineering complex systems therefore requires a method balancing holism and reductionism. To model the growth of water distribution networks, a complex network model was developed combining local optimization rules with engineering considerations; a toy version is sketched below. Demand node generation is dynamic and follows the scaling law of urban growth. The proposed model can generate a water distribution network (WDN) similar to reported real-world WDNs in several structural properties. Comparison with different modeling approaches indicates that a realistic demand node distribution and the co-evolution of demand nodes and the network are important for simulating real complex networks. The simulation results indicate that the efficiency of water distribution networks is exponentially affected by the urban growth pattern. By contrast, the improvement in efficiency achievable through engineering optimization is limited and relatively insignificant. The redundancy and robustness, on the other hand, can be significantly improved through engineering methods.
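
    A toy version of such a growth model, connecting each arriving demand node to its nearest existing node and occasionally closing a loop for redundancy, shows the flavour of combining local optimization with engineering rules. The uniform node placement and fixed loop probability are simplifying assumptions; the paper's model additionally follows the scaling law of urban growth:

```python
import math
import random

def grow_network(n_nodes, loop_prob=0.2, seed=0):
    """Grow a toy water distribution network: nearest-neighbour
    attachment builds the tree-like backbone, occasional extra edges
    add the loops engineers build in for redundancy."""
    random.seed(seed)
    nodes = [(0.5, 0.5)]          # the source
    edges = []
    for _ in range(n_nodes - 1):
        p = (random.random(), random.random())
        nearest = sorted(range(len(nodes)),
                         key=lambda i: math.dist(p, nodes[i]))
        nodes.append(p)
        new = len(nodes) - 1
        edges.append((nearest[0], new))           # local optimization
        if len(nearest) > 1 and random.random() < loop_prob:
            edges.append((nearest[1], new))       # engineered redundancy
    return nodes, edges
```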

  10. Building a pseudo-atomic model of the anaphase-promoting complex.

    PubMed

    Kulkarni, Kiran; Zhang, Ziguo; Chang, Leifu; Yang, Jing; da Fonseca, Paula C A; Barford, David

    2013-11-01

    The anaphase-promoting complex (APC/C) is a large E3 ubiquitin ligase that regulates progression through specific stages of the cell cycle by coordinating the ubiquitin-dependent degradation of cell-cycle regulatory proteins. Depending on the species, the active form of the APC/C consists of 14-15 different proteins that assemble into a 20-subunit complex with a mass of approximately 1.3 MDa. A hybrid approach of single-particle electron microscopy and protein crystallography of individual APC/C subunits has been applied to generate pseudo-atomic models of various functional states of the complex. Three approaches for assigning regions of the EM-derived APC/C density map to specific APC/C subunits are described. This information was used to dock atomic models of APC/C subunits, determined either by protein crystallography or homology modelling, to specific regions of the APC/C EM map, allowing the generation of a pseudo-atomic model corresponding to 80% of the entire complex.

  11. A musculoskeletal model of the elbow joint complex

    NASA Technical Reports Server (NTRS)

    Gonzalez, Roger V.; Barr, Ronald E.; Abraham, Lawrence D.

    1993-01-01

    This paper describes a musculoskeletal model that represents human elbow flexion-extension and forearm pronation-supination. Musculotendon parameters and the skeletal geometry were determined for the musculoskeletal model in the analysis of ballistic elbow joint complex movements. The key objective was to develop a computational model, guided by optimal control, to investigate the relationship among patterns of muscle excitation, individual muscle forces, and movement kinematics. The model was verified using experimental kinematic, torque, and electromyographic data from volunteer subjects performing both isometric and ballistic elbow joint complex movements. In general, the model predicted kinematic and muscle excitation patterns similar to what was experimentally measured.

  12. Geometric modeling of subcellular structures, organelles, and multiprotein complexes

    PubMed Central

    Feng, Xin; Xia, Kelin; Tong, Yiying; Wei, Guo-Wei

    2013-01-01

    Recently, the structure, function, stability, and dynamics of subcellular structures, organelles, and multi-protein complexes have emerged as a leading interest in structural biology. Geometric modeling not only provides visualizations of shapes for large biomolecular complexes but also fills the gap between structural information and theoretical modeling, and enables the understanding of function, stability, and dynamics. This paper introduces a suite of computational tools for volumetric data processing, information extraction, surface mesh rendering, geometric measurement, and curvature estimation of biomolecular complexes. Particular emphasis is given to the modeling of cryo-electron microscopy data. Lagrangian triangle meshes are employed for the surface representation. On the basis of this representation, algorithms are developed for surface area and surface-enclosed volume calculation, and curvature estimation. Methods for volumetric meshing are also presented. Because technological development in computer science and mathematics has led to multiple choices at each stage of the geometric modeling, we discuss the rationales in the design and selection of various algorithms. Analytical models are designed to test the computational accuracy and convergence of the proposed algorithms. Finally, we select a set of six cryo-electron microscopy data sets representing typical subcellular complexes to demonstrate the efficacy of the proposed algorithms in handling biomolecular surfaces and explore their capability of geometric characterization of binding targets. This paper offers a comprehensive protocol for the geometric modeling of subcellular structures, organelles, and multiprotein complexes. PMID:23212797
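
    Two of the basic measurements named above, total surface area and surface-enclosed volume, can be computed directly from a closed, outward-oriented triangle mesh using the divergence theorem. A minimal sketch, with a tetrahedron standing in for a biomolecular surface mesh:

        def cross(u, v):
            return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

        def sub(u, v):
            return (u[0]-v[0], u[1]-v[1], u[2]-v[2])

        def dot(u, v):
            return u[0]*v[0] + u[1]*v[1] + u[2]*v[2]

        def area_and_volume(verts, faces):
            area = vol = 0.0
            for i, j, k in faces:      # faces oriented counterclockwise seen from outside
                a, b, c = verts[i], verts[j], verts[k]
                n = cross(sub(b, a), sub(c, a))
                area += 0.5 * dot(n, n) ** 0.5
                vol += dot(a, cross(b, c)) / 6.0   # signed tetra volume w.r.t. origin
            return area, vol

        verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
        faces = [(1, 2, 3), (0, 2, 1), (0, 1, 3), (0, 3, 2)]
        print(area_and_volume(verts, faces))   # -> (~2.366, 0.1667 = 1/6)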

  13. Emulator-assisted data assimilation in complex models

    NASA Astrophysics Data System (ADS)

    Margvelashvili, Nugzar Yu; Herzfeld, Mike; Rizwi, Farhan; Mongin, Mathieu; Baird, Mark E.; Jones, Emlyn; Schaffelke, Britta; King, Edward; Schroeder, Thomas

    2016-09-01

    Emulators are surrogates of complex models that run orders of magnitude faster than the original model. The utility of emulators for data assimilation into ocean models is still not well understood. The high complexity of ocean models translates into high uncertainty in the corresponding emulators, which may undermine the quality of assimilation schemes based on such emulators. Numerical experiments with a chaotic Lorenz-95 model are conducted to illustrate this point and suggest a strategy to alleviate the problem through localization of the emulation and data assimilation procedures. Insights gained through these experiments are used to design and implement a data assimilation scenario for a 3D fine-resolution sediment transport model of the Great Barrier Reef (GBR), Australia.
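
    The Lorenz-95 testbed (more commonly written Lorenz-96) is compact enough to sketch in full. The emulator and assimilation layers are omitted here; forcing F = 8, 40 state variables, and an RK4 step are the conventional choices rather than settings taken from the paper:

        import numpy as np

        F, N = 8.0, 40                        # forcing and state dimension

        def l96(x):
            # dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F  (cyclic indices)
            return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

        def rk4_step(x, dt):
            k1 = l96(x)
            k2 = l96(x + 0.5 * dt * k1)
            k3 = l96(x + 0.5 * dt * k2)
            k4 = l96(x + dt * k3)
            return x + dt * (k1 + 2*k2 + 2*k3 + k4) / 6.0

        x = F * np.ones(N)
        x[0] += 0.01                          # small perturbation triggers chaos
        for _ in range(2000):                 # 10 model time units at dt = 0.005
            x = rk4_step(x, 0.005)
        print(x[:5])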

  14. Introduction to a special section on ecohydrology of semiarid environments: Confronting mathematical models with ecosystem complexity

    NASA Astrophysics Data System (ADS)

    Svoray, Tal; Assouline, Shmuel; Katul, Gabriel

    2015-11-01

    Current literature provides a large number of publications about ecohydrological processes and their effect on the biota in drylands. Given the limited laboratory and field experiments in such systems, many of these publications are based on mathematical models of varying complexity. The underlying implicit assumption is that the data set used to evaluate these models covers the parameter space of conditions that characterize drylands and that the models represent the actual processes with acceptable certainty. However, the question arises: to what extent are these mathematical models valid when confronted with observed ecosystem complexity? This Introduction reviews the 16 papers that comprise the Special Section on Eco-hydrology of Semiarid Environments: Confronting Mathematical Models with Ecosystem Complexity. The subjects studied in these papers include rainfall regime, infiltration and preferential flow, evaporation and evapotranspiration, annual net primary production, dispersal and invasion, and vegetation greening. The findings in the papers published in this Special Section show that innovative mathematical modeling approaches can represent actual field measurements. Hence, there are strong grounds for suggesting that mathematical models can contribute to a greater understanding of ecosystem complexity through characterization of the space-time dynamics of biomass and water storage as well as their multiscale interactions. However, the generality of the models and their low-dimensional representation of many processes may also be a "curse" that results in failures when the particulars of an ecosystem are required. It is envisaged that the search for a unifying "general" model, while seductive, may remain elusive in the foreseeable future. It is for this reason that improving the merger between experiments and models of various degrees of complexity continues to shape the future research agenda.

  15. Mathematical Models to Determine Stable Behavior of Complex Systems

    NASA Astrophysics Data System (ADS)

    Sumin, V. I.; Dushkin, A. V.; Smolentseva, T. E.

    2018-05-01

    The paper analyzes the possibility of predicting the functioning of a complex dynamic system with a significant amount of circulating information and a large number of random factors impacting its functioning. The functioning of the complex dynamic system is described in terms of chaotic states, self-organized criticality, and bifurcations. This problem may be resolved by modeling such systems as dynamical ones, without applying stochastic models, and by taking strange attractors into account.

  16. Tracer transport in soils and shallow groundwater: model abstraction with modern tools

    USDA-ARS?s Scientific Manuscript database

    The vadose zone controls contaminant transport from the surface to groundwater, and modeling transport in the vadose zone has become a burgeoning field. Exceedingly complex models of subsurface contaminant transport are often inefficient. Model abstraction is the methodology for reducing the complexity of a...

  17. Fast computation of derivative based sensitivities of PSHA models via algorithmic differentiation

    NASA Astrophysics Data System (ADS)

    Leövey, Hernan; Molkenthin, Christian; Scherbaum, Frank; Griewank, Andreas; Kuehn, Nicolas; Stafford, Peter

    2015-04-01

    Probabilistic seismic hazard analysis (PSHA) is the preferred tool for estimating the potential ground-shaking hazard due to future earthquakes at a site of interest. A modern PSHA represents a complex framework which combines different models with possibly many inputs. Sensitivity analysis is a valuable tool for quantifying changes in a model output as inputs are perturbed, identifying critical input parameters, and obtaining insight into the model behavior. Differential sensitivity analysis relies on calculating first-order partial derivatives of the model output with respect to its inputs. Moreover, derivative-based global sensitivity measures (Sobol' & Kucherenko '09) can be used in practice to detect non-essential inputs of the models, thus restricting the focus of attention to a possibly much smaller set of inputs. Nevertheless, obtaining first-order partial derivatives of complex models with traditional approaches can be very challenging, and the computational cost usually increases linearly with the number of inputs appearing in the models. In this study we show how Algorithmic Differentiation (AD) tools can be used in a complex framework such as PSHA to successfully estimate derivative-based sensitivities, as is the case in various other domains such as meteorology or aerodynamics, without any significant increase in the computational complexity required for the original computations. First we demonstrate the feasibility of the AD methodology by comparing AD-derived sensitivities to analytically derived sensitivities for a basic case of PSHA using a simple ground-motion prediction equation. In a second step, we derive sensitivities via AD for a more complex PSHA study using a ground-motion attenuation relation based on a stochastic method to simulate strong motion. The presented approach is general enough to accommodate more advanced PSHA studies of higher complexity.
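
    The core AD idea can be sketched with forward-mode dual numbers: one evaluation propagates an exact derivative alongside the value, with no finite-difference step-size tuning. The attenuation relation and its coefficients below are hypothetical, not a published GMPE:

        import math

        class Dual:
            def __init__(self, val, dot=0.0):
                self.val, self.dot = val, dot
            def __add__(self, o):
                o = o if isinstance(o, Dual) else Dual(o)
                return Dual(self.val + o.val, self.dot + o.dot)
            __radd__ = __add__
            def __sub__(self, o):
                o = o if isinstance(o, Dual) else Dual(o)
                return Dual(self.val - o.val, self.dot - o.dot)
            def __mul__(self, o):
                o = o if isinstance(o, Dual) else Dual(o)
                return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
            __rmul__ = __mul__

        def dlog(x):
            # chain rule for the natural logarithm
            return Dual(math.log(x.val), x.dot / x.val)

        def ln_sa(m, r):
            # toy attenuation relation: ln SA = c0 + c1*m - c2*ln(r + 10)
            return 1.0 + 0.9 * m - 1.2 * dlog(r + 10.0)

        m, r = Dual(6.5, 1.0), Dual(30.0)      # seed dM = 1 -> d(ln SA)/dM
        y = ln_sa(m, r)
        print(y.val, y.dot)                    # derivative w.r.t. magnitude = 0.9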

  18. Low frequency complex dielectric (conductivity) response of dilute clay suspensions: Modeling and experiments.

    PubMed

    Hou, Chang-Yu; Feng, Ling; Seleznev, Nikita; Freed, Denise E

    2018-09-01

    In this work, we establish an effective medium model to describe the low-frequency complex dielectric (conductivity) dispersion of dilute clay suspensions. We use previously obtained low-frequency polarization coefficients for a charged oblate spheroidal particle immersed in an electrolyte as the building block for the Maxwell Garnett mixing formula to model the dilute clay suspension. The complex conductivity phase dispersion exhibits a near-resonance peak when the clay grains have a narrow size distribution. The peak frequency is associated with the size distribution as well as the shape of clay grains and is often referred to as the characteristic frequency. In contrast, if the size of the clay grains has a broad distribution, the phase peak is broadened and can disappear into the background of the canonical phase response of the brine. To benchmark our model, the low-frequency dispersion of the complex conductivity of dilute clay suspensions is measured using a four-point impedance measurement, which can be reliably calibrated in the frequency range between 0.1 Hz and 10 kHz. By using a minimal number of fitting parameters when reliable information is available as input for the model and carefully examining the issue of potential over-fitting, we found that our model can be used to fit the measured dispersion of the complex conductivity with reasonable parameters. The good match between the modeled and experimental complex conductivity dispersion allows us to argue that our simplified model captures the essential physics for describing the low-frequency dispersion of the complex conductivity of dilute clay suspensions. Copyright © 2018 Elsevier Inc. All rights reserved.
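
    The mixing step can be sketched with the classical Maxwell Garnett formula for spherical inclusions; the paper instead inserts low-frequency polarization coefficients for charged oblate spheroids, which is what produces the near-resonance phase peak. All material values below are assumed:

        import numpy as np

        eps0 = 8.854e-12
        f = 0.05                                   # clay volume fraction (assumed)
        freq = np.logspace(-1, 4, 6)               # 0.1 Hz .. 10 kHz
        omega = 2 * np.pi * freq

        sig_w = 1.0 + 1j * omega * 80 * eps0       # brine: 1 S/m, eps_r ~ 80 (assumed)
        sig_g = 0.01 + 1j * omega * 5 * eps0       # grain: assumed effective values

        # Maxwell Garnett for spherical inclusions:
        # sig_eff = sig_w * (sig_g + 2 sig_w + 2 f (sig_g - sig_w))
        #                 / (sig_g + 2 sig_w -   f (sig_g - sig_w))
        sig_eff = sig_w * (sig_g + 2*sig_w + 2*f*(sig_g - sig_w)) \
                        / (sig_g + 2*sig_w - f*(sig_g - sig_w))

        for nu, s in zip(freq, sig_eff):
            print(f"{nu:10.1f} Hz  |sigma| = {abs(s):.4f} S/m  "
                  f"phase = {np.angle(s) * 1e3:.3f} mrad")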

  19. Risk Modeling of Interdependent Complex Systems of Systems: Theory and Practice.

    PubMed

    Haimes, Yacov Y

    2018-01-01

    The emergence of the complexity characterizing our systems of systems (SoS) requires a reevaluation of the way we model, assess, manage, communicate, and analyze the risk thereto. Current models for risk analysis of emergent complex SoS are insufficient because too often they rely on the same risk functions and models used for single systems. These models commonly fail to incorporate the complexity derived from the networks of interdependencies and interconnectedness (I-I) characterizing SoS. There is a need to reevaluate currently practiced risk analysis to respond to this reality by examining, and thus comprehending, what makes emergent SoS complex. The key to evaluating the risk to SoS lies in understanding the genesis of characterizing I-I of systems manifested through shared states and other essential entities within and among the systems that constitute SoS. The term "essential entities" includes shared decisions, resources, functions, policies, decisionmakers, stakeholders, organizational setups, and others. This undertaking can be accomplished by building on state-space theory, which is fundamental to systems engineering and process control. This article presents a theoretical and analytical framework for modeling the risk to SoS with two case studies performed with the MITRE Corporation and demonstrates the pivotal contributions made by shared states and other essential entities to modeling and analysis of the risk to complex SoS. A third case study highlights the multifarious representations of SoS, which require harmonizing the risk analysis process currently applied to single systems when applied to complex SoS. © 2017 Society for Risk Analysis.

  20. Modeling of Wall-Bounded Complex Flows and Free Shear Flows

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Zhu, Jiang; Lumley, John L.

    1994-01-01

    Various wall-bounded flows with complex geometries and free shear flows have been studied with a newly developed realizable Reynolds stress algebraic equation model. The model development is based on the invariant theory in continuum mechanics. This theory enables us to formulate a general constitutive relation for the Reynolds stresses. Pope was the first to introduce this kind of constitutive relation to turbulence modeling. In our study, realizability is imposed on the truncated constitutive relation to determine the coefficients so that, unlike the standard k-ε eddy viscosity model, the present model will not produce negative normal stresses in any situation of rapid distortion. Calculations based on the present model have shown encouraging success in modeling complex turbulent flows.
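
    The realizability idea can be shown schematically (this is not the paper's full algebraic stress model): under the Boussinesq relation <uu> = (2/3)k - 2 C_mu (k^2/eps) S11, a fixed C_mu = 0.09 drives the normal stress negative at large strain, whereas bounding C_mu by 1/(3 s*), with s* = S k/eps, keeps it non-negative:

        k, eps = 1.0, 1.0
        for s_star in (1.0, 3.0, 6.0, 12.0):          # dimensionless strain S k / eps
            S = s_star * eps / k
            uu_std = (2/3)*k - 2*0.09*(k*k/eps)*S     # standard k-epsilon value
            c_mu = min(0.09, 1.0 / (3.0 * s_star))    # realizability bound on C_mu
            uu_real = (2/3)*k - 2*c_mu*(k*k/eps)*S
            print(f"s*={s_star:5.1f}  standard <uu>={uu_std:+.3f}  "
                  f"realizable <uu>={uu_real:+.3f}")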

  1. Representing spatial and temporal complexity in ecohydrological models: a meta-analysis focusing on groundwater - surface water interactions

    NASA Astrophysics Data System (ADS)

    McDonald, Karlie; Mika, Sarah; Kolbe, Tamara; Abbott, Ben; Ciocca, Francesco; Marruedo, Amaia; Hannah, David; Schmidt, Christian; Fleckenstein, Jan; Krause, Stefan

    2016-04-01

    Sub-surface hydrologic processes are highly dynamic, varying spatially and temporally with strong links to the geomorphology and hydrogeologic properties of an area. This spatial and temporal complexity is a critical regulator of biogeochemical and ecological processes within the groundwater - surface water (GW-SW) ecohydrological interface and adjacent ecosystems. Many GW-SW models have attempted to capture this spatial and temporal complexity with varying degrees of success. The incorporation of spatial and temporal complexity within GW-SW model configuration is important for investigating interactions with transient storage and subsurface geology, infiltration and recharge, and the mass balance of exchange fluxes at the GW-SW ecohydrological interface. Additionally, characterising spatial and temporal complexity in GW-SW models is essential for deriving predictions under realistic environmental conditions. In this paper we conduct a systematic Web of Science meta-analysis of conceptual, hydrodynamic, and reactive and heat transport models of the GW-SW ecohydrological interface since 2004 to explore how these models handled spatial and temporal complexity. The freshwater - groundwater ecohydrological interface was the most commonly represented in publications between 2004 and 2014 (91% of papers), followed by marine (6%) and estuarine systems (3%). Of the GW-SW models published since 2004, 52% focused on hydrodynamic processes and <15% covered more than one process (e.g. heat and reactive transport). Within the hydrodynamic subset, 25% of models focused on a vertical depth of <5 m. The primary scientific and technological limitations on incorporating spatial and temporal variability into GW-SW models are identified as the inclusion of woody debris, carbon sources, subsurface geological structures and bioclogging in model parameterization. The technological limitations influence the types of models applied, such as hydrostatic coupled models and fully intrinsic saturated and unsaturated models, and the assumptions or simplifications scientists apply to investigate the GW-SW ecohydrological interface. We investigated the types of modelling approaches applied across different scales (site, reach, catchment, nested catchments) and assessed the simplifications in environmental conditions and complexity that are commonly made in model configuration. Understanding the theoretical concepts that underpin current modelling approaches is critical for scientists to develop measures to derive predictions from realistic environmental conditions at management-relevant scales and to establish best-practice modelling approaches for improving the scientific understanding and management of the GW-SW interface. Additionally, the assessment of current modelling approaches informs our proposed framework for the development of GW-SW models in the future. The framework presented aims to increase future scientific, technological and management integration and the identification of research priorities, allowing spatial and temporal complexity to be better incorporated into GW-SW models.

  2. Application of surface complexation models to anion adsorption by natural materials.

    PubMed

    Goldberg, Sabine

    2014-10-01

    Various chemical models of ion adsorption are presented and discussed. Chemical models, such as surface complexation models, provide a molecular description of anion adsorption reactions using an equilibrium approach. Two such models, the constant capacitance model and the triple layer model, are described in the present study. Characteristics common to all the surface complexation models are equilibrium constant expressions, mass and charge balances, and surface activity coefficient electrostatic potential terms. Methods for determining parameter values for surface site density, capacitances, and surface complexation constants also are discussed. Spectroscopic experimental methods of establishing ion adsorption mechanisms include vibrational spectroscopy, nuclear magnetic resonance spectroscopy, electron spin resonance spectroscopy, X-ray absorption spectroscopy, and X-ray reflectivity. Experimental determinations of point of zero charge shifts and the ionic strength dependence of adsorption results, as well as molecular modeling calculations, also can be used to deduce adsorption mechanisms. Applications of the surface complexation models to heterogeneous natural materials, such as soils, using the component additivity and the generalized composite approaches are described. Emphasis is on the generalized composite approach for predicting anion adsorption by soils. Continuing research is needed to develop consistent and realistic protocols for describing ion adsorption reactions on soil minerals and soils. The availability of standardized model parameter databases for use in chemical speciation-transport models is critical. Published 2014 Wiley Periodicals Inc. on behalf of SETAC. This article is a US Government work and, as such, is in the public domain in the United States of America.
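
    A drastically simplified sketch of a surface complexation calculation (hypothetical constants; the electrostatic correction term exp(-F*psi/RT) that distinguishes the constant capacitance model is omitted, leaving a conditional-constant ligand-exchange equilibrium SOH + H+ + A- <-> SA + H2O solved by bisection on the site and anion mass balances):

        LOG_K = 9.0        # conditional log K (hypothetical)
        SITES = 1.0e-4     # total surface sites, mol/L (hypothetical)
        A_TOT = 5.0e-5     # total anion, mol/L (hypothetical)

        def residual(x, h):
            # x = [SA]; zero when mass action and both mass balances hold
            return 10.0**LOG_K * (SITES - x) * (A_TOT - x) * h - x

        def solve_adsorbed(ph):
            h = 10.0**(-ph)
            lo, hi = 0.0, min(SITES, A_TOT)
            for _ in range(100):                 # bisection; residual is monotone
                mid = 0.5 * (lo + hi)
                if residual(mid, h) > 0.0:
                    lo = mid
                else:
                    hi = mid
            return 0.5 * (lo + hi)

        for ph in (4, 6, 8, 10):
            sa = solve_adsorbed(ph)
            print(f"pH {ph}: {100.0 * sa / A_TOT:5.1f}% of anion adsorbed")

    Even without the electrostatic term, the sketch reproduces the familiar anion adsorption envelope: uptake decreases as pH rises and protons become scarce.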

  3. Semiempirical Quantum Chemistry Model for the Lanthanides: RM1 (Recife Model 1) Parameters for Dysprosium, Holmium and Erbium

    PubMed Central

    Filho, Manoel A. M.; Dutra, José Diogo L.; Rocha, Gerd B.; Simas, Alfredo M.; Freire, Ricardo O.

    2014-01-01

    Complexes of dysprosium, holmium, and erbium find many applications as single-molecule magnets, as contrast agents for magnetic resonance imaging, as anti-cancer agents, in optical telecommunications, etc. Therefore, the development of tools that can be proven helpful to complex design is presently an active area of research. In this article, we advance a major improvement to the semiempirical description of lanthanide complexes: the Recife Model 1, RM1, model for the lanthanides, parameterized for the trications of Dy, Ho, and Er. By representing each lanthanide in the RM1 calculation as a three-electron atom with a set of 5d, 6s, and 6p semiempirical orbitals, the accuracy of the previous sparkle models, mainly concentrated on lanthanide-oxygen and lanthanide-nitrogen distances, is extended to other types of bonds in the trication complexes' coordination polyhedra, such as lanthanide-carbon, lanthanide-chlorine, etc. This is all the more important as, for example, lanthanide-carbon distances in the coordination polyhedra comprise about 30% of all distances for all complexes of Dy, Ho, and Er considered. Our results indicate that the average unsigned mean error for the lanthanide-carbon distances dropped from an average of 0.30 Å for the sparkle models to 0.04 Å for the RM1 model for the lanthanides, for a total of 509 such distances in the set of all Dy, Ho, and Er complexes considered. A similar improvement took place for the other distances as well, such as lanthanide-chlorine, lanthanide-bromine, lanthanide-phosphorus, and lanthanide-sulfur. Thus, the RM1 model for the lanthanides, advanced in this article, broadens the range of application of semiempirical models to lanthanide complexes by including many other types of bonds not adequately described by the previous models. PMID:24497945

  4. Rule-based modeling and simulations of the inner kinetochore structure.

    PubMed

    Tschernyschkow, Sergej; Herda, Sabine; Gruenert, Gerd; Döring, Volker; Görlich, Dennis; Hofmeister, Antje; Hoischen, Christian; Dittrich, Peter; Diekmann, Stephan; Ibrahim, Bashar

    2013-09-01

    Combinatorial complexity is a central problem when modeling biochemical reaction networks, since the association of a few components can give rise to a large variety of protein complexes. Available classical modeling approaches are often insufficient for the detailed analysis of very large and complex networks. Recently, we developed a new rule-based modeling approach that facilitates the analysis of spatial and combinatorially complex problems. Here, we explore for the first time how this approach can be applied to a specific biological system, the human kinetochore, which is a multi-protein complex involving over 100 proteins. Applying our freely available SRSim software to a large data set on kinetochore proteins in human cells, we construct a spatial rule-based simulation model of the human inner kinetochore. The model generates an estimate of the probability distribution of the inner kinetochore 3D architecture, and we show how to analyze this distribution using information theory. In our model, the formation of a bridge between CenpA- and H3-containing nucleosomes occurs efficiently only at the higher protein concentrations realized during S phase, but possibly not in G1. Above a certain nucleosome distance the protein bridge barely formed, pointing toward the importance of chromatin structure for kinetochore complex formation. We define a metric for the distance between structures that allows us to identify structural clusters. Using this modeling technique, we explore different hypothetical chromatin layouts. Applying a rule-based network analysis to the spatial kinetochore complex geometry allowed us to integrate experimental data on kinetochore proteins, suggesting a 3D model of the human inner kinetochore architecture that is governed by a combinatorial algebraic reaction network. This reaction network can serve as a bridge between multiple scales of modeling. Our approach can be applied to other systems beyond kinetochores. Copyright © 2013 Elsevier Ltd. All rights reserved.
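
    The stochastic flavor of such rule-based binding can be illustrated with a single association/dissociation rule, A + B <-> AB, simulated with Gillespie's algorithm. This is far simpler than SRSim, and all rates and copy numbers are invented, but it shows the same qualitative concentration effect on complex formation that the S-phase/G1 comparison asks about:

        import random

        random.seed(0)

        def gillespie(a, b, ab, k_on=1e-3, k_off=0.1, t_end=50.0):
            t = 0.0
            while t < t_end:
                rates = (k_on * a * b, k_off * ab)       # association, dissociation
                total = rates[0] + rates[1]
                if total == 0.0:
                    break
                t += random.expovariate(total)           # waiting time to next event
                if random.random() < rates[0] / total:
                    a, b, ab = a - 1, b - 1, ab + 1      # A + B -> AB
                else:
                    a, b, ab = a + 1, b + 1, ab - 1      # AB -> A + B
            return ab

        for copies in (20, 100, 500):                    # low ("G1") vs high ("S-phase")
            runs = [gillespie(copies, copies, 0) for _ in range(10)]
            print(f"{copies:4d} copies each: mean {sum(runs) / 10:.1f} complexes bound")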

  5. Molecular Basis for Structural Heterogeneity of an Intrinsically Disordered Protein Bound to a Partner by Combined ESI-IM-MS and Modeling

    NASA Astrophysics Data System (ADS)

    D'Urzo, Annalisa; Konijnenberg, Albert; Rossetti, Giulia; Habchi, Johnny; Li, Jinyu; Carloni, Paolo; Sobott, Frank; Longhi, Sonia; Grandori, Rita

    2015-03-01

    Intrinsically disordered proteins (IDPs) form biologically active complexes that can retain a high degree of conformational disorder, escaping structural characterization by conventional approaches. An example is offered by the complex between the intrinsically disordered NTAIL domain and the phosphoprotein X domain (PXD) from measles virus (MeV). Here, distinct conformers of the complex are detected by electrospray ionization-mass spectrometry (ESI-MS) and ion mobility (IM) techniques, yielding estimates for the solvent-accessible surface area (SASA) in solution and the average collision cross-section (CCS) in the gas phase. Computational modeling of the complex in solution, based on experimental constraints, provides atomic-resolution structural models featuring different levels of compactness. The resulting models indicate high structural heterogeneity. The intermolecular interactions are predominantly hydrophobic, not only in the ordered core of the complex, but also in the dynamic, disordered regions. Electrostatic interactions become involved in the more compact states. This system represents an illustrative example of a hydrophobic complex that could be directly detected in the gas phase by native mass spectrometry. This work represents the first attempt at modeling the entire NTAIL domain bound to PXD at atomic resolution.

  6. A Thermodynamic Model to Estimate the Formation of Complex Nitrides of Al x Mg(1- x)N in Silicon Steel

    NASA Astrophysics Data System (ADS)

    Luo, Yan; Zhang, Lifeng; Li, Ming; Sridhar, Seetharaman

    2018-06-01

    A complex nitride of AlxMg(1-x)N was observed in silicon steels. A thermodynamic model was developed to predict the ferrite/nitride equilibrium in the Fe-Al-Mg-N alloy system, using published binary solubility products for stoichiometric phases. The model was used to estimate the solubility product of the nitride compound, the equilibrium ferrite and nitride compositions, and the amounts of each phase, as a function of steel composition and temperature. In the current model, the molar ratio Al/(Al + Mg) in the complex nitride was high owing to the low dissolved magnesium content of the steel. For a steel containing 0.52 wt pct Als, 10 ppm T.Mg., and 20 ppm T.N. at 1100 K (827 °C), the complex nitride was expressed as Al0.99496Mg0.00504N and its solubility product was 2.95 × 10⁻⁷. In addition, the solution temperature of the complex nitride increased with increasing nitrogen and aluminum contents in the steel. The good agreement between the predicted and detected precipitate compositions validates the current model.

  7. Ambiguities in the identification of giant molecular cloud complexes from longitude-velocity diagrams

    NASA Technical Reports Server (NTRS)

    Adler, David S.; Roberts, William W., Jr.

    1992-01-01

    Techniques which use longitude-velocity diagrams to identify molecular cloud complexes in the disk of the Galaxy are investigated by means of model Galactic disks generated from N-body cloud-particle simulations. A procedure similar to the method used to reduce the low-level emission in Galactic l-v diagrams is employed to isolate complexes of emission in the model l-v diagram (LVCs) from the 'background' clouds. The LVCs produced in this manner yield a size-line-width relationship with a slope of 0.58 and a mass spectrum with a slope of 1.55, consistent with Galactic observations. It is demonstrated that associations identified as LVCs are often chance superpositions of clouds spread out along the line of sight in the disk of the model system. This indicates that the l-v diagram cannot be used to unambiguously determine the location of molecular cloud complexes in the model Galactic disk. The modeling results also indicate that the existence of a size-line-width relationship is not a reliable indicator of the physical nature of cloud complexes, in particular, whether the complexes are gravitationally bound objects.

  8. Complex fuzzy soft expert sets

    NASA Astrophysics Data System (ADS)

    Selvachandran, Ganeshsree; Hafeed, Nisren A.; Salleh, Abdul Razak

    2017-04-01

    Complex fuzzy sets and their accompanying theory, although in their infancy, have proven superior to classical type-1 fuzzy sets, owing to their ability to represent time-periodic problem parameters and to capture the seasonality of the fuzziness that exists in the elements of a set. These are important characteristics that are pervasive in most real-world problems. However, there are two major problems inherent in complex fuzzy sets: they lack a sufficient parameterization tool, and they have no mechanism to validate the values assigned to the membership functions of the elements in a set. To overcome these problems, we propose the notion of complex fuzzy soft expert sets, a hybrid model of complex fuzzy sets and soft expert sets. This model incorporates the advantages of complex fuzzy sets and soft sets, and has the added advantage of allowing users to know the opinions of all the experts in a single model without the need for any additional cumbersome operations. As such, this model effectively improves the accuracy of representation of problem parameters that are periodic in nature, besides having a higher level of computational efficiency than similar models in the literature.

  9. Application of Complex Adaptive Systems in Portfolio Management

    ERIC Educational Resources Information Center

    Su, Zheyuan

    2017-01-01

    Simulation-based methods are becoming a promising research tool in financial markets. A general Complex Adaptive System can be tailored to different application scenarios. Based on the current research, we built two models that would benefit portfolio management by utilizing Complex Adaptive Systems (CAS) in Agent-based Modeling (ABM) approach.…

  10. Rethinking Validation in Complex High-Stakes Assessment Contexts

    ERIC Educational Resources Information Center

    Koch, Martha J.; DeLuca, Christopher

    2012-01-01

    In this article we rethink validation within the complex contexts of high-stakes assessment. We begin by considering the utility of existing models for validation and argue that these models tend to overlook some of the complexities inherent to assessment use, including the multiple interpretations of assessment purposes and the potential…

  11. Elementary Teachers' Selection and Use of Visual Models

    ERIC Educational Resources Information Center

    Lee, Tammy D.; Jones, M. Gail

    2018-01-01

    As science grows in complexity, science teachers face an increasing challenge of helping students interpret models that represent complex science systems. Little is known about how teachers select and use models when planning lessons. This mixed methods study investigated the pedagogical approaches and visual models used by elementary in-service…

  12. Developing and Modeling Complex Social Interventions: Introducing the Connecting People Intervention

    ERIC Educational Resources Information Center

    Webber, Martin; Reidy, Hannah; Ansari, David; Stevens, Martin; Morris, David

    2016-01-01

    Objectives: Modeling the processes involved in complex social interventions is important in social work practice, as it facilitates their implementation and translation into different contexts. This article reports the process of developing and modeling the connecting people intervention (CPI), a model of practice that supports people with mental…

  13. Modeling Bivariate Longitudinal Hormone Profiles by Hierarchical State Space Models

    PubMed Central

    Liu, Ziyue; Cappola, Anne R.; Crofford, Leslie J.; Guo, Wensheng

    2013-01-01

    The hypothalamic-pituitary-adrenal (HPA) axis is crucial in coping with stress and maintaining homeostasis. Hormones produced by the HPA axis exhibit both complex univariate longitudinal profiles and complex relationships among different hormones. Consequently, modeling these multivariate longitudinal hormone profiles is a challenging task. In this paper, we propose a bivariate hierarchical state space model, in which each hormone profile is modeled by a hierarchical state space model, with both population-average and subject-specific components. The bivariate model is constructed by concatenating the univariate models based on the hypothesized relationship. Because of the flexible framework of the state space form, the resultant models not only can handle complex individual profiles, but also can incorporate complex relationships between two hormones, including both concurrent and feedback relationships. Estimation and inference are based on marginal likelihood and posterior means and variances. Computationally efficient Kalman filtering and smoothing algorithms are used for implementation. Application of the proposed method to a study of chronic fatigue syndrome and fibromyalgia reveals that the relationships between adrenocorticotropic hormone and cortisol in the patient group are weaker than in healthy controls. PMID:24729646
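
    The machinery underneath such models is the Kalman filter. A sketch for the simplest univariate local-level state space model (state x_t = x_{t-1} + w, observation y_t = x_t + v, with assumed variances), of which the paper's bivariate hierarchical model stacks far richer versions:

        import numpy as np

        rng = np.random.default_rng(0)
        Q, R, n = 0.1, 1.0, 200                 # state and observation variances (assumed)

        # simulate a slowly drifting "hormone level" with noisy observations
        x_true = np.cumsum(rng.normal(0, np.sqrt(Q), n))
        y = x_true + rng.normal(0, np.sqrt(R), n)

        x_hat, p = 0.0, 10.0                    # diffuse-ish initialization
        estimates = []
        for obs in y:
            p += Q                              # predict
            gain = p / (p + R)                  # Kalman gain
            x_hat += gain * (obs - x_hat)       # update with the innovation
            p *= (1.0 - gain)
            estimates.append(x_hat)

        err_filter = np.mean((np.array(estimates) - x_true) ** 2)
        err_raw = np.mean((y - x_true) ** 2)
        print(f"MSE raw obs: {err_raw:.3f}, filtered: {err_filter:.3f}")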

  15. Complexation of Cd, Ni, and Zn by DOC in polluted groundwater: A comparison of approaches using resin exchange, aquifer material sorption, and computer speciation models (WHAM and MINTEQA2)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christensen, J.B.; Christensen, T.H.

    1999-11-01

    Complexation of cadmium (Cd), nickel (Ni), and zinc (Zn) by dissolved organic carbon (DOC) in leachate-polluted groundwater was measured using a resin equilibrium method and an aquifer material sorption technique. The first method is commonly used in complexation studies, while the second method better represents aquifer conditions. The two approaches gave similar results. Metal-DOC complexation was measured over a range of DOC concentrations using the resin equilibrium method, and the results were compared to simulations made by two speciation models containing default databases on metal-DOC complexes (WHAM and MINTEQA2). The WHAM model gave reasonable estimates of Cd and Ni complexation by DOC for both leachate-polluted groundwater samples. The estimated effect of complexation differed less than 50% from the experimental values, corresponding to a deviation in the activity of the free metal ion of a factor of 2.5. The effect of DOC complexation for Zn was largely overestimated by the WHAM model, and it was found that using a binding constant of 1.7 instead of the default value of 1.3 would improve the fit between the simulations and the experimental data. The MINTEQA2 model gave reasonable predictions of the complexation of Cd and Zn by DOC, whereas deviations in the estimated activity of the free Ni²⁺ ion compared to experimental results were up to a factor of 5.

  16. Describing complex cells in primary visual cortex: a comparison of context and multi-filter LN models.

    PubMed

    Westö, Johan; May, Patrick J C

    2018-05-02

    Receptive field (RF) models are an important tool for deciphering neural responses to sensory stimuli. The two currently popular RF models are multi-filter linear-nonlinear (LN) models and context models. Models are, however, never correct and they rely on assumptions to keep them simple enough to be interpretable. As a consequence, different models describe different stimulus-response mappings, which may or may not be good approximations of real neural behavior. In the current study, we take up two tasks: First, we introduce new ways to estimate context models with realistic nonlinearities, that is, with logistic and exponential functions. Second, we evaluate context models and multi-filter LN models in terms of how well they describe recorded data from complex cells in cat primary visual cortex. Our results, based on single-spike information and correlation coefficients, indicate that context models outperform corresponding multi-filter LN models of equal complexity (measured in terms of number of parameters), with the best increase in performance being achieved by the novel context models. Consequently, our results suggest that the multi-filter LN-model framework is suboptimal for describing the behavior of complex cells: the context-model framework is clearly superior while still providing interpretable quantizations of neural behavior.
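
    The shared building block of both model families is the LN cascade. A single-filter sketch with a logistic output nonlinearity (a simple-cell-like toy, since describing complex cells is precisely where one filter stops sufficing; the filter shape and all parameters are assumed):

        import numpy as np

        rng = np.random.default_rng(1)
        T, D = 5000, 20
        stim = rng.normal(size=(T, D))             # white-noise stimulus history
        true_filter = np.exp(-np.arange(D) / 5.0)  # assumed temporal receptive field
        true_filter /= np.linalg.norm(true_filter)

        drive = stim @ true_filter                               # linear stage
        p_spike = 1.0 / (1.0 + np.exp(-(2.0 * drive - 1.0)))     # logistic nonlinearity
        spikes = rng.random(T) < p_spike                         # Bernoulli spiking

        # for white noise and an asymmetric nonlinearity, the spike-triggered
        # average recovers the filter direction
        sta = stim[spikes].mean(axis=0)
        sta /= np.linalg.norm(sta)
        print("filter/STA alignment:", float(sta @ true_filter))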

  17. Using multi-criteria analysis of simulation models to understand complex biological systems

    Treesearch

    Maureen C. Kennedy; E. David Ford

    2011-01-01

    Scientists frequently use computer-simulation models to help solve complex biological problems. Typically, such models are highly integrated, they produce multiple outputs, and standard methods of model analysis are ill suited for evaluating them. We show how multi-criteria optimization with Pareto optimality allows for model outputs to be compared to multiple system...
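
    The Pareto screen itself is short to state: a simulation run is kept if no other run is at least as good on every criterion and strictly better on at least one. A sketch on made-up error values:

        import numpy as np

        rng = np.random.default_rng(2)
        errors = rng.random((50, 3))           # 50 runs x 3 output criteria (invented)

        def pareto_mask(e):
            keep = np.ones(len(e), dtype=bool)
            for i in range(len(e)):
                # run j dominates i if it is <= everywhere and < somewhere
                dominated = (e <= e[i]).all(axis=1) & (e < e[i]).any(axis=1)
                keep[i] = not dominated.any()
            return keep

        front = pareto_mask(errors)
        print(f"{front.sum()} of {len(errors)} runs are Pareto-optimal")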

  18. Analysis of Immune Complex Structure by Statistical Mechanics and Light Scattering Techniques.

    NASA Astrophysics Data System (ADS)

    Busch, Nathan Adams

    1995-01-01

    The size and structure of immune complexes determine their behavior in the immune system. The chemical physics of complex formation is not well understood; this is due in part to inadequate characterization of the proteins involved, and in part to the lack of sufficiently well-developed theoretical techniques. Understanding complex formation will permit the rational design of strategies for inhibiting tissue deposition of the complexes. A statistical mechanical model of the proteins, based upon the theory of associating fluids, was developed. The multipole electrostatic potential for each protein used in this study was characterized by net protein charge, dipole moment magnitude, and dipole moment direction. The binding sites between the model antigen and antibodies were characterized by their net surface area, energy, and position relative to the dipole moment of the protein. The equilibrium binding graphs generated with the protein statistical mechanical model compare favorably with experimental data obtained from radioimmunoassay results. The isothermal compressibility predicted by the model agrees with results obtained from dynamic light scattering. The statistical mechanics model was used to investigate association between the model antigen and selected pairs of antibodies. It was found that, in accordance with expectations from thermodynamic arguments, the highest total binding energy yielded complex distributions skewed toward larger complex sizes. From examination of the simulated formation of ring structures from linear chain complexes, and from the joint shape probability surfaces, it was found that ring configurations formed by the "folding" of linear chains until the ends came within binding distance. By comparing single antigen/two antibody systems that differ only in their binding site locations, it was found that binding site location influences complex size and shape distributions only when ring formation occurs. The internal potential energy of a ring complex is considerably less than that of the non-associating system; therefore the ring complexes are quite stable and show no evidence of breaking apart and collapsing into smaller complexes. Ring formation will occur only in systems where the total free energy of each complex can be minimized. Thus, ring formation will occur, even though it results in entropically unfavorable conformations, if the total free energy is thereby minimized.

  19. Integrative Approach for Computationally Inferring Interactions between the Alpha and Beta Subunits of the Calcium-Activated Potassium Channel (BK): a Docking Study

    PubMed Central

    González, Janneth; Gálvez, Angela; Morales, Ludis; Barreto, George E.; Capani, Francisco; Sierra, Omar; Torres, Yolima

    2013-01-01

    Three-dimensional models of the alpha- and beta-1 subunits of the calcium-activated potassium channel (BK) were predicted by threading modeling. A recursive approach comprising sequence alignment and model building based on three templates was used to build these models, with the refinement of non-conserved regions carried out using threading techniques. The complex formed by the subunits was studied by means of docking techniques, using the 3D models of the two subunits and an approach based on rigid-body structures. Structural effects of the complex were analyzed with respect to hydrogen-bond interactions and binding-energy calculations. Potential interaction sites of the complex were determined from a study of the difference accessible surface area (DASA) of the protein subunits in the complex. PMID:23492851

  20. Improving a regional model using reduced complexity and parameter estimation

    USGS Publications Warehouse

    Kelson, Victor A.; Hunt, Randall J.; Haitjema, Henk M.

    2002-01-01

    The availability of powerful desktop computers and graphical user interfaces for ground water flow models makes possible the construction of ever more complex models. A proposed copper-zinc sulfide mine in northern Wisconsin offers a unique case in which the same hydrologic system has been modeled using a variety of techniques covering a wide range of sophistication and complexity. Early in the permitting process, simple numerical models were used to evaluate the necessary amount of water to be pumped from the mine, reductions in streamflow, and the drawdowns in the regional aquifer. More complex models have subsequently been used in an attempt to refine the predictions. Even after so much modeling effort, questions regarding the accuracy and reliability of the predictions remain. We have performed a new analysis of the proposed mine using the two-dimensional analytic element code GFLOW coupled with the nonlinear parameter estimation code UCODE. The new model is parsimonious, containing fewer than 10 parameters, and covers a region several times larger in areal extent than any of the previous models. The model demonstrates the suitability of analytic element codes for use with parameter estimation codes. The simplified model results are similar to the more complex models; predicted mine inflows and UCODE-derived 95% confidence intervals are consistent with the previous predictions. More important, the large areal extent of the model allowed us to examine hydrological features not included in the previous models, resulting in new insights about the effects that far-field boundary conditions can have on near-field model calibration and parameterization. In this case, the addition of surface water runoff into a lake in the headwaters of a stream while holding recharge constant moved a regional ground watershed divide and resulted in some of the added water being captured by the adjoining basin. Finally, a simple analytical solution was used to clarify the GFLOW model's prediction that, for a model that is properly calibrated for heads, regional drawdowns are relatively unaffected by the choice of aquifer properties, but that mine inflows are strongly affected. Paradoxically, by reducing model complexity, we have increased the understanding gained from the modeling effort.

  2. Research Area 3: Mathematics (3.1 Modeling of Complex Systems)

    DTIC Science & Technology

    2017-10-31

    RESEARCH AREA 3: MATHEMATICS (3.1 Modeling of Complex Systems). Proposals should be directed to Dr. John Lavery. Sponsoring/monitoring agency: U.S. Army Research Office, P.O. Box 12211.

  3. Capturing tumor complexity in vitro: Comparative analysis of 2D and 3D tumor models for drug discovery.

    PubMed

    Stock, Kristin; Estrada, Marta F; Vidic, Suzana; Gjerde, Kjersti; Rudisch, Albin; Santo, Vítor E; Barbier, Michaël; Blom, Sami; Arundkar, Sharath C; Selvam, Irwin; Osswald, Annika; Stein, Yan; Gruenewald, Sylvia; Brito, Catarina; van Weerden, Wytske; Rotter, Varda; Boghaert, Erwin; Oren, Moshe; Sommergruber, Wolfgang; Chong, Yolanda; de Hoogt, Ronald; Graeser, Ralph

    2016-07-01

    Two-dimensional (2D) cell cultures growing on plastic do not recapitulate the three-dimensional (3D) architecture and complexity of human tumors. More representative models are required for drug discovery and validation. Here, 2D culture and 3D mono- and stromal co-culture models of increasing complexity have been established and cross-comparisons made using three standard carcinoma cell lines: MCF7, LNCaP, and NCI-H1437. Fluorescence-based growth curves, 3D image analysis, immunohistochemistry, and treatment responses showed that end points differed according to cell type, stromal co-culture, and culture format. The adaptable methodologies described here should guide the choice of appropriate simple and complex in vitro models.

  5. Qualitative models and experimental investigation of chaotic NOR gates and set/reset flip-flops

    NASA Astrophysics Data System (ADS)

    Rahman, Aminur; Jordan, Ian; Blackmore, Denis

    2018-01-01

    It has been observed through experiments and SPICE simulations that logical circuits based upon Chua's circuit exhibit complex dynamical behaviour. This behaviour can be used to design analogues of more complex logic families and some properties can be exploited for electronics applications. Some of these circuits have been modelled as systems of ordinary differential equations. However, as the number of components in newer circuits increases so does the complexity. This renders continuous dynamical systems models impractical and necessitates new modelling techniques. In recent years, some discrete dynamical models have been developed using various simplifying assumptions. To create a robust modelling framework for chaotic logical circuits, we developed both deterministic and stochastic discrete dynamical models, which exploit the natural recurrence behaviour, for two chaotic NOR gates and a chaotic set/reset flip-flop. This work presents a complete applied mathematical investigation of logical circuits. Experiments on our own designs of the above circuits are modelled and the models are rigorously analysed and simulated showing surprisingly close qualitative agreement with the experiments. Furthermore, the models are designed to accommodate dynamics of similarly designed circuits. This will allow researchers to develop ever more complex chaotic logical circuits with a simple modelling framework.

  7. Regional-scale brine migration along vertical pathways due to CO2 injection - Part 2: A simulated case study in the North German Basin

    NASA Astrophysics Data System (ADS)

    Kissinger, Alexander; Noack, Vera; Knopf, Stefan; Konrad, Wilfried; Scheer, Dirk; Class, Holger

    2017-06-01

    Saltwater intrusion into potential drinking water aquifers due to the injection of CO2 into deep saline aquifers is one of the hazards associated with the geological storage of CO2. Thus, in a site-specific risk assessment, models for predicting the fate of the displaced brine are required. Practical simulation of brine displacement involves decisions regarding the complexity of the model. The choice of an appropriate level of model complexity depends on multiple criteria: the target variable of interest, the relevant physical processes, the computational demand, the availability of data, and the data uncertainty. In this study, we set up a regional-scale geological model for a realistic (but not real) onshore site in the North German Basin with characteristic geological features for that region. A major aim of this work is to identify the relevant parameters controlling saltwater intrusion in a complex structural setting and to test the applicability of different model simplifications. The model that is used to identify relevant parameters fully couples flow in shallow freshwater aquifers and deep saline aquifers. This model also includes variable-density transport of salt and realistically incorporates surface boundary conditions with groundwater recharge. The complexity of this model is then reduced in several steps, by neglecting physical processes (two-phase flow near the injection well, variable-density flow) and by simplifying the complex geometry of the geological model. The results indicate that the initial salt distribution prior to the injection of CO2 is one of the key parameters controlling shallow aquifer salinization. However, determining the initial salt distribution involves large uncertainties in the regional-scale hydrogeological parameterization and requires complex and computationally demanding models (regional-scale variable-density salt transport). In order to evaluate strategies for minimizing leakage into shallow aquifers, other target variables can be considered, such as the volumetric leakage rate into shallow aquifers or the pressure buildup in the injection horizon. Our results show that simplified models, which neglect variable-density salt transport, can reach an acceptable agreement with more complex models.

  8. Behavior of the gypsy moth life system model and development of synoptic model formulations

    Treesearch

    J. J. Colbert; Xu Rumei

    1991-01-01

    Aims of the research: The gypsy moth life system model (GMLSM) is a complex model which incorporates numerous components (both biotic and abiotic) and ecological processes. It is a detailed simulation model with considerable biological realism. However, it has not yet been tested with life system data. For such complex models, evaluation and testing cannot be adequately...

  9. ADAM: Analysis of Discrete Models of Biological Systems Using Computer Algebra

    PubMed Central

    2011-01-01

    Background Many biological systems are modeled qualitatively with discrete models, such as probabilistic Boolean networks, logical models, Petri nets, and agent-based models, to gain a better understanding of them. The computational complexity to analyze the complete dynamics of these models grows exponentially in the number of variables, which impedes working with complex models. There exist software tools to analyze discrete models, but they either lack the algorithmic functionality to analyze complex models deterministically or they are inaccessible to many users as they require understanding the underlying algorithm and implementation, do not have a graphical user interface, or are hard to install. Efficient analysis methods that are accessible to modelers and easy to use are needed. Results We propose a method for efficiently identifying attractors and introduce the web-based tool Analysis of Dynamic Algebraic Models (ADAM), which provides this and other analysis methods for discrete models. ADAM converts several discrete model types automatically into polynomial dynamical systems and analyzes their dynamics using tools from computer algebra. Specifically, we propose a method to identify attractors of a discrete model that is equivalent to solving a system of polynomial equations, a long-studied problem in computer algebra. Based on extensive experimentation with both discrete models arising in systems biology and randomly generated networks, we found that the algebraic algorithms presented in this manuscript are fast for systems with the structure maintained by most biological systems, namely sparseness and robustness. For a large set of published complex discrete models, ADAM identified the attractors in less than one second. Conclusions Discrete modeling techniques are a useful tool for analyzing complex biological systems and there is a need in the biological community for accessible efficient analysis tools. ADAM provides analysis methods based on mathematical algorithms as a web-based tool for several different input formats, and it makes analysis of complex models accessible to a larger community, as it is platform independent as a web-service and does not require understanding of the underlying mathematics. PMID:21774817
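
    For a small Boolean network, attractors can be found by brute-force enumeration of the state space, which is exactly what ADAM's algebraic approach is designed to avoid at scale; the 3-gene update rules here are invented for illustration:

        from itertools import product

        def step(state):
            a, b, c = state
            return (b and not c,      # A activated by B, repressed by C (invented rule)
                    a or c,           # B activated by A or C (invented rule)
                    not a)            # C repressed by A (invented rule)

        attractors = set()
        for start in product((False, True), repeat=3):
            seen = set()
            s = start
            while s not in seen:      # iterate until the trajectory revisits a state
                seen.add(s)
                s = step(s)
            cycle, t = [s], step(s)   # s lies on the attractor; walk the cycle once
            while t != s:
                cycle.append(t)
                t = step(t)
            attractors.add(frozenset(cycle))

        for att in sorted(attractors, key=len):
            kind = "steady state" if len(att) == 1 else f"{len(att)}-cycle"
            print(kind, sorted(tuple(int(v) for v in x) for x in att))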

  10. Groundwater modelling in decision support: reflections on a unified conceptual framework

    NASA Astrophysics Data System (ADS)

    Doherty, John; Simmons, Craig T.

    2013-11-01

Groundwater models are commonly used as a basis for environmental decision-making. There has been considerable recent discussion and debate regarding model simplicity and complexity. This paper contributes to this ongoing discourse. The selection of an appropriate level of model structural and parameterization complexity is not a simple matter. Although the metrics on which such selection should be based are simple, there are many competing, and often unquantifiable, considerations which must be taken into account as these metrics are applied. A unified conceptual framework is introduced and described which is intended to underpin groundwater modelling in decision support, with a direct focus on matters of model simplicity and complexity.

  11. Modeling the chemistry of complex petroleum mixtures.

    PubMed Central

    Quann, R J

    1998-01-01

Determining the complete molecular composition of petroleum and its refined products is not feasible with current analytical techniques because of the astronomical number of molecular components. Modeling the composition and behavior of such complex mixtures in refinery processes has accordingly evolved along a simplifying concept called lumping. Lumping reduces the complexity of the problem to a manageable form by grouping the entire set of molecular components into a handful of lumps. This traditional approach does not have a molecular basis and therefore excludes important aspects of process chemistry and molecular property fundamentals from the model's formulation. A new approach called structure-oriented lumping has been developed to model the composition and chemistry of complex mixtures at a molecular level. The central concept is to represent an individual molecule or a set of closely related isomers as a mathematical construct of certain specific and repeating structural groups. A complex mixture such as petroleum can then be represented as thousands of distinct molecular components, each having a mathematical identity. This enables the automated construction of large complex reaction networks with tens of thousands of specific reactions for simulating the chemistry of complex mixtures. Further, the method provides a convenient framework for incorporating molecular physical property correlations, existing group contribution methods, molecular thermodynamic properties, and the structure-activity relationships of chemical kinetics in the development of models. PMID:9860903
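
    The central data structure lends itself to a compact sketch. Assuming hypothetical group names (the published group set is much larger), a molecule is a vector of structural-group counts, and a reaction rule is an increment/decrement on that vector:

```python
from collections import Counter

# Illustrative structural groups in the spirit of structure-oriented
# lumping; the names here are assumptions, not the published group set.
toluene = Counter({"A6": 1, "me": 1})             # aromatic 6-ring + methyl
methylcyclohexane = Counter({"N6": 1, "me": 1})   # naphthenic 6-ring + methyl

def hydrogenate_aromatic_ring(mol):
    """A reaction rule acts directly on the group vector: saturating an
    aromatic 6-ring converts an A6 group into an N6 group."""
    if mol["A6"] >= 1:
        product = mol.copy()
        product["A6"] -= 1
        product["N6"] += 1
        return +product  # unary + drops zero-count groups
    return None

print(hydrogenate_aromatic_ring(toluene) == methylcyclohexane)  # True
```

    Because every species is just such a vector, reaction networks with tens of thousands of steps can be generated mechanically by applying rules like this one to every molecule that matches.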

  12. How rare is complex life in the Milky Way?

    PubMed

    Bounama, Christine; von Bloh, Werner; Franck, Siegfried

    2007-10-01

An integrated Earth system model was applied to calculate the number of habitable Earth-analog planets that are likely to have developed primitive (unicellular) and complex (multicellular) life in extrasolar planetary systems. The model is based on the global carbon cycle mediated by life and driven by increasing stellar luminosity and plate tectonics. We assumed that the hypothetical primitive and complex life forms differed in their temperature limits and CO2 tolerances. Though complex life would be more vulnerable to environmental stress, its presence would amplify weathering processes on a terrestrial planet. The model allowed us to calculate the average number of Earth-analog planets that may harbor such life by using the formation rate of Earth-like planets in the Milky Way as well as the size of a habitable zone that could support primitive and complex life forms. The number of planets predicted to bear complex life was found to be approximately 2 orders of magnitude lower than the number predicted for primitive life forms. Our model predicted a maximum abundance of such planets around 1.8 Ga ago and allowed us to calculate the average distance between potentially habitable planets in the Milky Way. If the model predictions are accurate, the future missions DARWIN (up to a probability of 65%) and TPF (up to 20%) are likely to detect at least one planet with a biosphere composed of complex life.

  13. Influence of dissolved organic matter on the complexation of mercury under sulfidic conditions.

    PubMed

    Miller, Carrie L; Mason, Robert P; Gilmour, Cynthia C; Heyes, Andrew

    2007-04-01

    The complexation of Hg under sulfidic conditions influences its bioavailability for microbial methylation. Neutral dissolved Hg-sulfide complexes are readily available to Hg-methylating bacteria in culture, and thermodynamic models predict that inorganic Hg-sulfide complexes dominate dissolved Hg speciation under natural sulfidic conditions. However, these models have not been validated in the field. To examine the complexation of Hg in natural sulfidic waters, octanol/water partitioning methods were modified for use under environmentally relevant conditions, and a centrifuge ultrafiltration technique was developed. These techniques demonstrated much lower concentrations of dissolved Hg-sulfide complexes than predicted. Furthermore, the study revealed an interaction between Hg, dissolved organic matter (DOM), and sulfide that is not captured by current thermodynamic models. Whereas Hg forms strong complexes with DOM under oxic conditions, these complexes had not been expected to form in the presence of sulfide because of the stronger affinity of Hg for sulfide relative to its affinity for DOM. The observed interaction between Hg and DOM in the presence of sulfide likely involves the formation of a DOM-Hg-sulfide complex or results from the hydrophobic partitioning of neutral Hg-sulfide complexes into the higher-molecular-weight DOM. An understanding of the mechanism of this interaction and determination of complexation coefficients for the Hg-sulfide-DOM complex are needed to adequately assess how our new finding affects Hg bioavailability, sorption, and flux.

  14. Predicting perceived visual complexity of abstract patterns using computational measures: The influence of mirror symmetry on complexity perception

    PubMed Central

    Leder, Helmut

    2017-01-01

Visual complexity is relevant for many areas, ranging from improving the usability of technical displays or websites to understanding aesthetic experiences. Therefore, many attempts have been made to relate objective properties of images to perceived complexity in artworks and other images. It has been argued that visual complexity is a multidimensional construct mainly consisting of two dimensions: a quantitative dimension that increases complexity through the number of elements, and a structural dimension representing order, negatively related to complexity. The objective of this work is to study human perception of visual complexity utilizing two large independent sets of abstract patterns. A wide range of computational measures of complexity was calculated, further combined using linear models as well as machine learning (random forests), and compared with data from human evaluations. Our results confirm the adequacy of existing two-factor models of perceived visual complexity consisting of a quantitative and a structural factor (in our case mirror symmetry) for both of our stimulus sets. In addition, a non-linear transformation of mirror symmetry giving more influence to small deviations from symmetry greatly increased explained variance. Thus, we again demonstrate the multidimensional nature of human complexity perception and present comprehensive quantitative models of the visual complexity of abstract patterns, which might be useful for future experiments and applications. PMID:29099832
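
    A minimal sketch of the two-factor idea, with synthetic stand-ins for the study's stimuli and ratings: perceived complexity is regressed on a quantitative measure and a non-linearly transformed mirror-symmetry measure. The feature names and the square-root transform are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical per-pattern features: element count (quantity) and a
# mirror-symmetry score in [0, 1] (structure/order).
n_elements = rng.integers(5, 200, size=300).astype(float)
symmetry = rng.uniform(0.0, 1.0, size=300)

# Synthetic "human ratings" following a two-factor model: quantity raises
# perceived complexity, symmetry (order) lowers it.
ratings = 0.02 * n_elements - 2.0 * symmetry + rng.normal(0, 0.2, 300)

# A transform that emphasizes small deviations from perfect symmetry,
# analogous in spirit to the non-linear transformation in the study.
sym_t = np.sqrt(1.0 - symmetry)

X = np.column_stack([n_elements, sym_t])
model = LinearRegression().fit(X, ratings)
print("R^2 with transformed symmetry:", model.score(X, ratings))
```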

  15. Modeling uranium(VI) adsorption onto montmorillonite under varying carbonate concentrations: A surface complexation model accounting for the spillover effect on surface potential

    NASA Astrophysics Data System (ADS)

    Tournassat, C.; Tinnacher, R. M.; Grangeon, S.; Davis, J. A.

    2018-01-01

The prediction of U(VI) adsorption onto montmorillonite clay is confounded by the complexities of: (1) the montmorillonite structure in terms of adsorption sites on basal and edge surfaces, and the complex interactions between the electrical double layers at these surfaces, and (2) U(VI) solution speciation, which can include cationic, anionic and neutral species. Previous U(VI)-montmorillonite adsorption and modeling studies have typically expanded classical surface complexation modeling approaches, initially developed for simple oxides, to include both cation exchange and surface complexation reactions. However, previous models have not taken into account the unique characteristics of electrostatic surface potentials that occur at montmorillonite edge sites, where the electrostatic surface potential of basal plane cation exchange sites influences the surface potential of neighboring edge sites ('spillover' effect). A series of U(VI)-Na-montmorillonite batch adsorption experiments was conducted as a function of pH, with variable U(VI), Ca, and dissolved carbonate concentrations. Based on the experimental data, a new type of surface complexation model (SCM) was developed for montmorillonite that specifically accounts for the spillover effect using the edge surface speciation model by Tournassat et al. (2016a). The SCM allows for a prediction of U(VI) adsorption under varying chemical conditions with a minimum number of fitting parameters, not only for our own experimental results, but also for a number of published data sets. The model agreed well with many of these datasets without introducing a second site type or including the formation of ternary U(VI)-carbonato surface complexes. The model predictions were greatly impacted by utilizing analytical measurements of dissolved inorganic carbon (DIC) concentrations in individual sample solutions rather than assuming solution equilibration with a specific partial pressure of CO2, even when the gas phase was laboratory air. Because of strong aqueous U(VI)-carbonate solution complexes, the measurement of DIC concentrations was important even for systems set up in the 'absence' of CO2, due to low levels of CO2 contamination during the experiment.

  16. Variable speed limit strategies analysis with mesoscopic traffic flow model based on complex networks

    NASA Astrophysics Data System (ADS)

    Li, Shu-Bin; Cao, Dan-Ni; Dang, Wen-Xiu; Zhang, Lin

As a new cross-discipline, complexity science has penetrated every field of the economy and society. With the arrival of big data, research in complexity science has reached a new peak. In recent years, it has offered a new perspective on traffic control through complex network theory. The interaction of various kinds of information in the traffic system forms a huge complex system. A mesoscopic traffic flow model is improved with variable speed limits (VSL), and a simulation process based on complex network theory combined with the proposed model is designed. This paper studies the effect of VSL on dynamic traffic flow and then analyzes the optimal VSL control strategy in different network topologies. The conclusions of this research are useful for putting forward reasonable transportation plans and developing effective traffic management and control measures to support traffic management departments.

  17. The Skilled Counselor Training Model: Skills Acquisition, Self-Assessment, and Cognitive Complexity

    ERIC Educational Resources Information Center

    Little, Cassandra; Packman, Jill; Smaby, Marlowe H.; Maddux, Cleborne D.

    2005-01-01

    The authors evaluated the effectiveness of the Skilled Counselor Training Model (SCTM; M. H. Smaby, C. D. Maddux, E. Torres-Rivera, & R. Zimmick, 1999) in teaching counseling skills and in fostering counselor cognitive complexity. Counselor trainees who completed the SCTM had better counseling skills and higher levels of cognitive complexity than…

  18. Classrooms as Complex Adaptive Systems: A Relational Model

    ERIC Educational Resources Information Center

    Burns, Anne; Knox, John S.

    2011-01-01

    In this article, we describe and model the language classroom as a complex adaptive system (see Logan & Schumann, 2005). We argue that linear, categorical descriptions of classroom processes and interactions do not sufficiently explain the complex nature of classrooms, and cannot account for how classroom change occurs (or does not occur), over…

  19. Small-time Scale Network Traffic Prediction Based on Complex-valued Neural Network

    NASA Astrophysics Data System (ADS)

    Yang, Bin

    2017-07-01

Accurate models play an important role in capturing the significant characteristics of network traffic, analyzing network dynamics, and improving forecasting accuracy for system dynamics. In this study, a complex-valued neural network (CVNN) model is proposed to further improve the accuracy of small-time scale network traffic forecasting. An artificial bee colony (ABC) algorithm is used to optimize the complex-valued and real-valued parameters of the CVNN model. Small-time scale traffic measurement data, namely TCP traffic data, are used to test the performance of the CVNN model. Experimental results reveal that the CVNN model forecasts the small-time scale network traffic measurement data very accurately.
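
    A minimal sketch of the forward pass such a model rests on, assuming a 'split' activation (tanh applied separately to real and imaginary parts) and hypothetical layer sizes; in this setting, the complex weights below are exactly the parameters an ABC search would tune.

```python
import numpy as np

rng = np.random.default_rng(1)

def split_tanh(z):
    """'Split' activation: tanh applied separately to real and imaginary
    parts, one common choice for complex-valued neural networks."""
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

# One hidden layer with complex weights and biases (hypothetical sizes).
W1 = rng.normal(size=(8, 3)) + 1j * rng.normal(size=(8, 3))
b1 = rng.normal(size=8) + 1j * rng.normal(size=8)
w2 = rng.normal(size=8) + 1j * rng.normal(size=8)

def forward(x):
    h = split_tanh(W1 @ x + b1)
    return (w2 @ h).real  # real-valued traffic prediction

x = np.array([0.3 + 0.1j, -0.2 + 0.4j, 0.5 - 0.3j])  # toy input window
print(forward(x))
```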

  20. Hierarchical Modeling of Sequential Behavioral Data: Examining Complex Association Patterns in Mediation Models

    ERIC Educational Resources Information Center

    Dagne, Getachew A.; Brown, C. Hendricks; Howe, George W.

    2007-01-01

    This article presents new methods for modeling the strength of association between multiple behaviors in a behavioral sequence, particularly those involving substantively important interaction patterns. Modeling and identifying such interaction patterns becomes more complex when behaviors are assigned to more than two categories, as is the case…

  1. Calibration of an Unsteady Groundwater Flow Model for a Complex, Strongly Heterogeneous Aquifer

    NASA Astrophysics Data System (ADS)

    Curtis, Z. K.; Liao, H.; Li, S. G.; Phanikumar, M. S.; Lusch, D.

    2016-12-01

Modeling of groundwater systems characterized by complex three-dimensional structure and heterogeneity remains a significant challenge. Most of today's groundwater models are developed from relatively simple conceptual representations that favor model calibratability. As more complexities are modeled, e.g., by adding more layers and/or zones, or by introducing transient processes, more parameters have to be estimated, and issues related to ill-posed groundwater problems and non-unique calibration arise. Here, we explore the use of an alternative conceptual representation for groundwater modeling that is fully three-dimensional and can capture complex 3D heterogeneity (both systematic and "random") without over-parameterizing the aquifer system. In particular, we apply Transition Probability (TP) geostatistics to high-resolution borehole data from a water well database to characterize the complex 3D geology. Different aquifer material classes, e.g., 'AQ' (aquifer material), 'MAQ' (marginal aquifer material), 'PCM' (partially confining material), and 'CM' (confining material), are simulated, with the hydraulic properties of each material type used as tuning parameters during calibration. The TP-based approach is applied to simulate unsteady groundwater flow in a large, complex, and strongly heterogeneous glacial aquifer system in Michigan across multiple spatial and temporal scales. The resulting model is calibrated to observed static water level data over a time span of 50 years. The results show that the TP-based conceptualization enables much more accurate and robust calibration/simulation than conventional deterministic layer/zone-based conceptual representations.
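
    The first step of a TP-based characterization can be sketched compactly: estimate a vertical transition probability matrix by counting transitions between material classes in borehole logs. The log below is hypothetical; a real application would pool thousands of logs and also fit lateral transition rates.

```python
import numpy as np

classes = ["AQ", "MAQ", "PCM", "CM"]
idx = {c: i for i, c in enumerate(classes)}

# A hypothetical borehole log, listed upward at a fixed sampling interval.
log = ["CM", "CM", "PCM", "AQ", "AQ", "AQ", "MAQ", "AQ", "CM", "CM"]

# Count upward transitions between successive intervals, then normalize
# each row to obtain the vertical transition probability matrix.
counts = np.zeros((4, 4))
for a, b in zip(log[:-1], log[1:]):
    counts[idx[a], idx[b]] += 1
tp = counts / counts.sum(axis=1, keepdims=True)
print(tp)
```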

  2. Dynamic pathway modeling of signal transduction networks: a domain-oriented approach.

    PubMed

    Conzelmann, Holger; Gilles, Ernst-Dieter

    2008-01-01

Mathematical models of biological processes are becoming more and more important in biology. The aim is a holistic understanding of how processes such as cellular communication, cell division, regulation, homeostasis, or adaptation work, how they are regulated, and how they react to perturbations. The great complexity of most of these processes necessitates the generation of mathematical models in order to address these questions. In this chapter we provide an introduction to basic principles of dynamic modeling and highlight both the problems and the opportunities of dynamic modeling in biology. The main focus will be on modeling of signal transduction pathways, which requires the application of a special modeling approach. A common pattern, especially in eukaryotic signaling systems, is the formation of multi-protein signaling complexes. Even for a small number of interacting proteins, the number of distinguishable molecular species can be extremely high. This combinatorial complexity is due to the great number of distinct binding domains of many receptors and scaffold proteins involved in signal transduction. However, these problems can be overcome using a new domain-oriented modeling approach, which makes it possible to handle complex and branched signaling pathways.
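
    The combinatorial blow-up is easy to quantify in the simplest case, assuming each binding domain is independently either free or occupied (real domains may have more states, which only makes the count worse):

```python
# Combinatorial complexity of a scaffold with n independent binding domains:
# with two states per domain, a single scaffold already has 2**n
# distinguishable molecular configurations.
for n in (3, 10, 20):
    print(n, "domains ->", 2 ** n, "micro-states")

# A domain-oriented description instead tracks each domain's occupancy
# separately (on the order of 2*n quantities rather than 2**n states),
# which is what keeps such models tractable.
```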

  3. A framework for modelling the complexities of food and water security under globalisation

    NASA Astrophysics Data System (ADS)

    Dermody, Brian J.; Sivapalan, Murugesu; Stehfest, Elke; van Vuuren, Detlef P.; Wassen, Martin J.; Bierkens, Marc F. P.; Dekker, Stefan C.

    2018-01-01

    We present a new framework for modelling the complexities of food and water security under globalisation. The framework sets out a method to capture regional and sectoral interdependencies and cross-scale feedbacks within the global food system that contribute to emergent water use patterns. The framework integrates aspects of existing models and approaches in the fields of hydrology and integrated assessment modelling. The core of the framework is a multi-agent network of city agents connected by infrastructural trade networks. Agents receive socio-economic and environmental constraint information from integrated assessment models and hydrological models respectively and simulate complex, socio-environmental dynamics that operate within those constraints. The emergent changes in food and water resources are aggregated and fed back to the original models with minimal modification of the structure of those models. It is our conviction that the framework presented can form the basis for a new wave of decision tools that capture complex socio-environmental change within our globalised world. In doing so they will contribute to illuminating pathways towards a sustainable future for humans, ecosystems and the water they share.

  4. Complex versus simple models: ion-channel cardiac toxicity prediction.

    PubMed

    Mistry, Hitesh B

    2018-01-01

There is growing interest in applying detailed mathematical models of the heart to ion-channel-related cardiac toxicity prediction. However, there is debate as to whether such complex models are required. Here, an assessment of the predictive performance of two established large-scale biophysical cardiac models and a simple linear model, B net, was conducted. Three ion-channel data-sets were extracted from the literature. Each compound was assigned a cardiac risk category using two different classification schemes based on information within CredibleMeds. The predictive performance of each model within each data-set for each classification scheme was assessed via leave-one-out cross-validation. Overall, the B net model performed as well as the leading cardiac models in two of the data-sets and outperformed both cardiac models on the most recent one. These results highlight the importance of benchmarking complex versus simple models and also encourage the development of simple models.

  5. On Convergence of Development Costs and Cost Models for Complex Spaceflight Instrument Electronics

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Patel, Umeshkumar D.; Kasa, Robert L.; Hestnes, Phyllis; Brown, Tammy; Vootukuru, Madhavi

    2008-01-01

Development costs of a few recent spaceflight instrument electrical and electronics subsystems have diverged from their respective heritage cost model predictions. The cost models used are Grass Roots, Price-H and Parametric Model. These cost models originated in the military and industry around 1970 and were successfully adopted and patched by NASA on a mission-by-mission basis for years. However, the complexity of new instruments has recently changed rapidly by orders of magnitude. This is most obvious in the complexity of a representative spaceflight instrument electronics data system. It is now required to perform intermediate processing of digitized data apart from conventional processing of science phenomenon signals from multiple detectors. This involves on-board instrument formatting of computational operands from raw data (for example, images), multi-million operations per second on large volumes of data in reconfigurable hardware (in addition to processing on a general-purpose embedded or standalone instrument flight computer), as well as making decisions for on-board system adaptation and resource reconfiguration. The instrument data system is now tasked to perform more functions, such as forming packets and instrument-level data compression of more than one data stream, which are traditionally performed by the spacecraft command and data handling system. It is furthermore required that the electronics box for new complex instruments be developed for single-digit-watt power consumption and small size, and that it be lightweight and deliver super-computing capabilities. The conflict between the actual development cost of newer complex instruments and the heritage cost model predictions for their electronics components seems irreconcilable. This conflict and an approach to its resolution are addressed in this paper by determining complexity parameters and a complexity index, and describing their use in an enhanced cost model.

  6. KRISSY: user's guide to modeling three-dimensional wind flow in complex terrain

    Treesearch

    Michael A. Fosberg; Michael L. Sestak

    1986-01-01

KRISSY is a computer model for generating three-dimensional wind flows in complex terrain from data that were not, or perhaps cannot be, collected. The model is written in FORTRAN IV. This guide describes data requirements, modeling, and output from an applications viewpoint rather than that of programming or theoretical modeling. KRISSY is designed to minimize...

  7. Ranking streamflow model performance based on Information theory metrics

    NASA Astrophysics Data System (ADS)

    Martinez, Gonzalo; Pachepsky, Yakov; Pan, Feng; Wagener, Thorsten; Nicholson, Thomas

    2016-04-01

Accuracy-based model performance metrics do not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to determine whether information theory-based metrics can serve as a complementary tool for hydrologic model evaluation and selection. We simulated 10-year streamflow time series in five watersheds located in Texas, North Carolina, Mississippi, and West Virginia. Eight models of different complexity were applied. The information theory-based metrics were obtained after representing the time series as strings of symbols, where different symbols corresponded to different quantiles of the probability distribution of streamflow; a fixed symbol alphabet was used. Three metrics were computed for those strings: mean information gain, which measures the randomness of the signal; effective measure complexity, which characterizes predictability; and fluctuation complexity, which characterizes the presence of a pattern in the signal. The observed streamflow time series had smaller information content and larger complexity metrics than the precipitation time series; streamflow was less random and more complex than precipitation, reflecting the fact that the watershed acts as an information filter in the hydrologic conversion from precipitation to streamflow. The Nash-Sutcliffe efficiency increased as model complexity increased, but in many cases several models had efficiency values that were not statistically different from each other. In such cases, ranking models by the closeness of the information theory-based parameters of simulated and measured streamflow time series can provide an additional criterion for the evaluation of hydrologic model performance.
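
    One of the three metrics can be sketched concisely. A minimal version, assuming quantile-based symbolization and taking mean information gain as the conditional block entropy H(2) − H(1); the streamflow series below is synthetic:

```python
import numpy as np
from collections import Counter

def symbolize(series, n_symbols=4):
    """Map each value to the quantile bin it falls into."""
    qs = np.quantile(series, np.linspace(0, 1, n_symbols + 1)[1:-1])
    return np.digitize(series, qs)

def block_entropy(symbols, L):
    """Shannon entropy (bits) of overlapping length-L symbol blocks."""
    blocks = Counter(tuple(symbols[i:i + L])
                     for i in range(len(symbols) - L + 1))
    p = np.array(list(blocks.values()), dtype=float)
    p /= p.sum()
    return -(p * np.log2(p)).sum()

def mean_information_gain(symbols):
    """H(2) - H(1): entropy of the next symbol given the current one,
    one common definition of mean information gain."""
    return block_entropy(symbols, 2) - block_entropy(symbols, 1)

rng = np.random.default_rng(2)
flow = np.cumsum(rng.normal(size=3650)) + 100.0  # hypothetical daily flow
print(mean_information_gain(symbolize(flow)))
```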

  8. Utility and Scope of Rapid Prototyping in Patients with Complex Muscular Ventricular Septal Defects or Double-Outlet Right Ventricle: Does it Alter Management Decisions?

    PubMed

    Bhatla, Puneet; Tretter, Justin T; Ludomirsky, Achi; Argilla, Michael; Latson, Larry A; Chakravarti, Sujata; Barker, Piers C; Yoo, Shi-Joon; McElhinney, Doff B; Wake, Nicole; Mosca, Ralph S

    2017-01-01

    Rapid prototyping facilitates comprehension of complex cardiac anatomy. However, determining when this additional information proves instrumental in patient management remains a challenge. We describe our experience with patient-specific anatomic models created using rapid prototyping from various imaging modalities, suggesting their utility in surgical and interventional planning in congenital heart disease (CHD). Virtual and physical 3-dimensional (3D) models were generated from CT or MRI data, using commercially available software for patients with complex muscular ventricular septal defects (CMVSD) and double-outlet right ventricle (DORV). Six patients with complex anatomy and uncertainty of the optimal management strategy were included in this study. The models were subsequently used to guide management decisions, and the outcomes reviewed. 3D models clearly demonstrated the complex intra-cardiac anatomy in all six patients and were utilized to guide management decisions. In the three patients with CMVSD, one underwent successful endovascular device closure following a prior failed attempt at transcatheter closure, and the other two underwent successful primary surgical closure with the aid of 3D models. In all three cases of DORV, the models provided better anatomic delineation and additional information that altered or confirmed the surgical plan. Patient-specific 3D heart models show promise in accurately defining intra-cardiac anatomy in CHD, specifically CMVSD and DORV. We believe these models improve understanding of the complex anatomical spatial relationships in these defects and provide additional insight for pre/intra-interventional management and surgical planning.

  9. Distributional potential of the Triatoma brasiliensis species complex at present and under scenarios of future climate conditions

    PubMed Central

    2014-01-01

Background The Triatoma brasiliensis complex is a monophyletic group, comprising three species, one of which includes two subspecific taxa, distributed across 12 Brazilian states, in the caatinga and cerrado biomes. Members of the complex are diverse in terms of epidemiological importance, morphology, biology, ecology, and genetics. Triatoma b. brasiliensis is the most disease-relevant member of the complex in terms of epidemiology, extensive distribution, broad feeding preferences, broad ecological distribution, and high rates of infection with Trypanosoma cruzi; consequently, it is considered the principal vector of Chagas disease in northeastern Brazil. Methods We used ecological niche models to estimate potential distributions of all members of the complex, and evaluated the potential for suitable adjacent areas to be colonized; we also present first evaluations of potential for climate change-mediated distributional shifts. Models were developed using the GARP and Maxent algorithms. Results Models for three members of the complex (T. b. brasiliensis, N = 332; T. b. macromelasoma, N = 35; and T. juazeirensis, N = 78) had significant distributional predictivity; however, models for T. sherlocki and T. melanica, both with very small sample sizes (N = 7), did not yield predictions that performed better than random. Model projections onto future-climate scenarios indicated little broad-scale potential for change in the potential distribution of the complex through 2050. Conclusions This study suggests that T. b. brasiliensis is the member of the complex with the greatest distributional potential to colonize new areas; overall, however, the distribution of the complex appears relatively stable. These analyses offer key information to guide proactive monitoring and remediation activities to reduce risk of Chagas disease transmission. PMID:24886587

  10. 2D Potential Theory using Complex Algebra: New Perspectives for Interpretation of Marine Magnetic Anomaly

    NASA Astrophysics Data System (ADS)

    Le Maire, P.; Munschy, M.

    2017-12-01

Interpretation of marine magnetic anomalies enables accurate global kinematic models. Several methods have been proposed to compute the paleo-latitude of the oceanic crust at its formation. A model of the Earth's magnetic field is used to determine a relationship between the apparent inclination of the magnetization and the paleo-latitude. Usually, the estimation of the apparent inclination is qualitative, based on the fit between magnetic data and forward models. We propose to apply a new method using complex algebra to obtain the apparent inclination of the magnetization of the oceanic crust. For two-dimensional bodies, we rewrite Talwani's equations using complex algebra; the corresponding complex function of the complex variable, called the CMA (complex magnetic anomaly), is easier to use for forward modelling and inversion of the magnetic data. This complex equation allows the data to be visualized in the complex plane (Argand diagram) and offers a new way to interpret them. In the complex plane, the effect of the apparent inclination is to rotate the curves, while on the standard display the evolution of the shape of the anomaly is more complicated. This innovative method gives the opportunity to study a set of magnetic profiles (provided by the Geological Survey of Norway) acquired in the Norwegian Sea, near the Jan Mayen fracture zone. In this area, the age of the oceanic crust ranges from 40 to 55 Ma, and the apparent inclination of the magnetization is computed.
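
    The flavor of the complex representation can be sketched without Talwani's equations: form a complex-valued profile from a real anomaly via the Hilbert transform (the analytic signal), in which a change of apparent inclination acts, to first order, as a phase rotation in the Argand plane. The profile below is synthetic and the construction is illustrative; it is not the paper's CMA derivation.

```python
import numpy as np
from scipy.signal import hilbert

# Hypothetical along-track magnetic anomaly profile (nT).
x = np.linspace(-50, 50, 512)
anomaly = 200 * np.exp(-x**2 / 60) - 120 * np.exp(-(x - 8)**2 / 40)

# Complex-valued profile: real anomaly plus i times its Hilbert transform.
cma = hilbert(anomaly)

# Changing the apparent magnetization inclination rotates the curve in
# the complex (Argand) plane, which is what makes it easy to read off.
theta = np.deg2rad(30)
rotated = cma * np.exp(1j * theta)

print(cma[:3])
print(rotated[:3])
```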

  11. Derivative Free Optimization of Complex Systems with the Use of Statistical Machine Learning Models

    DTIC Science & Technology

    2015-09-12

AFRL-AFOSR-VA-TR-2015-0278: Derivative Free Optimization of Complex Systems with the Use of Statistical Machine Learning Models (Katya Scheinberg; grant FA9550-11-1-0239). Subject terms: optimization, derivative-free optimization, statistical machine learning.

  12. A range of complex probabilistic models for RNA secondary structure prediction that includes the nearest-neighbor model and more.

    PubMed

    Rivas, Elena; Lang, Raymond; Eddy, Sean R

    2012-02-01

    The standard approach for single-sequence RNA secondary structure prediction uses a nearest-neighbor thermodynamic model with several thousand experimentally determined energy parameters. An attractive alternative is to use statistical approaches with parameters estimated from growing databases of structural RNAs. Good results have been reported for discriminative statistical methods using complex nearest-neighbor models, including CONTRAfold, Simfold, and ContextFold. Little work has been reported on generative probabilistic models (stochastic context-free grammars [SCFGs]) of comparable complexity, although probabilistic models are generally easier to train and to use. To explore a range of probabilistic models of increasing complexity, and to directly compare probabilistic, thermodynamic, and discriminative approaches, we created TORNADO, a computational tool that can parse a wide spectrum of RNA grammar architectures (including the standard nearest-neighbor model and more) using a generalized super-grammar that can be parameterized with probabilities, energies, or arbitrary scores. By using TORNADO, we find that probabilistic nearest-neighbor models perform comparably to (but not significantly better than) discriminative methods. We find that complex statistical models are prone to overfitting RNA structure and that evaluations should use structurally nonhomologous training and test data sets. Overfitting has affected at least one published method (ContextFold). The most important barrier to improving statistical approaches for RNA secondary structure prediction is the lack of diversity of well-curated single-sequence RNA secondary structures in current RNA databases.
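
    For readers unfamiliar with these grammar-based methods, the dynamic-programming core they generalize can be illustrated by the much simpler Nussinov base-pair maximization algorithm, a far cruder relative of the nearest-neighbor and SCFG models discussed above (shown only as a sketch of the recursion style):

```python
def nussinov(seq, min_loop=3):
    """Maximize the number of nested base pairs: the simplest of the
    dynamic-programming recursions that RNA grammars generalize."""
    pairs = {("A", "U"), ("U", "A"), ("G", "C"),
             ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    M = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = M[i + 1][j]  # case 1: position i is unpaired
            for k in range(i + min_loop + 1, j + 1):
                if (seq[i], seq[k]) in pairs:  # case 2: i pairs with k
                    right = M[k + 1][j] if k + 1 <= j else 0
                    best = max(best, 1 + M[i + 1][k - 1] + right)
            M[i][j] = best
    return M[0][n - 1]

print(nussinov("GGGAAAUCC"))  # maximum number of base pairs: 3
```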

  13. A range of complex probabilistic models for RNA secondary structure prediction that includes the nearest-neighbor model and more

    PubMed Central

    Rivas, Elena; Lang, Raymond; Eddy, Sean R.

    2012-01-01

    The standard approach for single-sequence RNA secondary structure prediction uses a nearest-neighbor thermodynamic model with several thousand experimentally determined energy parameters. An attractive alternative is to use statistical approaches with parameters estimated from growing databases of structural RNAs. Good results have been reported for discriminative statistical methods using complex nearest-neighbor models, including CONTRAfold, Simfold, and ContextFold. Little work has been reported on generative probabilistic models (stochastic context-free grammars [SCFGs]) of comparable complexity, although probabilistic models are generally easier to train and to use. To explore a range of probabilistic models of increasing complexity, and to directly compare probabilistic, thermodynamic, and discriminative approaches, we created TORNADO, a computational tool that can parse a wide spectrum of RNA grammar architectures (including the standard nearest-neighbor model and more) using a generalized super-grammar that can be parameterized with probabilities, energies, or arbitrary scores. By using TORNADO, we find that probabilistic nearest-neighbor models perform comparably to (but not significantly better than) discriminative methods. We find that complex statistical models are prone to overfitting RNA structure and that evaluations should use structurally nonhomologous training and test data sets. Overfitting has affected at least one published method (ContextFold). The most important barrier to improving statistical approaches for RNA secondary structure prediction is the lack of diversity of well-curated single-sequence RNA secondary structures in current RNA databases. PMID:22194308

  14. Predictive Models for the Free Energy of Hydrogen Bonded Complexes with Single and Cooperative Hydrogen Bonds.

    PubMed

    Glavatskikh, Marta; Madzhidov, Timur; Solov'ev, Vitaly; Marcou, Gilles; Horvath, Dragos; Varnek, Alexandre

    2016-12-01

In this work, we report QSPR modeling of the free energy ΔG of 1:1 hydrogen bond complexes of different H-bond acceptors and donors. The modeling was performed on a large and structurally diverse set of 3373 complexes featuring a single hydrogen bond, for which ΔG was measured at 298 K in CCl4. The models were prepared using Support Vector Machine and Multiple Linear Regression, with ISIDA fragment descriptors. The marked atoms strategy was applied at fragmentation stage, in order to capture the location of H-bond donor and acceptor centers. Different strategies of model validation have been suggested, including the targeted omission of individual H-bond acceptors and donors from the training set, in order to check whether the predictive ability of the model is not limited to the interpolation of H-bond strength between two already encountered partners. Successfully cross-validating individual models were combined into a consensus model, and challenged to predict external test sets of 629 and 12 complexes, in which donor and acceptor formed single and cooperative H-bonds, respectively. In all cases, SVM models outperform MLR. The SVM consensus model performs well both in 3-fold cross-validation (RMSE=1.50 kJ/mol), and on the external test sets containing complexes with single (RMSE=3.20 kJ/mol) and cooperative H-bonds (RMSE=1.63 kJ/mol). © 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
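
    A minimal sketch of the modeling step, with random stand-ins for the ISIDA fragment descriptors and measured ΔG values; the hyperparameters are illustrative assumptions, not the paper's tuned settings:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(3)

# Hypothetical fragment-count descriptors for H-bond complexes and
# synthetic "measured" free energies in kJ/mol.
X = rng.integers(0, 5, size=(400, 30)).astype(float)
w = rng.normal(size=30)
y = 0.5 * (X @ w) + rng.normal(0, 1.5, size=400)

model = make_pipeline(StandardScaler(),
                      SVR(kernel="rbf", C=10.0, epsilon=0.5))
scores = cross_val_score(model, X, y, cv=3,
                         scoring="neg_root_mean_squared_error")
print("3-fold CV RMSE (kJ/mol):", -scores.mean())
```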

  15. iMarNet: an ocean biogeochemistry model inter-comparison project within a common physical ocean modelling framework

    NASA Astrophysics Data System (ADS)

    Kwiatkowski, L.; Yool, A.; Allen, J. I.; Anderson, T. R.; Barciela, R.; Buitenhuis, E. T.; Butenschön, M.; Enright, C.; Halloran, P. R.; Le Quéré, C.; de Mora, L.; Racault, M.-F.; Sinha, B.; Totterdell, I. J.; Cox, P. M.

    2014-07-01

    Ocean biogeochemistry (OBGC) models span a wide range of complexities from highly simplified, nutrient-restoring schemes, through nutrient-phytoplankton-zooplankton-detritus (NPZD) models that crudely represent the marine biota, through to models that represent a broader trophic structure by grouping organisms as plankton functional types (PFT) based on their biogeochemical role (Dynamic Green Ocean Models; DGOM) and ecosystem models which group organisms by ecological function and trait. OBGC models are now integral components of Earth System Models (ESMs), but they compete for computing resources with higher resolution dynamical setups and with other components such as atmospheric chemistry and terrestrial vegetation schemes. As such, the choice of OBGC in ESMs needs to balance model complexity and realism alongside relative computing cost. Here, we present an inter-comparison of six OBGC models that were candidates for implementation within the next UK Earth System Model (UKESM1). The models cover a large range of biological complexity (from 7 to 57 tracers) but all include representations of at least the nitrogen, carbon, alkalinity and oxygen cycles. Each OBGC model was coupled to the Nucleus for the European Modelling of the Ocean (NEMO) ocean general circulation model (GCM), and results from physically identical hindcast simulations were compared. Model skill was evaluated for biogeochemical metrics of global-scale bulk properties using conventional statistical techniques. The computing cost of each model was also measured in standardised tests run at two resource levels. No model is shown to consistently outperform or underperform all other models across all metrics. Nonetheless, the simpler models that are easier to tune are broadly closer to observations across a number of fields, and thus offer a high-efficiency option for ESMs that prioritise high resolution climate dynamics. However, simpler models provide limited insight into more complex marine biogeochemical processes and ecosystem pathways, and a parallel approach of low resolution climate dynamics and high complexity biogeochemistry is desirable in order to provide additional insights into biogeochemistry-climate interactions.

  16. iMarNet: an ocean biogeochemistry model intercomparison project within a common physical ocean modelling framework

    NASA Astrophysics Data System (ADS)

    Kwiatkowski, L.; Yool, A.; Allen, J. I.; Anderson, T. R.; Barciela, R.; Buitenhuis, E. T.; Butenschön, M.; Enright, C.; Halloran, P. R.; Le Quéré, C.; de Mora, L.; Racault, M.-F.; Sinha, B.; Totterdell, I. J.; Cox, P. M.

    2014-12-01

    Ocean biogeochemistry (OBGC) models span a wide variety of complexities, including highly simplified nutrient-restoring schemes, nutrient-phytoplankton-zooplankton-detritus (NPZD) models that crudely represent the marine biota, models that represent a broader trophic structure by grouping organisms as plankton functional types (PFTs) based on their biogeochemical role (dynamic green ocean models) and ecosystem models that group organisms by ecological function and trait. OBGC models are now integral components of Earth system models (ESMs), but they compete for computing resources with higher resolution dynamical setups and with other components such as atmospheric chemistry and terrestrial vegetation schemes. As such, the choice of OBGC in ESMs needs to balance model complexity and realism alongside relative computing cost. Here we present an intercomparison of six OBGC models that were candidates for implementation within the next UK Earth system model (UKESM1). The models cover a large range of biological complexity (from 7 to 57 tracers) but all include representations of at least the nitrogen, carbon, alkalinity and oxygen cycles. Each OBGC model was coupled to the ocean general circulation model Nucleus for European Modelling of the Ocean (NEMO) and results from physically identical hindcast simulations were compared. Model skill was evaluated for biogeochemical metrics of global-scale bulk properties using conventional statistical techniques. The computing cost of each model was also measured in standardised tests run at two resource levels. No model is shown to consistently outperform all other models across all metrics. Nonetheless, the simpler models are broadly closer to observations across a number of fields and thus offer a high-efficiency option for ESMs that prioritise high-resolution climate dynamics. However, simpler models provide limited insight into more complex marine biogeochemical processes and ecosystem pathways, and a parallel approach of low-resolution climate dynamics and high-complexity biogeochemistry is desirable in order to provide additional insights into biogeochemistry-climate interactions.

  17. On the use of Empirical Data to Downscale Non-scientific Scepticism About Results From Complex Physical Based Models

    NASA Astrophysics Data System (ADS)

    Germer, S.; Bens, O.; Hüttl, R. F.

    2008-12-01

The scepticism of non-scientific local stakeholders about results from complex physically based models is a major problem for the development and implementation of local climate change adaptation measures. This scepticism originates from the high complexity of such models. Local stakeholders perceive complex models as black boxes, as it is impossible to grasp all underlying assumptions and mathematically formulated processes at a glance. The use of physically based models is, however, indispensable for studying complex underlying processes and predicting future environmental changes. The increase of climate change adaptation efforts following the release of the latest IPCC report indicates that communicating facts about what has already changed is an appropriate tool to trigger climate change adaptation. Therefore we suggest increasing the practice of empirical data analysis in addition to modelling efforts. The analysis of time series can generate results that are easier for non-scientific stakeholders to comprehend. Temporal trends and seasonal patterns of selected hydrological parameters (precipitation, evapotranspiration, groundwater levels and river discharge) can be identified, and the dependence of trends and seasonal patterns on land use, topography and soil type can be highlighted. A discussion of lag times between the hydrological parameters can increase the awareness of local stakeholders for delayed environmental responses.

  18. Dense power-law networks and simplicial complexes

    NASA Astrophysics Data System (ADS)

    Courtney, Owen T.; Bianconi, Ginestra

    2018-05-01

There is increasing evidence that dense networks occur in on-line social networks, recommendation networks and in the brain. In addition to being dense, these networks are often also scale-free, i.e., their degree distributions follow P(k) ∝ k^-γ with γ ∈ (1, 2]. Models of growing networks have been successfully employed to produce scale-free networks using preferential attachment; however, these models can only produce sparse networks, as the number of links and nodes added at each time step is constant. Here we present a modeling framework which produces networks that are both dense and scale-free. The mechanism by which the networks grow in this model is based on the Pitman-Yor process. Variations on the model are able to produce undirected scale-free networks with exponent γ = 2 or directed networks with power-law out-degree distribution with tunable exponent γ ∈ (1, 2). We also extend the model to directed two-dimensional simplicial complexes. Simplicial complexes are generalizations of networks that can encode the many-body interactions between the parts of a complex system and as such are becoming increasingly popular for characterizing different data sets ranging from social interacting systems to the brain. Our model produces dense directed simplicial complexes with power-law distribution of the generalized out-degrees of the nodes.
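
    The seating rule behind the Pitman-Yor mechanism can be sketched directly. In the two-parameter Chinese restaurant process below, cluster ('table') sizes develop a power-law tail with exponent 1 + d, which is how exponents in (1, 2] become reachable; this illustrates the underlying mechanism only, not the paper's full network model.

```python
import random

def pitman_yor_crp(n, d=0.5, theta=1.0, seed=0):
    """Two-parameter Chinese restaurant process: customer i joins table t
    with probability (size_t - d)/(i + theta), or opens a new table with
    the remaining mass (theta + d * number_of_tables)/(i + theta)."""
    rng = random.Random(seed)
    sizes = []  # customers per table
    for i in range(n):
        r = rng.uniform(0.0, i + theta)
        acc = 0.0
        for t, s in enumerate(sizes):
            acc += s - d
            if r < acc:
                sizes[t] += 1
                break
        else:
            sizes.append(1)  # remaining mass: open a new table
    return sizes

sizes = pitman_yor_crp(50_000)
print("tables:", len(sizes), "largest table:", max(sizes))
```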

  19. Intelligent classifier for dynamic fault patterns based on hidden Markov model

    NASA Astrophysics Data System (ADS)

    Xu, Bo; Feng, Yuguang; Yu, Jinsong

    2006-11-01

It is difficult to build precise mathematical models for complex engineering systems because of the complexity of their structure and dynamic characteristics. Intelligent fault diagnosis introduces artificial intelligence and works in a different way, without building an analytical mathematical model of the diagnostic object, so it is a practical approach to solving diagnostic problems of complex systems. This paper presents an intelligent fault diagnosis method: an integrated fault-pattern classifier based on the Hidden Markov Model (HMM). This classifier consists of a dynamic time warping (DTW) algorithm, a self-organizing feature mapping (SOFM) network and a Hidden Markov Model. First, after the dynamic observation vector in measuring space is processed by DTW, the error vector containing the fault features of the system under test is obtained. Then a SOFM network is used as a feature extractor and vector quantization processor. Finally, fault diagnosis is realized by classifying fault patterns with the Hidden Markov Model classifier. The introduction of dynamic time warping solves the problem of extracting features from the dynamic process vectors of complex systems such as aeroengines, and makes it possible to diagnose complex systems using dynamic process information. Simulation experiments show that the diagnosis model is easy to extend, and that the fault pattern classifier is efficient and convenient for detecting and diagnosing new faults.
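
    The DTW step is the most self-contained part of such a pipeline and can be sketched exactly; the classic dynamic-programming recurrence below aligns a test process vector to a reference before feature extraction (the sequences are synthetic):

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping between two 1-D sequences: the kind
    of time-axis alignment applied before feature extraction."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three possible alignments.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

ref = np.sin(np.linspace(0, 2 * np.pi, 50))           # reference vector
test = np.sin(np.linspace(0, 2 * np.pi, 65) + 0.2)    # warped observation
print(dtw_distance(ref, test))
```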

  20. A novel multilayer model for missing link prediction and future link forecasting in dynamic complex networks

    NASA Astrophysics Data System (ADS)

    Yasami, Yasser; Safaei, Farshad

    2018-02-01

Traditional complex network theory is particularly focused on network models in which all network constituents are treated equivalently, and it fails to consider supplementary information related to the dynamic properties of network interactions. This is a major constraint, leading to incorrect descriptions of some real-world phenomena or incomplete capture of the details of certain real-life problems. To cope with the problem, this paper addresses the multilayer aspects of dynamic complex networks by analyzing the properties of intrinsically multilayered co-authorship networks, DBLP and Astro Physics, and presenting a novel multilayer model of dynamic complex networks. The model examines layer evolution (the layer birth/death process and lifetime) throughout the network evolution. In particular, this paper models the evolution of each node's membership in different layers by an Infinite Factorial Hidden Markov Model considering feature cascade, and thereby formulates the link generation process for intra-layer and inter-layer links. Although adjacency matrices are useful for describing traditional single-layer networks, such a representation is not sufficient to describe and analyze multilayer dynamic networks. This paper also extends a generalized mathematical infrastructure to address the problems raised by multilayer complex networks. The model inference is performed using Markov Chain Monte Carlo sampling strategies, given synthetic and real complex network data. Experimental results indicate a substantial improvement in the performance of the proposed multilayer model in terms of sensitivity, specificity, positive and negative predictive values, positive and negative likelihood ratios, F1-score, Matthews correlation coefficient, and accuracy for two important applications: missing link prediction and future link forecasting. The experimental results also indicate the strong predictive power of the proposed model for the application of cascade prediction in terms of accuracy.

  1. Formalizing the Role of Agent-Based Modeling in Causal Inference and Epidemiology

    PubMed Central

    Marshall, Brandon D. L.; Galea, Sandro

    2015-01-01

Calls for the adoption of complex systems approaches, including agent-based modeling, in the field of epidemiology have largely centered on the potential for such methods to examine complex disease etiologies, which are characterized by feedback behavior, interference, threshold dynamics, and multiple interacting causal effects. However, considerable theoretical and practical issues impede the capacity of agent-based methods to examine and evaluate causal effects and thus illuminate new areas for intervention. We build on this work by describing how agent-based models can be used to simulate counterfactual outcomes in the presence of complexity. We show that these models are of particular utility when the hypothesized causal mechanisms exhibit a high degree of interdependence between multiple causal effects and when interference (i.e., one person's exposure affects the outcome of others) is present and of intrinsic scientific interest. Although not without challenges, agent-based modeling (and complex systems methods broadly) represents a promising novel approach to identifying and evaluating complex causal effects, and such methods are thus well suited to complement other modern epidemiologic methods of etiologic inquiry. PMID:25480821
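
    A toy version of the counterfactual contrast under interference: simulate the same contagion process on the same contact structure with and without an intervention, and compare outbreak sizes. Everything below (population size, contact structure, transmission probability, vaccination as the intervention) is a hypothetical illustration, not a model from the paper.

```python
import random

def epidemic(n=500, p_transmit=0.05, k=4, vaccinate=0.0, seed=7):
    """Minimal agent-based contagion on a random contact structure.
    Because infection spreads through contacts, one agent's treatment
    affects others' outcomes -- the interference the text describes."""
    rng = random.Random(seed)
    contacts = [[rng.randrange(n) for _ in range(k)] for _ in range(n)]
    immune = set(rng.sample(range(n), int(vaccinate * n)))
    patient_zero = rng.choice([i for i in range(n) if i not in immune])
    infected, ever = {patient_zero}, {patient_zero}
    while infected:
        new = set()
        for i in infected:
            for j in contacts[i]:
                if j not in ever and j not in immune \
                        and rng.random() < p_transmit:
                    new.add(j)
        ever |= new
        infected = new  # one-step infectious period (SIR-like)
    return len(ever)

# The same simulated population under two exposure regimes:
print("no intervention :", epidemic(vaccinate=0.0))
print("30% vaccinated  :", epidemic(vaccinate=0.3))
```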

  2. Evolving Scale-Free Networks by Poisson Process: Modeling and Degree Distribution.

    PubMed

    Feng, Minyu; Qu, Hong; Yi, Zhang; Xie, Xiurui; Kurths, Jurgen

    2016-05-01

Since the great mathematician Leonhard Euler initiated the study of graph theory, networks have been one of the most significant research subjects across disciplines. In recent years, the proposition of the small-world and scale-free properties of complex networks in statistical physics has made network science intriguing again for many researchers. One of the challenges of network science is to propose rational models for complex networks. In this paper, in order to reveal the influence of the vertex generating mechanism of complex networks, we propose three novel models based on the homogeneous Poisson, nonhomogeneous Poisson and birth-death processes, respectively, which can be regarded as typical scale-free networks and utilized to simulate practical networks. The degree distribution and exponent are analyzed and explained mathematically by different approaches. In the simulation, we display the modeling process and the degree distribution of empirical data by statistical methods, and examine the reliability of the proposed networks; results show our models follow the features of typical complex networks. Finally, some future challenges for complex systems are discussed.

  3. From Laser Scanning to Finite Element Analysis of Complex Buildings by Using a Semi-Automatic Procedure.

    PubMed

    Castellazzi, Giovanni; D'Altri, Antonio Maria; Bitelli, Gabriele; Selvaggi, Ilenia; Lambertini, Alessandro

    2015-07-28

In this paper, a new semi-automatic procedure to transform three-dimensional point clouds of complex objects into three-dimensional finite element models is presented and validated. The procedure conceives of the point cloud as a stacking of point sections. The complexity of the clouds is arbitrary, since the procedure is designed for terrestrial laser scanner surveys applied to buildings with irregular geometry, such as historical buildings. The procedure aims at solving the problems connected to the generation of finite element models of these complex structures by constructing a finely discretized geometry in a reduced amount of time, ready to be used in structural analysis. If the starting clouds represent the inner and outer surfaces of the structure, the resulting finite element model will accurately capture the whole three-dimensional structure, producing a complex solid made of voxel elements. A comparison analysis with a CAD-based model is carried out on a historical building damaged by a seismic event. The results indicate that the proposed procedure is effective and obtains comparable models in a shorter time, with an increased level of automation.
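
    The core voxelization step can be sketched in a few lines: bin the point cloud into a regular grid and treat each occupied cell as a candidate finite element. The wall-like synthetic cloud below is a hypothetical stand-in for a laser-scan survey; the published procedure adds section stacking, hole handling and interior filling on top of this basic step.

```python
import numpy as np

def voxelize(points, voxel_size):
    """Bin a 3-D point cloud into occupied voxels: the basic step behind
    turning stacked point sections into a solid of voxel elements."""
    origin = points.min(axis=0)
    ijk = np.floor((points - origin) / voxel_size).astype(int)
    occupied = np.unique(ijk, axis=0)  # one entry per occupied cell
    return occupied, origin

rng = np.random.default_rng(4)
# Hypothetical wall-like cloud: two parallel scanned faces.
pts = np.vstack([
    rng.uniform([0, 0.0, 0], [10, 0.3, 6], size=(5000, 3)),
    rng.uniform([0, 2.7, 0], [10, 3.0, 6], size=(5000, 3)),
])
vox, origin = voxelize(pts, voxel_size=0.25)
print(len(vox), "occupied voxels; each would map to one finite element")
```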

  4. Molecular architecture of the yeast Mediator complex

    PubMed Central

    Robinson, Philip J; Trnka, Michael J; Pellarin, Riccardo; Greenberg, Charles H; Bushnell, David A; Davis, Ralph; Burlingame, Alma L; Sali, Andrej; Kornberg, Roger D

    2015-01-01

    The 21-subunit Mediator complex transduces regulatory information from enhancers to promoters, and performs an essential role in the initiation of transcription in all eukaryotes. Structural information on two-thirds of the complex has been limited to coarse subunit mapping onto 2-D images from electron micrographs. We have performed chemical cross-linking and mass spectrometry, and combined the results with information from X-ray crystallography, homology modeling, and cryo-electron microscopy by an integrative modeling approach to determine a 3-D model of the entire Mediator complex. The approach is validated by the use of X-ray crystal structures as internal controls and by consistency with previous results from electron microscopy and yeast two-hybrid screens. The model shows the locations and orientations of all Mediator subunits, as well as subunit interfaces and some secondary structural elements. Segments of 20–40 amino acid residues are placed with an average precision of 20 Å. The model reveals roles of individual subunits in the organization of the complex. DOI: http://dx.doi.org/10.7554/eLife.08719.001 PMID:26402457

  5. Surface complexation modeling of americium sorption onto volcanic tuff.

    PubMed

    Ding, M; Kelkar, S; Meijer, A

    2014-10-01

Results of a surface complexation model (SCM) for americium sorption on volcanic rocks (devitrified and zeolitic tuff) are presented. The model was developed using PHREEQC and based on laboratory data for americium sorption on quartz. Available data for sorption of americium on quartz as a function of pH in dilute groundwater can be modeled with two surface reactions involving an americium sulfate and an americium carbonate complex. It was assumed in applying the model to volcanic rocks from Yucca Mountain, that the surface properties of volcanic rocks can be represented by a quartz surface. Using groundwaters compositionally representative of Yucca Mountain, americium sorption distribution coefficient (Kd, L/kg) values were calculated as function of pH. These Kd values are close to the experimentally determined Kd values for americium sorption on volcanic rocks, decreasing with increasing pH in the pH range from 7 to 9. The surface complexation constants, derived in this study, allow prediction of sorption of americium in a natural complex system, taking into account the inherent uncertainty associated with geochemical conditions that occur along transport pathways. Published by Elsevier Ltd.
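
    The quantity the model is asked to reproduce is defined by simple batch arithmetic. A minimal sketch, with assumed concentrations and solid-to-solution ratio (not values from the study):

```python
# Distribution coefficient from a hypothetical batch sorption experiment:
# Kd (L/kg) = mass sorbed per kg of solid / equilibrium dissolved conc.
c0 = 1.0e-6     # initial Am concentration, mol/L (assumed)
c_eq = 2.0e-7   # equilibrium dissolved concentration, mol/L (assumed)
volume = 0.02   # L of solution (assumed)
mass = 0.001    # kg of tuff (assumed)

kd = (c0 - c_eq) * volume / (c_eq * mass)
print(f"Kd = {kd:.0f} L/kg")  # 80 L/kg for these assumed values
```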

  6. Calcium-manganese oxides as structural and functional models for active site in oxygen evolving complex in photosystem II: lessons from simple models.

    PubMed

    Najafpour, Mohammad Mahdi

    2011-01-01

    The oxygen evolving complex in photosystem II which induces the oxidation of water to dioxygen in plants, algae and certain bacteria contains a cluster of one calcium and four manganese ions. It serves as a model to split water by sunlight. Reports on the mechanism and structure of photosystem II provide a more detailed architecture of the oxygen evolving complex and the surrounding amino acids. One challenge in this field is the development of artificial model compounds to study oxygen evolution reaction outside the complicated environment of the enzyme. Calcium-manganese oxides as structural and functional models for the active site of photosystem II are explained and reviewed in this paper. Because of related structures of these calcium-manganese oxides and the catalytic centers of active site of the oxygen evolving complex of photosystem II, the study may help to understand more about mechanism of oxygen evolution by the oxygen evolving complex of photosystem II. Copyright © 2010 Elsevier B.V. All rights reserved.

  7. Reliability analysis in interdependent smart grid systems

    NASA Astrophysics Data System (ADS)

    Peng, Hao; Kan, Zhe; Zhao, Dandan; Han, Jianmin; Lu, Jianfeng; Hu, Zhaolong

    2018-06-01

    Complex network theory is a useful way to study many real complex systems. In this paper, a reliability analysis model based on complex network theory is introduced for interdependent smart grid systems. We focus on understanding the structure of smart grid systems, the underlying network model, the interactions and relationships among components, and how cascading failures occur in interdependent smart grids. We propose a practical model for interdependent smart grid systems using complex network theory. Based on percolation theory, we also study cascading failures and present a detailed mathematical analysis of failure propagation in such systems. We analyze the reliability of the proposed model under random attacks or failures by calculating the size of the giant functioning component in the interdependent smart grid systems. Our simulation results show that there exists a threshold for the proportion of faulty nodes, beyond which the smart grid systems collapse, and we determine the critical values for different system parameters. In this way, the reliability analysis model based on complex network theory can be effectively utilized for anti-attack and protection purposes in interdependent smart grid systems.
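
    The percolation argument in this record can be illustrated with a short mutual-percolation simulation in the style of Buldyrev et al.: two coupled random graphs, one-to-one dependency links, and iterated pruning to each network's giant component. Everything below (graph sizes, mean degree, attack fractions) is an assumption chosen for illustration, not a parameterization of the authors' grid model.

    ```python
    import random
    import networkx as nx

    def giant(g, nodes):
        """Nodes of the largest connected component of g restricted to `nodes`."""
        sub = g.subgraph(nodes)
        if sub.number_of_nodes() == 0:
            return set()
        return max(nx.connected_components(sub), key=len)

    def cascade(n=2000, k=4, p=0.6, seed=1):
        """Surviving fraction after a cascade in two interdependent ER networks.

        Node i of network A depends one-to-one on node i of network B; a random
        fraction (1 - p) of A-nodes is removed initially (assumed parameters).
        """
        random.seed(seed)
        a = nx.gnp_random_graph(n, k / n, seed=seed)
        b = nx.gnp_random_graph(n, k / n, seed=seed + 1)
        alive_a = set(random.sample(range(n), int(p * n)))  # attack survivors
        alive_b = set(range(n))
        while True:
            alive_a = giant(a, alive_a)   # A-nodes must lie in A's giant component
            alive_b &= alive_a            # B-nodes die with their A partners
            alive_b = giant(b, alive_b)   # B-nodes must lie in B's giant component
            new_a = alive_a & alive_b     # A-nodes die with their B partners
            if new_a == alive_a:
                return len(new_a) / n
            alive_a = new_a

    for p in (0.5, 0.6, 0.7, 0.8):
        print(f"p = {p:.1f}: surviving fraction = {cascade(p=p):.3f}")
    ```

    Sweeping p in finer steps around the transition exhibits the abrupt collapse the abstract describes: below a critical attack fraction the surviving giant component vanishes discontinuously.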

  8. Adsorption of uranium(VI) to manganese oxides: X-ray absorption spectroscopy and surface complexation modeling.

    PubMed

    Wang, Zimeng; Lee, Sung-Woo; Catalano, Jeffrey G; Lezama-Pacheco, Juan S; Bargar, John R; Tebo, Bradley M; Giammar, Daniel E

    2013-01-15

    The mobility of hexavalent uranium in soil and groundwater is strongly governed by adsorption to mineral surfaces. As strong naturally occurring adsorbents, manganese oxides may significantly influence the fate and transport of uranium. Models for U(VI) adsorption over a broad range of chemical conditions can improve predictive capabilities for uranium transport in the subsurface. This study integrated batch experiments of U(VI) adsorption to synthetic and biogenic MnO2, surface complexation modeling, ζ-potential analysis, and molecular-scale characterization of adsorbed U(VI) with extended X-ray absorption fine structure (EXAFS) spectroscopy. The surface complexation model included inner-sphere monodentate and bidentate surface complexes and a ternary uranyl-carbonato surface complex, which was consistent with the EXAFS analysis. The model could successfully simulate adsorption results over a broad range of pH and dissolved inorganic carbon concentrations. U(VI) adsorption to synthetic δ-MnO2 appears to be stronger than to biogenic MnO2, and the differences in adsorption affinity and capacity are not associated with any substantial difference in U(VI) coordination.

  9. Comparison of new generation low-complexity flood inundation mapping tools with a hydrodynamic model

    NASA Astrophysics Data System (ADS)

    Afshari, Shahab; Tavakoly, Ahmad A.; Rajib, Mohammad Adnan; Zheng, Xing; Follum, Michael L.; Omranian, Ehsan; Fekete, Balázs M.

    2018-01-01

    The objective of this study is to compare two new-generation low-complexity tools, AutoRoute and Height Above the Nearest Drainage (HAND), with a two-dimensional hydrodynamic model (Hydrologic Engineering Center-River Analysis System, HEC-RAS 2D). The assessment was conducted on two hydrologically different and geographically distant test cases in the United States: the 16,900 km2 Cedar River (CR) watershed in Iowa and a 62 km2 domain along the Black Warrior River (BWR) in Alabama. For BWR, twelve configurations were set up for each of the models, combining four terrain setups (e.g. with and without channel bathymetry and a levee) and three flooding conditions representing moderate to extreme hazards at 10-, 100-, and 500-year return periods. For the CR watershed, the models were compared with a simplistic terrain setup (without bathymetry or any form of hydraulic control) and one flooding condition (100-year return period). Input streamflow forcing data representing these hypothetical events were constructed by applying a new fusion approach to National Water Model outputs. Simulated inundation extent and depth from AutoRoute, HAND, and HEC-RAS 2D were compared with one another and with the corresponding FEMA reference estimates. Irrespective of the configuration, the low-complexity models were able to produce inundation extents similar to HEC-RAS 2D, with AutoRoute showing slightly higher accuracy than the HAND model. Among the four terrain setups, the one including both the levee and channel bathymetry showed the lowest fitness score for spatial agreement of inundation extent, reflecting the weaker physical representation of the low-complexity models compared with a hydrodynamic model. For inundation depth, the low-complexity models showed an overestimating tendency, especially in the deeper segments of the channel. Given such reasonably good prediction skill, low-complexity flood models can be considered a suitable alternative for fast predictions in large-scale, hyper-resolution operational frameworks, without entirely supplanting hydrodynamic models.
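
    The HAND approach named above reduces inundation mapping to a raster comparison: a cell floods when the water stage above its nearest channel exceeds the cell's height above that channel. A minimal sketch (toy grid and stage values, assumed purely for illustration):

    ```python
    import numpy as np

    # Toy HAND raster (metres above nearest drainage); real grids come from a
    # DEM that has been flow-routed to the channel network.
    hand = np.array([[0.0, 0.5, 2.1, 6.0],
                     [0.2, 1.0, 3.5, 7.2],
                     [0.8, 1.8, 4.9, 9.0]])

    def inundation(hand_grid, stage_m):
        """Binary inundation extent: a cell floods when the stage above the
        nearest channel exceeds the cell's height above that channel."""
        return hand_grid <= stage_m

    def depth(hand_grid, stage_m):
        """Approximate water depth in flooded cells (zero elsewhere)."""
        return np.clip(stage_m - hand_grid, 0.0, None)

    print(inundation(hand, stage_m=2.0).astype(int))
    print(depth(hand, stage_m=2.0))
    ```

    The simplicity of this lookup, relative to solving shallow-water equations on the same grid, is what makes HAND-type tools attractive for the fast, large-scale predictions discussed in this record.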

  10. Data flow modeling techniques

    NASA Technical Reports Server (NTRS)

    Kavi, K. M.

    1984-01-01

    A number of simulation packages have been developed for designing, testing, and validating computer systems, digital systems, and software systems. Complex analytical tools based on Markov and semi-Markov processes have been designed to estimate the reliability and performance of simulated systems. Petri nets have received wide acceptance for modeling complex and highly parallel computers. In this research, data flow models for computer systems are investigated. Data flow models can be used to simulate both software and hardware in a uniform manner. Data flow simulation techniques provide the computer systems designer with a CAD environment in which highly parallel complex systems can be defined, evaluated at all levels, and finally implemented in either hardware or software. Inherent in the data flow concept is the hierarchical handling of complex systems. In this paper we describe how data flow can be used to model computer systems.
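
    The firing rule that underlies such models is compact enough to sketch directly. The toy interpreter below (a hypothetical example, not code from the report) fires each node once all of its input arcs carry tokens, which is what lets one graph describe a computation independently of a hardware or software realization:

    ```python
    # A minimal single-wave dataflow interpreter. A node fires when every input
    # arc carries a token, consuming them and placing a result token on each
    # output arc. The graph below is an invented example: (2.0 * 3.0) -> sink.
    graph = {
        # node: (input arcs, output arcs, operation)
        "src_a": ((), ("x",), lambda: 2.0),
        "src_b": ((), ("y",), lambda: 3.0),
        "mul":   (("x", "y"), ("z",), lambda x, y: x * y),
        "sink":  (("z",), (), lambda z: print("result:", z)),
    }

    tokens, fired, progress = {}, set(), True
    while progress:
        progress = False
        for node, (ins, outs, op) in graph.items():
            if node not in fired and all(arc in tokens for arc in ins):
                result = op(*(tokens[arc] for arc in ins))  # fire the node
                for arc in outs:
                    tokens[arc] = result
                fired.add(node)
                progress = True
    ```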

  11. A human body model for efficient numerical characterization of UWB signal propagation in wireless body area networks.

    PubMed

    Lim, Hooi Been; Baumann, Dirk; Li, Er-Ping

    2011-03-01

    Wireless body area network (WBAN) is a new enabling system with promising applications in areas such as remote health monitoring and interpersonal communication. Reliable and optimum design of a WBAN system relies on a good understanding and in-depth studies of the wave propagation around a human body. However, the human body is a very complex structure and is computationally demanding to model. This paper aims to investigate the effects of the numerical model's structure complexity and feature details on the simulation results. Depending on the application, a simplified numerical model that meets desired simulation accuracy can be employed for efficient simulations. Measurements of ultra wideband (UWB) signal propagation along a human arm are performed and compared to the simulation results obtained with numerical arm models of different complexity levels. The influence of the arm shape and size, as well as tissue composition and complexity is investigated.

  12. Modeling Complex Cross-Systems Software Interfaces Using SysML

    NASA Technical Reports Server (NTRS)

    Mandutianu, Sanda; Morillo, Ron; Simpson, Kim; Liepack, Otfrid; Bonanne, Kevin

    2013-01-01

    The complex flight and ground systems for NASA human space exploration are designed, built, operated and managed as separate programs and projects. However, each system relies on one or more of the other systems in order to accomplish specific mission objectives, creating a complex, tightly coupled architecture. Thus, there is a fundamental need to understand how each system interacts with the others. To determine whether a model-based systems engineering approach could be utilized to assist with understanding the complex system interactions, the NASA Engineering and Safety Center (NESC) sponsored a task to develop an approach for performing cross-system behavior modeling. This paper presents the results of applying Model Based Systems Engineering (MBSE) principles using the Systems Modeling Language (SysML) to define cross-system behaviors and how they map to cross-system software interfaces documented in system-level Interface Control Documents (ICDs).

  13. Some Observations on the Current Status of Performing Finite Element Analyses

    NASA Technical Reports Server (NTRS)

    Raju, Ivatury S.; Knight, Norman F., Jr; Shivakumar, Kunigal N.

    2015-01-01

    Aerospace structures are complex high-performance structures. Advances in reliable and efficient computing and modeling tools are enabling analysts to consider complex configurations, build complex finite element models, and perform analysis rapidly. Many of today's early-career engineers are very proficient in the use of modern computers, computing engines, complex software systems, and visualization tools. These young engineers are becoming increasingly efficient at building complex 3D models of complicated aerospace components. However, current trends demonstrate blind acceptance of finite element analysis results. This paper is aimed at raising awareness of this situation. Examples of commonly encountered situations are presented. To counter the current trends, some guidelines and suggestions for analysts, senior engineers, and educators are offered.

  14. Understanding large multiprotein complexes: applying a multiple allosteric networks model to explain the function of the Mediator transcription complex.

    PubMed

    Lewis, Brian A

    2010-01-15

    The regulation of transcription and of many other cellular processes involves large multi-subunit protein complexes. In the context of transcription, it is known that these complexes serve as regulatory platforms that connect activator DNA-binding proteins to a target promoter. However, there is still a lack of understanding regarding the function of these complexes. Why do multi-subunit complexes exist? What is the molecular basis of the function of their constituent subunits, and how are these subunits organized within a complex? What is the reason for physical connections between certain subunits and not others? In this article, I address these issues through a model of network allostery and its application to the eukaryotic RNA polymerase II Mediator transcription complex. The multiple allosteric networks model (MANM) suggests that protein complexes such as Mediator exist not only as physical but also as functional networks of interconnected proteins through which information is transferred from subunit to subunit by the propagation of an allosteric state known as conformational spread. Additionally, there are multiple distinct sub-networks within the Mediator complex that can be defined by their connections to different subunits; these sub-networks have discrete functions that are activated when specific subunits interact with other activator proteins.

  15. Syntheses, spectroscopic characterization, thermal study, molecular modeling, and biological evaluation of novel Schiff's base benzil bis(5-amino-1,3,4-thiadiazole-2-thiol) with Ni(II), and Cu(II) metal complexes

    NASA Astrophysics Data System (ADS)

    Chandra, Sulekh; Gautam, Seema; Rajor, Hament Kumar; Bhatia, Rohit

    2015-02-01

    A novel Schiff's base ligand, benzil bis(5-amino-1,3,4-thiadiazole-2-thiol), was synthesized by the condensation of benzil and 5-amino-1,3,4-thiadiazole-2-thiol in a 1:2 ratio. The structure of the ligand was determined on the basis of elemental analyses, IR, 1H NMR, mass spectrometry, and molecular modeling studies. The synthesized ligand behaved as a tetradentate ligand, coordinating to the metal ion through the sulfur atoms of the thiol rings and the nitrogen atoms of the imine groups. Ni(II) and Cu(II) complexes were synthesized with this nitrogen-sulfur donor (N2S2) ligand. The metal complexes were characterized by elemental analyses, molar conductance, magnetic susceptibility measurements, IR, electronic spectra, EPR, thermal, and molecular modeling studies. All the complexes showed molar conductance corresponding to a non-electrolytic nature, except the [Ni(L)](NO3)2 complex, which was a 1:2 electrolyte. The [Cu(L)(SO4)] complex may possess square-pyramidal geometry, the [Ni(L)](NO3)2 complex tetrahedral geometry, and the rest of the complexes six-coordinate octahedral/tetragonal geometry. The newly synthesized ligand and its metal complexes were examined against opportunistic pathogens. The results suggest that the metal complexes were more biologically active than the free ligand.

  16. Organizational-economic model of formation of socio-commercial multifunctional complex in the construction of high-rise buildings

    NASA Astrophysics Data System (ADS)

    Kirillova, Ariadna; Prytkova, Oksana O.

    2018-03-01

    The article is devoted to the formation of an organizational and economic model for the construction of a socio-commercial multifunctional complex in high-rise construction. The authors give examples of high-rise multifunctional complexes in Moscow, analyze the advantages and disadvantages of implemented multifunctional complexes, and stress the need for a holistic strategic approach that accounts for the city's development prospects and the creation of a comfortable living environment. Based on an analysis of the features of multifunctional complexes, a SWOT-analysis matrix was compiled. To support urban development and improve the quality of life of the population, the authors propose a new type of multifunctional complex with a joint social and commercial orientation, combining office space with schools, polyclinics, sports facilities, and cultural and leisure centers (theater, dance, studio, etc.). The proposed approach to developing the model is based on a comparative evaluation of a social-commercial multifunctional complex implemented through a public-private partnership in the form of a concession agreement and a commercial multifunctional complex built at the investor's expense. Calculations show that the resulting indicators satisfy the feasibility conditions of the proposed organizational-economic model and that the social-commercial multifunctional complex project is effective.

  17. Dynamics of nanoparticle-protein corona complex formation: analytical results from population balance equations.

    PubMed

    Darabi Sahneh, Faryad; Scoglio, Caterina; Riviere, Jim

    2013-01-01

    Nanoparticle-protein corona complex formation involves adsorption of protein molecules onto nanoparticle surfaces in a physiological environment. Understanding the corona formation process is crucial for predicting nanoparticle behavior in biological systems, including applications in nanotoxicology and the development of nanoscale drug delivery platforms. This paper extends earlier modeling work to derive a mathematical model describing the dynamics of nanoparticle corona complex formation from population balance equations. We apply nonlinear dynamics techniques to derive analytical results for the composition of the nanoparticle-protein corona complex, and validate our results through numerical simulations. The model exhibits two phases of corona complex dynamics. In the first phase, proteins rapidly bind to the free surface of the nanoparticles, leading to a metastable composition. During the second phase, continuous association and dissociation of protein molecules slowly changes the composition of the corona complex. Given sufficient time, the corona complex reaches an equilibrium state of stable composition. We find approximate analytical formulae for the metastable and stable compositions of the corona complex; these formulae clearly identify the parameters that determine corona composition. The dynamics of biocorona formation constitute a vital aspect of the interactions between nanoparticles and living organisms. Our results further the understanding of these dynamics by relating corona composition to quantifiable experimental conditions, allowing in vitro modeling results to better predict in vivo behavior. One potential application is relating a simple cell culture medium to a complex protein medium, such as blood or tissue fluid.
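
    The two-phase behavior described above already emerges from a two-protein population balance with competitive Langmuir-type kinetics. The sketch below is an illustrative reduction, not the paper's full equations; all rate constants and concentrations are assumed values, chosen so that an abundant fast-exchanging protein is slowly displaced by a scarce high-affinity one.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Hypothetical two-protein competition for nanoparticle surface sites.
    # Protein 1: abundant, fast-exchanging. Protein 2: scarce, high-affinity.
    K_ON = np.array([1e4, 1e4])     # association rate constants (1/(M*s)), assumed
    K_OFF = np.array([1e-1, 1e-4])  # dissociation rate constants (1/s), assumed
    C = np.array([1e-5, 1e-7])      # bulk protein concentrations (M), held constant
    SITES = 1.0                     # total surface capacity (normalized)

    def rhs(t, theta):
        free = SITES - theta.sum()              # unoccupied surface fraction
        return K_ON * C * free - K_OFF * theta  # coverage balance per protein

    sol = solve_ivp(rhs, (0.0, 1e5), [0.0, 0.0], method="LSODA",
                    t_eval=np.logspace(-2, 5, 8))
    for t, theta in zip(sol.t, sol.y.T):
        print(f"t = {t:10.2f} s   coverage (protein 1, protein 2) = {theta.round(3)}")
    ```

    The printed coverages show the metastable plateau (the abundant protein dominates at seconds to minutes) giving way, on a timescale of order 1/K_OFF of the slow protein, to the stable equilibrium composition.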

  18. Inner-sphere complexation of cations at the rutile-water interface: A concise surface structural interpretation with the CD and MUSIC model

    NASA Astrophysics Data System (ADS)

    Ridley, Moira K.; Hiemstra, Tjisse; van Riemsdijk, Willem H.; Machesky, Michael L.

    2009-04-01

    Acid-base reactivity and ion interaction between mineral surfaces and aqueous solutions are most frequently investigated at the macroscopic scale as a function of pH. Experimental data are then rationalized by a variety of surface complexation models. These models are thermodynamically based, which in principle does not require a molecular picture. The models are typically calibrated to relatively simple solid-electrolyte solution pairs and may provide poor descriptions of complex multi-component mineral-aqueous solutions, including those found in natural environments. Surface complexation models may be improved by incorporating molecular-scale surface structural information to constrain the modeling efforts. Here, we apply a concise, molecularly-constrained surface complexation model to a diverse suite of surface titration data for rutile and thereby begin to address the complexity of multi-component systems. Primary surface charging curves in NaCl, KCl, and RbCl electrolyte media were fit simultaneously using a charge distribution (CD) and multisite complexation (MUSIC) model [Hiemstra T. and Van Riemsdijk W. H. (1996) A surface structural approach to ion adsorption: the charge distribution (CD) model. J. Colloid Interf. Sci. 179, 488-508], coupled with a Basic Stern layer description of the electric double layer. In addition, data for the specific interaction of Ca2+ and Sr2+ with rutile, in NaCl and RbCl media, were modeled. In recent developments, spectroscopy, quantum calculations, and molecular simulations have shown that electrolyte and divalent cations are principally adsorbed in various inner-sphere configurations on the rutile (110) surface [Zhang Z., Fenter P., Cheng L., Sturchio N. C., Bedzyk M. J., Předota M., Bandura A., Kubicki J., Lvov S. N., Cummings P. T., Chialvo A. A., Ridley M. K., Bénézeth P., Anovitz L., Palmer D. A., Machesky M. L. and Wesolowski D. J. (2004) Ion adsorption at the rutile-water interface: linking molecular and macroscopic properties. Langmuir 20, 4954-4969]. Our CD modeling results are consistent with these adsorbed configurations provided adsorbed cation charge is allowed to be distributed between the surface (0-plane) and Stern plane (1-plane). Additionally, a complete description of our titration data required inclusion of outer-sphere binding, principally for Cl-, which was common to all solutions, but also for Rb+ and K+. These outer-sphere species were treated as point charges positioned at the Stern layer, and hence determined the Stern layer capacitance value. The modeling results demonstrate that a multi-component suite of experimental data can be successfully rationalized within a CD and MUSIC model using a Stern-based description of the EDL. Furthermore, the fitted CD values of the various inner-sphere complexes of the mono- and divalent ions can be linked to the microscopic structure of the surface complexes and to other data from spectroscopy as well as molecular dynamics (MD). For the Na+ ion, the fitted CD value points to the presence of bidentate inner-sphere complexation, as suggested by a recent MD study, and its dominance in MD quantitatively agrees with the CD model prediction. For Rb+, a tetradentate complex agreed well with the fitted CD value, and its predicted abundance was in very good agreement with the amount found by spectroscopy.

  19. Inner-sphere complexation of cations at the rutile-water interface: A concise surface structural interpretation with the CD and MUSIC model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ridley, Mora K.; Hiemstra, T; Van Riemsdijk, Willem H.

    Acid-base reactivity and ion interaction between mineral surfaces and aqueous solutions are most frequently investigated at the macroscopic scale as a function of pH. Experimental data are then rationalized by a variety of surface complexation models. These models are thermodynamically based, which in principle does not require a molecular picture. The models are typically calibrated to relatively simple solid-electrolyte solution pairs and may provide poor descriptions of complex multi-component mineral-aqueous solutions, including those found in natural environments. Surface complexation models may be improved by incorporating molecular-scale surface structural information to constrain the modeling efforts. Here, we apply a concise, molecularly-constrained surface complexation model to a diverse suite of surface titration data for rutile and thereby begin to address the complexity of multi-component systems. Primary surface charging curves in NaCl, KCl, and RbCl electrolyte media were fit simultaneously using a charge distribution (CD) and multisite complexation (MUSIC) model [Hiemstra T. and Van Riemsdijk W. H. (1996) A surface structural approach to ion adsorption: the charge distribution (CD) model. J. Colloid Interf. Sci. 179, 488-508], coupled with a Basic Stern layer description of the electric double layer. In addition, data for the specific interaction of Ca2+ and Sr2+ with rutile, in NaCl and RbCl media, were modeled. In recent developments, spectroscopy, quantum calculations, and molecular simulations have shown that electrolyte and divalent cations are principally adsorbed in various inner-sphere configurations on the rutile (110) surface [Zhang Z., Fenter P., Cheng L., Sturchio N. C., Bedzyk M. J., Předota M., Bandura A., Kubicki J., Lvov S. N., Cummings P. T., Chialvo A. A., Ridley M. K., Bénézeth P., Anovitz L., Palmer D. A., Machesky M. L. and Wesolowski D. J. (2004) Ion adsorption at the rutile-water interface: linking molecular and macroscopic properties. Langmuir 20, 4954-4969]. Our CD modeling results are consistent with these adsorbed configurations provided adsorbed cation charge is allowed to be distributed between the surface (0-plane) and Stern plane (1-plane). Additionally, a complete description of our titration data required inclusion of outer-sphere binding, principally for Cl-, which was common to all solutions, but also for Rb+ and K+. These outer-sphere species were treated as point charges positioned at the Stern layer, and hence determined the Stern layer capacitance value. The modeling results demonstrate that a multi-component suite of experimental data can be successfully rationalized within a CD and MUSIC model using a Stern-based description of the EDL. Furthermore, the fitted CD values of the various inner-sphere complexes of the mono- and divalent ions can be linked to the microscopic structure of the surface complexes and to other data from spectroscopy as well as molecular dynamics (MD). For the Na+ ion, the fitted CD value points to the presence of bidentate inner-sphere complexation, as suggested by a recent MD study, and its dominance in MD quantitatively agrees with the CD model prediction. For Rb+, a tetradentate complex agreed well with the fitted CD value, and its predicted abundance was in very good agreement with the amount found by spectroscopy.

  20. A probabilistic process model for pelagic marine ecosystems informed by Bayesian inverse analysis

    EPA Science Inventory

    Marine ecosystems are complex systems with multiple pathways that produce feedback cycles, which may lead to unanticipated effects. Models abstract this complexity and allow us to predict, understand, and hypothesize. In ecological models, however, the paucity of empirical data...

  1. Chemical cross-linking of the urease complex from Helicobacter pylori and analysis by Fourier transform ion cyclotron resonance mass spectrometry and molecular modeling

    NASA Astrophysics Data System (ADS)

    Carlsohn, Elisabet; Ångström, Jonas; Emmett, Mark R.; Marshall, Alan G.; Nilsson, Carol L.

    2004-05-01

    Chemical cross-linking of proteins is a well-established method for structural mapping of small protein complexes. When combined with mass spectrometry, cross-linking can reveal protein topology and identify contact sites between peptide surfaces. When applied to surface-exposed proteins from pathogenic organisms, the method can reveal structural details that are useful in vaccine design. In order to investigate the possibility of applying cross-linking to larger protein complexes, we selected the urease enzyme from Helicobacter pylori as a model. This membrane-associated protein complex consists of two subunits: α (26.5 kDa) and β (61.7 kDa). Three (αβ) heterodimers form a trimeric (αβ)3 assembly, which further associates into a unique dodecameric 1.1 MDa complex composed of four (αβ)3 units. Cross-linked peptides from the trypsin-digested urease complex were analyzed by Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR MS) and molecular modeling. Two potential cross-linked peptides (present in the cross-linked sample but undetectable in α, β, and the native complex) were assigned. Molecular modeling of the urease αβ complex and trimeric urease units (αβ)3 revealed a linkage site between the α-subunit and the β-subunit, and an internal cross-linkage in the β-subunit.

  2. Ceruloplasmin: Macromolecular Assemblies with Iron-Containing Acute Phase Proteins

    PubMed Central

    Samygina, Valeriya R.; Sokolov, Alexey V.; Bourenkov, Gleb; Petoukhov, Maxim V.; Pulina, Maria O.; Zakharova, Elena T.; Vasilyev, Vadim B.; Bartunik, Hans; Svergun, Dmitri I.

    2013-01-01

    Copper-containing ferroxidase ceruloplasmin (Cp) forms binary and ternary complexes with cationic proteins lactoferrin (Lf) and myeloperoxidase (Mpo) during inflammation. We present an X-ray crystal structure of a 2Cp-Mpo complex at 4.7 Å resolution. This structure allows one to identify major protein–protein interaction areas and provides an explanation for a competitive inhibition of Mpo by Cp and for the activation of p-phenylenediamine oxidation by Mpo. Small angle X-ray scattering was employed to construct low-resolution models of the Cp-Lf complex and, for the first time, of the ternary 2Cp-2Lf-Mpo complex in solution. The SAXS-based model of Cp-Lf supports the predicted 1:1 stoichiometry of the complex and demonstrates that both lobes of Lf contact domains 1 and 6 of Cp. The 2Cp-2Lf-Mpo SAXS model reveals the absence of interaction between Mpo and Lf in the ternary complex, so Cp can serve as a mediator of protein interactions in complex architecture. Mpo protects antioxidant properties of Cp by isolating its sensitive loop from proteases. The latter is important for incorporation of Fe3+ into Lf, which activates ferroxidase activity of Cp and precludes oxidation of Cp substrates. Our models provide the structural basis for possible regulatory role of these complexes in preventing iron-induced oxidative damage. PMID:23843990

  3. Uranium(VI) adsorption to ferrihydrite: Application of a surface complexation model

    USGS Publications Warehouse

    Waite, T.D.; Davis, J.A.; Payne, T.E.; Waychunas, G.A.; Xu, N.

    1994-01-01

    A study of U(VI) adsorption by ferrihydrite was conducted over a wide range of U(VI) concentrations, pH, and at two partial pressures of carbon dioxide. A two-site (strong- and weak-affinity sites, FesOH and FewOH, respectively) surface complexation model was able to describe the experimental data well over a wide range of conditions, with only one species formed with each site type: an inner-sphere, mononuclear, bidentate complex of the type (FeO2)UO2. The existence of such a surface species was supported by results of uranium EXAFS spectroscopy performed on two samples with U(VI) adsorption density in the upper range observed in this study (10 and 18% occupancy of total surface sites). Adsorption data in the alkaline pH range suggested the existence of a second surface species, modeled as a ternary surface complex with UO2CO3(0) binding to a bidentate surface site. Previous surface complexation models for U(VI) adsorption have proposed surface species that are identical to the predominant aqueous species, e.g., multinuclear hydrolysis complexes or several U(VI)-carbonate complexes. The results demonstrate that the speciation of adsorbed U(VI) may be constrained by the coordination environment at the surface, giving rise to surface speciation for U(VI) that is significantly less complex than aqueous speciation.

  4. Applicability study of classical and contemporary models for effective complex permittivity of metal powders.

    PubMed

    Kiley, Erin M; Yakovlev, Vadim V; Ishizaki, Kotaro; Vaucher, Sebastien

    2012-01-01

    Microwave thermal processing of metal powders has recently been a topic of substantial interest; however, experimental data on the physical properties of mixtures involving metal particles are often unavailable. In this paper, we perform a systematic analysis of classical and contemporary models of the complex permittivity of mixtures and discuss the use of these models for determining the effective permittivity of dielectric matrices with metal inclusions. Results from various mixture and core-shell mixture models are compared to experimental data for a titanium/stearic acid mixture and a boron nitride/graphite mixture (both obtained through original measurements), and for a tungsten/Teflon mixture (from the literature). We find that for certain experiments, the average error in determining the effective complex permittivity using the Lichtenecker, Maxwell Garnett, Bruggeman, Buchelnikov, and Ignatenko models is about 10%. This suggests that, for multiphysics computer models describing the processing of metal powders over the full temperature range, input data on effective complex permittivity obtained from direct measurement have, as yet, no substitute.
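
    Two of the classical rules named above are one-liners on complex numbers, which makes this kind of comparison easy to reproduce. The sketch below implements the Lichtenecker logarithmic rule and the Maxwell Garnett rule for spherical inclusions; the permittivity values are invented placeholders, not the paper's measured data.

    ```python
    import numpy as np

    def lichtenecker(eps_m, eps_i, f):
        """Lichtenecker logarithmic rule: log(eps_eff) = (1-f) log(eps_m) + f log(eps_i)."""
        return np.exp((1 - f) * np.log(eps_m) + f * np.log(eps_i))

    def maxwell_garnett(eps_m, eps_i, f):
        """Maxwell Garnett rule for spherical inclusions (volume fraction f) in a matrix."""
        beta = (eps_i - eps_m) / (eps_i + 2 * eps_m)
        return eps_m * (1 + 2 * f * beta) / (1 - f * beta)

    # Illustrative values only: a lossy dielectric matrix with metal-like inclusions.
    eps_matrix = 2.5 - 0.02j
    eps_inclusion = 50.0 - 500.0j
    for f in (0.05, 0.15, 0.30):
        print(f"f = {f:.2f}:  Lichtenecker {lichtenecker(eps_matrix, eps_inclusion, f):.2f}"
              f"   Maxwell Garnett {maxwell_garnett(eps_matrix, eps_inclusion, f):.2f}")
    ```

    The two rules diverge increasingly with volume fraction, which is precisely why benchmarking them against measured mixtures, as done in this record, matters.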

  5. A Bayes network approach to uncertainty quantification in hierarchically developed computational models

    DOE PAGES

    Urbina, Angel; Mahadevan, Sankaran; Paez, Thomas L.

    2012-03-01

    Here, performance assessment of complex systems is ideally accomplished through system-level testing, but because such tests are expensive, they are seldom performed. On the other hand, for economic reasons, data from tests on individual components that are parts of complex systems are more readily available. The lack of system-level data leads to a need to build computational models of systems and use them for performance prediction in lieu of experiments. Because of their complexity, models are sometimes built in a hierarchical manner, starting with simple components, progressing to collections of components, and finally, to the full system. Quantification of uncertainty in the predicted response of a system model is required in order to establish confidence in the representation of actual system behavior. This paper proposes a framework for the complex, but very practical, problem of quantifying uncertainty in system-level model predictions. It is based on Bayes networks and uses the available data at multiple levels of complexity (i.e., components, subsystems, etc.). Because epistemic sources of uncertainty were shown to be secondary in this application, only aleatoric uncertainty is included in the present quantification. An example is included showing application of the techniques to uncertainty quantification of measures of response of a real, complex aerospace system.

  6. Toward Modeling the Intrinsic Complexity of Test Problems

    ERIC Educational Resources Information Center

    Shoufan, Abdulhadi

    2017-01-01

    The concept of intrinsic complexity explains why different problems of the same type, tackled by the same problem solver, can require different times to solve and yield solutions of different quality. This paper proposes a general four-step approach that can be used to establish a model for the intrinsic complexity of a problem class in terms of…

  7. [Take] and the ASL Verb Complex: An Autolexical Account

    ERIC Educational Resources Information Center

    Metlay, Donald S.

    2012-01-01

    This dissertation shows how linguistic description and an Autolexical account of the bound verb root [take] shed light on the nature of complex verb constructions in American Sign Language (ASL). This is accomplished by creating a new ASL Verb Complex Model unifying all verbs into one category of VERB. This model also accounts for a variety…

  8. USING THE QUIC MODEL (QUICK URBAN AND INDUSTRIAL COMPLEX) TO STUDY AIR FLOW AND DISPERSION PATTERNS IN DESERTS

    EPA Science Inventory

    As part of its continuing development and evaluation, the QUIC model (Quick Urban & Industrial Complex) was used to study flow and dispersion in complex terrain for two cases. First, for a small area of lower Manhattan near the World Trade Center site, comparisons were made bet...

  9. USING THE QUIC MODEL (QUICK URBAN AND INDUSTRIAL COMPLEX) TO STUDY AIR FLOW AND DISPERSION PATTERNS IN DESERTS

    EPA Science Inventory

    As part of its continuing development and evaluation, the QUIC model (Quick Urban & Industrial Complex) was used to study flow and dispersion in complex terrain for two cases. First, for a small area of lower Manhattan near the World Trade Center site, comparisons were made bet...

  10. Plant metabolic modeling: achieving new insight into metabolism and metabolic engineering.

    PubMed

    Baghalian, Kambiz; Hajirezaei, Mohammad-Reza; Schreiber, Falk

    2014-10-01

    Models are used to represent aspects of the real world for specific purposes, and mathematical models have opened up new approaches in studying the behavior and complexity of biological systems. However, modeling is often time-consuming and requires significant computational resources for data development, data analysis, and simulation. Computational modeling has been successfully applied as an aid for metabolic engineering in microorganisms. But such model-based approaches have only recently been extended to plant metabolic engineering, mainly due to greater pathway complexity in plants and their highly compartmentalized cellular structure. Recent progress in plant systems biology and bioinformatics has begun to disentangle this complexity and facilitate the creation of efficient plant metabolic models. This review highlights several aspects of plant metabolic modeling in the context of understanding, predicting and modifying complex plant metabolism. We discuss opportunities for engineering photosynthetic carbon metabolism, sucrose synthesis, and the tricarboxylic acid cycle in leaves and oil synthesis in seeds and the application of metabolic modeling to the study of plant acclimation to the environment. The aim of the review is to offer a current perspective for plant biologists without requiring specialized knowledge of bioinformatics or systems biology. © 2014 American Society of Plant Biologists. All rights reserved.

  11. Deterministic ripple-spreading model for complex networks.

    PubMed

    Hu, Xiao-Bing; Wang, Ming; Leeson, Mark S; Hines, Evor L; Di Paolo, Ezequiel

    2011-04-01

    This paper proposes a deterministic complex network model inspired by the natural ripple-spreading phenomenon. The motivations and main advantages of the model are the following: (i) The establishment of many real-world networks is a dynamic process, in which the influence of a few local events spreads out through nodes and then largely determines the final network topology. This dynamic process involves many spatial and temporal factors. By simulating the natural ripple-spreading process, this paper reports a very natural way to set up a spatial and temporal model for such complex networks. (ii) Existing relevant network models are all stochastic models, i.e., with a given input, they cannot output a unique topology. By contrast, the proposed ripple-spreading model uniquely determines the final network topology, while the stochastic character of complex networks is captured by randomly initializing the ripple-spreading parameters. (iii) The proposed model can use an easily manageable number of ripple-spreading parameters to precisely describe a network topology, which is more memory-efficient than a traditional adjacency matrix or similar memory-expensive data structures. (iv) The ripple-spreading model has very good potential for both extensions and applications.
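
    The determinism-from-random-initialization idea can be made concrete with a drastically simplified toy: fix random node positions and ripple energies once, then let a purely deterministic decay-and-threshold rule decide every link. This is a loose sketch in the spirit of the model, not the paper's actual construction; the decay law, threshold, and parameter values are all assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def ripple_network(n=100, threshold=1.0):
        """Toy deterministic ripple construction (a loose sketch, not the paper's rules).

        Nodes sit at random points and each emits a ripple whose amplitude decays
        as energy / distance; a directed link i -> j forms wherever the ripple
        from i still exceeds the threshold on reaching j. Once positions and
        energies are fixed, the resulting topology is fully deterministic.
        """
        pos = rng.random((n, 2))            # random spatial layout
        energy = rng.uniform(0.05, 0.3, n)  # random initial ripple energies
        dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
        np.fill_diagonal(dist, np.inf)      # no self-links
        return (energy[:, None] / dist) >= threshold

    adj = ripple_network()
    print("links:", int(adj.sum()), " mean out-degree:", adj.sum() / len(adj))
    ```

    Note the memory point made in the abstract: the n positions and n energies fully describe a topology that would otherwise take an n-by-n adjacency matrix to store.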

  12. Plant Metabolic Modeling: Achieving New Insight into Metabolism and Metabolic Engineering

    PubMed Central

    Baghalian, Kambiz; Hajirezaei, Mohammad-Reza; Schreiber, Falk

    2014-01-01

    Models are used to represent aspects of the real world for specific purposes, and mathematical models have opened up new approaches in studying the behavior and complexity of biological systems. However, modeling is often time-consuming and requires significant computational resources for data development, data analysis, and simulation. Computational modeling has been successfully applied as an aid for metabolic engineering in microorganisms. But such model-based approaches have only recently been extended to plant metabolic engineering, mainly due to greater pathway complexity in plants and their highly compartmentalized cellular structure. Recent progress in plant systems biology and bioinformatics has begun to disentangle this complexity and facilitate the creation of efficient plant metabolic models. This review highlights several aspects of plant metabolic modeling in the context of understanding, predicting and modifying complex plant metabolism. We discuss opportunities for engineering photosynthetic carbon metabolism, sucrose synthesis, and the tricarboxylic acid cycle in leaves and oil synthesis in seeds and the application of metabolic modeling to the study of plant acclimation to the environment. The aim of the review is to offer a current perspective for plant biologists without requiring specialized knowledge of bioinformatics or systems biology. PMID:25344492

  13. Coarse-grained molecular dynamics simulations for giant protein-DNA complexes

    NASA Astrophysics Data System (ADS)

    Takada, Shoji

    Biomolecules are highly hierarchical and intrinsically flexible. Thus, computational modeling calls for multi-scale methodologies. We have been developing a coarse-grained biomolecular model in which, on average, 10-20 atoms are grouped into one coarse-grained (CG) particle. Interactions among CG particles are tuned based on atomistic interactions and the fluctuation matching algorithm. CG molecular dynamics methods enable us to simulate much longer time scales and much larger molecular systems than fully atomistic models. After broad sampling of structures with CG models, we can easily reconstruct atomistic models, from which one can continue conventional molecular dynamics simulations if desired. Here, we describe our CG modeling methodology for protein-DNA complexes, together with various biological applications, such as the DNA replication initiation complex, model chromatins, and transcription factor dynamics in a chromatin-like environment.

  14. A Model of Compound Heterozygous, Loss-of-Function Alleles Is Broadly Consistent with Observations from Complex-Disease GWAS Datasets

    PubMed Central

    Sanjak, Jaleal S.; Long, Anthony D.; Thornton, Kevin R.

    2017-01-01

    The genetic component of complex disease risk in humans remains largely unexplained. A corollary is that the allelic spectrum of genetic variants contributing to complex disease risk is unknown. Theoretical models that relate population genetic processes to the maintenance of genetic variation for quantitative traits may suggest profitable avenues for future experimental design. Here we use forward simulation to model a genomic region evolving under a balance between recurrent deleterious mutation and Gaussian stabilizing selection. We consider multiple genetic and demographic models, and several different methods for identifying genomic regions harboring variants associated with complex disease risk. We demonstrate that the model of gene action, relating genotype to phenotype, has a qualitative effect on several relevant aspects of the population genetic architecture of a complex trait. In particular, the genetic model impacts genetic variance component partitioning across the allele frequency spectrum and the power of statistical tests. Models with partial recessivity closely match the minor allele frequency distribution of significant hits from empirical genome-wide association studies without requiring homozygous effect sizes to be small. We highlight a particular gene-based model of incomplete recessivity that is appealing from first principles. Under that model, deleterious mutations in a genomic region partially fail to complement one another. This model of gene-based recessivity predicts the empirically observed inconsistency between twin-based and SNP-based estimates of dominance heritability. Furthermore, this model predicts considerable levels of unexplained variance associated with intralocus epistasis. Our results suggest a need for improved statistical tools for region-based genetic association and heritability estimation. PMID:28103232

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Celia, Michael A.

    This report documents the accomplishments achieved during the project titled "Model complexity and choice of model approaches for practical simulations of CO2 injection, migration, leakage and long-term fate," funded by the US Department of Energy, Office of Fossil Energy. The objective of the project was to investigate modeling approaches of various levels of complexity relevant to geologic carbon storage (GCS) modeling, with the goal of establishing guidelines on the choice of modeling approach.

  16. A practical approach for comparing management strategies in complex forest ecosystems using meta-modelling toolkits

    Treesearch

    Andrew Fall; B. Sturtevant; M.-J. Fortin; M. Papaik; F. Doyon; D. Morgan; K. Berninger; C. Messier

    2010-01-01

    The complexity and multi-scaled nature of forests pose significant challenges to understanding and management. Models can provide useful insights into processes, their interactions, and the implications of alternative management options. Most models, particularly scientific models, focus on a relatively small set of processes and are designed to operate within a...

  17. The Difficult Process of Scientific Modelling: An Analysis Of Novices' Reasoning During Computer-Based Modelling

    ERIC Educational Resources Information Center

    Sins, Patrick H. M.; Savelsbergh, Elwin R.; van Joolingen, Wouter R.

    2005-01-01

    Although computer modelling is widely advocated as a way to offer students a deeper understanding of complex phenomena, the process of modelling is rather complex itself and needs scaffolding. In order to offer adequate support, a thorough understanding of the reasoning processes students employ and of difficulties they encounter during a…

  18. Turing instability in reaction-diffusion models on complex networks

    NASA Astrophysics Data System (ADS)

    Ide, Yusuke; Izuhara, Hirofumi; Machida, Takuya

    2016-09-01

    In this paper, the Turing instability in reaction-diffusion models defined on complex networks is studied. Here, we focus on three types of models which generate complex networks, i.e. the Erdős-Rényi, the Watts-Strogatz, and the threshold network models. From analysis of the Laplacian matrices of graphs generated by these models, we numerically reveal that stable and unstable regions of a homogeneous steady state on the parameter space of two diffusion coefficients completely differ, depending on the network architecture. In addition, we theoretically discuss the stable and unstable regions in the cases of regular enhanced ring lattices which include regular circles, and networks generated by the threshold network model when the number of vertices is large enough.
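
    The stability analysis sketched in this record amounts to checking, for each Laplacian eigenvalue of the graph, whether the linearized reaction-diffusion operator gains an eigenvalue with positive real part. A compact version follows; the Jacobian entries are an assumed activator-inhibitor-like example, not values taken from the paper.

    ```python
    import numpy as np
    import networkx as nx

    # Linear stability of a homogeneous steady state of a two-species
    # reaction-diffusion system on a graph:
    #   du/dt = f(u, v) - Du * L u ,   dv/dt = g(u, v) - Dv * L v ,
    # with L the (positive semidefinite) graph Laplacian.
    J = np.array([[0.6, -1.0],   # hypothetical Jacobian of (f, g) at the steady
                  [1.5, -1.0]])  # state: stable without diffusion (tr<0, det>0)

    def turing_unstable(graph, du, dv):
        """True if any Laplacian mode pushes an eigenvalue into the right half-plane."""
        lam = np.linalg.eigvalsh(nx.laplacian_matrix(graph).toarray().astype(float))
        for l in lam:  # l >= 0; diffusion contributes -D * l per species
            growth = np.linalg.eigvals(J - np.diag([du, dv]) * l)
            if growth.real.max() > 1e-9:
                return True
        return False

    g = nx.watts_strogatz_graph(200, 6, 0.1, seed=0)
    for du, dv in [(0.05, 0.05), (0.05, 1.0)]:
        print(f"Du={du}, Dv={dv}: unstable modes? {turing_unstable(g, du, dv)}")
    ```

    With equal diffusivities no mode can destabilize the stable steady state, while a sufficiently large inhibitor diffusivity renders some Laplacian modes unstable; repeating the check for Erdős-Rényi, Watts-Strogatz, and threshold graphs shows how the eigenvalue spectrum, and hence the unstable region in the (Du, Dv) plane, depends on the architecture.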

  19. The importance of including dynamic social networks when modeling epidemics of airborne infections: does increasing complexity increase accuracy?

    PubMed

    Blower, Sally; Go, Myong-Hyun

    2011-07-19

    Mathematical models are useful tools for understanding and predicting epidemics. A recent innovative modeling study by Stehle and colleagues addressed the issue of how complex models need to be to ensure accuracy. The authors collected data on face-to-face contacts during a two-day conference. They then constructed a series of dynamic social contact networks, each of which was used to model an epidemic generated by a fast-spreading airborne pathogen. Intriguingly, Stehle and colleagues found that increasing model complexity did not always increase accuracy. Specifically, the most detailed contact network and a simplified version of this network generated very similar results. These results are extremely interesting and require further exploration to determine their generalizability.

  20. Virtual enterprise model for the electronic components business in the Nuclear Weapons Complex

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferguson, T.J.; Long, K.S.; Sayre, J.A.

    1994-08-01

    The electronic components business within the Nuclear Weapons Complex spans organizational and Department of Energy contractor boundaries. An assessment of the current processes indicates a need for fundamentally changing the way electronic components are developed, procured, and manufactured. A model is provided based on a virtual enterprise that recognizes distinctive competencies within the Nuclear Weapons Complex and at the vendors. The model incorporates changes that reduce component delivery cycle time and improve cost effectiveness while delivering components of the appropriate quality.

  1. Trends in modeling Biomedical Complex Systems

    PubMed Central

    Milanesi, Luciano; Romano, Paolo; Castellani, Gastone; Remondini, Daniel; Liò, Petro

    2009-01-01

    In this paper we provide an introduction to techniques for multi-scale complex biological systems, from the single bio-molecule to the cell, combining theoretical modeling, experiments, informatics tools and technologies suitable for biological and biomedical research, which are becoming increasingly multidisciplinary, multidimensional and information-driven. The most important concepts of mathematical modeling methodologies and statistical inference, bioinformatics and standards tools for investigating complex biomedical systems are discussed, and the prominent literature useful to both the practitioner and the theoretician is presented. PMID:19828068

  2. Gravitational orientation of the orbital complex, Salyut-6--Soyuz

    NASA Technical Reports Server (NTRS)

    Grecho, G. M.; Sarychev, V. A.; Legostayev, V. P.; Sazonov, V. V.; Gansvind, I. N.

    1983-01-01

    A simple mathematical model is proposed for the motion of the Salyut-6-Soyuz orbital complex with respect to its center of mass under the one-axis gravity-gradient orientation regime. This model was used to process measurements of the orbital complex's motion parameters obtained when this orientation regime was implemented. Some actual satellite motions are simulated and the satellite's aerodynamic parameters are determined. Estimates are obtained for the accuracy of the measurements as well as that of the mathematical model.

  3. On the robustness of complex heterogeneous gene expression networks.

    PubMed

    Gómez-Gardeñes, Jesús; Moreno, Yamir; Floría, Luis M

    2005-04-01

    We analyze a continuous gene expression model on the underlying topology of a complex heterogeneous network. Numerical simulations aimed at studying the chaotic and periodic dynamics of the model are performed. The results clearly indicate that there is a region in which the dynamical and structural complexity of the system avoids chaotic attractors. However, contrary to what has been reported for Random Boolean Networks, the chaotic phase cannot be completely suppressed, which has important bearing on network robustness and gene expression modeling.

  4. New approaches in agent-based modeling of complex financial systems

    NASA Astrophysics Data System (ADS)

    Chen, Ting-Ting; Zheng, Bo; Li, Yan; Jiang, Xiong-Fei

    2017-12-01

    Agent-based modeling is a powerful simulation technique for understanding the collective behavior and microscopic interactions in complex financial systems. Recently, the concept of determining the key parameters of agent-based models from empirical data, instead of setting them artificially, was suggested. We first review several agent-based models and the new approaches for determining their key parameters from historical market data. Based on agents' behaviors with heterogeneous personal preferences and interactions, these models are successful in explaining the microscopic origin of the temporal and spatial correlations of financial markets. We then present a novel paradigm combining big-data analysis with agent-based modeling. Specifically, from internet query and stock market data, we extract the information driving forces and develop an agent-based model to simulate the dynamic behaviors of complex financial systems.
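
    A flavor of how heterogeneous agent interactions generate market-like statistics can be given in a few lines. The toy herding model below is a generic illustration, not one of the reviewed models, and its parameters are arbitrary rather than determined from empirical data as the record advocates.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate(n_agents=500, steps=20_000, herd=0.7, flip=0.1):
        """Toy herding model (illustrative parameters, not calibrated to data):
        each agent holds a +1 (buy) or -1 (sell) attitude; at each step one
        random agent either imitates another random agent (herding) or flips on
        private noise. Changes in the aggregate demand imbalance act as a proxy
        return series."""
        s = rng.choice([-1, 1], size=n_agents)
        imbalance = np.empty(steps)
        for t in range(steps):
            i = rng.integers(n_agents)
            if rng.random() < herd:
                s[i] = s[rng.integers(n_agents)]  # copy a randomly chosen peer
            elif rng.random() < flip:
                s[i] = -s[i]                      # idiosyncratic switch
            imbalance[t] = s.mean()
        return np.diff(imbalance)                 # proxy returns

    r = simulate()
    kurt = ((r - r.mean()) ** 4).mean() / r.var() ** 2 - 3.0
    print(f"return std: {r.std():.5f}   excess kurtosis: {kurt:.1f}")
    ```

    Even this crude imitation mechanism produces heavy-tailed proxy returns; the models reviewed in the record refine the interaction structure and, crucially, fit the key parameters from market and internet-query data.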

  5. Theoretical study on interaction of cytochrome f and plastocyanin complex by a simple coarse-grained model with molecular crowding effect

    NASA Astrophysics Data System (ADS)

    Nakagawa, Satoshi; Kurniawan, Isman; Kodama, Koichi; Arwansyah, Muhammad Saleh; Kawaguchi, Kazutomo; Nagao, Hidemi

    2018-03-01

    We present a simple coarse-grained model with a molecular crowding effect in the solvent, to investigate the structure and dynamics of protein complexes, including association and/or dissociation processes, and to examine physical properties such as complex structure and reaction rate from the viewpoint of the hydrophobic intermolecular interactions of the complex. In the present coarse-grained model, a function depending upon the density of hydrophobic amino acid residues in the binding area of the complex is introduced, and this function incorporates the molecular crowding effect into the intermolecular interactions of hydrophobic amino acid residues between proteins. We propose a hydrophobic intermolecular potential energy between proteins by using this density-dependent function. The coarse-grained model is applied to the complex of cytochrome f and plastocyanin, using Langevin dynamics simulation to investigate physical properties such as the complex structure and the rate constant of electron transfer from plastocyanin to cytochrome f. We find that, for the electron transfer reaction to proceed, the distance between the metals in the two active sites must be within about 18 Å. We discuss some typical complex structures formed in the present simulations in relation to the molecular crowding effect on hydrophobic interactions.
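
    Langevin dynamics of the kind used here propagates coarse-grained sites with a drift from the systematic force plus fluctuation-dissipation noise. The sketch below is a minimal overdamped (Brownian) integrator in reduced units for two interacting sites; the harmonic pair force stands in for the density-dependent hydrophobic potential, and every parameter, including the reduced analogue of the 18 Å criterion, is an assumption for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Reduced-unit overdamped Langevin dynamics for two coarse-grained sites.
    KT, GAMMA, DT = 1.0, 1.0, 1e-3
    K_PAIR, R0 = 2.0, 1.5   # stand-in interaction strength and rest distance
    R_REACT = 1.8           # reduced analogue of the ~18 A metal-metal criterion

    def step(x1, x2):
        r_vec = x2 - x1
        r = np.linalg.norm(r_vec)
        f2 = -K_PAIR * (r - R0) * r_vec / r   # systematic force on particle 2
        sigma = np.sqrt(2 * KT * DT / GAMMA)  # fluctuation-dissipation noise scale
        x1 += DT / GAMMA * (-f2) + sigma * rng.standard_normal(3)
        x2 += DT / GAMMA * f2 + sigma * rng.standard_normal(3)
        return x1, x2

    x1, x2 = np.zeros(3), np.array([3.0, 0.0, 0.0])
    steps, hits = 100_000, 0
    for _ in range(steps):
        x1, x2 = step(x1, x2)
        hits += np.linalg.norm(x2 - x1) < R_REACT
    print("fraction of time within the reactive distance:", hits / steps)
    ```

    The contact-time fraction measured this way is the kind of quantity from which a reaction rate constant can be estimated once a reactive-distance criterion, such as the 18 Å threshold above, is fixed.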

  6. Simulation of groundwater flow in the glacial aquifer system of northeastern Wisconsin with variable model complexity

    USGS Publications Warehouse

    Juckem, Paul F.; Clark, Brian R.; Feinstein, Daniel T.

    2017-05-04

    The U.S. Geological Survey, National Water-Quality Assessment seeks to map estimated intrinsic susceptibility of the glacial aquifer system of the conterminous United States. Improved understanding of the hydrogeologic characteristics that explain spatial patterns of intrinsic susceptibility, commonly inferred from estimates of groundwater age distributions, is sought so that methods used for the estimation process are properly equipped. An important step beyond identifying relevant hydrogeologic datasets, such as glacial geology maps, is to evaluate how incorporation of these resources into process-based models using differing levels of detail could affect resulting simulations of groundwater age distributions and, thus, estimates of intrinsic susceptibility. This report describes the construction and calibration of three groundwater-flow models of northeastern Wisconsin that were developed with differing levels of complexity to provide a framework for subsequent evaluations of the effects of process-based model complexity on estimations of groundwater age distributions for withdrawal wells and streams. Preliminary assessments, which focused on the effects of model complexity on simulated water levels and base flows in the glacial aquifer system, illustrate that simulation of vertical gradients using multiple model layers improves simulated heads more in low-permeability units than in high-permeability units. Moreover, simulation of heterogeneous hydraulic conductivity fields in coarse-grained and some fine-grained glacial materials produced a larger improvement in simulated water levels in the glacial aquifer system compared with simulation of uniform hydraulic conductivity within zones. The relation between base flows and model complexity was less clear; however, it generally seemed to follow a pattern similar to that of the water levels. Although increased model complexity resulted in improved calibrations, future application of the models using simulated particle tracking is anticipated to evaluate whether these model design considerations are similarly important for the primary modeling objective: to simulate reasonable groundwater age distributions.

  7. Pattern-Based Inverse Modeling for Characterization of Subsurface Flow Models with Complex Geologic Heterogeneity

    NASA Astrophysics Data System (ADS)

    Golmohammadi, A.; Jafarpour, B.; M Khaninezhad, M. R.

    2017-12-01

    Calibration of heterogeneous subsurface flow models leads to ill-posed nonlinear inverse problems, where too many unknown parameters are estimated from limited response measurements. When the underlying parameters form complex (non-Gaussian) structured spatial connectivity patterns, classical variogram-based geostatistical techniques cannot describe the underlying connectivity patterns. Modern pattern-based geostatistical methods that incorporate higher-order spatial statistics are more suitable for describing such complex spatial patterns. Moreover, when the underlying unknown parameters are discrete (geologic facies distribution), conventional model calibration techniques that are designed for continuous parameters cannot be applied directly. In this paper, we introduce a novel pattern-based model calibration method to reconstruct discrete and spatially complex facies distributions from dynamic flow response data. To reproduce complex connectivity patterns during model calibration, we impose a feasibility constraint to ensure that the solution follows the expected higher-order spatial statistics. For model calibration, we adopt a regularized least-squares formulation, involving data mismatch, pattern connectivity, and feasibility constraint terms. Using an alternating directions optimization algorithm, the regularized objective function is divided into a continuous model calibration problem, followed by mapping the solution onto the feasible set. The feasibility constraint to honor the expected spatial statistics is implemented using a supervised machine learning algorithm. The two steps of the model calibration formulation are repeated until the convergence criterion is met. Several numerical examples are used to evaluate the performance of the developed method.
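
    The alternating-directions idea in this record separates a continuous data-mismatch update from a projection onto the set of feasible discrete patterns. The sketch below caricatures that split on a toy linear forward model: a gradient step on the least-squares misfit, followed by thresholding plus median filtering as a crude stand-in for the paper's learned, statistics-honoring projection. Every object here (the sensitivity matrix, the "true" facies image, sizes, and step length) is fabricated for illustration.

    ```python
    import numpy as np
    from scipy.ndimage import median_filter

    rng = np.random.default_rng(3)

    # Toy linear forward model d = G m for a 2-D binary facies image m.
    NX = NY = 24
    m_true = (median_filter(rng.random((NY, NX)), size=7) > 0.5).astype(float)
    G = rng.standard_normal((120, NX * NY)) / NX   # hypothetical sensitivities
    d_obs = G @ m_true.ravel()

    def calibrate(iters=200, step=0.4):
        """Alternating scheme (a rough sketch of the idea, not the paper's
        algorithm): continuous least-squares update, then projection onto
        discrete, spatially connected patterns via threshold + median filter."""
        m = np.full(NX * NY, 0.5)
        for _ in range(iters):
            m -= step * G.T @ (G @ m - d_obs)  # gradient step on ||Gm - d||^2
            m = median_filter((m > 0.5).reshape(NY, NX).astype(float),
                              size=3).ravel()  # feasibility projection stand-in
        return m.reshape(NY, NX)

    m_est = calibrate()
    print("facies mismatch fraction:", np.mean(m_est != m_true).round(3))
    ```

    In the paper's formulation the projection is replaced by a supervised machine-learning step that enforces the expected higher-order spatial statistics, but the two-step structure, continuous update followed by mapping onto the feasible set, is the same.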

  8. Why Bother to Calibrate? Model Consistency and the Value of Prior Information

    NASA Astrophysics Data System (ADS)

    Hrachowitz, Markus; Fovet, Ophelie; Ruiz, Laurent; Euser, Tanja; Gharari, Shervan; Nijzink, Remko; Savenije, Hubert; Gascuel-Odoux, Chantal

    2015-04-01

    Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by 4 calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce 20 hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by using prior information about the system to impose "prior constraints" inferred from expert knowledge, so as to ensure a model that behaves well with respect to the modeller's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model set-up exhibited increased performance in the independent test period and the skill to reproduce all 20 signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if efficiently counter-balanced by available prior constraints, can increase the predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge-driven strategy of constraining models.

  9. Why Bother and Calibrate? Model Consistency and the Value of Prior Information.

    NASA Astrophysics Data System (ADS)

    Hrachowitz, M.; Fovet, O.; Ruiz, L.; Euser, T.; Gharari, S.; Nijzink, R.; Freer, J. E.; Savenije, H.; Gascuel-Odoux, C.

    2014-12-01

    Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by 4 calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce 20 hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by using prior information about the system to impose "prior constraints" inferred from expert knowledge, so as to ensure a model that behaves well with respect to the modeller's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model set-up exhibited increased performance in the independent test period and the skill to reproduce all 20 signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if efficiently counter-balanced by available prior constraints, can increase the predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge-driven strategy of constraining models.

  10. Development of Maps of Simple and Complex Cells in the Primary Visual Cortex

    PubMed Central

    Antolík, Ján; Bednar, James A.

    2011-01-01

    Hubel and Wiesel (1962) classified primary visual cortex (V1) neurons as either simple, with responses modulated by the spatial phase of a sine grating, or complex, i.e., largely phase invariant. Much progress has been made in understanding how simple cells develop, and there are now detailed computational models establishing how they can form topographic maps ordered by orientation preference. There are also models of how complex cells can develop using outputs from simple cells with different phase preferences, but no model of how a topographic orientation map of complex cells could be formed based on the actual connectivity patterns found in V1. Addressing this question is important, because the majority of existing developmental models of simple-cell maps group neurons selective to similar spatial phases together, which is contrary to experimental evidence, and makes it difficult to construct complex cells. Overcoming this limitation is not trivial, because mechanisms responsible for map development drive receptive fields (RFs) of nearby neurons to be highly correlated, while co-oriented RFs of opposite phases are anti-correlated. In this work, we model V1 as two topographically organized sheets representing cortical layers 4 and 2/3. Only layer 4 receives direct thalamic input. Both sheets are connected with narrow feed-forward and feedback connectivity. Only layer 2/3 contains strong long-range lateral connectivity, in line with current anatomical findings. Initially all weights in the model are random, and each is modified via a Hebbian learning rule. The model develops smooth, matching, orientation preference maps in both sheets. Layer 4 units become simple cells, with phase preference arranged randomly, while those in layer 2/3 are primarily complex cells. To our knowledge this model is the first to explain how simple cells can develop with random phase preference, and how maps of complex cells can develop, using only realistic patterns of connectivity. PMID:21559067
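
    The developmental ingredient doing the work here is Hebbian learning with weight normalization. The sketch below shows a generic version of that update for a single unit; the input statistics, learning rate, and linear unit are placeholders rather than the paper's actual architecture.

    ```python
    import numpy as np

    # Normalized Hebbian learning: weights strengthen with correlated
    # pre/post activity and normalization keeps them bounded, so the unit
    # becomes selective for the dominant input pattern.
    rng = np.random.default_rng(1)
    n_in = 64
    pattern = np.sin(np.linspace(0, 2 * np.pi, n_in))  # dominant input feature
    w = rng.random(n_in)
    w /= np.linalg.norm(w)
    eta = 0.01                                         # learning rate (assumed)

    for _ in range(2000):
        x = rng.normal() * pattern + 0.3 * rng.normal(size=n_in)  # noisy stimulus
        y = w @ x                                      # linear unit response
        w += eta * y * x                               # Hebbian co-activity term
        w /= np.linalg.norm(w)                         # divisive normalization

    corr = abs(w @ pattern) / np.linalg.norm(pattern)
    print(f"selectivity for the dominant pattern: {corr:.2f}")  # approaches 1
    ```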

  11. Food-web complexity, meta-community complexity and community stability.

    PubMed

    Mougi, A; Kondoh, M

    2016-04-13

    What allows interacting, diverse species to coexist in nature has been a central question in ecology, ever since the theoretical prediction that a complex community should be inherently unstable. Although the role of spatiality in species coexistence has been recognized, its application to more complex systems has been less explored. Here, using a meta-community food-web model, we show that meta-community complexity, measured by the number of local food webs and their connectedness, elicits a self-regulating, negative-feedback mechanism and thus stabilizes food-web dynamics. Moreover, the presence of meta-community complexity can give rise to a positive food-web complexity-stability effect. Spatiality may play a more important role in stabilizing the dynamics of complex, real food webs than expected from ecological theory based on models of simpler food webs.

  12. Genotypic Complexity of Fisher’s Geometric Model

    PubMed Central

    Hwang, Sungmin; Park, Su-Chan; Krug, Joachim

    2017-01-01

    Fisher’s geometric model was originally introduced to argue that complex adaptations must occur in small steps because of pleiotropic constraints. When supplemented with the assumption of additivity of mutational effects on phenotypic traits, it provides a simple mechanism for the emergence of genotypic epistasis from the nonlinear mapping of phenotypes to fitness. Of particular interest is the occurrence of reciprocal sign epistasis, which is a necessary condition for multipeaked genotypic fitness landscapes. Here we compute the probability that a pair of randomly chosen mutations interacts sign epistatically, which is found to decrease with increasing phenotypic dimension n and to vary nonmonotonically with the distance from the phenotypic optimum. We then derive expressions for the mean number of fitness maxima in genotypic landscapes composed of all combinations of L random mutations. This number increases exponentially with L, and the corresponding growth rate is used as a measure of the complexity of the landscape. The dependence of the complexity on the model parameters is found to be surprisingly rich, and three distinct phases characterized by different landscape structures are identified. Our analysis shows that the phenotypic dimension, which is often referred to as phenotypic complexity, does not generally correlate with the complexity of fitness landscapes and that even organisms with a single phenotypic trait can have complex landscapes. Our results further inform the interpretation of experiments where the parameters of Fisher’s model have been inferred from data, and help to elucidate which features of empirical fitness landscapes can be described by this model. PMID:28450460

  13. GPU-accelerated depth map generation for X-ray simulations of complex CAD geometries

    NASA Astrophysics Data System (ADS)

    Grandin, Robert J.; Young, Gavin; Holland, Stephen D.; Krishnamurthy, Adarsh

    2018-04-01

    Interactive x-ray simulations of complex computer-aided design (CAD) models can provide valuable insights for better interpretation of defect signatures, such as porosity, in x-ray CT images. Generating the depth map along a particular direction for a given CAD geometry is the most compute-intensive step in x-ray simulations. We have developed a GPU-accelerated method for real-time generation of depth maps of complex CAD geometries. We preprocess complex components designed using commercial CAD systems using a custom CAD module and convert them into a fine user-defined surface tessellation. Our CAD module can be used by different simulators as well as handle complex geometries, including those that arise from complex castings and composite structures. We then make use of a parallel algorithm that runs on a graphics processing unit (GPU) to convert the finely-tessellated CAD model to a voxelized representation. The voxelized representation can enable heterogeneous modeling of the volume enclosed by the CAD model by assigning heterogeneous material properties in specific regions. The depth maps are generated from this voxelized representation with the help of a GPU-accelerated ray-casting algorithm. The GPU-accelerated ray-casting method enables interactive (>60 frames per second) generation of the depth maps of complex CAD geometries. This enables arbitrary rotation and slicing of the CAD model, leading to better interpretation of the x-ray images by the user. In addition, the depth maps can be used directly to aid CT reconstruction algorithms.
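
    For an axis-aligned view direction, the depth-map computation reduces to summing occupied voxels along each ray. The sketch below illustrates this on an invented sphere-with-pore geometry; the GPU version instead ray-casts arbitrary directions over a tessellation-derived voxel grid.

    ```python
    import numpy as np

    # Depth map from a voxelized part: each ray's value is the material
    # path length it traverses. Geometry and sizes are illustrative only.
    nx = ny = nz = 128
    dz = 2.0 / nz  # voxel edge length in model units
    x, y, z = np.mgrid[-1:1:nx*1j, -1:1:ny*1j, -1:1:nz*1j]
    solid = (x**2 + y**2 + z**2) < 0.8**2                  # solid sphere
    solid &= ~(((x - 0.2)**2 + y**2 + z**2) < 0.1**2)      # subtract a pore

    # Ray casting along +z degenerates to a per-column sum of occupied voxels
    depth_map = solid.sum(axis=2) * dz

    print("max material path length:", depth_map.max())
    ```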

  14. A Spatially Continuous Model of Carbohydrate Digestion and Transport Processes in the Colon

    PubMed Central

    Moorthy, Arun S.; Brooks, Stephen P. J.; Kalmokoff, Martin; Eberl, Hermann J.

    2015-01-01

    A spatially continuous mathematical model of transport processes, anaerobic digestion and microbial complexity as would be expected in the human colon is presented. The model is a system of first-order partial differential equations with a context-determined number of dependent variables and stiff, non-linear source terms. Numerical simulation of the model is used to elucidate information about the colon-microbiota complex. It is found that the composition of material at the model outflow does not describe well the composition of material at other locations in the model, and inferences drawn from outflow data vary according to the model's reactor representation. Additionally, increased microbial complexity allows the total microbial community to withstand major system perturbations in diet and community structure. However, the distribution of strains and functional groups within the microbial community can be modified depending on perturbation length and microbial kinetic parameters. Preliminary model extensions and potential investigative opportunities using the computational model are discussed. PMID:26680208
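
    The model class (first-order transport with stiff, nonlinear uptake) can be illustrated with a single-substrate toy version, discretized by first-order upwind differences and integrated with a stiff solver; all parameter values below are invented for illustration.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Toy transport-digestion PDE: u_t + v u_x = -vmax * u / (K + u),
    # solved by the method of lines with a stiff (BDF) integrator.
    L, n = 1.5, 150                 # domain length (m, assumed) and grid cells
    dx = L / n
    v = 1e-4                        # bulk flow velocity (m/s, assumed)
    vmax, K = 5e-4, 0.2             # Monod-type uptake parameters (assumed)
    u_in = 1.0                      # substrate concentration at the inlet

    def rhs(t, u):
        upwind = np.empty_like(u)
        upwind[0] = (u[0] - u_in) / dx           # inflow boundary condition
        upwind[1:] = (u[1:] - u[:-1]) / dx       # first-order upwind derivative
        return -v * upwind - vmax * u / (K + u)  # transport + nonlinear uptake

    sol = solve_ivp(rhs, (0.0, 2e4), np.zeros(n), method="BDF")
    print("outflow concentration:", sol.y[-1, -1])
    ```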

  15. Sensitivity of Precipitation in Coupled Land-Atmosphere Models

    NASA Technical Reports Server (NTRS)

    Neelin, David; Zeng, N.; Suarez, M.; Koster, R.

    2004-01-01

    The project objective was to understand mechanisms by which atmosphere-land-ocean processes impact precipitation in the mean climate and interannual variations, focusing on tropical and subtropical regions. A combination of modeling tools was used: an intermediate complexity land-atmosphere model developed at UCLA known as the QTCM and the NASA Seasonal-to-Interannual Prediction Program general circulation model (NSIPP GCM). The intermediate complexity model was used to develop hypotheses regarding the physical mechanisms and theory for the interplay of large-scale dynamics, convective heating, cloud radiative effects and land surface feedbacks. The theoretical developments were to be confronted with diagnostics from the more complex GCM to validate or modify the theory.

  16. Ocean Hydrodynamics Numerical Model in Curvilinear Coordinates for Simulating Circulation of the Global Ocean and its Separate Basins.

    NASA Astrophysics Data System (ADS)

    Gusev, Anatoly; Diansky, Nikolay; Zalesny, Vladimir

    2010-05-01

    An original software package for the ocean circulation sigma-model developed at the Institute of Numerical Mathematics (INM), Russian Academy of Sciences (RAS) is presented. The package can be used in various curvilinear orthogonal coordinate systems. In addition to the ocean circulation model, it contains a sea-ice dynamics and thermodynamics model, as well as an original system for implementing atmospheric forcing from both prescribed meteorological data and atmospheric model output. The package can serve as the oceanic component of an Earth climate model and can be applied to scientific and practical problems concerning the World Ocean and its individual basins and seas. It runs efficiently on parallel shared-memory computational systems and on contemporary personal computers. On the basis of this package, an ocean general circulation model (OGCM) was developed. The model is realized in a curvilinear orthogonal coordinate system obtained by a conformal transformation of the standard geographical grid, which allowed us to locate the system singularities outside the integration domain. The horizontal resolution of the OGCM is 1 degree in longitude and 0.5 degree in latitude, with 40 non-uniform sigma-levels in depth. The model was integrated for 100 years starting from the Levitus January climatology, using a realistic atmospheric annual cycle calculated from the CORE datasets. The results showed that the model adequately reproduces the basic characteristics of large-scale World Ocean dynamics, in good agreement with both observational data and the results of the best climatic OGCMs. This OGCM is used as the oceanic component of the new version of the climate system model (CSM) developed at INM RAS, which is now ready for new numerical experiments on modelling climate and climate change under IPCC (Intergovernmental Panel on Climate Change) scenarios within the scope of CMIP-5 (Coupled Model Intercomparison Project). The package was also used to build an eddy-resolving model of Pacific Ocean circulation, with an integration domain covering the Pacific from the Equator to the Bering Strait, a horizontal resolution of 0.125 degree, and 20 non-uniform sigma-levels in depth. The model adequately reproduces the large-scale structure of the circulation and its variability, including Kuroshio meandering, synoptic ocean eddies, and frontal zones, and reveals the high variability of the Kuroshio. The distribution of a contaminant assumed to be discharged near Petropavlovsk-Kamchatsky was also simulated; the results show the structure of the contaminant distribution and provide insight into the processes forming hydrological fields in the Northwest Pacific.

  17. Complexity growth in minimal massive 3D gravity

    NASA Astrophysics Data System (ADS)

    Qaemmaqami, Mohammad M.

    2018-01-01

    We study complexity growth using the "complexity = action" (CA) proposal in the minimal massive 3D gravity (MMG) model, which was proposed for resolving the bulk-boundary clash problem of topologically massive gravity (TMG). We observe that the rate of complexity growth for the Banados-Teitelboim-Zanelli (BTZ) black hole saturates the proposed bound set by the physical mass of the BTZ black hole in the MMG model, when the angular momentum parameter and the inner horizon of the black hole go to zero.

  18. [Analysis of a three-dimensional finite element model of atlas and axis complex fracture].

    PubMed

    Tang, X M; Liu, C; Huang, K; Zhu, G T; Sun, H L; Dai, J; Tian, J W

    2018-05-22

    Objective: To explore the clinical application of a three-dimensional finite element model of atlantoaxial complex fracture. Methods: A three-dimensional finite element model of the cervical spine (FEM/intact) was established with the software Abaqus 6.12. On the basis of this model, three-dimensional finite element models of four types of atlantoaxial complex fracture were established (FEM/fracture): C1 fracture (Jefferson) + type II odontoid fracture, Jefferson + type III odontoid fracture, Jefferson + Hangman fracture, and Jefferson + stable C2 fracture. The range of motion (ROM) under flexion, extension, lateral bending, and axial rotation was measured and compared with that of the intact cervical spine model. Results: The three-dimensional finite element models of the four types of atlantoaxial complex fracture had similar appearance and profile, and the ROM of the different segments changed differently. Compared with the normal model, the ROM of C0/1 and C1/2 in the C1 fracture combined with type II odontoid fracture model in flexion/extension, lateral bending, and rotation increased by 57.45%, 29.34%, 48.09% and 95.49%, 88.52%, 36.71%, respectively. The ROM of C0/1 and C1/2 in the C1 fracture combined with type III odontoid fracture model increased by 47.01%, 27.30%, 45.31% and 90.38%, 27.30%, 30.0%, respectively. The ROM of C0/1 and C1/2 in the C1 fracture combined with Hangman fracture model increased by 32.68%, 79.34%, 77.62% and 60.53%, 81.20%, 21.48%, respectively. The ROM of C0/1 and C1/2 in the C1 fracture combined with stable axis fracture model increased by 15.00%, 29.30%, 8.47% and 37.87%, 75.57%, 8.30%, respectively. Conclusions: The three-dimensional finite element model can be used to simulate the biomechanics of atlantoaxial complex fractures. The ROM of the atlantoaxial complex fracture models is larger than that of the normal model, which indicates that surgical treatment should be performed.

  19. [Design of Complex Cavity Structure in Air Route System of Automated Peritoneal Dialysis Machine].

    PubMed

    Quan, Xiaoliang

    2017-07-30

    This paper addresses the lack of technical studies on the structural design of the complex cavities in the air route system of the automated peritoneal dialysis (APD) machine. To study the flow characteristics of this special structure, the ANSYS CFX software was applied with the k-ε turbulence model as the theoretical basis from fluid mechanics. After the complex structure model is imported into the ANSYS CFX module, numerical simulation yields the flow field inside the model. The results present the distribution of the flow field inside the complex cavities and the flow characteristic parameters, which provide an important design reference for the APD machine.

  20. Primary Care Physician Insights Into a Typology of the Complex Patient in Primary Care

    PubMed Central

    Loeb, Danielle F.; Binswanger, Ingrid A.; Candrian, Carey; Bayliss, Elizabeth A.

    2015-01-01

    PURPOSE Primary care physicians play unique roles caring for complex patients, often acting as the hub for their care and coordinating care among specialists. To inform the clinical application of new models of care for complex patients, we sought to understand how these physicians conceptualize patient complexity and to develop a corresponding typology. METHODS We conducted qualitative in-depth interviews with internal medicine primary care physicians from 5 clinics associated with a university hospital and a community health hospital. We used systematic nonprobabilistic sampling to achieve an even distribution of sex, years in practice, and type of practice. The interviews were analyzed using a team-based participatory general inductive approach. RESULTS The 15 physicians in this study endorsed a multidimensional concept of patient complexity. The physicians perceived patients to be complex if they had an exacerbating factor—a medical illness, mental illness, socioeconomic challenge, or behavior or trait (or some combination thereof)—that complicated care for chronic medical illnesses. CONCLUSION This perspective of primary care physicians caring for complex patients can help refine models of complexity to design interventions or models of care that improve outcomes for these patients. PMID:26371266

  1. Primary care physician insights into a typology of the complex patient in primary care.

    PubMed

    Loeb, Danielle F; Binswanger, Ingrid A; Candrian, Carey; Bayliss, Elizabeth A

    2015-09-01

    Primary care physicians play unique roles caring for complex patients, often acting as the hub for their care and coordinating care among specialists. To inform the clinical application of new models of care for complex patients, we sought to understand how these physicians conceptualize patient complexity and to develop a corresponding typology. We conducted qualitative in-depth interviews with internal medicine primary care physicians from 5 clinics associated with a university hospital and a community health hospital. We used systematic nonprobabilistic sampling to achieve an even distribution of sex, years in practice, and type of practice. The interviews were analyzed using a team-based participatory general inductive approach. The 15 physicians in this study endorsed a multidimensional concept of patient complexity. The physicians perceived patients to be complex if they had an exacerbating factor (a medical illness, mental illness, socioeconomic challenge, or behavior or trait, or some combination thereof) that complicated care for chronic medical illnesses. This perspective of primary care physicians caring for complex patients can help refine models of complexity to design interventions or models of care that improve outcomes for these patients. © 2015 Annals of Family Medicine, Inc.

  2. Scope Complexity Options Risks Excursions (SCORE) Version 3.0 Mathematical Description.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gearhart, Jared Lee; Samberson, Jonell Nicole; Shettigar, Subhasini

    The purpose of the Scope, Complexity, Options, Risks, Excursions (SCORE) model is to estimate the relative complexity of design variants of future warhead options. The results of this model allow those considering these options to understand the complexity tradeoffs between proposed warhead options. The core idea of SCORE is to divide a warhead option into a well-defined set of scope elements and then estimate the complexity of each scope element against a well-understood reference system. The uncertainty associated with estimates can also be captured. A weighted summation of the relative complexity of each scope element is used to determine the total complexity of the proposed warhead option or portions of the warhead option (i.e., a National Work Breakdown Structure code). The SCORE analysis process is a growing multi-organizational Nuclear Security Enterprise (NSE) effort, under the management of the NA-12-led Enterprise Modeling and Analysis Consortium (EMAC), that has provided the data elicitation, integration, and computation needed to support the out-year Life Extension Program (LEP) cost estimates included in the Stockpile Stewardship Management Plan (SSMP).
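
    The aggregation step itself is a weighted sum, as in the following sketch; the scope elements, weights, and complexity values are invented placeholders, not actual SCORE data.

    ```python
    # Each scope element gets a complexity estimate relative to a reference
    # system; the option's total complexity is the weighted sum.
    scope_elements = {
        # element: (relative complexity vs. reference, weight)
        "element_a": (1.4, 0.5),
        "element_b": (0.9, 0.3),
        "element_c": (1.1, 0.2),
    }

    total = sum(c * w for c, w in scope_elements.values())
    print(f"total relative complexity: {total:.2f}")
    ```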

  3. Comparative density functional study of the complexes [UO2(CO3)3]4- and [(UO2)3(CO3)6]6- in aqueous solution.

    PubMed

    Schlosser, Florian; Moskaleva, Lyudmila V; Kremleva, Alena; Krüger, Sven; Rösch, Notker

    2010-06-28

    With a relativistic all-electron density functional method, we studied two anionic uranium(VI) carbonate complexes that are important for uranium speciation and transport in aqueous media: the mononuclear tris(carbonato) complex [UO2(CO3)3]4- and the trinuclear hexa(carbonato) complex [(UO2)3(CO3)6]6-. Focusing on the structures in solution, we applied for the first time a full solvation treatment to these complexes. We approximated short-range effects by explicit aqua ligands and described long-range electrostatic interactions via a polarizable continuum model. Structures and vibrational frequencies of "gas-phase" models with explicit aqua ligands agree best with experiment. This is accidental because the continuum model of the solvent to some extent overestimates the electrostatic interactions of these highly anionic systems with the bulk solvent. The calculated free energy change when three mononuclear complexes associate to the trinuclear complex agrees well with experiment and supports the formation of the latter species upon acidification of a uranyl carbonate solution.

  4. Unsilencing Critical Conversations in Social-Studies Teacher Education Using Agent-Based Modeling

    ERIC Educational Resources Information Center

    Hostetler, Andrew; Sengupta, Pratim; Hollett, Ty

    2018-01-01

    In this article, we argue that when complex sociopolitical issues such as ethnocentrism and racial segregation are represented as complex, emergent systems using agent-based computational models (in short agent-based models or ABMs), discourse about these representations can disrupt social studies teacher candidates' dispositions of teaching…

  5. The Relationship between Students' Performance on Conventional Standardized Mathematics Assessments and Complex Mathematical Modeling Problems

    ERIC Educational Resources Information Center

    Kartal, Ozgul; Dunya, Beyza Aksu; Diefes-Dux, Heidi A.; Zawojewski, Judith S.

    2016-01-01

    Critical to many science, technology, engineering, and mathematics (STEM) career paths is mathematical modeling--specifically, the creation and adaptation of mathematical models to solve problems in complex settings. Conventional standardized measures of mathematics achievement are not structured to directly assess this type of mathematical…

  6. From Complex to Simple: Interdisciplinary Stochastic Models

    ERIC Educational Resources Information Center

    Mazilu, D. A.; Zamora, G.; Mazilu, I.

    2012-01-01

    We present two simple, one-dimensional, stochastic models that lead to a qualitative understanding of very complex systems from biology, nanoscience and social sciences. The first model explains the complicated dynamics of microtubules, stochastic cellular highways. Using the theory of random walks in one dimension, we find analytical expressions…

  7. Simulator validation results and proposed reporting format from flight testing a software model of a complex, high-performance airplane.

    DOT National Transportation Integrated Search

    2008-01-01

    Computer simulations are often used in aviation studies. These simulation tools may require complex, high-fidelity aircraft models. Since many of the flight models used are third-party developed products, independent validation is desired prior to im...

  8. Complex Instruction: A Model for Reaching Up--and Out

    ERIC Educational Resources Information Center

    Tomlinson, Carol Ann

    2018-01-01

    Complex Instruction is a multifaceted instructional model designed to provide highly challenging learning opportunities for students in heterogeneous classrooms. The model provides a rationale for and philosophy of creating equity of access to excellent curriculum and instruction for a broad range of learners, guidance for preparing students for…

  9. A scalable plant-resolving radiative transfer model based on optimized GPU ray tracing

    USDA-ARS?s Scientific Manuscript database

    A new model for radiative transfer in participating media and its application to complex plant canopies is presented. The goal was to be able to efficiently solve complex canopy-scale radiative transfer problems while also representing sub-plant heterogeneity. In the model, individual leaf surfaces ...

  10. MASS BALANCE MODELLING OF PCBS IN THE FOX RIVER/GREEN BAY COMPLEX

    EPA Science Inventory

    The USEPA Office of Research and Development developed and applies a multimedia, mass balance modeling approach to the Fox River/Green Bay complex to aid managers with remedial decision-making. The suite of models were applied to PCBs due to the long history of contamination and ...

  11. Near-Surface Wind Predictions in Complex Terrain with a CFD Approach Optimized for Atmospheric Boundary Layer Flows

    NASA Astrophysics Data System (ADS)

    Wagenbrenner, N. S.; Forthofer, J.; Butler, B.; Shannon, K.

    2014-12-01

    Near-surface wind predictions are important for a number of applications, including transport and dispersion, wind energy forecasting, and wildfire behavior. Researchers and forecasters would benefit from a wind model that could be readily applied to complex terrain for use in these various disciplines. Unfortunately, near-surface winds in complex terrain are not handled well by traditional modeling approaches. Numerical weather prediction models employ coarse horizontal resolutions which do not adequately resolve sub-grid terrain features important to the surface flow. Computational fluid dynamics (CFD) models are increasingly being applied to simulate atmospheric boundary layer (ABL) flows, especially in wind energy applications; however, the standard functionality provided in commercial CFD models is not suitable for ABL flows. Appropriate CFD modeling in the ABL requires modification of empirically derived wall function parameters and boundary conditions to avoid erroneous streamwise gradients due to inconsistencies between inlet profiles and specified boundary conditions. This work presents a new version of a near-surface wind model for complex terrain called WindNinja. The new version of WindNinja offers two options for flow simulations: 1) the native, fast-running mass-consistent method available in previous model versions and 2) a CFD approach based on the OpenFOAM modeling framework and optimized for ABL flows. The model is described and evaluations of predictions with surface wind data collected from two recent field campaigns in complex terrain are presented. A comparison of predictions from the native mass-consistent method and the new CFD method is also provided.

  12. Effects of model complexity and priors on estimation using sequential importance sampling/resampling for species conservation

    USGS Publications Warehouse

    Dunham, Kylee; Grand, James B.

    2016-01-01

    We examined the effects of complexity and priors on the accuracy of models used to estimate ecological and observational processes, and to make predictions regarding population size and structure. State-space models are useful for estimating complex, unobservable population processes and making predictions about future populations based on limited data. To better understand the utility of state-space models in evaluating population dynamics, we used them in a Bayesian framework and compared the accuracy of models with differing complexity, with and without informative priors, using sequential importance sampling/resampling (SISR). Count data were simulated for 25 years using known parameters and observation processes for each model. We used kernel smoothing to reduce the effect of particle depletion, which is common when estimating both states and parameters with SISR. Models using informative priors estimated parameter values and population size with greater accuracy than their non-informative counterparts. While the estimates of population size and trend did not suffer greatly in models using non-informative priors, the algorithm was unable to accurately estimate demographic parameters. This model framework provides reasonable estimates of population size when little to no information is available; however, when information on some vital rates is available, SISR can be used to obtain more precise estimates of population size and process. Incorporating model complexity such as that required by structured populations with stage-specific vital rates affects precision and accuracy when estimating latent population variables and predicting population dynamics. These results are important to consider when designing monitoring programs and conservation efforts requiring management of specific population segments.
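
    A minimal sketch of the estimation machinery (bootstrap SISR with a kernel-style jitter on a static parameter to limit particle depletion) is shown below for a toy stochastic growth model with Poisson observations; the model, prior, and values are illustrative, not those of the study.

    ```python
    import numpy as np

    # Bootstrap particle filter: propagate, weight by the likelihood,
    # resample, and jitter the static parameter (kernel smoothing).
    rng = np.random.default_rng(2)
    T, n_part = 25, 5000
    lam_true, obs, N = 1.05, [], 100.0
    for _ in range(T):                            # simulate 25 years of counts
        N *= lam_true * np.exp(rng.normal(0, 0.05))
        obs.append(rng.poisson(N))

    lam = np.clip(rng.normal(1.0, 0.2, n_part), 0.5, 1.5)  # prior on growth rate
    Np = np.full(n_part, 100.0)                   # state particles
    for y in obs:
        Np = Np * lam * np.exp(rng.normal(0, 0.05, n_part))  # propagate states
        logw = y * np.log(Np) - Np                # Poisson log-likelihood (unnormalized)
        w = np.exp(logw - logw.max()); w /= w.sum()
        idx = rng.choice(n_part, n_part, p=w)     # resample
        Np, lam = Np[idx], lam[idx]
        lam += rng.normal(0, 0.005, n_part)       # kernel-smoothing jitter

    print("posterior mean growth rate:", lam.mean())
    ```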

  13. Persistent model order reduction for complex dynamical systems using smooth orthogonal decomposition

    NASA Astrophysics Data System (ADS)

    Ilbeigi, Shahab; Chelidze, David

    2017-11-01

    Full-scale complex dynamic models are not effective for parametric studies due to the inherent constraints on available computational power and storage resources. A persistent reduced order model (ROM) that is robust, stable, and provides high-fidelity simulations for a relatively wide range of parameters and operating conditions can provide a solution to this problem. The fidelity of a new framework for persistent model order reduction of large and complex dynamical systems is investigated. The framework is validated using several numerical examples including a large linear system and two complex nonlinear systems with material and geometrical nonlinearities. While the framework is used for identifying the robust subspaces obtained from both proper and smooth orthogonal decompositions (POD and SOD, respectively), the results show that SOD outperforms POD in terms of stability, accuracy, and robustness.
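
    The POD half of the comparison can be sketched in a few lines: collect state snapshots, compute their SVD, and Galerkin-project the dynamics onto the leading modes (SOD instead solves a generalized eigenproblem that also uses velocity snapshots). The linear system below is a stand-in, not one of the paper's test cases.

    ```python
    import numpy as np

    # Projection-based model order reduction with POD on a stable linear system.
    rng = np.random.default_rng(3)
    n, m, r = 400, 200, 5                   # full dimension, snapshots, ROM size
    A = -np.eye(n) + 0.01 * rng.normal(size=(n, n))  # stand-in dynamics
    X = np.zeros((n, m))                    # snapshot matrix (columns = states)
    x = rng.normal(size=n)
    for j in range(m):                      # explicit Euler trajectory
        x = x + 0.01 * (A @ x)
        X[:, j] = x

    U, s, _ = np.linalg.svd(X, full_matrices=False)
    Phi = U[:, :r]                          # r dominant POD modes
    A_r = Phi.T @ A @ Phi                   # reduced operator (Galerkin projection)
    print("retained snapshot energy:", (s[:r]**2).sum() / (s**2).sum())
    ```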

  14. Clinical application of three-dimensional printing to the management of complex univentricular hearts with abnormal systemic or pulmonary venous drainage.

    PubMed

    McGovern, Eimear; Kelleher, Eoin; Snow, Aisling; Walsh, Kevin; Gadallah, Bassem; Kutty, Shelby; Redmond, John M; McMahon, Colin J

    2017-09-01

    In recent years, three-dimensional printing has demonstrated reliable reproducibility of several organs including hearts with complex congenital cardiac anomalies. This represents the next step in advanced image processing and can be used to plan surgical repair. In this study, we describe three children with complex univentricular hearts and abnormal systemic or pulmonary venous drainage, in whom three-dimensional printed models based on CT data assisted with preoperative planning. For two children, after group discussion and examination of the models, a decision was made not to proceed with surgery. We extend the current clinical experience with three-dimensional printed modelling and discuss the benefits of such models in the setting of managing complex surgical problems in children with univentricular circulation and abnormal systemic or pulmonary venous drainage.

  15. A technique for evaluating black-footed ferret habitat

    USGS Publications Warehouse

    Biggins, Dean E.; Miller, Brian J.; Hanebury, Louis R.; Oakleaf, Bob; Farmer, Adrian H.; Crete, Ron; Dood, Arnold

    1993-01-01

    In this paper, we provide a model and step-by-step procedures for rating a prairie dog (Cynomys sp.) complex for the reintroduction of black-footed ferrets (Mustela nigripes). An important factor in the model is an estimate of the number of black-footed ferret families a prairie dog complex can support for a year; thus, the procedures prescribe how to estimate the size of a prairie dog complex and the density of prairie dogs. Other attributes of the model are qualitative: arrangement of colonies, potential for plague and canine distemper, potential for prairie dog expansion, abundance of predators, future resource conflicts and ownership stability, and public and landowner attitudes about prairie dogs and black-footed ferrets. Because of the qualitative attributes in the model, a team approach is recommended for ranking complexes of prairie dogs for black-footed ferret reintroduction.

  16. Complex Systems Models and Their Applications: Towards a New Science of Verification, Validation & Uncertainty Quantification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsao, Jeffrey Y.; Trucano, Timothy G.; Kleban, Stephen D.

    This report contains the written footprint of a Sandia-hosted workshop held in Albuquerque, New Mexico, June 22-23, 2016 on “Complex Systems Models and Their Applications: Towards a New Science of Verification, Validation and Uncertainty Quantification,” as well as of pre-work that fed into the workshop. The workshop’s intent was to explore and begin articulating research opportunities at the intersection between two important Sandia communities: the complex systems (CS) modeling community, and the verification, validation and uncertainty quantification (VVUQ) community. The overarching research opportunity (and challenge) that we ultimately hope to address is: how can we quantify the credibility of knowledge gained from complex systems models, knowledge that is often incomplete and interim, but will nonetheless be used, sometimes in real-time, by decision makers?

  17. The Use of Complex Adaptive Systems as a Generative Metaphor in an Action Research Study of an Organisation

    ERIC Educational Resources Information Center

    Brown, Callum

    2008-01-01

    Understanding the dynamic behaviour of organisations is challenging and this study uses a model of complex adaptive systems as a generative metaphor to address this challenge. The research question addressed is: How might a conceptual model of complex adaptive systems be used to assist in understanding the dynamic nature of organisations? Using an…

  18. Cognitive Task Complexity and Written Output in Italian and French as a Foreign Language

    ERIC Educational Resources Information Center

    Kuiken, Folkert; Vedder, Ineke

    2008-01-01

    This paper reports on a study on the relationship between cognitive task complexity and linguistic performance in L2 writing. In the study, two models proposed to explain the influence of cognitive task complexity on linguistic performance in L2 are tested and compared: Skehan and Foster's Limited Attentional Capacity Model (Skehan, 1998; Skehan…

  19. Development of a One-Equation Eddy Viscosity Turbulence Model for Application to Complex Turbulent Flows

    NASA Astrophysics Data System (ADS)

    Wray, Timothy J.

    Computational fluid dynamics (CFD) is routinely used in performance prediction and design of aircraft, turbomachinery, automobiles, and in many other industrial applications. Despite its wide range of use, deficiencies in its prediction accuracy still exist. One critical weakness is the accurate simulation of complex turbulent flows using the Reynolds-Averaged Navier-Stokes equations in conjunction with a turbulence model. The goal of this research has been to develop an eddy viscosity type turbulence model to increase the accuracy of flow simulations for mildly separated flows, flows with rotation and curvature effects, and flows with surface roughness. It is accomplished by developing a new zonal one-equation turbulence model which relies heavily on the flow physics; it is now known in the literature as the Wray-Agarwal one-equation turbulence model. The effectiveness of the new model is demonstrated by comparing its results with those obtained by the industry-standard one-equation Spalart-Allmaras model and the two-equation Shear Stress Transport (SST) k-ω model, and with experimental data. Results for subsonic, transonic, and supersonic flows in and about complex geometries are presented. It is demonstrated that the Wray-Agarwal model can provide the industry and CFD researchers an accurate, efficient, and reliable turbulence model for the computation of a large class of complex turbulent flows.

  20. Unexpected Results are Usually Wrong, but Often Interesting

    NASA Astrophysics Data System (ADS)

    Huber, M.

    2014-12-01

    In climate modeling, an unexpected result is usually wrong, arising from some sort of mistake. Despite the fact that we all bemoan uncertainty in climate, the field is underlain by a robust, successful body of theory, and any properly conducted modeling experiment is posed and conducted within that context. Consequently, if results from a complex climate model disagree with theory or with expectations from simpler models, much skepticism is in order. But this exposes the fundamental tension of using complex, sophisticated models. If simple models and theory were perfect there would be no reason for complex models--the entire point of sophisticated models is to see if unexpected phenomena arise as emergent properties of the system. In this talk, I will step through some paleoclimate examples, drawn from my own work, of unexpected results that emerge from complex climate models arising from mistakes of two kinds. The first kind of mistake is what I call a 'smart mistake': an intentional incorporation of assumptions, boundary conditions, or physics that is in violation of theoretical or observational constraints. The second, a 'dumb mistake', is just that, an unintentional violation. Analysis of such mistaken simulations provides some potentially novel and certainly interesting insights into what is possible and right in paleoclimate modeling by forcing the reexamination of well-held assumptions and theories.

  1. Generalized model of electromigration with 1:1 (analyte:selector) complexation stoichiometry: part I. Theory.

    PubMed

    Dubský, Pavel; Müllerová, Ludmila; Dvořák, Martin; Gaš, Bohuslav

    2015-03-06

    The model of electromigration of a multivalent weak acidic/basic/amphoteric analyte that undergoes complexation with a mixture of selectors is introduced. The model provides an extension of the series of models starting with the single-selector model without dissociation by Wren and Rowe in 1992, continuing with the monovalent weak analyte/single-selector model by Rawjee, Williams and Vigh in 1993 and that by Lelièvre in 1994, and ending with the multi-selector overall model without dissociation developed by our group in 2008. The new multivalent analyte multi-selector model shows that the effective mobility of the analyte obeys the original Wren and Rowe formula. The overall complexation constant, mobility of the free analyte and mobility of the complex can be measured and used in a standard way. The mathematical expressions for the overall parameters are provided. We further demonstrate mathematically that the pH-dependent parameters for weak analytes can be simply used as an input into the multi-selector overall model and, in reverse, the multi-selector overall parameters can serve as an input into the pH-dependent models for the weak analytes. These findings can greatly simplify rational method development in analytical electrophoresis, specifically enantioseparations. Copyright © 2015 Elsevier B.V. All rights reserved.
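
    For reference, the single-selector 1:1 binding formula in question, which the overall multi-selector parameters are shown to obey as well, is mu_eff = (mu_free + mu_cplx·K·c)/(1 + K·c); the numbers in the sketch below are illustrative only, not values from the paper.

    ```python
    import numpy as np

    # Effective mobility of an analyte under 1:1 complexation as a function
    # of selector concentration c (Wren-Rowe-type binding isotherm).
    mu_free, mu_cplx = 20.0, 5.0   # mobilities of free analyte and complex (assumed units)
    K = 300.0                      # overall complexation constant (1/M, assumed)

    c = np.linspace(0.0, 0.02, 5)  # selector concentrations (M)
    mu_eff = (mu_free + mu_cplx * K * c) / (1 + K * c)
    for ci, mi in zip(c, mu_eff):
        print(f"c = {ci:.3f} M -> mu_eff = {mi:5.2f}")
    ```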

  2. Numerical modeling of Gaussian beam propagation and diffraction in inhomogeneous media based on the complex eikonal equation

    NASA Astrophysics Data System (ADS)

    Huang, Xingguo; Sun, Hui

    2018-05-01

    The Gaussian beam method is an important complex geometrical-optics technique for modeling seismic wave propagation and diffraction in subsurface regions with complex geological structure. Current methods for Gaussian beam modeling rely on dynamic ray tracing and evanescent-wave tracking. However, dynamic ray tracing is based on the paraxial ray approximation, and evanescent-wave tracking cannot describe strongly evanescent fields. This leads to inaccurate computed wave fields in regions of strong medium inhomogeneity. To address this problem, we compute Gaussian beam wave fields using the complex phase obtained by directly solving the complex eikonal equation. In this method, the fast marching method, which is widely used for phase calculation, is combined with a Gauss-Newton optimization algorithm to obtain the complex phase at the regular grid points. The main theoretical challenge in combining this method with Gaussian beam modeling is to handle the irregular boundary near the curved central ray. To cope with this challenge, we present a non-uniform finite-difference operator and a modified fast marching method. The numerical results confirm the proposed approach.
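
    For intuition, the sketch below solves the ordinary (real-valued) eikonal equation |grad T| = s on a regular grid with a Gauss-Seidel fast-sweeping scheme, a simpler relative of the fast marching method the paper builds on; the complex-phase and Gauss-Newton machinery of the actual method is not attempted here.

    ```python
    import numpy as np

    # Upwind eikonal solver |grad T| = s via fast sweeping on a 2D grid.
    n, h = 101, 0.01
    s = np.ones((n, n))              # slowness field (homogeneous, illustrative)
    T = np.full((n, n), np.inf)
    T[n // 2, n // 2] = 0.0          # point source at the grid center

    for _ in range(4):               # a few passes of the four sweep orderings
        for di in (1, -1):
            for dj in (1, -1):
                for i in range(n)[::di]:
                    for j in range(n)[::dj]:
                        a = min(T[max(i-1, 0), j], T[min(i+1, n-1), j])
                        b = min(T[i, max(j-1, 0)], T[i, min(j+1, n-1)])
                        f = h * s[i, j]
                        if abs(a - b) >= f:      # causal one-sided update
                            t_new = min(a, b) + f
                        else:                    # two-sided quadratic update
                            t_new = 0.5 * (a + b + np.sqrt(2*f*f - (a - b)**2))
                        T[i, j] = min(T[i, j], t_new)

    print("traveltime at corner:", T[0, 0])      # ~ distance times slowness
    ```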

  3. The Limitations of Model-Based Experimental Design and Parameter Estimation in Sloppy Systems.

    PubMed

    White, Andrew; Tolman, Malachi; Thames, Howard D; Withers, Hubert Rodney; Mason, Kathy A; Transtrum, Mark K

    2016-12-01

    We explore the relationship among experimental design, parameter estimation, and systematic error in sloppy models. We show that the approximate nature of mathematical models poses challenges for experimental design in sloppy models. In many models of complex biological processes it is unknown which physical mechanisms must be included to explain system behaviors. As a consequence, models are often overly complex, with many practically unidentifiable parameters. Furthermore, which mechanisms are relevant or irrelevant varies among experiments. By selecting complementary experiments, experimental design may inadvertently make details that were omitted from the model become relevant. When this occurs, the model will have a large systematic error and fail to give a good fit to the data. We use a simple hyper-model of model error to quantify a model's discrepancy and apply it to two models of complex biological processes (EGFR signaling and DNA repair) with optimally selected experiments. We find that although parameters may be accurately estimated, the discrepancy in the model renders it less predictive than it was in the sloppy regime where systematic error is small. We introduce the concept of a sloppy system: a sequence of models of increasing complexity that become sloppy in the limit of microscopic accuracy. We explore the limits of accurate parameter estimation in sloppy systems and argue that identifying underlying mechanisms controlling system behavior is better approached by considering a hierarchy of models of varying detail rather than focusing on parameter estimation in a single model.
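
    The hallmark of sloppiness is easy to reproduce: for a sum-of-exponentials model, the eigenvalues of J^T J (with J the parameter Jacobian) decay roughly geometrically over many orders of magnitude. The sketch below is an illustrative demonstration with invented rate parameters, not one of the paper's models.

    ```python
    import numpy as np

    # Sloppiness diagnostic: eigenvalue spectrum of J^T J for the model
    # y(t) = sum_i exp(-k_i t), with J_ij = dy(t_j)/dk_i = -t_j exp(-k_i t_j).
    t = np.linspace(0.1, 5.0, 50)
    k = np.array([0.3, 1.0, 3.0, 9.0])          # rate parameters (assumed)
    J = np.stack([-t * np.exp(-ki * t) for ki in k], axis=1)

    eigs = np.linalg.eigvalsh(J.T @ J)[::-1]    # descending eigenvalues
    print("eigenvalue ratios:", eigs / eigs[0]) # spread over orders of magnitude
    ```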

  4. Human systems dynamics: Toward a computational model

    NASA Astrophysics Data System (ADS)

    Eoyang, Glenda H.

    2012-09-01

    A robust and reliable computational model of complex human systems dynamics could support advancements in theory and practice for social systems at all levels, from intrapersonal experience to global politics and economics. Models of human interactions have evolved from traditional, Newtonian systems assumptions, which served a variety of practical and theoretical needs of the past. Another class of models has been inspired and informed by models and methods from nonlinear dynamics, chaos, and complexity science. None of the existing models, however, is able to represent the open, high dimension, and nonlinear self-organizing dynamics of social systems. An effective model will represent interactions at multiple levels to generate emergent patterns of social and political life of individuals and groups. Existing models and modeling methods are considered and assessed against characteristic pattern-forming processes in observed and experienced phenomena of human systems. A conceptual model, CDE Model, based on the conditions for self-organizing in human systems, is explored as an alternative to existing models and methods. While the new model overcomes the limitations of previous models, it also provides an explanatory base and foundation for prospective analysis to inform real-time meaning making and action taking in response to complex conditions in the real world. An invitation is extended to readers to engage in developing a computational model that incorporates the assumptions, meta-variables, and relationships of this open, high dimension, and nonlinear conceptual model of the complex dynamics of human systems.

  5. Using New Models to Analyze Complex Regularities of the World: Commentary on Musso et al. (2013)

    ERIC Educational Resources Information Center

    Nokelainen, Petri; Silander, Tomi

    2014-01-01

    This commentary to the recent article by Musso et al. (2013) discusses issues related to model fitting, comparison of classification accuracy of generative and discriminative models, and two (or more) cultures of data modeling. We start by questioning the extremely high classification accuracy with an empirical data from a complex domain. There is…

  6. Conceptual and Developmental Analysis of Mental Models: An Example with Complex Change Problems.

    ERIC Educational Resources Information Center

    Poirier, Louise

    Defining better implicit models of children's actions in a series of situations is of paramount importance to understanding how knowledge is constructed. The objective of this study was to analyze the implicit mental models used by children in complex change problems to understand the stability of the models and their evolution with the child's…

  7. A toolbox for discrete modelling of cell signalling dynamics.

    PubMed

    Paterson, Yasmin Z; Shorthouse, David; Pleijzier, Markus W; Piterman, Nir; Bendtsen, Claus; Hall, Benjamin A; Fisher, Jasmin

    2018-06-18

    In an age where the volume of data regarding biological systems exceeds our ability to analyse it, many researchers are looking towards systems biology and computational modelling to help unravel the complexities of gene and protein regulatory networks. In particular, the use of discrete modelling allows generation of signalling networks in the absence of full quantitative descriptions of systems, which are necessary for ordinary differential equation (ODE) models. In order to make such techniques more accessible to mainstream researchers, tools such as the BioModelAnalyzer (BMA) have been developed to provide a user-friendly graphical interface for discrete modelling of biological systems. Here we use the BMA to build a library of discrete target functions of known canonical molecular interactions, translated from their ODE counterparts. We then show that these BMA target functions can be used to reconstruct complex networks, which can correctly predict many known genetic perturbations. This new library supports the accessibility ethos behind the creation of BMA, providing a toolbox for the construction of complex cell signalling models without the need for extensive experience in computer programming or mathematical modelling, and allows for construction and simulation of complex biological systems with only small amounts of quantitative data.
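
    In this discrete formalism, each node holds a small integer level and is advanced by a target function of its regulators. The three-node negative-feedback motif below is an invented example of the style, not one of the library's actual target functions.

    ```python
    # Discrete (qualitative) network: levels in {0, 1, 2}, synchronous updates.
    def target_receptor(state):       # constitutive input, held at level 1
        return 1

    def target_kinase(state):         # activated by receptor, inhibited by phosphatase
        return max(0, min(2, state["receptor"] - state["phosphatase"] + 1))

    def target_phosphatase(state):    # induced by kinase (negative feedback)
        return min(2, state["kinase"])

    targets = {"receptor": target_receptor,
               "kinase": target_kinase,
               "phosphatase": target_phosphatase}

    state = {"receptor": 0, "kinase": 0, "phosphatase": 0}
    for step in range(8):             # this feedback motif settles into a small oscillation
        state = {node: fn(state) for node, fn in targets.items()}
        print(step, state)
    ```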

  8. Tracking of Maneuvering Complex Extended Object with Coupled Motion Kinematics and Extension Dynamics Using Range Extent Measurements

    PubMed Central

    Sun, Lifan; Ji, Baofeng; Lan, Jian; He, Zishu; Pu, Jiexin

    2017-01-01

    The key to successful maneuvering complex extended object tracking (MCEOT) using range extent measurements provided by high resolution sensors lies in accurate and effective modeling of both the extension dynamics and the centroid kinematics. During object maneuvers, the extension dynamics of an object with a complex shape is highly coupled with the centroid kinematics. However, this difficult but important problem is rarely considered and solved explicitly. In view of this, this paper proposes a general approach to modeling a maneuvering complex extended object based on Minkowski sum, so that the coupled turn maneuvers in both the centroid states and extensions can be described accurately. The new model has a concise and unified form, in which the complex extension dynamics can be simply and jointly characterized by multiple simple sub-objects’ extension dynamics based on Minkowski sum. The proposed maneuvering model fits range extent measurements very well due to its favorable properties. Based on this model, an MCEOT algorithm dealing with motion and extension maneuvers is also derived. Two different cases of the turn maneuvers with known/unknown turn rates are specifically considered. The proposed algorithm which jointly estimates the kinematic state and the object extension can also be easily implemented. Simulation results demonstrate the effectiveness of the proposed modeling and tracking approaches. PMID:28937629
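
    The geometric ingredient can be illustrated directly: for convex sets, the Minkowski sum of two sub-object extents equals the convex hull of all pairwise vertex sums. The shapes below are arbitrary stand-ins, not the paper's object models.

    ```python
    import numpy as np
    from scipy.spatial import ConvexHull

    # Minkowski sum of two convex polygons via pairwise vertex sums.
    square = np.array([[-1, -1], [1, -1], [1, 1], [-1, 1]], float)
    ngon = np.array([[np.cos(a), np.sin(a)]            # regular pentagon
                     for a in np.linspace(0, 2*np.pi, 5, endpoint=False)])

    pairsums = (square[:, None, :] + ngon[None, :, :]).reshape(-1, 2)
    hull = ConvexHull(pairsums)
    extent = pairsums[hull.vertices]                   # combined object outline
    print("combined extent has", len(extent), "vertices")
    ```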

  9. Dynamics of Nanoparticle-Protein Corona Complex Formation: Analytical Results from Population Balance Equations

    PubMed Central

    Darabi Sahneh, Faryad; Scoglio, Caterina; Riviere, Jim

    2013-01-01

    Background Nanoparticle-protein corona complex formation involves adsorption of protein molecules onto nanoparticle surfaces in a physiological environment. Understanding the corona formation process is crucial in predicting nanoparticle behavior in biological systems, including applications of nanotoxicology and development of nano drug delivery platforms. Method This paper extends earlier modeling work to derive a mathematical model describing the dynamics of nanoparticle corona complex formation from population balance equations. We apply nonlinear dynamics techniques to derive analytical results for the composition of the nanoparticle-protein corona complex, and validate our results through numerical simulations. Results The model presented in this paper exhibits two phases of corona complex dynamics. In the first phase, proteins rapidly bind to the free surface of nanoparticles, leading to a metastable composition. During the second phase, continuous association and dissociation of protein molecules with nanoparticles slowly changes the composition of the corona complex. Given sufficient time, the composition of the corona complex reaches an equilibrium state of stable composition. We find analytical approximate formulae for the metastable and stable compositions of the corona complex. Our formulae are well-structured to clearly identify the important parameters determining corona composition. Conclusion The dynamics of biocorona formation constitute a vital aspect of interactions between nanoparticles and living organisms. Our results further the understanding of these dynamics through quantitation of experimental conditions, allowing modeling results for in vitro systems to better predict behavior in in vivo systems. One potential application would involve relating a simple cell culture medium to a complex protein medium, such as blood or tissue fluid. PMID:23741371
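
    The two-phase picture can be reproduced with the simplest competitive association/dissociation kinetics: an abundant weak binder dominates the early metastable corona and is slowly displaced by a scarce tight binder. All rate constants and concentrations below are invented for illustration.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Two proteins competing for a shared nanoparticle surface:
    # d(theta_i)/dt = kon_i * c_i * (1 - sum theta) - koff_i * theta_i
    kon = np.array([1.0, 1.0])      # association rate constants
    koff = np.array([0.5, 0.01])    # dissociation: A weak, B tight
    conc = np.array([1.0, 0.05])    # free concentrations: A abundant, B scarce

    def rhs(t, theta):
        free = 1.0 - theta.sum()    # unoccupied fraction of the surface
        return kon * conc * free - koff * theta

    sol = solve_ivp(rhs, (0, 500), [0.0, 0.0], dense_output=True)
    for t in (1, 10, 500):          # metastable (A-rich) -> stable (B-rich)
        a, b = sol.sol(t)
        print(f"t={t:4d}: coverage A={a:.3f}, B={b:.3f}")
    ```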

  10. An Exploratory Study of the Butterfly Effect Using Agent-Based Modeling

    NASA Technical Reports Server (NTRS)

    Khasawneh, Mahmoud T.; Zhang, Jun; Shearer, Nevan E. N.; Rodriquez-Velasquez, Elkin; Bowling, Shannon R.

    2010-01-01

    This paper provides insights about the behavior of chaotic complex systems and their sensitive dependence on initial conditions. How much does a small change in the initial conditions of a complex system affect it in the long term? Do complex systems exhibit what is called the "butterfly effect"? This paper uses an agent-based modeling approach to address these questions. An existing model from the NetLogo library was extended in order to compare chaotic complex systems with near-identical initial conditions. Results show that small changes in initial starting conditions can have a huge impact on the behavior of chaotic complex systems. The term "butterfly effect" is attributed to the work of Edward Lorenz [1]. It is used to describe the sensitive dependence of the behavior of chaotic complex systems on their initial conditions. The metaphor refers to the notion that a butterfly flapping its wings somewhere may cause extreme changes in the ecological system's behavior in the future, such as a hurricane.
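
    The effect itself is classically demonstrated with the Lorenz 1963 system, from which the metaphor originates; the sketch below integrates two trajectories whose initial conditions differ by 1e-9 and tracks their separation (the study asks the same question of an agent-based NetLogo model).

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Sensitive dependence on initial conditions in the Lorenz 1963 system.
    def lorenz(t, v, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = v
        return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

    x0 = np.array([1.0, 1.0, 1.0])
    x0_eps = x0 + np.array([1e-9, 0.0, 0.0])    # near-identical initial state
    t = np.linspace(0, 40, 2001)
    a = solve_ivp(lorenz, (0, 40), x0, t_eval=t, rtol=1e-10, atol=1e-12)
    b = solve_ivp(lorenz, (0, 40), x0_eps, t_eval=t, rtol=1e-10, atol=1e-12)

    sep = np.linalg.norm(a.y - b.y, axis=0)     # grows until it saturates
    for ti in (0, 10, 25, 40):
        print(f"t={ti:2d}: separation = {sep[t.searchsorted(ti)]:.2e}")
    ```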

  11. Cooperation of Deterministic Dynamics and Random Noise in Production of Complex Syntactical Avian Song Sequences: A Neural Network Model

    PubMed Central

    Yamashita, Yuichi; Okumura, Tetsu; Okanoya, Kazuo; Tani, Jun

    2011-01-01

    How the brain learns and generates temporal sequences is a fundamental issue in neuroscience. The production of birdsongs, a process which involves complex learned sequences, provides researchers with an excellent biological model for this topic. The Bengalese finch in particular learns a highly complex song with syntactical structure. The nucleus HVC (HVC), a premotor nucleus within the avian song system, plays a key role in generating the temporal structures of their songs. From lesion studies, the nucleus interfacialis (NIf) projecting to the HVC is considered one of the essential regions that contribute to the complexity of their songs. However, the types of interaction between the HVC and the NIf that can produce complex syntactical songs remain unclear. In order to investigate the function of interactions between the HVC and NIf, we have proposed a neural network model based on previous biological evidence. The HVC is modeled by a recurrent neural network (RNN) that learns to generate temporal patterns of songs. The NIf is modeled as a mechanism that provides auditory feedback to the HVC and generates random noise that feeds into the HVC. The model showed that complex syntactical songs can be replicated by simple interactions between the deterministic dynamics of the RNN and random noise. In the current study, the plausibility of the model is tested by comparing the changes in the songs of actual birds induced by pharmacological inhibition of the NIf with the changes in the songs produced by the model when parameters representing NIf functions are modified. The efficacy of the model demonstrates that the changes in songs induced by pharmacological inhibition of the NIf can be interpreted as a trade-off between the effects of noise and the effects of feedback on the dynamics of the RNN of the HVC. These findings suggest that the current model provides a convincing hypothesis for the functional role of NIf–HVC interaction. PMID:21559065
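
    A toy numerical sketch of the paper's central idea (fixed random weights and invented sizes; nothing here is the authors' trained network): the recurrent dynamics are fully deterministic, yet different realizations of an NIf-like noise input yield different syllable sequences.

      import numpy as np

      rng = np.random.default_rng(0)
      W = rng.normal(0.0, 0.3, (16, 16))      # fixed recurrent weights ("HVC")
      V = rng.normal(0.0, 1.0, (4, 16))       # readout onto 4 "syllables"

      def sing(noise_seed, noise_level=0.3, steps=12):
          noise = np.random.default_rng(noise_seed)
          h, seq = np.zeros(16), []
          for _ in range(steps):
              h = np.tanh(W @ h + 0.5 + noise_level * noise.normal(size=16))
              seq.append("abcd"[int(np.argmax(V @ h))])
          return "".join(seq)

      for seed in range(3):
          print(sing(seed))   # same deterministic core, different noise, different song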

  12. Applying Weick's model of organizing to health care and health promotion: highlighting the central role of health communication.

    PubMed

    Kreps, Gary L

    2009-03-01

    Communication is a crucial process in the effective delivery of health care services and the promotion of public health. However, there are often tremendous complexities in using communication effectively to provide the best health care, direct the adoption of health promoting behaviors, and implement evidence-based public health policies and practices. This article describes Weick's model of organizing as a powerful theory of social organizing that can help increase understanding of the communication demands of health care and health promotion. The article identifies relevant applications from the model for health communication research and practice. Weick's model of organizing is a relevant and heuristic theoretical perspective for guiding health communication research and practice. There are many potential applications of this model illustrating the complexities of effective communication in health care and health promotion. Weick's model of organizing can be used as a template for guiding both research and practice in health care and health promotion. The model illustrates the important roles that communication performs in enabling health care consumers and providers to make sense of the complexities of modern health care and health promotion, select the best strategies for responding effectively to complex health care and health promotion situations, and retain relevant information (develop organizational intelligence) for guiding future responses to complex health care and health promotion challenges.

  13. Syntheses, spectroscopic characterization, thermal study, molecular modeling, and biological evaluation of novel Schiff's base benzil bis(5-amino-1,3,4-thiadiazole-2-thiol) with Ni(II), and Cu(II) metal complexes.

    PubMed

    Chandra, Sulekh; Gautam, Seema; Rajor, Hament Kumar; Bhatia, Rohit

    2015-02-25

    The novel Schiff's base ligand benzil bis(5-amino-1,3,4-thiadiazole-2-thiol) was synthesized by the condensation of benzil and 5-amino-1,3,4-thiadiazole-2-thiol in a 1:2 ratio. The structure of the ligand was determined on the basis of elemental analyses, IR, (1)H NMR, mass, and molecular modeling studies. The synthesized ligand behaved as tetradentate and coordinated to the metal ion through the sulfur atoms of the thiol ring and the nitrogen atoms of the imine group. Ni(II) and Cu(II) complexes were synthesized with this nitrogen-sulfur donor (N2S2) ligand. The metal complexes were characterized by elemental analyses, molar conductance, magnetic susceptibility measurements, IR, electronic spectra, EPR, thermal, and molecular modeling studies. All the complexes showed molar conductance corresponding to a non-electrolytic nature, except the [Ni(L)](NO3)2 complex, which was a 1:2 electrolyte. The [Cu(L)(SO4)] complex may possess square pyramidal geometry, the [Ni(L)](NO3)2 complex tetrahedral geometry, and the rest of the complexes six-coordinate octahedral/tetragonal geometry. The newly synthesized ligand and its metal complexes were examined against opportunistic pathogens. Results suggested that the metal complexes were more biologically active than the free ligand. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. [Complexity of care and organizational effectiveness: a survey among medical care units in nine Lombardy region hospitals].

    PubMed

    Pasquali, Sara; Capitoni, Enrica; Tiraboschi, Giuseppina; Alborghetti, Adriana; De Luca, Giuseppe; Di Mauro, Stefania

    2017-01-01

    Eleven medical care units of nine Lombardy Region hospitals, organized by a levels-of-care model or by the traditional departmental model, were analyzed in order to evaluate whether methods for evaluating complexity of patient care represent an index of nursing organizational effectiveness. A survey of nine nurses in managerial positions was conducted between Nov. 2013 and Jan. 2014. The following factors were described: context and nursing care model, staffing, complexity evaluation, patient satisfaction, and staff well-being. Data were processed in Microsoft Excel. Among the units analysed, all units organized by levels of care and one organized by the departmental model systematically evaluate nursing complexity. Registered Nurses (RN) and Health Care Assistants (HCA) are on average numerically higher in units that measure complexity (0.55 vs. 0.49 RN and 0.38 vs. 0.23 HCA per bed). Measures adopted in response to changes in complexity are rewarding systems and supporting interventions, such as moving personnel between units or requiring additional working hours; a reduction in the number of beds is adopted when no other solution is available. Patient satisfaction is evaluated through customer satisfaction questionnaires. Turnover, stress, and absenteeism data are not available in all units. Complexity evaluation through appropriate methods is carried out in all hospitals organized by levels of care with personalized nursing care models, though complexity is detected with different methods. No significant differences in applied managerial strategies are present. Patient satisfaction is evaluated everywhere. Data on staff well-being are scarcely available. Coordinated regional actions are recommended in order to gather comparable data for research, improve decision making, and increase the effectiveness of nursing care.

  15. A growth model for directed complex networks with power-law shape in the out-degree distribution

    PubMed Central

    Esquivel-Gómez, J.; Stevens-Navarro, E.; Pineda-Rico, U.; Acosta-Elias, J.

    2015-01-01

    Many growth models have been published to model the behavior of real complex networks. These models are able to reproduce several of the topological properties of such networks. However, in most of these growth models, the number of outgoing links (i.e., the out-degree) of newly added nodes is constant; that is, all nodes in the network are born with the same number of outgoing links. In other models, the resulting out-degree distribution decays as a Poisson or an exponential distribution. However, it has been found that in real complex networks the out-degree distribution decays as a power-law. In order to obtain out-degree distributions with power-law behavior, several models have been proposed. This work introduces a new model that yields out-degree distributions that decay as a power-law with an exponent in the range from 0 to 1. PMID:25567141
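
    As a sketch of how such a distribution can be produced and checked (the generative rule here is assumed for illustration and is not the authors' mechanism), one can draw each arriving node's out-degree from a truncated discrete power-law with an exponent between 0 and 1 (the cutoff keeps it normalizable), wire the links to earlier nodes, and recover the exponent with a log-log fit.

      import numpy as np

      rng = np.random.default_rng(2)
      a, k_max, n = 0.5, 100, 10_000          # exponent in (0, 1) requires a cutoff k_max
      ks = np.arange(1, k_max + 1)
      pk = ks**(-a) / (ks**(-a)).sum()        # truncated power-law P(k)

      out_deg = rng.choice(ks, size=n, p=pk)  # out-degree of each arriving node
      # optional wiring: each new node v links to uniformly chosen earlier nodes
      edges = [(v, rng.integers(v)) for v in range(1, n) for _ in range(out_deg[v])]

      vals, counts = np.unique(out_deg, return_counts=True)
      slope, _ = np.polyfit(np.log(vals), np.log(counts), 1)
      print(f"fitted out-degree exponent: {-slope:.2f} (target {a})")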

  16. Enhanced LOD Concepts for Virtual 3d City Models

    NASA Astrophysics Data System (ADS)

    Benner, J.; Geiger, A.; Gröger, G.; Häfele, K.-H.; Löwner, M.-O.

    2013-09-01

    Virtual 3D city models contain digital three-dimensional representations of city objects like buildings, streets, or technical infrastructure. Because the size and complexity of these models continuously grow, a Level of Detail (LoD) concept is indispensable; it must effectively support the partitioning of a complete model into alternative models of different complexity, and provide metadata addressing the informational content, complexity, and quality of each alternative model. After a short overview of various LoD concepts, this paper discusses the existing LoD concept of the CityGML standard for 3D city models and identifies a number of deficits. Based on this analysis, an alternative concept is developed and illustrated with several examples. It differentiates, first, between a Geometric Level of Detail (GLoD) and a Semantic Level of Detail (SLoD), and second, between the building interior and its exterior shell. Finally, a possible implementation of the new concept is demonstrated by means of a UML model.

  17. A comparative study of turbulence models for overset grids

    NASA Technical Reports Server (NTRS)

    Renze, Kevin J.; Buning, Pieter G.; Rajagopalan, R. G.

    1992-01-01

    The implementation of two different types of turbulence models for a flow solver using the Chimera overset grid method is examined. Various turbulence model characteristics, such as length scale determination and transition modeling, are found to have a significant impact on the computed pressure distribution for a multielement airfoil case. No inherent problem is found with using either algebraic or one-equation turbulence models with an overset grid scheme, but simulation of turbulence for multiple-body or complex geometry flows is very difficult regardless of the gridding method. For complex geometry flowfields, modification of the Baldwin-Lomax turbulence model is necessary to select the appropriate length scale in wall-bounded regions. The overset grid approach presents no obstacle to use of a one- or two-equation turbulence model. Both Baldwin-Lomax and Baldwin-Barth models have problems providing accurate eddy viscosity levels for complex multiple-body flowfields such as those involving the Space Shuttle.

  18. Effective degrees of freedom: a flawed metaphor

    PubMed Central

    Janson, Lucas; Fithian, William; Hastie, Trevor J.

    2015-01-01

    Summary: To most applied statisticians, a fitting procedure’s degrees of freedom is synonymous with its model complexity, or its capacity for overfitting to data. In particular, it is often used to parameterize the bias-variance tradeoff in model selection. We argue that, on the contrary, model complexity and degrees of freedom may correspond very poorly. We exhibit and theoretically explore various fitting procedures for which degrees of freedom is not monotonic in the model complexity parameter, and can exceed the total dimension of the ambient space even in very simple settings. We show that the degrees of freedom for any non-convex projection method can be unbounded. PMID:26977114
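
    For context (a standard result, not specific to this paper): the degrees of freedom under discussion is Efron's covariance-based definition. For a linear smoother it reduces to the trace of the smoothing matrix, but nothing in the definition caps it at the ambient dimension for nonlinear procedures, which is what allows the unboundedness result above.

      % Observations y_i = \mu_i + \epsilon_i with Var(\epsilon_i) = \sigma^2:
      \mathrm{df}(\hat{\mu})
        \;=\; \frac{1}{\sigma^{2}} \sum_{i=1}^{n} \operatorname{Cov}(\hat{\mu}_i,\, y_i)
      % Linear smoother \hat{\mu} = H y  \;\Rightarrow\;  \mathrm{df} = \operatorname{tr}(H).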

  19. Simulating complex intracellular processes using object-oriented computational modelling.

    PubMed

    Johnson, Colin G; Goldman, Jacki P; Gullick, William J

    2004-11-01

    The aim of this paper is to give an overview of computer modelling and simulation in cellular biology, in particular as applied to complex biochemical processes within the cell. This is illustrated by the use of the techniques of object-oriented modelling, where the computer is used to construct abstractions of objects in the domain being modelled, and these objects then interact within the computer to simulate the system and allow emergent properties to be observed. The paper also discusses the role of computer simulation in understanding complexity in biological systems, and the kinds of information which can be obtained about biology via simulation.
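
    A minimal sketch of the object-oriented style the paper describes (class names and rates are invented for illustration): each biochemical entity is an object, interaction rules are methods, and aggregate behavior such as the bound-receptor count emerges from repeated local interactions rather than from a global equation.

      import random

      class Ligand:
          def __init__(self):
              self.bound = False

      class Receptor:
          def __init__(self):
              self.partner = None
          def try_bind(self, ligand, p_on=0.01):
              # stochastic local rule; no system-level equation is written down
              if self.partner is None and not ligand.bound and random.random() < p_on:
                  self.partner, ligand.bound = ligand, True

      random.seed(0)
      receptors = [Receptor() for _ in range(100)]
      ligands = [Ligand() for _ in range(500)]
      for step in range(200):
          for r in receptors:
              r.try_bind(random.choice(ligands))
      print(sum(r.partner is not None for r in receptors), "of 100 receptors bound")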

  20. The Limitations of Model-Based Experimental Design and Parameter Estimation in Sloppy Systems

    PubMed Central

    Tolman, Malachi; Thames, Howard D.; Mason, Kathy A.

    2016-01-01

    We explore the relationship among experimental design, parameter estimation, and systematic error in sloppy models. We show that the approximate nature of mathematical models poses challenges for experimental design in sloppy models. In many models of complex biological processes, it is unknown which physical mechanisms must be included to explain system behaviors. As a consequence, models are often overly complex, with many practically unidentifiable parameters. Furthermore, which mechanisms are relevant or irrelevant varies among experiments. By selecting complementary experiments, experimental design may inadvertently make details that were omitted from the model become relevant. When this occurs, the model will have a large systematic error and fail to give a good fit to the data. We use a simple hyper-model of model error to quantify a model's discrepancy and apply it to two models of complex biological processes (EGFR signaling and DNA repair) with optimally selected experiments. We find that although parameters may be accurately estimated, the discrepancy in the model renders it less predictive than it was in the sloppy regime where systematic error is small. We introduce the concept of a sloppy system: a sequence of models of increasing complexity that become sloppy in the limit of microscopic accuracy. We explore the limits of accurate parameter estimation in sloppy systems and argue that identifying the underlying mechanisms controlling system behavior is better approached by considering a hierarchy of models of varying detail rather than by focusing on parameter estimation in a single model. PMID:27923060
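
    Sloppiness itself is easy to exhibit numerically (the toy model below, a sum of exponentials, is a canonical example and not one of the paper's systems): the eigenvalues of the Fisher information J^T J span many decades, so a few parameter combinations are tightly constrained while most are practically unidentifiable.

      import numpy as np

      theta = np.array([0.5, 1.0, 2.0, 4.0])   # decay rates of y(t) = sum_j exp(-theta_j t)
      t = np.linspace(0.1, 5.0, 50)[:, None]   # observation times
      J = -t * np.exp(-t * theta[None, :])     # J[i, j] = d y(t_i) / d theta_j
      eigs = np.sort(np.linalg.eigvalsh(J.T @ J))[::-1]
      print("FIM eigenvalues:", eigs)          # typically spread over several decades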

  1. From Laser Scanning to Finite Element Analysis of Complex Buildings by Using a Semi-Automatic Procedure

    PubMed Central

    Castellazzi, Giovanni; D’Altri, Antonio Maria; Bitelli, Gabriele; Selvaggi, Ilenia; Lambertini, Alessandro

    2015-01-01

    In this paper, a new semi-automatic procedure to transform three-dimensional point clouds of complex objects into three-dimensional finite element models is presented and validated. The procedure conceives of the point cloud as a stacking of point sections. The complexity of the clouds is arbitrary, since the procedure is designed for terrestrial laser scanner surveys applied to buildings with irregular geometry, such as historical buildings. The procedure aims at solving the problems connected with the generation of finite element models of these complex structures by constructing a finely discretized geometry in a reduced amount of time, ready to be used in structural analysis. If the starting clouds represent the inner and outer surfaces of the structure, the resulting finite element model will accurately capture the whole three-dimensional structure, producing a complex solid made of voxel elements. A comparison analysis with a CAD-based model is carried out on a historical building damaged by a seismic event. The results indicate that the proposed procedure is effective and obtains comparable models in a shorter time, with an increased level of automation. PMID:26225978
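
    The voxelization step at the heart of the procedure can be sketched in a few lines (the grid spacing and the random stand-in cloud below are assumptions; the published pipeline additionally handles slicing and the inner/outer surfaces): map each point to a voxel index and mark the voxel occupied.

      import numpy as np

      rng = np.random.default_rng(3)
      points = rng.uniform(0, 10, size=(100_000, 3))   # stand-in for a laser-scan cloud
      voxel = 0.25                                     # voxel edge length

      idx = np.floor((points - points.min(axis=0)) / voxel).astype(int)
      grid = np.zeros(tuple(idx.max(axis=0) + 1), dtype=bool)
      grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True     # occupied voxels become elements
      print(f"{grid.sum()} voxel elements from {len(points)} points")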

  2. Process Consistency in Models: the Importance of System Signatures, Expert Knowledge and Process Complexity

    NASA Astrophysics Data System (ADS)

    Hrachowitz, Markus; Fovet, Ophelie; Ruiz, Laurent; Gascuel-Odoux, Chantal; Savenije, Hubert

    2014-05-01

    Hydrological models are frequently characterized by what is often considered to be adequate calibration performance. In many cases, however, these models experience a substantial increase in uncertainty and decrease in performance in validation periods, resulting in poor predictive power. Besides the likely presence of data errors, this observation can point towards wrong or insufficient representations of the underlying processes and their heterogeneity: in other words, the right results are generated for the wrong reasons. Ways are therefore sought to increase model consistency and thereby to satisfy the contrasting priorities of (a) increasing model complexity and (b) limiting model equifinality. In this study a stepwise model development approach is chosen to test the value of an exhaustive and systematic combined use of hydrological signatures, expert knowledge and readily available, yet anecdotal and rarely exploited, hydrological information for increasing model consistency towards generating the right answer for the right reasons. A simple 3-box, 7-parameter conceptual HBV-type model, constrained by 4 calibration objective functions, was able to adequately reproduce the hydrograph with comparatively high values for the 4 objective functions in the 5-year calibration period. However, closer inspection of the results showed a dramatic decrease of model performance in the 5-year validation period. In addition, assessing the model's skill to reproduce a range of 20 hydrological signatures, including, amongst others, the flow duration curve, the autocorrelation function and the rising limb density, showed that it could not adequately reproduce the vast majority of these signatures, indicating a lack of model consistency. Subsequently, model complexity was increased in a stepwise way to allow for more process heterogeneity. To limit model equifinality, the increase in complexity was counter-balanced by a stepwise application of "realism constraints", inferred from expert knowledge (e.g. the unsaturated storage capacity of hillslopes should exceed that of wetlands) and anecdotal hydrological information (e.g. long-term estimates of actual evaporation obtained from the Budyko framework and long-term estimates of baseflow contribution), to ensure that the model is well behaved with respect to the modeller's perception of the system. A total of 11 model set-ups with increased complexity and an increased number of realism constraints was tested. It could be shown that, in spite of largely unchanged calibration performance compared to the simplest set-up, the most complex model set-up (12 parameters, 8 constraints) exhibited significantly increased performance in the validation period while uncertainty did not increase. In addition, the most complex model was characterized by a substantially increased skill to reproduce all 20 signatures, indicating a more suitable representation of the system. The results suggest that a model "well" constrained by 4 calibration objective functions may still be an inadequate representation of the system, and that increasing model complexity, if counter-balanced by realism constraints, can indeed increase the predictive performance of a model and its skill to reproduce a range of hydrological signatures, without necessarily increasing uncertainty. The results also strongly illustrate the need to move away from automated model calibration towards a more general expert-knowledge-driven strategy of constraining models if a certain level of model consistency is to be achieved.

  3. Sorption of trivalent lanthanides and actinides onto montmorillonite: Macroscopic, thermodynamic and structural evidence for ternary hydroxo and carbonato surface complexes on multiple sorption sites.

    PubMed

    Fernandes, M Marques; Scheinost, A C; Baeyens, B

    2016-08-01

    The credibility of long-term safety assessments of radioactive waste repositories may be greatly enhanced by a molecular level understanding of the sorption processes onto individual minerals present in the near- and far-fields. In this study we couple macroscopic sorption experiments to surface complexation modelling and spectroscopic investigations, including extended X-ray absorption fine structure (EXAFS) and time-resolved laser fluorescence spectroscopies (TRLFS), to elucidate the uptake mechanism of trivalent lanthanides and actinides (Ln/An(III)) by montmorillonite in the absence and presence of dissolved carbonate. Based on the experimental sorption isotherms for the carbonate-free system, the previously developed 2 site protolysis non electrostatic surface complexation and cation exchange (2SPNE SC/CE) model needed to be complemented with an additional surface complexation reaction onto weak sites. The fitting of sorption isotherms in the presence of carbonate required refinement of the previously published model by reducing the strong site capacity and by adding the formation of Ln/An(III)-carbonato complexes both on strong and weak sites. EXAFS spectra of selected Am samples and TRLFS spectra of selected Cm samples corroborate the model assumptions by showing the existence of different surface complexation sites and evidencing the formation of Ln/An(III) carbonate surface complexes. In the absence of carbonate and at low loadings, Ln/An(III) form strong inner-sphere complexes through binding to three Al(O,OH)6 octahedra, most likely by occupying vacant sites in the octahedral layers of montmorillonite, which are exposed on {010} and {110} edge faces. At higher loadings, Ln/An(III) binds to only one Al octahedron, forming a weaker, edge-sharing surface complex. In the presence of carbonate, we identified a ternary mono- or dicarbonato Ln/An(III) complex binding directly to one Al(O,OH)6 octahedron, revealing that type-A ternary complexes form with the one or two carbonate groups pointing away from the surface into the solution phase. Within the spectroscopically observable concentration range these complexes could only be identified on the weak sites, in line with the small strong site capacity suggested by the refined sorption model. When the solubility of carbonates was exceeded, formation of an Am carbonate hydroxide could be identified. The excellent agreement between the thermodynamic model parameters obtained by fitting the macroscopic data, and the spectroscopically identified mechanisms, demonstrates the mature state of the 2SPNE SC/CE model for predicting and quantifying the retention of Ln/An(III) elements by montmorillonite-rich clay rocks. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. The Silent Canyon caldera complex: a three-dimensional model based on drill-hole stratigraphy and gravity inversion

    USGS Publications Warehouse

    McKee, Edwin H.; Hildenbrand, Thomas G.; Anderson, Megan L.; Rowley, Peter D.; Sawyer, David A.

    1999-01-01

    The structural framework of Pahute Mesa, Nevada, is dominated by the Silent Canyon caldera complex, a buried, multiple-collapse caldera complex. Using the boundary surface between low-density Tertiary volcanogenic rocks and denser granitic and weakly metamorphosed sedimentary rocks (basement) as the outer fault surfaces for the modeled collapse caldera complex, it is postulated that the caldera complex collapsed on steeply dipping arcuate faults two, possibly three, times following eruption of at least two major ash-flow tuffs. The caldera and most of its eruptive products are now deeply buried below the surface of Pahute Mesa. Relatively low-density rocks in the caldera complex produce one of the largest gravity lows in the western conterminous United States. Gravity modeling defines a steep-sided, cup-shaped depression as much as 6,000 meters (19,800 feet) deep that is surrounded and floored by denser rocks. The steeply dipping surface located between the low-density basin fill and the higher-density external rocks is considered to be the surface of the ring faults of the multiple calderas. Extrapolation of this surface upward to the outer, or topographic, rim of the Silent Canyon caldera complex defines the upper part of the caldera collapse structure. Rock units within and outside the Silent Canyon caldera complex are combined into seven hydrostratigraphic units based on their predominant hydrologic characteristics. The caldera structures and other faults on Pahute Mesa are used with the seven hydrostratigraphic units to make a three-dimensional geologic model of Pahute Mesa using the "EarthVision" (Dynamic Graphics, Inc.) modeling computer program. This method allows graphic representation of the geometry of the rocks and produces computer-generated cross sections, isopach maps, and three-dimensional oriented diagrams. These products have been created to aid in visualizing and modeling the ground-water flow system beneath Pahute Mesa.

  5. A multiscale modelling methodology applicable for regulatory purposes taking into account effects of complex terrain and buildings on pollutant dispersion: a case study for an inner Alpine basin.

    PubMed

    Oettl, D

    2015-11-01

    Dispersion modelling in complex terrain has always been challenging for modellers. Although a large number of publications are dedicated to the field, candidate methods and models for use in regulatory applications are scarce. This is all the more true when the combined effect of topography and obstacles on pollutant dispersion has to be taken into account. In Austria, largely situated in Alpine regions, such complex situations are quite frequent. This work deals with an approach which is in principle capable of considering both buildings and topography in simulations, by combining state-of-the-art wind field models at the micro- (<1 km) and mesoscale γ (2-20 km) with a Lagrangian particle model. In order to make such complex numerical models applicable for regulatory purposes, meteorological input data for the models need to be readily derived from routine observations. Here, use was made of the traditional approach of binning meteorological data by wind direction, speed, and stability class, formerly used mainly in conjunction with Gaussian-type models. It is demonstrated that this approach leads to reasonable agreement (fractional bias < 0.1) between observed and modelled annual average concentrations in an Alpine basin with frequent low-wind-speed conditions, temperature inversions, and quite complex flow patterns, while keeping simulation times within practical limits for applications in licencing procedures. However, due to the simplifications in the derivation of meteorological input data, as well as several ad hoc assumptions regarding the boundary conditions of the mesoscale wind field model, the methodology is not suited for computing detailed time and space variations of pollutant concentrations.

  6. Molecular modelling, spectroscopic characterization and biological studies of tetraazamacrocyclic metal complexes

    NASA Astrophysics Data System (ADS)

    Rathi, Parveen; Sharma, Kavita; Singh, Dharam Pal

    2014-09-01

    Macrocyclic complexes of the type [MLX]X2, where L is (C30H28N4), a macrocyclic ligand, M = Cr(III) or Fe(III), and X = Cl-, CH3COO- or NO3-, have been synthesized by the template condensation reaction of 1,8-diaminonaphthalene and acetylacetone in the presence of trivalent metal salts in a methanolic medium. The complexes have been formulated as [MLX]X2 due to the 1:2 electrolytic nature of these complexes. The complexes have been characterized with the help of elemental analyses, molar conductance measurements, magnetic susceptibility measurements, electronic, infrared, far-infrared and mass spectral studies, and molecular modelling. The molecular weights of these complexes indicate their monomeric nature. On the basis of all these studies, a five-coordinate square pyramidal geometry has been proposed for all these complexes. These metal complexes have also been screened for their in vitro antimicrobial activities.

  7. Prefrontal and parietal activity is modulated by the rule complexity of inductive reasoning and can be predicted by a cognitive model.

    PubMed

    Jia, Xiuqin; Liang, Peipeng; Shi, Lin; Wang, Defeng; Li, Kuncheng

    2015-01-01

    In neuroimaging studies, increased task complexity can lead to increased activation in task-specific regions or to activation of additional regions. How the brain adapts to increased rule complexity during inductive reasoning remains unclear. In the current study, three types of problems were created: simple rule induction (i.e., SI, with rule complexity of 1), complex rule induction (i.e., CI, with rule complexity of 2), and perceptual control. Our findings revealed that increased activations accompany increased rule complexity in the right dorsal lateral prefrontal cortex (DLPFC) and medial posterior parietal cortex (precuneus). A cognitive model predicted both the behavioral and brain imaging results. The current findings suggest that neural activity in frontal and parietal regions is modulated by rule complexity, which may shed light on the neural mechanisms of inductive reasoning. Copyright © 2014. Published by Elsevier Ltd.

  8. Parameters for the RM1 Quantum Chemical Calculation of Complexes of the Trications of Thulium, Ytterbium and Lutetium

    PubMed Central

    Filho, Manoel A. M.; Dutra, José Diogo L.; Rocha, Gerd B.; Simas, Alfredo M.

    2016-01-01

    The RM1 quantum chemical model for the calculation of complexes of Tm(III), Yb(III) and Lu(III) is advanced. We tested the model by fully optimizing the geometries of 126 complexes and comparing the optimized structures with known crystallographic ones from the Cambridge Structural Database. Results indicate that, for thulium complexes, the accuracy in terms of the distances between the lanthanide ion and its directly coordinated atoms is about 2%. Corresponding results for ytterbium and lutetium are both 3%, levels of accuracy useful for the design of lanthanide complexes, targeting their countless applications. PMID:27223475

  9. Smad Signaling Dynamics: Insights from a Parsimonious Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wiley, H. S.; Shankaran, Harish

    2008-09-09

    The molecular mechanisms that transmit information from cell surface receptors to the nucleus are exceedingly complex; thus, much effort has been expended in developing computational models to understand these processes. A recent study on modeling the nuclear-cytoplasmic shuttling of Smad2-Smad4 complexes in response to transforming growth factor β (TGF-β) receptor activation has provided substantial insight into how this signaling network translates the degree of TGF-β receptor activation (input) into the amount of nuclear Smad2-Smad4 complexes (output). The study addressed this question by combining a simple, mechanistic model with targeted experiments, an approach that proved particularly powerful for exploring the fundamental properties of a complex signaling network. The mathematical model revealed that Smad nuclear-cytoplasmic dynamics enables a proportional, but time-delayed coupling between the input and the output. As a result, the output can faithfully track gradual changes in the input, while the rapid input fluctuations that constitute signaling noise are dampened out.
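
    The reported input-output behavior is that of a first-order lag, which can be sketched directly (the coupling rate below is an arbitrary assumption, not a fitted value): the nuclear output tracks a slow ramp in receptor activity while strongly attenuating a fast fluctuation.

      import numpy as np

      k, dt = 0.05, 0.1                       # coupling rate and time step
      t = np.arange(0, 400, dt)
      u = np.clip(t / 200, 0, 1) + 0.2 * np.sin(np.pi * t)   # slow ramp + fast wiggle
      y = np.zeros_like(t)
      for i in range(1, len(t)):
          y[i] = y[i-1] + dt * k * (u[i-1] - y[i-1])          # dy/dt = k (u - y)
      # first-order filter gain at angular frequency w is 1 / sqrt(1 + (w/k)^2)
      print(f"final y = {y[-1]:.2f}; gain at the fast frequency = {1/np.hypot(1, np.pi/k):.3f}")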

  10. When to use discrete event simulation (DES) for the economic evaluation of health technologies? A review and critique of the costs and benefits of DES.

    PubMed

    Karnon, Jonathan; Haji Ali Afzali, Hossein

    2014-06-01

    Modelling in economic evaluation is an unavoidable fact of life. Cohort-based state transition models are most common, though discrete event simulation (DES) is increasingly being used to implement more complex model structures. The benefits of DES relate to the greater flexibility around the implementation and population of complex models, which may provide more accurate or valid estimates of the incremental costs and benefits of alternative health technologies. The costs of DES relate to the time and expertise required to implement and review complex models, when perhaps a simpler model would suffice. The costs are not borne solely by the analyst, but also by reviewers. In particular, modelled economic evaluations are often submitted to support reimbursement decisions for new technologies, for which detailed model reviews are generally undertaken on behalf of the funding body. This paper reports the results from a review of published DES-based economic evaluations. Factors underlying the use of DES were defined, and the characteristics of applied models were considered, to inform options for assessing the potential benefits of DES in relation to each factor. Four broad factors underlying the use of DES were identified: baseline heterogeneity, continuous disease markers, time-varying event rates, and the influence of prior events on subsequent event rates. If relevant individual-level data are available, representation of the four factors is likely to improve model validity, and it is possible to assess the importance of their representation in individual cases. A thorough model performance evaluation is required to overcome the costs of DES from the users' perspective, but few of the reviewed DES models reported such a process. More generally, further direct, empirical comparisons of complex models with simpler models would better inform the benefits of using DES to implement more complex models, and the circumstances in which such benefits are most likely.

  11. Modeling the propagation of mobile malware on complex networks

    NASA Astrophysics Data System (ADS)

    Liu, Wanping; Liu, Chao; Yang, Zheng; Liu, Xiaoyang; Zhang, Yihao; Wei, Zuxue

    2016-08-01

    In this paper, the spreading behavior of malware across mobile devices is addressed. By introducing complex networks to model mobile networks, which follow the power-law degree distribution, a novel epidemic model for mobile malware propagation is proposed. The spreading threshold that governs the dynamics of the model is calculated. Theoretically, the asymptotic stability of the malware-free equilibrium is confirmed when the threshold is below unity, and global stability is further proved under some sufficient conditions. The influences of different model parameters as well as the network topology on malware propagation are also analyzed. Our theoretical studies and numerical simulations show that networks with higher heterogeneity are more conducive to the diffusion of malware, and that complex networks with lower power-law exponents benefit malware spreading.
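
    The final qualitative claim can be checked against the standard degree-based mean-field threshold, lambda_c = <k>/<k^2> (a textbook formula; the exponents and degree cutoff below are arbitrary choices): lowering the power-law exponent fattens the tail, raises <k^2>, and drives the threshold down.

      import numpy as np

      k = np.arange(1, 1000, dtype=float)
      for gamma in (2.2, 2.5, 3.0):
          pk = k**(-gamma)
          pk /= pk.sum()                        # normalized degree distribution
          lam_c = (k * pk).sum() / ((k**2) * pk).sum()
          print(f"gamma = {gamma}:  lambda_c = {lam_c:.4f}")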

  12. Reducing the Complexity of an Agent-Based Local Heroin Market Model

    PubMed Central

    Heard, Daniel; Bobashev, Georgiy V.; Morris, Robert J.

    2014-01-01

    This project explores techniques for reducing the complexity of an agent-based model (ABM). The analysis involved a model developed from the ethnographic research of Dr. Lee Hoffer on the Larimer-area heroin market, which included drug users, drug sellers, homeless individuals, and police. The authors used statistical techniques to create a reduced version of the original model which maintained simulation fidelity while reducing computational complexity. This involved identifying key summary quantities of individual customer behavior as well as overall market activity, and replacing some agents with probability distributions and regressions. The model was then extended to allow external market interventions in the form of police busts. Extensions of this research perspective, as well as its strengths and limitations, are discussed. PMID:25025132

  13. Refiners Switch to RFG Complex Model

    EIA Publications

    1998-01-01

    On January 1, 1998, domestic and foreign refineries and importers must stop using the "simple" model and begin using the "complex" model to calculate emissions of volatile organic compounds (VOC), toxic air pollutants (TAP), and nitrogen oxides (NOx) from motor gasoline. The primary differences in the application of the two models are that some refineries may have to meet stricter standards for the sulfur and olefin content of the reformulated gasoline (RFG) they produce, and all refineries will now be held accountable for NOx emissions. Requirements for calculating emissions from conventional gasoline under the anti-dumping rule change similarly for exhaust TAP and NOx. However, the change to the complex model is not expected to result in an increase in the price premium for RFG or to constrain supplies.

  14. KSC-2012-2894

    NASA Image and Video Library

    2012-05-21

    CAPE CANAVERAL, Fla. – The high-fidelity space shuttle model awaits loading onto a barge at Kennedy Space Center’s Launch Complex 39 turn basin in Florida. The model is being transported via barge from Kennedy to Space Center Houston, NASA Johnson Space Center's visitor center in Texas. The model was built in Apopka, Fla., by Guard-Lee and installed at the Kennedy Space Center Visitor Complex in 1993. The model has been parked at the turn basin for the past five months to allow the Kennedy Space Center Visitor Complex to begin building a new facility to display space shuttle Atlantis in 2013. For more information about Space Center Houston, visit http://www.spacecenter.org. Photo credit: NASA/Frankie Martin

  15. KSC-2012-2895

    NASA Image and Video Library

    2012-05-21

    CAPE CANAVERAL, Fla. – The high-fidelity space shuttle model awaits loading onto a barge at Kennedy Space Center’s Launch Complex 39 turn basin in Florida. The model is being transported via barge from Kennedy to Space Center Houston, NASA Johnson Space Center's visitor center in Texas. The model was built in Apopka, Fla., by Guard-Lee and installed at the Kennedy Space Center Visitor Complex in 1993. The model has been parked at the turn basin for the past five months to allow the Kennedy Space Center Visitor Complex to begin building a new facility to display space shuttle Atlantis in 2013. For more information about Space Center Houston, visit http://www.spacecenter.org. Photo credit: NASA/Frankie Martin

  16. Capturing complexity in work disability research: application of system dynamics modeling methodology.

    PubMed

    Jetha, Arif; Pransky, Glenn; Hettinger, Lawrence J

    2016-01-01

    Work disability (WD) is characterized by variable and occasionally undesirable outcomes. The underlying determinants of WD outcomes include patterns of dynamic relationships among health, personal, organizational and regulatory factors that have been challenging to characterize, and inadequately represented by contemporary WD models. System dynamics modeling (SDM) methodology applies a sociotechnical systems thinking lens to view WD systems as comprising a range of influential factors linked by feedback relationships. SDM can potentially overcome limitations in contemporary WD models by uncovering causal feedback relationships, and conceptualizing dynamic system behaviors. It employs a collaborative and stakeholder-based model building methodology to create a visual depiction of the system as a whole. SDM can also enable researchers to run dynamic simulations to provide evidence of anticipated or unanticipated outcomes that could result from policy and programmatic intervention. SDM may advance rehabilitation research by providing greater insights into the structure and dynamics of WD systems while helping to understand inherent complexity. Challenges related to data availability, determining validity, and the extensive time and technical skill requirements for model building may limit SDM's use in the field and should be considered. Contemporary work disability (WD) models provide limited insight into complexity associated with WD processes. System dynamics modeling (SDM) has the potential to capture complexity through a stakeholder-based approach that generates a simulation model consisting of multiple feedback loops. SDM may enable WD researchers and practitioners to understand the structure and behavior of the WD system as a whole, and inform development of improved strategies to manage straightforward and complex WD cases.

  17. Surface structural ion adsorption modeling of competitive binding of oxyanions by metal (hydr)oxides

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hiemstra, T.; Riemsdijk, W.H. van

    1999-02-01

    An important challenge in surface complexation models (SCM) is to connect the molecular microscopic reality to macroscopic adsorption phenomena. This study elucidates the primary factor controlling the adsorption process by analyzing the adsorption and competition of PO4, AsO4, and SeO3. The authors show that the structure of the surface complex acting in the dominant electrostatic field can be ascertained as the primary controlling adsorption factor. The surface species of arsenate are identical to those of phosphate, and the adsorption behavior is very similar. On the basis of the selenite adsorption, the authors show that the commonly used 1pK models are incapable of incorporating into the adsorption modeling the correct bidentate binding mechanism found by spectroscopy. The use of the bidentate mechanism leads to a proton-oxyanion ratio and corresponding pH dependence that are too large. The inappropriate intrinsic charge attribution to the primary surface groups and the condensation of the inner-sphere surface complex to a point charge are responsible for this behavior of commonly used 2pK models. Both key factors are defined differently in the charge distribution multi-site complexation (CD-MUSIC) model, where they are based on a surface structural approach. The CD-MUSIC model can successfully describe the macroscopic adsorption phenomena using the surface speciation and binding mechanisms found by spectroscopy. The model is also able to predict anion competition well. The charge distribution in the interface is in agreement with the observed structure of surface complexes.

  18. Hybrid modeling and empirical analysis of automobile supply chain network

    NASA Astrophysics Data System (ADS)

    Sun, Jun-yan; Tang, Jian-ming; Fu, Wei-ping; Wu, Bing-ying

    2017-05-01

    Based on the connection mechanism by which nodes automatically select upstream and downstream agents, a simulation model for the dynamic evolutionary process of a consumer-driven automobile supply chain is established by integrating ABM and discrete modeling on a GIS-based map. First, the model's soundness is verified by analyzing the consistency of sales and changes in various agent parameters between the simulation model and a real automobile supply chain. Second, through complex network theory, hierarchical structures of the model and relationships of networks at different levels are analyzed to calculate characteristic parameters such as mean distance, mean clustering coefficients, and degree distributions. By doing so, it is verified that the model is a typical scale-free and small-world network. Finally, the motion law of this model is analyzed from the perspective of complex self-adaptive systems. The chaotic state of the simulation system is verified, which suggests that this system has typical nonlinear characteristics. This model not only macroscopically illustrates the dynamic evolution of complex networks in the automobile supply chain but also microscopically reflects the business process of each agent. Moreover, constructing and simulating the system by combining CAS theory and complex networks supplies a novel method for supply chain analysis, as well as a theoretical basis and practical experience for the supply chain analysis of auto companies.
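
    The network diagnostics mentioned are routine to compute; the sketch below uses a Barabási-Albert random graph as a stand-in for the simulated supply chain (an illustration only, not the authors' data). Small-world behavior shows up as appreciable clustering combined with short paths relative to network size.

      import networkx as nx

      G = nx.barabasi_albert_graph(n=1000, m=3, seed=42)   # scale-free stand-in network
      print("mean clustering coefficient:", round(nx.average_clustering(G), 3))
      print("mean shortest-path length:  ", round(nx.average_shortest_path_length(G), 2))
      print("max degree (hub):", max(d for _, d in G.degree()))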

  19. Frequency analysis of stress relaxation dynamics in model asphalts

    NASA Astrophysics Data System (ADS)

    Masoori, Mohammad; Greenfield, Michael L.

    2014-09-01

    Asphalt is an amorphous or semi-crystalline material whose mechanical performance relies on viscoelastic responses to applied strain or stress. Chemical composition and its effect on the viscoelastic properties of model asphalts have been investigated here by computing the complex modulus from molecular dynamics simulation results for two different model asphalts whose compositions each resemble the Strategic Highway Research Program AAA-1 asphalt in different ways. For a model system that contains smaller molecules, simulation results for the storage and loss modulus at 443 K reach both the low- and high-frequency scaling limits of the Maxwell model. Results for a model system composed of larger molecules (molecular weights 300-900 g/mol) with longer branches show a quantitatively higher complex modulus that decreases significantly as temperature increases over 400-533 K. Simulation results for its loss modulus approach the low-frequency scaling limit of the Maxwell model only at the highest temperature simulated. A Black plot or van Gurp-Palmen plot of complex modulus vs. phase angle for the system of larger molecules suggests some overlap among results at different temperatures at all but the highest frequencies, with an interdependence consistent with the empirical Christensen-Anderson-Marasteanu model. Both model asphalts are thermorheologically complex at very high frequencies, where they show a loss peak that appears to be independent of temperature and density.
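
    The Maxwell-model limits referred to are the standard single-relaxation-time expressions (G_0 and tau are generic symbols here, not values fitted in the study):

      % storage and loss moduli of a single Maxwell element:
      G'(\omega)  = G_0 \, \frac{\omega^{2}\tau^{2}}{1 + \omega^{2}\tau^{2}}, \qquad
      G''(\omega) = G_0 \, \frac{\omega\tau}{1 + \omega^{2}\tau^{2}}
      % low-frequency scaling:  G' \sim \omega^{2}, \quad G'' \sim \omega
      % high-frequency limits:  G' \to G_0, \quad G'' \sim \omega^{-1}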

  20. A 3D puzzle approach to building protein-DNA structures.

    PubMed

    Hinton, Deborah M

    2017-03-15

    Despite recent advances in structural analysis, it is still challenging to obtain a high-resolution structure for a complex of RNA polymerase, transcriptional factors, and DNA. However, using biochemical constraints, 3D printed models of available structures, and computer modeling, one can build biologically relevant models of such supramolecular complexes.
