Sample records for simple model representing

  1. Modeling shared resources with generalized synchronization within a Petri net bottom-up approach.

    PubMed

    Ferrarini, L; Trioni, M

    1996-01-01

    This paper proposes a simple and effective way to represent shared resources in manufacturing systems within a previously developed Petri net model. The model relies on a bottom-up, modular approach to synthesis and analysis. The designer may define elementary tasks and then connect them with one another through three kinds of connections: self-loops, inhibitor arcs and simple synchronizations. A theoretical framework has been established for the analysis of liveness and reversibility of such models. The generalized synchronization, formalized here, extends the simple synchronization by allowing the merging of suitable subnets among elementary tasks. It is proved that, under suitable but not restrictive hypotheses, a generalized synchronization may be substituted for a simple one, and thus remains compatible with the entire theoretical framework developed.
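The firing semantics sketched in the abstract (tokens, input arcs, inhibitor arcs, a shared resource gating competing tasks) can be illustrated in a few lines. The net below is a hypothetical example for illustration, not one from the paper: place `R` holds the shared resource, and two elementary tasks compete for it.

```python
# Minimal Petri net sketch with inhibitor arcs, illustrating how a shared
# resource place gates two elementary tasks. Illustrative example only.

class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)      # place -> token count
        self.transitions = {}             # name -> (inputs, outputs, inhibitors)

    def add_transition(self, name, inputs, outputs, inhibitors=()):
        self.transitions[name] = (inputs, outputs, tuple(inhibitors))

    def enabled(self, name):
        inputs, _, inhibitors = self.transitions[name]
        return (all(self.marking[p] >= w for p, w in inputs.items())
                and all(self.marking[p] == 0 for p in inhibitors))

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"{name} is not enabled")
        inputs, outputs, _ = self.transitions[name]
        for p, w in inputs.items():
            self.marking[p] -= w
        for p, w in outputs.items():
            self.marking[p] += w

# Two tasks competing for one shared resource R (a simple synchronization):
net = PetriNet({"idle1": 1, "idle2": 1, "busy1": 0, "busy2": 0, "R": 1})
net.add_transition("start1", {"idle1": 1, "R": 1}, {"busy1": 1})
net.add_transition("start2", {"idle2": 1, "R": 1}, {"busy2": 1})
net.add_transition("end1", {"busy1": 1}, {"idle1": 1, "R": 1})

net.fire("start1")                 # task 1 acquires the resource
assert not net.enabled("start2")   # task 2 is blocked while R is empty
net.fire("end1")                   # releasing R re-enables task 2
assert net.enabled("start2")
```

Liveness here amounts to the resource token always returning to `R`, which is what the paper's synchronization conditions are designed to guarantee.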

  2. A simple biosphere model (SiB) for use within general circulation models

    NASA Technical Reports Server (NTRS)

    Sellers, P. J.; Mintz, Y.; Sud, Y. C.; Dalcher, A.

    1986-01-01

    A simple, realistic biosphere model for calculating the transfer of energy, mass and momentum between the atmosphere and the vegetated surface of the earth has been developed for use in atmospheric general circulation models. The vegetation in each terrestrial model grid area is represented by an upper level, representing the perennial canopy of trees and shrubs, and a lower level, representing the annual cover of grasses and other herbaceous species. The vegetation morphology and the physical and physiological properties of the vegetation layers determine such properties as: the reflection, transmission, absorption and emission of direct and diffuse radiation; the infiltration, drainage, and storage of the residual rainfall in the soil; and the control over stomatal functioning. The model, with prescribed vegetation parameters and interactive soil moisture, can be used for prediction of the atmospheric circulation and precipitation fields for short periods of up to a few weeks.

  3. A simple object-oriented and open-source model for scientific and policy analyses of the global climate system – Hector v1.0

    DOE PAGES

    Hartin, Corinne A.; Patel, Pralit L.; Schwarber, Adria; ...

    2015-04-01

    Simple climate models play an integral role in the policy and scientific communities. They are used for climate mitigation scenarios within integrated assessment models, complex climate model emulation, and uncertainty analyses. Here we describe Hector v1.0, an open-source, object-oriented, simple global climate carbon-cycle model. This model runs essentially instantaneously while still representing the most critical global-scale earth system processes. Hector has a three-part main carbon cycle: a one-pool atmosphere, land, and ocean. The model's terrestrial carbon cycle includes primary production and respiration fluxes, accommodating arbitrary geographic divisions into, e.g., ecological biomes or political units. Hector actively solves the inorganic carbon system in the surface ocean, directly calculating air-sea fluxes of carbon and ocean pH. Hector reproduces the global historical trends of atmospheric [CO2], radiative forcing, and surface temperatures. The model simulates all four Representative Concentration Pathways (RCPs) with equivalent rates of change of key variables over time compared to current observations, MAGICC (a well-known simple climate model), and models from phase 5 of the Coupled Model Intercomparison Project (CMIP5). Hector's flexibility, open-source nature, and modular design will facilitate a broad range of research in various areas.

  4. Physical models of collective cell motility: from cell to tissue

    NASA Astrophysics Data System (ADS)

    Camley, B. A.; Rappel, W.-J.

    2017-03-01

    In this article, we review physics-based models of collective cell motility. We discuss a range of techniques at different scales, ranging from models that represent cells as simple self-propelled particles to phase field models that can represent a cell’s shape and dynamics in great detail. We also extensively review the ways in which cells within a tissue choose their direction, the statistics of cell motion, and some simple examples of how cell-cell signaling can interact with collective cell motility. This review also covers in more detail selected recent works on collective cell motion of small numbers of cells on micropatterns, in wound healing, and the chemotaxis of clusters of cells.

  5. Realistic Anatomical Prostate Models for Surgical Skills Workshops Using Ballistic Gelatin for Nerve-Sparing Radical Prostatectomy and Fruit for Simple Prostatectomy

    PubMed Central

    Lindner, Uri; Klotz, Laurence

    2011-01-01

    Purpose: Understanding of prostate anatomy has evolved as techniques have been refined and improved for radical prostatectomy (RP), particularly regarding the importance of the neurovascular bundles for erectile function. The objectives of this study were to develop inexpensive and simple but anatomically accurate prostate models, not involving human or animal elements, to teach the terminology and practical aspects of nerve-sparing RP and simple prostatectomy (SP).

    Materials and Methods: The RP model used a Foley catheter with ballistic gelatin in the balloon, and mesh fabric (neurovascular bundles) and balloons (prostatic fascial layers) on either side for the practice of inter- and intrafascial techniques. The SP model required only a ripe clementine, for which the skin represented compressed normal prostate, the pulp represented benign tissue, and the pith mimicked fibrous adhesions. A modification with a balloon through the fruit center acted as a "urethra."

    Results: Both models were easily created and successfully represented the principles of anatomical nerve-sparing RP and SP. Both models were tested in workshops by urologists and residents of differing levels, with positive feedback.

    Conclusions: Low-fidelity models for prostate anatomy demonstration and surgical practice are feasible. They are inexpensive and simple to construct. Importantly, these models can be used for education on the practical aspects of nerve-sparing RP and SP. The models will require further validation as educational and competency tools, but as we move to an era in which human donors and animal experiments become less ethical and more difficult to complete, so too will low-fidelity models become more attractive. PMID:21379431

  6. Combinatorial structures to modeling simple games and applications

    NASA Astrophysics Data System (ADS)

    Molinero, Xavier

    2017-09-01

    We connect three different topics: combinatorial structures, game theory and chemistry. In particular, we establish the bases to represent some simple games, defined as influence games, and molecules, defined from atoms, by using combinatorial structures. First, we characterize simple games as influence games using influence graphs. This lets us model simple games as combinatorial structures (from the viewpoint of structures or graphs). Second, we formally define molecules as combinations of atoms. This lets us model molecules as combinatorial structures (from the viewpoint of combinations). It remains open to generate such combinatorial structures using specific techniques such as genetic algorithms, (meta-)heuristic algorithms and parallel programming, among others.

  7. An investigation of the astronomical theory of the ice ages using a simple climate-ice sheet model

    NASA Technical Reports Server (NTRS)

    Pollard, D.

    1978-01-01

    The astronomical theory of the Quaternary ice ages is incorporated into a simple climate model for global weather; important features of the model include the albedo feedback, topography and dynamics of the ice sheets. For various parameterizations of the orbital elements, the model yields realistic assessments of the northern ice sheet. Lack of a land-sea heat capacity contrast represents one of the chief difficulties of the model.

  8. Patterns of Response Times and Response Choices to Science Questions: The Influence of Relative Processing Time

    ERIC Educational Resources Information Center

    Heckler, Andrew F.; Scaife, Thomas M.

    2015-01-01

    We report on five experiments investigating response choices and response times to simple science questions that evoke student "misconceptions," and we construct a simple model to explain the patterns of response choices. Physics students were asked to compare a physical quantity represented by the slope, such as speed, on simple physics…

  9. A simple model of the effect of ocean ventilation on ocean heat uptake

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nadiga, Balasubramanya T.; Urban, Nathan Mark

    Presentation includes slides on: Earth System Models vs. Simple Climate Models; A Popular SCM: Energy Balance Model of Anomalies; On calibrating against one ESM experiment, the SCM correctly captures that ESM's surface warming response with other forcings; Multi-Model Analysis: Multiple ESMs, Single SCM; Posterior Distributions of ECS; However, In Excess of 90% of TOA Energy Imbalance is Sequestered in the World Oceans; Heat Storage in the Two Layer Model; Including TOA Radiative Imbalance and Ocean Heat in Calibration Improves Representation, but Significant Errors Persist; Improved Vertical Resolution Does Not Fix Problem; A Series of Experiments Confirms That Anomaly-Diffusing Models Cannot Properly Represent Ocean Heat Uptake; Physics of the Thermocline; Outcropping Isopycnals and Horizontally-Averaged Layers; Local interactions between outcropping isopycnals lead to non-local interactions between horizontally-averaged layers; Both Surface Warming and Ocean Heat are Well Represented With Just 4 Layers; A Series of Experiments Confirms That When Non-Local Interactions are Allowed, the SCMs Can Represent Both Surface Warming and Ocean Heat Uptake; and Summary and Conclusions.
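The "Energy Balance Model of Anomalies" and the two-layer heat storage discussed in the slides can be sketched as a pair of coupled ODEs for the surface and deep-ocean temperature anomalies. The parameter values below are generic illustrative choices, not the values calibrated in the presentation.

```python
# Sketch of a two-layer anomaly energy-balance model: a surface layer with
# heat capacity C exchanges heat (coefficient gamma) with a deep layer of
# capacity C0, under radiative forcing F and feedback lam. Illustrative
# parameters only.
def two_layer_ebm(forcing, lam=1.2, gamma=0.7, C=8.0, C0=100.0, dt=1.0):
    """Euler-step  C dT/dt  = F - lam*T - gamma*(T - T0)
                   C0 dT0/dt = gamma*(T - T0)
    where T, T0 are surface and deep-ocean temperature anomalies (K)."""
    T, T0, out = 0.0, 0.0, []
    for F in forcing:
        dT = (F - lam * T - gamma * (T - T0)) / C
        dT0 = gamma * (T - T0) / C0
        T, T0 = T + dt * dT, T0 + dt * dT0
        out.append((T, T0))
    return out

# Step forcing of 3.7 W/m^2 (roughly a CO2 doubling) held for 500 years:
traj = two_layer_ebm([3.7] * 500)
T_final, T0_final = traj[-1]
# T approaches F/lam from below while the deep ocean slowly fills with heat
```

The slides' point is that such anomaly-diffusing layers cannot capture ocean heat uptake well without non-local interactions; this sketch only shows the baseline structure being criticized.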

  10. An egalitarian network model for the emergence of simple and complex cells in visual cortex

    PubMed Central

    Tao, Louis; Shelley, Michael; McLaughlin, David; Shapley, Robert

    2004-01-01

    We explain how simple and complex cells arise in a large-scale neuronal network model of the primary visual cortex of the macaque. Our model consists of ≈4,000 integrate-and-fire, conductance-based point neurons, representing the cells in a small, 1-mm2 patch of an input layer of the primary visual cortex. In the model the local connections are isotropic and nonspecific, and convergent input from the lateral geniculate nucleus confers cortical cells with orientation and spatial phase preference. The balance between lateral connections and lateral geniculate nucleus drive determines whether individual neurons in this recurrent circuit are simple or complex. The model reproduces qualitatively the experimentally observed distributions of both extracellular and intracellular measures of simple and complex response. PMID:14695891
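The integrate-and-fire mechanism at the core of such network models can be illustrated with a single current-driven leaky integrate-and-fire neuron. The parameters below are generic textbook values, not those of the paper's conductance-based macaque model.

```python
# Minimal leaky integrate-and-fire neuron: the membrane potential v decays
# toward rest, is driven by input current I, and resets after a spike.
# Illustrative parameters (mV, ms), not the paper's.
def lif(I, dt=0.1, tau=10.0, v_rest=-65.0, v_thresh=-50.0, v_reset=-65.0):
    """Euler-integrate tau*dv/dt = -(v - v_rest) + I; return spike times (ms)."""
    v, spikes = v_rest, []
    for step, i_t in enumerate(I):
        v += dt * (-(v - v_rest) + i_t) / tau
        if v >= v_thresh:
            spikes.append(step * dt)
            v = v_reset
    return spikes

# A constant 20 mV drive for 100 ms produces regular, repeated spiking:
spikes = lif([20.0] * 1000)
```

In the paper's network, thousands of such units are coupled through conductances, and the balance of lateral versus geniculate input decides whether a given unit responds as a simple or complex cell.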

  11. pyhector: A Python interface for the simple climate model Hector

    DOE PAGES

    Willner, Sven N.; Hartin, Corinne; Gieseke, Robert

    2017-04-01

    Here, pyhector is a Python interface for the simple climate model Hector (Hartin et al. 2015) developed in C++. Simple climate models like Hector can, for instance, be used in the analysis of scenarios within integrated assessment models like GCAM, in the emulation of complex climate models, and in uncertainty analyses. Hector is an open-source, object-oriented, simple global climate carbon-cycle model. Its carbon cycle consists of a one-pool atmosphere, three terrestrial pools which can be broken down into finer biomes or regions, and four carbon pools in the ocean component. The terrestrial carbon cycle includes primary production and respiration fluxes. The ocean carbon cycle circulates carbon via a simplified thermohaline circulation, calculating air-sea fluxes as well as the marine carbonate system. The model input is a time series of greenhouse gas emissions; as example scenarios for these, the pyhector package contains the Representative Concentration Pathways (RCPs).
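The "one-pool atmosphere" structure that Hector-style models build on can be sketched as a single carbon mass balance: emissions add CO2, and land and ocean sinks remove it in proportion to the excess over the preindustrial level. The uptake coefficients below are illustrative toy values, not Hector's actual parameterisation or API.

```python
# Toy one-pool atmospheric carbon budget in the spirit of a simple climate
# model: concentration rises with emissions and relaxes via land and ocean
# uptake proportional to the excess over preindustrial c0. Illustrative only.
def atmosphere_co2(emissions, c0=278.0, k_land=0.01, k_ocean=0.015):
    """CO2 in ppm; emissions in ppm/yr; uptake scales with (c - c0)."""
    c, path = c0, [c0]
    for e in emissions:
        uptake = (k_land + k_ocean) * (c - c0)
        c += e - uptake
        path.append(c)
    return path

# Emitting 2 ppm/yr for a century: concentration rises but sinks strengthen,
# so growth slows toward the balance point c0 + e/(k_land + k_ocean).
path = atmosphere_co2([2.0] * 100)
```

Hector itself resolves this far more finely (multiple terrestrial and ocean pools, carbonate chemistry), but the feedback structure is the same.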

  12. A comparison of the structureborne and airborne paths for propfan interior noise

    NASA Technical Reports Server (NTRS)

    Eversman, W.; Koval, L. R.; Ramakrishnan, J. V.

    1986-01-01

    A comparison is made between the relative levels of aircraft interior noise related to structureborne and airborne paths for the same propeller source. A simple, but physically meaningful, model of the structure treats the fuselage interior as a rectangular cavity with five rigid walls. The sixth wall, the fuselage sidewall, is a stiffened panel. The wing is modeled as a simple beam carried into the fuselage by a large discrete stiffener representing the carry-through structure. The fuselage interior is represented by analytically-derived acoustic cavity modes and the entire structure is represented by structural modes derived from a finite element model. The noise source for structureborne noise is the unsteady lift generation on the wing due to the rotating trailing vortex system of the propeller. The airborne noise source is the acoustic field created by a propeller model consistent with the vortex representation. Comparisons are made on the basis of interior noise over a range of propeller rotational frequencies at a fixed thrust.

  13. Memory-Based Simple Heuristics as Attribute Substitution: Competitive Tests of Binary Choice Inference Models.

    PubMed

    Honda, Hidehito; Matsuka, Toshihiko; Ueda, Kazuhiro

    2017-05-01

    Some researchers on binary choice inference have argued that people make inferences based on simple heuristics, such as recognition, fluency, or familiarity. Others have argued that people make inferences based on available knowledge. To examine the boundary between heuristic and knowledge usage, we examine binary choice inference processes in terms of attribute substitution in heuristic use (Kahneman & Frederick, 2005). In this framework, it is predicted that people will rely on heuristic or knowledge-based inference depending on the subjective difficulty of the inference task. We conducted competitive tests of binary choice inference models representing simple heuristics (fluency and familiarity heuristics) and knowledge-based inference models. We found that a simple heuristic model (especially a familiarity heuristic model) explained inference patterns for subjectively difficult inference tasks, and that a knowledge-based inference model explained subjectively easy inference tasks. These results were consistent with the predictions of the attribute substitution framework. Issues concerning the usage of simple heuristics and the underlying psychological processes are discussed. Copyright © 2016 Cognitive Science Society, Inc.

  14. A survey of commercial object-oriented database management systems

    NASA Technical Reports Server (NTRS)

    Atkins, John

    1992-01-01

    The object-oriented data model is the culmination of over thirty years of database research. Initially, database research focused on the need to provide information in a consistent and efficient manner to the business community. Early data models such as the hierarchical model and the network model met the goal of consistent and efficient access to data and were substantial improvements over simple file mechanisms for storing and accessing data. However, these models required highly skilled programmers to provide access to the data. Consequently, in the early 70's, E.F. Codd, an IBM research computer scientist, proposed a new data model based on the simple mathematical notion of the relation. This model is known as the Relational Model. In the relational model, data is represented in flat tables (or relations) which have no physical or internal links between them. The simplicity of this model fostered the development of powerful but relatively simple query languages that made data directly accessible to the general database user. Except for large, multi-user database systems, a database professional was in general no longer necessary. Database professionals found that traditional data in the form of character data, dates, and numeric data were easily represented and managed via the relational model. Commercial relational database management systems proliferated and performance of relational databases improved dramatically. However, there was a growing community of potential database users whose needs were not met by the relational model. These users needed to store data with data types not available in the relational model and required a far richer modelling environment than that provided by the relational model. Indeed, the complexity of the objects to be represented in the model mandated a new approach to database technology. The Object-Oriented Model was the result.

  15. Calibrating the simple biosphere model for Amazonian tropical forest using field and remote sensing data. I - Average calibration with field data

    NASA Technical Reports Server (NTRS)

    Sellers, Piers J.; Shuttleworth, W. James; Dorman, Jeff L.; Dalcher, Amnon; Roberts, John M.

    1989-01-01

    Using meteorological and hydrological measurements taken in and above the central-Amazon-basin tropical forest, the calibration of the Sellers et al. (1986) simple biosphere (SiB) model is described. The SiB model is a one-dimensional soil-vegetation-atmosphere model designed for use within general circulation models (GCMs), representing the vegetation cover by analogy with processes operating within a single representative plant. The experimental systems and the procedures used to obtain field data are described, together with the specification of the physiological parameterization required to provide an average description of the data. It was found that some of the existing literature on stomatal behavior for tropical species is inconsistent with the observed behavior of the complete canopy in Amazonia, and that the rainfall interception store of the canopy is considerably smaller than originally specified in the SiB model.

  16. Small should be the New Big: High-resolution Models with Small Segments have Big Advantages when Modeling Eutrophication in the Great Lakes

    EPA Science Inventory

    Historical mathematical models, especially Great Lakes eutrophication models, traditionally used coarse segmentation schemes and relatively simple hydrodynamics to represent system behavior. Although many modelers have claimed success using such models, these representations can ...

  17. Masquerade Detection Using a Taxonomy-Based Multinomial Modeling Approach in UNIX Systems

    DTIC Science & Technology

    2008-08-25

    …primarily the modeling of statistical features, such as the frequency of events, the duration of events, the co-occurrence of multiple events… are identified, we can extract features representing such behavior while auditing the user's behavior. Figure 1: Taxonomy of Linux and Unix… achieved when the features are extracted just from simple commands. [Truncated results table: Method | Hit Rate | False Positive Rate; ocSVM using simple commands (frequency-based)…]

  18. Mathematical model for steady state, simple ampholyte isoelectric focusing: Development, computer simulation and implementation

    NASA Technical Reports Server (NTRS)

    Palusinski, O. A.; Allgyer, T. T.

    1979-01-01

    The elimination of Ampholine from the system by establishing the pH gradient with simple ampholytes is proposed. A mathematical model was exercised at the level of the two-component system by using values for mobilities, diffusion coefficients, and dissociation constants representative of glutamic acid and histidine. The constants assumed in the calculations are reported. The predictions of the model and the computer simulation of isoelectric focusing experiments are of direct importance for obtaining Ampholine-free, stable pH gradients.

  19. Analysis and Modeling of Ground Operations at Hub Airports

    NASA Technical Reports Server (NTRS)

    Atkins, Stephen (Technical Monitor); Andersson, Kari; Carr, Francis; Feron, Eric; Hall, William D.

    2000-01-01

    Building simple and accurate models of hub airports can considerably help one understand airport dynamics, and may provide quantitative estimates of operational airport improvements. In this paper, three models are proposed to capture the dynamics of busy hub airport operations. Two simple queuing models are introduced to capture the taxi-out and taxi-in processes. An integer programming model aimed at representing airline decision-making attempts to capture the dynamics of the aircraft turnaround process. These models can be applied for predictive purposes. They may also be used to evaluate control strategies for improving overall airport efficiency.
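The taxi-out process described above can be sketched as a single-server FIFO queue: aircraft push back, wait if the runway is occupied, and depart in order. The data and the fixed runway-occupancy time below are a deterministic toy illustration, not the paper's statistical queuing models.

```python
# Sketch of the taxi-out process as a single-server FIFO queue: pushback
# times go in, wheels-off times come out. Deterministic toy example only.
def taxi_out(pushback_times, service_time):
    """Each aircraft occupies the runway for `service_time` minutes;
    departures are served in pushback order."""
    wheels_off, free_at = [], 0.0
    for t in pushback_times:
        start = max(t, free_at)          # wait if the runway is still busy
        free_at = start + service_time
        wheels_off.append(free_at)
    return wheels_off

# Four pushbacks a minute apart, two minutes of runway occupancy each:
off = taxi_out([0, 1, 2, 3], 2.0)  # → [2.0, 4.0, 6.0, 8.0]
# Queueing delay grows for each later aircraft as the runway saturates
```

Even this toy shows the key hub-airport dynamic: when the pushback rate exceeds the runway service rate, taxi-out delay grows linearly down the queue.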

  20. Ridge Regression for Interactive Models.

    ERIC Educational Resources Information Center

    Tate, Richard L.

    1988-01-01

    An exploratory study of the value of ridge regression for interactive models is reported. Assuming that the linear terms in a simple interactive model are centered to eliminate non-essential multicollinearity, a variety of common models, representing both ordinal and disordinal interactions, are shown to have "orientations" that are…

  1. Maximum efficiency of state-space models of nanoscale energy conversion devices

    NASA Astrophysics Data System (ADS)

    Einax, Mario; Nitzan, Abraham

    2016-07-01

    The performance of nano-scale energy conversion devices is studied in the framework of state-space models, where a device is described by a graph comprising states and transitions between them, represented by nodes and links, respectively. Particular segments of this network represent input (driving) and output processes whose properly chosen flux ratio provides the energy conversion efficiency. Simple cyclical graphs yield the Carnot efficiency for the maximum conversion yield. We give a general proof that opening a link that separates the two driving segments always leads to reduced efficiency. We illustrate these general results with simple models of a thermoelectric nanodevice and an organic photovoltaic cell. In the latter, an intersecting link of the above type corresponds to non-radiative carrier recombination, and the reduced maximum efficiency is manifested as a smaller open-circuit voltage.

  2. Maximum efficiency of state-space models of nanoscale energy conversion devices.

    PubMed

    Einax, Mario; Nitzan, Abraham

    2016-07-07

    The performance of nano-scale energy conversion devices is studied in the framework of state-space models, where a device is described by a graph comprising states and transitions between them, represented by nodes and links, respectively. Particular segments of this network represent input (driving) and output processes whose properly chosen flux ratio provides the energy conversion efficiency. Simple cyclical graphs yield the Carnot efficiency for the maximum conversion yield. We give a general proof that opening a link that separates the two driving segments always leads to reduced efficiency. We illustrate these general results with simple models of a thermoelectric nanodevice and an organic photovoltaic cell. In the latter, an intersecting link of the above type corresponds to non-radiative carrier recombination, and the reduced maximum efficiency is manifested as a smaller open-circuit voltage.

  3. A conformally flat realistic anisotropic model for a compact star

    NASA Astrophysics Data System (ADS)

    Ivanov, B. V.

    2018-04-01

    A physically realistic stellar model with a simple expression for the energy density and a conformally flat interior is found. The relations between the different physical conditions are derived without graphical proofs. It may represent a real pulsar.

  4. Room temperature ionic liquids: A simple model. Effect of chain length and size of intermolecular potential on critical temperature.

    PubMed

    Chapela, Gustavo A; Guzmán, Orlando; Díaz-Herrera, Enrique; del Río, Fernando

    2015-04-21

    A model of a room temperature ionic liquid can be represented as an ion attached to an aliphatic chain mixed with a counter ion. The simple model used in this work is based on a short rigid tangent square well chain with an ion, represented by a hard sphere interacting with a Yukawa potential at the head of the chain, mixed with a counter ion represented as well by a hard sphere interacting with a Yukawa potential of the opposite sign. The length of the chain and the depth of the intermolecular forces are investigated in order to understand which of these factors are responsible for the lowering of the critical temperature. It is the large difference between the ionic and the dispersion potentials which explains this lowering of the critical temperature. Calculation of liquid-vapor equilibrium orthobaric curves is used to estimate the critical points of the model. Vapor pressures are used to obtain an estimate of the triple point of the different models in order to calculate the span of temperatures where they remain a liquid. Surface tensions and interfacial thicknesses are also reported.
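The building blocks of this model are standard pair potentials: a hard sphere with a square well for the chain beads, and a hard sphere with a Yukawa (screened-Coulomb) tail of either sign for the ion and counter-ion. The functions below sketch those potentials with illustrative parameter values, not the ones fitted in the paper.

```python
# Pair potentials used in square-well-chain + Yukawa ionic-liquid models:
# hard core, attractive well of range lam*sigma, and a screened-Coulomb
# (Yukawa) tail whose sign distinguishes ion from counter-ion.
# Illustrative parameters only.
import math

def square_well(r, sigma=1.0, lam=1.5, eps=1.0):
    """Hard-sphere square well: inf inside the core, -eps in the well."""
    if r < sigma:
        return math.inf          # hard core
    if r < lam * sigma:
        return -eps              # attractive well
    return 0.0

def yukawa(r, sigma=1.0, eps=5.0, kappa=1.8, sign=+1):
    """Hard sphere plus Yukawa tail; sign=+1 repulsive (like charges),
    sign=-1 attractive (ion / counter-ion)."""
    if r < sigma:
        return math.inf
    return sign * eps * math.exp(-kappa * (r - sigma)) / r

assert square_well(1.2) == -1.0                      # inside the well
assert yukawa(1.0, sign=-1) < 0 < yukawa(1.0)        # attraction vs repulsion
```

The paper's point, that the gap between the deep ionic contact energy and the shallow dispersion well drives the critical temperature down, corresponds here to `eps` of `yukawa` being much larger than `eps` of `square_well`.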

  5. Acoustic backscatter models of fish: Gradual or punctuated evolution

    NASA Astrophysics Data System (ADS)

    Horne, John K.

    2004-05-01

    Sound-scattering characteristics of aquatic organisms are routinely investigated using theoretical and numerical models. Development of the inverse approach by van Holliday and colleagues in the 1970s catalyzed the development and validation of backscatter models for fish and zooplankton. As the understanding of biological scattering properties increased, so did the number and computational sophistication of backscatter models. The complexity of data used to represent modeled organisms has also evolved in parallel to model development. Simple geometric shapes representing body components or the whole organism have been replaced by anatomically accurate representations derived from imaging sensors such as computer-aided tomography (CAT) scans. In contrast, Medwin and Clay (1998) recommend that fish and zooplankton should be described by simple theories and models, without acoustically superfluous extensions. Since van Holliday's early work, how have data and computational complexity influenced the accuracy and precision of model predictions? How has the understanding of aquatic organism scattering properties increased? Significant steps in the history of model development will be identified and changes in model results will be characterized and compared. [Work supported by ONR and the Alaska Fisheries Science Center.]

  6. Comparing and combining process-based crop models and statistical models with some implications for climate change

    NASA Astrophysics Data System (ADS)

    Roberts, Michael J.; Braun, Noah O.; Sinclair, Thomas R.; Lobell, David B.; Schlenker, Wolfram

    2017-09-01

    We compare predictions of a simple process-based crop model (Soltani and Sinclair 2012), a simple statistical model (Schlenker and Roberts 2009), and a combination of both models to actual maize yields on a large, representative sample of farmer-managed fields in the Corn Belt region of the United States. After statistical post-model calibration, the process model (Simple Simulation Model, or SSM) predicts actual outcomes slightly better than the statistical model, but the combined model performs significantly better than either model. The SSM, statistical model and combined model all show similar relationships with precipitation, while the SSM better accounts for temporal patterns of precipitation, vapor pressure deficit and solar radiation. The statistical and combined models show a more negative impact associated with extreme heat for which the process model does not account. Due to the extreme heat effect, predicted impacts under uniform climate change scenarios are considerably more severe for the statistical and combined models than for the process-based model.

  7. Forgetting in Immediate Serial Recall: Decay, Temporal Distinctiveness, or Interference?

    ERIC Educational Resources Information Center

    Oberauer, Klaus; Lewandowsky, Stephan

    2008-01-01

    Three hypotheses of forgetting from immediate memory were tested: time-based decay, decreasing temporal distinctiveness, and interference. The hypotheses were represented by 3 models of serial recall: the primacy model, the SIMPLE (scale-independent memory, perception, and learning) model, and the SOB (serial order in a box) model, respectively.…

  8. The Variance Reaction Time Model

    ERIC Educational Resources Information Center

    Sikstrom, Sverker

    2004-01-01

    The variance reaction time model (VRTM) is proposed to account for various recognition data on reaction time, the mirror effect, receiver-operating-characteristic (ROC) curves, etc. The model is based on simple and plausible assumptions within a neural network: VRTM is a two layer neural network where one layer represents items and one layer…

  9. pyhector: A Python interface for the simple climate model Hector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Willner, Sven N.; Hartin, Corinne; Gieseke, Robert

    Here, pyhector is a Python interface for the simple climate model Hector (Hartin et al. 2015) developed in C++. Simple climate models like Hector can, for instance, be used in the analysis of scenarios within integrated assessment models like GCAM, in the emulation of complex climate models, and in uncertainty analyses. Hector is an open-source, object-oriented, simple global climate carbon-cycle model. Its carbon cycle consists of a one-pool atmosphere, three terrestrial pools which can be broken down into finer biomes or regions, and four carbon pools in the ocean component. The terrestrial carbon cycle includes primary production and respiration fluxes. The ocean carbon cycle circulates carbon via a simplified thermohaline circulation, calculating air-sea fluxes as well as the marine carbonate system. The model input is a time series of greenhouse gas emissions; as example scenarios for these, the pyhector package contains the Representative Concentration Pathways (RCPs).

  10. Mixing coarse-grained and fine-grained water in molecular dynamics simulations of a single system.

    PubMed

    Riniker, Sereina; van Gunsteren, Wilfred F

    2012-07-28

    The use of a supra-molecular coarse-grained (CG) model for liquid water as solvent in molecular dynamics simulations of biomolecules represented at the fine-grained (FG) atomic level of modelling may reduce the computational effort by one or two orders of magnitude. However, even if the pure FG model and the pure CG model represent the properties of the particular substance of interest rather well, their application in a hybrid FG/CG system containing varying ratios of FG versus CG particles is highly non-trivial, because it requires an appropriate balance between FG-FG, FG-CG, and CG-CG energies, and FG and CG entropies. Here, the properties of liquid water are used to calibrate the FG-CG interactions for the simple-point-charge water model at the FG level and a recently proposed supra-molecular water model at the CG level that represents five water molecules by one CG bead containing two interaction sites. Only two parameters are needed to reproduce different thermodynamic and dielectric properties of liquid water at physiological temperature and pressure for various mole fractions of CG water in FG water. The parametrisation strategy for the FG-CG interactions is simple and can be easily transferred to interactions between atomistic biomolecules and CG water.

  11. Application of Mathematical Modeling in Potentially Survivable Blast Threats in Military Vehicles

    DTIC Science & Technology

    2008-12-01

    elastic – compression and tension of the body under loading if elastic tolerances are exceeded, (b) viscous – when fluid matter is involved in the ... lumbar spine biomechanical response. The model is a simple spring-and-damper system and its equation of motion is represented as ... dynamic motion. The seat structural management system was represented using a Kelvin spring-damper element provided in MADYMO. In the actual seat system

  12. Understanding the complex dynamics of stock markets through cellular automata

    NASA Astrophysics Data System (ADS)

    Qiu, G.; Kandhai, D.; Sloot, P. M. A.

    2007-04-01

    We present a cellular automaton (CA) model for simulating the complex dynamics of stock markets. Within this model, a stock market is represented by a two-dimensional lattice, of which each vertex stands for a trader. According to typical trading behavior in real stock markets, agents of only two types are adopted: fundamentalists and imitators. Our CA model is based on local interactions, adopting simple rules for representing the behavior of traders and a simple rule for price updating. This model can reproduce, in a simple and robust manner, the main characteristics observed in empirical financial time series. Heavy-tailed return distributions due to large price variations can be generated through the imitating behavior of agents. In contrast to other microscopic simulation (MS) models, our results suggest that it is not necessary to assume a certain network topology in which agents group together, e.g., a random graph or a percolation network. That is, long-range interactions can emerge from local interactions. Volatility clustering, which also leads to heavy tails, seems to be related to the combined effect of a fast and a slow process: the evolution of the influence of news and the evolution of agents’ activity, respectively. In a general sense, these causes of heavy tails and volatility clustering appear to be common among some notable MS models that can confirm the main characteristics of financial markets.
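The trading rules described above can be sketched in a few lines. The grid layout (fundamentalists on alternating cells), parameter values, and price-update constant below are illustrative choices, not the paper's exact rules.

```python
# Minimal CA sketch of a fundamentalist/imitator market (toy rules).
import random

def step(grid, price, fundamental=100.0, random_state=None):
    """One synchronous update of an n x n trader grid; cells hold +1 (buy)
    or -1 (sell). Fundamentalists trade against mispricing; imitators copy
    the local majority of their four neighbours (periodic boundaries)."""
    rng = random_state or random
    n = len(grid)
    new = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if (i + j) % 2 == 0:  # fundamentalist (arbitrary assignment)
                new[i][j] = 1 if price < fundamental else -1
            else:                 # imitator: copy neighbour majority
                s = sum(grid[(i + di) % n][(j + dj) % n]
                        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))
                new[i][j] = 1 if s > 0 else -1 if s < 0 else rng.choice((1, -1))
    excess = sum(sum(row) for row in new)                 # net demand
    new_price = price * (1.0 + 0.001 * excess / (n * n))  # simple price rule
    return new, new_price

grid = [[1, -1, 1], [-1, 1, -1], [1, -1, 1]]
grid2, price2 = step(grid, price=99.0, random_state=random.Random(0))
```

Iterating this map and recording log-returns is how heavy tails and volatility clustering would be probed in a fuller experiment.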

  13. Modelling Simple Experimental Platform for In Vitro Study of Drug Elution from Drug Eluting Stents (DES)

    NASA Astrophysics Data System (ADS)

    Kalachev, L. V.

    2016-06-01

    We present a simple model of experimental setup for in vitro study of drug release from drug eluting stents and drug propagation in artificial tissue samples representing blood vessels. The model is further reduced using the assumption on vastly different characteristic diffusion times in the stent coating and in the artificial tissue. The model is used to derive a relationship between the times at which the measurements have to be taken for two experimental platforms, with corresponding artificial tissue samples made of different materials with different drug diffusion coefficients, to properly compare the drug release characteristics of drug eluting stents.

  14. Design and analysis of simple choice surveys for natural resource management

    USGS Publications Warehouse

    Fieberg, John; Cornicelli, Louis; Fulton, David C.; Grund, Marrett D.

    2010-01-01

    We used a simple yet powerful method for judging public support for management actions from randomized surveys. We asked respondents to rank choices (representing management regulations under consideration) according to their preference, and we then used discrete choice models to estimate probability of choosing among options (conditional on the set of options presented to respondents). Because choices may share similar unmodeled characteristics, the multinomial logit model, commonly applied to discrete choice data, may not be appropriate. We introduced the nested logit model, which offers a simple approach for incorporating correlation among choices. This forced choice survey approach provides a useful method of gathering public input; it is relatively easy to apply in practice, and the data are likely to be more informative than asking constituents to rate attractiveness of each option separately.
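As a sketch of the modelling step: the nested logit groups correlated options into nests with a scale parameter per nest, and collapses to the multinomial logit when every scale equals one. The options, nests, and utilities below are hypothetical, not the survey's estimates.

```python
# Nested logit choice probabilities (illustrative nests and utilities).
import math

def nested_logit_probs(utilities, nests, lam):
    """utilities: option -> systematic utility; nests: nest -> options;
    lam: nest -> scale in (0, 1], where lam = 1 recovers the MNL."""
    inclusive = {g: math.log(sum(math.exp(utilities[o] / lam[g]) for o in opts))
                 for g, opts in nests.items()}
    denom = sum(math.exp(lam[g] * inclusive[g]) for g in nests)
    probs = {}
    for g, opts in nests.items():
        p_nest = math.exp(lam[g] * inclusive[g]) / denom  # P(choose nest g)
        z = sum(math.exp(utilities[o] / lam[g]) for o in opts)
        for o in opts:
            probs[o] = p_nest * math.exp(utilities[o] / lam[g]) / z
    return probs

probs = nested_logit_probs({"A": 1.0, "B": 0.8, "C": 0.0},
                           {"regs": ["A", "B"], "status_quo": ["C"]},
                           {"regs": 0.5, "status_quo": 1.0})
```

Here options A and B share unmodeled characteristics (both are regulations), so they sit in one nest with lam < 1; the probabilities still sum to one across all options.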

  15. The time-dependent response of 3- and 5-layer sandwich beams

    NASA Technical Reports Server (NTRS)

    Hyer, M. W.; Oleksuk, L. S. S.; Bowles, D. E.

    1992-01-01

    Simple sandwich beam models have been developed to study the effect of the time-dependent constitutive properties of fiber-reinforced polymer matrix composites, considered for use in orbiting precision segmented reflectors, on the overall deformations. The 3- and 5-layer beam models include layers representing the face sheets, the core, and the adhesive. The static elastic deformation response of the sandwich beam models to a midspan point load is studied using the principle of stationary potential energy. In addition to quantitative conclusions, several assumptions are discussed which simplify the analysis for the case of more complicated material models. It is shown that the simple three-layer model is sufficient in many situations.

  16. A Simple Model for Nonlinear Confocal Ultrasonic Beams

    NASA Astrophysics Data System (ADS)

    Zhang, Dong; Zhou, Lin; Si, Li-Sheng; Gong, Xiu-Fen

    2007-01-01

    A confocally and coaxially arranged pair of focused transmitter and receiver represents one of the best geometries for medical ultrasonic imaging and non-invasive detection. We develop a simple theoretical model for describing the nonlinear propagation of a confocal ultrasonic beam in biological tissues. On the basis of the parabolic approximation and quasi-linear approximation, the nonlinear Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation is solved by using the angular spectrum approach. Gaussian superposition technique is applied to simplify the solution, and an analytical solution for the second harmonics in the confocal ultrasonic beam is presented. Measurements are performed to examine the validity of the theoretical model. This model provides a preliminary model for acoustic nonlinear microscopy.

  17. Collapse limit states of reinforced earth retaining walls

    NASA Astrophysics Data System (ADS)

    Bolton, M. D.; Pang, P. L. R.

    The use of systems of earth reinforcement or anchorage is gaining in popularity. It therefore becomes important to assess whether the design methods adopted for such constructions yield valid predictions of realistic limit states. Confidence in the effectiveness of limit state criteria can only be gained once a wide variety of representative limit states has been observed. Over 80 centrifugal model tests of simple reinforced earth retaining walls were carried out, with the main purpose of clarifying the nature of appropriate collapse criteria. Collapses due to an insufficiency of friction were shown to be repeatable and therefore subject to fairly simple limit state calculations.

  18. Development strategy and process models for phased automation of design and digital manufacturing electronics

    NASA Astrophysics Data System (ADS)

    Korshunov, G. I.; Petrushevskaya, A. A.; Lipatnikov, V. A.; Smirnova, M. S.

    2018-03-01

    The quality assurance strategy for electronics is regarded as most important. To provide quality, the sequence of processes is considered and modelled by a Markov chain. The improvement is distinguished by simple database means of design for manufacturing, allowing future step-by-step development. Phased automation of design and digital manufacturing of electronics is proposed. MatLab modelling results showed an increase in effectiveness. New tools and software should be more effective. A primary digital model is proposed to represent the product throughout the process sequence, from individual processes up to the whole life cycle.

  19. A discrete fibre dispersion method for excluding fibres under compression in the modelling of fibrous tissues.

    PubMed

    Li, Kewei; Ogden, Ray W; Holzapfel, Gerhard A

    2018-01-01

    Recently, micro-sphere-based methods derived from the angular integration approach have been used for excluding fibres under compression in the modelling of soft biological tissues. However, recent studies have revealed that many of the widely used numerical integration schemes over the unit sphere are inaccurate for large deformation problems even without excluding fibres under compression. Thus, in this study, we propose a discrete fibre dispersion model based on a systematic method for discretizing a unit hemisphere into a finite number of elementary areas, such as spherical triangles. Over each elementary area, we define a representative fibre direction and a discrete fibre density. Then, the strain energy of all the fibres distributed over each elementary area is approximated based on the deformation of the representative fibre direction weighted by the corresponding discrete fibre density. A summation of fibre contributions over all elementary areas then yields the resultant fibre strain energy. This treatment allows us to exclude fibres under compression in a discrete manner by evaluating the tension-compression status of the representative fibre directions only. We have implemented this model in a finite-element programme and illustrate it with three representative examples, including simple tension and simple shear of a unit cube, and non-homogeneous uniaxial extension of a rectangular strip. The results of all three examples are consistent and accurate compared with the previously developed continuous fibre dispersion model, and that is achieved with a substantial reduction of computational cost. © 2018 The Author(s).
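The discrete treatment can be sketched as: loop over the representative directions, evaluate the fibre stretch invariant I4 = M·CM, and add the density-weighted strain energy only when I4 exceeds one (tension). An exponential fibre law is a common choice, but the directions, densities, and constants here are toy values, not the paper's discretization.

```python
# Discrete fibre dispersion sketch: tension-only sum over a few
# representative directions (toy directions, densities and constants).
import math

def fibre_energy(C, directions, densities, k1=1.0, k2=1.0):
    """C: right Cauchy-Green tensor (3x3 nested lists); each representative
    direction M carries a discrete fibre density rho. Fibre law:
    W_i = k1/(2 k2) * (exp(k2 (I4 - 1)^2) - 1), counted only if I4 > 1."""
    W = 0.0
    for M, rho in zip(directions, densities):
        I4 = sum(M[a] * C[a][b] * M[b] for a in range(3) for b in range(3))
        if I4 > 1.0:  # exclude fibres under compression
            W += rho * k1 / (2.0 * k2) * (math.exp(k2 * (I4 - 1.0) ** 2) - 1.0)
    return W

# Uniaxial stretch 1.2 along x (isochoric): C = diag(l^2, 1/l, 1/l)
stretch = 1.2
C = [[stretch ** 2, 0, 0], [0, 1 / stretch, 0], [0, 0, 1 / stretch]]
dirs = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]   # toy representative directions
W = fibre_energy(C, dirs, [0.5, 0.25, 0.25])
```

Only the x-direction fibres contribute here; the transverse families are compressed and excluded, which is exactly the discrete tension-compression test the abstract describes.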

  20. Identifying and Evaluating the Relationships that Control a Land Surface Model's Hydrological Behavior

    NASA Technical Reports Server (NTRS)

    Koster, Randal D.; Mahanama, Sarith P.

    2012-01-01

    The inherent soil moisture-evaporation relationships used in today's land surface models (LSMs) arguably reflect a lot of guesswork given the lack of contemporaneous evaporation and soil moisture observations at the spatial scales represented by regional and global models. The inherent soil moisture-runoff relationships used in the LSMs are also of uncertain accuracy. Evaluating these relationships is difficult but crucial given that they have a major impact on how the land component contributes to hydrological and meteorological variability within the climate system. The relationships, it turns out, can be examined efficiently and effectively with a simple water balance model framework. The simple water balance model, driven with multi-decadal observations covering the conterminous United States, shows how different prescribed relationships lead to different manifestations of hydrological variability, some of which can be compared directly to observations. Through the testing of a wide suite of relationships, the simple model provides estimates for the underlying relationships that operate in nature and that should be operating in LSMs. We examine the relationships currently used in a number of different LSMs in the context of the simple water balance model results and make recommendations for potential first-order improvements to these LSMs.
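The simple-water-balance idea can be sketched as a bucket model in which a prescribed soil moisture-evaporation relationship and a prescribed soil moisture-runoff relationship partition each day's precipitation. The functional forms and constants below are illustrative stand-ins, not the relationships tested in the study.

```python
# Bucket-style water balance sketch with prescribed relationships
# (linear evaporation, power-law runoff; all constants illustrative).

def water_balance(precip, beta=0.5, gamma=2.0, capacity=100.0, w0=50.0):
    """precip: daily precipitation (mm). Evaporation is beta*(w/capacity)
    times a 5 mm/day potential; runoff is precip*(w/capacity)**gamma,
    so wetter soil sheds more rain. Returns final storage and the series."""
    w = w0
    evap, runoff = [], []
    for p in precip:
        e = beta * (w / capacity) * 5.0
        q = p * (w / capacity) ** gamma
        w = min(capacity, max(0.0, w + p - e - q))
        evap.append(e)
        runoff.append(q)
    return w, evap, runoff

final_w, evap, runoff = water_balance([20.0] * 10)
```

Swapping in different beta/gamma forms and comparing the simulated runoff against observations is, in miniature, the "wide suite of relationships" exercise the abstract describes.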

  1. Slow-Slip Phenomena Represented by the One-Dimensional Burridge-Knopoff Model of Earthquakes

    NASA Astrophysics Data System (ADS)

    Kawamura, Hikaru; Yamamoto, Maho; Ueda, Yushi

    2018-05-01

    Slow-slip phenomena, including afterslips and silent earthquakes, are studied using a one-dimensional Burridge-Knopoff model that obeys the rate-and-state dependent friction law. By varying only a few model parameters, this simple model allows reproducing a variety of seismic slips within a single framework, including main shocks, precursory nucleation processes, afterslips, and silent earthquakes.

  2. A Comparative Study of Multi-material Data Structures for Computational Physics Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garimella, Rao Veerabhadra; Robey, Robert W.

    The data structures used to represent the multi-material state of a computational physics application can have a drastic impact on the performance of the application. We look at efficient data structures for sparse applications where there may be many materials, but only one or a few in most computational cells. We develop simple performance models for use in selecting possible data structures and programming patterns. We verify the analytic models of performance through a small test program of the representative cases.

  3. A detailed comparison of optimality and simplicity in perceptual decision-making

    PubMed Central

    Shen, Shan; Ma, Wei Ji

    2017-01-01

    Two prominent ideas in the study of decision-making have been that organisms behave near-optimally, and that they use simple heuristic rules. These principles might be operating in different types of tasks, but this possibility cannot be fully investigated without a direct, rigorous comparison within a single task. Such a comparison was lacking in most previous studies, because a) the optimal decision rule was simple; b) no simple suboptimal rules were considered; c) it was unclear what was optimal, or d) a simple rule could closely approximate the optimal rule. Here, we used a perceptual decision-making task in which the optimal decision rule is well-defined and complex, and makes qualitatively distinct predictions from many simple suboptimal rules. We find that all simple rules tested fail to describe human behavior, that the optimal rule accounts well for the data, and that several complex suboptimal rules are indistinguishable from the optimal one. Moreover, we found evidence that the optimal model is close to the true model: first, the better the trial-to-trial predictions of a suboptimal model agree with those of the optimal model, the better that suboptimal model fits; second, our estimate of the Kullback-Leibler divergence between the optimal model and the true model is not significantly different from zero. When observers receive no feedback, the optimal model still describes behavior best, suggesting that sensory uncertainty is implicitly represented and taken into account. Beyond the task and models studied here, our results have implications for best practices of model comparison. PMID:27177259

  4. Comparing fire spread algorithms using equivalence testing and neutral landscape models

    Treesearch

    Brian R. Miranda; Brian R. Sturtevant; Jian Yang; Eric J. Gustafson

    2009-01-01

    We demonstrate a method to evaluate the degree to which a meta-model approximates spatial disturbance processes represented by a more detailed model across a range of landscape conditions, using neutral landscapes and equivalence testing. We illustrate this approach by comparing burn patterns produced by a relatively simple fire spread algorithm with those generated by...

  5. Mathematical modeling of enzyme production using Trichoderma harzianum P49P11 and sugarcane bagasse as carbon source.

    PubMed

    Gelain, Lucas; da Cruz Pradella, José Geraldo; da Costa, Aline Carvalho

    2015-12-01

    A mathematical model to describe the kinetics of enzyme production by the filamentous fungus Trichoderma harzianum P49P11 was developed using a low-cost substrate (pretreated sugarcane bagasse) as the main carbon source. The model describes cell growth, the variation of substrate concentration, and the production of three kinds of enzymes (cellulases, beta-glucosidase and xylanase) at different sugarcane bagasse concentrations (5, 10, 20, 30 and 40 gL(-1)). The 10 gL(-1) concentration was used to validate the model and the others for parameter estimation. The model for enzyme production has terms implicitly representing induction and repression. Substrate variation was represented by a simple degradation rate. The models represent the kinetics well, with a good fit for the majority of the assays. Validation results indicate that the models are adequate to represent the kinetics of a biotechnological process. Copyright © 2015 Elsevier Ltd. All rights reserved.
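A minimal sketch of such a kinetic model: Monod growth on the substrate, a simple substrate degradation rate, and an enzyme production term combining an induction factor (rising with substrate) with a repression factor (falling at high substrate). All parameter values and the function name are invented, not the fitted ones.

```python
# Euler integration of a toy growth/substrate/enzyme model with implicit
# induction and repression terms (illustrative parameters only).

def simulate_fermentation(hours=100.0, dt=0.1, s0=20.0, x0=0.1,
                          mu_max=0.1, ks=5.0, yxs=0.5, kd=0.01,
                          alpha=2.0, ki=10.0):
    x, s, enz = x0, s0, 0.0
    for _ in range(int(hours / dt)):
        mu = mu_max * s / (ks + s)                     # Monod growth rate
        growth = mu * x
        degr = kd * s                                  # substrate degradation
        # production: induced by substrate, repressed when substrate is high
        prod = alpha * x * (s / (ks + s)) * (ki / (ki + s))
        x += dt * growth
        s = max(0.0, s - dt * (growth / yxs + degr))
        enz += dt * prod
    return x, s, enz

x, s, enz = simulate_fermentation()
```

The repression factor ki/(ki + s) is one simple way to make production peak at intermediate substrate levels, mirroring the implicit induction/repression terms the abstract mentions.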

  6. Serial recall of colors: Two models of memory for serial order applied to continuous visual stimuli.

    PubMed

    Peteranderl, Sonja; Oberauer, Klaus

    2018-01-01

    This study investigated the effects of serial position and temporal distinctiveness on serial recall of simple visual stimuli. Participants observed lists of five colors presented at varying, unpredictably ordered interitem intervals, and their task was to reproduce the colors in their order of presentation by selecting colors on a continuous-response scale. To control for the possibility of verbal labeling, articulatory suppression was required in one of two experimental sessions. The predictions were derived through simulation from two computational models of serial recall: SIMPLE represents the class of temporal-distinctiveness models, whereas SOB-CS represents event-based models. According to temporal-distinctiveness models, items that are temporally isolated within a list are recalled more accurately than items that are temporally crowded. In contrast, event-based models assume that the time intervals between items do not affect recall performance per se, although free time following an item can improve memory for that item because of the extended time for encoding. The experimental and the simulated data were fit to an interference measurement model to measure the tendency to confuse items with other items nearby on the list (the locality constraint) in people as well as in the models. The continuous-reproduction performance showed a pronounced primacy effect with no recency, as well as some evidence for transpositions obeying the locality constraint. Though not entirely conclusive, this evidence favors event-based models over a role for temporal distinctiveness. There was also a strong detrimental effect of articulatory suppression, suggesting that verbal codes can be used to support serial-order memory of simple visual stimuli.

  7. Generalized first-order kinetic model for biosolids decomposition and oxidation during hydrothermal treatment.

    PubMed

    Shanableh, A

    2005-01-01

    The main objective of this study was to develop generalized first-order kinetic models to represent hydrothermal decomposition and oxidation of biosolids within a wide range of temperatures (200-450 degrees C). A lumping approach was used in which oxidation of the various organic ingredients was characterized by the chemical oxygen demand (COD), and decomposition was characterized by the particulate (i.e., nonfilterable) chemical oxygen demand (PCOD). Using the Arrhenius equation (k = k(o)e(-Ea/RT)), activation energy (Ea) levels were derived from 42 continuous-flow hydrothermal treatment experiments conducted at temperatures in the range of 200-450 degrees C. Using predetermined values for k(o) in the Arrhenius equation, the activation energies of the various organic ingredients were separated into 42 values for oxidation and a similar number for decomposition. The activation energy values were then classified into levels representing the relative ease with which the organic ingredients of the biosolids were oxidized or decomposed. The resulting simple first-order kinetic models adequately represented, within the experimental data range, hydrothermal decomposition of the organic particles as measured by PCOD and oxidation of the organic content as measured by COD. The modeling approach presented in the paper provides a simple and general framework suitable for assessing the relative reaction rates of the various organic ingredients of biosolids.
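Once k(o) and Ea are fixed, the lumped first-order scheme reduces to two lines of arithmetic. The placeholder k(o) and Ea values below only illustrate the Arrhenius temperature dependence; they are not the fitted activation-energy levels from the study.

```python
# First-order decay of COD (or PCOD) with an Arrhenius rate constant
# (placeholder k0 and Ea, chosen only to show the temperature effect).
import math

R = 8.314  # gas constant, J/(mol K)

def arrhenius(k0, ea, temp_c):
    """Rate constant k = k0 * exp(-Ea / (R T)), T in kelvin."""
    return k0 * math.exp(-ea / (R * (temp_c + 273.15)))

def remaining_fraction(k0, ea, temp_c, minutes):
    """Fraction of COD (or PCOD) remaining after first-order decay."""
    return math.exp(-arrhenius(k0, ea, temp_c) * minutes)

f300 = remaining_fraction(k0=1e6, ea=1.0e5, temp_c=300, minutes=30)
f400 = remaining_fraction(k0=1e6, ea=1.0e5, temp_c=400, minutes=30)
```

As expected for a thermally activated first-order process, less material remains at 400 degrees C than at 300 degrees C for the same residence time.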

  8. On the dynamics of a human body model.

    NASA Technical Reports Server (NTRS)

    Huston, R. L.; Passerello, C. E.

    1971-01-01

    Equations of motion for a model of the human body are developed. Basically, the model consists of an elliptical cylinder representing the torso, together with a system of frustrums of elliptical cones representing the limbs. They are connected to the main body and each other by hinges and ball and socket joints. Vector, tensor, and matrix methods provide a systematic organization of the geometry. The equations of motion are developed from the principles of classical mechanics. The solution of these equations then provide the displacement and rotation of the main body when the external forces and relative limb motions are specified. Three simple example motions are studied to illustrate the method. The first is an analysis and comparison of simple lifting on the earth and the moon. The second is an elementary approach to underwater swimming, including both viscous and inertia effects. The third is an analysis of kicking motion and its effect upon a vertically suspended man such as a parachutist.

  9. A simple hyperbolic model for communication in parallel processing environments

    NASA Technical Reports Server (NTRS)

    Stoica, Ion; Sultan, Florin; Keyes, David

    1994-01-01

    We introduce a model for communication costs in parallel processing environments called the 'hyperbolic model,' which generalizes two-parameter dedicated-link models in an analytically simple way. Dedicated interprocessor links parameterized by a latency and a transfer rate that are independent of load are assumed by many existing communication models; such models are unrealistic for workstation networks. The communication system is modeled as a directed communication graph in which terminal nodes represent the application processes that initiate the sending and receiving of the information and in which internal nodes, called communication blocks (CBs), reflect the layered structure of the underlying communication architecture. The direction of graph edges specifies the flow of the information carried through messages. Each CB is characterized by a two-parameter hyperbolic function of the message size that represents the service time needed for processing the message. The parameters are evaluated in the limits of very large and very small messages. Rules are given for reducing a communication graph consisting of many CBs to an equivalent two-parameter form, while maintaining an approximation for the service time that is exact in both large and small limits. The model is validated on a dedicated Ethernet network of workstations by experiments with communication subprograms arising in scientific applications, for which a tight fit of the model predictions with actual measurements of the communication and synchronization time between end processes is demonstrated. The model is then used to evaluate the performance of two simple parallel scientific applications from partial differential equations: domain decomposition and time-parallel multigrid. In an appropriate limit, we also show the compatibility of the hyperbolic model with the recently proposed LogP model.
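A sketch of the two-parameter idea: each CB's service time follows a curve with a latency asymptote for small messages and a rate asymptote (m/rate) for large ones, and a chain of CBs reduces to one equivalent (latency, rate) pair. The particular hyperbola and the sequential reduction rule below are illustrative stand-ins, not the paper's exact formulas.

```python
# Illustrative two-parameter service-time curve and a simple
# sequential-chain reduction (stand-in formulas, not the paper's).
import math

def service_time(m, latency, rate):
    """One smooth curve with t(0) -> latency and t(m) -> m/rate for
    large m, matching the two limits the model is calibrated in."""
    return math.sqrt(latency ** 2 + (m / rate) ** 2)

def reduce_sequential(blocks):
    """Collapse a chain of (latency, rate) CBs to one equivalent pair:
    latencies add (small-message limit), while the slowest rate
    dominates (large-message limit in a pipelined chain)."""
    total_latency = sum(l for l, _ in blocks)
    bottleneck_rate = min(r for _, r in blocks)
    return total_latency, bottleneck_rate

l_eq, r_eq = reduce_sequential([(1e-4, 1e7), (5e-5, 5e6)])  # bytes/s rates
```

The reduced pair reproduces the chain's behavior exactly in both limits, which is the property the paper's graph-reduction rules are designed to preserve.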

  10. A hybrid spectral representation of phytoplankton growth and zooplankton response: The ''control rod'' model of plankton interaction

    NASA Astrophysics Data System (ADS)

    Armstrong, Robert A.

    2003-11-01

    Phytoplankton species interact through competition for light and nutrients; they also interact through grazers they hold in common. Both interactions are expected to be size-dependent: smaller phytoplankton species will be at an advantage when nutrients are scarce due to surface/volume considerations, while species that are similar in size are more likely to be consumed by grazers held in common than are species that differ greatly in size. While phytoplankton competition for nutrients and light has been extensively characterized, size-based interaction through shared grazers has not been represented systematically. The latter situation is particularly unfortunate because small changes in community structure can give rise to large changes in ecosystem dynamics and, in inverse modeling, to large changes in estimated parameter values. A simple, systematic way to represent phytoplankton interaction through shared grazers, one resistant to unintended idiosyncrasy of model construction yet capable of representing scientifically justifiable idiosyncrasy, would aid greatly in the modeling process. Here I develop a model structure that allows systematic representation of plankton interaction. In this model, the zooplankton community is represented as a continuous size spectrum, while phytoplankton species can be represented individually. The mechanistic basis of the model is a shift in the zooplankton community from carnivory to omnivory to herbivory as phytoplankton density increases. I discuss two limiting approximations in some detail, and fit both to data from the IronEx II experiment. The first limiting case represents a community with no grazer-based interaction among phytoplankton species; this approximation illuminates the general structure of the model. In particular, the zooplankton spectrum can be viewed as the analog of a control rod in a nuclear reactor, which prevents (or fails to prevent) an exponential bloom of phytoplankton. 
A second, more complex limiting case allows more general interaction of phytoplankton species along a size axis. This latter case would be suitable for describing competition among species with distinct biogeochemical roles, or between species that cause harmful algal blooms and those that do not. The model structure as a whole is therefore simple enough to guide thinking, yet detailed enough to allow quantitative prediction.

  11. A Ball Pool Model to Illustrate Higgs Physics to the Public

    ERIC Educational Resources Information Center

    Organtini, Giovanni

    2017-01-01

    A simple model is presented to explain Higgs boson physics to the general public. The model consists of a children's ball pool representing a Universe filled with a certain amount of the Higgs field. The model is suitable for use as a hands-on tool in scientific exhibits and provides a clear explanation of almost all the aspects of the physics of…

  12. Kelvin-Voigt model of wave propagation in fragmented geomaterials with impact damping

    NASA Astrophysics Data System (ADS)

    Khudyakov, Maxim; Pasternak, Elena; Dyskin, Arcady

    2017-04-01

    When a wave propagates through real materials, energy dissipation occurs. The effect of energy loss in homogeneous materials can be accounted for by using simple viscous models. However, a reliable model representing this effect in fragmented geomaterials has not yet been established. The main reason for that is the mechanism by which vibrations are transmitted between the elements (fragments) in these materials. It is hypothesised that the fragments strike against each other in the process of oscillation, and the impacts lead to the energy loss. We assume that the energy loss is well represented by the restitution coefficient. The principal element of this concept is the interaction of two adjacent blocks. We model it by a simple linear oscillator (a mass on an elastic spring) with an additional condition: each time the system travels through the neutral point, where the displacement is equal to zero, the velocity is reduced by multiplication with the restitution coefficient, which characterises an impact of the fragments. This additional condition renders the system non-linear. We show that the behaviour of such a model, averaged over times much larger than the system period, can approximately be represented by a conventional linear oscillator with linear damping, characterised by a damping coefficient expressible through the restitution coefficient. Based on this, wave propagation at times considerably greater than the resonance period of oscillations of the neighbouring blocks can be modelled using the Kelvin-Voigt model. The wave velocities and the dispersion relations are obtained.
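The impact-damping condition is easy to simulate: integrate a linear oscillator and multiply the velocity by the restitution coefficient whenever the displacement changes sign. The parameter values below are arbitrary choices for illustration.

```python
# Linear oscillator with impact damping at zero crossings
# (illustrative parameters; semi-implicit Euler integration).

def simulate_impact_oscillator(restitution=0.8, omega=1.0, x0=1.0, v0=0.0,
                               dt=1e-3, steps=20000):
    """Integrate x'' = -omega^2 x; each time x crosses the neutral point,
    v is multiplied by the restitution coefficient. Returns the peak |x|
    reached in each half-cycle, which should decay geometrically."""
    x, v = x0, v0
    peaks, peak = [], abs(x0)
    for _ in range(steps):
        v -= dt * omega ** 2 * x
        x_new = x + dt * v
        if x * x_new < 0.0:          # crossed the neutral point
            v *= restitution         # impact: instantaneous velocity loss
            peaks.append(peak)
            peak = 0.0
        peak = max(peak, abs(x_new))
        x = x_new
    return peaks

peaks = simulate_impact_oscillator()
```

Successive half-cycle amplitudes shrink by roughly the restitution coefficient, which is the geometric decay that, averaged over many periods, looks like conventional linear (viscous) damping.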

  13. The Behavioral Economics of Choice and Interval Timing

    ERIC Educational Resources Information Center

    Jozefowiez, J.; Staddon, J. E. R.; Cerutti, D. T.

    2009-01-01

    The authors propose a simple behavioral economic model (BEM) describing how reinforcement and interval timing interact. The model assumes a Weber-law-compliant logarithmic representation of time. Associated with each represented time value are the payoffs that have been obtained for each possible response. At a given real time, the response with…

  14. Bilinear modelling of cellulosic orthotropic nonlinear materials

    Treesearch

    E.P. Saliklis; T. J. Urbanik; B. Tokyay

    2003-01-01

    The proposed method of modelling orthotropic solids that have a nonlinear constitutive material relationship affords several advantages. The first advantage is the application of a simple bilinear stress-strain curve to represent the material response on two orthogonal axes as well as in shear, even for markedly nonlinear materials. The second advantage is that this...

  15. Trajectory optimization and guidance law development for national aerospace plane applications

    NASA Technical Reports Server (NTRS)

    Calise, A. J.; Flandro, G. A.; Corban, J. E.

    1988-01-01

    The work completed to date comprises the following: a simple vehicle model representative of the aerospace plane concept in the hypersonic flight regime; fuel-optimal climb profiles for the unconstrained and dynamic-pressure-constrained cases, generated using a reduced-order dynamic model; an analytic switching condition for transition to rocket-powered flight as orbital velocity is approached; simple feedback guidance laws for both the unconstrained and dynamic-pressure-constrained cases, derived via singular perturbation theory and a nonlinear transformation technique; and numerical simulation results for ascent to orbit in the dynamic-pressure-constrained case.

  16. Vehicle Surveillance with a Generic, Adaptive, 3D Vehicle Model.

    PubMed

    Leotta, Matthew J; Mundy, Joseph L

    2011-07-01

    In automated surveillance, one is often interested in tracking road vehicles, measuring their shape in 3D world space, and determining vehicle classification. To address these tasks simultaneously, an effective approach is the constrained alignment of a prior model of 3D vehicle shape to images. Previous 3D vehicle models are either generic but overly simple or rigid and overly complex. Rigid models represent exactly one vehicle design, so a large collection is needed. A single generic model can deform to a wide variety of shapes, but those shapes have been far too primitive. This paper uses a generic 3D vehicle model that deforms to match a wide variety of passenger vehicles. It is adjustable in complexity between the two extremes. The model is aligned to images by predicting and matching image intensity edges. Novel algorithms are presented for fitting models to multiple still images and simultaneous tracking while estimating shape in video. Experiments compare the proposed model to simple generic models in accuracy and reliability of 3D shape recovery from images and tracking in video. Standard techniques for classification are also used to compare the models. The proposed model outperforms the existing simple models at each task.

  17. A simple method for assessing occupational exposure via the one-way random effects model.

    PubMed

    Krishnamoorthy, K; Mathew, Thomas; Peng, Jie

    2016-11-01

    A one-way random effects model is postulated for the log-transformed shift-long personal exposure measurements, where the random effect in the model represents an effect due to the worker. Simple closed-form confidence intervals are proposed for the relevant parameters of interest using the method of variance estimates recovery (MOVER). The performance of the confidence bounds is evaluated and compared with those based on the generalized confidence interval approach. Comparison studies indicate that the proposed MOVER confidence bounds are better than the generalized confidence bounds for the overall mean exposure and an upper percentile of the exposure distribution. The proposed methods are illustrated using a few examples involving industrial hygiene data.

  18. Calibrating binary lumped parameter models

    NASA Astrophysics Data System (ADS)

    Morgenstern, Uwe; Stewart, Mike

    2017-04-01

Groundwater at its discharge point is a mixture of water from short and long flowlines, and therefore has a distribution of ages rather than a single age. Various transfer functions describe the distribution of ages within the water sample. Lumped parameter models (LPMs), which are mathematical models of water transport based on simplified aquifer geometry and flow configuration, can account for such mixing of groundwater of different ages, usually representing the age distribution with two parameters: the mean residence time and the mixing parameter. Simple lumped parameter models can often match the measured time-varying age tracer concentrations well, and are therefore a good representation of the groundwater mixing at these sites. Usually a few tracer data (time series and/or multiple tracers) can constrain both parameters. As larger data sets of age tracer data were built up throughout New Zealand, including tritium, SF6, CFCs, and recently Halon-1301, together with time series of these tracers, we realised that for a number of wells the groundwater ages obtained with a simple lumped parameter model were inconsistent between the different tracer methods. Contamination or degradation of individual tracers is unlikely because the different tracers show consistent trends over years and decades. This points toward a more complex mixing of groundwaters with different ages at such wells than the simple lumped parameter models can represent. Binary (or compound) mixing models can represent such complexity by mixing water of two different age distributions. The difficulty with these models is that they usually have five parameters, which makes them data-hungry and therefore hard to constrain fully. Two or more age tracers with different input functions, measured repeatedly over time, can provide the information required to constrain the parameters of the binary mixing model. 
We obtained excellent results using tritium time series encompassing the passage of the bomb-tritium through the aquifer, and SF6 with its steep gradient currently in the input. We will show age tracer data from drinking water wells that enabled identification of young water ingression into wells, which poses the risk of bacteriological contamination from the surface into the drinking water.
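As a sketch of the idea, the output concentration of a binary mixing model is the convolution of the tracer input history with a weighted sum of two age distributions; here both components are exponential models with tritium decay in transit, and the mean residence times and mixing fraction are illustrative, not fitted values:

```python
import numpy as np

TRITIUM_HALF_LIFE = 12.32                    # years
LAM = np.log(2.0) / TRITIUM_HALF_LIFE        # tritium decay constant

def exponential_pdf(tau, mrt):
    """Age distribution of the exponential mixing model."""
    return np.exp(-tau / mrt) / mrt

def binary_mixing_output(c_in, dt, mrt1, mrt2, frac1):
    """Tracer output when a fraction frac1 of the water follows an
    exponential model with mean residence time mrt1 and the rest one
    with mrt2 (two of the binary model's parameters are fixed here by
    choosing exponential components for both end members)."""
    tau = (np.arange(len(c_in)) + 0.5) * dt   # midpoint ages
    g = frac1 * exponential_pdf(tau, mrt1) + (1 - frac1) * exponential_pdf(tau, mrt2)
    kernel = g * np.exp(-LAM * tau) * dt      # age distribution with decay
    return np.convolve(c_in, kernel)[:len(c_in)]
```

With a constant input, the output converges to frac1/(1 + LAM*mrt1) + (1 - frac1)/(1 + LAM*mrt2), which is one way the extra parameters of the binary model show up in steady tracer levels.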

  19. Forecasting runoff from Pennsylvania landscapes

    USDA-ARS?s Scientific Manuscript database

    Identifying sites prone to surface runoff has been a cornerstone of conservation and nutrient management programs, relying upon site assessment tools that support strategic, as opposed to operational, decision making. We sought to develop simple, empirical models to represent two highly different me...

  20. The Plaque-Antiserum Method: an Assay of Virus Infectivity and an Experimental Model of Virus Infection

    PubMed Central

    De Flora, Silvio

    1974-01-01

    Areas of cytopathic effect can be circumscribed in cell monolayers by adding antiserum to the liquid nutrient medium after adsorption of virus. This procedure represents a simple and reliable tool for the titration of virus infectivity and provides an experimental model for studying some aspects of virus infection. Images PMID:4364462

  1. Effects of host social hierarchy on disease persistence.

    PubMed

    Davidson, Ross S; Marion, Glenn; Hutchings, Michael R

    2008-08-07

    The effects of social hierarchy on population dynamics and epidemiology are examined through a model which contains a number of fundamental features of hierarchical systems, but is simple enough to allow analytical insight. In order to allow for differences in birth rates, contact rates and movement rates among different sets of individuals, the population is first divided into subgroups representing levels in the hierarchy. Movement, representing dominance challenges, is allowed between any two levels, giving a completely connected network. The model includes hierarchical effects by introducing a set of dominance parameters which affect birth rates in each social level and movement rates between social levels, dependent upon their rank. Although natural hierarchies vary greatly in form, the skewing of contact patterns, introduced here through non-uniform dominance parameters, has marked effects on the spread of disease. A simple homogeneous mixing differential equation model of a disease with SI dynamics in a population subject to a simple birth and death process is presented, and it is shown that the hierarchical model tends to this as certain parameter regions are approached. Outside of these parameter regions correlations within the system give rise to deviations from the simple theory. A Gaussian moment closure scheme is developed which extends the homogeneous model in order to take account of correlations arising from the hierarchical structure, and it is shown that the results are in reasonable agreement with simulations across a range of parameters. This approach helps to elucidate the origin of hierarchical effects and shows that it may be straightforward to relate the correlations in the model to measurable quantities which could be used to determine the importance of hierarchical corrections. 
Overall, hierarchical effects decrease the levels of disease present in a given population compared to a homogeneous unstructured model, but show higher levels of disease than structured models with no hierarchy. The separation between these three models is greatest when the rate of dominance challenges is low, reducing mixing, and when the disease prevalence is low. This suggests that these effects will often need to be considered in models being used to examine the impact of control strategies where the low disease prevalence behaviour of a model is critical.

  2. Tracking trade transactions in water resource systems: A node-arc optimization formulation

    NASA Astrophysics Data System (ADS)

    Erfani, Tohid; Huskova, Ivana; Harou, Julien J.

    2013-05-01

    We formulate and apply a multicommodity network flow node-arc optimization model capable of tracking trade transactions in complex water resource systems. The model uses a simple node-to-node network connectivity matrix and does not require preprocessing of all possible flow paths in the network. We compare the proposed node-arc formulation with an existing arc-path (flow path) formulation and explain the advantages and difficulties of both approaches. We verify the proposed formulation on a hypothetical water distribution network. Results indicate the arc-path model solves the problem with fewer constraints, but the proposed formulation allows using a simple network connectivity matrix, which simplifies modeling large or complex networks. The proposed algorithm allows converting existing node-arc hydroeconomic models that broadly represent water trading into ones that also track individual supplier-receiver relationships (trade transactions).
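A toy illustration of node-arc commodity tracking in this spirit (the network, costs, and solver choice are ours, not the paper's): each supplier's water is a separate commodity, node balance is enforced per commodity directly from the connectivity, and a linear program returns how much of each supplier's water uses each arc:

```python
import numpy as np
from scipy.optimize import linprog

# Two suppliers ship water to one demand node "D" through a junction "J";
# each commodity tracks one supplier's water through the network.
nodes = ["S1", "S2", "J", "D"]
arcs = [("S1", "J"), ("S2", "J"), ("J", "D")]    # node-to-node connectivity
cost = [1.0, 2.0, 1.0]                           # per-unit conveyance cost
supply = {"k1": ("S1", 5.0), "k2": ("S2", 3.0)}  # commodity -> (origin, amount)

n_a, n_k = len(arcs), len(supply)
c = np.tile(cost, n_k)                           # cost of each (commodity, arc) variable
A, b = [], []
for ki, (origin, amount) in enumerate(supply.values()):
    for node in nodes:
        row = np.zeros(n_a * n_k)
        for ai, (u, v) in enumerate(arcs):
            if u == node: row[ki * n_a + ai] = 1.0    # outflow from node
            if v == node: row[ki * n_a + ai] = -1.0   # inflow to node
        A.append(row)
        # per-commodity balance: +supply at origin, -supply at demand node D
        b.append(amount if node == origin else (-amount if node == "D" else 0.0))

res = linprog(c, A_eq=np.array(A), b_eq=np.array(b), bounds=(0, None))
flows = res.x.reshape(n_k, n_a)   # row k: supplier k's water on each arc
```

Because each commodity carries its origin through the balance constraints, the solution reports supplier-receiver relationships directly, without enumerating flow paths.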

  3. Using lumped modelling for providing simple metrics and associated uncertainties of catchment response to agricultural-derived nitrates pollutions

    NASA Astrophysics Data System (ADS)

    RUIZ, L.; Fovet, O.; Faucheux, M.; Molenat, J.; Sekhar, M.; Aquilina, L.; Gascuel-odoux, C.

    2013-12-01

    The development of simple and easily accessible metrics is required for characterizing and comparing catchment response to external forcings (climate or anthropogenic) and for managing water resources. The hydrological and geochemical signatures in the stream represent the integration of the various processes controlling this response. The complexity of these signatures over several time scales, from sub-daily to several decades [Kirchner et al., 2001], makes their deconvolution very difficult. A large range of modeling approaches attempt to represent this complexity by accounting for the spatial and/or temporal variability of the processes involved. However, simple metrics are not easily retrieved from these approaches, mostly because of over-parametrization issues. We hypothesize that to obtain relevant metrics, we need models that are able to simulate the observed variability of river signatures at different time scales while being as parsimonious as possible. The lumped model ETNA (modified from [Ruiz et al., 2002]) is able to simulate adequately the seasonal and inter-annual patterns of stream NO3 concentration. Shallow groundwater is represented by two linear stores with double porosity, and riparian processes are represented by a constant nitrogen removal function. Our objective was to identify simple metrics of catchment response by calibrating this lumped model on two paired agricultural catchments where both N inputs and outputs were monitored for a period of 20 years. These catchments, belonging to ORE AgrHys, although underlain by the same granitic bedrock, display contrasting chemical signatures. The model was able to simulate the two contrasting observed patterns in stream and groundwater, for both hydrology and chemistry, at seasonal and pluri-annual scales. It was also compatible with the expected trends of nitrate concentration since 1960. 
The output variables of the model were used to compute the nitrate residence time in both catchments. We used the Generalized Likelihood Uncertainty Estimation (GLUE) approach [Beven and Binley, 1992] to assess the parameter uncertainties and the subsequent error in model outputs and residence times. Reasonably low parameter uncertainties were obtained by calibrating simultaneously on the two paired catchments, using time series of stream flow and nitrate concentrations at the two outlets. Finally, only one parameter controlled the contrast in nitrogen residence times between the catchments. Therefore, this approach provides a promising metric for classifying the variability of catchment response to agricultural nitrogen inputs. Beven, K., and A. Binley (1992), The future of distributed models: Model calibration and uncertainty prediction, Hydrological Processes, 6(3), 279-298. Kirchner, J. W., X. Feng, and C. Neal (2001), Catchment-scale advection and dispersion as a mechanism for fractal scaling in stream tracer concentrations, Journal of Hydrology, 254(1-4), 82-101. Ruiz, L., S. Abiven, C. Martin, P. Durand, V. Beaujouan, and J. Molenat (2002), Effect on nitrate concentration in stream water of agricultural practices in small catchments in Brittany: II. Temporal variations and mixing processes, Hydrology and Earth System Sciences, 6(3), 507-513.
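The GLUE step can be sketched with a toy linear model standing in for ETNA; the likelihood measure (Nash-Sutcliffe efficiency), behavioural threshold, and priors below are illustrative choices, not those used in the study:

```python
import numpy as np

def glue(simulate, observed, priors, n_samples=2000, threshold=0.3, seed=0):
    """Minimal GLUE sketch: sample parameter sets from uniform priors,
    keep the 'behavioural' ones whose Nash-Sutcliffe efficiency exceeds
    a threshold, and turn the exceedance into likelihood weights."""
    rng = np.random.default_rng(seed)
    params = np.column_stack([rng.uniform(lo, hi, n_samples) for lo, hi in priors])
    sims = np.array([simulate(p) for p in params])
    sse = ((sims - observed) ** 2).sum(axis=1)
    nse = 1.0 - sse / ((observed - observed.mean()) ** 2).sum()
    keep = nse > threshold                   # behavioural parameter sets
    w = nse[keep] - threshold
    return params[keep], w / w.sum()         # sets and likelihood weights

# toy "catchment": a linear response to a forcing series stands in for ETNA
x = np.linspace(0.0, 1.0, 50)
obs = 2.0 * x + 0.5
post, w = glue(lambda p: p[0] * x + p[1], obs, [(0.0, 4.0), (0.0, 1.0)])
a_mean = (w * post[:, 0]).sum()   # likelihood-weighted parameter estimate
```

The spread of the weighted behavioural sets plays the role of the parameter uncertainty, and propagating them through the model gives the corresponding uncertainty in outputs such as residence times.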

  4. Testing the Structure of Hydrological Models using Genetic Programming

    NASA Astrophysics Data System (ADS)

    Selle, B.; Muttil, N.

    2009-04-01

    Genetic Programming is able to systematically explore many alternative model structures of different complexity from available input and response data. We hypothesised that genetic programming can be used to test the structure of hydrological models and to identify dominant processes in hydrological systems. To test this, genetic programming was used to analyse a data set from a lysimeter experiment in southeastern Australia. The lysimeter experiment was conducted to quantify the deep percolation response under surface-irrigated pasture to different soil types, water table depths and water ponding times during surface irrigation. Using genetic programming, a simple model of deep percolation was consistently evolved in multiple model runs. This simple and interpretable model confirmed the dominant process contributing to deep percolation represented in a previously published conceptual model. Thus, this study shows that genetic programming can be used to evaluate the structure of hydrological models and to gain insight into the dominant processes in hydrological systems.

  5. Modelling Thin Film Microbending: A Comparative Study of Three Different Approaches

    NASA Astrophysics Data System (ADS)

    Aifantis, Katerina E.; Nikitas, Nikos; Zaiser, Michael

    2011-09-01

    Constitutive models which describe crystal microplasticity in a continuum framework can be envisaged as average representations of the dynamics of dislocation systems. Thus, their performance needs to be assessed not only by their ability to correctly represent stress-strain characteristics on the specimen scale but also by their ability to correctly represent the evolution of internal stress and strain patterns. In the present comparative study we consider the bending of a free-standing thin film. We compare the results of 3D DDD simulations with those obtained from a simple 1D gradient plasticity model and a more complex dislocation-based continuum model. Both models correctly reproduce the nontrivial strain patterns predicted by DDD for the microbending problem.

  6. Stress and Dilatancy Relation of Methane Hydrate Bearing Sand with Various Fines Content

    NASA Astrophysics Data System (ADS)

    Hyodo, M.

    2016-12-01

    This study presents an experimental and numerical investigation of the shear behaviour of methane hydrate bearing sand under various confining pressures and methane hydrate saturations. A representative grading curve from the Nankai Trough is selected as the grain size distribution of the host sand used to artificially produce the methane hydrate bearing sand. A shear strength estimation equation for methane hydrate bearing sand is established from the test results. A simple constitutive model is proposed to predict the stress-strain response of methane hydrate bearing sand based on a few well-known relationships. Experimental results indicate that the inclination of the stress-dilatancy curve becomes steeper as methane hydrate saturation rises. A revised stress-dilatancy equation has been integrated into this simple model to account for the variation in the inclination of the stress-dilatancy curve. The mean stress Pcr at the critical state, where the peak stress ratio reduces to the residual stress ratio, increases with the level of methane hydrate saturation. The dilatancy parameter a tends to increase with methane hydrate saturation. The shear deformability parameter A exhibits a decreasing tendency with rising methane hydrate saturation at each confining pressure. The model is capable of reasonably predicting the strength and stiffness enhancement and the dilation behaviour as methane hydrate saturation increases. The volumetric transition from contraction to expansion of MH bearing sand at lower confining pressure, and purely contractive behaviour at higher confining pressure, can both be represented by this simple model.
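The kind of revision described, a stress-dilatancy inclination that steepens with hydrate saturation, can be sketched as follows; the slope M, base inclination a, and sensitivity k are invented illustration values, not the paper's calibration:

```python
def dilatancy_ratio(eta, s_mh=0.0, M=1.35, a=1.0, k=0.8):
    """Illustrative stress-dilatancy relation d = a_eff * (M - eta):
    ratio of plastic volumetric to shear strain increments at stress
    ratio eta, with the inclination a_eff steepening as methane
    hydrate saturation s_mh rises."""
    a_eff = a * (1.0 + k * s_mh)
    return a_eff * (M - eta)
```

In this form the sand is contractive (d > 0) below the critical stress ratio M, dilative (d < 0) above it, and a higher hydrate saturation amplifies both tendencies through a_eff.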

  7. STEP-TRAMM - A modeling interface for simulating localized rainfall induced shallow landslides and debris flow runout pathways

    NASA Astrophysics Data System (ADS)

    Or, D.; von Ruette, J.; Lehmann, P.

    2017-12-01

    Landslides and subsequent debris flows initiated by rainfall represent a common natural hazard in mountainous regions. We integrated a landslide hydro-mechanical triggering model with a simple model for debris flow runout pathways and developed a graphical user interface (GUI) to represent these natural hazards at catchment scale at any location. The STEP-TRAMM GUI provides process-based estimates of the initiation locations and sizes of landslide patterns based on digital elevation models (SRTM) linked with high resolution global soil maps (SoilGrids, 250 m resolution) and satellite-based information on rainfall statistics for the selected region. In the preprocessing phase the STEP-TRAMM model estimates the soil depth distribution to supplement other soil information for delineating key hydrological and mechanical properties relevant to representing local soil failure. We will illustrate this publicly available GUI and modeling platform by simulating effects of deforestation on landslide hazards in several regions and comparing model outcomes with satellite-based information.

  8. Influence of parameter changes to stability behavior of rotors

    NASA Technical Reports Server (NTRS)

    Fritzen, C. P.; Nordmann, R.

    1982-01-01

    The occurrence of unstable vibrations in rotating machinery requires corrective measures to improve the stability behavior. A simple approximate method is presented for determining the influence of parameter changes on the stability behavior. The method is based on an expansion of the eigenvalues in terms of the system parameters. Influence coefficients show the effect of structural modifications. The method was first applied to simple nonconservative rotor models and then validated on an unsymmetric rotor of a test rig.
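The expansion behind such influence coefficients is, to first order, dlam_i = (y_i^H dA x_i) / (y_i^H x_i) for a perturbation dA of the system matrix, with x_i and y_i the right and left eigenvectors; a sketch assuming distinct eigenvalues (the example matrices are illustrative, not a rotor model):

```python
import numpy as np
from scipy.linalg import eig

def eigenvalue_sensitivities(A, dA):
    """First-order eigenvalue changes of A under a system-matrix
    perturbation direction dA: dlam_i = (y_i^H dA x_i) / (y_i^H x_i),
    with x_i / y_i the right / left eigenvectors."""
    lam, vl, vr = eig(A, left=True, right=True)
    num = np.einsum('ni,nm,mi->i', vl.conj(), dA, vr)   # y_i^H dA x_i
    den = np.einsum('ni,ni->i', vl.conj(), vr)          # y_i^H x_i
    return lam, num / den
```

Each sensitivity predicts how the real part of an eigenvalue (and hence the stability margin) shifts under a small structural modification, without re-solving the full eigenvalue problem.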

  9. Designing for time-dependent material response in spacecraft structures

    NASA Technical Reports Server (NTRS)

    Hyer, M. W.; Oleksuk, Lynda L. S.; Bowles, D. E.

    1992-01-01

    To study the influence on overall deformations of the time-dependent constitutive properties of fiber-reinforced polymeric matrix composite materials being considered for use in orbiting precision segmented reflectors, simple sandwich beam models are developed. The beam models include layers representing the face sheets, the core, and the adhesive bonding of the face sheets to the core. A three-layer model lumps the adhesive layers with the face sheets or core, while a five-layer model considers the adhesive layers explicitly. The deformation response of the three-layer and five-layer sandwich beam models to a midspan point load is studied. This elementary loading leads to a simple analysis, and it is easy to create this loading in the laboratory. Using the correspondence principle of viscoelasticity, the models representing the elastic behavior of the two beams are transformed into time-dependent models. Representative cases of time-dependent material behavior for the face sheet material, the core material, and the adhesive are used to evaluate the influence of these constituents being time-dependent on the deformations of the beam. As an example of the results presented, if it is assumed that, as a worst case, the polymer-dominated shear properties of the core behave as a Maxwell fluid such that under constant shear stress the shear strain increases by a factor of 10 in 20 years, then it is shown that the beam deflection increases by a factor of 1.4 during that time. In addition to quantitative conclusions, several assumptions are discussed which simplify the analyses for use with more complicated material models. Finally, it is shown that the simpler three-layer model suffices in many situations.

  10. A Simple Mathematical Model for Standard Model of Elementary Particles and Extension Thereof

    NASA Astrophysics Data System (ADS)

    Sinha, Ashok

    2016-03-01

    An algebraically (and geometrically) simple model representing the masses of the elementary particles in terms of the interaction (strong, weak, electromagnetic) constants is developed, including the Higgs bosons. The predicted Higgs boson mass is identical to that discovered by the LHC experimental programs, while the possibility of additional Higgs bosons (and their masses) is indicated. The model can be analyzed to explain and resolve many puzzles of particle physics and cosmology, including the neutrino masses and mixing; the origin of the proton mass and the mass difference between the proton and the neutron; the big bang and cosmological inflation; the Hubble expansion; etc. A novel interpretation of the model in terms of quaternions and rotations in the six-dimensional space of elementary particle interactions, or, equivalently, in six-dimensional spacetime, is presented. Interrelations among particle masses are derived theoretically. A new approach for defining the interaction parameters, leading to an elegant and symmetrical diagram, is delineated. Generalization of the model to include supersymmetry is illustrated without recourse to complex mathematical formulation and free from any ambiguity. This abstract represents some results of the author's independent theoretical research in particle physics, with possible connection to superstring theory. However, only very elementary mathematics and physics are used in the presentation.

  11. TRIGRS - A Fortran Program for Transient Rainfall Infiltration and Grid-Based Regional Slope-Stability Analysis, Version 2.0

    USGS Publications Warehouse

    Baum, Rex L.; Savage, William Z.; Godt, Jonathan W.

    2008-01-01

    The Transient Rainfall Infiltration and Grid-Based Regional Slope-Stability Model (TRIGRS) is a Fortran program designed for modeling the timing and distribution of shallow, rainfall-induced landslides. The program computes transient pore-pressure changes, and attendant changes in the factor of safety, due to rainfall infiltration. The program models rainfall infiltration, resulting from storms that have durations ranging from hours to a few days, using analytical solutions for partial differential equations that represent one-dimensional, vertical flow in isotropic, homogeneous materials for either saturated or unsaturated conditions. Use of step-function series allows the program to represent variable rainfall input, and a simple runoff routing model allows the user to divert excess water from impervious areas onto more permeable downslope areas. The TRIGRS program uses a simple infinite-slope model to compute factor of safety on a cell-by-cell basis. An approximate formula for effective stress in unsaturated materials aids computation of the factor of safety in unsaturated soils. Horizontal heterogeneity is accounted for by allowing material properties, rainfall, and other input values to vary from cell to cell. This command-line program is used in conjunction with geographic information system (GIS) software to prepare input grids and visualize model results.
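The cell-by-cell stability computation can be illustrated with the standard infinite-slope formula including a pressure-head term; the transcription and default unit weights below are illustrative, not TRIGRS source code:

```python
import math

def factor_of_safety(z, psi, slope_deg, phi_deg, c,
                     gamma_s=20000.0, gamma_w=9810.0):
    """Infinite-slope factor of safety at depth z (m) given pressure
    head psi (m), slope and friction angles (degrees), cohesion c (Pa),
    and soil / water unit weights (N/m^3):
        FS = tan(phi)/tan(slope)
             + (c - psi*gamma_w*tan(phi)) / (gamma_s*z*sin(slope)*cos(slope))
    FS < 1 flags a cell as unstable."""
    d = math.radians(slope_deg)
    p = math.radians(phi_deg)
    return (math.tan(p) / math.tan(d)
            + (c - psi * gamma_w * math.tan(p))
            / (gamma_s * z * math.sin(d) * math.cos(d)))
```

Rising pressure head from infiltration reduces the second term, which is how transient rainfall lowers the factor of safety over time in such models.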

  12. The non-linear response of a muscle in transverse compression: assessment of geometry influence using a finite element model.

    PubMed

    Gras, Laure-Lise; Mitton, David; Crevier-Denoix, Nathalie; Laporte, Sébastien

    2012-01-01

    Most recent finite element models that represent muscles are generic or subject-specific models that use complex constitutive laws. Identification of the parameters of such complex constitutive laws can be an important limitation for subject-specific approaches. The aim of this study was to assess the possibility of modelling muscle behaviour in compression with a parametric model and a simple constitutive law. A quasi-static compression test was performed on the muscles of dogs. A parametric finite element model was designed using a linear elastic constitutive law. A multivariate analysis was performed to assess the effects of geometry on muscle response. An inverse method was used to identify Young's modulus. The non-linear response of the muscles was obtained using a subject-specific geometry and a linear elastic law. Thus, a simple muscle model can be used to obtain a biofaithful biomechanical response.

  13. Numerical study of centrifugal compressor stage vaneless diffusers

    NASA Astrophysics Data System (ADS)

    Galerkin, Y.; Soldatova, K.; Solovieva, O.

    2015-08-01

    The authors analyzed CFD calculations of flow in vaneless diffusers with relative widths in the range 0.014 to 0.100, at inlet flow angles in the range 10° to 45°, with different inlet velocity coefficients, Reynolds numbers and surface roughness. The aim is to reproduce the calculated performances with simple algebraic equations. A friction coefficient that represents head losses as friction losses is proposed for the simulation. The friction coefficient and the loss coefficient are directly connected by a simple equation. The advantage is that the friction coefficient changes comparatively little over the range of studied parameters. Simple equations for this coefficient are proposed by the authors. The simulation accuracy is sufficient for practical calculations. To create a complete algebraic model of the vaneless diffuser, the authors plan to extend this modeling method to diffusers with different relative lengths and a wider range of Reynolds numbers.

  14. Development of Maps of Simple and Complex Cells in the Primary Visual Cortex

    PubMed Central

    Antolík, Ján; Bednar, James A.

    2011-01-01

    Hubel and Wiesel (1962) classified primary visual cortex (V1) neurons as either simple, with responses modulated by the spatial phase of a sine grating, or complex, i.e., largely phase invariant. Much progress has been made in understanding how simple cells develop, and there are now detailed computational models establishing how they can form topographic maps ordered by orientation preference. There are also models of how complex cells can develop using outputs from simple cells with different phase preferences, but no model of how a topographic orientation map of complex cells could be formed based on the actual connectivity patterns found in V1. Addressing this question is important, because the majority of existing developmental models of simple-cell maps group neurons selective to similar spatial phases together, which is contrary to experimental evidence, and makes it difficult to construct complex cells. Overcoming this limitation is not trivial, because mechanisms responsible for map development drive receptive fields (RF) of nearby neurons to be highly correlated, while co-oriented RFs of opposite phases are anti-correlated. In this work, we model V1 as two topographically organized sheets representing cortical layer 4 and 2/3. Only layer 4 receives direct thalamic input. Both sheets are connected with narrow feed-forward and feedback connectivity. Only layer 2/3 contains strong long-range lateral connectivity, in line with current anatomical findings. Initially all weights in the model are random, and each is modified via a Hebbian learning rule. The model develops smooth, matching, orientation preference maps in both sheets. Layer 4 units become simple cells, with phase preference arranged randomly, while those in layer 2/3 are primarily complex cells. To our knowledge this model is the first explaining how simple cells can develop with random phase preference, and how maps of complex cells can develop, using only realistic patterns of connectivity. 
PMID:21559067

  15. General formulation of characteristic time for persistent chemicals in a multimedia environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bennett, D.H.; McKone, T.E.; Kastenberg, W.E.

    1999-02-01

    A simple yet representative method for determining the characteristic time a persistent organic pollutant remains in a multimedia environment is presented. The characteristic time is an important attribute for assessing long-term health and ecological impacts of a chemical. Calculating the characteristic time requires information on decay rates in multiple environmental media as well as the proportion of mass in each environmental medium. The authors explore the premise that using a steady-state distribution of the mass in the environment provides a means to calculate a representative estimate of the characteristic time while maintaining a simple formulation. Calculating the steady-state mass distribution incorporates the effect of advective transport and nonequilibrium effects resulting from the source terms. Using several chemicals, they calculate and compare the characteristic time in a representative multimedia environment for dynamic, steady-state, and equilibrium multimedia models, and also for a single medium model. They demonstrate that formulating the characteristic time based on the steady-state mass distribution in the environment closely approximates the dynamic characteristic time for a range of chemicals and thus can be used in decisions regarding chemical use in the environment.
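A minimal sketch of the steady-state formulation with two media and invented rate constants: solve the linear balance for the steady-state masses, then divide total mass by total loss rate, which equals the source at steady state:

```python
import numpy as np

# Two-medium (air, soil) steady-state sketch with illustrative first-order
# rate constants (1/day); K couples degradation and inter-media transfer.
k_decay = np.array([0.05, 0.002])    # degradation in air and in soil
k_a2s, k_s2a = 0.01, 0.0005          # air-to-soil and soil-to-air transfer
K = np.array([[-(k_decay[0] + k_a2s), k_s2a],
              [k_a2s, -(k_decay[1] + k_s2a)]])
S = np.array([1.0, 0.0])             # unit source emitted into air

m_ss = np.linalg.solve(-K, S)        # steady-state mass distribution
tau = m_ss.sum() / S.sum()           # characteristic (persistence) time, days
```

Because inter-media transfers cancel in the overall balance, the degradation losses alone match the source at steady state, and the slow soil compartment lengthens the characteristic time well beyond the air-only residence time.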

  16. A simple, analytic 3-dimensional downburst model based on boundary layer stagnation flow

    NASA Technical Reports Server (NTRS)

    Oseguera, Rosa M.; Bowles, Roland L.

    1988-01-01

    A simple downburst model is developed for use in batch and real-time piloted simulation studies of guidance strategies for terminal area transport aircraft operations in wind shear conditions. The model represents an axisymmetric stagnation point flow, based on velocity profiles from the Terminal Area Simulation System (TASS) model developed by Proctor, and satisfies the mass continuity equation in cylindrical coordinates. Altitude dependence, including boundary layer effects near the ground, closely matches real-world measurements, as do the increase, peak, and decay of outflow and downflow with increasing distance from the downburst center. Equations for horizontal and vertical winds were derived and found to be infinitely differentiable, with no singular points in the flow field. In addition, a simple relationship exists among the ratio of maximum horizontal to vertical velocities, the downdraft radius, the depth of outflow, and the altitude of maximum outflow. In use, a microburst is modeled by specifying four characteristic parameters; the velocity components in the x, y, and z directions, and their nine partial derivatives, are then obtained easily from the velocity equations.
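The construction can be sketched as follows, with an illustrative boundary-layer shaping function rather than the TASS-matched profile from the paper: choose a radial outflow u = (lam*r/2)*f(z) with f(0) = 0 at the ground, and set w = -lam * integral of f so that cylindrical continuity holds identically:

```python
import numpy as np

def downburst(r, z, lam=0.01, zm=600.0, eps=60.0):
    """Mass-conserving axisymmetric stagnation-flow sketch of a downburst.
    The shaping function f(z) = exp(-z/zm) - exp(-z/eps) vanishes at the
    ground (boundary layer), peaks at low altitude, and decays aloft;
    w is its integral so (1/r)*d(r*u)/dr + dw/dz = 0 everywhere.
    All parameter values are illustrative."""
    f = np.exp(-z / zm) - np.exp(-z / eps)
    F = zm * (1.0 - np.exp(-z / zm)) - eps * (1.0 - np.exp(-z / eps))
    u = 0.5 * lam * r * f          # radial outflow, grows linearly with r
    w = -lam * F                   # downdraft
    return u, w
```

Since u is linear in r, the radial divergence is lam*f(z) at every point, exactly cancelled by dw/dz = -lam*f(z), so the field is smooth and singularity-free like the model described above.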

  17. The Bern Simple Climate Model (BernSCM) v1.0: an extensible and fully documented open-source re-implementation of the Bern reduced-form model for global carbon cycle-climate simulations

    NASA Astrophysics Data System (ADS)

    Strassmann, Kuno M.; Joos, Fortunat

    2018-05-01

    The Bern Simple Climate Model (BernSCM) is a free open-source re-implementation of a reduced-form carbon cycle-climate model which has been used widely in previous scientific work and IPCC assessments. BernSCM represents the carbon cycle and climate system with a small set of equations for the heat and carbon budget, the parametrization of major nonlinearities, and the substitution of complex component systems with impulse response functions (IRFs). The IRF approach allows cost-efficient yet accurate substitution of detailed parent models of climate system components with near-linear behavior. Illustrative simulations of scenarios from previous multimodel studies show that BernSCM is broadly representative of the range of the climate-carbon cycle response simulated by more complex and detailed models. Model code (in Fortran) was written from scratch with transparency and extensibility in mind, and is provided open source. BernSCM makes scientifically sound carbon cycle-climate modeling available for many applications. Supporting up to decadal time steps with high accuracy, it is suitable for studies with high computational load and for coupling with integrated assessment models (IAMs), for example. Further applications include climate risk assessment in a business, public, or educational context and the estimation of CO2 and climate benefits of emission mitigation options.
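The IRF idea can be illustrated with a toy atmospheric CO2 response, a constant airborne part plus a sum of decaying exponentials; the coefficients below are round placeholder numbers in the style of Bern-type IRFs, not BernSCM's calibrated values:

```python
import numpy as np

# Illustrative impulse-response coefficients: a permanently airborne part
# plus three decaying modes (placeholders, not BernSCM's calibration).
a = np.array([0.22, 0.26, 0.34, 0.18])       # mode fractions, sum to 1
tau = np.array([np.inf, 173.0, 18.5, 1.19])  # e-folding times in years

def airborne_fraction(t):
    """Fraction of a CO2 emission pulse still airborne after t years."""
    t = np.asarray(t, float)
    return (a * np.exp(-t[..., None] / tau)).sum(axis=-1)

def co2_burden(emissions, dt=1.0):
    """Convolve an emission series with the IRF to get the added
    atmospheric burden (same mass units as the emissions)."""
    t = np.arange(len(emissions)) * dt
    return np.convolve(emissions, airborne_fraction(t))[:len(emissions)] * dt
```

The convolution substitutes for a detailed parent carbon-cycle model: one sum of exponentials replaces the component system, which is what keeps this class of model cheap enough for IAM coupling and large scenario ensembles.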

  18. Matrix population models from 20 studies of perennial plant populations

    USGS Publications Warehouse

    Ellis, Martha M.; Williams, Jennifer L.; Lesica, Peter; Bell, Timothy J.; Bierzychudek, Paulette; Bowles, Marlin; Crone, Elizabeth E.; Doak, Daniel F.; Ehrlen, Johan; Ellis-Adam, Albertine; McEachern, Kathryn; Ganesan, Rengaian; Latham, Penelope; Luijten, Sheila; Kaye, Thomas N.; Knight, Tiffany M.; Menges, Eric S.; Morris, William F.; den Nijs, Hans; Oostermeijer, Gerard; Quintana-Ascencio, Pedro F.; Shelly, J. Stephen; Stanley, Amanda; Thorpe, Andrea; Tamara, Ticktin; Valverde, Teresa; Weekley, Carl W.

    2012-01-01

    Demographic transition matrices are one of the most commonly applied population models for both basic and applied ecological research. The relatively simple framework of these models and the simple, easily interpretable summary statistics they produce have prompted the wide use of these models across an exceptionally broad range of taxa. Here, we provide annual transition matrices and observed stage structures/population sizes for 20 perennial plant species which have been the focal species for long-term demographic monitoring. These data were assembled as part of the "Testing Matrix Models" working group through the National Center for Ecological Analysis and Synthesis (NCEAS). In sum, these data represent 82 populations with >460 total population-years of data. It is our hope that making these data available will help promote and improve our ability to monitor and understand plant population dynamics.
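The basic use of such a matrix, projecting an observed stage structure forward and reading the asymptotic growth rate off the dominant eigenvalue, looks like this; the stages and rates are invented for illustration, not taken from any of the 20 studies:

```python
import numpy as np

# Illustrative 3-stage (seedling, juvenile, adult) annual transition matrix:
# column j holds per-capita contributions of stage j to each stage next year.
A = np.array([[0.00, 0.00, 4.50],    # seed production by adults
              [0.30, 0.45, 0.00],    # survival into / within the juvenile stage
              [0.00, 0.25, 0.90]])   # maturation and adult survival

n0 = np.array([50.0, 10.0, 5.0])             # observed stage structure
n5 = np.linalg.matrix_power(A, 5) @ n0       # projected vector after 5 years
lam = np.abs(np.linalg.eigvals(A)).max()     # asymptotic growth rate (lambda)
```

lambda > 1 indicates a growing population and lambda < 1 a declining one, which is the kind of easily interpretable summary statistic the compilation refers to.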

  20. A proposed-standard format to represent and distribute tomographic models and other earth spatial data

    NASA Astrophysics Data System (ADS)

    Postpischl, L.; Morelli, A.; Danecek, P.

    2009-04-01

    Formats used to represent (and distribute) tomographic earth models differ considerably and are rarely self-consistent. In fact, each earth scientist, or research group, uses specific conventions to encode the various parameterizations used to describe, e.g., seismic wave speed or density in three dimensions, and complete information is often found only in related documents or publications (if available at all). As a consequence, using tomographic models from different authors requires considerable effort, is more cumbersome than it should be, and prevents widespread exchange and circulation within the community. We propose a format, based on modern web standards, able to represent different (grid-based) model parameterizations within the same simple text-based environment; it is easy to write, to parse, and to visualise. The aim is the creation of self-describing data structures, both human and machine readable, that are automatically recognised by general-purpose software agents and easily imported into scientific programming environments. We think that the adoption of such a representation as a standard for the exchange and distribution of earth models can greatly ease their usage and enhance their circulation, both among fellow seismologists and among a broader non-specialist community. The proposed solution uses semantic web technologies, fully fitting the current trends in data accessibility. It is based on JSON (JavaScript Object Notation), a plain-text, human-readable, lightweight data interchange format, which adopts a hierarchical name-value model for representing simple data structures and associative arrays (called objects). Our implementation allows integration of large datasets with metadata (authors, affiliations, bibliographic references, units of measure, etc.) into a single resource. It is equally suited to representing other geo-referenced volumetric quantities — beyond tomographic models — as well as (structured and unstructured) computational meshes. This approach can exploit the capabilities of the web browser as a computing platform: a series of in-page quick tools for comparative analysis between models will be presented, as well as visualisation techniques for tomographic layers in Google Maps and Google Earth. We are also working on tools for conversion into common scientific formats such as netCDF, to allow easy visualisation in GEON-IDV or GMT.
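The kind of self-describing JSON structure the authors advocate can be sketched as below. The field names and values here are invented for illustration; they are not the proposed standard's actual schema.

```python
import json

# Hypothetical self-describing tomographic model record: metadata,
# parameterization, and values travel together in one JSON resource, so any
# JSON-aware tool (or web browser) can recognise and load it without
# consulting a separate publication.
model = {
    "metadata": {
        "authors": ["A. Example"],
        "reference": "doi:10.0000/example",
        "quantity": "P-wave speed perturbation",
        "units": "percent",
    },
    "parameterization": {
        "type": "regular-grid",
        "depths_km": [50, 100],
        "latitudes": [0.0, 1.0],
        "longitudes": [10.0, 11.0],
    },
    # values[i][j][k] -> depth index i, latitude index j, longitude index k
    "values": [[[0.5, 0.4], [0.3, 0.2]],
               [[0.1, 0.0], [-0.1, -0.2]]],
}

text = json.dumps(model, indent=2)   # serialize for distribution
restored = json.loads(text)          # round-trips losslessly
```

Because the hierarchy is plain name-value pairs, the same file is both the archive format and the in-memory data structure after one `json.loads` call.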

  1. SEEPLUS: A SIMPLE ONLINE CLIMATE MODEL

    NASA Astrophysics Data System (ADS)

    Tsutsui, Junichi

    A web application for a simple climate model - SEEPLUS (a Simple climate model to Examine Emission Pathways Leading to Updated Scenarios) - has been developed. SEEPLUS consists of carbon-cycle and climate-change modules, through which it provides the information infrastructure required to perform climate-change experiments, even on millennial timescales. The main objective of this application is to share the latest scientific knowledge acquired from climate modeling studies among the different stakeholders involved in climate-change issues. Both the carbon-cycle and climate-change modules employ impulse response functions (IRFs) for their key processes, thereby enabling the model to integrate the outcome from an ensemble of complex climate models. The current IRF parameters and forcing manipulation are basically consistent with, or within an uncertainty range of, the understanding of certain key aspects such as the equivalent climate sensitivity and ocean CO2 uptake data documented in representative literature. The carbon-cycle module enables inverse calculation to determine the emission pathway required to attain a given concentration pathway, thereby providing a flexible way to compare the module with more advanced modeling studies. The module also enables analytical evaluation of its equilibrium states, thereby facilitating the long-term planning of global warming mitigation.

  2. Generalized Born Models of Macromolecular Solvation Effects

    NASA Astrophysics Data System (ADS)

    Bashford, Donald; Case, David A.

    2000-10-01

    It would often be useful in computer simulations to use a simple description of solvation effects, instead of explicitly representing the individual solvent molecules. Continuum dielectric models often work well in describing the thermodynamic aspects of aqueous solvation, and approximations to such models that avoid the need to solve the Poisson equation are attractive because of their computational efficiency. Here we give an overview of one such approximation, the generalized Born model, which is simple and fast enough to be used for molecular dynamics simulations of proteins and nucleic acids. We discuss its strengths and weaknesses, both for its fidelity to the underlying continuum model and for its ability to replace explicit consideration of solvent molecules in macromolecular simulations. We focus particularly on versions of the generalized Born model that have a pair-wise analytical form, and therefore fit most naturally into conventional molecular mechanics calculations.
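The pair-wise analytical form discussed above can be sketched directly from the Still et al. functional form of the generalized Born energy. This is a minimal sketch, not any production implementation; charges are in elementary units, distances and Born radii in angstroms, and the factor 332 kcal mol^-1 A e^-2 converts to kcal/mol.

```python
import math

# Pair-wise generalized Born polarization energy (Still-type form):
#   E = -0.5 * (1 - 1/eps) * sum_ij q_i q_j / f_GB(r_ij)
#   f_GB = sqrt(r^2 + R_i R_j exp(-r^2 / (4 R_i R_j)))
def gb_energy(charges, radii, coords, eps_solvent=78.5):
    """Total GB polarization energy, including self (i == j) terms."""
    pref = -0.5 * 332.0 * (1.0 - 1.0 / eps_solvent)
    energy = 0.0
    n = len(charges)
    for i in range(n):
        for j in range(n):
            r2 = sum((a - b) ** 2 for a, b in zip(coords[i], coords[j]))
            rirj = radii[i] * radii[j]
            f_gb = math.sqrt(r2 + rirj * math.exp(-r2 / (4.0 * rirj)))
            energy += pref * charges[i] * charges[j] / f_gb
    return energy

# Single unit charge with a 2 A Born radius: f_GB reduces to 2*R, so the
# result collapses to the classical Born ion solvation energy.
e_born = gb_energy([1.0], [2.0], [(0.0, 0.0, 0.0)])
```

The double loop is the point: every term is an analytical pair function of interatomic distance and two Born radii, so the expression slots into a conventional molecular mechanics pair loop with no Poisson solve.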

  3. Applying the take-grant protection model

    NASA Technical Reports Server (NTRS)

    Bishop, Matt

    1990-01-01

    The Take-Grant Protection Model has in the past been used to model multilevel security hierarchies and simple protection systems. The model is extended here to include the theft of rights and the sharing of information, and additional security policies are examined. The analysis suggests that in some cases the basic rules of the Take-Grant Protection Model should be augmented to represent the policy properly; when appropriate, such modifications are made, and their effects with respect to the policy and its Take-Grant representation are discussed.

  4. The use of simple inflow- and storage-based heuristic equations to represent reservoir behavior in California for investigating human impacts on the water cycle

    NASA Astrophysics Data System (ADS)

    Solander, K.; David, C. H.; Reager, J. T.; Famiglietti, J. S.

    2013-12-01

    The ability to reasonably replicate reservoir behavior in terms of storage and outflow is important for studying potential human impacts on the terrestrial water cycle. Developing a simple method for this purpose could facilitate subsequent integration into a land surface or global climate model. This study attempts to simulate monthly reservoir outflow and storage using a simple, temporally varying set of heuristic equations whose inputs consist of in situ records of reservoir inflow and storage. Equations of increasing complexity, in terms of the number of parameters involved, were tested. Only two parameters were employed in the final equations used to predict outflow and storage, in an attempt to best mimic seasonal reservoir behavior while still preserving model parsimony. California reservoirs were selected for model development because of the high level of data availability and the intensity of water resource management in this region relative to other areas. Calibration used observations from eight major reservoirs representing approximately 41% of the 107 largest reservoirs in the state. Parameter optimization used the minimum RMSE between observed and modeled storage and outflow as the main objective function. Initial results give a multi-reservoir average correlation coefficient between observed and modeled storage (resp. outflow) of 0.78 (resp. 0.75). These results, combined with the simplicity of the equations, show promise for integration into a land surface or global climate model. This would be invaluable for evaluating reservoir management impacts on flow regimes and associated ecosystems as well as on climate at both regional and global scales.
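The abstract does not give the two-parameter equations themselves, so the following is a hypothetical sketch of one plausible form: release as a fraction of current storage plus a fraction of inflow, with storage updated by mass balance. The parameters `a` and `b` stand in for the calibrated values.

```python
# Hypothetical two-parameter reservoir heuristic (illustrative only; not the
# equations calibrated in the study above).
def simulate_reservoir(inflows, storage0, capacity, a=0.08, b=0.5):
    """Monthly storage and outflow from an inflow record (volumes/month)."""
    storage = storage0
    storages, outflows = [], []
    for q_in in inflows:
        q_out = a * storage + b * q_in              # heuristic release rule
        storage = storage + q_in - q_out            # mass balance
        storage = min(max(storage, 0.0), capacity)  # physical bounds
        storages.append(storage)
        outflows.append(q_out)
    return storages, outflows

# Six months of synthetic inflow for a reservoir of capacity 200 units.
S, O = simulate_reservoir([10, 30, 50, 20, 5, 2], storage0=100, capacity=200)
```

Even this crude rule reproduces the qualitative behavior the study targets: storage rises in high-inflow months and is drawn down afterwards, which is the kind of seasonal signal a land surface model would need.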

  5. Hydropneumothorax versus Simple Pneumothorax

    DTIC Science & Technology

    2010-08-01

    ...been created to replicate a hydropneumothorax (Fig. 6). In this model, a red balloon is used to simulate a lung, and the wine glass represents the... Fig. 9. Comparison of inflated balloon in wine glass (left...

  6. EVALUATING REUSE AND REMANUFACTURING POTENTIAL USING A SIMPLE MODEL FOR PRODUCT VALUE. (R825370C057)

    EPA Science Inventory

    The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...

  7. Evaporation and transpiration

    Treesearch

    Robert R. Ziemer

    1979-01-01

    For years, the principal objective of evapotranspiration research has been to calculate the loss of water under varying conditions of climate, soil, and vegetation. The early simple empirical methods have generally been replaced by more detailed models which more closely represent the physical and biological processes involved. Monteith's modification of the...

  8. Recognition of simple visual images using a sparse distributed memory: Some implementations and experiments

    NASA Technical Reports Server (NTRS)

    Jaeckel, Louis A.

    1990-01-01

    Previously, a method was described of representing a class of simple visual images so that they could be used with a Sparse Distributed Memory (SDM). Herein, two possible implementations are described of a SDM, for which these images, suitably encoded, will serve both as addresses to the memory and as data to be stored in the memory. A key feature of both implementations is that a pattern that is represented as an unordered set with a variable number of members can be used as an address to the memory. In the 1st model, an image is encoded as a 9072 bit string to be used as a read or write address; the bit string may also be used as data to be stored in the memory. Another representation, in which an image is encoded as a 256 bit string, may be used with either model as data to be stored in the memory, but not as an address. In the 2nd model, an image is not represented as a vector of fixed length to be used as an address. Instead, a rule is given for determining which memory locations are to be activated in response to an encoded image. This activation rule treats the pieces of an image as an unordered set. With this model, the memory can be simulated, based on a method of computing the approximate result of a read operation.
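The SDM mechanics underlying both implementations can be sketched at toy scale as a conventional Kanerva-style memory with a Hamming-radius activation rule (this sketch omits the image encoding described above; the sizes and radius are arbitrary illustrative choices):

```python
import random

# Toy sparse distributed memory: fixed random hard locations, activation of
# every location within a Hamming radius of the address, counter storage on
# writes, and a majority vote on reads.
random.seed(0)
N, M, RADIUS = 64, 200, 30   # word length, number of locations, radius
locations = [[random.getrandbits(1) for _ in range(N)] for _ in range(M)]
counters = [[0] * N for _ in range(M)]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def write(addr, data):
    for loc, ctr in zip(locations, counters):
        if hamming(addr, loc) <= RADIUS:          # location is activated
            for k, bit in enumerate(data):
                ctr[k] += 1 if bit else -1        # bump counters toward data

def read(addr):
    sums = [0] * N
    for loc, ctr in zip(locations, counters):
        if hamming(addr, loc) <= RADIUS:
            for k in range(N):
                sums[k] += ctr[k]
    return [1 if s > 0 else 0 for s in sums]      # majority vote per bit

pattern = [random.getrandbits(1) for _ in range(N)]
write(pattern, pattern)      # autoassociative store: address == data
recalled = read(pattern)
```

The second model in the abstract replaces the fixed-length address vector with an activation rule over unordered image pieces, but the write/read machinery on the activated locations is the same counter-and-vote scheme shown here.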

  9. Bidirectional reflectance modeling of non-homogeneous plant canopies

    NASA Technical Reports Server (NTRS)

    Norman, John M.

    1986-01-01

    The objective of this research is to develop a 3-dimensional radiative transfer model for predicting the bidirectional reflectance distribution function (BRDF) for heterogeneous vegetation canopies. Leaf bidirectional reflectance and transmittance distribution functions were measured for corn and soybean leaves. The measurements clearly show that leaves are complex scatterers and that considerable specular reflectance is possible. Because of the character of leaf reflectance, true leaf reflectance is larger than the nadir reflectances that are normally used to represent leaves. A 3-dimensional reflectance model, named BIGAR (Bidirectional General Array Model), was developed and compared with measurements from corn and soybean. The model is based on the concept that heterogeneous canopies can be described by a combination of many subcanopies, which contain all the foliage, and that these subcanopy envelopes can be characterized by ellipsoids of various sizes and shapes. The model/measurement comparisons indicate that this relatively simple model captures the essential character of row-crop BRDFs. Finally, two soil BRDF models were developed: one represents soil particles as rectangular blocks and the other represents soil particles as spheres. The sphere model was found to be superior.

  10. A simple, efficient polarizable coarse-grained water model for molecular dynamics simulations.

    PubMed

    Riniker, Sereina; van Gunsteren, Wilfred F

    2011-02-28

    The development of coarse-grained (CG) models that correctly represent the important features of compounds is essential to overcome the limitations in time scale and system size currently encountered in atomistic molecular dynamics simulations. Most approaches reported in the literature model one or several molecules into a single uncharged CG bead. For water, this implicit treatment of the electrostatic interactions, however, fails to mimic important properties, e.g., the dielectric screening. Therefore, a coarse-grained model for water is proposed which treats the electrostatic interactions between clusters of water molecules explicitly. Five water molecules are embedded in a spherical CG bead consisting of two oppositely charged particles which represent a dipole. The bond connecting the two particles in a bead is unconstrained, which makes the model polarizable. Experimental and all-atom simulated data of liquid water at room temperature are used for parametrization of the model. The experimental density and the relative static dielectric permittivity were chosen as primary target properties. The model properties are compared with those obtained from experiment, from clusters of simple-point-charge water molecules of appropriate size in the liquid phase, and for other CG water models if available. The comparison shows that not all atomistic properties can be reproduced by a CG model, so properties of key importance have to be selected when coarse graining is applied. Yet, the CG model reproduces the key characteristics of liquid water while being computationally 1-2 orders of magnitude more efficient than standard fine-grained atomistic water models.

  11. On Diffusive Climatological Models.

    NASA Astrophysics Data System (ADS)

    Griffel, D. H.; Drazin, P. G.

    1981-11-01

    A simple, zonally and annually averaged, energy-balance climatological model with diffusive heat transport and nonlinear albedo feedback is solved numerically. Some parameters of the model are varied, one by one, to find the resultant effects on the steady solution representing the climate. In particular, the outward radiation flux, the insolation distribution and the albedo parameterization are varied. We have found an accurate yet simple analytic expression for the mean annual insolation as a function of latitude and the obliquity of the Earth's rotation axis; this has enabled us to consider the effects of the oscillation of the obliquity. We have used a continuous albedo function which fits the observed values; it considerably reduces the sensitivity of the model. Climatic cycles, calculated by solving the time-dependent equation when parameters change slowly and periodically, are compared qualitatively with paleoclimatic records.
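The energy-balance idea can be sketched in zero dimensions (the paper's model is one-dimensional in latitude with diffusive transport; here transport is dropped and only the nonlinear albedo feedback is kept, with illustrative parameter values rather than the paper's):

```python
# 0-D energy balance: Q*(1 - albedo(T)) = A + B*T, with a continuous
# ice-albedo feedback.  The nonlinearity admits multiple steady states,
# the kind of structure probed by varying parameters in the abstract.
Q = 342.0           # global-mean insolation, W m^-2 (illustrative)
A, B = 202.0, 1.9   # linearized outgoing longwave A + B*T (T in deg C)

def albedo(T):
    """Continuous albedo: high when ice-covered, low when warm."""
    if T <= -10.0:
        return 0.62
    if T >= 0.0:
        return 0.30
    return 0.30 + (0.62 - 0.30) * (-T / 10.0)   # linear ramp in between

def net_flux(T):
    return Q * (1.0 - albedo(T)) - (A + B * T)  # absorbed minus emitted

def equilibrium(T_lo, T_hi, iters=80):
    """Bisection for a root of net_flux within a sign-changing bracket."""
    for _ in range(iters):
        mid = 0.5 * (T_lo + T_hi)
        if net_flux(T_lo) * net_flux(mid) <= 0.0:
            T_hi = mid
        else:
            T_lo = mid
    return 0.5 * (T_lo + T_hi)

warm = equilibrium(0.0, 40.0)      # present-day-like branch
cold = equilibrium(-60.0, -20.0)   # ice-covered branch
```

The coexistence of `warm` and `cold` roots is exactly why the steady solution's sensitivity to parameters (and to the albedo function's smoothness) matters in such models.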

  12. Charge Transfer Inefficiency in Pinned Photodiode CMOS image sensors: Simple Montecarlo modeling and experimental measurement based on a pulsed storage-gate method

    NASA Astrophysics Data System (ADS)

    Pelamatti, Alice; Goiffon, Vincent; Chabane, Aziouz; Magnan, Pierre; Virmontois, Cédric; Saint-Pé, Olivier; de Boisanger, Michel Breart

    2016-11-01

    The charge transfer time represents the bottleneck in terms of temporal resolution in Pinned Photodiode (PPD) CMOS image sensors. This work focuses on the modeling and estimation of this key parameter. A simple numerical model of charge transfer in PPDs is presented. The model is based on a Monte Carlo simulation and takes into account both charge diffusion in the PPD and the effect of potential obstacles along the charge transfer path. This work also presents a new experimental approach for the estimation of the charge transfer time, called the pulsed Storage Gate (SG) method. This method, which allows reproduction of a "worst-case" transfer condition, is based on dedicated SG pixel structures and is particularly suitable for comparing transfer efficiency performance across different pixel geometries.

  13. Mechanisms of Neuronal Computation in Mammalian Visual Cortex

    PubMed Central

    Priebe, Nicholas J.; Ferster, David

    2012-01-01

    Orientation selectivity in the primary visual cortex (V1) is a receptive field property that is at once simple enough to make it amenable to experimental and theoretical approaches and yet complex enough to represent a significant transformation in the representation of the visual image. As a result, V1 has become an area of choice for studying cortical computation and its underlying mechanisms. Here we consider the receptive field properties of the simple cells in cat V1—the cells that receive direct input from thalamic relay cells—and explore how these properties, many of which are highly nonlinear, arise. We have found that many receptive field properties of V1 simple cells fall directly out of Hubel and Wiesel’s feedforward model when the model incorporates realistic neuronal and synaptic mechanisms, including threshold, synaptic depression, response variability, and the membrane time constant. PMID:22841306

  14. Principles of protein folding--a perspective from simple exact models.

    PubMed Central

    Dill, K. A.; Bromberg, S.; Yue, K.; Fiebig, K. M.; Yee, D. P.; Thomas, P. D.; Chan, H. S.

    1995-01-01

    General principles of protein structure, stability, and folding kinetics have recently been explored in computer simulations of simple exact lattice models. These models represent protein chains at a rudimentary level, but they involve few parameters, approximations, or implicit biases, and they allow complete explorations of conformational and sequence spaces. Such simulations have resulted in testable predictions that are sometimes unanticipated: The folding code is mainly binary and delocalized throughout the amino acid sequence. The secondary and tertiary structures of a protein are specified mainly by the sequence of polar and nonpolar monomers. More specific interactions may refine the structure, rather than dominate the folding code. Simple exact models can account for the properties that characterize protein folding: two-state cooperativity, secondary and tertiary structures, and multistage folding kinetics--fast hydrophobic collapse followed by slower annealing. These studies suggest the possibility of creating "foldable" chain molecules other than proteins. The encoding of a unique compact chain conformation may not require amino acids; it may require only the ability to synthesize specific monomer sequences in which at least one monomer type is solvent-averse. PMID:7613459
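The "simple exact models" discussed above are typified by the 2-D HP lattice model: a chain of hydrophobic (H) and polar (P) monomers on a square lattice whose energy counts non-bonded H-H contacts. A minimal sketch of the energy function:

```python
# 2-D HP lattice model energy: -1 per pair of H monomers that are lattice
# neighbors but not adjacent along the chain.  Because the model is exact,
# small chains can be enumerated completely over conformation space.
def hp_energy(sequence, coords):
    """Energy of a self-avoiding conformation (one (x, y) per monomer)."""
    assert len(set(coords)) == len(coords), "conformation must be self-avoiding"
    occupied = {pos: i for i, pos in enumerate(coords)}
    energy = 0
    for i, (x, y) in enumerate(coords):
        for nb in ((x + 1, y), (x, y + 1)):   # count each contact once
            j = occupied.get(nb)
            if j is None or abs(i - j) == 1:
                continue                       # empty site or chain bond
            if sequence[i] == "H" and sequence[j] == "H":
                energy -= 1
    return energy

# A 4-mer folded into a square: the two terminal H residues become lattice
# neighbors and form the single stabilizing contact.
seq = "HPPH"
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
e = hp_energy(seq, square)
```

The binary H/P alphabet mirrors the abstract's claim that the folding code is mainly a binary, delocalized pattern of polar and nonpolar monomers.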

  15. CAD-based Automatic Modeling Method for Geant4 geometry model Through MCAM

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Nie, Fanzhi; Wang, Guozhong; Long, Pengcheng; LV, Zhongliang; LV, Zhongliang

    2014-06-01

    Geant4 is a widely used Monte Carlo transport simulation package. Before calculating using Geant4, the calculation model need be established which could be described by using Geometry Description Markup Language (GDML) or C++ language. However, it is time-consuming and error-prone to manually describe the models by GDML. Automatic modeling methods have been developed recently, but there are some problem existed in most of present modeling programs, specially some of them were not accurate or adapted to specifically CAD format. To convert the GDML format models to CAD format accurately, a Geant4 Computer Aided Design (CAD) based modeling method was developed for automatically converting complex CAD geometry model into GDML geometry model. The essence of this method was dealing with CAD model represented with boundary representation (B-REP) and GDML model represented with constructive solid geometry (CSG). At first, CAD model was decomposed to several simple solids which had only one close shell. And then the simple solid was decomposed to convex shell set. Then corresponding GDML convex basic solids were generated by the boundary surfaces getting from the topological characteristic of a convex shell. After the generation of these solids, GDML model was accomplished with series boolean operations. This method was adopted in CAD/Image-based Automatic Modeling Program for Neutronics & Radiation Transport (MCAM), and tested with several models including the examples in Geant4 install package. The results showed that this method could convert standard CAD model accurately, and can be used for Geant4 automatic modeling.

  16. SUSTAIN: a network model of category learning.

    PubMed

    Love, Bradley C; Medin, Douglas L; Gureckis, Todd M

    2004-04-01

    SUSTAIN (Supervised and Unsupervised STratified Adaptive Incremental Network) is a model of how humans learn categories from examples. SUSTAIN initially assumes a simple category structure. If simple solutions prove inadequate and SUSTAIN is confronted with a surprising event (e.g., it is told that a bat is a mammal instead of a bird), SUSTAIN recruits an additional cluster to represent the surprising event. Newly recruited clusters are available to explain future events and can themselves evolve into prototypes/attractors/rules. SUSTAIN's discovery of category substructure is affected not only by the structure of the world but by the nature of the learning task and the learner's goals. SUSTAIN successfully extends category learning models to studies of inference learning, unsupervised learning, category construction, and contexts in which identification learning is faster than classification learning.
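The recruitment principle can be caricatured in a few lines. This is a toy sketch of the idea only, not SUSTAIN's published equations (which involve attention-weighted activations and cluster position updates): clusters are labeled prototypes, and a surprising event, meaning a misclassified input, recruits a new cluster.

```python
# Toy cluster-recruitment learner in the spirit of SUSTAIN's "start simple,
# recruit on surprise" principle (hypothetical simplification).
def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

class MiniSustain:
    def __init__(self):
        self.clusters = []   # list of (prototype, label) pairs

    def predict(self, item):
        if not self.clusters:
            return None
        proto, label = min(self.clusters, key=lambda c: distance(c[0], item))
        return label

    def learn(self, item, label):
        if self.predict(item) != label:              # surprising event
            self.clusters.append((list(item), label))  # recruit a cluster

net = MiniSustain()
# Features: (has_wings, flies, gives_milk).  The bat is surprising among
# birds: bird-like features but the label "mammal", so it gets its own cluster.
trials = [((1, 1, 0), "bird"), ((1, 1, 0), "bird"),
          ((1, 1, 1), "mammal"),
          ((0, 0, 1), "mammal")]
for item, label in trials:
    net.learn(item, label)
```

After training, the network holds exactly two clusters: one for typical birds and one recruited for the surprising bat, which then also explains the later mammal example.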

  17. A New Publicly Available Chemical Query Language, CSRML, to support Chemotype Representations for Application to Data-Mining and Modeling

    EPA Science Inventory

    A new XML-based query language, CSRML, has been developed for representing chemical substructures, molecules, reaction rules, and reactions. CSRML queries are capable of integrating additional forms of information beyond the simple substructure (e.g., SMARTS) or reaction transfor...

  18. A Simple Demonstration of Atomic and Molecular Orbitals Using Circular Magnets

    ERIC Educational Resources Information Center

    Chakraborty, Maharudra; Mukhopadhyay, Subrata; Das, Ranendu Sekhar

    2014-01-01

    A quite simple and inexpensive technique is described here to represent the approximate shapes of atomic orbitals and the molecular orbitals formed by them following the principles of the linear combination of atomic orbitals (LCAO) method. Molecular orbitals of a few simple molecules can also be pictorially represented. Instructors can employ the…

  19. Inherent limitations of probabilistic models for protein-DNA binding specificity

    PubMed Central

    Ruan, Shuxiang

    2017-01-01

    The specificities of transcription factors are most commonly represented with probabilistic models. These models provide a probability for each base occurring at each position within the binding site and the positions are assumed to contribute independently. The model is simple and intuitive and is the basis for many motif discovery algorithms. However, the model also has inherent limitations that prevent it from accurately representing true binding probabilities, especially for the highest affinity sites under conditions of high protein concentration. The limitations are not due to the assumption of independence between positions but rather are caused by the non-linear relationship between binding affinity and binding probability and the fact that independent normalization at each position skews the site probabilities. Generally probabilistic models are reasonably good approximations, but new high-throughput methods allow for biophysical models with increased accuracy that should be used whenever possible. PMID:28686588
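The probabilistic representation critiqued above, and the non-linear score-to-occupancy relationship that undermines it at high protein concentration, can both be sketched briefly. The matrix values below are invented for illustration, and the logistic occupancy function is a standard simplification, not the paper's model.

```python
import math

# Position weight matrix (PWM): independent per-position probabilities,
# scored as log-likelihood ratios against a uniform background.
pwm = {
    0: {"A": 0.7, "C": 0.1, "G": 0.1, "T": 0.1},
    1: {"A": 0.1, "C": 0.1, "G": 0.1, "T": 0.7},
    2: {"A": 0.25, "C": 0.25, "G": 0.25, "T": 0.25},
}

def pwm_score(site, background=0.25):
    """Sum of per-position log-odds (positions assumed independent)."""
    return sum(math.log(pwm[i][b] / background) for i, b in enumerate(site))

def p_bound(site, mu=0.0):
    """Binding probability as a logistic (non-linear) function of score;
    mu stands in for protein concentration / chemical potential."""
    return 1.0 / (1.0 + math.exp(-(pwm_score(site) + mu)))
```

The non-linearity is the abstract's point: at high `mu` the best sites all saturate near occupancy 1, so their measured binding probabilities no longer mirror the per-position probabilities a PWM would assign.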

  20. Strategy Space Exploration of a Multi-Agent Model for the Labor Market

    NASA Astrophysics Data System (ADS)

    de Grande, Pablo; Eguia, Manuel

    We present a multi-agent system where typical labor market mechanisms emerge. Based on a few simple rules, our model allows for different interpretative paradigms to be represented and for different scenarios to be tried out. We thoroughly explore the space of possible strategies both for those unemployed and for companies and analyze the trade-off between these strategies regarding global social and economical indicators.

  1. Vehicle Concept Model Abstractions for Integrated Geometric, Inertial, Rigid Body, Powertrain, and FE Analysis

    DTIC Science & Technology

    2011-01-01

    refinement of the vehicle body structure through quantitative assessment of stiffness and modal parameter changes resulting from modifications to the beam...differential placed on the axle, adjustment of the torque output to the opposite wheel may be required to obtain the correct solution. Thus...represented by simple inertial components with appropriate model connectivity instead to determine the free modal response of powertrain type

  2. The time series approach to short term load forecasting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hagan, M.T.; Behr, S.M.

    The application of time series analysis methods to load forecasting is reviewed. It is shown that Box and Jenkins time series models, in particular, are well suited to this application. The logical and organized procedures for model development using the autocorrelation function make these models particularly attractive. One of the drawbacks of these models is their inability to accurately represent the nonlinear relationship between load and temperature. A simple procedure for overcoming this difficulty is introduced, and several Box and Jenkins models are compared with a forecasting procedure currently used by a utility company.
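The simplest member of the Box and Jenkins family, an AR(1) model y_t = c + phi*y_(t-1) + e_t, can be fitted by least squares in a few lines. This is an illustrative sketch on a synthetic, noise-free series (real load forecasting would use a statistics package and the autocorrelation-based identification procedure described above):

```python
# Least-squares fit of an AR(1) model via the normal equations, then a
# multi-step forecast.  Data are synthetic so the fit can be checked
# against the known generating parameters.
def fit_ar1(series):
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    phi = (sum((a - mx) * (b - my) for a, b in zip(x, y))
           / sum((a - mx) ** 2 for a in x))
    c = my - phi * mx
    return c, phi

def forecast(series, steps, c, phi):
    out, last = [], series[-1]
    for _ in range(steps):
        last = c + phi * last
        out.append(last)
    return out

# Synthetic "load" from a known AR(1): c = 10, phi = 0.8 (noise-free here).
data = [0.0]
for _ in range(200):
    data.append(10.0 + 0.8 * data[-1])
c_hat, phi_hat = fit_ar1(data)
```

On noise-free data the estimates recover the generating parameters essentially exactly; the review's point about temperature would show up here as structure in the residuals that no linear AR term can absorb.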

  3. Testing the structure of a hydrological model using Genetic Programming

    NASA Astrophysics Data System (ADS)

    Selle, Benny; Muttil, Nitin

    2011-01-01

    Genetic Programming is able to systematically explore many alternative model structures of different complexity from available input and response data. We hypothesised that Genetic Programming can be used to test the structure of hydrological models and to identify dominant processes in hydrological systems. To test this, Genetic Programming was used to analyse a data set from a lysimeter experiment in southeastern Australia. The lysimeter experiment was conducted to quantify the deep percolation response under surface-irrigated pasture to different soil types, watertable depths and water ponding times during surface irrigation. Using Genetic Programming, a simple model of deep percolation was recurrently evolved in multiple Genetic Programming runs. This simple and interpretable model supported the dominant process contributing to deep percolation represented in a conceptual model that was published earlier. Thus, this study shows that Genetic Programming can be used to evaluate the structure of hydrological models and to gain insight into the dominant processes in hydrological systems.

  4. Culture and Demography: From Reluctant Bedfellows to Committed Partners

    PubMed Central

    Bachrach, Christine A.

    2015-01-01

    Demography and culture have had a long but ambivalent relationship. Cultural influences are widely recognized as important for demographic outcomes, but are often “backgrounded” in demographic research. I argue that progress towards a more successful integration is feasible and suggest a network model of culture as a potential tool. The network model bridges both traditional (holistic and institutional) and contemporary (tool kit) models of culture used in the social sciences and offers a simple vocabulary for the diverse set of cultural concepts such as attitudes, beliefs and norms, and quantitative measures of how culture is organized. The proposed model conceptualizes culture as a nested network of meanings which are represented by schemas that range in complexity from simple concepts to multifaceted cultural models. I illustrate the potential value of a model using accounts of the cultural changes underpinning the transformation of marriage in the U.S. and point to developments in the social, cognitive and computational sciences that could facilitate the application of the model in empirical demographic research. PMID:24338643

  5. Culture and demography: from reluctant bedfellows to committed partners.

    PubMed

    Bachrach, Christine A

    2014-02-01

    Demography and culture have had a long but ambivalent relationship. Cultural influences are widely recognized as important for demographic outcomes but are often "backgrounded" in demographic research. I argue that progress toward a more successful integration is feasible and suggest a network model of culture as a potential tool. The network model bridges both traditional (holistic and institutional) and contemporary (tool kit) models of culture used in the social sciences and offers a simple vocabulary for a diverse set of cultural concepts, such as attitudes, beliefs, and norms, as well as quantitative measures of how culture is organized. The proposed model conceptualizes culture as a nested network of meanings represented by schemas that range in complexity from simple concepts to multifaceted cultural models. I illustrate the potential value of a model using accounts of the cultural changes underpinning the transformation of marriage in the United States and point to developments in the social, cognitive, and computational sciences that could facilitate the application of the model in empirical demographic research.

  6. A New Canopy Integration Factor

    NASA Astrophysics Data System (ADS)

    Badgley, G.; Anderegg, L. D. L.; Baker, I. T.; Berry, J. A.

    2017-12-01

    Ecosystem modelers have long debated how best to represent within-canopy heterogeneity. Can one big leaf represent the full range of canopy physiological responses? Or do you need two leaves - sun and shade - to get things right? Is it sufficient to treat the canopy as a diffuse medium? Or would it be better to explicitly represent separate canopy layers? These are open questions that have been the subject of an enormous amount of research and scrutiny. Yet regardless of how the canopy is represented, each model must grapple with correctly parameterizing its canopy in a way that properly translates leaf-level processes to the canopy and ecosystem scale. We present a new approach for integrating whole-canopy biochemistry by combining remote sensing with ecological theory. Using the Simple Biosphere model (SiB), we redefined how SiB scales photosynthetic processes from leaf to canopy as a function of satellite-derived measurements of solar-induced chlorophyll fluorescence (SIF). Across multiple long-term study sites, our approach improves the accuracy of daily modeled photosynthesis by as much as 25 percent. We share additional insights on how SIF might be more directly integrated into photosynthesis models, and present ideas for harnessing SIF to more accurately parameterize canopy biochemical variables.

  7. Wave propagation in equivalent continuums representing truss lattice materials

    DOE PAGES

    Messner, Mark C.; Barham, Matthew I.; Kumar, Mukul; ...

    2015-07-29

    Stiffness scales linearly with density in stretch-dominated lattice meta-materials, offering the possibility of very light yet very stiff structures. Current additive manufacturing techniques can assemble structures from lattice materials, but the design of such structures will require accurate, efficient simulation methods. Equivalent continuum models have several advantages over discrete truss models of stretch-dominated lattices, including computational efficiency and ease of model construction. However, the development of an equivalent model suitable for representing the dynamic response of a periodic truss in the small-deformation regime is complicated by microinertial effects. This study derives a dynamic equivalent continuum model for periodic truss structures suitable for representing long-wavelength wave propagation and verifies it against the full Bloch wave theory and detailed finite element simulations. The model must incorporate microinertial effects to accurately reproduce long-wavelength characteristics of the response such as anisotropic elastic sound speeds. Finally, the formulation presented here also improves upon previous work by preserving equilibrium at truss joints for simple lattices and by improving numerical stability by eliminating vertices in the effective yield surface.

  8. Guidelines and Procedures for Computing Time-Series Suspended-Sediment Concentrations and Loads from In-Stream Turbidity-Sensor and Streamflow Data

    USGS Publications Warehouse

    Rasmussen, Patrick P.; Gray, John R.; Glysson, G. Douglas; Ziegler, Andrew C.

    2009-01-01

    In-stream continuous turbidity and streamflow data, calibrated with measured suspended-sediment concentration data, can be used to compute a time series of suspended-sediment concentration and load at a stream site. Development of a simple linear (ordinary least squares) regression model for computing suspended-sediment concentrations from instantaneous turbidity data is the first step in the computation process. If the model standard percentage error (MSPE) of the simple linear regression model meets a minimum criterion, this model should be used to compute a time series of suspended-sediment concentrations. Otherwise, a multiple linear regression model using paired instantaneous turbidity and streamflow data is developed and compared to the simple regression model. If the inclusion of the streamflow variable proves to be statistically significant and the uncertainty associated with the multiple regression model results in an improvement over that for the simple linear model, the turbidity-streamflow multiple linear regression model should be used to compute a suspended-sediment concentration time series. The computed concentration time series is subsequently used with its paired streamflow time series to compute suspended-sediment loads by standard U.S. Geological Survey techniques. Once an acceptable regression model is developed, it can be used to compute suspended-sediment concentration beyond the period of record used in model development with proper ongoing collection and analysis of calibration samples. Regression models to compute suspended-sediment concentrations are generally site specific and should never be considered static, but they represent a set period in a continually dynamic system in which additional data will help verify any change in sediment load, type, and source.
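    The first stage of the procedure described above can be sketched in a few lines. The calibration pairs below are invented, and the MSPE formula shown (residual standard error as a percentage of the mean observed concentration) is one plausible reading of the report's criterion, not a quotation of it:

```python
import math

def fit_simple_ols(x, y):
    """Ordinary least squares fit of y = a + b*x (simple linear regression)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

def model_spe(x, y, a, b):
    """Residual standard error as a percentage of the mean observation
    (an assumed stand-in for the report's 'model standard percentage error')."""
    sse = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    rmse = math.sqrt(sse / (len(x) - 2))
    return 100.0 * rmse / (sum(y) / len(y))

# Hypothetical calibration pairs: turbidity (FNU) vs. measured SSC (mg/L)
turb = [10, 25, 40, 80, 120, 200]
ssc = [18, 45, 70, 150, 230, 390]
a, b = fit_simple_ols(turb, ssc)
```

If the computed MSPE exceeded the minimum criterion, the next step per the guidelines would be to add streamflow as a second predictor and compare the two models.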

  9. Astroblaster--A Fascinating Game of Multi-Ball Collisions

    ERIC Educational Resources Information Center

    Kires, Marian

    2009-01-01

    Multi-ball collisions inside the Astroblaster toy are explained from the conservation of momentum point of view. The important role of the coefficient of restitution is demonstrated in ideal and real cases. Real experimental results with the simple toy can be compared with a computer model represented by an interactive Java applet. (Contains 1…

  10. A Simple Approach to Inference in Covariance Structure Modeling with Missing Data: Bayesian Analysis. Project 2.4, Quantitative Models To Monitor the Status and Progress of Learning and Performance and Their Antecedents.

    ERIC Educational Resources Information Center

    Muthen, Bengt

    This paper investigates methods that avoid using multiple groups to represent the missing data patterns in covariance structure modeling, attempting instead to do a single-group analysis where the only action the analyst has to take is to indicate that data is missing. A new covariance structure approach developed by B. Muthen and G. Arminger is…

  11. An Eddy-Diffusivity Mass-flux (EDMF) closure for the unified representation of cloud and convective processes

    NASA Astrophysics Data System (ADS)

    Tan, Z.; Schneider, T.; Teixeira, J.; Lam, R.; Pressel, K. G.

    2014-12-01

    Sub-grid scale (SGS) closures in current climate models are usually decomposed into several largely independent parameterization schemes for different cloud and convective processes, such as boundary layer turbulence, shallow convection, and deep convection. These separate parameterizations usually do not converge as the resolution is increased or as physical limits are taken. This makes it difficult to represent the interactions and smooth transitions among different cloud and convective regimes. Here we present an eddy-diffusivity mass-flux (EDMF) closure that represents all sub-grid scale turbulent, convective, and cloud processes in a unified parameterization scheme. The buoyant updrafts and precipitative downdrafts are parameterized with a prognostic multiple-plume mass-flux (MF) scheme. The prognostic term for the mass flux is kept so that the life cycles of convective plumes are better represented. The interaction between updrafts and downdrafts is parameterized with the buoyancy-sorting model. The turbulent mixing outside plumes is represented by eddy diffusion, in which the eddy diffusivity (ED) is determined from a turbulent kinetic energy (TKE) balance that couples the environment with updrafts and downdrafts. Similarly, tracer variances are decomposed consistently between updrafts, downdrafts, and the environment. The closure is internally coupled with a probabilistic cloud scheme and a simple precipitation scheme. We have also developed a relatively simple two-stream radiative scheme that includes the longwave (LW) and shortwave (SW) effects of clouds, and the LW effect of water vapor. We have tested this closure in a single-column model for various regimes spanning stratocumulus, shallow cumulus, and deep convection. The model is also run towards statistical equilibrium with climatologically relevant large-scale forcings. These model tests are validated against large-eddy simulations (LES) with the same forcings. The comparison of results verifies the capacity of this closure to realistically represent different cloud and convective processes. Implementation of the closure in an idealized GCM allows us to study cloud feedbacks to climate change and the interactions between clouds, convection, and the large-scale circulation.

  12. Forgetting in immediate serial recall: decay, temporal distinctiveness, or interference?

    PubMed

    Oberauer, Klaus; Lewandowsky, Stephan

    2008-07-01

    Three hypotheses of forgetting from immediate memory were tested: time-based decay, decreasing temporal distinctiveness, and interference. The hypotheses were represented by 3 models of serial recall: the primacy model, the SIMPLE (scale-independent memory, perception, and learning) model, and the SOB (serial order in a box) model, respectively. The models were fit to 2 experiments investigating the effect of filled delays between items at encoding or at recall. Short delays between items, filled with articulatory suppression, led to massive impairment of memory relative to a no-delay baseline. Extending the delays had little additional effect, suggesting that the passage of time alone does not cause forgetting. Adding a choice reaction task in the delay periods to block attention-based rehearsal did not change these results. The interference-based SOB fit the data best; the primacy model overpredicted the effect of lengthening delays, and SIMPLE was unable to explain the effect of delays at encoding. The authors conclude that purely temporal views of forgetting are inadequate. Copyright (c) 2008 APA, all rights reserved.

  13. A simple model of the effect of ocean ventilation on ocean heat uptake

    NASA Astrophysics Data System (ADS)

    Nadiga, Balu; Urban, Nathan

    2017-11-01

    Transport of water from the surface mixed layer into the ocean interior is achieved, in large part, by the process of ventilation, a process associated with outcropping isopycnals. Starting from such a configuration of outcropping isopycnals, we derive a simple model of the effect of ventilation on ocean uptake of anomalous radiative forcing. This model can be seen as an improvement on the popular anomaly-diffusing class of energy balance models (AD-EBM) that are routinely employed to analyze and emulate the warming response of both the observed and the simulated Earth system. We demonstrate that neither multi-layer nor continuous-diffusion AD-EBM variants can properly represent both surface warming and the vertical distribution of ocean heat uptake. The new model overcomes this deficiency. The simplicity of the models notwithstanding, the analysis presented and the necessity of the modification are indicative of the role played by processes related to the down-welling branch of the global ocean circulation in shaping the vertical distribution of ocean heat uptake.
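    The abstract does not give the model equations. For context, a minimal two-layer anomaly-diffusing energy balance model, of the kind the authors argue needs modification, can be sketched as follows (all parameter values are illustrative, with heat capacities in W yr m⁻² K⁻¹):

```python
def two_layer_ebm(forcing, lam=1.3, gamma=0.7, c_s=8.0, c_d=100.0, dt=1.0):
    """Integrate a two-layer energy balance model (surface + deep ocean).
    forcing: radiative forcing anomalies (W/m^2), one value per year.
    lam: climate feedback; gamma: deep-ocean heat uptake efficiency.
    Returns surface and deep temperature anomaly series (K)."""
    ts, td = 0.0, 0.0
    out_s, out_d = [], []
    for f in forcing:
        heat_uptake = gamma * (ts - td)            # flux into the deep ocean
        ts += dt * (f - lam * ts - heat_uptake) / c_s
        td += dt * heat_uptake / c_d
        out_s.append(ts)
        out_d.append(td)
    return out_s, out_d

# Step forcing of 3.7 W/m^2 (roughly CO2 doubling), held for 200 years
s, d = two_layer_ebm([3.7] * 200)
```

The surface layer warms quickly toward the equilibrium F/lam while the deep layer lags, which is the vertical-distribution behavior the abstract says such models get only partly right.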

  14. Experimental characterization of post rigor mortis human muscle subjected to small tensile strains and application of a simple hyper-viscoelastic model.

    PubMed

    Gras, Laure-Lise; Laporte, Sébastien; Viot, Philippe; Mitton, David

    2014-10-01

    In models developed for impact biomechanics, muscles are usually represented with one-dimensional elements having active and passive properties. The passive properties of muscles are most often obtained from experiments performed on animal muscles, because limited data on human muscle are available. The aim of this study is thus to characterize the passive response of a human muscle in tension. Tensile tests at different strain rates (0.0045, 0.045, and 0.45 s⁻¹) were performed on 10 extensor carpi ulnaris muscles. A model composed of a nonlinear element defined with an exponential law in parallel with one or two Maxwell elements and considering basic geometrical features was proposed. The experimental results were used to identify the parameters of the model. The results for the first- and second-order model were similar. For the first-order model, the mean parameters of the exponential law are as follows: Young's modulus E (6.8 MPa) and curvature parameter α (31.6). The Maxwell element mean values are as follows: viscosity parameter η (1.2 MPa s) and relaxation time τ (0.25 s). Our results provide new data on a human muscle tested in vitro and a simple model with basic geometrical features that represent its behavior in tension under three different strain rates. This approach could be used to assess the behavior of other human muscles. © IMechE 2014.
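    Using the mean first-order parameters reported above, the tensile response can be sketched under constant strain rate. The functional forms below are an assumption (the abstract names only an 'exponential law' in parallel with a Maxwell element), so this is a plausible reconstruction, not the authors' implementation:

```python
import math

# Mean parameters reported in the abstract (first-order model)
E, ALPHA = 6.8, 31.6        # Young's modulus (MPa), curvature (dimensionless)
ETA, TAU = 1.2, 0.25        # viscosity (MPa*s), relaxation time (s)

def stress(strain, strain_rate):
    """Total stress (MPa) under constant strain rate: an exponential
    hyperelastic branch in parallel with one Maxwell branch. The exact
    forms are assumptions consistent with the stated parameters."""
    t = strain / strain_rate                       # time to reach this strain
    elastic = (E / ALPHA) * (math.exp(ALPHA * strain) - 1.0)
    viscous = ETA * strain_rate * (1.0 - math.exp(-t / TAU))
    return elastic + viscous

# Compare the three tested strain rates at 5% strain
responses = {rate: stress(0.05, rate) for rate in (0.0045, 0.045, 0.45)}
```

The Maxwell branch makes the response rate-dependent: the same strain produces higher stress at the faster rates, as in the experiments.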

  15. Non-Relative Value Unit-Generating Activities Represent One-Fifth of Academic Neuroradiologist Productivity.

    PubMed

    Wintermark, M; Zeineh, M; Zaharchuk, G; Srivastava, A; Fischbein, N

    2016-07-01

    A neuroradiologist's activity includes many tasks beyond interpreting relative value unit-generating imaging studies. Our aim was to test a simple method to record and quantify the non-relative value unit-generating clinical activity represented by consults and clinical conferences, including tumor boards. Four full-time neuroradiologists, working an average of 50% clinical and 50% academic activity, systematically recorded all the non-relative value unit-generating consults and conferences in which they were involved during 3 months by using a simple Web-based application accessible from smartphones, tablets, or computers. The number and type of imaging studies they interpreted during the same period and the associated relative value units were extracted from our billing system. During 3 months, the 4 neuroradiologists working an average of 50% clinical activity interpreted 4241 relative value unit-generating imaging studies, representing 8152 work relative value units. During the same period, they recorded 792 non-relative value unit-generating study reviews as part of consults and conferences (not including reading room consults), representing 19% of the interpreted relative value unit-generating imaging studies. We propose a simple Web-based smartphone app to record and quantify non-relative value unit-generating activities including consults, clinical conferences, and tumor boards. The quantification of non-relative value unit-generating activities is paramount in this time of a paradigm shift from volume to value. It also represents an important tool for determining staffing levels, which cannot be determined on the basis of relative value units only, considering the importance of time spent by radiologists on non-relative value unit-generating activities. It may also influence payment models from medical centers to radiology departments or practices. © 2016 by American Journal of Neuroradiology.

  16. Field validation of a free-agent cellular automata model of fire spread with fire–atmosphere coupling

    Treesearch

    Gary Achtemeier

    2012-01-01

    A cellular automata fire model represents ‘elements’ of fire by autonomous agents. A few simple algebraic expressions substituted for complex physical and meteorological processes and solved iteratively yield simulations for ‘super-diffusive’ fire spread and coupled surface-layer (2-m) fire–atmosphere processes. Pressure anomalies, which are integrals of the thermal...

  17. A Simple Economic Model of Cocaine Production

    DTIC Science & Technology

    1994-01-01

    University of New Mexico Press: Albuquerque (revised edition, 1988). "Plan de Ejecución del Proyecto de Desarrollo Rural Integral del Alto Huallaga." (Remainder of the scanned abstract is not legible.)

  18. A cognitive-consistency based model of population wide attitude change.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lakkaraju, Kiran; Speed, Ann Elizabeth

    Attitudes play a significant role in determining how individuals process information and behave. In this paper we have developed a new computational model of population-wide attitude change that captures the social level, how individuals interact and communicate information, and the cognitive level, how attitudes and concepts interact with each other. The model captures the cognitive aspect by representing each individual as a parallel constraint satisfaction network. The dynamics of this model are explored through a simple attitude change experiment in which we vary the social network and the distribution of attitudes in a population.
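    A minimal sketch of the cognitive level, one individual represented as a parallel constraint satisfaction network, might look as follows; the weights, update rule, and three-concept topology are illustrative assumptions, not the authors' specification:

```python
import random

def settle(weights, act, steps=600, seed=1):
    """Relax a parallel constraint satisfaction network: each randomly
    chosen node moves toward the weighted sum of its neighbours'
    activations, clipped to [-1, 1]. Positive weights encode consistent
    concepts, negative weights conflicting ones."""
    rng = random.Random(seed)
    act = list(act)
    n = len(act)
    for _ in range(steps):
        i = rng.randrange(n)
        net = sum(weights[i][j] * act[j] for j in range(n) if j != i)
        act[i] = max(-1.0, min(1.0, act[i] + 0.1 * net))
    return act

# Hypothetical 3-concept attitude network: concepts 0 and 1 support each
# other; concept 2 conflicts with both.
w = [[0.0, 0.8, -0.6],
     [0.8, 0.0, -0.6],
     [-0.6, -0.6, 0.0]]
final = settle(w, [0.2, 0.1, 0.5])
```

The network settles into a coherent state in which the two mutually supporting concepts end up with the same sign and the conflicting concept with the opposite sign, which is the consistency-seeking behavior the model builds on.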

  19. Unity of quarks and leptons at the TeV scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foot, R.; Lew, H.

    1990-08-01

    The gauge group (SU(3))² ⊗ (SU(2))² ⊗ (U(1)_{Y'})³, supplemented by quark-lepton, left-right, and generation discrete symmetries, represents a new approach to the understanding of the particle content of the standard model. In particular, as a result of the large number of symmetries, the fermion sector of the model is very simple. After symmetry breaking, the standard model can be shown to emerge from this highly symmetric model at low energies.

  20. Retention performance of green roofs in representative climates worldwide

    NASA Astrophysics Data System (ADS)

    Viola, F.; Hellies, M.; Deidda, R.

    2017-10-01

    The ongoing process of global urbanization contributes to an increase in stormwater runoff from impervious surfaces, threatening water quality as well. Green roofs have proved to be innovative stormwater management measures that partially restore natural conditions, enhancing interception, infiltration and evapotranspiration fluxes. The amount of water that is retained within green roofs depends not only on their depth, but also on the climate, which drives the stochastic soil moisture dynamics. In this context, a simple tool for assessing the performance of green roofs worldwide in terms of retained water is still missing and highly desirable for practical assessments. The aim of this work is to explore the retention performance of green roofs as a function of their depth and in different climate regimes. Two soil depths are investigated, one representing the intensive configuration and another representing the extensive one. The role of the climate in driving water retention has been represented by rainfall and potential evapotranspiration dynamics. A simple conceptual weather generator has been implemented and used for stochastic simulation of daily rainfall and potential evapotranspiration. The stochastic forcing is used as input to a simple conceptual hydrological model for estimating the long-term water partitioning between rainfall, runoff and actual evapotranspiration. Coupling the stochastic weather generator with the conceptual hydrological model, we assessed the amount of rainfall diverted into evapotranspiration for different combinations of annual rainfall and potential evapotranspiration in five representative climatic regimes. The results quantified the capability of green roofs to retain rainfall and consequently to reduce discharges into sewer systems at an annual time scale. The substrate depth has been recognized as crucial in determining green roofs' retention performance, which in general increases from extensive to intensive settings. Looking at the role of climatic conditions, namely annual rainfall, potential evapotranspiration and their seasonal cycles, we found that they drive green roofs' retention performance, which is maximal when rainfall and temperature are in phase. Finally, we provide design charts for a first approximation of the possible hydrological benefits deriving from the implementation of intensive or extensive green roofs in different world areas. As an example, 25 big cities are indicated as benchmark case studies.
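    A toy version of the coupled weather generator and conceptual bucket model might look as follows. The rainfall process, the storage-dependent evapotranspiration rule, and every parameter value are assumptions for illustration, not those of the cited study:

```python
import random

def green_roof_retention(depth_mm, annual_rain, annual_pet, years=20, seed=0):
    """Daily bucket model of a green roof: rain fills the substrate
    (plant-available capacity assumed to be 40% of depth), evapotranspiration
    empties it in proportion to relative moisture, and overflow leaves as
    runoff. Returns the fraction of rainfall retained (not discharged)."""
    rng = random.Random(seed)
    p_wet = 0.3                                   # chance a day is wet
    wet_mean = annual_rain / (365 * p_wet)        # mean rain on wet days, mm
    cap = 0.4 * depth_mm                          # storage capacity, mm
    pet_day = annual_pet / 365
    s = cap / 2
    rain_tot = runoff_tot = 0.0
    for _ in range(365 * years):
        rain = rng.expovariate(1 / wet_mean) if rng.random() < p_wet else 0.0
        s += rain
        runoff = max(0.0, s - cap)                # overflow above capacity
        s -= runoff
        s -= pet_day * s / cap                    # moisture-limited ET
        rain_tot += rain
        runoff_tot += runoff
    return 1.0 - runoff_tot / rain_tot

ext = green_roof_retention(50, annual_rain=800, annual_pet=900)    # extensive
intn = green_roof_retention(400, annual_rain=800, annual_pet=900)  # intensive
```

Even this crude sketch reproduces the qualitative finding that retention increases from the extensive to the intensive configuration.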

  1. Leveraging the UML Metamodel: Expressing ORM Semantics Using a UML Profile

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    CUYLER,DAVID S.

    2000-11-01

    Object Role Modeling (ORM) techniques produce a detailed domain model from the perspective of the business owner/customer. The typical process begins with a set of simple sentences reflecting facts about the business. The output of the process is a single model representing primarily the persistent information needs of the business. This type of model contains little, if any, reference to a targeted computerized implementation. It is a model of business entities, not of software classes. Through well-defined procedures, an ORM model can be transformed into a high-quality object or relational schema.

  2. A simulation of water pollution model parameter estimation

    NASA Technical Reports Server (NTRS)

    Kibler, J. F.

    1976-01-01

    A parameter estimation procedure for a water pollution transport model is elaborated. A two-dimensional instantaneous-release shear-diffusion model serves as representative of a simple transport process. Pollution concentration levels are arrived at via modeling of a remote-sensing system. The remote-sensed data are simulated by adding Gaussian noise to the concentration level values generated via the transport model. Model parameters are estimated from the simulated data using a least-squares batch processor. Resolution, sensor array size, and number and location of sensor readings can be found from the accuracies of the parameter estimates.
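    The simulate-then-estimate loop described above can be sketched as follows, substituting an isotropic instantaneous-release diffusion solution for the paper's shear-diffusion model and a brute-force grid search for the batch least-squares processor (all values are invented):

```python
import math
import random

def plume(x, y, t, mass, k):
    """2D instantaneous-release diffusion solution (isotropic, no shear):
    a simplification of the paper's shear-diffusion transport model."""
    return mass / (4 * math.pi * k * t) * math.exp(-(x * x + y * y) / (4 * k * t))

# Simulate remote-sensed data: truth plus Gaussian sensor noise
rng = random.Random(42)
TRUE_M, TRUE_K, T = 100.0, 2.0, 5.0
grid = [(x, y) for x in range(-5, 6) for y in range(-5, 6)]
data = [plume(x, y, T, TRUE_M, TRUE_K) + rng.gauss(0, 0.01) for x, y in grid]

# Least-squares estimation by brute-force grid search over (mass, k)
best = min(
    ((m / 10, k / 10) for m in range(500, 1500, 10) for k in range(10, 40)),
    key=lambda p: sum((plume(x, y, T, p[0], p[1]) - d) ** 2
                      for (x, y), d in zip(grid, data)),
)
```

With modest sensor noise the recovered parameters land close to the true values; widening the noise or thinning the grid degrades the estimates, which is the sensitivity the paper exploits to size the sensor array.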

  3. pyhector: A Python interface for the simple climate model Hector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    N Willner, Sven; Hartin, Corinne; Gieseke, Robert

    2017-04-01

    Pyhector is a Python interface for the simple climate model Hector (Hartin et al. 2015), which is developed in C++. Simple climate models like Hector can, for instance, be used in the analysis of scenarios within integrated assessment models like GCAM, in the emulation of complex climate models, and in uncertainty analyses. Hector is an open-source, object-oriented, simple global climate carbon-cycle model. Its carbon cycle consists of a one-pool atmosphere, three terrestrial pools which can be broken down into finer biomes or regions, and four carbon pools in the ocean component. The terrestrial carbon cycle includes primary production and respiration fluxes. The ocean carbon cycle circulates carbon via a simplified thermohaline circulation, calculating air-sea fluxes as well as the marine carbonate system (Hartin et al. 2016). The model input is time series of greenhouse gas emissions; as example scenarios for these, the Pyhector package contains the Representative Concentration Pathways (RCPs). These were developed to cover the range of baseline and mitigation emissions scenarios and are widely used in climate change research and model intercomparison projects. Using DataFrames from the Python library Pandas (McKinney 2010) as a data structure for the scenarios simplifies generating and adapting scenarios. Other parameters of the Hector model can easily be modified when running the model. Pyhector can be installed using pip from the Python Package Index. Source code and issue tracker are available in Pyhector's GitHub repository. Documentation is provided through Readthedocs. Usage examples are also contained in the repository as a Jupyter Notebook (Pérez and Granger 2007; Kluyver et al. 2016). Courtesy of the Mybinder project, the example Notebook can also be executed and modified without installing Pyhector locally.

  4. Inferring Soil Moisture Memory from Streamflow Observations Using a Simple Water Balance Model

    NASA Technical Reports Server (NTRS)

    Orth, Rene; Koster, Randal Dean; Seneviratne, Sonia I.

    2013-01-01

    Soil moisture is known for its integrative behavior and resulting memory characteristics. Soil moisture anomalies can persist for weeks or even months into the future, making initial soil moisture a potentially important contributor to skill in weather forecasting. A major difficulty when investigating soil moisture and its memory using observations is the sparse availability of long-term measurements and their limited spatial representativeness. In contrast, there is an abundance of long-term streamflow measurements for catchments of various sizes across the world. We investigate in this study whether such streamflow measurements can be used to infer and characterize soil moisture memory in respective catchments. Our approach uses a simple water balance model in which evapotranspiration and runoff ratios are expressed as simple functions of soil moisture; optimized functions for the model are determined using streamflow observations, and the optimized model in turn provides information on soil moisture memory on the catchment scale. The validity of the approach is demonstrated with data from three heavily monitored catchments. The approach is then applied to streamflow data in several small catchments across Switzerland to obtain a spatially distributed description of soil moisture memory and to show how memory varies, for example, with altitude and topography.
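    The core idea, a bucket model whose runoff and evapotranspiration ratios are simple functions of soil moisture, together with an autocorrelation diagnostic of memory, can be sketched as follows (the linear flux functions and all parameter values are assumptions, not the study's optimized forms):

```python
import random

def simulate_soil_moisture(days=3650, smax=100.0, seed=3):
    """Minimal catchment water balance: the runoff ratio and the
    evapotranspiration rate are (here linear) functions of relative
    soil moisture w = s/smax. All parameter values are illustrative."""
    rng = random.Random(seed)
    s, series = smax / 2, []
    for _ in range(days):
        p = rng.expovariate(1 / 8.0) if rng.random() < 0.3 else 0.0  # rain, mm
        w = s / smax
        q = p * w                  # runoff ratio rises with wetness
        et = 3.0 * w               # ET limited by available moisture
        s = min(smax, max(0.0, s + p - q - et))
        series.append(s)
    return series

def autocorr(x, lag):
    """Lag autocorrelation; its decay with lag measures soil moisture memory."""
    n, m = len(x), sum(x) / len(x)
    num = sum((x[i] - m) * (x[i + lag] - m) for i in range(n - lag))
    return num / sum((xi - m) ** 2 for xi in x)

sm = simulate_soil_moisture()
```

The decay of `autocorr(sm, lag)` with increasing lag is the memory signature; in the study, the flux functions are instead optimized against streamflow observations before this diagnostic is read off.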

  5. Anthropogenic heat flux: advisable spatial resolutions when input data are scarce

    NASA Astrophysics Data System (ADS)

    Gabey, A. M.; Grimmond, C. S. B.; Capel-Timms, I.

    2018-02-01

    Anthropogenic heat flux (QF) may be significant in cities, especially under low solar irradiance and at night. It is of interest to many practitioners including meteorologists, city planners and climatologists. QF estimates at fine temporal and spatial resolution can be derived from models that use varying amounts of empirical data. This study compares simple and detailed models in a European megacity (London) at 500 m spatial resolution. The simple model (LQF) uses spatially resolved population data and national energy statistics. The detailed model (GQF) additionally uses local energy, road network and workday population data. The Fractions Skill Score (FSS) and bias are used to rate the skill with which the simple model reproduces the spatial patterns and magnitudes of QF, and its sub-components, from the detailed model. LQF skill was consistently good across 90% of the city, away from the centre and major roads. The remaining 10% contained elevated emissions and "hot spots" representing 30-40% of the total city-wide energy. This structure was lost because it requires workday population, spatially resolved building energy consumption and/or road network data. Daily total building and traffic energy consumption estimates from national data were within ± 40% of local values. Progressively coarser spatial resolutions to 5 km improved skill for total QF, but important features (hot spots, transport network) were lost at all resolutions when residential population controlled spatial variations. The results demonstrate that simple QF models should be applied with conservative spatial resolution in cities that, like London, exhibit time-varying energy use patterns.
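    The Fractions Skill Score used to rate the simple model compares neighbourhood exceedance fractions between two gridded fields; a minimal implementation on toy 4x4 grids (the displaced 'hot spot' loosely mimics the structure lost without workday-population data):

```python
def fss(model, obs, threshold, n):
    """Fractions Skill Score: compare the fraction of cells exceeding a
    threshold within (2n+1)-cell square neighbourhoods. 1 = perfect;
    n = 0 reduces to a strict gridpoint comparison."""
    rows, cols = len(model), len(model[0])

    def frac(field, i, j):
        cells = [1.0 if field[a][b] >= threshold else 0.0
                 for a in range(max(0, i - n), min(rows, i + n + 1))
                 for b in range(max(0, j - n), min(cols, j + n + 1))]
        return sum(cells) / len(cells)

    pm = [frac(model, i, j) for i in range(rows) for j in range(cols)]
    po = [frac(obs, i, j) for i in range(rows) for j in range(cols)]
    mse = sum((a - b) ** 2 for a, b in zip(pm, po)) / len(pm)
    ref = sum(a * a + b * b for a, b in zip(pm, po)) / len(pm)
    return 1.0 - mse / ref if ref > 0 else 1.0

# A displaced hot spot: wrong at gridpoint scale, partly right within a
# 3x3 neighbourhood, so the score improves as n grows.
obs = [[0, 0, 0, 0], [0, 9, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
mod = [[0, 0, 0, 0], [0, 0, 0, 9], [0, 0, 0, 0], [0, 0, 0, 0]]
f_point = fss(mod, obs, 5, 0)
f_nbhd = fss(mod, obs, 5, 1)
```

This coarsening-improves-skill behavior is exactly why the study recommends conservative spatial resolutions for the simple QF model.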

  6. Causal structure of oscillations in gene regulatory networks: Boolean analysis of ordinary differential equation attractors.

    PubMed

    Sun, Mengyang; Cheng, Xianrui; Socolar, Joshua E S

    2013-06-01

    A common approach to the modeling of gene regulatory networks is to represent activating or repressing interactions using ordinary differential equations for target gene concentrations that include Hill function dependences on regulator gene concentrations. An alternative formulation represents the same interactions using Boolean logic with time delays associated with each network link. We consider the attractors that emerge from the two types of models in the case of a simple but nontrivial network: a figure-8 network with one positive and one negative feedback loop. We show that the different modeling approaches give rise to the same qualitative set of attractors with the exception of a possible fixed point in the ordinary differential equation model in which concentrations sit at intermediate values. The properties of the attractors are most easily understood from the Boolean perspective, suggesting that time-delay Boolean modeling is a useful tool for understanding the logic of regulatory networks.
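    The attractor analysis can be illustrated for synchronous Boolean dynamics; note that this sketch does not reproduce the paper's time-delay Boolean formulation, and for simplicity it enumerates the attractors of the two feedback loops that make up a figure-8 network separately:

```python
from itertools import product

def attractors(update, n):
    """Enumerate the attractors of a synchronous Boolean network by
    iterating the update map from every initial state."""
    found = set()
    for s0 in product([0, 1], repeat=n):
        seen, s = [], s0
        while s not in seen:
            seen.append(s)
            s = update(s)
        cycle = tuple(seen[seen.index(s):])
        rotations = [cycle[i:] + cycle[:i] for i in range(len(cycle))]
        found.add(min(rotations))   # canonical rotation: count each cycle once
    return found

# The two loops that make up the figure-8, analyzed separately:
pos = attractors(lambda s: (s[1], s[0]), 2)      # A <-> B mutual activation
neg = attractors(lambda s: (1 - s[1], s[0]), 2)  # A activates B, B represses A
```

The positive loop is multistable (two fixed points plus a 2-cycle) while the negative loop yields a single oscillatory attractor; these are the qualitative ingredients whose combination in the figure-8 network the paper analyzes.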

  7. Does solar activity affect human happiness?

    NASA Astrophysics Data System (ADS)

    Kristoufek, Ladislav

    2018-03-01

    We investigate the direct influence of solar activity (represented by sunspot numbers) on human happiness (represented by the Twitter-based Happiness Index). We construct four models controlling for various statistical and dynamic effects of the analyzed series. The final model gives promising results. First, there is a statistically significant negative influence of solar activity on happiness which holds even after controlling for the other factors. Second, the final model, which is still rather simple, explains around 75% of the variance of the Happiness Index. Third, our control variables contribute significantly as well: happiness is higher on days with no sunspots, happiness is strongly persistent, there are strong intra-week cycles, and happiness peaks during holidays. Our results strongly contribute to the topical literature and provide evidence of the unique utility of online data.

  8. Endogenous time-varying risk aversion and asset returns.

    PubMed

    Berardi, Michele

    2016-01-01

    Stylized facts about the statistical properties of short-horizon returns in financial markets have been identified in the literature, but a satisfactory understanding of their origin is yet to be achieved. In this work, we show that a simple asset pricing model with a representative agent is able to generate time series of returns that replicate such stylized facts if the risk aversion coefficient is allowed to change endogenously over time in response to unexpected excess returns under evolutionary forces. The same model, under constant risk aversion, would instead generate returns that are essentially Gaussian. We conclude that an endogenous time-varying risk aversion represents a very parsimonious way to make the model match real data on key statistical properties, and therefore deserves careful consideration from economists and practitioners alike.

  9. Behavior related pauses in simple spike activity of mouse Purkinje cells are linked to spike rate modulation

    PubMed Central

    Cao, Ying; Maran, Selva K.; Dhamala, Mukesh; Jaeger, Dieter; Heck, Detlef H.

    2012-01-01

    Purkinje cells (PCs) in the mammalian cerebellum express high frequency spontaneous activity with average spike rates between 30 and 200 Hz. Cerebellar nuclear (CN) neurons receive converging input from many PCs resulting in a continuous barrage of inhibitory inputs. It has been hypothesized that pauses in PC activity trigger increases in CN spiking activity. A prediction derived from this hypothesis is that pauses in PC simple spike activity represent relevant behavioral or sensory events. Here we asked whether pauses in the simple spike activity of PCs related to either fluid licking or respiration, play a special role in representing information about behavior. Both behaviors are widely represented in cerebellar PC simple spike activity. We recorded PC activity in the vermis and lobus simplex of head fixed mice while monitoring licking and respiratory behavior. Using cross correlation and Granger causality analysis we examined whether short ISIs had a different temporal relation to behavior than long ISIs or pauses. Behavior related simple spike pauses occurred during low-rate simple spike activity in both licking and breathing related PCs. Granger causality analysis revealed causal relationships between simple spike pauses and behavior. However, the same results were obtained from an analysis of surrogate spike trains with gamma ISI distributions constructed to match rate modulations of behavior related Purkinje cells. Our results therefore suggest that the occurrence of pauses in simple spike activity does not represent additional information about behavioral or sensory events that goes beyond the simple spike rate modulations. PMID:22723707

  10. Simple, stable and reliable modeling of gas properties of organic working fluids in aerodynamic designs of turbomachinery for ORC and VCC

    NASA Astrophysics Data System (ADS)

    Kawakubo, T.

    2016-05-01

    A simple, stable and reliable model of the real-gas nature of the working fluid is required for the aerodynamic design of the turbine in the Organic Rankine Cycle and of the compressor in the Vapor Compression Cycle. Although many modern Computational Fluid Dynamics tools are capable of incorporating real-gas models, simulations with such a gas model tend to be more time-consuming than those with a perfect-gas model and can even become unstable when simulating near the saturation boundary. Thus a perfect-gas approximation is still an attractive option for conducting design simulations stably and swiftly. In this paper, an effective method for CFD simulation with a perfect-gas approximation is discussed. A method is presented for representing the performance of the centrifugal compressor or the radial-inflow turbine by means of a set of non-dimensional performance parameters for each, and for translating the fictitious perfect-gas result into the actual real-gas performance.

  11. Simple models for rope substructure mechanics: application to electro-mechanical lifts

    NASA Astrophysics Data System (ADS)

    Herrera, I.; Kaczmarczyk, S.

    2016-05-01

    Mechanical systems modelled as rigid mass elements connected by tensioned slender structural members such as ropes and cables represent quite common substructures used in lift engineering and hoisting applications. Engineers and researchers devote special interest to the vibratory response of such systems for optimum performance and durability. This paper presents simplified models that can be employed to determine the natural frequencies of systems having substructures of two rigid masses constrained by tensioned rope/cable elements. The exact solution for the free undamped longitudinal displacement response is discussed in the context of simple two-degree-of-freedom models. The results are compared, and the influence of characteristic parameters is analyzed, such as the ratio of the average of the two rigid masses to the rope mass and the deviation ratio of the two rigid masses with respect to their average. This analysis gives criteria for the application of such simplified models in complex elevator and hoisting system configurations.

  12. Test of the efficiency of three storm water quality models with a rich set of data.

    PubMed

    Ahyerre, M; Henry, F O; Gogien, F; Chabanel, M; Zug, M; Renaudet, D

    2005-01-01

    The objective of this article is to test the efficiency of three different Storm Water Quality Models (SWQMs) on the same data set (34 rain events, SS measurements) sampled on a 42 ha watershed in the center of Paris. The models were calibrated at the scale of the rain event. Considering the mass of pollution calculated per event, the results of the models are satisfactory, but they are of the same order of magnitude as those of a simple hydraulic approach associated with a constant concentration. In a second step, the mass of pollutant at the outlet of the catchment was calculated at the global scale of the 34 events. This approach shows that the simple hydraulic calculation gives better results than the SWQMs. Finally, the pollutographs are analysed, showing that storm water quality models are interesting tools to represent the shape of the pollutographs and the dynamics of the phenomenon, which can be useful in some projects for managers.
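    The "constant concentration" baseline the authors compare against reduces to multiplying each event's runoff volume by a fixed site-mean concentration. A minimal sketch with purely illustrative numbers (none are from the Paris dataset):

```python
# Constant-concentration load estimate: event mass = runoff volume x fixed
# SS concentration. All values below are illustrative, not measured data.

def event_mass_kg(runoff_volume_m3, concentration_mg_per_l):
    """Pollutant mass for one rain event, in kg."""
    litres = runoff_volume_m3 * 1000.0
    return litres * concentration_mg_per_l / 1e6  # mg -> kg

# hypothetical per-event runoff volumes (m^3) for a small catchment
event_volumes = [120.0, 540.0, 80.0, 300.0]
C_CONST = 200.0  # assumed site-mean SS concentration, mg/L

per_event = [event_mass_kg(v, C_CONST) for v in event_volumes]
total_mass = sum(per_event)  # global mass over all events
```

The per-event and global-scale comparisons in the abstract are exactly these two aggregation levels.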

  13. Nondestructive assessment of timber bridges using a vibration-based method

    Treesearch

    Xiping Wang; James P. Wacker; Robert J. Ross; Brian K. Brashaw

    2005-01-01

    This paper describes an effort to develop a global dynamic testing technique for evaluating the overall stiffness of timber bridge superstructures. A forced vibration method was used to measure the natural frequency of single-span timber bridges in the laboratory and field. An analytical model based on simple beam theory was proposed to represent the relationship...

  14. Modeling a Neural Network as a Teaching Tool for the Learning of the Structure-Function Relationship

    ERIC Educational Resources Information Center

    Salinas, Dino G.; Acevedo, Cristian; Gomez, Christian R.

    2010-01-01

    The authors describe an activity they have created in which students can visualize a theoretical neural network whose states evolve according to a well-known simple law. This activity provided an uncomplicated approach to a paradigm commonly represented through complex mathematical formulation. From their observations, students learned many basic…

  15. Coherent vertical structures in numerical simulations of buoyant plumes from wildland fires

    Treesearch

    Philip Cunningham; Scott L. Goodrick; M. Yousuff Hussaini; Rodman R. Linn

    2005-01-01

    The structure and dynamics of buoyant plumes arising from surface-based heat sources in a vertically sheared ambient atmospheric flow are examined via simulations of a three-dimensional, compressible numerical model. Simple circular heat sources and asymmetric elliptical ring heat sources that are representative of wildland fires of moderate intensity are considered....

  16. "Chromoseratops Meiosus": A Simple, Two-Phase Exercise to Represent the Connection between Meiosis & Increased Genetic Diversity

    ERIC Educational Resources Information Center

    Eliyahu, Dorit

    2014-01-01

    I present an activity to help students make the connection between meiosis and genetic variation. The students model meiosis in the first phase of the activity, and by that process they produce gametes of a fictitious reptilobird species, "Chromoseratops meiosus." Later on, they will "mate" their gametes and produce a zygote…

  17. A hierarchy of granular continuum models: Why flowing grains are both simple and complex

    NASA Astrophysics Data System (ADS)

    Kamrin, Ken

    2017-06-01

    Granular materials have a strange propensity to behave as either a complex medium or a simple medium depending on the precise question being asked. This review paper offers a summary of granular flow rheologies for well-developed or steady-state motion, and seeks to explain this dichotomy through the vast range of complexity intrinsic to these models. A key observation is that to achieve accuracy in predicting flow fields in general geometries, one requires a model that accounts for a number of subtleties, most notably a nonlocal effect to account for cooperativity in the flow as induced by the finite size of grains. On the other hand, forces and tractions that develop on macro-scale, submerged boundaries appear to be minimally affected by grain size and, barring very rapid motions, are well represented by simple rate-independent frictional plasticity models. A major simplification observed in experiments of granular intrusion, which we refer to as the `resistive force hypothesis' of granular Resistive Force Theory, can be shown to arise directly from rate-independent plasticity. Because such plasticity models have so few parameters, and the major rheological parameter is a dimensionless internal friction coefficient, some of these simplifications can be seen as consequences of scaling.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mellors, R J

    The Comprehensive Nuclear Test Ban Treaty (CTBT) includes provisions for an on-site inspection (OSI), which allows the use of specific techniques to detect underground anomalies including cavities and rubble zones. One permitted technique is active seismic surveys such as seismic refraction or reflection. The purpose of this report is to conduct some simple modeling to evaluate the potential use of seismic reflection in detecting cavities and to test the use of open-source software in modeling possible scenarios. It should be noted that OSI inspections are conducted under specific constraints regarding duration and logistics. These constraints are likely to significantly impact active seismic surveying, as a seismic survey typically requires considerable equipment, effort, and expertise. For the purposes of this study, which is a first-order feasibility study, these issues will not be considered. This report provides a brief description of the seismic reflection method along with some commonly used software packages. This is followed by an outline of a simple processing stream based on a synthetic model, along with results from a set of models representing underground cavities. A set of scripts used to generate the models is presented in an appendix. We do not consider detection of underground facilities in this work, and the geologic setting used in these tests is an extremely simple one.

  19. The induced electric field due to a current transient

    NASA Astrophysics Data System (ADS)

    Beck, Y.; Braunstein, A.; Frankental, S.

    2007-05-01

    Calculations and measurements of the electric fields induced by a lightning strike are important for understanding the phenomenon and developing effective protection systems. In this paper, a novel method for calculating the electric fields due to lightning strikes, based on a relativistic approach, is presented. It builds on a known current wave-pair model representing the lightning current wave. The model presented is one that describes the lightning current wave, either at the first stage of the descending charge wave from the cloud or at the later stage of the return stroke. The electric fields computed are cylindrically symmetric. A simplified method for the calculation of the electric field is achieved by using special relativity theory and relativistic considerations. The proposed approach is based on simple expressions (applying Coulomb's law), compared with the much more complicated partial differential equations based on Maxwell's equations. A straightforward method of calculating the electric field due to a lightning strike, modelled as a negative-positive (NP) wave-pair, is determined by using special relativity theory to calculate the 'velocity field' and relativistic concepts to calculate the 'acceleration field'. These fields are the basic elements required for calculating the total field resulting from the current wave-pair model. Moreover, a modified, simpler method using sub-models is presented. The sub-models are filaments of either static charges or charges at constant velocity only. Combining these simple sub-models yields the total wave-pair model. The results fully agree with those obtained by solving Maxwell's equations for the discussed problem.

  20. A dual theory of price and value in a meso-scale economic model with stochastic profit rate

    NASA Astrophysics Data System (ADS)

    Greenblatt, R. E.

    2014-12-01

    The problem of commodity price determination in a market-based, capitalist economy has a long and contentious history. Neoclassical microeconomic theories are based typically on marginal utility assumptions, while classical macroeconomic theories tend to be value-based. In the current work, I study a simplified meso-scale model of a commodity capitalist economy. The production/exchange model is represented by a network whose nodes are firms, workers, capitalists, and markets, and whose directed edges represent physical or monetary flows. A pair of multivariate linear equations with stochastic input parameters represent physical (supply/demand) and monetary (income/expense) balance. The input parameters yield a non-degenerate profit rate distribution across firms. Labor time and price are found to be eigenvector solutions to the respective balance equations. A simple relation is derived relating the expected value of commodity price to commodity labor content. Results of Monte Carlo simulations are consistent with the stochastic price/labor content relation.
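    The eigenvector reading of the price equation can be illustrated with a deliberately simplified, deterministic version: uniform profit rate, two goods, and an input-coefficient matrix of our own choosing (the paper's stochastic network model with heterogeneous profit rates is richer than this):

```python
import numpy as np

# Hedged sketch, not the paper's exact network model: in a linear
# production system with input-coefficient matrix A, a uniform profit
# rate r and price vector p satisfy p = (1 + r) * A.T @ p, i.e. p is the
# Perron eigenvector of A.T and r = 1/lambda_max - 1.

A = np.array([[0.2, 0.3],    # illustrative input coefficients:
              [0.4, 0.1]])   # A[i, j] = input of good i per unit of good j

eigvals, eigvecs = np.linalg.eig(A.T)
k = np.argmax(eigvals.real)            # Perron root of a positive matrix
lam = eigvals[k].real
p = np.abs(eigvecs[:, k].real)         # price vector, scaled positive
r = 1.0 / lam - 1.0                    # implied uniform profit rate

residual = np.linalg.norm(p - (1.0 + r) * (A.T @ p))
```

Here the profit rate and relative prices emerge jointly from the monetary balance condition, mirroring the eigenvector solutions described in the abstract.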

  1. Knowledge representation and qualitative simulation of salmon redd functioning. Part I: qualitative modeling and simulation.

    PubMed

    Guerrin, F; Dumas, J

    2001-02-01

    This work aims at representing empirical knowledge of freshwater ecologists on the functioning of salmon redds (spawning areas of salmon) and its impact on the mortality of early stages. For this, we use Qsim, a qualitative simulator. In this first part, we provide unfamiliar readers with the underlying qualitative differential equation (QDE) ontology of Qsim: representing quantities, qualitative variables, qualitative constraints, and QDE structure. Based on a very simple example taken from the salmon redd application, we show how informal biological knowledge may be represented and simulated using an approach that was first intended to qualitatively analyze ordinary differential equation systems. A companion paper (Part II) gives the full description and simulation of the salmon redd qualitative model. This work was part of a project aimed at assessing the impact of the environment on salmon population dynamics by the use of models of processes acting at different levels: catchment, river, and redds. Only the latter level is dealt with in this paper.

  2. Geospatial modeling of plant stable isotope ratios - the development of isoscapes

    NASA Astrophysics Data System (ADS)

    West, J. B.; Ehleringer, J. R.; Hurley, J. M.; Cerling, T. E.

    2007-12-01

    Large-scale spatial variation in stable isotope ratios can yield critical insights into the spatio-temporal dynamics of biogeochemical cycles, animal movements, and shifts in climate, as well as anthropogenic activities such as commerce, resource utilization, and forensic investigation. Interpreting these signals requires that we understand and model the variation. We report progress in our development of plant stable isotope ratio landscapes (isoscapes). Our approach utilizes a GIS, gridded datasets, a range of modeling approaches, and spatially distributed observations. We synthesize findings from four studies to illustrate the general utility of the approach, its ability to represent observed spatio-temporal variability in plant stable isotope ratios, and also outline some specific areas of uncertainty. We also address two basic, but critical questions central to our ability to model plant stable isotope ratios using this approach: 1. Do the continuous precipitation isotope ratio grids represent reasonable proxies for plant source water? and 2. Do continuous climate grids (as is or modified) represent a reasonable proxy for the climate experienced by plants? Plant components modeled include leaf water, grape water (extracted from wine), bulk leaf material (Cannabis sativa; marijuana), and seed oil (Ricinus communis; castor bean). Our approaches to modeling the isotope ratios of these components varied from highly sophisticated process models to simple one-step fractionation models to regression approaches. The leaf water isoscapes were produced using steady-state models of enrichment and continuous grids of annual average precipitation isotope ratios and climate. These were compared to other modeling efforts, as well as a relatively sparse, but geographically distributed dataset from the literature. The latitudinal distributions and global averages compared favorably to other modeling efforts and the observational data compared well to model predictions.
These results yield confidence in the precipitation isoscapes used to represent plant source water, the modified climate grids used to represent leaf climate, and the efficacy of this approach to modeling. Further work confirmed these observations. The seed oil isoscape was produced using a simple model of lipid fractionation driven with the precipitation grid, and compared well to widely distributed observations of castor bean oil, again suggesting that the precipitation grids were reasonable proxies for plant source water. The marijuana leaf δ2H observations distributed across the continental United States were regressed against the precipitation δ2H grids and yielded a strong relationship between them, again suggesting that plant source water was reasonably well represented by the precipitation grid. Finally, the wine water δ18O isoscape was developed from regressions that related precipitation isotope ratios and climate to observations from a single vintage. Favorable comparisons between year-specific wine water isoscapes and inter-annual variations in previous vintages yielded confidence in the climate grids. Clearly significant residual variability remains to be explained in all of these cases and uncertainties vary depending on the component modeled, but we conclude from this synthesis that isoscapes are capable of representing real spatial and temporal variability in plant stable isotope ratios.

  3. A Not-So-Simple View of Adolescent Writing

    ERIC Educational Resources Information Center

    Poch, Apryl L.; Lembke, Erica S.

    2017-01-01

    According to the Simple View of Writing, four primary skills are necessary for successful writing (Berninger & Amtmann, 2003; Berninger & Winn, 2006). Transcription skills (e.g., handwriting, spelling) represent lower-order cognitive tasks, whereas text generation skills (e.g., ideation, translation) represent higher-order…

  4. A neural computational model for animal's time-to-collision estimation.

    PubMed

    Wang, Ling; Yao, Dezhong

    2013-04-17

    The time-to-collision (TTC) is the time elapsed before a looming object hits the subject. An accurate estimation of TTC plays a critical role in the survival of animals in nature and acts as an important factor in artificial intelligence systems that depend on judging and avoiding potential dangers. The theoretic formula for TTC is 1/τ≈θ'/sin θ, where θ and θ' are the visual angle and its variation, respectively, and the widely used approximation computational model is θ'/θ. However, both of these measures are too complex to be implemented by a biological neuronal model. We propose a new simple computational model: 1/τ≈Mθ-P/(θ+Q)+N, where M, P, Q, and N are constants that depend on a predefined visual angle. This model, the weighted summation of visual angle model (WSVAM), can achieve perfect implementation through a widely accepted biological neuronal model. WSVAM has the additional merits of naturally minimal consumption and simplicity. Thus, it yields a precise and neuronally implemented estimation for TTC, which provides a simple and convenient implementation for artificial vision, and represents a potential visual brain mechanism.
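    The small-angle logic behind the classical θ'/θ estimate can be checked numerically. The geometry values below are illustrative; this is not the WSVAM itself, whose constants M, P, Q, N the abstract leaves unspecified:

```python
import math

# Numeric check of the approximation 1/tau ~= theta'/theta for a looming
# object approaching at constant speed (illustrative geometry).
half_size = 0.1      # m, half-width of the object
distance = 50.0      # m, current distance
speed = 5.0          # m/s, closing speed
true_ttc = distance / speed            # 10 s to collision

theta = 2.0 * math.atan(half_size / distance)           # visual angle
# d(theta)/dt when the distance shrinks at `speed`
theta_dot = 2.0 * half_size * speed / (distance**2 + half_size**2)

ttc_estimate = theta / theta_dot       # inverse of theta'/theta
```

For small visual angles the estimate matches the true TTC almost exactly, which is why θ'/θ is the standard baseline the proposed model is compared against.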

  5. Simulations of carbon fiber composite delamination tests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kay, G

    2007-10-25

    Simulations of mode I interlaminar fracture toughness tests of a carbon-reinforced composite material (BMS 8-212) were conducted with LSDYNA. The fracture toughness tests were performed by U.C. Berkeley. The simulations were performed to investigate the validity and practicality of employing decohesive elements to represent interlaminar bond failures that are prevalent in carbon-fiber composite structure penetration events. The simulations employed a decohesive element formulation that was verified on a simple two-element model before being employed to perform the full model simulations. Care was required during the simulations to ensure that the explicit time integration of LSDYNA duplicated the near steady-state testing conditions. In general, this study validated the use of decohesive elements to represent the interlaminar bond failures seen in carbon-fiber composite structures, but the practicality of employing the elements to represent the bond failures seen during penetration events was not established.

  6. Representing Simple Geometry Types in NetCDF-CF

    NASA Astrophysics Data System (ADS)

    Blodgett, D. L.; Koziol, B. W.; Whiteaker, T. L.; Simons, R.

    2016-12-01

    The Climate and Forecast (CF) metadata convention is well-suited for representing gridded and point-based observational datasets. However, CF currently has no accepted mechanism for representing simple geometry types such as lines and polygons. Lack of support for simple geometries within CF has unintentionally excluded a broad set of geoscientific data types from NetCDF-CF data encodings. For example, hydrologic datasets often contain polygon watershed catchments and polyline stream reaches in addition to point sampling stations and water management infrastructure. Only the latter has an associated CF specification. In the interest of supporting all simple geometry types within CF, a working group was formed following an EarthCube workshop on Advancing NetCDF-CF [1] to draft a CF specification for simple geometries: points, lines, polygons, and their associated multi-geometry representations [2]. The draft also includes parametric geometry types such as circles and ellipses. This presentation will provide an overview of the scope and content of the proposed specification, focusing on mechanisms for representing coordinate arrays using variable-length or contiguous ragged arrays, capturing multi-geometries, and accounting for type-specific geometry artifacts such as polygon holes/interiors, node ordering, etc. The concepts contained in the specification proposal will be described with a use case representing streamflow in rivers and evapotranspiration from HUC12 watersheds. We will also introduce Python and R reference implementations developed alongside the technical specification. These in-development, open-source Python and R libraries convert between commonly used GIS software objects (i.e. GEOS-based primitives) and their associated simple geometry CF representation. [1] http://www.unidata.ucar.edu/events/2016CFWorkshop/ [2] https://github.com/bekozi/netCDF-CF-simple-geometry
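    The contiguous ragged array at the core of the proposal can be illustrated with plain Python lists: flat coordinate arrays plus a per-geometry node count, from which each geometry is recoverable. Variable names and layout here are a simplified sketch, not the normative encoding:

```python
# Contiguous ragged-array sketch for line geometries, in the spirit of the
# draft CF simple-geometry encoding (illustrative, not the final spec).

lines = [
    [(0.0, 0.0), (1.0, 0.5), (2.0, 0.0)],   # a 3-node polyline
    [(5.0, 5.0), (6.0, 6.0)],               # a 2-node polyline
]

# encode: flat x/y coordinate arrays plus a per-geometry node count
node_count = [len(line) for line in lines]
x = [pt[0] for line in lines for pt in line]
y = [pt[1] for line in lines for pt in line]

# decode: walk the counts to recover each geometry from the flat arrays
decoded, start = [], 0
for n in node_count:
    decoded.append(list(zip(x[start:start + n], y[start:start + n])))
    start += n
```

The real specification adds attributes for geometry type, part boundaries of multi-geometries, and polygon interiors on top of this basic count-plus-coordinates layout.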

  7. Development of an Implantable WBAN Path-Loss Model for Capsule Endoscopy

    NASA Astrophysics Data System (ADS)

    Aoyagi, Takahiro; Takizawa, Kenichi; Kobayashi, Takehiko; Takada, Jun-Ichi; Hamaguchi, Kiyoshi; Kohno, Ryuji

    An implantable WBAN path-loss model for capsule endoscopy, which is used for examining digestive organs, is developed by conducting simulations and experiments. First, we performed FDTD simulations of implant WBAN propagation by using a numerical human model. Second, we performed FDTD simulations on a vessel that represents the human body. Third, we performed experiments using a vessel of the same dimensions as that used in the simulations. On the basis of the results of these simulations and experiments, we propose the gradient and intercept parameters of a simple in-body path-loss propagation model.

  8. Phase-field crystal modeling of heteroepitaxy and exotic modes of crystal nucleation

    NASA Astrophysics Data System (ADS)

    Podmaniczky, Frigyes; Tóth, Gyula I.; Tegze, György; Pusztai, Tamás; Gránásy, László

    2017-01-01

    We review recent advances made in modeling heteroepitaxy, two-step nucleation, and nucleation at the growth front within the framework of a simple dynamical density functional theory, the Phase-Field Crystal (PFC) model. The crystalline substrate is represented by spatially confined periodic potentials. We investigate the misfit dependence of the critical thickness in the Stranski-Krastanov growth mode in isothermal studies. Apparently, the simulation results for stress release via the misfit dislocations fit better to the People-Bean model than to the one by Matthews and Blakeslee. Next, we investigate structural aspects of two-step crystal nucleation at high undercoolings, where an amorphous precursor forms in the first stage. Finally, we present results for the formation of new grains at the solid-liquid interface at high supersaturations/supercoolings, a phenomenon termed Growth Front Nucleation (GFN). Results obtained with diffusive dynamics (applicable to colloids) and with a hydrodynamic extension of the PFC theory (HPFC, developed for simple liquids) will be compared. The HPFC simulations indicate two possible mechanisms for GFN.

  9. Overview of a simple model describing variation of dissolved organic carbon in an upland catchment

    USGS Publications Warehouse

    Boyer, Elizabeth W.; Hornberger, George M.; Bencala, Kenneth E.; McKnight, Diane M.

    1996-01-01

    Hydrological mechanisms controlling the variation of dissolved organic carbon (DOC) were investigated in the Deer Creek catchment located near Montezuma, CO. Patterns of DOC in streamflow suggested that increased flows through the upper soil horizon during snowmelt are responsible for flushing this DOC-enriched interstitial water to the streams. We examined possible hydrological mechanisms to explain the observed variability of DOC in Deer Creek by first simulating the hydrological response of the catchment using TOPMODEL and then routing the predicted flows through a simple model that accounted for temporal changes in DOC. Conceptually the DOC model can be taken to represent a terrestrial (soil) reservoir in which DOC builds up during low flow periods and is flushed out when infiltrating meltwaters cause the water table to rise into this “reservoir”. Concentrations of DOC measured in the upper soil and in streamflow were compared to model simulations. The simulated DOC response provides a reasonable reproduction of the observed dynamics of DOC in the stream at Deer Creek.
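    The conceptual soil reservoir can be sketched as a two-parameter mass balance: DOC accumulates at a constant rate and is exported in proportion to discharge, so the export flux peaks when high flow first meets a full store. All numbers below are illustrative, not the Deer Creek calibration:

```python
# Minimal sketch of the conceptual DOC "reservoir": carbon accumulates
# during low flow and is flushed in proportion to discharge.
# Parameters and the flow series are illustrative only.

production = 1.0    # DOC added to the soil store per day (arbitrary units)
flush_coeff = 0.02  # fraction of store exported per unit discharge
flows = [1.0] * 30 + [10.0] * 10 + [1.0] * 30   # baseflow, snowmelt, baseflow

store, exports = 50.0, []
for q in flows:
    flux = min(store, flush_coeff * q * store)  # export rises with discharge
    store += production - flux
    exports.append(flux)

peak_day = exports.index(max(exports))
```

With this toy forcing the largest export falls on the first snowmelt day, after which the store is progressively depleted, mirroring the flushing behavior described above.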

  10. Nondestructive assessment of single-span timber bridges using a vibration- based method

    Treesearch

    Xiping Wang; James P. Wacker; Angus M. Morison; John W. Forsman; John R. Erickson; Robert J. Ross

    2005-01-01

    This paper describes an effort to develop a global dynamic testing technique for evaluating the overall stiffness of timber bridge superstructures. A forced vibration method was used to measure the natural frequency of single-span timber bridges in the laboratory and field. An analytical model based on simple beam theory was proposed to represent the relationship...

  11. Direct estimation of aboveground forest productivity through hyperspectral remote sensing of canopy nitrogen

    Treesearch

    Marie-Louise Smith; Scott V. Ollinger; Mary E. Martin; John D. Aber; Richard A. Hallett; Christine L. Goodale

    2002-01-01

    The concentration of nitrogen in foliage has been related to rates of net photosynthesis across a wide range of plant species and functional groups and thus represents a simple and biologically meaningful link between terrestrial cycles of carbon and nitrogen. Although foliar N is used by ecosystem models to predict rates of leaf-level photosynthesis, it has rarely...

  12. Do Adaptive Representations of the Item-Position Effect in APM Improve Model Fit? A Simulation Study

    ERIC Educational Resources Information Center

    Zeller, Florian; Krampen, Dorothea; Reiß, Siegbert; Schweizer, Karl

    2017-01-01

    The item-position effect describes how an item's position within a test, that is, the number of previous completed items, affects the response to this item. Previously, this effect was represented by constraints reflecting simple courses, for example, a linear increase. Due to the inflexibility of these representations our aim was to examine…

  13. Complex dynamics and empirical evidence (Invited Paper)

    NASA Astrophysics Data System (ADS)

    Delli Gatti, Domenico; Gaffeo, Edoardo; Giulioni, Gianfranco; Gallegati, Mauro; Kirman, Alan; Palestrini, Antonio; Russo, Alberto

    2005-05-01

    Standard macroeconomics, based on a reductionist approach centered on the representative agent, is badly equipped to explain the empirical evidence where heterogeneity and industrial dynamics are the rule. In this paper we show that a simple agent-based model of heterogeneous financially fragile agents is able to replicate a large number of scaling type stylized facts with a remarkable degree of statistical precision.

  14. An Investigation of the Effects of Boundary Avoidance on Pilot Tracking

    DTIC Science & Technology

    2006-12-01

    A simple second-order system was used to provide a representative aircraft system plant; a small pulse was input into the system at the onset of the simulation.

  15. Energy awareness for supercapacitors using Kalman filter state-of-charge tracking

    NASA Astrophysics Data System (ADS)

    Nadeau, Andrew; Hassanalieragh, Moeen; Sharma, Gaurav; Soyata, Tolga

    2015-11-01

    Among energy buffering alternatives, supercapacitors can provide unmatched efficiency and durability. Additionally, the direct relation between a supercapacitor's terminal voltage and stored energy can improve energy awareness. However, a simple capacitive approximation cannot adequately represent the stored energy in a supercapacitor. It is shown that the three branch equivalent circuit model provides more accurate energy awareness. This equivalent circuit uses three capacitances and associated resistances to represent the supercapacitor's internal SOC (state-of-charge). However, the SOC cannot be determined from one observation of the terminal voltage, and must be tracked over time using inexact measurements. We present: 1) a Kalman filtering solution for tracking the SOC; 2) an on-line system identification procedure to efficiently estimate the equivalent circuit's parameters; and 3) experimental validation of both parameter estimation and SOC tracking for 5 F, 10 F, 50 F, and 350 F supercapacitors. Validation is done within the operating range of a solar powered application and the associated power variability due to energy harvesting. The proposed techniques are benchmarked against the simple capacitive model and prior parameter estimation techniques, and provide a 67% reduction in root-mean-square error for predicting usable buffered energy.
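    The state-of-charge tracking idea can be shown with a one-state Kalman filter: predict a slowly discharging voltage, then blend in each noisy terminal reading via the Kalman gain. This scalar sketch stands in for the paper's three-branch model, whose state and parameters are richer:

```python
import random

# One-state Kalman filter tracking a slowly discharging internal voltage
# from noisy terminal readings. A deliberately simplified sketch, not the
# paper's three-branch equivalent-circuit model.
random.seed(1)

true_v, drift = 2.5, -0.001           # volts; volts per step (self-discharge)
meas_noise_std = 0.05
Q, R = 1e-6, meas_noise_std ** 2      # process / measurement variances

x, P = 2.0, 1.0                       # initial estimate and its variance
for _ in range(500):
    true_v += drift
    z = true_v + random.gauss(0.0, meas_noise_std)
    # predict: state follows the known drift, uncertainty grows by Q
    x, P = x + drift, P + Q
    # update: blend prediction and measurement by the Kalman gain
    K = P / (P + R)
    x, P = x + K * (z - x), (1.0 - K) * P

final_error = abs(x - true_v)
```

The filter's estimate ends up far closer to the true voltage than any single noisy reading, which is the benefit the SOC-tracking approach exploits.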

  16. Does Specification Matter? Experiments with Simple Multiregional Probabilistic Population Projections

    PubMed Central

    Raymer, James; Abel, Guy J.; Rogers, Andrei

    2012-01-01

    Population projection models that introduce uncertainty are a growing subset of projection models in general. In this paper, we focus on the importance of decisions made with regard to the model specifications adopted. We compare the forecasts and prediction intervals associated with four simple regional population projection models: an overall growth rate model, a component model with net migration, a component model with in-migration and out-migration rates, and a multiregional model with destination-specific out-migration rates. Vector autoregressive models are used to forecast future rates of growth, birth, death, net migration, in-migration and out-migration, and destination-specific out-migration for the North, Midlands and South regions in England. They are also used to forecast different international migration measures. The base data represent a time series of annual data provided by the Office for National Statistics from 1976 to 2008. The results illustrate how both the forecasted subpopulation totals and the corresponding prediction intervals differ for the multiregional model in comparison to other simpler models, as well as for different assumptions about international migration. The paper ends with a discussion of our results and possible directions for future research. PMID:23236221

  17. DNA nanosensor surface grafting and salt dependence

    NASA Astrophysics Data System (ADS)

    Carvalho, B. G.; Fagundes, J.; Martin, A. A.; Raniero, L.; Favero, P. P.

    2013-02-01

    In this paper we investigated a nanosensor for the Paracoccidioides brasiliensis fungus by simulating single-strand DNA grafting onto a gold nanoparticle. In order to improve knowledge of the nanoparticle environment, the addition of a salt solution to the models we propose was studied. The nanoparticle and DNA are represented by economical models that we validate in this paper. In addition, the influences of DNA grafting and salt are evaluated by calculations of adsorption and bond energies. This theoretical evaluation supports experimental techniques for the diagnosis of diseases.

  18. Robust stability of second-order systems

    NASA Technical Reports Server (NTRS)

    Chuang, C.-H.

    1993-01-01

    A feedback linearization technique is used in conjunction with passivity concepts to design robust controllers for space robots. It is assumed that bounded modeling uncertainties exist in the inertia matrix and the vector representing the coriolis, centripetal, and friction forces. Under these assumptions, the controller guarantees asymptotic tracking of the joint variables. A Lagrangian approach is used to develop a dynamic model for space robots. Closed-loop simulation results are illustrated for a simple case of a single link planar manipulator with freely floating base.

  19. Simple Model of Macroscopic Instability in XeCl Discharge Pumped Lasers

    NASA Astrophysics Data System (ADS)

    Ahmed, Belasri; Zoheir, Harrache

    2003-10-01

    The aim of this work is to study the development of macroscopic non-uniformity in the electron density of high-pressure discharges for excimer lasers, and eventually its propagation due to the kinetics of the medium. The study uses a transverse one-dimensional model in which the plasma is represented by a set of resistances in parallel. The model is implemented in a numerical code comprising three strongly coupled parts: electric circuit equations, the electron Boltzmann equation, and kinetics equations (chemical kinetics model). The time variations of the electron density in each plasma element are obtained by solving a set of ordinary differential equations describing the plasma kinetics and the external circuit. The present model allows a good understanding of the halogen depletion phenomenon, the principal cause of laser pulse termination, and permits a simple study of large-scale non-uniformity in preionization density and its effects on the electrical and chemical properties of the plasma. The results indicate clearly that about 50% of the halogen is consumed by the end of the pulse. KEY WORDS: Excimer laser, XeCl, Modeling, Cold plasma, Kinetics, Halogen depletion, Macroscopic instability.

  20. Error reduction and representation in stages (ERRIS) in hydrological modelling for ensemble streamflow forecasting

    NASA Astrophysics Data System (ADS)

    Li, Ming; Wang, Q. J.; Bennett, James C.; Robertson, David E.

    2016-09-01

    This study develops a new error modelling method for ensemble short-term and real-time streamflow forecasting, called error reduction and representation in stages (ERRIS). The novelty of ERRIS is that it does not rely on a single complex error model but runs a sequence of simple error models through four stages. At each stage, an error model attempts to incrementally improve over the previous stage. Stage 1 establishes parameters of a hydrological model and parameters of a transformation function for data normalization, Stage 2 applies a bias correction, Stage 3 applies autoregressive (AR) updating, and Stage 4 applies a Gaussian mixture distribution to represent model residuals. In a case study, we apply ERRIS for one-step-ahead forecasting at a range of catchments. The forecasts at the end of Stage 4 are shown to be much more accurate than at Stage 1 and to be highly reliable in representing forecast uncertainty. Specifically, the forecasts become more accurate by applying the AR updating at Stage 3, and more reliable in uncertainty spread by using a mixture of two Gaussian distributions to represent the residuals at Stage 4. ERRIS can be applied to any existing calibrated hydrological models, including those calibrated to deterministic (e.g. least-squares) objectives.
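    The staged idea can be sketched as follows. The log transformation, the AR(1) form, and the synthetic data are illustrative assumptions, not the authors' exact formulation (ERRIS uses a more general transformation and calibrated parameters):

```python
import numpy as np

def stage2_bias_correct(sim, obs):
    """Stage 2 (sketch): remove mean multiplicative bias in log space."""
    bias = np.mean(np.log(obs) - np.log(sim))
    return sim * np.exp(bias)

def stage3_ar_update(sim, obs):
    """Stage 3 (sketch): AR(1) updating -- propagate the previous error."""
    err = np.log(obs) - np.log(sim)
    rho = np.corrcoef(err[:-1], err[1:])[0, 1]    # lag-1 error autocorrelation
    updated = np.exp(np.log(sim[1:]) + rho * err[:-1])
    return updated, rho

# Synthetic data: biased simulation with autocorrelated errors.
rng = np.random.default_rng(1)
e = np.zeros(200)
for t in range(1, 200):
    e[t] = 0.7 * e[t - 1] + 0.1 * rng.standard_normal()
obs = np.exp(rng.normal(2.0, 0.3, 200))           # "observed" streamflow
sim = obs * np.exp(0.2 + e)                       # "simulated" streamflow

sim2 = stage2_bias_correct(sim, obs)
sim3, rho = stage3_ar_update(sim2, obs)
```

    Stage 4 would then fit a two-component Gaussian mixture to the remaining residuals to represent forecast uncertainty.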

  1. A fusion of top-down and bottom-up modeling techniques to constrain regional scale carbon budgets

    NASA Astrophysics Data System (ADS)

    Goeckede, M.; Turner, D. P.; Michalak, A. M.; Vickers, D.; Law, B. E.

    2009-12-01

    The effort to constrain regional scale carbon budgets benefits from assimilating as many high quality data sources as possible in order to reduce uncertainties. Two of the most common approaches used in this field, bottom-up and top-down techniques, both have their strengths and weaknesses, and partly build on very different sources of information to train, drive, and validate the models. Within the context of the ORCA2 project, we follow both bottom-up and top-down modeling strategies with the ultimate objective of reconciling their surface flux estimates. The ORCA2 top-down component builds on a coupled WRF-STILT transport module that resolves the footprint function of a CO2 concentration measurement in high temporal and spatial resolution. Datasets involved in the current setup comprise GDAS meteorology, remote sensing products, VULCAN fossil fuel inventories, boundary conditions from CarbonTracker, and high-accuracy time series of atmospheric CO2 concentrations. Surface fluxes of CO2 are normally provided through a simple diagnostic model which is optimized against atmospheric observations. For the present study, we replaced the simple model with fluxes generated by an advanced bottom-up process model, Biome-BGC, which uses state-of-the-art algorithms to resolve plant-physiological processes, and 'grow' a biosphere based on biogeochemical conditions and climate history. This approach provides a more realistic description of biomass and nutrient pools than is the case for the simple model. The process model ingests various remote sensing data sources as well as high-resolution reanalysis meteorology, and can be trained against biometric inventories and eddy-covariance data. 
Linking the bottom-up flux fields to the atmospheric CO2 concentrations through the transport module allows evaluating the spatial representativeness of the BGC flux fields, and in that way assimilates more of the available information than either of the individual modeling techniques alone. Bayesian inversion is then applied to assign scaling factors that align the surface fluxes with the CO2 time series. Our project demonstrates how bottom-up and top-down techniques can be reconciled to arrive at a more robust and balanced spatial carbon budget. We will show how to evaluate existing flux products through regionally representative atmospheric observations, i.e. how well the underlying model assumptions represent processes on the regional scale. Adapting process model parameterization sets for, e.g., sub-regions, disturbance regimes, or land cover classes, in order to optimize the agreement between surface fluxes and atmospheric observations, can lead to improved understanding of the underlying flux mechanisms and reduces uncertainties in the regional carbon budgets.

  2. A new visco-elasto-plastic model via time-space fractional derivative

    NASA Astrophysics Data System (ADS)

    Hei, X.; Chen, W.; Pang, G.; Xiao, R.; Zhang, C.

    2018-02-01

    To characterize the visco-elasto-plastic behavior of metals and alloys, we propose a new constitutive equation based on a time-space fractional derivative. The rheological representation of the model is analogous to that of the Bingham-Maxwell model, with the dashpot element and sliding friction element replaced by the corresponding fractional elements. The model is applied to describe constant strain rate, stress relaxation and creep tests of different metals and alloys. The results suggest that the proposed simple model can describe the main characteristics of the experimental observations. More importantly, the model can also provide more accurate predictions than the classic Bingham-Maxwell model and the Bingham-Norton model.

  3. Probabilistic Design Storm Method for Improved Flood Estimation in Ungauged Catchments

    NASA Astrophysics Data System (ADS)

    Berk, Mario; Špačková, Olga; Straub, Daniel

    2017-12-01

    The design storm approach with event-based rainfall-runoff models is a standard method for design flood estimation in ungauged catchments. The approach is conceptually simple and computationally inexpensive, but the underlying assumptions can lead to flawed design flood estimations. In particular, the implied average recurrence interval (ARI) neutrality between rainfall and runoff neglects uncertainty in other important parameters, leading to an underestimation of design floods. The selection of a single representative critical rainfall duration in the analysis leads to an additional underestimation of design floods. One way to overcome these nonconservative approximations is the use of a continuous rainfall-runoff model, which is associated with significant computational cost and requires rainfall input data that are often not readily available. As an alternative, we propose a novel Probabilistic Design Storm method that combines event-based flood modeling with basic probabilistic models and concepts from reliability analysis, in particular the First-Order Reliability Method (FORM). The proposed methodology overcomes the limitations of the standard design storm approach, while utilizing the same input information and models without excessive computational effort. Additionally, the Probabilistic Design Storm method allows deriving so-called design charts, which summarize representative design storm events (combinations of rainfall intensity and other relevant parameters) for floods with different return periods. These can be used to study the relationship between rainfall and runoff return periods. We demonstrate, investigate, and validate the method by means of an example catchment located in the Bavarian Pre-Alps, in combination with a simple hydrological model commonly used in practice.
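    The First-Order Reliability Method underlying the proposed approach can be illustrated in its simplest setting, where it is exact: a linear limit-state function of independent standard normal variables. The coefficients below are illustrative only, not the catchment study's values:

```python
import math

# FORM in the simplest case: limit state g(u) = b - a1*u1 - a2*u2 with
# u1, u2 independent standard normals. Failure is g(u) <= 0, so the
# reliability index is the distance from the origin to the failure
# surface, beta = b / ||a||, and the failure probability is Phi(-beta).
a1, a2 = 3.0, 4.0
b = 10.0
beta = b / math.hypot(a1, a2)                 # reliability index = 2.0
pf = 0.5 * math.erfc(beta / math.sqrt(2.0))   # Phi(-beta)
```

    For nonlinear limit states (e.g. runoff as a function of rainfall intensity and antecedent conditions), FORM finds the most probable failure point iteratively and linearizes there; that design point is what the paper's design charts summarize.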

  4. A new paradigm for predicting zonal-mean climate and climate change

    NASA Astrophysics Data System (ADS)

    Armour, K.; Roe, G.; Donohoe, A.; Siler, N.; Markle, B. R.; Liu, X.; Feldl, N.; Battisti, D. S.; Frierson, D. M.

    2016-12-01

    How will the pole-to-equator temperature gradient, or large-scale patterns of precipitation, change under global warming? Answering such questions typically involves numerical simulations with comprehensive general circulation models (GCMs) that represent the complexities of climate forcing, radiative feedbacks, and atmosphere and ocean dynamics. Yet, our understanding of these predictions hinges on our ability to explain them through the lens of simple models and physical theories. Here we present evidence that zonal-mean climate, and its changes, can be understood in terms of a moist energy balance model that represents atmospheric heat transport as a simple diffusion of latent and sensible heat (as a down-gradient transport of moist static energy, with a diffusivity coefficient that is nearly constant with latitude). We show that the theoretical underpinnings of this model derive from the principle of maximum entropy production; that its predictions are empirically supported by atmospheric reanalyses; and that it successfully predicts the behavior of a hierarchy of climate models - from a gray radiation aquaplanet moist GCM, to comprehensive GCMs participating in CMIP5. As an example of the power of this paradigm, we show that, given only patterns of local radiative feedbacks and climate forcing, the moist energy balance model accurately predicts the evolution of zonal-mean temperature and atmospheric heat transport as simulated by the CMIP5 ensemble. These results suggest that, despite all of its dynamical complexity, the atmosphere essentially responds to energy imbalances by simply diffusing latent and sensible heat down-gradient; this principle appears to explain zonal-mean climate and its changes under global warming.
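    The down-gradient diffusion idea can be illustrated with a minimal one-dimensional energy balance model on a sine-of-latitude grid. This sketch diffuses temperature rather than full moist static energy, and the parameter values are standard textbook choices, not those diagnosed in the study:

```python
import numpy as np

n = 90
x = np.linspace(-1 + 1.0 / n, 1 - 1.0 / n, n)     # sine of latitude, cell centres
dx = x[1] - x[0]
S = 340.0 * 0.7 * (1 - 0.48 * (3 * x ** 2 - 1) / 2)   # absorbed solar (W m^-2)
A, B = 203.3, 2.09            # linearised OLR = A + B*T (T in deg C)
D = 0.6                       # diffusivity (W m^-2 K^-1), ~constant with latitude

# Discretise B*T - D * d/dx[(1 - x^2) dT/dx] = S - A with no-flux poles
# and solve the resulting tridiagonal linear system directly.
xi = (x[:-1] + x[1:]) / 2                          # interface positions
w = D * (1 - xi ** 2) / dx ** 2
M = np.diag(np.full(n, B))
for i in range(n - 1):
    M[i, i] += w[i]
    M[i, i + 1] -= w[i]
    M[i + 1, i + 1] += w[i]
    M[i + 1, i] -= w[i]
T = np.linalg.solve(M, S - A)                      # equilibrium temperature (deg C)
```

    Because the diffusion operator conserves energy, the global-mean temperature is set purely by the radiative balance, while the diffusivity controls the pole-to-equator gradient.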

  5. Modal cost analysis for simple continua

    NASA Technical Reports Server (NTRS)

    Hu, A.; Skelton, R. E.; Yang, T. Y.

    1988-01-01

    The most popular finite element codes are based upon appealing theories of convergence of modal frequencies. For example, the popularity of cubic elements for beam-like structures is due to the rapid convergence of modal frequencies and stiffness properties. However, for those problems in which the primary consideration is the accuracy of response of the structure at specified locations, it is more important to obtain accuracy in the modal costs than in the modal frequencies. The modal cost represents the contribution of a mode in the norm of the response vector. This paper provides a complete modal cost analysis for simple continua such as beam-like structures. Upper bounds are developed for mode truncation errors in the model reduction process and modal cost analysis dictates which modes to retain in order to reduce the model for control design purposes.

  6. Mathematical Modeling for Scrub Typhus and Its Implications for Disease Control.

    PubMed

    Min, Kyung Duk; Cho, Sung Il

    2018-03-19

    The incidence rate of scrub typhus has been increasing in the Republic of Korea. Previous studies have suggested that this trend may have resulted from the effects of climate change on the transmission dynamics among vectors and hosts, but a clear explanation of the process is still lacking. In this study, we applied mathematical models to explore the potential factors that influence the epidemiology of tsutsugamushi disease. We developed mathematical models of ordinary differential equations including human, rodent and mite groups. Two models, one simple and one complex, were developed, and all parameters employed in the models were adopted from previous articles that represent epidemiological situations in the Republic of Korea. The simulation results showed that the force of infection at the equilibrium state under the simple model was 0.236 (per 100,000 person-months), and that in the complex model was 26.796 (per 100,000 person-months). Sensitivity analyses indicated that the most influential parameters were the rodent and mite populations and the contact rate between them for the simple model, and trans-ovarian transmission for the complex model. In both models, the contact rate between humans and mites is more influential than the mortality rates of the rodent and mite groups. The results indicate that the effect of controlling either rodents or mites could be limited, and that reducing the contact rate between humans and mites is a more practical and effective strategy. However, the current level of control would be insufficient relative to the growing mite population. © 2018 The Korean Academy of Medical Sciences.
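    The kind of compartmental ODE system described can be sketched as follows. The structure (rodent-mite transmission driving a human force of infection) follows the abstract, but all parameter values and names are placeholders, not the calibrated rates:

```python
def simulate(beta_rm=0.2, beta_mr=0.2, beta_hm=0.001,
             mu_r=0.05, mu_m=0.1, days=2000, dt=0.1):
    """Euler-integrate rodent-mite transmission fractions and return the
    endemic human force of infection (per capita, per day)."""
    Sr, Ir = 0.99, 0.01            # susceptible / infected rodent fractions
    Sm, Im = 0.99, 0.01            # susceptible / infected mite fractions
    for _ in range(int(days / dt)):
        new_r = beta_rm * Sr * Im  # rodents infected by mites
        new_m = beta_mr * Sm * Ir  # mites infected by rodents
        Sr += dt * (mu_r - new_r - mu_r * Sr)   # births balance deaths
        Ir += dt * (new_r - mu_r * Ir)
        Sm += dt * (mu_m - new_m - mu_m * Sm)
        Im += dt * (new_m - mu_m * Im)
    return beta_hm * Im            # human force of infection

foi_baseline = simulate()
foi_reduced_contact = simulate(beta_rm=0.05, beta_mr=0.05)
```

    Lowering the rodent-mite contact rates pushes the basic reproduction number below one and the endemic force of infection toward zero, which is the qualitative behaviour behind the sensitivity results described above.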

  7. Optimized theory for simple and molecular fluids.

    PubMed

    Marucho, M; Montgomery Pettitt, B

    2007-03-28

    An optimized closure approximation for both simple and molecular fluids is presented. A smooth interpolation between Percus-Yevick and hypernetted chain closures is optimized by minimizing the free energy self-consistently with respect to the interpolation parameter(s). The molecular version is derived from a refinement of the method for simple fluids. In doing so, a method is proposed which appropriately couples an optimized closure with the variant of the diagrammatically proper integral equation recently introduced by this laboratory [K. M. Dyer et al., J. Chem. Phys. 123, 204512 (2005)]. The simplicity of the expressions involved in this proposed theory has allowed the authors to obtain an analytic expression for the approximate excess chemical potential. This is shown to be an efficient tool to estimate, from first principles, the numerical value of the interpolation parameters defining the aforementioned closure. As a preliminary test, representative models for simple fluids and homonuclear diatomic Lennard-Jones fluids were analyzed, obtaining site-site correlation functions in excellent agreement with simulation data.

  8. Effect of nonlinearity in hybrid kinetic Monte Carlo-continuum models.

    PubMed

    Balter, Ariel; Lin, Guang; Tartakovsky, Alexandre M

    2012-01-01

    Recently there has been interest in developing efficient ways to model heterogeneous surface reactions with hybrid computational models that couple a kinetic Monte Carlo (KMC) model for a surface to a finite-difference model for bulk diffusion in a continuous domain. We consider two representative problems that validate a hybrid method and show that this method captures the combined effects of nonlinearity and stochasticity. We first validate a simple deposition-dissolution model with a linear rate showing that the KMC-continuum hybrid agrees with both a fully deterministic model and its analytical solution. We then study a deposition-dissolution model including competitive adsorption, which leads to a nonlinear rate, and show that in this case the KMC-continuum hybrid and fully deterministic simulations do not agree. However, we are able to identify the difference as a natural result of the stochasticity coming from the KMC surface process. Because KMC captures inherent fluctuations, we consider it to be more realistic than a purely deterministic model. Therefore, we consider the KMC-continuum hybrid to be more representative of a real system.
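    The linear deposition-dissolution benchmark can be sketched with a Gillespie-style KMC whose long-time coverage is compared against the mean-field ODE fixed point. Rates and lattice size are illustrative placeholders, not the paper's setup:

```python
import numpy as np

def kmc_coverage(k_dep=1.0, k_dis=0.5, sites=500, t_end=20.0, seed=0):
    """Gillespie KMC for deposition/dissolution on independent sites;
    returns the surface coverage at t_end."""
    rng = np.random.default_rng(seed)
    n, t = 0, 0.0                        # occupied sites, elapsed time
    while t < t_end:
        r_dep = k_dep * (sites - n)      # total deposition propensity
        r_dis = k_dis * n                # total dissolution propensity
        total = r_dep + r_dis
        t += rng.exponential(1.0 / total)          # waiting time to next event
        if rng.random() < r_dep / total:
            n += 1
        else:
            n -= 1
    return n / sites

theta_kmc = kmc_coverage()
theta_ode = 1.0 / 1.5    # mean-field fixed point: k_dep / (k_dep + k_dis)
```

    For this linear rate the stochastic mean matches the deterministic solution, as in the paper's first validation; the disagreement the authors report only appears once the rates become nonlinear, because fluctuations no longer average out.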

  9. Effect of Nonlinearity in Hybrid Kinetic Monte Carlo-Continuum Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balter, Ariel I.; Lin, Guang; Tartakovsky, Alexandre M.

    2012-04-23

    Recently there has been interest in developing efficient ways to model heterogeneous surface reactions with hybrid computational models that couple a KMC model for a surface to a finite difference model for bulk diffusion in a continuous domain. We consider two representative problems that validate a hybrid method and also show that this method captures the combined effects of nonlinearity and stochasticity. We first validate a simple deposition/dissolution model with a linear rate showing that the KMC-continuum hybrid agrees with both a fully deterministic model and its analytical solution. We then study a deposition/dissolution model including competitive adsorption, which leads to a nonlinear rate, and show that, in this case, the KMC-continuum hybrid and fully deterministic simulations do not agree. However, we are able to identify the difference as a natural result of the stochasticity coming from the KMC surface process. Because KMC captures inherent fluctuations, we consider it to be more realistic than a purely deterministic model. Therefore, we consider the KMC-continuum hybrid to be more representative of a real system.

  10. Segmentation in Tardigrada and diversification of segmental patterns in Panarthropoda.

    PubMed

    Smith, Frank W; Goldstein, Bob

    2017-05-01

    The origin and diversification of segmented metazoan body plans has fascinated biologists for over a century. The superphylum Panarthropoda includes three phyla of segmented animals-Euarthropoda, Onychophora, and Tardigrada. This superphylum includes representatives with relatively simple and representatives with relatively complex segmented body plans. At one extreme of this continuum, euarthropods exhibit an incredible diversity of serially homologous segments. Furthermore, distinct tagmosis patterns are exhibited by different classes of euarthropods. At the other extreme, all tardigrades share a simple segmented body plan that consists of a head and four leg-bearing segments. The modular body plans of panarthropods make them a tractable model for understanding diversification of animal body plans more generally. Here we review results of recent morphological and developmental studies of tardigrade segmentation. These results complement investigations of segmentation processes in other panarthropods and paleontological studies to illuminate the earliest steps in the evolution of panarthropod body plans. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. On the context-dependent scaling of consumer feeding rates.

    PubMed

    Barrios-O'Neill, Daniel; Kelly, Ruth; Dick, Jaimie T A; Ricciardi, Anthony; MacIsaac, Hugh J; Emmerson, Mark C

    2016-06-01

    The stability of consumer-resource systems can depend on the form of feeding interactions (i.e. functional responses). Size-based models predict interactions - and thus stability - based on consumer-resource size ratios. However, little is known about how interaction contexts (e.g. simple or complex habitats) might alter scaling relationships. Addressing this, we experimentally measured interactions between a large size range of aquatic predators (4-6400 mg over 1347 feeding trials) and an invasive prey that transitions among habitats: from the water column (3D interactions) to simple and complex benthic substrates (2D interactions). Simple and complex substrates mediated successive reductions in capture rates - particularly around the unimodal optimum - and promoted prey population stability in model simulations. Many real consumer-resource systems transition between 2D and 3D interactions, and along complexity gradients. Thus, Context-Dependent Scaling (CDS) of feeding interactions could represent an unrecognised aspect of food webs, and quantifying the extent of CDS might enhance predictive ecology. © The Authors. Ecology Letters published by CNRS and John Wiley & Sons Ltd.
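    Feeding interactions of the kind measured here are commonly described by a saturating (Holling type II) functional response. As an assumption for illustration only, habitat complexity is represented below by a reduced attack rate:

```python
def holling_type_ii(a, h, N):
    """Capture rate at prey density N, with attack rate a and handling time h."""
    return a * N / (1 + a * h * N)

# Hypothetical parameters: complex benthic substrate lowers the attack rate.
open_water = holling_type_ii(a=1.0, h=0.1, N=50)        # 3D interaction
complex_benthos = holling_type_ii(a=0.3, h=0.1, N=50)   # 2D, complex habitat
```

    Lower capture rates at a given prey density flatten the functional response, which is the mechanism by which habitat complexity promoted prey population stability in the authors' simulations.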

  12. Prospects for Alpha Particle Heating in JET in the Hot Ion Regime

    NASA Astrophysics Data System (ADS)

    Cordey, J. G.; Keilhacker, M.; Watkins, M. L.

    1987-01-01

    The prospects for alpha particle heating in JET are discussed. A computational model is developed to represent adequately the neutron yield from JET plasmas heated by neutral beam injection. This neutral beam model, augmented by a simple plasma model, is then used to determine the neutron yields and fusion Q-values anticipated for different heating schemes in future operation of JET with tritium. The relative importance of beam-thermal and thermal-thermal reactions is pointed out and the dependence of the results on, for example, plasma density, temperature, energy confinement and purity is shown. Full 1½-D transport code calculations, based on models developed for ohmic, ICRF and NBI heated JET discharges, are used also to provide a power scan for JET operation in tritium in the low density, high ion temperature regime. The results are shown to be in good agreement with the estimates made using the simple plasma model and indicate that, based on present knowledge, a fusion Q-value in the plasma centre above unity should be achieved in JET.

  13. Development of flexural vibration inspection techniques to rapidly assess the structural health of timber bridge systems

    Treesearch

    Xiping Wang; James P. Wacker; Robert J. Ross; Brian K. Brashaw; Robert Vatalaro

    2005-01-01

    This paper describes an effort to develop a global dynamic testing technique for evaluating the overall stiffness of timber bridge superstructures. A forced vibration method was used to measure the natural frequency of single-span timber bridges in the laboratory and field. An analytical model based on simple beam theory was proposed to represent the relationship...

  14. Representing Matrix Cracks Through Decomposition of the Deformation Gradient Tensor in Continuum Damage Mechanics Methods

    NASA Technical Reports Server (NTRS)

    Leone, Frank A., Jr.

    2015-01-01

    A method is presented to represent the large-deformation kinematics of intraply matrix cracks and delaminations in continuum damage mechanics (CDM) constitutive material models. The method involves the additive decomposition of the deformation gradient tensor into 'crack' and 'bulk material' components. The response of the intact bulk material is represented by a reduced deformation gradient tensor, and the opening of an embedded cohesive interface is represented by a normalized cohesive displacement-jump vector. The rotation of the embedded interface is tracked as the material deforms and as the crack opens. The distribution of the total local deformation between the bulk material and the cohesive interface components is determined by minimizing the difference between the cohesive stress and the bulk material stress projected onto the cohesive interface. The improvements to the accuracy of CDM models that incorporate the presented method over existing approaches are demonstrated for a single element subjected to simple shear deformation and for a finite element model of a unidirectional open-hole tension specimen. The material model is implemented as a VUMAT user subroutine for the Abaqus/Explicit finite element software. The presented deformation gradient decomposition method reduces the artificial load transfer across matrix cracks subjected to large shearing deformations, and avoids the spurious secondary failure modes that often occur in analyses based on conventional progressive damage models.

  15. 47 CFR 52.35 - Porting Intervals.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... telephone numbers must complete a simple wireline-to-wireline or simple intermodal port request within one... work week of Monday through Friday represents mandatory business days and 8 a.m. to 5 p.m. represents... complete Local Service Request (LSR) must be received by the current service provider between 8 a.m. and 1...

  16. 47 CFR 52.35 - Porting Intervals.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... telephone numbers must complete a simple wireline-to-wireline or simple intermodal port request within one... work week of Monday through Friday represents mandatory business days and 8 a.m. to 5 p.m. represents... complete Local Service Request (LSR) must be received by the current service provider between 8 a.m. and 1...

  17. Net Efficacy Adjusted for Risk (NEAR): A Simple Procedure for Measuring Risk:Benefit Balance

    PubMed Central

    Boada, José N.; Boada, Carlos; García-Sáiz, Mar; García, Marcelino; Fernández, Eduardo; Gómez, Eugenio

    2008-01-01

    Background Although several mathematical models have been proposed to assess the risk:benefit of drugs in one measure, their use in practice has been rather limited. Our objective was to design a simple, easily applicable model. In this respect, measuring the proportion of patients who respond favorably to treatment without being affected by adverse drug reactions (ADR) could be a suitable endpoint. However, remarkably few published clinical trials report the data required to calculate this proportion. As an approach to the problem, we calculated the expected proportion of this type of patients. Methodology/Principal Findings Theoretically, responders without ADR may be obtained by multiplying the total number of responders by the total number of subjects that did not suffer ADR, and dividing the product by the total number of subjects studied. When two drugs are studied, the same calculation may be repeated for the second drug. Then, by constructing a 2×2 table with the expected frequencies of responders with and without ADR, and non-responders with and without ADR, the odds ratio and relative risk with their confidence intervals may be easily calculated and graphically represented on a logarithmic scale. Such measures represent “net efficacy adjusted for risk” (NEAR). We assayed the model with results extracted from several published clinical trials or meta-analyses. On comparing our results with those originally reported by the authors, marked differences were found in some cases, with ADR arising as a relevant factor to balance the clinical benefit obtained. The particular features of the adverse reaction that must be weighed against benefit is discussed in the paper. Conclusion NEAR representing overall risk-benefit may contribute to improving knowledge of drug clinical usefulness. As most published clinical trials tend to overestimate benefits and underestimate toxicity, our measure represents an effort to change this trend. PMID:18974868
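    The expected-frequency calculation described above can be sketched directly; the trial counts below are hypothetical, and the confidence interval uses the standard Woolf (log-scale) method as an assumed choice:

```python
import math

def near_cell(responders, no_adr, n):
    """Expected number of patients who both respond and avoid ADR."""
    return responders * no_adr / n

def odds_ratio(a, b, c, d):
    """Odds ratio with a 95% CI on the log scale (Woolf method)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Hypothetical trial: drug vs control, 100 patients per arm.
n = 100
drug_good = near_cell(responders=70, no_adr=80, n=n)   # 56 expected
drug_bad = n - drug_good                               # 44
ctrl_good = near_cell(responders=40, no_adr=95, n=n)   # 38 expected
ctrl_bad = n - ctrl_good                               # 62
or_, lo, hi = odds_ratio(drug_good, ctrl_good, drug_bad, ctrl_bad)
```

    Note how the drug's apparent advantage (70% vs 40% response) shrinks once its higher ADR rate is folded in, which is exactly the rebalancing NEAR is designed to expose.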

  18. Combining 3d Volume and Mesh Models for Representing Complicated Heritage Buildings

    NASA Astrophysics Data System (ADS)

    Tsai, F.; Chang, H.; Lin, Y.-W.

    2017-08-01

    This study developed a simple but effective strategy to combine 3D volume and mesh models for representing complicated heritage buildings and structures. The idea is to seamlessly integrate 3D parametric or polyhedral models and mesh-based digital surfaces to generate a hybrid 3D model that can take advantages of both modeling methods. The proposed hybrid model generation framework is separated into three phases. Firstly, after acquiring or generating 3D point clouds of the target, these 3D points are partitioned into different groups. Secondly, a parametric or polyhedral model of each group is generated based on plane and surface fitting algorithms to represent the basic structure of that region. A "bare-bones" model of the target can subsequently be constructed by connecting all 3D volume element models. In the third phase, the constructed bare-bones model is used as a mask to remove points enclosed by the bare-bones model from the original point clouds. The remaining points are then connected to form 3D surface mesh patches. The boundary points of each surface patch are identified and these boundary points are projected onto the surfaces of the bare-bones model. Finally, new meshes are created to connect the projected points and original mesh boundaries to integrate the mesh surfaces with the 3D volume model. The proposed method was applied to an open-source point cloud data set and point clouds of a local historical structure. Preliminary results indicated that the reconstructed hybrid models using the proposed method can retain both fundamental 3D volume characteristics and accurate geometric appearance with fine details. The reconstructed hybrid models can also be used to represent targets in different levels of detail according to user and system requirements in different applications.

  19. Evaluation of the soft x-ray reflectivity of micropore optics using anisotropic wet etching of silicon wafers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitsuishi, Ikuyuki; Ezoe, Yuichiro; Koshiishi, Masaki

    2010-02-20

    The x-ray reflectivity of an ultralightweight and low-cost x-ray optic using anisotropic wet etching of Si (110) wafers is evaluated at two energies, C Kα (0.28 keV) and Al Kα (1.49 keV). The obtained reflectivities at both energies are not reproduced by a simple planar mirror model that considers only surface roughness. Hence, a geometrical occultation effect due to step structures upon the etched mirror surface is taken into account, after which the reflectivities are reproduced by the theoretical model. The estimated surface roughness at C Kα (~6 nm rms) is significantly larger than the ~1 nm at Al Kα. This can be explained by the different coherent lengths at the two energies.

  20. Evaluation of the soft x-ray reflectivity of micropore optics using anisotropic wet etching of silicon wafers.

    PubMed

    Mitsuishi, Ikuyuki; Ezoe, Yuichiro; Koshiishi, Masaki; Mita, Makoto; Maeda, Yoshitomo; Yamasaki, Noriko Y; Mitsuda, Kazuhisa; Shirata, Takayuki; Hayashi, Takayuki; Takano, Takayuki; Maeda, Ryutaro

    2010-02-20

    The x-ray reflectivity of an ultralightweight and low-cost x-ray optic using anisotropic wet etching of Si (110) wafers is evaluated at two energies, C Kα (0.28 keV) and Al Kα (1.49 keV). The obtained reflectivities at both energies are not reproduced by a simple planar mirror model that considers only surface roughness. Hence, a geometrical occultation effect due to step structures upon the etched mirror surface is taken into account, after which the reflectivities are reproduced by the theoretical model. The estimated surface roughness at C Kα (approximately 6 nm rms) is significantly larger than the approximately 1 nm at Al Kα. This can be explained by the different coherent lengths at the two energies.

  1. Development of spectral analysis math models and software program and spectral analyzer, digital converter interface equipment design

    NASA Technical Reports Server (NTRS)

    Hayden, W. L.; Robinson, L. H.

    1972-01-01

    Spectral analysis of angle-modulated communication systems is studied by: (1) performing a literature survey of candidate power spectrum computational techniques, determining the computational requirements, and formulating a mathematical model satisfying these requirements; (2) implementing the model on a UNIVAC 1230 digital computer as the Spectral Analysis Program (SAP); and (3) developing the hardware specifications for a data acquisition system which will acquire an input modulating signal for SAP. The SAP computational technique uses an extended fast Fourier transform and represents a generalized approach for simple and complex modulating signals.

  2. Quantum Mechanics, Path Integrals and Option Pricing:. Reducing the Complexity of Finance

    NASA Astrophysics Data System (ADS)

    Baaquie, Belal E.; Corianò, Claudio; Srikant, Marakani

    2003-04-01

    Quantum Finance represents the application of the techniques of quantum theory (quantum mechanics and quantum field theory) to theoretical and applied finance. After a brief overview of the connection between these fields, we illustrate some of the methods of lattice simulations of path integrals for the pricing of options. The ideas are sketched out for simple models, such as the Black-Scholes model, where analytical and numerical results are compared. Application of the method to nonlinear systems is also briefly overviewed. More general models, for exotic or path-dependent options, are discussed.
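
As an illustration of the simulation-versus-analytics comparison described above, a minimal sketch (not the authors' code; all parameter values are hypothetical) can check a sampled path-integral estimate, here reduced to Monte Carlo sampling of terminal prices, against the closed-form Black-Scholes price:

```python
import math
import numpy as np

def bs_call(S0, K, r, sigma, T):
    """Closed-form Black-Scholes price of a European call."""
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return S0 * N(d1) - K * math.exp(-r * T) * N(d2)

def mc_call(S0, K, r, sigma, T, n_paths=200_000, seed=0):
    """Monte Carlo estimate: sample terminal prices of geometric Brownian
    motion (the sum over sampled paths plays the role of the path integral)."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * z)
    return math.exp(-r * T) * float(np.mean(np.maximum(ST - K, 0.0)))

analytic = bs_call(100.0, 100.0, 0.05, 0.2, 1.0)   # ~10.45
estimate = mc_call(100.0, 100.0, 0.05, 0.2, 1.0)
```

The two values agree to within Monte Carlo sampling error, which shrinks as the number of sampled paths grows.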

  3. Universality in volume-law entanglement of scrambled pure quantum states.

    PubMed

    Nakagawa, Yuya O; Watanabe, Masataka; Fujita, Hiroyuki; Sugiura, Sho

    2018-04-24

    A pure quantum state can fully describe thermal equilibrium as long as one focuses on local observables. The thermodynamic entropy can also be recovered as the entanglement entropy of small subsystems. When the size of the subsystem increases, however, quantum correlations break the correspondence and mandate a correction to this simple volume law. Elucidating the size dependence of the entanglement entropy is thus essential in linking quantum physics with thermodynamics. Here we derive an analytic formula for the entanglement entropy of a class of pure states, called cTPQ states, representing equilibrium. We numerically find that our formula applies universally to any sufficiently scrambled pure state representing thermal equilibrium, i.e., energy eigenstates of non-integrable models and states after quantum quenches. Our formula can be exploited as a diagnostic for chaotic systems; it can distinguish integrable models from non-integrable models and many-body localization phases from chaotic phases.

  4. John Lumley's Contributions to Turbulence Modeling

    NASA Astrophysics Data System (ADS)

    Pope, Stephen

    2015-11-01

    We recall the contributions that John Lumley made to turbulence modeling in the 1970s and 1980s. In those early days, computer power was feeble by today's standards, and eddy-viscosity models were prevalent in CFD. Lumley recognized, however, that second-moment closures represent the simplest level at which the physics of turbulent flows can reasonably be represented. This is especially true when the velocity field is coupled to scalar fields through buoyancy, as in the atmosphere and oceans. While Lumley was not the first to propose second-moment closures, he can be credited with establishing the rational approach to constructing such closures. This includes the application of various invariance principles and tensor representation theorems, the enforcement of constraints imposed by realizability, and of course appeal to experimental data in simple, canonical flows. These techniques are now well accepted and have found application far beyond second-moment closures.

  5. Insights from mathematical modeling of renal tubular function.

    PubMed

    Weinstein, A M

    1998-01-01

    Mathematical models of the proximal tubule have been developed which represent the important solute species within the constraints of known cytosolic concentrations, transport fluxes, and overall epithelial permeabilities. In general, model simulations have been used to assess the quantitative feasibility of what appear to be qualitatively plausible mechanisms, or alternatively, to identify incomplete rationalization of experimental observations. The examples considered include: (1) proximal water reabsorption, for which the lateral interspace is a locus for solute-solvent coupling; (2) ammonia secretion, for which the issue is prioritizing driving forces: transport on the Na+/H+ exchanger, on the Na,K-ATPase, or ammoniagenesis; (3) formate-stimulated NaCl reabsorption, for which simple addition of a luminal membrane chloride/formate exchanger fails to represent experimental observation; and (4) balancing luminal entry and peritubular exit, in which ATP-dependent peritubular K+ channels have been implicated but appear unable to account for the bulk of proximal tubule cell volume homeostasis.

  6. Bounded Linear Stability Analysis - A Time Delay Margin Estimation Approach for Adaptive Control

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Ishihara, Abraham K.; Krishnakumar, Kalmanje Srinivas; Bakhtiari-Nejad, Maryam

    2009-01-01

    This paper presents a method for estimating the time delay margin for model-reference adaptive control of systems with almost-linear structured uncertainty. The bounded linear stability analysis method seeks to represent the conventional model-reference adaptive law by a locally bounded linear approximation within a small time window using the comparison lemma. The locally bounded linear approximation of the combined adaptive system is cast in the form of an input-time-delay differential equation over a small time window. The time delay margin of this system represents a local stability measure and is computed analytically by a matrix measure method, which provides a simple analytical technique for estimating an upper bound on the time delay margin. Based on simulation results for a scalar model-reference adaptive control system, both the bounded linear stability method and the matrix measure method are seen to provide a reasonably accurate and yet not overly conservative time delay margin estimate.
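
The matrix measure (logarithmic norm) at the heart of such bounds is simple to compute. A minimal sketch for the 2-norm case follows, with an illustrative matrix rather than the paper's adaptive system:

```python
import numpy as np

def matrix_measure_2(A):
    """Matrix measure (logarithmic norm) induced by the 2-norm:
    mu_2(A) = largest eigenvalue of the symmetric part of A."""
    sym = 0.5 * (A + A.T)
    return float(np.max(np.linalg.eigvalsh(sym)))

# For x' = A x, mu_2(A) < 0 certifies exponential decay via the bound
# ||x(t)|| <= exp(mu_2(A) * t) * ||x(0)||, which is what feeds analytic
# delay-margin estimates of this type.
A = np.array([[-2.0, 1.0],
              [0.0, -1.0]])
mu = matrix_measure_2(A)   # negative for this stable example
```

Note that the matrix measure can be negative even though every norm of A is positive, which is exactly what makes it useful as a stability certificate.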

  7. Simple models for the simulation of submarine melt for a Greenland glacial system model

    NASA Astrophysics Data System (ADS)

    Beckmann, Johanna; Perrette, Mahé; Ganopolski, Andrey

    2018-01-01

    Two hundred marine-terminating Greenland outlet glaciers deliver more than half of the annually accumulated ice into the ocean and have played an important role in the Greenland ice sheet mass loss observed since the mid-1990s. Submarine melt may play a crucial role in the mass balance and grounding-line position of these outlet glaciers. As the ocean warms, submarine melt is expected to increase, potentially driving outlet glacier retreat and contributing to sea level rise. Projections of the future contribution of outlet glaciers to sea level rise are hampered by the necessity of using models with extremely high resolution, of the order of a few hundred meters. This requirement applies not only when modeling outlet glaciers as stand-alone models but also when coupling them with high-resolution 3-D ocean models. In addition, fjord bathymetry data are mostly missing or inaccurate (errors of several hundred meters), which calls into question the benefit of using computationally expensive 3-D models for future predictions. Here we propose an alternative approach built on a computationally efficient simple model of submarine melt based on turbulent plume theory. We show that such a simple model is in reasonable agreement with several available modeling studies. We performed a suite of experiments to analyze the sensitivity of these simple models to model parameters and climate characteristics. We found that the computationally cheap plume model demonstrates behavior qualitatively similar to that of 3-D general circulation models. To match the results of the 3-D models quantitatively, a scaling factor of the order of 1 is needed for the plume models. We applied this approach to model submarine melt for six representative Greenland glaciers and found that the application of a line plume can produce submarine melt compatible with observational data. Our results show that the line plume model is more appropriate than the cone plume model for simulating the average submarine melting of real glaciers in Greenland.

  8. Mathematical Modeling Of Life-Support Systems

    NASA Technical Reports Server (NTRS)

    Seshan, Panchalam K.; Ganapathi, Balasubramanian; Jan, Darrell L.; Ferrall, Joseph F.; Rohatgi, Naresh K.

    1994-01-01

    Generic hierarchical model of life-support system developed to facilitate comparisons of options in design of system. Model represents combinations of interdependent subsystems supporting microbes, plants, fish, and land animals (including humans). Generic model enables rapid configuration of variety of specific life support component models for tradeoff studies culminating in single system design. Enables rapid evaluation of effects of substituting alternate technologies and even entire groups of technologies and subsystems. Used to synthesize and analyze life-support systems ranging from relatively simple, nonregenerative units like aquariums to complex closed-loop systems aboard submarines or spacecraft. Model, called Generic Modular Flow Schematic (GMFS), coded in such chemical-process-simulation languages as Aspen Plus and expressed as three-dimensional spreadsheet.

  9. GIS data models for coal geology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McColloch, G.H. Jr.; Timberlake, K.J.; Oldham, A.V.

    A variety of spatial data models can be applied to different aspects of coal geology. The simple vector data models found in various Computer Aided Drafting (CAD) programs are sometimes used for routine mapping and some simple analyses. However, more sophisticated applications that maintain the topological relationships between cartographic elements enhance analytical potential. Also, vector data models are best for producing various types of high-quality, conventional maps. The raster data model is generally considered best for representing data that vary continuously over a geographic area, such as the thickness of a coal bed. Information is lost when contour lines are threaded through raster grids for display, so volumes and tonnages are more accurately determined by working directly with raster data. Raster models are especially well suited to computationally simple surface-to-surface analysis, or overlay functions. Another data model, the triangulated irregular network (TIN), is superior at portraying visible surfaces because many TIN programs support break lines. Break lines locate sharp breaks in slope such as those generated by bodies of water or ridge crests. TINs also "honor" data points, so that a surface generated from a set of points is forced to pass through those points. TINs, or grids generated from TINs, are particularly good at determining the intersections of surfaces such as coal seam outcrops and geologic unit boundaries. No single technique works best for all coal-related applications. The ability to use a variety of data models, and to transform from one model to another, is essential for obtaining optimum results in a timely manner.
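
The point about computing volumes and tonnages directly from raster data can be sketched in a few lines; the grid, cell size, and density below are hypothetical illustrative values:

```python
import numpy as np

def coal_tonnage(thickness_m, cell_size_m, density_t_per_m3=1.35):
    """Tonnage straight from a raster thickness grid: sum of thickness
    times cell area times density. No contour threading, so no
    interpolation loss. The density is an illustrative figure only."""
    cell_area = cell_size_m ** 2
    volume = float(np.nansum(thickness_m)) * cell_area   # m^3
    return volume * density_t_per_m3

# A hypothetical 100 m grid: a 2 m seam thinning eastward to 1 m.
grid = np.linspace(2.0, 1.0, 50).reshape(1, 50).repeat(40, axis=0)
tons = coal_tonnage(grid, cell_size_m=100.0)
```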

  10. Revising Hydrology of a Land Surface Model

    NASA Astrophysics Data System (ADS)

    Le Vine, Nataliya; Butler, Adrian; McIntyre, Neil; Jackson, Christopher

    2015-04-01

    Land Surface Models (LSMs) are key elements in guiding adaptation to the changing water cycle and the starting points to develop a global hyper-resolution model of the terrestrial water, energy and biogeochemical cycles. However, before this potential is realised, there are some fundamental limitations of LSMs related to how meaningfully hydrological fluxes and stores are represented. An important limitation is the simplistic or non-existent representation of the deep subsurface in LSMs; and another is the lack of connection of LSM parameterisations to relevant hydrological information. In this context, the paper uses a case study of the JULES (Joint UK Land Environmental Simulator) LSM applied to the Kennet region in Southern England. The paper explores the assumptions behind JULES hydrology, adapts the model structure and optimises the coupling with the ZOOMQ3D regional groundwater model. The analysis illustrates how three types of information can be used to improve the model's hydrology: a) observations, b) regionalized information, and c) information from an independent physics-based model. It is found that: 1) coupling to the groundwater model allows realistic simulation of streamflows; 2) a simple dynamic lower boundary improves upon JULES' stationary unit gradient condition; 3) a 1D vertical flow in the unsaturated zone is sufficient; however there is benefit in introducing a simple dual soil moisture retention curve; 4) regionalized information can be used to describe soil spatial heterogeneity. It is concluded that relatively simple refinements to the hydrology of JULES and its parameterisation method can provide a substantial step forward in realising its potential as a high-resolution multi-purpose model.

  11. Ground temperature measurement by PRT-5 for maps experiment

    NASA Technical Reports Server (NTRS)

    Gupta, S. K.; Tiwari, S. N.

    1978-01-01

    A simple algorithm and computer program were developed for determining the actual surface temperature from the effective brightness temperature as measured remotely by a radiation thermometer called the PRT-5. This procedure allows the computation of the atmospheric correction to the effective brightness temperature without performing detailed radiative transfer calculations. Model radiative transfer calculations were performed to compute atmospheric corrections for several values of the surface and atmospheric parameters, individually and in combination. Polynomial regressions were performed between the magnitudes or deviations of these parameters and the corresponding computed corrections to establish simple analytical relations between them. Analytical relations were also developed to represent the combined correction for simultaneous variation of parameters in terms of their individual corrections.
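
The regression step, fitting a simple analytical relation between a parameter deviation and the computed brightness-temperature correction, might look like the following sketch with synthetic stand-in data (the functional form and coefficients are hypothetical, not the paper's values):

```python
import numpy as np

# Stand-in training data: deviation of an atmospheric parameter from a
# reference atmosphere vs. the correction a radiative-transfer model
# would compute for it (here generated from an assumed quadratic).
dev = np.linspace(-1.0, 1.0, 21)
correction = 1.8 * dev + 0.4 * dev**2

# Fit a low-order polynomial as the simple analytical relation.
coeffs = np.polyfit(dev, correction, deg=2)

# Apply it: predicted correction for a +0.5 deviation,
# without rerunning the radiative transfer model.
predicted = np.polyval(coeffs, 0.5)
```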

  12. Locomotion of C. elegans: A Piecewise-Harmonic Curvature Representation of Nematode Behavior

    PubMed Central

    Padmanabhan, Venkat; Khan, Zeina S.; Solomon, Deepak E.; Armstrong, Andrew; Rumbaugh, Kendra P.; Vanapalli, Siva A.; Blawzdziewicz, Jerzy

    2012-01-01

    Caenorhabditis elegans, a free-living soil nematode, displays a rich variety of body shapes and trajectories during its undulatory locomotion in complex environments. Here we show that the individual body postures and entire trails of C. elegans have a simple analytical description in curvature representation. Our model is based on the assumption that the curvature wave is generated in the head segment of the worm body and propagates backwards. We have found that a simple harmonic function for the curvature can capture multiple worm shapes during the undulatory movement. The worm body trajectories can be well represented in terms of piecewise sinusoidal curvature with abrupt changes in amplitude, wavevector, and phase. PMID:22792224
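
A posture can be reconstructed from such a harmonic curvature description by integrating the curvature into a tangent angle and then into coordinates. A minimal sketch with hypothetical amplitude, wavevector, and phase values:

```python
import numpy as np

def body_shape(A, q, phase, n=200, L=1.0):
    """Reconstruct a 2D body posture from harmonic curvature
    kappa(s) = A*sin(q*s + phase), integrating along arclength s."""
    s = np.linspace(0.0, L, n)
    ds = s[1] - s[0]
    kappa = A * np.sin(q * s + phase)
    theta = np.cumsum(kappa) * ds          # tangent angle from curvature
    x = np.cumsum(np.cos(theta)) * ds      # position from tangent angle
    y = np.cumsum(np.sin(theta)) * ds
    return x, y

# One posture of the undulatory cycle (illustrative parameters).
x, y = body_shape(A=12.0, q=4.0 * np.pi, phase=0.0)
arc = np.sum(np.hypot(np.diff(x), np.diff(y)))   # ~ body length L
```

Sweeping the phase through a full cycle generates the sequence of postures of one undulation; abrupt changes in A, q, and phase give the piecewise representation of entire trajectories.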

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dubrovsky, V. G.; Topovsky, A. V.

    New exact solutions, nonstationary and stationary, of the Veselov-Novikov (VN) equation in the form of simple nonlinear and linear superpositions of an arbitrary number N of exact special solutions u^(n), n = 1, ..., N, are constructed via the Zakharov-Manakov ∂̄-dressing method. Simple nonlinear superpositions are represented, up to a constant, by sums of the solutions u^(n) and are calculated by ∂̄-dressing on a nonzero energy level of the first auxiliary linear problem, i.e., the 2D stationary Schroedinger equation. It is remarkable that in the zero-energy limit the simple nonlinear superpositions convert to linear ones in the form of sums of the special solutions u^(n). It is shown that the sums u = u^(k_1) + ... + u^(k_m), 1 ≤ k_1 < k_2 < ... < k_m ≤ N, over arbitrary subsets of these solutions are also exact solutions of the VN equation. The presented exact solutions include superpositions of special line solitons as well as superpositions of plane-wave-type singular periodic solutions. By construction, these exact solutions also represent new exact transparent potentials of the 2D stationary Schroedinger equation and can serve as model potentials for electrons in planar structures of modern electronics.

  14. Physically based model for extracting dual permeability parameters using non-Newtonian fluids

    NASA Astrophysics Data System (ADS)

    Abou Najm, M. R.; Basset, C.; Stewart, R. D.; Hauswirth, S.

    2017-12-01

    Dual permeability models are effective for the assessment of flow and transport in structured soils with two dominant pore structures. The major challenge for those models remains the determination of appropriate and unique parameters through affordable, simple, and non-destructive methods. This study investigates the use of water and a non-Newtonian fluid in saturated flow experiments to derive the physically based parameters required for improved flow predictions using dual permeability models. We assess the ability of these two fluids to accurately estimate the representative pore sizes in dual-domain soils by determining the effective pore sizes of macropores and micropores. We developed two sub-models that solve for the effective macropore size assuming either cylindrical (e.g., biological pores) or planar (e.g., shrinkage cracks and fissures) pore geometries, with the micropores assumed to be represented by a single effective radius. Furthermore, the model solves for the percent contribution to flow (wi) corresponding to the representative macropores and micropores. A user-friendly solver was developed to numerically solve the system of equations, given that relevant non-Newtonian viscosity models lack forms conducive to analytical integration. The proposed dual-permeability model is a unique attempt to derive physically based parameters capable of measuring dual hydraulic conductivities, and it may therefore be useful in reducing parameter uncertainty and improving hydrologic model predictions.
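
The reason two fluids can jointly pin down pore size and flow fraction is the different radius scaling of their flow laws in a single pore. A sketch using the standard Hagen-Poiseuille and power-law (Ostwald-de Waele) tube-flow formulas, with illustrative values rather than the authors' solver:

```python
import math

def q_newtonian(r, dp, L, mu):
    """Hagen-Poiseuille flow in a cylindrical pore: Q scales as r^4."""
    return math.pi * r**4 * dp / (8.0 * mu * L)

def q_power_law(r, dp, L, K, n):
    """Power-law (Ostwald-de Waele) fluid in the same pore:
    Q scales as r^(3 + 1/n), a different exponent than the Newtonian case."""
    return (math.pi * n / (3.0 * n + 1.0)) * r**3 * (dp * r / (2.0 * K * L)) ** (1.0 / n)

# Doubling the radius multiplies Newtonian flow by 2^4 = 16 but
# shear-thinning (n = 0.5) flow by 2^(3+2) = 32. That contrast is what
# lets two flow experiments separate effective pore size from the
# fractional contribution of each pore domain.
ratio_newt = q_newtonian(2e-4, 1e4, 0.1, 1e-3) / q_newtonian(1e-4, 1e4, 0.1, 1e-3)
ratio_pl = q_power_law(2e-4, 1e4, 0.1, 0.1, 0.5) / q_power_law(1e-4, 1e4, 0.1, 0.1, 0.5)
```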

  15. SPARK: A Framework for Multi-Scale Agent-Based Biomedical Modeling.

    PubMed

    Solovyev, Alexey; Mikheev, Maxim; Zhou, Leming; Dutta-Moscato, Joyeeta; Ziraldo, Cordelia; An, Gary; Vodovotz, Yoram; Mi, Qi

    2010-01-01

    Multi-scale modeling of complex biological systems remains a central challenge in the systems biology community. A method of dynamic knowledge representation known as agent-based modeling enables the study of higher level behavior emerging from discrete events performed by individual components. With the advancement of computer technology, agent-based modeling has emerged as an innovative technique to model the complexities of systems biology. In this work, the authors describe SPARK (Simple Platform for Agent-based Representation of Knowledge), a framework for agent-based modeling specifically designed for systems-level biomedical model development. SPARK is a stand-alone application written in Java. It provides a user-friendly interface, and a simple programming language for developing Agent-Based Models (ABMs). SPARK has the following features specialized for modeling biomedical systems: 1) continuous space that can simulate real physical space; 2) flexible agent size and shape that can represent the relative proportions of various cell types; 3) multiple spaces that can concurrently simulate and visualize multiple scales in biomedical models; 4) a convenient graphical user interface. Existing ABMs of diabetic foot ulcers and acute inflammation were implemented in SPARK. Models of identical complexity were run in both NetLogo and SPARK; the SPARK-based models ran two to three times faster.
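
SPARK itself is a Java application with its own modeling language. The agent-based pattern it supports, discrete agents updated each tick in a continuous space, can be sketched generically in Python (this is not SPARK's API):

```python
import random

class Agent:
    """A minimal agent: a position in continuous 2D space and one rule."""
    def __init__(self, x, y):
        self.x, self.y = x, y

    def step(self, rng, speed=0.1):
        # Rule: unbiased random walk, a stand-in for e.g. cell migration.
        self.x += rng.uniform(-speed, speed)
        self.y += rng.uniform(-speed, speed)

def run(n_agents=50, n_steps=100, seed=1):
    rng = random.Random(seed)
    agents = [Agent(rng.random(), rng.random()) for _ in range(n_agents)]
    for _ in range(n_steps):
        for a in agents:           # discrete events per agent per tick
            a.step(rng)
    return agents

agents = run()
```

Higher-level behavior (e.g., aggregation or wound closure) emerges from many such local rules; frameworks like SPARK add the continuous-space bookkeeping, multiple concurrent scales, and visualization around this loop.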

  16. The Behavioral Economics of Choice and Interval Timing

    PubMed Central

    Jozefowiez, J.; Staddon, J. E. R.; Cerutti, D. T.

    2009-01-01

    We propose a simple behavioral economic model (BEM) describing how reinforcement and interval timing interact. The model assumes a Weber-law-compliant logarithmic representation of time. Associated with each represented time value are the payoffs that have been obtained for each possible response. At a given real time, the response with the highest payoff is emitted. The model accounts for a wide range of data from procedures such as simple bisection, metacognition in animals, economic effects in free-operant psychophysical procedures and paradoxical choice in double-bisection procedures. Although it assumes logarithmic time representation, it can also account for data from the time-left procedure usually cited in support of linear time representation. It encounters some difficulties in complex free-operant choice procedures, such as concurrent mixed fixed-interval schedules as well as some of the data on double bisection, that may involve additional processes. Overall, BEM provides a theoretical framework for understanding how reinforcement and interval timing work together to determine choice between temporally differentiated reinforcers. PMID:19618985
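
The model's core, a logarithmic time representation with payoffs attached to represented times and emission of the highest-payoff response, can be sketched as follows. The Gaussian generalization width and anchor durations are illustrative assumptions, not BEM's fitted values:

```python
import math

def payoff(t, anchor, width=0.35):
    """Payoff generalizes over the logarithmic (Weber-law) representation
    of time: a Gaussian in log-time centered on the reinforced anchor."""
    d = math.log(t) - math.log(anchor)
    return math.exp(-d * d / (2.0 * width ** 2))

def respond(t, short_anchor=2.0, long_anchor=8.0):
    """Emit the response with the highest stored payoff at real time t."""
    p_short, p_long = payoff(t, short_anchor), payoff(t, long_anchor)
    return "short" if p_short > p_long else "long"

choice_3s = respond(3.0)
# Indifference falls at the geometric mean sqrt(2*8) = 4 s, the classic
# bisection result that a logarithmic representation predicts.
balance = payoff(4.0, 2.0) - payoff(4.0, 8.0)
```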

  17. New Age of 3D Geological Modelling or Complexity is not an Issue Anymore

    NASA Astrophysics Data System (ADS)

    Mitrofanov, Aleksandr

    2017-04-01

    Geological models are of significant value in almost all types of research related to regional mapping and geodynamics, and especially to the structural and resource geology of mineral deposits. A well-developed geological model must take into account all vital features of the modelled object without over-simplification and should also adequately represent the geologist's interpretation. In recent years, with the gradual exhaustion of deposits of relatively simple morphology, geologists all over the world have faced the necessity of building representative models of increasingly complex objects. Meanwhile, the set of tools used for this has not changed significantly in the last two to three decades. The most widespread method of wireframe geological modelling was developed in the 1990s and is fully based on the engineering design toolset (so-called CAD). Strings and polygons representing a section-based interpretation are used as an intermediate step in wireframe generation. Despite the significant time this type of modelling requires, it can still provide sufficient results for simple and medium-complexity geological objects. With increasing complexity, however, more and more vital features of the deposit are sacrificed, because CAD-based explicit techniques are fundamentally unable (or require far more modelling time) to produce wireframes of the appropriate complexity. At the same time, an alternative technology, not based on the sectional approach and using fundamentally different mathematical algorithms, has been actively developed in a variety of other disciplines: medicine, advanced industrial design, and the game and cinema industries. In recent years this implicit technology has been applied to geological modelling, and it is now represented by a very powerful set of tools integrated into almost all major commercial software packages.
Implicit modelling makes it possible to develop geological models that genuinely correspond to complicated geological reality. Models can include fault blocking, complex structural trends and folding; they can be based on an extensive input dataset (such as dense drilling at the mining stage) or, conversely, on just a few drillhole intersections with significant input from the geological interpretation of the deposit. In any case, implicit modelling, if used correctly, makes it possible to incorporate the whole body of geological data and to obtain, relatively quickly, easily adjustable, flexible and robust geological wireframes that can serve as a reliable foundation for the following stages of geological investigation. In SRK's current practice, almost all wireframe models used for structural and resource geology are developed with implicit modelling tools, which has significantly increased the speed and quality of geological modelling.

  18. Using simple manipulatives to improve student comprehension of a complex biological process: protein synthesis.

    PubMed

    Guzman, Karen; Bartlett, John

    2012-01-01

    Biological systems and living processes involve a complex interplay of biochemicals and macromolecular structures that can be challenging for undergraduate students to comprehend and, thus, misconceptions abound. Protein synthesis, or translation, is an example of a biological process for which students often hold many misconceptions. This article describes an exercise that was developed to illustrate the process of translation using simple objects to represent complex molecules. Animations, 3D physical models, computer simulations, laboratory experiments and classroom lectures are also used to reinforce the students' understanding of translation, but by focusing on the simple manipulatives in this exercise, students are better able to visualize concepts that can elude them when using the other methods. The translation exercise is described along with suggestions for background material, questions used to evaluate student comprehension and tips for using the manipulatives to identify common misconceptions. Copyright © 2012 Wiley Periodicals, Inc.

  19. Crash injury risks for obese occupants using a matched-pair analysis.

    PubMed

    Viano, David C; Parenteau, Chantal S; Edwards, Mark L

    2008-03-01

    The automotive safety community is questioning the impact of obesity on the performance and assessment of occupant protection systems. This study investigates fatality and serious injury risks for front-seat occupants by body mass index (BMI) using a matched-pair analysis. It also develops a simple model for the change in injury risk with obesity. The model includes the normal mass (m) and stiffness (k) of the body resisting compression during a blunt impact. Stiffness is assumed constant as weight is gained (Δm). For a given impact severity, the risk of injury was assumed proportional to compression. Energy balance was used to determine injury risks with increasing mass. NASS-CDS field data were analyzed for calendar years 1993-2004. Occupant injury was divided into normal (18.5 kg/m2 ≤ BMI < 25.0 kg/m2) and obese (BMI ≥ 30 kg/m2) categories. A matched-pair analysis was carried out: driver and front-right passenger fatalities or serious injuries (MAIS 3+) were analyzed in the same crash to determine the effect of obesity. This also allowed determination of the relative risk of younger (age ≤ 55 years), older (age > 55 years), male, and female drivers that were obese compared to those of normal BMI. The family of Hybrid III crash test dummies was evaluated for BMI, and the amount of ballast needed for them to represent an obese or morbidly obese occupant was determined. Based on the simple model, the relative injury risk (r) for an increase in body mass is given by: r = (1 + Δm/m)^0.5. For a given stature, an obese occupant (BMI = 30-35 kg/m2) has a 54-61% higher risk of injury than a normal BMI occupant (22 kg/m2). Matched pairs showed that obese drivers have a 97% higher risk of fatality and a 17% higher risk of serious injury (MAIS 3+) than normal BMI drivers. Obese passengers have a 32% higher fatality risk and a 40% higher MAIS 3+ risk than normal passengers.
Obese female drivers have a 119% higher MAIS 3+ risk than normal BMI female drivers, and young obese drivers have a 20% higher serious injury risk than young normal drivers. This range of increased risk is consistent with, but broader than, that predicted by the simple injury model. The smallest crash test dummies need proportionately more ballast to represent an obese or morbidly obese occupant in the evaluation of safety systems. The 5% female Hybrid III has a BMI of 20.4 kg/m2 and needs 22 kg of ballast to represent an obese female and 44.8 kg to represent a morbidly obese female, while the 95% male needs only 1.7 and 36.5 kg, respectively. Obesity influences the risk of serious and fatal injury in motor vehicle crashes. The effect is greatest for obese female drivers and young drivers. Since some of the risk difference is related to lower seatbelt wearing rates, the comfort and use of seatbelt extenders should be examined to improve wearing rates. Also, crash testing with ballasted dummies representing obese and morbidly obese occupants may lead to refined safety systems for this growing segment of the population.

  20. Decoding spike timing: the differential reverse correlation method

    PubMed Central

    Tkačik, Gašper; Magnasco, Marcelo O.

    2009-01-01

    It is widely acknowledged that the detailed timing of action potentials is used to encode information, for example in auditory pathways; however, the computational tools required to analyze encoding through timing are still in their infancy. We present a simple example of encoding, based on a recent model of time-frequency analysis, in which units fire action potentials when a certain condition is met but the timing of the action potential also depends on other features of the stimulus. We show that, as a result, spike-triggered averages are smoothed so much that they do not represent the true features of the encoding. Inspired by this example, we present a simple method, differential reverse correlation, that can separate the analysis of what causes a neuron to spike from what controls its timing. We analyze the leaky integrate-and-fire neuron with this method and show that it accurately reconstructs the model's kernel. PMID:18597928
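
The spike-triggered average that the abstract refers to is the mean stimulus segment preceding each spike. A minimal sketch on synthetic data with a toy threshold encoder (not the paper's time-frequency model):

```python
import numpy as np

def spike_triggered_average(stimulus, spike_times, window=20):
    """Average the stimulus segments preceding each spike time."""
    segs = [stimulus[t - window:t] for t in spike_times if t >= window]
    return np.mean(segs, axis=0)

rng = np.random.default_rng(0)
stim = rng.standard_normal(50_000)

# Toy encoder: spike whenever the stimulus one step back exceeds 1.5.
spikes = [t for t in range(1, len(stim)) if stim[t - 1] > 1.5]

sta = spike_triggered_average(stim, spikes)
# The STA recovers the feature: a large positive value at lag -1 (the
# last sample of each window) and values near zero at earlier lags.
```

When spike timing jitters with other stimulus features, as in the paper's example, this average smears out, which is the failure mode differential reverse correlation is designed to disentangle.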

  1. Simple spatial scaling rules behind complex cities.

    PubMed

    Li, Ruiqi; Dong, Lei; Zhang, Jiang; Wang, Xinran; Wang, Wen-Xu; Di, Zengru; Stanley, H Eugene

    2017-11-28

    Although most wealth and innovation have been the result of human interaction and cooperation, we are not yet able to quantitatively predict the spatial distributions of three main elements of cities: population, roads, and socioeconomic interactions. Using a simple model based mainly on spatial attraction and matching growth mechanisms, we reveal that the spatial scaling rules of these three elements fit within a consistent framework, which allows us to use any single observation to infer the others. All numerical and theoretical results are consistent with empirical data from ten representative cities. In addition, our model can provide a general explanation of the origins of the universal super- and sub-linear aggregate scaling laws and accurately predict kilometre-level socioeconomic activity. Our work opens a new avenue for uncovering the evolution of cities in terms of the interplay among urban elements, and it has a broad range of applications.

  2. Microstructure representations for sound absorbing fibrous media: 3D and 2D multiscale modelling and experiments

    NASA Astrophysics Data System (ADS)

    Zieliński, Tomasz G.

    2017-11-01

    The paper proposes and investigates computationally efficient microstructure representations for sound absorbing fibrous media. Three-dimensional volume elements involving non-trivial periodic arrangements of straight fibres are examined, as well as simple two-dimensional cells. It has been found that a simple 2D quasi-representative cell can provide predictions similar to those of a volume element, which is in general much more geometrically accurate for typical fibrous materials. The multiscale modelling allowed the effective speeds and damping of acoustic waves propagating in such media to be determined, which brings up a discussion on the correlation between the speed, penetration range and attenuation of sound waves. Original experiments on manufactured copper-wire samples are presented, and the microstructure-based calculations of acoustic absorption are compared with the corresponding experimental results. In fact, the comparison suggested microstructure modifications leading to representations with non-uniformly distributed fibres.

  3. Learning molecular energies using localized graph kernels.

    PubMed

    Ferré, Grégoire; Haut, Terry; Barros, Kipton

    2017-03-21

    Recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. We benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.

  4. Learning molecular energies using localized graph kernels

    NASA Astrophysics Data System (ADS)

    Ferré, Grégoire; Haut, Terry; Barros, Kipton

    2017-03-01

    Recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. We benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.
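    The abstract does not give the kernel's exact functional form; a minimal sketch of a truncated random-walk kernel on the direct-product graph of two local environments, with a hypothetical decay weight `lam` and walk-length cutoff, might look like:

```python
import numpy as np

def random_walk_kernel(A1, A2, lam=0.1, n_steps=10):
    """Truncated random-walk graph kernel: count matching walks of length k
    in the direct-product graph of the two environments, weighted by lam**k."""
    W = np.kron(A1, A2)             # adjacency of the direct-product graph
    walk = np.ones(W.shape[0])      # start a walk at every vertex pair
    total = 0.0
    for k in range(1, n_steps + 1):
        walk = W @ walk             # walk counts of length k from each pair
        total += lam**k * walk.sum()
    return total

# Similarity of a triangle environment with itself (illustrative input).
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)
k_same = random_walk_kernel(A, A)
```

    Because the kernel depends only on walk counts in the product graph, relabelling either environment's atoms leaves the value unchanged, which is how the permutation symmetry mentioned above is obtained.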

  5. Reactive extraction of lactic acid with trioctylamine/methylene chloride/n-hexane

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, D.H.; Hong, W.H.

    The trioctylamine (TOA)/methylene chloride (MC)/n-hexane system was used as the extraction agent for the extraction of lactic acid. Curves of equilibrium and hydration were obtained at various temperatures and concentrations of TOA. A modified mass action model was proposed to interpret the equilibrium and the hydration curves. The reaction mechanism and the corresponding parameters which best represent the equilibrium data were estimated, and the concentration of water in the organic phase was predicted by inserting the parameters into the simple mathematical equation of the modified model. The concentration of MC and the change of temperature were important factors for the extraction and the stripping process. The stripping was performed by a simple distillation which was a combination of temperature-swing regeneration and diluent-swing regeneration. The type of inactive diluent has no influence on the stripping. The stripping efficiencies were about 70%.

  6. Computational considerations for collecting and using data in the equidistant cylindrical map projection and the bounds of sampling geographic data at progressively higher resolution

    USGS Publications Warehouse

    Foley, Kevin M.

    2011-01-01

    The Equidistant Cylindrical Map projection is popular with digital modelers and others for storing and processing worldwide data sets because of the simple association of latitude and longitude with cell values or pixels in the resulting grid. This projection does not accurately display area, and the diminished geographic area represented by cells at high latitudes is often not carefully considered. A simple mathematical analysis quantifies the discrepancy in area sampled by cells at different latitudes. The presence of this discrepancy indicates that the use of this projection can induce bias in data sets when both sampling and reporting data. It is demonstrated that, as the resolution requirements of input data for models increase, providing data that accurately describe smaller cells, particularly at high latitude, will be a challenge.
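    The area discrepancy follows directly from spherical geometry: a cell's true area scales with the cosine of its central latitude. A short check, assuming a spherical Earth:

```python
import math

def cell_area_km2(lat_deg, cell_deg=1.0, radius_km=6371.0):
    """Area of a cell_deg x cell_deg grid cell centred at lat_deg on a
    spherical Earth (the zone between the cell's bounding parallels)."""
    dlat = math.radians(cell_deg)
    dlon = math.radians(cell_deg)
    lat = math.radians(lat_deg)
    return radius_km**2 * dlon * (math.sin(lat + dlat / 2) - math.sin(lat - dlat / 2))

# A 1-degree cell at 60 degrees latitude covers half the area of one at the equator.
ratio = cell_area_km2(60.0) / cell_area_km2(0.0)
```

    The ratio equals cos(latitude) exactly, so a "uniform" grid in this projection oversamples high latitudes by the reciprocal of that factor.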

  7. Trajectory fitting in function space with application to analytic modeling of surfaces

    NASA Technical Reports Server (NTRS)

    Barger, Raymond L.

    1992-01-01

    A theory for representing a parameter-dependent function as a function trajectory is described. Additionally, a theory for determining a piecewise analytic fit to the trajectory is described. An example is given that illustrates the application of the theory to generating a smooth surface through a discrete set of input cross-section shapes. A simple procedure for smoothing in the parameter direction is discussed, and a computed example is given. Application of the theory to aerodynamic surface modeling is demonstrated by applying it to a blended wing-fuselage surface.

  8. Scalable File Systems for High Performance Computing Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brandt, S A

    2007-10-03

    Simulations of mode I interlaminar fracture toughness tests of a carbon-reinforced composite material (BMS 8-212) were conducted with LSDYNA. The fracture toughness tests were performed by U.C. Berkeley. The simulations were performed to investigate the validity and practicality of employing decohesive elements to represent interlaminar bond failures that are prevalent in carbon-fiber composite structure penetration events. The simulations employed a decohesive element formulation that was verified on a simple two-element model before being employed to perform the full model simulations. Care was required during the simulations to ensure that the explicit time integration of LSDYNA duplicated the near steady-state testing conditions. In general, this study validated the use of decohesive elements to represent the interlaminar bond failures seen in carbon-fiber composite structures, but the practicality of employing the elements to represent the bond failures seen during penetration events was not established.

  9. Research of MPPT for photovoltaic generation based on two-dimensional cloud model

    NASA Astrophysics Data System (ADS)

    Liu, Shuping; Fan, Wei

    2013-03-01

    The cloud model is a mathematical representation of fuzziness and randomness in linguistic concepts. It represents a qualitative concept with an expected value Ex, entropy En and hyper-entropy He, and integrates the fuzziness and randomness of a linguistic concept in a unified way. This model is a new method for transformation between qualitative and quantitative knowledge. This paper introduces an MPPT (maximum power point tracking) controller based on a two-dimensional cloud model, developed by analysing the auto-optimization MPPT control of photovoltaic power systems and combining it with cloud model theory. Simulation results show that the cloud controller is simple, intuitive, strongly robust, and achieves better control performance.
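    The forward normal cloud generator that underlies such controllers can be sketched in a few lines; the parameter values below are illustrative, not taken from the paper:

```python
import math
import random

def normal_cloud(Ex, En, He, n=1000, seed=0):
    """Forward normal cloud generator: produce n cloud drops for a qualitative
    concept with expected value Ex, entropy En and hyper-entropy He."""
    rng = random.Random(seed)
    drops = []
    for _ in range(n):
        En_i = rng.gauss(En, He)       # randomized entropy encodes hyper-entropy
        x = rng.gauss(Ex, abs(En_i))   # quantitative value around Ex
        mu = math.exp(-(x - Ex) ** 2 / (2 * En_i ** 2)) if En_i != 0 else 1.0
        drops.append((x, mu))          # (value, membership degree)
    return drops

# e.g. the concept "around 25 degrees" with moderate fuzziness
drops = normal_cloud(25.0, 3.0, 0.3, n=5000)
```

    Each drop pairs a quantitative value with a membership degree, which is the transformation between qualitative concept and quantitative data that the abstract refers to.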

  10. Global Aerodynamic Modeling for Stall/Upset Recovery Training Using Efficient Piloted Flight Test Techniques

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; Cunningham, Kevin; Hill, Melissa A.

    2013-01-01

    Flight test and modeling techniques were developed for efficiently identifying global aerodynamic models that can be used to accurately simulate stall, upset, and recovery on large transport airplanes. The techniques were developed and validated in a high-fidelity fixed-base flight simulator using a wind-tunnel aerodynamic database, realistic sensor characteristics, and a realistic flight deck representative of a large transport aircraft. Results demonstrated that aerodynamic models for stall, upset, and recovery can be identified rapidly and accurately using relatively simple piloted flight test maneuvers. Stall maneuver predictions and comparisons of identified aerodynamic models with data from the underlying simulation aerodynamic database were used to validate the techniques.

  11. Improving anterior deltoid activity in a musculoskeletal shoulder model - an analysis of the torque-feasible space at the sternoclavicular joint.

    PubMed

    Ingram, David; Engelhardt, Christoph; Farron, Alain; Terrier, Alexandre; Müllhaupt, Philippe

    2016-01-01

    Modelling the shoulder's musculature is challenging given its mechanical and geometric complexity. The use of the ideal fibre model to represent a muscle's line of action cannot always faithfully represent the mechanical effect of each muscle, leading to considerable differences between model-estimated and in vivo measured muscle activity. While the musculo-tendon force coordination problem has been extensively analysed in terms of the cost function, only few works have investigated the existence and sensitivity of solutions to fibre topology. The goal of this paper is to present an analysis of the solution set using the concepts of torque-feasible space (TFS) and wrench-feasible space (WFS) from cable-driven robotics. A shoulder model is presented and a simple musculo-tendon force coordination problem is defined. The ideal fibre model for representing muscles is reviewed and the TFS and WFS are defined, leading to the necessary and sufficient conditions for the existence of a solution. The shoulder model's TFS is analysed to explain the lack of anterior deltoid (DLTa) activity. Based on the analysis, a modification of the model's muscle fibre geometry is proposed. The performance with and without the modification is assessed by solving the musculo-tendon force coordination problem for quasi-static abduction in the scapular plane. After the proposed modification, the DLTa reaches 20% of activation.

  12. Effectiveness of Key Knowledge Spreader Identification in Online Communities of Practice: A Simulation Study from Network Perspective

    ERIC Educational Resources Information Center

    Cao, Yu

    2017-01-01

    With the rapid development of online communities of practice (CoPs), how to identify key knowledge spreaders (KKS) in online CoPs has become a pressing issue. In this paper, we construct a network with variable clustering based on the Holme-Kim model to represent CoPs, and a simple dynamics of knowledge sharing is considered. Kendall's Tau coefficient…

  13. Calculation of the octanol-water partition coefficient of armchair polyhex BN nanotubes

    NASA Astrophysics Data System (ADS)

    Mohammadinasab, E.; Pérez-Sánchez, H.; Goodarzi, M.

    2017-12-01

    A predictive model for determining the partition coefficient (log P) of armchair polyhex BN nanotubes using simple descriptors was built. The relationship between the octanol-water log P and the quantum chemical descriptors, electric moments, and topological indices of some armchair polyhex BN nanotubes with various lengths and a fixed circumference is represented. Electric moments and physico-chemical properties of these nanotubes are calculated based on density functional theory.

  14. Anomalous transport in discrete arcs and simulation of double layers in a model auroral circuit

    NASA Technical Reports Server (NTRS)

    Smith, Robert A.

    1987-01-01

    The evolution and long-time stability of a double layer (DL) in a discrete auroral arc requires that the parallel current in the arc, which may be considered uniform at the source, be diverted within the arc to charge the flanks of the U-shaped double layer potential structure. A simple model is presented in which this current redistribution is effected by anomalous transport based on electrostatic lower hybrid waves driven by the flank structure itself. This process provides the limiting constraint on the double layer potential. The flank charging may be represented as that of a nonlinear transmission line. A simplified model circuit, in which the transmission line is represented by a nonlinear impedance in parallel with a variable resistor, is incorporated in a one-dimensional simulation model to give the current density at the DL boundaries. Results are presented for the scaling of the DL potential as a function of the width of the arc and the saturation efficiency of the lower hybrid instability mechanism.

  15. Anomalous transport in discrete arcs and simulation of double layers in a model auroral circuit

    NASA Technical Reports Server (NTRS)

    Smith, Robert A.

    1987-01-01

    The evolution and long-time stability of a double layer in a discrete auroral arc requires that the parallel current in the arc, which may be considered uniform at the source, be diverted within the arc to charge the flanks of the U-shaped double-layer potential structure. A simple model is presented in which this current re-distribution is effected by anomalous transport based on electrostatic lower hybrid waves driven by the flank structure itself. This process provides the limiting constraint on the double-layer potential. The flank charging may be represented as that of a nonlinear transmission line. A simplified model circuit, in which the transmission line is represented by a nonlinear impedance in parallel with a variable resistor, is incorporated in a 1-d simulation model to give the current density at the DL boundaries. Results are presented for the scaling of the DL potential as a function of the width of the arc and the saturation efficiency of the lower hybrid instability mechanism.

  16. Influence of reciprocal edges on degree distribution and degree correlations

    NASA Astrophysics Data System (ADS)

    Zlatić, Vinko; Štefančić, Hrvoje

    2009-07-01

    Reciprocal edges represent the lowest-order cycle that can occur in directed graphs without self-loops. Since they also represent a measure of feedback between vertices, it is interesting to understand how reciprocal edges influence other properties of complex networks. In this paper, we focus on the influence of reciprocal edges on vertex degree distribution and degree correlations. We show that there is a fundamental difference between the properties observed on a static network and the properties of networks obtained by a simple evolution mechanism driven by reciprocity. We also present a way to statistically infer the portion of reciprocal edges that can be explained as a consequence of a feedback process on the static network. In the remainder of the paper, the influence of reciprocal edges on a model of growing networks is also presented. It is shown that our model of growing networks nicely interpolates between the Barabási-Albert (BA) model for undirected networks and the BA model for directed networks.
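    The portion of reciprocal edges itself is straightforward to measure from an edge list; a minimal sketch:

```python
def reciprocity(edges):
    """Fraction of directed edges (u, v) whose reverse (v, u) is also present."""
    edge_set = set(edges)
    recip = sum(1 for (u, v) in edges if (v, u) in edge_set and u != v)
    return recip / len(edges) if edges else 0.0

# Two of the three edges form a reciprocal pair, so the value is 2/3.
r = reciprocity([(1, 2), (2, 1), (2, 3)])
```

    The statistical inference the abstract describes asks how much of this observed fraction exceeds what random pairing on the same static network would produce.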

  17. Automated Performance Prediction of Message-Passing Parallel Programs

    NASA Technical Reports Server (NTRS)

    Block, Robert J.; Sarukkai, Sekhar; Mehra, Pankaj; Woodrow, Thomas S. (Technical Monitor)

    1995-01-01

    The increasing use of massively parallel supercomputers to solve large-scale scientific problems has generated a need for tools that can predict scalability trends of applications written for these machines. Much work has been done to create simple models that represent important characteristics of parallel programs, such as latency, network contention, and communication volume. But many of these methods still require substantial manual effort to represent an application in the model's format. The NIK toolkit described in this paper is the result of an on-going effort to automate the formation of analytic expressions of program execution time, with a minimum of programmer assistance. In this paper we demonstrate the feasibility of our approach, by extending previous work to detect and model communication patterns automatically, with and without overlapped computations. The predictions derived from these models agree, within reasonable limits, with execution times of programs measured on the Intel iPSC/860 and Paragon. Further, we demonstrate the use of MK in selecting optimal computational grain size and studying various scalability metrics.
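    The abstract does not reproduce the toolkit's analytic expressions; a common building block for such models of message-passing cost is the latency-bandwidth (alpha-beta) model, sketched here with hypothetical parameter values and a rough collective-cost estimate:

```python
def msg_time(n_bytes, latency=2e-6, bandwidth=1.0e9):
    """Point-to-point cost in the alpha-beta model: T = alpha + n / beta."""
    return latency + n_bytes / bandwidth

def ring_allreduce_time(n_bytes, p, latency=2e-6, bandwidth=1.0e9):
    """Rough ring all-reduce estimate: 2 * (p - 1) messages of n / p bytes each."""
    return 2 * (p - 1) * msg_time(n_bytes / p, latency, bandwidth)

one_msg = msg_time(1_000_000)          # 1 MB point-to-point
collective = ring_allreduce_time(1_000_000, 4)
```

    Summing such terms over the communication patterns detected in a program yields the kind of closed-form execution-time expression the toolkit automates.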

  18. Multivariate neural biomarkers of emotional states are categorically distinct

    PubMed Central

    Kragel, Philip A.

    2015-01-01

    Understanding how emotions are represented neurally is a central aim of affective neuroscience. Despite decades of neuroimaging efforts addressing this question, it remains unclear whether emotions are represented as distinct entities, as predicted by categorical theories, or are constructed from a smaller set of underlying factors, as predicted by dimensional accounts. Here, we capitalize on multivariate statistical approaches and computational modeling to directly evaluate these theoretical perspectives. We elicited discrete emotional states using music and films during functional magnetic resonance imaging scanning. Distinct patterns of neural activation predicted the emotion category of stimuli and tracked subjective experience. Bayesian model comparison revealed that combining dimensional and categorical models of emotion best characterized the information content of activation patterns. Surprisingly, categorical and dimensional aspects of emotion experience captured unique and opposing sources of neural information. These results indicate that diverse emotional states are poorly differentiated by simple models of valence and arousal, and that activity within separable neural systems can be mapped to unique emotion categories. PMID:25813790

  19. Linear regression metamodeling as a tool to summarize and present simulation model results.

    PubMed

    Jalal, Hawre; Dowd, Bryan; Sainfort, François; Kuntz, Karen M

    2013-10-01

    Modelers lack a tool to systematically and clearly present complex model results, including those from sensitivity analyses. The objective was to propose linear regression metamodeling as a tool to increase transparency of decision analytic models and better communicate their results. We used a simplified cancer cure model to demonstrate our approach. The model computed the lifetime cost and benefit of 3 treatment options for cancer patients. We simulated 10,000 cohorts in a probabilistic sensitivity analysis (PSA) and regressed the model outcomes on the standardized input parameter values in a set of regression analyses. We used the regression coefficients to describe measures of sensitivity analyses, including threshold and parameter sensitivity analyses. We also compared the results of the PSA to deterministic full-factorial and one-factor-at-a-time designs. The regression intercept represented the estimated base-case outcome, and the other coefficients described the relative parameter uncertainty in the model. We defined simple relationships that compute the average and incremental net benefit of each intervention. Metamodeling produced outputs similar to traditional deterministic 1-way or 2-way sensitivity analyses but was more reliable since it used all parameter values. Linear regression metamodeling is a simple, yet powerful, tool that can assist modelers in communicating model characteristics and sensitivity analyses.
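    A toy version of the regression step, with hypothetical model inputs and a stand-in function playing the role of the simulation model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
# Hypothetical PSA draws of two model inputs (cure probability, treatment cost).
p_cure = rng.beta(20, 80, n)
cost = rng.normal(50_000, 5_000, n)
# Stand-in simulation outcome: net benefit as a noisy function of the inputs.
net_benefit = 1e6 * p_cure - cost + rng.normal(0, 1_000, n)

# Standardize inputs so coefficients are comparable sensitivity measures.
X = np.column_stack([p_cure, cost])
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
A = np.column_stack([np.ones(n), Xs])
coef, *_ = np.linalg.lstsq(A, net_benefit, rcond=None)
# coef[0] estimates the base-case outcome; coef[1:] rank parameter influence.
```

    With centred predictors the intercept equals the mean simulated outcome, and the remaining coefficients summarize one-way sensitivity while using all PSA draws at once.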

  20. Analytical and multibody modeling for the power analysis of standing jumps.

    PubMed

    Palmieri, G; Callegari, M; Fioretti, S

    2015-01-01

    Two methods for the power analysis of standing jumps are proposed and compared in this article. The first method is based on a simple analytical formulation which requires as input the coordinates of the center of gravity in three specified instants of the jump. The second method is based on a multibody model that simulates the jumps processing the data obtained by a three-dimensional (3D) motion capture system and the dynamometric measurements obtained by the force platforms. The multibody model is developed with OpenSim, an open-source software which provides tools for the kinematic and dynamic analyses of 3D human body models. The study is focused on two of the typical tests used to evaluate the muscular activity of lower limbs, which are the counter movement jump and the standing long jump. The comparison between the results obtained by the two methods confirms that the proposed analytical formulation is correct and represents a simple tool suitable for a preliminary analysis of total mechanical work and the mean power exerted in standing jumps.

  1. Rates of profit as correlated sums of random variables

    NASA Astrophysics Data System (ADS)

    Greenblatt, R. E.

    2013-10-01

    Profit realization is the dominant feature of market-based economic systems, determining their dynamics to a large extent. Rather than attaining an equilibrium, profit rates vary widely across firms, and the variation persists over time. Differing definitions of profit result in differing empirical distributions. To study the statistical properties of profit rates, I used data from a publicly available database for the US Economy for 2009-2010 (Risk Management Association). For each of three profit rate measures, the sample space consists of 771 points. Each point represents aggregate data from a small number of US manufacturing firms of similar size and type (NAICS code of principal product). When comparing the empirical distributions of profit rates, significant ‘heavy tails’ were observed, corresponding principally to a number of firms with larger profit rates than would be expected from simple models. An apparently novel correlated sum of random variables statistical model was used to model the data. In the case of operating and net profit rates, a number of firms show negative profits (losses), ruling out simple gamma or lognormal distributions as complete models for these data.

  2. Integrated control strategy for autonomous decentralized conveyance systems based on distributed MEMS arrays

    NASA Astrophysics Data System (ADS)

    Zhou, Lingfei; Chapuis, Yves-Andre; Blonde, Jean-Philippe; Bervillier, Herve; Fukuta, Yamato; Fujita, Hiroyuki

    2004-07-01

    In this paper, the authors propose a model and a control strategy for a two-dimensional conveyance system based on the principles of Autonomous Decentralized Microsystems (ADM). The microconveyance system is based on distributed cooperative MEMS actuators which can produce a force field on the surface of the device to grip and move a micro-object. The modeling approach proposed here is based on a simple model of a microconveyance system represented by a 5 x 5 matrix of cells. Each cell consists of a microactuator, a microsensor, and a microprocessor that provide actuation, autonomy and decentralized intelligence to the cell. Thus, each cell is able to identify a micro-object crossing over it and to decide on its own the appropriate control strategy to convey the micro-object to its destination target. The control strategy can be established through five simple decision rules that the cell itself has to respect at each computation cycle. Simulation and FPGA implementation results are given at the end of the paper in order to validate the model and the control approach of the microconveyance system.

  3. An accurate model for predicting high frequency noise of nanoscale NMOS SOI transistors

    NASA Astrophysics Data System (ADS)

    Shen, Yanfei; Cui, Jie; Mohammadi, Saeed

    2017-05-01

    A nonlinear and scalable model suitable for predicting high frequency noise of N-type Metal Oxide Semiconductor (NMOS) transistors is presented. The model is developed for a commercial 45 nm CMOS SOI technology and its accuracy is validated through comparison with measured performance of a microwave low noise amplifier. The model employs the virtual source nonlinear core and adds parasitic elements to accurately simulate the RF behavior of multi-finger NMOS transistors up to 40 GHz. For the first time, the traditional long-channel thermal noise model is supplemented with an injection noise model to accurately represent the noise behavior of these short-channel transistors up to 26 GHz. The developed model is simple and easy to extract, yet very accurate.

  4. Logic-Based Models for the Analysis of Cell Signaling Networks†

    PubMed Central

    2010-01-01

    Computational models are increasingly used to analyze the operation of complex biochemical networks, including those involved in cell signaling networks. Here we review recent advances in applying logic-based modeling to mammalian cell biology. Logic-based models represent biomolecular networks in a simple and intuitive manner without describing the detailed biochemistry of each interaction. A brief description of several logic-based modeling methods is followed by six case studies that demonstrate biological questions recently addressed using logic-based models and point to potential advances in model formalisms and training procedures that promise to enhance the utility of logic-based methods for studying the relationship between environmental inputs and phenotypic or signaling state outputs of complex signaling networks. PMID:20225868
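    A logic-based model in this spirit can be as small as a synchronous Boolean network; the three-node feedback loop below is purely illustrative and does not come from the review:

```python
# Hypothetical three-node signaling logic: a receptor activates a kinase,
# the kinase activates a transcription factor (tf), and the tf feeds back
# to inhibit the receptor. "ligand" is a fixed environmental input.
rules = {
    "receptor": lambda s: s["ligand"] and not s["tf"],
    "kinase":   lambda s: s["receptor"],
    "tf":       lambda s: s["kinase"],
}

def step(state):
    """One synchronous Boolean update of all regulated nodes."""
    new = dict(state)
    for node, rule in rules.items():
        new[node] = rule(state)
    return new

state = {"ligand": True, "receptor": False, "kinase": False, "tf": False}
for _ in range(4):
    state = step(state)
```

    The signal propagates down the cascade and the negative feedback then switches the receptor off, all without a single rate constant, which is the appeal of logic-based formalisms for mapping inputs to signaling-state outputs.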

  5. Use of paired simple and complex models to reduce predictive bias and quantify uncertainty

    NASA Astrophysics Data System (ADS)

    Doherty, John; Christensen, Steen

    2011-12-01

    Modern environmental management and decision-making is based on the use of increasingly complex numerical models. Such models have the advantage of allowing representation of complex processes and heterogeneous system property distributions inasmuch as these are understood at any particular study site. The latter are often represented stochastically, this reflecting knowledge of the character of system heterogeneity at the same time as it reflects a lack of knowledge of its spatial details. Unfortunately, however, complex models are often difficult to calibrate because of their long run times and sometimes questionable numerical stability. Analysis of predictive uncertainty is also a difficult undertaking when using models such as these. Such analysis must reflect a lack of knowledge of spatial hydraulic property details. At the same time, it must be subject to constraints on the spatial variability of these details born of the necessity for model outputs to replicate observations of historical system behavior. In contrast, the rapid run times and general numerical reliability of simple models often promulgates good calibration and ready implementation of sophisticated methods of calibration-constrained uncertainty analysis. Unfortunately, however, many system and process details on which uncertainty may depend are, by design, omitted from simple models. This can lead to underestimation of the uncertainty associated with many predictions of management interest. The present paper proposes a methodology that attempts to overcome the problems associated with complex models on the one hand and simple models on the other hand, while allowing access to the benefits each of them offers. It provides a theoretical analysis of the simplification process from a subspace point of view, this yielding insights into the costs of model simplification, and into how some of these costs may be reduced. 
It then describes a methodology for paired model usage through which predictive bias of a simplified model can be detected and corrected, and postcalibration predictive uncertainty can be quantified. The methodology is demonstrated using a synthetic example based on groundwater modeling environments commonly encountered in northern Europe and North America.

  6. Simple stochastic birth and death models of genome evolution: was there enough time for us to evolve?

    PubMed

    Karev, Georgy P; Wolf, Yuri I; Koonin, Eugene V

    2003-10-12

    The distributions of many genome-associated quantities, including the membership of paralogous gene families can be approximated with power laws. We are interested in developing mathematical models of genome evolution that adequately account for the shape of these distributions and describe the evolutionary dynamics of their formation. We show that simple stochastic models of genome evolution lead to power-law asymptotics of protein domain family size distribution. These models, called Birth, Death and Innovation Models (BDIM), represent a special class of balanced birth-and-death processes, in which domain duplication and deletion rates are asymptotically equal up to the second order. The simplest, linear BDIM shows an excellent fit to the observed distributions of domain family size in diverse prokaryotic and eukaryotic genomes. However, the stochastic version of the linear BDIM explored here predicts that the actual size of large paralogous families is reached on an unrealistically long timescale. We show that introduction of non-linearity, which might be interpreted as interaction of a particular order between individual family members, allows the model to achieve genome evolution rates that are much better compatible with the current estimates of the rates of individual duplication/loss events.
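    A stochastic linear BDIM can be sketched as follows; the rates and step count are illustrative, and choosing a family with probability proportional to its size encodes the per-domain duplication/deletion assumption:

```python
import random

def simulate_bdim(steps=20_000, birth=0.5, death=0.5, innovation=0.01, seed=1):
    """Linear birth-death-innovation sketch: an event either creates a new
    single-member family (innovation) or duplicates/deletes one domain in a
    family chosen with probability proportional to its size."""
    rng = random.Random(seed)
    families = [1]
    for _ in range(steps):
        if rng.random() < innovation:
            families.append(1)
            continue
        if not families:            # re-seed if every family went extinct
            families.append(1)
        # picking a domain uniformly selects a family proportional to size
        i = rng.choices(range(len(families)), weights=families)[0]
        if rng.random() < birth / (birth + death):
            families[i] += 1
        else:
            families[i] -= 1
            if families[i] == 0:
                families.pop(i)
    return families

sizes = simulate_bdim()
```

    With asymptotically balanced birth and death rates, long runs of such a process produce the heavy-tailed family-size distributions the abstract describes, and the slow growth of the largest families is what motivates the non-linear variant.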

  7. Prey-producing predators: the ecology of human intensification.

    PubMed

    Efferson, Charles

    2008-01-01

    Economic growth theory and theoretical ecology represent independent traditions of modeling aggregate consumer-resource systems. Both focus on different but equally important forces underlying the dynamics of human societies. Though the two traditions have unknowingly converged in some ways, they each have curious conventions from the perspective of the other. These conventions are reviewed, and two separate modeling frameworks that integrate the two traditions in a simple and straightforward fashion are developed and analyzed. The resulting models represent a consumer species (e.g. humans) that both produces and consumes its resources and then reproduces biologically according to the consumption of its resources. Depending on the balance between production, consumption, and reproduction, the models can exhibit stagnant behavior, like some predator-prey models, or growth, like many mutualism and economic growth models. When growth occurs, in the long term it takes one of two forms. Either resources per capita grow and the human population size converges to a constant, which may be zero, or resources per capita converge to a constant and the human population grows. The difference depends on initial conditions and the particular mix of biological conditions and human technology.

  8. Towards a voxel-based geographic automata for the simulation of geospatial processes

    NASA Astrophysics Data System (ADS)

    Jjumba, Anthony; Dragićević, Suzana

    2016-07-01

    Many geographic processes evolve in a three dimensional space and time continuum. However, when they are represented with the aid of geographic information systems (GIS) or geosimulation models they are modelled in a framework of two-dimensional space with an added temporal component. The objective of this study is to propose the design and implementation of voxel-based automata as a methodological approach for representing spatial processes evolving in the four-dimensional (4D) space-time domain. Similar to geographic automata models which are developed to capture and forecast geospatial processes that change in a two-dimensional spatial framework using cells (raster geospatial data), voxel automata rely on the automata theory and use three-dimensional volumetric units (voxels). Transition rules have been developed to represent various spatial processes which range from the movement of an object in 3D to the diffusion of airborne particles and landslide simulation. In addition, the proposed 4D models demonstrate that complex processes can be readily reproduced from simple transition functions without complex methodological approaches. The voxel-based automata approach provides a unique basis to model geospatial processes in 4D for the purpose of improving representation, analysis and understanding their spatiotemporal dynamics. This study contributes to the advancement of the concepts and framework of 4D GIS.
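    A minimal voxel automaton for the airborne-diffusion example: every voxel applies the same local transition rule over its six face neighbours (the exchange rate and grid size are illustrative):

```python
import numpy as np

def diffuse(grid, rate=0.1):
    """One voxel-automata step: each voxel exchanges a fraction of its content
    with its six face neighbours (edge padding gives zero-flux boundaries)."""
    padded = np.pad(grid, 1, mode="edge")
    neighbours = (
        padded[:-2, 1:-1, 1:-1] + padded[2:, 1:-1, 1:-1] +
        padded[1:-1, :-2, 1:-1] + padded[1:-1, 2:, 1:-1] +
        padded[1:-1, 1:-1, :-2] + padded[1:-1, 1:-1, 2:]
    )
    return grid + rate * (neighbours - 6 * grid)

grid = np.zeros((10, 10, 10))
grid[5, 5, 5] = 1.0  # a point release of airborne particles
for _ in range(20):
    grid = diffuse(grid)
```

    The zero-flux boundary keeps the total amount conserved, so a simple per-voxel rule reproduces a 3D plume without any global solver, which is the point the abstract makes about simple transition functions.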

  9. Two Simple Classroom Demonstrations for Scanning Probe Microscopy Based on a Macroscopic Analogy

    ERIC Educational Resources Information Center

    Hajkova, Zdenka; Fejfar, Antonin; Smejkal, Petr

    2013-01-01

    This article describes two simple classroom demonstrations that illustrate the principles of scanning probe microscopy (SPM) based on a macroscopic analogy. The analogy features the bumps in an egg carton to represent the atoms on a chemical surface and a probe that can be represented by a dwarf statue (illustrating an origin of the prefix…

  10. Vector-based model of elastic bonds for simulation of granular solids.

    PubMed

    Kuzkin, Vitaly A; Asonov, Igor E

    2012-11-01

A model (further referred to as the V model) for the simulation of granular solids, such as rocks, ceramics, concrete, nanocomposites, and agglomerates, composed of bonded particles (rigid bodies), is proposed. It is assumed that the bonds, usually representing some additional gluelike material connecting particles, cause both forces and torques acting on the particles. Vectors rigidly connected with the particles are used to describe the deformation of a single bond. The expression for the potential energy of the bond and corresponding expressions for forces and torques are derived. Formulas connecting parameters of the model with the longitudinal, shear, bending, and torsional stiffnesses of the bond are obtained. It is shown that the model makes it possible to describe any values of the bond stiffnesses exactly; that is, the model is applicable for bonds with arbitrary length/thickness ratio. Two different calibration procedures depending on bond length/thickness ratio are proposed. It is shown that parameters of the model can be chosen so that under small deformations the bond is equivalent to either a Bernoulli-Euler beam, a Timoshenko beam, or a short cylinder connecting the particles. Simple analytical expressions, relating parameters of the V model with geometrical and mechanical characteristics of the bond, are derived. Two simple examples of computer simulation of thin granular structures using the V model are given.
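A two-dimensional toy version of such a bond, with a longitudinal spring on the bond length and angular springs penalising the deviation of each particle's orientation vector from the bond axis, might look as follows (an illustration of the idea only; the paper's V model is fully three-dimensional and its stiffness expressions differ):

```python
import math

def bond_loads(p1, p2, th1, th2, k_long=100.0, k_bend=10.0, rest=1.0):
    """Tension and torques for a single 2-D bonded-particle bond:
    a linear spring on bond length plus angular springs restoring each
    particle's orientation angle toward the bond axis (toy parameters)."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    length = math.hypot(dx, dy)
    axis = math.atan2(dy, dx)          # current direction of the bond
    tension = k_long * (length - rest) # positive if the bond is stretched
    t1 = -k_bend * (th1 - axis)        # torque restoring orientation of particle 1
    t2 = -k_bend * (th2 - axis)        # torque restoring orientation of particle 2
    return tension, t1, t2

# A bond stretched by 0.2 with particle 1 misaligned by 0.1 rad.
tension, t1, t2 = bond_loads((0.0, 0.0), (1.2, 0.0), 0.1, 0.0)
```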

  11. Rough Evaluation Structure: Application of Rough Set Theory to Generate Simple Rules for Inconsistent Preference Relation

    NASA Astrophysics Data System (ADS)

    Gehrmann, Andreas; Nagai, Yoshimitsu; Yoshida, Osamu; Ishizu, Syohei

Because management decision-making has become complex and the preferences of decision-makers frequently become inconsistent, multi-attribute decision-making problems have been studied. To represent inconsistent preference relations, the concept of an evaluation structure was introduced. We can generate simple rules that represent an inconsistent preference relation from such evaluation structures. Rough set theory for preference relations has also been studied, and the concept of approximation introduced. One main aim of this paper is to introduce the concept of a rough evaluation structure for representing inconsistent preference relations. We apply rough set theory to the evaluation structure and develop a method for generating simple rules for inconsistent preference relations. In this paper, we introduce the concepts of a totally ordered information system, similarity classes of preference relations, and upper and lower approximations of preference relations. We also show the properties of the rough evaluation structure and provide a simple example. As an application of the rough evaluation structure, we analyze a questionnaire survey of customer preferences about audio players.

  12. Robust encoding of stimulus identity and concentration in the accessory olfactory system.

    PubMed

    Arnson, Hannah A; Holy, Timothy E

    2013-08-14

    Sensory systems represent stimulus identity and intensity, but in the neural periphery these two variables are typically intertwined. Moreover, stable detection may be complicated by environmental uncertainty; stimulus properties can differ over time and circumstance in ways that are not necessarily biologically relevant. We explored these issues in the context of the mouse accessory olfactory system, which specializes in detection of chemical social cues and infers myriad aspects of the identity and physiological state of conspecifics from complex mixtures, such as urine. Using mixtures of sulfated steroids, key constituents of urine, we found that spiking responses of individual vomeronasal sensory neurons encode both individual compounds and mixtures in a manner consistent with a simple model of receptor-ligand interactions. Although typical neurons did not accurately encode concentration over a large dynamic range, from population activity it was possible to reliably estimate the log-concentration of pure compounds over several orders of magnitude. For binary mixtures, simple models failed to accurately segment the individual components, largely because of the prevalence of neurons responsive to both components. By accounting for such overlaps during model tuning, we show that, from neuronal firing, one can accurately estimate log-concentration of both components, even when tested across widely varying concentrations. With this foundation, the difference of logarithms, log A - log B = log A/B, provides a natural mechanism to accurately estimate concentration ratios. Thus, we show that a biophysically plausible circuit model can reconstruct concentration ratios from observed neuronal firing, representing a powerful mechanism to separate stimulus identity from absolute concentration.
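The ratio mechanism invoked at the end of the abstract is a direct property of logarithms; a quick numeric check (the concentration values are invented):

```python
import math

# If a population decode yields log-concentration estimates of two
# components A and B, their difference is the log of the ratio:
# log A - log B = log(A/B), independent of absolute concentration.
conc_a, conc_b = 3e-6, 1.2e-7    # illustrative concentrations
log_ratio = math.log10(conc_a) - math.log10(conc_b)

# Scaling both concentrations (e.g. diluted urine) leaves the ratio code unchanged.
dilute = 0.01
log_ratio_diluted = math.log10(conc_a * dilute) - math.log10(conc_b * dilute)
```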

  13. Spin Hall and Nernst effects of Weyl magnons

    NASA Astrophysics Data System (ADS)

    Zyuzin, Vladimir A.; Kovalev, Alexey A.

    2018-05-01

    In this paper, we present a simple model of a three-dimensional insulating magnetic structure which represents a magnonic analog of the layered electronic system described by A. A. Burkov and L. Balents [Phys. Rev. Lett. 107, 127205 (2011), 10.1103/PhysRevLett.107.127205]. In particular, our model realizes Weyl magnons as well as surface states with a Dirac spectrum. In this model, the Dzyaloshinskii-Moriya interaction is responsible for the separation of opposite Weyl points in momentum space. We calculate the intrinsic (due to the Berry curvature) transport properties of Weyl and so-called anomalous Hall effect magnons. The results are compared with fermionic analogs.

  14. Models of convection-driven tectonic plates - A comparison of methods and results

    NASA Technical Reports Server (NTRS)

    King, Scott D.; Gable, Carl W.; Weinstein, Stuart A.

    1992-01-01

Recent numerical studies of convection in the earth's mantle have included various features of plate tectonics. This paper describes three methods of modeling plates: through material properties, through force balance, and through a thin power-law sheet approximation. The results obtained using each method on a series of simple calculations are compared. From these results, scaling relations between the different parameterizations are developed. While each method produces different degrees of deformation within the surface plate, the surface heat flux and average plate velocity agree to within a few percent. The main results are not dependent upon the plate modeling method and therefore are representative of the physical system modeled.

  15. Simple theoretical models for composite rotor blades

    NASA Technical Reports Server (NTRS)

    Valisetty, R. R.; Rehfield, L. W.

    1984-01-01

The development of theoretical rotor blade structural models for designs based upon composite construction is discussed. Care was exercised to include a number of nonclassical effects that previous experience indicated would be potentially important to account for. A model, representative of the size of a main rotor blade, is analyzed in order to assess the importance of various influences. The findings of this model study suggest that, for the slenderness and closed-cell construction considered, the refinements are of little importance and a classical-type theory is adequate. The potential of elastic tailoring is dramatically demonstrated, so the generality of arbitrary ply layup in the cell wall is needed to exploit this opportunity.

  16. 'Peeling a comet': Layering of comet analogues

    NASA Astrophysics Data System (ADS)

    Kaufmann, E.; Hagermann, A.

    2017-09-01

Using a simple comet analogue, we investigate the influence of subsurface absorption of solar light by dust. We found that a sample initially consisting of loose water ice grains and carbon particles becomes significantly harder after being irradiated with artificial sunlight for several hours. Furthermore, a drastic change of the sample surface was observed. These results suggest that models should treat the nucleus surface as an interactive transitional zone to better represent cometary processes.

  17. Mashups over the Deep Web

    NASA Astrophysics Data System (ADS)

    Hornung, Thomas; Simon, Kai; Lausen, Georg

Combining information from different Web sources is often a tedious and repetitive process; even simple information requests might require iterating over the result list of one Web query and using each single result as input for a subsequent query. One approach to such chained queries is data-centric mashups, which allow the data flow to be modelled visually as a graph, where the nodes represent the data sources and the edges the data flow.
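Such a chained query can be sketched as a tiny pipeline in which each stage fans out over the results of the previous one (the stages here are plain functions standing in for Web queries; all names are hypothetical):

```python
def run_pipeline(seed, stages):
    """Evaluate a chain of 'Web queries': each stage is applied to every
    result of the previous stage, mimicking the data flow of a
    data-centric mashup (stages are plain functions, not HTTP calls)."""
    results = [seed]
    for stage in stages:
        results = [item for r in results for item in stage(r)]
    return results

# Hypothetical example: find books by an author, then reviews per book.
books = lambda author: [f"{author}-book{i}" for i in (1, 2)]
reviews = lambda book: [f"review-of-{book}"]
out = run_pipeline("smith", [books, reviews])
```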

  18. A Simple Hydraulic Analog Model of Oxidative Phosphorylation.

    PubMed

    Willis, Wayne T; Jackman, Matthew R; Messer, Jeffrey I; Kuzmiak-Glancy, Sarah; Glancy, Brian

    2016-06-01

    Mitochondrial oxidative phosphorylation is the primary source of cellular energy transduction in mammals. This energy conversion involves dozens of enzymatic reactions, energetic intermediates, and the dynamic interactions among them. With the goal of providing greater insight into the complex thermodynamics and kinetics ("thermokinetics") of mitochondrial energy transduction, a simple hydraulic analog model of oxidative phosphorylation is presented. In the hydraulic model, water tanks represent the forward and back "pressures" exerted by thermodynamic driving forces: the matrix redox potential (ΔGredox), the electrochemical potential for protons across the mitochondrial inner membrane (ΔGH), and the free energy of adenosine 5'-triphosphate (ATP) (ΔGATP). Net water flow proceeds from tanks with higher water pressure to tanks with lower pressure through "enzyme pipes" whose diameters represent the conductances (effective activities) of the proteins that catalyze the energy transfer. These enzyme pipes include the reactions of dehydrogenase enzymes, the electron transport chain (ETC), and the combined action of ATP synthase plus the ATP-adenosine 5'-diphosphate exchanger that spans the inner membrane. In addition, reactive oxygen species production is included in the model as a leak that is driven out of the ETC pipe by high pressure (high ΔGredox) and a proton leak dependent on the ΔGH for both its driving force and the conductance of the leak pathway. Model water pressures and flows are shown to simulate thermodynamic forces and metabolic fluxes that have been experimentally observed in mammalian skeletal muscle in response to acute exercise, chronic endurance training, and reduced substrate availability, as well as account for the thermokinetic behavior of mitochondria from fast- and slow-twitch skeletal muscle and the metabolic capacitance of the creatine kinase reaction.
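The core of the hydraulic analogy, flow between tanks at a rate proportional to the pressure difference times the pipe conductance, can be sketched in a few lines (a minimal illustration with invented parameters, omitting the reactive-oxygen and proton leak pathways):

```python
def simulate(levels, conductances, steps=1000, dt=0.01, source=0.0, sink=0.0):
    """Toy hydraulic analogue of oxidative phosphorylation: water flows
    down a chain of three tanks (redox -> proton gradient -> ATP) through
    'enzyme pipes' at a rate proportional to the pressure difference
    times the pipe conductance (illustrative parameters)."""
    redox, dgh, atp = levels
    g_etc, g_synth = conductances        # ETC pipe, ATP-synthase pipe
    for _ in range(steps):
        f1 = g_etc * (redox - dgh)       # flux through the ETC pipe
        f2 = g_synth * (dgh - atp)       # flux through the synthase pipe
        redox += dt * (source - f1)      # substrate supply fills the first tank
        dgh   += dt * (f1 - f2)
        atp   += dt * (f2 - sink)        # cellular ATP demand drains the last
    return redox, dgh, atp

# With no substrate input or ATP demand, the pressures equilibrate.
levels = simulate((3.0, 1.0, 0.0), (1.0, 1.0))
```

Setting `source` and `sink` nonzero instead drives a steady flux whose magnitude depends on the pipe conductances, mirroring the exercise and training responses the model is used to describe.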

  19. Functional and Structural Optimality in Plant Growth: A Crop Modelling Case Study

    NASA Astrophysics Data System (ADS)

    Caldararu, S.; Purves, D. W.; Smith, M. J.

    2014-12-01

    Simple mechanistic models of vegetation processes are essential both to our understanding of plant behaviour and to our ability to predict future changes in vegetation. One concept that can take us closer to such models is that of plant optimality, the hypothesis that plants aim to achieve an optimal state. Conceptually, plant optimality can be either structural or functional optimality. A structural constraint would mean that plants aim to achieve a certain structural characteristic such as an allometric relationship or nutrient content that allows optimal function. A functional condition refers to plants achieving optimal functionality, in most cases by maximising carbon gain. Functional optimality conditions are applied on shorter time scales and lead to higher plasticity, making plants more adaptable to changes in their environment. In contrast, structural constraints are optimal given the specific environmental conditions that plants are adapted to and offer less flexibility. We exemplify these concepts using a simple model of crop growth. The model represents annual cycles of growth from sowing date to harvest, including both vegetative and reproductive growth and phenology. Structural constraints to growth are represented as an optimal C:N ratio in all plant organs, which drives allocation throughout the vegetative growing stage. Reproductive phenology - i.e. the onset of flowering and grain filling - is determined by a functional optimality condition in the form of maximising final seed mass, so that vegetative growth stops when the plant reaches maximum nitrogen or carbon uptake. We investigate the plants' response to variations in environmental conditions within these two optimality constraints and show that final yield is most affected by changes during vegetative growth which affect the structural constraint.

  20. Bridging the scales in a eulerian air quality model to assess megacity export of pollution

    NASA Astrophysics Data System (ADS)

    Siour, G.; Colette, A.; Menut, L.; Bessagnet, B.; Coll, I.; Meleux, F.

    2013-08-01

In Chemistry Transport Models (CTMs), spatial scale interactions are often represented through off-line coupling between large and small scale models. However, those nested configurations cannot account for the impact of the local scale on its surroundings. This issue can be critical in areas exposed to air mass recirculation (sea breeze cells) or around regions with sharp pollutant emission gradients (large cities). Such phenomena can still be captured by means of adaptive gridding, two-way nesting or model nudging, but these approaches remain relatively costly. We present here the development and the results of a simple alternative multi-scale approach making use of a horizontally stretched grid in the Eulerian CTM CHIMERE. This method, called "stretching" or "zooming", consists of introducing local zooms in a single chemistry-transport simulation. It allows the spatial scales from the city (∼1 km resolution) to the continental area (∼50 km resolution) to be bridged online. The CHIMERE model was run over a continental European domain, zoomed over the BeNeLux (Belgium, Netherlands and Luxembourg) area. We demonstrate that, compared with one-way nesting, the zooming method allows the expression of a significant feedback of the refined domain towards the large scale: around the city cluster of BeNeLux, NO2 and O3 scores are improved. NO2 variability around BeNeLux is also better accounted for, and the net primary pollutant flux transported back towards BeNeLux is reduced. Although the results could not be validated for ozone over BeNeLux, we show that the zooming approach provides a simple and immediate way to better represent scale interactions within a CTM, and constitutes a useful tool for addressing the hot topic of megacities within their continental environment.

  1. Circuit theory and model-based inference for landscape connectivity

    USGS Publications Warehouse

    Hanks, Ephraim M.; Hooten, Mevin B.

    2013-01-01

    Circuit theory has seen extensive recent use in the field of ecology, where it is often applied to study functional connectivity. The landscape is typically represented by a network of nodes and resistors, with the resistance between nodes a function of landscape characteristics. The effective distance between two locations on a landscape is represented by the resistance distance between the nodes in the network. Circuit theory has been applied to many other scientific fields for exploratory analyses, but parametric models for circuits are not common in the scientific literature. To model circuits explicitly, we demonstrate a link between Gaussian Markov random fields and contemporary circuit theory using a covariance structure that induces the necessary resistance distance. This provides a parametric model for second-order observations from such a system. In the landscape ecology setting, the proposed model provides a simple framework where inference can be obtained for effects that landscape features have on functional connectivity. We illustrate the approach through a landscape genetics study linking gene flow in alpine chamois (Rupicapra rupicapra) to the underlying landscape.
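The resistance distance used here has a standard closed form in terms of the pseudo-inverse of the graph Laplacian, R_ij = L+_ii + L+_jj - 2 L+_ij; a minimal sketch (the triangle network is an invented example, not the chamois landscape):

```python
import numpy as np

def resistance_distance(laplacian):
    """Effective resistance between all node pairs of a resistor network,
    computed from the pseudo-inverse of the graph Laplacian via the
    standard circuit-theory identity R_ij = L+_ii + L+_jj - 2 L+_ij."""
    lp = np.linalg.pinv(laplacian)
    d = np.diag(lp)
    return d[:, None] + d[None, :] - 2 * lp

# Triangle of three unit resistors: the two parallel paths (1 ohm direct,
# 2 ohms around) between any pair give an effective resistance of 2/3 ohm.
L = np.array([[ 2., -1., -1.],
              [-1.,  2., -1.],
              [-1., -1.,  2.]])
R = resistance_distance(L)
```

In the landscape setting, the entries of the Laplacian would be functions of landscape covariates, which is exactly where the parametric Gaussian Markov random field link becomes useful.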

  2. A modelling approach for the vibroacoustic behaviour of aluminium extrusions used in railway vehicles

    NASA Astrophysics Data System (ADS)

    Xie, G.; Thompson, D. J.; Jones, C. J. C.

    2006-06-01

Modern railway vehicles are often constructed from double walled aluminium extrusions, which give a stiff, light construction. However, the acoustic performance of such panels is less satisfactory, with the airborne sound transmission being considerably worse than the mass law for the equivalent simple panel. To compensate for this, vehicle manufacturers are forced to add treatments such as damping layers, absorptive layers and floating floors. Moreover, a model for extruded panels that is both simple and reliable is required to assist in the early stages of design. A statistical energy analysis (SEA) model to predict the vibroacoustic behaviour of aluminium extrusions is presented here. An extruded panel is represented by a single global mode subsystem and three subsystems representing local modes of the various strips, which occur for frequencies typically above 500 Hz. An approximate model for the modal density of extruded panels is developed and verified using an FE model. The coupling between global and local modes is approximated by the coupling between a travelling global wave and uncorrelated local waves. This model enables the response difference across the panels to be predicted. For the coupling with air, the average radiation efficiency of a baffled extruded panel is modelled in terms of the contributions from global and local modes. Experimental studies of a sample extruded panel have also been carried out. The vibration of an extruded panel under mechanical excitation is measured for various force positions and the vibration distribution over the panel is obtained in detail. The radiation efficiencies of a free extruded panel have also been measured. The complete SEA model of a panel is finally used to predict the response of the extruded panel under mechanical and acoustic excitations. Especially for mechanical excitation, the proposed SEA model gives predictions in good agreement with the measurements.

  3. Modeling the Soft Geometry of Biological Membranes

    NASA Astrophysics Data System (ADS)

    Daly, K.

This dissertation presents work applying the techniques of physics to biological systems. The difference in length scales between the thickness of the phospholipid bilayer and the overall size of a biological cell allows the bilayer to be modeled elastically as a thin sheet. The Helfrich free energy is extended and applied to models representing various biological systems, in order to find quasi-equilibrium states as well as transitions between states. Morphologies are approximated as axially symmetric. Stable morphologies are determined analytically and through the use of computer simulation. The simple morphologies examined analytically give a model for the pearling transition seen in growing biological cells. An analytic model of cellular bulging in gram-negative bacteria predicts a critical pore radius for bulging of 20 nanometers. This model is extended to the membrane dynamics of human red blood cells, predicting three morphologic phases which are seen in vivo. A computer simulation was developed to study more complex morphologies with models representing different bilayer compositions. Single- and multi-component bilayer models reproduce morphologies previously predicted by Seifert. A mean field model representing the intrinsic curvature of proteins coupling to membrane curvature is used to explore the stability of the particular morphology of rod outer segment cells. The process of pore formation and expansion in cell-cell fusion is not well understood. Simulation of the pore created in cell-cell fusion led to the finding of a minimal pore radius required for pore expansion, suggesting pores formed in nature are formed with a minimum size.

  4. Dynamics of market structure driven by the degree of consumer’s rationality

    NASA Astrophysics Data System (ADS)

    Yanagita, Tatsuo; Onozaki, Tamotsu

    2010-03-01

    We study a simple model of market share dynamics with boundedly rational consumers and firms interacting with each other. As the number of consumers is large, we employ a statistical description to represent firms’ distribution of consumer share, which is characterized by a single parameter representing how rationally the mass of consumers pursue higher utility. As the boundedly rational firm does not know the shape of demand function it faces, it revises production and price so as to raise its profit with the aid of a simple reinforcement learning rule. Simulation results show that (1) three phases of market structure, i.e. the uniform share phase, the oligopolistic phase, and the monopolistic phase, appear depending upon how rational consumers are, and (2) in an oligopolistic phase, the market share distribution of firms follows Zipf’s law and the growth-rate distribution of firms follows Gibrat’s law, and (3) an oligopolistic phase is the best state of market in terms of consumers’ utility but brings the minimum profit to the firms because of severe competition based on the moderate rationality of consumers.

  5. Requirements analysis, domain knowledge, and design

    NASA Technical Reports Server (NTRS)

    Potts, Colin

    1988-01-01

    Two improvements to current requirements analysis practices are suggested: domain modeling, and the systematic application of analysis heuristics. Domain modeling is the representation of relevant application knowledge prior to requirements specification. Artificial intelligence techniques may eventually be applicable for domain modeling. In the short term, however, restricted domain modeling techniques, such as that in JSD, will still be of practical benefit. Analysis heuristics are standard patterns of reasoning about the requirements. They usually generate questions of clarification or issues relating to completeness. Analysis heuristics can be represented and therefore systematically applied in an issue-based framework. This is illustrated by an issue-based analysis of JSD's domain modeling and functional specification heuristics. They are discussed in the context of the preliminary design of simple embedded systems.

  6. Analysis of a decision model in the context of equilibrium pricing and order book pricing

    NASA Astrophysics Data System (ADS)

    Wagner, D. C.; Schmitt, T. A.; Schäfer, R.; Guhr, T.; Wolf, D. E.

    2014-12-01

An agent-based model for financial markets has to incorporate two aspects: decision making and price formation. We introduce a simple decision model and consider its implications in two different pricing schemes. First, we study its parameter dependence within a supply-demand balance setting. We find realistic behavior in a wide parameter range. Second, we embed our decision model in an order book setting. Here, we observe interesting features which are not present in the equilibrium pricing scheme. In particular, we find a nontrivial behavior of the order book volumes which is reminiscent of a trend-switching phenomenon. Thus, the decision making model alone does not realistically represent the trading and the stylized facts. The order book mechanism is crucial.

  7. A Martian global groundwater model

    NASA Technical Reports Server (NTRS)

    Howard, Alan D.

    1991-01-01

A global groundwater flow model was constructed for Mars to study hydrologic response under a variety of scenarios, improving and extending earlier simple cross sectional models. The model is capable of treating both steady state and transient flow as well as permeability that is anisotropic in the horizontal dimensions. A single near surface confining layer may be included (representing in these simulations a coherent permafrost layer). Furthermore, in unconfined flow, locations of complete saturation and seepage are determined. The flow model assumes that groundwater gradients are sufficiently low that Dupuit conditions are satisfied and the flow component perpendicular to the ground surface is negligible. The flow equations were solved using a finite difference method employing 10 deg spacing of latitude and longitude.
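Under the Dupuit assumption the squared head obeys a Laplace equation in steady state, so a one-dimensional caricature of such a flow model reduces to a few lines (an illustration only; the study's model is two-dimensional in latitude and longitude and also handles transient flow):

```python
def dupuit_steady_heads(h_left, h_right, n=11, iters=20000):
    """Steady unconfined 1-D groundwater profile under the Dupuit
    assumption (flux proportional to h*dh/dx, negligible vertical flow):
    the squared head u = h**2 satisfies a discrete Laplace equation,
    solved here by Jacobi iteration between two fixed-head boundaries."""
    u = [h_left ** 2] + [(h_left ** 2 + h_right ** 2) / 2] * (n - 2) + [h_right ** 2]
    for _ in range(iters):
        # Jacobi sweep on interior nodes; boundaries stay fixed.
        u = [u[0]] + [(u[i - 1] + u[i + 1]) / 2 for i in range(1, n - 1)] + [u[-1]]
    return [ui ** 0.5 for ui in u]

# Water table between a 10 m and a 5 m fixed-head boundary.
heads = dupuit_steady_heads(10.0, 5.0)
```

The converged profile is the classic parabolic free surface: h² varies linearly between the boundary values.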

  8. Identification of cracks in thick beams with a cracked beam element model

    NASA Astrophysics Data System (ADS)

    Hou, Chuanchuan; Lu, Yong

    2016-12-01

The effect of a crack on the vibration of a beam is a classical problem, and various models have been proposed, ranging from the basic stiffness reduction method to more sophisticated models formulated from the additional flexibility due to a crack. However, in damage identification or finite element model updating applications, it is still common practice to employ a simple stiffness reduction factor to represent a crack in the identification process, whereas the use of a more realistic crack model is rather limited. In this paper, the issues with the simple stiffness reduction method, particularly concerning thick beams, are highlighted along with a review of several other crack models. A robust finite element model updating procedure is then presented for the detection of cracks in beams. The description of the crack parameters is based on the cracked beam flexibility formulated by means of fracture mechanics; it takes into consideration shear deformation and coupling between translational and longitudinal vibrations, and is thus particularly suitable for thick beams. The identification procedure employs a global searching technique using Genetic Algorithms, and there is no restriction on the location, severity, and number of cracks to be identified. The procedure is verified to yield satisfactory identification for practically any configuration of cracks in a beam.
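The simple stiffness-reduction method discussed above can be stated in one line for a single degree of freedom: scaling stiffness by (1 − α) scales the natural frequency by √(1 − α). A sketch of that baseline method, with invented numbers (not of the paper's flexibility-based crack model):

```python
import math

def frequency_hz(k, m):
    """Natural frequency of a single-DOF system, f = sqrt(k/m) / (2*pi)."""
    return math.sqrt(k / m) / (2 * math.pi)

# Illustrative stiffness (N/m), mass (kg), and crack reduction factor.
k, m, alpha = 2.0e6, 5.0, 0.19
f_intact  = frequency_hz(k, m)
f_cracked = frequency_hz((1 - alpha) * k, m)
ratio = f_cracked / f_intact   # equals sqrt(1 - alpha)
```

The coarseness of this representation, a single scalar per element with no shear or axial-bending coupling, is precisely what motivates the fracture-mechanics-based element for thick beams.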

  9. Considerations for the application of finite element beam modeling to vibration analysis of flight vehicle structures. Ph.D. Thesis - Case Western Reserve Univ.

    NASA Technical Reports Server (NTRS)

    Kvaternik, R. G.

    1976-01-01

    The manner of representing a flight vehicle structure as an assembly of beam, spring, and rigid-body components for vibration analysis is described. The development is couched in terms of a substructures methodology which is based on the finite-element stiffness method. The particular manner of employing beam, spring, and rigid-body components to model such items as wing structures, external stores, pylons supporting engines or external stores, and sprung masses associated with launch vehicle fuel slosh is described by means of several simple qualitative examples. A detailed numerical example consisting of a tilt-rotor VTOL aircraft is included to provide a unified illustration of the procedure for representing a structure as an equivalent system of beams, springs, and rigid bodies, the manner of forming the substructure mass and stiffness matrices, and the mechanics of writing the equations of constraint which enforce deflection compatibility at the junctions of the substructures. Since many structures, or selected components of structures, can be represented in this manner for vibration analysis, the modeling concepts described and their application in the numerical example shown should prove generally useful to the dynamicist.

  10. Building a Formal Model of a Human-Interactive System: Insights into the Integration of Formal Methods and Human Factors Engineering

    NASA Technical Reports Server (NTRS)

    Bolton, Matthew L.; Bass, Ellen J.

    2009-01-01

    Both the human factors engineering (HFE) and formal methods communities are concerned with finding and eliminating problems with safety-critical systems. This work discusses a modeling effort that leveraged methods from both fields to use model checking with HFE practices to perform formal verification of a human-interactive system. Despite the use of a seemingly simple target system, a patient controlled analgesia pump, the initial model proved to be difficult for the model checker to verify in a reasonable amount of time. This resulted in a number of model revisions that affected the HFE architectural, representativeness, and understandability goals of the effort. If formal methods are to meet the needs of the HFE community, additional modeling tools and technological developments are necessary.

  11. Classification framework for partially observed dynamical systems

    NASA Astrophysics Data System (ADS)

    Shen, Yuan; Tino, Peter; Tsaneva-Atanasova, Krasimira

    2017-04-01

We present a general framework for classifying partially observed dynamical systems based on the idea of learning in the model space. In contrast to the existing approaches using point estimates of model parameters to represent individual data items, we employ posterior distributions over model parameters, thus taking into account in a principled manner the uncertainty due to both the generative (observational and/or dynamic noise) and observation (sampling in time) processes. We evaluate the framework on two test beds: a biological pathway model and a stochastic double-well system. Crucially, we show that the classification performance is not impaired when the model structure used for inferring posterior distributions is much simpler than the observation-generating model structure, provided the reduced-complexity inferential model structure captures the essential characteristics needed for the given classification task.

  12. Continuum Modeling of Inductor Hysteresis and Eddy Current Loss Effects in Resonant Circuits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pries, Jason L.; Tang, Lixin; Burress, Timothy A.

This paper presents experimental validation of a high-fidelity toroid inductor modeling technique. The aim of this research is to accurately model the instantaneous magnetization state and core losses in ferromagnetic materials. Quasi-static hysteresis effects are captured using a Preisach model. Eddy currents are included by coupling the associated quasi-static Everett function to a simple finite element model representing the inductor cross-sectional area. The modeling technique is validated against the nonlinear frequency response from two different series RLC resonant circuits using inductors made of electrical steel and soft ferrite. The method is shown to accurately model shifts in resonant frequency and quality factor. The technique also successfully predicts a discontinuity in the frequency response of the ferrite inductor resonant circuit.

  13. The hydrological cycle at European Fluxnet sites: modeling seasonal water and energy budgets at local scale.

    NASA Astrophysics Data System (ADS)

    Stockli, R.; Vidale, P. L.

    2003-04-01

The importance of correctly including land surface processes in climate models has been increasingly recognized in the past years. Even on seasonal to interannual time scales, land surface-atmosphere feedbacks can play a substantial role in determining the state of the near-surface climate. The availability of soil moisture for both runoff and evapotranspiration depends on biophysical processes occurring in plants and in the soil, acting on time scales from minutes to years. Fluxnet site measurements in various climatic zones are used to drive three generations of LSMs (land surface models) in order to assess the level of complexity needed to represent vegetation processes at the local scale. The three models were the Bucket model (Manabe 1969), BATS 1E (Dickinson 1984) and SiB 2 (Sellers et al. 1996). Evapotranspiration and runoff processes simulated by these models range from simple one-layer soil and no-vegetation parameterizations to complex multilayer soils, including realistic photosynthesis-stomatal conductance models. The latter is driven by satellite remote sensing land surface parameters inheriting the spatiotemporal evolution of vegetation phenology. In addition, a simulation with SiB 2 including not only vertical water fluxes but also lateral soil moisture transfers by downslope flow is conducted for a pre-alpine catchment in Switzerland. Preliminary results are presented and show that, depending on the climatic environment and on the season, a realistic representation of evapotranspiration processes, including the seasonally and interannually varying state of vegetation, significantly improves the representation of observed latent and sensible heat fluxes at the local scale. Moreover, the interannual evolution of soil moisture availability and runoff is strongly dependent on the chosen model complexity.
Biophysical land surface parameters from satellite make it possible to represent the seasonal changes in vegetation activity, which have a great impact on the yearly budget of transpiration fluxes. For some sites, however, the hydrological cycle is simulated reasonably well even with simple land surface representations.

  14. Model of head-neck joint fast movements in the frontal plane.

    PubMed

    Pedrocchi, A; Ferrigno, G

    2004-06-01

The objective of this work is to develop a model representing the physiological systems driving fast head movements in the frontal plane. All the contributions occurring mechanically in the head movement are considered: damping, stiffness, the physiological limit of the range of motion, the gravitational field, and muscular torques due to voluntary activation as well as to the stretch reflex depending on fusal afferences. Model parameters are derived from the literature where possible, while the remaining parameters are determined by optimising the model output to fit real kinematic data acquired by a motion capture system in specific experimental set-ups. The optimisation for parameter identification is performed by genetic algorithms. Results show that the model represents fast head movements very well over the whole range of inclination in the frontal plane. Such a model could be proposed as a tool for transforming kinematic data on head movements into 'neural equivalent data', especially for assessing head control disorders and properly planning the rehabilitation process. In addition, genetic algorithms seem well suited to the problem of parameter identification, allowing for the use of a very simple experimental set-up and granting model robustness.
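The genetic-algorithm parameter identification described above can be illustrated with a toy real-coded GA. This is a generic sketch, not the authors' implementation: the selection scheme (truncation), blend crossover, mutation scale, and all names are assumptions for illustration.

```python
import random

def fit_parameters(loss, bounds, pop_size=30, n_gen=60, seed=0):
    """Tiny real-coded genetic algorithm (illustrative sketch, not the paper's
    implementation): truncation selection, blend crossover, Gaussian mutation.
    `loss` maps a parameter vector to a scalar fitting error to minimize."""
    rng = random.Random(seed)
    dim = len(bounds)

    def clip(v, d):
        lo, hi = bounds[d]
        return min(max(v, lo), hi)

    # Random initial population inside the parameter bounds.
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(n_gen):
        pop.sort(key=loss)
        parents = pop[: pop_size // 2]       # keep the better half (elitist)
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            w = rng.random()
            # Blend crossover plus small Gaussian mutation, clipped to bounds.
            children.append([clip(w * a[d] + (1 - w) * b[d] + rng.gauss(0, 0.05), d)
                             for d in range(dim)])
        pop = parents + children
    return min(pop, key=loss)
```

In the paper's setting, `loss` would measure the mismatch between the model's simulated head trajectory and the motion-capture data; here any smooth function demonstrates the mechanics.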

  15. A brief description of the simple biosphere model (SiB)

    NASA Technical Reports Server (NTRS)

    Sellers, P. J.; Mintz, Y.; Sud, Y. C.

    1986-01-01

A biosphere model for calculating the transfer of energy, mass, and momentum between the atmosphere and the vegetated surface of the Earth was designed for atmospheric general circulation models. An upper vegetation layer represents the perennial canopy of trees or shrubs, and a lower layer represents the annual ground cover of grasses and other herbaceous species. The local coverage of each vegetation layer may be fractional or complete, but as the individual vegetation elements are considered to be evenly spaced, their root systems are assumed to extend uniformly throughout the entire grid area. The biosphere has seven prognostic physical-state variables: two temperatures (one for the canopy and one for the ground cover and soil surface); two interception water stores (one for the canopy and one for the ground cover); and three soil moisture stores (two of which can be reached by the vegetation root systems and one underlying recharge layer into and out of which moisture is transferred only by hydraulic diffusion).

  16. Adjusted variable plots for Cox's proportional hazards regression model.

    PubMed

    Hall, C B; Zeger, S L; Bandeen-Roche, K J

    1996-01-01

Adjusted variable plots are useful in linear regression for outlier detection and for qualitative evaluation of the fit of a model. In this paper, we extend adjusted variable plots to Cox's proportional hazards model for possibly censored survival data. We propose three different plots: a risk level adjusted variable (RLAV) plot in which each observation in each risk set appears, a subject level adjusted variable (SLAV) plot in which each subject is represented by one point, and an event level adjusted variable (ELAV) plot in which the entire risk set at each failure event is represented by a single point. The latter two plots are derived from the RLAV by combining multiple points. In each plot, the regression coefficient and standard error from a Cox proportional hazards regression are obtained by a simple linear regression through the origin fit to the coordinates of the pictured points. The plots are illustrated with a reanalysis of a dataset of 65 patients with multiple myeloma.
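The "linear regression through the origin" used to read the coefficient off such plots has a simple closed form: the no-intercept least-squares slope is the ratio of cross products to squared abscissas. A minimal sketch (function name and data are illustrative, not from the paper):

```python
def origin_slope(x, y):
    """Least-squares slope of y = b*x with no intercept:
    b = sum(x*y) / sum(x*x)."""
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    sxx = sum(xi * xi for xi in x)
    return sxy / sxx
```

Applied to the coordinates of the points in an adjusted variable plot, this slope recovers the corresponding Cox regression coefficient.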

  17. Regional myocardial flow heterogeneity explained with fractal networks

    PubMed Central

    VAN BEEK, JOHANNES H. G. M.; ROGER, STEPHEN A.; BASSINGTHWAIGHTE, JAMES B.

    2010-01-01

To explain how the distribution of flow broadens with an increase in the spatial resolution of the measurement, we developed fractal models for vascular networks. A dichotomous branching network of vessels represents the arterial tree and connects to a similar venous network. A small difference in vessel lengths and radii between the two daughter vessels, with the same degree of asymmetry at each branch generation, predicts the dependence of the relative dispersion (SD/mean) on the spatial resolution of the perfusion measurement reasonably well. When the degree of asymmetry increases with successive branching, a better fit to data on sheep and baboons results. When the asymmetry is random, a satisfactory fit is found. These models show that a difference in flow of 20% between the daughter vessels at a branch point gives a relative dispersion of flow of ~30% when the heart is divided into 100–200 pieces. Although these simple models do not represent anatomic features accurately, they provide valuable insight into the heterogeneity of flow within the heart. PMID:2589520
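The constant-asymmetry version of such a branching network is easy to simulate. In this sketch (parameter names and the fixed-ratio rule are assumptions for illustration), each branch splits its flow between two daughters whose flows differ by a constant factor; the relative dispersion of the terminal flows then grows with each generation, illustrating how dispersion rises with measurement resolution.

```python
import math

def terminal_flows(n_gen, ratio=1.2):
    """Flows of the 2**n_gen terminal pieces of a dichotomous tree in which,
    at every branch point, the larger daughter carries `ratio` times the
    smaller daughter's flow (a 20% difference for ratio=1.2)."""
    lo = 1.0 / (1.0 + ratio)    # smaller daughter's fraction of parent flow
    hi = ratio / (1.0 + ratio)  # larger daughter's fraction
    flows = [1.0]
    for _ in range(n_gen):
        flows = [f * frac for f in flows for frac in (lo, hi)]
    return flows

def relative_dispersion(flows):
    """Relative dispersion = standard deviation / mean."""
    mean = sum(flows) / len(flows)
    var = sum((f - mean) ** 2 for f in flows) / len(flows)
    return math.sqrt(var) / mean
```

With seven generations (128 pieces, comparable to the 100-200 pieces quoted above), this constant-asymmetry sketch yields a relative dispersion of a few tens of percent, the same order as the abstract reports.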

  18. Propagation of flexural and membrane waves with fluid loaded NASTRAN plate and shell elements

    NASA Technical Reports Server (NTRS)

    Kalinowski, A. J.; Wagner, C. A.

    1983-01-01

Modeling of flexural and membrane type waves in various submerged (or in vacuo) plate and/or shell finite element models excited with steady-state harmonic loadings proportional to e^(i omega t) is discussed. Only thin-walled plates and shells are treated, wherein rotary inertia and shear correction factors are not included. More specifically, the issue of determining the shell or plate mesh size needed to represent the spatial distribution of the plate or shell response is of prime importance for successfully representing the solution to the problem at hand. To this end, a procedure is presented for establishing guidelines for determining the mesh size, based on a simple test model that can be used for a variety of plate and shell configurations such as cylindrical shells with water loading, cylindrical shells in vacuo, plates with water loading, and plates in vacuo. The procedure for all four cases is given, with specific numerical examples presented only for the cylindrical shell case.
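A mesh-size guideline of this kind can be sketched from thin-plate theory: compute the flexural wavelength at the analysis frequency and cap the element size at a fraction of it. The formulas below are standard thin-plate relations; the elements-per-wavelength rule of thumb and all parameter values are assumptions, not taken from the paper.

```python
import math

def flexural_wavelength(E, nu, rho, h, freq_hz):
    """Flexural (bending) wavelength of a thin plate at a given frequency.
    Thin-plate theory: bending stiffness D = E*h^3 / (12*(1 - nu^2)),
    wavenumber k = (rho*h*omega^2 / D)**0.25, wavelength = 2*pi/k."""
    omega = 2.0 * math.pi * freq_hz
    D = E * h ** 3 / (12.0 * (1.0 - nu ** 2))
    k = (rho * h * omega ** 2 / D) ** 0.25
    return 2.0 * math.pi / k

def max_element_size(wavelength, elems_per_wavelength=8):
    """Upper bound on mesh spacing; 6-10 elements per flexural wavelength
    is a common rule of thumb (an assumption here, not the paper's value)."""
    return wavelength / elems_per_wavelength
```

For example, a 10 mm steel plate at 1 kHz has a flexural wavelength of roughly 0.3 m, so an 8-elements-per-wavelength rule caps the element size near 4 cm; fluid loading shortens the effective wavelength and tightens the bound.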

  19. Indirect Reconstruction of Pore Morphology for Parametric Computational Characterization of Unidirectional Porous Iron.

    PubMed

    Kovačič, Aljaž; Borovinšek, Matej; Vesenjak, Matej; Ren, Zoran

    2018-01-26

    This paper addresses the problem of reconstructing realistic, irregular pore geometries of lotus-type porous iron for computer models that allow for simple porosity and pore size variation in computational characterization of their mechanical properties. The presented methodology uses image-recognition algorithms for the statistical analysis of pore morphology in real material specimens, from which a unique fingerprint of pore morphology at a certain porosity level is derived. The representative morphology parameter is introduced and used for the indirect reconstruction of realistic and statistically representative pore morphologies, which can be used for the generation of computational models with an arbitrary porosity. Such models were subjected to parametric computer simulations to characterize the dependence of engineering elastic modulus on the porosity of lotus-type porous iron. The computational results are in excellent agreement with experimental observations, which confirms the suitability of the presented methodology of indirect pore geometry reconstruction for computational simulations of similar porous materials.

  20. Evaluation of transtension and transpression within contractional fault steps: Comparing kinematic and mechanical models to field data

    NASA Astrophysics Data System (ADS)

    Nevitt, Johanna M.; Pollard, David D.; Warren, Jessica M.

    2014-03-01

    Rock deformation often is investigated using kinematic and/or mechanical models. Here we provide a direct comparison of these modeling techniques in the context of a deformed dike within a meter-scale contractional fault step. The kinematic models consider two possible shear plane orientations and various modes of deformation (simple shear, transtension, transpression), while the mechanical model uses the finite element method and assumes elastoplastic constitutive behavior. The results for the kinematic and mechanical models are directly compared using the modeled maximum and minimum principal stretches. The kinematic analysis indicates that the contractional step may be classified as either transtensional or transpressional depending on the modeled shear plane orientation, suggesting that these terms may be inappropriate descriptors of step-related deformation. While the kinematic models do an acceptable job of depicting the change in dike shape and orientation, they are restricted to a prescribed homogeneous deformation. In contrast, the mechanical model allows for heterogeneous deformation within the step to accurately represent the deformation. The ability to characterize heterogeneous deformation and include fault slip - not as a prescription, but as a solution to the governing equations of motion - represents a significant advantage of the mechanical model over the kinematic models.

  1. A method for simulating transient ground-water recharge in deep water-table settings in central Florida by using a simple water-balance/transfer-function model

    USGS Publications Warehouse

    O'Reilly, Andrew M.

    2004-01-01

A relatively simple method is needed that provides estimates of transient ground-water recharge in deep water-table settings that can be incorporated into other hydrologic models. Deep water-table settings are areas where the water table is below the reach of plant roots and virtually all water that is not lost to surface runoff, evaporation at land surface, or evapotranspiration in the root zone eventually becomes ground-water recharge. Areas in central Florida with a deep water table generally are high recharge areas; consequently, simulation of recharge in these areas is of particular interest to water-resource managers. Yet the complexities of meteorological variations and unsaturated flow processes make it difficult to estimate short-term recharge rates, thereby confounding calibration and predictive use of transient hydrologic models. A simple water-balance/transfer-function (WBTF) model was developed for simulating transient ground-water recharge in deep water-table settings. The WBTF model represents a one-dimensional column from the top of the vegetative canopy to the water table and consists of two components: (1) a water-balance module that simulates the water storage capacity of the vegetative canopy and root zone; and (2) a transfer-function module that simulates the traveltime of water as it percolates from the bottom of the root zone to the water table. Data requirements include two time series for the period of interest, precipitation (or precipitation minus surface runoff, if surface runoff is not negligible) and evapotranspiration, and values for five parameters that represent water storage capacity or soil-drainage characteristics. A limiting assumption of the WBTF model is that the percolation of water below the root zone is a linear process. That is, percolating water is assumed to have the same traveltime characteristics, experiencing the same delay and attenuation, as it moves through the unsaturated zone. 
This assumption is more accurate if the moisture content, and consequently the unsaturated hydraulic conductivity, below the root zone does not vary substantially with time. Results of the WBTF model were compared to those of the U.S. Geological Survey variably saturated flow model, VS2DT, and to field-based estimates of recharge to demonstrate the applicability of the WBTF model for a range of conditions relevant to deep water-table settings in central Florida. The WBTF model reproduced independently obtained estimates of recharge reasonably well for different soil types and water-table depths.
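The two-module structure can be sketched as a bucket water balance feeding a discrete convolution. Everything below (variable names, daily stepping, the shape of the unit impulse response) is an illustrative assumption, not the USGS implementation:

```python
def wbtf_recharge(precip, et, root_zone_capacity, impulse_response):
    """Sketch of a water-balance/transfer-function recharge model (assumed form).
    1) Bucket water balance: the root zone fills with precip minus ET; water
       above capacity percolates out the bottom.
    2) Linear transfer: percolation is convolved with a unit impulse response
       representing unsaturated-zone travel time (the linearity assumption)."""
    storage = 0.0
    percolation = []
    for p, e in zip(precip, et):
        storage = max(0.0, storage + p - e)
        excess = max(0.0, storage - root_zone_capacity)
        storage -= excess
        percolation.append(excess)
    # recharge(t) = sum_k percolation(t - k) * h(k)
    recharge = []
    for t in range(len(percolation)):
        recharge.append(sum(percolation[t - k] * h
                            for k, h in enumerate(impulse_response) if t - k >= 0))
    return recharge
```

If the impulse-response weights sum to one, total recharge equals total percolation; the response is only delayed and attenuated, which is exactly the linearity assumption the abstract describes.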

  2. Can simple rules control development of a pioneer vertebrate neuronal network generating behavior?

    PubMed

    Roberts, Alan; Conte, Deborah; Hull, Mike; Merrison-Hort, Robert; al Azad, Abul Kalam; Buhl, Edgar; Borisyuk, Roman; Soffe, Stephen R

    2014-01-08

    How do the pioneer networks in the axial core of the vertebrate nervous system first develop? Fundamental to understanding any full-scale neuronal network is knowledge of the constituent neurons, their properties, synaptic interconnections, and normal activity. Our novel strategy uses basic developmental rules to generate model networks that retain individual neuron and synapse resolution and are capable of reproducing correct, whole animal responses. We apply our developmental strategy to young Xenopus tadpoles, whose brainstem and spinal cord share a core vertebrate plan, but at a tractable complexity. Following detailed anatomical and physiological measurements to complete a descriptive library of each type of spinal neuron, we build models of their axon growth controlled by simple chemical gradients and physical barriers. By adding dendrites and allowing probabilistic formation of synaptic connections, we reconstruct network connectivity among up to 2000 neurons. When the resulting "network" is populated by model neurons and synapses, with properties based on physiology, it can respond to sensory stimulation by mimicking tadpole swimming behavior. This functioning model represents the most complete reconstruction of a vertebrate neuronal network that can reproduce the complex, rhythmic behavior of a whole animal. The findings validate our novel developmental strategy for generating realistic networks with individual neuron- and synapse-level resolution. We use it to demonstrate how early functional neuronal connectivity and behavior may in life result from simple developmental "rules," which lay out a scaffold for the vertebrate CNS without specific neuron-to-neuron recognition.
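The probabilistic formation of synaptic connections lends itself to a compact sketch. The distance-gated rule and all names below are assumptions for illustration, not the published algorithm, which grows axons along chemical gradients before testing contacts:

```python
import random

def connect(axon_points, dendrite_points, contact_radius, p_synapse, seed=0):
    """Illustrative distance-gated probabilistic synapse formation (assumed
    rule): wherever an axon point lies within `contact_radius` of a dendrite
    point, a synapse forms with probability `p_synapse`."""
    rng = random.Random(seed)
    synapses = []
    for i, (ax, ay) in enumerate(axon_points):
        for j, (dx, dy) in enumerate(dendrite_points):
            if (ax - dx) ** 2 + (ay - dy) ** 2 <= contact_radius ** 2:
                if rng.random() < p_synapse:
                    synapses.append((i, j))
    return synapses
```

Repeating this over thousands of modeled axons and dendrites yields a connectivity matrix with individual-synapse resolution, in the spirit of the reconstruction described above.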

  3. Non-additive simple potentials for pre-programmed self-assembly

    NASA Astrophysics Data System (ADS)

    Mendoza, Carlos

    2015-03-01

A major goal in nanoscience and nanotechnology is the self-assembly of any desired complex structure with a system of particles interacting through simple potentials. To achieve this objective, intense experimental and theoretical efforts are currently concentrated on the development of so-called ``patchy'' particles. Here we follow a completely different approach and introduce a very accessible model to produce a large variety of pre-programmed two-dimensional (2D) complex structures. Our model consists of a binary mixture of particles interacting through isotropic potentials that is able to self-assemble into targeted lattices by the appropriate choice of a small number of geometrical parameters and interaction strengths. We study the system using Monte Carlo computer simulations and, despite its simplicity, we are able to self-assemble potentially useful structures such as chains, stripes, Kagomé, twisted Kagomé, honeycomb, square, Archimedean and quasicrystalline tilings. Our model is designed such that it may be implemented using discotic particles or, alternatively, using exclusively spherical particles interacting isotropically. Thus, it represents a promising strategy for bottom-up nano-fabrication. Partial Financial Support: DGAPA IN-110613.

  4. Statistical self-similarity of width function maxima with implications to floods

    USGS Publications Warehouse

    Veitzer, S.A.; Gupta, V.K.

    2001-01-01

Recently a new theory of random self-similar river networks, called the RSN model, was introduced to explain empirical observations regarding the scaling properties of distributions of various topologic and geometric variables in natural basins. The RSN model predicts that such variables exhibit statistical simple scaling, when indexed by Horton-Strahler order. The average side tributary structure of RSN networks also exhibits Tokunaga-type self-similarity, which is widely observed in nature. We examine the scaling structure of distributions of the maximum of the width function for RSNs for nested, complete Strahler basins by performing ensemble simulations. The maximum of the width function exhibits distributional simple scaling, when indexed by Horton-Strahler order, for both RSNs and natural river networks extracted from digital elevation models (DEMs). We also test a power-law relationship between Horton ratios for the maximum of the width function and drainage areas. These results represent first steps in formulating a comprehensive physical statistical theory of floods at multiple space-time scales for RSNs as discrete hierarchical branching structures. © 2001 Published by Elsevier Science Ltd.

  5. Threaded cognition: an integrated theory of concurrent multitasking.

    PubMed

    Salvucci, Dario D; Taatgen, Niels A

    2008-01-01

    The authors propose the idea of threaded cognition, an integrated theory of concurrent multitasking--that is, performing 2 or more tasks at once. Threaded cognition posits that streams of thought can be represented as threads of processing coordinated by a serial procedural resource and executed across other available resources (e.g., perceptual and motor resources). The theory specifies a parsimonious mechanism that allows for concurrent execution, resource acquisition, and resolution of resource conflicts, without the need for specialized executive processes. By instantiating this mechanism as a computational model, threaded cognition provides explicit predictions of how multitasking behavior can result in interference, or lack thereof, for a given set of tasks. The authors illustrate the theory in model simulations of several representative domains ranging from simple laboratory tasks such as dual-choice tasks to complex real-world domains such as driving and driver distraction. (c) 2008 APA, all rights reserved

  6. A simple model for the evolution of melt pond coverage on permeable Arctic sea ice

    NASA Astrophysics Data System (ADS)

    Popović, Predrag; Abbot, Dorian

    2017-05-01

    As the melt season progresses, sea ice in the Arctic often becomes permeable enough to allow for nearly complete drainage of meltwater that has collected on the ice surface. Melt ponds that remain after drainage are hydraulically connected to the ocean and correspond to regions of sea ice whose surface is below sea level. We present a simple model for the evolution of melt pond coverage on such permeable sea ice floes in which we allow for spatially varying ice melt rates and assume the whole floe is in hydrostatic balance. The model is represented by two simple ordinary differential equations, where the rate of change of pond coverage depends on the pond coverage. All the physical parameters of the system are summarized by four strengths that control the relative importance of the terms in the equations. The model both fits observations and allows us to understand the behavior of melt ponds in a way that is often not possible with more complex models. Examples of insights we can gain from the model are that (1) the pond growth rate is more sensitive to changes in bare sea ice albedo than changes in pond albedo, (2) ponds grow slower on smoother ice, and (3) ponds respond strongest to freeboard sinking on first-year ice and sidewall melting on multiyear ice. We also show that under a global warming scenario, pond coverage would increase, decreasing the overall ice albedo and leading to ice thinning that is likely comparable to thinning due to direct forcing. Since melt pond coverage is one of the key parameters controlling the albedo of sea ice, understanding the mechanisms that control the distribution of pond coverage will help improve large-scale model parameterizations and sea ice forecasts in a warming climate.
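The structure the abstract describes, where the rate of change of pond coverage depends on the coverage itself, can be illustrated with a single toy ODE integrated by forward Euler. The functional form and parameter values below are invented for illustration only and are not the paper's two equations or its four strengths:

```python
def integrate_pond(p0, a, b, dt, n_steps):
    """Forward-Euler integration of an assumed logistic-style pond ODE,
        dp/dt = a*p*(1 - p) - b*p,
    where p is the pond fraction and a, b stand in for the model's
    competing 'strengths' (growth vs. drainage). NOT the paper's equations."""
    p = p0
    history = [p]
    for _ in range(n_steps):
        p += dt * (a * p * (1.0 - p) - b * p)
        p = min(max(p, 0.0), 1.0)  # coverage is a fraction
        history.append(p)
    return history
```

For a > b this toy form relaxes to the equilibrium coverage p* = 1 - b/a, showing how a balance of strengths, rather than any single parameter, sets the final pond fraction.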

  7. Cosmic Star Formation: A Simple Model of the SFRD(z)

    NASA Astrophysics Data System (ADS)

    Chiosi, Cesare; Sciarratta, Mauro; D’Onofrio, Mauro; Chiosi, Emanuela; Brotto, Francesca; De Michele, Rosaria; Politino, Valeria

    2017-12-01

    We investigate the evolution of the cosmic star formation rate density (SFRD) from redshift z = 20 to z = 0 and compare it with the observational one by Madau and Dickinson derived from recent compilations of ultraviolet (UV) and infrared (IR) data. The theoretical SFRD(z) and its evolution are obtained using a simple model that folds together the star formation histories of prototype galaxies that are designed to represent real objects of different morphological type along the Hubble sequence and the hierarchical growing of structures under the action of gravity from small perturbations to large-scale objects in Λ-CDM cosmogony, i.e., the number density of dark matter halos N(M,z). Although the overall model is very simple and easy to set up, it provides results that mimic results obtained from highly complex large-scale N-body simulations well. The simplicity of our approach allows us to test different assumptions for the star formation law in galaxies, the effects of energy feedback from stars to interstellar gas, the efficiency of galactic winds, and also the effect of N(M,z). The result of our analysis is that in the framework of the hierarchical assembly of galaxies, the so-called time-delayed star formation under plain assumptions mainly for the energy feedback and galactic winds can reproduce the observational SFRD(z).

  8. Interferometric Constraints on Surface Brightness Asymmetries in Long-Period Variable Stars: A Threat to Accurate Gaia Parallaxes

    NASA Astrophysics Data System (ADS)

    Sacuto, S.; Jorissen, A.; Cruzalèbes, P.; Pasquato, E.; Chiavassa, A.; Spang, A.; Rabbia, Y.; Chesneau, O.

    2011-09-01

    A monitoring of surface brightness asymmetries in evolved giants and supergiants is necessary to estimate the threat that they represent to accurate Gaia parallaxes. Closure-phase measurements obtained with AMBER/VISA in a 3-telescope configuration are fitted by a simple model to constrain the photocenter displacement. The results for the C-type star TX Psc show a large deviation of the photocenter displacement that could bias the Gaia parallax.

  9. Analysis and control of the METC fluid-bed gasifier. Quarterly report, October 1994--January 1995

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farell, A.E.; Reddy, S.

    1995-03-01

This document summarizes work performed for the period 10/1/94 to 2/1/95. The initial phase of the work focuses on developing a simple transfer function model of the Fluidized Bed Gasifier (FBG). This transfer function model will be developed based purely on the gasifier responses to step changes in gasifier inputs (including reactor air, convey air, cone nitrogen, FBG pressure, and coal feedrate). This transfer function model will represent a linear, dynamic model that is valid near the operating point at which the data were taken. In addition, a similar transfer function model will be developed using MGAS in order to assess MGAS for use as a model of the FBG for control systems analysis.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moignier, Alexandra, E-mail: alexandra.moignier@irsn.fr; Derreumaux, Sylvie; Broggio, David

Purpose: Current retrospective cardiovascular dosimetry studies are based on a representative patient or simple mathematic phantoms. Here, a process of patient modeling was developed to personalize the anatomy of the thorax and to include a heart model with coronary arteries. Methods and Materials: The patient models were hybrid computational phantoms (HCPs) with an inserted detailed heart model. A computed tomography (CT) acquisition (pseudo-CT) was derived from HCP and imported into a treatment planning system where treatment conditions were reproduced. Six current patients were selected: 3 were modeled from their CT images (A patients) and the others were modeled from 2 orthogonal radiographs (B patients). The method performance and limitation were investigated by quantitative comparison between the initial CT and the pseudo-CT, namely, the morphology and the dose calculation were compared. For the B patients, a comparison with 2 kinds of representative patients was also conducted. Finally, dose assessment was focused on the whole coronary artery tree and the left anterior descending coronary. Results: When 3-dimensional anatomic information was available, the dose calculations performed on the initial CT and the pseudo-CT were in good agreement. For the B patients, comparison of doses derived from HCP and representative patients showed that the HCP doses were either better or equivalent. In the left breast radiation therapy context and for the studied cases, coronary mean doses were at least 5-fold higher than heart mean doses. Conclusions: For retrospective dose studies, it is suggested that HCP offers a better surrogate, in terms of dose accuracy, than representative patients. The use of a detailed heart model eliminates the problem of identifying the coronaries on the patient's CT.

  11. A smart sensor architecture based on emergent computation in an array of outer-totalistic cells

    NASA Astrophysics Data System (ADS)

    Dogaru, Radu; Dogaru, Ioana; Glesner, Manfred

    2005-06-01

A novel smart-sensor architecture is proposed, capable of segmenting and recognizing characters in a monochrome image. It can provide a list of ASCII codes representing the characters recognized in the monochrome visual field, and can operate as an aid for the blind or in industrial applications. A bio-inspired cellular model with simple linear neurons was found to be the best at performing the nontrivial task of cropping isolated compact objects such as handwritten digits or characters. By attaching a simple outer-totalistic cell to each pixel sensor, emergent computation in the resulting cellular automata lattice provides a straightforward and compact solution to the otherwise computationally intensive problem of character segmentation. A simple and robust recognition algorithm is built into a compact sequential controller accessing the array of cells, so that the integrated device can directly provide a list of codes of the recognized characters. Preliminary simulation tests indicate good performance and robustness to various distortions of the visual field.

  12. The structure of tropical forests and sphere packings

    PubMed Central

    Jahn, Markus Wilhelm; Dobner, Hans-Jürgen; Wiegand, Thorsten; Huth, Andreas

    2015-01-01

    The search for simple principles underlying the complex architecture of ecological communities such as forests still challenges ecological theorists. We use tree diameter distributions—fundamental for deriving other forest attributes—to describe the structure of tropical forests. Here we argue that tree diameter distributions of natural tropical forests can be explained by stochastic packing of tree crowns representing a forest crown packing system: a method usually used in physics or chemistry. We demonstrate that tree diameter distributions emerge accurately from a surprisingly simple set of principles that include site-specific tree allometries, random placement of trees, competition for space, and mortality. The simple static model also successfully predicted the canopy structure, revealing that most trees in our two studied forests grow up to 30–50 m in height and that the highest packing density of about 60% is reached between the 25- and 40-m height layer. Our approach is an important step toward identifying a minimal set of processes responsible for generating the spatial structure of tropical forests. PMID:26598678
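The "random placement of trees, competition for space" ingredients can be sketched as random sequential placement of non-overlapping crowns. This is a minimal illustration under assumed simplifications (equal crown radii, a 2D unit square, no allometry or mortality), not the paper's 3D crown-packing method:

```python
import random

def place_crowns(n_attempts, radius, domain=1.0, seed=1):
    """Random sequential placement of non-overlapping circular crowns in a
    square domain: each candidate position is accepted only if its crown does
    not overlap any previously placed crown (competition for space)."""
    rng = random.Random(seed)
    crowns = []
    for _ in range(n_attempts):
        x = rng.uniform(radius, domain - radius)
        y = rng.uniform(radius, domain - radius)
        if all((x - cx) ** 2 + (y - cy) ** 2 >= (2 * radius) ** 2
               for cx, cy in crowns):
            crowns.append((x, y))
    return crowns
```

Drawing the crown radii from a size distribution instead of using a single radius is the step that links such a packing to the tree diameter distributions discussed above.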

  13. Analyzing inflammatory response as excitable media

    NASA Astrophysics Data System (ADS)

    Yde, Pernille; Høgh Jensen, Mogens; Trusina, Ala

    2011-11-01

The regulatory system of the transcription factor NF-κB plays a major role in many cell functions, including the inflammatory response. Interestingly, the NF-κB system is known to up-regulate production of its own triggering signal, namely inflammatory cytokines such as TNF, IL-1, and IL-6. In this paper we investigate a previously presented model of NF-κB which includes both spatial effects and the positive feedback from cytokines. The model exhibits the properties of an excitable medium and has the ability to propagate waves of high cytokine concentration. These waves represent an optimal way of sending an inflammatory signal through the tissue, as they create a chemotactic signal able to recruit neutrophils to the site of infection. The simple model displays three qualitatively different states: low stimulation leads to little or no response; intermediate stimulation leads to recurring waves of high cytokine concentration; and high stimulation leads to a sustained high cytokine concentration, a scenario which is toxic for the tissue cells and corresponds to chronic inflammation. Due to the few variables of the simple model, we are able to perform a phase-space analysis leading to a detailed understanding of the functional form of the model and its limitations. The spatial effects of the model contribute to the robustness of cytokine wave formation and propagation.

  14. Large-scale coastal and fluvial models constrain the late Holocene evolution of the Ebro Delta

    NASA Astrophysics Data System (ADS)

    Nienhuis, Jaap H.; Ashton, Andrew D.; Kettner, Albert J.; Giosan, Liviu

    2017-09-01

    The distinctive plan-view shape of the Ebro Delta coast reveals a rich morphologic history. The degree to which the form and depositional history of the Ebro and other deltas represent autogenic (internal) dynamics or allogenic (external) forcing remains a prominent challenge for paleo-environmental reconstructions. Here we use simple coastal and fluvial morphodynamic models to quantify paleo-environmental changes affecting the Ebro Delta over the late Holocene. Our findings show that these models are able to broadly reproduce the Ebro Delta morphology, with simple fluvial and wave climate histories. Based on numerical model experiments and the preserved and modern shape of the Ebro Delta plain, we estimate that a phase of rapid shoreline progradation began approximately 2100 years BP, requiring approximately a doubling in coarse-grained fluvial sediment supply to the delta. River profile simulations suggest that an instantaneous and sustained increase in coarse-grained sediment supply to the delta requires a combined increase in both flood discharge and sediment supply from the drainage basin. The persistence of rapid delta progradation throughout the last 2100 years suggests an anthropogenic control on sediment supply and flood intensity. Using proxy records of the North Atlantic Oscillation, we do not find evidence that changes in wave climate aided this delta expansion. Our findings highlight how scenario-based investigations of deltaic systems using simple models can assist first-order quantitative paleo-environmental reconstructions, elucidating the effects of past human influence and climate change, and allowing a better understanding of the future of deltaic landforms.

  15. Mortality and economic instability: detailed analyses for Britain and comparative analyses for selected industrialized countries.

    PubMed

    Brenner, M H

    1983-01-01

    This paper discusses a first-stage analysis of the link of unemployment rates, as well as other economic, social and environmental health risk factors, to mortality rates in postwar Britain. The results presented represent part of an international study of the impact of economic change on mortality patterns in industrialized countries. The mortality patterns examined include total and infant mortality and (by cause) cardiovascular (total), cerebrovascular and heart disease, cirrhosis of the liver, and suicide, homicide and motor vehicle accidents. Among the most prominent factors that beneficially influence postwar mortality patterns in England/Wales and Scotland are economic growth and stability and health service availability. A principal detrimental factor to health is a high rate of unemployment. Additional factors that have an adverse influence on mortality rates are cigarette consumption and heavy alcohol use and unusually cold winter temperatures (especially in Scotland). The model of mortality that includes both economic changes and behavioral and environmental risk factors was successfully applied to infant mortality rates in the interwar period. In addition, the "simple" economic change model of mortality (using only economic indicators) was applied to other industrialized countries. In Canada, the United States, the United Kingdom, and Sweden, the simple version of the economic change model could be successfully applied only if the analysis was begun before World War II; for analysis beginning in the postwar era, the more sophisticated economic change model, including behavioral and environmental risk factors, was required. In France, West Germany, Italy, and Spain, by contrast, some success was achieved using the simple economic change model.

  16. Molecular-dynamics simulation of mutual diffusion in nonideal liquid mixtures

    NASA Astrophysics Data System (ADS)

    Rowley, R. L.; Stoker, J. M.; Giles, N. F.

    1991-05-01

    The mutual-diffusion coefficients, D12, of n-hexane, n-heptane, and n-octane in chloroform were modeled using equilibrium molecular-dynamics (MD) simulations of simple Lennard-Jones (LJ) fluids. Pure-component LJ parameters were obtained by comparison of simulations to experimental self-diffusion coefficients. While values of “effective” LJ parameters are not expected to accurately simulate diverse thermophysical properties over a wide range of conditions, it was recently shown that effective parameters obtained from pure self-diffusion coefficients can accurately model mutual diffusion in ideal liquid mixtures. In this work, similar simulations are used to model diffusion in nonideal mixtures. The same combining rules used in the previous study for the cross-interaction parameters were found to be adequate to represent the composition dependence of D12. The effect of alkane chain length on D12 is also correctly predicted by the simulations. A commonly used assumption in empirical correlations of D12, that its kinetic portion is a simple compositional average of the intradiffusion coefficients, is inconsistent with the simulation results. In fact, the value of the kinetic portion of D12 was often outside the range of values bracketed by the two intradiffusion coefficients for the nonideal system modeled here.

  17. METALLICITY GRADIENTS THROUGH DISK INSTABILITY: A SIMPLE MODEL FOR THE MILKY WAY'S BOXY BULGE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martinez-Valpuesta, Inma; Gerhard, Ortwin, E-mail: imv@mpe.mpg.de, E-mail: gerhard@mpe.mpg.de

    2013-03-20

    Observations show a clear vertical metallicity gradient in the Galactic bulge, which is often taken as a signature of dissipative processes in the formation of a classical bulge. Various evidence shows, however, that the Milky Way is a barred galaxy with a boxy bulge representing the inner three-dimensional part of the bar. Here we show with a secular evolution N-body model that a boxy bulge formed through bar and buckling instabilities can show vertical metallicity gradients similar to the observed gradient if the initial axisymmetric disk had a comparable radial metallicity gradient. In this framework, the range of metallicities in bulge fields constrains the chemical structure of the Galactic disk at early times before bar formation. Our secular evolution model was previously shown to reproduce inner Galaxy star counts and we show here that it also has cylindrical rotation. We use it to predict a full mean metallicity map across the Galactic bulge from a simple metallicity model for the initial disk. This map shows a general outward gradient on the sky as well as longitudinal perspective asymmetries. We also briefly comment on interpreting metallicity gradient observations in external boxy bulges.

  18. Anticipatory Cognitive Systems: a Theoretical Model

    NASA Astrophysics Data System (ADS)

    Terenzi, Graziano

    This paper deals with the problem of understanding anticipation in biological and cognitive systems. It is argued that a physical theory can be considered as biologically plausible only if it incorporates the ability to describe systems which exhibit anticipatory behaviors. The paper introduces a cognitive level description of anticipation and provides a simple theoretical characterization of anticipatory systems on this level. Specifically, a simple model of a formal anticipatory neuron and a model (i.e. the τ-mirror architecture) of an anticipatory neural network which is based on the former are introduced and discussed. The basic feature of this architecture is that a part of the network learns to represent the behavior of the other part over time, thus constructing an implicit model of its own functioning. As a consequence, the network is capable of self-representation; anticipation, on a macroscopic level, is nothing but a consequence of anticipation on a microscopic level. Some learning algorithms are also discussed together with related experimental tasks and possible integrations. The outcome of the paper is a formal characterization of anticipation in cognitive systems which aims to be incorporated into a comprehensive and more general physical theory.

  19. A Fuzzy Cognitive Model of aeolian instability across the South Texas Sandsheet

    NASA Astrophysics Data System (ADS)

    Houser, C.; Bishop, M. P.; Barrineau, C. P.

    2014-12-01

    Characterization of aeolian systems is complicated by rapidly changing surface-process regimes, spatio-temporal scale dependencies, and subjective interpretation of imagery and spatial data. This paper describes the development and application of analytical reasoning to quantify instability of an aeolian environment using scale-dependent information coupled with conceptual knowledge of process and feedback mechanisms. Specifically, a simple Fuzzy Cognitive Model (FCM) for aeolian landscape instability was developed that represents conceptual knowledge of key biophysical processes and feedbacks. Model inputs include satellite-derived surface biophysical and geomorphometric parameters. FCMs are a knowledge-based Artificial Intelligence (AI) technique that merges fuzzy logic and neural computing in which knowledge or concepts are structured as a web of relationships that is similar to both human reasoning and the human decision-making process. Given simple process-form relationships, the analytical reasoning model is able to map the influence of land management practices and the geomorphology of the inherited surface on aeolian instability within the South Texas Sandsheet. Results suggest that FCMs can be used to formalize process-form relationships and information integration analogous to human cognition with future iterations accounting for the spatial interactions and temporal lags across the sand sheets.
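
    As a concrete illustration of the generic Fuzzy Cognitive Map machinery the abstract builds on, the sketch below iterates concept activations through a signed weight matrix with a sigmoid squashing function until they settle. The three concepts and the edge weights are illustrative assumptions, not the paper's calibrated South Texas model.

```python
import math

def fcm_step(weights, state):
    """One FCM iteration: a_i <- sigmoid(a_i + sum_{j!=i} w[j][i] * a_j)."""
    n = len(state)
    return [1.0 / (1.0 + math.exp(-(state[i] +
            sum(weights[j][i] * state[j] for j in range(n) if j != i))))
            for i in range(n)]

def fcm_run(weights, state, tol=1e-6, max_iter=500):
    """Iterate until the activations settle (the map's inference fixed point)."""
    for _ in range(max_iter):
        nxt = fcm_step(weights, state)
        if max(abs(a - b) for a, b in zip(nxt, state)) < tol:
            return nxt
        state = nxt
    return state

# Illustrative concepts: [vegetation cover, wind exposure, surface instability].
W = [[0.0, -0.4, -0.6],   # vegetation suppresses exposure and instability
     [-0.3, 0.0,  0.7],   # exposure reduces vegetation, drives instability
     [0.0,  0.0,  0.0]]   # instability is treated as an output concept here
print(fcm_run(W, [0.8, 0.3, 0.1]))
```

    With modest edge weights the sigmoid keeps the iteration contractive, so the map converges to a single activation vector that can be read as a fuzzy "instability" rating for that input scenario.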

  20. A coordination theory for intelligent machines

    NASA Technical Reports Server (NTRS)

    Wang, Fei-Yue; Saridis, George N.

    1990-01-01

    A formal model for the coordination level of intelligent machines is established. The framework of the coordination level investigated consists of one dispatcher and a number of coordinators. The model called coordination structure has been used to describe analytically the information structure and information flow for the coordination activities in the coordination level. Specifically, the coordination structure offers a formalism to (1) describe the task translation of the dispatcher and coordinators; (2) represent the individual process within the dispatcher and coordinators; (3) specify the cooperation and connection among the dispatcher and coordinators; (4) perform the process analysis and evaluation; and (5) provide a control and communication mechanism for the real-time monitor or simulation of the coordination process. A simple procedure for the task scheduling in the coordination structure is presented. The task translation is achieved by a stochastic learning algorithm. The learning process is measured with entropy and its convergence is guaranteed. Finally, a case study of the coordination structure with three coordinators and one dispatcher for a simple intelligent manipulator system illustrates the proposed model and the simulation of the task processes performed on the model verifies the soundness of the theory.
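
    The abstract's stochastic learning algorithm, whose progress is "measured with entropy," can be illustrated with a standard linear reward-inaction automaton. The sketch below is an assumed stand-in (the environment's success rates, learning rate, and seed are invented for illustration), with the Shannon entropy of the action probabilities as the convergence measure.

```python
import math
import random

def entropy(p):
    """Shannon entropy (bits) of a probability vector."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def learn(success_rates, rate=0.05, steps=3000, seed=0):
    """Linear reward-inaction automaton: on success, shift probability mass
    toward the chosen action; on failure, leave the probabilities unchanged."""
    rng = random.Random(seed)
    p = [1.0 / len(success_rates)] * len(success_rates)
    for _ in range(steps):
        a = rng.choices(range(len(p)), weights=p)[0]
        if rng.random() < success_rates[a]:
            p = [x + rate * (1.0 - x) if i == a else x * (1.0 - rate)
                 for i, x in enumerate(p)]
    return p

uniform = [1 / 3, 1 / 3, 1 / 3]
p = learn([0.2, 0.9, 0.4])
print(entropy(uniform), entropy(p))  # entropy shrinks as mass concentrates
```

    Each reward update conserves total probability exactly, so the entropy decrease reflects genuine concentration of the action distribution rather than normalization drift.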

  1. Multivariate neural biomarkers of emotional states are categorically distinct.

    PubMed

    Kragel, Philip A; LaBar, Kevin S

    2015-11-01

    Understanding how emotions are represented neurally is a central aim of affective neuroscience. Despite decades of neuroimaging efforts addressing this question, it remains unclear whether emotions are represented as distinct entities, as predicted by categorical theories, or are constructed from a smaller set of underlying factors, as predicted by dimensional accounts. Here, we capitalize on multivariate statistical approaches and computational modeling to directly evaluate these theoretical perspectives. We elicited discrete emotional states using music and films during functional magnetic resonance imaging scanning. Distinct patterns of neural activation predicted the emotion category of stimuli and tracked subjective experience. Bayesian model comparison revealed that combining dimensional and categorical models of emotion best characterized the information content of activation patterns. Surprisingly, categorical and dimensional aspects of emotion experience captured unique and opposing sources of neural information. These results indicate that diverse emotional states are poorly differentiated by simple models of valence and arousal, and that activity within separable neural systems can be mapped to unique emotion categories. © The Author (2015). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  2. 3D digital headform models of Australian cyclists.

    PubMed

    Ellena, Thierry; Skals, Sebastian; Subic, Aleksandar; Mustafa, Helmy; Pang, Toh Yen

    2017-03-01

    Traditional 1D anthropometric data have been the primary source of information used by ergonomists for the dimensioning of head and facial gear. Although these data are simple to use and understand, they only provide univariate measures of key dimensions. 3D anthropometric data, however, describe the complete shape characteristics of the head surface, but are complicated to interpret due to the abundance of information they contain. Consequently, current headform standards based on 1D measurements may not adequately represent the actual head shape variations of the intended user groups. The purpose of this study was to introduce a set of new digital headform models representative of the adult cyclists' community in Australia. Four models were generated based on an Australian 3D anthropometric database of head shapes and a modified hierarchical clustering algorithm. Considerable shape differences were identified between our models and the current headforms from the Australian standard. We conclude that the design of head and facial gear based on current standards might not be favorable for optimal fitting results. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. The polygonal model: A simple representation of biomolecules as a tool for teaching metabolism.

    PubMed

    Bonafe, Carlos Francisco Sampaio; Bispo, Jose Ailton Conceição; de Jesus, Marcelo Bispo

    2018-01-01

    Metabolism involves numerous reactions and organic compounds that the student must master to understand adequately the processes involved. Part of biochemical learning should include some knowledge of the structure of biomolecules, although the acquisition of such knowledge can be time-consuming and may require significant effort from the student. In this report, we describe the "polygonal model" as a new means of graphically representing biomolecules. This model is based on the use of geometric figures such as open triangles, squares, and circles to represent hydroxyl, carbonyl, and carboxyl groups, respectively. The usefulness of the polygonal model was assessed by undergraduate students in a classroom activity that consisted of "transforming" molecules from Fischer models to polygonal models and vice versa. The survey was applied to 135 undergraduate Biology and Nursing students. Students found the model easy to use, and we noted that it allowed identification of students' misconceptions about basic concepts of organic chemistry, such as stereochemistry and organic groups, which could then be corrected. The students considered the polygonal model easier and faster for representing molecules than Fischer representations, without loss of information. These findings indicate that the polygonal model can facilitate the teaching of metabolism when the structures of biomolecules are discussed. Overall, the polygonal model promoted contact with chemical structures, e.g. through drawing activities, and encouraged student-student dialog, thereby facilitating biochemical learning. © 2017 by The International Union of Biochemistry and Molecular Biology, 46(1):66-75, 2018.

  4. Learning molecular energies using localized graph kernels

    DOE PAGES

    Ferré, Grégoire; Haut, Terry Scot; Barros, Kipton Marcos

    2017-03-21

    We report that recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. Finally, we benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.
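
    The random walk graph kernel at the heart of GRAPE can be illustrated with a minimal sketch: walks are counted on the direct-product (Kronecker) graph of two adjacency matrices via a truncated geometric series. The decay factor, truncation length, and toy adjacency matrices below are illustrative assumptions, not the paper's parameterization.

```python
def kron(a, b):
    """Kronecker product of two square matrices given as lists of lists."""
    m = len(b)
    size = len(a) * m
    return [[a[i // m][j // m] * b[i % m][j % m]
             for j in range(size)] for i in range(size)]

def random_walk_kernel(a, b, lam=0.1, max_len=20):
    """Truncated geometric random-walk kernel:
    k(A, B) = sum_{p=0}^{max_len} lam^p * 1^T (A kron B)^p 1."""
    w = kron(a, b)
    size = len(w)
    vec = [1.0] * size  # uniform start/stop weights on product-graph nodes
    total, coeff = 0.0, 1.0
    for _ in range(max_len + 1):
        total += coeff * sum(vec)  # adds lam^p * (number of length-p walks)
        vec = [sum(w[i][j] * vec[j] for j in range(size)) for i in range(size)]
        coeff *= lam
    return total

bond = [[0, 1], [1, 0]]                    # two-atom "environment"
chain = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]  # three-atom chain
print(random_walk_kernel(bond, bond), random_walk_kernel(bond, chain))
```

    Because the kernel sums over all entries of powers of the product-graph adjacency matrix, it is automatically invariant to relabeling (permutation) of the atoms in either environment, which is the symmetry property the abstract emphasizes.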

  5. A Simple Principled Approach for Modeling and Understanding Uniform Color Metrics

    PubMed Central

    Smet, Kevin A.G.; Webster, Michael A.; Whitehead, Lorne A.

    2016-01-01

    An important goal in characterizing human color vision is to order color percepts in a way that captures their similarities and differences. This has resulted in the continuing evolution of “uniform color spaces,” in which the distances within the space represent the perceptual differences between the stimuli. While these metrics are now very successful in predicting how color percepts are scaled, they do so in largely empirical, ad hoc ways, with limited reference to actual mechanisms of color vision. In this article our aim is to instead begin with general and plausible assumptions about color coding, and then develop a model of color appearance that explicitly incorporates them. We show that many of the features of empirically-defined color order systems (such as those of Munsell, Pantone, NCS, and others) as well as many of the basic phenomena of color perception, emerge naturally from fairly simple principles of color information encoding in the visual system and how it can be optimized for the spectral characteristics of the environment. PMID:26974939

  6. Learning molecular energies using localized graph kernels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferré, Grégoire; Haut, Terry Scot; Barros, Kipton Marcos

    We report that recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. Finally, we benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.

  7. Sea-ice floe-size distribution in the context of spontaneous scaling emergence in stochastic systems.

    PubMed

    Herman, Agnieszka

    2010-06-01

    Sea-ice floe-size distribution (FSD) in ice-pack covered seas influences many aspects of ocean-atmosphere interactions. However, data concerning FSD in the polar oceans are still sparse and processes shaping the observed FSD properties are poorly understood. Typically, power-law FSDs are assumed, although no feasible explanation has been provided either for this or for other properties of the observed distributions. Consequently, no model exists capable of predicting FSD parameters in any particular situation. Here I show that the observed FSDs can be well represented by a truncated Pareto distribution P(x) = x^(-1-α) exp[(1-α)/x], which is an emergent property of a certain group of multiplicative stochastic systems, described by the generalized Lotka-Volterra (GLV) equation. Building upon this recognition, a possibility of developing a simple agent-based GLV-type sea-ice model is considered. Contrary to simple power-law FSDs, GLV gives consistent estimates of the total floe perimeter, as well as floe-area distribution in agreement with observations.
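
    A minimal numerical sketch of the truncated Pareto density quoted above, P(x) = x^(-1-α) exp[(1-α)/x]: the code normalizes it on a finite interval and shows the small-floe damping relative to a pure power law. The value of α and the integration bounds are illustrative assumptions, not fitted FSD parameters.

```python
import math

def unnormalized_density(x, alpha):
    """Truncated Pareto form P(x) = x^(-1-alpha) * exp[(1 - alpha) / x]."""
    return x ** (-1.0 - alpha) * math.exp((1.0 - alpha) / x)

def normalized_density(alpha, x_min=0.1, x_max=100.0, n=100_000):
    """Normalize the density on [x_min, x_max] by trapezoidal integration."""
    step = (x_max - x_min) / n
    ys = [unnormalized_density(x_min + i * step, alpha) for i in range(n + 1)]
    area = sum((ys[i] + ys[i + 1]) * 0.5 * step for i in range(n))
    return lambda x: unnormalized_density(x, alpha) / area

pdf = normalized_density(alpha=1.5)
# For alpha > 1 the exponential factor damps the smallest floes and tends to 1
# for large x, where the density approaches the pure power law x^(-1-alpha).
print(pdf(0.2), pdf(1.0), pdf(10.0))
```

    The damped small-x behavior is what distinguishes this form from a simple power law and what lets it yield finite, consistent estimates of quantities like total floe perimeter.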

  8. Sea-ice floe-size distribution in the context of spontaneous scaling emergence in stochastic systems

    NASA Astrophysics Data System (ADS)

    Herman, Agnieszka

    2010-06-01

    Sea-ice floe-size distribution (FSD) in ice-pack covered seas influences many aspects of ocean-atmosphere interactions. However, data concerning FSD in the polar oceans are still sparse and processes shaping the observed FSD properties are poorly understood. Typically, power-law FSDs are assumed, although no feasible explanation has been provided either for this or for other properties of the observed distributions. Consequently, no model exists capable of predicting FSD parameters in any particular situation. Here I show that the observed FSDs can be well represented by a truncated Pareto distribution P(x) = x^(-1-α) exp[(1-α)/x], which is an emergent property of a certain group of multiplicative stochastic systems, described by the generalized Lotka-Volterra (GLV) equation. Building upon this recognition, a possibility of developing a simple agent-based GLV-type sea-ice model is considered. Contrary to simple power-law FSDs, GLV gives consistent estimates of the total floe perimeter, as well as floe-area distribution in agreement with observations.

  9. Capturing tumor complexity in vitro: Comparative analysis of 2D and 3D tumor models for drug discovery.

    PubMed

    Stock, Kristin; Estrada, Marta F; Vidic, Suzana; Gjerde, Kjersti; Rudisch, Albin; Santo, Vítor E; Barbier, Michaël; Blom, Sami; Arundkar, Sharath C; Selvam, Irwin; Osswald, Annika; Stein, Yan; Gruenewald, Sylvia; Brito, Catarina; van Weerden, Wytske; Rotter, Varda; Boghaert, Erwin; Oren, Moshe; Sommergruber, Wolfgang; Chong, Yolanda; de Hoogt, Ronald; Graeser, Ralph

    2016-07-01

    Two-dimensional (2D) cell cultures growing on plastic do not recapitulate the three dimensional (3D) architecture and complexity of human tumors. More representative models are required for drug discovery and validation. Here, 2D culture and 3D mono- and stromal co-culture models of increasing complexity have been established and cross-comparisons made using three standard carcinoma cell lines: MCF7, LNCaP, and NCI-H1437. Fluorescence-based growth curves, 3D image analysis, immunohistochemistry and treatment responses showed that end points differed according to cell type, stromal co-culture and culture format. The adaptable methodologies described here should guide the choice of appropriate simple and complex in vitro models.

  10. Capturing tumor complexity in vitro: Comparative analysis of 2D and 3D tumor models for drug discovery

    PubMed Central

    Stock, Kristin; Estrada, Marta F.; Vidic, Suzana; Gjerde, Kjersti; Rudisch, Albin; Santo, Vítor E.; Barbier, Michaël; Blom, Sami; Arundkar, Sharath C.; Selvam, Irwin; Osswald, Annika; Stein, Yan; Gruenewald, Sylvia; Brito, Catarina; van Weerden, Wytske; Rotter, Varda; Boghaert, Erwin; Oren, Moshe; Sommergruber, Wolfgang; Chong, Yolanda; de Hoogt, Ronald; Graeser, Ralph

    2016-01-01

    Two-dimensional (2D) cell cultures growing on plastic do not recapitulate the three dimensional (3D) architecture and complexity of human tumors. More representative models are required for drug discovery and validation. Here, 2D culture and 3D mono- and stromal co-culture models of increasing complexity have been established and cross-comparisons made using three standard carcinoma cell lines: MCF7, LNCaP, and NCI-H1437. Fluorescence-based growth curves, 3D image analysis, immunohistochemistry and treatment responses showed that end points differed according to cell type, stromal co-culture and culture format. The adaptable methodologies described here should guide the choice of appropriate simple and complex in vitro models. PMID:27364600

  11. A simple analogue of lung mechanics.

    PubMed

    Sherman, T F

    1993-12-01

    A model of the chest and lungs can be easily constructed from a bottle of water, a balloon, a syringe, a rubber stopper, glass and rubber tubing, and clamps. The model is a more exact analogue of the body than the classic apparatus of Hering in two respects: 1) the pleurae and intrapleural fluid are represented by water rather than air, and 2) the subatmospheric "intrapleural" pressure is created by the elasticity of the "lung" (balloon) rather than by a vacuum pump. With this model, students can readily see how the lung is inflated and deflated by movements of the "diaphragm and chest" (syringe plunger) and how intrapleural pressures change as this is accomplished.

  12. Computational methods and traveling wave solutions for the fourth-order nonlinear Ablowitz-Kaup-Newell-Segur water wave dynamical equation via two methods and its applications

    NASA Astrophysics Data System (ADS)

    Ali, Asghar; Seadawy, Aly R.; Lu, Dianchen

    2018-05-01

    The aim of this article is to construct some new traveling wave solutions and investigate localized structures for the fourth-order nonlinear Ablowitz-Kaup-Newell-Segur (AKNS) water wave dynamical equation. The simple equation method (SEM) and the modified simple equation method (MSEM) are applied in this paper to construct the analytical traveling wave solutions of the AKNS equation. Different wave solutions are derived by assigning special values to the parameters. The obtained results are important in physics and other areas of the applied sciences. All solutions are also represented graphically. The constructed results are helpful for studying new localized structures and wave interactions in high-dimensional models.

  13. Testing the Paradigm that Ultra-Luminous X-Ray Sources as a Class Represent Accreting Intermediate

    NASA Technical Reports Server (NTRS)

    Berghea, C. T.; Weaver, K. A.; Colbert, E. J. M.; Roberts, T. P.

    2008-01-01

    To test the idea that ultraluminous X-ray sources (ULXs) in external galaxies represent a class of accreting intermediate-mass black holes (IMBHs), we have undertaken a program to identify ULXs and a lower luminosity X-ray comparison sample with the highest quality data in the Chandra archive. We establish as a general property of ULXs that the most X-ray luminous objects possess the flattest X-ray spectra (in the Chandra bandpass). No prior sample studies have established the general hardening of ULX spectra with luminosity. This hardening occurs at the highest luminosities (absorbed luminosity >= 5 × 10^39 erg/s) and is in line with recent models arguing that ULXs are actually stellar-mass black holes. From spectral modeling, we show that the evidence originally taken to mean that ULXs are IMBHs - i.e., the "simple IMBH model" - is nowhere near as compelling when a large sample of ULXs is looked at properly. During the last couple of years, XMM-Newton spectroscopy of ULXs has to a large extent begun to negate the simple IMBH model based on fewer objects. We confirm and expand these results, which validates the XMM-Newton work in a broader sense with independent X-ray data. We find (1) that cool disk components are present with roughly equal probability and total flux fraction for any given ULX, regardless of luminosity, and (2) that cool disk components extend below the standard ULX luminosity cutoff of 10^39 erg/s, down to our sample limit of 10^38.3 erg/s. The fact that cool disk components are not correlated with luminosity damages the argument that cool disks indicate IMBHs in ULXs, for which strong statistical support was never found.

  14. Testing the Paradigm that Ultraluminous X-Ray Sources as a Class Represent Accreting Intermediate-Mass Black Holes

    NASA Astrophysics Data System (ADS)

    Berghea, C. T.; Weaver, K. A.; Colbert, E. J. M.; Roberts, T. P.

    2008-11-01

    To test the idea that ultraluminous X-ray sources (ULXs) in external galaxies represent a class of accreting intermediate-mass black holes (IMBHs), we have undertaken a program to identify ULXs and a lower luminosity X-ray comparison sample with the highest quality data in the Chandra archive. We establish as a general property of ULXs that the most X-ray-luminous objects possess the flattest X-ray spectra (in the Chandra bandpass). No prior sample studies have established the general hardening of ULX spectra with luminosity. This hardening occurs at the highest luminosities (absorbed luminosity >= 5 × 10^39 erg s^-1) and is in line with recent models arguing that ULXs are actually stellar mass black holes. From spectral modeling, we show that the evidence originally taken to mean that ULXs are IMBHs—i.e., the "simple IMBH model"—is nowhere near as compelling when a large sample of ULXs is looked at properly. During the last couple of years, XMM-Newton spectroscopy of ULXs has to a large extent begun to negate the simple IMBH model based on fewer objects. We confirm and expand these results, which validates the XMM-Newton work in a broader sense with independent X-ray data. We find that (1) cool-disk components are present with roughly equal probability and total flux fraction for any given ULX, regardless of luminosity, and (2) cool-disk components extend below the standard ULX luminosity cutoff of 10^39 erg s^-1, down to our sample limit of 10^38.3 erg s^-1. The fact that cool-disk components are not correlated with luminosity damages the argument that cool disks indicate IMBHs in ULXs, for which strong statistical support was never found.

  15. Everyday Engineering: What Makes a Bic Click?

    ERIC Educational Resources Information Center

    Moyer, Richard; Everett, Susan

    2009-01-01

    The ballpoint pen is an ideal example of simple engineering that we use every day. But is it really so simple? The ballpoint pen is a remarkable combination of technology and science. Its operation uses several scientific principles related to chemistry and physics, such as properties of liquids and simple machines. They represent significant…

  16. A simple distributed sediment delivery approach for rural catchments

    NASA Astrophysics Data System (ADS)

    Reid, Lucas; Scherer, Ulrike

    2014-05-01

    The transfer of sediments from source areas to surface waters is a complex process. In process-based erosion models sediment input is thus quantified by representing all relevant subprocesses such as detachment, transport and deposition of sediment particles along the flow path to the river. A successful application of these models requires, however, a large amount of spatially highly resolved data on physical catchment characteristics, which is only available for a few, well-examined small catchments. In the absence of appropriate models, the empirical Universal Soil Loss Equation (USLE) is widely applied to quantify the sediment production in meso- to large-scale basins. As the USLE provides long-term mean soil loss rates, it is often combined with spatially lumped models to estimate the sediment delivery ratio (SDR). In these models, the SDR is related to data on morphological characteristics of the catchment such as average local relief, drainage density, proportion of depressions or soil texture. Some approaches include the relative distance between sediment source areas and the river channels. However, several studies showed that spatially lumped parameters describing the morphological characteristics are only of limited value to represent the factors of influence on sediment transport at the catchment scale. Sediment delivery is controlled by the location of the sediment source areas in the catchment and the morphology along the flow path to the surface water bodies. This complex interaction of spatially varied physiographic characteristics cannot be adequately represented by lumped morphological parameters. The objective of this study is to develop a simple but spatially distributed approach to quantify the sediment delivery ratio by considering the characteristics of the flow paths in a catchment. We selected a small catchment located in an intensively cultivated loess region in Southwest Germany as the study area for developing the SDR approach. The flow pathways were extracted in a geographic information system, and the sediment delivery ratio for each source area was then determined using an empirical approach considering the slope, morphology and land use properties along the flow path. As a benchmark for the calibration of the model parameters we used results of a detailed process-based erosion model available for the study area. Afterwards the approach was tested in larger catchments located in the same loess region.

  17. Nonlinear Friction Compensation of Ball Screw Driven Stage Based on Variable Natural Length Spring Model and Disturbance Observer

    NASA Astrophysics Data System (ADS)

    Asaumi, Hiroyoshi; Fujimoto, Hiroshi

    Ball screw driven stages are used in industrial equipment such as machine tools and semiconductor manufacturing equipment. Fast and precise positioning is necessary to enhance the productivity and microfabrication capability of such systems. The rolling friction of the ball screw driven stage deteriorates positioning performance, so a control system based on a friction model is necessary. In this paper, we propose the variable natural length spring model (VNLS model) as the friction model. The VNLS model is simple and easy to implement as a friction controller. Next, we propose the multi variable natural length spring model (MVNLS model), which can represent the friction characteristics of the stage precisely. Moreover, a control system based on the MVNLS model and a disturbance observer is proposed. Finally, simulation and experimental results show the advantages of the proposed method.
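
    A minimal sketch of the general "spring with a variable natural length" idea: in pre-sliding, rolling friction acts like a spring whose natural length is dragged along once the deflection reaches a break-away limit. This is a generic Dahl/bristle-style simplification; the stiffness and deflection limit are illustrative assumptions, not the paper's identified parameters.

```python
def vnls_friction(positions, stiffness=50.0, max_deflection=0.02):
    """Return the spring-like friction force at each stage position.

    The spring's natural length x0 is dragged along with the stage
    whenever the deflection |x - x0| would exceed max_deflection."""
    x0 = positions[0]
    forces = []
    for x in positions:
        deflection = x - x0
        if deflection > max_deflection:        # forward break-away: drag x0
            x0 = x - max_deflection
            deflection = max_deflection
        elif deflection < -max_deflection:     # reverse break-away: drag x0
            x0 = x + max_deflection
            deflection = -max_deflection
        forces.append(stiffness * deflection)
    return forces

# Forward stroke then reversal: the force saturates, then unwinds and flips.
xs = [0.0, 0.01, 0.03, 0.05, 0.04, 0.02]
print(vnls_friction(xs))
```

    The hysteresis this produces (force depends on motion history, not just position) is the qualitative behavior a friction compensator for a ball screw stage has to cancel.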

  18. A comparison of quantitative methods for clinical imaging with hyperpolarized (13)C-pyruvate.

    PubMed

    Daniels, Charlie J; McLean, Mary A; Schulte, Rolf F; Robb, Fraser J; Gill, Andrew B; McGlashan, Nicholas; Graves, Martin J; Schwaiger, Markus; Lomas, David J; Brindle, Kevin M; Gallagher, Ferdia A

    2016-04-01

    Dissolution dynamic nuclear polarization (DNP) enables the metabolism of hyperpolarized (13)C-labelled molecules, such as the conversion of [1-(13)C]pyruvate to [1-(13)C]lactate, to be dynamically and non-invasively imaged in tissue. Imaging of this exchange reaction in animal models has been shown to detect early treatment response and correlate with tumour grade. The first human DNP study has recently been completed, and, for widespread clinical translation, simple and reliable methods are necessary to accurately probe the reaction in patients. However, there is currently no consensus on the most appropriate method to quantify this exchange reaction. In this study, an in vitro system was used to compare several kinetic models, as well as simple model-free methods. Experiments were performed using a clinical hyperpolarizer, a human 3 T MR system, and spectroscopic imaging sequences. The quantitative methods were compared in vivo by using subcutaneous breast tumours in rats to examine the effect of pyruvate inflow. The two-way kinetic model was the most accurate method for characterizing the exchange reaction in vitro, and the incorporation of a Heaviside step inflow profile was best able to describe the in vivo data. The lactate time-to-peak and the lactate-to-pyruvate area under the curve ratio were simple model-free approaches that accurately represented the full reaction, with the time-to-peak method performing indistinguishably from the best kinetic model. Finally, extracting data from a single pixel was a robust and reliable surrogate of the whole region of interest. This work has identified appropriate quantitative methods for future work in the analysis of human hyperpolarized (13)C data. © 2016 The Authors. NMR in Biomedicine published by John Wiley & Sons Ltd.
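
    The model-free measures compared in the abstract (lactate time-to-peak and the lactate-to-pyruvate AUC ratio) can be sketched on curves from a minimal two-site exchange model, dP/dt = inflow(t) + kLP·L − (kPL + r)·P and dL/dt = kPL·P − (kLP + r)·L, integrated by forward Euler. All rate constants, the relaxation rate r, and the step (Heaviside-style) inflow window are illustrative assumptions, not fitted values from the study.

```python
def simulate(kpl=0.05, klp=0.01, r=0.02, inflow_rate=1.0,
             inflow_end=10.0, dt=0.05, t_max=120.0):
    """Forward-Euler integration of two-way exchange with a step inflow."""
    p = l = t = 0.0
    times, ps, ls = [], [], []
    while t <= t_max:
        times.append(t)
        ps.append(p)
        ls.append(l)
        inflow = inflow_rate if t < inflow_end else 0.0  # step inflow window
        p += dt * (inflow + klp * l - (kpl + r) * p)     # pyruvate pool
        l += dt * (kpl * p - (klp + r) * l)              # lactate pool
        t += dt
    return times, ps, ls

times, ps, ls = simulate()
auc_ratio = sum(ls) / sum(ps)            # lactate-to-pyruvate AUC ratio
time_to_peak = times[ls.index(max(ls))]  # lactate time-to-peak
print(round(auc_ratio, 3), round(time_to_peak, 2))
```

    Both metrics fall out of the simulated curves with one line each, which is the practical appeal of the model-free approaches relative to fitting the full kinetic model.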

  19. A semiparametric spatio-temporal model for solar irradiance data

    DOE PAGES

    Patrick, Joshua D.; Harvill, Jane L.; Hansen, Clifford W.

    2016-03-01

    Here, we evaluate semiparametric spatio-temporal models for global horizontal irradiance at high spatial and temporal resolution. These models represent the spatial domain as a lattice and are capable of predicting irradiance at lattice points, given data measured at other lattice points. Using data from a 1.2 MW PV plant located in Lanai, Hawaii, we show that a semiparametric model can be more accurate than simple interpolation between sensor locations. We investigate spatio-temporal models with separable and nonseparable covariance structures and find no evidence to support assuming a separable covariance structure. These results indicate a promising approach for modeling irradiance at high spatial resolution consistent with available ground-based measurements. Moreover, this kind of modeling may find application in design, valuation, and operation of fleets of utility-scale photovoltaic power systems.

  20. Diagnosing the impact of alternative calibration strategies on coupled hydrologic models

    NASA Astrophysics Data System (ADS)

    Smith, T. J.; Perera, C.; Corrigan, C.

    2017-12-01

    Hydrologic models represent a significant tool for understanding, predicting, and responding to the impacts of water on society and society on water resources and, as such, are used extensively in water resources planning and management. Given this important role, the validity and fidelity of hydrologic models is imperative. While extensive focus has been paid to improving hydrologic models through better process representation, better parameter estimation, and better uncertainty quantification, significant challenges remain. In this study, we explore a number of competing model calibration scenarios for simple, coupled snowmelt-runoff models to better understand the sensitivity / variability of parameterizations and its impact on model performance, robustness, fidelity, and transferability. Our analysis highlights the sensitivity of coupled snowmelt-runoff model parameterizations to alterations in calibration approach, underscores the concept of information content in hydrologic modeling, and provides insight into potential strategies for improving model robustness / fidelity.

  1. The effect of a turbulent wake on the stagnation point. I - Skin friction results

    NASA Technical Reports Server (NTRS)

    Wilson, Dennis E.; Hanford, Anthony J.

    1990-01-01

    The response of a boundary layer in the stagnation region of a two-dimensional body to fluctuations in the freestream is examined. The analysis is restricted to laminar incompressible flow. The assumed form of the velocity distribution at the edge of the boundary layer represents both a pulsation of the incoming flow, and an oscillation of the stagnation point streamline. Both features are essential in accurately representing the effect which freestream spatial and temporal nonuniformities have upon the unsteady boundary layer. Finally, a simple model is proposed which relates the characteristic parameters in a turbulent wake to the unsteady boundary-layer edge velocity. Numerical results are presented for both an arbitrary two-dimensional geometry and a circular cylinder.

  2. A gel as an array of channels.

    PubMed

    Zimm, B H

    1996-06-01

    We consider the theory of charged point molecules ('probes') being pulled by an electric field through a two-dimensional net of channels that represents a piece of gel. Associated with the position in the net is a free energy of interaction between the probe and the net; this free energy fluctuates randomly with the position of the probe in the net. The free energy is intended to represent weak interactions between the probe and the gel, such as entropy associated with the restriction of the freedom of motion of the probe by the gel, or electrostatic interactions between the probe and charges fixed to the gel. The free energy can be thought of as a surface with the appearance of a rough, hilly landscape spread over the net; the roughness is measured by the standard deviation of the free-energy distribution. Two variations of the model are examined: (1) the net is assumed to have all channels open, or (2) only channels parallel to the electric field are open and all the cross-connecting channels are closed. Model (1) is more realistic but presents a two-dimensional mathematical problem which can only be solved by slow iteration methods, while model (2) is less realistic but presents a one-dimensional problem that can be reduced to simple quadratures and is easy to solve by numerical integration. In both models the mobility of the probe decreases as the roughness parameter is increased, but the effect is larger in the less realistic model (2) if the same free-energy surface is used in both. The mobility in model (2) is reduced both by high points in the rough surface ('bumps') and by low points ('traps'), while in model (1) only the traps are effective, since the probes can flow around the bumps through the cross channels. The mobility in model (2) can be made to agree with model (1) simply by cutting off the bumps of the surface. Thus the simple model (2) can be used in place of the more realistic model (1) that is more difficult to compute.
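    The one-dimensional reduction in model (2) can be illustrated with a standard rough-potential result (Zwanzig-type), which is an assumption of this sketch rather than the paper's own quadrature: for a single open channel with site free energies F, the relative mobility behaves like N²/(Σe^{+F/kT} · Σe^{-F/kT}), so both bumps (+F) and traps (-F) slow the probe, and for Gaussian roughness of standard deviation s it tends to exp(-(s/kT)²).

```python
import math
import random

def relative_mobility(landscape, kT=1.0):
    """Mobility relative to a smooth channel for a 1D rough landscape.

    Penalizes both bumps (exp(+F/kT)) and traps (exp(-F/kT)); equals 1
    only when the landscape is flat (Cauchy-Schwarz inequality).
    """
    n = len(landscape)
    up = sum(math.exp(+f / kT) for f in landscape)
    down = sum(math.exp(-f / kT) for f in landscape)
    return n * n / (up * down)

random.seed(0)
for sigma in (0.0, 0.5, 1.0, 1.5):
    f = [random.gauss(0.0, sigma) for _ in range(20000)]
    print(f"roughness {sigma:.1f} kT -> mobility {relative_mobility(f):.3f} "
          f"(Gaussian limit {math.exp(-sigma**2):.3f})")
```

    The monotone decrease of mobility with the roughness parameter mirrors the qualitative behaviour reported for both models.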

  3. A modified impulse-response representation of the global near-surface air temperature and atmospheric concentration response to carbon dioxide emissions

    NASA Astrophysics Data System (ADS)

    Millar, Richard J.; Nicholls, Zebedee R.; Friedlingstein, Pierre; Allen, Myles R.

    2017-06-01

    Projections of the response to anthropogenic emission scenarios, evaluation of some greenhouse gas metrics, and estimates of the social cost of carbon often require a simple model that links emissions of carbon dioxide (CO2) to atmospheric concentrations and global temperature changes. An essential requirement of such a model is to reproduce typical global surface temperature and atmospheric CO2 responses displayed by more complex Earth system models (ESMs) under a range of emission scenarios, as well as an ability to sample the range of ESM response in a transparent, accessible and reproducible form. Here we adapt the simple model of the Intergovernmental Panel on Climate Change 5th Assessment Report (IPCC AR5) to explicitly represent the state dependence of the CO2 airborne fraction. Our adapted model (FAIR) reproduces the range of behaviour shown in full and intermediate complexity ESMs under several idealised carbon pulse and exponential concentration increase experiments. We find that the inclusion of a linear increase in 100-year integrated airborne fraction with cumulative carbon uptake and global temperature change substantially improves the representation of the response of the climate system to CO2 on a range of timescales and under a range of experimental designs.
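    The AR5-style pulse-response sum and the 100-year integrated airborne fraction (iIRF100) that FAIR rescales with a state-dependent factor alpha can be sketched as follows. The pool fractions and timescales are the widely quoted AR5/Joos-et-al. values, used here as illustrative inputs; check them against the paper before reuse.

```python
import math

A = [0.2173, 0.2240, 0.2824, 0.2763]       # carbon pool fractions (sum to 1)
TAU = [float("inf"), 394.4, 36.54, 4.304]  # e-folding timescales in years

def irf(t, alpha=1.0):
    """Fraction of a CO2 pulse remaining airborne after t years."""
    return sum(a * math.exp(-t / (alpha * tau)) if math.isfinite(tau) else a
               for a, tau in zip(A, TAU))

def iirf100(alpha=1.0):
    """100-year integral of the IRF (years), the quantity FAIR targets.

    FAIR chooses alpha so that this integral tracks a linear function of
    cumulative carbon uptake and temperature change.
    """
    total = 0.0
    for a, tau in zip(A, TAU):
        if math.isfinite(tau):
            total += a * alpha * tau * (1.0 - math.exp(-100.0 / (alpha * tau)))
        else:
            total += a * 100.0
    return total

print(f"airborne fraction after 100 yr: {irf(100.0):.3f}")
print(f"iIRF100 = {iirf100():.1f} yr; with alpha=1.5: {iirf100(1.5):.1f} yr")
```

    Increasing alpha stretches all timescales, raising iIRF100; this is the single knob through which the state dependence of the airborne fraction enters.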

  4. Prediction of plastic instabilities under thermo-mechanical loadings in tension and simple shear

    NASA Astrophysics Data System (ADS)

    Manach, P. Y.; Mansouri, L. F.; Thuillier, S.

    2016-08-01

    Plastic instabilities such as the Portevin-Le Châtelier effect have been thoroughly investigated experimentally in tension, over a large range of strain rates and temperatures. Such instabilities are characterized both by a jerky flow and a localization of the strain in bands. Similar phenomena were also recorded for example in simple shear [1]. Modelling of this phenomenon is mainly performed at room temperature, taking into account the strain rate sensitivity, though an extension of the classical Estrin-Kubin-McCormick model was proposed in the literature, by making some of the material parameters dependent on temperature. A similar approach is considered in this study, furthermore extended for anisotropic plasticity with Hill's 1948 yield criterion. Material parameters are identified at 4 different temperatures, ranging from room temperature up to 250°C. The identification procedure is split into 3 steps, related to the elasticity, the average stress level and the magnitude of the stress drops. The anisotropy is considered constant in this temperature range, as evidenced by experimental results [2]. The model is then used to investigate the temperature dependence of the critical strain, as well as its capability to represent the propagation of the bands. Numerical predictions of the instabilities in tension and simple shear at room temperature and up to 250°C are compared with experimental results [3]. In the case of simple shear, a monotonic loading followed by unloading and reloading in the reverse direction (“Bauschinger-type” test) is also considered, showing that (i) kinematic hardening should be taken into account to fully describe the transition at re-yielding and (ii) the modelling of the critical strain has to be improved.
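    The Hill (1948) yield criterion used here for anisotropic plasticity has a compact plane-stress form. The anisotropy coefficients below are illustrative; F = G = H = 0.5, N = 1.5 recovers the isotropic von Mises case, which makes the sketch easy to check.

```python
import math

def hill48(sxx, syy, sxy, F=0.5, G=0.5, H=0.5, N=1.5):
    """Hill (1948) equivalent stress under plane stress.

    sqrt((G+H)*sxx^2 - 2*H*sxx*syy + (F+H)*syy^2 + 2*N*sxy^2);
    the defaults reduce it to the von Mises equivalent stress.
    """
    return math.sqrt((G + H) * sxx**2 - 2.0 * H * sxx * syy
                     + (F + H) * syy**2 + 2.0 * N * sxy**2)

# Uniaxial tension along x at 200 MPa: equals the applied stress when G+H = 1.
print(f"uniaxial: {hill48(200.0, 0.0, 0.0):.1f} MPa")
# Simple shear at 100 MPa: sqrt(2N)*tau = sqrt(3)*100 in the von Mises case.
print(f"shear:    {hill48(0.0, 0.0, 100.0):.1f} MPa")
```

    In the study's setting, F, G, H, N would be identified from the measured anisotropy and then held constant across the temperature range.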

  5. Responses to atmospheric CO2 concentrations in crop simulation models: a review of current simple and semicomplex representations and options for model development.

    PubMed

    Vanuytrecht, Eline; Thorburn, Peter J

    2017-05-01

    Elevated atmospheric CO2 concentrations ([CO2]) cause direct changes in crop physiological processes (e.g. photosynthesis and stomatal conductance). To represent these CO2 responses, commonly used crop simulation models have been amended, using simple and semicomplex representations of the processes involved. Yet, there is no standard approach to and often poor documentation of these developments. This study used a bottom-up approach (starting with the APSIM framework as case study) to evaluate modelled responses in a consortium of commonly used crop models and illuminate whether variation in responses reflects true uncertainty in our understanding compared to arbitrary choices of model developers. Diversity in simulated CO2 responses and limited validation were common among models, both within the APSIM framework and more generally. Whereas production responses show some consistency up to moderately high [CO2] (around 700 ppm), transpiration and stomatal responses vary more widely in nature and magnitude (e.g. a decrease in stomatal conductance varying between 35% and 90% among models was found for [CO2] doubling to 700 ppm). Most notably, nitrogen responses were found to be included in few crop models despite being commonly observed and critical for the simulation of photosynthetic acclimation, crop nutritional quality and carbon allocation. We suggest harmonization and consideration of more mechanistic concepts in particular subroutines, for example, for the simulation of N dynamics, as a way to improve our predictive understanding of CO2 responses and capture secondary processes. Intercomparison studies could assist in this aim, provided that they go beyond simple output comparison and explicitly identify the representations and assumptions that are causal for intermodel differences. Additionally, validation and proper documentation of the representation of CO2 responses within models should be prioritized. © 2017 John Wiley & Sons Ltd.

  6. A geometric modeler based on a dual-geometry representation polyhedra and rational b-splines

    NASA Technical Reports Server (NTRS)

    Klosterman, A. L.

    1984-01-01

    For speed and data base reasons, solid geometric modeling of large complex practical systems is usually approximated by a polyhedra representation. Precise parametric surface and implicit algebraic modelers are available but it is not yet practical to model the same level of system complexity with these precise modelers. In response to this contrast the GEOMOD geometric modeling system was built so that a polyhedra abstraction of the geometry would be available for interactive modeling without losing the precise definition of the geometry. Part of the reason that polyhedra modelers are effective is that all bounded surfaces can be represented in a single canonical format (i.e., sets of planar polygons). This permits a very simple and compact data structure. Nonuniform rational B-splines are currently the best representation to describe a very large class of geometry precisely with one canonical format. The specific capabilities of the modeler are described.

  7. The biomechanics of an overarm throwing task: a simulation model examination of optimal timing of muscle activations.

    PubMed

    Chowdhary, A G; Challis, J H

    2001-07-07

    A series of overarm throws, constrained to the parasagittal plane, were simulated using a muscle-actuated two-segment model representing the forearm and hand plus projectile. The parameters defining the modeled muscles and the anthropometry of the two-segment models were specific to the two young male subjects. All simulations commenced from a position of full elbow flexion and full wrist extension. The study was designed to elucidate the optimal inter-muscular coordination strategies for throwing projectiles to achieve maximum range, as well as maximum projectile kinetic energy for a variety of projectile masses. A proximal to distal (PD) sequence of muscle activations was seen in many of the simulated throws but not all. Under certain conditions moment reversal produced a longer throw and greater projectile energy, and deactivation of the muscles resulted in increased projectile energy. Therefore, simple timing of muscle activation does not fully describe the patterns of muscle recruitment which can produce optimal throws. The models of the two subjects required different timings of muscle activations, and for some of the tasks used different coordination patterns. Optimal strategies were found to vary with the mass of the projectile, the anthropometry and the muscle characteristics of the subjects modeled. The tasks examined were relatively simple, but basic rules for coordinating these tasks were not evident. Copyright 2001 Academic Press.

  8. Hierarchical modeling for reliability analysis using Markov models. B.S./M.S. Thesis - MIT

    NASA Technical Reports Server (NTRS)

    Fagundo, Arturo

    1994-01-01

    Markov models represent an extremely attractive tool for the reliability analysis of many systems. However, Markov model state space grows exponentially with the number of components in a given system. Thus, for very large systems Markov modeling techniques alone become intractable in both memory and CPU time. Often a particular subsystem can be found within some larger system where the dependence of the larger system on the subsystem is of a particularly simple form. This simple dependence can be used to decompose such a system into one or more subsystems. A hierarchical technique is presented which can be used to evaluate these subsystems in such a way that their reliabilities can be combined to obtain the reliability for the full system. This hierarchical approach is unique in that it allows the subsystem model to pass multiple aggregate state information to the higher level model, allowing more general systems to be evaluated. Guidelines are developed to assist in the system decomposition. An appropriate method for determining subsystem reliability is also developed. This method gives rise to some interesting numerical issues. Numerical error due to roundoff and integration are discussed at length. Once a decomposition is chosen, the remaining analysis is straightforward but tedious. However, an approach is developed for simplifying the recombination of subsystem reliabilities. Finally, a real world system is used to illustrate the use of this technique in a more practical context.
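    A toy illustration of the kind of subsystem evaluation discussed above (not the thesis's hierarchical method itself): a two-component parallel subsystem as a three-state continuous-time Markov chain, integrated with forward Euler and compared against the closed-form reliability R(t) = 2e^{-λt} - e^{-2λt}. The failure rate and step size are illustrative, and the comparison shows the integration error the thesis discusses at length.

```python
import math

def reliability_euler(lam, t_end, dt=1e-3):
    """P(system not failed) for a 2-component parallel system.

    States: both components up (p2), one up (p1), system failed (p0);
    transitions 2->1 at rate 2*lam and 1->0 at rate lam.
    """
    p2, p1, p0 = 1.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        d2 = -2.0 * lam * p2
        d1 = 2.0 * lam * p2 - lam * p1
        d0 = lam * p1
        p2, p1, p0 = p2 + dt * d2, p1 + dt * d1, p0 + dt * d0
    return p2 + p1

lam, t = 0.01, 100.0
exact = 2.0 * math.exp(-lam * t) - math.exp(-2.0 * lam * t)
approx = reliability_euler(lam, t)
print(f"exact R({t:g}) = {exact:.6f}, Euler = {approx:.6f}")
```

    In a hierarchical decomposition, a subsystem reliability like this one would be passed upward as aggregate state information to the higher-level model.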

  9. The continuum fusion theory of signal detection applied to a bi-modal fusion problem

    NASA Astrophysics Data System (ADS)

    Schaum, A.

    2011-05-01

    A new formalism has been developed that produces detection algorithms for model-based problems in which one or more parameter values are unknown. Continuum Fusion can be used to generate different flavors of algorithm for any composite hypothesis testing problem. The methodology is defined by a fusion logic that can be translated into max/min conditions. Here it is applied to a simple sensor fusion model, but one for which the generalized likelihood ratio test is intractable. By contrast, a fusion-based response to the same problem can be devised that is solvable in closed form and represents a good approximation to the GLR test.

  10. Bilinearity, Rules, and Prefrontal Cortex

    PubMed Central

    Dayan, Peter

    2007-01-01

    Humans can be instructed verbally to perform computationally complex cognitive tasks; their performance then improves relatively slowly over the course of practice. Many skills underlie these abilities; in this paper, we focus on the particular question of a uniform architecture for the instantiation of habitual performance and the storage, recall, and execution of simple rules. Our account builds on models of gated working memory, and involves a bilinear architecture for representing conditional input-output maps and for matching rules to the state of the input and working memory. We demonstrate the performance of our model on two paradigmatic tasks used to investigate prefrontal and basal ganglia function. PMID:18946523

  11. A flowgraph model for bladder carcinoma

    PubMed Central

    2014-01-01

    Background Superficial bladder cancer has been the subject of numerous studies for many years, but the evolution of the disease still remains not well understood. After the tumor has been surgically removed, it may reappear at a similar level of malignancy or progress to a higher level. The process may be reasonably modeled by means of a Markov process. However, in order to more completely model the evolution of the disease, this approach is insufficient. The semi-Markov framework allows a more realistic approach, but calculations become frequently intractable. In this context, flowgraph models provide an efficient approach to successfully manage the evolution of superficial bladder carcinoma. Our aim is to test this methodology in this particular case. Results We have built a successful model for a simple but representative case. Conclusion The flowgraph approach is suitable for modeling of superficial bladder cancer. PMID:25080066

  12. A neural network model of foraging decisions made under predation risk.

    PubMed

    Coleman, Scott L; Brown, Vincent R; Levine, Daniel S; Mellgren, Roger L

    2005-12-01

    This article develops the cognitive-emotional forager (CEF) model, a novel application of a neural network to dynamical processes in foraging behavior. The CEF is based on a neural network known as the gated dipole, introduced by Grossberg, which is capable of representing short-term affective reactions in a manner similar to Solomon and Corbit's (1974) opponent process theory. The model incorporates a trade-off between approach toward food and avoidance of predation under varying levels of motivation induced by hunger. The results of simulations in a simple patch selection paradigm, using a lifetime fitness criterion for comparison, indicate that the CEF model is capable of nearly optimal foraging and outperforms a run-of-luck rule-of-thumb model. Models such as the one presented here can illuminate the underlying cognitive and motivational components of animal decision making.

  13. A New Model of Jupiter's Magnetic Field From Juno's First Nine Orbits

    NASA Astrophysics Data System (ADS)

    Connerney, J. E. P.; Kotsiaros, S.; Oliversen, R. J.; Espley, J. R.; Joergensen, J. L.; Joergensen, P. S.; Merayo, J. M. G.; Herceg, M.; Bloxham, J.; Moore, K. M.; Bolton, S. J.; Levin, S. M.

    2018-03-01

    A spherical harmonic model of the magnetic field of Jupiter is obtained from vector magnetic field observations acquired by the Juno spacecraft during its first nine polar orbits about the planet. Observations acquired during eight of these orbits provide the first truly global coverage of Jupiter's magnetic field with a coarse longitudinal separation of 45° between perijoves. The magnetic field is represented with a degree 20 spherical harmonic model for the planetary ("internal") field, combined with a simple model of the magnetodisc for the field ("external") due to distributed magnetospheric currents. Partial solution of the underdetermined inverse problem using generalized inverse techniques yields a model ("Juno Reference Model through Perijove 9") of the planetary magnetic field with spherical harmonic coefficients well determined through degree and order 10, providing the first detailed view of a planetary dynamo beyond Earth.
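    The degree-1 (dipole) term of the spherical harmonic representation used by JRM09 can be written out explicitly; the full model carries the same structure through degree and order 20 (well determined to 10). The Gauss coefficients below are illustrative round numbers of a Jupiter-like magnitude, not JRM09 values.

```python
import math

def dipole_field(r_over_a, theta, phi, g10, g11, h11):
    """Degree-1 field components (units of the Gauss coefficients).

    theta is colatitude, phi east longitude, r_over_a = r / planetary radius;
    returns (B_r, B_theta, B_phi) from the n=1 internal-field terms.
    """
    c = (1.0 / r_over_a) ** 3
    m = g11 * math.cos(phi) + h11 * math.sin(phi)
    br = 2.0 * c * (g10 * math.cos(theta) + m * math.sin(theta))
    bt = c * (g10 * math.sin(theta) - m * math.cos(theta))
    bp = c * (g11 * math.sin(phi) - h11 * math.cos(phi))
    return br, bt, bp

# Axial dipole of 400,000 nT (Jupiter-like order of magnitude) at the pole:
br, bt, bp = dipole_field(1.0, 0.0, 0.0, g10=400_000.0, g11=0.0, h11=0.0)
print(f"B_r at pole = {br:,.0f} nT")  # twice g10 for a pure axial dipole
```

    Higher-degree terms add Schmidt-normalized associated Legendre functions in the same way, each attenuated by a further factor of (a/r).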

  14. A simple method for EEG guided transcranial electrical stimulation without models.

    PubMed

    Cancelli, Andrea; Cottone, Carlo; Tecchio, Franca; Truong, Dennis Q; Dmochowski, Jacek; Bikson, Marom

    2016-06-01

    There is longstanding interest in using EEG measurements to inform transcranial Electrical Stimulation (tES) but adoption is lacking because users need a simple and adaptable recipe. The conventional approach is to use anatomical head-models for both source localization (the EEG inverse problem) and current flow modeling (the tES forward model), but this approach is computationally demanding, requires an anatomical MRI, and strict assumptions about the target brain regions. We evaluate techniques whereby tES dose is derived from EEG without the need for an anatomical head model, target assumptions, difficult case-by-case conjecture, or many stimulation electrodes. We developed a simple two-step approach to EEG-guided tES that, based on the topography of the EEG, (1) selects locations to be used for stimulation and (2) determines the current applied to each electrode. Each step is performed based solely on the EEG with no need for head models or source localization. Cortical dipoles represent idealized brain targets. EEG-guided tES strategies are verified using a finite element method simulation of the EEG generated by a dipole, oriented either tangential or radial to the scalp surface, and then simulating the tES-generated electric field produced by each model-free technique. These model-free approaches are compared to a 'gold standard' numerically optimized dose of tES that assumes perfect understanding of the dipole location and head anatomy. We vary the number of electrodes from a few to over three hundred, with focality or intensity as optimization criterion. Model-free approaches evaluated include (1) voltage-to-voltage, (2) voltage-to-current; (3) Laplacian; and two Ad-Hoc techniques (4) dipole sink-to-sink; and (5) sink to concentric. Our results demonstrate that simple ad hoc approaches can achieve reasonable targeting for the case of a cortical dipole, remarkably with only 2-8 electrodes and no need for a model of the head.
Our approach is verified directly only for a theoretically localized source, but may be potentially applied to an arbitrary EEG topography. For its simplicity and linearity, our recipe for model-free EEG guided tES lends itself to broad adoption and can be applied to static (tDCS), time-variant (e.g., tACS, tRNS, tPCS), or closed-loop tES.
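    The "voltage-to-current" idea listed above can be sketched directly: set each electrode's stimulation current proportional to the EEG voltage recorded there, rescaled so the currents sum to zero (a physical requirement of tES) and the total source current matches a dose limit. The sign convention and normalization here are assumptions of this sketch, not the paper's exact recipe.

```python
def voltage_to_current(voltages_uV, total_mA=2.0):
    """Map scalp voltages to zero-sum electrode currents (mA).

    Mean-centering enforces zero net injected current; the positive
    (source) currents are scaled to the requested total dose.
    """
    n = len(voltages_uV)
    mean = sum(voltages_uV) / n
    centered = [v - mean for v in voltages_uV]
    pos = sum(c for c in centered if c > 0)
    scale = total_mA / pos if pos > 0 else 0.0
    return [c * scale for c in centered]

currents = voltage_to_current([12.0, -5.0, 3.0, -10.0])
print(["%+.3f" % i for i in currents])
print(f"net current = {sum(currents):+.6f} mA")
```

    Like the other model-free techniques, this uses only the EEG topography: no head model, source localization, or target assumption enters the mapping.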

  15. A simple method for EEG guided transcranial electrical stimulation without models

    NASA Astrophysics Data System (ADS)

    Cancelli, Andrea; Cottone, Carlo; Tecchio, Franca; Truong, Dennis Q.; Dmochowski, Jacek; Bikson, Marom

    2016-06-01

    Objective. There is longstanding interest in using EEG measurements to inform transcranial Electrical Stimulation (tES) but adoption is lacking because users need a simple and adaptable recipe. The conventional approach is to use anatomical head-models for both source localization (the EEG inverse problem) and current flow modeling (the tES forward model), but this approach is computationally demanding, requires an anatomical MRI, and strict assumptions about the target brain regions. We evaluate techniques whereby tES dose is derived from EEG without the need for an anatomical head model, target assumptions, difficult case-by-case conjecture, or many stimulation electrodes. Approach. We developed a simple two-step approach to EEG-guided tES that, based on the topography of the EEG, (1) selects locations to be used for stimulation and (2) determines the current applied to each electrode. Each step is performed based solely on the EEG with no need for head models or source localization. Cortical dipoles represent idealized brain targets. EEG-guided tES strategies are verified using a finite element method simulation of the EEG generated by a dipole, oriented either tangential or radial to the scalp surface, and then simulating the tES-generated electric field produced by each model-free technique. These model-free approaches are compared to a ‘gold standard’ numerically optimized dose of tES that assumes perfect understanding of the dipole location and head anatomy. We vary the number of electrodes from a few to over three hundred, with focality or intensity as optimization criterion. Main results. Model-free approaches evaluated include (1) voltage-to-voltage, (2) voltage-to-current; (3) Laplacian; and two Ad-Hoc techniques (4) dipole sink-to-sink; and (5) sink to concentric. Our results demonstrate that simple ad hoc approaches can achieve reasonable targeting for the case of a cortical dipole, remarkably with only 2-8 electrodes and no need for a model of the head.
Significance. Our approach is verified directly only for a theoretically localized source, but may be potentially applied to an arbitrary EEG topography. For its simplicity and linearity, our recipe for model-free EEG guided tES lends itself to broad adoption and can be applied to static (tDCS), time-variant (e.g., tACS, tRNS, tPCS), or closed-loop tES.

  16. From K giants to G dwarfs: stellar lifetime effects on metallicity distributions derived from red giants

    NASA Astrophysics Data System (ADS)

    Manning, Ellen M.; Cole, Andrew A.

    2017-11-01

    We examine the biases inherent to chemical abundance distributions when targets are selected from the red giant branch (RGB), using simulated giant branches created from isochrones. We find that even when stars are chosen from the entire colour range of RGB stars and over a broad range of magnitudes, the relative numbers of stars of different ages and metallicities, integrated over all stellar types, are not accurately represented in the giant branch sample. The result is that metallicity distribution functions derived from RGB star samples require a correction before they can be fitted by chemical evolution models. We derive simple correction factors for over- and under-represented populations for the limiting cases of single-age populations with a broad range of metallicities and of continuous star formation at constant metallicity; an important general conclusion is that intermediate-age populations (≈1-4 Gyr) are over-represented in RGB samples. We apply our models to the case of the Large Magellanic Cloud bar and show that the observed metallicity distribution underestimates the true number of metal-poor stars by more than 25 per cent; as a result, the inferred importance of gas flows in chemical evolution models could potentially be overestimated. The age- and metallicity-dependences of RGB lifetimes require careful modelling if they are not to lead to spurious conclusions about the chemical enrichment history of galaxies.

  17. Investigation of the ionospheric Faraday rotation for use in orbit corrections

    NASA Technical Reports Server (NTRS)

    Llewellyn, S. K.; Bent, R. B.; Nesterczuk, G.

    1974-01-01

    The possibility of mapping the Faraday factors on a worldwide basis was examined as a simple method of representing the conversion factors for any possible user. However, this does not seem feasible. The complex relationship between the true magnetic coordinates and the geographic latitude, longitude, and azimuth angles eliminates the possibility of setting up some simple tables that would yield worldwide results of sufficient accuracy. Tabular results for specific stations can easily be produced or could be represented in graphic form.

  18. [A simple model for describing pressure-volume curves in free balloon dilatation with reference to the dynamics of inflation: hydraulic aspects].

    PubMed

    Bloss, P; Werner, C

    2000-06-01

    We propose a simple model to describe pressure-time and pressure-volume curves for the free balloon (balloon in air) of balloon catheters, taking into account the dynamics of the inflation device. On the basis of our investigations of the flow rate-dependence of characteristic parameters of the pressure-time curves, the appropriateness of this simple model is demonstrated using a representative example. Basic considerations lead to the following assumptions: (i) the flow within the shaft of the catheter is laminar, and (ii) the volume decrease of the liquid used for inflation due to pressurization can be neglected if the liquid is carefully degassed prior to inflation, and if the total volume of the liquid in the system is less than 2 ml. Taking into account the dynamics of the inflation device used for pumping the liquid into the proximal end of the shaft during inflation, the inflation process can be subdivided into the following three phases: initial phase, filling phase and dilatation phase. For these three phases, the transformation of the time into the volume coordinates is given. On the basis of our model, the following parameters of the balloon catheter can be determined from a measured pressure-time curve: (1) the resistance to flow of the liquid through the shaft of the catheter and the resulting pressure drop across the shaft, (2) the residual volume and residual pressure of the balloon, and (3) the volume compliance of the balloon catheter with and without the inflation device.
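    Assumption (i), laminar flow in the catheter shaft, implies a Hagen-Poiseuille pressure drop across the shaft, Δp = 8μLQ/(πr⁴), which is the resistance the model extracts from measured curves. The shaft dimensions, viscosity, and flow rate below are illustrative values, not the paper's.

```python
import math

def shaft_pressure_drop(mu_Pa_s, length_m, radius_m, flow_m3_s):
    """Laminar (Hagen-Poiseuille) pressure drop in pascals."""
    return 8.0 * mu_Pa_s * length_m * flow_m3_s / (math.pi * radius_m**4)

mu = 1.0e-3          # water-like viscosity, Pa*s (assumed)
L = 1.0              # shaft length, m (assumed)
r = 0.25e-3          # lumen radius, m (assumed)
Q = 1.0e-6 / 60.0    # 1 mL/min converted to m^3/s

dp = shaft_pressure_drop(mu, L, r, Q)
print(f"pressure drop across shaft ~ {dp / 1000.0:.1f} kPa")
```

    The fourth-power dependence on lumen radius is why the shaft, not the balloon, dominates the flow resistance at clinical inflation rates.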

  19. Age estimation standards for a Western Australian population using the coronal pulp cavity index.

    PubMed

    Karkhanis, Shalmira; Mack, Peter; Franklin, Daniel

    2013-09-10

    Age estimation is a vital aspect in creating a biological profile and aids investigators by narrowing down potentially matching identities from the available pool. In addition to routine casework, in the present global political scenario, age estimation in living individuals is required in cases of refugees, asylum seekers, human trafficking and to ascertain age of criminal responsibility. Thus robust methods that are simple, non-invasive and ethically viable are required. The aim of the present study is, therefore, to test the reliability and applicability of the coronal pulp cavity index method, for the purpose of developing age estimation standards for an adult Western Australian population. A total of 450 orthopantomograms (220 females and 230 males) of Australian individuals were analyzed. Crown and coronal pulp chamber heights were measured in the mandibular left and right premolars, and the first and second molars. These measurements were then used to calculate the tooth coronal index. Data were analyzed using paired-sample t-tests to assess bilateral asymmetry, followed by simple linear and multiple regressions to develop age estimation models. The most accurate age estimation based on a simple linear regression model was with the mandibular right first molar (SEE ±8.271 years). Multiple regression models improved age prediction accuracy considerably and the most accurate model was with bilateral first and second molars (SEE ±6.692 years). This study represents the first investigation of this method in a Western Australian population and our results indicate that the method is suitable for forensic application. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
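    The workflow above (compute the tooth coronal index, fit a simple linear age model, report the standard error of the estimate) can be sketched as follows. The measurements are synthetic and only the procedure mirrors the study.

```python
import math

def fit_line(x, y):
    """Least-squares slope and intercept for y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    return b, my - b * mx

def see(x, y, slope, intercept):
    """Standard error of the estimate, with n - 2 degrees of freedom."""
    resid = [yi - (slope * xi + intercept) for xi, yi in zip(x, y)]
    return math.sqrt(sum(r * r for r in resid) / (len(x) - 2))

# Synthetic TCI values (100 * coronal pulp chamber height / crown height)
# paired with synthetic ages; TCI falls with age as the pulp chamber shrinks.
tci = [38.0, 35.0, 31.0, 28.0, 24.0, 21.0]
age = [22.0, 28.0, 35.0, 41.0, 50.0, 57.0]
b, a = fit_line(tci, age)
print(f"age = {a:.1f} + {b:.2f} * TCI, SEE = +/-{see(tci, age, b, a):.2f} yr")
```

    The study's multiple-regression models extend this by pooling TCI values from several teeth as predictors, which is how the SEE improved from ±8.3 to ±6.7 years.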

  20. Effective Biot theory and its generalization to poroviscoelastic models

    NASA Astrophysics Data System (ADS)

    Liu, Xu; Greenhalgh, Stewart; Zhou, Bing; Greenhalgh, Mark

    2018-02-01

    A method is suggested to express the effective bulk modulus of the solid frame of a poroelastic material as a function of the saturated bulk modulus. This method enables effective Biot theory to be described through the use of seismic dispersion measurements or other models developed for the effective saturated bulk modulus. The effective Biot theory is generalized to a poroviscoelastic model whose moduli are represented by the relaxation functions of the generalized fractional Zener model. The latter covers the general Zener and the Cole-Cole models as special cases. A global search method is described to determine the parameters of the relaxation functions, and a simple deterministic method is also developed to find the defining parameters of the single Cole-Cole model. These methods enable poroviscoelastic models to be constructed, which are based on measured seismic attenuation functions, and ensure that the model dispersion characteristics match the observations.

  1. SCS-CN based time-distributed sediment yield model

    NASA Astrophysics Data System (ADS)

    Tyagi, J. V.; Mishra, S. K.; Singh, Ranvir; Singh, V. P.

    2008-05-01

    A sediment yield model is developed to estimate the temporal rates of sediment yield from rainfall events on natural watersheds. The model utilizes the SCS-CN based infiltration model for computation of rainfall-excess rate, and the SCS-CN-inspired proportionality concept for computation of sediment-excess. For computation of sedimentographs, the sediment-excess is routed to the watershed outlet using a single linear reservoir technique. Analytical development of the model shows that the ratio of the potential maximum erosion (A) to the potential maximum retention (S) of the SCS-CN method is constant for a watershed. The model is calibrated and validated on a number of events using the data of seven watersheds from India and the USA. Representative values of the A/S ratio computed for the watersheds from calibration are used for the validation of the model. The encouraging results of the proposed simple four-parameter model exhibit its potential for field application.
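    The SCS-CN rainfall-excess step referred to above is standard; a minimal metric-unit sketch assuming the usual initial abstraction Ia = 0.2S (the paper's A/S proportionality for sediment is analogous but not reproduced here):

```python
def scs_cn_runoff(p_mm, cn, ia_ratio=0.2):
    """SCS-CN rainfall-excess (runoff) depth in mm for a storm rainfall P."""
    s = 25400.0 / cn - 254.0   # potential maximum retention S (mm), metric form
    ia = ia_ratio * s          # initial abstraction before runoff begins
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

q = scs_cn_runoff(p_mm=100.0, cn=75.0)  # roughly 41 mm of rainfall excess
```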

  2. The IDEA model: A single equation approach to the Ebola forecasting challenge.

    PubMed

    Tuite, Ashleigh R; Fisman, David N

    2018-03-01

    Mathematical modeling is increasingly accepted as a tool that can inform disease control policy in the face of emerging infectious diseases, such as the 2014-2015 West African Ebola epidemic, but little is known about the relative performance of alternate forecasting approaches. The RAPIDD Ebola Forecasting Challenge (REFC) tested the ability of eight mathematical models to generate useful forecasts in the face of simulated Ebola outbreaks. We used a simple, phenomenological single-equation model (the "IDEA" model), which relies only on case counts, in the REFC. Model fits were performed using a maximum likelihood approach. We found that the model performed reasonably well relative to other more complex approaches, with performance metrics ranked on average 4th or 5th among participating models. IDEA appeared better suited to long- than short-term forecasts, and could be fit using nothing but reported case counts. Several limitations were identified, including difficulty in identifying epidemic peak (even retrospectively), unrealistically precise confidence intervals, and difficulty interpolating daily case counts when using a model scaled to epidemic generation time. More realistic confidence intervals were generated when case counts were assumed to follow a negative binomial, rather than Poisson, distribution. Nonetheless, IDEA represents a simple phenomenological model, easily implemented in widely available software packages that could be used by frontline public health personnel to generate forecasts with accuracy that approximates that which is achieved using more complex methodologies. Copyright © 2016 The Author(s). Published by Elsevier B.V. All rights reserved.
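    For reference, a commonly cited form of the IDEA equation gives incident cases at epidemic generation t as I(t) = (R0 / (1+d)^t)^t, where d is the discounting (control) parameter; a sketch with illustrative parameter values, not fitted REFC values:

```python
def idea_incidence(t, r0, d):
    """Incident cases at epidemic generation t under one common IDEA form."""
    return (r0 / (1.0 + d) ** t) ** t

# With R0 = 2 and d = 0.05 the curve rises, peaks, and decays:
curve = [idea_incidence(t, r0=2.0, d=0.05) for t in range(30)]
```

    Because the model is scaled to epidemic generation time rather than days, interpolating daily counts requires extra assumptions, which is one of the limitations the abstract notes.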

  3. Motor and sensory neuropathy due to myelin infolding and paranodal damage in a transgenic mouse model of Charcot–Marie–Tooth disease type 1C

    PubMed Central

    Lee, Samuel M.; Sha, Di; Mohammed, Anum A.; Asress, Seneshaw; Glass, Jonathan D.; Chin, Lih-Shen; Li, Lian

    2013-01-01

    Charcot–Marie–Tooth disease type 1C (CMT1C) is a dominantly inherited motor and sensory neuropathy. Despite human genetic evidence linking missense mutations in SIMPLE to CMT1C, the in vivo role of CMT1C-linked SIMPLE mutations remains undetermined. To investigate the molecular mechanism underlying CMT1C pathogenesis, we generated transgenic mice expressing either wild-type or CMT1C-linked W116G human SIMPLE. Mice expressing mutant, but not wild-type, SIMPLE develop a late-onset motor and sensory neuropathy that recapitulates key clinical features of CMT1C disease. SIMPLE mutant mice exhibit motor and sensory behavioral impairments accompanied by decreased motor and sensory nerve conduction velocity and reduced compound muscle action potential amplitude. This neuropathy phenotype is associated with focally infolded myelin loops that protrude into the axons at paranodal regions and near Schmidt–Lanterman incisures of peripheral nerves. We find that myelin infolding is often linked to constricted axons with signs of impaired axonal transport and to paranodal defects and abnormal organization of the node of Ranvier. Our findings support the conclusion that the SIMPLE mutation disrupts myelin homeostasis and causes peripheral neuropathy via a combination of toxic gain-of-function and dominant-negative mechanisms. The results from this study suggest that myelin infolding and paranodal damage may represent pathogenic precursors preceding demyelination and axonal degeneration in CMT1C patients. PMID:23359569

  4. Late Quaternary glacier sensitivity to temperature and precipitation distribution in the Southern Alps of New Zealand

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ann V. Rowan; Simon H. Brocklehurst; David M. Schultz

    2014-05-01

    Glaciers respond to climate variations and leave geomorphic evidence that represents an important terrestrial paleoclimate record. However, the accuracy of paleoclimate reconstructions from glacial geology is limited by the challenge of representing mountain meteorology in numerical models. Precipitation is usually treated in a simple manner and yet represents difficult-to-characterize variables such as amount, distribution, and phase. Furthermore, precipitation distributions during a glacial probably differed from present-day interglacial patterns. We applied two models to investigate glacier sensitivity to temperature and precipitation in the eastern Southern Alps of New Zealand. A 2-D model was used to quantify variations in the length of the reconstructed glaciers resulting from plausible precipitation distributions compared to variations in length resulting from change in mean annual air temperature and precipitation amount. A 1-D model was used to quantify variations in length resulting from interannual climate variability. Assuming that present-day interglacial values represent precipitation distributions during the last glacial, a range of plausible present-day precipitation distributions resulted in uncertainty in the Last Glacial Maximum length of the Pukaki Glacier of 17.1 km (24%) and the Rakaia Glacier of 9.3 km (25%), corresponding to a 0.5°C difference in temperature. Smaller changes in glacier length resulted from a 50% decrease in precipitation amount from present-day values (-14% and -18%) and from a 50% increase in precipitation amount (5% and 9%). Our results demonstrate that precipitation distribution can produce considerable variation in simulated glacier extents and that reconstructions of paleoglaciers should include this uncertainty.

  5. Rheological Properties of Natural Subduction Zone Interface: Insights from "Digital" Griggs Experiments

    NASA Astrophysics Data System (ADS)

    Ioannidi, P. I.; Le Pourhiet, L.; Moreno, M.; Agard, P.; Oncken, O.; Angiboust, S.

    2017-12-01

    The physical nature of plate locking and its relation to surface deformation patterns at different time scales (e.g. GPS displacements during the seismic cycle) can be better understood by determining the rheological parameters of the subduction interface. However, since direct rheological measurements are not possible, finite element modelling helps to determine the effective rheological parameters of the subduction interface. We used the open source finite element code pTatin to create 2D models, starting with a homogeneous medium representing shearing at the subduction interface. We tested several boundary conditions that mimic simple shear and opted for the one that best describes Griggs-type simple shear experiments. After examining different parameters, such as shearing velocity, temperature and viscosity, we added complexity to the geometry by including a second phase. This arises from field observations, where shear zone outcrops are often composites of multiple phases: stronger crustal blocks embedded within a sedimentary and/or serpentinized matrix have been reported for several exhumed subduction zones. We implemented a simplified model to simulate simple shearing of a two-phase medium in order to quantify the effect of heterogeneous rheology on stress and strain localization. Preliminary results show different strengths in the models depending on the block-to-matrix ratio. We applied our method to outcrop scale block-in-matrix geometries and by sampling at different depths along exhumed former subduction interfaces, we expect to be able to provide effective friction and viscosity of a natural interface. In a next step, these effective parameters will be used as input into seismic cycle deformation models in an attempt to assess the possible signature of field geometries on the slip behaviour of the plate interface.

  6. A Simple Model Framework to Explore the Deeply Uncertain, Local Sea Level Response to Climate Change. A Case Study on New Orleans, Louisiana

    NASA Astrophysics Data System (ADS)

    Bakker, Alexander; Louchard, Domitille; Keller, Klaus

    2016-04-01

    Sea-level rise threatens many coastal areas around the world. The integrated assessment of potential adaptation and mitigation strategies requires a sound understanding of the upper tails and the major drivers of the uncertainties. Global warming causes sea-level to rise, primarily due to thermal expansion of the oceans and mass loss of the major ice sheets, smaller ice caps and glaciers. These components show distinctly different responses to temperature changes with respect to response time, threshold behavior, and local fingerprints. Projections of these different components are deeply uncertain. Projected uncertainty ranges strongly depend on (necessary) pragmatic choices and assumptions; e.g. on the applied climate scenarios, which processes to include and how to parameterize them, and on error structure of the observations. Competing assumptions are very hard to objectively weigh. Hence, uncertainties of sea-level response are hard to grasp in a single distribution function. The deep uncertainty can be better understood by making clear the key assumptions. Here we demonstrate this approach using a relatively simple model framework. We present a mechanistically motivated, but simple model framework that is intended to efficiently explore the deeply uncertain sea-level response to anthropogenic climate change. The model consists of 'building blocks' that represent the major components of sea-level response and its uncertainties, including threshold behavior. The framework's simplicity enables the simulation of large ensembles allowing for an efficient exploration of parameter uncertainty and for the simulation of multiple combined adaptation and mitigation strategies. The model framework can skilfully reproduce earlier major sea level assessments, but due to the modular setup it can also be easily utilized to explore high-end scenarios and the effect of competing assumptions and parameterizations.

  7. Generative Models of Segregation: Investigating Model-Generated Patterns of Residential Segregation by Ethnicity and Socioeconomic Status

    PubMed Central

    Fossett, Mark

    2011-01-01

    This paper considers the potential for using agent models to explore theories of residential segregation in urban areas. Results of generative experiments conducted using an agent-based simulation of segregation dynamics document that varying a small number of model parameters representing constructs from urban-ecological theories of segregation can generate a wide range of qualitatively distinct and substantively interesting segregation patterns. The results suggest how complex, macro-level patterns of residential segregation can arise from a small set of simple micro-level social dynamics operating within particular urban-demographic contexts. The promise and current limitations of agent simulation studies are noted and optimism is expressed regarding the potential for such studies to engage and contribute to the broader research literature on residential segregation. PMID:21379372

  8. Value of the distant future: Model-independent results

    NASA Astrophysics Data System (ADS)

    Katz, Yuri A.

    2017-01-01

    This paper shows that the model-independent account of correlations in an interest rate process or a log-consumption growth process leads to declining long-term tails of discount curves. Under the assumption of an exponentially decaying memory in fluctuations of risk-free real interest rates, I derive the analytical expression for an apt value of the long run discount factor and provide a detailed comparison of the obtained result with the outcome of the benchmark risk-free interest rate models. Utilizing the standard consumption-based model with an isoelastic power utility of the representative economic agent, I derive the non-Markovian generalization of the Ramsey discounting formula. The obtained analytical results, which allow simple calibration, may augment the rigorous cost-benefit and regulatory impact analysis of long-term environmental and infrastructure projects.
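    The qualitative mechanism behind declining long-term discount rates can be illustrated with the standard certainty-equivalent argument: averaging discount *factors* over uncertain rates drives the effective rate toward the lowest rate at long horizons. This is a textbook sketch of that effect, not the paper's non-Markovian derivation; the two-point rate distribution is illustrative:

```python
import math

def certainty_equivalent_rate(t, rates, weights):
    """Effective rate implied by averaging discount factors, not rates."""
    d = sum(w * math.exp(-r * t) for r, w in zip(rates, weights))
    return -math.log(d) / t

# Uncertain flat rate: 1% or 7% with equal probability (assumed values).
rates, weights = [0.01, 0.07], [0.5, 0.5]
r10 = certainty_equivalent_rate(10.0, rates, weights)    # near the mean rate
r200 = certainty_equivalent_rate(200.0, rates, weights)  # approaches the low rate
```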

  9. Using instability to reconfigure smart structures in a spring-mass model

    NASA Astrophysics Data System (ADS)

    Zhang, Jiaying; McInnes, Colin R.

    2017-07-01

    Multistable phenomena have long been used in mechanism design. In this paper a subset of unstable configurations of a smart structure model will be used to develop energy-efficient schemes to reconfigure the structure. This new concept for reconfiguration uses heteroclinic connections to transition the structure between different unstable equal-energy states. In an ideal structure model zero net energy input is required for the reconfiguration, compared to transitions between stable equilibria across a potential barrier. A simple smart structure model is first used to identify sets of equal-energy unstable configurations using dynamical systems theory. Dissipation is then added to be more representative of a practical structure. A range of strategies are then used to reconfigure the smart structure using heteroclinic connections with different approaches to handle dissipation.

  10. Ontology and modeling patterns for state-based behavior representation

    NASA Technical Reports Server (NTRS)

    Castet, Jean-Francois; Rozek, Matthew L.; Ingham, Michel D.; Rouquette, Nicolas F.; Chung, Seung H.; Kerzhner, Aleksandr A.; Donahue, Kenneth M.; Jenkins, J. Steven; Wagner, David A.; Dvorak, Daniel L.

    2015-01-01

    This paper provides an approach to capture state-based behavior of elements, that is, the specification of their state evolution in time, and the interactions amongst them. Elements can be components (e.g., sensors, actuators) or environments, and are characterized by state variables that vary with time. The behaviors of these elements, as well as interactions among them are represented through constraints on state variables. This paper discusses the concepts and relationships introduced in this behavior ontology, and the modeling patterns associated with it. Two example cases are provided to illustrate their usage, as well as to demonstrate the flexibility and scalability of the behavior ontology: a simple flashlight electrical model and a more complex spacecraft model involving instruments, power and data behaviors. Finally, an implementation in a SysML profile is provided.

  11. Research on Capacity Addition using Market Model with Transmission Congestion under Competitive Environment

    NASA Astrophysics Data System (ADS)

    Katsura, Yasufumi; Attaviriyanupap, Pathom; Kataoka, Yoshihiko

    In this research, the fundamental premises for deregulation of the electric power industry are reevaluated. The authors develop a simple model to represent a wholesale electricity market with a highly congested network. The model is developed by simplifying the power system and market in New York ISO, based on available 2004 New York ISO data with some estimation. Based on the developed model and construction cost data from the past, the economic impact of transmission line addition on market participants and the impact of deregulation on power plant additions under a market with transmission congestion are studied. Simulation results show that the market signals may fail to facilitate proper capacity additions, resulting in an undesirable cycle of over-construction and insufficient construction of capacity.

  12. Adaptation of an unstructured-mesh, finite-element ocean model to the simulation of ocean circulation beneath ice shelves

    NASA Astrophysics Data System (ADS)

    Kimura, Satoshi; Candy, Adam S.; Holland, Paul R.; Piggott, Matthew D.; Jenkins, Adrian

    2013-07-01

    Several different classes of ocean model are capable of representing floating glacial ice shelves. We describe the incorporation of ice shelves into Fluidity-ICOM, a nonhydrostatic finite-element ocean model with the capacity to utilize meshes that are unstructured and adaptive in three dimensions. This geometric flexibility offers several advantages over previous approaches. The model represents melting and freezing on all ice-shelf surfaces including vertical faces, treats the ice shelf topography as continuous rather than stepped, and does not require any smoothing of the ice topography or any of the additional parameterisations of the ocean mixed layer used in isopycnal or z-coordinate models. The model can also represent a water column that decreases to zero thickness at the 'grounding line', where the floating ice shelf is joined to its tributary ice streams. The model is applied to idealised ice-shelf geometries in order to demonstrate these capabilities. In these simple experiments, arbitrarily coarsening the mesh outside the ice-shelf cavity has little effect on the ice-shelf melt rate, while the mesh resolution within the cavity is found to be highly influential. Smoothing the vertical ice front results in faster flow along the smoothed ice front, allowing greater exchange with the ocean than in simulations with a realistic ice front. A vanishing water-column thickness at the grounding line has little effect in the simulations studied. We also investigate the response of ice shelf basal melting to variations in deep water temperature in the presence of salt stratification.

  13. Chemical consequences of the initial diffusional growth of cloud droplets - A clean marine case

    NASA Technical Reports Server (NTRS)

    Twohy, C. H.; Charlson, R. J.; Austin, P. H.

    1989-01-01

    A simple microphysical cloud parcel model and a simple representation of the background marine aerosol are used to predict the concentrations and compositions of droplets of various sizes near cloud base. The aerosol consists of an externally-mixed ammonium bisulfate accumulation mode and a sea-salt coarse particle mode. The difference in diffusional growth rates between the small and large droplets as well as the differences in composition between the two aerosol modes result in substantial differences in solute concentration and composition with size of droplets in the parcel. The chemistry of individual droplets is not, in general, representative of the bulk (volume-weighted mean) cloud water sample. These differences, calculated to occur early in the parcel's lifetime, should have important consequences for chemical reactions such as aqueous phase sulfate production.

  14. Winnerless competition principle and prediction of the transient dynamics in a Lotka-Volterra model

    NASA Astrophysics Data System (ADS)

    Afraimovich, Valentin; Tristan, Irma; Huerta, Ramon; Rabinovich, Mikhail I.

    2008-12-01

    Predicting the evolution of multispecies ecological systems is an intriguing problem. A sufficiently complex model with the necessary predicting power requires solutions that are structurally stable. Small variations of the system parameters should not qualitatively perturb its solutions. When one is interested in just asymptotic results of evolution (as time goes to infinity), then the problem has a straightforward mathematical image involving simple attractors (fixed points or limit cycles) of a dynamical system. However, for an accurate prediction of evolution, the analysis of transient solutions is critical. In this paper, in the framework of the traditional Lotka-Volterra model (generalized in some sense), we show that the transient solution representing multispecies sequential competition can be reproducible and predictable with high probability.

  15. Winnerless competition principle and prediction of the transient dynamics in a Lotka-Volterra model.

    PubMed

    Afraimovich, Valentin; Tristan, Irma; Huerta, Ramon; Rabinovich, Mikhail I

    2008-12-01

    Predicting the evolution of multispecies ecological systems is an intriguing problem. A sufficiently complex model with the necessary predicting power requires solutions that are structurally stable. Small variations of the system parameters should not qualitatively perturb its solutions. When one is interested in just asymptotic results of evolution (as time goes to infinity), then the problem has a straightforward mathematical image involving simple attractors (fixed points or limit cycles) of a dynamical system. However, for an accurate prediction of evolution, the analysis of transient solutions is critical. In this paper, in the framework of the traditional Lotka-Volterra model (generalized in some sense), we show that the transient solution representing multispecies sequential competition can be reproducible and predictable with high probability.
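    A generalized Lotka-Volterra competition system of the kind discussed can be integrated directly; with an asymmetric May-Leonard style competition matrix (illustrative parameters, not taken from the paper), dominance passes sequentially from one species to the next, which is the winnerless-competition picture:

```python
def glv_step(x, sigma, rho, dt):
    """One forward-Euler step of dx_i/dt = x_i * (sigma_i - sum_j rho_ij x_j)."""
    return [xi + dt * xi * (s - sum(r * xj for r, xj in zip(row, x)))
            for xi, s, row in zip(x, sigma, rho)]

# Cyclic asymmetric competition: each species can be invaded by the next one.
sigma = [1.0, 1.0, 1.0]
rho = [[1.0, 1.2, 0.8],
       [0.8, 1.0, 1.2],
       [1.2, 0.8, 1.0]]

x = [0.8, 0.1, 0.1]
leaders = []
for _ in range(20000):            # integrate to T = 200 with dt = 0.01
    x = glv_step(x, sigma, rho, 0.01)
    leaders.append(max(range(3), key=lambda i: x[i]))
```

    The recorded sequence of leading species reproduces the predictable transient order of sequential competition even though no single species wins asymptotically.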

  16. Concentrator optical characterization using computer mathematical modelling and point source testing

    NASA Technical Reports Server (NTRS)

    Dennison, E. W.; John, S. L.; Trentelman, G. F.

    1984-01-01

    The optical characteristics of a paraboloidal solar concentrator are analyzed using the intercept factor curve (a format for image data) to describe the results of a mathematical model and to represent reduced data from experimental testing. This procedure makes it possible not only to test an assembled concentrator, but also to evaluate single optical panels or to conduct non-solar tests of an assembled concentrator. The use of three-dimensional ray tracing computer programs to calculate the mathematical model is described. These ray tracing programs can include any type of optical configuration, from simple paraboloids to arrays of spherical facets, and can be adapted to microcomputers or larger computers, which can graphically display real-time comparison of calculated and measured data.
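    The core of a three-dimensional ray trace for an ideal paraboloid is a reflect-and-check step: a ray parallel to the optical axis must, after reflection, pass through the focal point. A self-contained sketch of that step (generic geometry, not the programs described in the abstract):

```python
def reflect(d, n):
    # Reflect direction d about the unit normal n: d' = d - 2 (d.n) n
    dot = sum(a * b for a, b in zip(d, n))
    return tuple(a - 2.0 * dot * b for a, b in zip(d, n))

def passes_through_focus(x, y, f=1.0, tol=1e-9):
    """Trace a vertical ray onto the paraboloid z = (x^2 + y^2)/(4f) and check
    that the reflected ray passes through the focus (0, 0, f)."""
    z = (x * x + y * y) / (4.0 * f)
    # Unit surface normal from the gradient of F = z - (x^2 + y^2)/(4f)
    nx, ny, nz = -x / (2.0 * f), -y / (2.0 * f), 1.0
    norm = (nx * nx + ny * ny + nz * nz) ** 0.5
    n = (nx / norm, ny / norm, nz / norm)
    d = reflect((0.0, 0.0, -1.0), n)
    # (focus - point) must be parallel to the reflected direction: cross ~ 0
    vx, vy, vz = -x, -y, f - z
    cx = vy * d[2] - vz * d[1]
    cy = vz * d[0] - vx * d[2]
    cz = vx * d[1] - vy * d[0]
    return abs(cx) < tol and abs(cy) < tol and abs(cz) < tol
```

    Deviations of measured panels from this ideal are what the intercept factor curve summarizes.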

  17. Data fusion in cyber security: first order entity extraction from common cyber data

    NASA Astrophysics Data System (ADS)

    Giacobe, Nicklaus A.

    2012-06-01

    The Joint Directors of Labs Data Fusion Process Model (JDL Model) provides a framework for how to handle sensor data to develop higher levels of inference in a complex environment. Beginning from a call to leverage data fusion techniques in intrusion detection, there have been a number of advances in the use of data fusion algorithms in this subdomain of cyber security. While it is tempting to jump directly to situation-level or threat-level refinement (levels 2 and 3) for more exciting inferences, a proper fusion process starts with lower levels of fusion in order to provide a basis for the higher fusion levels. The process begins with first order entity extraction, or the identification of important entities represented in the sensor data stream. Current cyber security operational tools and their associated data are explored for potential exploitation, identifying the first order entities that exist in the data and the properties of these entities that are described by the data. Cyber events that are represented in the data stream are added to the first order entities as their properties. This work explores typical cyber security data and the inferences that can be made at the lower fusion levels (0 and 1) with simple metrics. Depending on the types of events that are expected by the analyst, these relatively simple metrics can provide insight on their own, or could be used in fusion algorithms as a basis for higher levels of inference.
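    First order entity extraction from a textual sensor stream can be as simple as pulling typed tokens out of each record and counting events per entity, a level-0/1 metric of the kind described. A toy sketch on a hypothetical firewall-log format; the regex and log lines are illustrative only:

```python
import re
from collections import Counter

IP_RE = re.compile(r'\b(?:\d{1,3}\.){3}\d{1,3}\b')

def extract_entities(log_lines):
    """Identify IP-address entities in raw log text and count events per entity."""
    counts = Counter()
    for line in log_lines:
        for ip in IP_RE.findall(line):
            counts[ip] += 1
    return counts

logs = [
    "DENY tcp 10.0.0.5:4431 -> 192.168.1.2:22",
    "ALLOW udp 10.0.0.5:53  -> 8.8.8.8:53",
    "DENY tcp 10.0.0.9:9999 -> 192.168.1.2:22",
]
counts = extract_entities(logs)
```

    Even these simple per-entity counts can surface repeated targeting of one host, or feed a higher-level fusion algorithm as entity properties.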

  18. On the importance of incorporating sampling weights in ...

    EPA Pesticide Factsheets

    Occupancy models are used extensively to assess wildlife-habitat associations and to predict species distributions across large geographic regions. Occupancy models were developed as a tool to properly account for imperfect detection of a species. Current guidelines on survey design requirements for occupancy models focus on the number of sample units and the pattern of revisits to a sample unit within a season. We focus on the sampling design or how the sample units are selected in geographic space (e.g., stratified, simple random, unequal probability, etc). In a probability design, each sample unit has a sample weight which quantifies the number of sample units it represents in the finite (oftentimes areal) sampling frame. We demonstrate the importance of including sampling weights in occupancy model estimation when the design is not a simple random sample or equal probability design. We assume a finite areal sampling frame as proposed for a national bat monitoring program. We compare several unequal and equal probability designs and varying sampling intensity within a simulation study. We found the traditional single season occupancy model produced biased estimates of occupancy and lower confidence interval coverage rates compared to occupancy models that accounted for the sampling design. We also discuss how our findings inform the analyses proposed for the nascent North American Bat Monitoring Program and other collaborative synthesis efforts that propose h
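    The role of the sampling weights can be illustrated with a Horvitz-Thompson style estimator: each sampled unit is weighted by the inverse of its inclusion probability, i.e., by the number of frame units it represents. A toy sketch with a hypothetical two-stratum frame in which the rarer stratum is oversampled (not the bat-monitoring design itself):

```python
def weighted_occupancy(samples):
    """Horvitz-Thompson style estimate: unit i with inclusion probability pi_i
    carries weight 1/pi_i, the number of frame units it represents."""
    num = sum(z / pi for z, pi in samples)
    den = sum(1.0 / pi for _, pi in samples)
    return num / den

# Hypothetical frame: common stratum sampled with pi = 0.1 (80% occupied),
# rare stratum oversampled with pi = 0.5 (20% occupied).
samples = [(1, 0.1)] * 8 + [(0, 0.1)] * 2 + [(1, 0.5)] * 2 + [(0, 0.5)] * 8
naive = sum(z for z, _ in samples) / len(samples)  # ignores the design
weighted = weighted_occupancy(samples)             # recovers frame occupancy
```

    The unweighted mean is pulled toward the oversampled rare stratum, which is the bias the abstract reports for the design-naive occupancy model.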

  19. The Role of Wakes in Modelling Tidal Current Turbines

    NASA Astrophysics Data System (ADS)

    Conley, Daniel; Roc, Thomas; Greaves, Deborah

    2010-05-01

    The eventual proper development of arrays of Tidal Current Turbines (TCT) will require a balance which maximizes power extraction while minimizing environmental impacts. Idealized analytical analogues and simple 2-D models are useful tools for investigating questions of a general nature but are not practical tools for application to realistic cases. Some form of 3-D numerical simulation will be required for such applications, and the current project is designed to develop a numerical decision-making tool for use in planning large scale TCT projects. The project is predicated on the use of an existing regional ocean modelling framework (the Regional Ocean Modelling System - ROMS) which is modified to enable the user to account for the effects of TCTs. In such a framework where mixing processes are highly parametrized, the fidelity of the quantitative results is critically dependent on the parameter values utilized. In light of the early stage of TCT development and the lack of field scale measurements, the calibration of such a model is problematic. In the absence of explicit calibration data sets, the device wake structure has been identified as an efficient feature for model calibration. This presentation discusses efforts to design an appropriate calibration scheme focused on wake decay; the motivation for this approach, the techniques applied, validation results from simple test cases, and limitations are presented.

  20. Learning to represent spatial transformations with factored higher-order Boltzmann machines.

    PubMed

    Memisevic, Roland; Hinton, Geoffrey E

    2010-06-01

    To allow the hidden units of a restricted Boltzmann machine to model the transformation between two successive images, Memisevic and Hinton (2007) introduced three-way multiplicative interactions that use the intensity of a pixel in the first image as a multiplicative gain on a learned, symmetric weight between a pixel in the second image and a hidden unit. This creates cubically many parameters, which form a three-dimensional interaction tensor. We describe a low-rank approximation to this interaction tensor that uses a sum of factors, each of which is a three-way outer product. This approximation allows efficient learning of transformations between larger image patches. Since each factor can be viewed as an image filter, the model as a whole learns optimal filter pairs for efficiently representing transformations. We demonstrate the learning of optimal filter pairs from various synthetic and real image sequences. We also show how learning about image transformations allows the model to perform a simple visual analogy task, and we show how a completely unsupervised network trained on transformations perceives multiple motions of transparent dot patterns in the same way as humans.
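    The parameter saving from the factorization is easy to see: a full three-way tensor over I input pixels, J output pixels and K hidden units has I·J·K entries, while a sum of F three-way outer products needs only (I + J + K)·F. A sketch with illustrative sizes (not the dimensions used in the paper):

```python
def factored_tensor(A, B, C):
    """Reconstruct the three-way interaction tensor from per-factor filters:
    W[i][j][k] = sum_f A[i][f] * B[j][f] * C[k][f] (sum of 3-way outer products)."""
    I, J, K, F = len(A), len(B), len(C), len(A[0])
    return [[[sum(A[i][f] * B[j][f] * C[k][f] for f in range(F))
              for k in range(K)] for j in range(J)] for i in range(I)]

# Illustrative sizes: 64 input pixels, 64 output pixels, 64 hidden units, 32 factors.
I = J = K = 64
F = 32
full_params = I * J * K           # parameters in the full interaction tensor
factored_params = (I + J + K) * F # parameters in the factored approximation
```

    Each column of A, B and C acts as an image filter, which is why the factors can be interpreted as learned filter pairs.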

  1. A Coarse-Grained Protein Model in a Water-like Solvent

    NASA Astrophysics Data System (ADS)

    Sharma, Sumit; Kumar, Sanat K.; Buldyrev, Sergey V.; Debenedetti, Pablo G.; Rossky, Peter J.; Stanley, H. Eugene

    2013-05-01

    Simulations employing an explicit atom description of proteins in solvent can be computationally expensive. On the other hand, coarse-grained protein models in implicit solvent miss essential features of the hydrophobic effect, especially its temperature dependence, and have limited ability to capture the kinetics of protein folding. We propose a free-space two-letter protein ("H-P") model in a simple, but qualitatively accurate description for water, the Jagla model, which coarse-grains water into an isotropically interacting sphere. Using Monte Carlo simulations, we design protein-like sequences that can undergo a collapse, exposing the "Jagla-philic" monomers to the solvent, while maintaining a "hydrophobic" core. This protein-like model manifests heat and cold denaturation in a manner that is reminiscent of proteins. While this protein-like model lacks the details that would introduce secondary structure formation, we believe that these ideas represent a first step in developing a useful, but computationally expedient, means of modeling proteins.

  2. Using a new high resolution regional model for malaria that accounts for population density and surface hydrology to determine sensitivity of malaria risk to climate drivers

    NASA Astrophysics Data System (ADS)

    Tompkins, Adrian; Ermert, Volker; Di Giuseppe, Francesca

    2013-04-01

    In order to better address the role of population dynamics and surface hydrology in the assessment of malaria risk, a new dynamical disease model has been developed at ICTP, known as VECTRI (VECtor borne disease community model of ICTP, TRIeste). The model accounts for the temperature impact on the larvae, parasite and adult vector populations. Local host population density affects the transmission intensity, and the model thus reproduces the differences between peri-urban and rural transmission noted in Africa. A new simple pond model framework represents surface hydrology. The model can be used with spatial resolutions finer than 10 km to resolve individual health districts and thus can serve as a planning tool. Results of the model's representation of interannual variability and longer-term projections of malaria transmission will be shown for Africa. These will show that the model represents the seasonality and spatial variations of malaria transmission well, matching a wide range of survey data of parasite rate and entomological inoculation rate (EIR) from across West and East Africa taken in the period prior to large-scale interventions. The model is used to determine the sensitivity of malaria risk to climate variations, both in rainfall and temperature, and then its use in a prototype forecasting system coupled with ECMWF forecasts will be demonstrated.

  3. A simple theory of molecular organization in fullerene-containing liquid crystals

    NASA Astrophysics Data System (ADS)

    Peroukidis, S. D.; Vanakaras, A. G.; Photinos, D. J.

    2005-10-01

    Systematic efforts to synthesize fullerene-containing liquid crystals have produced a variety of successful model compounds. We present a simple molecular theory, based on the interconverting shape approach [Vanakaras and Photinos, J. Mater. Chem. 15, 2002 (2005)], that relates the self-organization observed in these systems to their molecular structure. The interactions are modeled by dividing each molecule into a number of submolecular blocks to which specific interactions are assigned. Three types of blocks are introduced, corresponding to fullerene units, mesogenic units, and nonmesogenic linkage units. The blocks are constrained to move on a cubic three-dimensional lattice and molecular flexibility is allowed by retaining a number of representative conformations within the block representation of the molecule. Calculations are presented for a variety of molecular architectures including twin mesogenic branch monoadducts of C60, twin dendromesogenic branch monoadducts, and conical (badminton shuttlecock) multiadducts of C60. The dependence of the phase diagrams on the interaction parameters is explored. In spite of its many simplifications and the minimal molecular modeling used (three types of chemically distinct submolecular blocks with only repulsive interactions), the theory accounts remarkably well for the phase behavior of these systems.

  4. A NEW METHOD FOR ENVIRONMENTAL FLOW ASSESSMENT BASED ON BASIN GEOLOGY. APPLICATION TO EBRO BASIN.

    PubMed

    2018-02-01

    The determination of environmental flows is one of the commonest practical actions implemented on European rivers to promote their good ecological status. In Mediterranean rivers, groundwater inflows are a decisive factor in streamflow maintenance. This work examines the relationship between the lithological composition of the Ebro basin (Spain) and dry season flows in order to establish a model that can assist in the calculation of environmental flow rates. Due to the lack of information on the hydrogeological characteristics of the studied basin, the variable representing groundwater inflows has been estimated in a very simple way. The explanatory variable used in the proposed model is easy to calculate and is sufficiently powerful to take into account all the required characteristics. The model has a high coefficient of determination, indicating that it is accurate for the intended purpose. The advantage of this method compared to other methods is that it requires very little data and provides a simple estimate of environmental flow. It is also independent of the basin area and the river section order. The results of this research also contribute to knowledge of the variables that influence low flow periods and low flow rates on rivers in the Ebro basin.

  5. Role of community tolerance level (CTL) in predicting the prevalence of the annoyance of road and rail noise.

    PubMed

    Schomer, Paul; Mestre, Vincent; Fidell, Sanford; Berry, Bernard; Gjestland, Truls; Vallet, Michel; Reid, Timothy

    2012-04-01

    Fidell et al. [(2011), J. Acoust. Soc. Am. 130(2), 791-806] have shown (1) that the rate of growth of annoyance with noise exposure reported in attitudinal surveys of the annoyance of aircraft noise closely resembles the exponential rate of change of loudness with sound level, and (2) that the proportion of a community highly annoyed and the variability in annoyance prevalence rates in communities are well accounted for by a simple model with a single free parameter: a community tolerance level (abbreviated CTL, and represented symbolically in mathematical expressions as L(ct)), expressed in units of DNL. The current study applies the same modeling approach to predicting the prevalence of annoyance of road traffic and rail noise. The prevalence of noise-induced annoyance of all forms of transportation noise is well accounted for by a simple, loudness-like exponential function with community-specific offsets. The model fits all of the road traffic findings well, but the prevalence of annoyance due to rail noise is more accurately predicted separately for interviewing sites with and without high levels of vibration and/or rattle.
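As an illustration of the one-parameter CTL model the abstract refers to, the sketch below assumes a loudness-like exponential dose-response of the form p = exp(-(10^((Lct - Ldn)/10))^0.3), where the 0.3 exponent is the Stevens loudness exponent; the exact functional form and constants should be checked against Fidell et al. (2011) and are treated here as assumptions.

```python
import math

def percent_highly_annoyed(ldn, lct):
    """CTL-style dose-response sketch (assumed form, after Fidell et al.):
    annoyance prevalence follows a loudness-like exponential whose only
    free parameter is the community tolerance level lct (in DNL dB)."""
    m = 10.0 ** ((lct - ldn) / 10.0)       # effective "tolerance" ratio
    return 100.0 * math.exp(-m ** 0.3)     # 0.3 ~ Stevens loudness exponent

# Under this form, at an exposure equal to the community tolerance level,
# a fraction e^-1 (about 37%) of respondents is predicted highly annoyed.
print(round(percent_highly_annoyed(70.0, 70.0), 1))  # -> 36.8
```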

  6. Measurements of PANs during the New England Air Quality Study 2002

    NASA Astrophysics Data System (ADS)

    Roberts, J. M.; Marchewka, M.; Bertman, S. B.; Sommariva, R.; Warneke, C.; de Gouw, J.; Kuster, W.; Goldan, P.; Williams, E.; Lerner, B. M.; Murphy, P.; Fehsenfeld, F. C.

    2007-10-01

    Measurements of peroxycarboxylic nitric anhydrides (PANs) were made during the New England Air Quality Study 2002 cruise of the NOAA RV Ronald H Brown. The four compounds observed, PAN, peroxypropionic nitric anhydride (PPN), peroxymethacrylic nitric anhydride (MPAN), and peroxyisobutyric nitric anhydride (PiBN) were compared with results from other continental and Gulf of Maine sites. Systematic changes in PPN/PAN ratio, due to differential thermal decomposition rates, were related quantitatively to air mass aging. At least one early morning period was observed when O3 seemed to have been lost probably due to NO3 and N2O5 chemistry. The highest O3 episode was observed in the combined plume of isoprene sources and anthropogenic volatile organic compounds (VOCs) and NOx sources from the greater Boston area. A simple linear combination model showed that the organic precursors leading to elevated O3 were roughly half from the biogenic and half from anthropogenic VOC regimes. An explicit chemical box model confirmed that the chemistry in the Boston plume is well represented by the simple linear combination model. This degree of biogenic hydrocarbon involvement in the production of photochemical ozone has significant implications for air quality control strategies in this region.

  7. Evaluation of the CEAS model for barley yields in North Dakota and Minnesota

    NASA Technical Reports Server (NTRS)

    Barnett, T. L. (Principal Investigator)

    1981-01-01

    The CEAS yield model is based upon multiple regression analysis at the CRD and state levels. For the historical time series, yield is regressed on a set of variables derived from monthly mean temperature and monthly precipitation. Technological trend is represented by piecewise linear and/or quadratic functions of year. Indicators of yield reliability obtained from a ten-year bootstrap test (1970-79) demonstrated that biases are small and performance as indicated by the root mean square errors is acceptable for the intended application; however, model response for individual years, particularly unusual years, is not very reliable and shows some large errors. The model is objective, adequate, timely, simple and not costly. It considers scientific knowledge on a broad scale but not in detail, and does not provide a good current measure of modeled yield reliability.
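The model class described (yield regressed on weather variables plus a piecewise-linear technology trend) can be sketched as follows; the synthetic data, knee year and coefficients are hypothetical, not the CEAS model's actual specification.

```python
import numpy as np

rng = np.random.default_rng(0)

years = np.arange(1950, 1980)
temp = rng.normal(18.0, 1.5, years.size)      # monthly-mean temperature proxy
precip = rng.normal(60.0, 12.0, years.size)   # monthly precipitation proxy

# Piecewise-linear technology trend with a (hypothetical) slope change in 1965.
knee = 1965
trend1 = years - years[0]
trend2 = np.maximum(years - knee, 0)

true = 10.0 + 0.40 * trend1 + 0.25 * trend2 + 0.30 * temp + 0.05 * precip
yield_obs = true + rng.normal(0.0, 0.3, years.size)

# Design matrix: intercept, the two trend pieces, and the weather variables.
X = np.column_stack([np.ones_like(years, float), trend1, trend2, temp, precip])
coef, *_ = np.linalg.lstsq(X, yield_obs, rcond=None)
rmse = np.sqrt(np.mean((X @ coef - yield_obs) ** 2))
print(f"slope before/after knee: {coef[1]:.2f} / {coef[1] + coef[2]:.2f}, RMSE {rmse:.2f}")
```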

  8. Bubbles, shocks and elementary technical trading strategies

    NASA Astrophysics Data System (ADS)

    Fry, John

    2014-01-01

    In this paper we provide a unifying framework for a set of seemingly disparate models for bubbles, shocks and elementary technical trading strategies in financial markets. Markets operate by balancing intrinsic levels of risk and return. This seemingly simple observation is commonly overlooked by academics and practitioners alike. Like others, our model has its origins in statistical physics. However, under our approach, changes in market regime can be explicitly shown to represent a phase transition from random to deterministic behaviour in prices. This structure leads to an improved physical and econometric model. We develop models for bubbles, shocks and elementary technical trading strategies. The list of empirical applications is both interesting and topical and includes real-estate bubbles and the ongoing Eurozone crisis. We close by comparing the results of our model with purely qualitative findings from the finance literature.

  9. Adding Temporal Characteristics to Geographical Schemata and Instances: A General Framework

    NASA Astrophysics Data System (ADS)

    Ota, Morishige

    2018-05-01

    This paper proposes the temporal general feature model (TGFM) as a meta-model for application schemata representing changes of real-world phenomena. It is not very easy to determine history directly from the current application schemata, even if the revision notes are attached to the specification. To solve this problem, the rules for description of the succession between previous and posterior components are added to the general feature model, thus resulting in TGFM. After discussing the concepts associated with the new model, simple examples of application schemata are presented as instances of TGFM. Descriptors for changing properties, the succession of changing properties in moving features, and the succession of features and associations are introduced. The modeling methods proposed in this paper will contribute to the acquisition of consistent and reliable temporal geospatial data.

  10. A generic analytical foot rollover model for predicting translational ankle kinematics in gait simulation studies.

    PubMed

    Ren, Lei; Howard, David; Ren, Luquan; Nester, Chris; Tian, Limei

    2010-01-19

    The objective of this paper is to develop an analytical framework for representing the ankle-foot kinematics by modelling the foot as a rollover rocker, which can not only be used as a generic tool for general gait simulation but also allows for case-specific modelling if required. Previously, the rollover models used in gait simulation have often been based on specific functions that have usually been of a simple form. In contrast, the analytical model described here is in a general form in which the effective foot rollover shape can be represented by any polar function rho=rho(phi). Furthermore, a normalized generic foot rollover model has been established based on a normative foot rollover shape dataset of 12 normal healthy subjects. To evaluate model accuracy, the predicted ankle motions and the centre of pressure (CoP) were compared with measurement data for both subject-specific and general cases. The results demonstrated that the ankle joint motions in both vertical and horizontal directions (relative RMSE approximately 10%) and CoP (relative RMSE approximately 15% for most of the subjects) are accurately predicted over most of the stance phase (from 10% to 90% of stance). However, we found that the foot cannot be very accurately represented by a rollover model just after heel strike (HS) and just before toe off (TO), probably due to shear deformation of foot plantar tissues (ankle motion can occur without any foot rotation). The proposed foot rollover model can be used in both inverse and forward dynamics gait simulation studies and may also find applications in rehabilitation engineering. Copyright 2009 Elsevier Ltd. All rights reserved.
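For a rocker rolling without slip, the centre of pressure advances by the arc length of the effective rollover shape, which for a general polar form rho = rho(phi) is the integral of sqrt(rho^2 + (drho/dphi)^2). The sketch below is a minimal numerical version of that relation (not the paper's full kinematic model), verified against the circular special case where the answer is R times the rotation angle.

```python
import math

def cop_advance(rho, phi0, phi1, n=10000):
    """Centre-of-pressure advance for a rollover rocker in polar form
    rho = rho(phi), rolling without slip: the arc length of the effective
    rollover shape between shank angles phi0 and phi1 (midpoint rule)."""
    h = (phi1 - phi0) / n
    total = 0.0
    for i in range(n):
        phi = phi0 + (i + 0.5) * h
        drho = (rho(phi + 1e-6) - rho(phi - 1e-6)) / 2e-6  # numerical d(rho)/d(phi)
        total += math.sqrt(rho(phi) ** 2 + drho ** 2) * h
    return total

# Special case: a circular rocker of radius R rolls a distance R * delta_phi.
R = 0.25  # effective rocker radius in metres (illustrative value)
print(round(cop_advance(lambda phi: R, 0.0, 0.4), 4))  # -> 0.1
```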

  11. Multiyear high-resolution carbon exchange over European croplands from the integration of observed crop yields into CarbonTracker Europe

    NASA Astrophysics Data System (ADS)

    Combe, Marie; Vilà-Guerau de Arellano, Jordi; de Wit, Allard; Peters, Wouter

    2016-04-01

    Carbon exchange over croplands plays an important role in the European carbon cycle over daily-to-seasonal time scales. Not only do crops occupy one fourth of the European land area, but their photosynthesis and respiration are large and affect CO2 mole fractions at nearly every atmospheric CO2 monitoring site. A better description of this crop carbon exchange in our CarbonTracker Europe data assimilation system - which currently treats crops as unmanaged grasslands - could strongly improve its ability to constrain terrestrial carbon fluxes. Available long-term observations of crop yield, harvest, and cultivated area allow such improvements, when combined with the new crop-modeling framework we present. This framework can model the carbon fluxes of 10 major European crops at high spatial and temporal resolution, on a 12x12 km grid and 3-hourly time-step. The development of this framework is threefold: firstly, we optimize crop growth using the process-based WOrld FOod STudies (WOFOST) agricultural crop growth model. Simulated yields are downscaled to match regional crop yield observations from the Statistical Office of the European Union (EUROSTAT) by estimating a yearly regional parameter for each crop species: the yield gap factor. This step allows us to better represent crop phenology, to reproduce the observed multiannual European crop yields, and to construct realistic time series of the crop carbon fluxes (gross primary production, GPP, and autotrophic respiration, Raut) on a fine spatial and temporal resolution. Secondly, we combine these GPP and Raut fluxes with a simple soil respiration model to obtain the total ecosystem respiration (TER) and net ecosystem exchange (NEE). And thirdly, we represent the horizontal transport of carbon that follows crop harvest and its back-respiration into the atmosphere during harvest consumption. We distribute this carbon using observations of the density of human and ruminant populations from EUROSTAT. 
We assess the model's ability to represent the seasonal GPP, TER and NEE fluxes using observations at 6 European FluxNet winter wheat and grain maize sites and compare it with the fluxes of the current terrestrial carbon cycle model of CarbonTracker Europe: the Simple Biosphere - Carnegie-Ames-Stanford Approach (SiBCASA) model. We find that the new model framework provides a detailed, realistic, and strongly observation-driven estimate of carbon exchange over European croplands. Its products will be made available to the scientific community through the ICOS Carbon Portal, and serve as a new cropland component in CarbonTracker Europe flux estimates.

  12. Packing Regularities in Biological Structures Relate to Their Dynamics

    PubMed Central

    Jernigan, Robert L.; Kloczkowski, Andrzej

    2007-01-01

    The high packing density inside proteins leads to certain geometric regularities and also is one of the most important contributors to the high extent of cooperativity manifested by proteins in their cohesive domain motions. The orientations between neighboring non-bonded residues in proteins substantially follow similar geometric regularities, regardless of whether the residues are on the surface or buried - a direct result of hydrophobicity forces. These orientations are relatively fixed and correspond closely to small deformations from those of the face-centered cubic lattice, which is the way in which identical spheres pack at the highest density. Packing density also is related to the extent of conservation of residues, and we show this relationship for residue packing densities by averaging over a large sample of residue packings. There are three regimes: 1) over a broad range of packing densities the relationship between sequence entropy and inverse packing density is nearly linear, 2) over a limited range of low packing densities the sequence entropy is nearly constant, and 3) at extremely low packing densities the sequence entropy is highly variable. These packing results provide important justification for the simple elastic network models that have been shown for a large number of proteins to represent protein dynamics so successfully, even when the models are extremely coarse-grained. Elastic network models for polymeric chains are simple and could be combined with these protein elastic networks to represent partially denatured parts of proteins. Finally, we show results of applications of the elastic network model to study the functional motions of the ribosome, based on its known structure. 
These results indicate expected correlations among its components for the step-wise processing steps in protein synthesis, and suggest ways to use these elastic network models to develop more detailed mechanisms - an important possibility, since most experiments yield only static structures. PMID:16957327

  13. 32 CFR 203.7 - Eligible applicants.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... interests are broadly represented. The applicant must certify that the request represents the wishes of a simple majority of the community members of the RAB or TRC. Certification includes, but is not limited to...

  14. Random noise effects in pulse-mode digital multilayer neural networks.

    PubMed

    Kim, Y C; Shanblatt, M A

    1995-01-01

    A pulse-mode digital multilayer neural network (DMNN) based on stochastic computing techniques is implemented with simple logic gates as basic computing elements. The pulse-mode signal representation and the use of simple logic gates for neural operations lead to a massively parallel yet compact and flexible network architecture, well suited for VLSI implementation. Algebraic neural operations are replaced by stochastic processes using pseudorandom pulse sequences. The distributions of the results from the stochastic processes are approximated using the hypergeometric distribution. Synaptic weights and neuron states are represented as probabilities and estimated as average pulse occurrence rates in corresponding pulse sequences. A statistical model of the noise (error) is developed to estimate the relative accuracy associated with stochastic computing in terms of mean and variance. Computational differences are then explained by comparison to deterministic neural computations. DMNN feedforward architectures are modeled in VHDL using character recognition problems as testbeds. Computational accuracy is analyzed, and the results of the statistical model are compared with the actual simulation results. Experiments show that the calculations performed in the DMNN are more accurate than those anticipated when Bernoulli sequences are assumed, as is common in the literature. Furthermore, the statistical model successfully predicts the accuracy of the operations performed in the DMNN.
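The core idea of the stochastic computing described above — numbers encoded as pulse probabilities, so that multiplication reduces to a single AND gate on independent Bernoulli pulse streams — can be sketched as follows; the sequence length and probabilities are illustrative.

```python
import random

random.seed(42)

def pulse_stream(p, n):
    """Encode probability p as a pseudorandom Bernoulli pulse sequence."""
    return [1 if random.random() < p else 0 for _ in range(n)]

def stochastic_multiply(p, q, n=100_000):
    """Multiply two unipolar stochastic numbers with a single AND gate:
    P(a AND b) = p * q for independent pulse streams, so the average pulse
    occurrence rate of the output estimates the product."""
    a, b = pulse_stream(p, n), pulse_stream(q, n)
    return sum(x & y for x, y in zip(a, b)) / n

est = stochastic_multiply(0.6, 0.5)
print(f"estimated 0.6 * 0.5 = {est:.3f}")  # close to 0.300, within sampling noise
```

The estimate's variance shrinks as 1/n, which is why the abstract models accuracy statistically in terms of mean and variance rather than exactly.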

  15. Use of a spread sheet to calculate the current-density distribution produced in human and rat models by low-frequency electric fields.

    PubMed

    Hart, F X

    1990-01-01

    The current-density distribution produced inside irregularly shaped, homogeneous human and rat models by low-frequency electric fields is obtained by a two-stage finite-difference procedure. In the first stage the model is assumed to be equipotential. Laplace's equation is solved by iteration in the external region to obtain the capacitive-current densities at the model's surface elements. These values then provide the boundary conditions for the second-stage relaxation solution, which yields the internal current-density distribution. Calculations were performed with the Excel spread-sheet program on a Macintosh-II microcomputer. A spread sheet is a two-dimensional array of cells. Each cell of the sheet can represent a square element of space. Equations relating the values of the cells can represent the relationships between the potentials in the corresponding spatial elements. Extension to three dimensions is readily made. Good agreement was obtained with current densities measured on human models with both, one, or no legs grounded and on rat models in four different grounding configurations. The results also compared well with predictions of more sophisticated numerical analyses. Spread sheets can provide an inexpensive and relatively simple means to perform good, approximate dosimetric calculations on irregularly shaped objects.
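The spreadsheet analogy above amounts to Jacobi relaxation: each interior "cell" is repeatedly replaced by the average of its four neighbours until Laplace's equation is satisfied. A minimal sketch (NumPy standing in for the spreadsheet array, with an illustrative boundary condition) is:

```python
import numpy as np

def relax_laplace(boundary, iterations=3000):
    """Jacobi relaxation for Laplace's equation on a square grid, exactly as
    a spreadsheet would do it: every interior cell is repeatedly replaced by
    the average of its four neighbours; edge cells hold the boundary values."""
    v = boundary.copy()
    for _ in range(iterations):
        v[1:-1, 1:-1] = 0.25 * (v[:-2, 1:-1] + v[2:, 1:-1]
                                + v[1:-1, :-2] + v[1:-1, 2:])
    return v

# Check against a known harmonic function: with boundary values f(x, y) = x,
# the interior must converge to the same linear potential.
n = 11
x = np.linspace(0.0, 1.0, n)
grid = np.zeros((n, n))
grid[0, :] = grid[-1, :] = x          # top and bottom edges
grid[:, 0], grid[:, -1] = 0.0, 1.0    # left and right edges
v = relax_laplace(grid)
print(round(v[n // 2, n // 2], 4))  # centre of the grid -> close to 0.5
```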

  16. The relative roles of environment, history and local dispersal in controlling the distributions of common tree and shrub species in a tropical forest landscape, Panama

    USGS Publications Warehouse

    Svenning, J.-C.; Engelbrecht, B.M.J.; Kinner, D.A.; Kursar, T.A.; Stallard, R.F.; Wright, S.J.

    2006-01-01

    We used regression models and information-theoretic model selection to assess the relative importance of environment, local dispersal and historical contingency as controls of the distributions of 26 common plant species in tropical forest on Barro Colorado Island (BCI), Panama. We censused eighty-eight 0.09-ha plots scattered across the landscape. Environmental control, local dispersal and historical contingency were represented by environmental variables (soil moisture, slope, soil type, distance to shore, old-forest presence), a spatial autoregressive parameter (ρ), and four spatial trend variables, respectively. We built regression models, representing all combinations of the three hypotheses, for each species. The probability that the best model included the environmental variables, spatial trend variables and ρ averaged 33%, 64% and 50% across the study species, respectively. The environmental variables, spatial trend variables, ρ, and a simple intercept model received the strongest support for 4, 15, 5 and 2 species, respectively. Comparing the model results to information on species traits showed that species with strong spatial trends produced few and heavy diaspores, while species with strong soil moisture relationships were particularly drought-sensitive. In conclusion, history and local dispersal appeared to be the dominant controls of the distributions of common plant species on BCI. Copyright © 2006 Cambridge University Press.
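The information-theoretic model selection used above typically ranks candidate models by AIC and converts AIC differences into normalised model weights. The sketch below illustrates that calculation with hypothetical fits (the model names, residual sums of squares and parameter counts are invented, not the study's values).

```python
import math

def aic(rss, n, k):
    """Akaike's information criterion for a least-squares fit with n
    observations, k estimated parameters and residual sum of squares rss."""
    return n * math.log(rss / n) + 2 * k

def akaike_weights(aics):
    """Relative support for each candidate model: exp(-delta/2), normalised."""
    best = min(aics)
    raw = [math.exp(-(a - best) / 2.0) for a in aics]
    s = sum(raw)
    return [r / s for r in raw]

# Hypothetical fits of three candidate models for one species across
# n = 88 plots, mirroring the all-combinations comparison in the abstract.
n = 88
candidates = {"environment": (40.0, 6), "trend": (35.0, 5), "both": (34.0, 10)}
aics = {name: aic(rss, n, k) for name, (rss, k) in candidates.items()}
weights = dict(zip(aics, akaike_weights(list(aics.values()))))
for name, w in weights.items():
    print(f"{name:12s} AIC weight = {w:.2f}")
```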

  17. An approach to define semantics for BPM systems interoperability

    NASA Astrophysics Data System (ADS)

    Rico, Mariela; Caliusco, María Laura; Chiotti, Omar; Rosa Galli, María

    2015-04-01

    This article proposes defining semantics for Business Process Management systems interoperability through the ontology of Electronic Business Documents (EBD) used to interchange the information required to perform cross-organizational processes. The semantic model generated allows aligning enterprise's business processes to support cross-organizational processes by matching the business ontology of each business partner with the EBD ontology. The result is a flexible software architecture that allows dynamically defining cross-organizational business processes by reusing the EBD ontology. For developing the semantic model, a method is presented, which is based on a strategy for discovering entity features whose interpretation depends on the context, and representing them for enriching the ontology. The proposed method complements ontology learning techniques that cannot infer semantic features not represented in data sources. In order to improve the representation of these entity features, the method proposes using widely accepted ontologies, for representing time entities and relations, physical quantities, measurement units, official country names, and currencies and funds, among others. When the ontologies reuse is not possible, the method proposes identifying whether that feature is simple or complex, and defines a strategy to be followed. An empirical validation of the approach has been performed through a case study.

  18. Numerical study of ship airwake characteristics immersed in atmospheric boundary-layer flow

    NASA Astrophysics Data System (ADS)

    Thedin, Regis; Kinzel, Michael; Schmitz, Sven

    2017-11-01

    Helicopter pilot workload is known to increase substantially in the vicinity of a ship flight deck due to the unsteady flowfield past the superstructure. In this work, the influence of atmospheric turbulence on a ship airwake is investigated. A ship geometry representing the Simple Frigate Shape 2 is immersed into a Large-Eddy-Simulation-resolved Atmospheric Boundary Layer (ABL). Specifically, we aim to identify the fundamental topology differences between a uniform-inflow model of the incoming wind and those representative of a neutral atmospheric stability state. Thus, airwake characteristics due to a shear-driven ABL are evaluated and compared. Differences in the energy content of the airwakes are identified and discussed. The framework being developed allows for future coupling of flight dynamic models of helicopters to investigate flight envelope testing. Hence, this work represents the first step towards the goal of identifying the effects that an airwake modified by atmospheric turbulence imposes on the handling of a helicopter and pilot workload. This research was partially supported by the University Graduate Fellowship program at The Pennsylvania State University and by the Government under Agreement No. W911W6-17-2-0003.

  19. The problem with simple lumped parameter models: Evidence from tritium mean transit times

    NASA Astrophysics Data System (ADS)

    Stewart, Michael; Morgenstern, Uwe; Gusyev, Maksym; Maloszewski, Piotr

    2017-04-01

    Simple lumped parameter models (LPMs) based on assuming homogeneity and stationarity in catchments and groundwater bodies are widely used to model and predict hydrological system outputs. However, most systems are not homogeneous or stationary, and errors resulting from disregard of the real heterogeneity and non-stationarity of such systems are not well understood and rarely quantified. As an example, mean transit times (MTTs) of streamflow are usually estimated from tracer data using simple LPMs. The MTT or transit time distribution of water in a stream reveals basic catchment properties such as water flow paths, storage and mixing. Importantly however, Kirchner (2016a) has shown that there can be large (several hundred percent) aggregation errors in MTTs inferred from seasonal cycles in conservative tracers such as chloride or stable isotopes when they are interpreted using simple LPMs (i.e. a range of gamma models or GMs). Here we show that MTTs estimated using tritium concentrations are similarly affected by aggregation errors due to heterogeneity and non-stationarity when interpreted using simple LPMs (e.g. GMs). The tritium aggregation error arises from the strong nonlinearity between tritium concentrations and MTT, whereas for seasonal tracer cycles it is due to the nonlinearity between tracer cycle amplitudes and MTT. In effect, water from young subsystems in the catchment outweighs water from old subsystems. The main difference between the aggregation errors with the different tracers is that with tritium it applies at much greater ages than it does with seasonal tracer cycles. We stress that the aggregation errors arise when simple LPMs are applied (with simple LPMs the hydrological system is assumed to be a homogeneous whole with parameters representing averages for the system). 
With well-chosen compound LPMs (which are combinations of simple LPMs) on the other hand, aggregation errors are very much smaller because young and old water flows are treated separately. "Well-chosen" means that the compound LPM is based on hydrologically- and geologically-validated information, and the choice can be assisted by matching simulations to time series of tritium measurements. References: Kirchner, J.W. (2016a): Aggregation in environmental systems - Part 1: Seasonal tracer cycles quantify young water fractions, but not mean transit times, in spatially heterogeneous catchments. Hydrol. Earth Syst. Sci. 20, 279-297. Stewart, M.K., Morgenstern, U., Gusyev, M.A., Maloszewski, P. 2016: Aggregation effects on tritium-based mean transit times and young water fractions in spatially heterogeneous catchments and groundwater systems, and implications for past and future applications of tritium. Submitted to Hydrol. Earth Syst. Sci., 10 October 2016, doi:10.5194/hess-2016-532.
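The aggregation error described above can be reproduced numerically with the steady-state exponential-model relation C/C0 = 1/(1 + lambda*tau); the subsystem ages and flow split below are illustrative, not values from the study. Mixing water from a young and an old subsystem and then inverting the mixed tritium ratio with a single lumped exponential model strongly underestimates the true mean transit time.

```python
import math

HALF_LIFE = 12.32                 # tritium half-life, years
LAM = math.log(2) / HALF_LIFE     # decay constant, per year

def tritium_ratio(tau):
    """Steady-state output/input tritium ratio for an exponential
    (well-mixed) transit time distribution with mean transit time tau:
    C/C0 = 1 / (1 + lambda * tau)."""
    return 1.0 / (1.0 + LAM * tau)

def apparent_mtt(c_ratio):
    """MTT a simple lumped exponential model would infer from that ratio."""
    return (1.0 / c_ratio - 1.0) / LAM

# Heterogeneous catchment: equal flows from a young (2 yr) and an old
# (100 yr) subsystem. A single lumped model sees only the mixed water.
tau_young, tau_old = 2.0, 100.0
c_mix = 0.5 * (tritium_ratio(tau_young) + tritium_ratio(tau_old))
true_mean = 0.5 * (tau_young + tau_old)          # 51 years
print(f"true mean {true_mean:.0f} yr, lumped estimate {apparent_mtt(c_mix):.0f} yr")
```

The nonlinearity of C(tau) means the young water dominates the mixed signal, which is exactly the bias the abstract attributes to simple LPMs.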

  20. New Gravity Wave Treatments for GISS Climate Models

    NASA Technical Reports Server (NTRS)

    Geller, Marvin A.; Zhou, Tiehan; Ruedy, Reto; Aleinov, Igor; Nazarenko, Larissa; Tausnev, Nikolai L.; Sun, Shan; Kelley, Maxwell; Cheng, Ye

    2011-01-01

    Previous versions of GISS climate models have either used formulations of Rayleigh drag to represent unresolved gravity wave interactions with the model-resolved flow or have included a rather complicated treatment of unresolved gravity waves that, while being climate interactive, involved the specification of a relatively large number of parameters that were not well constrained by observations and also was computationally very expensive. Here, the authors introduce a relatively simple and computationally efficient specification of unresolved orographic and nonorographic gravity waves and their interaction with the resolved flow. Comparisons of the GISS model winds and temperatures with no gravity wave parameterization; with only orographic gravity wave parameterization; and with both orographic and nonorographic gravity wave parameterizations are shown to illustrate how the zonal mean winds and temperatures converge toward observations. The authors also show that the specifications of orographic and nonorographic gravity waves must be different in the Northern and Southern Hemispheres. Then results are presented where the nonorographic gravity wave sources are specified to represent sources from convection in the intertropical convergence zone and spontaneous emission from jet imbalances. Finally, a strategy to include these effects in a climate-dependent manner is suggested.

  1. Evolutionary Agent-Based Simulation of the Introduction of New Technologies in Air Traffic Management

    NASA Technical Reports Server (NTRS)

    Yliniemi, Logan; Agogino, Adrian K.; Tumer, Kagan

    2014-01-01

    Accurate simulation of the effects of integrating new technologies into a complex system is critical to the modernization of our antiquated air traffic system, where there exist many layers of interacting procedures, controls, and automation all designed to cooperate with human operators. Additions of even simple new technologies may result in unexpected emergent behavior due to complex human/machine interactions. One approach is to create high-fidelity human models coming from the field of human factors that can simulate a rich set of behaviors. However, such models are difficult to produce, especially to show unexpected emergent behavior coming from many human operators interacting simultaneously within a complex system. Instead of engineering complex human models, we directly model the emergent behavior by evolving goal-directed agents, representing human users. Using evolution we can predict how the agent representing the human user reacts given his/her goals. In this paradigm, each autonomous agent in a system pursues individual goals, and the behavior of the system emerges from the interactions, foreseen or unforeseen, between the agents/actors. We show that this method reflects the integration of new technologies in a historical case, and apply the same methodology for a possible future technology.

  2. A mass transfer model of ammonia volatilization from anaerobic digestate.

    PubMed

    Whelan, M J; Everitt, T; Villa, R

    2010-10-01

    Anaerobic digestion (AD) is becoming increasingly popular for treating organic waste. The methane produced can be burned to generate electricity and the digestate, which is high in mineral nitrogen, can be used as a fertiliser. In this paper we evaluate potential losses of ammonia via volatilization from food waste anaerobic digestate using a closed chamber system equipped with a sulphuric acid trap. Ammonia losses represent a pollution source and, over long periods, could reduce the agronomic value of the digestate. Observed ammonia losses from the experimental system were linear with time. A simple non-steady-state partitioning model was developed to represent the process. After calibration, the model was able to describe the behaviour of ammonia in the digestate and in the trap very well. The average rate of volatilization was approximately 5.2 g N m(-2) week(-1). The model was used to extrapolate the findings of the laboratory study to a number of AD storage scenarios. The simulations highlight that open storage of digestate could result in significant losses of ammonia to the atmosphere. Losses are predicted to be relatively minor from covered facilities, particularly if depth to surface area ratio is high. (c) 2009 Elsevier Ltd. All rights reserved.
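A non-steady-state partitioning model of this kind can be sketched as two compartments (digestate liquid and acid trap) coupled by first-order surface transfer; all rate constants and geometry below are hypothetical, not the paper's calibrated values. Because only a small fraction of the ammonia pool is lost per week, the cumulative loss is nearly linear in time, as observed.

```python
# Minimal non-steady-state partitioning sketch for ammonia volatilization:
# first-order transfer from the digestate surface into an acid trap that
# acts as a perfect sink. All parameter values are hypothetical.

AREA = 0.05        # exposed digestate surface, m^2
K = 1.5e-3         # overall mass-transfer coefficient, m per week
CONC0 = 4000.0     # total ammoniacal N concentration, g N per m^3
VOLUME = 0.01      # digestate volume, m^3
DT = 0.01          # time step, weeks

def simulate(weeks):
    """Euler integration of dM/dt = -K * AREA * (M / VOLUME); the trap
    accumulates everything that leaves the liquid, so mass is conserved."""
    m, trap, t = CONC0 * VOLUME, 0.0, 0.0
    while t < weeks:
        flux = K * AREA * (m / VOLUME) * DT
        m, trap, t = m - flux, trap + flux, t + DT
    return m, trap

m0 = CONC0 * VOLUME
m, trap = simulate(4.0)
print(f"lost to trap after 4 weeks: {trap:.2f} g N (of {m0:.0f} g N)")
```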

  3. Discrete Element Framework for Modelling Extracellular Matrix, Deformable Cells and Subcellular Components

    PubMed Central

    Gardiner, Bruce S.; Wong, Kelvin K. L.; Joldes, Grand R.; Rich, Addison J.; Tan, Chin Wee; Burgess, Antony W.; Smith, David W.

    2015-01-01

    This paper presents a framework for modelling biological tissues based on discrete particles. Cell components (e.g. cell membranes, cell cytoskeleton, cell nucleus) and extracellular matrix (e.g. collagen) are represented using collections of particles. Simple particle to particle interaction laws are used to simulate and control complex physical interaction types (e.g. cell-cell adhesion via cadherins, integrin basement membrane attachment, cytoskeletal mechanical properties). Particles may be given the capacity to change their properties and behaviours in response to changes in the cellular microenvironment (e.g., in response to cell-cell signalling or mechanical loadings). Each particle is in effect an ‘agent’, meaning that the agent can sense local environmental information and respond according to pre-determined or stochastic events. The behaviour of the proposed framework is exemplified through several biological problems of ongoing interest. These examples illustrate how the modelling framework allows enormous flexibility for representing the mechanical behaviour of different tissues, and we argue this is a more intuitive approach than perhaps offered by traditional continuum methods. Because of this flexibility, we believe the discrete modelling framework provides an avenue for biologists and bioengineers to explore the behaviour of tissue systems in a computational laboratory. PMID:26452000
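    As a flavor of the particle-interaction approach (this is a toy sketch, not the authors' framework): a ring of particles joined by linear springs, standing in for a cell membrane, relaxed under overdamped dynamics. All constants are invented for illustration.

```python
import math

# Toy particle model: a closed ring of particles connected by linear springs
# (a crude "membrane"), relaxed with overdamped (velocity ~ force) dynamics.
# Real discrete-element frameworks add adhesion, bending, and stochastic
# agent rules on top of simple pairwise laws like this one.

def relax_ring(points, rest_len, k_spring=1.0, mobility=0.1, steps=500):
    n = len(points)
    pts = [list(p) for p in points]
    for _ in range(steps):
        forces = [[0.0, 0.0] for _ in range(n)]
        for i in range(n):
            j = (i + 1) % n                     # neighbour along the ring
            dx = pts[j][0] - pts[i][0]
            dy = pts[j][1] - pts[i][1]
            dist = math.hypot(dx, dy)
            f = k_spring * (dist - rest_len)    # stretched springs pull inward
            fx, fy = f * dx / dist, f * dy / dist
            forces[i][0] += fx; forces[i][1] += fy
            forces[j][0] -= fx; forces[j][1] -= fy
        for i in range(n):                      # overdamped position update
            pts[i][0] += mobility * forces[i][0]
            pts[i][1] += mobility * forces[i][1]
    return pts

# Start from a circle of radius 2 with springs preferring a radius-1 polygon.
ring0 = [(2 * math.cos(2 * math.pi * i / 8), 2 * math.sin(2 * math.pi * i / 8))
         for i in range(8)]
rest = 2 * math.sin(math.pi / 8)    # chord length of a radius-1 octagon
ring = relax_ring(ring0, rest)
```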

  4. Project Rulison gas flow analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Montan, D.N.

    1971-01-01

    An analysis of the well performance was attempted by fitting a simple model of the chimney, gas sands, and explosively created fracturing to the two experimentally measured variables: flow rate and chimney pressure. The gas-flow calculations for various trial models were done by a finite difference solution to the nonlinear partial differential equation for radial Darcy flow. The TRUMP computer program was used to perform the numerical calculations. In principle, either the flow rate or the chimney pressure could be used as the independent variable in the calculations. In the present case, the flow rate was used as the independent variable, since chimney pressure measurements were not made until after the second flow period in early November 1970. Furthermore, the formation pressure was not accurately known and, hence, was considered a variable parameter in the modeling process. The chimney pressure was assumed equal to the formation pressure at the beginning of the flow testing. The model consisted of a central zone, representing the chimney, surrounded by a number of concentric zones, representing the formation. The effect of explosive fracturing was simulated by increasing the permeability in the zones near the central zone.
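    The zoned geometry described (a central chimney cell surrounded by concentric formation rings, with enhanced permeability near the chimney) can be illustrated with a toy explicit finite-difference scheme for linearized radial pressure diffusion. This is not the TRUMP program, and every number below is invented.

```python
import numpy as np

# Toy radial pressure-diffusion solver on concentric zones. The chimney is
# held at a depleted pressure (Dirichlet condition at the innermost zone);
# rings with r < 20 get 10x permeability to mimic explosive fracturing.

def step_pressure(p, r, k, dt):
    """One explicit step of (1/r) d/dr (r k dp/dr), unit storage coefficient."""
    p_new = p.copy()
    for i in range(1, len(p) - 1):
        r_plus, r_minus = 0.5 * (r[i] + r[i + 1]), 0.5 * (r[i] + r[i - 1])
        k_plus, k_minus = 0.5 * (k[i] + k[i + 1]), 0.5 * (k[i] + k[i - 1])
        flux_plus = r_plus * k_plus * (p[i + 1] - p[i]) / (r[i + 1] - r[i])
        flux_minus = r_minus * k_minus * (p[i] - p[i - 1]) / (r[i] - r[i - 1])
        p_new[i] = p[i] + dt * (flux_plus - flux_minus) / (r[i] * 0.5 * (r[i + 1] - r[i - 1]))
    return p_new

r = np.linspace(1.0, 100.0, 50)      # zone-centre radii (m)
k = np.where(r < 20.0, 10.0, 1.0)    # fractured zones near the chimney
p = np.full_like(r, 30.0)            # initial formation pressure (arbitrary units)
p[0] = 10.0                          # depleted chimney pressure
for _ in range(200):
    p = step_pressure(p, r, k, dt=0.01)
    p[0] = 10.0                      # hold the chimney boundary fixed
```

    The drawdown then propagates outward from the chimney, fastest through the high-permeability fractured rings.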

  5. Discrete Element Framework for Modelling Extracellular Matrix, Deformable Cells and Subcellular Components.

    PubMed

    Gardiner, Bruce S; Wong, Kelvin K L; Joldes, Grand R; Rich, Addison J; Tan, Chin Wee; Burgess, Antony W; Smith, David W

    2015-10-01

    This paper presents a framework for modelling biological tissues based on discrete particles. Cell components (e.g. cell membranes, cell cytoskeleton, cell nucleus) and extracellular matrix (e.g. collagen) are represented using collections of particles. Simple particle to particle interaction laws are used to simulate and control complex physical interaction types (e.g. cell-cell adhesion via cadherins, integrin basement membrane attachment, cytoskeletal mechanical properties). Particles may be given the capacity to change their properties and behaviours in response to changes in the cellular microenvironment (e.g., in response to cell-cell signalling or mechanical loadings). Each particle is in effect an 'agent', meaning that the agent can sense local environmental information and respond according to pre-determined or stochastic events. The behaviour of the proposed framework is exemplified through several biological problems of ongoing interest. These examples illustrate how the modelling framework allows enormous flexibility for representing the mechanical behaviour of different tissues, and we argue this is a more intuitive approach than perhaps offered by traditional continuum methods. Because of this flexibility, we believe the discrete modelling framework provides an avenue for biologists and bioengineers to explore the behaviour of tissue systems in a computational laboratory.

  6. New Gravity Wave Treatments for GISS Climate Models

    NASA Technical Reports Server (NTRS)

    Geller, Marvin A.; Zhou, Tiehan; Ruedy, Reto; Aleinov, Igor; Nazarenko, Larissa; Tausnev, Nikolai L.; Sun, Shan; Kelley, Maxwell; Cheng, Ye

    2010-01-01

    Previous versions of GISS climate models have either used formulations of Rayleigh drag to represent unresolved gravity wave interactions with the model resolved flow or have included a rather complicated treatment of unresolved gravity waves that, while being climate interactive, involved the specification of a relatively large number of parameters that were not well constrained by observations and was also computationally very expensive. Here, we introduce a relatively simple and computationally efficient specification of unresolved orographic and non-orographic gravity waves and their interaction with the resolved flow. We show comparisons of the GISS model winds and temperatures with no gravity wave parameterization; with only orographic gravity wave parameterization; and with both orographic and non-orographic gravity wave parameterizations to illustrate how the zonal mean winds and temperatures converge toward observations. We also show that the specifications of orographic and non-orographic gravity waves must be different in the Northern and Southern Hemispheres. We then show results where the non-orographic gravity wave sources are specified to represent sources from convection in the Intertropical Convergence Zone and spontaneous emission from jet imbalances. Finally, we suggest a strategy to include these effects in a climate-dependent manner.

  7. A graph grammar approach to artificial life.

    PubMed

    Kniemeyer, Ole; Buck-Sorlin, Gerhard H; Kurth, Winfried

    2004-01-01

    We present the high-level language of relational growth grammars (RGGs) as a formalism designed for the specification of ALife models. RGGs can be seen as an extension of the well-known parametric Lindenmayer systems and contain rule-based, procedural, and object-oriented features. They are defined as rewriting systems operating on graphs with the edges coming from a set of user-defined relations, whereas the nodes can be associated with objects. We demonstrate their ability to represent genes, regulatory networks of metabolites, and morphologically structured organisms, as well as developmental aspects of these entities, in a common formal framework. Mutation, crossing over, selection, and the dynamics of a network of gene regulation can all be represented with simple graph rewriting rules. This is demonstrated in some detail on the classical example of Dawkins' biomorphs and the ABC model of flower morphogenesis; other applications are briefly sketched. An interactive program was implemented, enabling the execution of the formalism and the visualization of the results.

  8. Demonstration of Synaptic Behaviors and Resistive Switching Characterizations by Proton Exchange Reactions in Silicon Oxide

    PubMed Central

    Chang, Yao-Feng; Fowler, Burt; Chen, Ying-Chen; Zhou, Fei; Pan, Chih-Hung; Chang, Ting-Chang; Lee, Jack C.

    2016-01-01

    We realize a device with biological synaptic behaviors by integrating silicon oxide (SiOx) resistive switching memory with Si diodes. Minimal synaptic power consumption due to sneak-path current is achieved and the capability for spike-induced synaptic behaviors is demonstrated, representing critical milestones for the use of SiO2-based materials in future neuromorphic computing applications. Biological synaptic behaviors such as long-term potentiation (LTP), long-term depression (LTD) and spike-timing dependent plasticity (STDP) are demonstrated systematically using a comprehensive analysis of spike-induced waveforms, and represent interesting potential applications for SiOx-based resistive switching materials. The resistive switching SET transition is modeled as hydrogen (proton) release from (SiH)2 to generate the hydrogen bridge defect, and the RESET transition is modeled as an electrochemical reaction (proton capture) that re-forms (SiH)2. The experimental results suggest a simple, robust approach to realize programmable neuromorphic chips compatible with large-scale CMOS manufacturing technology. PMID:26880381

  9. Advancing Embedded and Extrinsic Solutions for Optimal Control and Efficiency of Energy Systems in Buildings

    NASA Astrophysics Data System (ADS)

    Bay, Christopher Joseph

    Massachusetts' Act to Promote Energy Diversity requires distribution companies to solicit contracts for up to 1600 MW of offshore wind. To test whether offshore wind projects can meet the Act's requirement to reduce CO2 emissions, the Oak Ridge Competitive Electricity Dispatch Model was used to forecast changes in ISO New England's resource mix under five different wind capacity levels and calculate avoided CO2 emissions attributable to offshore wind. With 1600 MW of installed capacity, representing full solicitation under the Act, reliance on natural gas is reduced by ~10% and carbon emissions decline by ~9%. This represents significant progress towards the goals of the Global Warming Solutions Act and the Clean Power Plan. The 5000 MW scenario reduces emissions enough to meet the Clean Power Plan's 2030 goals. This study's application of a dispatch model provides an example for policymakers of a simple and cost-effective approach for assessing a project's value.

  10. Demonstration of Synaptic Behaviors and Resistive Switching Characterizations by Proton Exchange Reactions in Silicon Oxide

    NASA Astrophysics Data System (ADS)

    Chang, Yao-Feng; Fowler, Burt; Chen, Ying-Chen; Zhou, Fei; Pan, Chih-Hung; Chang, Ting-Chang; Lee, Jack C.

    2016-02-01

    We realize a device with biological synaptic behaviors by integrating silicon oxide (SiOx) resistive switching memory with Si diodes. Minimal synaptic power consumption due to sneak-path current is achieved and the capability for spike-induced synaptic behaviors is demonstrated, representing critical milestones for the use of SiO2-based materials in future neuromorphic computing applications. Biological synaptic behaviors such as long-term potentiation (LTP), long-term depression (LTD) and spike-timing dependent plasticity (STDP) are demonstrated systematically using a comprehensive analysis of spike-induced waveforms, and represent interesting potential applications for SiOx-based resistive switching materials. The resistive switching SET transition is modeled as hydrogen (proton) release from (SiH)2 to generate the hydrogen bridge defect, and the RESET transition is modeled as an electrochemical reaction (proton capture) that re-forms (SiH)2. The experimental results suggest a simple, robust approach to realize programmable neuromorphic chips compatible with large-scale CMOS manufacturing technology.

  11. A synaptic device built in one diode-one resistor (1D-1R) architecture with intrinsic SiOx-based resistive switching memory

    NASA Astrophysics Data System (ADS)

    Chang, Yao-Feng; Fowler, Burt; Chen, Ying-Chen; Zhou, Fei; Pan, Chih-Hung; Chang, Kuan-Chang; Tsai, Tsung-Ming; Chang, Ting-Chang; Sze, Simon M.; Lee, Jack C.

    2016-04-01

    We realize a device with biological synaptic behaviors by integrating silicon oxide (SiOx) resistive switching memory with Si diodes to further minimize total synaptic power consumption due to sneak-path currents and demonstrate the capability for spike-induced synaptic behaviors, representing critical milestones for the use of SiO2-based materials in future neuromorphic computing applications. Biological synaptic behaviors such as long-term potentiation, long-term depression, and spike-timing dependent plasticity are demonstrated systematically with a comprehensive investigation of spike waveform analyses and represent a potential application for SiOx-based resistive switching materials. The resistive switching SET transition is modeled as hydrogen (proton) release from the (SiH)2 defect to generate the hydrogen bridge defect, and the RESET transition is modeled as an electrochemical reaction (proton capture) that re-forms (SiH)2. The experimental results suggest a simple, robust approach to realize programmable neuromorphic chips compatible with large-scale complementary metal-oxide semiconductor manufacturing technology.

  12. Computing Strongly Connected Components in the Streaming Model

    NASA Astrophysics Data System (ADS)

    Laura, Luigi; Santaroni, Federico

    In this paper we present the first algorithm to compute the Strongly Connected Components of a graph in the data stream model (W-Stream), where the graph is represented by a stream of edges and we are allowed to produce intermediate output streams. The algorithm is simple, effective, and can be implemented with few lines of code: it looks at each edge in the stream, and selects the appropriate action with respect to a tree T, representing the graph connectivity seen so far. We analyze the theoretical properties of the algorithm: correctness, memory occupation (O(n log n)), per-item processing time (bounded by the current height of T), and number of passes (bounded by the maximal height of T). We conclude by presenting a brief experimental evaluation of the algorithm against massive synthetic and real graphs that confirms its effectiveness: with graphs of up to 100M nodes and 4G edges, only a few passes are needed, and millions of edges per second are processed.
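    For comparison with the streaming method, the classical in-memory computation the algorithm must reproduce is easy to state. The sketch below is Kosaraju's two-pass SCC algorithm over an edge list held in memory; it is emphatically not the one-pass W-Stream algorithm of the paper, just the reference semantics.

```python
from collections import defaultdict

# Baseline SCC computation (Kosaraju): first pass records DFS finish order on
# the graph; second pass collects components by DFS on the reversed graph,
# visiting roots in decreasing finish time. Both passes are iterative.

def strongly_connected_components(edges):
    graph, rgraph, nodes = defaultdict(list), defaultdict(list), set()
    for u, v in edges:
        graph[u].append(v)
        rgraph[v].append(u)
        nodes.update((u, v))
    order, seen = [], set()
    for s in nodes:                      # pass 1: finish order on G
        if s in seen:
            continue
        seen.add(s)
        stack = [(s, iter(graph[s]))]
        while stack:
            node, it = stack[-1]
            advanced = False
            for nxt in it:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append((nxt, iter(graph[nxt])))
                    advanced = True
                    break
            if not advanced:
                order.append(node)
                stack.pop()
    comps, assigned = [], set()
    for s in reversed(order):            # pass 2: DFS on reversed G
        if s in assigned:
            continue
        comp, stack = [], [s]
        assigned.add(s)
        while stack:
            node = stack.pop()
            comp.append(node)
            for nxt in rgraph[node]:
                if nxt not in assigned:
                    assigned.add(nxt)
                    stack.append(nxt)
        comps.append(sorted(comp))
    return sorted(comps)

comps = strongly_connected_components([(1, 2), (2, 3), (3, 1), (3, 4), (4, 5), (5, 4)])
```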

  13. "AFacet": a geometry based format and visualizer to support SAR and multisensor signature generation

    NASA Astrophysics Data System (ADS)

    Rosencrantz, Stephen; Nehrbass, John; Zelnio, Ed; Sudkamp, Beth

    2018-04-01

    When simulating multisensor signature data (including SAR, LIDAR, EO, IR, etc.), geometry data are required that accurately represent the target. Most vehicular targets can, in real life, exist in many possible configurations. Examples of these configurations might include a rotated turret, an open door, a missing roof rack, or a seat made of metal or wood. Previously we have used the Modelman (.mmp) format and tool to represent and manipulate our articulable models. Unfortunately, Modelman is now an unsupported tool with an undocumented binary format. Some work has been done to reverse engineer a reader in Matlab so that the format could continue to be useful. This work was tedious and resulted in an incomplete conversion. In addition, the resulting articulable models could not be altered and re-saved in the Modelman format. The AFacet (.afacet) articulable facet file format is a replacement for the binary Modelman (.mmp) file format. There is a one-time straightforward path for conversion from Modelman to the AFacet format. It is a simple ASCII, comma-separated, self-documenting format that is easily readable (and in many cases usefully editable) by a human with any text editor, preventing future obsolescence. In addition, because the format is simple, it is relatively easy for even the most novice programmer to create a program to read and write AFacet files in any language without any special libraries. This paper presents the AFacet format, as well as a suite of tools for creating, articulating, manipulating, viewing, and converting the 370+ (when this paper was written) models that have been converted to the AFacet format.
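    The appeal of a simple comma-separated facet format is that a reader fits in a few lines of any language. The record layout below (v,x,y,z vertex lines and f,i,j,k facet lines with # comments) is a hypothetical illustration; the actual AFacet schema is defined by the paper's tool suite and is not reproduced here.

```python
# Hypothetical reader for a simple ASCII, comma-separated facet file of the
# general kind described. The v/f record layout and '#' comment convention
# are invented for illustration; they are NOT the real AFacet schema.

def read_facet_file(lines):
    vertices, facets = [], []
    for raw in lines:
        line = raw.split("#", 1)[0].strip()   # drop comments and blank lines
        if not line:
            continue
        fields = [f.strip() for f in line.split(",")]
        if fields[0] == "v":                  # vertex: v, x, y, z
            vertices.append(tuple(float(x) for x in fields[1:4]))
        elif fields[0] == "f":                # facet: f, i, j, k (vertex indices)
            facets.append(tuple(int(x) for x in fields[1:4]))
    return vertices, facets

sample = [
    "# unit triangle",
    "v, 0, 0, 0",
    "v, 1, 0, 0",
    "v, 0, 1, 0",
    "f, 0, 1, 2",
]
verts, faces = read_facet_file(sample)
```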

  14. Low-Velocity Impact Response of Sandwich Beams with Functionally Graded Core

    NASA Technical Reports Server (NTRS)

    Apetre, N. A.; Sankar, B. V.; Ambur, D. R.

    2006-01-01

    The problem of low-speed impact of a one-dimensional sandwich panel by a rigid cylindrical projectile is considered. The core of the sandwich panel is functionally graded such that the density, and hence its stiffness, vary through the thickness. The problem is a combination of a static contact problem and the dynamic response of the sandwich panel obtained via a simple nonlinear spring-mass model (quasi-static approximation). The variation of core Young's modulus is represented by a polynomial in the thickness coordinate, but the Poisson's ratio is kept constant. The two-dimensional elasticity equations for the plane sandwich structure are solved using a combination of Fourier series and the Galerkin method. The contact problem is solved using the assumed contact stress distribution method. For the impact problem we used a simple dynamic model based on quasi-static behavior of the panel - the sandwich beam was modeled as a combination of two springs, a linear spring to account for the global deflection and a nonlinear spring to represent the local indentation effects. Results indicate that the contact stiffness of the beam with graded core increases, causing the contact stresses and other stress components in the vicinity of contact to increase. However, the values of maximum strains corresponding to the maximum impact load are reduced considerably due to grading of the core properties. For a better comparison, the thickness of the functionally graded cores was chosen such that the flexural stiffness was equal to that of a beam with homogeneous core. The results indicate that functionally graded cores can be used effectively to mitigate or completely prevent impact damage in sandwich composites.
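    The two-spring idealization can be sketched numerically: the projectile displacement x splits into global beam deflection (linear spring k1) and local Hertz-type indentation (F = k2 a^1.5), so the contact force at a given x solves x = F/k1 + (F/k2)^(2/3). The stiffnesses and projectile parameters below are illustrative, not the paper's values.

```python
# Quasi-static two-spring impact sketch: contact force from the series
# combination of a linear spring and a Hertz-type nonlinear spring, with the
# projectile integrated by semi-implicit Euler. All parameters are invented.

def contact_force(x, k1, k2):
    """Solve x = F/k1 + (F/k2)**(2/3) for F by bisection (monotone residual)."""
    if x <= 0.0:
        return 0.0
    lo, hi = 0.0, k1 * x          # F cannot exceed the all-linear bound k1*x
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if mid / k1 + (mid / k2) ** (2.0 / 3.0) < x:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def impact(m, v0, k1, k2, dt=1e-6, t_max=0.02):
    """Integrate projectile motion until rebound; return peak contact force."""
    x, v, t, f_max = 0.0, v0, 0.0, 0.0
    while t < t_max and (x > 0.0 or v > 0.0):
        f = contact_force(x, k1, k2)
        f_max = max(f_max, f)
        v -= (f / m) * dt         # semi-implicit Euler update
        x += v * dt
        t += dt
    return f_max

f_peak = impact(m=0.1, v0=1.0, k1=1e5, k2=1e7)
```

    Making either spring more compliant lowers the peak force, illustrating how the split between global deflection and local indentation governs the contact response.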

  15. Numerical slope stability simulations of chasma walls in Valles Marineris/Mars using a distinct element method (dem).

    NASA Astrophysics Data System (ADS)

    Imre, B.

    2003-04-01

    The 8- to 10-km depths of Valles Marineris (VM) offer excellent views into the upper Martian crust. Layering, fracturing, lithology, stratigraphy and the content of volatiles have influenced the evolution of the Valles Marineris wall slopes, but these parameters also reflect the development of VM and its wall slopes. The scope of this work is to gain understanding of these parameters by back-simulating the development of the wall slopes. For that purpose, the two-dimensional Particle Flow Code PFC2D has been chosen (ITASCA, version 2.00-103). PFC2D is a distinct element code for numerical modelling of movements and interactions of assemblies of arbitrarily sized circular particles. Particles may be bonded together to represent a solid material. Movements of particles are unlimited. That is of importance because results of open systems with numerous unknown variables are non-unique and therefore highly path dependent. This DEM allows the simulation of whole development paths of VM walls, which makes confirmation of the model more complete (e.g. Oreskes et al., Science 263, 1994). To reduce the number of unknown variables, a proper (that means as simple as possible) field site had to be selected. The northern wall of eastern Candor Chasma has been chosen. This wall is up to 8 km high and represents a significant outcrop of the upper Martian crust. It is quite uncomplex, well-aligned and of simple morphology. Currently the work on the model is at the stage of performing the parameter study. Results will be presented via poster at the EGS Meeting.

  16. Mesh Oriented datABase

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tautges, Timothy J.

    MOAB is a component for representing and evaluating mesh data. MOAB can store structured and unstructured mesh, consisting of elements in the finite element "zoo". The functional interface to MOAB is simple yet powerful, allowing the representation of many types of metadata commonly found on the mesh. MOAB is optimized for efficiency in space and time, based on access to mesh in chunks rather than through individual entities, while also versatile enough to support individual entity access. The MOAB data model consists of a mesh interface instance, mesh entities (vertices and elements), sets, and tags. Entities are addressed through handles rather than pointers, to allow the underlying representation of an entity to change without changing the handle to that entity. Sets are arbitrary groupings of mesh entities and other sets. Sets also support parent/child relationships as a relation distinct from sets containing other sets. The directed graph provided by set parent/child relationships is useful for modeling topological relations from a geometric model or other metadata. Tags are named data which can be assigned to the mesh as a whole, individual entities, or sets. Tags are a mechanism for attaching data to individual entities, and sets are a mechanism for describing relations between entities; the combination of these two mechanisms is a powerful yet simple interface for representing metadata or application-specific data. For example, sets and tags can be used together to describe geometric topology, boundary condition, and inter-processor interface groupings in a mesh. MOAB is used in several ways in various applications. MOAB serves as the underlying mesh data representation in the VERDE mesh verification code. MOAB can also be used as a mesh input mechanism, using mesh readers included with MOAB, or as a translator between mesh formats, using readers and writers included with MOAB.

  17. MOAB : a mesh-oriented database.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tautges, Timothy James; Ernst, Corey; Stimpson, Clint

    A finite element mesh is used to decompose a continuous domain into a discretized representation. The finite element method solves PDEs on this mesh by modeling complex functions as a set of simple basis functions with coefficients at mesh vertices and prescribed continuity between elements. The mesh is one of the fundamental types of data linking the various tools in the FEA process (mesh generation, analysis, visualization, etc.). Thus, the representation of mesh data and operations on those data play a very important role in FEA-based simulations. MOAB is a component for representing and evaluating mesh data. MOAB can store structured and unstructured mesh, consisting of elements in the finite element 'zoo'. The functional interface to MOAB is simple yet powerful, allowing the representation of many types of metadata commonly found on the mesh. MOAB is optimized for efficiency in space and time, based on access to mesh in chunks rather than through individual entities, while also versatile enough to support individual entity access. The MOAB data model consists of a mesh interface instance, mesh entities (vertices and elements), sets, and tags. Entities are addressed through handles rather than pointers, to allow the underlying representation of an entity to change without changing the handle to that entity. Sets are arbitrary groupings of mesh entities and other sets. Sets also support parent/child relationships as a relation distinct from sets containing other sets. The directed-graph provided by set parent/child relationships is useful for modeling topological relations from a geometric model or other metadata. Tags are named data which can be assigned to the mesh as a whole, individual entities, or sets.
Tags are a mechanism for attaching data to individual entities, and sets are a mechanism for describing relations between entities; the combination of these two mechanisms is a powerful yet simple interface for representing metadata or application-specific data. For example, sets and tags can be used together to describe geometric topology, boundary condition, and inter-processor interface groupings in a mesh. MOAB is used in several ways in various applications. MOAB serves as the underlying mesh data representation in the VERDE mesh verification code. MOAB can also be used as a mesh input mechanism, using mesh readers included with MOAB, or as a translator between mesh formats, using readers and writers included with MOAB. The remainder of this report is organized as follows. Section 2, 'Getting Started', provides a few simple examples of using MOAB to perform simple tasks on a mesh. Section 3 discusses the MOAB data model in more detail, including some aspects of the implementation. Section 4 summarizes the MOAB function API. Section 5 describes some of the tools included with MOAB, and the implementation of mesh readers/writers for MOAB. Section 6 contains a brief description of MOAB's relation to the TSTT mesh interface. Section 7 gives a conclusion and future plans for MOAB development. Section 8 gives references cited in this report. A reference description of the full MOAB API is contained in Section 9.
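    The handle/set/tag data model is easy to mimic in a few lines. The toy Python class below illustrates the concepts only (MOAB itself is a C++ library with a very different API): entities are opaque integer handles, sets group handles, and tags attach named values to entities or sets.

```python
# Toy analog of the handle/set/tag data model described above. This is an
# illustration of the concepts, not MOAB's actual interface.

class ToyMesh:
    def __init__(self):
        self._next = 1
        self._entities = {}      # handle -> entity kind
        self._sets = {}          # set handle -> list of member handles
        self._tags = {}          # (tag name, handle) -> value

    def create_entity(self, kind):
        """Entities are addressed by opaque integer handles, not pointers."""
        h = self._next
        self._next += 1
        self._entities[h] = kind
        if kind == "set":
            self._sets[h] = []
        return h

    def add_to_set(self, set_h, members):
        self._sets[set_h].extend(members)

    def tag_set(self, name, handle, value):
        """Tags attach named data to any entity or set."""
        self._tags[(name, handle)] = value

    def tag_get(self, name, handle):
        return self._tags[(name, handle)]

mesh = ToyMesh()
verts = [mesh.create_entity("vertex") for _ in range(3)]
bc = mesh.create_entity("set")              # a boundary-condition grouping
mesh.add_to_set(bc, verts)
mesh.tag_set("DIRICHLET_VALUE", bc, 0.0)    # tag on the set, not the vertices
```

    Tagging the set rather than each member is what makes groupings like boundary conditions cheap to represent.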

  18. The Oceanographic Multipurpose Software Environment (OMUSE v1.0)

    NASA Astrophysics Data System (ADS)

    Pelupessy, Inti; van Werkhoven, Ben; van Elteren, Arjen; Viebahn, Jan; Candy, Adam; Portegies Zwart, Simon; Dijkstra, Henk

    2017-08-01

    In this paper we present the Oceanographic Multipurpose Software Environment (OMUSE). OMUSE aims to provide a homogeneous environment for existing or newly developed numerical ocean simulation codes, simplifying their use and deployment. In this way, numerical experiments that combine ocean models representing different physics or spanning different ranges of physical scales can be easily designed. Rapid development of simulation models is made possible through the creation of simple high-level scripts. The low-level core of the abstraction in OMUSE is designed to deploy these simulations efficiently on heterogeneous high-performance computing resources. Cross-verification of simulation models with different codes and numerical methods is facilitated by the unified interface that OMUSE provides. Reproducibility in numerical experiments is fostered by allowing complex numerical experiments to be expressed in portable scripts that conform to a common OMUSE interface. Here, we present the design of OMUSE as well as the modules and model components currently included, which range from a simple conceptual quasi-geostrophic solver to the global circulation model POP (Parallel Ocean Program). The uniform access to the codes' simulation state and the extensive automation of data transfer and conversion operations aids the implementation of model couplings. We discuss the types of couplings that can be implemented using OMUSE. We also present example applications that demonstrate the straightforward model initialization and the concurrent use of data analysis tools on a running model. We give examples of multiscale and multiphysics simulations by embedding a regional ocean model into a global ocean model and by coupling a surface wave propagation model with a coastal circulation model.

  19. An internal variable constitutive model for the large deformation of metals at high temperatures

    NASA Technical Reports Server (NTRS)

    Brown, Stuart; Anand, Lallit

    1988-01-01

    The advent of large deformation finite element methodologies is beginning to permit the numerical simulation of hot working processes whose design until recently has been based on prior industrial experience. Proper application of such finite element techniques requires realistic constitutive equations which more accurately model material behavior during hot working. A simple constitutive model for hot working is the single scalar internal variable model for isotropic thermal elastoplasticity proposed by Anand. The model is recalled, and the specific scalar functions presented for the equivalent plastic strain rate and for the evolution equation of the internal variable are slight modifications of those proposed by Anand. The modified functions are better able to represent high-temperature material behavior. The monotonic constant true strain rate and strain rate jump compression experiments on a 2 percent silicon iron are briefly described. The model is implemented in the general purpose finite element program ABAQUS.

  20. Operational models of infrastructure resilience.

    PubMed

    Alderson, David L; Brown, Gerald G; Carlyle, W Matthew

    2015-04-01

    We propose a definition of infrastructure resilience that is tied to the operation (or function) of an infrastructure as a system of interacting components and that can be objectively evaluated using quantitative models. Specifically, for any particular system, we use quantitative models of system operation to represent the decisions of an infrastructure operator who guides the behavior of the system as a whole, even in the presence of disruptions. Modeling infrastructure operation in this way makes it possible to systematically evaluate the consequences associated with the loss of infrastructure components, and leads to a precise notion of "operational resilience" that facilitates model verification, validation, and reproducible results. Using a simple example of a notional infrastructure, we demonstrate how to use these models for (1) assessing the operational resilience of an infrastructure system, (2) identifying critical vulnerabilities that threaten its continued function, and (3) advising policymakers on investments to improve resilience. © 2014 Society for Risk Analysis.
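    The operator-model idea can be reduced to a cartoon: the operator serves demand using all surviving capacity, and resilience to a disruption is the fraction of demand still served. The three-route network below is notional and invented for illustration.

```python
# Cartoon operator model: demand is served over parallel routes using all
# surviving capacity; resilience to a component loss is the fraction of
# demand still served. Network topology and numbers are invented.

def served_demand(capacities, demand, failed=()):
    """Demand served when the named components have failed."""
    available = sum(c for name, c in capacities.items() if name not in failed)
    return min(demand, available)

routes = {"line_A": 60.0, "line_B": 30.0, "line_C": 20.0}
baseline = served_demand(routes, demand=100.0)

# Operational resilience under each single-component loss.
resilience = {r: served_demand(routes, 100.0, failed=(r,)) / 100.0 for r in routes}
critical = min(resilience, key=resilience.get)   # the most damaging loss
```

    Ranking components by the system's post-loss performance (here line_A is critical) mirrors the vulnerability-identification use described.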

  1. Generic distortion model for metrology under optical microscopes

    NASA Astrophysics Data System (ADS)

    Liu, Xingjian; Li, Zhongwei; Zhong, Kai; Chao, YuhJin; Miraldo, Pedro; Shi, Yusheng

    2018-04-01

    For metrology under optical microscopes, lens distortion is the dominant source of error. Previous distortion models and correction methods mostly rely on parametric distortion models, which require a priori knowledge of the microscope's lens system. However, because of the numerous optical elements in a microscope, distortions can hardly be represented by a simple parametric model. In this paper, a generic distortion model considering both symmetric and asymmetric distortions is developed. Such a model is obtained by using radial basis functions (RBFs) to interpolate the radius and distortion values of symmetric distortions (image coordinates and distortion rays for asymmetric distortions). An accurate and easy-to-implement distortion correction method is presented. With the proposed approach, quantitative measurement with better accuracy can be achieved, such as in Digital Image Correlation for deformation measurement when used with an optical microscope. The proposed technique is verified by both synthetic and real data experiments.
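    The symmetric-distortion part of such a model amounts to interpolating sampled (radius, distortion) pairs with RBFs. Below is a minimal Gaussian-RBF sketch with synthetic calibration data; the kernel, shape parameter, and cubic test function are assumptions, not the paper's choices.

```python
import numpy as np

# Gaussian-RBF interpolation of a symmetric radial distortion curve from
# synthetic calibration samples. Kernel and shape parameter are assumed
# for illustration; real calibrations would sample a physical target.

def rbf_fit(r_samples, d_samples, eps=6.0):
    """Solve for RBF weights at the sample radii; return an evaluator."""
    phi = np.exp(-(eps * (r_samples[:, None] - r_samples[None, :])) ** 2)
    weights = np.linalg.solve(phi, d_samples)
    def evaluate(r):
        r = np.atleast_1d(np.asarray(r, dtype=float))
        return np.exp(-(eps * (r[:, None] - r_samples[None, :])) ** 2) @ weights
    return evaluate

r_cal = np.linspace(0.0, 1.0, 9)    # normalized radii of calibration samples
d_cal = 0.05 * r_cal**3             # synthetic cubic radial distortion
model = rbf_fit(r_cal, d_cal)
```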

  2. Automatic network coupling analysis for dynamical systems based on detailed kinetic models.

    PubMed

    Lebiedz, Dirk; Kammerer, Julia; Brandt-Pollmann, Ulrich

    2005-10-01

    We introduce a numerical complexity reduction method for the automatic identification and analysis of dynamic network decompositions in (bio)chemical kinetics based on error-controlled computation of a minimal model dimension represented by the number of (locally) active dynamical modes. Our algorithm exploits a generalized sensitivity analysis along state trajectories and subsequent singular value decomposition of sensitivity matrices for the identification of these dominant dynamical modes. It allows for a dynamic coupling analysis of (bio)chemical species in kinetic models that can be exploited for the piecewise computation of a minimal model on small time intervals and offers valuable functional insight into highly nonlinear reaction mechanisms and network dynamics. We present results for the identification of network decompositions in a simple oscillatory chemical reaction, time scale separation based model reduction in a Michaelis-Menten enzyme system and network decomposition of a detailed model for the oscillatory peroxidase-oxidase enzyme system.
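    The mode-counting step can be illustrated directly: the local number of active dynamical modes is estimated as the number of singular values of a sensitivity matrix above an error tolerance. The 3x3 matrix below is synthetic, constructed so one row is a multiple of another (three species, two independent response directions).

```python
import numpy as np

# Estimate the local dynamic dimension as the number of singular values of a
# sensitivity matrix exceeding a relative tolerance. The matrix is synthetic.

def active_modes(sensitivity, rel_tol=1e-6):
    s = np.linalg.svd(sensitivity, compute_uv=False)
    return int(np.sum(s > rel_tol * s[0]))

S = np.array([
    [1.0, 0.5, 2.0],
    [0.2, 0.1, 0.4],   # 0.2 * row 1: a linearly dependent response
    [3.0, 1.0, 1.0],
])
n_active = active_modes(S)
```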

  3. Electrical description of N2 capacitively coupled plasmas with the global model

    NASA Astrophysics Data System (ADS)

    Cao, Ming-Lu; Lu, Yi-Jia; Cheng, Jia; Ji, Lin-Hong; Engineering Design Team

    2016-10-01

    N2 discharges in a commercial capacitively coupled plasma reactor are modelled by a combination of an equivalent circuit and the global model, for gas pressures in the range of 1 to 4 Torr. The ohmic and inductive plasma bulk and the capacitive sheath are represented as LCR elements, with electrical characteristics determined by the plasma parameters. The electron density and electron temperature are obtained from the global model, in which a Maxwellian electron distribution is assumed. Voltages and currents are recorded by a VI probe installed after the match network. Using the measured voltage as an input, the current flowing through the discharge volume is calculated from the electrical model and shows excellent agreement with the measurements. The experimentally verified electrical model provides a simple and accurate description of the relationship between the external electrical parameters and the plasma properties, which can serve as a guideline for process-window planning in industrial applications.
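    A minimal sketch of the series-LCR picture, with all element values assumed for illustration rather than derived from plasma parameters as in the paper:

```python
import numpy as np

# Illustrative circuit values (assumed, not from the paper): bulk L and R
# represent electron inertia and collisions, the sheath is a capacitor.
f = 13.56e6                      # drive frequency, Hz (common RF choice)
omega = 2 * np.pi * f
L_bulk, R_bulk = 0.5e-6, 10.0    # H, Ohm
C_sheath = 50e-12                # F

# Series combination: Z = R + j*w*L + 1/(j*w*C)
Z = R_bulk + 1j * omega * L_bulk + 1 / (1j * omega * C_sheath)

V = 100.0   # measured RF voltage amplitude used as input (assumed)
I = V / Z   # current through the discharge predicted by the circuit model
print(abs(I), np.angle(I))
```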

  4. Modeling Methods

    USGS Publications Warehouse

    Healy, Richard W.; Scanlon, Bridget R.

    2010-01-01

    Simulation models are widely used in all types of hydrologic studies, and many of these models can be used to estimate recharge. Models can provide important insight into the functioning of hydrologic systems by identifying factors that influence recharge. The predictive capability of models can be used to evaluate how changes in climate, water use, land use, and other factors may affect recharge rates. Most hydrological simulation models, including watershed models and groundwater-flow models, are based on some form of water-budget equation, so the material in this chapter is closely linked to that in Chapter 2. Empirical models that are not based on a water-budget equation have also been used for estimating recharge; these models generally take the form of simple estimation equations that define annual recharge as a function of precipitation and possibly other climatic data or watershed characteristics. Model complexity varies greatly. Some models are simple accounting models; others attempt to accurately represent the physics of water movement through each compartment of the hydrologic system. Some models provide estimates of recharge explicitly; for example, a model based on the Richards equation can simulate water movement from the soil surface through the unsaturated zone to the water table. Recharge estimates can be obtained indirectly from other models. For example, recharge is a parameter in groundwater-flow models that solve for hydraulic head (i.e. groundwater level). Recharge estimates can be obtained through a model calibration process in which recharge and other model parameter values are adjusted so that simulated water levels agree with measured water levels. The simulation that provides the closest agreement is called the best fit, and the recharge value used in that simulation is the model-generated estimate of recharge.
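    The calibration procedure described above can be sketched with a stand-in "groundwater model"; the linear head response and well values below are hypothetical:

```python
import numpy as np

# Toy calibration loop: a stand-in model maps recharge R (mm/yr) to heads
# (m) at three wells; we keep the R whose simulated heads best match the
# measurements (least squares), mimicking the best-fit procedure described.
def simulate_heads(R):
    # hypothetical linear response of heads to recharge
    return np.array([10.0, 12.0, 9.0]) + np.array([0.02, 0.03, 0.015]) * R

measured = simulate_heads(150.0)          # pretend field data, true R = 150
candidates = np.arange(0.0, 300.0, 1.0)
sse = [np.sum((simulate_heads(R) - measured) ** 2) for R in candidates]
best_R = candidates[int(np.argmin(sse))]
print(best_R)  # → 150.0
```

A real calibration would adjust several parameters at once and use a proper optimizer, but the best-fit logic is the same.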

  5. On the bandwidth of the plenoptic function.

    PubMed

    Do, Minh N; Marchand-Maillet, Davy; Vetterli, Martin

    2012-02-01

    The plenoptic function (POF) provides a powerful conceptual tool for describing a number of problems in image/video processing, vision, and graphics. For example, image-based rendering is shown as sampling and interpolation of the POF. In such applications, it is important to characterize the bandwidth of the POF. We study a simple but representative model of the scene where band-limited signals (e.g., texture images) are "painted" on smooth surfaces (e.g., of objects or walls). We show that, in general, the POF is not band limited unless the surfaces are flat. We then derive simple rules to estimate the essential bandwidth of the POF for this model. Our analysis reveals that, in addition to the maximum and minimum depths and the maximum frequency of painted signals, the bandwidth of the POF also depends on the maximum surface slope. With a unifying formalism based on multidimensional signal processing, we can verify several key results in POF processing, such as induced filtering in space and depth-corrected interpolation, and quantify the necessary sampling rates. © 2011 IEEE

  6. Functional renormalization group analysis of tensorial group field theories on Rd

    NASA Astrophysics Data System (ADS)

    Geloun, Joseph Ben; Martini, Riccardo; Oriti, Daniele

    2016-07-01

    Rank-d tensorial group field theories are quantum field theories (QFTs) defined on a group manifold G×d , which represent a nonlocal generalization of standard QFT and a candidate formalism for quantum gravity, since, when endowed with appropriate data, they can be interpreted as defining a field theoretic description of the fundamental building blocks of quantum spacetime. Their renormalization analysis is crucial both for establishing their consistency as quantum field theories and for studying the emergence of continuum spacetime and geometry from them. In this paper, we study the renormalization group flow of two simple classes of tensorial group field theories (TGFTs), defined for the group G =R for arbitrary rank, both without and with gauge invariance conditions, by means of functional renormalization group techniques. The issue of IR divergences is tackled by the definition of a proper thermodynamic limit for TGFTs. We map the phase diagram of such models, in a simple truncation, and identify both UV and IR fixed points of the RG flow. Encouragingly, for all the models we study, we find evidence for the existence of a phase transition of condensation type.

  7. Convective Propagation Characteristics Using a Simple Representation of Convective Organization

    NASA Astrophysics Data System (ADS)

    Neale, R. B.; Mapes, B. E.

    2016-12-01

    Observed equatorial wave propagation is intimately linked to convective organization and its coupling to features of the larger-scale flow. In this talk we use a simple 4-level model to accommodate the vertical modes of a mass-flux convection scheme (shallow, mid-level and deep). Two paradigms of convection are used to represent convective processes: one with only random (unorganized) diagnosed fluctuations of convective properties, and one with organized fluctuations of convective properties that are amplified by previously existing convection and have an explicit moistening impact on the local convecting environment. We show a series of model simulations in single-column, 2D and 3D configurations, where the role of convective organization in wave propagation is shown to be fundamental. For the optimal choice of parameters linking organization to the local atmospheric state, a broad array of convective wave propagation emerges. Interestingly, the key characteristics of the propagating modes are low-level moistening, followed by deep convection, followed by mature 'large-scale' heating. This organization structure appears to hold firm across timescales, from 5-day wave disturbances to MJO-like wave propagation.

  8. Manual lateralization in macaques: handedness, target laterality and task complexity.

    PubMed

    Regaiolli, Barbara; Spiezio, Caterina; Vallortigara, Giorgio

    2016-01-01

    Non-human primates represent models for understanding the evolution of handedness in humans. Although many studies have investigated non-human primate handedness, few have examined the relationship between target position, hand preference and task complexity. This study aimed at investigating macaque handedness in relation to target laterality and tastiness, as well as task complexity. Seven pig-tailed macaques (Macaca nemestrina) were involved in three different "two alternative choice" tests: one low-level task and two high-level tasks (HLTs). During the first and the third tests macaques could select a preferred food and a non-preferred food, whereas, by modifying the design of the second test, macaques were presented with alternatives that did not differ within a trial. Furthermore, a simple-reaching test was administered to assess hand preference in a social context. Macaques showed hand preference at the individual level both in simple and complex tasks, but not in the simple-reaching test. Moreover, target position seemed to affect hand preference in retrieving an object in the low-level task, but not in the HLTs. Additionally, individual hand preference seemed to be affected by the tastiness of the item to be retrieved. The results suggest that both target laterality and individual motivation might influence the hand preference of macaques, especially in simple tasks.

  9. Performance of friction dampers in geometrically mistuned bladed disk assembly subjected to random excitations

    NASA Astrophysics Data System (ADS)

    Cha, Douksoon

    2018-07-01

    In this study, the performance of friction dampers in a geometrically mistuned bladed disk assembly is examined under random excitations. The results are represented by non-dimensional variables. It is shown that the performance of the blade-to-blade damper can deteriorate when the correlated narrow-band excitations have a dominant frequency near the first natural frequency of the bladed disk assembly. Based on a simple model of a geometrically mistuned bladed disk assembly, the analytical technique shows an efficient way to design friction dampers.

  10. Adsorption of basic dyes on granular activated carbon and natural zeolite.

    PubMed

    Meshko, V; Markovska, L; Mincheva, M; Rodrigues, A E

    2001-10-01

    The adsorption of basic dyes from aqueous solution onto granular activated carbon and natural zeolite has been studied using an agitated batch adsorber. The influence of agitation, initial dye concentration and adsorbent mass has been studied. The parameters of the Langmuir and Freundlich adsorption isotherms have been determined from the adsorption data. A homogeneous diffusion model (solid diffusion) combined with external mass-transfer resistance is proposed for the kinetic investigation. The dependence of the solid diffusion coefficient on the initial concentration and adsorbent mass is represented by simple empirical equations.
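    Fitting Langmuir parameters to batch data typically uses the linearized form C/q = 1/(q_max*K) + C/q_max; a sketch on synthetic equilibrium data (all values assumed, not the paper's measurements):

```python
import numpy as np

# Synthetic equilibrium data (mg/L and mg/g, assumed) generated from a
# Langmuir isotherm q = q_max*K*C / (1 + K*C); we recover q_max and K by
# linear regression on C/q versus C.
q_max_true, K_true = 200.0, 0.05
C = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 200.0])
q = q_max_true * K_true * C / (1 + K_true * C)

# slope = 1/q_max, intercept = 1/(q_max*K)  =>  K = slope/intercept
slope, intercept = np.polyfit(C, C / q, 1)
q_max = 1 / slope
K = slope / intercept
print(q_max, K)
```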

  11. Coarse-grained hydrodynamics from correlation functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palmer, Bruce

    This paper will describe a formalism for using correlation functions between different grid cells as the basis for determining coarse-grained hydrodynamic equations for modeling the behavior of mesoscopic fluid systems. Configurations from a molecular dynamics simulation are projected onto basis functions representing grid cells in a continuum hydrodynamic simulation. Equilibrium correlation functions between different grid cells are evaluated from the molecular simulation and used to determine the evolution operator for the coarse-grained hydrodynamic system. The formalism is applied to some simple hydrodynamic cases to determine the feasibility of applying it to realistic nanoscale systems.

  12. Catenaries in viscous fluid

    NASA Astrophysics Data System (ADS)

    Hanna, James; Chakrabarti, Brato

    2015-11-01

    Slender structures live in fluid flows across many scales, from towed instruments to plant blades to microfluidic valves. The present work details a simple model of a flexible structure in a uniform flow. We present analytical solutions for the translating, axially flowing equilibria of strings subjected to a uniform body force and linear drag forces. This is an extension of the classical catenaries to a five-parameter family of solutions, represented as trajectories in angle-curvature "phase space." Limiting cases include neutrally buoyant towed cables and freely sedimenting flexible filaments.

  13. Simulated laser fluorosensor signals from subsurface chlorophyll distributions

    NASA Technical Reports Server (NTRS)

    Venable, D. D.; Khatun, S.; Punjabi, A.; Poole, L.

    1986-01-01

    A semianalytic Monte Carlo model has been used to simulate laser fluorosensor signals returned from subsurface distributions of chlorophyll. This study assumes the only constituent of the ocean medium is the common coastal zone dinoflagellate Prorocentrum minimum. The concentration is represented by Gaussian distributions in which the location of the distribution maximum and the standard deviation are variable. Most of the qualitative features observed in the fluorescence signal for total chlorophyll concentrations up to 1.0 microg/liter can be accounted for with a simple analytic solution assuming a rectangular chlorophyll distribution function.

  14. Ising model of cardiac thin filament activation with nearest-neighbor cooperative interactions

    NASA Technical Reports Server (NTRS)

    Rice, John Jeremy; Stolovitzky, Gustavo; Tu, Yuhai; de Tombe, Pieter P.; Bers, D. M. (Principal Investigator)

    2003-01-01

    We have developed a model of cardiac thin filament activation using an Ising model approach from equilibrium statistical physics. This model explicitly represents nearest-neighbor interactions between 26 troponin/tropomyosin units along a one-dimensional array that represents the cardiac thin filament. With transition rates chosen to match experimental data, the results show that the resulting force-pCa (F-pCa) relations are similar to Hill functions with asymmetries, as seen in experimental data. Specifically, Hill plots showing (log(F/(1-F)) vs. log [Ca]) reveal a steeper slope below the half activation point (Ca(50)) compared with above. Parameter variation studies show interplay of parameters that affect the apparent cooperativity and asymmetry in the F-pCa relations. The model also predicts that Ca binding is uncooperative for low [Ca], becomes steeper near Ca(50), and becomes uncooperative again at higher [Ca]. The steepness near Ca(50) mirrors the steep F-pCa as a result of thermodynamic considerations. The model also predicts that the correlation between troponin/tropomyosin units along the one-dimensional array quickly decays at high and low [Ca], but near Ca(50), high correlation occurs across the whole array. This work provides a simple model that can account for the steepness and shape of F-pCa relations that other models fail to reproduce.
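    A minimal Metropolis sketch of a 26-unit nearest-neighbor chain; the coupling J and field h (standing in for [Ca]) are illustrative, not the paper's fitted transition rates:

```python
import numpy as np

# 26 units, "off" = -1, "on" = +1, nearest-neighbor coupling J and a
# field h playing the role of [Ca]; energies in units of kT (assumed).
def fraction_on(h, J=1.0, n=26, sweeps=2000, seed=1):
    rng = np.random.default_rng(seed)
    s = -np.ones(n)
    for _ in range(sweeps):
        for i in range(n):
            nb = (s[i - 1] if i > 0 else 0) + (s[i + 1] if i < n - 1 else 0)
            dE = 2 * s[i] * (J * nb + h)   # energy change of flipping s[i]
            if dE <= 0 or rng.random() < np.exp(-dE):
                s[i] = -s[i]
    return (s > 0).mean()

# Activation rises steeply with h, the Ising analogue of a steep F-pCa curve.
print(fraction_on(-2.0), fraction_on(2.0))
```

The paper's model uses kinetic transition rates rather than equilibrium Metropolis sampling, but the nearest-neighbor cooperativity mechanism is the same.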

  15. Elementary Teachers' Selection and Use of Visual Models

    NASA Astrophysics Data System (ADS)

    Lee, Tammy D.; Gail Jones, M.

    2018-02-01

    As science grows in complexity, science teachers face an increasing challenge of helping students interpret models that represent complex science systems. Little is known about how teachers select and use models when planning lessons. This mixed methods study investigated the pedagogical approaches and visual models used by elementary in-service and preservice teachers in the development of a science lesson about a complex system (e.g., water cycle). Sixty-seven elementary in-service and 69 elementary preservice teachers completed a card sort task designed to document the types of visual models (e.g., images) that teachers choose when planning science instruction. Quantitative and qualitative analyses were conducted to analyze the card sort task. Semistructured interviews were conducted with a subsample of teachers to elicit the rationale for image selection. Results from this study showed that both experienced in-service teachers and novice preservice teachers tended to select similar models and use similar rationales for images to be used in lessons. Teachers tended to select models that were aesthetically pleasing and simple in design and illustrated specific elements of the water cycle. The results also showed that teachers were not likely to select images that represented the less obvious dimensions of the water cycle. Furthermore, teachers selected visual models more as a pedagogical tool to illustrate specific elements of the water cycle and less often as a tool to promote student learning related to complex systems.

  16. A simple model for factory distribution: Historical effect in an industry city

    NASA Astrophysics Data System (ADS)

    Uehara, Takashi; Sato, Kazunori; Morita, Satoru; Maeda, Yasunobu; Yoshimura, Jin; Tainaka, Kei-ichi

    2016-02-01

    The construction and discontinuance of factories are complicated processes in sociology. We focus on the spatial and temporal changes of factories in Hamamatsu city, Japan. Real data indicate that the clumping degree of factories decreases as the factory density increases. To represent the spatial and temporal changes of factories, we apply the "contact process", a type of cellular automaton. This model roughly explains the dynamics of the factory distribution. We also find a "historical effect" in the spatial distribution: namely, the recent factories have been dispersed due to the past distribution during the period of the economic bubble. This effect may be related to the heavy shock in the Japanese stock market.
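    The contact process itself is easy to sketch in one dimension; the rates and lattice size below are illustrative, not calibrated to the Hamamatsu data:

```python
import numpy as np

# 1-D contact process sketch: an occupied site ("factory") closes at rate
# delta or colonizes a random nearest neighbor at rate lam. lam = 4 is
# above the 1-D critical value (about 3.3), so occupation survives.
def contact_process(lam=4.0, delta=1.0, n=200, steps=20000, seed=0):
    rng = np.random.default_rng(seed)
    occ = np.ones(n, dtype=bool)          # start fully occupied
    p_birth = lam / (lam + delta)
    for _ in range(steps):
        sites = np.flatnonzero(occ)
        if sites.size == 0:
            break
        i = rng.choice(sites)             # pick a random occupied site
        if rng.random() < p_birth:
            j = (i + rng.choice([-1, 1])) % n
            occ[j] = True                 # colonize a nearest neighbor
        else:
            occ[i] = False                # factory closes
    return occ.mean()

print(contact_process())  # stationary density in the surviving phase
```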

  17. Algorithms for the prediction of retinopathy of prematurity based on postnatal weight gain.

    PubMed

    Binenbaum, Gil

    2013-06-01

    Current ROP screening guidelines represent a simple risk model with two dichotomized factors, birth weight and gestational age at birth. Pioneering work has shown that tracking postnatal weight gain, a surrogate for low insulin-like growth factor 1, may capture the influence of many other ROP risk factors and improve risk prediction. Models including weight gain, such as WINROP, ROPScore, and CHOP ROP, have demonstrated accurate ROP risk assessment and a potentially large reduction in ROP examinations, compared to current guidelines. However, there is a need for larger studies, and generalizability is limited in countries with developing neonatal care systems. Copyright © 2013 Elsevier Inc. All rights reserved.

  18. Nano-swimmers in biological membranes and propulsion hydrodynamics in two dimensions.

    PubMed

    Huang, Mu-Jie; Chen, Hsuan-Yi; Mikhailov, Alexander S

    2012-11-01

    Active protein inclusions in biological membranes can represent nano-swimmers and propel themselves in lipid bilayers. A simple model of an active inclusion with three particles (domains) connected by variable elastic links is considered. First, the membrane is modeled as a two-dimensional viscous fluid and propulsion behavior in two dimensions is examined. After that, an example of a microscopic dynamical simulation is presented, where the lipid bilayer structure of the membrane is resolved and the solvent effects are included by multiparticle collision dynamics. Statistical analysis of data reveals ballistic motion of the swimmer, in contrast to the classical diffusion behavior found in the absence of active transitions between the states.

  19. Radiating dipoles in photonic crystals

    PubMed

    Busch; Vats; John; Sanders

    2000-09-01

    The radiation dynamics of a dipole antenna embedded in a photonic crystal are modeled by an initially excited harmonic oscillator coupled to a non-Markovian bath of harmonic oscillators representing the colored electromagnetic vacuum within the crystal. Realistic coupling constants based on the natural modes of the photonic crystal, i.e., Bloch waves and their associated dispersion relation, are derived. For simple model systems, well-known results such as decay times and emission spectra are reproduced. This approach enables direct incorporation of realistic band structure computations into studies of radiative emission from atoms and molecules within photonic crystals. We therefore provide a predictive and interpretative tool for experiments in both the microwave and optical regimes.

  20. Molecular graph convolutions: moving beyond fingerprints

    NASA Astrophysics Data System (ADS)

    Kearnes, Steven; McCloskey, Kevin; Berndl, Marc; Pande, Vijay; Riley, Patrick

    2016-08-01

    Molecular "fingerprints" encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular graph convolutions, a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph—atoms, bonds, distances, etc.—which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement.
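    A generic one-layer neighbor-aggregation sketch on a toy three-atom graph; this is not the paper's specific architecture (which also encodes bonds and distances), and the weight matrix is assumed:

```python
import numpy as np

# Toy linear "molecule" atom1-atom2-atom3 with one-hot atom features.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)   # adjacency matrix
X = np.eye(3)                            # one-hot atom features
W = np.full((3, 2), 0.5)                 # assumed weight matrix

# One graph-convolution step: aggregate self + neighbors, project, ReLU.
H = np.maximum(0, (A + np.eye(3)) @ X @ W)
print(H)
```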

  1. Consequences of increased longevity for wealth, fertility, and population growth

    NASA Astrophysics Data System (ADS)

    Bogojević, A.; Balaž, A.; Karapandža, R.

    2008-01-01

    We present, solve and numerically simulate a simple model that describes the consequences of increased longevity for fertility rates, population growth and the distribution of wealth in developed societies. We look at the consequences of the repeated use of life extension techniques and show that they represent a novel commodity whose introduction will profoundly influence key aspects of the economy and society in general. In particular, we uncover two phases within our simplified model, labeled as ‘mortal’ and ‘immortal’. Within the life extension scenario it is possible to have sustainable economic growth in a population of stable size, as a result of dynamical equilibrium between the two phases.

  2. Molecular graph convolutions: moving beyond fingerprints.

    PubMed

    Kearnes, Steven; McCloskey, Kevin; Berndl, Marc; Pande, Vijay; Riley, Patrick

    2016-08-01

    Molecular "fingerprints" encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular graph convolutions, a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph (atoms, bonds, distances, etc.), which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement.

  3. A collision probability analysis of the double-heterogeneity problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hebert, A.

    1993-10-01

    A practical collision probability model is presented for the description of geometries with many levels of heterogeneity. Regular regions of the macrogeometry are assumed to contain a stochastic mixture of spherical grains or cylindrical tubes. Simple expressions for the collision probabilities in the global geometry are obtained as a function of the collision probabilities in the macro- and microgeometries. This model was successfully implemented in the collision probability kernel of the APOLLO-1, APOLLO-2, and DRAGON lattice codes for the description of a broad range of reactor physics problems. Resonance self-shielding and depletion calculations in the microgeometries are possible because each microregion is explicitly represented.

  4. Structure and Dynamics of Solvent Landscapes in Charge-Transfer Reactions

    NASA Astrophysics Data System (ADS)

    Leite, Vitor B. Pereira

    The dynamics of solvent polarization plays a major role in the control of charge-transfer reactions. The success of Marcus theory, which describes the solvent influence via a single collective quadratic polarization coordinate, has been remarkable. Onuchic and Wolynes have recently proposed (J. Chem. Phys. 98 (3) 2218, 1993) a simple model demonstrating how a complex many-dimensional model composed of several dipole moments (representing solvent molecules or polar groups in proteins) can be reduced, under the appropriate limits, to the Marcus model. This work presents a dynamical study of the same model, which is characterized by two parameters: an average dipole-dipole interaction and a term associated with the roughness of the potential energy landscape. It is shown why the effective potential, obtained using a thermodynamic approach, is appropriate for the dynamics of the system. At high temperatures, the system exhibits effective diffusive one-dimensional dynamics, where the Born-Marcus limit is recovered. At low temperatures, a glassy phase appears with slow non-self-averaging dynamics. At intermediate temperatures, the concept of equivalent diffusion paths and polarization-dependence effects are discussed. This approach is extended to treat more realistic solvent models. Real solvents are discussed in terms of the simple parameters described above, and an analysis of how different regimes affect the rate of charge transfer is presented. Finally, these ideas are correlated to analogous problems in other areas.

  5. Falling head ponded infiltration in the nonlinear limit

    NASA Astrophysics Data System (ADS)

    Triadis, D.

    2014-12-01

    The Green and Ampt infiltration solution represents only an extreme example of behavior within a larger class of very nonlinear, delta function diffusivity soils. The mathematical analysis of these soils is greatly simplified by the existence of a sharp wetting front below the soil surface. Solutions for more realistic delta function soil models have recently been presented for infiltration under surface saturation without ponding. After general formulation of the problem, solutions for a full suite of delta function soils are derived for ponded surface water depleted by infiltration. Exact expressions for the cumulative infiltration as a function of time, or the drainage time as a function of the initial ponded depth may take implicit or parametric forms, and are supplemented by simple asymptotic expressions valid for small times, and small and large initial ponded depths. As with surface saturation without ponding, the Green-Ampt model overestimates the effect of the soil hydraulic conductivity. At the opposing extreme, a low-conductivity model is identified that also takes a very simple mathematical form and appears to be more accurate than the Green-Ampt model for larger ponded depths. Between these two, the nonlinear limit of Gardner's soil is recommended as a physically valid first approximation. Relative discrepancies between different soil models are observed to reach a maximum for intermediate values of the dimensionless initial ponded depth, and in general are smaller than for surface saturation without ponding.
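    For reference, the Green-Ampt model mentioned above reduces, under surface saturation, to an implicit equation for the cumulative infiltration F(t): F - S*ln(1 + F/S) = K*t with S = psi*dtheta. A fixed-point sketch with assumed soil parameters:

```python
import numpy as np

K = 1.0e-6      # saturated hydraulic conductivity, m/s (assumed)
S = 0.05        # psi * delta_theta, m (assumed)

def green_ampt_F(t, iters=200):
    """Solve F = K*t + S*ln(1 + F/S) by fixed-point iteration.

    The map is a contraction (derivative 1/(1 + F/S) < 1), so it converges.
    """
    F = K * t + S   # starting guess
    for _ in range(iters):
        F = K * t + S * np.log(1 + F / S)
    return F

t = 3600.0  # one hour
F = green_ampt_F(t)
# the residual of the implicit equation vanishes at the solution
print(F, F - S * np.log(1 + F / S) - K * t)
```

The ponded-depth solutions in the paper generalize this surface-saturation case; this sketch only shows the classical limit being extended.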

  6. Evolution of the cerebellum as a neuronal machine for Bayesian state estimation

    NASA Astrophysics Data System (ADS)

    Paulin, M. G.

    2005-09-01

    The cerebellum evolved in association with the electric sense and vestibular sense of the earliest vertebrates. Accurate information provided by these sensory systems would have been essential for precise control of orienting behavior in predation. A simple model shows that individual spikes in electrosensory primary afferent neurons can be interpreted as measurements of prey location. Using this result, I construct a computational neural model in which the spatial distribution of spikes in a secondary electrosensory map forms a Monte Carlo approximation to the Bayesian posterior distribution of prey locations given the sense data. The neural circuit that emerges naturally to perform this task resembles the cerebellar-like hindbrain electrosensory filtering circuitry of sharks and other electrosensory vertebrates. The optimal filtering mechanism can be extended to handle dynamical targets observed from a dynamical platform; that is, to construct an optimal dynamical state estimator using spiking neurons. This may provide a generic model of cerebellar computation. Vertebrate motion-sensing neurons have specific fractional-order dynamical characteristics that allow Bayesian state estimators to be implemented elegantly and efficiently, using simple operations with asynchronous pulses, i.e. spikes. The computational neural models described in this paper represent a novel kind of particle filter, using spikes as particles. The models are specific and make testable predictions about computational mechanisms in cerebellar circuitry, while providing a plausible explanation of cerebellar contributions to aspects of motor control, perception and cognition.
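    A conventional bootstrap particle filter makes the "spikes as particles" analogy concrete; the code below is a generic 1-D tracking sketch with assumed dynamics and noise levels, not the electrosensory model itself:

```python
import numpy as np

# Bootstrap particle filter for a 1-D random-walk target observed in
# Gaussian noise; each particle plays the role the model assigns to a spike.
rng = np.random.default_rng(42)
n_part, n_steps = 500, 50
q, r = 0.1, 0.5                 # process and observation noise std (assumed)

x_true = 0.0
particles = rng.standard_normal(n_part)
errs = []
for _ in range(n_steps):
    x_true += q * rng.standard_normal()           # target moves
    z = x_true + r * rng.standard_normal()        # noisy observation
    particles += q * rng.standard_normal(n_part)  # predict step
    w = np.exp(-0.5 * ((z - particles) / r) ** 2) # likelihood weights
    w /= w.sum()
    est = np.dot(w, particles)                    # posterior-mean estimate
    idx = rng.choice(n_part, size=n_part, p=w)    # resample
    particles = particles[idx]
    errs.append(abs(est - x_true))

print(np.mean(errs))  # tracking error well below the raw observation noise
```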

  7. Magnetic gaps in organic tri-radicals: From a simple model to accurate estimates.

    PubMed

    Barone, Vincenzo; Cacelli, Ivo; Ferretti, Alessandro; Prampolini, Giacomo

    2017-03-14

    The calculation of the energy gap between the magnetic states of organic poly-radicals still represents a challenging playground for quantum chemistry, and high-level techniques are required to obtain accurate estimates. On these grounds, the aim of the present study is twofold. On the one hand, it shows that, thanks to recent algorithmic and technical improvements, we are able to compute reliable quantum mechanical results for the systems of current fundamental and technological interest. On the other hand, proper parameterization of a simple Hubbard Hamiltonian allows for a sound rationalization of magnetic gaps in terms of basic physical effects, unraveling the role played by electron delocalization, Coulomb repulsion, and effective exchange in tuning the magnetic character of the ground state. As case studies, we have chosen three prototypical organic tri-radicals, namely, 1,3,5-trimethylenebenzene, 1,3,5-tridehydrobenzene, and 1,2,3-tridehydrobenzene, which differ in either geometric or electronic structure. After discussing the differences among the three species and their consequences for the magnetic properties in terms of the simple model mentioned above, accurate and reliable values for the energy gap between the lowest quartet and doublet states are computed by means of the so-called difference dedicated configuration interaction (DDCI) technique, and the final results are discussed and compared to both available experimental and computational estimates.
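    In the strong-coupling limit, a Hubbard picture of three equivalent radical centers reduces to a three-spin Heisenberg triangle, for which the quartet-doublet gap is 3J/2; a small exact-diagonalization sketch (J illustrative):

```python
import numpy as np

# Spin-1/2 operators and a helper to embed them in the 3-site Hilbert space.
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
I2 = np.eye(2)

def op(s, site):
    mats = [I2, I2, I2]
    mats[site] = s
    return np.kron(np.kron(mats[0], mats[1]), mats[2])

# H = J * (S1.S2 + S2.S3 + S1.S3) on an equilateral triangle, J > 0 (AFM).
J = 1.0
H = sum(J * (op(s, i) @ op(s, j))
        for s in (sx, sy, sz)
        for i, j in ((0, 1), (1, 2), (0, 2)))
E = np.linalg.eigvalsh(H)
print(np.round(E, 6))  # four doublet states at -3J/4, four quartet at +3J/4
```

This toy model only captures the effective-exchange limit; the paper's Hubbard parameterization additionally resolves delocalization and Coulomb repulsion.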

  8. How well can regional fluxes be derived from smaller-scale estimates?

    NASA Technical Reports Server (NTRS)

    Moore, Kathleen E.; Fitzjarrald, David R.; Ritter, John A.

    1992-01-01

    Regional surface fluxes are essential lower boundary conditions for large scale numerical weather and climate models and are the elements of global budgets of important trace gases. Surface properties affecting the exchange of heat, moisture, momentum and trace gases vary with length scales from one meter to hundreds of km. A classical difficulty is that fluxes have been measured directly only at points or along lines. The process of scaling up observations limited in space and/or time to represent larger areas was done by assigning properties to surface classes and combining estimated or calculated fluxes using an area weighted average. It is not clear that a simple area weighted average is sufficient to produce the large scale from the small scale, chiefly due to the effect of internal boundary layers, nor is it known how important the uncertainty is to large scale model outcomes. Simultaneous aircraft and tower data obtained in the relatively simple terrain of the western Alaska tundra were used to determine the extent to which surface type variation can be related to fluxes of heat, moisture, and other properties. Surface type was classified as lake or land with aircraft borne infrared thermometer, and flight level heat and moisture fluxes were related to surface type. The magnitude and variety of sampling errors inherent in eddy correlation flux estimation place limits on how well any flux can be known even in simple geometries.

  9. Modeling reactive transport processes in fractured rock using the time domain random walk approach within a dual-porosity framework

    NASA Astrophysics Data System (ADS)

    Roubinet, D.; Russian, A.; Dentz, M.; Gouze, P.

    2017-12-01

    Characterizing and modeling hydrodynamic reactive transport in fractured rock are critical challenges for various research fields and applications including environmental remediation, geological storage, and energy production. To this end, we consider a recently developed time domain random walk (TDRW) approach, which is adapted to reproduce anomalous transport behaviors and capture heterogeneous structural and physical properties. This method is also very well suited to optimize numerical simulations by memory-shared massive parallelization and provide numerical results at various scales. So far, the TDRW approach has been applied for modeling advective-diffusive transport with mass transfer between mobile and immobile regions and simple (theoretical) reactions in heterogeneous porous media represented as single continuum domains. We extend this approach to dual-continuum representations considering a highly permeable fracture network embedded into a poorly permeable rock matrix with heterogeneous geochemical reactions occurring in both geological structures. The resulting numerical model enables us to extend the range of the modeled heterogeneity scales with an accurate representation of solute transport processes and no assumption on the Fickianity of these processes. The proposed model is compared to existing particle-based methods that are usually used to model reactive transport in fractured rocks assuming a homogeneous surrounding matrix, and is used to evaluate the impact of the matrix heterogeneity on the apparent reaction rates for different 2D and 3D simple-to-complex fracture network configurations.

  10. Preliminary calculation of solar cosmic ray dose to the female breast in space mission

    NASA Technical Reports Server (NTRS)

    Shavers, Mark; Poston, John W.; Atwell, William; Hardy, Alva C.; Wilson, John W.

    1991-01-01

    No regulatory dose limits are specifically assigned for the radiation exposure of female breasts during manned space flight. However, the relatively high radiosensitivity of the glandular tissue of the breasts and its potential exposure to solar flare protons on short- and long-term missions mandate a priori estimation of the associated risks. A model for estimating exposure within the breast is developed for use in future NASA missions. The female breast and torso geometry is represented by a simple interim model. A recently developed proton dose-buildup procedure is used for estimating doses. The model considers geomagnetic shielding, magnetic-storm conditions, spacecraft shielding, and body self-shielding. Inputs to the model include proton energy spectra, spacecraft orbital parameters, STS orbiter-shielding distribution at a given position, and a single parameter allowing for variation in breast size.

  11. Fish tracking by combining motion based segmentation and particle filtering

    NASA Astrophysics Data System (ADS)

    Bichot, E.; Mascarilla, L.; Courtellemont, P.

    2006-01-01

    In this paper, we suggest a new importance sampling scheme to improve a particle filtering based tracking process. This scheme relies on the exploitation of motion segmentation. More precisely, we propagate hypotheses from particle filtering to blobs whose motion is similar to that of the target. Hence, search is driven toward regions of interest in the state space and prediction is more accurate. We also propose to exploit segmentation to update the target model. Once the moving target has been identified, a representative model is learnt from its spatial support. We refer to this model in the correction step of the tracking process. The importance sampling scheme and the strategy to update the target model improve the performance of particle filtering in complex situations of occlusion compared to a simple bootstrap approach, as shown by our experiments on real fish tank sequences.
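    The baseline the authors compare against is a plain bootstrap particle filter. A minimal one-dimensional sketch, assuming a random-walk state model and a Gaussian likelihood (all parameters illustrative, unrelated to the fish-tracking system):

```python
import math
import random

def bootstrap_particle_filter(observations, n=500, seed=0):
    """Minimal 1D bootstrap particle filter.
    State model: x' = x + N(0, 1); observation: y = x + N(0, 1)."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n)]
    estimates = []
    for y in observations:
        # predict: propagate each particle through the motion model
        particles = [x + rng.gauss(0.0, 1.0) for x in particles]
        # weight by the Gaussian observation likelihood
        weights = [math.exp(-0.5 * (y - x) ** 2) for x in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        # estimate: posterior mean of the weighted ensemble
        estimates.append(sum(w * x for w, x in zip(weights, particles)))
        # resample (multinomial) to avoid weight degeneracy
        particles = rng.choices(particles, weights=weights, k=n)
    return estimates

obs = [0.5, 1.0, 1.5, 2.0]
est = bootstrap_particle_filter(obs)
```

    The paper's contribution replaces the blind prediction step with proposals concentrated on motion-segmented blobs, so particles are spent in regions of the state space where the target is likely to be.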

  12. Experimental and Numerical Correlation of Gravity Sag in Solar Sail Quality Membranes

    NASA Technical Reports Server (NTRS)

    Black, Jonathan T.; Leifer, Jack; DeMoss, Joshua A.; Walker, Eric N.; Belvin, W. Keith

    2004-01-01

    Solar sails are among the most studied members of the ultra-lightweight and inflatable (Gossamer) space structures family due to their potential to provide propellantless propulsion. They are composed of ultra-thin membrane panels that, to date, have proven very difficult to experimentally characterize and numerically model due to their reflectivity and flexibility, and the effects of gravity sag and air damping. Numerical models must be correlated with experimental measurements of sub-scale solar sails to verify that the models can be scaled up to represent full-sized solar sails. In this paper, the surface shapes of five horizontally supported 25 micron thick aluminized Kapton membranes were measured to a 1.0 mm resolution using photogrammetry. Several simple numerical models closely match the experimental data, proving the ability of finite element simulations to predict actual behavior of solar sails.

  13. Representing ductile damage with the dual domain material point method

    DOE PAGES

    Long, C. C.; Zhang, D. Z.; Bronkhorst, C. A.; ...

    2015-12-14

    In this study, we incorporate a ductile damage material model into a computational framework based on the Dual Domain Material Point (DDMP) method. As an example, simulations of a flyer plate experiment involving ductile void growth and material failure are performed. The results are compared with experiments performed on high purity tantalum. We also compare the numerical results obtained from the DDMP method with those obtained from the traditional Material Point Method (MPM). Effects of an overstress model, artificial viscosity, and physical viscosity are investigated. Our results show that a physical bulk viscosity and overstress model are important in this impact and failure problem, while physical shear viscosity and artificial shock viscosity have negligible effects. A simple numerical procedure with guaranteed convergence is introduced to solve for the equilibrium plastic state from the ductile damage model.

  14. Evaluation of the Williams-type model for barley yields in North Dakota and Minnesota

    NASA Technical Reports Server (NTRS)

    Barnett, T. L. (Principal Investigator)

    1981-01-01

    The Williams-type yield model is based on multiple regression analysis of historical time series data at CRD level pooled to regional level (groups of similar CRDs). Basic variables considered in the analysis include USDA yield, monthly mean temperature, monthly precipitation, soil texture and topographic information, and variables derived from these. Technological trend is represented by piecewise linear and/or quadratic functions of year. Indicators of yield reliability obtained from a ten-year bootstrap test (1970-1979) demonstrate that biases are small and performance based on root mean square appears to be acceptable for the intended AgRISTARS large-area applications. The model is objective, adequate, timely, simple, and not costly. It considers scientific knowledge on a broad scale but not in detail, and does not provide a good current measure of modeled yield reliability.

  15. Amelioration of ischemic brain damage by peritoneal dialysis

    PubMed Central

    Godino, María del Carmen; Romera, Victor G.; Sánchez-Tomero, José Antonio; Pacheco, Jesus; Canals, Santiago; Lerma, Juan; Vivancos, José; Moro, María Angeles; Torres, Magdalena; Lizasoain, Ignacio; Sánchez-Prieto, José

    2013-01-01

    Ischemic stroke is a devastating condition, for which there is still no effective therapy. Acute ischemic stroke is associated with high concentrations of glutamate in the blood and interstitial brain fluid. The inability of the tissue to retain glutamate within the cells of the brain ultimately provokes neuronal death. Increased concentrations of interstitial glutamate exert further excitotoxic effects on healthy tissue surrounding the infarct zone. We developed a strategy based on peritoneal dialysis to reduce blood glutamate levels, thereby accelerating brain-to-blood glutamate clearance. In a rat model of stroke, this simple procedure reduced the transient increase in glutamate, consequently decreasing the size of the infarct area. Functional magnetic resonance imaging demonstrated that the rescued brain tissue remained functional. Moreover, in patients with kidney failure, peritoneal dialysis significantly decreased glutamate concentrations. Our results suggest that peritoneal dialysis may represent a simple and effective intervention for human stroke patients. PMID:23999426

  16. Thermalizing Sterile Neutrino Dark Matter

    NASA Astrophysics Data System (ADS)

    Hansen, Rasmus S. L.; Vogl, Stefan

    2017-12-01

    Sterile neutrinos produced through oscillations are a well motivated dark matter candidate, but recent constraints from observations have ruled out most of the parameter space. We analyze the impact of new interactions on the evolution of keV sterile neutrino dark matter in the early Universe. Based on general considerations we find a mechanism which thermalizes the sterile neutrinos after an initial production by oscillations. The thermalization of sterile neutrinos is accompanied by dark entropy production which increases the yield of dark matter and leads to a lower characteristic momentum. This resolves the growing tensions with structure formation and x-ray observations and even revives simple nonresonant production as a viable way to produce sterile neutrino dark matter. We investigate the parameters required for the realization of the thermalization mechanism in a representative model and find that a simple estimate based on energy and entropy conservation describes the mechanism well.

  17. Thermalizing Sterile Neutrino Dark Matter.

    PubMed

    Hansen, Rasmus S L; Vogl, Stefan

    2017-12-22

    Sterile neutrinos produced through oscillations are a well motivated dark matter candidate, but recent constraints from observations have ruled out most of the parameter space. We analyze the impact of new interactions on the evolution of keV sterile neutrino dark matter in the early Universe. Based on general considerations we find a mechanism which thermalizes the sterile neutrinos after an initial production by oscillations. The thermalization of sterile neutrinos is accompanied by dark entropy production which increases the yield of dark matter and leads to a lower characteristic momentum. This resolves the growing tensions with structure formation and x-ray observations and even revives simple nonresonant production as a viable way to produce sterile neutrino dark matter. We investigate the parameters required for the realization of the thermalization mechanism in a representative model and find that a simple estimate based on energy and entropy conservation describes the mechanism well.

  18. Universality classes of fluctuation dynamics in hierarchical complex systems

    NASA Astrophysics Data System (ADS)

    Macêdo, A. M. S.; González, Iván R. Roa; Salazar, D. S. P.; Vasconcelos, G. L.

    2017-03-01

    A unified approach is proposed to describe the statistics of the short-time dynamics of multiscale complex systems. The probability density function of the relevant time series (signal) is represented as a statistical superposition of a large time-scale distribution weighted by the distribution of certain internal variables that characterize the slowly changing background. The dynamics of the background is formulated as a hierarchical stochastic model whose form is derived from simple physical constraints, which in turn restrict the dynamics to only two possible classes. The probability distributions of both the signal and the background have simple representations in terms of Meijer G functions. The two universality classes for the background dynamics manifest themselves in the signal distribution as two types of tails: power law and stretched exponential, respectively. A detailed analysis of empirical data from classical turbulence and financial markets shows excellent agreement with the theory.

  19. Uncovering Oscillations, Complexity, and Chaos in Chemical Kinetics Using Mathematica

    NASA Astrophysics Data System (ADS)

    Ferreira, M. M. C.; Ferreira, W. C., Jr.; Lino, A. C. S.; Porto, M. E. G.

    1999-06-01

    Unlike reactions with no peculiar temporal behavior, in oscillatory reactions concentrations can rise and fall spontaneously in a cyclic or disorganized fashion. In this article, the software Mathematica is used for a theoretical study of kinetic mechanisms of oscillating and chaotic reactions. A first simple example is introduced through a three-step reaction, called the Lotka model, which exhibits a temporal behavior characterized by damped oscillations. The phase plane method of dynamic systems theory is introduced for a geometric interpretation of the reaction kinetics without solving the differential rate equations. The equations are later numerically solved using the built-in routine NDSolve and the results are plotted. The next example, still with a very simple mechanism, is the Lotka-Volterra model reaction, which oscillates indefinitely. The kinetic process and rate equations are also represented by a three-step reaction mechanism. The most important difference between this and the former reaction is that the undamped oscillation has two autocatalytic steps instead of one. The periods of oscillations are obtained by using the discrete Fourier transform (DFT), a well-known tool in spectroscopy, although not so common in this context. In the last section, it is shown how a simple model of biochemical interactions can be useful to understand the complex behavior of important biological systems. The model consists of two allosteric enzymes coupled in series and activated by their own products. This reaction scheme is important for explaining many metabolic mechanisms, such as the glycolytic oscillations in muscles, yeast glycolysis, and the periodic synthesis of cyclic AMP. A few of many possible dynamic behaviors are exemplified through a prototype glycolytic enzymatic reaction proposed by Decroly and Goldbeter. By simply modifying the initial concentrations, limit cycles, chaos, and birhythmicity are computationally obtained and visualized.
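    The Lotka-Volterra rate equations that the article integrates with Mathematica's NDSolve can equally be integrated with a hand-rolled Runge-Kutta step; a sketch with illustrative rate constants (not taken from the article):

```python
def lotka_volterra_rk4(a=1.0, b=0.5, c=1.0, d=0.5, x0=2.0, y0=1.0,
                       dt=0.01, steps=2000):
    """Integrate dx/dt = a*x - b*x*y, dy/dt = -c*y + d*x*y with a
    classic fourth-order Runge-Kutta step."""
    def f(x, y):
        return a * x - b * x * y, -c * y + d * x * y

    xs, ys = [x0], [y0]
    x, y = x0, y0
    for _ in range(steps):
        k1x, k1y = f(x, y)
        k2x, k2y = f(x + dt * k1x / 2, y + dt * k1y / 2)
        k3x, k3y = f(x + dt * k2x / 2, y + dt * k2y / 2)
        k4x, k4y = f(x + dt * k3x, y + dt * k3y)
        x += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6
        y += dt * (k1y + 2 * k2y + 2 * k3y + k4y) / 6
        xs.append(x)
        ys.append(y)
    return xs, ys

xs, ys = lotka_volterra_rk4()
```

    Plotting ys against xs reproduces the closed phase-plane orbits of the undamped oscillation discussed above; a DFT of either series recovers the oscillation period.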

  20. WHAT IS A MOMENT ARM? CALCULATING MUSCLE EFFECTIVENESS IN BIOMECHANICAL MODELS USING GENERALIZED COORDINATES

    PubMed Central

    Seth, Ajay; Delp, Scott L.

    2015-01-01

    Biomechanics researchers often use multibody models to represent biological systems. However, the mapping from biology to mechanics and back can be problematic. OpenSim is a popular open source tool used for this purpose, mapping between biological specifications and an underlying generalized coordinate multibody system called Simbody. One quantity of interest to biomechanical researchers and clinicians is “muscle moment arm,” a measure of the effectiveness of a muscle at contributing to a particular motion over a range of configurations. OpenSim can automatically calculate these quantities for any muscle once a model has been built. For simple cases, this calculation is the same as the conventional moment arm calculation in mechanical engineering. But a muscle may span several joints (e.g., wrist, neck, back) and may follow a convoluted path over various curved surfaces. A biological joint may require several bodies or even a mechanism to accurately represent in the multibody model (e.g., knee, shoulder). In these situations we need a careful definition of muscle moment arm that is analogous to the mechanical engineering concept, yet generalized to be of use to biomedical researchers. Here we present some biomechanical modeling challenges and how they are resolved in OpenSim and Simbody to yield biologically meaningful muscle moment arms. PMID:25905111

  1. WHAT IS A MOMENT ARM? CALCULATING MUSCLE EFFECTIVENESS IN BIOMECHANICAL MODELS USING GENERALIZED COORDINATES.

    PubMed

    Sherman, Michael A; Seth, Ajay; Delp, Scott L

    2013-08-01

    Biomechanics researchers often use multibody models to represent biological systems. However, the mapping from biology to mechanics and back can be problematic. OpenSim is a popular open source tool used for this purpose, mapping between biological specifications and an underlying generalized coordinate multibody system called Simbody. One quantity of interest to biomechanical researchers and clinicians is "muscle moment arm," a measure of the effectiveness of a muscle at contributing to a particular motion over a range of configurations. OpenSim can automatically calculate these quantities for any muscle once a model has been built. For simple cases, this calculation is the same as the conventional moment arm calculation in mechanical engineering. But a muscle may span several joints (e.g., wrist, neck, back) and may follow a convoluted path over various curved surfaces. A biological joint may require several bodies or even a mechanism to accurately represent in the multibody model (e.g., knee, shoulder). In these situations we need a careful definition of muscle moment arm that is analogous to the mechanical engineering concept, yet generalized to be of use to biomedical researchers. Here we present some biomechanical modeling challenges and how they are resolved in OpenSim and Simbody to yield biologically meaningful muscle moment arms.
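    For a single hinge joint, the generalized-coordinate definition discussed in these records reduces to the familiar result that the moment arm is the derivative of musculotendon length with respect to the joint angle. A toy sketch (the straight-line muscle geometry below is hypothetical, not an OpenSim model):

```python
import math

def muscle_length(theta, a=0.3, b=0.05):
    """Length of a straight-line muscle spanning a hinge joint: origin
    fixed at (-a, 0), insertion at radius b on the rotating segment.
    By the law of cosines, L = sqrt(a^2 + b^2 + 2*a*b*cos(theta))."""
    return math.sqrt(a * a + b * b + 2 * a * b * math.cos(theta))

def moment_arm(theta, h=1e-6):
    """Moment arm as the derivative of muscle length with respect to
    the generalized coordinate, estimated by central difference."""
    return (muscle_length(theta + h) - muscle_length(theta - h)) / (2 * h)

theta = math.radians(60)
r_numeric = moment_arm(theta)
# closed form for this geometry: dL/dtheta = -a*b*sin(theta) / L
r_exact = -0.3 * 0.05 * math.sin(theta) / muscle_length(theta)
```

    The sign convention (whether dL/dq or its negative counts as the moment arm) varies between communities; the point of the papers above is that for multi-joint, wrapped, or mechanism-coupled muscles this simple derivative picture needs the careful generalized-coordinate treatment provided by Simbody.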

  2. A model for allometric scaling of mammalian metabolism with ambient heat loss.

    PubMed

    Kwak, Ho Sang; Im, Hong G; Shim, Eun Bo

    2016-03-01

    Allometric scaling, which represents the dependence of biological traits or processes on body size, is a long-standing subject in biological science. However, no previous study has considered heat loss to the ambient environment together with an insulation layer representing mammalian skin and fur when deriving the scaling law of metabolism. A simple heat transfer model is proposed to analyze the allometry of mammalian metabolism. The present model extends existing studies by incorporating various external heat transfer parameters and additional insulation layers. The model equations were solved numerically and by an analytic heat balance approach. A general observation is that the present heat transfer model predicted the 2/3 surface scaling law, which is primarily attributed to the dependence of the surface area on the body mass. External heat transfer effects introduced deviations in the scaling law, mainly due to natural convection heat transfer, which becomes more prominent at smaller mass. These deviations resulted in a slight modification of the scaling exponent to a value < 2/3. The finding that additional radiative heat loss and the consideration of an outer insulation fur layer attenuate these deviations and render the scaling law closer to 2/3 provides in silico evidence for a functional impact of heat transfer mode on the allometric scaling law in mammalian metabolism.
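    The scaling exponent referred to above is the slope of a log-log regression of metabolic rate against body mass; a sketch on synthetic data generated to follow the 2/3 surface law exactly (for illustration only, not data from the study):

```python
import math

def scaling_exponent(masses, rates):
    """Least-squares slope of log(rate) vs log(mass): the allometric
    scaling exponent."""
    xs = [math.log(m) for m in masses]
    ys = [math.log(r) for r in rates]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

masses = [0.02, 0.2, 2.0, 20.0, 200.0]       # body mass, kg
rates = [m ** (2.0 / 3.0) for m in masses]   # B proportional to M^(2/3)
b = scaling_exponent(masses, rates)
```

    In the model discussed above, convection at small mass would pull points below this line, tilting the fitted exponent below 2/3.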

  3. A biologically inspired two-species exclusion model: effects of RNA polymerase motor traffic on simultaneous DNA replication

    NASA Astrophysics Data System (ADS)

    Ghosh, Soumendu; Mishra, Bhavya; Patra, Shubhadeep; Schadschneider, Andreas; Chowdhury, Debashish

    2018-04-01

    We introduce a two-species exclusion model to describe the key features of the conflict between the RNA polymerase (RNAP) motor traffic, engaged in the transcription of a segment of DNA, concomitant with the progress of two DNA replication forks on the same DNA segment. One of the species of particles (P) represents RNAP motors while the other (R) represents the replication forks. Motivated by the biological phenomena that this model is intended to capture, at most two R particles are allowed to enter the lattice, from opposite ends, whereas an unrestricted number of P particles constitutes a totally asymmetric simple exclusion process (TASEP) in a segment in the middle of the lattice. The model captures three distinct pathways for resolving the co-directional as well as head-on collisions between the P and R particles. Using Monte Carlo simulations and heuristic analytical arguments that combine exact results for the TASEP with mean-field approximations, we predict the possible outcomes of the conflict between the traffic of RNAP motors (P particles engaged in transcription) and the replication forks (R particles). In principle, the model can be adapted to experimental conditions to account for the data quantitatively.
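    A minimal single-species TASEP with open boundaries, the building block underlying the two-species model (here ignoring the R species and the collision pathways; entry/exit rates are illustrative, not taken from the paper):

```python
import random

def tasep(L=50, alpha=0.3, beta=0.3, sweeps=2000, seed=1):
    """Random-sequential-update TASEP with open boundaries:
    particles enter at rate alpha at the left end, hop one site to the
    right subject to exclusion, and exit at rate beta at the right end."""
    rng = random.Random(seed)
    lattice = [0] * L
    for _ in range(sweeps * L):
        i = rng.randrange(-1, L)
        if i == -1:                       # entry at the left boundary
            if lattice[0] == 0 and rng.random() < alpha:
                lattice[0] = 1
        elif i == L - 1:                  # exit at the right boundary
            if lattice[-1] == 1 and rng.random() < beta:
                lattice[-1] = 0
        elif lattice[i] == 1 and lattice[i + 1] == 0:
            lattice[i], lattice[i + 1] = 0, 1   # exclusion hop
    return lattice

lattice = tasep()
density = sum(lattice) / len(lattice)
```

    With alpha = beta < 1/2 the system sits in the low-density phase with bulk density near alpha, one of the exact TASEP results the paper combines with mean-field arguments.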

  4. Observ-OM and Observ-TAB: Universal syntax solutions for the integration, search, and exchange of phenotype and genotype information.

    PubMed

    Adamusiak, Tomasz; Parkinson, Helen; Muilu, Juha; Roos, Erik; van der Velde, Kasper Joeri; Thorisson, Gudmundur A; Byrne, Myles; Pang, Chao; Gollapudi, Sirisha; Ferretti, Vincent; Hillege, Hans; Brookes, Anthony J; Swertz, Morris A

    2012-05-01

    Genetic and epidemiological research increasingly employs large collections of phenotypic and molecular observation data from high quality human and model organism samples. Standardization efforts have produced a few simple formats for exchange of these various data, but a lightweight and convenient data representation scheme for all data modalities does not exist, hindering successful data integration, such as assignment of mouse models to orphan diseases and phenotypic clustering for pathways. We report a unified system to integrate and compare observation data across experimental projects, disease databases, and clinical biobanks. The core object model (Observ-OM) comprises only four basic concepts to represent any kind of observation: Targets, Features, Protocols (and their Applications), and Values. An easy-to-use file format (Observ-TAB) employs Excel to represent individual and aggregate data in straightforward spreadsheets. The systems have been tested successfully on human biobank, genome-wide association studies, quantitative trait loci, model organism, and patient registry data using the MOLGENIS platform to quickly setup custom data portals. Our system will dramatically lower the barrier for future data sharing and facilitate integrated search across panels and species. All models, formats, documentation, and software are available for free and open source (LGPLv3) at http://www.observ-om.org. © 2012 Wiley Periodicals, Inc.

  5. Robust nonlinear attitude control with disturbance compensation

    NASA Astrophysics Data System (ADS)

    Walchko, Kevin Jack

    Attitude control of small spacecraft is a particularly important component of many missions in the space program: the Hubble Space Telescope for observing the cosmos, GPS satellites for navigation, SeaWiFS for studying phytoplankton concentrations in the ocean, etc. Typically, designers use proportional derivative control because it is simple to understand and implement. However, this method lacks robustness in the presence of disturbances and uncertainties. Thus, to improve the fidelity of this simulation, two disturbances were included: fuel slosh and solar snap. Fuel slosh is the unwanted movement of fuel inside a fuel tank. The fuel slosh model used for the satellite represents each sloshing mode as a mass-spring-damper. The mass represents the wave of fuel that propagates across the tank, the damper represents the baffling that hinders its movement, and the spring represents the force imparted to the spacecraft when the wave impacts the tank wall. This formulation makes the incorporation of multiple modes of interest simple, an advance over the typical one-mode pendulum model. Thermally induced vibrations, or solar snap, occur as a satellite transitions from the day-to-night or night-to-day side of a planet. During this transition, there is a sudden change in the amount of heat flux to the solar panels, and vibrations occur. Few authors have looked at the effects of solar snap. The disturbance dynamics were based on the work by Earl Thorten. The simulated effects compared favorably with real flight data taken from satellites that have encountered solar snap. A robust sliding mode controller was developed and compared to a more traditional proportional derivative controller. The controllers were evaluated in the presence of fuel slosh and solar snap. Relative to the optimized baseline proportional derivative controller used in this work, little effort was needed to obtain better performance using sliding mode control. In addition, a colored noise filter was developed to compensate for the fuel sloshing disturbance and incorporated into the sliding mode controller for a further performance increase, at the expense of slightly more control effort.
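    The per-mode mass-spring-damper idealization of fuel slosh can be sketched as a small integrator; parameter values below are illustrative, not taken from the dissertation:

```python
def slosh_mode(m=10.0, c=4.0, k=250.0, x0=0.05, v0=0.0,
               dt=1e-3, steps=5000):
    """One sloshing mode as a free mass-spring-damper,
    m*x'' + c*x' + k*x = 0, integrated with semi-implicit Euler.
    m: fuel-wave mass, c: baffle damping, k: wall-impact stiffness."""
    x, v = x0, v0
    for _ in range(steps):
        a = -(c * v + k * x) / m        # acceleration from spring + damper
        v += a * dt                     # update velocity first (symplectic)
        x += v * dt                     # then position
    return x

x_end = slosh_mode()                    # displacement after 5 s
```

    In the full simulation, the reaction force of each such mode acts back on the rigid-body attitude dynamics, and several modes can be stacked simply by instantiating more of these oscillators.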

  6. Comparison of CEAS and Williams-type models for spring wheat yields in North Dakota and Minnesota

    NASA Technical Reports Server (NTRS)

    Barnett, T. L. (Principal Investigator)

    1982-01-01

    The CEAS and Williams-type yield models are both based on multiple regression analysis of historical time series data at CRD level. The CEAS model develops a separate relation for each CRD; the Williams-type model pools CRD data to regional level (groups of similar CRDs). Basic variables considered in the analyses are USDA yield, monthly mean temperature, monthly precipitation, and variables derived from these. The Williams-type model also uses soil texture and topographic information. Technological trend is represented in both by piecewise linear functions of year. Indicators of yield reliability obtained from a ten-year bootstrap test of each model (1970-1979) demonstrate that the models are very similar in performance in all respects. Both models are about equally objective, adequate, timely, simple, and inexpensive. Both consider scientific knowledge on a broad scale but not in detail. Neither provides a good current measure of modeled yield reliability. The CEAS model is considered very slightly preferable for AgRISTARS applications.

  7. Structure-guided statistical textural distinctiveness for salient region detection in natural images.

    PubMed

    Scharfenberger, Christian; Wong, Alexander; Clausi, David A

    2015-01-01

    We propose a simple yet effective structure-guided statistical textural distinctiveness approach to salient region detection. Our method uses a multilayer approach to analyze the structural and textural characteristics of natural images as important features for salient region detection from a scale point of view. To represent the structural characteristics, we abstract the image using structured image elements and extract rotational-invariant neighborhood-based textural representations to characterize each element by an individual texture pattern. We then learn a set of representative texture atoms for sparse texture modeling and construct a statistical textural distinctiveness matrix to determine the distinctiveness between all representative texture atom pairs in each layer. Finally, we determine saliency maps for each layer based on the occurrence probability of the texture atoms and their respective statistical textural distinctiveness and fuse them to compute a final saliency map. Experimental results using four public data sets and a variety of performance evaluation metrics show that our approach provides promising results when compared with existing salient region detection approaches.

  8. Fast and accurate estimation of the covariance between pairwise maximum likelihood distances.

    PubMed

    Gil, Manuel

    2014-01-01

    Pairwise evolutionary distances are a model-based summary statistic for a set of molecular sequences. They represent the leaf-to-leaf path lengths of the underlying phylogenetic tree. Estimates of pairwise distances with overlapping paths covary because of shared mutation events. It is desirable to take this covariance structure into account to increase precision in any process that compares or combines distances. This paper introduces a fast estimator for the covariance of two pairwise maximum likelihood distances, estimated under general Markov models. The estimator is based on a conjecture (going back to Nei & Jin, 1989) which links the covariance to path lengths. It is proven here under a simple symmetric substitution model. A simulation shows that the estimator outperforms previously published ones in terms of the mean squared error.

  9. Fast and accurate estimation of the covariance between pairwise maximum likelihood distances

    PubMed Central

    2014-01-01

    Pairwise evolutionary distances are a model-based summary statistic for a set of molecular sequences. They represent the leaf-to-leaf path lengths of the underlying phylogenetic tree. Estimates of pairwise distances with overlapping paths covary because of shared mutation events. It is desirable to take this covariance structure into account to increase precision in any process that compares or combines distances. This paper introduces a fast estimator for the covariance of two pairwise maximum likelihood distances, estimated under general Markov models. The estimator is based on a conjecture (going back to Nei & Jin, 1989) which links the covariance to path lengths. It is proven here under a simple symmetric substitution model. A simulation shows that the estimator outperforms previously published ones in terms of the mean squared error. PMID:25279263

  10. Formation of Twisted Elephant Trunks in the Rosette Nebula

    NASA Astrophysics Data System (ADS)

    Carlqvist, P.; Gahm, G. F.; Kristen, H.

    New observations show that dark elephant trunks in the Rosette nebula are often built up by thin filaments. In several of the trunks the filaments seem to form a twisted pattern. This pattern is hard to reconcile with current theory. We propose a new model for the formation of twisted elephant trunks in which electromagnetic forces play an important role. The model considers the behaviour of a twisted magnetic filament in a molecular cloud, where a cluster of hot stars has recently been born. As a result of stellar winds, radiation pressure, electromagnetic forces, and inertia forces, part of the filament can develop into a double helix pointing towards the stars. The double helix represents the twisted elephant trunk. A simple analogy experiment visualizes and supports the trunk model.

  11. A Maximum Entropy Method for Particle Filtering

    NASA Astrophysics Data System (ADS)

    Eyink, Gregory L.; Kim, Sangil

    2006-06-01

    Standard ensemble or particle filtering schemes do not properly represent states of low prior probability when the number of available samples is too small, as is often the case in practical applications. We introduce here a set of parametric resampling methods to solve this problem. Motivated by a general H-theorem for relative entropy, we construct parametric models for the filter distributions as maximum-entropy/minimum-information models consistent with moments of the particle ensemble. When the prior distributions are modeled as mixtures of Gaussians, our method naturally generalizes the ensemble Kalman filter to systems with highly non-Gaussian statistics. We apply the new particle filters presented here to two simple test cases: a one-dimensional diffusion process in a double-well potential and the three-dimensional chaotic dynamical system of Lorenz.

  12. Studies in astronomical time series analysis. IV - Modeling chaotic and random processes with linear filters

    NASA Technical Reports Server (NTRS)

    Scargle, Jeffrey D.

    1990-01-01

    While chaos arises only in nonlinear systems, standard linear time series models are nevertheless useful for analyzing data from chaotic processes. This paper introduces such a model, the chaotic moving average. This time-domain model is based on the theorem that any chaotic process can be represented as the convolution of a linear filter with an uncorrelated process called the chaotic innovation. A technique, minimum phase-volume deconvolution, is introduced to estimate the filter and innovation. The algorithm measures the quality of a model using the volume covered by the phase-portrait of the innovation process. Experiments on synthetic data demonstrate that the algorithm accurately recovers the parameters of simple chaotic processes. Though tailored for chaos, the algorithm can detect both chaos and randomness, distinguish them from each other, and separate them if both are present. It can also recover nonminimum-delay pulse shapes in non-Gaussian processes, both random and chaotic.
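    The representation behind the chaotic moving average (a causal linear filter convolved with a chaotic innovation) can be sketched with a logistic map standing in for the innovation process; the map and filter coefficients below are illustrative, not from the paper:

```python
def logistic_innovation(n, r=4.0, x0=0.3):
    """Chaotic 'innovation' series from the logistic map
    x -> r*x*(1-x), roughly centered about zero."""
    xs = []
    x = x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(x - 0.5)
    return xs

def moving_average(innov, coeffs):
    """Chaotic moving average: convolve the innovation with a causal
    linear filter, per the model's representation theorem."""
    out = []
    for t in range(len(innov)):
        out.append(sum(c * innov[t - k]
                       for k, c in enumerate(coeffs) if t - k >= 0))
    return out

signal = moving_average(logistic_innovation(200), [1.0, 0.6, 0.3])
```

    The estimation problem the paper addresses is the inverse of this construction: recovering the filter coefficients and the innovation from the observed signal by minimum phase-volume deconvolution.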

  13. MODEL CORRELATION STUDY OF A RETRACTABLE BOOM FOR A SOLAR SAIL SPACECRAFT

    NASA Technical Reports Server (NTRS)

    Adetona, O.; Keel, L. H.; Oakley, J. D.; Kappus, K.; Whorton, M. S.; Kim, Y. K.; Rakpczy, J. M.

    2005-01-01

    To realize design concepts, predict dynamic behavior and develop appropriate control strategies for high performance operation of a solar-sail spacecraft, we developed a simple analytical model that represents dynamic behavior of spacecraft with various sizes. Since motion of the vehicle is dominated by retractable booms that support the structure, our study concentrates on developing and validating a dynamic model of a long retractable boom. Extensive tests with various configurations were conducted for the 30-meter, lightweight, retractable lattice boom at NASA MSFC that is structurally and dynamically similar to those of a solar-sail spacecraft currently under construction. Experimental data were then compared with the corresponding response of the analytical model. Though mixed results were obtained, the analytical model emulates several key characteristics of the boom. The paper concludes with a detailed discussion of issues observed during the study.

  14. Contact dynamics math model

    NASA Technical Reports Server (NTRS)

    Glaese, John R.; Tobbe, Patrick A.

    1986-01-01

    The Space Station Mechanism Test Bed consists of a hydraulically driven, computer controlled six degree of freedom (DOF) motion system with which docking, berthing, and other mechanisms can be evaluated. Measured contact forces and moments are provided to the simulation host computer to enable representation of orbital contact dynamics. This report describes the development of a generalized math model which represents the relative motion between two rigid orbiting vehicles. The model allows motion in six DOF for each body, with no vehicle size limitation. The rotational and translational equations of motion are derived. The method used to transform the forces and moments from the sensor location to the vehicles' centers of mass is also explained. Two math models of docking mechanisms, a simple translational spring and the Remote Manipulator System end effector, are presented along with simulation results. The translational spring model is used in an attempt to verify the simulation with compensated hardware-in-the-loop results.
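    The report's simple translational-spring docking model can be sketched in one dimension; the masses, stiffness, and initial speeds below are illustrative, not test-bed values:

```python
# One-dimensional sketch of a spring contact model between two rigid bodies:
# when the interface gap closes, a translational spring pushes them apart.
def simulate(m1=100.0, m2=200.0, v1=0.1, v2=0.0, k=5.0e4, dt=1.0e-4, t_end=2.0):
    x1, x2 = -0.05, 0.0                   # positions; contact when x1 >= x2
    for _ in range(int(t_end / dt)):
        pen = x1 - x2                     # penetration depth (positive in contact)
        f = k * pen if pen > 0.0 else 0.0 # equal-and-opposite spring force
        v1 += (-f / m1) * dt              # semi-implicit Euler: momentum is
        v2 += (f / m2) * dt               # conserved exactly at each step
        x1 += v1 * dt
        x2 += v2 * dt
    return v1, v2

v1, v2 = simulate()
momentum = 100.0 * v1 + 200.0 * v2        # should remain the initial 10 kg*m/s
print(v1, v2, momentum)
```

    After the contact the bodies separate with reversed relative velocity, as expected for an elastic spring contact; a full 6-DOF model adds the rotational equations and the sensor-to-center-of-mass force transformation described in the abstract.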

  15. Two-Layer Variable Infiltration Capacity Land Surface Representation for General Circulation Models

    NASA Technical Reports Server (NTRS)

    Xu, L.

    1994-01-01

    A simple two-layer variable infiltration capacity (VIC-2L) land surface model suitable for incorporation in general circulation models (GCMs) is described. The model consists of a two-layer characterization of the soil within a GCM grid cell, and uses an aerodynamic representation of latent and sensible heat fluxes at the land surface. The effects of GCM spatial subgrid variability of soil moisture and a hydrologically realistic runoff mechanism are represented in the soil layers. The model was tested using long-term hydrologic and climatological data for Kings Creek, Kansas to estimate and validate the hydrological parameters. Surface flux data from three First International Satellite Land Surface Climatology Project Field Experiment (FIFE) intensive field campaigns in the summer and fall of 1987 in central Kansas, and from the Anglo-Brazilian Amazonian Climate Observation Study (ABRACOS) in Brazil were used to validate the model-simulated surface energy fluxes and surface temperature.
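    The variable-infiltration-capacity idea behind VIC-2L can be sketched with the standard capacity curve i = i_m [1 - (1 - A)^(1/b)], where A is the saturated area fraction; this is a generic VIC-type parameterization with made-up parameter values, not the code validated in the paper:

```python
def direct_runoff(P, W, W_max, b):
    """Saturation-excess runoff from a variable-infiltration-capacity curve.
    P: rainfall depth; W: current grid-average storage; W_max: maximum
    storage; b: shape parameter of the capacity curve (all illustrative)."""
    i_m = (1.0 + b) * W_max               # maximum point infiltration capacity
    # Ponding level implied by the current grid-average storage
    i_0 = i_m * (1.0 - (1.0 - W / W_max) ** (1.0 / (1.0 + b)))
    if i_0 + P >= i_m:                    # whole cell saturates: all excess runs off
        return P - (W_max - W)
    # Runoff = rainfall minus the increase in grid-average storage
    dW = W_max * ((1.0 - i_0 / i_m) ** (1.0 + b)
                  - (1.0 - (i_0 + P) / i_m) ** (1.0 + b))
    return P - dW

print(direct_runoff(5.0, 20.0, 100.0, 0.3))   # dry cell: little direct runoff
print(direct_runoff(5.0, 100.0, 100.0, 0.3))  # saturated cell: all rain runs off
```

    The curve lets part of the grid cell saturate and generate runoff before the cell-average storage is full, which is the "hydrologically realistic runoff mechanism" the abstract refers to.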

  16. Decentralized control of sound radiation using iterative loop recovery.

    PubMed

    Schiller, Noah H; Cabell, Randolph H; Fuller, Chris R

    2010-10-01

    A decentralized model-based control strategy is designed to reduce low-frequency sound radiation from periodically stiffened panels. While decentralized control systems tend to be scalable, performance can be limited due to modeling error introduced by the unmodeled interaction between neighboring control units. Since bounds on modeling error are not known in advance, it is difficult to ensure the decentralized control system will be robust without making the controller overly conservative. Therefore an iterative approach is suggested, which utilizes frequency-shaped loop recovery. The approach accounts for modeling error introduced by neighboring control loops, requires no communication between subsystems, and is relatively simple. The control strategy is evaluated numerically using a model of a stiffened aluminum panel that is representative of the sidewall of an aircraft. Simulations demonstrate that the iterative approach can achieve significant reductions in radiated sound power from the stiffened panel without destabilizing neighboring control units.

  17. Decentralized Control of Sound Radiation Using Iterative Loop Recovery

    NASA Technical Reports Server (NTRS)

    Schiller, Noah H.; Cabell, Randolph H.; Fuller, Chris R.

    2009-01-01

    A decentralized model-based control strategy is designed to reduce low-frequency sound radiation from periodically stiffened panels. While decentralized control systems tend to be scalable, performance can be limited due to modeling error introduced by the unmodeled interaction between neighboring control units. Since bounds on modeling error are not known in advance, it is difficult to ensure the decentralized control system will be robust without making the controller overly conservative. Therefore an iterative approach is suggested, which utilizes frequency-shaped loop recovery. The approach accounts for modeling error introduced by neighboring control loops, requires no communication between subsystems, and is relatively simple. The control strategy is evaluated numerically using a model of a stiffened aluminum panel that is representative of the sidewall of an aircraft. Simulations demonstrate that the iterative approach can achieve significant reductions in radiated sound power from the stiffened panel without destabilizing neighboring control units.

  18. A residence-time-based transport approach for the groundwater pathway in performance assessment models

    NASA Astrophysics Data System (ADS)

    Robinson, Bruce A.; Chu, Shaoping

    2013-03-01

    This paper presents the theoretical development and numerical implementation of a new modeling approach for representing the groundwater pathway in risk assessment or performance assessment models of contaminant transport systems. The model developed in the present study, called the Residence Time Distribution (RTD) Mixing Model (RTDMM), allows an arbitrary distribution of fluid travel times to be represented, capturing the effects on the breakthrough curve of flow processes such as channelized flow, fast pathways, and complex three-dimensional dispersion. Mathematical methods for constructing the model for a given RTD are derived directly from the theory of residence time distributions in flowing systems. A simple mixing model is presented, along with the basic equations required to enable an arbitrary RTD to be reproduced using the model. The practical advantages of the RTDMM include easy incorporation into a multi-realization probabilistic simulation; a computational burden no more onerous than a one-dimensional model with the same number of grid cells; and straightforward implementation into available flow and transport modeling codes, enabling one to then utilize the advanced transport features of those codes. For example, in this study we incorporated diffusion into the stagnant fluid in the rock matrix away from the flowing fractures, using a generalized dual porosity model formulation. A suite of example calculations presented herein shows the utility of the RTDMM for the case of a radioactive decay chain, dual porosity transport and sorption.
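    The central identity behind an RTD-based transport model, that the outlet breakthrough of a steady-flow system is the convolution of the inlet history with the residence time distribution, can be sketched numerically; the RTD shape below is hypothetical, not one from the paper:

```python
import numpy as np

dt = 0.1
t = np.arange(0.0, 50.0, dt)

# Hypothetical RTD with a long tail, standing in for a fast channelized
# path plus slower matrix-influenced flow
E = t * np.exp(-t / 5.0)
E /= E.sum() * dt                        # normalize so E integrates to 1

inlet = (t < 10.0).astype(float)         # 10-time-unit pulse, unit concentration
outlet = np.convolve(inlet, E)[: t.size] * dt   # breakthrough curve

mass_in = inlet.sum() * dt
mass_out = outlet.sum() * dt
print(mass_in, mass_out)                 # conserved, up to tail truncation
```

    Because the model only needs E(t), any travel-time distribution, including ones with early channelized arrivals, can be imposed directly, which is the flexibility the RTDMM formalizes within a standard transport code.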

  19. A Simple Illustration of Hemihedral Faces

    ERIC Educational Resources Information Center

    Ault, Addison

    2004-01-01

    A simple way to represent hemihedral faces and to illustrate the relationship between the resulting left-handed and right-handed hemihedra is presented. The illustrations highlight that the chirality corresponds to the absence of reflective symmetry but not necessarily to the absence of a C2 axis of symmetry.

  20. Uniqueness of Petrov Type D Spatially Inhomogeneous Irrotational Silent Models

    NASA Astrophysics Data System (ADS)

    Apostolopoulos, Pantelis S.; Carot, Jaume

    The consistency of the constraints with the evolution equations for spatially inhomogeneous and irrotational silent (SIIS) models of Petrov type I demands that the former be preserved along the timelike congruence represented by the velocity of the dust fluid, leading to new nontrivial constraints. This fact has been used to conjecture that the resulting models correspond to the spatially homogeneous (SH) models of Bianchi type I, at least for the case where the cosmological constant vanishes. By exploiting the full set of the constraint equations as expressed in the 1+3 covariant formalism and using elements from the theory of spacelike congruences, we provide a direct and simple proof of this conjecture for vacuum and dust fluid models, which shows that the Szekeres family of solutions represents the most general class of SIIS models. The suggested procedure also shows that the uniqueness of the SIIS models of Petrov type D is not, in general, affected by the presence of a nonzero pressure fluid. Therefore, in order to allow a broader class of Petrov type I solutions apart from the SH models of Bianchi type I, one should consider more general "silent" configurations by relaxing the vanishing of the vorticity and the magnetic part of the Weyl tensor but maintaining their "silence" properties, i.e. the vanishing of the curls of Eab, Hab and the pressure p.

  1. Some research perspectives in galloping phenomena: critical conditions and post-critical behavior

    NASA Astrophysics Data System (ADS)

    Piccardo, Giuseppe; Pagnini, Luisa Carlotta; Tubino, Federica

    2015-01-01

    This paper gives an overview of wind-induced galloping phenomena, describing their manifold features and the many advances that have taken place in this field. Starting from a quasi-steady model of the aeroelastic forces exerted by the wind on a rigid cylinder with three degrees of freedom (two translations and a rotation in the plane of the model cross section), the fluid-structure interaction forces are described in simple terms, yet in a way suited to the complexity of mechanical systems, both in the linear and in the nonlinear regime, thus allowing investigation of a wide range of structural typologies and their dynamic behavior. The paper is driven by some key concerns. A great effort is made in underlining the strengths and weaknesses of the classic quasi-steady theory, as well as of the simplifying assumptions that are introduced in order to investigate such complex phenomena through simple engineering models. A second aspect, which is crucial to the authors' approach, is to take into account and harmonize the engineering, physical and mathematical perspectives in an interdisciplinary way, something which does not happen often. The authors underline that the quasi-steady approach is an irreplaceable tool, though approximate and simple, for performing engineering analyses; at the same time, the study of this phenomenon gives rise to numerous problems that make the application of high-level mathematical solutions particularly attractive. Finally, the paper discusses a wide range of features of galloping theory and its practical use which deserve further attention and refinement, pointing to the great potential represented by new fields of application and advanced analysis tools.
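    For transverse galloping, the quasi-steady theory discussed here yields the classical Den Hartog onset criterion, dC_L/dα + C_D < 0; the sketch below applies it to illustrative, not measured, coefficient curves:

```python
import numpy as np

def den_hartog_unstable(alpha_deg, C_L, C_D):
    """Quasi-steady galloping onset (Den Hartog criterion): a section is
    prone to transverse galloping where dC_L/dalpha + C_D < 0."""
    alpha = np.radians(np.asarray(alpha_deg))
    dCL = np.gradient(np.asarray(C_L), alpha)   # lift-curve slope (per radian)
    return dCL + np.asarray(C_D) < 0.0

# Illustrative square-section-like curves: negative lift slope near 0 degrees
alpha = np.linspace(-10.0, 10.0, 41)
C_L = -0.8 * np.radians(alpha)
C_D = np.full_like(alpha, 0.5)
print(den_hartog_unstable(alpha, C_L, C_D).any())
```

    A streamlined section with a strongly positive lift slope (e.g. C_L ≈ 2πα) fails the criterion everywhere, which is why galloping is typical of bluff, not airfoil-like, cross sections.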

  2. Easy-to-use software tools for teaching the basics, design and applications of optical components and systems

    NASA Astrophysics Data System (ADS)

    Gerhard, Christoph; Adams, Geoff

    2015-10-01

    Geometric optics is at the heart of optics teaching. Some of us may remember using pins and string to test the simple lens equation at school. Matters get more complex at undergraduate/postgraduate levels as we are introduced to paraxial rays, real rays, wavefronts, aberration theory and much more. Software is essential for the later stages, and the right software can profitably be used even at school. We present two free PC programs, which have been widely used in optics teaching, and have been further developed in close cooperation with lecturers/professors in order to address the current content of the curricula for optics, photonics and lasers in higher education. PreDesigner is a single thin lens modeller. It illustrates the simple lens law with construction rays and then allows the user to include field size and aperture. Sliders can be used to adjust key values with instant graphical feedback. This tool thus represents a helpful teaching medium for the visualization of basic interrelations in optics. WinLens3DBasic can model multiple thin or thick lenses with real glasses. It shows the system foci, principal planes, nodal points, gives paraxial ray trace values, details the Seidel aberrations, offers real ray tracing and many forms of analysis. It is simple to reverse lenses and model tilts and decenters. This tool therefore provides a good base for learning lens design fundamentals. Much work has been put into offering these features in ways that are easy to use, and offer opportunities to enhance the student's background understanding.
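    The simple lens law that PreDesigner visualizes can be checked in a few lines (real-is-positive sign convention assumed):

```python
def image_distance(f, u):
    # Gaussian thin-lens law: 1/f = 1/u + 1/v, with object distance u and
    # image distance v both positive for real objects/images
    return 1.0 / (1.0 / f - 1.0 / u)

def magnification(u, v):
    # Transverse magnification; negative means the image is inverted
    return -v / u

v = image_distance(f=50.0, u=100.0)   # object at 2f gives an image at 2f
print(v, magnification(100.0, v))
```

    An object at twice the focal length images at twice the focal length with magnification -1, the classic textbook check that such a tool makes visually obvious.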

  3. Mass and Environment as Drivers of Galaxy Evolution: Simplicity and its Consequences

    NASA Astrophysics Data System (ADS)

    Peng, Yingjie

    2012-01-01

    At first sight the galaxy population appears to be composed of infinitely varied types and properties; however, when large samples of galaxies are studied, it emerges that the vast majority of galaxies follow simple scaling relations and similar evolutionary modes, while the outliers represent a minority. The underlying simplicities of the interrelationships among stellar mass, star formation rate and environment are seen in SDSS and zCOSMOS. We demonstrate that the differential effects of mass and environment are completely separable to z ~ 1, indicating that two distinct physical processes are operating, namely "mass quenching" and "environment quenching". These two simple quenching processes, plus some additional quenching due to merging, then naturally produce the Schechter form of the galaxy stellar mass functions and make quantitative predictions for the inter-relationships between the Schechter parameters of star-forming and passive galaxies in different environments. All of these detailed quantitative relationships are indeed seen, to very high precision, in SDSS, lending strong support to our simple empirically based model. The model also offers qualitative explanations for the "anti-hierarchical" age-mass relation and the alpha-enrichment patterns of passive galaxies, and makes other testable predictions, such as the mass function of the population of transitory objects that are in the process of being quenched, the galaxy major- and minor-merger rates, the galaxy stellar mass assembly history, the star formation history, and so on. Although still purely phenomenological, the model makes clear what the evolutionary characteristics of the relevant physical processes must in fact be.

  4. Navigating a Mobile Robot Across Terrain Using Fuzzy Logic

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun; Howard, Ayanna; Bon, Bruce

    2003-01-01

    A strategy for autonomous navigation of a robotic vehicle across hazardous terrain involves the use of a measure of traversability of terrain within a fuzzy-logic conceptual framework. This navigation strategy requires no a priori information about the environment. Fuzzy logic was selected as a basic element of this strategy because it provides a formal methodology for representing and implementing a human driver's heuristic knowledge and operational experience. Within a fuzzy-logic framework, the attributes of human reasoning and decision-making can be formulated by simple IF (antecedent), THEN (consequent) rules coupled with easily understandable and natural linguistic representations. The linguistic values in the rule antecedents convey the imprecision associated with measurements taken by sensors onboard a mobile robot, while the linguistic values in the rule consequents represent the vagueness inherent in the reasoning processes to generate the control actions. The operational strategies of the human expert driver can be transferred, via fuzzy logic, to a robot-navigation strategy in the form of a set of simple conditional statements composed of linguistic variables. These linguistic variables are defined by fuzzy sets in accordance with user-defined membership functions. The main advantages of a fuzzy navigation strategy lie in the ability to extract heuristic rules from human experience and to obviate the need for an analytical model of the robot navigation process.
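    A miniature version of such a fuzzy rule base, with hypothetical membership functions and speed consequents rather than the authors' actual rules, might look like:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def recommended_speed(traversability):
    """Tiny Mamdani-style rule base (illustrative, not the authors' rule set):
    IF traversability is LOW  THEN speed is SLOW     (0.1 m/s)
    IF traversability is MED  THEN speed is MODERATE (0.5 m/s)
    IF traversability is HIGH THEN speed is FAST     (1.0 m/s)
    Defuzzified as the firing-strength-weighted average of the consequents."""
    mu = {
        "low": tri(traversability, -0.5, 0.0, 0.5),
        "med": tri(traversability, 0.0, 0.5, 1.0),
        "high": tri(traversability, 0.5, 1.0, 1.5),
    }
    out = {"low": 0.1, "med": 0.5, "high": 1.0}
    num = sum(mu[k] * out[k] for k in mu)
    den = sum(mu.values())
    return num / den if den else 0.0

print(recommended_speed(0.2), recommended_speed(0.9))
```

    The overlapping membership functions give smooth speed transitions as the traversability measure varies, which is how the linguistic rules avoid the need for an analytical navigation model.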

  5. Summary of the DREAM8 Parameter Estimation Challenge: Toward Parameter Identification for Whole-Cell Models.

    PubMed

    Karr, Jonathan R; Williams, Alex H; Zucker, Jeremy D; Raue, Andreas; Steiert, Bernhard; Timmer, Jens; Kreutz, Clemens; Wilkinson, Simon; Allgood, Brandon A; Bot, Brian M; Hoff, Bruce R; Kellen, Michael R; Covert, Markus W; Stolovitzky, Gustavo A; Meyer, Pablo

    2015-05-01

    Whole-cell models that explicitly represent all cellular components at the molecular level have the potential to predict phenotype from genotype. However, even for simple bacteria, whole-cell models will contain thousands of parameters, many of which are poorly characterized or unknown. New algorithms are needed to estimate these parameters and enable researchers to build increasingly comprehensive models. We organized the Dialogue for Reverse Engineering Assessments and Methods (DREAM) 8 Whole-Cell Parameter Estimation Challenge to develop new parameter estimation algorithms for whole-cell models. We asked participants to identify a subset of parameters of a whole-cell model given the model's structure and in silico "experimental" data. Here we describe the challenge, the best performing methods, and new insights into the identifiability of whole-cell models. We also describe several valuable lessons we learned toward improving future challenges. Going forward, we believe that collaborative efforts supported by inexpensive cloud computing have the potential to solve whole-cell model parameter estimation.

  6. A hierarchy for modeling high speed propulsion systems

    NASA Technical Reports Server (NTRS)

    Hartley, Tom T.; Deabreu, Alex

    1991-01-01

    General research efforts on reduced order propulsion models for control systems design are overviewed. Methods for modeling high speed propulsion systems are discussed including internal flow propulsion systems that do not contain rotating machinery, such as inlets, ramjets, and scramjets. The discussion is separated into four areas: (1) computational fluid dynamics models for the entire nonlinear system or high order nonlinear models; (2) high order linearized models derived from fundamental physics; (3) low order linear models obtained from the other high order models; and (4) low order nonlinear models (order here refers to the number of dynamic states). Included in the discussion are any special considerations based on the relevant control system designs. The methods discussed are for the quasi-one-dimensional Euler equations of gasdynamic flow. The essential nonlinear features represented are large amplitude nonlinear waves, including moving normal shocks, hammershocks, simple subsonic combustion via heat addition, temperature dependent gases, detonations, and thermal choking. The report also contains a comprehensive list of papers and theses generated by this grant.

  7. Generic Airplane Model Concept and Four Specific Models Developed for Use in Piloted Simulation Studies

    NASA Technical Reports Server (NTRS)

    Hoffler, Keith D.; Fears, Scott P.; Carzoo, Susan W.

    1997-01-01

    A generic airplane model concept was developed to allow configurations with various agility, performance, handling qualities, and pilot vehicle interface to be generated rapidly for piloted simulation studies. The simple concept allows stick shaping and various stick command types or modes to drive an airplane with both linear and nonlinear components. Output from the stick shaping goes to linear models or a series of linear models that can represent an entire flight envelope. The generic model also has provisions for control power limitations, a nonlinear feature. Therefore, departures from controlled flight are possible. Note that only loss of control is modeled; the generic airplane does not accurately model post-departure phenomena. The model concept is presented herein, along with four example airplanes. Agility was varied across the four example airplanes without altering specific excess energy or significantly altering handling qualities. A new feedback scheme to provide angle-of-attack cueing to the pilot, while using a pitch rate command system, was implemented and tested.

  8. A modern approach to the authentication and quality assessment of thyme using UV spectroscopy and chemometric analysis.

    PubMed

    Gad, Haidy A; El-Ahmady, Sherweit H; Abou-Shoer, Mohamed I; Al-Azizi, Mohamed M

    2013-01-01

    Recently, the fields of chemometrics and multivariate analysis have been widely implemented in the quality control of herbal drugs to produce precise results, which is crucial in the field of medicine. Thyme represents an essential medicinal herb that is constantly adulterated due to its resemblance to many other plants with similar organoleptic properties. To establish a simple model for the quality assessment of Thymus species using UV spectroscopy together with known chemometric techniques. The success of this model may also serve as a technique for the quality control of other herbal drugs. The model was constructed using 30 samples of authenticated Thymus vulgaris and challenged with 20 samples of different botanical origins. The methanolic extracts of all samples were assessed using UV spectroscopy together with chemometric techniques: principal component analysis (PCA), soft independent modeling of class analogy (SIMCA) and hierarchical cluster analysis (HCA). The model was able to discriminate T. vulgaris from other Thymus, Satureja, Origanum, Plectranthus and Eriocephalus species, all traded in the Egyptian market as different types of thyme. The model was also able to classify closely related species in clusters using PCA and HCA. The model was finally used to classify 12 commercial thyme varieties into clusters of species incorporated in the model as thyme or non-thyme. The model constructed is highly recommended as a simple and efficient method for distinguishing T. vulgaris from other related species as well as the classification of marketed herbs as thyme or non-thyme. Copyright © 2013 John Wiley & Sons, Ltd.

  9. Passive and active ventricular elastances of the left ventricle

    PubMed Central

    Zhong, Liang; Ghista, Dhanjoo N; Ng, Eddie YK; Lim, Soo T

    2005-01-01

    Background Description of the heart as a pump has been dominated by models based on elastance and compliance. Here, we present a somewhat new concept of time-varying passive and active elastance. The mathematical basis of time-varying elastance of the ventricle is presented. We have defined elastance in terms of the relationship between ventricular pressure and volume, as: dP = EdV + VdE, where E includes passive (Ep) and active (Ea) elastance. By incorporating this concept in left ventricular (LV) models to simulate filling and systolic phases, we have obtained the time-varying expression for Ea and the LV-volume dependent expression for Ep. Methods and Results Using the patient's catheterization-ventriculogram data, the values of passive and active elastance are computed. Ea is expressed as a time-varying function; Ep is represented as an LV-volume-dependent function. Ea is deemed to represent a measure of LV contractility. Hence, peak dP/dt and ejection fraction (EF) are computed from the monitored data and used as the traditional measures of LV contractility. When our computed peak active elastance (Ea,max) is compared against these traditional indices by linear regression, a high degree of correlation is obtained. As regards Ep, it constitutes a volume-dependent stiffness property of the LV, and is deemed to represent resistance-to-filling. Conclusions Passive and active ventricular elastance formulae can be evaluated from single-beat P-V data by means of a simple-to-apply LV model. The active elastance (Ea) can be used to characterize the ventricle's contractile state, while passive elastance (Ep) can represent a measure of resistance-to-filling. PMID:15707494
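    The elastance relation dP = E dV + V dE implies P = EV by the product rule, which can be checked numerically with illustrative volume and elastance waveforms (not patient data):

```python
import numpy as np

t = np.linspace(0.0, 0.8, 801)                   # one beat, seconds
V = 120.0 - 50.0 * np.sin(np.pi * t / 0.8) ** 2  # illustrative LV volume (ml)
E = 0.1 + 2.0 * np.sin(np.pi * t / 0.8) ** 2     # illustrative elastance (mmHg/ml)

# Direct pressure from P = E * V
P_direct = E * V

# Integrate the differential form dP = E dV + V dE and compare
dV, dE = np.gradient(V, t), np.gradient(E, t)
P_integrated = P_direct[0] + np.cumsum(E * dV + V * dE) * (t[1] - t[0])

err = np.max(np.abs(P_direct - P_integrated))
print(err)   # small: the two forms agree up to discretization error
```

    The same bookkeeping, applied to measured P-V data with E split into Ep and Ea, is what allows the single-beat evaluation the abstract describes.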

  10. Using expert systems to implement a semantic data model of a large mass storage system

    NASA Technical Reports Server (NTRS)

    Roelofs, Larry H.; Campbell, William J.

    1990-01-01

    The successful development of large volume data storage systems will depend not only on the ability of the designers to store data, but on the ability to manage such data once it is in the system. The hypothesis is that mass storage data management can only be implemented successfully based on highly intelligent meta data management services. There now exists a mass store system standard proposed by the IEEE that addresses many of the issues related to the storage of large volumes of data; however, the model does not consider a major technical issue, namely the high level management of stored data. If the model were expanded to include the semantics and pragmatics of the data domain using a Semantic Data Model (SDM) concept, the result would be data that is expressive of the Intelligent Information Fusion (IIF) concept and also organized and classified in context to its use and purpose. The results are presented of a demonstration prototype SDM implemented using the expert system development tool NEXPERT OBJECT. In the prototype, a simple instance of an SDM was created to support a hypothetical application for the Earth Observing System Data Information System (EOSDIS). The massive amounts of data that EOSDIS will manage require the definition and design of a powerful information management system in order to support even the most basic needs of the project. The application domain is characterized by a semantic-like network that represents the data content and the relationships between the data, based on user views and the more generalized domain architectural view of the information world. The data in the domain are represented by objects that define classes, types and instances of the data. In addition, data properties are selectively inherited between parent and daughter relationships in the domain. Based on the SDM, a simple information system design is developed, from the low level data storage media, through record management and meta data management, to the user interface.

  11. Multi-Hypothesis Modelling Capabilities for Robust Data-Model Integration

    NASA Astrophysics Data System (ADS)

    Walker, A. P.; De Kauwe, M. G.; Lu, D.; Medlyn, B.; Norby, R. J.; Ricciuto, D. M.; Rogers, A.; Serbin, S.; Weston, D. J.; Ye, M.; Zaehle, S.

    2017-12-01

    Large uncertainty is often inherent in model predictions due to imperfect knowledge of how to describe the mechanistic processes (hypotheses) that a model is intended to represent. Yet this model hypothesis uncertainty (MHU) is often overlooked or informally evaluated, as methods to quantify and evaluate MHU are limited. MHU increases as models become more complex, because each additional process added to a model brings inherent MHU as well as parametric uncertainty. With the current trend of adding more processes to Earth System Models (ESMs), we are adding uncertainty, which can be quantified for parameters but not for MHU. Model inter-comparison projects do allow for some consideration of hypothesis uncertainty, but in an ad hoc and non-independent fashion. This has stymied efforts to evaluate ecosystem models against data and interpret the results mechanistically: because such models combine sub-models of many systems and processes, each of which may be conceptualised and represented mathematically in various ways, it is not simple to determine exactly why a model produces the results it does or to identify which model assumptions are key. We present a novel modelling framework, the multi-assumption architecture and testbed (MAAT), that automates the combination, generation, and execution of a model ensemble built with different representations of process. We will present the argument that multi-hypothesis modelling needs to be considered in conjunction with other capabilities (e.g. the Predictive Ecosystem Analyser, PEcAn) and statistical methods (e.g. sensitivity analysis, data assimilation) to aid efforts in robust data-model integration and to enhance our predictive understanding of biological systems.

  12. On heart rate variability and autonomic activity in homeostasis and in systemic inflammation.

    PubMed

    Scheff, Jeremy D; Griffel, Benjamin; Corbett, Siobhan A; Calvano, Steve E; Androulakis, Ioannis P

    2014-06-01

    Analysis of heart rate variability (HRV) is a promising diagnostic technique due to the noninvasive nature of the measurements involved and established correlations with disease severity, particularly in inflammation-linked disorders. However, the complexities underlying the interpretation of HRV complicate understanding the mechanisms that cause variability. Despite this, such interpretations are often found in the literature. In this paper we explored mathematical modeling of the relationship between the autonomic nervous system and the heart, incorporating basic mechanisms such as perturbing mean values of oscillating autonomic activities and saturating signal transduction pathways to explore their impacts on HRV. We focused our analysis on human endotoxemia, a well-established, controlled experimental model of systemic inflammation that provokes changes in HRV representative of acute stress. By contrasting modeling results with published experimental data and analyses, we found that even a simple model linking the autonomic nervous system and the heart confounds the interpretation of HRV changes in human endotoxemia. Multiple plausible alternative hypotheses, encoded in a model-based framework, equally reconciled experimental results. In total, our work illustrates how conventional assumptions about the relationships between autonomic activity and frequency-domain HRV metrics break down, even in a simple model. This underscores the need for further experimental work towards unraveling the underlying mechanisms of autonomic dysfunction and HRV changes in systemic inflammation. Understanding the extent of information encoded in HRV signals is critical in appropriately analyzing prior and future studies. Copyright © 2014 Elsevier Inc. All rights reserved.
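    The frequency-domain HRV metrics whose interpretation the paper questions are computed by integrating the RR-interval spectrum over conventional bands; the sketch below uses a synthetic tachogram with illustrative amplitudes and frequencies:

```python
import numpy as np

fs = 4.0                                  # resampled tachogram rate (Hz)
t = np.arange(0.0, 300.0, 1.0 / fs)       # 5-minute recording
# Synthetic RR tachogram: mean 0.8 s plus LF (0.1 Hz) and HF (0.25 Hz) rhythms
rr = (0.8
      + 0.03 * np.sin(2 * np.pi * 0.10 * t)
      + 0.02 * np.sin(2 * np.pi * 0.25 * t))

# Periodogram of the mean-removed tachogram
x = rr - rr.mean()
psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * x.size)
f = np.fft.rfftfreq(x.size, 1.0 / fs)
df = f[1] - f[0]

# Conventional LF (0.04-0.15 Hz) and HF (0.15-0.4 Hz) band powers
lf = psd[(f >= 0.04) & (f < 0.15)].sum() * df
hf = psd[(f >= 0.15) & (f < 0.40)].sum() * df
print(lf / hf)   # the LF/HF ratio often read as "sympathovagal balance"
```

    The point of the paper is that such band powers are downstream of many interacting mechanisms, so mapping LF/HF back to autonomic activity is not as direct as this simple computation makes it look.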

  13. On heart rate variability and autonomic activity in homeostasis and in systemic inflammation

    PubMed Central

    Scheff, Jeremy D.; Griffel, Benjamin; Corbett, Siobhan A.; Calvano, Steve E.; Androulakis, Ioannis P.

    2014-01-01

    Analysis of heart rate variability (HRV) is a promising diagnostic technique due to the noninvasive nature of the measurements involved and established correlations with disease severity, particularly in inflammation-linked disorders. However, the complexities underlying the interpretation of HRV complicate understanding the mechanisms that cause variability. Despite this, such interpretations are often found in the literature. In this paper we explored mathematical modeling of the relationship between the autonomic nervous system and the heart, incorporating basic mechanisms such as perturbing mean values of oscillating autonomic activities and saturating signal transduction pathways to explore their impacts on HRV. We focused our analysis on human endotoxemia, a well-established, controlled experimental model of systemic inflammation that provokes changes in HRV representative of acute stress. By contrasting modeling results with published experimental data and analyses, we found that even a simple model linking the autonomic nervous system and the heart confounds the interpretation of HRV changes in human endotoxemia. Multiple plausible alternative hypotheses, encoded in a model-based framework, equally reconciled experimental results. In total, our work illustrates how conventional assumptions about the relationships between autonomic activity and frequency-domain HRV metrics break down, even in a simple model. This underscores the need for further experimental work towards unraveling the underlying mechanisms of autonomic dysfunction and HRV changes in systemic inflammation. Understanding the extent of information encoded in HRV signals is critical in appropriately analyzing prior and future studies. PMID:24680646

  14. A scalable multi-process model of root nitrogen uptake

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walker, Anthony P.

    This article is a Commentary on McMurtrie & Näsholm, 218: 119–130. Roots are represented in Terrestrial Ecosystem Models (TEMs) in much less detail than their equivalent above-ground resource acquisition organs – leaves. Often roots in TEMs are simply resource sinks, and below-ground resource acquisition is commonly simulated without any relationship to root dynamics at all, though there are exceptions (e.g. Zaehle & Friend, 2010). The representation of roots as carbon (C) and nitrogen (N) sinks without complementary source functions can lead to strange sensitivities in a model. For example, reducing root lifespans in the Community Land Model (version 4.5) increases plant production as N cycles more rapidly through the ecosystem without loss of plant function (D. M. Ricciuto, unpublished). The primary reasons for the poorer representation of roots compared with leaves in TEMs are three-fold: (1) data are much harder won, especially in the field; (2) no simple mechanistic models of root function are available; and (3) scaling root function from an individual root to a root system lags behind methods of scaling leaf function to a canopy. Here in this issue of New Phytologist, McMurtrie & Näsholm (pp. 119–130) develop a relatively simple model for root N uptake that mechanistically accounts for processes of N supply (mineralization and transport by diffusion and mass flow) and N demand (root uptake and microbial immobilization).

  15. SpikingLab: modelling agents controlled by Spiking Neural Networks in Netlogo.

    PubMed

    Jimenez-Romero, Cristian; Johnson, Jeffrey

    2017-01-01

    The scientific interest attracted by Spiking Neural Networks (SNN) has led to the development of tools for the simulation and study of neuronal dynamics ranging from phenomenological models to the more sophisticated and biologically accurate Hodgkin-and-Huxley-based and multi-compartmental models. However, despite the multiple features offered by neural modelling tools, their integration with environments for the simulation of robots and agents can be challenging and time consuming. The implementation of artificial neural circuits to control robots generally involves the following tasks: (1) understanding the simulation tools, (2) creating the neural circuit in the neural simulator, (3) linking the simulated neural circuit with the environment of the agent and (4) programming the appropriate interface in the robot or agent to use the neural controller. The accomplishment of the above-mentioned tasks can be challenging, especially for undergraduate students or novice researchers. This paper presents an alternative tool which facilitates the simulation of simple SNN circuits using the multi-agent simulation and programming environment NetLogo (educational software that simplifies the study and experimentation of complex systems). The engine proposed and implemented in NetLogo for the simulation of a functional model of SNN is a simplification of integrate-and-fire (I&F) models. The characteristics of the engine (including neuronal dynamics, STDP learning and synaptic delay) are demonstrated through the implementation of an agent representing an artificial insect controlled by a simple neural circuit. The setup of the experiment and its outcomes are described in this work.
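    The engine described above simplifies integrate-and-fire models. As a rough illustration of that model class (not SpikingLab's actual code; every parameter value here is an invented, illustrative number), a leaky integrate-and-fire update can be sketched in a few lines:

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch.
# All parameter values are illustrative, not taken from SpikingLab.

def lif_step(v, i_in, v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0,
             tau=10.0, dt=1.0):
    """Advance membrane potential v by one time step dt (ms).
    Returns (new_v, spiked)."""
    dv = (-(v - v_rest) + i_in) * dt / tau
    v = v + dv
    if v >= v_thresh:
        return v_reset, True
    return v, False

def run(current, steps):
    """Drive one neuron with a constant input current; count spikes."""
    v, spikes = -65.0, 0
    for _ in range(steps):
        v, spiked = lif_step(v, current)
        spikes += spiked
    return spikes
```

As expected of this model class, a stronger drive yields a higher firing rate, and a zero drive leaves the neuron silent at rest.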

  16. A scalable multi-process model of root nitrogen uptake

    DOE PAGES

    Walker, Anthony P.

    2018-02-28

    This article is a Commentary on McMurtrie & Näsholm et al., 218: 119–130. Roots are represented in Terrestrial Ecosystem Models (TEMs) in much less detail than their equivalent above-ground resource acquisition organs – leaves. Often roots in TEMs are simply resource sinks, and below-ground resource acquisition is commonly simulated without any relationship to root dynamics at all, though there are exceptions (e.g. Zaehle & Friend, 2010). The representation of roots as carbon (C) and nitrogen (N) sinks without complementary source functions can lead to strange sensitivities in a model. For example, reducing root lifespans in the Community Land Model (versionmore » 4.5) increases plant production as N cycles more rapidly through the ecosystem without loss of plant function (D. M. Ricciuto, unpublished). The primary reasons for the poorer representation of roots compared with leaves in TEMs are three-fold: (1) data are much harder won, especially in the field; (2) no simple mechanistic models of root function are available; and (3) scaling root function from an individual root to a root system lags behind methods of scaling leaf function to a canopy. Here in this issue of New Phytologist, McMurtrie & Näsholm (pp. 119–130) develop a relatively simple model for root N uptake that mechanistically accounts for processes of N supply (mineralization and transport by diffusion and mass flow) and N demand (root uptake and microbial immobilization).« less

  17. A simple rainfall-runoff model based on hydrological units applied to the Teba catchment (south-east Spain)

    NASA Astrophysics Data System (ADS)

    Donker, N. H. W.

    2001-01-01

    A hydrological model (YWB, yearly water balance) has been developed to model the daily rainfall-runoff relationship of the 202 km² Teba river catchment, located in semi-arid south-eastern Spain. The period of available data (1976-1993) includes some very rainy years with intensive storms (responsible for flooding parts of the town of Malaga) and also some very dry years. The YWB model is in essence a simple tank model in which the catchment is subdivided into a limited number of meaningful hydrological units. Instead of generating surface runoff per unit from infiltration excess, runoff has been made the result of storage excess. Actual evapotranspiration is obtained by means of curves, included in the software, representing the relationship between the ratio of actual to potential evapotranspiration as a function of soil moisture content for three soil texture classes. The total runoff generated is split between base flow and surface runoff according to a given baseflow index. The two components are routed separately and subsequently joined. A large number of sequential years can be processed, and the results of each year are summarized by a water balance table and a daily based rainfall-runoff time series. An attempt has been made to restrict the amount of input data to the minimum. Interactive manual calibration is advocated in order to allow better incorporation of field evidence and the experience of the model user. Field observations allowed for an approximate calibration at the hydrological unit level.
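    The storage-excess mechanism and baseflow split described above can be sketched as a toy daily tank step. This is an illustrative reconstruction, not the YWB code; the capacity, baseflow index, and the crude evapotranspiration rule are assumptions standing in for the model's soil-texture curves:

```python
# Toy storage-excess tank step, loosely inspired by the YWB description.
# Parameter names and values are illustrative assumptions.

def tank_step(storage, rain, pet, capacity=100.0, bfi=0.6):
    """One daily step: rain fills the store, evapotranspiration drains it,
    and storage above capacity becomes runoff, split into baseflow and
    surface runoff by a fixed baseflow index (bfi)."""
    storage = storage + rain
    # actual ET limited by available water (a crude stand-in for the
    # soil-moisture/texture curves used by the YWB model)
    aet = min(pet, storage)
    storage -= aet
    excess = max(0.0, storage - capacity)
    storage -= excess
    return storage, bfi * excess, (1.0 - bfi) * excess

storage = 90.0
storage, base, surf = tank_step(storage, rain=30.0, pet=5.0)
```

Routing the two runoff components separately and rejoining them, as the abstract describes, would be a further step applied to `base` and `surf`.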

  18. A multidimensional assessment of the validity and utility of alcohol use disorder severity as determined by item response theory models.

    PubMed

    Dawson, Deborah A; Saha, Tulshi D; Grant, Bridget F

    2010-02-01

    The relative severity of the 11 DSM-IV alcohol use disorder (AUD) criteria is represented by their severity threshold scores, an item response theory (IRT) model parameter inversely proportional to their prevalence. These scores can be used to create a continuous severity measure comprising the total number of criteria endorsed, each weighted by its relative severity. This paper assesses the validity of the severity ranking of the 11 criteria and the overall severity score with respect to known AUD correlates, including alcohol consumption, psychological functioning, family history, antisociality, and early initiation of drinking, in a representative population sample of U.S. past-year drinkers (n=26,946). The unadjusted mean values for all validating measures increased steadily with the severity threshold score, except that legal problems, the criterion with the highest score, was associated with lower values than expected. After adjusting for the total number of criteria endorsed, this direct relationship was no longer evident. The overall severity score was no more highly correlated with the validating measures than a simple count of criteria endorsed, nor did the two measures yield different risk curves. This reflects both within-criterion variation in severity and the fact that the number of criteria endorsed and their severity are so highly correlated that severity is essentially redundant. Attempts to formulate a scalar measure of AUD will do as well by relying on simple counts of criteria or symptom items as by using scales weighted by IRT measures of severity. Published by Elsevier Ireland Ltd.
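    The paper's central comparison, a severity-weighted criterion score against a simple count of criteria endorsed, can be mimicked with made-up weights and simulated response patterns (none of these numbers come from the study):

```python
# Sketch: severity-weighted criterion score vs a simple criterion count.
# The weights and endorsement probabilities are invented for illustration.
import random

random.seed(1)
weights = [0.4, 0.7, 1.0, 1.3, 1.6, 1.9, 2.2, 2.5, 2.8, 3.1, 3.4]  # hypothetical

def scores(n_people=500):
    simple, weighted = [], []
    for _ in range(n_people):
        endorsed = [random.random() < 0.3 for _ in weights]
        simple.append(sum(endorsed))
        weighted.append(sum(w for w, e in zip(weights, endorsed) if e))
    return simple, weighted

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

simple, weighted = scores()
r = pearson(simple, weighted)
```

Even with arbitrary weights, the two scores come out highly correlated, which is the redundancy the authors report.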

  19. Representative equations for the thermodynamic and transport properties of fluids near the gas-liquid critical point

    NASA Technical Reports Server (NTRS)

    Sengers, J. V.; Basu, R. S.; Sengers, J. M. H. L.

    1981-01-01

    A survey is presented of representative equations for various thermophysical properties of fluids in the critical region. Representative equations for the transport properties are included. Semi-empirical modifications of the theoretically predicted asymptotic critical behavior that yield simple and practical representations of the fluid properties in the critical region are emphasized.

  20. Spacecraft Internal Acoustic Environment Modeling

    NASA Technical Reports Server (NTRS)

    Chu, Shao-Sheng R.; Allen, Christopher S.

    2010-01-01

    Acoustic modeling can be used to identify key noise sources, determine/analyze sub-allocated requirements, keep track of the accumulation of minor noise sources, and to predict vehicle noise levels at various stages in vehicle development, first with estimates of noise sources, later with experimental data. This paper describes the implementation of acoustic modeling for design purposes by incrementally increasing model fidelity and validating the accuracy of the model while predicting the noise of sources under various conditions. During FY 07, a simple-geometry Statistical Energy Analysis (SEA) model was developed and validated using a physical mockup and acoustic measurements. A process for modeling the effects of absorptive wall treatments and the resulting reverberation environment was developed. During FY 08, a model with more complex and representative geometry of the Orion Crew Module (CM) interior was built, and noise predictions based on input noise sources were made. A corresponding physical mockup was also built. Measurements were made inside this mockup, and comparisons were made with the model and showed excellent agreement. During FY 09, the fidelity of the mockup and corresponding model were increased incrementally by including a simple ventilation system. The airborne noise contribution of the fans was measured using a sound intensity technique, since the sound power levels were not known beforehand. This is opposed to earlier studies where Reference Sound Sources (RSS) with known sound power level were used. Comparisons of the modeling results with the measurements in the mockup showed excellent agreement. During FY 10, the fidelity of the mockup and the model were further increased by including an ECLSS (Environmental Control and Life Support System) wall, associated closeout panels, and the gap between the ECLSS wall and the mockup wall. The effects of sealing the gap and adding sound-absorptive treatment to the ECLSS wall were also modeled and validated.
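    The effect of absorptive wall treatment on the reverberant environment, mentioned above, can be illustrated with the classic Sabine formula. This is a textbook sketch, not the SEA method used in the study, and the room volume and absorption coefficients are invented:

```python
# Sabine's formula sketch: added absorption shortens reverberation time.
# Geometry and absorption coefficients are illustrative, not Orion CM values.

def rt60_sabine(volume_m3, surfaces):
    """surfaces: list of (area_m2, absorption_coefficient) pairs.
    Returns reverberation time RT60 in seconds (Sabine formula, SI units)."""
    total_absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / total_absorption

bare = rt60_sabine(30.0, [(60.0, 0.05)])                   # mostly hard walls
treated = rt60_sabine(30.0, [(40.0, 0.05), (20.0, 0.6)])   # partial treatment
# treated < bare: absorptive treatment damps the reverberant field
```

An SEA model tracks energy flow between subsystems rather than a single room statistic, but the absorption-versus-reverberation trade-off is the same.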

  1. A study of hyperelastic models for predicting the mechanical behavior of extensor apparatus.

    PubMed

    Elyasi, Nahid; Taheri, Kimia Karimi; Narooei, Keivan; Taheri, Ali Karimi

    2017-06-01

    In this research, the nonlinear elastic behavior of the human extensor apparatus was investigated. To this end, the best material parameters of hyperelastic strain energy density functions comprising the Mooney-Rivlin, Ogden, invariants, and general exponential models were first derived for the simple tension experimental data. Because the stress response of nonlinear models in other deformation modes is significant, the calculated parameters were used to study the pure shear and balanced biaxial tension behavior of the extensor apparatus. The results indicated that the Mooney-Rivlin model predicts an unstable behavior in the balanced biaxial deformation of the extensor apparatus, while the Ogden order 1 represents a stable behavior, although the fit between the experimental data and the theoretical model was not satisfactory. However, the Ogden order 6 model was unstable in the simple tension mode, and the Ogden order 5 and general exponential models presented accurate and stable results. In order to reduce the material parameters, the invariants model with four material parameters was investigated, and this model presented the minimum error and stable behavior in all deformation modes. The ABAQUS Explicit solver was coupled with the VUMAT subroutine code of the invariants model to simulate the mechanical behavior of the central and terminal slips of the extensor apparatus during passive finger flexion, which is important in the prediction of boutonniere deformity and chronic mallet finger injuries, respectively. Also, to evaluate the adequacy of constitutive models in simulations, the results of the Ogden order 5 were presented. The difference between the predictions was attributed to the better fit of the invariants model compared with the Ogden model.
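    As a sketch of the fitting step described above, the two-constant Mooney-Rivlin model can be fitted to simple-tension data by linear least squares, since its incompressible uniaxial Cauchy stress is linear in the constants. The stretch values and "experimental" stresses below are synthetic, not the extensor apparatus data:

```python
# Sketch: fitting Mooney-Rivlin constants to uniaxial-tension data.
# The stress data are synthetic, generated for illustration only.

def mr_stress(lam, c10, c01):
    """Incompressible Mooney-Rivlin Cauchy stress in simple tension:
    sigma = 2*(lam^2 - 1/lam)*(c10 + c01/lam)."""
    return 2.0 * (lam**2 - 1.0 / lam) * (c10 + c01 / lam)

# synthetic "experiment" with known constants c10=0.3, c01=0.1
stretches = [1.05, 1.1, 1.2, 1.3, 1.5]
data = [mr_stress(l, 0.3, 0.1) for l in stretches]

def fit(stretches, data):
    """Linear least squares in (c10, c01): sigma = A*c10 + B*c01."""
    A = [2.0 * (l**2 - 1.0 / l) for l in stretches]
    B = [2.0 * (l**2 - 1.0 / l) / l for l in stretches]
    # normal equations for the two-parameter linear model
    saa = sum(a * a for a in A); sab = sum(a * b for a, b in zip(A, B))
    sbb = sum(b * b for b in B)
    say = sum(a * y for a, y in zip(A, data))
    sby = sum(b * y for b, y in zip(B, data))
    det = saa * sbb - sab * sab
    return (sbb * say - sab * sby) / det, (saa * sby - sab * say) / det

c10, c01 = fit(stretches, data)
```

With noise-free data the known constants are recovered exactly; the paper's point is that a good simple-tension fit does not guarantee stable behavior in shear or biaxial modes, which must be checked separately.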

  2. Geometric Representations of Condition Queries on Three-Dimensional Vector Fields

    NASA Technical Reports Server (NTRS)

    Henze, Chris

    1999-01-01

    Condition queries on distributed data ask where particular conditions are satisfied. It is possible to represent condition queries as geometric objects by plotting field data in various spaces derived from the data, and by selecting loci within these derived spaces which signify the desired conditions. Rather simple geometric partitions of derived spaces can represent complex condition queries because much complexity can be encapsulated in the derived space mapping itself. A geometric view of condition queries provides a useful conceptual unification, allowing one to intuitively understand many existing vector field feature detection algorithms -- and to design new ones -- as variations on a common theme. A geometric representation of condition queries also provides a simple and coherent basis for computer implementation, reducing a wide variety of existing and potential vector field feature detection techniques to a few simple geometric operations.

  3. Area of Stochastic Scrape-Off Layer for a Single-Null Divertor Tokamak Using Simple Map

    NASA Astrophysics Data System (ADS)

    Fisher, Tiffany; Verma, Arun; Punjabi, Alkesh

    1996-11-01

    The magnetic topology of a single-null divertor tokamak is represented by the Simple Map (Punjabi A, Verma A and Boozer A, Phys Rev Lett 69, 3322 (1992); J Plasma Phys 52, 91 (1994)). The Simple Map is characterized by a single parameter k representing the toroidal asymmetry. The width of the stochastic scrape-off layer and its area vary with the map parameter k. We calculate the area of the stochastic scrape-off layer for different k's and obtain a parametric expression for the area in terms of k and y_LastGoodSurface(k). This work is supported by US DOE OFES. Tiffany Fisher is a HU CFRT Summer Fusion High school Workshop Scholar from New Bern High School in North Carolina. She is supported by NASA SHARP Plus Program.

  4. Formally verifying human–automation interaction as part of a system model: limitations and tradeoffs

    PubMed Central

    Bass, Ellen J.

    2011-01-01

    Both the human factors engineering (HFE) and formal methods communities are concerned with improving the design of safety-critical systems. This work discusses a modeling effort that leveraged methods from both fields to perform formal verification of human–automation interaction with a programmable device. This effort utilizes a system architecture composed of independent models of the human mission, human task behavior, human-device interface, device automation, and operational environment. The goals of this architecture were to allow HFE practitioners to perform formal verifications of realistic systems that depend on human–automation interaction in a reasonable amount of time using representative models, intuitive modeling constructs, and decoupled models of system components that could be easily changed to support multiple analyses. This framework was instantiated using a patient controlled analgesia pump in a two-phase process where the models in each phase were verified using a common set of specifications. The first phase focused on the mission, human-device interface, and device automation, and included a simple, unconstrained human task behavior model. The second phase replaced the unconstrained task model with one representing normative pump programming behavior. Because models produced in the first phase were too large for the model checker to verify, a number of model revisions were undertaken that affected the goals of the effort. While the use of human task behavior models in the second phase helped mitigate model complexity, verification time increased. Additional modeling tools and technological developments are necessary for model checking to become a more usable technique for HFE. PMID:21572930
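    The core operation behind such formal verification, exhaustively searching the composed system's state space for violations of a specification, can be sketched as a breadth-first reachability check. The toy device/task states below are invented and bear no relation to the paper's actual PCA pump models:

```python
# Minimal explicit-state reachability check, the core computation a model
# checker performs on a composed system model. The states and transitions
# here are a made-up toy, not the paper's PCA pump models.
from collections import deque

# toy composed state: (device_mode, task_step)
def successors(state):
    mode, step = state
    nxt = []
    if mode == "idle" and step < 3:
        nxt.append((mode, step + 1))        # user advances programming
    if mode == "idle" and step == 3:
        nxt.append(("infusing", 0))         # programming complete, pump runs
    return nxt

def violates(state):
    # toy safety property: the pump never infuses mid-programming
    return state[0] == "infusing" and state[1] > 0

def check(initial):
    """Breadth-first search over all reachable states."""
    seen, queue = {initial}, deque([initial])
    while queue:
        s = queue.popleft()
        if violates(s):
            return False, s                 # counterexample found
        for n in successors(s):
            if n not in seen:
                seen.add(n)
                queue.append(n)
    return True, None

ok, counterexample = check(("idle", 0))
```

Real model checkers verify temporal-logic specifications and face the state-space explosion the authors describe; the breadth-first search above is only the conceptual core.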

  5. A Finite Element Analysis for Predicting the Residual Compressive Strength of Impact-Damaged Sandwich Panels

    NASA Technical Reports Server (NTRS)

    Ratcliffe, James G.; Jackson, Wade C.

    2008-01-01

    A simple analysis method has been developed for predicting the residual compressive strength of impact-damaged sandwich panels. The method is tailored for honeycomb core-based sandwich specimens that exhibit an indentation growth failure mode under axial compressive loading, which is driven largely by the crushing behavior of the core material. The analysis method is in the form of a finite element model, where the impact-damaged facesheet is represented using shell elements and the core material is represented using spring elements, aligned in the thickness direction of the core. The nonlinear crush response of the core material used in the analysis is based on data from flatwise compression tests. A comparison with a previous analysis method and some experimental data shows good agreement with results from this new approach.

  6. A Finite Element Analysis for Predicting the Residual Compression Strength of Impact-Damaged Sandwich Panels

    NASA Technical Reports Server (NTRS)

    Ratcliffe, James G.; Jackson, Wade C.

    2008-01-01

    A simple analysis method has been developed for predicting the residual compression strength of impact-damaged sandwich panels. The method is tailored for honeycomb core-based sandwich specimens that exhibit an indentation growth failure mode under axial compression loading, which is driven largely by the crushing behavior of the core material. The analysis method is in the form of a finite element model, where the impact-damaged facesheet is represented using shell elements and the core material is represented using spring elements, aligned in the thickness direction of the core. The nonlinear crush response of the core material used in the analysis is based on data from flatwise compression tests. A comparison with a previous analysis method and some experimental data shows good agreement with results from this new approach.

  7. A FORTRAN program for calculating nonlinear seismic ground response

    USGS Publications Warehouse

    Joyner, William B.

    1977-01-01

    The program described here was designed for calculating the nonlinear seismic response of a system of horizontal soil layers underlain by a semi-infinite elastic medium representing bedrock. Excitation is a vertically incident shear wave in the underlying medium. The nonlinear hysteretic behavior of the soil is represented by a model consisting of simple linear springs and Coulomb friction elements arranged as shown. A boundary condition is used which takes account of finite rigidity in the elastic substratum. The computations are performed by an explicit finite-difference scheme that proceeds step by step in space and time. A brief program description is provided here with instructions for preparing the input and a source listing. A more detailed discussion of the method is presented elsewhere as is the description of a different program employing implicit integration.
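    The building block of the soil model above, a linear spring paired with a Coulomb friction element, can be sketched as an elastic-trial/friction-cap stress update. This is an illustrative single unit, not Joyner's FORTRAN code, and the stiffness and friction limit are arbitrary:

```python
# Sketch of one spring + Coulomb-friction element, the kind of unit the
# Joyner soil model assembles. Stiffness and yield values are illustrative.

def slider_update(stress, strain_increment, k=1.0, yield_stress=0.5):
    """Elastic trial step, then cap |stress| at the Coulomb friction limit."""
    trial = stress + k * strain_increment
    return max(-yield_stress, min(yield_stress, trial))

# cyclic loading traces a hysteresis loop rather than a straight line
stress = 0.0
path = []
for dstrain in [0.3, 0.3, 0.3, -0.3, -0.3, -0.3]:
    stress = slider_update(stress, dstrain)
    path.append(stress)
```

Assembling many such units with different limits, as the program's model does, reproduces the smooth hysteretic stress-strain behavior of soil.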

  8. Summary of the DREAM8 Parameter Estimation Challenge: Toward Parameter Identification for Whole-Cell Models

    PubMed Central

    Karr, Jonathan R.; Williams, Alex H.; Zucker, Jeremy D.; Raue, Andreas; Steiert, Bernhard; Timmer, Jens; Kreutz, Clemens; Wilkinson, Simon; Allgood, Brandon A.; Bot, Brian M.; Hoff, Bruce R.; Kellen, Michael R.; Covert, Markus W.; Stolovitzky, Gustavo A.; Meyer, Pablo

    2015-01-01

    Whole-cell models that explicitly represent all cellular components at the molecular level have the potential to predict phenotype from genotype. However, even for simple bacteria, whole-cell models will contain thousands of parameters, many of which are poorly characterized or unknown. New algorithms are needed to estimate these parameters and enable researchers to build increasingly comprehensive models. We organized the Dialogue for Reverse Engineering Assessments and Methods (DREAM) 8 Whole-Cell Parameter Estimation Challenge to develop new parameter estimation algorithms for whole-cell models. We asked participants to identify a subset of parameters of a whole-cell model given the model’s structure and in silico “experimental” data. Here we describe the challenge, the best performing methods, and new insights into the identifiability of whole-cell models. We also describe several valuable lessons we learned toward improving future challenges. Going forward, we believe that collaborative efforts supported by inexpensive cloud computing have the potential to solve whole-cell model parameter estimation. PMID:26020786

  9. The Light Side of Dark Matter

    NASA Astrophysics Data System (ADS)

    Cisneros, Sophia

    2013-04-01

    We present a new, heuristic, two-parameter model for predicting the rotation curves of disc galaxies. The model is tested on 22 randomly chosen galaxies, represented in 35 data sets. This Lorentz Convolution [LC] model is derived from a non-linear, relativistic solution of a Kerr-type wave equation, where small changes in the photon's frequencies, resulting from the curved space time, are convolved into a sequence of Lorentz transformations. The LC model is parametrized with only the diffuse, luminous stellar and gaseous masses reported with each data set of observations used. The LC model predicts observed rotation curves across a wide range of disk galaxies. The LC model was constructed to occupy the same place in the explanation of rotation curves that Dark Matter does, so that a simple investigation of the relation between luminous and dark matter might be made via a parameter (a). We find the parameter (a) to demonstrate interesting structure. We compare the new model prediction to both the NFW model and MOND fits when available.

  10. A simple model of variable residence time flow and nutrient transport in the chalk

    NASA Astrophysics Data System (ADS)

    Jackson, Bethanna M.; Wheater, Howard S.; Mathias, Simon A.; McIntyre, Neil; Butler, Adrian P.

    2006-10-01

    A basic problem of modelling flow and transport in Chalk catchments arises from the existence of a deep unsaturated zone, with complex interactions between flow in fractures and water held in the fine pores of the rock matrix. The response of the water table to major infiltration episodes is rapid (of the order of days). However, chemical signals are strongly damped, suggesting that this water is of varying age, with a corresponding mixed history of nutrient loading. Clearly this effect should be represented in any model of nutrients in Chalk systems. The applicability of simplified physically-based model formulations to represent the dual response in an integrated way has been investigated by a variety of researchers, but it has been shown that these approximations break down in application to the Chalk. Mathias et al. [Mathias, S., Butler, A.P., Jackson, B.M., Wheater, H.S., this issue. Characterising flow in the Chalk unsaturated zone. In: Wheater, H.S., Peach, D., Neal, C, editors, Hydrology on LOCAR in the Pang/Lambourn, special issue of J. Hydrol, doi:10.1016/j.jhydrol.2006.04.010] present a dual permeability model that explains the observed response, but such complex formulations are not readily incorporated in catchment-scale nutrient models. This paper reviews previous approaches to modelling the Chalk and then presents a pragmatic approach, with transport of solute and water through the unsaturated zone treated separately, and combined at the water table. Varying residence times are included through considering the distance between the water table and the soil surface, and the history of nutrient application at the surface. If an average rate of downwards migration of the nutrients is assumed, it is possible to derive a travel time distribution of nitrate transport to the water table using a DTM (digital terrain model) map of elevation and information on groundwater levels. This distribution can then be implemented through difference equations.
The rationale behind the model and the resulting algorithm are described, and the algorithm is then applied to a hypothetical case study of nutrient loading located in the Lambourn, a groundwater-dominated Chalk catchment in Southern England. Simulated groundwater concentrations are very similar in magnitude and variability to observed Chalk groundwater series, suggesting that this simple conceptual model may well be able to capture the dominant responses of nutrient transport through the Chalk.
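The travel-time construction described above can be sketched directly: with an assumed constant downward migration rate, each point's unsaturated-zone thickness maps to an arrival time, and the catchment-wide distribution follows. The depths and rate below are invented numbers, not Lambourn data:

```python
# Sketch: travel-time distribution of nitrate reaching the water table,
# assuming a constant downward migration rate. The depths and the rate
# are illustrative values, not data from the Lambourn catchment.

rate_m_per_yr = 0.9                                 # assumed migration rate
depths_m = [5, 12, 20, 8, 30, 15, 10, 25, 18, 7]    # DTM minus water table

travel_times = [d / rate_m_per_yr for d in depths_m]

def fraction_arrived(t_years):
    """Fraction of the catchment whose surface nitrate loading has
    reached the water table by time t."""
    return sum(tt <= t_years for tt in travel_times) / len(travel_times)
```

Convolving such a distribution with the history of nutrient application at the surface gives the kind of groundwater concentration series the paper simulates via difference equations.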

  11. A Simple ab initio Model for the Hydrated Electron that Matches Experiment

    PubMed Central

    Kumar, Anil; Walker, Jonathan A.; Bartels, David M.; Sevilla, Michael D.

    2015-01-01

    Since its discovery over 50 years ago, the “structure” and properties of the hydrated electron has been a subject for wonderment and also fierce debate. In the present work we seriously explore a minimal model for the aqueous electron, consisting of a small water anion cluster embedded in a polarized continuum, using several levels of ab initio calculation and basis set. The minimum energy zero “Kelvin” structure found for any 4-water (or larger) anion cluster, at any post-Hartree-Fock theory level, is very similar to a recently reported embedded-DFT-in-classical-water-MD simulation (UMJ: Uhlig, Marsalek, and Jungwirth, Journal of Physical Chemistry Letters 2012, 3, 3071-5), with four OH bonds oriented toward the maximum charge density in a small central “void”. The minimum calculation with just four water molecules does a remarkably good job of reproducing the resonance Raman properties, the radius of gyration derived from the optical spectrum, the vertical detachment energy, and the hydration free energy. For the first time we also successfully calculate the EPR g-factor and (low temperature ice) hyperfine couplings. The simple tetrahedral anion cluster model conforms very well to experiment, suggesting it does in fact represent the dominant structural motif of the hydrated electron. PMID:26275103

  12. The transition probability and the probability for the left-most particle's position of the q-totally asymmetric zero range process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korhonen, Marko; Lee, Eunghyun

    2014-01-15

    We treat the N-particle zero range process whose jumping rates satisfy a certain condition. This condition is required to use the Bethe ansatz and the resulting model is the q-boson model by Sasamoto and Wadati [“Exact results for one-dimensional totally asymmetric diffusion models,” J. Phys. A 31, 6057–6071 (1998)] or the q-totally asymmetric zero range process (TAZRP) by Borodin and Corwin [“Macdonald processes,” Probab. Theory Relat. Fields (to be published)]. We find the explicit formula of the transition probability of the q-TAZRP via the Bethe ansatz. By using the transition probability we find the probability distribution of the left-most particle's position at time t. To find the probability for the left-most particle's position we find a new identity corresponding to the identity for the asymmetric simple exclusion process by Tracy and Widom [“Integral formulas for the asymmetric simple exclusion process,” Commun. Math. Phys. 279, 815–844 (2008)]. For the initial state that all particles occupy a single site, the probability distribution of the left-most particle's position at time t is represented by the contour integral of a determinant.

  13. Nature as an engineer: one simple concept of a bio-inspired functional artificial muscle.

    PubMed

    Schmitt, S; Haeufle, D F B; Blickhan, R; Günther, M

    2012-09-01

    The biological muscle is a powerful, flexible and versatile actuator. Its intrinsic characteristics determine how movements are generated and controlled. Robotic and prosthetic applications expect to profit from relying on bio-inspired actuators which exhibit natural (muscle-like) characteristics. As of today, when constructing a technical actuator, it is not possible to copy the exact molecular structure of a biological muscle. Alternatively, one may ask how its characteristics can be realized with known mechanical components. Recently, a mechanical construct for an artificial muscle was proposed, which exhibits hyperbolic force-velocity characteristics. In this paper, we promote the construction concept by substantiating the mechanical design of the biological muscle with a simple model, proving the feasibility of its real-world implementation, and checking the outputs of both for mutual consistency and agreement with biological measurements. In particular, the relations of force, enthalpy rate and mechanical efficiency versus contraction velocity of both the construct's technical implementation and its numerical model were determined in quick-release experiments. All model predictions for these relations and the hardware results are in good agreement with the biological literature. We conclude that the construct represents a mechanical concept of natural actuation, which is suitable for laying down some useful suggestions when designing bio-inspired actuators.
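    The hyperbolic force-velocity characteristic that the construct reproduces is, in essence, Hill's classic relation. A minimal sketch with illustrative, normalized parameters (not values from the paper):

```python
# Sketch of the hyperbolic (Hill-type) force-velocity relation that the
# artificial-muscle construct exhibits. Parameter values are illustrative.

def hill_force(v, f0=1.0, a=0.25, b=0.25):
    """Concentric (shortening) force for contraction velocity v >= 0,
    normalized so the isometric force is f(0) = f0; force falls
    hyperbolically with increasing shortening speed."""
    return (f0 * b - a * v) / (b + v)

v_max = 1.0 * 0.25 / 0.25   # velocity where force reaches zero: f0*b/a
```

Force decreases monotonically from the isometric value at v = 0 to zero at v_max, the shape that distinguishes muscle-like actuation from a simple damped motor.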

  14. Dynamical system with plastic self-organized velocity field as an alternative conceptual model of a cognitive system.

    PubMed

    Janson, Natalia B; Marsden, Christopher J

    2017-12-05

    It is well known that architecturally the brain is a neural network, i.e. a collection of many relatively simple units coupled flexibly. However, it has been unclear how the possession of this architecture enables higher-level cognitive functions, which are unique to the brain. Here, we consider the brain from the viewpoint of dynamical systems theory and hypothesize that the unique feature of the brain, the self-organized plasticity of its architecture, could represent the means of enabling the self-organized plasticity of its velocity vector field. We propose that, conceptually, the principle of cognition could amount to the existence of appropriate rules governing self-organization of the velocity field of a dynamical system with an appropriate account of stimuli. To support this hypothesis, we propose a simple non-neuromorphic mathematical model with a plastic self-organized velocity field, which has no prototype in the physical world. This system is shown to be capable of basic cognition, which is illustrated numerically and with musical data. Our conceptual model could provide an additional insight into the working principles of the brain. Moreover, hardware implementations of plastic velocity fields self-organizing according to various rules could pave the way to creating artificial intelligence of a novel type.

  15. DNA biosensors that reason.

    PubMed

    Sainz de Murieta, Iñaki; Rodríguez-Patón, Alfonso

    2012-08-01

    Despite the many designs of devices operating via DNA strand displacement, surprisingly none is explicitly devoted to the implementation of logical deductions. The present article introduces a new model of biosensor device that uses nucleic acid strands to encode simple rules such as "IF DNA_strand(1) is present THEN disease(A)" or "IF DNA_strand(1) AND DNA_strand(2) are present THEN disease(B)". Taking advantage of the strand displacement operation, our model makes these simple rules interact with input signals (either DNA or any type of RNA) to generate an output signal (in the form of nucleotide strands). This output signal represents a diagnosis, which can be measured using FRET techniques, cascaded as the input of another logical deduction with different rules, or even be a drug that is administered in response to a set of symptoms. The encoding introduces an implicit error cancellation mechanism, which increases the system scalability, enabling longer inference cascades with a bounded and controllable signal-to-noise ratio. It also allows the same rule to be used in forward inference or backward inference, providing the option of validly outputting negated propositions (e.g. "diagnosis A excluded"). The models presented in this paper can be used to implement smart logical DNA devices that perform genetic diagnosis in vitro. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
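    The logical behaviour such a device implements can be sketched in software as forward-chaining inference over presence rules. The rule set below mirrors the examples quoted in the abstract; the cascaded `administer_drug` consequent is a hypothetical illustration, not part of the paper's encoding.

```python
def infer(present, rules):
    """Forward-chain: repeatedly fire every rule whose antecedents are all known."""
    known = set(present)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if antecedents <= known and consequent not in known:
                known.add(consequent)
                changed = True
    return known

# Hypothetical rule set mirroring the abstract's examples.
rules = [
    (frozenset({"DNA_strand(1)"}), "disease(A)"),
    (frozenset({"DNA_strand(1)", "DNA_strand(2)"}), "disease(B)"),
    (frozenset({"disease(B)"}), "administer_drug"),   # cascaded deduction
]

diagnosis = infer({"DNA_strand(1)", "DNA_strand(2)"}, rules)
```

    In the biochemical device each rule is a displacement complex rather than a tuple, but the inference semantics are the same: outputs of one rule can serve as inputs to the next.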

  16. Humus and humility in ecosystem model design

    NASA Astrophysics Data System (ADS)

    Rowe, Ed

    2015-04-01

    Prediction is central to science. Empirical scientists couch their predictions as hypotheses and tend to deal with simple models such as regressions, but they are modellers just as much as those who combine mechanistic hypotheses into more complex models. There are two main challenges for both groups: to strive for accurate predictions, and to ensure that the work is relevant to wider society. There is a role for blue-sky research, but the multiple environmental changes that characterise the 21st century place an onus on ecosystem scientists to develop tools for understanding environmental change and planning responses. Authors such as Funtowicz and Ravetz (1990) have argued that this situation represents "post-normal" science and that scientists should see themselves more humbly as actors within a societal process rather than as arbiters of truth. Modellers aim for generality, e.g. to accurately simulate the responses of a variety of ecosystems to several different environmental drivers. More accurate predictions can usually be achieved by including more explanatory factors or mechanisms in a model, even though this often results in a less efficient, less parsimonious model. This drives models towards ever-increasing complexity, and many models grow until they are effectively unusable beyond their development team. An alternative way forward is to focus on developing component models. Technologies for integrating dynamic models emphasise the separation of the model engine (algorithms) from code which handles time-stepping and the user interface. Developing components also requires some humility on the part of modellers, since collaboration will be needed to represent the whole system, and also because the idea that a simple component can or should represent the entire understanding of a scientific discipline is often difficult to accept.
Policy-makers and land managers typically have questions different from those posed by scientists working within a specialism, and models that are developed in collaboration with stakeholders are much more likely to be used (Sterk et al., 2012). Rather than trying to re-frame the question to suit the model, modellers need the humility to accept that the model is inappropriate and should develop the capacity to model the question. In this study these issues are explored using the MADOC model (Rowe et al., 2014) as an example. MADOC was developed by integrating existing models of humus development, acid-base exchange, and organic matter dissolution to answer a particular policy question: how do acidifying pollutants affect pH in humic soils? Including the negative feedback whereby an increase in pH reduces the solubility of organic acids improved the predictive accuracy for pH and dissolved organic carbon flux in the peats and organomineral soils that are widespread in upland Britain. The model has been used to generate the UK response to data requests under the UN Convention on Long-Range Transboundary Air Pollution. References: Funtowicz, S.O. & Ravetz, J.R., 1990. Uncertainty and Quality in Science for Policy. Kluwer. Rowe, E.C., et al. 2014. Environmental Pollution 184, 271-282. Sterk, B., et al. 2012. Environmental Modelling & Software 26, 310-316.

  17. The Evolution of El Nino-Precipitation Relationships from Satellites and Gauges

    NASA Technical Reports Server (NTRS)

    Curtis, Scott; Adler, Robert F.; Starr, David OC (Technical Monitor)

    2002-01-01

    This study uses a twenty-three year (1979-2001) satellite-gauge merged community data set to further describe the relationship between El Nino Southern Oscillation (ENSO) and precipitation. The globally complete precipitation fields reveal coherent bands of anomalies that extend from the tropics to the polar regions. Also, ENSO-precipitation relationships were analyzed during the six strongest El Ninos from 1979 to 2001. Seasons of evolution, Pre-onset, Onset, Peak, Decay, and Post-decay, were identified based on the strength of the El Nino. Then two simple and independent models, first-order harmonic and linear, were fit to the monthly time series of normalized precipitation anomalies for each grid block. The sinusoidal model represents a three-phase evolution of precipitation, either dry-wet-dry or wet-dry-wet. This model is also highly correlated with the evolution of sea surface temperatures in the equatorial Pacific. The linear model represents a two-phase evolution of precipitation, either dry-wet or wet-dry. These models combine to account for over 50% of the precipitation variability for over half the globe during El Nino. Most regions, especially away from the Equator, favor the linear model. Areas that show the largest trend from dry to wet are southeastern Australia, eastern Indian Ocean, southern Japan, and off the coast of Peru. The northern tropical Pacific and Southeast Asia show the opposite trend.

  18. Interactive coupling of regional climate and sulfate aerosol models over eastern Asia

    NASA Astrophysics Data System (ADS)

    Qian, Yun; Giorgi, Filippo

    1999-03-01

    The NCAR regional climate model (RegCM) is interactively coupled to a simple radiatively active sulfate aerosol model over eastern Asia. Both direct and indirect aerosol effects are represented. The coupled model system is tested for two simulation periods, November 1994 and July 1995, with aerosol sources representative of present-day anthropogenic sulfur emissions. The model sensitivity to the intensity of the aerosol source is also studied. The main conclusions from our work are as follows: (1) The aerosol distribution and cycling processes show substantial regional spatial variability, and temporal variability varying on a range of scales, from the diurnal scale of boundary layer and cumulus cloud evolution to the 3-10 day scale of synoptic scale events and the interseasonal scale of general circulation features; (2) both direct and indirect aerosol forcings have regional effects on surface climate; (3) the regional climate response to the aerosol forcing is highly nonlinear, especially during the summer, due to the interactions with cloud and precipitation processes; (4) in our simulations the role of the aerosol indirect effects is dominant over that of direct effects; (5) aerosol-induced feedback processes can affect the aerosol burdens at the subregional scale. This work constitutes the first step in a long term research project aimed at coupling a hierarchy of chemistry/aerosol models to the RegCM over the eastern Asia region.

  19. Mission Simulation of Space Lidar Measurements for Seasonal and Regional CO2 Variations

    NASA Technical Reports Server (NTRS)

    Kawa, Stephan; Collatz, G. J.; Mao, J.; Abshire, J. B.; Sun, X.; Weaver, C. J.

    2010-01-01

    Results of mission simulation studies are presented for a laser-based atmospheric CO2 sounder. The simulations are based on real-time carbon cycle process modeling and data analysis. The mission concept corresponds to the Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) mission recommended by the US National Academy of Sciences Decadal Survey of Earth Science and Applications from Space. One prerequisite for meaningful quantitative sensor evaluation is realistic CO2 process modeling across a wide range of scales, i.e., does the model have representative spatial and temporal gradients? Examples of model comparison with data will be shown. Another requirement is a relatively complete description of the atmospheric and surface state, which we have obtained from meteorological data assimilation and satellite measurements from MODIS and CALIPSO. We use radiative transfer model calculations, an instrument model with representative errors, and a simple retrieval approach to complete the cycle from "nature" run to "pseudo-data" CO2. Several mission and instrument configuration options are examined, and the sensitivity to key design variables is shown. We use the simulation framework to demonstrate that within reasonable technological assumptions for the system performance, relatively high measurement precision can be obtained, but errors depend strongly on environmental conditions as well as instrument specifications. Examples are also shown of how the resulting pseudo-measurements might be used to address key carbon cycle science questions.

  20. Computing local edge probability in natural scenes from a population of oriented simple cells

    PubMed Central

    Ramachandra, Chaithanya A.; Mel, Bartlett W.

    2013-01-01

    A key computation in visual cortex is the extraction of object contours, where the first stage of processing is commonly attributed to V1 simple cells. The standard model of a simple cell, an oriented linear filter followed by a divisive normalization, fits a wide variety of physiological data, but is a poor performing local edge detector when applied to natural images. The brain's ability to finely discriminate edges from nonedges therefore likely depends on information encoded by local simple cell populations. To gain insight into the corresponding decoding problem, we used Bayes's rule to calculate edge probability at a given location/orientation in an image based on a surrounding filter population. Beginning with a set of ∼100 filters, we culled out a subset that were maximally informative about edges, and minimally correlated to allow factorization of the joint on- and off-edge likelihood functions. Key features of our approach include a new, efficient method for ground-truth edge labeling, an emphasis on achieving filter independence, including a focus on filters in the region orthogonal rather than tangential to an edge, and the use of a customized parametric model to represent the individual filter likelihood functions. The resulting population-based edge detector has zero parameters, calculates edge probability based on a sum of surrounding filter influences, is much more sharply tuned than the underlying linear filters, and effectively captures fine-scale edge structure in natural scenes. Our findings predict nonmonotonic interactions between cells in visual cortex, wherein a cell may for certain stimuli excite and for other stimuli inhibit the same neighboring cell, depending on the two cells' relative offsets in position and orientation, and their relative activation levels. PMID:24381295
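    Once the filter likelihoods factorize, Bayes's rule reduces the population decode to a logistic function of a sum of per-filter log-likelihood ratios, which is why edge probability is "a sum of surrounding filter influences." A minimal sketch with illustrative Gaussian likelihoods (the paper uses a customized parametric model fitted to labeled edges):

```python
import math

def edge_probability(responses, on_params, off_params, prior=0.1):
    """Posterior edge probability from independent filter likelihoods."""
    def log_gauss(r, mu, sigma):
        return -0.5 * ((r - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))
    # Independence lets the joint likelihood ratio factor into a sum of logs.
    llr = sum(log_gauss(r, *on) - log_gauss(r, *off)
              for r, on, off in zip(responses, on_params, off_params))
    logit = llr + math.log(prior / (1.0 - prior))
    return 1.0 / (1.0 + math.exp(-logit))

on_params = [(1.0, 0.5)] * 4    # (mean, sd) of each filter's response on an edge
off_params = [(0.0, 0.5)] * 4   # ... off an edge (illustrative, not fitted values)
p_strong = edge_probability([1.0, 0.9, 1.1, 1.0], on_params, off_params)
p_weak = edge_probability([0.0, 0.1, -0.1, 0.0], on_params, off_params)
```

    The sharper-than-linear tuning reported above arises because the posterior saturates through the logistic, so moderate changes in the summed evidence swing the probability between near 0 and near 1.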

  1. Exploring global carbon turnover and radiocarbon cycling in terrestrial biosphere models

    NASA Astrophysics Data System (ADS)

    Graven, H. D.; Warren, H.

    2017-12-01

    The uptake of carbon into terrestrial ecosystems through net primary productivity (NPP) and the turnover of that carbon through various pathways are the fundamental drivers of changing carbon stocks on land, in addition to human-induced and natural disturbances. Terrestrial biosphere models use different formulations for carbon uptake and release, resulting in a range of values in NPP of 40-70 PgC/yr and biomass turnover times of about 25-40 years for the preindustrial period in current-generation models from CMIP5. Biases in carbon uptake and turnover impact simulated carbon uptake and storage in the historical period and later in the century under changing climate and CO2 concentration; however, evaluating global-scale NPP and carbon turnover is challenging. Scaling up of plot-scale measurements involves uncertainty due to the large heterogeneity across ecosystems and biomass types, some of which are not well-observed. We are developing the modelling of radiocarbon in terrestrial biosphere models, with a particular focus on decadal 14C dynamics after the nuclear weapons testing in the 1950s-60s, including the impact of carbon flux trends and variability on 14C cycling. We use an estimate of the total inventory of excess 14C in the biosphere constructed by Naegler and Levin (2009) using a 14C budget approach incorporating estimates of total 14C produced by the weapons tests and atmospheric and oceanic 14C observations. By simulating radiocarbon in simple biosphere box models using carbon fluxes from the CMIP5 models, we find that carbon turnover is too rapid in many of the simple models - the models appear to take up too much 14C and release it too quickly. Therefore many CMIP5 models may also simulate carbon turnover that is too rapid. A caveat is that the simple box models we use may not adequately represent carbon dynamics in the full-scale models.
Explicit simulation of radiocarbon in terrestrial biosphere models would allow more robust evaluation of biosphere models and the investigation of climate-carbon cycle feedbacks on various timescales. Explicit simulation of radiocarbon and carbon-13 in terrestrial biosphere models of Earth System Models, as well as in ocean models, is recommended by CMIP6 and supported by CMIP6 protocols and forcing datasets.
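    The box-model diagnostic described above can be sketched in a few lines: a one-box biosphere with first-order turnover time tau, driven by an idealized atmospheric bomb spike. The forcing and numbers are illustrative assumptions, not the Naegler and Levin (2009) budget.

```python
import numpy as np

def bomb_ratio(tau, years=40, spike_year=5):
    """Normalized 14C/C ratio of a one-box biosphere with turnover time tau (yr)."""
    R_atm = np.ones(years)
    R_atm[spike_year:] = 1.8          # idealized post-bomb atmospheric excess
    R_bio = np.empty(years)
    r = 1.0
    for t in range(years):
        r += (R_atm[t] - r) / tau     # turnover mixes in atmospheric carbon
        R_bio[t] = r
    return R_bio

fast, slow = bomb_ratio(tau=10.0), bomb_ratio(tau=40.0)
```

    A model whose biomass 14C tracks the atmospheric spike faster than the observed excess-14C inventory implies has turnover that is too rapid; this is the style of diagnostic applied above to box models driven by CMIP5 fluxes.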

  2. The Role of Model Complexity in Determining Patterns of Chlorophyll Variability in the Coastal Northwest North Atlantic

    NASA Astrophysics Data System (ADS)

    Kuhn, A. M.; Fennel, K.; Bianucci, L.

    2016-02-01

    A key feature of the North Atlantic Ocean's biological dynamics is the annual phytoplankton spring bloom. In the region comprising the continental shelf and adjacent deep ocean of the northwest North Atlantic, we identified two patterns of bloom development: 1) locations with cold temperatures and deep winter mixed layers, where the spring bloom peaks around April and the annual chlorophyll cycle has a large amplitude, and 2) locations with warmer temperatures and shallow winter mixed layers, where the spring bloom peaks earlier in the year, sometimes indiscernible from the fall bloom. These patterns result from a combination of limiting environmental factors and interactions among planktonic groups with different optimal requirements. Simple models that represent the ecosystem with a single phytoplankton (P) and a single zooplankton (Z) group are challenged to reproduce these ecological interactions. Here we investigate the effect that added complexity has on determining spatio-temporal chlorophyll. We compare two ecosystem models, one that contains one P and one Z group, and one with two P and three Z groups. We consider three types of changes in complexity: 1) added dependencies among variables (e.g., temperature dependent rates), 2) modified structural pathways, and 3) added pathways. Subsets of the most sensitive parameters are optimized in each model to replicate observations in the region. For computational efficiency, the parameter optimization is performed using 1D surrogates of a 3D model. We evaluate how model complexity affects model skill, and whether the optimized parameter sets found for each model modify the interpretation of ecosystem functioning. Spatial differences in the parameter sets that best represent different areas hint at the existence of different ecological communities or at physical-biological interactions that are not represented in the simplest model. 
Our methodology emphasizes the combined use of observations, 1D models to help identify patterns, and 3D models able to simulate the environment more realistically, as a means to acquire predictive understanding of the ocean's ecology.

  3. An insect-inspired model for visual binding II: functional analysis and visual attention.

    PubMed

    Northcutt, Brandon D; Higgins, Charles M

    2017-04-01

    We have developed a neural network model capable of performing visual binding inspired by neuronal circuitry in the optic glomeruli of flies: a brain area that lies just downstream of the optic lobes where early visual processing is performed. This visual binding model is able to detect objects in dynamic image sequences and bind together their respective characteristic visual features (such as color, motion, and orientation) by taking advantage of their common temporal fluctuations. Visual binding is represented in the form of an inhibitory weight matrix which learns over time which features originate from a given visual object. In the present work, we show that information represented implicitly in this weight matrix can be used to explicitly count the number of objects present in the visual image, to enumerate their specific visual characteristics, and even to create an enhanced image in which one particular object is emphasized over others, thus implementing a simple form of visual attention. Further, we present a detailed analysis which reveals the function and theoretical limitations of the visual binding network and in this context describe a novel network learning rule which is optimized for visual binding.

  4. Stochastic Forcing for High-Resolution Regional and Global Ocean and Atmosphere-Ocean Coupled Ensemble Forecast System

    NASA Astrophysics Data System (ADS)

    Rowley, C. D.; Hogan, P. J.; Martin, P.; Thoppil, P.; Wei, M.

    2017-12-01

    An extended range ensemble forecast system is being developed in the US Navy Earth System Prediction Capability (ESPC), and a global ocean ensemble generation capability to represent uncertainty in the ocean initial conditions has been developed. At extended forecast times, the uncertainty due to model error overtakes that due to the initial conditions as the primary source of forecast uncertainty. Recently, stochastic parameterization or stochastic forcing techniques have been applied to represent the model error in research and operational atmospheric, ocean, and coupled ensemble forecasts. A simple stochastic forcing technique has been developed for application to US Navy high resolution regional and global ocean models, for use in ocean-only and coupled atmosphere-ocean-ice-wave ensemble forecast systems. Perturbation forcing is added to the tendency equations for state variables, with the forcing defined by random 3- or 4-dimensional fields with horizontal, vertical, and temporal correlations specified to characterize different possible kinds of error. Here, we demonstrate the stochastic forcing in regional and global ensemble forecasts with varying perturbation amplitudes and length and time scales, and assess the change in ensemble skill measured by a range of deterministic and probabilistic metrics.
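    A minimal sketch of such a perturbation field, assuming Gaussian horizontal smoothing and an AR(1) temporal process in one spatial dimension (the operational system's correlation models and scales may differ):

```python
import numpy as np

def correlated_forcing(nt, nx, l_corr, t_corr, amp, seed=0):
    """Random field with Gaussian horizontal and AR(1) temporal correlation."""
    rng = np.random.default_rng(seed)
    alpha = np.exp(-1.0 / t_corr)                  # temporal correlation per step
    lags = np.arange(-3 * l_corr, 3 * l_corr + 1)
    kernel = np.exp(-0.5 * (lags / l_corr) ** 2)   # Gaussian smoothing kernel
    kernel /= kernel.sum()
    field = np.zeros(nx)
    out = np.empty((nt, nx))
    for t in range(nt):
        white = rng.standard_normal(nx)
        smooth = np.convolve(white, kernel, mode="same")   # horizontal correlation
        field = alpha * field + np.sqrt(1.0 - alpha**2) * smooth
        out[t] = amp * field         # this slice is added to a tendency equation
    return out

forcing = correlated_forcing(nt=200, nx=128, l_corr=5, t_corr=10, amp=0.1)
```

    Each time slice is added to the tendency of a state variable; choosing `l_corr`, `t_corr`, and `amp` is how different hypothesized kinds of model error are characterized.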

  5. Influence of the Dukhin and Reynolds numbers on the apparent zeta potential of granular porous media.

    PubMed

    Crespy, A; Bolève, A; Revil, A

    2007-01-01

    The Helmholtz-Smoluchowski (HS) equation is widely used to determine the apparent zeta potential of porous materials using the streaming potential method. We present a model able to correct this apparent zeta potential of granular media of the influence of the Dukhin and Reynolds numbers. The Dukhin number represents the ratio between the surface conductivity (mainly occurring in the Stern layer) and the pore water conductivity. The Reynolds number represents the ratio between inertial and viscous forces in the Navier-Stokes equation. We show here that the HS equation can lead to serious errors if it is used to predict the dependence of zeta potential on flow in the inertial laminar flow regime without taking into account these corrections. For indifferent 1:1 electrolytes (such as sodium chloride), we derived two simple scaling laws for the dependence of the streaming potential coupling coefficient (or the apparent zeta potential) on the Dukhin and Reynolds numbers. Our model is compared with a new set of experimental data obtained on glass bead packs saturated with NaCl solutions at different salinities and pH. We find fairly good agreement between the model and these experimental data.
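    For reference, the HS equation itself can be inverted for the apparent zeta potential before any Dukhin or Reynolds correction is applied. The fluid properties and coupling coefficient below are illustrative values for a NaCl solution, not the paper's glass-bead measurements, and the paper's derived scaling laws are not reproduced.

```python
EPS0 = 8.854e-12    # vacuum permittivity, F/m
EPS_R = 80.0        # relative permittivity of water (assumed)
ETA = 1.0e-3        # dynamic viscosity of water, Pa*s
SIGMA_W = 0.05      # pore-water conductivity, S/m (illustrative)

def apparent_zeta(coupling_coeff):
    """Invert the HS equation C = eps*zeta/(eta*sigma_w); C in V/Pa, zeta in V."""
    return coupling_coeff * ETA * SIGMA_W / (EPS0 * EPS_R)

C = -1.0e-6                  # streaming coupling coefficient, V/Pa (illustrative)
zeta = apparent_zeta(C)      # about -0.07 V, i.e. roughly -70 mV
```

    The paper's point is that this inversion is only "apparent": at high Dukhin number (strong surface conduction) or high Reynolds number (inertial laminar flow) the raw HS value must be corrected before being interpreted as a true zeta potential.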

  6. Simulation of Complex Cracking in Plain Weave C/SiC Composite under Biaxial Loading

    NASA Technical Reports Server (NTRS)

    Cheng, Ron-Bin; Hsu, Su-Yuen

    2012-01-01

    Finite element analysis is performed on a mesh, based on computed geometry of a plain weave C/SiC composite with assumed internal stacking, to reveal the pattern of internal damage due to biaxial normal cyclic loading. The simulation encompasses intertow matrix cracking, matrix cracking inside the tows, and separation at the tow-intertow matrix and tow-tow interfaces. All these dissipative behaviors are represented by traction-separation cohesive laws. Because it is not aimed at quantitatively predicting the overall stress-strain relation, the simulation does not take the actual process of fiber debonding into account. The fiber tows are represented by a simple rule-of-mixtures model in which the reinforcing phase is a hypothetical one-dimensional material. Numerical results indicate that for the plain weave C/SiC composite, 1) matrix-crack initiation sites are primarily determined by large intertow matrix voids and interlayer tow-tow contacts, 2) the pattern of internal damage strongly depends on the loading path and initial stress, and 3) compressive loading inflicts virtually no damage evolution. KEY WORDS: ceramic matrix composite, plain weave, cohesive model, brittle failure, smeared crack model, progressive damage, meso-mechanical analysis, finite element.

  7. Chemical pollution assessment and prioritisation model for the Upper and Middle Vaal water management areas of South Africa.

    PubMed

    Dzwairo, B; Otieno, F A O

    2014-12-01

    A chemical pollution assessment and prioritisation model was developed for the Upper and Middle Vaal water management areas of South Africa in order to provide a simple and practical Pollution Index to assist with mitigation and rehabilitation activities. Historical data for 2003 to 2008 from 21 river sites were cubic-interpolated to daily values. Nine parameters were considered for this purpose, that is, ammonium, chloride, electrical conductivity, dissolved oxygen, pH, fluoride, nitrate, phosphate and sulphate. Parameter selection was based on sub-catchment pollution characteristics and availability of a consistent data range, against a harmonised guideline which provided five classes. Classes 1, 2, 3 and 4 used ideal catchment background values for Vaal Dam, Vaal Barrage, Blesbokspruit/Suikerbosrant and Klip Rivers, respectively. Class 5 represented values which fell above those for Klip River. The Pollution Index, as provided by the model, identified pollution prioritisation monitoring points on Rietspruit-W:K2, Natalspruit:K12, Blesbokspruit:B1, Rietspruit-L:R1/R2, Taaibosspruit:T1 and Leeuspruit:L1. Pre-classification indicated that pollution sources were domestic, industrial and mine effluent. It was concluded that rehabilitation and mitigation measures should prioritise points with high classes. Ability of the model to perform simple scenario building and analysis was considered to be an effective tool for acid mine drainage pollution assessment.

  8. Model Calibration in Watershed Hydrology

    NASA Technical Reports Server (NTRS)

    Yilmaz, Koray K.; Vrugt, Jasper A.; Gupta, Hoshin V.; Sorooshian, Soroosh

    2009-01-01

    Hydrologic models use relatively simple mathematical equations to conceptualize and aggregate the complex, spatially distributed, and highly interrelated water, energy, and vegetation processes in a watershed. A consequence of process aggregation is that the model parameters often do not represent directly measurable entities and must, therefore, be estimated using measurements of the system inputs and outputs. During this process, known as model calibration, the parameters are adjusted so that the behavior of the model approximates, as closely and consistently as possible, the observed response of the hydrologic system over some historical period of time. This Chapter reviews the current state-of-the-art of model calibration in watershed hydrology with special emphasis on our own contributions in the last few decades. We discuss the historical background that has led to current perspectives, and review different approaches for manual and automatic single- and multi-objective parameter estimation. In particular, we highlight the recent developments in the calibration of distributed hydrologic models using parameter dimensionality reduction sampling, parameter regularization and parallel computing.
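    The calibration loop described above can be sketched with a hypothetical one-parameter linear-reservoir model and synthetic observations; a real study would use the multi-objective and sampling-based methods the chapter reviews rather than this brute-force search.

```python
import numpy as np

def simulate(k, rain):
    """Linear reservoir: S[t+1] = S[t] + P[t] - k*S[t], discharge Q = k*S."""
    S, Q = 0.0, np.empty_like(rain)
    for t, P in enumerate(rain):
        S += P - k * S
        Q[t] = k * S
    return Q

rng = np.random.default_rng(1)
rain = rng.exponential(2.0, size=100)                          # synthetic forcing
q_obs = simulate(0.3, rain) + 0.05 * rng.standard_normal(100)  # "observed" response

# Automatic single-objective calibration: adjust k to minimize squared error
# between simulated and observed discharge over the historical period.
grid = np.linspace(0.05, 0.95, 181)
errors = [np.sum((simulate(k, rain) - q_obs) ** 2) for k in grid]
k_best = grid[int(np.argmin(errors))]
```

    Here k stands in for the non-measurable aggregated parameters the text describes; the recovered value approximates the "true" storage coefficient used to generate the observations.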

  9. Fundamental studies of structure borne noise for advanced turboprop applications

    NASA Technical Reports Server (NTRS)

    Eversman, W.; Koval, L. R.

    1985-01-01

    The transmission of sound generated by wing-mounted, advanced turboprop engines into the cabin interior via structural paths is considered. The structural model employed is a beam representation of the wing box carried into the fuselage via a representative frame type of carry through structure. The structure for the cabin cavity is a stiffened shell of rectangular or cylindrical geometry. The structure is modelled using a finite element formulation and the acoustic cavity is modelled using an analytical representation appropriate for the geometry. The structural and acoustic models are coupled by the use of hard wall cavity modes for the interior and vacuum structural modes for the shell. The coupling is accomplished using a combination of analytical and finite element models. The advantage is the substantial reduction in dimensionality achieved by modelling the interior analytically. The mathematical model for the interior noise problem is demonstrated with a simple plate/cavity system which has all of the features of the fuselage interior noise problem.

  10. Cognitive Hypnotherapy for Accessing and Healing Emotional Injuries for Anxiety Disorders.

    PubMed

    Alladin, Assen

    2016-07-01

    Although anxiety disorders on the surface may appear simple, they often represent complex problems that are compounded by underlying factors. For these reasons, treatment of anxiety disorders should be individualized. This article describes cognitive hypnotherapy, an individual comprehensive treatment protocol that integrates cognitive, behavioral, mindfulness, psychodynamic, and hypnotic strategies in the management of anxiety disorders. The treatment approach is based on the self-wounds model of anxiety disorders, which provides the rationale for integrating diverse strategies in the psychotherapy for anxiety disorders. Due to its evidence-based and integrated nature, the psychotherapy described here provides accuracy, efficacy, and sophistication in the formulation and treatment of anxiety disorders. This model can be easily adapted to the understanding and treatment of other emotional disorders.

  11. Structure of amplitude correlations in open chaotic systems

    NASA Astrophysics Data System (ADS)

    Ericson, Torleif E. O.

    2013-02-01

    The Verbaarschot-Weidenmüller-Zirnbauer (VWZ) model is believed to correctly represent the correlations of two S-matrix elements for an open quantum chaotic system, but the solution has considerable complexity and is presently only accessed numerically. Here a procedure is developed to deduce its features over the full range of the parameter space in a transparent and simple analytical form preserving accuracy to a considerable degree. The bulk of the VWZ correlations are described by the Gorin-Seligman expression for the two-amplitude correlations of the Ericson-Gorin-Seligman model. The structure of the remaining correction factors for correlation functions is discussed with special emphasis on the role of the level correlation hole for both inelastic and elastic correlations.

  12. Soil Water Characteristics of Cores from Low- and High-Centered Polygons, Barrow, Alaska, 2012

    DOE Data Explorer

    Graham, David; Moon, Ji-Won

    2016-08-22

    This dataset includes soil water characteristic curves for soil and permafrost in two representative frozen cores collected from a high-center polygon (HCP) and a low-center polygon (LCP) from the Barrow Environmental Observatory. Data include soil water content and soil water potential measured using the simple evaporation method for hydrological and biogeochemical simulations and experimental data analysis. Data can be used to generate a soil moisture characteristic curve, which can be fit to a variety of hydrological functions to infer critical parameters for soil physics. Considering the measured soil water properties, the van Genuchten model fitted the HCP core well; in contrast, the Kosugi model better fitted the LCP core, which was in a more saturated condition.
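    For reference, the van Genuchten retention function mentioned above is θ(h) = θr + (θs − θr)[1 + (αh)^n]^−(1−1/n). The sketch below evaluates it for a generic organic-soil-like parameter set; these values are illustrative assumptions, not the fitted Barrow core parameters.

```python
import numpy as np

def van_genuchten(h, theta_r, theta_s, alpha, n):
    """Volumetric water content at suction head h (h > 0, in units of 1/alpha)."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

h = np.logspace(-1, 4, 50)   # suction from 0.1 to 10^4 cm
theta = van_genuchten(h, theta_r=0.10, theta_s=0.60, alpha=0.02, n=1.4)
```

    Fitting θr, θs, α and n to the measured water-content/potential pairs yields the characteristic curve this dataset is designed to constrain; the Kosugi model plays the same role with a lognormal pore-size assumption.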

  13. Spectral likelihood expansions for Bayesian inference

    NASA Astrophysics Data System (ADS)

    Nagel, Joseph B.; Sudret, Bruno

    2016-03-01

    A spectral approach to Bayesian inference is presented. It pursues the emulation of the posterior probability density. The starting point is a series expansion of the likelihood function in terms of orthogonal polynomials. From this spectral likelihood expansion all statistical quantities of interest can be calculated semi-analytically. The posterior is formally represented as the product of a reference density and a linear combination of polynomial basis functions. Both the model evidence and the posterior moments are related to the expansion coefficients. This formulation avoids Markov chain Monte Carlo simulation and allows one to make use of linear least squares instead. The pros and cons of spectral Bayesian inference are discussed and demonstrated on the basis of simple applications from classical statistics and inverse modeling.
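    The mechanics can be sketched on a one-dimensional toy problem: a uniform prior on [−1, 1], a Gaussian likelihood expanded in Legendre polynomials by linear least squares, with the evidence and posterior mean read directly off the expansion coefficients. The toy likelihood and expansion degree are assumptions for illustration.

```python
import numpy as np
from numpy.polynomial import legendre

def likelihood(x, mu=0.5, sigma=0.2):
    """Toy Gaussian likelihood of a single observation at mu."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Spectral likelihood expansion by linear least squares (no MCMC involved).
x = np.linspace(-1.0, 1.0, 2000)
coeffs = legendre.legfit(x, likelihood(x), deg=20)

# With uniform prior pi(x) = 1/2 and Legendre orthogonality
# (integral of P_k over [-1,1] is 2 for k=0, else 0):
evidence = coeffs[0]                       # Z = integral(L * pi) = c0
post_mean = coeffs[1] / (3.0 * coeffs[0])  # E[x | data] = c1 / (3 c0)
```

    This mirrors the abstract's claim: once the likelihood is expanded in an orthogonal basis, the model evidence and posterior moments become algebraic functions of the coefficients.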

  14. Beneath Our Feet: Strategies for Locomotion in Granular Media

    NASA Astrophysics Data System (ADS)

    Hosoi, A. E.; Goldman, Daniel I.

    2015-01-01

    “If you find yourself in a hole, stop digging.” Although Denis Healey's famous adage (Metcalfe 2007) may offer sound advice for politicians, it is less relevant to worms, clams, and other higher organisms that rely on their digging ability for survival. In this article, we review recent work on the development of simple models that elucidate the fundamental principles underlying digging and burrowing strategies employed by biological systems. Four digging regimes are identified based on dimensionless digger size and the dimensionless inertial number. We select biological organisms to represent three of the four regimes: razor clams, sandfish, and nematodes. Models for all three diggers are derived and discussed, and analogies are drawn to low-Reynolds number swimmers.

  15. Analyzing the Discovery Potential for Light Dark Matter.

    PubMed

    Izaguirre, Eder; Krnjaic, Gordan; Schuster, Philip; Toro, Natalia

    2015-12-18

    In this Letter, we determine the present status of sub-GeV thermal dark matter annihilating through standard model mixing, with special emphasis on interactions through the vector portal. Within representative simple models, we carry out a complete and precise calculation of the dark matter abundance and of all available constraints. We also introduce a concise framework for comparing different experimental approaches, and use this comparison to identify important ranges of dark matter mass and couplings to better explore in future experiments. The requirement that dark matter be a thermal relic sets a sharp sensitivity target for terrestrial experiments, and so we highlight complementary experimental approaches that can decisively reach this milestone sensitivity over the entire sub-GeV mass range.

  16. The Effects of Intrinsic Noise on an Inhomogeneous Lattice of Chemical Oscillators

    NASA Astrophysics Data System (ADS)

    Giver, Michael; Jabeen, Zahera; Chakraborty, Bulbul

    2012-02-01

    Intrinsic or demographic noise has been shown to play an important role in the dynamics of a variety of systems including biochemical reactions within cells, predator-prey populations, and oscillatory chemical reaction systems, and is known to give rise to oscillations and pattern formation well outside the parameter range predicted by standard mean-field analysis. Motivated by an experimental model of cells and tissues where the cells are represented by chemical reagents isolated in emulsion droplets, we study the stochastic Brusselator, a simple activator-inhibitor chemical reaction model. Our work extends the results of recent studies on the zero- and one-dimensional systems to the case of a non-uniform one-dimensional lattice using a combination of analytical techniques and Monte Carlo simulations.
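
    The deterministic Brusselator has rate equations dx/dt = a − (b+1)x + x²y and dy/dt = bx − x²y. A minimal stochastic (Gillespie) version for a single well-mixed droplet — a sketch of the general technique, not the authors' lattice simulation — might look like:

```python
import random

def brusselator_ssa(a=2.0, b=5.0, volume=100.0, t_end=10.0, seed=1):
    """Gillespie simulation of the well-mixed (zero-dimensional) Brusselator.

    X, Y are molecule counts; volume sets the strength of intrinsic noise.
    Reactions and propensities:
      0 -> X         a * volume
      X -> Y         b * X
      2X + Y -> 3X   X * (X - 1) * Y / volume**2
      X -> 0         X
    """
    rng = random.Random(seed)
    x = int(a * volume)              # start near the deterministic fixed point
    y = int(b / a * volume)
    t = 0.0
    while t < t_end:
        props = [a * volume,
                 b * x,
                 x * (x - 1) * y / volume**2,
                 x]
        total = sum(props)
        if total == 0:
            break
        t += rng.expovariate(total)  # time to next reaction
        r = rng.uniform(0, total)    # pick which reaction fires
        if r < props[0]:
            x += 1
        elif r < props[0] + props[1]:
            x -= 1; y += 1
        elif r < props[0] + props[1] + props[2]:
            x += 1; y -= 1
        else:
            x -= 1
    return x, y

x, y = brusselator_ssa(t_end=2.0)
```

    Shrinking `volume` amplifies the demographic noise, which is how such simulations exhibit oscillations outside the mean-field (Hopf) parameter range.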

  17. Analysis of screeching in a cold flow jet experiment

    NASA Technical Reports Server (NTRS)

    Wang, M. E.; Slone, R. M., Jr.; Robertson, J. E.; Keefe, L.

    1975-01-01

    The screech phenomenon observed in a one-sixtieth scale model space shuttle test of the solid rocket booster exhaust flow noise has been investigated. A critical review is given of the cold flow test data representative of Space Shuttle launch configurations to define those parameters which contribute to screech generation. An acoustic feedback mechanism is found to be responsible for the generation of screech. A simple equation that predicts screech frequency in terms of basic test parameters, such as the jet exhaust Mach number and the separation distance from the nozzle exit to the surface of the model launch pad, is presented and found to be in good agreement with the test data. Finally, techniques are recommended to eliminate or reduce the screech.

  18. Development of a solar-powered residential air conditioner: Economic analysis

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The results of investigations aimed at the development of cost models to be used in the economic assessment of Rankine-powered air conditioning systems for residential application are summarized. The rationale used in the development of the cost model was to: (1) collect cost data on complete systems and on the major equipment used in these systems; (2) reduce these data and establish relationships between cost and other engineering parameters such as weight, size, power level, etc.; and (3) derive simple correlations from which cost-to-the-user can be calculated from performance requirements. The equipment considered in the survey included heat exchangers, fans, motors, and turbocompressors. This kind of hardware represents more than 2/3 of the total cost of conventional air conditioners.
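
    Correlations of the kind described in step (3) are commonly power laws, cost ≈ a·weight^b, fitted on log-log axes. The sketch below illustrates the fitting procedure with purely hypothetical heat-exchanger data points (the report's actual data and correlations are not reproduced here):

```python
import math

# Hypothetical (weight kg, cost $) pairs for heat exchangers -- illustrative only.
data = [(10.0, 120.0), (25.0, 240.0), (60.0, 480.0), (150.0, 1000.0)]

# Ordinary least squares in log-log space: log(cost) = log(a) + b*log(weight).
logw = [math.log(w) for w, c in data]
logc = [math.log(c) for w, c in data]
n = len(data)
mw, mc = sum(logw) / n, sum(logc) / n
b = (sum((lw - mw) * (lc - mc) for lw, lc in zip(logw, logc))
     / sum((lw - mw) ** 2 for lw in logw))
a = math.exp(mc - b * mw)

def cost_estimate(weight):
    """Cost-to-user correlation: cost ~ a * weight**b."""
    return a * weight ** b
```

    An exponent b < 1 (as in this toy fit) reflects the usual economy of scale: cost per kilogram falls as equipment size grows.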

  19. Molecular graph convolutions: moving beyond fingerprints

    PubMed Central

    Kearnes, Steven; McCloskey, Kevin; Berndl, Marc; Pande, Vijay; Riley, Patrick

    2016-01-01

    Molecular “fingerprints” encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular graph convolutions, a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph—atoms, bonds, distances, etc.—which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement. PMID:27558503
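
    A minimal sketch of the underlying idea — one message-passing layer on a molecular graph, in the simple adjacency-normalized form rather than the specific architecture of the paper — can be written in a few lines:

```python
import numpy as np

def graph_conv_layer(adjacency, features, weights):
    """One message-passing step: each atom averages its neighbours'
    feature vectors (plus its own) and applies a linear map + ReLU.
    """
    a_hat = adjacency + np.eye(adjacency.shape[0])   # add self-loops
    deg = a_hat.sum(axis=1, keepdims=True)
    h = (a_hat / deg) @ features @ weights           # mean-aggregate, project
    return np.maximum(h, 0.0)                        # ReLU nonlinearity

# Toy 3-atom chain (e.g. C-C-O) with 2 input features per atom, 4 outputs.
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
feats = np.array([[1.0, 0.0],
                  [1.0, 0.0],
                  [0.0, 1.0]])
rng = np.random.default_rng(0)
out = graph_conv_layer(adj, feats, rng.standard_normal((2, 4)))
```

    Stacking several such layers lets information propagate along bonds, which is how the model exploits graph structure that a fixed fingerprint would discard.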

  20. Finite Element Analysis of Magnetic Damping Effects on G-Jitter Induced Fluid Flow

    NASA Technical Reports Server (NTRS)

    Pan, Bo; Li, Ben Q.; deGroh, Henry C., III

    1997-01-01

    This paper reports some interim results on numerical modeling and analyses of magnetic damping of g-jitter driven fluid flow in microgravity. A finite element model is developed to represent the fluid flow, thermal and solute transport phenomena in a 2-D cavity under g-jitter conditions with and without an applied magnetic field. The numerical model is checked by comparing with analytical solutions obtained for a simple parallel plate channel flow driven by g-jitter in a transverse magnetic field. The model is then applied to study g-jitter induced oscillating flow and its effect on the solute redistribution in the liquid, which bears direct relevance to Bridgman-Stockbarger single crystal growth processes. A selection of computed results is presented, and the results indicate that an applied magnetic field can effectively damp the g-jitter induced velocity and help to reduce the time variation of solute redistribution.
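
    The analytical benchmark mentioned — pressure-driven flow between parallel plates under a transverse magnetic field — is classical Hartmann flow. A minimal sketch of the normalized steady velocity profile, assuming the standard textbook form rather than the authors' exact nondimensionalization, is:

```python
import math

def hartmann_profile(y, ha):
    """Normalized velocity u(y)/u_centerline for flow between plates at y = ±1
    under a transverse magnetic field; ha is the Hartmann number.

    ha -> 0 recovers the parabolic Poiseuille profile; large ha flattens the
    core and confines shear to thin Hartmann layers at the walls, which is
    the damping mechanism exploited in the paper.
    """
    if ha == 0.0:
        return 1.0 - y * y
    return (math.cosh(ha) - math.cosh(ha * y)) / (math.cosh(ha) - 1.0)

# Profile flattens with increasing field strength:
profiles = {ha: hartmann_profile(0.5, ha) for ha in (0.0, 1.0, 10.0)}
```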
