Sample records for modeling complex physical systems

  1. Investigation of model-based physical design restrictions (Invited Paper)

    NASA Astrophysics Data System (ADS)

    Lucas, Kevin; Baron, Stanislas; Belledent, Jerome; Boone, Robert; Borjon, Amandine; Couderc, Christophe; Patterson, Kyle; Riviere-Cazaux, Lionel; Rody, Yves; Sundermann, Frank; Toublan, Olivier; Trouiller, Yorick; Urbani, Jean-Christophe; Wimmer, Karl

    2005-05-01

As lithography and other patterning processes become more complex and more non-linear with each generation, the task of defining physical design rules necessarily increases in complexity as well. The goal of the physical design rules is to separate the physical layout structures which will yield well from those which will not. This is essentially a rule-based pre-silicon guarantee of layout correctness. However, the rapid increase in design rule complexity has created logistical problems for both the design and process functions. Therefore, similar to the semiconductor industry's transition from rule-based to model-based optical proximity correction (OPC) driven by increased patterning complexity, there are clear opportunities to improve physical design restrictions by implementing model-based physical design methods. In this paper we analyze the possible need and applications for model-based physical design restrictions (MBPDR). We first analyze the traditional design rule evolution, development and usage methodologies of semiconductor manufacturers. Next we discuss examples of specific design rule challenges requiring new solution methods in the patterning regime of low-K1 lithography and highly complex RET. We then evaluate possible working strategies for MBPDR in the process development and product design flows, including examples of recent model-based pre-silicon verification techniques. Finally, we summarize with a proposed flow and key considerations for MBPDR implementation.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campbell, Philip LaRoche

At the end of his life, Stephen Jay Kline, longtime professor of mechanical engineering at Stanford University, completed a book on how to address complex systems. The title of the book is 'Conceptual Foundations of Multi-Disciplinary Thinking' (1995), but the topic of the book is systems. Kline first establishes certain limits that are characteristic of our conscious minds. Kline then establishes a complexity measure for systems and uses that complexity measure to develop a hierarchy of systems. Kline then argues that our minds, due to their characteristic limitations, are unable to model the complex systems in that hierarchy. Computers are of no help to us here. Our attempts at modeling these complex systems are based on the way we successfully model some simple systems, in particular, 'inert, naturally-occurring' objects and processes, such as what is the focus of physics. But complex systems overwhelm such attempts. As a result, the best we can do in working with these complex systems is to use a heuristic, what Kline calls the 'Guideline for Complex Systems.' Kline documents the problems that have developed due to 'oversimple' system models and from the inappropriate application of a system model from one domain to another. One prominent such problem is the Procrustean attempt to make the disciplines that deal with complex systems be 'physics-like.' Physics deals with simple systems, not complex ones, using Kline's complexity measure. The models that physics has developed are inappropriate for complex systems. Kline documents a number of the wasteful and dangerous fallacies of this type.

  3. Balancing model complexity and measurements in hydrology

    NASA Astrophysics Data System (ADS)

    Van De Giesen, N.; Schoups, G.; Weijs, S. V.

    2012-12-01

The Data Processing Inequality implies that hydrological modeling can only reduce, and never increase, the amount of information available in the original data used to formulate and calibrate hydrological models: I(X;Z(Y)) ≤ I(X;Y). Still, hydrologists around the world seem quite content building models for "their" watersheds to move our discipline forward. Hydrological models tend to have a hybrid character with respect to underlying physics. Most models make use of some well-established physical principles, such as mass and energy balances. One could argue that such principles are based on many observations, and therefore add data. These physical principles, however, are applied to hydrological models that often contain concepts with no direct counterpart in the observable physical universe, such as "buckets" or "reservoirs" that fill up and empty out over time. These not-so-physical concepts are more like the Artificial Neural Networks and Support Vector Machines of the Artificial Intelligence (AI) community. Within AI, it was quickly realized that by increasing model complexity one can fit basically any dataset, but that complexity should be controlled in order to predict unseen events. The more data are available to train or calibrate the model, the more complex it can be. Many complexity-control approaches exist in AI, with Solomonoff inductive inference being one of the first formal approaches, the Akaike Information Criterion the most popular, and Statistical Learning Theory arguably the most comprehensive practical approach. In hydrology, complexity control has hardly been used so far. There are a number of reasons for this lack of interest, the more valid of which will be presented in this talk. For starters, there are no readily available complexity measures for our models.
Second, some unrealistic simplifications of the underlying complex physics tend to have a smoothing effect on possible model outcomes, thereby preventing the most obvious results of over-fitting. Third, dependence within and between time series poses an additional analytical problem. Finally, there are arguments to be made that the often-discussed "equifinality" in hydrological models is simply a different manifestation of the lack of complexity control. In turn, this points toward a general idea, quite popular in sciences other than hydrology, that additional data gathering is a good way to increase the information content of our descriptions of hydrological reality.
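The complexity-control idea invoked in this abstract (the Akaike Information Criterion) can be sketched in a few lines: fit models of increasing complexity, penalize each fit by its parameter count, and keep the model with the lowest criterion. The sketch below is a generic illustration on synthetic data with polynomial models, not a hydrological model; the data-generating function and noise level are arbitrary choices.

```python
import numpy as np

def aic(rss, n, k):
    # Akaike Information Criterion for a least-squares fit:
    # AIC = n * ln(RSS / n) + 2k, where k is the number of fitted parameters.
    return n * np.log(rss / n) + 2 * k

rng = np.random.default_rng(42)
x = np.linspace(0.0, 1.0, 50)
y = 1.5 * x - 0.8 * x**2 + rng.normal(0.0, 0.05, x.size)  # quadratic "truth" plus noise

rss_by_degree = {}
aic_by_degree = {}
for degree in range(1, 10):
    coeffs = np.polyfit(x, y, degree)
    residuals = np.polyval(coeffs, x) - y
    rss_by_degree[degree] = float(residuals @ residuals)
    aic_by_degree[degree] = aic(rss_by_degree[degree], x.size, degree + 1)

best = min(aic_by_degree, key=aic_by_degree.get)
# RSS keeps shrinking as the degree grows (over-fitting), but the
# AIC penalty selects a low-order model close to the generating process.
print(best)
```

The same pattern applies whenever candidate models of different complexity are calibrated to the same record: the penalty term is what prevents the richest model from winning by default.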

  4. The Mathematics of High School Physics

    NASA Astrophysics Data System (ADS)

    Kanderakis, Nikos

    2016-10-01

In the seventeenth and eighteenth centuries, mathematicians and physical philosophers managed to study, via mathematics, various physical systems of the sublunar world through idealized and simplified models of these systems, constructed with the help of geometry. By analyzing these models, they were able to formulate new concepts, laws and theories of physics and then, through models again, to apply these concepts and theories to new physical phenomena and check the results by means of experiment. Students' difficulties with the mathematics of high school physics are well known. Science education research attributes them to an insufficiently deep understanding of mathematics, and mainly to an inadequate understanding of the meaning of symbolic mathematical expressions. There seem to be, however, more causes of these difficulties. One of them, not independent of the previous ones, is the complex meaning of the algebraic concepts used in school physics (e.g. variables, parameters, functions), as well as the complexities added by physics itself (e.g. that equations' symbols represent magnitudes with empirical meaning and units instead of pure numbers). Another source of difficulties is that the theories and laws of physics are often applied, via mathematics, to simplified and idealized physical models of the world and not to the world itself. This concerns not only the applications of basic theories but also all authentic end-of-chapter problems. Hence, students have to understand and participate in a complex interplay between physics concepts and theories, physical and mathematical models, and the real world, often without being aware that they are working with models and not directly with the real world.

  5. "Let's get physical": advantages of a physical model over 3D computer models and textbooks in learning imaging anatomy.

    PubMed

    Preece, Daniel; Williams, Sarah B; Lam, Richard; Weller, Renate

    2013-01-01

    Three-dimensional (3D) information plays an important part in medical and veterinary education. Appreciating complex 3D spatial relationships requires a strong foundational understanding of anatomy and mental 3D visualization skills. Novel learning resources have been introduced to anatomy training to achieve this. Objective evaluation of their comparative efficacies remains scarce in the literature. This study developed and evaluated the use of a physical model in demonstrating the complex spatial relationships of the equine foot. It was hypothesized that the newly developed physical model would be more effective for students to learn magnetic resonance imaging (MRI) anatomy of the foot than textbooks or computer-based 3D models. Third year veterinary medicine students were randomly assigned to one of three teaching aid groups (physical model; textbooks; 3D computer model). The comparative efficacies of the three teaching aids were assessed through students' abilities to identify anatomical structures on MR images. Overall mean MRI assessment scores were significantly higher in students utilizing the physical model (86.39%) compared with students using textbooks (62.61%) and the 3D computer model (63.68%) (P < 0.001), with no significant difference between the textbook and 3D computer model groups (P = 0.685). Student feedback was also more positive in the physical model group compared with both the textbook and 3D computer model groups. Our results suggest that physical models may hold a significant advantage over alternative learning resources in enhancing visuospatial and 3D understanding of complex anatomical architecture, and that 3D computer models have significant limitations with regards to 3D learning. © 2013 American Association of Anatomists.

  6. Towards physical principles of biological evolution

    NASA Astrophysics Data System (ADS)

    Katsnelson, Mikhail I.; Wolf, Yuri I.; Koonin, Eugene V.

    2018-03-01

Biological systems reach organizational complexity that far exceeds the complexity of any known inanimate objects. Biological entities undoubtedly obey the laws of quantum physics and statistical mechanics. However, is modern physics sufficient to adequately describe, model and explain the evolution of biological complexity? Detailed parallels have been drawn between statistical thermodynamics and the population-genetic theory of biological evolution. Based on these parallels, we outline new perspectives on biological innovation and major transitions in evolution, and introduce a biological equivalent of thermodynamic potential that reflects the innovation propensity of an evolving population. Deep analogies have been suggested to also exist between the properties of biological entities and processes, and those of frustrated states in physics, such as glasses. Such systems are characterized by frustration, whereby local states with minimal free energy conflict with the global minimum, resulting in ‘emergent phenomena’. We extend such analogies by examining frustration-type phenomena, such as conflicts between different levels of selection, in biological evolution. These frustration effects appear to drive the evolution of biological complexity. We further address evolution in multidimensional fitness landscapes from the point of view of percolation theory and suggest that percolation at a level above the critical threshold dictates the tree-like evolution of complex organisms. Taken together, these multiple connections between fundamental processes in physics and biology imply that construction of a meaningful physical theory of biological evolution might not be a futile effort. However, it is unrealistic to expect such a theory to be created in one stroke; if it ever comes into being, this can only happen through integration of multiple physical models of evolutionary processes.
Furthermore, the existing framework of theoretical physics is unlikely to suffice for adequate modeling of the biological level of complexity, and new developments within physics itself are likely to be required.

  7. Simple universal models capture all classical spin physics.

    PubMed

    De las Cuevas, Gemma; Cubitt, Toby S

    2016-03-11

    Spin models are used in many studies of complex systems because they exhibit rich macroscopic behavior despite their microscopic simplicity. Here, we prove that all the physics of every classical spin model is reproduced in the low-energy sector of certain "universal models," with at most polynomial overhead. This holds for classical models with discrete or continuous degrees of freedom. We prove necessary and sufficient conditions for a spin model to be universal and show that one of the simplest and most widely studied spin models, the two-dimensional Ising model with fields, is universal. Our results may facilitate physical simulations of Hamiltonians with complex interactions. Copyright © 2016, American Association for the Advancement of Science.
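As a concrete anchor for the terminology in this abstract, the energy of a classical 2D Ising configuration with local fields can be evaluated directly. This is a minimal sketch on a small open (non-periodic) grid, chosen here for simplicity; it illustrates the model family only and makes no attempt at the universality construction the paper proves.

```python
import numpy as np

def ising_energy(spins, J=1.0, h=None):
    # Energy of a 2D Ising configuration with optional local fields:
    # E = -J * sum_<ij> s_i s_j - sum_i h_i s_i,
    # summing nearest-neighbour bonds on an open (non-periodic) grid.
    if h is None:
        h = np.zeros_like(spins, dtype=float)
    # horizontal bonds plus vertical bonds
    bonds = np.sum(spins[:, :-1] * spins[:, 1:]) + np.sum(spins[:-1, :] * spins[1:, :])
    return -J * bonds - float(np.sum(h * spins))

up = np.ones((3, 3), dtype=int)  # all spins +1
# A 3x3 open grid has 12 nearest-neighbour bonds, so E = -12 with J=1, h=0.
print(ising_energy(up))  # → -12.0
```

Flipping every spin leaves the bond term unchanged, which is the global symmetry that the local fields h break.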

  8. Modeling the Stress Complexities of Teaching and Learning of School Physics in Nigeria

    ERIC Educational Resources Information Center

    Emetere, Moses E.

    2014-01-01

    This study was designed to investigate the validity of the stress complexity model (SCM) to teaching and learning of school physics in Abuja municipal area council of Abuja, North. About two hundred students were randomly selected by a simple random sampling technique from some schools within the Abuja municipal area council. A survey research…

  9. Numerical Modeling of Pulsed Electrical Discharges for High-Speed Flow Control

    DTIC Science & Technology

    2012-02-01

...two dimensions, and later on more complex problems. Subsequent work compared different physical models for pulsed discharges: one-moment (drift-diffusion with...). The state of a particle can be specified by its position and velocity. In principle, the motion of a large group of particles can be predicted from...

  10. Efficient physics-based tracking of heart surface motion for beating heart surgery robotic systems.

    PubMed

    Bogatyrenko, Evgeniya; Pompey, Pascal; Hanebeck, Uwe D

    2011-05-01

Tracking of beating heart motion in a robotic surgery system is required for complex cardiovascular interventions. A heart surface motion tracking method is developed, including a stochastic physics-based heart surface model and an efficient reconstruction algorithm. The algorithm uses the constraints provided by the model, which exploits the physical characteristics of the heart. The main advantage of the model is that it is more realistic than most standard heart models. Additionally, no explicit matching between the measurements and the model is required. The application of meshless methods significantly reduces the complexity of physics-based tracking. Based on the stochastic physical model of the heart surface, this approach considers the motion of the intervention area and is robust to occlusions and reflections. The tracking algorithm is evaluated in simulations and experiments on an artificial heart. Providing higher accuracy than standard model-based methods, it successfully copes with occlusions and provides high performance even when not all measurements are available. Combining the physical and stochastic description of the heart surface motion ensures physically correct and accurate prediction. Automatic initialization of the physics-based cardiac motion tracking enables system evaluation in a clinical environment.

  11. On the use of Empirical Data to Downscale Non-scientific Scepticism About Results From Complex Physical Based Models

    NASA Astrophysics Data System (ADS)

    Germer, S.; Bens, O.; Hüttl, R. F.

    2008-12-01

The scepticism of non-scientific local stakeholders about results from complex physically based models is a major problem for the development and implementation of local climate change adaptation measures. This scepticism originates from the high complexity of such models. Local stakeholders perceive complex models as black-box models, as it is impossible to grasp all underlying assumptions and mathematically formulated processes at a glance. The use of physically based models is, however, indispensable to study complex underlying processes and to predict future environmental changes. The increase of climate change adaptation efforts following the release of the latest IPCC report indicates that communicating facts about what has already changed is an appropriate tool to trigger climate change adaptation. We therefore suggest increasing the practice of empirical data analysis in addition to modelling efforts. The analysis of time series can generate results that are easier for non-scientific stakeholders to comprehend. Temporal trends and seasonal patterns of selected hydrological parameters (precipitation, evapotranspiration, groundwater levels and river discharge) can be identified, and the dependence of trends and seasonal patterns on land use, topography and soil type can be highlighted. A discussion of lag times between the hydrological parameters can increase the awareness of local stakeholders for delayed environmental responses.

  12. Surfing on Protein Waves: Proteophoresis as a Mechanism for Bacterial Genome Partitioning

    NASA Astrophysics Data System (ADS)

    Walter, J.-C.; Dorignac, J.; Lorman, V.; Rech, J.; Bouet, J.-Y.; Nollmann, M.; Palmeri, J.; Parmeggiani, A.; Geniet, F.

    2017-07-01

Efficient bacterial chromosome segregation typically requires the coordinated action of a three-component machinery, fueled by adenosine triphosphate, called the partition complex. We present a phenomenological model accounting for the dynamic activity of this system that is also relevant for the physics of catalytic particles in active environments. The model is obtained by coupling simple linear reaction-diffusion equations with a proteophoresis, or "volumetric" chemophoresis, force field that arises from protein-protein interactions and provides a physically viable mechanism for complex translocation. This minimal description captures most known experimental observations: dynamic oscillations of complex components, complex separation, and subsequent symmetrical positioning. The predictions of our model are in phenomenological agreement with and provide substantial insight into recent experiments. From a nonlinear physics viewpoint, this system explores the active separation of matter at micrometric scales with a dynamical instability between static positioning and traveling wave regimes triggered by the dynamical spontaneous breaking of rotational symmetry.

  13. An evaluation of 3-D traffic simulation modeling capabilities

    DOT National Transportation Integrated Search

    2007-06-01

    The use of 3D modeling in simulation has become the standard for both the military and private sector. Compared to physical models, 3D models are more affordable, more flexible, and can incorporate complex operations. Unlike a physical model, a dynam...

  14. A novel medical image data-based multi-physics simulation platform for computational life sciences.

    PubMed

    Neufeld, Esra; Szczerba, Dominik; Chavannes, Nicolas; Kuster, Niels

    2013-04-06

    Simulating and modelling complex biological systems in computational life sciences requires specialized software tools that can perform medical image data-based modelling, jointly visualize the data and computational results, and handle large, complex, realistic and often noisy anatomical models. The required novel solvers must provide the power to model the physics, biology and physiology of living tissue within the full complexity of the human anatomy (e.g. neuronal activity, perfusion and ultrasound propagation). A multi-physics simulation platform satisfying these requirements has been developed for applications including device development and optimization, safety assessment, basic research, and treatment planning. This simulation platform consists of detailed, parametrized anatomical models, a segmentation and meshing tool, a wide range of solvers and optimizers, a framework for the rapid development of specialized and parallelized finite element method solvers, a visualization toolkit-based visualization engine, a Python scripting interface for customized applications, a coupling framework, and more. Core components are cross-platform compatible and use open formats. Several examples of applications are presented: hyperthermia cancer treatment planning, tumour growth modelling, evaluating the magneto-haemodynamic effect as a biomarker and physics-based morphing of anatomical models.

  15. Unraveling dynamics of human physical activity patterns in chronic pain conditions

    NASA Astrophysics Data System (ADS)

    Paraschiv-Ionescu, Anisoara; Buchser, Eric; Aminian, Kamiar

    2013-06-01

Chronic pain is a complex, disabling experience that negatively affects cognitive, affective and physical functions as well as behavior. Although the interaction between chronic pain and physical functioning is a well-accepted paradigm in clinical research, understanding how pain affects individuals' daily-life behavior remains a challenging task. Here we develop a methodological framework that allows us to objectively document disruptive pain-related interferences with real-life physical activity. The results reveal that meaningful information is contained in the temporal dynamics of activity patterns, and that an analytical model based on the theory of bivariate point processes can be used to describe physical activity behavior. The model parameters capture the dynamic interdependence between periods and events and determine a `signature' of the activity pattern. The study is likely to contribute to the clinical understanding of complex pain/disease-related behaviors and to establish a unified mathematical framework for quantifying the complex dynamics of various human activities.

  16. Hunting Solomonoff's Swans: Exploring the Boundary Between Physics and Statistics in Hydrological Modeling

    NASA Astrophysics Data System (ADS)

    Nearing, G. S.

    2014-12-01

Statistical models consistently out-perform conceptual models in the short term; however, to account for a nonstationary future (or an unobserved past), scientists prefer to base predictions on unchanging and commutable properties of the universe - i.e., physics. The problem with physically-based hydrology models is, of course, that they aren't really based on physics - they are based on statistical approximations of physical interactions, and we almost uniformly lack an understanding of the entropy associated with these approximations. Thermodynamics is successful precisely because entropy statistics are computable for homogeneous (well-mixed) systems, and ergodic arguments explain the success of Newton's laws in describing systems that are fundamentally quantum in nature. Unfortunately, similar arguments do not hold for systems like watersheds that are heterogeneous at a wide range of scales. Ray Solomonoff formalized the situation in 1968 by showing that, given infinite evidence, simultaneously minimizing model complexity and entropy in predictions always leads to the best possible model. The open question in hydrology is what happens when we don't have infinite evidence - for example, when the future will not look like the past, or when one watershed does not behave like another. How do we isolate stationary and commutable components of watershed behavior? I propose that one possible answer to this dilemma lies in a formal combination of physics and statistics. In this talk I outline my recent analogue (Solomonoff's theorem was digital) of Solomonoff's idea that allows us to quantify the complexity/entropy tradeoff in a way that is intuitive to physical scientists. I show how to formally combine "physical" and statistical methods for model development in a way that allows us to derive the theoretically best possible model given any physics approximation(s) and available observations.
Finally, I apply an analogue of Solomonoff's theorem to evaluate the tradeoff between model complexity and prediction power.

  17. Physics-based statistical model and simulation method of RF propagation in urban environments

    DOEpatents

    Pao, Hsueh-Yuan; Dvorak, Steven L.

    2010-09-14

A physics-based statistical model and simulation/modeling method and system of electromagnetic wave propagation (wireless communication) in urban environments. In particular, the model is a computationally efficient closed-form parametric model of RF propagation in an urban environment which is extracted from a physics-based statistical wireless channel simulation method and system. The simulation divides the complex urban environment into a network of interconnected urban canyon waveguides which can be analyzed individually; calculates spectral coefficients of modal fields in the waveguides excited by the propagation using a database of statistical impedance boundary conditions which incorporates the complexity of building walls in the propagation model; determines statistical parameters of the calculated modal fields; and determines a parametric propagation model based on the statistical parameters of the calculated modal fields from which predictions of communications capability may be made.

  18. Physical Models that Provide Guidance in Visualization Deconstruction in an Inorganic Context

    ERIC Educational Resources Information Center

    Schiltz, Holly K.; Oliver-Hoyo, Maria T.

    2012-01-01

    Three physical model systems have been developed to help students deconstruct the visualization needed when learning symmetry and group theory. The systems provide students with physical and visual frames of reference to facilitate the complex visualization involved in symmetry concepts. The permanent reflection plane demonstration presents an…

  19. Meta II: Multi-Model Language Suite for Cyber Physical Systems

    DTIC Science & Technology

    2013-03-01

AVM META) projects have developed tools for designing cyber physical (CPS) (or Mechatronic) systems. These systems are increasingly complex, take much... Exemplified by modern amphibious and ground military... and parametric interface of Simulink models and defines associations with CyPhy components and component interfaces. 2. Embedded Systems Modeling

  20. Integrating a Smartphone and Molecular Modeling for Determining the Binding Constant and Stoichiometry Ratio of the Iron(II)-Phenanthroline Complex: An Activity for Analytical and Physical Chemistry Laboratories

    ERIC Educational Resources Information Center

de Morais, Camilo de L. M.; Silva, Sérgio R. B.; Vieira, Davi S.; Lima, Kássio M. G.

    2016-01-01

The binding constant and stoichiometry ratio for the formation of iron(II)-(1,10-phenanthroline), or iron(II)-o-phenanthroline, complexes have been determined by a combination of a low-cost analytical method using a smartphone and a molecular modeling method, as a laboratory experiment designed for analytical and physical chemistry courses. Intensity…

  1. Model-Based Reasoning in Upper-division Lab Courses

    NASA Astrophysics Data System (ADS)

    Lewandowski, Heather

    2015-05-01

Modeling, which includes developing, testing, and refining models, is a central activity in physics. Well-known examples from AMO physics include everything from the Bohr model of the hydrogen atom to the Bose-Hubbard model of interacting bosons in a lattice. Modeling, while typically considered a theoretical activity, is most fully represented in the laboratory where measurements of real phenomena intersect with theoretical models, leading to refinement of models and experimental apparatus. However, experimental physicists use models in complex ways and the process is often not made explicit in physics laboratory courses. We have developed a framework to describe the modeling process in physics laboratory activities. The framework attempts to abstract and simplify the complex modeling process undertaken by expert experimentalists. The framework can be applied to understand typical processes such as the modeling of measurement tools, modeling "black boxes," and signal processing. We demonstrate that the framework captures several important features of model-based reasoning in a way that can reveal common student difficulties in the lab and guide the development of curricula that emphasize modeling in the laboratory. We also use the framework to examine troubleshooting in the lab and guide students to effective methods and strategies.

  2. Modelling and Order of Acoustic Transfer Functions Due to Reflections from Augmented Objects

    NASA Astrophysics Data System (ADS)

    Kuster, Martin; de Vries, Diemer

    2006-12-01

It is commonly accepted that the sound reflections from real physical objects are much more complicated than what usually is, and can be, modelled by room acoustics modelling software. The main reason for this limitation is the level of detail inherent in the physical object in terms of its geometrical and acoustic properties. In the present paper, the complexity of the sound reflections from a corridor wall is investigated by modelling the corresponding acoustic transfer functions at several receiver positions in front of the wall. The complexity for different wall configurations has been examined, with the changes achieved by altering the wall's acoustic image. The results show that for a homogeneous flat wall the complexity is significant, and that for a wall including various smaller objects the complexity is highly dependent on the position of the receiver with respect to the objects.

  3. Let's Have a Coffee with the Standard Model of Particle Physics!

    ERIC Educational Resources Information Center

    Woithe, Julia; Wiener, Gerfried J.; Van der Veken, Frederik F.

    2017-01-01

    The Standard Model of particle physics is one of the most successful theories in physics and describes the fundamental interactions between elementary particles. It is encoded in a compact description, the so-called "Lagrangian," which even fits on t-shirts and coffee mugs. This mathematical formulation, however, is complex and only…

  4. Modelling Systems of Classical/Quantum Identical Particles by Focusing on Algorithms

    ERIC Educational Resources Information Center

    Guastella, Ivan; Fazio, Claudio; Sperandeo-Mineo, Rosa Maria

    2012-01-01

    A procedure modelling ideal classical and quantum gases is discussed. The proposed approach is mainly based on the idea that modelling and algorithm analysis can provide a deeper understanding of particularly complex physical systems. Appropriate representations and physical models able to mimic possible pseudo-mechanisms of functioning and having…

  5. Computational physics of the mind

    NASA Astrophysics Data System (ADS)

    Duch, Włodzisław

    1996-08-01

In the XIX century and earlier, physicists such as Newton, Mayer, Hooke, Helmholtz and Mach were actively engaged in research on psychophysics, trying to relate psychological sensations to the intensities of physical stimuli. Computational physics allows us to simulate complex neural processes, giving a chance to answer not only the original psychophysical questions but also to create models of the mind. In this paper several approaches relevant to modeling of the mind are outlined. Since direct modeling of brain functions is rather limited due to the complexity of such models, a number of approximations are introduced. The path from the brain, or computational neurosciences, to the mind, or cognitive sciences, is sketched, with emphasis on higher cognitive functions such as memory and consciousness. No fundamental problems in understanding the mind seem to arise. From a computational point of view, realistic models require massively parallel architectures.

  6. Spiritual and Affective Responses to a Physical Church and Corresponding Virtual Model.

    PubMed

    Murdoch, Matt; Davies, Jim

    2017-11-01

    Architectural and psychological theories posit that built environments have the potential to elicit complex psychological responses. However, few researchers have seriously explored this potential. Given the increasing importance and fidelity of virtual worlds, such research should explore whether virtual models of built environments are also capable of eliciting complex psychological responses. The goal of this study was to test these hypotheses, using a church, a corresponding virtual model, and an inclusive measure of state spirituality ("spiritual feelings"). Participants (n = 33) explored a physical church and corresponding virtual model, completing a measure of spiritual feelings after exploring the outside and inside of each version of the church. Using spiritual feelings after exploring the outside of the church as a baseline measure, change in state spirituality was assessed by taking the difference between spiritual feelings after exploring the inside and outside of the church (inside-outside) for both models. Although this change was greater in response to the physical church, there was no significant difference between the two models in eliciting such change in spiritual feelings. Despite the limitations of this exploratory study, these findings indicate that both built environments and corresponding virtual models are capable of evoking complex psychological responses.

  7. A Stratified Acoustic Model Accounting for Phase Shifts for Underwater Acoustic Networks

    PubMed Central

    Wang, Ping; Zhang, Lin; Li, Victor O. K.

    2013-01-01

    Accurate acoustic channel models are critical for the study of underwater acoustic networks. Existing models include physics-based models and empirical approximation models. The former enjoy good accuracy, but incur heavy computational load, rendering them impractical in large networks. On the other hand, the latter are computationally inexpensive but inaccurate since they do not account for the complex effects of boundary reflection losses, the multi-path phenomenon and ray bending in the stratified ocean medium. In this paper, we propose a Stratified Acoustic Model (SAM) based on frequency-independent geometrical ray tracing, accounting for each ray's phase shift during the propagation. It is a feasible channel model for large scale underwater acoustic network simulation, allowing us to predict the transmission loss with much lower computational complexity than the traditional physics-based models. The accuracy of the model is validated via comparisons with the experimental measurements in two different oceans. Satisfactory agreements with the measurements and with other computationally intensive classical physics-based models are demonstrated. PMID:23669708

  8. A stratified acoustic model accounting for phase shifts for underwater acoustic networks.

    PubMed

    Wang, Ping; Zhang, Lin; Li, Victor O K

    2013-05-13

    Accurate acoustic channel models are critical for the study of underwater acoustic networks. Existing models include physics-based models and empirical approximation models. The former enjoy good accuracy, but incur heavy computational load, rendering them impractical in large networks. On the other hand, the latter are computationally inexpensive but inaccurate since they do not account for the complex effects of boundary reflection losses, the multi-path phenomenon and ray bending in the stratified ocean medium. In this paper, we propose a Stratified Acoustic Model (SAM) based on frequency-independent geometrical ray tracing, accounting for each ray's phase shift during the propagation. It is a feasible channel model for large scale underwater acoustic network simulation, allowing us to predict the transmission loss with much lower computational complexity than the traditional physics-based models. The accuracy of the model is validated via comparisons with the experimental measurements in two different oceans. Satisfactory agreements with the measurements and with other computationally intensive classical physics-based models are demonstrated.
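    The coherent ray-summation idea behind SAM can be illustrated with a toy calculation. The sketch below is not the authors' model: it simply sums a handful of ray arrivals with their accumulated phases and converts the result to a transmission loss, assuming a constant sound speed and precomputed per-ray amplitudes (both hypothetical simplifications; SAM itself traces frequency-independent geometric rays through a stratified medium).

```python
import numpy as np

def transmission_loss(amplitudes, path_lengths, frequency, c=1500.0):
    """Coherently sum ray arrivals, keeping each ray's phase shift.

    amplitudes   : per-ray pressure amplitudes (spreading/absorption assumed
                   already applied; hypothetical inputs for this sketch)
    path_lengths : per-ray travel distances in metres
    frequency    : acoustic frequency in Hz
    c            : nominal sound speed in m/s (constant here, unlike a
                   stratified ocean)
    """
    k = 2.0 * np.pi * frequency / c              # wavenumber
    phases = k * np.asarray(path_lengths)        # phase accumulated per ray
    field = np.sum(np.asarray(amplitudes) * np.exp(1j * phases))
    return -20.0 * np.log10(np.abs(field) + 1e-12)   # loss in dB

# Two equal rays: in phase they reinforce; a half-wavelength path
# difference (0.75 m at 1 kHz, where the wavelength is 1.5 m) makes
# them cancel, raising the loss sharply.
tl_in_phase = transmission_loss([0.01, 0.01], [1000.0, 1000.0], 1000.0)
tl_opposed  = transmission_loss([0.01, 0.01], [1000.0, 1000.75], 1000.0)
```

    Ignoring the phases and summing magnitudes would miss exactly this interference structure, which is why a phase-aware model can stay cheap yet track the multi-path fading that empirical approximations smooth over.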

  9. The Physics of Colour Vision.

    ERIC Educational Resources Information Center

    Goldman, Martin

    1985-01-01

    An elementary physical model of cone receptor cells is explained and applied to the complexities of human color vision. One-, two-, and three-receptor systems are considered, with the latter shown to be the best model for the human eye. Color blindness is also discussed. (DH)

  10. Tinamit: Making coupled system dynamics models accessible to stakeholders

    NASA Astrophysics Data System (ADS)

    Malard, Julien; Inam Baig, Azhar; Rojas Díaz, Marcela; Hassanzadeh, Elmira; Adamowski, Jan; Tuy, Héctor; Melgar-Quiñonez, Hugo

    2017-04-01

    Model coupling is increasingly used as a method of combining the best of two models when representing socio-environmental systems, though barriers to successful model adoption by stakeholders are particularly present with the use of coupled models, due to their high complexity and typically low implementation flexibility. Coupled system dynamics - physically-based modelling is a promising method to improve stakeholder participation in environmental modelling while retaining a high level of complexity for physical process representation, as the system dynamics components are readily understandable and can be built by stakeholders themselves. However, this method is not without limitations in practice, including 1) inflexible and complicated coupling methods, 2) difficult model maintenance after the end of the project, and 3) a wide variety of end-user cultures and languages. We have developed the open-source Python-language software tool Tinamit to overcome some of these limitations to the adoption of stakeholder-based coupled system dynamics - physically-based modelling. The software is unique in 1) its inclusion of both a graphical user interface (GUI) and a library of available commands (API) that allow users with little or no coding abilities to rapidly, effectively, and flexibly couple models, 2) its multilingual support for the GUI, allowing users to couple models in their preferred language (and to add new languages as necessary for their community work), and 3) its modular structure allowing for very easy model coupling and modification without the direct use of code, and to which programming-savvy users can easily add support for new types of physically-based models. 
We discuss how the use of Tinamit for model coupling can greatly increase the accessibility of coupled models to stakeholders, using an example of a stakeholder-built system dynamics model of soil salinity issues in Pakistan coupled with the physically-based soil salinity and water flow model SAHYSMOD. Different socioeconomic and environmental policies for soil salinity remediation are tested within the coupled model, allowing for the identification of the most efficient actions from an environmental and a farmer economy standpoint while taking into account the complex feedbacks between socioeconomics and the physical environment.

  11. Challenges in Developing Models Describing Complex Soil Systems

    NASA Astrophysics Data System (ADS)

    Simunek, J.; Jacques, D.

    2014-12-01

    Quantitative mechanistic models that consider basic physical, mechanical, chemical, and biological processes have the potential to be powerful tools for integrating our understanding of complex soil systems, and the soil science community has often called for models that include a large number of these diverse processes. However, once attempts have been made to develop such models, the response from the community has not always been overwhelming, especially once it was discovered that the resulting models are highly complex. They require a large number of parameters, not all of which can be easily (or at all) measured or identified, and many of which carry large uncertainties; they also demand from their users deep knowledge of most of the implemented physical, mechanical, chemical, and biological processes. The real, or perceived, complexity of these models then discourages users from applying them even to relatively simple problems for which they would be perfectly adequate. Due to the nonlinear nature and chemical/biological complexity of soil systems, it is also virtually impossible to verify these types of models analytically, raising doubts about their applicability. Code intercomparison, which is then likely the most suitable method for assessing code capabilities and model performance, requires the existence of multiple models with similar or overlapping capabilities, which may not always exist. It is thus a challenge not only to develop models describing complex soil systems, but also to persuade the soil science community to use them. As a result, complex quantitative mechanistic models remain an underutilized tool in soil science research. We will illustrate the challenges discussed above with our own efforts in developing quantitative mechanistic models (such as HP1/2) for complex soil systems.

  12. Lumped Parameter Models for Predicting Nitrogen Transport in Lower Coastal Plain Watersheds

    Treesearch

    Devendra M. Amatya; George M. Chescheir; Glen P. Fernandez; R. Wayne Skaggs; F. Birgand; J.W. Gilliam

    2003-01-01

    In recent years, physically based, comprehensive, distributed watershed-scale hydrologic/water quality models have been developed and applied to evaluate the cumulative effects of land and water management practices on receiving waters. Although these complex physically based models are capable of simulating the impacts of these changes in large watersheds, they are often...

  13. Managing Complex Interoperability Solutions using Model-Driven Architecture

    DTIC Science & Technology

    2011-06-01

    such as Oracle or MySQL. Each data model for a specific RDBMS is a distinct PSM. Or the system may want to exchange information with other C2...reduced number of transformations, e.g., from an RDBMS physical schema to the corresponding SQL script needed to instantiate the tables in a relational...tance of models. In engineering, a model serves several purposes: 1. It presents an abstract view of a complex system or of a complex information

  14. Cyber-Physical Test Platform for Microgrids: Combining Hardware, Hardware-in-the-Loop, and Network-Simulator-in-the-Loop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, Austin; Chakraborty, Sudipta; Wang, Dexin

    This paper presents a cyber-physical testbed developed to investigate the complex interactions between emerging microgrid technologies such as grid-interactive power sources, control systems, and a wide variety of communication platforms and bandwidths. The cyber-physical testbed consists of three major components for testing and validation: real-time models of a distribution feeder with microgrid assets that are integrated into the National Renewable Energy Laboratory's (NREL) power hardware-in-the-loop (PHIL) platform; real-time capable network-simulator-in-the-loop (NSIL) models; and physical hardware including inverters and a simple system controller. Several load profiles and microgrid configurations were tested to examine the effect on system performance with increasing channel delays and router processing delays in the network simulator. Testing demonstrated that the controller's ability to maintain a target grid import power band was severely diminished with increasing network delays, and laid the foundation for future testing of more complex cyber-physical systems.

  15. Modeling of Inelastic Collisions in a Multifluid Plasma: Excitation and Deexcitation

    DTIC Science & Technology

    2016-05-31

    AVAILABILITY STATEMENT Approved for public release; distribution unlimited 13. SUPPLEMENTARY NOTES For publication in Physics of Plasma Vol #22, Issue...the fundamental physical processes may be individually known, it is not always clear how their combination affects the overall operation, or at what...arises from the complexity of the physical processes needed to be captured in the model. The required level of detail of the CR model is typically not

  16. Modeling of Inelastic Collisions in a Multifluid Plasma: Excitation and Deexcitation (Preprint)

    DTIC Science & Technology

    2015-06-01

    AVAILABILITY STATEMENT Approved for public release; distribution unlimited 13. SUPPLEMENTARY NOTES For publication in Physics of Plasma PA Case...the fundamental physical processes may be individually known, it is not always clear how their combination affects the overall operation, or at what...arises from the complexity of the physical processes needed to be captured in the model. The required level of detail of the CR model is typically not

  17. Understanding large multiprotein complexes: applying a multiple allosteric networks model to explain the function of the Mediator transcription complex.

    PubMed

    Lewis, Brian A

    2010-01-15

    The regulation of transcription and of many other cellular processes involves large multi-subunit protein complexes. In the context of transcription, it is known that these complexes serve as regulatory platforms that connect activator DNA-binding proteins to a target promoter. However, there is still a lack of understanding regarding the function of these complexes. Why do multi-subunit complexes exist? What is the molecular basis of the function of their constituent subunits, and how are these subunits organized within a complex? What is the reason for physical connections between certain subunits and not others? In this article, I address these issues through a model of network allostery and its application to the eukaryotic RNA polymerase II Mediator transcription complex. The multiple allosteric networks model (MANM) suggests that protein complexes such as Mediator exist not only as physical but also as functional networks of interconnected proteins through which information is transferred from subunit to subunit by the propagation of an allosteric state known as conformational spread. Additionally, there are multiple distinct sub-networks within the Mediator complex that can be defined by their connections to different subunits; these sub-networks have discrete functions that are activated when specific subunits interact with other activator proteins.

  18. Relating the Stored Magnetic Energy of a Parallel-Plate Inductor to the Work of External Forces

    ERIC Educational Resources Information Center

    Gauthier, N.

    2007-01-01

    Idealized models are often used in introductory physics courses. For one, such models involve simple mathematics, which is a definite plus since complex mathematical manipulations quickly become an obstacle rather than a tool for a beginner. Idealized models facilitate a student's understanding and grasp of a given physical phenomenon, yet they…

  19. Evaluating crown fire rate of spread predictions from physics-based models

    Treesearch

    C. M. Hoffman; J. Ziegler; J. Canfield; R. R. Linn; W. Mell; C. H. Sieg; F. Pimont

    2015-01-01

    Modeling the behavior of crown fires is challenging due to the complex set of coupled processes that drive the characteristics of a spreading wildfire and the large range of spatial and temporal scales over which these processes occur. Detailed physics-based modeling approaches such as FIRETEC and the Wildland Urban Interface Fire Dynamics Simulator (WFDS) simulate...

  20. Understanding Complex Natural Systems by Articulating Structure-Behavior-Function Models

    ERIC Educational Resources Information Center

    Vattam, Swaroop S.; Goel, Ashok K.; Rugaber, Spencer; Hmelo-Silver, Cindy E.; Jordan, Rebecca; Gray, Steven; Sinha, Suparna

    2011-01-01

    Artificial intelligence research on creative design has led to Structure-Behavior-Function (SBF) models that emphasize functions as abstractions for organizing understanding of physical systems. Empirical studies on understanding complex systems suggest that novice understanding is shallow, typically focusing on their visible structures and…

  1. Theoretical study on interaction of cytochrome f and plastocyanin complex by a simple coarse-grained model with molecular crowding effect

    NASA Astrophysics Data System (ADS)

    Nakagawa, Satoshi; Kurniawan, Isman; Kodama, Koichi; Arwansyah, Muhammad Saleh; Kawaguchi, Kazutomo; Nagao, Hidemi

    2018-03-01

    We present a simple coarse-grained model with a molecular crowding effect in the solvent to investigate the structure and dynamics of protein complexes, including association and/or dissociation processes, and to examine physical properties such as the complex structure and the reaction rate from the viewpoint of the hydrophobic intermolecular interactions of the protein complex. In the present coarse-grained model, a function depending on the density of hydrophobic amino acid residues in the binding area of the complex is introduced; this function incorporates the molecular crowding effect into the intermolecular interactions of hydrophobic amino acid residues between proteins. We propose a hydrophobic intermolecular potential energy between proteins based on this density-dependent function. The coarse-grained model is applied to the complex of cytochrome f and plastocyanin using Langevin dynamics simulation to investigate properties such as the complex structure and the rate constant of electron transfer from plastocyanin to cytochrome f. We find that for the electron transfer reaction to proceed, the distance between the metals in the active sites must be within about 18 Å. We discuss some typical complex structures formed in the simulation in relation to the effect of molecular crowding on hydrophobic interactions.
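    The Langevin dynamics used to evolve such coarse-grained bead models can be sketched in a few lines. The following is a generic overdamped (Brownian) integrator, not the authors' implementation; the force array would come from whatever coarse-grained potential is in use (e.g. a density-dependent hydrophobic term), and the friction coefficient and time step below are placeholder values.

```python
import numpy as np

def langevin_step(x, force, gamma, kT, dt, rng):
    """One overdamped (Brownian) Langevin step for coarse-grained beads:
    drift down the force, plus thermal noise of variance 2*kT*dt/gamma.

    x     : (N, 3) bead positions
    force : (N, 3) deterministic forces on the beads
    gamma : friction coefficient
    kT    : thermal energy
    dt    : time step
    rng   : numpy random Generator
    """
    noise = rng.standard_normal(x.shape)
    return x + dt * force / gamma + np.sqrt(2.0 * kT * dt / gamma) * noise

# Free diffusion of many beads: with zero force, the per-coordinate
# displacements should be zero-mean with variance 2*kT*dt/gamma.
rng = np.random.default_rng(0)
x0 = np.zeros((10000, 3))
x1 = langevin_step(x0, np.zeros_like(x0), gamma=1.0, kT=1.0, dt=0.01, rng=rng)
```

    Repeating this step while recomputing the forces between beads is the whole simulation loop; association and dissociation events then emerge from the interplay of the attractive potential and the thermal noise.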

  2. Use of system identification techniques for improving airframe finite element models using test data

    NASA Technical Reports Server (NTRS)

    Hanagud, Sathya V.; Zhou, Weiyu; Craig, James I.; Weston, Neil J.

    1991-01-01

    A method for using system identification techniques to improve airframe finite element models was developed and demonstrated. The method uses linear sensitivity matrices to relate changes in selected physical parameters to changes in total system matrices. The values for these physical parameters were determined using constrained optimization with singular value decomposition. The method was confirmed using both simple and complex finite element models for which pseudo-experimental data was synthesized directly from the finite element model. The method was then applied to a real airframe model which incorporated all the complexities and details of a large finite element model and for which extensive test data was available. The method was shown to work, and the differences between the identified model and the measured results were considered satisfactory.

  3. Initial Results from an Energy-Aware Airborne Dynamic, Data-Driven Application System Performing Sampling in Coherent Boundary-Layer Structures

    NASA Astrophysics Data System (ADS)

    Frew, E.; Argrow, B. M.; Houston, A. L.; Weiss, C.

    2014-12-01

    The energy-aware airborne dynamic, data-driven application system (EA-DDDAS) performs persistent sampling in complex atmospheric conditions by exploiting wind energy using the dynamic data-driven application system paradigm. The main challenge for future airborne sampling missions is operation with tight integration of physical and computational resources over wireless communication networks, in complex atmospheric conditions. The physical resources considered here include sensor platforms, particularly mobile Doppler radar and unmanned aircraft, the complex conditions in which they operate, and the region of interest. Autonomous operation requires distributed computational effort connected by layered wireless communication. Onboard decision-making and coordination algorithms can be enhanced by atmospheric models that assimilate input from physics-based models and wind fields derived from multiple sources. These models are generally too complex to be run onboard the aircraft, so they need to be executed in ground vehicles in the field, and connected over broadband or other wireless links back to the field. Finally, the wind field environment drives strong interaction between the computational and physical systems, both as a challenge to autonomous path planning algorithms and as a novel energy source that can be exploited to improve system range and endurance. Implementation details of a complete EA-DDDAS will be provided, along with preliminary flight test results targeting coherent boundary-layer structures.

  4. A Program of Continuing Research on Representing, Manipulating, and Reasoning about Physical Objects

    DTIC Science & Technology

    1991-09-30

    graphics with the goal of automatically converting complex graphics models into forms more appropriate for radiosity computation. 2.4 Least Constraint We...

  5. Teaching Complex Concepts in the Geosciences by Integrating Analytical Reasoning with GIS

    ERIC Educational Resources Information Center

    Houser, Chris; Bishop, Michael P.; Lemmons, Kelly

    2017-01-01

    Conceptual models have long served as a means for physical geographers to organize their understanding of feedback mechanisms and complex systems. Analytical reasoning provides undergraduate students with an opportunity to develop conceptual models based upon their understanding of surface processes and environmental conditions. This study…

  6. Molecular Modeling and Physicochemical Properties of Supramolecular Complexes of Limonene with α- and β-Cyclodextrins.

    PubMed

    Dos Passos Menezes, Paula; Dos Santos, Polliana Barbosa Pereira; Dória, Grace Anne Azevedo; de Sousa, Bruna Maria Hipólito; Serafini, Mairim Russo; Nunes, Paula Santos; Quintans-Júnior, Lucindo José; de Matos, Iara Lisboa; Alves, Péricles Barreto; Bezerra, Daniel Pereira; Mendonça Júnior, Francisco Jaime Bezerra; da Silva, Gabriel Francisco; de Aquino, Thiago Mendonça; de Souza Bento, Edson; Scotti, Marcus Tullius; Scotti, Luciana; de Souza Araujo, Adriano Antunes

    2017-02-01

    This study evaluated three different methods for the formation of an inclusion complex between alpha- and beta-cyclodextrin (α- and β-CD) and limonene (LIM) with the goal of improving the physicochemical properties of limonene. The study samples were prepared through physical mixing (PM), paste complexation (PC), and slurry complexation (SC) methods in the molar ratio of 1:1 (cyclodextrin:limonene). The complexes prepared were evaluated with thermogravimetry/derivate thermogravimetry, infrared spectroscopy, X-ray diffraction, complexation efficiency through gas chromatography/mass spectrometry analyses, molecular modeling, and nuclear magnetic resonance. The results showed that the physical mixing procedure did not produce complexation, but the paste and slurry methods produced inclusion complexes, which demonstrated interactions outside of the cavity of the CDs. However, the paste obtained with β-cyclodextrin did not demonstrate complexation in the gas chromatographic technique because, after extraction, most of the limonene was either surface-adsorbed by β-cyclodextrin or volatilized during the procedure. We conclude that paste complexation and slurry complexation are effective and economic methods to improve the physicochemical character of limonene and could have important applications in pharmacological activities in terms of an increase in solubility.

  7. Models of chromatin spatial organisation in the cell nucleus

    NASA Astrophysics Data System (ADS)

    Nicodemi, Mario

    2014-03-01

    In the cell nucleus, chromosomes have a complex architecture serving vital functional purposes. Recent experiments have started unveiling the genome-wide interaction map of DNA sites, revealing different levels of organisation at different scales. The principles that orchestrate such a complex 3D structure, though, remain mysterious. I will give an overview of the scenario emerging from classical polymer physics models of the general features of chromatin spatial organisation. The available experimental data, which can be rationalised within a single framework, support a picture in which chromatin is a complex mixture of differently folded regions, self-organised across spatial scales according to basic physical mechanisms. I will also discuss applications to specific DNA loci, e.g. the HoxB locus, where models informed with biological details, and tested against targeted experiments, can help identify the determinants of folding.

  8. Simulating Fire Disturbance and Plant Mortality Using Antecedent Eco-hydrological Conditions to Inform a Physically Based Combustion Model

    NASA Astrophysics Data System (ADS)

    Atchley, A. L.; Linn, R.; Middleton, R. S.; Runde, I.; Coon, E.; Michaletz, S. T.

    2016-12-01

    Wildfire is a complex agent of change that both affects and depends on eco-hydrological systems, constituting a tightly linked system of disturbances and eco-hydrological conditions. For example, the structure, build-up, and moisture content of fuel depend on eco-hydrological regimes, which in turn affect fire spread and intensity. Fire behavior, on the other hand, determines the severity and extent of eco-hydrological disturbance, often resulting in a mosaic of untouched, stressed, damaged, or completely destroyed vegetation within the fire perimeter. This in turn drives new eco-hydrological system behavior. The cycles of disturbance and recovery present a complex, evolving system with many unknowns, especially in the face of climate change, with implications for fire risk, water supply, and forest composition. Physically based numerical experiments that attempt to capture the complex linkages between the eco-hydrological regimes that affect fire behavior and the eco-hydrological response to fire disturbance help build the understanding required to project how fire disturbance and eco-hydrological conditions coevolve over time. Here we explore the use of FIRETEC, a physically based 3D combustion model that solves conservation of mass, momentum, energy, and chemical species, to resolve fire spread over complex terrain and fuel structures. Uniquely, we couple a physically based plant mortality model with FIRETEC and examine the resultant hydrologic impact. In this proof-of-concept demonstration, we spatially distribute fuel structure and moisture content based on the eco-hydrological condition and use them as input for FIRETEC. The fire behavior simulation then produces localized burn severity and heat injuries, which are used as input to a spatially informed plant mortality model. 
Ultimately, we demonstrate the applicability of physically based models for exploring integrated disturbance and eco-hydrologic responses to wildfire behavior, and specifically map how fire spread and intensity are affected by the antecedent eco-hydrological condition, which in turn shapes the resulting tree mortality patterns.

  9. A compact physical model for the simulation of pNML-based architectures

    NASA Astrophysics Data System (ADS)

    Turvani, G.; Riente, F.; Plozner, E.; Schmitt-Landsiedel, D.; Breitkreutz-v. Gamm, S.

    2017-05-01

    Among emerging technologies, perpendicular Nanomagnetic Logic (pNML) seems very promising because of its capability of combining logic and memory in the same device, its scalability, 3D integration, and low power consumption. Recently, Full Adder (FA) structures clocked by a global magnetic field have been experimentally demonstrated, and detailed characterizations of the switching process governing the domain wall (DW) nucleation probability Pnuc and time tnuc have been performed. However, the design of pNML architectures represents a crucial point in the study of this technology and can have a remarkable impact on the reliability of pNML structures. Here, we present a compact model developed in VHDL which enables the simulation of complex pNML architectures while taking critical physical parameters into account. These parameters have been extracted from experiments, fitted by the corresponding physical equations, and encapsulated into the proposed model. Within it, magnetic structures are decomposed into a few basic elements (nucleation centers, nanowires, inverters, etc.), each represented by the corresponding physical description. To validate the model, we redesigned an FA and compared our simulation results to the experiment. With this compact model of pNML devices we have envisioned a new methodology that makes it possible to simulate and test the physical behavior of complex architectures at very low computational cost.

  10. Use of system identification techniques for improving airframe finite element models using test data

    NASA Technical Reports Server (NTRS)

    Hanagud, Sathya V.; Zhou, Weiyu; Craig, James I.; Weston, Neil J.

    1993-01-01

    A method for using system identification techniques to improve airframe finite element models using test data was developed and demonstrated. The method uses linear sensitivity matrices to relate changes in selected physical parameters to changes in the total system matrices. The values for these physical parameters were determined using constrained optimization with singular value decomposition. The method was confirmed using both simple and complex finite element models for which pseudo-experimental data was synthesized directly from the finite element model. The method was then applied to a real airframe model which incorporated all of the complexities and details of a large finite element model and for which extensive test data was available. The method was shown to work, and the differences between the identified model and the measured results were considered satisfactory.
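    The core numerical step described here, relating output residuals to parameter changes through a linear sensitivity matrix and solving with a singular value decomposition, can be sketched as follows. This is a generic Gauss-Newton-style update, not the paper's exact formulation; the truncation threshold `rcond` is a hypothetical regularization choice.

```python
import numpy as np

def update_parameters(p, S, residual, rcond=1e-6):
    """Map output residuals back to parameter corrections through the
    pseudo-inverse of the sensitivity matrix S (rows: model outputs,
    columns: physical parameters).

    The SVD-based pseudo-inverse (np.linalg.pinv) discards directions with
    singular values below rcond * s_max, which regularizes the typically
    ill-conditioned identification problem.
    """
    dp = np.linalg.pinv(S, rcond=rcond) @ residual
    return p + dp

# Toy check: for a purely linear model y = S @ p, a single update
# recovers the true parameters from the residual.
S = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [1.0, 1.0]])
p_true = np.array([0.3, -0.2])
p0 = np.zeros(2)
residual = S @ p_true - S @ p0      # measured minus predicted outputs
p1 = update_parameters(p0, S, residual)
```

    For a real finite element model the relationship is only locally linear, so the update is iterated, with the sensitivity matrix and residual recomputed at each step.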

  11. Complexity growth in minimal massive 3D gravity

    NASA Astrophysics Data System (ADS)

    Qaemmaqami, Mohammad M.

    2018-01-01

    We study complexity growth using the "complexity = action" (CA) proposal in the minimal massive 3D gravity (MMG) model, which was proposed to resolve the bulk-boundary clash problem of topologically massive gravity (TMG). We observe that the rate of complexity growth for the Banados-Teitelboim-Zanelli (BTZ) black hole saturates the proposed bound set by the physical mass of the BTZ black hole in the MMG model when the angular momentum parameter and the inner horizon of the black hole go to zero.

  12. Sharp Truncation of an Electric Field: An Idealized Model That Warrants Caution

    ERIC Educational Resources Information Center

    Tu, Hong; Zhu, Jiongming

    2016-01-01

    In physics, idealized models are often used to simplify complex situations. The motivation of the idealization is to make the real complex system tractable by adopting certain simplifications. In this treatment some unnecessary, negligible aspects are stripped away (so-called Aristotelian idealization), or some deliberate distortions are involved…

  13. Final Report, DOE Early Career Award: Predictive modeling of complex physical systems: new tools for statistical inference, uncertainty quantification, and experimental design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marzouk, Youssef

    Predictive simulation of complex physical systems increasingly rests on the interplay of experimental observations with computational models. Key inputs, parameters, or structural aspects of models may be incomplete or unknown, and must be developed from indirect and limited observations. At the same time, quantified uncertainties are needed to qualify computational predictions in support of design and decision-making. In this context, Bayesian statistics provides a foundation for inference from noisy and limited data, but at prohibitive computational expense. This project intends to make rigorous predictive modeling *feasible* in complex physical systems, via accelerated and scalable tools for uncertainty quantification, Bayesian inference, and experimental design. Specific objectives are as follows: 1. Develop adaptive posterior approximations and dimensionality reduction approaches for Bayesian inference in high-dimensional nonlinear systems. 2. Extend accelerated Bayesian methodologies to large-scale sequential data assimilation, fully treating nonlinear models and non-Gaussian state and parameter distributions. 3. Devise efficient surrogate-based methods for Bayesian model selection and the learning of model structure. 4. Develop scalable simulation/optimization approaches to nonlinear Bayesian experimental design, for both parameter inference and model selection. 5. Demonstrate these inferential tools on chemical kinetic models in reacting flow, constructing and refining thermochemical and electrochemical models from limited data. Demonstrate Bayesian filtering on canonical stochastic PDEs and in the dynamic estimation of inhomogeneous subsurface properties and flow fields.
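    As a minimal illustration of the Bayesian inference machinery this project aims to accelerate, the sketch below draws samples from a posterior with a plain random-walk Metropolis algorithm. It is a textbook baseline, not the project's accelerated methodology; the step size and the toy one-dimensional posterior are arbitrary choices for demonstration.

```python
import numpy as np

def metropolis(log_post, x0, n_steps, step=0.5, rng=None):
    """Random-walk Metropolis sampler: propose a Gaussian perturbation,
    accept with probability min(1, posterior ratio)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    samples = []
    for _ in range(n_steps):
        prop = x + step * rng.standard_normal(x.shape)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis acceptance
            x, lp = prop, lp_prop
        samples.append(x.copy())
    return np.array(samples)

# Toy posterior: a standard normal in one dimension.
chain = metropolis(lambda x: -0.5 * np.sum(x**2), np.zeros(1), 5000)
```

    The cost driver motivating the project is visible even here: each step requires a posterior evaluation, i.e. a forward model run, which is what surrogate models and dimensionality reduction aim to cheapen.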

  14. A moist aquaplanet variant of the Held–Suarez test for atmospheric model dynamical cores

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thatcher, Diana R.; Jablonowski, Christiane

    A moist idealized test case (MITC) for atmospheric model dynamical cores is presented. The MITC is based on the Held–Suarez (HS) test that was developed for dry simulations on “a flat Earth” and replaces the full physical parameterization package with a Newtonian temperature relaxation and Rayleigh damping of the low-level winds. This new variant of the HS test includes moisture and thereby sheds light on the nonlinear dynamics–physics moisture feedbacks without the complexity of full-physics parameterization packages. In particular, it adds simplified moist processes to the HS forcing to model large-scale condensation, boundary-layer mixing, and the exchange of latent and sensible heat between the atmospheric surface and an ocean-covered planet. Using a variety of dynamical cores of the National Center for Atmospheric Research (NCAR)'s Community Atmosphere Model (CAM), this paper demonstrates that the inclusion of the moist idealized physics package leads to climatic states that closely resemble aquaplanet simulations with complex physical parameterizations. This establishes that the MITC approach generates reasonable atmospheric circulations and can be used for a broad range of scientific investigations. This paper provides examples of two application areas. First, the test case reveals the characteristics of the physics–dynamics coupling technique and reproduces coupling issues seen in full-physics simulations. In particular, it is shown that sudden adjustments of the prognostic fields due to moist physics tendencies can trigger undesirable large-scale gravity waves, which can be remedied by a more gradual application of the physical forcing. Second, the moist idealized test case can be used to intercompare dynamical cores. These examples demonstrate the versatility of the MITC approach and suggestions are made for further application areas. Furthermore, the new moist variant of the HS test can be considered a test case of intermediate complexity.
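    The Newtonian temperature relaxation at the core of the HS forcing can be sketched in a few lines. This is a generic single-parcel illustration; the constant equilibrium temperature and 40-day timescale below are illustrative stand-ins for the latitude- and pressure-dependent HS profiles.

```python
def newtonian_relaxation(T, T_eq, tau_days, dt_seconds):
    """One forward-Euler step of dT/dt = -(T - T_eq) / tau."""
    tau = tau_days * 86400.0
    return T + dt_seconds * (-(T - T_eq) / tau)

# Relax a 300 K parcel toward a 280 K equilibrium with a 40-day timescale
T = 300.0
for _ in range(40 * 24 * 2):          # 40 days of 30-minute steps
    T = newtonian_relaxation(T, 280.0, 40.0, 1800.0)
```

    After one relaxation timescale the departure from equilibrium has decayed by a factor of about e, i.e. from 20 K to roughly 7.4 K; the MITC keeps this forcing and adds the simplified moist processes on top.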

  15. A moist aquaplanet variant of the Held–Suarez test for atmospheric model dynamical cores

    DOE PAGES

    Thatcher, Diana R.; Jablonowski, Christiane

    2016-04-04

    A moist idealized test case (MITC) for atmospheric model dynamical cores is presented. The MITC is based on the Held–Suarez (HS) test that was developed for dry simulations on “a flat Earth” and replaces the full physical parameterization package with a Newtonian temperature relaxation and Rayleigh damping of the low-level winds. This new variant of the HS test includes moisture and thereby sheds light on the nonlinear dynamics–physics moisture feedbacks without the complexity of full-physics parameterization packages. In particular, it adds simplified moist processes to the HS forcing to model large-scale condensation, boundary-layer mixing, and the exchange of latent and sensible heat between the atmospheric surface and an ocean-covered planet. Using a variety of dynamical cores of the National Center for Atmospheric Research (NCAR)'s Community Atmosphere Model (CAM), this paper demonstrates that the inclusion of the moist idealized physics package leads to climatic states that closely resemble aquaplanet simulations with complex physical parameterizations. This establishes that the MITC approach generates reasonable atmospheric circulations and can be used for a broad range of scientific investigations. This paper provides examples of two application areas. First, the test case reveals the characteristics of the physics–dynamics coupling technique and reproduces coupling issues seen in full-physics simulations. In particular, it is shown that sudden adjustments of the prognostic fields due to moist physics tendencies can trigger undesirable large-scale gravity waves, which can be remedied by a more gradual application of the physical forcing. Second, the moist idealized test case can be used to intercompare dynamical cores. These examples demonstrate the versatility of the MITC approach and suggestions are made for further application areas. Furthermore, the new moist variant of the HS test can be considered a test case of intermediate complexity.

  16. Environmental Modeling

    EPA Pesticide Factsheets

    EPA's modeling community works to gain insights into physical, biological, economic, and social systems by conducting environmental assessments that support Agency decision making on complex environmental issues.

  17. Statistical inference of empirical constituents in partitioned analysis from integral-effect experiments: An application in thermo-mechanical coupling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stevens, Garrison N.; Atamturktur, Sez; Brown, D. Andrew

    Rapid advancements in parallel computing over the last two decades have enabled simulations of complex, coupled systems through partitioning. In partitioned analysis, independently developed constituent models communicate, representing dependencies between multiple physical phenomena that occur in the full system. Figure 1 schematically demonstrates a coupled system with two constituent models, each resolving different physical behavior. In this figure, the constituent model denoted as the “consumer” relies upon an input parameter that is provided by the constituent model acting as a “feeder.” The role of the feeder model is to map operating conditions (i.e., those stimulating the process) to consumer inputs, thus providing functional inputs to the consumer model. Problems arise if the feeder model cannot be built, a challenge that is prevalent for highly complex systems in extreme operational conditions that push the limits of our understanding of the underlying physical behavior. Often, these are also the situations where separate-effect experiments isolating the physical phenomena are not available, meaning that experimentally determining the unknown constituent behavior is not possible (Bauer and Holland, 1995; Unal et al., 2013) and that integral-effect experiments reflecting the behavior of the complete system tend to be the only available observations. In this paper, the authors advocate for the usefulness of integral-effect experiments in furthering a model developer’s knowledge of the physics principles governing the system behavior of interest.

  18. Statistical inference of empirical constituents in partitioned analysis from integral-effect experiments: An application in thermo-mechanical coupling

    DOE PAGES

    Stevens, Garrison N.; Atamturktur, Sez; Brown, D. Andrew; ...

    2018-04-16

    Rapid advancements in parallel computing over the last two decades have enabled simulations of complex, coupled systems through partitioning. In partitioned analysis, independently developed constituent models communicate, representing dependencies between multiple physical phenomena that occur in the full system. Figure 1 schematically demonstrates a coupled system with two constituent models, each resolving different physical behavior. In this figure, the constituent model denoted as the “consumer” relies upon an input parameter that is provided by the constituent model acting as a “feeder.” The role of the feeder model is to map operating conditions (i.e., those stimulating the process) to consumer inputs, thus providing functional inputs to the consumer model. Problems arise if the feeder model cannot be built, a challenge that is prevalent for highly complex systems in extreme operational conditions that push the limits of our understanding of the underlying physical behavior. Often, these are also the situations where separate-effect experiments isolating the physical phenomena are not available, meaning that experimentally determining the unknown constituent behavior is not possible (Bauer and Holland, 1995; Unal et al., 2013) and that integral-effect experiments reflecting the behavior of the complete system tend to be the only available observations. In this paper, the authors advocate for the usefulness of integral-effect experiments in furthering a model developer’s knowledge of the physics principles governing the system behavior of interest.

  19. The highly intelligent virtual agents for modeling financial markets

    NASA Astrophysics Data System (ADS)

    Yang, G.; Chen, Y.; Huang, J. P.

    2016-02-01

    Researchers have borrowed many theories from statistical physics, such as ensembles and the Ising model, to study complex adaptive systems through agent-based modeling. However, one fundamental difference between entities (such as spins) in physics and micro-units in complex adaptive systems is that the latter usually possess high intelligence, such as investors in financial markets. Although highly intelligent virtual agents are essential if agent-based modeling is to play a full role in the study of complex adaptive systems, how to create such agents is still an open question. Hence, we propose three principles for designing high artificial intelligence in financial markets and then build a specific class of agents called iAgents based on these three principles. Finally, we evaluate the intelligence of iAgents through virtual index trading in two different stock markets. For comparison, we also include three other types of agents in this contest, namely, random traders, agents from the wealth game (a modification of the famous minority game), and agents from an upgraded wealth game. As a result, iAgents perform the best, which lends strong support to the three principles. This work offers a general framework for the further development of agent-based modeling for various kinds of complex adaptive systems.
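    For context, the wealth-game benchmarks mentioned above are modifications of the Minority Game. A bare-bones version of the classic Minority Game (not the paper's iAgents or wealth game, and with purely hypothetical parameters) can be sketched as follows: each agent holds a few fixed lookup-table strategies over the recent win history and plays the one with the best virtual score.

```python
import random

def minority_game(n_agents=101, n_rounds=200, memory=3, n_strategies=2, seed=1):
    """Classic Minority Game: agents on the minority side score a point."""
    rng = random.Random(seed)
    n_hist = 2 ** memory
    # Each strategy maps every possible history (bitmask) to an action 0/1
    strategies = [[[rng.randint(0, 1) for _ in range(n_hist)]
                   for _ in range(n_strategies)] for _ in range(n_agents)]
    virtual = [[0] * n_strategies for _ in range(n_agents)]  # strategy scores
    wealth = [0] * n_agents
    history = 0
    for _ in range(n_rounds):
        actions = []
        for a in range(n_agents):
            best = max(range(n_strategies), key=lambda s: virtual[a][s])
            actions.append(strategies[a][best][history])
        minority = 0 if sum(actions) * 2 > n_agents else 1
        for a in range(n_agents):
            if actions[a] == minority:
                wealth[a] += 1
            for s in range(n_strategies):
                # Reward strategies that would have chosen the minority side
                if strategies[a][s][history] == minority:
                    virtual[a][s] += 1
        history = ((history << 1) | minority) % n_hist
    return wealth

wealth = minority_game()
```

    By construction fewer than half of the agents can win any given round, which is the scarcity that inductive strategies compete over; the iAgents of the paper replace these fixed lookup tables with higher intelligence.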

  20. Retrieving hydrological connectivity from empirical causality in karst systems

    NASA Astrophysics Data System (ADS)

    Delforge, Damien; Vanclooster, Marnik; Van Camp, Michel; Poulain, Amaël; Watlet, Arnaud; Hallet, Vincent; Kaufmann, Olivier; Francis, Olivier

    2017-04-01

    Because of their complexity, karst systems exhibit nonlinear dynamics. Moreover, if one attempts to model a karst system, its hidden behavior complicates the choice of the most suitable model. Therefore, both intensive investigation methods and nonlinear data analysis are needed to reveal the underlying hydrological connectivity as a prior for a consistent physically based modelling approach. Convergent Cross Mapping (CCM), a recent method, promises to identify causal relationships between time series belonging to the same dynamical system. The method is based on phase-space reconstruction and is suitable for nonlinear dynamics. As an empirical causation detection method, it could be used to highlight the hidden complexity of a karst system by revealing its inner hydrological and dynamical connectivity. Hence, if one can link causal relationships to physical processes, the method should show great potential to support physically based model structure selection. We present the results of numerical experiments using karst model blocks combined in different structures to generate time series from actual rainfall series. CCM is applied between the time series to investigate whether the empirical causation detection is consistent with the hydrological connectivity suggested by the karst model.
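    To make the phase-space reconstruction behind CCM concrete, here is a minimal generic sketch (not the authors' implementation): the candidate effect series is delay-embedded, and its nearest neighbours are used to cross-estimate the candidate cause. The coupled logistic maps and all parameters below are illustrative assumptions.

```python
import math

def delay_embed(series, E, tau=1):
    """Time-delay embedding: vectors (s_t, s_{t-tau}, ..., s_{t-(E-1)tau})."""
    start = (E - 1) * tau
    vectors = [tuple(series[t - i * tau] for i in range(E))
               for t in range(start, len(series))]
    return vectors, start

def ccm_skill(x, y, E=3, tau=1):
    """Cross-map x from the shadow manifold of y; returns corr(x, x_hat).
    If x causally drives y, y's manifold should reconstruct x well."""
    manifold, start = delay_embed(y, E, tau)
    n = len(manifold)
    x_true, x_hat = [], []
    for t in range(n):
        # E+1 nearest neighbours of point t (excluding itself)
        nearest = sorted((math.dist(manifold[t], manifold[j]), j)
                         for j in range(n) if j != t)[:E + 1]
        d_min = nearest[0][0] or 1e-12
        weights = [math.exp(-d / d_min) for d, _ in nearest]
        total = sum(weights)
        x_hat.append(sum(w * x[start + j]
                         for w, (_, j) in zip(weights, nearest)) / total)
        x_true.append(x[start + t])
    mx, mh = sum(x_true) / n, sum(x_hat) / n
    cov = sum((a - mx) * (b - mh) for a, b in zip(x_true, x_hat))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x_true))
    sh = math.sqrt(sum((b - mh) ** 2 for b in x_hat))
    return cov / (sx * sh)

# Toy system: x drives y, so y's history should cross-map x well
x, y = [0.4], [0.2]
for _ in range(200):
    x.append(x[-1] * (3.8 - 3.8 * x[-1]))
    y.append(y[-1] * (3.5 - 3.5 * y[-1] - 0.2 * x[-2]))

skill = ccm_skill(x[50:], y[50:])
```

    With a unidirectional coupling like this, the cross-map skill grows with the library length; that convergence is the criterion CCM uses to infer causality.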

  1. Let’s have a coffee with the Standard Model of particle physics!

    NASA Astrophysics Data System (ADS)

    Woithe, Julia; Wiener, Gerfried J.; Van der Veken, Frederik F.

    2017-05-01

    The Standard Model of particle physics is one of the most successful theories in physics and describes the fundamental interactions between elementary particles. It is encoded in a compact description, the so-called ‘Lagrangian’, which even fits on t-shirts and coffee mugs. This mathematical formulation, however, is complex and only rarely makes it into the physics classroom. Therefore, to support high school teachers in their challenging endeavour of introducing particle physics in the classroom, we provide a qualitative explanation of the terms of the Lagrangian and discuss their interpretation based on associated Feynman diagrams.

  2. Π4U: A high performance computing framework for Bayesian uncertainty quantification of complex models

    NASA Astrophysics Data System (ADS)

    Hadjidoukas, P. E.; Angelikopoulos, P.; Papadimitriou, C.; Koumoutsakos, P.

    2015-03-01

    We present Π4U, an extensible framework for non-intrusive Bayesian Uncertainty Quantification and Propagation (UQ+P) of complex and computationally demanding physical models that can exploit massively parallel computer architectures. The framework incorporates Laplace asymptotic approximations as well as stochastic algorithms, along with distributed numerical differentiation and task-based parallelism for heterogeneous clusters. Sampling is based on the Transitional Markov Chain Monte Carlo (TMCMC) algorithm and its variants. The optimization tasks associated with the asymptotic approximations are treated via the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). A modified subset simulation method is used for posterior reliability measurements of rare events. The framework accommodates scheduling of multiple physical model evaluations based on an adaptive load balancing library and shows excellent scalability. In addition to the software framework, we also provide guidelines as to the applicability and efficiency of Bayesian tools when applied to computationally demanding physical models. Theoretical and computational developments are demonstrated with applications drawn from molecular dynamics, structural dynamics and granular flow.

  3. A physical model for dementia

    NASA Astrophysics Data System (ADS)

    Sotolongo-Costa, O.; Gaggero-Sager, L. M.; Becker, J. T.; Maestu, F.; Sotolongo-Grau, O.

    2017-04-01

    Aging-associated brain decline often results in some kind of dementia. Even though dementia is a complex brain disorder, a physical model can be used to describe its general behavior. A probabilistic model for the development of dementia is obtained and fitted to experimental data obtained from the Alzheimer's Disease Neuroimaging Initiative. It is explained how dementia appears as a consequence of aging and why it is irreversible.

  4. Simulating adsorption of U(VI) under transient groundwater flow and hydrochemistry: Physical versus chemical nonequilibrium model

    USGS Publications Warehouse

    Greskowiak, J.; Hay, M.B.; Prommer, H.; Liu, C.; Post, V.E.A.; Ma, R.; Davis, J.A.; Zheng, C.; Zachara, J.M.

    2011-01-01

    Coupled intragrain diffusional mass transfer and nonlinear surface complexation processes play an important role in the transport behavior of U(VI) in contaminated aquifers. Two alternative model approaches for simulating these coupled processes were analyzed and compared: (1) the physical nonequilibrium approach that explicitly accounts for aqueous speciation and instantaneous surface complexation reactions in the intragrain regions and approximates the diffusive mass exchange between the immobile intragrain pore water and the advective pore water as multirate first-order mass transfer and (2) the chemical nonequilibrium approach that approximates the diffusion-limited intragrain surface complexation reactions by a set of multiple first-order surface complexation reaction kinetics, thereby eliminating the explicit treatment of aqueous speciation in the intragrain pore water. A model comparison has been carried out for column and field scale scenarios, representing the highly transient hydrological and geochemical conditions in the U(VI)-contaminated aquifer at the Hanford 300A site, Washington, USA. It was found that the response of U(VI) mass transfer behavior to hydrogeochemically induced changes in U(VI) adsorption strength was more pronounced in the physical than in the chemical nonequilibrium model. The magnitude of the differences in model behavior depended particularly on the degree of disequilibrium between the advective and immobile phase U(VI) concentrations. While a clear difference in U(VI) transport behavior between the two models was noticeable for the column-scale scenarios, only minor differences were found for the Hanford 300A field scale scenarios, where the model-generated disequilibrium conditions were less pronounced as a result of frequent groundwater flow reversals. Copyright 2011 by the American Geophysical Union.
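    The multirate first-order mass transfer used in approach (1) can be sketched with a simple explicit scheme: each immobile domain exchanges with the advective pore water at its own rate. The two rates and capacity ratios below are illustrative assumptions, not the Hanford 300A parameterization, and the surface complexation chemistry is omitted.

```python
def multirate_step(c_mobile, c_imm, alphas, betas, dt):
    """One Euler step of multirate first-order mass transfer:
       dc_im,k/dt = alpha_k * (c_m - c_im,k)
       dc_m/dt    = -sum_k beta_k * alpha_k * (c_m - c_im,k)
    where beta_k is the capacity (pore-volume) ratio of immobile domain k."""
    exchange = [a * (c_mobile - ci) for a, ci in zip(alphas, c_imm)]
    new_imm = [ci + dt * e for ci, e in zip(c_imm, exchange)]
    new_mob = c_mobile - dt * sum(b * e for b, e in zip(betas, exchange))
    return new_mob, new_imm

# Illustrative two-rate example: fast and slow immobile domains
c_m, c_im = 1.0, [0.0, 0.0]
alphas, betas = [0.5, 0.05], [0.2, 0.2]   # rates (1/day), capacity ratios
for _ in range(1000):                      # 100 days of 0.1-day steps
    c_m, c_im = multirate_step(c_m, c_im, alphas, betas, dt=0.1)
```

    The scheme conserves total mass exactly, and all concentrations relax toward the common equilibrium value set by the capacity ratios; the slow domain is what produces the long tailing that distinguishes the two nonequilibrium models under transient forcing.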

  5. Results and Lessons Learned from a Coupled Social and Physical Hydrology Model: Testing Alternative Water Management Policies and Institutional Structures Using Agent-Based Modeling and Regional Hydrology

    NASA Astrophysics Data System (ADS)

    Murphy, J.; Lammers, R. B.; Prousevitch, A.; Ozik, J.; Altaweel, M.; Collier, N. T.; Kliskey, A. D.; Alessa, L.

    2015-12-01

    Water Management in the U.S. Southwest is under increasing scrutiny as many areas endure persistent drought. The impact of these prolonged dry conditions is a product of regional climate and hydrological conditions, but also of a highly engineered water management infrastructure and a complex web of social arrangements whereby water is allocated, shared, exchanged, used, re-used, and finally consumed. We coupled an agent-based model with a regional hydrological model to understand the dynamics in one richly studied and highly populous area: southern Arizona, U.S.A., including metropolitan Phoenix and Tucson. There, multiple management entities representing an array of municipalities and other water providers and customers, including private companies and Native American tribes are enmeshed in a complex legal and economic context in which water is bought, leased, banked, and exchanged in a variety of ways and on multiple temporal and physical scales. A recurrent question in the literature of adaptive management is the impact of management structure on overall system performance. To explore this, we constructed an agent-based model to capture this social complexity, and coupled this with a physical hydrological model that we used to drive the system under a variety of water stress scenarios and to assess the regional impact of the social system's performance. We report the outcomes of ensembles of runs in which varieties of alternative policy constraints and management strategies are considered. We hope to contribute to policy discussions in this area and connected and legislatively similar areas (such as California) as current conditions change and existing legal and policy structures are revised. Additionally, we comment on the challenges of integrating models that ostensibly are in different domains (physical and social) but that independently represent a system in which physical processes and human actions are closely intertwined and difficult to disentangle.

  6. Elucidation of solution state complexation in wet-granulated oven-dried ibuprofen and beta-cyclodextrin: FT-IR and 1H-NMR studies.

    PubMed

    Ghorab, M K; Adeyeye, M C

    2001-08-01

    The effect of oven-dried wet granulation on the complexation of beta-cyclodextrin (betaCD) with ibuprofen (IBU) in solution was investigated using Fourier transform infrared spectroscopy (FT-IR), proton nuclear magnetic resonance (1H NMR), and molecular modeling. Granulation was carried out using 5 mL of three different granulating solvents (water, ethanol (95% v/v), and isopropanol), and the granules were oven-dried at 60 degrees C for 2 h. The granules were compared to an oven-dried physical mixture and a conventionally prepared complex. A phase solubility study was performed to investigate the stability of the granulation-formed complexes in solution. FT-IR was used to examine the complexation in the granules, while 1H NMR and molecular modeling studies were carried out to determine the mechanism of complexation in the water-prepared granules. The solubility studies suggested a 1:1 complex between IBU and betaCD. They also showed that the stability of the complex in solution followed the order ethanol > water > isopropanol with respect to the granulating solvents. The FT-IR study revealed a shift in the carboxylic acid stretching band and a decrease in the intensities of the C-H bending bands of the isopropyl group and the out-of-plane aromatic ring of IBU in granules compared to the oven-dried physical mixture. This indicated that the granules might have some extent of solid-state complexation that could further enhance dissolution and the IBU-betaCD solution-state complexation. 1H NMR showed that water-prepared oven-dried granules had a different 1H NMR spectrum compared to a similarly made oven-dried physical mixture, indicative of complexation in the former. The 1H NMR and molecular modeling studies together revealed that solution-state complexation from the granules occurred by inclusion of the isopropyl group together with part of the aromatic ring of IBU into the betaCD cavity, probably through its wider side. These results indicate that the granulation process induced faster complexation in solution, which enhances the solubility and dissolution rate of poorly soluble drugs. The extent of complexation in the granules was dependent on the type of solvent used.

  7. Biomedically relevant chemical and physical properties of coal combustion products.

    PubMed Central

    Fisher, G L

    1983-01-01

    The evaluation of the potential public and occupational health hazards of developing and existing combustion processes requires a detailed understanding of the physical and chemical properties of effluents available for human and environmental exposures. These processes produce complex mixtures of gases and aerosols which may interact synergistically or antagonistically with biological systems. Because of the physicochemical complexity of the effluents, the biomedically relevant properties of these materials must be carefully assessed. Subsequent to release from combustion sources, environmental interactions further complicate assessment of the toxicity of combustion products. This report provides an overview of the biomedically relevant physical and chemical properties of coal fly ash. Coal fly ash is presented as a model complex mixture for health and safety evaluation of combustion processes. PMID:6337824

  8. Development of a 3D coupled physical-biogeochemical model for the Marseille coastal area (NW Mediterranean Sea): what complexity is required in the coastal zone?

    PubMed

    Fraysse, Marion; Pinazo, Christel; Faure, Vincent Martin; Fuchs, Rosalie; Lazzari, Paolo; Raimbault, Patrick; Pairaud, Ivane

    2013-01-01

    Terrestrial inputs (natural and anthropogenic) from rivers, the atmosphere and physical processes strongly impact the functioning of coastal pelagic ecosystems. The objective of this study was to develop a tool for the examination of these impacts on the Marseille coastal area, which experiences inputs from the Rhone River and high rates of atmospheric deposition. Therefore, a new 3D coupled physical/biogeochemical model was developed. Two versions of the biogeochemical model were tested, one model considering only the carbon (C) and nitrogen (N) cycles and a second model that also considers the phosphorus (P) cycle. Realistic simulations were performed for a period of 5 years (2007-2011). The model accuracy assessment showed that both versions of the model were capable of capturing the seasonal changes and spatial characteristics of the ecosystem. The model also reproduced upwelling events and the intrusion of Rhone River water into the Bay of Marseille well. Those processes appeared to greatly impact this coastal oligotrophic area because they induced strong increases in chlorophyll-a concentrations in the surface layer. The model with the C, N and P cycles better reproduced the chlorophyll-a concentrations at the surface than did the model without the P cycle, especially for the Rhone River water. Nevertheless, the chlorophyll-a concentrations at depth were better represented by the model without the P cycle. Therefore, the complexity of the biogeochemical model introduced errors into the model results, but it also improved model results during specific events. Finally, this study suggested that in coastal oligotrophic areas, improvements in the description and quantification of the hydrodynamics and the terrestrial inputs should be preferred over increasing the complexity of the biogeochemical model.

  9. Development of a 3D Coupled Physical-Biogeochemical Model for the Marseille Coastal Area (NW Mediterranean Sea): What Complexity Is Required in the Coastal Zone?

    PubMed Central

    Fraysse, Marion; Pinazo, Christel; Faure, Vincent Martin; Fuchs, Rosalie; Lazzari, Paolo; Raimbault, Patrick; Pairaud, Ivane

    2013-01-01

    Terrestrial inputs (natural and anthropogenic) from rivers, the atmosphere and physical processes strongly impact the functioning of coastal pelagic ecosystems. The objective of this study was to develop a tool for the examination of these impacts on the Marseille coastal area, which experiences inputs from the Rhone River and high rates of atmospheric deposition. Therefore, a new 3D coupled physical/biogeochemical model was developed. Two versions of the biogeochemical model were tested, one model considering only the carbon (C) and nitrogen (N) cycles and a second model that also considers the phosphorus (P) cycle. Realistic simulations were performed for a period of 5 years (2007–2011). The model accuracy assessment showed that both versions of the model were capable of capturing the seasonal changes and spatial characteristics of the ecosystem. The model also reproduced upwelling events and the intrusion of Rhone River water into the Bay of Marseille well. Those processes appeared to greatly impact this coastal oligotrophic area because they induced strong increases in chlorophyll-a concentrations in the surface layer. The model with the C, N and P cycles better reproduced the chlorophyll-a concentrations at the surface than did the model without the P cycle, especially for the Rhone River water. Nevertheless, the chlorophyll-a concentrations at depth were better represented by the model without the P cycle. Therefore, the complexity of the biogeochemical model introduced errors into the model results, but it also improved model results during specific events. Finally, this study suggested that in coastal oligotrophic areas, improvements in the description and quantification of the hydrodynamics and the terrestrial inputs should be preferred over increasing the complexity of the biogeochemical model. PMID:24324589

  10. Comparing Virtual and Physical Robotics Environments for Supporting Complex Systems and Computational Thinking

    ERIC Educational Resources Information Center

    Berland, Matthew; Wilensky, Uri

    2015-01-01

    Both complex systems methods (such as agent-based modeling) and computational methods (such as programming) provide powerful ways for students to understand new phenomena. To understand how to effectively teach complex systems and computational content to younger students, we conducted a study in four urban middle school classrooms comparing…

  11. Using Remote Sensing Data to Constrain Models of Fault Interactions and Plate Boundary Deformation

    NASA Astrophysics Data System (ADS)

    Glasscoe, M. T.; Donnellan, A.; Lyzenga, G. A.; Parker, J. W.; Milliner, C. W. D.

    2016-12-01

    Determining the distribution of slip and behavior of fault interactions at plate boundaries is a complex problem. Field and remotely sensed data often lack the necessary coverage to fully resolve fault behavior. However, realistic physical models may be used to more accurately characterize the complex behavior of faults constrained with observed data, such as GPS, InSAR, and SfM. These results will improve the utility of using combined models and data to estimate earthquake potential and characterize plate boundary behavior. Plate boundary faults exhibit complex behavior, with partitioned slip and distributed deformation. To investigate what fraction of slip becomes distributed deformation off major faults, we examine a model fault embedded within a damage zone of reduced elastic rigidity that narrows with depth and forward model the slip and resulting surface deformation. The fault segments and slip distributions are modeled using the JPL GeoFEST software. GeoFEST (Geophysical Finite Element Simulation Tool) is a two- and three-dimensional finite element software package for modeling solid stress and strain in geophysical and other continuum domain applications [Lyzenga, et al., 2000; Glasscoe, et al., 2004; Parker, et al., 2008, 2010]. New methods to advance geohazards research using computer simulations and remotely sensed observations for model validation are required to understand fault slip, the complex nature of fault interaction and plate boundary deformation. These models help enhance our understanding of the underlying processes, such as transient deformation and fault creep, and can aid in developing observation strategies for sUAV, airborne, and upcoming satellite missions seeking to determine how faults behave and interact and assess their associated hazard. Models will also help to characterize this behavior, which will enable improvements in hazard estimation. 
Validating the model results against remotely sensed observations will allow us to better constrain fault zone rheology and physical properties, having implications for the overall understanding of earthquake physics, fault interactions, plate boundary deformation and earthquake hazard, preparedness and risk reduction.

  12. Material model for physically based rendering

    NASA Astrophysics Data System (ADS)

    Robart, Mathieu; Paulin, Mathias; Caubet, Rene

    1999-09-01

    In computer graphics, a complete knowledge of the interactions between light and a material is essential to obtain photorealistic pictures. Physical measurements allow us to obtain data on the material response, but are limited to industrial surfaces and depend on measurement conditions. Analytic models do exist, but they are often inadequate for common use: the empirical ones are too simple to be realistic, and the physically based ones are often too complex or too specialized to be generally useful. Therefore, we have developed a multiresolution virtual material model that describes not only the surface of a material but also its internal structure, thanks to distribution functions of microelements arranged in layers. Each microelement possesses its own response to incident light, from an elementary reflection to a complex response provided by its inner structure, taking into account the geometry, energy, polarization, etc., of each light ray. This model is virtually illuminated in order to compute its response to an incident radiance. This directional response is stored in a compressed data structure using spherical wavelets, and is intended for use in a rendering model such as directional radiosity.

  13. Experimental Validation of Various Temperature Models for Semi-Physical Tyre Model Approaches

    NASA Astrophysics Data System (ADS)

    Hackl, Andreas; Scherndl, Christoph; Hirschberg, Wolfgang; Lex, Cornelia

    2017-10-01

    With the increasing level of complexity and automation in automotive engineering, the simulation of safety-relevant Advanced Driver Assistance Systems (ADAS) leads to increasing accuracy demands in the description of tyre contact forces. In recent years, with improvements in tyre simulation, the need to account for tyre temperatures and the resulting changes in tyre characteristics has risen significantly. Therefore, experimental validation of three different temperature model approaches is carried out, discussed and compared in the scope of this article. To evaluate the range of application of the presented approaches, with a view to their further implementation in semi-physical tyre models, the main focus lies on a physical parameterisation. Aside from good modelling accuracy, focus is also placed on computational time and the complexity of the parameterisation process. To evaluate this process and discuss the results, measurements of a Hoosier 6.0 / 18.0 10 LCO C2000 racing tyre from an industrial flat test bench are used. Finally, the simulation results are compared with the measurement data.
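    For illustration, one of the simplest forms such a physically parameterised temperature model can take is a first-order lumped thermal balance for the tread. This generic sketch, with purely hypothetical coefficients, is not one of the three validated models from the article.

```python
def tyre_temp_step(T, p_friction, v, T_amb, dt,
                   m_c=8000.0, h0=15.0, h1=6.0):
    """One Euler step of the lumped thermal balance
       m*c * dT/dt = P_friction - (h0 + h1*v) * (T - T_ambient)
    m_c: thermal mass [J/K]; h0, h1: convection coefficients [W/K, W*s/(m*K)].
    All coefficient values here are illustrative assumptions."""
    h = h0 + h1 * v                       # speed-dependent convective cooling
    return T + dt * (p_friction - h * (T - T_amb)) / m_c

# Warm-up from ambient under constant friction power at 20 m/s
T = 20.0
for _ in range(600):                      # 10 minutes of 1 s steps
    T = tyre_temp_step(T, p_friction=3000.0, v=20.0, T_amb=20.0, dt=1.0)
```

    The temperature settles where friction heating balances convective cooling, here at about 42 degrees C; a physical parameterisation of such a model reduces to identifying the thermal mass and the convection coefficients from bench measurements.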

  14. "Let's Get Physical": Advantages of a Physical Model over 3D Computer Models and Textbooks in Learning Imaging Anatomy

    ERIC Educational Resources Information Center

    Preece, Daniel; Williams, Sarah B.; Lam, Richard; Weller, Renate

    2013-01-01

    Three-dimensional (3D) information plays an important part in medical and veterinary education. Appreciating complex 3D spatial relationships requires a strong foundational understanding of anatomy and mental 3D visualization skills. Novel learning resources have been introduced to anatomy training to achieve this. Objective evaluation of their…

  15. Efficient evaluation of wireless real-time control networks.

    PubMed

    Horvath, Peter; Yampolskiy, Mark; Koutsoukos, Xenofon

    2015-02-11

    In this paper, we present a system simulation framework for the design and performance evaluation of complex wireless cyber-physical systems. We describe the simulator architecture and the specific developments that are required to simulate cyber-physical systems relying on multi-channel, multi-hop mesh networks. We introduce realistic and efficient physical layer models and a system simulation methodology, which provides statistically significant performance evaluation results with low computational complexity. The capabilities of the proposed framework are illustrated using the example of WirelessHART, a centralized, real-time, multi-hop mesh network designed for industrial control and monitoring applications.

  16. From naive to sophisticated behavior in multiagent-based financial market models

    NASA Astrophysics Data System (ADS)

    Mansilla, R.

    2000-09-01

    The behavior of the physical complexity and the mutual information function of the outcome of a model of heterogeneous, inductively rational agents inspired by the El Farol Bar problem and the Minority Game is studied. The first quantity is a measure rooted in the Kolmogorov-Chaitin theory and the second a measure related to Shannon's information entropy. Extensive computer simulations were carried out, on the basis of which an ansatz for the physical complexity of the type C(l) = l^α is proposed, and the dependence of the exponent α on the parameters of the model is established. The accuracy of our results and their relationship with the behavior of the mutual information function as a measure of the time correlation of agents' choices are discussed.
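
    The power-law ansatz C(l) = l^α can be checked numerically by fitting in log-log space. A minimal sketch (the function name and synthetic data below are illustrative, not from the paper):

```python
import numpy as np

def fit_power_law_exponent(lengths, complexities):
    """Estimate alpha in C(l) = l**alpha by least squares in log-log space."""
    log_l = np.log(np.asarray(lengths, dtype=float))
    log_c = np.log(np.asarray(complexities, dtype=float))
    # the slope of log C versus log l is the exponent alpha
    alpha, _intercept = np.polyfit(log_l, log_c, 1)
    return alpha

# sanity check on noiseless synthetic data generated with alpha = 0.8
l = np.arange(10, 200, 10)
print(round(fit_power_law_exponent(l, l ** 0.8), 3))  # → 0.8
```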

  17. Recent advances in mathematical criminology. Comment on "Statistical physics of crime: A review" by M.R. D'Orsogna and M. Perc

    NASA Astrophysics Data System (ADS)

    Rodríguez, Nancy

    2015-03-01

    The use of mathematical tools has long proved useful in gaining understanding of complex systems in physics [1]. Recently, many researchers have realized that there is an analogy between emerging phenomena in complex social systems and complex physical or biological systems [4,5,12]. This realization has particularly benefited the modeling and understanding of crime, a ubiquitous phenomenon that is far from being understood. In fact, when one is interested in the bulk behavior of patterns that emerge from small and seemingly unrelated interactions and decisions occurring at the individual level, the mathematical tools that have been developed in statistical physics, game theory, network theory, dynamical systems, and partial differential equations can be useful in shedding light on the dynamics of these patterns [2-4,6,12].

  18. Recovering the Physical Properties of Molecular Gas in Galaxies from CO SLED Modeling

    NASA Astrophysics Data System (ADS)

    Kamenetzky, J.; Privon, G. C.; Narayanan, D.

    2018-05-01

    Modeling of the spectral line energy distribution (SLED) of the CO molecule can reveal the physical conditions (temperature and density) of molecular gas in Galactic clouds and other galaxies. Recently, the Herschel Space Observatory and ALMA have offered, for the first time, a comprehensive view of the rotational J = 4‑3 through J = 13‑12 lines, which arise from a complex, diverse range of physical conditions that must be simplified to one, two, or three components when modeled. Here we investigate the recoverability of physical conditions from SLEDs produced by galaxy evolution simulations containing a large dynamical range in physical properties. These simulated SLEDs were generally fit well by one component of gas whose properties largely resemble or slightly underestimate the luminosity-weighted properties of the simulations when clumping due to nonthermal velocity dispersion is taken into account. If only modeling the first three rotational lines, the median values of the marginalized parameter distributions better represent the luminosity-weighted properties of the simulations, but the uncertainties in the fitted parameters are nearly an order of magnitude, compared to approximately 0.2 dex in the “best-case” scenario of a fully sampled SLED through J = 10‑9. This study demonstrates that while common CO SLED modeling techniques cannot reveal the underlying complexities of the molecular gas, they can distinguish bulk luminosity-weighted properties that vary with star formation surface densities and galaxy evolution, if a sufficient number of lines are detected and modeled.

  19. An applet for the Gabor similarity scaling of the differences between complex stimuli.

    PubMed

    Margalit, Eshed; Biederman, Irving; Herald, Sarah B; Yue, Xiaomin; von der Malsburg, Christoph

    2016-11-01

    It is widely accepted that after the first cortical visual area, V1, a series of stages achieves a representation of complex shapes, such as faces and objects, so that they can be understood and recognized. A major challenge for the study of complex shape perception has been the lack of a principled basis for scaling the physical differences between stimuli so that their similarity can be specified, unconfounded by early-stage differences. Without the specification of such similarities, it is difficult to make sound inferences about the contributions of later stages to neural activity or psychophysical performance. A Web-based app is described that is based on the Malsburg Gabor-jet model (Lades et al., 1993), which allows easy specification of the V1 similarity of pairs of stimuli, no matter how intricate. The model predicts the psychophysical discriminability of metrically varying faces and complex blobs almost perfectly (Yue, Biederman, Mangini, von der Malsburg, & Amir, 2012), and serves as the input stage of a large family of contemporary neurocomputational models of vision.
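
    The Gabor-jet idea can be sketched in a few lines: filter both images with a bank of complex Gabor kernels and correlate the response magnitudes. This is a simplified stand-in for the actual Lades et al. (1993) implementation; the kernel parameters and function names below are illustrative choices, not the app's.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(freq, theta, sigma=4.0, size=21):
    """Complex Gabor kernel: a Gaussian envelope times a complex sinusoid."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    rotated = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.exp(2j * np.pi * freq * rotated)

def gabor_jet_vector(img, freqs=(0.1, 0.2, 0.3), n_orient=4):
    """Concatenated filter-response magnitudes over scales and orientations."""
    mags = []
    for f in freqs:
        for k in range(n_orient):
            kern = gabor_kernel(f, np.pi * k / n_orient)
            mags.append(np.abs(fftconvolve(img, kern, mode="same")).ravel())
    return np.concatenate(mags)

def gabor_similarity(img_a, img_b):
    """Pearson correlation of the two magnitude vectors (1.0 = identical)."""
    a, b = gabor_jet_vector(img_a), gabor_jet_vector(img_b)
    return float(np.corrcoef(a, b)[0, 1])
```

    An image compared with itself scores 1.0, so the measure can be used to scale early-stage similarity between pairs of stimuli before probing later stages.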

  20. Representing Operational Modes for Situation Awareness

    NASA Astrophysics Data System (ADS)

    Kirchhübel, Denis; Lind, Morten; Ravn, Ole

    2017-01-01

    Operating complex plants is an increasingly demanding task for human operators. Diagnosis of and reaction to on-line events requires the interpretation of real-time data. Vast amounts of sensor data, as well as operational knowledge about the state and design of the plant, are necessary to deduce reasonable reactions to abnormal situations. Intelligent computational support tools can make the operator's task easier, but they require knowledge about the overall system in the form of some model. While fault-tolerant control design tools based on physical principles and relations are valuable for designing robust systems, the models become too complex when considering interactions at a plant-wide level. The alarm systems meant to support human operators in diagnosing the plant-wide situation, on the other hand, regularly fail in situations where these interactions between systems lead to many related alarms, overloading the operator with alarm floods. Functional modelling can provide a middle way to reduce the complexity of plant-wide models by abstracting from physical details to more general functions and behaviours. Based on functional models, the propagation of failures through the interconnected systems can be inferred, and alarm floods can potentially be reduced to their root cause. However, the desired behaviour of a complex system changes due to operating procedures that require more than one physical and functional configuration. In this paper, a consistent representation of possible configurations is deduced from the analysis of an exemplary start-up procedure using functional models. The proposed interpretation of the modelling concepts simplifies the functional modelling of distinct modes. The analysis further reveals relevant links between the quantitative sensor data and the qualitative perspective of the diagnostics tool based on functional models. This will form the basis for the ongoing development of a novel real-time diagnostics system based on the on-line adaptation of the underlying MFM model.

  1. How human drivers control their vehicle

    NASA Astrophysics Data System (ADS)

    Wagner, P.

    2006-08-01

    The data presented here show that human drivers apply a discrete, noisy control mechanism to drive their vehicle. A car-following model built on these observations, together with some physical limitations (crash-freeness, bounded acceleration), leads to non-Gaussian probability distributions of the speed difference and distance, which are in good agreement with empirical data. All model parameters have a clear physical meaning and can be measured. Despite its apparent complexity, this model is simple to understand and might serve as a starting point for developing even quantitatively correct models.
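
    A toy simulation in the spirit of this model (discrete, noisy control events plus an acceleration bound and a crash-freeness floor) can make the ingredients concrete; every parameter value below is invented for illustration, not taken from the paper:

```python
import numpy as np

def simulate_follower(n_steps=5000, dt=0.1, leader_speed=25.0,
                      update_prob=0.05, accel_noise=0.5, a_max=2.0, seed=0):
    """Follower updates its acceleration only at random discrete instants,
    with control noise, a bounded acceleration and a minimum-gap floor."""
    rng = np.random.default_rng(seed)
    gap, v, a = 30.0, leader_speed, 0.0   # distance gap, speed, acceleration
    dv = leader_speed - v                 # speed difference to the leader
    gaps, dvs = [], []
    for _ in range(n_steps):
        if rng.random() < update_prob:    # discrete control events
            # steer toward a ~2 s time headway, with noisy control
            a = 0.1 * (gap - 2.0 * v) + 0.5 * dv + accel_noise * rng.normal()
            a = float(np.clip(a, -a_max, a_max))  # acceleration limit
        v = max(0.0, v + a * dt)
        dv = leader_speed - v
        gap = max(0.1, gap + dv * dt)     # crude crash-freeness floor
        gaps.append(gap)
        dvs.append(dv)
    return np.array(gaps), np.array(dvs)

gaps, dvs = simulate_follower()
```

    Histograms of `dvs` and `gaps` from such a run can then be compared against Gaussian fits to see the effect of the intermittent control.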

  2. Let's Get Physical: Teaching Physics through Gymnastics

    ERIC Educational Resources Information Center

    Sojourner, Elena J.; Burgasser, Adam J.; Weise, Eric D.

    2018-01-01

    The concept of embodied learning--that we can learn with our bodies and with our minds--is a well-established concept in physics and math education research, and includes symbolic understanding (e.g., gestures that track how students think or facilitate learning to model complex systems of energy flow) as well as the literal experience of…

  3. The SPARK Programs: A Public Health Model of Physical Education Research and Dissemination

    ERIC Educational Resources Information Center

    McKenzie, Thomas L.; Sallis, James F.; Rosengard, Paul; Ballard, Kymm

    2016-01-01

    SPARK [Sports, Play, and Active Recreation for Kids], in its current form, is a brand that represents a collection of exemplary, research-based, physical education and physical activity programs that emphasize a highly active curriculum, on-site staff development, and follow-up support. Given its complexity (e.g., multiple school levels, inclusion…

  4. Large Eddy Simulation of High Reynolds Number Complex Flows

    NASA Astrophysics Data System (ADS)

    Verma, Aman

    Marine configurations are subject to a variety of complex hydrodynamic phenomena affecting the overall performance of the vessel. The turbulent flow affects the hydrodynamic drag, propulsor performance and structural integrity, control-surface effectiveness, and acoustic signature of the marine vessel. Due to advances in massively parallel computers and numerical techniques, an unsteady numerical simulation methodology such as Large Eddy Simulation (LES) is well suited to study such complex turbulent flows, whose Reynolds numbers (Re) are typically on the order of 10^6. LES also promises increased accuracy over RANS-based methods in predicting unsteady phenomena such as cavitation and noise production. This dissertation develops the capability to enable LES of high-Re flows in complex geometries (e.g. a marine vessel) on unstructured grids and provides physical insight into the turbulent flow. LES is performed to investigate the geometry-induced separated flow past a marine propeller attached to a hull, in an off-design condition called crashback. LES shows good quantitative agreement with experiments and provides a physical mechanism to explain the increase in side-force on the propeller blades below an advance ratio of J = -0.7. Fundamental developments in the dynamic subgrid-scale model for LES are pursued to improve the LES predictions, especially for complex flows on unstructured grids. A dynamic procedure is proposed to estimate a Lagrangian time scale based on a surrogate correlation without any adjustable parameter. The proposed model is applied to turbulent channel, cylinder and marine propeller flows and predicts improved results over other model variants due to a physically consistent Lagrangian time scale. A wall model is proposed for application to LES of high Reynolds number wall-bounded flows. The wall model is formulated as the minimization of a generalized constraint in the dynamic model for LES and applied to LES of turbulent channel flow at various Reynolds numbers up to Reτ = 10000 and coarse grid resolutions, yielding significant improvement.

  5. A study of the electrical properties of complex resistor network based on NW model

    NASA Astrophysics Data System (ADS)

    Chang, Yunfeng; Li, Yunting; Yang, Liu; Guo, Lu; Liu, Gaochao

    2015-04-01

    The power and resistance of a two-port complex resistor network based on the NW small-world network model are studied in this paper. Mainly, we study the dependence of the network power and resistance on the degree of the port vertices, the connection probability and the shortest distance. A qualitative analysis and a simplified formula for the network resistance are given. Finally, we define a branching parameter and explain its physical meaning in the analysis of complex resistor networks.
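
    The two-port resistance of such a network can be computed exactly from the Moore-Penrose pseudoinverse of the graph Laplacian, R_ab = L+[a,a] + L+[b,b] - 2 L+[a,b]. A sketch using networkx's NW (Newman-Watts) generator; the graph size, port nodes and probabilities are arbitrary illustrative choices, not the paper's:

```python
import numpy as np
import networkx as nx

def two_port_resistance(G, a, b, r_edge=1.0):
    """Effective resistance between nodes a and b when every edge is an
    r_edge-ohm resistor: R_ab = L+[a, a] + L+[b, b] - 2 L+[a, b], where
    L+ is the pseudoinverse of the conductance graph Laplacian."""
    L = nx.laplacian_matrix(G).toarray().astype(float) / r_edge
    Lp = np.linalg.pinv(L)
    return Lp[a, a] + Lp[b, b] - 2.0 * Lp[a, b]

# NW small-world network: a ring lattice plus random shortcut edges
G = nx.newman_watts_strogatz_graph(n=100, k=4, p=0.1, seed=1)
print(two_port_resistance(G, 0, 50))
```

    As a sanity check, two unit resistors in series (a three-node path graph) give an end-to-end resistance of 2.0.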

  6. Field Extension of Real Values of Physical Observables in Classical Theory can Help Attain Quantum Results

    NASA Astrophysics Data System (ADS)

    Wang, Hai; Kumar, Asutosh; Cho, Minhyung; Wu, Junde

    2018-04-01

    Physical quantities are assumed to take real values, which stems from the fact that a typical measuring instrument measuring a physical observable always yields a real number. Here we consider the question of what would happen if physical observables were allowed to assume complex values. In this paper, we show that by allowing observables in the Bell inequality to take complex values, a classical physical theory can actually attain the same upper bound of the Bell expression as quantum theory. Also, by extending the real field to the quaternionic field, we can resolve the GHZ problem using a local hidden-variable model. Furthermore, we try to build a new type of hidden-variable theory of a single qubit based on this result.

  7. Agent-Based Models in Social Physics

    NASA Astrophysics Data System (ADS)

    Quang, Le Anh; Jung, Nam; Cho, Eun Sung; Choi, Jae Han; Lee, Jae Woo

    2018-06-01

    We review agent-based models (ABMs) in social physics, including econophysics. An ABM consists of agents, a system space, and an external environment. An agent is autonomous and decides its behavior by interacting with its neighbors or the external environment according to rules of behavior. Agents are irrational because they have only limited information when they make decisions, and they adapt by learning from past memories. Agents have various attributes and are heterogeneous. An ABM is a non-equilibrium complex system that exhibits various emergence phenomena. Social-complexity ABMs describe human behavioral characteristics. Among ABMs in econophysics, we introduce the Sugarscape model and artificial market models. We review minority games and majority games in ABMs of game theory. Social-flow ABMs cover crowding, evacuation, traffic congestion, and pedestrian dynamics. We also review ABMs for opinion dynamics and the voter model. Finally, we discuss the features, advantages and disadvantages of NetLogo, Repast, Swarm, and MASON, which are representative platforms for implementing ABMs.

  8. iMarNet: an ocean biogeochemistry model inter-comparison project within a common physical ocean modelling framework

    NASA Astrophysics Data System (ADS)

    Kwiatkowski, L.; Yool, A.; Allen, J. I.; Anderson, T. R.; Barciela, R.; Buitenhuis, E. T.; Butenschön, M.; Enright, C.; Halloran, P. R.; Le Quéré, C.; de Mora, L.; Racault, M.-F.; Sinha, B.; Totterdell, I. J.; Cox, P. M.

    2014-07-01

    Ocean biogeochemistry (OBGC) models span a wide range of complexities from highly simplified, nutrient-restoring schemes, through nutrient-phytoplankton-zooplankton-detritus (NPZD) models that crudely represent the marine biota, through to models that represent a broader trophic structure by grouping organisms as plankton functional types (PFT) based on their biogeochemical role (Dynamic Green Ocean Models; DGOM) and ecosystem models which group organisms by ecological function and trait. OBGC models are now integral components of Earth System Models (ESMs), but they compete for computing resources with higher resolution dynamical setups and with other components such as atmospheric chemistry and terrestrial vegetation schemes. As such, the choice of OBGC in ESMs needs to balance model complexity and realism alongside relative computing cost. Here, we present an inter-comparison of six OBGC models that were candidates for implementation within the next UK Earth System Model (UKESM1). The models cover a large range of biological complexity (from 7 to 57 tracers) but all include representations of at least the nitrogen, carbon, alkalinity and oxygen cycles. Each OBGC model was coupled to the Nucleus for the European Modelling of the Ocean (NEMO) ocean general circulation model (GCM), and results from physically identical hindcast simulations were compared. Model skill was evaluated for biogeochemical metrics of global-scale bulk properties using conventional statistical techniques. The computing cost of each model was also measured in standardised tests run at two resource levels. No model is shown to consistently outperform or underperform all other models across all metrics. Nonetheless, the simpler models that are easier to tune are broadly closer to observations across a number of fields, and thus offer a high-efficiency option for ESMs that prioritise high resolution climate dynamics. 
However, simpler models provide limited insight into more complex marine biogeochemical processes and ecosystem pathways, and a parallel approach of low resolution climate dynamics and high complexity biogeochemistry is desirable in order to provide additional insights into biogeochemistry-climate interactions.

  9. iMarNet: an ocean biogeochemistry model intercomparison project within a common physical ocean modelling framework

    NASA Astrophysics Data System (ADS)

    Kwiatkowski, L.; Yool, A.; Allen, J. I.; Anderson, T. R.; Barciela, R.; Buitenhuis, E. T.; Butenschön, M.; Enright, C.; Halloran, P. R.; Le Quéré, C.; de Mora, L.; Racault, M.-F.; Sinha, B.; Totterdell, I. J.; Cox, P. M.

    2014-12-01

    Ocean biogeochemistry (OBGC) models span a wide variety of complexities, including highly simplified nutrient-restoring schemes, nutrient-phytoplankton-zooplankton-detritus (NPZD) models that crudely represent the marine biota, models that represent a broader trophic structure by grouping organisms as plankton functional types (PFTs) based on their biogeochemical role (dynamic green ocean models) and ecosystem models that group organisms by ecological function and trait. OBGC models are now integral components of Earth system models (ESMs), but they compete for computing resources with higher resolution dynamical setups and with other components such as atmospheric chemistry and terrestrial vegetation schemes. As such, the choice of OBGC in ESMs needs to balance model complexity and realism alongside relative computing cost. Here we present an intercomparison of six OBGC models that were candidates for implementation within the next UK Earth system model (UKESM1). The models cover a large range of biological complexity (from 7 to 57 tracers) but all include representations of at least the nitrogen, carbon, alkalinity and oxygen cycles. Each OBGC model was coupled to the ocean general circulation model Nucleus for European Modelling of the Ocean (NEMO) and results from physically identical hindcast simulations were compared. Model skill was evaluated for biogeochemical metrics of global-scale bulk properties using conventional statistical techniques. The computing cost of each model was also measured in standardised tests run at two resource levels. No model is shown to consistently outperform all other models across all metrics. Nonetheless, the simpler models are broadly closer to observations across a number of fields and thus offer a high-efficiency option for ESMs that prioritise high-resolution climate dynamics. 
However, simpler models provide limited insight into more complex marine biogeochemical processes and ecosystem pathways, and a parallel approach of low-resolution climate dynamics and high-complexity biogeochemistry is desirable in order to provide additional insights into biogeochemistry-climate interactions.

  10. Tele-Immersion: Preferred Infrastructure for Anatomy Instruction

    ERIC Educational Resources Information Center

    Silverstein, Jonathan C.; Ehrenfeld, Jesse M.; Croft, Darin A.; Dech, Fred W.; Small, Stephen; Cook, Sandy

    2006-01-01

    Understanding spatial relationships among anatomic structures is an essential skill for physicians. Traditional medical education--using books, lectures, physical models, and cadavers--may be insufficient for teaching complex anatomical relationships. This study was designed to measure whether teaching complex anatomy to medical students using…

  11. Scientific and technical complex for modeling, researching and testing of rocket-space vehicles’ electric power installations

    NASA Astrophysics Data System (ADS)

    Bezruchko, Konstantin; Davidov, Albert

    2009-01-01

    This article describes the scientific and technical complex for modelling, researching and testing rocket-space vehicles' power installations that was created in the Power Source Laboratory of the National Aerospace University "KhAI". This scientific and technical complex makes it possible to replace full-scale tests with model tests and to reduce the financial and time costs of modelling, researching and testing rocket-space vehicles' power installations. Using the given complex, the problems of designing and researching rocket-space vehicles' power installations can be solved efficiently, and experimental studies of physical processes and tests of solar and chemical batteries of rocket-space complexes and space vehicles can be carried out. The scientific and technical complex also supports accelerated tests, diagnostics, lifetime monitoring and restoration of chemical accumulators for rocket-space vehicles' power supply systems.

  12. Graphene growth process modeling: a physical-statistical approach

    NASA Astrophysics Data System (ADS)

    Wu, Jian; Huang, Qiang

    2014-09-01

    As a zero-bandgap semiconductor, graphene is an attractive material for a wide variety of applications such as optoelectronics. Among the various techniques developed for graphene synthesis, chemical vapor deposition on copper foils shows high potential for producing few-layer and large-area graphene. Since the fabrication of high-quality graphene sheets requires an understanding of the growth mechanisms, as well as methods for characterizing and controlling the grain size of graphene flakes, analytical modeling of the graphene growth process is essential for controlled fabrication. The graphene growth process starts with randomly nucleated islands that gradually develop into complex shapes, grow in size, and eventually connect together to cover the copper foil. To model this complex process, we develop a physical-statistical approach under the assumption of self-similarity during graphene growth. The growth kinetics is uncovered by separating island shapes from the area growth rate. We propose to characterize the area growth velocity using a confined exponential model, which not only has a clear physical explanation, but also fits the real data well. For the shape modeling, we develop a parametric shape model which can be well explained by the angle-dependent growth rate. This work provides useful information for the control and optimization of the graphene growth process on Cu foil.
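
    A confined exponential growth law has the form A(t) = A_max (1 - exp(-k t)): growth is fast while the foil is bare and saturates as coverage completes. A minimal fit to synthetic data, with all values invented for illustration (this is not the authors' code or parameterization):

```python
import numpy as np
from scipy.optimize import curve_fit

def confined_exponential(t, a_max, k):
    """Island area that grows quickly at first and saturates at a_max."""
    return a_max * (1.0 - np.exp(-k * t))

# synthetic noisy growth data generated with a_max = 5.0, k = 0.3
t = np.linspace(0.0, 20.0, 50)
rng = np.random.default_rng(0)
area = confined_exponential(t, 5.0, 0.3) + 0.01 * rng.normal(size=t.size)

popt, _ = curve_fit(confined_exponential, t, area, p0=(1.0, 0.1))
print(popt)  # recovered (a_max, k), close to (5.0, 0.3)
```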

  13. The sensitivity of precipitation simulations to the soot aerosol presence

    NASA Astrophysics Data System (ADS)

    Palamarchuk, Iuliia; Ivanov, Sergiy; Mahura, Alexander; Ruban, Igor

    2016-04-01

    The role of aerosols in nonlinear feedbacks on atmospheric processes is the focus of much research. In particular, the importance of black carbon particles for the evolution of weather, including precipitation formation and release, is investigated both by numerical modelling and with observation networks. However, certain discrepancies between results obtained by different methods remain. The increasing complexity of numerical weather modelling systems leads to larger volumes of output data and promises to reveal new aspects of the complexity of interactions and feedbacks. The Harmonie-38h1.2 model with the AROME physical package is used to study changes in the precipitation life cycle under black-carbon-polluted conditions. The model configuration includes a radar data assimilation procedure on a high-resolution domain covering the Scandinavia region. Model results show that the precipitation rate and distribution, as well as other variables of atmospheric dynamics and physics over the domain, are sensitive to aerosol concentrations. Attention should also be paid to numerical aspects, such as the list of observation types involved in assimilation. The use of high-resolution radar information makes it possible to include mesoscale features in the initial conditions and to decrease the growth rate of the model error with lead time.

  14. Thermostatted kinetic equations as models for complex systems in physics and life sciences.

    PubMed

    Bianca, Carlo

    2012-12-01

    Statistical mechanics is a powerful method for understanding equilibrium thermodynamics. An equivalent theoretical framework for nonequilibrium systems has remained elusive. The thermodynamic forces driving the system away from equilibrium introduce energy that must be dissipated if nonequilibrium steady states are to be obtained. Historically, additional terms were introduced, collectively called a thermostat, whose original application was to generate constant-temperature equilibrium ensembles. This review surveys kinetic models coupled with time-reversible deterministic thermostats for the modeling of large systems composed of both inert matter particles and living entities. The introduction of deterministic thermostats makes it possible to model the onset of the nonequilibrium stationary states that are typical of most real-world complex systems. The first part of the paper focuses on a general presentation of the main physical and mathematical definitions and tools: nonequilibrium phenomena, the Gauss least-constraint principle and Gaussian thermostats. The second part provides a review of a variety of thermostatted mathematical models in physics and the life sciences, including the Kac, Boltzmann and Jager-Segel models and the thermostatted (continuous and discrete) kinetic theory of active particles. Applications include semiconductor devices, nanosciences, biological phenomena, vehicular traffic, social and economic systems, and crowd and swarm dynamics. Copyright © 2012 Elsevier B.V. All rights reserved.

  15. Sensitivity of Precipitation in Coupled Land-Atmosphere Models

    NASA Technical Reports Server (NTRS)

    Neelin, David; Zeng, N.; Suarez, M.; Koster, R.

    2004-01-01

    The project objective was to understand mechanisms by which atmosphere-land-ocean processes impact precipitation in the mean climate and interannual variations, focusing on tropical and subtropical regions. A combination of modeling tools was used: an intermediate complexity land-atmosphere model developed at UCLA known as the QTCM and the NASA Seasonal-to-Interannual Prediction Program general circulation model (NSIPP GCM). The intermediate complexity model was used to develop hypotheses regarding the physical mechanisms and theory for the interplay of large-scale dynamics, convective heating, cloud radiative effects and land surface feedbacks. The theoretical developments were to be confronted with diagnostics from the more complex GCM to validate or modify the theory.

  16. Modeling wind fields and fire propagation following bark beetle outbreaks in spatially-heterogeneous pinyon-juniper woodland fuel complexes

    Treesearch

    Rodman R. Linn; Carolyn H. Sieg; Chad M. Hoffman; Judith L. Winterkamp; Joel D. McMillin

    2013-01-01

    We used a physics-based model, HIGRAD/FIRETEC, to explore changes in within-stand wind behavior and fire propagation associated with three time periods in pinyon-juniper woodlands following a drought-induced bark beetle outbreak and subsequent tree mortality. Pinyon-juniper woodland fuel complexes are highly heterogeneous. Trees often are clumped, with sparse patches...

  17. A Complex Network Approach to Stylometry

    PubMed Central

    Amancio, Diego Raphael

    2015-01-01

    Statistical methods have been widely employed to study the fundamental properties of language. In recent years, methods from complex and dynamical systems proved useful to create several language models. Despite the large amount of studies devoted to represent texts with physical models, only a limited number of studies have shown how the properties of the underlying physical systems can be employed to improve the performance of natural language processing tasks. In this paper, I address this problem by devising complex networks methods that are able to improve the performance of current statistical methods. Using a fuzzy classification strategy, I show that the topological properties extracted from texts complement the traditional textual description. In several cases, the performance obtained with hybrid approaches outperformed the results obtained when only traditional or networked methods were used. Because the proposed model is generic, the framework devised here could be straightforwardly used to study similar textual applications where the topology plays a pivotal role in the description of the interacting agents. PMID:26313921

  18. Physical modelling of the nuclear pore complex

    PubMed Central

    Fassati, Ariberto; Ford, Ian J.; Hoogenboom, Bart W.

    2013-01-01

    Physically interesting behaviour can arise when soft matter is confined to nanoscale dimensions. A highly relevant biological example of such a phenomenon is the Nuclear Pore Complex (NPC) found perforating the nuclear envelope of eukaryotic cells. In the central conduit of the NPC, of ∼30–60 nm diameter, a disordered network of proteins regulates all macromolecular transport between the nucleus and the cytoplasm. In spite of a wealth of experimental data, the selectivity barrier of the NPC has yet to be explained fully. Experimental and theoretical approaches are complicated by the disordered and heterogeneous nature of the NPC conduit. Modelling approaches have focused on the behaviour of the partially unfolded protein domains in the confined geometry of the NPC conduit, and have demonstrated that within the range of parameters thought relevant for the NPC, widely varying behaviour can be observed. In this review, we summarise recent efforts to physically model the NPC barrier and function. We illustrate how attempts to understand NPC barrier function have employed many different modelling techniques, each of which have contributed to our understanding of the NPC.

  19. Forum: The challenge of global change

    NASA Astrophysics Data System (ADS)

    Roederer, Juan G.

    1990-09-01

    How can we sustain a public sense of the common danger of global change while remaining honest in view of the realities of scientific uncertainty? How can we nurture this sense of common danger without making statements based on half-baked ideas, statistically unreliable results, or oversimplified models? How can we strike a balance between the need to overstate a case to attract the attention of the media and the obligation to adhere strictly to the ethos of science?The task of achieving a scientific understanding of the inner workings of the terrestrial environment is one of the most difficult and ambitious endeavors of humankind. It is full of traps, temptations and deceptions for the participating scientists. We are dealing with a horrendously complex, strongly interactive, highly non-linear system. Lessons learned from disciplines such as plasma physics and solid state physics which have been dealing with complex non-linear systems for decades, are not very encouraging. The first thing one learns is that there are intrinsic, physical limits to the quantitative predictability of a complex system that have nothing to do with the particular techniques employed to model it.

  20. Development of an Unstructured, Three-Dimensional Material Response Design Tool

    NASA Technical Reports Server (NTRS)

    Schulz, Joseph; Stern, Eric; Palmer, Grant; Muppidi, Suman; Schroeder, Olivia

    2017-01-01

    A preliminary verification and validation of a new material response model is presented. This model, Icarus, is intended to serve as a design tool for the thermal protection systems of re-entry vehicles. Currently, the capability of the model is limited to simulating the pyrolysis of a material as a result of the radiative and convective surface heating imposed on the material by the surrounding high-enthalpy gas. Since the major focus behind the development of Icarus has been model extensibility, the hope is that additional physics can be quickly added. The extensibility is critical since thermal protection systems are becoming increasingly complex, e.g. woven carbon polymers. Additionally, as a three-dimensional, unstructured, finite-volume model, Icarus is capable of modeling complex geometries as well as multi-dimensional physics, which have been shown to be important in some scenarios and are not captured by one-dimensional models. In this paper, the mathematical and numerical formulation is presented, followed by a discussion of the software architecture and some preliminary verification and validation studies.

  1. Characterizing time series: when Granger causality triggers complex networks

    NASA Astrophysics Data System (ADS)

    Ge, Tian; Cui, Yindong; Lin, Wei; Kurths, Jürgen; Liu, Chong

    2012-08-01

    In this paper, we propose a new approach to characterize time series with noise perturbations in both the time and frequency domains by combining Granger causality and complex networks. We construct directed and weighted complex networks from time series and use representative network measures to describe their physical and topological properties. Through analyzing the typical dynamical behaviors of some physical models and the MIT-BIH (Massachusetts Institute of Technology-Beth Israel Hospital) human electrocardiogram data sets, we show that the proposed approach is able to capture and characterize various dynamics and has much potential for analyzing real-world time series of rather short length.
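    The construction described above can be sketched in a few lines: estimate a Granger-style causal weight for each ordered pair of time series and use the weights as a directed, weighted adjacency matrix. The code below is a toy illustration (plain least squares, no significance testing) and is not the authors' implementation:

```python
import numpy as np

def granger_weight(x, y, p=2):
    """Granger-style weight for x -> y: relative drop in the residual
    variance of y when p lags of x are added to its AR(p) model.
    A toy illustration without the usual F-test."""
    n = len(y)
    A = np.array([np.r_[y[t - p:t], x[t - p:t], 1.0] for t in range(p, n)])
    b = y[p:]
    own = list(range(p)) + [2 * p]          # y's own lags plus intercept
    res_r = b - A[:, own] @ np.linalg.lstsq(A[:, own], b, rcond=None)[0]
    res_f = b - A @ np.linalg.lstsq(A, b, rcond=None)[0]
    return max(0.0, 1.0 - res_f.var() / res_r.var())

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.normal()   # x drives y at lag 1

# Directed, weighted adjacency matrix of a two-node network: entry [i, j]
# is the causal weight i -> j; the x -> y entry dominates, as it should.
W = np.array([[0.0, granger_weight(x, y)],
              [granger_weight(y, x), 0.0]])
```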

  2. Wuestite (Fe/1-x/O) - A review of its defect structure and physical properties

    NASA Technical Reports Server (NTRS)

    Hazen, R. M.; Jeanloz, R.

    1984-01-01

    Complexities of the wustite structure, such as nonstoichiometry, variable ferric iron site distribution, long- and short-range ordering, and exsolution, yield complex physical properties. Magnesiowustite, a phase which has been suggested to occur in the earth's lower mantle, is also expected to exhibit many of these complexities. Geophysical models including the properties of (Mg, Fe)O should accordingly take into account the uncertainties associated with the synthesis and measurement of iron-rich oxides. Given the variability of the Fe(1-x)O structure, it is important that future researchers define the structural state and extent of exsolution of their samples.

  3. Model annotation for synthetic biology: automating model to nucleotide sequence conversion

    PubMed Central

    Misirli, Goksel; Hallinan, Jennifer S.; Yu, Tommy; Lawson, James R.; Wimalaratne, Sarala M.; Cooling, Michael T.; Wipat, Anil

    2011-01-01

    Motivation: The need for the automated computational design of genetic circuits is becoming increasingly apparent with the advent of ever more complex and ambitious synthetic biology projects. Currently, most circuits are designed through the assembly of models of individual parts such as promoters, ribosome binding sites and coding sequences. These low level models are combined to produce a dynamic model of a larger device that exhibits a desired behaviour. The larger model then acts as a blueprint for physical implementation at the DNA level. However, the conversion of models of complex genetic circuits into DNA sequences is a non-trivial undertaking due to the complexity of mapping the model parts to their physical manifestation. Automating this process is further hampered by the lack of computationally tractable information in most models. Results: We describe a method for automatically generating DNA sequences from dynamic models implemented in CellML and Systems Biology Markup Language (SBML). We also identify the metadata needed to annotate models to facilitate automated conversion, and propose and demonstrate a method for the markup of these models using RDF. Our algorithm has been implemented in a software tool called MoSeC. Availability: The software is available from the authors' web site http://research.ncl.ac.uk/synthetic_biology/downloads.html. Contact: anil.wipat@ncl.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:21296753

  4. Lithium-ion battery models: a comparative study and a model-based powerline communication

    NASA Astrophysics Data System (ADS)

    Saidani, Fida; Hutter, Franz X.; Scurtu, Rares-George; Braunwarth, Wolfgang; Burghartz, Joachim N.

    2017-09-01

    In this work, various Lithium-ion (Li-ion) battery models are evaluated according to their accuracy, complexity and physical interpretability. An initial classification into physical, empirical and abstract models is introduced. Also known as white, black and grey boxes, respectively, the nature and characteristics of these model types are compared. Since the Li-ion battery cell is a thermo-electro-chemical system, the models are either in the thermal or in the electrochemical state-space. Physical models attempt to capture key features of the physical process inside the cell. Empirical models describe the system with empirical parameters but offer poor analytical insight, whereas abstract models provide an alternative representation. In addition, a model selection guideline is proposed based on applications and design requirements. A complex model with a detailed analytical insight is of use for battery designers but impractical for real-time applications and in situ diagnosis. In automotive applications, an abstract model reproducing the battery behavior in an equivalent but more practical form, mainly as an equivalent circuit diagram, is recommended for the purpose of battery management. As a general rule, a trade-off should be reached between high fidelity and computational feasibility. Especially if the model is embedded in a real-time monitoring unit such as a microprocessor or an FPGA, the calculation time and memory requirements rise dramatically with a higher number of parameters. Moreover, examples of equivalent circuit models of Lithium-ion batteries are covered. Equivalent circuit topologies are introduced and compared according to the previously introduced criteria. An experimental sequence to model a 20 Ah cell is presented and the results are used for the purposes of powerline communication.
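    As a concrete example of the abstract (equivalent-circuit) class discussed above, the sketch below simulates a first-order Thevenin model: an open-circuit-voltage source behind a series resistance R0 and one RC polarization pair. All parameter values are illustrative placeholders, not measurements from the 20 Ah cell in the paper:

```python
import math

def simulate(i_load, dt=1.0, ocv=3.7, r0=0.01, r1=0.02, c1=2000.0):
    """Terminal voltage of a 1-RC Thevenin equivalent circuit under a
    discharge current profile i_load (A). Placeholder parameters."""
    v_rc = 0.0                      # voltage across the RC polarization pair
    tau = r1 * c1                   # RC time constant (s)
    out = []
    for i in i_load:
        # Exact discrete update of dv/dt = -v/tau + i/C1 over one step.
        v_rc = v_rc * math.exp(-dt / tau) + r1 * i * (1 - math.exp(-dt / tau))
        out.append(ocv - i * r0 - v_rc)
    return out

v = simulate([10.0] * 600)          # 10 A constant discharge for 10 minutes
# Immediate ohmic drop i*R0, then a slower exponential polarization droop
# toward ocv - i*(r0 + r1); this low parameter count is what makes such
# models practical for embedded battery management.
```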

  5. Regional-scale brine migration along vertical pathways due to CO2 injection - Part 2: A simulated case study in the North German Basin

    NASA Astrophysics Data System (ADS)

    Kissinger, Alexander; Noack, Vera; Knopf, Stefan; Konrad, Wilfried; Scheer, Dirk; Class, Holger

    2017-06-01

    Saltwater intrusion into potential drinking water aquifers due to the injection of CO2 into deep saline aquifers is one of the hazards associated with the geological storage of CO2. Thus, in a site-specific risk assessment, models for predicting the fate of the displaced brine are required. Practical simulation of brine displacement involves decisions regarding the complexity of the model. The choice of an appropriate level of model complexity depends on multiple criteria: the target variable of interest, the relevant physical processes, the computational demand, the availability of data, and the data uncertainty. In this study, we set up a regional-scale geological model for a realistic (but not real) onshore site in the North German Basin with characteristic geological features for that region. A major aim of this work is to identify the relevant parameters controlling saltwater intrusion in a complex structural setting and to test the applicability of different model simplifications. The model that is used to identify relevant parameters fully couples flow in shallow freshwater aquifers and deep saline aquifers. This model also includes variable-density transport of salt and realistically incorporates surface boundary conditions with groundwater recharge. The complexity of this model is then reduced in several steps, by neglecting physical processes (two-phase flow near the injection well, variable-density flow) and by simplifying the complex geometry of the geological model. The results indicate that the initial salt distribution prior to the injection of CO2 is one of the key parameters controlling shallow aquifer salinization. However, determining the initial salt distribution involves large uncertainties in the regional-scale hydrogeological parameterization and requires complex and computationally demanding models (regional-scale variable-density salt transport). In order to evaluate strategies for minimizing leakage into shallow aquifers, other target variables can be considered, such as the volumetric leakage rate into shallow aquifers or the pressure buildup in the injection horizon. Our results show that simplified models, which neglect variable-density salt transport, can reach an acceptable agreement with more complex models.

  6. Rock.XML - Towards a library of rock physics models

    NASA Astrophysics Data System (ADS)

    Jensen, Erling Hugo; Hauge, Ragnar; Ulvmoen, Marit; Johansen, Tor Arne; Drottning, Åsmund

    2016-08-01

    Rock physics modelling provides tools for correlating the physical properties of rocks and their constituents to the geophysical observations we measure on a larger scale. Many different theoretical and empirical models exist to cover the range of different types of rocks. However, upon reviewing these, we see that they are all built around a few main concepts. Based on this observation, we propose a format for digitally storing the specifications of rock physics models, which we have named Rock.XML. It contains not only data about the various constituents, but also the theories and how they are used to combine these building blocks into a representative model for a particular rock. The format is based on the Extensible Markup Language (XML), making it flexible enough to handle complex models as well as scalable towards extending it with new theories and models. This technology has great advantages for documenting and exchanging models in an unambiguous way between people and between software. Rock.XML can become a platform for creating a library of rock physics models, making them more accessible to everyone.

  7. An Amotivation Model in Physical Education

    ERIC Educational Resources Information Center

    Shen, Bo; Wingert, Robert K.; Li, Weidong; Sun, Haichun; Rukavina, Paul Bernard

    2010-01-01

    Amotivation refers to a state in which individuals cannot perceive a relationship between their behavior and that behavior's subsequent outcome. With the belief that considering amotivation as a multidimensional construct could reflect the complexity of motivational deficits in physical education, we developed this study to validate an amotivation…

  8. Fractal and multifractal models for extreme bursts in space plasmas.

    NASA Astrophysics Data System (ADS)

    Watkins, Nicholas; Chapman, Sandra; Credgington, Dan; Rosenberg, Sam; Sanchez, Raul

    2010-05-01

    Space plasmas may be said to show at least two types of "universality". One type arises from the fact that plasma physics underpins all astrophysical systems, while another arises from the generic properties of coupled nonlinear physical systems, a branch of the emerging science of complexity. Much work in complexity science is contributing to the physical understanding of the ways by which complex interactions in such systems cause driven or random perturbations to be nonlinearly amplified in amplitude and/or spread out over a wide range of frequencies. These mechanisms lead to non-Gaussian fluctuations and long-ranged temporal memory (referred to by Mandelbrot as the "Noah" and "Joseph" effects, respectively). This poster discusses a standard toy model (linear fractional stable motion, LFSM) which combines the Noah and Joseph effects in a controllable way. I will describe how LFSM is being used to explore the interplay of the above two effects in the distribution of bursts above thresholds, with applications to extreme events in space time series. I will describe ongoing work to improve the accuracy of maximum likelihood-based estimation of burst size and waiting time distributions for LFSM first reported in Watkins et al [Space Science Review, 2005; PRE, 2009]. The relevance of turbulent cascades to space plasmas necessitates comparison between this model and multifractal models, and early results will be described [Watkins et al, PRL comment, 2009].

  9. Integrated Formulation of Beacon-Based Exception Analysis for Multimissions

    NASA Technical Reports Server (NTRS)

    Mackey, Ryan; James, Mark; Park, Han; Zak, Mickail

    2003-01-01

    Further work on beacon-based exception analysis for multimissions (BEAM), a method of real-time, automated diagnosis of complex electromechanical systems, has greatly expanded its capability and suitability of application. This expanded formulation, which fully integrates physical models and symbolic analysis, is described. The new formulation of BEAM expands upon previous advanced techniques for the analysis of signal data, utilizing mathematical modeling of the system physics and expert-system reasoning.

  10. Revealing physical interaction networks from statistics of collective dynamics

    PubMed Central

    Nitzan, Mor; Casadiego, Jose; Timme, Marc

    2017-01-01

    Revealing physical interactions in complex systems from observed collective dynamics constitutes a fundamental inverse problem in science. Current reconstruction methods require access to a system’s model or dynamical data at a level of detail often not available. We exploit changes in invariant measures, in particular distributions of sampled states of the system in response to driving signals, and use compressed sensing to reveal physical interaction networks. Dynamical observations following driving suffice to infer physical connectivity even if they are temporally disordered, are acquired at large sampling intervals, and stem from different experiments. Testing various nonlinear dynamic processes emerging on artificial and real network topologies indicates high reconstruction quality for existence as well as type of interactions. These results advance our ability to reveal physical interaction networks in complex synthetic and natural systems. PMID:28246630
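    The compressed-sensing step can be illustrated with a toy problem: recover a sparse row of an interaction matrix from fewer linear measurements than unknowns. The sketch below uses a basic ISTA (iterative soft-thresholding) solver and synthetic data; it is not the authors' pipeline, and all names and sizes are hypothetical:

```python
import numpy as np

def ista(A, b, lam=0.01, iters=1000):
    """Minimize 0.5*||A x - b||^2 + lam*||x||_1 by soft-thresholded
    gradient steps (ISTA)."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - (A.T @ (A @ x - b)) / L     # gradient step on the quadratic
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return x

rng = np.random.default_rng(1)
n, m = 50, 25                               # 50 candidate links, 25 measurements
x_true = np.zeros(n)
x_true[[3, 17, 41]] = [1.0, -0.8, 0.6]      # only 3 real interactions
A = rng.normal(size=(m, n)) / np.sqrt(m)    # driving/response measurement matrix
b = A @ x_true
x_hat = ista(A, b)
# The sparse support (which links exist, and their signs) is recovered
# from m << n observations, the essence of the reconstruction idea.
```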

  11. Student Cognitive Difficulties and Mental Model Development of Complex Earth and Environmental Systems

    NASA Astrophysics Data System (ADS)

    Sell, K.; Herbert, B.; Schielack, J.

    2004-05-01

    Students organize scientific knowledge and reason about environmental issues through the manipulation of mental models. The nature of the environmental sciences, which are focused on the study of complex, dynamic systems, may present cognitive difficulties to students in their development of authentic, accurate mental models of environmental systems. The inquiry project seeks to develop and assess the coupling of information technology (IT)-based learning with physical models in order to foster rich mental model development of environmental systems in geoscience undergraduate students. The manipulation of multiple representations, the development and testing of conceptual models based on available evidence, and exposure to authentic, complex and ill-constrained problems were the components of investigation utilized to reach the learning goals. Upper-level undergraduate students enrolled in an environmental geology course at Texas A&M University participated in this research, which served as a pilot study. Data based on rubric evaluations interpreted by principal component analyses suggest that students' understanding of the nature of scientific inquiry is limited and that the ability to cross scales and link systems is problematic. Results were categorized into content knowledge and cognitive processes, with reasoning, critical thinking and cognitive load being the driving factors behind difficulties in student learning. Student mental model development revealed multiple misconceptions and lacked the complexity and completeness needed to represent the studied systems. Further, the positive learning impacts of the implemented modules favored the physical model over the IT-based learning projects, likely due to cognitive load issues. This study illustrates the need to better understand student difficulties in solving complex problems when using IT, so that appropriate scaffolding can then be implemented to enhance student learning of the earth system sciences.

  12. Modeling and simulation of high dimensional stochastic multiscale PDE systems at the exascale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zabaras, Nicolas J.

    2016-11-08

    Predictive modeling of multiscale and multiphysics systems requires accurate, data-driven characterization of the input uncertainties and an understanding of how they propagate across scales and alter the final solution. This project develops a rigorous mathematical framework and scalable uncertainty quantification algorithms to efficiently construct realistic low-dimensional input models and surrogate low-complexity systems for the analysis, design, and control of physical systems represented by multiscale stochastic PDEs. The work can be applied to many areas, including physical and biological processes, from climate modeling to systems biology.

  13. Model-Based Diagnostics for Propellant Loading Systems

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew John; Foygel, Michael; Smelyanskiy, Vadim N.

    2011-01-01

    The loading of spacecraft propellants is a complex, risky operation. Therefore, diagnostic solutions are necessary to quickly identify when a fault occurs, so that recovery actions can be taken or an abort procedure can be initiated. Model-based diagnosis solutions, established using an in-depth analysis and understanding of the underlying physical processes, offer the advanced capability to quickly detect and isolate faults, identify their severity, and predict their effects on system performance. We develop a physics-based model of a cryogenic propellant loading system, which describes the complex dynamics of liquid hydrogen filling from a storage tank to an external vehicle tank, as well as the influence of different faults on this process. The model takes into account the main physical processes, such as highly nonequilibrium condensation and evaporation of the hydrogen vapor, pressurization, and the dynamics of liquid hydrogen and vapor flows inside the system in the presence of helium gas. Since the model incorporates multiple faults in the system, it provides a suitable framework for model-based diagnostics and prognostics algorithms. Using this model, we analyze the effects of faults on the system, derive symbolic fault signatures for the purposes of fault isolation, and perform fault identification using a particle filter approach. We demonstrate the detection, isolation, and identification of a number of faults using simulation-based experiments.
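    The particle-filter identification step can be illustrated on a deliberately tiny stand-in for the cryogenic model: a draining tank whose leak coefficient k plays the role of the fault parameter to be identified from noisy level measurements. This is a hedged sketch of the generic bootstrap filter, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(2)
k_true, dt, steps = 0.05, 1.0, 100

# Simulated truth: tank level obeys dh/dt = -k*h, observed with sensor noise.
h, obs = 10.0, []
for _ in range(steps):
    h += -k_true * h * dt
    obs.append(h + rng.normal(0.0, 0.05))

n_p = 1000
k = rng.uniform(0.0, 0.2, n_p)          # particle cloud over the fault parameter
hp = np.full(n_p, 10.0)                 # each particle's own state estimate
for z in obs:
    hp += -k * hp * dt                  # propagate every hypothesis forward
    w = np.exp(-0.5 * ((z - hp) / 0.05) ** 2)   # Gaussian measurement likelihood
    w /= w.sum()
    idx = rng.choice(n_p, n_p, p=w)     # resample proportional to likelihood
    k, hp = k[idx], hp[idx]
    k += rng.normal(0.0, 1e-3, n_p)     # small jitter keeps parameter diversity

k_est = k.mean()                        # posterior mean concentrates near k_true
```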

  14. Linking Local Scale Ecosystem Science to Regional Scale Management

    NASA Astrophysics Data System (ADS)

    Shope, C. L.; Tenhunen, J.; Peiffer, S.

    2012-04-01

    Ecosystem management with respect to sufficient water yield, a quality water supply, habitat and biodiversity conservation, and climate change effects requires substantial observational data at a range of scales. Complex interactions of local physical processes oftentimes vary over space and time, particularly in locations with extreme meteorological conditions. Modifications to local conditions (i.e., agricultural land-use changes, nutrient additions, landscape management, water usage) can further affect regional ecosystem services. The international, inter-disciplinary TERRECO research group is intensively investigating a variety of local processes, parameters, and conditions to link complex physical, economic, and social interactions at the regional scale. Field-based meteorology, hydrology, soil physics, plant production, solute and sediment transport, economic, and social behavior data were measured in a South Korean catchment. The data are used to parameterize a suite of models describing local- to landscape-level water, sediment, nutrient, and monetary relationships. We focus on using the agricultural and hydrological SWAT model to synthesize the experimental field data and local-scale models throughout the catchment. The approach of our study was to describe local scientific processes, link potential interrelationships between different processes, and predict environmentally efficient management efforts. The Haean catchment case study shows how research can be structured to provide cross-disciplinary scientific linkages describing complex ecosystems and landscapes that can be used for regional management evaluations and predictions.

  15. Time scale variations of the CIV resonance lines in HD 24534

    NASA Astrophysics Data System (ADS)

    Tsatsi, A.

    2012-01-01

    Many lines in the spectra of hot emission stars (Be and Oe) present peculiar and very complex profiles. As a result, we cannot fit these profiles with a classical theoretical distribution; hence many physical parameters of the regions where these lines are created are difficult to determine. In this paper, we adopt the Gauss-Rotation model (GR model), which proposes that these complex profiles consist of a number of independent Discrete or Satellite Absorption Components (DACs, SACs). The model is applied to the CIV (λλ 1548.187, 1550.772 Å) resonance lines in the spectra of HD 24534 (X Persei), taken by IUE at three different periods. From this analysis we calculate the values of a group of physical parameters, such as the apparent rotational and radial velocities, the random velocities of the thermal motions of the ions, as well as the Full Width at Half Maximum (FWHM) and the absorbed energy of the independent regions of matter which produce the main and the satellite components of the studied spectral lines. Finally, we calculate the time-scale variation of the above physical parameters.

  16. Hidden physics models: Machine learning of nonlinear partial differential equations

    NASA Astrophysics Data System (ADS)

    Raissi, Maziar; Karniadakis, George Em

    2018-03-01

    While there is currently a lot of enthusiasm about "big data", useful data is usually "small" and expensive to acquire. In this paper, we present a new paradigm of learning partial differential equations from small data. In particular, we introduce hidden physics models, which are essentially data-efficient learning machines capable of leveraging the underlying laws of physics, expressed by time dependent and nonlinear partial differential equations, to extract patterns from high-dimensional data generated from experiments. The proposed methodology may be applied to the problem of learning, system identification, or data-driven discovery of partial differential equations. Our framework relies on Gaussian processes, a powerful tool for probabilistic inference over functions, that enables us to strike a balance between model complexity and data fitting. The effectiveness of the proposed approach is demonstrated through a variety of canonical problems, spanning a number of scientific domains, including the Navier-Stokes, Schrödinger, Kuramoto-Sivashinsky, and time dependent linear fractional equations. The methodology provides a promising new direction for harnessing the long-standing developments of classical methods in applied mathematics and mathematical physics to design learning machines with the ability to operate in complex domains without requiring large quantities of data.
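    The Gaussian-process machinery underpinning hidden physics models reduces, in its simplest form, to standard GP regression. The sketch below shows that basic building block with a squared-exponential kernel; the data, hyperparameters, and prediction point are illustrative, not from the paper:

```python
import numpy as np

def rbf(a, b, ell=0.2, sf=1.0):
    """Squared-exponential covariance k(a, b) between two sets of 1-D inputs."""
    d = a[:, None] - b[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

X = np.linspace(0.0, 1.0, 8)               # deliberately small training set
y = np.sin(2 * np.pi * X)
Xs = np.array([0.5])                       # predict between two training points

noise = 1e-6                               # tiny jitter for numerical stability
K = rbf(X, X) + noise * np.eye(len(X))
alpha = np.linalg.solve(K, y)
mean = rbf(Xs, X) @ alpha                  # GP posterior mean at the test input
var = rbf(Xs, Xs) - rbf(Xs, X) @ np.linalg.solve(K, rbf(X, Xs))
# With only 8 samples the posterior mean interpolates the underlying function;
# hidden physics models add PDE constraints on top of exactly this machinery.
```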

  17. Perceived Physical Availability of Alcohol at Work and Workplace Alcohol Use and Impairment: Testing a Structural Model

    PubMed Central

    Frone, Michael R.; Trinidad, Jonathan R.

    2014-01-01

    This study develops and tests a new conceptual model of perceived physical availability of alcohol at work that provides unique insight into three dimensions of workplace physical availability of alcohol and their direct and indirect relations to workplace alcohol use and impairment. Data were obtained from a national probability sample of 2,727 U.S. workers. The results support the proposed conceptual model and provide empirical support for a positive relation of perceived physical availability of alcohol at work to workplace alcohol use and two dimensions of workplace impairment (workplace intoxication and workplace hangover). Ultimately, the findings suggest that perceived physical availability of alcohol at work is a risk factor for alcohol use and impairment during the workday, and that this relation is more complex than previously hypothesized. PMID:25243831

  18. Wavefield complexity and stealth structures: Resolution constraints by wave physics

    NASA Astrophysics Data System (ADS)

    Nissen-Meyer, T.; Leng, K.

    2017-12-01

    Imaging the Earth's interior relies on understanding how waveforms encode information from heterogeneous multi-scale structure. This relation is given by elastodynamics, but forward modeling in the context of tomography primarily serves to deliver synthetic waveforms and gradients for the inversion procedure. While this is entirely appropriate, it disregards a wealth of complementary inference that can be obtained from the complexity of the wavefield. Here, we are concerned with the imprint of realistic multi-scale Earth structure on the wavefield, and the question of the inherent physical resolution limit of structures encoded in seismograms. We identify parameter and scattering regimes where structures remain invisible as a function of seismic wavelength, structural multi-scale geometry, scattering strength, and propagation path. Ultimately, this will aid in interpreting tomographic images by acknowledging the scope of "forgotten" structures, and shall offer guidance for optimising the selection of seismic data for tomography. To do so, we use our novel 3D modeling method AxiSEM3D, which tackles global wave propagation in visco-elastic, anisotropic 3D structures with undulating boundaries at unprecedented resolution and efficiency by exploiting the inherent azimuthal smoothness of wavefields via a coupled Fourier expansion-spectral-element approach. The method links computational cost to wavefield complexity and thereby lends itself well to exploring the relation between waveforms and structures. We will show various examples of multi-scale heterogeneities which appear or disappear in the waveform, and argue that the nature of the structural power spectrum plays a central role in this. We introduce the concept of wavefield learning to examine the true wavefield complexity for a complexity-dependent modeling framework and discriminate which scattering structures can be retrieved by surface measurements. This leads to the question of physical invisibility and the tomographic resolution limit, and offers insight as to why tomographic images still show stark differences for smaller-scale heterogeneities despite progress in modeling and data resolution. Finally, we give an outlook on how we expand this modeling framework towards an inversion procedure guided by wavefield complexity.

  19. Intercomparison of 7 Planetary Boundary-Layer/Surface-Layer Physics Schemes over Complex Terrain for Battlefield Situational Awareness

    DTIC Science & Technology

    This study considers the performance of 7 of the Weather Research and Forecasting (WRF) model boundary-layer (BL) parameterization schemes in a complex...schemes performed best. The surface parameters, planetary BL structure, and vertical profiles are important for US Army Research Laboratory

  20. A Digital Ecosystems Model of Assessment Feedback on Student Learning

    ERIC Educational Resources Information Center

    Gomez, Stephen; Andersson, Holger; Park, Julian; Maw, Stephen; Crook, Anne; Orsmond, Paul

    2013-01-01

    The term ecosystem has been used to describe complex interactions between living organisms and the physical world. The principles underlying ecosystems can also be applied to complex human interactions in the digital world. As internet technologies make an increasing contribution to teaching and learning practice in higher education, the…

  1. NREL: Renewable Resource Data Center - SMARTS

    Science.gov Websites

    SMARTS - Simple Model of the Atmospheric Radiative Transfer of Sunshine. Renewable Resource Data Center. The Simple Model of the Atmospheric Radiative Transfer of Sunshine, or SMARTS, predicts clear-sky spectral irradiance for applications in architecture, atmospheric science, photobiology, and health physics. SMARTS is a complex model that requires

  2. A New Comptonization Model for Weakly Magnetized Accreting NS LMXBs

    NASA Astrophysics Data System (ADS)

    Paizis, A.; Farinelli, R.; Titarchuk, L.; Frontera, F.; Cocchi, M.; Ferrigno, C.

    2009-05-01

    We have developed a new Comptonization model to propose, for the first time, a self-consistent physical interpretation of the complex spectral evolution seen in NS LMXBs. The model and its application to LMXBs are presented and compared with the expected capabilities of Simbol-X.

  3. Complex network problems in physics, computer science and biology

    NASA Astrophysics Data System (ADS)

    Cojocaru, Radu Ionut

    There is a close relation between physics and mathematics and the exchange of ideas between these two sciences are well established. However until few years ago there was no such a close relation between physics and computer science. Even more, only recently biologists started to use methods and tools from statistical physics in order to study the behavior of complex system. In this thesis we concentrate on applying and analyzing several methods borrowed from computer science to biology and also we use methods from statistical physics in solving hard problems from computer science. In recent years physicists have been interested in studying the behavior of complex networks. Physics is an experimental science in which theoretical predictions are compared to experiments. In this definition, the term prediction plays a very important role: although the system is complex, it is still possible to get predictions for its behavior, but these predictions are of a probabilistic nature. Spin glasses, lattice gases or the Potts model are a few examples of complex systems in physics. Spin glasses and many frustrated antiferromagnets map exactly to computer science problems in the NP-hard class defined in Chapter 1. In Chapter 1 we discuss a common result from artificial intelligence (AI) which shows that there are some problems which are NP-complete, with the implication that these problems are difficult to solve. We introduce a few well known hard problems from computer science (Satisfiability, Coloring, Vertex Cover together with Maximum Independent Set and Number Partitioning) and then discuss their mapping to problems from physics. In Chapter 2 we provide a short review of combinatorial optimization algorithms and their applications to ground state problems in disordered systems. We discuss the cavity method initially developed for studying the Sherrington-Kirkpatrick model of spin glasses. 
We extend this model to the study of a specific case of a spin glass on the Bethe lattice at zero temperature, and then apply this formalism to the K-SAT problem defined in Chapter 1. The phase transitions that physicists study often correspond to a change in the computational complexity of the corresponding computer science problem. Chapter 3 presents phase transitions specific to the problems discussed in Chapter 1, together with known results for the K-SAT problem. We discuss the replica method and experimental evidence of replica symmetry breaking. The physics approach to hard problems is based on replica methods, which are difficult to understand. In Chapter 4 we develop novel methods for studying hard problems using techniques similar to the message-passing techniques discussed in Chapter 2. Although we concentrated on the symmetric case, cavity methods show promise for generalizing our methods to the asymmetric case. As John Hopfield has highlighted, several key features of biological systems are not shared by physical systems. Although living entities follow the laws of physics and chemistry, the fact that organisms adapt and reproduce introduces an essential ingredient that is missing in the physical sciences. Many algorithms have been developed to extract information from networks. In Chapter 5 we apply polynomial-time algorithms such as minimum spanning tree to study and construct gene regulatory networks from experimental data. As future work we propose the use of algorithms such as min-cut/max-flow and Dijkstra's algorithm for understanding key properties of these networks.
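
    The minimum-spanning-tree construction proposed for the gene-network work in Chapter 5 can be sketched in a few lines. Kruskal's algorithm with a union-find structure is one standard choice; the gene indices and edge weights below (distances such as 1 - |correlation| derived from expression data) are illustrative assumptions, not the thesis code.

```python
# Sketch: prune a dense gene-gene similarity graph down to a minimum
# spanning tree (MST), keeping only the strongest backbone of links.

def kruskal_mst(n, edges):
    """edges: list of (weight, u, v); returns the MST as (u, v, weight)."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:                       # edge joins two components
            parent[ru] = rv
            mst.append((u, v, w))
    return mst

# Hypothetical distances d = 1 - |correlation| between four genes 0..3.
edges = [(0.1, 0, 1), (0.4, 0, 2), (0.2, 1, 2), (0.7, 1, 3), (0.3, 2, 3)]
tree = kruskal_mst(4, edges)
print(tree)  # three edges spanning all four genes at minimum total weight
```

    Replacing the toy weights with distances computed from real expression data yields a candidate backbone for the regulatory network, which min-cut/max-flow or shortest-path queries could then interrogate.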

  4. Illness perceptions in anorexia nervosa: a qualitative investigation.

    PubMed

    Higbed, Laurie; Fox, John R E

    2010-09-01

    Anorexia nervosa (AN) is an eating disorder characterized by the egosyntonic nature of symptoms, denial of illness, and ambivalence about treatment engagement. Within the physical health literature, people's beliefs about their illness have been found to impact upon coping and treatment outcomes, and this has largely been explored using the self-regulation model. This model has also been applied to mental health and more recently to AN, with beliefs about the disorder being associated with readiness to change. However, qualitative investigations have indicated that physical health models have limited applicability for assessing people's beliefs about mental illness. This may be particularly pertinent to AN, given the complexity of the disorder. Therefore, this study explored illness perceptions in AN using a qualitative design which was not restricted by a physical illness model but focused on personal models of AN from the perspective of those experiencing the disorder. Semi-structured interviews were conducted with thirteen participants who were currently in treatment for AN. Interview transcripts were analysed using grounded theory methodology. An interpretative theory of illness perceptions in AN was developed and comprised four related categories: 'making sense of AN', 'the relationship between AN and the self', 'the recovery struggle', and 'coping with treatment'. Patients' accounts transcended the dimensions offered by physical illness models, with the implication that methods for assessing illness beliefs in AN require adaptation for a full understanding to be gained and the complexity of perceptions to be captured.

  5. A Complex Approach to UXO Discrimination: Combining Advanced EMI Forward Models and Statistical Signal Processing

    DTIC Science & Technology

    2012-01-01

    Under this project we first developed and implemented advanced, physically complete forward EMI models for UXO detection and discrimination at live-UXO sites. Shubitidze of Sky Research and Dartmouth College conceived, implemented, and tested most of the approaches presented in this report.

  6. Multi-scale heat and mass transfer modelling of cell and tissue cryopreservation

    PubMed Central

    Xu, Feng; Moon, Sangjun; Zhang, Xiaohui; Shao, Lei; Song, Young Seok; Demirci, Utkan

    2010-01-01

    Cells and tissues undergo complex physical processes during cryopreservation. Understanding the underlying physical phenomena is critical to improve current cryopreservation methods and to develop new techniques. Here, we describe multi-scale approaches for modelling cell and tissue cryopreservation including heat transfer at macroscale level, crystallization, cell volume change and mass transport across cell membranes at microscale level. These multi-scale approaches allow us to study cell and tissue cryopreservation. PMID:20047939
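
    The macroscale heat-transfer component of such multi-scale models can be illustrated with a minimal explicit finite-difference scheme for a 1-D tissue slab cooled from both faces. The diffusivity, geometry, temperatures, and step count below are rough illustrative assumptions, not values from the paper.

```python
# Sketch: explicit FTCS scheme for u_t = alpha * u_xx on a thin slab whose
# faces are held at the coolant temperature.

alpha = 1.4e-7          # thermal diffusivity, m^2/s (water-like tissue)
dx = 1.0e-3             # 1 mm grid spacing
r = 0.4                 # alpha*dt/dx^2, kept below 0.5 for stability
dt = r * dx * dx / alpha

nx = 21                 # 2 cm slab
u = [20.0] * nx         # initial temperature, deg C
u[0] = u[-1] = -80.0    # boundaries held at the coolant temperature

for _ in range(500):    # march forward in time (~24 min of cooling here)
    un = u[:]
    for i in range(1, nx - 1):
        u[i] = un[i] + r * (un[i - 1] - 2 * un[i] + un[i + 1])

print(round(u[nx // 2], 1))  # centre temperature after cooling
```

    A full cryopreservation model would couple this macroscale field to the microscale terms in the abstract: crystallization kinetics and osmotic water transport across cell membranes.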

  7. Top-down models in biology: explanation and control of complex living systems above the molecular level.

    PubMed

    Pezzulo, Giovanni; Levin, Michael

    2016-11-01

    It is widely assumed in developmental biology and bioengineering that optimal understanding and control of complex living systems follows from models of molecular events. The success of reductionism has overshadowed attempts at top-down models and control policies in biological systems. However, other fields, including physics, engineering and neuroscience, have successfully used explanations and models at higher levels of organization, including least-action principles in physics and control-theoretic models in computational neuroscience. Exploiting the dynamic regulation of pattern formation in embryogenesis and regeneration requires new approaches to understand how cells cooperate towards large-scale anatomical goal states. Here, we argue that top-down models of pattern homeostasis serve as proof of principle for extending the current paradigm beyond emergence and molecule-level rules. We define top-down control in a biological context, discuss examples of how cognitive neuroscience and physics exploit these strategies, and illustrate areas in which they may offer significant advantages as complements to the mainstream paradigm. By targeting system controls at multiple levels of organization and demystifying goal-directed (cybernetic) processes, top-down strategies represent a roadmap for using the deep insights of other fields for transformative advances in regenerative medicine and systems bioengineering. © 2016 The Author(s).

  8. Top-down models in biology: explanation and control of complex living systems above the molecular level

    PubMed Central

    2016-01-01

    It is widely assumed in developmental biology and bioengineering that optimal understanding and control of complex living systems follows from models of molecular events. The success of reductionism has overshadowed attempts at top-down models and control policies in biological systems. However, other fields, including physics, engineering and neuroscience, have successfully used explanations and models at higher levels of organization, including least-action principles in physics and control-theoretic models in computational neuroscience. Exploiting the dynamic regulation of pattern formation in embryogenesis and regeneration requires new approaches to understand how cells cooperate towards large-scale anatomical goal states. Here, we argue that top-down models of pattern homeostasis serve as proof of principle for extending the current paradigm beyond emergence and molecule-level rules. We define top-down control in a biological context, discuss examples of how cognitive neuroscience and physics exploit these strategies, and illustrate areas in which they may offer significant advantages as complements to the mainstream paradigm. By targeting system controls at multiple levels of organization and demystifying goal-directed (cybernetic) processes, top-down strategies represent a roadmap for using the deep insights of other fields for transformative advances in regenerative medicine and systems bioengineering. PMID:27807271

  9. Modelling Complex Fenestration Systems using physical and virtual models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thanachareonkit, Anothai; Scartezzini, Jean-Louis

    2010-04-15

    Physical or virtual models are commonly used to visualize the conceptual ideas of architects, lighting designers and researchers; they are also employed to assess the daylighting performance of buildings, particularly in cases where Complex Fenestration Systems (CFS) are considered. Recent studies have, however, revealed a general tendency of physical models to over-estimate this performance compared to real buildings; these discrepancies can be attributed to several reasons. In order to identify the main error sources, a series of comparisons between a real building (a single office room within a test module) and the corresponding physical and virtual models was undertaken. The physical model was placed in outdoor conditions, which were strictly identical to those of the real building, as well as underneath a scanning sky simulator. The virtual model simulations were carried out by way of the Radiance program using the GenSky function; an alternative evaluation method, named the Partial Daylight Factor method (PDF method), was also employed with the physical model together with sky luminance distributions acquired by a digital sky scanner during the monitoring of the real building. The overall daylighting performance of physical and virtual models was assessed and compared. The causes of discrepancies between the daylighting performance of the real building and the models were analysed. The main identified sources of errors are the reproduction of building details, the CFS modelling and the mocking-up of the geometrical and photometrical properties. To study the impact of these errors on daylighting performance assessment, computer simulation models created using the Radiance program were also used to carry out a sensitivity analysis of modelling errors. The study of the models showed that large discrepancies can occur in daylighting performance assessment. 
In the case of improper mocking-up of the glazing, for instance, relative divergences of 25-40% can be found at different room locations, suggesting that more light enters the model than is actually monitored in the real building. All these discrepancies can, however, be reduced by carefully mocking up the geometry and photometry of the real building. A synthesis is presented in this article which can be used as guidelines for daylighting designers to avoid or estimate errors during CFS daylighting performance assessment. (author)

  10. The science of complexity and the role of mathematics

    NASA Astrophysics Data System (ADS)

    Bountis, T.; Johnson, J.; Provata, A.; Tsironis, G.

    2016-09-01

    In the middle of the second decade of the 21st century, Complexity Science has reached a turning point. Its rapid advancement over the last 30 years has led to remarkable new concepts, methods and techniques, whose applications to complex systems of the physical, biological and social sciences have produced a great number of exciting results. The approach has so far depended almost exclusively on the solution of a wide variety of mathematical models by sophisticated numerical techniques and extensive simulations that have inspired a new generation of researchers interested in complex systems. Still, the impact of Complexity beyond the natural sciences, its applications to Medicine, Technology, Economics, Society and Policy are only now beginning to be explored. Furthermore, its basic principles and methods have so far remained within the realm of high-level research institutions, out of reach of society's urgent need for practical applications. To address these issues, evaluate the current situation and bring Complexity Science closer to university students, a series of Ph.D. Schools on Mathematical Modeling of Complex Systems was launched, starting in July 2011 at the University of Patras, Greece (see http://www.math.upatras.gr/˜phdsch11). These Schools lasted two weeks each and included, beyond introductory lectures, a conference component of 2-3 days where students were exposed to recent results mainly presented by young researchers. The Ph.D. Schools continued successfully, the second taking place in Pescara, Italy (2012) (see http://www.nodycosy.unich.it), the third in Heraklion, Crete, Greece (2013) (see http://nlsconf2013.physics.uoc.gr) and the fourth in Athens, Greece (2014) (see http://nlsconf2014.physics.uoc.gr). This Special Theme volume contains the proceedings of the 5th Ph.D. 
School-Conference of this series held at the University of Patras, Greece, 20-30 July 2015 (see http://www.math.upatras.gr/˜phdsch15) and includes many of the introductory lectures and research talks presented at this event. The primary concern of all those who participated in these events was to emphasize the role of mathematics, modeling and numerical simulation, which are indispensable for understanding what we call complex behavior of physical, biological, technological and socio-economic systems. In the discussions that took place, a great number of participants expressed the need to formulate a unifying theory of complex systems based on the main conclusions that have been reached so far in the science of Complexity. As Guest Editors of this volume, we also feel that it is important to reach some fundamental conclusions concerning common phenomena, theories and methodologies that arise in Complexity. We should all work to explore common rules and approaches, particularly in view of the remarkable challenges that face us all regarding complex social problems that threaten present-day society and civilization as we know them.

  11. 3D Printing of Molecular Models

    ERIC Educational Resources Information Center

    Gardner, Adam; Olson, Arthur

    2016-01-01

    Physical molecular models have played a valuable role in our understanding of the invisible nano-scale world. We discuss 3D printing and its use in producing models of the molecules of life. Complex biomolecular models, produced from 3D printed parts, can demonstrate characteristics of molecular structure and function, such as viral self-assembly,…

  12. An appraisal of the literature on teaching physical examination skills.

    PubMed

    Easton, Graham; Stratford-Martin, James; Atherton, Helen

    2012-07-01

    To discover which models for teaching physical examination skills have been proposed, and to appraise the evidence for each. We conducted a narrative review of relevant literature from 1990-2010. We searched the databases MEDLINE, PsycINFO, and ERIC (The Education Resource Information Centre) for the terms: 'physical examination' AND 'teaching' as both MeSH terms and keyword searches. We excluded web-based or video teaching, non-physical examination skills (e.g. communication skills), and articles about simulated patients or models. We identified five relevant articles. These five studies outlined several approaches to teaching physical examination skills, including Peyton's 4-step model; an adaptation of his model to a 6-step model; the silent run-through; and collaborative discovery. There was little evidence to support one method over others. One controlled trial suggested that the silent run-through could improve performance of complex motor tasks, and another suggested that collaborative discovery improves students' ability to recognise key findings in cardiac examinations. There are several models for teaching physical examinations, but few are designed specifically for that purpose and there is little evidence to back any one model over another. We propose an approach which adopts several key features of these models. Future research could usefully evaluate the effectiveness of the proposed models, or develop innovative practical models for teaching examination skills.

  13. Linking market interaction intensity of 3D Ising type financial model with market volatility

    NASA Astrophysics Data System (ADS)

    Fang, Wen; Ke, Jinchuan; Wang, Jun; Feng, Ling

    2016-11-01

    Microscopic interaction models in physics have been used to investigate the complex phenomena of economic systems. The simple interactions involved can lead to complex behaviors and help the understanding of mechanisms in the financial market at a systemic level. This article aims to develop a financial time series model through a 3D (three-dimensional) Ising dynamic system, which is widely used as an interacting-spins model to explain ferromagnetism in physics. Through Monte Carlo simulations of the financial model and numerical analysis of both the simulated return time series and historical return data of the Hushen 300 (HS300) index in the Chinese stock market, we show that despite its simplicity, this model displays stylized facts similar to those seen in real financial markets. We demonstrate a possible underlying link between the volatility fluctuations of the real stock market and the change in interaction strengths of market participants in the financial model. In particular, the stochastic interaction strength in our model indicates that the real market may be consistently operating near the critical point of the system.
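
    The kind of 3D Ising dynamics such a model builds on can be sketched with a minimal Metropolis simulation, where lattice magnetization stands in for aggregate market sentiment and its sweep-to-sweep change serves as a crude log-return proxy. The lattice size, temperature and return mapping are illustrative assumptions, not the paper's calibrated parameters.

```python
# Sketch: Metropolis dynamics on a small 3D Ising lattice, with
# magnetization changes recorded as a toy "return" series.
import math
import random

def ising3d_returns(L=6, beta=0.20, sweeps=40, seed=1):
    rng = random.Random(seed)
    spin = [[[rng.choice((-1, 1)) for _ in range(L)]
             for _ in range(L)] for _ in range(L)]
    N = L ** 3

    def magnetization():
        return sum(s for plane in spin for row in plane for s in row) / N

    returns, m_prev = [], magnetization()
    for _ in range(sweeps):
        for _ in range(N):                          # one Monte Carlo sweep
            x, y, z = (rng.randrange(L) for _ in range(3))
            nb = (spin[(x + 1) % L][y][z] + spin[(x - 1) % L][y][z] +
                  spin[x][(y + 1) % L][z] + spin[x][(y - 1) % L][z] +
                  spin[x][y][(z + 1) % L] + spin[x][y][(z - 1) % L])
            dE = 2 * spin[x][y][z] * nb             # energy cost of a flip
            if dE <= 0 or rng.random() < math.exp(-beta * dE):
                spin[x][y][z] *= -1
        m_new = magnetization()
        returns.append(m_new - m_prev)              # "log-return" proxy
        m_prev = m_new
    return returns

r = ising3d_returns()
print(len(r))
```

    Sweeping beta toward the 3D critical value (about 0.2217) is where such toy series start to show fat tails and volatility clustering reminiscent of the stylized facts discussed in the abstract.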

  14. A new method to real-normalize measured complex modes

    NASA Technical Reports Server (NTRS)

    Wei, Max L.; Allemang, Randall J.; Zhang, Qiang; Brown, David L.

    1987-01-01

    A time domain subspace iteration technique is presented to compute a set of normal modes from the measured complex modes. By using the proposed method, a large number of physical coordinates are reduced to a smaller number of modal or principal coordinates. Subspace free-decay time responses are computed using properly scaled complex modal vectors. The companion matrix for the general case of nonproportional damping is then derived in the selected vector subspace. Subspace normal modes are obtained through the eigenvalue solution of the (M_N)^-1(K_N) matrix and transformed back to the physical coordinates to get a set of normal modes. A numerical example is presented to demonstrate the outlined theory.
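
    The final eigenvalue step, solving the reduced-subspace problem for the inverse-mass times stiffness matrix, can be illustrated on a two-coordinate example; the mass and stiffness matrices below are made-up values for a sketch, not data from the paper.

```python
# Sketch: subspace normal modes from the eigenproblem of M_N^{-1} K_N
# for a two-DOF system with a diagonal (lumped) mass matrix.
import math

M_N = [[2.0, 0.0], [0.0, 1.0]]    # reduced mass matrix (diagonal here)
K_N = [[6.0, -2.0], [-2.0, 4.0]]  # reduced stiffness matrix

# A = M_N^{-1} K_N, trivial to form when M_N is diagonal
A = [[K_N[i][j] / M_N[i][i] for j in range(2)] for i in range(2)]

# Eigenvalues of a 2x2 matrix from its trace and determinant
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = math.sqrt(tr * tr - 4 * det)
lams = sorted(((tr - disc) / 2, (tr + disc) / 2))  # squared natural freqs

freqs = [math.sqrt(l) for l in lams]  # natural frequencies (rad/s in SI)
print(lams, freqs)
```

    In the method of the abstract these eigenvalues come with subspace eigenvectors, which are then transformed back to the full set of physical coordinates to yield the real-normalized modes.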

  15. Are Model Transferability And Complexity Antithetical? Insights From Validation of a Variable-Complexity Empirical Snow Model in Space and Time

    NASA Astrophysics Data System (ADS)

    Lute, A. C.; Luce, Charles H.

    2017-11-01

    The related challenges of predictions in ungauged basins and predictions in ungauged climates point to the need to develop environmental models that are transferable across both space and time. Hydrologic modeling has historically focused on modeling one or only a few basins using highly parameterized conceptual or physically based models. However, model parameters and structures have been shown to change significantly when calibrated to new basins or time periods, suggesting that model complexity and model transferability may be antithetical. Empirical space-for-time models provide a framework within which to assess model transferability and any tradeoff with model complexity. Using 497 SNOTEL sites in the western U.S., we develop space-for-time models of April 1 SWE and Snow Residence Time based on mean winter temperature and cumulative winter precipitation. The transferability of the models to new conditions (in both space and time) is assessed using non-random cross-validation tests with consideration of the influence of model complexity on transferability. As others have noted, the algorithmic empirical models transfer best when minimal extrapolation in input variables is required. Temporal split-sample validations use pseudoreplicated samples, resulting in the selection of overly complex models, which has implications for the design of hydrologic model validation tests. Finally, we show that low- to moderate-complexity models transfer most successfully to new conditions in space and time, providing empirical confirmation of the parsimony principle.
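
    The complexity-transferability tradeoff reported here can be mimicked with a toy experiment: fit polynomials of increasing degree to synthetic "training basins" and score them on a held-out range that forces extrapolation. The data, the degree-1 vs degree-2 comparison, and the normal-equations fit are all illustrative assumptions, not the paper's models.

```python
# Sketch: a more complex model fits the training data better but can
# transfer worse when validation requires extrapolation.

def solve(A, b):
    """Gaussian elimination with partial pivoting for small systems."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j]
                              for j in range(i + 1, n))) / M[i][i]
    return x

def polyfit(xs, ys, deg):
    """Least-squares polynomial fit via the normal equations."""
    A = [[sum(x ** (i + j) for x in xs) for j in range(deg + 1)]
         for i in range(deg + 1)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(deg + 1)]
    return solve(A, b)

def mse(coef, xs, ys):
    pred = [sum(c * x ** i for i, c in enumerate(coef)) for x in xs]
    return sum((p - y) ** 2 for p, y in zip(pred, ys)) / len(ys)

# "Training basins": roughly linear signal with concave noise.
xtr = [0, 1, 2, 3, 4, 5]
ytr = [-0.5, 2.2, 4.4, 6.4, 8.2, 9.5]
# "New conditions": extrapolation beyond the training range.
xte, yte = [8, 9, 10], [16, 18, 20]

scores = {d: mse(polyfit(xtr, ytr, d), xte, yte) for d in (1, 2)}
print(scores)  # the degree-2 fit tracks training noise but transfers worse
```

    Here the quadratic bends to fit the noise and extrapolates badly, while the linear model recovers the underlying slope; the paper's non-random cross-validation tests probe exactly this effect with real snow data.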

  16. Physical principles for DNA tile self-assembly.

    PubMed

    Evans, Constantine G; Winfree, Erik

    2017-06-19

    DNA tiles provide a promising technique for assembling structures with nanoscale resolution through self-assembly by basic interactions rather than top-down assembly of individual structures. Tile systems can be programmed to grow based on logical rules, allowing a small number of tile types to assemble large, complex assemblies that retain nanoscale resolution. Such algorithmic systems can even assemble different structures using the same tiles, based on inputs that seed the growth. While the programming and theoretical analysis of tile self-assembly often make use of abstract logical models of growth, experimentally implemented systems are governed by nanoscale physical processes that can lead to very different behavior, more accurately modeled by taking into account the thermodynamics and kinetics of tile attachment and detachment in solution. This review discusses the relationships between more abstract and more physically realistic tile assembly models. A central concern is how consideration of model differences enables the design of tile systems that robustly exhibit the desired abstract behavior in realistic physical models and in experimental implementations. Conversely, we identify situations where self-assembly in abstract models cannot be well approximated by physically realistic models, putting constraints on the physical relevance of the abstract models. To facilitate the discussion, we introduce a unified model of tile self-assembly that clarifies the relationships between several well-studied models in the literature. Throughout, we highlight open questions regarding the physical principles for DNA tile self-assembly.
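
    The abstract logical models of growth mentioned here can be made concrete with a tiny simulator of the abstract Tile Assembly Model (aTAM) at temperature 2: a tile attaches only when matching glues bind with total strength of at least 2. The XOR/Sierpinski tile set below is a standard textbook example, and the data structures are our own sketch rather than anything from the review.

```python
# Sketch: temperature-2 aTAM growth of the XOR (Sierpinski) pattern.
TEMP = 2

# tile = (name, {side: (glue_label, strength)}), sides N, E, S, W
def rule(a, b):
    c = a ^ b  # output is XOR of the south (a) and west (b) inputs
    return (f"T{a}{b}", {"N": (f"v{c}", 1), "E": (f"v{c}", 1),
                         "S": (f"v{a}", 1), "W": (f"v{b}", 1)})

TILES = [rule(a, b) for a in (0, 1) for b in (0, 1)] + [
    ("bot",  {"E": ("b", 2), "W": ("b", 2), "N": ("v1", 1), "S": (None, 0)}),
    ("left", {"N": ("l", 2), "S": ("l", 2), "E": ("v1", 1), "W": (None, 0)}),
]
SEED = ("seed", {"E": ("b", 2), "N": ("l", 2), "S": (None, 0), "W": (None, 0)})

OPP = {"N": "S", "E": "W", "S": "N", "W": "E"}
STEP = {"N": (0, 1), "E": (1, 0), "S": (0, -1), "W": (-1, 0)}

def grow(n):
    asm = {(0, 0): SEED}
    changed = True
    while changed:
        changed = False
        frontier = {(x + dx, y + dy)
                    for (x, y) in asm for dx, dy in STEP.values()}
        for site in frontier:
            if site in asm or not (0 <= site[0] <= n and 0 <= site[1] <= n):
                continue
            for name, glues in TILES:
                strength = 0
                for side, (dx, dy) in STEP.items():
                    nb = asm.get((site[0] + dx, site[1] + dy))
                    if nb:
                        g, s = glues[side]
                        if g is not None and nb[1][OPP[side]][0] == g:
                            strength += s
                if strength >= TEMP:   # cooperative binding at temperature 2
                    asm[site] = (name, glues)
                    changed = True
                    break
    return asm

asm = grow(5)
# value carried north by tile at (x, y): Pascal's triangle mod 2
val = {p: int(t[1]["N"][0][1]) for p, t in asm.items() if t[0].startswith("T")}
print(val[(1, 1)], val[(2, 1)], val[(2, 2)])
```

    The cooperative two-glue requirement is what lets a rule tile "read" both its south and west inputs; the kinetic models the review compares against replace this hard threshold with attachment and detachment rates.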

  17. Implementing a modeling software for animated protein-complex interactions using a physics simulation library.

    PubMed

    Ueno, Yutaka; Ito, Shuntaro; Konagaya, Akihiko

    2014-12-01

    To better understand the behaviors and structural dynamics of proteins within a cell, novel software tools are being developed that can create molecular animations based on the findings of structural biology. This study describes our method, developed from earlier prototypes, for detecting collisions and examining the soft-body dynamics of molecular models. The code was implemented with a software development toolkit for rigid-body dynamics simulation and a three-dimensional graphics library. The essential functions of the target software system included the basic molecular modeling environment, collision detection in the molecular models, and physical simulations of the movement of the model. Taking advantage of recent software technologies such as physics simulation modules and an interpreted scripting language, the functions required for accurate and meaningful molecular animation were implemented efficiently.

  18. The SAM framework: modeling the effects of management factors on human behavior in risk analysis.

    PubMed

    Murphy, D M; Paté-Cornell, M E

    1996-08-01

    Complex engineered systems, such as nuclear reactors and chemical plants, have the potential for catastrophic failure with disastrous consequences. In recent years, human and management factors have been recognized as frequent root causes of major failures in such systems. However, classical probabilistic risk analysis (PRA) techniques do not account for the underlying causes of these errors because they focus on the physical system and do not explicitly address the link between components' performance and organizational factors. This paper describes a general approach for addressing the human and management causes of system failure, called the SAM (System-Action-Management) framework. Beginning with a quantitative risk model of the physical system, SAM expands the scope of analysis to incorporate first the decisions and actions of individuals that affect the physical system. SAM then links management factors (incentives, training, policies and procedures, selection criteria, etc.) to those decisions and actions. The focus of this paper is on four quantitative models of action that describe this last relationship. These models address the formation of intentions for action and their execution as a function of the organizational environment. Intention formation is described by three alternative models: a rational model, a bounded rationality model, and a rule-based model. The execution of intentions is then modeled separately. These four models are designed to assess the probabilities of individual actions from the perspective of management, thus reflecting the uncertainties inherent to human behavior. The SAM framework is illustrated for a hypothetical case of hazardous materials transportation. This framework can be used as a tool to increase the safety and reliability of complex technical systems by modifying the organization, rather than, or in addition to, re-designing the physical system.
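
    The quantitative core of such a framework can be sketched as a simple probability chain: a management factor shifts the probability of an operator action, which shifts the conditional probability of system failure. The numbers below are invented for illustration, not taken from the SAM paper or its hazardous-materials case study.

```python
# Sketch: total probability of failure over the two action outcomes,
# compared under two hypothetical management policies.

def p_failure(p_action, p_fail_given_action, p_fail_given_none):
    """Law of total probability over whether the risky action occurs."""
    return (p_action * p_fail_given_action
            + (1 - p_action) * p_fail_given_none)

# Hypothetical numbers: under schedule pressure the operator skips an
# inspection more often, raising the chance of an undetected fault.
baseline = p_failure(p_action=0.05, p_fail_given_action=0.20,
                     p_fail_given_none=0.01)
pressured = p_failure(p_action=0.30, p_fail_given_action=0.20,
                      p_fail_given_none=0.01)
print(baseline, pressured)  # management policy changes the risk estimate
```

    SAM's contribution is to model the first factor, the action probability, explicitly as a function of organizational variables (incentives, training, procedures) via its rational, bounded-rationality and rule-based models, rather than treating it as a fixed input.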

  19. Utilizing Educational Theoretical Models to Support Effective Physical Education Pedagogy

    ERIC Educational Resources Information Center

    Usher, Wayne; Edwards, Allan; de Meyrick, Bianca

    2015-01-01

    Physical education (PE) pedagogy has traditionally been viewed as drill-style teaching. Whilst this traditional pedagogical approach provides exposure to various skills, used within a school-based PE and sporting context, it does not demonstrate a student's competence associated with their ability to apply these skills in complex game situations.…

  20. Physical Modeling in the Geological Sciences: An Annotated Bibliography. CEGS Programs Publication No. 16.

    ERIC Educational Resources Information Center

    Charlesworth, L. J., Jr.; Passero, Richard Nicholas

    The bibliography identifies, describes, and evaluates devices and techniques discussed in the world's literature to demonstrate or simulate natural physical geologic phenomena in classroom or laboratory teaching or research situations. The apparatus involved ranges from the very simple and elementary to the highly complex, sophisticated, and…

  1. Development of the US3D Code for Advanced Compressible and Reacting Flow Simulations

    NASA Technical Reports Server (NTRS)

    Candler, Graham V.; Johnson, Heath B.; Nompelis, Ioannis; Subbareddy, Pramod K.; Drayna, Travis W.; Gidzak, Vladimyr; Barnhardt, Michael D.

    2015-01-01

    Aerothermodynamics and hypersonic flows involve complex multi-disciplinary physics, including finite-rate gas-phase kinetics, finite-rate internal energy relaxation, gas-surface interactions with finite-rate oxidation and sublimation, transition to turbulence, large-scale unsteadiness, shock-boundary layer interactions, fluid-structure interactions, and thermal protection system ablation and thermal response. Many of the flows have a large range of length and time scales, requiring large computational grids, implicit time integration, and large solution run times. The University of Minnesota NASA US3D code was designed for the simulation of these complex, highly-coupled flows. It has many of the features of the well-established DPLR code, but uses unstructured grids and has many advanced numerical capabilities and physical models for multi-physics problems. The main capabilities of the code are described, the physical modeling approaches are discussed, the different types of numerical flux functions and time integration approaches are outlined, and the parallelization strategy is overviewed. Comparisons between US3D and the NASA DPLR code are presented, and several advanced simulations are presented to illustrate some of the novel features of the code.

  2. Enriching gender in physics education research: A binary past and a complex future

    NASA Astrophysics Data System (ADS)

    Traxler, Adrienne

    2017-01-01

    This talk draws on research in physics, science education, and women's studies to propose a more nuanced treatment of gender in physics education research (PER). A growing body of PER has examined gender differences in students' participation, performance, and attitudes toward physics. Though valuable, this body of work often follows a "binary deficit" model of gender, where the achievements of men are implicitly taken as the most appropriate standard and where individual experiences and student identities are undervalued. I will discuss more up-to-date viewpoints on gender from other fields, as well as work on the intersection of identities [e.g., gender with race and ethnicity, or with lesbian, gay, bisexual, and transgender (LGBT) status]. A few PER studies examine the intersection of gender and race, and identify the lack of a unitary identity as a key challenge of "belonging" in physics. Acknowledging this complexity of identity allows further critique of the binary deficit model, which casts gender as a fixed binary trait and frames research questions around investigating deficiencies in women rather than issues of systemic bias. More nuanced models of gender allow a greater range and fluidity of gender identities, and highlight deficiencies in data that exclude women's experiences. I will conclude by suggesting new investigations that might build on an expanded gender framework in PER.

  3. Fault management for the Space Station Freedom control center

    NASA Technical Reports Server (NTRS)

    Clark, Colin; Jowers, Steven; Mcnenny, Robert; Culbert, Chris; Kirby, Sarah; Lauritsen, Janet

    1992-01-01

    This paper describes model based reasoning fault isolation in complex systems using automated digraph analysis. It discusses the use of the digraph representation as the paradigm for modeling physical systems and a method for executing these failure models to provide real-time failure analysis. It also discusses the generality, ease of development and maintenance, complexity management, and susceptibility to verification and validation of digraph failure models. It specifically describes how a NASA-developed digraph evaluation tool and an automated process working with that tool can identify failures in a monitored system when supplied with one or more fault indications. This approach is well suited to commercial applications of real-time failure analysis in complex systems because it is both powerful and cost effective.
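
    The digraph fault-isolation idea described here can be sketched compactly: a failure at node u can explain an indication at node v only if v is reachable from u in the failure-propagation digraph, so candidate root causes are the nodes that reach every indicated node. The tiny system model below is hypothetical, not the NASA tool's representation.

```python
# Sketch: isolate candidate root causes in a failure-propagation digraph.

def reachable(graph, start):
    """All nodes reachable from start (including start) via DFS."""
    seen, stack = {start}, [start]
    while stack:
        for nxt in graph.get(stack.pop(), []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

def root_causes(graph, indications):
    """Nodes whose failure would propagate to every observed indication."""
    return {u for u in graph if indications <= reachable(graph, u)}

# Hypothetical failure-propagation model of a small fluid loop.
graph = {
    "pump":     ["pressure"],
    "valve":    ["pressure", "flow"],
    "pressure": ["alarm_P"],
    "flow":     ["alarm_F"],
    "alarm_P":  [],
    "alarm_F":  [],
}

print(root_causes(graph, {"alarm_P", "alarm_F"}))  # → {'valve'}
```

    With both alarms raised, only the valve reaches both indications, so it is the lone candidate; with a single alarm the candidate set widens, which is why such tools refine diagnoses as indications accumulate in real time.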

  4. Three-dimensional computer simulation of radiostereometric analysis (RSA) in distal radius fractures.

    PubMed

    Madanat, Rami; Moritz, Niko; Aro, Hannu T

    2007-01-01

    Physical phantom models have conventionally been used to determine the accuracy and precision of radiostereometric analysis (RSA) in various orthopaedic applications. Using a phantom model of a fracture of the distal radius it has previously been shown that RSA is a highly accurate and precise method for measuring both translation and rotation in three-dimensions (3-D). The main shortcoming of a physical phantom model is its inability to mimic complex 3-D motion. The goal of this study was to create a realistic computer model for preoperative planning of RSA studies and to test the accuracy of RSA in measuring complex movements in fractures of the distal radius using this new model. The 3-D computer model was created from a set of tomographic scans. The simulation of the radiographic imaging was performed using ray-tracing software (POV-Ray). RSA measurements were performed according to standard protocol. Using a two-part fracture model (AO/ASIF type A2), it was found that for simple movements in one axis, translations in the range of 25microm-2mm could be measured with an accuracy of +/-2microm. Rotations ranging from 16 degrees to 2 degrees could be measured with an accuracy of +/-0.015 degrees . Using a three-part fracture model the corresponding values of accuracy were found to be +/-4microm and +/-0.031 degrees for translation and rotation, respectively. For complex 3-D motion in a three-part fracture model (AO/ASIF type C1) the accuracy was +/-6microm for translation and +/-0.120 degrees for rotation. The use of 3-D computer modelling can provide a method for preoperative planning of RSA studies in complex fractures of the distal radius and in other clinical situations in which the RSA method is applicable.

  5. Performance of GeantV EM Physics Models

    NASA Astrophysics Data System (ADS)

    Amadio, G.; Ananya, A.; Apostolakis, J.; Aurora, A.; Bandieramonte, M.; Bhattacharyya, A.; Bianchini, C.; Brun, R.; Canal, P.; Carminati, F.; Cosmo, G.; Duhem, L.; Elvira, D.; Folger, G.; Gheata, A.; Gheata, M.; Goulas, I.; Iope, R.; Jun, S. Y.; Lima, G.; Mohanty, A.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.; Zhang, Y.

    2017-10-01

    The recent progress in parallel hardware architectures with deeper vector pipelines or many-core technologies brings opportunities for HEP experiments to take advantage of SIMD and SIMT computing models. Launched in 2013, the GeantV project studies performance gains from propagating multiple particles in parallel, improving instruction throughput and data locality in HEP event simulation on modern parallel hardware architectures. Due to the complexity of the geometry description and physics algorithms of a typical HEP application, performance analysis is indispensable in identifying factors limiting parallel execution. In this report, we present design considerations and preliminary computing performance of GeantV physics models on coprocessors (Intel Xeon Phi and NVIDIA GPUs) as well as on mainstream CPUs.

  6. High accuracy mantle convection simulation through modern numerical methods - II: realistic models and problems

    NASA Astrophysics Data System (ADS)

    Heister, Timo; Dannberg, Juliane; Gassmöller, Rene; Bangerth, Wolfgang

    2017-08-01

    Computations have helped elucidate the dynamics of Earth's mantle for several decades already. The numerical methods that underlie these simulations have greatly evolved within this time span, and today include dynamically changing and adaptively refined meshes, sophisticated and efficient solvers, and parallelization to large clusters of computers. At the same time, many of the methods - discussed in detail in a previous paper in this series - were developed and tested primarily using model problems that lack many of the complexities that are common to the realistic models our community wants to solve today. With several years of experience solving complex and realistic models, we here revisit some of the algorithm designs of the earlier paper and discuss the incorporation of more complex physics. In particular, we re-consider time stepping and mesh refinement algorithms, evaluate approaches to incorporate compressibility, and discuss dealing with strongly varying material coefficients, latent heat, and how to track chemical compositions and heterogeneities. Taken together and implemented in a high-performance, massively parallel code, the techniques discussed in this paper then allow for high resolution, 3-D, compressible, global mantle convection simulations with phase transitions, strongly temperature dependent viscosity and realistic material properties based on mineral physics data.

  7. Models for the modern power grid

    NASA Astrophysics Data System (ADS)

    Nardelli, Pedro H. J.; Rubido, Nicolas; Wang, Chengwei; Baptista, Murilo S.; Pomalaza-Raez, Carlos; Cardieri, Paulo; Latva-aho, Matti

    2014-10-01

    This article reviews different kinds of models for the electric power grid that can be used to understand the modern power system, the smart grid. From the physical network to abstract energy markets, we identify in the literature different aspects that co-determine the spatio-temporal multilayer dynamics of the power system. We start our review by showing how the generation, transmission and distribution characteristics of the traditional power grid are already subject to complex behaviour, appearing as a result of the interplay between the dynamics of the nodes and the network topology, namely synchronisation and cascade effects. When dealing with smart grids, the system complexity increases even more: on top of the physical network of power lines and controllable sources of electricity, the modernisation brings information networks, renewable intermittent generation, market liberalisation, prosumers, among other aspects. In this case, we forecast a dynamical co-evolution of the smart grid and other kinds of networked systems that cannot be understood in isolation. This review compiles recent results that model electric power grids as complex systems, going beyond purely technological aspects. From this perspective, we then indicate possible ways to incorporate the diverse co-evolving systems into the smart grid model using, for example, network theory and multi-agent simulation.

  8. Enriching gender in physics education research: A binary past and a complex future

    NASA Astrophysics Data System (ADS)

    Traxler, Adrienne L.; Cid, Ximena C.; Blue, Jennifer; Barthelemy, Ramón

    2016-12-01

    [This paper is part of the Focused Collection on Gender in Physics.] In this article, we draw on previous reports from physics, science education, and women's studies to propose a more nuanced treatment of gender in physics education research (PER). A growing body of PER examines gender differences in participation, performance, and attitudes toward physics. We have three critiques of this work: (i) it does not question whether the achievements of men are the most appropriate standard, (ii) individual experiences and student identities are undervalued, and (iii) the binary model of gender is not questioned. Driven by these critiques, we propose a conception of gender that is more up to date with other fields and discuss gender as performance as an extended example. We also discuss work on the intersection of identities [e.g., gender with race and ethnicity, socioeconomic status, lesbian, gay, bisexual, and transgender (LGBT) status], much of which has been conducted outside of physics. Within PER, some studies examine the intersection of gender and race, and identify the lack of a single identity as a key challenge of "belonging" in physics. Acknowledging this complexity enables us to further critique what we term a binary gender deficit model. This framework, which is implicit in much of the gender-based PER, casts gender as a fixed binary trait and suggests that women are deficient in characteristics necessary to succeed. Alternative models of gender allow a greater range and fluidity of gender identities, and highlight deficiencies in data that exclude women's experiences. We suggest new investigations that diverge from this expanded gender framework in PER.

  9. IMPETUS - Interactive MultiPhysics Environment for Unified Simulations.

    PubMed

    Ha, Vi Q; Lykotrafitis, George

    2016-12-08

    We introduce IMPETUS - Interactive MultiPhysics Environment for Unified Simulations, an object-oriented, easy-to-use, high-performance C++ program for three-dimensional simulations of complex physical systems that can benefit a large variety of research areas, especially cell mechanics. The program implements cross-communication between locally interacting particles and continuum models residing in the same physical space, while a network facilitates long-range particle interactions. The Message Passing Interface is used for inter-processor communication in all simulations. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Coupled Mechanical-Electrochemical-Thermal Modeling for Accelerated Design of EV Batteries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santhanagopalan, Shriram; Zhang, Chao; Kim, Gi-Heon

    2015-05-03

    This presentation provides an overview of mechanical-electrochemical-thermal (M-ECT) modeling efforts. The physical phenomena occurring in a battery are numerous and complex, and they operate at different scales (particle, electrode, cell, and pack). A better understanding, through modeling, of the interplay between the different physics occurring at different scales could provide insight for designing improved batteries for electric vehicles. Work funded by the U.S. DOE has resulted in the development of computer-aided engineering (CAE) tools to accelerate the electrochemical and thermal design of batteries; mechanical modeling is under way. Three competitive CAE tools are now commercially available.

  11. Foucault as Complexity Theorist: Overcoming the Problems of Classical Philosophical Analysis

    ERIC Educational Resources Information Center

    Olssen, Mark

    2008-01-01

    This article explores the affinities and parallels between Foucault's Nietzschean view of history and models of complexity developed in the physical sciences in the twentieth century. It claims that Foucault's rejection of structuralism and Marxism can be explained as a consequence of his own approach which posits a radical ontology whereby the…

  12. Monte Carlo simulations of flexible polyanions complexing with whey proteins at their isoelectric point.

    PubMed

    de Vries, R

    2004-02-15

    Electrostatic complexation of flexible polyanions with the whey proteins alpha-lactalbumin and beta-lactoglobulin is studied using Monte Carlo simulations. The proteins are considered at their respective isoelectric points. Discrete charges on the model polyelectrolytes and proteins interact through Debye-Hückel potentials. Protein excluded volume is taken into account through a coarse-grained model of the protein shape. Consistent with experimental results, it is found that alpha-lactalbumin complexes much more strongly than beta-lactoglobulin. For alpha-lactalbumin, strong complexation is due to localized binding to a single large positive "charge patch," whereas for beta-lactoglobulin, weak complexation is due to diffuse binding to multiple smaller charge patches. Copyright 2004 American Institute of Physics.
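
    The screened electrostatics underlying such simulations can be sketched as follows (a minimal Python illustration, not the paper's actual code; the parameter values and function name are assumptions):

```python
import math

def debye_huckel_energy(q1, q2, r, bjerrum_nm=0.714, kappa_inv_nm=1.0):
    """Screened Coulomb (Debye-Hueckel) pair energy in units of kT.

    q1, q2: charges in units of the elementary charge
    r: separation in nm; bjerrum_nm: Bjerrum length of water at 25 C
    kappa_inv_nm: inverse Debye screening length (set by ionic strength)
    """
    return bjerrum_nm * q1 * q2 * math.exp(-kappa_inv_nm * r) / r

# An opposite-charge pair attracts, and screening weakens the
# interaction rapidly with distance:
u_near = debye_huckel_energy(+1, -1, r=0.5)
u_far = debye_huckel_energy(+1, -1, r=5.0)
```

    In a model of this kind, many such discrete charges on the polyelectrolyte and the coarse-grained protein would be summed pairwise.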

  13. A Physics-Based Vibrotactile Feedback Library for Collision Events.

    PubMed

    Park, Gunhyuk; Choi, Seungmoon

    2017-01-01

    We present PhysVib: a software solution on the mobile platform that extends an open-source physics engine in a multi-rate rendering architecture for automatic vibrotactile feedback upon collision events. PhysVib runs concurrently with a physics engine at a low update rate and generates vibrotactile feedback commands at a high update rate based on the simulation results of the physics engine, using an exponentially-decaying sinusoidal model. We demonstrate through a user study that this vibration model is better suited to our purpose, in terms of perceptual quality, than more complex models based on sound synthesis. We also evaluated the perceptual performance of PhysVib by comparing eight vibrotactile rendering methods. Experimental results suggest that PhysVib enables more realistic vibrotactile feedback than the other methods with respect to perceived similarity to the visual events. PhysVib is an effective solution for providing physically plausible vibrotactile responses while reducing application development time to a great extent.
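
    The exponentially-decaying sinusoidal model named above has a simple closed form; a hedged sketch follows (the amplitude, frequency, and decay values are illustrative, not PhysVib's actual mapping from collision impulses):

```python
import math

def decaying_sinusoid(t, amplitude=1.0, frequency_hz=250.0, decay_rate=40.0):
    """Vibrotactile waveform a(t) = A * exp(-d*t) * sin(2*pi*f*t)."""
    return amplitude * math.exp(-decay_rate * t) * math.sin(
        2.0 * math.pi * frequency_hz * t)

# Sample a short burst at 1 kHz, as a high-rate renderer might do
# immediately after a collision event at t = 0.
samples = [decaying_sinusoid(i / 1000.0) for i in range(100)]
```

    The envelope decays exponentially, so the burst is strongest right after the collision and fades within a fraction of a second.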

  14. Diagnosis by integrating model-based reasoning with knowledge-based reasoning

    NASA Technical Reports Server (NTRS)

    Bylander, Tom

    1988-01-01

    Our research investigates how observations can be categorized by integrating a qualitative physical model with experiential knowledge. Our domain is diagnosis of pathologic gait in humans, in which the observations are the gait motions, muscle activity during gait, and physical exam data, and the diagnostic hypotheses are the potential muscle weaknesses, muscle mistimings, and joint restrictions. Patients with underlying neurological disorders typically have several malfunctions. Among the problems that need to be faced are: the ambiguity of the observations, the ambiguity of the qualitative physical model, correspondence of the observations and hypotheses to the qualitative physical model, the inherent uncertainty of experiential knowledge, and the combinatorics involved in forming composite hypotheses. Our system divides the work so that the knowledge-based reasoning suggests which hypotheses appear more likely than others, the qualitative physical model is used to determine which hypotheses explain which observations, and another process combines these functionalities to construct a composite hypothesis based on explanatory power and plausibility. We speculate that the reasoning architecture of our system is generally applicable to complex domains in which a less-than-perfect physical model and less-than-perfect experiential knowledge need to be combined to perform diagnosis.

  15. Extracting physical chemistry from mechanics: a new approach to investigate DNA interactions with drugs and proteins in single molecule experiments.

    PubMed

    Rocha, M S

    2015-09-01

    In this review we focus on the idea of establishing connections between the mechanical properties of DNA-ligand complexes and the physical chemistry of DNA-ligand interactions. This type of connection is interesting because it opens the possibility of performing a robust characterization of such interactions by using only one experimental technique: single molecule stretching. Furthermore, it also opens new possibilities for comparing results obtained by very different approaches, in particular when comparing single molecule techniques to ensemble-averaging techniques. We start the manuscript by reviewing important concepts of DNA mechanics, from the basic mechanical properties to the Worm-Like Chain model. Next we review the basic concepts of the physical chemistry of DNA-ligand interactions, revisiting the most important models used to analyze binding data and discussing their binding isotherms. Then, we discuss the basic features of the single molecule techniques most used to stretch DNA-ligand complexes and to obtain "force × extension" data, from which the mechanical properties of the complexes can be determined. We also discuss the characteristics of the main types of interactions that can occur between DNA and ligands, from covalent binding to simple electrostatically driven interactions. Finally, we present a historical survey of the attempts to connect mechanics to physical chemistry for DNA-ligand systems, emphasizing a recently developed fitting approach useful for connecting the persistence length of DNA-ligand complexes to the physicochemical properties of the interaction. Such an approach can in principle be used for any type of ligand, from drugs to proteins, even if multiple binding modes are present.
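
    For reference, the Worm-Like Chain model reviewed above is most often applied through the standard Marko-Siggia interpolation formula relating stretching force to relative extension; a minimal sketch (room-temperature kT ≈ 4.11 pN·nm; the function name and example values are illustrative):

```python
def wlc_force_pN(x_rel, persistence_length_nm, kT_pN_nm=4.11):
    """Marko-Siggia worm-like chain force (pN) at relative extension x = z/L."""
    if not 0.0 <= x_rel < 1.0:
        raise ValueError("relative extension must lie in [0, 1)")
    return (kT_pN_nm / persistence_length_nm) * (
        0.25 / (1.0 - x_rel) ** 2 - 0.25 + x_rel)

# Bare B-DNA has a persistence length near 50 nm; a ligand that locally
# bends DNA lowers the apparent persistence length, which raises the
# force needed to reach the same relative extension.
f_bare = wlc_force_pN(0.9, persistence_length_nm=50.0)
f_complex = wlc_force_pN(0.9, persistence_length_nm=25.0)
```

    Fitting this expression to measured force-extension curves is how the persistence length of a DNA-ligand complex is typically extracted.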

  16. Statistical mechanics of complex neural systems and high dimensional data

    NASA Astrophysics Data System (ADS)

    Advani, Madhu; Lahiri, Subhaneil; Ganguli, Surya

    2013-03-01

    Recent experimental advances in neuroscience have opened new vistas into the immense complexity of neuronal networks. This proliferation of data challenges us on two parallel fronts. First, how can we form adequate theoretical frameworks for understanding how dynamical network processes cooperate across widely disparate spatiotemporal scales to solve important computational problems? Second, how can we extract meaningful models of neuronal systems from high dimensional datasets? To aid in these challenges, we give a pedagogical review of a collection of ideas and theoretical methods arising at the intersection of statistical physics, computer science and neurobiology. We introduce the interrelated replica and cavity methods, which originated in statistical physics as powerful ways to quantitatively analyze large highly heterogeneous systems of many interacting degrees of freedom. We also introduce the closely related notion of message passing in graphical models, which originated in computer science as a distributed algorithm capable of solving large inference and optimization problems involving many coupled variables. We then show how both the statistical physics and computer science perspectives can be applied in a wide diversity of contexts to problems arising in theoretical neuroscience and data analysis. Along the way we discuss spin glasses, learning theory, illusions of structure in noise, random matrices, dimensionality reduction and compressed sensing, all within the unified formalism of the replica method. Moreover, we review recent conceptual connections between message passing in graphical models, and neural computation and learning. Overall, these ideas illustrate how statistical physics and computer science might provide a lens through which we can uncover emergent computational functions buried deep within the dynamical complexities of neuronal networks.

  17. WE-D-303-00: Computational Phantoms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, John; Brigham and Women’s Hospital and Dana-Farber Cancer Institute, Boston, MA

    2015-06-15

    Modern medical physics deals with complex problems such as 4D radiation therapy and imaging quality optimization. Such problems involve a large number of radiological parameters and anatomical and physiological breathing patterns. A major challenge is how to develop, test, evaluate and compare various new imaging and treatment techniques, which often involves testing over a large range of radiological parameters as well as varying patient anatomies and motions. It would be extremely challenging, if not impossible, both ethically and practically, to test every combination of parameters and every task on every type of patient under clinical conditions. Computer-based simulation using computational phantoms offers a practical technique with which to evaluate, optimize, and compare imaging technologies and methods. Within a simulation, the computerized phantom provides a virtual model of the patient’s anatomy and physiology. Imaging data can be generated from it as if it were a live patient, using accurate models of the physics of the imaging and treatment process. With sophisticated simulation algorithms, it is possible to perform virtual experiments entirely on the computer. By serving as virtual patients, computational phantoms hold great promise in solving some of the most complex problems in modern medical physics. In this proposed symposium, we will present the history and recent developments of computational phantom models, share experiences in their application to advanced imaging and radiation applications, and discuss their promises and limitations. Learning Objectives: Understand the need and requirements of computational phantoms in medical physics research. Discuss the developments and applications of computational phantoms. Know the promises and limitations of computational phantoms in solving complex problems.

  18. Computational Model of Secondary Palate Fusion and Disruption

    EPA Science Inventory

    Morphogenetic events are driven by cell-generated physical forces and complex cellular dynamics. To improve our capacity to predict developmental effects from cellular alterations, we built a multi-cellular agent-based model in CompuCell3D that recapitulates the cellular networks...

  19. Hierarchical Model for the Evolution of Cloud Complexes

    NASA Astrophysics Data System (ADS)

    Sánchez D., Néstor M.; Parravano, Antonio

    1999-01-01

    The structure of cloud complexes appears to be well described by a tree structure (i.e., a simplified "stick man") representation when the image is partitioned into "clouds." In this representation, the parent-child relationships are assigned according to containment. Based on this picture, a hierarchical model for the evolution of cloud complexes, including star formation, is constructed. The model follows the mass evolution of each substructure by computing its mass exchange with its parent and children. The parent-child mass exchange (evaporation or condensation) depends on the radiation density at the interface. At the end of the "lineage," stars may be born or die, so that there is a nonstationary mass flow in the hierarchical structure. For a variety of parameter sets the system follows the same series of steps to transform diffuse gas into stars, and the regulation of the mass flux in the tree by previously formed stars dominates the evolution of the star formation. For the set of parameters used here as a reference model, the system tends to produce initial mass functions (IMFs) that have a maximum at a mass that is too high (~2 M⊙), and the characteristic times for evolution seem too long. We show that these undesired properties can be improved by adjusting the model parameters. The model requires further physics (e.g., allowing for multiple stellar systems and clump collisions) before a definitive comparison with observations can be made. Instead, the emphasis here is to illustrate some general properties of this kind of complex nonlinear model of the star formation process. Notwithstanding the simplifications involved, the model reveals an essential feature that will likely remain if additional physical processes are included: the detailed behavior of the system is very sensitive to variations in the initial and external conditions, suggesting that a "universal" IMF is very unlikely.
When an ensemble of IMFs corresponding to a variety of initial or external conditions is examined, the slope of the IMF at high masses shows variations comparable to the range derived from observational data. These facts suggest that the considered physical processes (phase transitions regulated by the radiation field) may play a role in the global evolution of molecular complexes.

  20. Simulation Based Earthquake Forecasting with RSQSim

    NASA Astrophysics Data System (ADS)

    Gilchrist, J. J.; Jordan, T. H.; Dieterich, J. H.; Richards-Dinger, K. B.

    2016-12-01

    We are developing a physics-based forecasting model for earthquake ruptures in California. We employ the 3D boundary element code RSQSim to generate synthetic catalogs with millions of events that span up to a million years. The simulations incorporate rate-state fault constitutive properties in complex, fully interacting fault systems. The Unified California Earthquake Rupture Forecast Version 3 (UCERF3) model and data sets are used for calibration of the catalogs and specification of fault geometry. Fault slip rates match the UCERF3 geologic slip rates, and catalogs are tuned such that earthquake recurrence matches the UCERF3 model. Utilizing the Blue Waters Supercomputer, we produce a suite of million-year catalogs to investigate the epistemic uncertainty in the physical parameters used in the simulations. In particular, values of the rate- and state-friction parameters a and b, the initial shear and normal stress, as well as the earthquake slip speed, are varied over several simulations. In addition to testing multiple models with homogeneous values of the physical parameters, the parameters a, b, and the normal stress are varied with depth as well as in heterogeneous patterns across the faults. Cross validation of UCERF3 and RSQSim is performed within the SCEC Collaboratory for Interseismic Simulation and Modeling (CISM) to determine the effect of the uncertainties in physical parameters, observed in the field and measured in the lab, on the uncertainties in probabilistic forecasting. We are particularly interested in the short-term hazards of multi-event sequences due to complex faulting and multi-fault ruptures.
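
    The rate- and state-dependent friction at the core of such simulations follows the standard Dieterich formulation; a minimal sketch (the parameter values below are illustrative only, not the calibrated simulation values):

```python
import math

def rate_state_friction(v, theta, mu0=0.6, a=0.010, b=0.014, v0=1e-6, dc=1e-2):
    """Friction coefficient mu = mu0 + a*ln(v/v0) + b*ln(v0*theta/dc).

    v: slip speed (m/s); theta: state variable (s); dc: characteristic slip (m).
    """
    return mu0 + a * math.log(v / v0) + b * math.log(v0 * theta / dc)

# At steady state theta = dc / v; with b > a the fault is velocity-weakening,
# the condition that allows earthquake nucleation.
mu_slow = rate_state_friction(1e-7, theta=1e-2 / 1e-7)
mu_fast = rate_state_friction(1e-5, theta=1e-2 / 1e-5)
```

    Varying a, b, and the normal stress, as the abstract describes, shifts where faults sit relative to this velocity-weakening threshold.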

  1. OpenFOAM: Open source CFD in research and industry

    NASA Astrophysics Data System (ADS)

    Jasak, Hrvoje

    2009-12-01

    The current focus of development in industrial Computational Fluid Dynamics (CFD) is the integration of CFD into computer-aided product development, geometrical optimisation, robust design and similar activities. On the other hand, research in CFD aims to extend the boundaries of practical engineering use into "non-traditional" areas. The requirements of computational flexibility and code integration are contradictory: a change of coding paradigm, with object orientation, library components and equation mimicking, is proposed as a way forward. This paper describes OpenFOAM, a C++ object-oriented library for Computational Continuum Mechanics (CCM) developed by the author. Efficient and flexible implementation of complex physical models is achieved by mimicking the form of partial differential equations in software, with code functionality provided in library form. The open-source deployment and development model allows the user to achieve the desired versatility in physical modeling without sacrificing complex geometry support and execution efficiency.

  2. Monitoring a Complex Physical System using a Hybrid Dynamic Bayes Net

    NASA Technical Reports Server (NTRS)

    Lerner, Uri; Moses, Brooks; Scott, Maricia; McIlraith, Sheila; Koller, Daphne

    2005-01-01

    The Reverse Water Gas Shift system (RWGS) is a complex physical system designed to produce oxygen from the carbon dioxide atmosphere on Mars. If sent to Mars, it would operate without human supervision, thus requiring a reliable automated system for monitoring and control. The RWGS presents many challenges typical of real-world systems, including: noisy and biased sensors, nonlinear behavior, effects that are manifested over different time granularities, and unobservability of many important quantities. In this paper we model the RWGS using a hybrid (discrete/continuous) Dynamic Bayesian Network (DBN), where the state at each time slice contains 33 discrete and 184 continuous variables. We show how the system state can be tracked using probabilistic inference over the model. We discuss how to deal with the various challenges presented by the RWGS, providing a suite of techniques that are likely to be useful in a wide range of applications. In particular, we describe a general framework for dealing with nonlinear behavior using numerical integration techniques, extending the successful Unscented Filter. We also show how to use a fixed-point computation to deal with effects that develop at different time scales, specifically rapid changes occurring during slowly changing processes. We test our model using real data collected from the RWGS, demonstrating the feasibility of hybrid DBNs for monitoring complex real-world physical systems.
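
    The Unscented Filter mentioned above rests on the unscented transform, which propagates a Gaussian through a nonlinearity via deterministically chosen sigma points; a simplified sketch of the standard formulation (not the paper's extended version) follows:

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=1.0, beta=2.0, kappa=0.0):
    """Estimate mean and covariance of f(x) for x ~ N(mean, cov) via sigma points."""
    n = len(mean)
    lam = alpha ** 2 * (n + kappa) - n
    root = np.linalg.cholesky((n + lam) * cov)
    # 2n + 1 sigma points: the mean plus symmetric perturbations.
    sigma = np.vstack([mean, mean + root.T, mean - root.T])
    wm = np.full(2 * n + 1, 0.5 / (n + lam))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + 1.0 - alpha ** 2 + beta
    y = np.array([f(s) for s in sigma])
    y_mean = wm @ y
    d = y - y_mean
    y_cov = (wc[:, None] * d).T @ d
    return y_mean, y_cov

# For a linear map the transform is exact:
y_mean, y_cov = unscented_transform(np.array([1.0, 2.0]), np.eye(2),
                                    lambda x: 2.0 * x)
```

    A filter built on this transform avoids linearizing the system dynamics analytically, which is what makes it attractive for nonlinear hybrid models like the one described here.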

  3. Zipper model for the melting of thin films

    NASA Astrophysics Data System (ADS)

    Abdullah, Mikrajuddin; Khairunnisa, Shafira; Akbar, Fathan

    2016-01-01

    We propose an alternative model to Lindemann’s criterion for melting that explains the melting of thin films on the basis of a molecular zipper-like mechanism. Using this model, a unique criterion for melting is obtained. We compared the results of the proposed model with experimental data on melting points and heats of fusion for many materials and obtained interesting results. The notable point reported here is how complex physics problems can sometimes be modeled with simple everyday objects that seem to have no connection to them. This kind of approach is sometimes very important in physics education and should always be taught to undergraduate and graduate students.

  4. Physics-driven Spatiotemporal Regularization for High-dimensional Predictive Modeling: A Novel Approach to Solve the Inverse ECG Problem

    NASA Astrophysics Data System (ADS)

    Yao, Bing; Yang, Hui

    2016-12-01

    This paper presents a novel physics-driven spatiotemporal regularization (STRE) method for high-dimensional predictive modeling in complex healthcare systems. This model not only captures the physics-based interrelationship between time-varying explanatory and response variables that are distributed in the space, but also addresses the spatial and temporal regularizations to improve the prediction performance. The STRE model is implemented to predict the time-varying distribution of electric potentials on the heart surface based on the electrocardiogram (ECG) data from the distributed sensor network placed on the body surface. The model performance is evaluated and validated in both a simulated two-sphere geometry and a realistic torso-heart geometry. Experimental results show that the STRE model significantly outperforms other regularization models that are widely used in current practice such as Tikhonov zero-order, Tikhonov first-order and L1 first-order regularization methods.
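
    The Tikhonov zero-order baseline mentioned above reduces to ridge-regularized least squares and fits in a few lines; a toy sketch (random synthetic data, not the torso-heart geometry):

```python
import numpy as np

def tikhonov_zero_order(A, b, lam):
    """Solve min ||A x - b||^2 + lam * ||x||^2 via the regularized normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# An ill-conditioned toy problem: two nearly collinear columns make the
# unregularized normal equations numerically fragile, which the ridge
# term lam * I stabilizes.
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 10))
A[:, 1] = A[:, 0] + 1e-8 * rng.normal(size=20)
b = A @ np.ones(10)
x_reg = tikhonov_zero_order(A, b, lam=1e-3)
```

    First-order variants penalize a difference operator ||D x||^2 instead of ||x||^2; the STRE method of the paper adds spatial and temporal structure on top of this idea.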

  5. A meteorological distribution system for high-resolution terrestrial modeling (MicroMet)

    Treesearch

    Glen E. Liston; Kelly Elder

    2006-01-01

    An intermediate-complexity, quasi-physically based, meteorological model (MicroMet) has been developed to produce high-resolution (e.g., 30-m to 1-km horizontal grid increment) atmospheric forcings required to run spatially distributed terrestrial models over a wide variety of landscapes. The following eight variables, required to run most terrestrial models, are...

  6. Network model of bilateral power markets based on complex networks

    NASA Astrophysics Data System (ADS)

    Wu, Yang; Liu, Junyong; Li, Furong; Yan, Zhanxin; Zhang, Li

    2014-06-01

    The bilateral power transaction (BPT) mode has become a typical market organization with the restructuring of the electric power industry, and a proper model that captures its characteristics is urgently needed. However, such a model has been lacking because of this market organization's complexity. As a promising approach to modeling complex systems, complex networks could provide a sound theoretical framework for developing a proper simulation model. In this paper, a complex network model of the BPT market is proposed. In this model, a price advantage mechanism is a precondition. Unlike general commodity transactions, both the financial layer and the physical layer are considered in the model. Through simulation analysis, the feasibility and validity of the model are verified. At the same time, some typical statistical features of the BPT network are identified: the degree distribution follows a power law, the clustering coefficient is low, and the average path length is relatively long. Moreover, the topological stability of the BPT network is tested. The results show that the network displays topological robustness to random member failures while it is fragile against deliberate attacks, and that the network can resist cascading failure to some extent. These features are helpful for decision making and risk management in BPT markets.

  7. Bayesian approach to MSD-based analysis of particle motion in live cells.

    PubMed

    Monnier, Nilah; Guo, Syuan-Ming; Mori, Masashi; He, Jun; Lénárt, Péter; Bathe, Mark

    2012-08-08

    Quantitative tracking of particle motion using live-cell imaging is a powerful approach to understanding the mechanism of transport of biological molecules, organelles, and cells. However, inferring complex stochastic motion models from single-particle trajectories in an objective manner is nontrivial due to noise from sampling limitations and biological heterogeneity. Here, we present a systematic Bayesian approach to multiple-hypothesis testing of a general set of competing motion models based on particle mean-square displacements that automatically classifies particle motion, properly accounting for sampling limitations and correlated noise while appropriately penalizing model complexity according to Occam's Razor to avoid over-fitting. We test the procedure rigorously using simulated trajectories for which the underlying physical process is known, demonstrating that it chooses the simplest physical model that explains the observed data. Further, we show that computed model probabilities provide a reliability test for the downstream biological interpretation of associated parameter values. We subsequently illustrate the broad utility of the approach by applying it to disparate biological systems including experimental particle trajectories from chromosomes, kinetochores, and membrane receptors undergoing a variety of complex motions. This automated and objective Bayesian framework easily scales to large numbers of particle trajectories, making it ideal for classifying the complex motion of large numbers of single molecules and cells from high-throughput screens, as well as single-cell-, tissue-, and organism-level studies. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.
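
    The mean-square displacement at the heart of this classification is straightforward to compute; a sketch on synthetic Brownian data follows (the Bayesian multiple-hypothesis machinery itself is beyond this snippet):

```python
import numpy as np

def mean_square_displacement(traj, max_lag):
    """Time-averaged MSD of a trajectory of shape (T, d) for lags 1..max_lag."""
    return np.array([np.mean(np.sum((traj[lag:] - traj[:-lag]) ** 2, axis=1))
                     for lag in range(1, max_lag + 1)])

# Synthetic 2-D Brownian motion: MSD(t) = 4*D*t, i.e. anomalous exponent
# alpha = 1; subdiffusion gives alpha < 1, directed motion alpha > 1.
rng = np.random.default_rng(1)
dt, D = 0.01, 0.5
traj = np.cumsum(rng.normal(scale=np.sqrt(2 * D * dt), size=(10000, 2)), axis=0)
msd = mean_square_displacement(traj, max_lag=50)
lags = np.arange(1, 51) * dt
alpha = np.polyfit(np.log(lags), np.log(msd), 1)[0]   # slope on log-log axes
```

    The paper's contribution is to replace this kind of single power-law fit with a principled Bayesian comparison of competing motion models that accounts for correlated noise and penalizes complexity.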

  8. Applicability study of classical and contemporary models for effective complex permittivity of metal powders.

    PubMed

    Kiley, Erin M; Yakovlev, Vadim V; Ishizaki, Kotaro; Vaucher, Sebastien

    2012-01-01

    Microwave thermal processing of metal powders has recently been a topic of substantial interest; however, experimental data on the physical properties of mixtures involving metal particles are often unavailable. In this paper, we perform a systematic analysis of classical and contemporary models of complex permittivity of mixtures and discuss the use of these models for determining the effective permittivity of dielectric matrices with metal inclusions. Results from various mixture and core-shell mixture models are compared to experimental data for a titanium/stearic acid mixture and a boron nitride/graphite mixture (both obtained through original measurements), and for a tungsten/Teflon mixture (from the literature). We find that for certain experiments, the average error in determining the effective complex permittivity using Lichtenecker's, Maxwell Garnett's, Bruggeman's, Buchelnikov's, and Ignatenko's models is about 10%. This suggests that, for multiphysics computer models describing the processing of metal powder over the full temperature range, input data on effective complex permittivity obtained from direct measurement has, up to now, no substitute.
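
    Two of the mixing rules named above have compact standard forms: Lichtenecker's logarithmic rule, ln ε_eff = (1-f) ln ε_m + f ln ε_i, and the Maxwell Garnett formula for spherical inclusions. A sketch with complex permittivities; the numerical values below are illustrative, not the paper's measured data:

```python
import cmath

def lichtenecker(eps_m, eps_i, f):
    """Lichtenecker's logarithmic mixing rule:
    ln(eps_eff) = (1-f)*ln(eps_matrix) + f*ln(eps_inclusion)."""
    return cmath.exp((1 - f)*cmath.log(eps_m) + f*cmath.log(eps_i))

def maxwell_garnett(eps_m, eps_i, f):
    """Maxwell Garnett formula for spherical inclusions of volume fraction f."""
    num = eps_i + 2*eps_m + 2*f*(eps_i - eps_m)
    den = eps_i + 2*eps_m - f*(eps_i - eps_m)
    return eps_m * num / den

# a low-loss organic matrix with a lossy metallic-like inclusion
# (illustrative numbers only)
eps_matrix = 2.3 - 0.01j
eps_incl = 50.0 - 200.0j
for f in (0.0, 0.1, 0.3):
    print(f, lichtenecker(eps_matrix, eps_incl, f),
          maxwell_garnett(eps_matrix, eps_incl, f))
```

    Both formulas reduce to the matrix permittivity at f = 0 and to the inclusion permittivity at f = 1, a quick sanity check before comparing against measurements.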

  9. A computer model for one-dimensional mass and energy transport in and around chemically reacting particles, including complex gas-phase chemistry, multicomponent molecular diffusion, surface evaporation, and heterogeneous reaction

    NASA Technical Reports Server (NTRS)

    Cho, S. Y.; Yetter, R. A.; Dryer, F. L.

    1992-01-01

    Various chemically reacting flow problems highlighting chemical and physical fundamentals rather than flow geometry are presently investigated by means of a comprehensive mathematical model that incorporates multicomponent molecular diffusion, complex chemistry, and heterogeneous processes, in the interest of obtaining sensitivity-related information. The sensitivity equations were decoupled from those of the model, and then integrated one time-step behind the integration of the model equations, and analytical Jacobian matrices were applied to improve the accuracy of sensitivity coefficients that are calculated together with model solutions.
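
    The decoupled sensitivity approach can be illustrated on a scalar toy problem (our example, not the paper's reacting-particle chemistry): for dy/dt = -ky, the sensitivity S = ∂y/∂k obeys dS/dt = -kS - y, which is integrated one time-step behind the model equation using the analytical Jacobian ∂f/∂y = -k:

```python
import math

# Toy model: dy/dt = -k*y, sensitivity S = dy/dk obeys
# dS/dt = (df/dy)*S + df/dk = -k*S - y  (analytical Jacobian df/dy = -k).
# Illustrative parameter values; forward Euler for simplicity.
k, y0, dt, T = 2.0, 1.0, 1e-4, 1.0

y, S = y0, 0.0                       # S(0) = 0
for _ in range(int(T / dt)):
    y_prev = y
    y = y + dt * (-k * y)            # advance the model equation
    S = S + dt * (-k * S - y_prev)   # advance the decoupled sensitivity
                                     # equation using the state one step behind

exact_y = y0 * math.exp(-k * T)
exact_S = -T * exact_y               # closed form: dy/dk = -t*y(t)
```

    Because the exact solution y = y0 e^(-kt) gives S = -t y in closed form, the lagged integration can be checked directly against it.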

  10. Design and Validation of 3D Printed Complex Bone Models with Internal Anatomic Fidelity for Surgical Training and Rehearsal.

    PubMed

    Unger, Bertram J; Kraut, Jay; Rhodes, Charlotte; Hochman, Jordan

    2014-01-01

    Physical models of complex bony structures can be used for surgical skills training. Current models focus on surface rendering but suffer from a lack of internal accuracy due to limitations in the manufacturing process. We describe a technique for generating internally accurate rapid-prototyped anatomical models with solid and hollow structures from clinical and microCT data using a 3D printer. In a face validation experiment, otolaryngology residents drilled a cadaveric bone and its corresponding printed model. The printed bone models were deemed highly realistic representations across all measured parameters and the educational value of the models was strongly appreciated.

  11. Dynamic Fuzzy Model Development for a Drum-type Boiler-turbine Plant Through GK Clustering

    NASA Astrophysics Data System (ADS)

    Habbi, Ahcène; Zelmat, Mimoun

    2008-10-01

    This paper discusses a TS fuzzy model identification method for an industrial drum-type boiler plant using the GK fuzzy clustering approach. The fuzzy model is constructed from a set of input-output data that covers a wide operating range of the physical plant. The reference data is generated using a complex first-principles mathematical model that describes the key dynamical properties of the boiler-turbine system. The proposed fuzzy model is derived by means of the fuzzy clustering method, with particular attention to structure flexibility and model interpretability issues. This may provide the basis for a new way to design model-based control and diagnosis mechanisms for this complex nonlinear plant.

  12. Multi-Physics Demonstration Problem with the SHARP Reactor Simulation Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merzari, E.; Shemon, E. R.; Yu, Y. Q.

    This report describes the use of SHARP to perform a first-of-a-kind analysis of the core radial expansion phenomenon in an SFR. This effort required significant advances in the framework used to drive the coupled simulations, manipulate the mesh in response to the deformation of the geometry, and generate the necessary modified mesh files. Furthermore, the model geometry is fairly complex, and consistent mesh generation for the three physics modules required significant effort. Fully integrated simulations of a 7-assembly mini-core test problem have been performed, and the results are presented here. Physics models of a full-core model of the Advanced Burner Test Reactor (ABTR) have also been developed for each of the three physics modules. Standalone results of each of the three physics modules for the ABTR are presented here, which demonstrates the feasibility of the fully integrated simulation.

  13. Statistical Physics of Cascading Failures in Complex Networks

    NASA Astrophysics Data System (ADS)

    Panduranga, Nagendra Kumar

    Systems such as the power grid, the World Wide Web (WWW), and the Internet are categorized as complex systems because of the presence of a large number of interacting elements. For example, the WWW is estimated to have a billion webpages, and understanding the dynamics of such a large number of individual agents (whose individual interactions might not be fully known) is a challenging task. Complex network representations of these systems have proved to be of great utility. Statistical physics is the study of the emergence of macroscopic properties of systems from the characteristics of the interactions between individual molecules. Hence, statistical physics of complex networks has been an effective approach to study these systems. In this dissertation, I have used statistical physics to study two distinct phenomena in complex systems: i) cascading failures and ii) shortest paths in complex networks. Understanding cascading failures is considered to be one of the "holy grails" in the study of complex systems such as the power grid, transportation networks, and economic systems. Studying failures of these systems as percolation on complex networks has proved to be insightful. Previously, cascading failures have been studied extensively using two different models: k-core percolation and interdependent networks. The first part of this work combines the two models into a general model, solves it analytically, and validates the theoretical predictions through extensive computer simulations. The phase diagram of the percolation transition has been systematically studied as one varies the average local k-core threshold and the coupling between networks. The phase diagram of the combined processes is very rich and includes novel features that do not appear in the models which study each of the processes separately. 
For example, the phase diagram consists of first- and second-order transition regions separated by two tricritical lines that merge together and enclose a two-stage transition region. In the two-stage transition, the size of the giant component undergoes a first-order jump at a certain occupation probability followed by a continuous second-order transition at a smaller occupation probability. Furthermore, at certain fixed interdependencies, the percolation transition cycles from first-order to second-order to two-stage to first-order as the k-core threshold is increased. We set up the analytical equations describing the phase boundaries of the two-stage transition region and derive the critical exponents for each type of transition. Understanding the shortest paths between individual elements in systems like communication networks and social media networks is important in the study of information cascades in these systems. Often, large heterogeneity can be present in the connections between nodes in these networks. Certain sets of nodes can be more highly connected among themselves than with the nodes from other sets. These sets of nodes are often referred to as 'communities'. The second part of this work studies the effect of the presence of communities on the distribution of shortest paths in a network using a modular Erdős-Rényi network model. In this model, the number of communities and the degree of modularity of the network can be tuned using the parameters of the model. We find that the model reaches a percolation threshold while tuning the degree of modularity of the network, and that the distribution of the shortest paths in the network can be used as an indicator of how the communities are connected.
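
    The basic k-core percolation process described above is easy to simulate: occupy nodes at random, then iteratively prune any occupied node with fewer than k occupied neighbours. A stdlib-only sketch on an Erdős-Rényi graph with a uniform threshold (the dissertation's general model also allows heterogeneous thresholds and interdependent coupling, which this sketch omits):

```python
import random

def erdos_renyi(n, mean_degree, seed=42):
    """G(n, p) random graph with the given mean degree."""
    rng = random.Random(seed)
    p = mean_degree / (n - 1)
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j); adj[j].add(i)
    return adj

def k_core_fraction(adj, k, occupy_p, seed=7):
    """Occupy each node with probability occupy_p, then iteratively
    prune occupied nodes with fewer than k occupied neighbours;
    return the surviving fraction of the network."""
    rng = random.Random(seed)
    alive = [rng.random() < occupy_p for _ in range(len(adj))]
    changed = True
    while changed:
        changed = False
        for v in range(len(adj)):
            if alive[v]:
                deg = sum(1 for w in adj[v] if alive[w])
                if deg < k:
                    alive[v] = False
                    changed = True
    return sum(alive) / len(adj)

high = k_core_fraction(erdos_renyi(800, 10), 3, 1.0)   # dense occupation
low = k_core_fraction(erdos_renyi(800, 10), 3, 0.05)   # sparse occupation
print(high, low)
```

    Sweeping the occupation probability between these two extremes and recording the surviving fraction traces out the (here first-order) k-core percolation transition.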

  14. Intertwining Evidence- and Model-Based Reasoning in Physics Sensemaking: An Example from Electrostatics

    ERIC Educational Resources Information Center

    Russ, Rosemary S.; Odden, Tor Ole B.

    2017-01-01

    Our field has long valued the goal of teaching students not just the facts of physics, but also the thinking and reasoning skills of professional physicists. The complexity inherent in scientific reasoning demands that we think carefully about how we conceptualize for ourselves, enact in our classes, and encourage in our students the relationship…

  15. Solving the quantum many-body problem with artificial neural networks

    NASA Astrophysics Data System (ADS)

    Carleo, Giuseppe; Troyer, Matthias

    2017-02-01

    The challenge posed by the many-body problem in quantum physics originates from the difficulty of describing the nontrivial correlations encoded in the exponential complexity of the many-body wave function. Here we demonstrate that systematic machine learning of the wave function can reduce this complexity to a tractable computational form for some notable cases of physical interest. We introduce a variational representation of quantum states based on artificial neural networks with a variable number of hidden neurons, and demonstrate a reinforcement-learning scheme capable of both finding the ground state and describing the unitary time evolution of complex interacting quantum systems. Our approach achieves high accuracy in describing prototypical interacting spin models in one and two dimensions.
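
    The variational form used in this work is a restricted-Boltzmann-machine ansatz, ψ(s) = exp(Σᵢ aᵢsᵢ) Πⱼ 2cosh(bⱼ + Σᵢ Wᵢⱼsᵢ), with complex parameters. A sketch that evaluates amplitudes for a tiny spin chain and normalizes them by exact enumeration; the parameters below are random and untrained, so this only illustrates the representation, not the learning scheme:

```python
import cmath, itertools, random

def rbm_amplitude(spins, a, b, W):
    """RBM wave-function ansatz: psi(s) = exp(sum_i a_i*s_i)
    * prod_j 2*cosh(b_j + sum_i W_ij*s_i), with complex parameters."""
    amp = cmath.exp(sum(ai*si for ai, si in zip(a, spins)))
    for j in range(len(b)):
        theta = b[j] + sum(W[i][j]*spins[i] for i in range(len(spins)))
        amp *= 2*cmath.cosh(theta)
    return amp

N, M = 4, 8          # 4 visible spins, 8 hidden units (illustrative sizes)
rng = random.Random(3)
a = [complex(rng.gauss(0, .1), rng.gauss(0, .1)) for _ in range(N)]
b = [complex(rng.gauss(0, .1), rng.gauss(0, .1)) for _ in range(M)]
W = [[complex(rng.gauss(0, .1), rng.gauss(0, .1)) for _ in range(M)]
     for _ in range(N)]

# exact enumeration is feasible only for tiny N; larger systems are
# sampled by Monte Carlo in the actual method
configs = list(itertools.product([-1, 1], repeat=N))
norm = sum(abs(rbm_amplitude(s, a, b, W))**2 for s in configs)
probs = [abs(rbm_amplitude(s, a, b, W))**2 / norm for s in configs]
```

    Training would then adjust a, b, and W variationally to minimize the energy of a given Hamiltonian; here we only verify that the ansatz defines a valid (normalizable) state.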

  16. Exploring cluster Monte Carlo updates with Boltzmann machines

    NASA Astrophysics Data System (ADS)

    Wang, Lei

    2017-11-01

    Boltzmann machines are physics-informed generative models with broad applications in machine learning. They model the probability distribution of an input data set with latent variables and generate new samples accordingly. Applied back to physics, Boltzmann machines are ideal recommender systems to accelerate the Monte Carlo simulation of physical systems due to their flexibility and effectiveness. More intriguingly, we show that the generative sampling of Boltzmann machines can even give rise to different cluster Monte Carlo algorithms. The latent representation of the Boltzmann machines can be designed to mediate complex interactions and identify clusters of the physical system. We demonstrate these findings with concrete examples of the classical Ising model with and without four-spin plaquette interactions. In the future, automatic searches in the algorithm space parametrized by Boltzmann machines may discover more innovative Monte Carlo updates.
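
    The best-known latent-variable cluster move of the kind described above is the classic Swendsen-Wang update for the plain Ising model: auxiliary bond variables are activated between aligned spins with probability 1 - exp(-2β), the resulting clusters are identified, and each is flipped with probability 1/2. A self-contained sketch on a small periodic lattice (our illustration of the standard algorithm, not the paper's generalized Boltzmann-machine construction):

```python
import math, random

L = 8                        # L x L periodic Ising lattice
BETA = 0.4                   # inverse temperature (coupling J = 1)
P_BOND = 1 - math.exp(-2 * BETA)

def neighbours(i):
    """Right and up neighbours, so each bond is visited once."""
    x, y = i % L, i // L
    return [((x+1) % L) + y*L, x + ((y+1) % L)*L]

def find(parent, i):
    """Union-find root lookup with path halving."""
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def sweep(spins, rng):
    """One Swendsen-Wang update: activate bonds between aligned spins
    with probability P_BOND, then flip each cluster with probability 1/2."""
    parent = list(range(L*L))
    for i in range(L*L):
        for j in neighbours(i):
            if spins[i] == spins[j] and rng.random() < P_BOND:
                parent[find(parent, i)] = find(parent, j)
    flip = {}
    for i in range(L*L):
        root = find(parent, i)
        if root not in flip:
            flip[root] = rng.random() < 0.5
        if flip[root]:
            spins[i] = -spins[i]

rng = random.Random(0)
spins = [rng.choice([-1, 1]) for _ in range(L*L)]
for _ in range(100):
    sweep(spins, rng)
```

    The bond variables play exactly the role of the hidden units in the Boltzmann-machine view: conditioned on the spins they decouple, and conditioned on the bonds the spins decouple into freely flippable clusters.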

  17. Emergence Processes up to Consciousness Using the Multiplicity Principle and Quantum Physics

    NASA Astrophysics Data System (ADS)

    Ehresmann, Andrée C.; Vanbremeersch, Jean-Paul

    2002-09-01

    Evolution is marked by the emergence of new objects and interactions. Pursuing our preceding work on Memory Evolutive Systems (MES; cf. our Internet site), we propose a general mathematical model for this process, based on Category Theory. Its main characteristic is the Multiplicity Principle (MP), which asserts the existence of complex objects with several possible configurations. The MP entails the emergence of non-reducible, increasingly complex objects (emergentist reductionism). From the laws of Quantum Physics, it follows that the MP is valid for the category of particles and atoms, hence, by complexification, for any natural autonomous anticipatory complex system, such as biological systems up to neural systems, or social systems. Applying the model to the MES of neurons, we describe the emergence of higher and higher cognitive processes and of a semantic memory. Consciousness is characterized by the development of a permanent 'personal' memory, the archetypal core, which allows the formation of extended landscapes with an integration of the temporal dimensions.

  18. Mathematical and Numerical Techniques in Energy and Environmental Modeling

    NASA Astrophysics Data System (ADS)

    Chen, Z.; Ewing, R. E.

    Mathematical models have been widely used to predict, understand, and optimize many complex physical processes, from semiconductor or pharmaceutical design to large-scale applications such as global weather models to astrophysics. In particular, simulation of environmental effects of air pollution is extensive. Here we address the need for using similar models to understand the fate and transport of groundwater contaminants and to design in situ remediation strategies. Three basic problem areas need to be addressed in the modeling and simulation of the flow of groundwater contamination. First, one obtains an effective model to describe the complex fluid/fluid and fluid/rock interactions that control the transport of contaminants in groundwater. This includes the problem of obtaining accurate reservoir descriptions at various length scales and modeling the effects of this heterogeneity in the reservoir simulators. Next, one develops accurate discretization techniques that retain the important physical properties of the continuous models. Finally, one develops efficient numerical solution algorithms that utilize the potential of the emerging computing architectures. We will discuss recent advances and describe the contribution of each of the papers in this book in these three areas. Keywords: reservoir simulation, mathematical models, partial differential equations, numerical algorithms

  19. An Evaluation of Diagnostic Atmospheric Dispersion Models for ’Cold Spill’ Applications at Vandenberg Air Force Base, California

    DTIC Science & Technology

    1992-12-30

    [Garbled front matter; recoverable fragments: approved by the Chairman, Department of Physics, Naval Postgraduate School; distribution includes ACTA, Vandenberg AFB, CA 93437-5000; references include Yamartino, R.J., 1987: Environmental Protection Agency complex terrain model development: final rep. EPA/600/3-88/006, U.S., 486 pp., and Stull, R.B., 1988.]

  20. Kinetics and Photochemistry of Ruthenium Bisbipyridine Diacetonitrile Complexes: An Interdisciplinary Inorganic and Physical Chemistry Laboratory Exercise.

    PubMed

    Rapp, Teresa L; Phillips, Susan R; Dmochowski, Ivan J

    2016-12-13

    The study of ruthenium polypyridyl complexes can be widely applied across disciplines in the undergraduate curriculum. Ruthenium photochemistry has advanced many fields including dye-sensitized solar cells, photoredox catalysis, light-driven water oxidation, and biological electron transfer. Equally promising are ruthenium polypyridyl complexes that provide a sterically bulky, photolabile moiety for transiently "caging" biologically active molecules. Photouncaging involves the use of visible (1-photon) or near-IR (2-photon) light to break one or more bonds between ruthenium and coordinated ligand(s), which can occur on short time scales and in high quantum yields. In this work we demonstrate the use of a model "caged" acetonitrile complex, Ru(2,2'-bipyridine)2(acetonitrile)2, or RuMeCN, in an advanced synthesis and physical chemistry laboratory. Students made RuMeCN in an advanced synthesis laboratory course and performed UV-vis spectroscopy and electrochemistry. The following semester students investigated RuMeCN photolysis kinetics in a physical chemistry laboratory. These two exercises may also be combined to create a 2-week module in an advanced undergraduate laboratory course.

  2. Deep Learning Fluid Mechanics

    NASA Astrophysics Data System (ADS)

    Barati Farimani, Amir; Gomes, Joseph; Pande, Vijay

    2017-11-01

    We have developed a new data-driven modeling paradigm for the rapid inference and solution of the constitutive equations of fluid mechanics by deep learning models. Using generative adversarial networks (GANs), we train models for the direct generation of solutions to steady-state heat conduction and incompressible fluid flow without knowledge of the underlying governing equations. Rather than using artificial neural networks to approximate the solution of the constitutive equations, GANs can directly generate the solutions to these equations conditioned upon an arbitrary set of boundary conditions. Both models predict temperature, velocity, and pressure fields with high test accuracy (>99.5%). Our framework for inferring and generating the solutions of partial differential equations can be applied to other physical phenomena and can be used to learn directly from experiments where the underlying physical model is complex or unknown. We have also shown that our framework can couple multiple physics simultaneously, making it amenable to multi-physics problems.

  3. Artificial intelligence support for scientific model-building

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.

    1992-01-01

    Scientific model-building can be a time-intensive and painstaking process, often involving the development of large and complex computer programs. Despite the effort involved, scientific models cannot easily be distributed and shared with other scientists. In general, implemented scientific models are complex, idiosyncratic, and difficult for anyone but the original scientific development team to understand. We believe that artificial intelligence techniques can facilitate both the model-building and model-sharing process. In this paper, we overview our effort to build a scientific modeling software tool that aids the scientist in developing and using models. This tool includes an interactive intelligent graphical interface, a high-level domain specific modeling language, a library of physics equations and experimental datasets, and a suite of data display facilities.

  4. Synthetic Earthquake Statistics From Physical Fault Models for the Lower Rhine Embayment

    NASA Astrophysics Data System (ADS)

    Brietzke, G. B.; Hainzl, S.; Zöller, G.

    2012-04-01

    As of today, seismic risk and hazard estimates mostly use purely empirical, stochastic models of earthquake fault systems tuned specifically to the vulnerable areas of interest. Although such models allow for reasonable risk estimates, they fail to provide a link between the observed seismicity and the underlying physical processes. Solving a state-of-the-art, fully dynamic description of all relevant physical processes related to earthquake fault systems is likely not useful, since it comes with a large number of degrees of freedom, poor constraints on its model parameters, and a huge computational effort. Here, quasi-static and quasi-dynamic physical fault simulators provide a compromise between physical completeness and computational affordability, and aim at providing a link between basic physical concepts and the statistics of seismicity. Within the framework of quasi-static and quasi-dynamic earthquake simulators we investigate a model of the Lower Rhine Embayment (LRE) that is based upon seismological and geological data. We present and discuss statistics of the spatio-temporal behavior of generated synthetic earthquake catalogs with respect to simplification (e.g., simple two-fault cases) as well as to complication (e.g., hidden faults, geometric complexity, heterogeneities of constitutive parameters).

  5. A review on locomotion robophysics: the study of movement at the intersection of robotics, soft matter and dynamical systems.

    PubMed

    Aguilar, Jeffrey; Zhang, Tingnan; Qian, Feifei; Kingsbury, Mark; McInroe, Benjamin; Mazouchova, Nicole; Li, Chen; Maladen, Ryan; Gong, Chaohui; Travers, Matt; Hatton, Ross L; Choset, Howie; Umbanhowar, Paul B; Goldman, Daniel I

    2016-11-01

    Discovery of fundamental principles which govern and limit effective locomotion (self-propulsion) is of intellectual interest and practical importance. Human technology has created robotic moving systems that excel in movement on and within environments of societal interest: paved roads, open air and water. However, such devices cannot yet robustly and efficiently navigate (as animals do) the enormous diversity of natural environments which might be of future interest for autonomous robots; examples include vertical surfaces like trees and cliffs, heterogeneous ground like desert rubble and brush, turbulent flows found near seashores, and deformable/flowable substrates like sand, mud and soil. In this review we argue for the creation of a physics of moving systems-a 'locomotion robophysics'-which we define as the pursuit of principles of self-generated motion. Robophysics can provide an important intellectual complement to the discipline of robotics, largely the domain of researchers from engineering and computer science. The essential idea is that we must complement the study of complex robots in complex situations with systematic study of simplified robotic devices in controlled laboratory settings and in simplified theoretical models. We must thus use the methods of physics to examine both locomotor successes and failures using parameter space exploration, systematic control, and techniques from dynamical systems. Using examples from our and others' research, we will discuss how such robophysical studies have begun to aid engineers in the creation of devices that have begun to achieve life-like locomotor abilities on and within complex environments, have inspired interesting physics questions in low dimensional dynamical systems, geometric mechanics and soft matter physics, and have been useful to develop models for biological locomotion in complex terrain. 
The rapidly decreasing cost of constructing robot models with easy access to significant computational power bodes well for scientists and engineers to engage in a discipline which can readily integrate experiment, theory and computation.

  7. Uranium plume persistence impacted by hydrologic and geochemical heterogeneity in the groundwater and river water interaction zone of Hanford site

    NASA Astrophysics Data System (ADS)

    Chen, X.; Zachara, J. M.; Vermeul, V. R.; Freshley, M.; Hammond, G. E.

    2015-12-01

    The behavior of a persistent uranium plume in an extended groundwater-river water (GW-SW) interaction zone at the DOE Hanford site is dominantly controlled by river stage fluctuations in the adjacent Columbia River. The plume behavior is further complicated by substantial heterogeneity in physical and geochemical properties of the host aquifer sediments. Multi-scale field and laboratory experiments and reactive transport modeling were integrated to understand the complex plume behavior influenced by highly variable hydrologic and geochemical conditions in time and space. In this presentation we (1) describe multiple data sets from field-scale uranium adsorption and desorption experiments performed at our experimental well-field, (2) develop a reactive transport model that incorporates hydrologic and geochemical heterogeneities characterized from multi-scale and multi-type datasets and a surface complexation reaction network based on laboratory studies, and (3) compare the modeling and observation results to provide insights on how to refine the conceptual model and reduce prediction uncertainties. The experimental results revealed significant spatial variability in uranium adsorption/desorption behavior, while modeling demonstrated that ambient hydrologic and geochemical conditions and heterogeneities in sediment physical and chemical properties both contributed to complex plume behavior and its persistence. Our analysis provides important insights into the characterization, understanding, modeling, and remediation of groundwater contaminant plumes influenced by surface water and groundwater interactions.

  8. Learning Physics-based Models in Hydrology under the Framework of Generative Adversarial Networks

    NASA Astrophysics Data System (ADS)

    Karpatne, A.; Kumar, V.

    2017-12-01

    Generative adversarial networks (GANs), which have been highly successful in a number of applications involving large volumes of labeled and unlabeled data such as computer vision, offer huge potential for modeling the dynamics of physical processes that have been traditionally studied using simulations of physics-based models. While conventional physics-based models use labeled samples of input/output variables for model calibration (estimating the right parametric forms of relationships between variables) or data assimilation (identifying the most likely sequence of system states in dynamical systems), there is a greater opportunity to explore the full power of machine learning (ML) methods (e.g., GANs) for studying physical processes currently suffering from large knowledge gaps, e.g., groundwater flow. However, success in this endeavor requires a principled way of combining the strengths of ML methods with physics-based numerical models that are founded on a wealth of scientific knowledge. This is especially important in scientific domains like hydrology where the number of data samples is small (relative to Internet-scale applications such as image recognition where machine learning methods have found great success), and the physical relationships are complex (high-dimensional) and non-stationary. We will present a series of methods for guiding the learning of GANs using physics-based models, e.g., by using the outputs of physics-based models as input data to the generator-learner framework, and by using physics-based models as generators trained using validation data in the adversarial learning framework. These methods are being developed under the broad paradigm of theory-guided data science that we are developing to integrate scientific knowledge with data science methods for accelerating scientific discovery.

  9. Bring It On, Complexity! Present and Future of Self-Organising Middle-Out Abstraction

    NASA Astrophysics Data System (ADS)

    Mammen, Sebastian Von; Steghöfer, Jan-Philipp

    The following sections are included: * The Great Complexity Challenge * Self-Organising Middle-Out Abstraction * Optimising Graphics, Physics and Artificial Intelligence * Emergence and Hierarchies in a Natural System * The Technical Concept of SOMO * Observation of interactions * Interaction pattern recognition and behavioural abstraction * Creating and adjusting hierarchies * Confidence measures * Execution model * Learning SOMO: parameters, knowledge propagation, and procreation * Current Implementations * Awareness Beyond Virtuality * Integration and emergence * Model inference * SOMO net * SOMO after me * The Future of SOMO

  10. Synchronisation of chaos and its applications

    NASA Astrophysics Data System (ADS)

    Eroglu, Deniz; Lamb, Jeroen S. W.; Pereira, Tiago

    2017-07-01

    Dynamical networks are important models for the behaviour of complex systems, modelling physical, biological and societal systems, including the brain, food webs, epidemic disease in populations, power grids and many others. Such dynamical networks can exhibit behaviour in which deterministic chaos, exhibiting unpredictability and disorder, coexists with synchronisation, a classical paradigm of order. We survey the main theory behind complete, generalised and phase synchronisation phenomena in simple as well as complex networks and discuss applications to secure communications, parameter estimation and the anticipation of chaos.
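
    Complete synchronisation of the kind surveyed above can be demonstrated with two diffusively coupled identical Lorenz systems: above a critical coupling strength the difference between the chaotic trajectories decays to zero. The coupling strength and integration settings below are illustrative choices, not taken from the survey:

```python
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # Classic Lorenz vector field.
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def coupled_step(s1, s2, k, dt):
    # Mutual diffusive coupling on all three variables; forward-Euler step.
    d1 = lorenz(s1) + k * (s2 - s1)
    d2 = lorenz(s2) + k * (s1 - s2)
    return s1 + dt * d1, s2 + dt * d2

dt, k = 0.005, 10.0                    # k well above the Lyapunov exponent (~0.9)
s1 = np.array([1.0, 1.0, 1.0])
s2 = np.array([-5.0, 0.0, 20.0])       # very different initial condition
e0 = np.linalg.norm(s1 - s2)           # initial synchronisation error
for _ in range(10000):                 # integrate to t = 50
    s1, s2 = coupled_step(s1, s2, k, dt)
ef = np.linalg.norm(s1 - s2)           # final synchronisation error
```

    For the difference d = s1 - s2 the coupling contributes a contraction term -2k·d, so once 2k exceeds the largest transverse growth rate the error shrinks exponentially even though each trajectory remains chaotic.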

  11. A test harness for accelerating physics parameterization advancements into operations

    NASA Astrophysics Data System (ADS)

    Firl, G. J.; Bernardet, L.; Harrold, M.; Henderson, J.; Wolff, J.; Zhang, M.

    2017-12-01

    The process of transitioning advances in parameterization of sub-grid scale processes from initial idea to implementation is often much quicker than the transition from implementation to use in an operational setting. After all, considerable work must be undertaken by operational centers to fully test, evaluate, and implement new physics. The process is complicated by the scarcity of like-to-like comparisons, availability of HPC resources, and the "tuning problem" whereby advances in physics schemes are difficult to properly evaluate without first undertaking the expensive and time-consuming process of tuning to other schemes within a suite. To address this process shortcoming, the Global Model TestBed (GMTB), supported by the NWS NGGPS project and undertaken by the Developmental Testbed Center, has developed a physics test harness. It implements the concept of hierarchical testing, where the same code can be tested in model configurations of varying complexity from single column models (SCM) to fully coupled, cycled global simulations. Developers and users may choose at which level of complexity to engage. Several components of the physics test harness have been implemented, including a SCM and an end-to-end workflow that expands upon the one used at NOAA/EMC to run the GFS operationally, although the testbed components will necessarily morph to coincide with changes to the operational configuration (FV3-GFS). A standard, relatively user-friendly interface known as the Interoperable Physics Driver (IPD) is available for physics developers to connect their codes. This prerequisite exercise allows access to the testbed tools and removes a technical hurdle for potential inclusion into the Common Community Physics Package (CCPP). The testbed offers users the opportunity to conduct like-to-like comparisons between the operational physics suite and new development as well as among multiple developments. 
GMTB staff have demonstrated use of the testbed through a comparison between the 2017 operational GFS suite and one containing the Grell-Freitas convective parameterization. An overview of the physics test harness and its early use will be presented.

  12. The distribution of density in supersonic turbulence

    NASA Astrophysics Data System (ADS)

    Squire, Jonathan; Hopkins, Philip F.

    2017-11-01

    We propose a model for the statistics of the mass density in supersonic turbulence, which plays a crucial role in star formation and the physics of the interstellar medium (ISM). The model is derived by considering the density to be arranged as a collection of strong shocks of width ∼M^{-2}, where M is the turbulent Mach number. With two physically motivated parameters, the model predicts all density statistics for M>1 turbulence: the density probability distribution and its intermittency (deviation from lognormality), the density variance-Mach number relation, power spectra and structure functions. For the proposed model parameters, reasonable agreement is seen between model predictions and numerical simulations, albeit within the large uncertainties associated with current simulation results. More generally, the model could provide a useful framework for more detailed analysis of future simulations and observational data. Due to the simple physical motivations for the model in terms of shocks, it is straightforward to generalize to more complex physical processes, which will be helpful in future more detailed applications to the ISM. We see good qualitative agreement between such extensions and recent simulations of non-isothermal turbulence.
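
    For context, the lognormal baseline against which the model's intermittency is measured can be written down directly: with s = ln(ρ/ρ₀), the standard relation σ_s² = ln(1 + b²M²) fixes the variance, and a mean of -σ_s²/2 enforces mass conservation. The forcing parameter b and Mach number below are assumed values; this is the conventional lognormal PDF, not the authors' shock-based model:

```python
import numpy as np

b, M = 0.4, 10.0                       # assumed forcing parameter and Mach number
sigma2 = np.log(1.0 + b * b * M * M)   # standard density variance-Mach relation

s = np.linspace(-15.0, 15.0, 300001)   # s = ln(rho/rho0)
ds = s[1] - s[0]
# Lognormal PDF of s with mean -sigma2/2 so that <rho> = rho0 (mass conservation).
p = np.exp(-(s + 0.5 * sigma2) ** 2 / (2.0 * sigma2)) / np.sqrt(2.0 * np.pi * sigma2)

norm_check = np.sum(p) * ds                  # probability must integrate to 1
mass_check = np.sum(np.exp(s) * p) * ds      # <rho/rho0> must also equal 1
```

    Deviations of simulated or observed density PDFs from this curve are exactly the intermittency that the shock-based model in the abstract is designed to capture.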

  13. An Empirical Polarizable Force Field Based on the Classical Drude Oscillator Model: Development History and Recent Applications

    PubMed Central

    2016-01-01

    Molecular mechanics force fields that explicitly account for induced polarization represent the next generation of physical models for molecular dynamics simulations. Several methods exist for modeling induced polarization, and here we review the classical Drude oscillator model, in which electronic degrees of freedom are modeled by charged particles attached to the nuclei of their core atoms by harmonic springs. We describe the latest developments in Drude force field parametrization and application, primarily in the last 15 years. Emphasis is placed on the Drude-2013 polarizable force field for proteins, DNA, lipids, and carbohydrates. We discuss its parametrization protocol, development history, and recent simulations of biologically interesting systems, highlighting specific studies in which induced polarization plays a critical role in reproducing experimental observables and understanding physical behavior. As the Drude oscillator model is computationally tractable and available in a wide range of simulation packages, it is anticipated that use of these more complex physical models will lead to new and important discoveries of the physical forces driving a range of chemical and biological phenomena. PMID:26815602
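
    The core of the Drude picture described above, an auxiliary charge tethered to its core atom by a harmonic spring whose field-induced displacement yields a dipole with polarizability α = q_D²/k_D, can be sketched numerically. All constants are illustrative, not force-field values:

```python
# Minimal sketch of a single Drude oscillator in a uniform field E:
# the Drude charge q relaxes on a spring of stiffness k_spring, and the
# equilibrium displacement d gives an induced dipole mu = q*d = alpha*E.
q, k_spring, E = 1.2, 500.0, 0.01   # illustrative units, not CHARMM Drude parameters

# Energy U(d) = 0.5*k*d^2 - q*E*d, minimized by simple gradient descent
# (standing in for the self-consistent field / extended-Lagrangian relaxation).
d = 0.0
for _ in range(2000):
    grad = k_spring * d - q * E
    d -= 1e-3 * grad

alpha_numeric = q * d / E           # polarizability recovered from the simulation
alpha_analytic = q * q / k_spring   # textbook Drude result alpha = q^2/k
```

    The same relaxation, done self-consistently for every Drude particle in the mutual field of all charges, is what distinguishes these simulations from fixed-charge force fields.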

  14. A generic framework for individual-based modelling and physical-biological interaction

    PubMed Central

    2018-01-01

    The increased availability of high-resolution ocean data globally has enabled more detailed analyses of physical-biological interactions and their consequences to the ecosystem. We present IBMlib, which is a versatile, portable and computationally effective framework for conducting Lagrangian simulations in the marine environment. The purpose of the framework is to handle complex individual-level biological models of organisms, combined with a realistic 3D oceanographic model of physics and biogeochemistry describing the environment of the organisms, without assumptions about spatial or temporal scales. The open-source framework features a minimal robust interface to facilitate the coupling between individual-level biological models and oceanographic models, and we provide application examples including forward/backward simulations, habitat connectivity calculations, assessment of ocean conditions, comparison of physical circulation models, model ensemble runs and, recently, posterior Eulerian simulations using the IBMlib framework. We present the code design ideas behind the longevity of the code, our implementation experiences, as well as code performance benchmarking. The framework may contribute substantially to progress in representing, understanding, predicting and eventually managing marine ecosystems. PMID:29351280
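
    The elementary operation such a Lagrangian framework wraps is advecting a particle through a gridded velocity field. A minimal self-contained sketch might look like the following; the solid-body-rotation velocity field and the RK4 integrator are a toy stand-in for ocean-model output, not IBMlib code:

```python
import numpy as np

def velocity(p, t):
    # Hypothetical steady 2-D flow: solid-body rotation, a stand-in for
    # interpolated circulation-model currents. Trajectories are circles,
    # so the distance from the origin should be conserved.
    x, y = p
    return np.array([-y, x])

def rk4_step(p, t, dt):
    # Classic fourth-order Runge-Kutta advection step.
    k1 = velocity(p, t)
    k2 = velocity(p + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = velocity(p + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = velocity(p + dt * k3, t + dt)
    return p + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

p, t, dt = np.array([1.0, 0.0]), 0.0, 0.01
for _ in range(628):                 # roughly one revolution (t ~ 2*pi)
    p = rk4_step(p, t, dt)
    t += dt
```

    In a real individual-based run the biological model would update the organism's state (growth, behaviour, mortality) between such advection steps, which is precisely the coupling interface the abstract describes.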

  15. Cross hole GPR traveltime inversion using a fast and accurate neural network as a forward model

    NASA Astrophysics Data System (ADS)

    Mejer Hansen, Thomas

    2017-04-01

    Probabilistically formulated inverse problems can be solved using Monte Carlo based sampling methods. In principle, both advanced prior information, such as that based on geostatistics, and complex non-linear forward physical models can be considered. However, in practice these methods can be associated with huge computational costs that limit their application. This is not least due to the computational requirements of solving the forward problem, where the physical response of some earth model has to be evaluated. Here, it is suggested to replace a computationally complex evaluation of the forward problem with a trained neural network that can be evaluated very fast. This introduces a modeling error, which is quantified probabilistically so that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first arrival traveltime inversion of cross hole ground-penetrating radar (GPR) data. An accurate forward model, based on 2D full-waveform modeling followed by automatic traveltime picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the full forward model, and considerably faster, and more accurate, than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of the types of inverse problems that can be solved using non-linear Monte Carlo sampling techniques.
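
    The key idea, folding a quantified modeling error into the likelihood while sampling with the fast approximate forward model, can be sketched with a one-parameter toy problem. The forward models, the surrogate bias, and the prior below are all hypothetical stand-ins for the GPR setting, not the paper's actual models:

```python
import numpy as np

rng = np.random.default_rng(1)

def forward_true(m):
    # "Expensive" forward model (toy nonlinear response).
    return m ** 2

def forward_fast(m):
    # Fast surrogate with a known bias, standing in for the trained network.
    return m ** 2 + 0.1

m_true = 2.0
sigma_obs, sigma_model = 0.2, 0.1            # observation / modeling error std
d_obs = forward_true(m_true) + rng.normal(0.0, sigma_obs)
var_tot = sigma_obs ** 2 + sigma_model ** 2  # modeling error folded into likelihood

def log_post(m):
    if not (0.0 < m < 4.0):                  # uniform prior on (0, 4)
        return -np.inf
    r = d_obs - forward_fast(m)
    return -0.5 * r * r / var_tot

# Metropolis sampling using only the cheap surrogate.
m, lp = 1.0, log_post(1.0)
samples = []
for _ in range(20000):
    m_new = m + 0.2 * rng.standard_normal()
    lp_new = log_post(m_new)
    if np.log(rng.random()) < lp_new - lp:
        m, lp = m_new, lp_new
    samples.append(m)
post_mean = float(np.mean(samples[5000:]))   # discard burn-in
```

    Because the surrogate's error enters the likelihood as extra variance, the posterior is honestly widened rather than biased, which is what makes the three-orders-of-magnitude speed-up usable in practice.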

  16. Evolving Scale-Free Networks by Poisson Process: Modeling and Degree Distribution.

    PubMed

    Feng, Minyu; Qu, Hong; Yi, Zhang; Xie, Xiurui; Kurths, Jurgen

    2016-05-01

    Since the great mathematician Leonhard Euler initiated the study of graph theory, the network has been one of the most significant research subjects across disciplines. In recent years, the proposition of the small-world and scale-free properties of complex networks in statistical physics made network science intriguing again for many researchers. One of the challenges of network science is to propose rational models for complex networks. In this paper, in order to reveal the influence of the vertex generating mechanism of complex networks, we propose three novel models based on the homogeneous Poisson, nonhomogeneous Poisson and birth-death processes, respectively, which can be regarded as typical scale-free networks and utilized to simulate practical networks. The degree distribution and exponent are analyzed and explained mathematically by different approaches. In the simulation, we display the modeling process, the degree distribution of empirical data by statistical methods, and the reliability of the proposed networks; results show that our models follow the features of typical complex networks. Finally, some future challenges for complex systems are discussed.
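
    A simplified flavour of scale-free growth can be generated and checked in a few lines. The sketch below uses classic Barabási-Albert-style preferential attachment rather than the paper's Poisson-process constructions, purely to illustrate the heavy-tailed degree distributions such vertex-generating mechanisms produce:

```python
import numpy as np

rng = np.random.default_rng(2)

m, n = 2, 2000                      # edges per new vertex, final vertex count
repeated = [0, 1, 0, 2, 1, 2]       # seed triangle; node i appears deg(i) times
degree = [2, 2, 2]

for v in range(3, n):
    # Sampling uniformly from `repeated` is sampling proportional to degree:
    # that is the preferential-attachment rule.
    targets = set()
    while len(targets) < m:
        targets.add(repeated[rng.integers(len(repeated))])
    degree.append(0)
    for tgt in targets:
        repeated += [v, tgt]
        degree[v] += 1
        degree[tgt] += 1

deg = np.array(degree)
n_edges = 3 + m * (n - 3)           # seed edges plus m per added vertex
```

    The oldest vertices accumulate degrees of order m·√n while the mean degree stays near 2m, the hub-dominated signature of a scale-free network.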

  17. Laser Powered Launch Vehicle Performance Analyses

    NASA Technical Reports Server (NTRS)

    Chen, Yen-Sen; Liu, Jiwen; Wang, Ten-See (Technical Monitor)

    2001-01-01

    The purpose of this study is to establish the technical ground for modeling the physics of the laser-powered pulse detonation phenomenon. Laser-powered propulsion systems involve complex fluid dynamics, thermodynamics and radiative transfer processes. Successful predictions of the performance of laser-powered launch vehicle concepts depend on sophisticated models that reflect the underlying flow physics, including laser ray tracing and focusing, inverse Bremsstrahlung (IB) effects, finite-rate air chemistry, thermal non-equilibrium, plasma radiation, detonation wave propagation, etc. The proposed work will extend the baseline numerical model to an efficient design analysis tool. The proposed model is suitable for 3-D analysis using parallel computing methods.

  18. Design and modelling of a 3D compliant leg for Bioloid

    NASA Astrophysics Data System (ADS)

    Couto, Mafalda; Santos, Cristina; Machado, José

    2012-09-01

    In the growing field of rehabilitation robotics, the modelling of a real robot is a complex and fascinating challenge. At the crossing point of mechanics, physics and computer science, the development of a complete 3D model requires knowledge of the different physical properties for an accurate simulation. In this paper, we propose the design of an efficient three-dimensional model of the quadruped Bioloid robot fitted with segmented pantographic legs, in order to actively retract the quadruped legs during locomotion and minimize the large forces due to shocks, such that the robot is able to safely and dynamically interact with the user or the environment.

  19. The physics of volume rendering

    NASA Astrophysics Data System (ADS)

    Peters, Thomas

    2014-11-01

    Radiation transfer is an important topic in several physical disciplines, probably most prominently in astrophysics. Computer scientists use radiation transfer, among other things, for the visualization of complex data sets with direct volume rendering. In this article, I point out the connection between physical radiation transfer and volume rendering, and I describe an implementation of direct volume rendering in the astrophysical radiation transfer code RADMC-3D. I show examples for the use of this module on analytical models and simulation data.
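
    The connection the article draws rests on the emission-absorption integral shared by radiation transfer and direct volume rendering, I(L) = ∫₀ᴸ j e^{-τ(s)} ds with τ(s) = ∫₀ˢ α ds'. A simple ray marcher can be checked against the constant-coefficient analytic solution I = (j/α)(1 - e^{-αL}); the coefficients below are arbitrary illustrative values:

```python
import numpy as np

def ray_march(emission, absorption, length, n_steps):
    # Front-to-back emission-absorption integration along one ray:
    # accumulate attenuated emission while building up optical depth tau.
    ds = length / n_steps
    intensity, tau = 0.0, 0.0
    for _ in range(n_steps):
        intensity += np.exp(-tau) * emission * ds
        tau += absorption * ds
    return intensity

j, alpha, L = 2.0, 0.5, 10.0            # constant emissivity / opacity (illustrative)
numeric = ray_march(j, alpha, L, 10000)
analytic = (j / alpha) * (1.0 - np.exp(-alpha * L))  # closed-form solution
```

    A volume renderer evaluates exactly this accumulation per pixel with spatially varying j and α sampled from the data cube, which is why a radiation transfer code like RADMC-3D can host a rendering module almost for free.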

  20. Ambiguities in the identification of giant molecular cloud complexes from longitude-velocity diagrams

    NASA Technical Reports Server (NTRS)

    Adler, David S.; Roberts, William W., Jr.

    1992-01-01

    Techniques which use longitude-velocity diagrams to identify molecular cloud complexes in the disk of the Galaxy are investigated by means of model Galactic disks generated from N-body cloud-particle simulations. A procedure similar to the method used to reduce the low-level emission in Galactic l-v diagrams is employed to isolate complexes of emission in the model l-v diagram (LVCs) from the 'background' clouds. The LVCs produced in this manner yield a size-line-width relationship with a slope of 0.58 and a mass spectrum with a slope of 1.55, consistent with Galactic observations. It is demonstrated that associations identified as LVCs are often chance superpositions of clouds spread out along the line of sight in the disk of the model system. This indicates that the l-v diagram cannot be used to unambiguously determine the location of molecular cloud complexes in the model Galactic disk. The modeling results also indicate that the existence of a size-line-width relationship is not a reliable indicator of the physical nature of cloud complexes, in particular, whether the complexes are gravitationally bound objects.

  1. A matrix for the qualitative evaluation of nursing tasks.

    PubMed

    Durosaiye, Isaiah O; Hadjri, Karim; Liyanage, Champika L; Bennett, Kina

    2018-04-01

    To formulate a model for patient-nurse interaction; to compile a comprehensive list of nursing tasks on hospital wards; and to construct a nursing tasks demand matrix. The physical demands associated with the nursing profession are of growing interest among researchers. Yet it is the complexity of nursing tasks that defines the demands of the ward nurse's role. This study explores nursing tasks, based on patient-nurse interaction on hospital wards. Extant literature was reviewed to formulate a patient-nurse interaction model. Twenty ward nurses were interviewed to compile a list of nursing tasks. These nursing tasks were mapped against the patient-nurse interaction model. A patient-nurse interaction model was created, consisting of: (1) patient care, (2) patient surveillance and (3) patient support. Twenty-three nursing tasks were identified. The nursing tasks demand matrix was constructed. Ward managers may use a nursing tasks demand matrix to determine the demands of nursing tasks on ward nurses. While many studies have explored either the physical or the psychosocial aspects of nursing tasks separately, this study suggests that the physicality of nursing tasks must be evaluated in tandem with their complexity. Ward managers may take a holistic approach to nursing tasks evaluation by using a nursing tasks demand matrix. © 2017 John Wiley & Sons Ltd.

  2. Blast Fragmentation Modeling and Analysis

    DTIC Science & Technology

    2010-10-31

    weapons device containing a multiphase blast explosive (MBX). 1. INTRODUCTION The ARL Survivability Lethality and Analysis Directorate (SLAD) is...velocity. In order to simulate the highly complex phenomenon, the exploding cylinder is modeled with the hydrodynamics code ALE3D , an arbitrary...Lagrangian-Eulerian multiphysics code, developed at Lawrence Livermore National Laboratory. ALE3D includes physical properties, constitutive models for

  3. Can there be a physics of financial markets? Methodological reflections on econophysics

    NASA Astrophysics Data System (ADS)

    Huber, Tobias A.; Sornette, Didier

    2016-12-01

    We address the question whether there can be a physical science of financial markets. In particular, we examine the argument that, given the reflexivity of financial markets (i.e., the feedback mechanism between expectations and prices), there is a fundamental difference between social and physical systems, which demands a new scientific method. By providing a selective history of the mutual cross-fertilization between physics and economics, we reflect on the methodological differences of how models and theories get constructed in these fields. We argue that the novel conception of financial markets as complex adaptive systems is one of the most important contributions of econophysics and show that this field of research provides the methods, concepts, and tools to scientifically account for reflexivity. We conclude by arguing that a new science of economic and financial systems should not only be physics-based, but needs to integrate findings from other scientific fields, so that a truly multi-disciplinary complex systems science of financial markets can be built.

  4. Structural requirements for the assembly of LINC complexes and their function in cellular mechanical stiffness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stewart-Hutchinson, P.J.; Hale, Christopher M.; Wirtz, Denis

    The evolutionary-conserved interactions between KASH and SUN domain-containing proteins within the perinuclear space establish physical connections, called LINC complexes, between the nucleus and the cytoskeleton. Here, we show that the KASH domains of Nesprins 1, 2 and 3 interact promiscuously with luminal domains of Sun1 and Sun2. These constructs disrupt endogenous LINC complexes as indicated by the displacement of endogenous Nesprins from the nuclear envelope. We also provide evidence that KASH domains most probably fit a pocket provided by SUN domains and that post-translational modifications are dispensable for that interaction. We demonstrate that the disruption of endogenous LINC complexes affects cellular mechanical stiffness to an extent that compares to the loss of mechanical stiffness previously reported in embryonic fibroblasts derived from mice lacking A-type lamins, a mouse model of muscular dystrophies and cardiomyopathies. These findings support a model whereby physical connections between the nucleus and the cytoskeleton are mediated by interactions between diverse combinations of Sun proteins and Nesprins through their respective evolutionary-conserved domains. Furthermore, they emphasize, for the first time, the relevance of LINC complexes in cellular mechanical stiffness, suggesting a possible involvement of their disruption in various laminopathies, a group of human diseases linked to mutations of A-type lamins.

  5. WE-D-303-01: Development and Application of Digital Human Phantoms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Segars, P.

    2015-06-15

    Modern medical physics deals with complex problems such as 4D radiation therapy and imaging quality optimization. Such problems involve a large number of radiological parameters, and anatomical and physiological breathing patterns. A major challenge is how to develop, test, evaluate and compare various new imaging and treatment techniques, which often involves testing over a large range of radiological parameters as well as varying patient anatomies and motions. It would be extremely challenging, if not impossible, both ethically and practically, to test every combination of parameters and every task on every type of patient under clinical conditions. Computer-based simulation using computational phantoms offers a practical technique with which to evaluate, optimize, and compare imaging technologies and methods. Within simulation, the computerized phantom provides a virtual model of the patient's anatomy and physiology. Imaging data can be generated from it as if it were a live patient, using accurate models of the physics of the imaging and treatment process. With sophisticated simulation algorithms, it is possible to perform virtual experiments entirely on the computer. By serving as virtual patients, computational phantoms hold great promise in solving some of the most complex problems in modern medical physics. In this proposed symposium, we will present the history and recent developments of computational phantom models, share experiences in their application to advanced imaging and radiation applications, and discuss their promises and limitations. Learning Objectives: Understand the need and requirements of computational phantoms in medical physics research. Discuss the developments and applications of computational phantoms. Know the promises and limitations of computational phantoms in solving complex problems.

  6. Regional Assessment of Storm-triggered Shallow Landslide Risks using the SLIDE (SLope-Infiltration-Distributed Equilibrium) Model

    NASA Astrophysics Data System (ADS)

    Hong, Y.; Kirschbaum, D. B.; Fukuoka, H.

    2011-12-01

    The key to advancing the predictability of rainfall-triggered landslides is to use physically based slope-stability models that simulate the dynamical response of the subsurface moisture to spatiotemporal variability of rainfall in complex terrain. An early warning system applying such physical models has been developed to predict rainfall-induced shallow landslides over Java Island in Indonesia and in Honduras. The prototype early warning system integrates three major components: (1) a susceptibility mapping or hotspot identification component based on a land surface geospatial database (topographical information, maps of soil properties, a local landslide inventory, etc.); (2) a satellite-based precipitation monitoring system (http://trmm.gsfc.nasa.gov) and a precipitation forecasting model (the Weather Research and Forecasting model); and (3) a physically based, rainfall-induced landslide prediction model, SLIDE (SLope-Infiltration-Distributed Equilibrium). The system utilizes the modified physical model to calculate a Factor of Safety (FS) that accounts for the contribution of rainfall infiltration and partial saturation to the shear strength of the soil in topographically complex terrain. The system's prediction performance has been evaluated against a local landslide inventory. Over Java Island, Indonesia, evaluation of SLIDE modeling results against local news reports shows that the system successfully predicted landslides at the times of occurrence of the real landslide events. SLIDE was further studied in Honduras, where Hurricane Mitch triggered widespread landslides in 1998. Results show that within the approximately 1,200-square-kilometer study areas, hit rates reached as high as 78% and 75%, while the error indices were 35% and 49%. 
Despite this positive model performance, the SLIDE model is limited by several assumptions, including the use of general parameter calibration rather than in situ tests and the neglect of geologic information. Advantages and limitations of this model will be discussed with respect to future applications of landslide assessment and prediction over large scales. In conclusion, the integration of spatially distributed remote sensing precipitation products, in situ datasets and physical models in this prototype system enables us to further develop a regional early warning tool for forecasting storm-induced landslides.
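
    The qualitative behaviour SLIDE builds on, rainfall infiltration raising pore-water pressure and lowering the Factor of Safety, can be sketched with the textbook infinite-slope formula; this is a generic stability expression with illustrative soil parameters, not the exact SLIDE formulation:

```python
import math

def factor_of_safety(c, phi_deg, gamma, z, beta_deg, u):
    # Textbook infinite-slope FS with pore-water pressure u:
    # FS = [c + (gamma*z*cos^2(beta) - u) * tan(phi)] / [gamma*z*sin(beta)*cos(beta)]
    # c: cohesion [Pa], phi: friction angle, gamma: unit weight [N/m^3],
    # z: failure-plane depth [m], beta: slope angle, u: pore pressure [Pa].
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    resisting = c + (gamma * z * math.cos(beta) ** 2 - u) * math.tan(phi)
    driving = gamma * z * math.sin(beta) * math.cos(beta)
    return resisting / driving

# Same illustrative slope, dry versus after infiltration (values assumed).
fs_dry = factor_of_safety(c=5e3, phi_deg=30, gamma=18e3, z=2.0, beta_deg=35, u=0.0)
fs_wet = factor_of_safety(c=5e3, phi_deg=30, gamma=18e3, z=2.0, beta_deg=35, u=10e3)
```

    Driving a pore-pressure term like u from satellite-observed rainfall through an infiltration model, cell by cell over a DEM, is the essence of turning this point formula into a regional early warning product.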

  7. Physical biology of human brain development.

    PubMed

    Budday, Silvia; Steinmann, Paul; Kuhl, Ellen

    2015-01-01

    Neurodevelopment is a complex, dynamic process that involves a precisely orchestrated sequence of genetic, environmental, biochemical, and physical events. Developmental biology and genetics have shaped our understanding of the molecular and cellular mechanisms during neurodevelopment. Recent studies suggest that physical forces play a central role in translating these cellular mechanisms into the complex surface morphology of the human brain. However, the precise impact of neuronal differentiation, migration, and connection on the physical forces during cortical folding remains unknown. Here we review the cellular mechanisms of neurodevelopment with a view toward surface morphogenesis, pattern selection, and evolution of shape. We revisit cortical folding as the instability problem of constrained differential growth in a multi-layered system. To identify the contributing factors of differential growth, we map out the timeline of neurodevelopment in humans and highlight the cellular events associated with extreme radial and tangential expansion. We demonstrate how computational modeling of differential growth can bridge the scales, from phenomena on the cellular level toward form and function on the organ level, to make quantitative, personalized predictions. Physics-based models can quantify cortical stresses, identify critical folding conditions, rationalize pattern selection, and predict gyral wavelengths and gyrification indices. We illustrate that physical forces can explain cortical malformations as emergent properties of developmental disorders. Combining biology and physics holds promise to advance our understanding of human brain development and enable early diagnostics of cortical malformations with the ultimate goal to improve treatment of neurodevelopmental disorders including epilepsy, autism spectrum disorders, and schizophrenia.

  8. Chimaera simulation of complex states of flowing matter.

    PubMed

    Succi, S

    2016-11-13

    We discuss a unified mesoscale framework (chimaera) for the simulation of complex states of flowing matter across scales of motion. The chimaera framework can deal with each of the three macro-meso-micro levels through suitable 'mutations' of the basic mesoscale formulation. The idea is illustrated through selected simulations of complex micro- and nanoscale flows. This article is part of the themed issue 'Multiscale modelling at the physics-chemistry-biology interface'. © 2016 The Author(s).

  9. Modelling dishes and exploring culinary 'precisions': the two issues of molecular gastronomy.

    PubMed

    This, Hervé

    2005-04-01

    The scientific strategy of molecular gastronomy includes modelling 'culinary definitions' and experimental explorations of 'culinary precisions'. A formalism that describes complex dispersed systems leads to a physical classification of classical sauces, as well as to the invention of an infinite number of new dishes.

  10. A complex permittivity model for field estimation of soil water contents using time domain reflectometry

    USDA-ARS?s Scientific Manuscript database

    Accurate electromagnetic sensing of soil water contents (θ) under field conditions is complicated by the dependence of permittivity on specific surface area, temperature, and apparent electrical conductivity, all of which may vary across space or time. We present a physically-based mixing model to pred...

  11. Physical robustness of canopy temperature models for crop heat stress simulation across environments and production conditions

    USDA-ARS?s Scientific Manuscript database

    Despite widespread application in studying climate change impacts, most crop models ignore complex interactions among air temperature, crop and soil water status, CO2 concentration and atmospheric conditions that influence crop canopy temperature. The current study extended previous studies by evalu...

  12. Data management in the mission data system

    NASA Technical Reports Server (NTRS)

    Wagner, David A.

    2005-01-01

    As spacecraft evolve from simple embedded devices to become more sophisticated computing platforms with complex behaviors it is increasingly necessary to model and manage the flow of data, and to provide uniform models for managing data that promote adaptability, yet pay heed to the physical limitations of the embedded and space environments.

  13. A volumetric ablation model of EPDM considering complex physicochemical process in porous structure of char layer

    NASA Astrophysics Data System (ADS)

    Yang, Liu; Xiao-Jing, Yu; Jian-Ming, Ma; Yi-Wen, Guan; Jiang, Li; Qiang, Li; Sa, Yang

    2017-06-01

    A volumetric ablation model for EPDM (ethylene-propylene-diene monomer) is established in this paper. The model considers the complex physicochemical processes in the porous structure of a char layer. An ablation physics model based on the porous structure of the char layer and a model of heterogeneous volumetric char-layer ablation physics are then built. In the model, porosity is used to describe the porous structure of the char layer, and gas diffusion and chemical reactions are introduced throughout the porous structure. Through detailed formation analysis, the causes of the compact or loose structure in the char layer and the chemical vapor deposition (CVD) reaction between pyrolysis gas and the char-layer skeleton are introduced. The Arrhenius formula is adopted to calculate the carbon deposition rate C, the consumption rate caused by thermochemical reactions in the char layer, and the porosity evolution. A critical porosity value is used as the criterion for failure of the char-layer porous structure under gas flow and particle erosion; this critical value is obtained by fitting experimental parameters and the surface porosity of the char layer. Linear and mass ablation rates are then determined from the critical porosity value. The calculated linear and mass ablation rates generally coincide with experimental results, suggesting that the ablation analysis proposed in this paper accurately reflects practical situations and that the physical and mathematical models built are accurate and reasonable.
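
    The Arrhenius-controlled consumption and the porosity evolution toward a critical failure value can be sketched as follows; the pre-exponential factor, activation energy, conversion constant, and temperature are assumed illustrative values, not the paper's fitted EPDM parameters:

```python
import math

R = 8.314  # universal gas constant [J/(mol K)]

def arrhenius(T, A=1.0e6, Ea=120.0e3):
    # Arrhenius rate k = A * exp(-Ea / (R T)).
    # A and Ea are assumed illustrative constants, not measured EPDM values.
    return A * math.exp(-Ea / (R * T))

# Porosity rises as the char skeleton is consumed by thermochemical reactions;
# structural failure is declared once a critical porosity is reached.
phi, phi_crit = 0.30, 0.80          # initial and critical porosity (assumed)
k_vol, dt, T = 1.0e-4, 1.0e-3, 2500.0  # rate-to-porosity constant, step [s], temp [K]
t = 0.0
while phi < phi_crit:
    phi += k_vol * arrhenius(T) * dt
    t += dt                          # t is the time to reach critical porosity
```

    In the full model the consumption competes with CVD carbon deposition (which lowers porosity), so whether a region of the char layer compacts or loosens depends on the local balance of the two Arrhenius-controlled rates.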

  14. Modeling bed load transport and step-pool morphology with a reduced-complexity approach

    NASA Astrophysics Data System (ADS)

    Saletti, Matteo; Molnar, Peter; Hassan, Marwan A.; Burlando, Paolo

    2016-04-01

    Steep mountain channels are complex fluvial systems, where classical methods developed for lowland streams fail to capture the dynamics of sediment transport and bed morphology. Estimates of sediment transport based on average conditions carry more than one order of magnitude of uncertainty because of the wide grain-size distribution of the bed material, the small relative submergence of coarse grains, the episodic character of sediment supply, and the complex boundary conditions. Most notably, bed load transport is modulated by the structure of the bed, where grains are imbricated in steps and similar bedforms and, therefore, are much more stable than predicted. In this work we propose a new model based on a reduced-complexity (RC) approach focused on the reproduction of step-pool morphology. In our 2-D cellular-automaton model, entrainment, transport and deposition of particles are considered via intuitive rules based on physical principles. A parsimonious set of parameters allows control of the behavior of the system, and the basic processes can be treated deterministically or stochastically. The probability of entrainment of grains (and, as a consequence, particle travel distances and resting times) is a function of flow conditions and bed topography. Sediment is fed at the upper boundary of the channel at a constant or variable rate. Our model yields realistic results in terms of longitudinal bed profiles and sediment transport trends. Phases of aggradation and degradation can be observed in the channel even under constant input, and the memory of the morphology can be quantified with long-range persistence indicators. Sediment yield at the channel outlet shows intermittency, as observed in natural streams. Steps are self-formed in the channel and their stability is tested against the model parameters. Our results show the potential of RC models as complementary tools to more sophisticated models. 
They provide a realistic description of complex morphological systems and help to better identify the key physical principles that rule their dynamics.
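The entrain-hop-deposit logic that such reduced-complexity models use can be sketched in a few lines. The following is a generic 1-D illustration of the approach, not the authors' 2-D model: grain counts on a lattice, a constant upstream feed, and an entrainment probability that grows with the local downstream drop. All names and parameter values are invented for the example.

```python
import random

def run_bed_model(n_cells=20, n_steps=500, p_base=0.15, slope_gain=0.2, seed=1):
    """Toy 1-D reduced-complexity sediment model (illustrative sketch only).
    Each step: one grain is fed upstream; every cell may entrain one grain
    with a probability that increases with the drop to the next cell, and
    the grain hops one cell downstream (or exits at the outlet)."""
    rng = random.Random(seed)
    bed = [5] * n_cells              # initial uniform bed (grain counts)
    fed = exported = 0
    yields = []                      # grains leaving the outlet per step
    for _ in range(n_steps):
        bed[0] += 1                  # constant upstream supply
        fed += 1
        out = 0
        for i in range(n_cells):
            # downstream drop; the outlet sees a fixed unit pseudo-gradient
            nxt = bed[i + 1] if i + 1 < n_cells else bed[i] - 1
            p = min(1.0, max(0.0, p_base + slope_gain * (bed[i] - nxt)))
            if bed[i] > 0 and rng.random() < p:
                bed[i] -= 1
                if i + 1 < n_cells:
                    bed[i + 1] += 1
                else:
                    out += 1         # grain exits the channel
        exported += out
        yields.append(out)
    return bed, fed, exported, yields
```

Even this caricature conserves mass exactly and produces an intermittent outlet yield under constant feed, the qualitative behavior the abstract describes.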

  15. Atmospheric stability and complex terrain: comparing measurements and CFD

    NASA Astrophysics Data System (ADS)

    Koblitz, T.; Bechmann, A.; Berg, J.; Sogachev, A.; Sørensen, N.; Réthoré, P.-E.

    2014-12-01

    For wind resource assessment, the wind industry is increasingly relying on Computational Fluid Dynamics models that focus on modeling the airflow in a neutrally stratified surface layer. So far, physical processes that are specific to the atmospheric boundary layer, for example the Coriolis force, buoyancy forces and heat transport, are mostly ignored in state-of-the-art flow solvers. In order to decrease the uncertainty of wind resource assessment, the effect of thermal stratification on the atmospheric boundary layer should be included in such models. The present work focuses on non-neutral atmospheric flow over complex terrain including physical processes like stability and Coriolis force. We examine the influence of these effects on the whole atmospheric boundary layer using the DTU Wind Energy flow solver EllipSys3D. To validate the flow solver, measurements from Benakanahalli hill, a field experiment that took place in India in early 2010, are used. The experiment was specifically designed to address the combined effects of stability and Coriolis force over complex terrain, and provides a dataset to validate flow solvers. Including those effects into EllipSys3D significantly improves the predicted flow field when compared against the measurements.
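The effect of stable stratification on the surface-layer wind profile, exactly the kind of physics the abstract notes is usually ignored, can be illustrated with textbook Monin-Obukhov similarity theory. The sketch below uses the common stable-side correction psi_m = -5 z/L; it is a generic illustration, not the EllipSys3D implementation, and all parameter values are invented.

```python
import math

def wind_profile(z, u_star, z0, L=None, kappa=0.4):
    """Surface-layer wind speed u(z) = (u*/kappa) [ln(z/z0) - psi_m(z/L)],
    with psi_m = -5 z/L for stable stratification (Monin-Obukhov similarity).
    L is the Obukhov length; L=None means neutral stratification."""
    psi_m = 0.0 if L is None else -5.0 * z / L
    return (u_star / kappa) * (math.log(z / z0) - psi_m)

# same friction velocity and roughness length, neutral vs. stably stratified
u_neutral = wind_profile(40.0, u_star=0.4, z0=0.05)
u_stable = wind_profile(40.0, u_star=0.4, z0=0.05, L=200.0)
```

Under stable stratification the correction term increases the predicted speed at a given height (suppressed mixing steepens the profile), which is one reason a purely neutral solver misestimates wind resources.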

  16. Time Domain and Frequency Domain Deterministic Channel Modeling for Tunnel/Mining Environments.

    PubMed

    Zhou, Chenming; Jacksha, Ronald; Yan, Lincan; Reyes, Miguel; Kovalchik, Peter

    2017-01-01

    Understanding wireless channels in complex mining environments is critical for designing optimized wireless systems operated in these environments. In this paper, we propose two physics-based, deterministic ultra-wideband (UWB) channel models for characterizing wireless channels in mining/tunnel environments - one in the time domain and the other in the frequency domain. For the time domain model, a general Channel Impulse Response (CIR) is derived and the result is expressed in the classic UWB tapped delay line form. The derived time domain channel model takes into account major propagation-controlling factors including tunnel or entry dimensions, frequency, polarization, electrical properties of the four tunnel walls, and transmitter and receiver locations. For the frequency domain model, a complex channel transfer function is derived analytically. Based on the proposed physics-based deterministic channel models, channel parameters such as delay spread, multipath component number, and angular spread are analyzed. It is found that, despite the presence of heavy multipath, both the channel delay spread and the angular spread in tunnel environments are relatively small compared to those of typical indoor environments. The results and findings in this paper have application in the design and deployment of wireless systems in underground mining environments.
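The two views the abstract pairs, a tapped delay line CIR h(t) = Σ_k a_k δ(t − τ_k) and its transfer function H(f) = Σ_k a_k e^{−j2πfτ_k}, along with the delay-spread statistic derived from them, can be sketched directly. The tap values below are invented for the example; the paper derives them from tunnel geometry and wall properties.

```python
import cmath
import math

def rms_delay_spread(taps):
    """Power-weighted RMS delay spread of a tapped delay line CIR,
    given as (delay_ns, complex_gain) pairs."""
    p = [abs(a) ** 2 for _, a in taps]
    pt = sum(p)
    m1 = sum(t * pi for (t, _), pi in zip(taps, p)) / pt   # mean delay
    m2 = sum(t * t * pi for (t, _), pi in zip(taps, p)) / pt
    return math.sqrt(m2 - m1 * m1)

def transfer_function(taps, f_ghz):
    """Frequency-domain view of the same channel:
    H(f) = sum_k a_k exp(-j 2 pi f tau_k), with tau in ns and f in GHz."""
    return sum(a * cmath.exp(-2j * math.pi * f_ghz * t) for t, a in taps)

# two equal-power paths, 10 ns apart (values invented for the example)
taps = [(0.0, 1.0), (10.0, 1.0)]
```

With two equal taps 10 ns apart the RMS delay spread is 5 ns, and H(f) shows the expected frequency-selective null where the two paths arrive in antiphase (f = 50 MHz here).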

  17. Time Domain and Frequency Domain Deterministic Channel Modeling for Tunnel/Mining Environments

    PubMed Central

    Zhou, Chenming; Jacksha, Ronald; Yan, Lincan; Reyes, Miguel; Kovalchik, Peter

    2018-01-01

    Understanding wireless channels in complex mining environments is critical for designing optimized wireless systems operated in these environments. In this paper, we propose two physics-based, deterministic ultra-wideband (UWB) channel models for characterizing wireless channels in mining/tunnel environments - one in the time domain and the other in the frequency domain. For the time domain model, a general Channel Impulse Response (CIR) is derived and the result is expressed in the classic UWB tapped delay line form. The derived time domain channel model takes into account major propagation-controlling factors including tunnel or entry dimensions, frequency, polarization, electrical properties of the four tunnel walls, and transmitter and receiver locations. For the frequency domain model, a complex channel transfer function is derived analytically. Based on the proposed physics-based deterministic channel models, channel parameters such as delay spread, multipath component number, and angular spread are analyzed. It is found that, despite the presence of heavy multipath, both the channel delay spread and the angular spread in tunnel environments are relatively small compared to those of typical indoor environments. The results and findings in this paper have application in the design and deployment of wireless systems in underground mining environments. PMID:29457801

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ballouz, Ronald-Louis; Richardson, Derek C.; Morishima, Ryuji

    We study the B ring’s complex optical depth structure. The source of this structure may be the complex dynamics of the Keplerian shear and the self-gravity of the ring particles. The outcome of these dynamic effects depends sensitively on the collisional and physical properties of the particles. Two mechanisms can emerge that dominate the macroscopic physical structure of the ring: self-gravity wakes and viscous overstability. Here we study the interplay between these two mechanisms by using our recently developed particle collision method that allows us to better model the inter-particle contact physics. We find that for a constant ring surface density and particle internal density, particles with rough surfaces tend to produce axisymmetric ring features associated with the viscous overstability, while particles with smoother surfaces produce self-gravity wakes.

  19. Modeling radionuclide migration from underground nuclear explosions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harp, Dylan Robert; Stauffer, Philip H.; Viswanathan, Hari S.

    2017-03-06

    The travel time of radionuclide gases to the ground surface in fractured rock depends on many complex factors. Numerical simulators are the most complete repositories of knowledge of the complex processes governing radionuclide gas migration to the ground surface, allowing us to verify conceptualizations of physical processes against observations and to forecast radionuclide gas travel times to the ground surface and isotopic ratios.

  20. INTERDISCIPLINARY PHYSICS AND RELATED AREAS OF SCIENCE AND TECHNOLOGY: Synchronization in Complex Networks with Multiple Connections

    NASA Astrophysics Data System (ADS)

    Wu, Qing-Chu; Fu, Xin-Chu; Sun, Wei-Gang

    2010-01-01

    In this paper a class of networks with multiple connections is discussed. The multiple connections comprise two different types of links between nodes in complex networks. For this new model, we give a simple generating procedure. Furthermore, we investigate dynamical synchronization behavior in a delayed two-layer network, giving corresponding theoretical analysis and numerical examples.

  1. A Self-Critique of Self-Organized Criticality in Astrophysics

    NASA Astrophysics Data System (ADS)

    Aschwanden, Markus J.

    2015-08-01

    The concept of ``self-organized criticality'' (SOC) was originally proposed as an explanation of 1/f-noise by Bak, Tang, and Wiesenfeld (1987), but turned out to have a far broader significance for scale-free nonlinear energy dissipation processes occurring in the entire universe. Over the last 30 years, an inspiring cross-fertilization from complexity theory to solar and astrophysics took place, where the SOC concept was initially applied to solar flares, stellar flares, and magnetospheric substorms, and later extended to the radiation belt, the heliosphere, lunar craters, the asteroid belt, the Saturn ring, pulsar glitches, soft X-ray repeaters, blazars, black-hole objects, cosmic rays, and boson clouds. The application of SOC concepts has been performed by numerical cellular automaton simulations, by analytical calculations of statistical (powerlaw-like) distributions based on physical scaling laws, and by observational tests of theoretically predicted size distributions and waiting time distributions. Attempts have been undertaken to import physical models into numerical SOC toy models. The novel applications stimulated also vigorous debates about the discrimination between SOC-related and non-SOC processes, such as phase transitions, turbulence, random-walk diffusion, percolation, branching processes, network theory, chaos theory, fractality, multi-scale, and other complexity phenomena. We review SOC models applied to astrophysical observations, attempt to describe what physics can be captured by SOC models, and offer a critique of weaknesses and strengths in existing SOC models.
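The "numerical cellular automaton simulations" at the heart of the SOC literature are typified by the original Bak-Tang-Wiesenfeld sandpile. A minimal 2-D version is sketched below (lattice size, drop count, and seed are arbitrary choices for the example): single grains are dropped at random sites, any site holding four or more grains topples one grain to each neighbor, and grains fall off the open boundary. The avalanche-size record this produces is the kind of scale-free statistic SOC is invoked to explain.

```python
import random

def btw_avalanches(n=12, drops=2000, seed=3):
    """Minimal 2-D Bak-Tang-Wiesenfeld sandpile cellular automaton.
    Returns the avalanche size (number of topplings) for each drop."""
    rng = random.Random(seed)
    z = [[0] * n for _ in range(n)]
    sizes = []
    for _ in range(drops):
        i, j = rng.randrange(n), rng.randrange(n)
        z[i][j] += 1
        size = 0
        unstable = [(i, j)] if z[i][j] >= 4 else []
        while unstable:
            a, b = unstable.pop()
            if z[a][b] < 4:          # may have relaxed already
                continue
            z[a][b] -= 4             # topple: one grain to each neighbor
            size += 1
            for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                x, y = a + da, b + db
                if 0 <= x < n and 0 <= y < n:
                    z[x][y] += 1
                    if z[x][y] >= 4:
                        unstable.append((x, y))
                # grains crossing the boundary leave the system
        sizes.append(size)
    return sizes
```

After the transient, the pile self-organizes to a critical slope: most drops cause no avalanche at all, while occasional drops trigger system-spanning cascades, with no tuned parameter separating the two.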

  3. Predicting Physical Interactions between Protein Complexes*

    PubMed Central

    Clancy, Trevor; Rødland, Einar Andreas; Nygard, Ståle; Hovig, Eivind

    2013-01-01

    Protein complexes enact most biochemical functions in the cell. Dynamic interactions between protein complexes are frequent in many cellular processes. As they are often of a transient nature, they may be difficult to detect using current genome-wide screens. Here, we describe a method to computationally predict physical interactions between protein complexes, applied to both humans and yeast. We integrated manually curated protein complexes and physical protein interaction networks, and we designed a statistical method to identify pairs of protein complexes where the number of protein interactions between the pair is higher than expected by chance and thus likely reflects an actual physical interaction between the complexes. An evaluation against manually curated physical complex-complex interactions in yeast revealed that 50% of these interactions could be predicted in this manner. A community network analysis of the highest scoring pairs revealed a biologically sensible organization of physical complex-complex interactions in the cell. Such analyses of proteomes may serve as a guide to the discovery of novel functional cellular relationships. PMID:23438732
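The counting argument here, deciding whether the number of protein interactions spanning two complexes exceeds chance, is naturally scored with a hypergeometric tail probability. The sketch below is a generic version of that style of statistic, not necessarily the exact test the authors designed: given n_a and n_b proteins in the two complexes, it asks how likely it is that at least k of the network's edges land among the n_a * n_b cross-complex pairs if edges were placed at random.

```python
from math import comb

def interaction_pvalue(k, n_a, n_b, total_edges, total_pairs):
    """Hypergeometric upper tail P(X >= k): probability that at least k of
    total_edges randomly placed edges fall among the n_a * n_b possible
    cross-complex protein pairs, out of total_pairs pairs in the network.
    Illustrative scoring sketch only."""
    m = n_a * n_b
    num = sum(comb(m, x) * comb(total_pairs - m, total_edges - x)
              for x in range(k, min(m, total_edges) + 1))
    return num / comb(total_pairs, total_edges)
```

A small p-value flags a complex pair whose inter-complex edge count is unlikely under random placement, the signature the abstract attributes to a genuine physical complex-complex interaction.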

  4. The evolution of process-based hydrologic models: historical challenges and the collective quest for physical realism

    NASA Astrophysics Data System (ADS)

    Clark, Martyn P.; Bierkens, Marc F. P.; Samaniego, Luis; Woods, Ross A.; Uijlenhoet, Remko; Bennett, Katrina E.; Pauwels, Valentijn R. N.; Cai, Xitian; Wood, Andrew W.; Peters-Lidard, Christa D.

    2017-07-01

    The diversity in hydrologic models has historically led to great controversy on the correct approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. In this paper, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We illustrate how modeling advances have been made by groups using models of different type and complexity, and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.

  5. The evolution of process-based hydrologic models: historical challenges and the collective quest for physical realism

    NASA Astrophysics Data System (ADS)

    Clark, M. P.; Nijssen, B.; Wood, A.; Mizukami, N.; Newman, A. J.

    2017-12-01

    The diversity in hydrologic models has historically led to great controversy on the "correct" approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. In this paper, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We illustrate how modeling advances have been made by groups using models of different type and complexity, and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.

  6. Modeling Patient Treatment With Medical Records: An Abstraction Hierarchy to Understand User Competencies and Needs.

    PubMed

    St-Maurice, Justin D; Burns, Catherine M

    2017-07-28

    Health care is a complex sociotechnical system. Patient treatment is evolving and needs to incorporate the use of technology and new patient-centered treatment paradigms. Cognitive work analysis (CWA) is an effective framework for understanding complex systems, and work domain analysis (WDA) is useful for understanding complex ecologies. Although previous applications of CWA have described patient treatment, due to their scope of work, patients were previously characterized as biomedical machines rather than as actors involved in their own care. An abstraction hierarchy that characterizes patients as beings with complex social values and priorities is needed. This can help better understand treatment in a modern approach to care. The purpose of this study was to perform a WDA to represent the treatment of patients with medical records. The methods to develop this model included the analysis of written texts and collaboration with subject matter experts. Our WDA represents the ecology through its functional purposes, abstract functions, generalized functions, physical functions, and physical forms. Compared with other work domain models, this model is able to articulate the nuanced balance between medical treatment, patient education, and limited health care resources. Concepts in the analysis were similar to the modeling choices of other WDAs but were combined into a comprehensive, systematic, and contextual overview. The model is helpful for understanding user competencies and needs. Future models could be developed to model the patient's domain and enable the exploration of the shared decision-making (SDM) paradigm. Our work domain model links treatment goals, decision-making constraints, and task workflows. This model can be used by system developers who would like to use ecological interface design (EID) to improve systems. Our hierarchy is the first in a future set that could explore new treatment paradigms.
Future hierarchies could model the patient as a controller and could be useful for mobile app development. ©Justin D St-Maurice, Catherine M Burns. Originally published in JMIR Human Factors (http://humanfactors.jmir.org), 28.07.2017.

  7. Modeling Patient Treatment With Medical Records: An Abstraction Hierarchy to Understand User Competencies and Needs

    PubMed Central

    2017-01-01

    Background Health care is a complex sociotechnical system. Patient treatment is evolving and needs to incorporate the use of technology and new patient-centered treatment paradigms. Cognitive work analysis (CWA) is an effective framework for understanding complex systems, and work domain analysis (WDA) is useful for understanding complex ecologies. Although previous applications of CWA have described patient treatment, due to their scope of work, patients were previously characterized as biomedical machines rather than as actors involved in their own care. Objective An abstraction hierarchy that characterizes patients as beings with complex social values and priorities is needed. This can help better understand treatment in a modern approach to care. The purpose of this study was to perform a WDA to represent the treatment of patients with medical records. Methods The methods to develop this model included the analysis of written texts and collaboration with subject matter experts. Our WDA represents the ecology through its functional purposes, abstract functions, generalized functions, physical functions, and physical forms. Results Compared with other work domain models, this model is able to articulate the nuanced balance between medical treatment, patient education, and limited health care resources. Concepts in the analysis were similar to the modeling choices of other WDAs but were combined into a comprehensive, systematic, and contextual overview. The model is helpful for understanding user competencies and needs. Future models could be developed to model the patient’s domain and enable the exploration of the shared decision-making (SDM) paradigm. Conclusion Our work domain model links treatment goals, decision-making constraints, and task workflows. This model can be used by system developers who would like to use ecological interface design (EID) to improve systems. 
Our hierarchy is the first in a future set that could explore new treatment paradigms. Future hierarchies could model the patient as a controller and could be useful for mobile app development. PMID:28754650

  8. [Research Progress on the Interaction Effects and Its Neural Mechanisms between Physical Fatigue and Mental Fatigue].

    PubMed

    Zhang, Lixin; Zhang, Chuncui; He, Feng; Zhao, Xin; Qi, Hongzhi; Wan, Baikun; Ming, Dong

    2015-10-01

    Fatigue is an exhaustion state caused by prolonged physical and mental work, which can reduce working efficiency and even cause industrial accidents. Fatigue is a complex concept involving both physiological and psychological factors. It can cause a decline in concentration and work performance and induce chronic diseases; prolonged fatigue may endanger life safety. In most scenarios, physical and mental workloads jointly drive the operator into a fatigue state. Thus, it is very important to study the interaction effects between physical and mental fatigue and their neural mechanisms. This paper introduces recent progress on these interaction effects and discusses some research challenges and future development directions. It is believed that the mutual influence between physical fatigue and mental fatigue may occur in the central nervous system. Revealing basal ganglia function and dopamine release may be important for exploring the neural mechanisms linking physical and mental fatigue. Future efforts should optimize fatigue models, evaluate parameters, and explore the neural mechanisms so as to provide a scientific basis and theoretical guidance for complex task design and fatigue monitoring.

  9. Book Review:

    NASA Astrophysics Data System (ADS)

    McKane, Alan

    2003-12-01

    This is a book about the modelling of complex systems and, unlike many books on this subject, concentrates on the discussion of specific systems and gives practical methods for modelling and simulating them. This is not to say that the author does not devote space to the general philosophy and definition of complex systems and agent-based modelling, but the emphasis is definitely on the development of concrete methods for analysing them. This is, in my view, to be welcomed and I thoroughly recommend the book, especially to those with a theoretical physics background who will be very much at home with the language and techniques which are used. The author has developed a formalism for understanding complex systems which is based on the Langevin approach to the study of Brownian motion. This is a mesoscopic description; details of the interactions between the Brownian particle and the molecules of the surrounding fluid are replaced by a randomly fluctuating force. Thus all microscopic detail is replaced by a coarse-grained description which encapsulates the essence of the interactions at the finer level of description. In a similar way, the influences on Brownian agents in a multi-agent system are replaced by stochastic influences which sum up the effects of these interactions on a finer scale. Unlike Brownian particles, Brownian agents are not structureless particles, but instead have some internal states so that, for instance, they may react to changes in the environment or to the presence of other agents. Most of the book is concerned with developing the idea of Brownian agents using the techniques of statistical physics. This development parallels that for Brownian particles in physics, but the author then goes on to apply the technique to problems in biology, economics and the social sciences. This is a clear and well-written book which is a useful addition to the literature on complex systems. 
It will be interesting to see if the use of Brownian agents becomes a standard tool in the study of complex systems in the future.

  10. Agent autonomy approach to probabilistic physics-of-failure modeling of complex dynamic systems with interacting failure mechanisms

    NASA Astrophysics Data System (ADS)

    Gromek, Katherine Emily

    A novel computational and inference framework of the physics-of-failure (PoF) reliability modeling for complex dynamic systems has been established in this research. The PoF-based reliability models are used to perform a real-time simulation of system failure processes, so that the system-level reliability modeling would constitute inferences from checking the status of component-level reliability at any given time. The "agent autonomy" concept is applied as a solution method for the system-level probabilistic PoF-based (i.e. PPoF-based) modeling. This concept originated from artificial intelligence (AI) as a leading intelligent computational inference in the modeling of multi-agent systems (MAS). The concept of agent autonomy in the context of reliability modeling was first proposed by M. Azarkhail [1], where a fundamentally new idea of system representation by autonomous intelligent agents for the purpose of reliability modeling was introduced. The contribution of the current work lies in the further development of the agent autonomy concept, particularly the refined agent classification within the scope of PoF-based system reliability modeling, new approaches to the learning and autonomy properties of the intelligent agents, and the modeling of interacting failure mechanisms within the dynamic engineering system. The autonomous property of intelligent agents is defined as an agent's ability to self-activate, deactivate, or completely redefine its role in the analysis. This property of agents and the ability to model interacting failure mechanisms of the system elements make the agent autonomy approach fundamentally different from all existing methods of probabilistic PoF-based reliability modeling. 1. Azarkhail, M., "Agent Autonomy Approach to Physics-Based Reliability Modeling of Structures and Mechanical Systems", PhD thesis, University of Maryland, College Park, 2007.

  11. Semi-supervised Machine Learning for Analysis of Hydrogeochemical Data and Models

    NASA Astrophysics Data System (ADS)

    Vesselinov, Velimir; O'Malley, Daniel; Alexandrov, Boian; Moore, Bryan

    2017-04-01

    Data- and model-based analyses such as uncertainty quantification, sensitivity analysis, and decision support using complex physics models with numerous model parameters typically require a huge number of model evaluations (on the order of 10^6). Furthermore, model simulations of complex physics may require substantial computational time. For example, accounting for simultaneously occurring physical processes such as fluid flow and biogeochemical reactions in a heterogeneous porous medium may require several hours of wall-clock computational time. To address these issues, we have developed a novel methodology for semi-supervised machine learning based on Non-negative Matrix Factorization (NMF) coupled with customized k-means clustering. The algorithm allows for automated, robust Blind Source Separation (BSS) of groundwater types (contamination sources) based on model-free analyses of observed hydrogeochemical data. We have also developed reduced-order modeling tools, which couple support vector regression (SVR), genetic algorithms (GA), and artificial and convolutional neural networks (ANN/CNN). SVR is applied to predict the model behavior within prior uncertainty ranges associated with the model parameters. ANN and CNN procedures are applied to upscale heterogeneity of the porous medium. In the upscaling process, fine-scale high-resolution models of heterogeneity inform coarse-resolution models, which have improved computational efficiency while capturing the impact of fine-scale effects at the coarse scale of interest. These techniques are tested independently on a series of synthetic problems. We also present a decision analysis related to contaminant remediation where the developed reduced-order models are applied to reproduce groundwater flow and contaminant transport in a synthetic heterogeneous aquifer. The tools are coded in Julia and are a part of the MADS high-performance computational framework (https://github.com/madsjulia/Mads.jl).
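The NMF step at the heart of such a blind source separation can be sketched with the standard Lee-Seung multiplicative updates for the Frobenius objective: a nonnegative data matrix X is factored as X ≈ W H, so each observed mixture (a row of X) becomes a nonnegative blend of a few basis "sources" (rows of H). This is a generic sketch, not the MADS implementation (which is in Julia), and the synthetic "water types" below are invented for the example.

```python
import numpy as np

def nmf(X, k, iters=2000, seed=0):
    """Multiplicative-update NMF (Lee-Seung, Frobenius loss):
    X ~ W @ H with W, H >= 0. Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, k)) + 1e-3
    H = rng.random((k, m)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + 1e-12)   # update sources
        W *= (X @ H.T) / (W @ H @ H.T + 1e-12)   # update mixing weights
    return W, H

# two hidden "source" signatures mixed in different proportions per sample
H_true = np.array([[1.0, 0.0, 0.5],
                   [0.0, 1.0, 0.5]])
W_true = np.array([[0.9, 0.1], [0.2, 0.8], [0.5, 0.5], [0.7, 0.3]])
X = W_true @ H_true

W, H = nmf(X, k=2)
err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

On this exactly rank-2 nonnegative data the factorization recovers the mixtures to small residual while keeping both factors nonnegative, which is what makes the recovered rows of H interpretable as groundwater types.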

  12. Tensor renormalization group methods for spin and gauge models

    NASA Astrophysics Data System (ADS)

    Zou, Haiyuan

    The analysis of the error of perturbative series by comparison with the exact solution is an important tool for understanding the non-perturbative physics of statistical models. For some toy models, a new method can be used to calculate higher-order weak-coupling expansions, and a modified perturbation theory can be constructed. However, it is nontrivial to generalize the new method to understand the critical behavior of high-dimensional spin and gauge models. Indeed, developing accurate and efficient numerical algorithms to solve these problems is a major challenge in both high energy physics and condensed matter physics. In this thesis, one systematic approach, the tensor renormalization group method, is discussed. The applications of the method to several spin and gauge models on a lattice are investigated. Theoretically, the new method allows one to write an exact representation of the partition function of models with local interactions, e.g., O(N) models, Z2 gauge models, and U(1) gauge models. Practically, by using controllable approximations, results in both finite volume and the thermodynamic limit can be obtained. Another advantage of the new method is that it is insensitive to sign problems for models with complex coupling and chemical potential. Through the new approach, Fisher's zeros of the 2D O(2) model in the complex coupling plane can be calculated, and the finite-size scaling of the results agrees well with the Kosterlitz-Thouless assumption. Applying the method to the O(2) model with a chemical potential, a new phase diagram of the models can be obtained. The structure of the tensor language may provide a new tool to understand phase transition properties in general.
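The "exact representation of the partition function of models with local interactions" is easiest to see in the simplest case: the 1-D Ising chain, whose partition function is a closed loop of identical two-index tensors (the transfer matrix T[s,s'] = exp(beta s s')). The sketch below contracts that loop directly and checks against the closed form Z = (2 cosh beta)^N + (2 sinh beta)^N; it illustrates the tensor idea only, not the renormalization-group truncation the thesis develops.

```python
import math

def ising_1d_Z(n, beta):
    """Exact partition function of the periodic 1-D Ising chain,
    computed by contracting n copies of the transfer tensor
    T[s, s'] = exp(beta * s * s') with s, s' in {+1, -1}."""
    e, f = math.exp(beta), math.exp(-beta)
    T = [[e, f],
         [f, e]]
    M = [[1.0, 0.0], [0.0, 1.0]]          # running product of tensors
    for _ in range(n):
        M = [[sum(M[i][k] * T[k][j] for k in range(2)) for j in range(2)]
             for i in range(2)]
    return M[0][0] + M[1][1]              # trace closes the periodic loop
```

In higher dimensions the same rewriting yields a network of local tensors that cannot be contracted exactly; the tensor renormalization group replaces exact contraction with a controlled coarse-graining of that network.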

  13. Ocean Carbon States: Data Mining in Observations and Numerical Simulations Results

    NASA Astrophysics Data System (ADS)

    Latto, R.; Romanou, A.

    2017-12-01

    Advanced data mining techniques are rapidly becoming widely used in Climate and Earth Sciences with the purpose of extracting new meaningful information from increasingly larger and more complex datasets. This is particularly important in studies of the global carbon cycle, where any lack of understanding of its combined physical and biogeochemical drivers is detrimental to our ability to accurately describe, understand, and predict CO2 concentrations and their changes in the major carbon reservoirs. The analysis presented here evaluates the use of cluster analysis as a means of identifying and comparing spatial and temporal patterns extracted from observational and model datasets. As the observational data is organized into various regimes, which we will call "ocean carbon states", we gain insight into the physical and/or biogeochemical processes controlling the ocean carbon cycle as well as how well these processes are simulated by a state-of-the-art climate model. We find that cluster analysis effectively produces realistic, dynamic regimes that can be associated with specific processes at different temporal scales for both observations and the model. In addition, we show how these regimes can be used to illustrate and characterize the model biases in the model air-sea flux of CO2. These biases are attributed to biases in salinity, sea surface temperature, wind speed, and nitrate, which are then used to identify the physical processes that are inaccurately reproduced by the model. In this presentation, we provide a proof-of-concept application using simple datasets, and we expand to more complex ones, using several physical and biogeochemical variable pairs, thus providing considerable insight into the mechanisms and phases of the ocean carbon cycle over different temporal and spatial scales.
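The regime-finding step, partitioning observations into a few "ocean carbon states", is classic k-means clustering. Below is a minimal Lloyd's-algorithm sketch on 2-D feature pairs; the feature values are invented for the example, and the actual study clusters multivariate physical/biogeochemical fields.

```python
def kmeans(points, k, iters=50):
    """Plain Lloyd's k-means on 2-D points: assign each point to the
    nearest center, then move each center to its group's mean."""
    centers = list(points[:k])                # deterministic init for the demo
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                + (p[1] - centers[c][1]) ** 2)
            groups[j].append(p)
        centers = [(sum(p[0] for p in g) / len(g),
                    sum(p[1] for p in g) / len(g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

# two well-separated synthetic "regimes" in a 2-D feature space
pts = [(0, 0), (0.1, 0), (0, 0.1), (5, 5), (5.1, 5), (5, 5.1)]
centers, groups = kmeans(pts, 2)
```

Running the same clustering on observations and on model output, then comparing the recovered regimes, is the essence of the bias diagnosis described in the abstract.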

  14. The application of CFD to the modelling of fires in complex geometries

    NASA Astrophysics Data System (ADS)

    Burns, A. D.; Clarke, D. S.; Guilbert, P.; Jones, I. P.; Simcox, S.; Wilkes, N. S.

    The application of Computational Fluid Dynamics (CFD) to industrial safety is a challenging activity. In particular it involves the interaction of several different physical processes, including turbulence, combustion, radiation, buoyancy, compressible flow and shock waves in complex three-dimensional geometries. In addition, there may be multi-phase effects arising, for example, from sprinkler systems for extinguishing fires. The FLOW3D software (1-3) from Computational Fluid Dynamics Services (CFDS) is in widespread use in industrial safety problems, both within AEA Technology, and also by CFDS's commercial customers, for example references (4-13). This paper discusses some other applications of FLOW3D to safety problems. These applications illustrate the coupling of the gas flows with radiation models and combustion models, particularly for complex geometries where simpler radiation models are not applicable.

  15. A Molecular Dynamic Modeling of Hemoglobin-Hemoglobin Interactions

    NASA Astrophysics Data System (ADS)

    Wu, Tao; Yang, Ye; Sheldon Wang, X.; Cohen, Barry; Ge, Hongya

    2010-05-01

    In this paper, we present a study of hemoglobin-hemoglobin interactions with model reduction methods. We begin with a simple spring-mass system with given parameters (mass and stiffness). With this known system, we compare the mode superposition method with Singular Value Decomposition (SVD) based Principal Component Analysis (PCA). Through PCA we are able to recover the principal direction of this system, namely the modal direction, which matches the eigenvector derived from mode superposition analysis. The same technique is then applied to a much more complicated hemoglobin-hemoglobin molecule interaction model, in which thousands of atoms in hemoglobin molecules are coupled with tens of thousands of T3 water molecule models. In this model, complex inter-atomic and inter-molecular potentials are replaced by nonlinear springs. We employ the same method to extract the most significant modes and their frequencies of this complex dynamical system. More complex physical phenomena can then be further studied with these coarse-grained models.
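    The warm-up comparison described here, mode superposition versus SVD-based PCA on a known spring-mass system, can be sketched as follows; the two-DOF chain, stiffness values, and mode amplitudes are hypothetical stand-ins, not the paper's system:

```python
import numpy as np

# Two-DOF spring-mass chain with unit masses: M x'' + K x = 0, M = I.
# k1 connects wall-to-mass1, k2 connects mass1-to-mass2 (illustrative stiffnesses).
k1, k2 = 4.0, 1.0
K = np.array([[k1 + k2, -k2],
              [-k2,      k2]])

# Mode superposition: eigenvectors of K are the vibration modes (ascending order).
w2, modes = np.linalg.eigh(K)          # w2 = squared natural frequencies

# Free response dominated by the lowest mode (given the largest amplitude).
t = np.linspace(0.0, 20.0, 2000)
x = (5.0 * np.outer(modes[:, 0], np.cos(np.sqrt(w2[0]) * t))
     + 0.2 * np.outer(modes[:, 1], np.cos(np.sqrt(w2[1]) * t)))

# SVD-based PCA on displacement snapshots recovers the dominant mode direction.
U, s, Vt = np.linalg.svd(x - x.mean(axis=1, keepdims=True), full_matrices=False)
pc1 = U[:, 0]                          # first principal direction

alignment = abs(pc1 @ modes[:, 0])     # ~1.0 when PCA matches the eigenmode
```

    With the snapshot matrix dominated by the low-frequency mode, the first left singular vector reproduces that eigenvector, which is the correspondence the abstract exploits before scaling up to the hemoglobin model.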

  16. A comparison study of two snow models using data from different Alpine sites

    NASA Astrophysics Data System (ADS)

    Piazzi, Gaia; Riboust, Philippe; Campo, Lorenzo; Cremonese, Edoardo; Gabellani, Simone; Le Moine, Nicolas; Morra di Cella, Umberto; Ribstein, Pierre; Thirel, Guillaume

    2017-04-01

    The hydrological balance of an Alpine catchment is strongly affected by snowpack dynamics. Melt water supplies a significant component of the annual water budget, both in terms of soil moisture and runoff, which play a critical role in flood generation and impact water resource management in snow-dominated basins. Several snow models have been developed with variable degrees of complexity, depending mainly on their target application and the availability of computational resources and data. According to the level of detail, snow models range from statistical snowmelt-runoff and degree-day methods using composite snow-soil or explicit snow layer(s), to physically based, energy-balance snow models with detailed internal snow-process schemes. Intermediate-complexity approaches have also been widely developed, resulting in simplified versions of the physical parameterization schemes with reduced snowpack layering. Nevertheless, increasing model complexity does not necessarily entail improved model simulations. This study presents a comparison between two snow models designed for hydrological purposes. The snow module developed at UPMC and IRSTEA is a mono-layer energy-balance model that analytically resolves heat and phase-change equations within the snowpack; vertical mass exchange within the snowpack is also resolved analytically. The model is intended for hydrological studies, but also gives a realistic estimate of the snowpack state at the watershed scale (SWE and snow depth). The structure of the model allows it to be easily calibrated using snow observations. This model is further presented in EGU2017-7492. The snow module of SMASH (Snow Multidata Assimilation System for Hydrology) consists of a multi-layer snow dynamic scheme. It is physically based on mass and energy balances and reproduces the main physical processes occurring within the snowpack: accumulation, density dynamics, melting, sublimation, radiative balance, and heat and mass exchanges. The model is driven by observed meteorological forcing data (air temperature, wind velocity, relative air humidity, precipitation and incident solar radiation) to provide an estimate of the snowpack state. In this study, no data assimilation (DA) is used; for details on the DA scheme, see EGU2017-7777. Observed data supplied by meteorological stations located at three experimental Alpine sites are used: Col de Porte (1325 m, France), Torgnon (2160 m, Italy), and Weissfluhjoch (2540 m, Switzerland). The performances of the two models are compared through evaluations of snow mass, snow depth, albedo and surface temperature simulations, in order to better understand and pinpoint the limits and potentialities of the analyzed schemes and the impact of different parameterizations on the model simulations.
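    For contrast with the two energy-balance modules compared here, the degree-day end of the complexity spectrum mentioned above fits in a few lines; the melt factor, base temperature, and temperature series below are illustrative, not calibrated values:

```python
import numpy as np

def degree_day_melt(temps_c, swe0, ddf=3.0, t_base=0.0):
    """Classic degree-day scheme: daily melt (mm w.e.) = ddf * max(T - t_base, 0),
    capped by the snow water equivalent still on the ground."""
    swe = swe0
    melt = []
    for t in temps_c:
        m = min(max(t - t_base, 0.0) * ddf, swe)
        swe -= m
        melt.append(m)
    return np.array(melt), swe

# One illustrative week of daily mean temperatures (degrees C),
# starting from 30 mm w.e. of snow on the ground.
temps = [-3.0, -1.0, 0.5, 2.0, 4.0, 5.0, 1.0]
melt, swe_left = degree_day_melt(temps, swe0=30.0)
```

    A single empirical melt factor replaces the entire energy balance, which is exactly the trade-off between simplicity and physical realism that motivates the intermediate-complexity schemes compared in this study.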

  17. Protein-Protein Interactions of Azurin Complex by Coarse-Grained Simulations with a Gō-Like Model

    NASA Astrophysics Data System (ADS)

    Rusmerryani, Micke; Takasu, Masako; Kawaguchi, Kazutomo; Saito, Hiroaki; Nagao, Hidemi

    Proteins usually perform their biological functions by forming complexes with other proteins. It is very important to study protein-protein interactions since these interactions are crucial in many processes of a living organism. In this study, we develop a coarse-grained model to simulate protein complexes in a liquid system. We carry out molecular dynamics simulations with topology-based potential interactions to simulate the dynamical properties of Pseudomonas aeruginosa azurin complex systems. Azurin is known to play an essential role as an anticancer agent and to bind many important intracellular molecules. Some physical properties are monitored during the simulation to gain a better understanding of the influence of protein-protein interactions on the dynamics of the azurin complex. These studies will provide valuable insights for further investigation of protein-protein interactions in more realistic systems.

  18. Describing Ecosystem Complexity through Integrated Catchment Modeling

    NASA Astrophysics Data System (ADS)

    Shope, C. L.; Tenhunen, J. D.; Peiffer, S.

    2011-12-01

    Land use and climate change have been implicated in reduced ecosystem services (i.e., high-quality water yield, biodiversity, and agricultural yield). The prediction of ecosystem services expected under future land use decisions and changing climate conditions has become increasingly important. Complex policy and management decisions require the integration of physical, economic, and social data over several scales to assess effects on water resources and ecology. Field-based meteorology, hydrology, soil physics, plant production, solute and sediment transport, economic, and social behavior data were measured in a South Korean catchment. A variety of models are being used to simulate plot- and field-scale experiments within the catchment. Results from each of the local-scale models identify sensitive, local-scale parameters, which are then used as inputs into a large-scale watershed model. We used the spatially distributed SWAT model to synthesize the experimental field data throughout the catchment. Our approach was to use the range in local-scale model parameter results to define the sensitivity and uncertainty in the large-scale watershed model. Further, this example shows how research can be structured to describe complex ecosystems and landscapes where cross-disciplinary linkages benefit the end result. The field-based and modeling framework described is being used to develop scenarios to examine spatial and temporal changes in land use practices and climatic effects on water quantity, water quality, and sediment transport. Development of accurate modeling scenarios requires understanding the social relationship between individual and policy-driven land management practices and the value of sustainable resources to all stakeholders.

  19. A Review of Hypersonics Aerodynamics, Aerothermodynamics and Plasmadynamics Activities within NASA's Fundamental Aeronautics Program

    NASA Technical Reports Server (NTRS)

    Salas, Manuel D.

    2007-01-01

    The research program of the aerodynamics, aerothermodynamics and plasmadynamics discipline of NASA's Hypersonic Project is reviewed. Details are provided for each of its three components: 1) development of physics-based models of non-equilibrium chemistry, surface catalytic effects, turbulence, transition and radiation; 2) development of advanced simulation tools to enable increased spatial and time accuracy, increased geometrical complexity, grid adaptation, increased physical-processes complexity, uncertainty quantification and error control; and 3) establishment of experimental databases from ground and flight experiments to develop better understanding of high-speed flows and to provide data to validate and guide the development of simulation tools.

  20. Coupled effects of vertical mixing and benthic grazing on phytoplankton populations in shallow, turbid estuaries

    USGS Publications Warehouse

    Koseff, Jeffrey R.; Holen, Jacqueline K.; Monismith, Stephen G.; Cloern, James E.

    1993-01-01

    Coastal ocean waters tend to have very different patterns of phytoplankton biomass variability from the open ocean, and the connections between physical variability and phytoplankton bloom dynamics are less well established for these shallow systems. Prediction of biological responses to physical variability in these environments is inherently difficult because the recurrent seasonal patterns of mixing are complicated by aperiodic fluctuations in river discharge and the high-frequency components of tidal variability. We might expect, then, less predictable and more complex bloom dynamics in these shallow coastal systems compared with the open ocean. Given this complex and dynamic physical environment, can we develop a quantitative framework to define the physical regimes necessary for bloom inception, and can we identify the important mechanisms of physical-biological coupling that lead to the initiation and termination of blooms in estuaries and shallow coastal waters? Numerical modeling provides one approach to address these questions. Here we present results of simulation experiments with a refined version of Cloern's (1991) model in which mixing processes are treated more realistically to reflect the dynamic nature of turbulence generation in estuaries. We investigated several simple models for the turbulent mixing coefficient. We found that the addition of diurnal tidal variation to Cloern's model greatly reduces biomass growth, indicating that variations of mixing on the time scale of hours are crucial. Furthermore, we found that for conditions representative of South San Francisco Bay, numerical simulations allowed for bloom development only when the water column was stratified and when minimal mixing was prescribed in the upper layer. Stratification itself, however, is not sufficient to ensure that a bloom will develop: minimal wind stirring is a further prerequisite to bloom development in shallow turbid estuaries with abundant populations of benthic suspension feeders.
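    The bloom criterion these simulations explore, depth-averaged growth against losses that include benthic grazing, with stratification isolating a surface layer from the bed, can be caricatured by a one-line rate budget; all parameter values below are invented for illustration and this is not Cloern's (1991) model:

```python
def net_growth_rate(H, z_p, mu_max=1.0, resp=0.1, benthic_clearance=5.0):
    """Highly simplified depth-averaged balance (illustrative parameters):
    growth occurs only in the photic fraction z_p/H of a mixed column of depth
    H (m), while benthic grazers filter a fixed volume per unit bed area per
    day (m/day), removing biomass at rate benthic_clearance / H."""
    growth = mu_max * min(z_p / H, 1.0)
    loss = resp + benthic_clearance / H
    return growth - loss  # 1/day; positive => a bloom can develop

# A turbid column with a 2 m photic zone: fully mixed vs. a stratified
# surface layer that benthic grazers cannot reach.
mixed = net_growth_rate(H=10.0, z_p=2.0)
stratified = net_growth_rate(H=3.0, z_p=2.0, benthic_clearance=0.0)
```

    In this caricature the mixed column loses more to the bed than it grows (mixed < 0), while the grazing-isolated surface layer has a positive net rate, echoing the finding that stratification plus minimal upper-layer mixing is required for a bloom.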

  1. Characterization of structural connections using free and forced response test data

    NASA Technical Reports Server (NTRS)

    Lawrence, Charles; Huckelbridge, Arthur A.

    1989-01-01

    The accurate prediction of system dynamic response often has been limited by deficiencies in existing capabilities to adequately characterize connections. Connections between structural components are often mechanically complex and difficult to model accurately with analytical methods. Improved analytical models for connections are needed to improve system dynamic predictions. A procedure for identifying physical connection properties from free and forced response test data is developed, then verified utilizing a system having both a linear and a nonlinear connection. Connection properties are computed in terms of physical parameters so that the physical characteristics of the connections can be better understood, in addition to providing improved input for the system model. The identification procedure is applicable to multi-degree-of-freedom systems, and does not require that the test data be measured directly at the connection locations.

  2. High Performance Computing Modeling Advances Accelerator Science for High-Energy Physics

    DOE PAGES

    Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis

    2014-07-28

    The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space, and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing are essential for accurately modeling them. In the past decade, the US Department of Energy's SciDAC program has produced accelerator-modeling tools that have been employed to tackle some of the most difficult accelerator science problems. The authors discuss the Synergia framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. The authors present Synergia's design principles and its performance on HPC platforms.

  3. A novel phenomenological multi-physics model of Li-ion battery cells

    NASA Astrophysics Data System (ADS)

    Oh, Ki-Yong; Samad, Nassim A.; Kim, Youngki; Siegel, Jason B.; Stefanopoulou, Anna G.; Epureanu, Bogdan I.

    2016-09-01

    A novel phenomenological multi-physics model of Lithium-ion battery cells is developed for control and state estimation purposes. The model can capture the electrical, thermal, and mechanical behaviors of battery cells under constrained conditions, e.g., battery pack conditions. Specifically, the proposed model predicts the core and surface temperatures and the reaction force induced by the volume change of battery cells because of electrochemically- and thermally-induced swelling. Moreover, the model incorporates the influences of changes in preload and ambient temperature on the force, accounting for the severe environmental conditions electrified vehicles face. Intensive experimental validation demonstrates that the proposed multi-physics model accurately predicts the surface temperature and reaction force over a wide operational range of preload and ambient temperature. This high-fidelity model can be useful for more accurate and robust state-of-charge estimation considering the complex dynamic behaviors of the battery cell. Furthermore, the inherent simplicity of the mechanical measurements offers distinct advantages for improving the existing power and thermal management strategies for battery management.
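    A common control-oriented structure for predicting core and surface temperatures, as the model above does, is a two-state lumped thermal network; the sketch below uses that generic structure with illustrative parameter values, not the parameters identified in the paper, and omits the mechanical (swelling/force) states:

```python
def simulate_cell_thermal(I=10.0, R_e=0.01, R_c=1.94, R_u=3.19,
                          C_c=62.7, C_s=4.5, T_amb=25.0, dt=1.0, steps=3600):
    """Two-state lumped thermal model (core Tc, surface Ts), forward Euler.
    All parameter values are illustrative, not identified from data:
      C_c dTc/dt = I^2 R_e + (Ts - Tc)/R_c      (ohmic heating in the core)
      C_s dTs/dt = (Tc - Ts)/R_c + (T_amb - Ts)/R_u  (convection to ambient)
    """
    Tc = Ts = T_amb
    for _ in range(steps):
        q = I * I * R_e                              # heat generated in the core
        dTc = (q + (Ts - Tc) / R_c) / C_c
        dTs = ((Tc - Ts) / R_c + (T_amb - Ts) / R_u) / C_s
        Tc += dt * dTc
        Ts += dt * dTs
    return Tc, Ts

Tc, Ts = simulate_cell_thermal()
```

    At steady state the core sits I^2*R_e*(R_c + R_u) above ambient and the surface I^2*R_e*R_u above it, the kind of relationship a battery management system can invert to estimate the unmeasurable core temperature from the surface reading.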

  4. The Rise of Complexity in Flood Forecasting: Opportunities, Challenges and Tradeoffs

    NASA Astrophysics Data System (ADS)

    Wood, A. W.; Clark, M. P.; Nijssen, B.

    2017-12-01

    Operational flood forecasting is currently undergoing a major transformation. Most national flood forecasting services have relied for decades on lumped, highly calibrated conceptual hydrological models running on local office computing resources, providing deterministic streamflow predictions at gauged river locations that are important to stakeholders and emergency managers. A variety of recent technological advances now make it possible to run complex, high-to-hyper-resolution models for operational hydrologic prediction over large domains, and the US National Weather Service is now attempting to use hyper-resolution models to create new forecast services and products. Yet other 'increased-complexity' forecasting strategies also exist that pursue different tradeoffs between model complexity (i.e., spatial resolution, physics) and streamflow forecast system objectives. There is currently a pressing need for a greater understanding in the hydrology community of the opportunities, challenges and tradeoffs associated with these different forecasting approaches, and for greater participation by the hydrology community in evaluating, guiding and implementing these approaches. Intermediate-resolution forecast systems, for instance, use distributed land surface model (LSM) physics but retain the agility to deploy ensemble methods (including hydrologic data assimilation and hindcast-based post-processing). Fully coupled numerical weather prediction (NWP) systems, another example, use still coarser LSMs to produce ensemble streamflow predictions either at the model scale or after sub-grid scale runoff routing. Based on the direct experience of the authors and colleagues in research and operational forecasting, this presentation describes examples of different streamflow forecast paradigms, from the traditional to the recent hyper-resolution, to illustrate the range of choices facing forecast system developers. 
We also discuss the degree to which the strengths and weaknesses of each strategy map onto the requirements for different types of forecasting services (e.g., flash flooding, river flooding, seasonal water supply prediction).

  5. Materials used to simulate physical properties of human skin.

    PubMed

    Dąbrowska, A K; Rotaru, G-M; Derler, S; Spano, F; Camenzind, M; Annaheim, S; Stämpfli, R; Schmid, M; Rossi, R M

    2016-02-01

    For many applications in research, material development and testing, physical skin models are preferable to the use of human skin, because more reliable and reproducible results can be obtained. This article gives an overview of materials applied to model physical properties of human skin to encourage multidisciplinary approaches for more realistic testing and improved understanding of skin-material interactions. The literature databases Web of Science, PubMed and Google Scholar were searched using the terms 'skin model', 'skin phantom', 'skin equivalent', 'synthetic skin', 'skin substitute', 'artificial skin', 'skin replica', and 'skin model substrate'. Articles addressing material developments or measurements that include the replication of skin properties or behaviour were analysed. It was found that the most common materials used to simulate skin are liquid suspensions, gelatinous substances, elastomers, epoxy resins, metals and textiles. Nano- and micro-fillers can be incorporated in the skin models to tune their physical properties. While numerous physical skin models have been reported, most developments are specific to a research field and based on trial-and-error methods. As the complexity of advanced measurement techniques increases, new interdisciplinary approaches will be needed in the future to achieve refined models which realistically simulate multiple properties of human skin. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  6. RNA Helicases at work: binding and rearranging

    PubMed Central

    Jankowsky, Eckhard

    2010-01-01

    RNA helicases are ubiquitous, highly conserved enzymes that participate in nearly all aspects of RNA metabolism. These proteins bind or remodel RNA or RNA–protein complexes in an ATP-dependent fashion. How RNA helicases physically perform their cellular tasks has been a longstanding question, but in recent years, intriguing models have started to link structure, mechanism and biological function for some RNA helicases. This review outlines our current view on major structural and mechanistic themes of RNA helicase function, and on emerging physical models for cellular roles of these enzymes. PMID:20813532

  7. Coupling physically based and data-driven models for assessing freshwater inflow into the Small Aral Sea

    NASA Astrophysics Data System (ADS)

    Ayzel, Georgy; Izhitskiy, Alexander

    2018-06-01

    The Aral Sea desiccation and related changes in hydroclimatic conditions at the regional level have been a hot topic for the past decades. The key problem of scientific research projects devoted to investigating the modern Aral Sea basin hydrological regime is its discontinuous nature - only a limited number of papers take the complex runoff formation system into account as a whole. Addressing this challenge, we have developed a continuous prediction system for assessing freshwater inflow into the Small Aral Sea based on coupling a stack of hydrological and data-driven models. Results show good prediction skill and confirm the possibility of developing a valuable water assessment tool which harnesses the power of both classical physically based and modern machine learning models for territories with a complex water management system and strong water-related data scarcity. The source code and data of the proposed system are available on GitHub (https://github.com/SMASHIproject/IWRM2018).
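    The coupling idea, a physically based runoff model whose output is corrected by a data-driven component, can be sketched generically; the toy bucket model, synthetic "observations", and linear correction below are assumptions for illustration and not the stack actually used by the system:

```python
import numpy as np

def bucket_model(precip, evap_rate=0.3, k=0.2, s0=10.0):
    """Toy conceptual bucket: storage gains net precipitation and drains
    linearly to runoff (arbitrary units)."""
    s, q = s0, []
    for p in precip:
        s += p * (1.0 - evap_rate)
        out = k * s
        s -= out
        q.append(out)
    return np.array(q)

rng = np.random.default_rng(0)
precip = rng.gamma(2.0, 2.0, size=200)

q_phys = bucket_model(precip)
# Synthetic "observed" flow: the physical output plus a structured error
# (bias and scaling) that the physics misses.
q_obs = 1.3 * q_phys + 0.5

# Data-driven stage: least-squares correction of the physical simulation.
A = np.column_stack([q_phys, np.ones_like(q_phys)])
coef, *_ = np.linalg.lstsq(A, q_obs, rcond=None)
q_hybrid = A @ coef

rmse_phys = np.sqrt(np.mean((q_obs - q_phys) ** 2))
rmse_hybrid = np.sqrt(np.mean((q_obs - q_hybrid) ** 2))
```

    In practice the data-driven stage would be a trained machine-learning model evaluated on independent data rather than an in-sample linear fit, but the division of labour is the same: physics supplies structure, learning removes systematic bias.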

  8. Automated Systematic Generation and Exploration of Flat Direction Phenomenology in Free Fermionic Heterotic String Theory

    NASA Astrophysics Data System (ADS)

    Greenwald, Jared

    Any good physical theory must account for current experimental data as well as offer predictions for potential searches in the future. The Standard Model of particle physics, Grand Unified Theories, Minimal Supersymmetric Models and Supergravity are all attempts to provide such a framework. However, they all lack the ability to predict many of the parameters that each of the theories utilizes. String theory may yield a solution to this naturalness (or self-predictiveness) problem as well as offer a unified theory of gravity. Studies in particle physics phenomenology based on perturbative low-energy analysis of various string theories can help determine the candidacy of such models. After a review of the principles and problems leading up to our current understanding of the universe, we discuss some of the best particle physics model building techniques that have been developed using string theory. This culminates in the introduction of a novel approach to a computational, systematic analysis of the various physical phenomena that arise from these string models. We focus on the necessary assumptions, complexity and open questions that arise while building a fully automated flat direction analysis program.

  9. Quantum physics: Interactions propel a magnetic dance

    NASA Astrophysics Data System (ADS)

    Leblanc, Lindsay J.

    2017-06-01

    A combination of leading-edge techniques has enabled interaction-induced magnetic motion to be observed for pairs of ultracold atoms -- a breakthrough in the development of models of complex quantum behaviour. See Letter p.519

  10. Review on the Modeling of Electrostatic MEMS

    PubMed Central

    Chuang, Wan-Chun; Lee, Hsin-Li; Chang, Pei-Zen; Hu, Yuh-Chung

    2010-01-01

    Electrostatic-driven microelectromechanical systems devices, in most cases, consist of couplings of such energy domains as electromechanics, optical electricity, thermoelectricity, and electromagnetism. Their nonlinear working state makes their analysis complex. This article introduces the physical model of pull-in voltage, dynamic characteristic analysis, air damping effect, reliability, numerical modeling method, and application of electrostatic-driven MEMS devices. PMID:22219707

  11. Master-slave system with force feedback based on dynamics of virtual model

    NASA Technical Reports Server (NTRS)

    Nojima, Shuji; Hashimoto, Hideki

    1994-01-01

    A master-slave system can extend the manipulating and sensing capabilities of a human operator to a remote environment. But the master-slave system has two serious problems: one is the mechanically large impedance of the system; the other is the mechanical complexity of the slave required for complex remote tasks. These two problems reduce the efficiency of the system. If the slave has local intelligence, it can assist the human operator by exploiting its strengths, such as fast computation and large memory. The authors suggest that the slave be a dexterous hand with many degrees of freedom able to manipulate an object of known shape, and that the dimensions of the remote work space be shared by the human operator and the slave. The effect of the large impedance of the system can be reduced in a virtual model: a physical model constructed in a computer with physical parameters as if it existed in the real world. A method to determine the damping parameter dynamically for the virtual model is proposed. Experimental results show that this virtual model performs better than a virtual model with fixed damping.

  12. Shock-wave flow regimes at entry into the diffuser of a hypersonic ramjet engine: Influence of physical properties of the gas medium

    NASA Astrophysics Data System (ADS)

    Tarnavskii, G. A.

    2006-07-01

    The physical aspects of the effective-adiabatic-exponent model, which makes it possible to decompose the overall problem of modeling high-velocity gas flows into individual subproblems (“physicochemical processes” and “aeromechanics”) and thereby ensures the creation of a universal and efficient computer complex divided into a number of independent units, have been analyzed. Shock-wave structures appearing at entry into the duct of a hypersonic aircraft have been investigated based on this methodology, and the influence of the physical properties of the gas medium has been studied over a wide range of variations of the effective adiabatic exponent.

  13. Experimental Investigation of a 2D Supercritical Circulation-Control Airfoil Using Particle Image Velocimetry

    NASA Technical Reports Server (NTRS)

    Jones, Gregory S.; Yao, Chung-Sheng; Allan, Brian G.

    2006-01-01

    Recent efforts in extreme short takeoff and landing aircraft configurations have renewed the interest in circulation control wing design and optimization. The key to accurately designing and optimizing these configurations rests in the modeling of the complex physics of these flows. This paper will highlight the physics of the stagnation and separation regions on two typical circulation control airfoil sections.

  14. Emerging concepts for management of river ecosystems and challenges to applied integration of physical and biological sciences in the Pacific Northwest, USA

    Treesearch

    Bruce E. Rieman; Jason B. Dunham; James L. Clayton

    2006-01-01

    Integration of biological and physical concepts is necessary to understand and conserve the ecological integrity of river systems. Past attempts at integration have often focused at relatively small scales and on mechanistic models that may not capture the complexity of natural systems leaving substantial uncertainty about ecological responses to management actions....

  15. Semantic Information Processing of Physical Simulation Based on Scientific Concept Vocabulary Model

    NASA Astrophysics Data System (ADS)

    Kino, Chiaki; Suzuki, Yoshio; Takemiya, Hiroshi

    Scientific Concept Vocabulary (SCV) has been developed to realize the Cognitive methodology based Data Analysis System (CDAS), which supports researchers in analyzing large-scale data efficiently and comprehensively. SCV is an information model for processing semantic information in physics and engineering. In the SCV model, all semantic information is related to substantial data and algorithms. Consequently, SCV enables a data analysis system to recognize the meaning of execution results output by a numerical simulation. This method has allowed a data analysis system to extract important information from a scientific point of view. Previous research has shown that SCV is able to describe simple scientific indices and scientific perceptions. However, it is difficult to describe complex scientific perceptions with the currently proposed SCV. In this paper, a new data structure for SCV is proposed in order to describe scientific perceptions in more detail. Additionally, a prototype of the new model has been constructed and applied to actual numerical simulation data. The results show that the new SCV is able to describe more complex scientific perceptions.

  16. Mediating Effects of Self-Efficacy, Benefits and Barriers on the Association between Peer and Parental Factors and Physical Activity among Adolescent Girls with a Lower Educational Level.

    PubMed

    Verloigne, Maite; Cardon, Greet; De Craemer, Marieke; D'Haese, Sara; De Bourdeaudhuij, Ilse

    2016-01-01

    The prevalence of physical activity among lower educated adolescent girls is low, suggesting it is important to have insights into the complex processes that may underlie their physical activity levels. Therefore, this study aimed to examine the mediating effects of self-efficacy, perceived benefits and barriers on the associations between peer and parental variables and physical activity among lower educated adolescent girls. In total, 226 girls (mean age 16.0±1.0 years; 53% technical education; 47% vocational education) from a convenience sample of 6 secondary schools in Flanders, Belgium, completed a questionnaire on their total physical activity level and related peer and parental variables (i.e. modeling of physical activity, co-participation in physical activities and encouragement to be active) and personal variables (i.e. self-efficacy to be active, and specific perceived benefits of physical activity and specific barriers to be active). Mediating effects were tested using MacKinnon's product-of-coefficients test based on multilevel linear regression analyses. Higher peer and parental modeling, co-participation and encouragement were significantly related to a higher physical activity level among adolescent girls (p<0.05). Self-efficacy, the perceived benefits of having fun, being around friends or meeting new people, and not being bored and the perceived barrier of not liking physical activity mediated several associations between peer and parental variables and girls' physical activity, with some of the mediated proportions exceeding 60%. This study contributed to a better understanding of the complexity of how parental and peer factors work together with personal factors to influence the physical activity levels of adolescent girls with a lower educational level. 
Interventions should involve both peers and parents, as they may influence girls' physical activity both directly and indirectly through the internalisation of several personal variables, such as self-efficacy to be active and the perceived benefit of having fun.
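    MacKinnon's product-of-coefficients test, used above, estimates the mediated effect as the product of the X→M path and the M→Y path (controlling for X). A single-level sketch on synthetic data follows; the variable names and effect sizes are invented, and the study's multilevel structure is omitted:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
x = rng.normal(size=n)                                  # e.g. parental encouragement
m = 0.5 * x + rng.normal(scale=0.8, size=n)             # mediator, e.g. self-efficacy
y = 0.4 * m + 0.1 * x + rng.normal(scale=0.8, size=n)   # outcome, e.g. activity level

def ols(X, y):
    """Least-squares coefficients and their standard errors."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    return beta, se

ones = np.ones(n)
a_coefs, a_se = ols(np.column_stack([x, ones]), m)      # path a: X -> M
b_coefs, b_se = ols(np.column_stack([m, x, ones]), y)   # path b: M -> Y given X

a, b = a_coefs[0], b_coefs[0]
mediated = a * b                                        # product of coefficients
sobel_se = np.sqrt(a**2 * b_se[0]**2 + b**2 * a_se[0]**2)  # first-order Sobel SE
z = mediated / sobel_se
```

    The mediated proportion reported in the abstract corresponds to mediated / (mediated + direct effect), where the direct effect is the X coefficient from the second regression.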

  17. Using the PhysX engine for physics-based virtual surgery with force feedback.

    PubMed

    Maciel, Anderson; Halic, Tansel; Lu, Zhonghua; Nedel, Luciana P; De, Suvranu

    2009-09-01

    The development of modern surgical simulators is highly challenging, as they must support complex simulation environments. The demand for higher realism in such simulators has driven researchers to adopt physics-based models, which are computationally very demanding. This poses a major problem, since real-time interactions must permit graphical updates of 30 Hz and a much higher rate of 1 kHz for force feedback (haptics). Recently several physics engines have been developed which offer multi-physics simulation capabilities, including rigid and deformable bodies, cloth and fluids. While such physics engines provide unique opportunities for the development of surgical simulators, their higher latencies, compared to what is necessary for real-time graphics and haptics, offer significant barriers to their use in interactive simulation environments. In this work, we propose solutions to this problem and demonstrate how a multimodal surgical simulation environment may be developed based on NVIDIA's PhysX physics library. Hence, models that are undergoing relatively low-frequency updates in PhysX can exist in an environment that demands much higher frequency updates for haptics. We use a collision handling layer to interface between the physical response provided by PhysX and the haptic rendering device to provide both real-time tissue response and force feedback. Our simulator integrates a bimanual haptic interface for force feedback and per-pixel shaders for graphics realism in real time. To demonstrate the effectiveness of our approach, we present the simulation of the laparoscopic adjustable gastric banding (LAGB) procedure as a case study. To develop complex and realistic surgical trainers with realistic organ geometries and tissue properties demands stable physics-based deformation methods, which are not always compatible with the interaction level required for such trainers. 
We have shown that combining different modelling strategies for behaviour, collision and graphics is possible and desirable. Such multimodal environments enable suitable rates to simulate the major steps of the LAGB procedure.
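
The dual-rate requirement described above (roughly 30 Hz graphics/physics versus 1 kHz haptics) is commonly bridged by interpolating between the two most recent physics states inside the fast haptic loop. The sketch below is illustrative only; the update rates, the toy dynamics, and all function names are assumptions, not the paper's PhysX implementation.

```python
# Illustrative dual-rate loop: a slow physics update feeding a fast haptic
# loop via interpolation. Rates, dynamics, and names are assumptions.

def physics_step(state, dt):
    # Stand-in for a low-frequency physics-engine update (e.g. deformation).
    return state + dt * 1.0  # toy dynamics: position drifts at unit velocity

def interpolate(prev, curr, alpha):
    # The haptic loop renders from state interpolated between the two most
    # recent physics states, decoupling the 1 kHz loop from engine latency.
    return (1.0 - alpha) * prev + alpha * curr

PHYSICS_DT = 1.0 / 60.0   # physics update period (~60 Hz)
HAPTIC_DT = 1.0 / 1000.0  # haptic update period (1 kHz)

state_prev = state_curr = 0.0
next_physics = 0.0
haptic_samples = []

for i in range(100):  # 100 ms of haptic frames
    sim_time = i * HAPTIC_DT
    if sim_time >= next_physics:
        state_prev, state_curr = state_curr, physics_step(state_curr, PHYSICS_DT)
        next_physics += PHYSICS_DT
    # Fraction of the current physics interval already elapsed.
    alpha = 1.0 - (next_physics - sim_time) / PHYSICS_DT
    haptic_samples.append(interpolate(state_prev, state_curr, alpha))
```

In a full simulator, the collision-handling layer described in the abstract would sit where `interpolate()` is, converting interpolated state into rendered forces.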

  18. Transforming community access to space science models

    NASA Astrophysics Data System (ADS)

    MacNeice, Peter; Hesse, Michael; Kuznetsova, Maria; Maddox, Marlo; Rastaetter, Lutz; Berrios, David; Pulkkinen, Antti

    2012-04-01

    Researching and forecasting the ever changing space environment (often referred to as space weather) and its influence on humans and their activities are model-intensive disciplines. This is true because the physical processes involved are complex, but, in contrast to terrestrial weather, the supporting observations are typically sparse. Models play a vital role in establishing a physically meaningful context for interpreting limited observations, testing theory, and producing both nowcasts and forecasts. For example, with accurate forecasting of hazardous space weather conditions, spacecraft operators can place sensitive systems in safe modes, and power utilities can protect critical network components from damage caused by large currents induced in transmission lines by geomagnetic storms.

  19. Transforming Community Access to Space Science Models

    NASA Technical Reports Server (NTRS)

    MacNeice, Peter; Hesse, Michael; Kuznetsova, Maria; Maddox, Marlo; Rastaetter, Lutz; Berrios, David; Pulkkinen, Antti

    2012-01-01

    Researching and forecasting the ever changing space environment (often referred to as space weather) and its influence on humans and their activities are model-intensive disciplines. This is true because the physical processes involved are complex, but, in contrast to terrestrial weather, the supporting observations are typically sparse. Models play a vital role in establishing a physically meaningful context for interpreting limited observations, testing theory, and producing both nowcasts and forecasts. For example, with accurate forecasting of hazardous space weather conditions, spacecraft operators can place sensitive systems in safe modes, and power utilities can protect critical network components from damage caused by large currents induced in transmission lines by geomagnetic storms.

  20. Physics behind the mechanical nucleosome positioning code

    NASA Astrophysics Data System (ADS)

    Zuiddam, Martijn; Everaers, Ralf; Schiessel, Helmut

    2017-11-01

    The positions along DNA molecules of nucleosomes, the most abundant DNA-protein complexes in cells, are influenced by the sequence-dependent DNA mechanics and geometry. This leads to the "nucleosome positioning code", a preference of nucleosomes for certain sequence motifs. Here we introduce a simplified model of the nucleosome where a coarse-grained DNA molecule is frozen into an idealized superhelical shape. We calculate the exact sequence preferences of our nucleosome model and find that it qualitatively reproduces all the main features known to influence nucleosome positions. Moreover, using well-controlled approximations to this model allows us to come to a detailed understanding of the physics behind the sequence preferences of nucleosomes.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patnaik, P. C.

    The SIGMET mesoscale meteorology simulation code represents an extension, in terms of physical modelling detail and numerical approach, of the work of Anthes (1972) and Anthes and Warner (1974). The code utilizes a finite difference technique to solve the so-called primitive equations which describe transient flow in the atmosphere. The SIGMET model contains all of the physics required to simulate the time-dependent meteorology of a region, with description of both the planetary boundary layer and upper-level flow as they are affected by synoptic forcing and complex terrain. The mathematical formulation of the SIGMET model and the various physical effects incorporated into it are summarized.

  2. Inferring mass in complex scenes by mental simulation.

    PubMed

    Hamrick, Jessica B; Battaglia, Peter W; Griffiths, Thomas L; Tenenbaum, Joshua B

    2016-12-01

    After observing a collision between two boxes, you can immediately tell which is empty and which is full of books based on how the boxes moved. People form rich perceptions about the physical properties of objects from their interactions, an ability that plays a crucial role in learning about the physical world through our experiences. Here, we present three experiments that demonstrate people's capacity to reason about the relative masses of objects in naturalistic 3D scenes. We find that people make accurate inferences, and that they continue to fine-tune their beliefs over time. To explain our results, we propose a cognitive model that combines Bayesian inference with approximate knowledge of Newtonian physics by estimating probabilities from noisy physical simulations. We find that this model accurately predicts judgments from our experiments, suggesting that the same simulation mechanism underlies both people's predictions and inferences about the physical world around them. Copyright © 2016 Elsevier B.V. All rights reserved.
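
The core idea here, Bayesian inference with likelihoods estimated from noisy physical simulations, can be sketched in a few lines. The 1-D elastic collision, the noise level, and the candidate masses below are illustrative assumptions, not the authors' model.

```python
import random

random.seed(0)  # deterministic noise for reproducibility

def simulate(m1, m2, v1=1.0, noise=0.05):
    # 1-D elastic collision of m1 (moving at v1) into m2 (at rest); the
    # Gaussian term makes the simulator "noisy", standing in for approximate
    # knowledge of Newtonian physics.
    v1_out = (m1 - m2) / (m1 + m2) * v1
    return v1_out + random.gauss(0.0, noise)

observed = simulate(1.0, 3.0, noise=0.0)  # "observation": struck box is heavy

hypotheses = {"light": 0.5, "heavy": 3.0}  # candidate masses of struck box
prior = {"light": 0.5, "heavy": 0.5}

# Likelihood of each hypothesis: fraction of noisy simulations whose outcome
# lands near the observed rebound velocity.
likelihood = {}
for name, m2 in hypotheses.items():
    hits = sum(abs(simulate(1.0, m2) - observed) < 0.1 for _ in range(2000))
    likelihood[name] = hits / 2000

unnorm = {h: prior[h] * likelihood[h] for h in hypotheses}
z = sum(unnorm.values())
posterior = {h: p / z for h, p in unnorm.items()}
```

The posterior concentrates on the "heavy" hypothesis, mirroring how repeated noisy mental simulations can sharpen a mass judgment.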

  3. Using the USU ionospheric model to predict radio propagation through a simulated ionosphere

    NASA Astrophysics Data System (ADS)

    Huffines, Gary R.

    1990-12-01

    To evaluate the capabilities of communication, navigation, and defense systems utilizing electromagnetic waves which interact with the ionosphere, a three-dimensional ray tracing program was used. A simple empirical model (Chapman function) and a complex physical model (Schunk and Sojka model) were used to compare the representation of ionospheric conditions. Four positions were chosen to test four different features of the Northern Hemispheric ionosphere. It seems that decreasing electron density has little or no effect on the horizontal components of the ray path while increasing electron density causes deviations in the ray path. It was also noted that rays in the physical model's mid-latitude trough region escaped the ionosphere for all frequencies used in this study.
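
The "simple empirical model" contrasted here is a Chapman layer. A minimal sketch of the Chapman-alpha electron density profile follows; the peak density, peak height, and scale height are illustrative values, not fitted to any dataset.

```python
import math

def chapman(h, n0=1e12, h0=300.0, scale_h=50.0):
    """Chapman-alpha electron density (m^-3) at altitude h (km).

    n0: peak density, h0: peak altitude, scale_h: neutral scale height;
    the default values are illustrative only.
    """
    z = (h - h0) / scale_h
    return n0 * math.exp(0.5 * (1.0 - z - math.exp(-z)))

# Density peaks at h0 and falls off above and below the peak.
profile = {h: chapman(h) for h in (200.0, 300.0, 400.0)}
```

A ray tracer would sample such a profile to obtain the local plasma frequency along each candidate ray path.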

  4. A comprehensive approach to identify dominant controls of the behavior of a land surface-hydrology model across various hydroclimatic conditions

    NASA Astrophysics Data System (ADS)

    Haghnegahdar, Amin; Elshamy, Mohamed; Yassin, Fuad; Razavi, Saman; Wheater, Howard; Pietroniro, Al

    2017-04-01

    Complex physically-based environmental models are being increasingly used as the primary tool for watershed planning and management due to advances in computation power and data acquisition. Model sensitivity analysis plays a crucial role in understanding the behavior of these complex models and improving their performance. Due to the non-linearity and interactions within these complex models, global sensitivity analysis (GSA) techniques should be adopted to provide a comprehensive understanding of model behavior and identify its dominant controls. In this study we adopt a multi-basin multi-criteria GSA approach to systematically assess the behavior of the Modélisation Environmentale-Surface et Hydrologie (MESH) model across various hydroclimatic conditions in Canada including areas in the Great Lakes Basin, Mackenzie River Basin, and South Saskatchewan River Basin. MESH is a semi-distributed physically-based coupled land surface-hydrology modelling system developed by Environment and Climate Change Canada (ECCC) for various water resources management purposes in Canada. We use a novel method, called Variogram Analysis of Response Surfaces (VARS), to perform sensitivity analysis. VARS is a variogram-based GSA technique that can efficiently provide a spectrum of sensitivity information across a range of scales within the parameter space. We use multiple metrics to identify dominant controls of model response (e.g. streamflow) to model parameters under various conditions such as high flows, low flows, and flow volume. We also investigate the influence of initial conditions on model behavior as part of this study. Our preliminary results suggest that this type of GSA can significantly help with estimating model parameters, decreasing the computational burden of calibration, and reducing prediction uncertainty.
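
The variogram idea at the heart of VARS can be illustrated on a toy response surface: the directional variogram of the model output along each parameter axis summarizes that parameter's influence at a given scale. The toy model below is an assumption for illustration and is unrelated to MESH.

```python
import math

def model(x1, x2):
    # Toy response surface: x1 drives most of the variability.
    return math.sin(3.0 * x1) + 0.1 * x2

def directional_variogram(f, axis, h, n=200):
    # gamma(h) = 0.5 * E[(f(x + h*e_axis) - f(x))^2], sampled on a transect.
    total = 0.0
    for i in range(n):
        x = [i / n, 0.5]
        xh = list(x)
        xh[axis] += h
        total += (f(*xh) - f(*x)) ** 2
    return 0.5 * total / n

g1 = directional_variogram(model, axis=0, h=0.1)  # sensitivity to x1
g2 = directional_variogram(model, axis=1, h=0.1)  # sensitivity to x2
# The much larger variogram along x1 flags it as the dominant control.
```

Repeating this over a range of step sizes h yields the multi-scale sensitivity spectrum that distinguishes VARS from single-scale GSA metrics.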

  5. Parameter Sensitivity and Laboratory Benchmarking of a Biogeochemical Process Model for Enhanced Anaerobic Dechlorination

    NASA Astrophysics Data System (ADS)

    Kouznetsova, I.; Gerhard, J. I.; Mao, X.; Barry, D. A.; Robinson, C.; Brovelli, A.; Harkness, M.; Fisher, A.; Mack, E. E.; Payne, J. A.; Dworatzek, S.; Roberts, J.

    2008-12-01

    A detailed model to simulate trichloroethene (TCE) dechlorination in anaerobic groundwater systems has been developed and implemented through PHAST, a robust and flexible geochemical modeling platform. The approach is comprehensive but retains flexibility such that models of varying complexity can be used to simulate TCE biodegradation in the vicinity of nonaqueous phase liquid (NAPL) source zones. The complete model considers a full suite of biological (e.g., dechlorination, fermentation, sulfate and iron reduction, electron donor competition, toxic inhibition, pH inhibition), physical (e.g., flow and mass transfer) and geochemical processes (e.g., pH modulation, gas formation, mineral interactions). Example simulations with the model demonstrated that the feedback between biological, physical, and geochemical processes is critical. Successful simulation of a thirty-two-month column experiment with site soil, complex groundwater chemistry, and exhibiting both anaerobic dechlorination and endogenous respiration, provided confidence in the modeling approach. A comprehensive suite of batch simulations was then conducted to estimate the sensitivity of predicted TCE degradation to the 36 model input parameters. A local sensitivity analysis was first employed to rank the importance of parameters, revealing that 5 parameters consistently dominated model predictions across a range of performance metrics. A global sensitivity analysis was then performed to evaluate the influence of a variety of full parameter data sets available in the literature. The modeling study was performed as part of the SABRE (Source Area BioREmediation) project, a public/private consortium whose charter is to determine if enhanced anaerobic bioremediation can result in effective and quantifiable treatment of chlorinated solvent DNAPL source areas. 
The modelling conducted has provided valuable insight into the complex interactions between processes in the evolving biogeochemical systems, particularly at the laboratory scale.
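
A local one-at-a-time sensitivity ranking of the kind described can be sketched as follows; the rate expression, parameter names, and perturbation size are hypothetical stand-ins, not the actual 36-parameter SABRE model.

```python
def degradation_rate(params):
    # Hypothetical stand-in for the predicted TCE degradation rate; the real
    # model has 36 parameters and far richer process feedbacks.
    return params["k_max"] * params["biomass"] / (params["K_s"] + 1.0)

nominal = {"k_max": 2.0, "biomass": 0.5, "K_s": 0.3}
base = degradation_rate(nominal)

sensitivity = {}
for name in nominal:
    perturbed = dict(nominal)
    perturbed[name] *= 1.01  # one-at-a-time +1 % perturbation
    # Normalized local sensitivity: (dY/Y) / (dX/X).
    sensitivity[name] = abs(degradation_rate(perturbed) - base) / base / 0.01

ranking = sorted(sensitivity, key=sensitivity.get, reverse=True)
# Parameters at the front of `ranking` dominate the prediction locally.
```

The same loop, run across multiple performance metrics and nominal points, is what allows a handful of consistently dominant parameters to be identified before a more expensive global analysis.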

  6. Enabling large-scale viscoelastic calculations via neural network acceleration

    NASA Astrophysics Data System (ADS)

    Robinson DeVries, P.; Thompson, T. B.; Meade, B. J.

    2017-12-01

    One of the most significant challenges involved in efforts to understand the effects of repeated earthquake cycle activity is the computational cost of large-scale viscoelastic earthquake cycle models. Deep artificial neural networks (ANNs) can be used to discover new, compact, and accurate computational representations of viscoelastic physics. Once found, these efficient ANN representations may replace computationally intensive viscoelastic codes and accelerate large-scale viscoelastic calculations by more than 50,000%. This magnitude of acceleration enables the modeling of geometrically complex faults over thousands of earthquake cycles across wider ranges of model parameters and at larger spatial and temporal scales than have been previously possible. Perhaps most interestingly from a scientific perspective, ANN representations of viscoelastic physics may lead to basic advances in the understanding of the underlying model phenomenology. We demonstrate the potential of artificial neural networks to illuminate fundamental physical insights with specific examples.
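
A minimal version of the surrogate idea: train a small neural network on responses from an "expensive" calculation (here a toy exponential relaxation standing in for viscoelastic physics), then query the cheap surrogate instead. The architecture, learning rate, and training length are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 3.0, 64).reshape(-1, 1)
y = np.exp(-t)  # "expensive" response evaluated once, offline

# One hidden tanh layer trained by full-batch gradient descent.
W1 = rng.normal(0.0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

_, pred = forward(t)
loss0 = float(np.mean((pred - y) ** 2))  # error before training

lr = 0.05
for _ in range(2000):
    h, pred = forward(t)
    err = 2.0 * (pred - y) / len(t)        # dLoss/dpred
    gW2 = h.T @ err; gb2 = err.sum(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)     # backprop through tanh
    gW1 = t.T @ dh; gb1 = dh.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

_, pred = forward(t)
loss = float(np.mean((pred - y) ** 2))  # surrogate now tracks the response
```

Once trained, each surrogate evaluation is a pair of matrix products, which is where the reported orders-of-magnitude acceleration over direct viscoelastic solves comes from.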

  7. Ontology patterns for complex topographic feature types

    USGS Publications Warehouse

    Varanka, Dalia E.

    2011-01-01

    Complex feature types are defined as integrated relations between basic features for a shared meaning or concept. The shared semantic concept is difficult to define in commonly used geographic information systems (GIS) and remote sensing technologies. The role of spatial relations between complex feature parts was recognized in early GIS literature, but had limited representation in the feature or coverage data models of GIS. Spatial relations are more explicitly specified in semantic technology. In this paper, semantics for topographic feature ontology design patterns (ODP) are developed as data models for the representation of complex features. In the context of topographic processes, component assemblages are supported by resource systems and are found on local landscapes. The topographic ontology is organized across six thematic modules that can account for basic feature types, resource systems, and landscape types. Types of complex feature attributes include location, generative processes and physical description. Node/edge networks model standard spatial relations and relations specific to topographic science to represent complex features. To demonstrate these concepts, data from The National Map of the U. S. Geological Survey was converted and assembled into ODP.
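
The node/edge representation of a complex feature can be sketched as a small typed graph; the feature names and relation types below are hypothetical, not drawn from The National Map ODPs.

```python
# Hypothetical complex feature "campus" assembled from basic features with
# typed spatial relations, in the spirit of a node/edge ontology pattern.
complex_feature = {
    "nodes": {"campus", "building", "athletic_field", "parking_lot"},
    "edges": [
        ("building", "adjacentTo", "parking_lot"),
        ("athletic_field", "partOf", "campus"),
        ("building", "partOf", "campus"),
        ("parking_lot", "partOf", "campus"),
    ],
}

def related(graph, relation, obj):
    # All subjects linked to `obj` by `relation`.
    return {s for s, rel, o in graph["edges"] if rel == relation and o == obj}

components = related(complex_feature, "partOf", "campus")
```

Making relations like `partOf` explicit is what lets semantic technology answer queries (e.g. "which basic features compose this campus?") that coverage-based GIS data models cannot express directly.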

  8. Physical Model Study of Flowerpot Discharge Outlet, Western Closure Complex, New Orleans, Louisiana

    DTIC Science & Technology

    2013-05-01

    No abstract is available for this record; the retrieved text consists of front-matter fragments (a list of figures). The recoverable captions describe a 1:20.377-scale preliminary flowerpot discharge outlet (FPDO) model with a straight pipe immediately downstream of a 90-degree elbow, a 7-ft-long PVC pipe, and a 1.3 in. lip, with a sealant at the base of the pipe.

  9. Human performance cognitive-behavioral modeling: a benefit for occupational safety.

    PubMed

    Gore, Brian F

    2002-01-01

    Human Performance Modeling (HPM) is a computer-aided job analysis software methodology used to generate predictions of complex human-automation integration and system flow patterns with the goal of improving operator and system safety. The use of HPM tools has recently been increasing due to reductions in computational cost, augmentations in the tools' fidelity, and usefulness in the generated output. An examination of an Air Man-machine Integration Design and Analysis System (Air MIDAS) model evaluating complex human-automation integration currently underway at NASA Ames Research Center will highlight the importance to occupational safety of considering both cognitive and physical aspects of performance when researching human error.

  10. Advanced laser modeling with BLAZE multiphysics

    NASA Astrophysics Data System (ADS)

    Palla, Andrew D.; Carroll, David L.; Gray, Michael I.; Suzuki, Lui

    2017-01-01

    The BLAZE Multiphysics™ software simulation suite was specifically developed to model highly complex multiphysical systems in a computationally efficient and highly scalable manner. These capabilities are of particular use when applied to the complexities associated with high energy laser systems that combine subsonic/transonic/supersonic fluid dynamics, chemically reacting flows, laser electronics, heat transfer, optical physics, and in some cases plasma discharges. In this paper we present detailed cw and pulsed gas laser calculations using the BLAZE model with comparisons to data. Simulations of DPAL, XPAL, ElectricOIL (EOIL), and the optically pumped rare gas laser were found to be in good agreement with experimental data.

  11. Human performance cognitive-behavioral modeling: a benefit for occupational safety

    NASA Technical Reports Server (NTRS)

    Gore, Brian F.

    2002-01-01

    Human Performance Modeling (HPM) is a computer-aided job analysis software methodology used to generate predictions of complex human-automation integration and system flow patterns with the goal of improving operator and system safety. The use of HPM tools has recently been increasing due to reductions in computational cost, augmentations in the tools' fidelity, and usefulness in the generated output. An examination of an Air Man-machine Integration Design and Analysis System (Air MIDAS) model evaluating complex human-automation integration currently underway at NASA Ames Research Center will highlight the importance to occupational safety of considering both cognitive and physical aspects of performance when researching human error.

  12. On Design Mining: Coevolution and Surrogate Models.

    PubMed

    Preen, Richard J; Bull, Larry

    2017-01-01

    Design mining is the use of computational intelligence techniques to iteratively search and model the attribute space of physical objects evaluated directly through rapid prototyping to meet given objectives. It enables the exploitation of novel materials and processes without formal models or complex simulation. In this article, we focus upon the coevolutionary nature of the design process when it is decomposed into concurrent sub-design-threads due to the overall complexity of the task. Using an abstract, tunable model of coevolution, we consider strategies to sample subthread designs for whole-system testing and how best to construct and use surrogate models within the coevolutionary scenario. Drawing on our findings, we then describe the effective design of an array of six heterogeneous vertical-axis wind turbines.

  13. The evolution of process-based hydrologic models: historical challenges and the collective quest for physical realism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clark, Martyn P.; Bierkens, Marc F. P.; Samaniego, Luis

    The diversity in hydrologic models has historically led to great controversy on the correct approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. Here, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We also illustrate how modeling advances have been made by groups using models of different type and complexity, and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.

  14. The evolution of process-based hydrologic models: historical challenges and the collective quest for physical realism

    DOE PAGES

    Clark, Martyn P.; Bierkens, Marc F. P.; Samaniego, Luis; ...

    2017-07-11

    The diversity in hydrologic models has historically led to great controversy on the correct approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. Here, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We also illustrate how modeling advances have been made by groups using models of different type and complexity, and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.

  15. Towards a physically-based multi-scale ecohydrological simulator for semi-arid regions

    NASA Astrophysics Data System (ADS)

    Caviedes-Voullième, Daniel; Josefik, Zoltan; Hinz, Christoph

    2017-04-01

    The use of numerical models as tools for describing and understanding complex ecohydrological systems has made it possible to test hypotheses and propose fundamental, process-based explanations of the behaviour of the system as a whole as well as its internal dynamics. Reaction-diffusion equations have been used to describe and generate organized patterns such as bands, spots, and labyrinths using simple feedback mechanisms and boundary conditions. Alternatively, pattern-matching cellular automaton models have been used to generate vegetation self-organization in arid and semi-arid regions, also using simple descriptions of surface hydrological processes. A key question is: how much physical realism is needed in order to adequately capture the pattern formation processes in semi-arid regions while reliably representing the water balance dynamics at the relevant time scales? In fact, redistribution of water by surface runoff at the hillslope scale occurs at a temporal resolution of minutes, while vegetation development requires much lower temporal resolution and longer time spans. This generates a fundamental spatio-temporal multi-scale problem to be solved, for which high resolution rainfall and surface topography are required. Accordingly, the objective of this contribution is to provide proof-of-concept that the governing processes can be described numerically at those multiple scales. The requirements for simulating ecohydrological processes and pattern formation with increased physical realism are, amongst others: (i) high resolution rainfall that adequately captures the triggers of growth, as the vegetation dynamics of arid regions respond as pulsed systems; (ii) complex, natural topography in order to accurately model drainage patterns, as surface water redistribution is highly sensitive to topographic features; (iii) microtopography and hydraulic roughness, as small scale variations do impact on large scale hillslope behaviour; and (iv) moisture-dependent infiltration, as the temporal dynamics of infiltration affect water storage under vegetation and in bare soil. Despite the volume of research in this field, fundamental limitations still exist in the models regarding the aforementioned issues. Topography and hydrodynamics have been strongly simplified. Infiltration has been modelled as dependent on depth but independent of soil moisture. Temporal rainfall variability has only been addressed for seasonal rain. Spatial heterogeneity of the topography, as well as roughness and infiltration properties, has not been fully and explicitly represented. We hypothesize that physical processes must be robustly modelled and the drivers of complexity must be present with as much resolution as possible in order to provide the necessary realism to improve transient simulations, perhaps leading the way to virtual laboratories and, arguably, predictive tools. This work provides a first approach to a model with explicit hydrological processes represented by physically-based hydrodynamic models, coupled with well-accepted vegetation models. The model aims to enable new possibilities relating to spatiotemporal variability, arbitrary topography and representation of spatial heterogeneity, including sub-daily (in fact, arbitrary) temporal variability of rain as the main forcing of the model, explicit representation of infiltration processes, and various feedback mechanisms between the hydrodynamics and the vegetation. Preliminary testing strongly suggests that the model is viable, has the potential of producing new information on the internal dynamics of the system, and allows many of the sources of complexity to be successfully aggregated. Initial benchmarking of the model also reveals strengths to be exploited, thus providing an interesting research outlook, as well as weaknesses to be addressed in the immediate future.

  16. Strategies for Large Scale Implementation of a Multiscale, Multiprocess Integrated Hydrologic Model

    NASA Astrophysics Data System (ADS)

    Kumar, M.; Duffy, C.

    2006-05-01

    Distributed models simulate hydrologic state variables in space and time while taking into account the heterogeneities in terrain, surface, subsurface properties and meteorological forcings. The computational cost and complexity associated with these models increase with their tendency to accurately simulate the large number of interacting physical processes at fine spatio-temporal resolution in a large basin. A hydrologic model run on a coarse spatial discretization of the watershed with a limited number of physical processes imposes a lower computational load, but this negatively affects the accuracy of model results and restricts physical realization of the problem. So it is imperative to have an integrated modeling strategy (a) which can be universally applied at various scales in order to study the tradeoffs between computational complexity (determined by spatio-temporal resolution), accuracy and predictive uncertainty in relation to various approximations of physical processes, (b) which can be applied at adaptively different spatial scales in the same domain by taking into account the local heterogeneity of topography and hydrogeologic variables, and (c) which is flexible enough to incorporate different numbers and approximations of process equations depending on model purpose and computational constraint. An efficient implementation of this strategy becomes all the more important for the Great Salt Lake river basin, which is relatively large (~89000 sq. km) and complex in terms of hydrologic and geomorphic conditions. Also, the types and the time scales of hydrologic processes which are dominant in different parts of the basin differ. Part of the snow melt runoff generated in the Uinta Mountains infiltrates and contributes as base flow to the Great Salt Lake over a time scale of decades to centuries. The adaptive strategy helps capture the steep topographic and climatic gradient along the Wasatch front. Here we present the aforesaid modeling strategy along with an associated hydrologic modeling framework which facilitates a seamless, computationally efficient and accurate integration of the process model with the data model. The flexibility of this framework leads to implementation of multiscale, multiresolution, adaptive refinement/de-refinement and nested modeling simulations with the least computational burden. However, performing these simulations and the related calibration of these models over a large basin at higher spatio-temporal resolutions is computationally intensive and requires increasing computing power. With the advent of parallel processing architectures, high computing performance can be achieved by parallelization of the existing serial integrated-hydrologic-model code. This translates to running the same model simulation on a network of a large number of processors, thereby reducing the time needed to obtain a solution. The paper also discusses the implementation of the integrated model on parallel processors, including the mapping of the problem onto a multi-processor environment, methods to incorporate coupling between hydrologic processes using interprocessor communication models, model data structures, and parallel numerical algorithms to obtain high performance.

  17. Collaborative Research. Damage and Burst Dynamics in Failure of Complex Geomaterials. A Statistical Physics Approach to Understanding the Complex Emergent Dynamics in Near Mean-Field Geological Materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rundle, John B.; Klein, William

    We have carried out research to determine the dynamics of failure in complex geomaterials, specifically focusing on the role of defects, damage and asperities in the catastrophic failure processes (now popularly termed “Black Swan events”). We have examined fracture branching and flow processes using models for invasion percolation, focusing particularly on the dynamics of bursts in the branching process. We have achieved a fundamental understanding of the dynamics of nucleation in complex geomaterials, specifically in the presence of inhomogeneous structures.

  18. The Spinal Cord Injury- Functional Index: Item Banks to Measure Physical Functioning of Individuals with Spinal Cord Injury

    PubMed Central

    Tulsky, David S.; Jette, Alan; Kisala, Pamela A.; Kalpakjian, Claire; Dijkers, Marcel P.; Whiteneck, Gale; Ni, Pengsheng; Kirshblum, Steven; Charlifue, Susan; Heinemann, Allen W.; Forchheimer, Martin; Slavin, Mary; Houlihan, Bethlyn; Tate, Denise; Dyson-Hudson, Trevor; Fyffe, Denise; Williams, Steve; Zanca, Jeanne

    2012-01-01

    Objective: To develop a comprehensive set of patient-reported items to assess multiple aspects of physical functioning relevant to the lives of people with spinal cord injury (SCI), and to evaluate the underlying structure of physical functioning. Design: Cross-sectional. Setting: Inpatient and community. Participants: Item pools of physical functioning were developed, refined, and field tested in a large sample of 855 individuals with traumatic spinal cord injury stratified by diagnosis, severity, and time since injury. Interventions: None. Main Outcome Measure: SCI-FI measurement system. Results: Confirmatory factor analysis (CFA) indicated that a 5-factor model, including basic mobility, ambulation, wheelchair mobility, self care, and fine motor, had the best model fit and was most closely aligned conceptually with feedback received from individuals with SCI and SCI clinicians. When just the items making up basic mobility were tested in CFA, the fit statistics indicated strong support for a unidimensional model. Similar results were demonstrated for each of the other four factors, indicating unidimensional models. Conclusions: Though unidimensional or 2-factor (mobility and upper extremity) models of physical functioning make up outcome measures in the general population, the underlying structure of physical function in SCI is more complex. A 5-factor solution allows for comprehensive assessment of key domain areas of physical functioning. These results informed the structure and development of the SCI-FI measurement system of physical functioning. PMID:22609299

  19. Perspective: Sloppiness and emergent theories in physics, biology, and beyond.

    PubMed

    Transtrum, Mark K; Machta, Benjamin B; Brown, Kevin S; Daniels, Bryan C; Myers, Christopher R; Sethna, James P

    2015-07-07

    Large scale models of physical phenomena demand the development of new statistical and computational tools in order to be effective. Many such models are "sloppy," i.e., exhibit behavior controlled by a relatively small number of parameter combinations. We review an information theoretic framework for analyzing sloppy models. This formalism is based on the Fisher information matrix, which is interpreted as a Riemannian metric on a parameterized space of models. Distance in this space is a measure of how distinguishable two models are based on their predictions. Sloppy model manifolds are bounded with a hierarchy of widths and extrinsic curvatures. The manifold boundary approximation can extract the simple, hidden theory from complicated sloppy models. We attribute the success of simple effective models in physics to the same mechanism: they likewise emerge from complicated processes exhibiting a low effective dimensionality. We discuss the ramifications and consequences of sloppy models for biochemistry and science more generally. We suggest that our complex world is understandable for the same fundamental reason: simple theories of macroscopic behavior are hidden inside complicated microscopic processes.
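
The sloppiness diagnostic described, eigenvalues of the Fisher information matrix spanning many decades, can be reproduced on the classic toy example of a sum of two exponentials with nearly degenerate rates. The model, parameter values, and sample times below are illustrative.

```python
import numpy as np

def model(theta, t):
    # Classic sloppy toy model: sum of two exponentials with similar rates.
    return np.exp(-theta[0] * t) + np.exp(-theta[1] * t)

t = np.linspace(0.1, 5.0, 30)
theta = np.array([1.0, 1.2])  # nearly degenerate decay rates

# Jacobian of predictions w.r.t. parameters by central finite differences.
eps = 1e-6
J = np.empty((len(t), len(theta)))
for j in range(len(theta)):
    dp = np.zeros_like(theta)
    dp[j] = eps
    J[:, j] = (model(theta + dp, t) - model(theta - dp, t)) / (2.0 * eps)

fim = J.T @ J  # Fisher information metric for unit-noise least squares
eigvals = np.sort(np.linalg.eigvalsh(fim))[::-1]
ratio = eigvals[0] / eigvals[-1]
# A wide spread between "stiff" and "sloppy" eigenvalues is the signature.
```

The large stiff-to-sloppy eigenvalue ratio indicates that only one combination of the two rates is well constrained by the data, the hallmark of sloppiness.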

  20. Complex Physical, Biophysical and Econophysical Systems

    NASA Astrophysics Data System (ADS)

    Dewar, Robert L.; Detering, Frank

    1. Introduction to complex and econophysics systems: a navigation map / T. Aste and T. Di Matteo -- 2. An introduction to fractional diffusion / B. I. Henry, T.A.M. Langlands and P. Straka -- 3. Space plasmas and fusion plasmas as complex systems / R. O. Dendy -- 4. Bayesian data analysis / M. S. Wheatland -- 5. Inverse problems and complexity in earth system science / I. G. Enting -- 6. Applied fluid chaos: designing advection with periodically reoriented flows for micro to geophysical mixing and transport enhancement / G. Metcalfe -- 7. Approaches to modelling the dynamical activity of brain function based on the electroencephalogram / D. T. J. Liley and F. Frascoli -- 8. Jaynes' maximum entropy principle, Riemannian metrics and generalised least action bound / R. K. Niven and B. Andresen -- 9. Complexity, post-genomic biology and gene expression programs / R. B. H. Williams and O. J.-H. Luo -- 10. Tutorials on agent-based modelling with NetLogo and network analysis with Pajek / M. J. Berryman and S. D. Angus.

  1. Making classical ground-state spin computing fault-tolerant.

    PubMed

    Crosson, I J; Bacon, D; Brown, K R

    2010-09-01

    We examine a model of classical deterministic computing in which the ground state of the classical system is a spatial history of the computation. This model is relevant to quantum dot cellular automata as well as to recent universal adiabatic quantum computing constructions. In its most primitive form, systems constructed in this model cannot compute in an error-free manner when working at nonzero temperature. However, by exploiting a mapping between the partition function for this model and probabilistic classical circuits we are able to show that it is possible to make this model effectively error-free. We achieve this by using techniques in fault-tolerant classical computing and the result is that the system can compute effectively error-free if the temperature is below a critical temperature. We further link this model to computational complexity and show that a certain problem concerning finite temperature classical spin systems is complete for the complexity class Merlin-Arthur. This provides an interesting connection between the physical behavior of certain many-body spin systems and computational complexity.

  2. [Documenting a rehabilitation program using a logic model: an advantage to the assessment process].

    PubMed

    Poncet, Frédérique; Swaine, Bonnie; Pradat-Diehl, Pascale

    2017-03-06

    The cognitive and behavioral disorders after brain injury can result in severe limitations of activities and restrictions of participation. An interdisciplinary rehabilitation program was developed in physical medicine and rehabilitation at the Pitié-Salpêtrière Hospital, Paris, France. Clinicians believe this program decreases activity limitations and improves participation in patients. However, the program’s effectiveness had never been assessed, and to do this we first had to define and describe the program. Rehabilitation programs, however, are holistic and thus complex, making them difficult to describe. Therefore, to facilitate the evaluation of complex programs, including those for rehabilitation, we illustrate the use of a theoretical logic model, as proposed by Champagne, through the process of documenting a specific complex and interdisciplinary rehabilitation program. Through participatory/collaborative research, the rehabilitation program was analyzed using three “submodels” of the logic model of intervention: the causal model, the intervention model, and the program theory model. This should facilitate the evaluation of programs, including those for rehabilitation.

  3. Multiscale Informatics for Low-Temperature Propane Oxidation: Further Complexities in Studies of Complex Reactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burke, Michael P.; Goldsmith, C. Franklin; Klippenstein, Stephen J.

    2015-07-16

    We have developed a multi-scale approach (Burke, M. P.; Klippenstein, S. J.; Harding, L. B. Proc. Combust. Inst. 2013, 34, 547–555.) to kinetic model formulation that directly incorporates elementary kinetic theories as a means to provide reliable, physics-based extrapolation to unexplored conditions. Here, we extend and generalize the multi-scale modeling strategy to treat systems of considerable complexity – involving multi-well reactions, potentially missing reactions, non-statistical product branching ratios, and non-Boltzmann (i.e. non-thermal) reactant distributions. The methodology is demonstrated here for a subsystem of low-temperature propane oxidation, as a representative system for low-temperature fuel oxidation. A multi-scale model is assembled and informed by a wide variety of targets that include ab initio calculations of molecular properties, rate constant measurements of isolated reactions, and complex systems measurements. Active model parameters are chosen to accommodate both “parametric” and “structural” uncertainties. Theoretical parameters (e.g. barrier heights) are included as active model parameters to account for parametric uncertainties in the theoretical treatment; experimental parameters (e.g. initial temperatures) are included to account for parametric uncertainties in the physical models of the experiments. RMG software is used to assess potential structural uncertainties due to missing reactions. Additionally, branching ratios among product channels are included as active model parameters to account for structural uncertainties related to difficulties in modeling sequences of multiple chemically activated steps. The approach is demonstrated here for interpreting time-resolved measurements of OH, HO2, n-propyl, i-propyl, propene, oxetane, and methyloxirane from photolysis-initiated low-temperature oxidation of propane at pressures from 4 to 60 Torr and temperatures from 300 to 700 K.
    In particular, the multi-scale informed model provides a consistent quantitative explanation of both ab initio calculations and time-resolved species measurements. The present results show that interpretations of OH measurements are significantly more complicated than previously thought – in addition to barrier heights for key transition states considered previously, OH profiles also depend on additional theoretical parameters for R + O2 reactions, secondary reactions, QOOH + O2 reactions, and treatment of non-Boltzmann reaction sequences. Extraction of physically rigorous information from those measurements may require more sophisticated treatment of all of those model aspects, as well as additional experimental data under more conditions, to discriminate among possible interpretations and ensure model reliability. Keywords: Optimization, Uncertainty quantification, Chemical mechanism, Low-Temperature Oxidation, Non-Boltzmann.

  4. Modeling the chemistry of complex petroleum mixtures.

    PubMed Central

    Quann, R J

    1998-01-01

    Determining the complete molecular composition of petroleum and its refined products is not feasible with current analytical techniques because of the astronomical number of molecular components. Modeling the composition and behavior of such complex mixtures in refinery processes has accordingly evolved along a simplifying concept called lumping. Lumping reduces the complexity of the problem to a manageable form by grouping the entire set of molecular components into a handful of lumps. This traditional approach does not have a molecular basis and therefore excludes important aspects of process chemistry and molecular property fundamentals from the model's formulation. A new approach called structure-oriented lumping has been developed to model the composition and chemistry of complex mixtures at a molecular level. The central concept is to represent an individual molecule or a set of closely related isomers as a mathematical construct of certain specific and repeating structural groups. A complex mixture such as petroleum can then be represented as thousands of distinct molecular components, each having a mathematical identity. This enables the automated construction of large complex reaction networks with tens of thousands of specific reactions for simulating the chemistry of complex mixtures. Further, the method provides a convenient framework for incorporating molecular physical property correlations, existing group contribution methods, molecular thermodynamic properties, and the structure-activity relationships of chemical kinetics in the development of models. PMID:9860903
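
A minimal sketch of the structure-oriented lumping idea: each molecule (or set of close isomers) is encoded as a vector of counts of structural groups, bulk properties follow by group contribution, and a reaction is just an increment/decrement on the group vector. The two-group basis and weights below are illustrative stand-ins, not the actual SOL group set.

```python
# Structure-oriented lumping sketch: a molecule is a vector of
# structural-group counts; properties follow by group contribution.
# Hypothetical two-group basis (not the actual SOL groups):
#   "A6" = six-carbon aromatic core (C6H6), "R1" = CH2 increment.
GROUP_MW = {"A6": 78.11, "R1": 14.03}

def molecular_weight(groups):
    """Molecular weight from structural-group counts."""
    return sum(GROUP_MW[g] * n for g, n in groups.items())

benzene = {"A6": 1}
toluene = {"A6": 1, "R1": 1}          # benzene core + one CH2 increment
ethylbenzene = {"A6": 1, "R1": 2}

# A reaction is an increment/decrement on the group vector, so large
# reaction networks can be generated mechanically: e.g. a dealkylation
# step removes one "R1" from any molecule that has one.
def dealkylate(groups):
    assert groups.get("R1", 0) > 0
    out = dict(groups)
    out["R1"] -= 1
    return out

print(molecular_weight(toluene))   # ~92.14 (C7H8)
print(dealkylate(ethylbenzene))    # back to the toluene group vector
```

Because every component has this mathematical identity, a rule like "dealkylate anything with R1 > 0" can be applied mechanically across thousands of components, which is how the automated construction of large reaction networks becomes possible.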

  5. The PAC-MAN model: Benchmark case for linear acoustics in computational physics

    NASA Astrophysics Data System (ADS)

    Ziegelwanger, Harald; Reiter, Paul

    2017-10-01

    Benchmark cases in the field of computational physics, on the one hand, have to contain a certain complexity to test numerical edge cases and, on the other hand, require the existence of an analytical solution, because an analytical solution allows the exact quantification of the accuracy of a numerical simulation method. This dilemma causes a need for analytical sound field formulations of complex acoustic problems. A well-known example of such a benchmark case for harmonic linear acoustics is the "Cat's Eye model", which describes the three-dimensional sound field radiated from a sphere with a missing octant analytically. In this paper, a benchmark case for two-dimensional (2D) harmonic linear acoustic problems, viz., the "PAC-MAN model", is proposed. The PAC-MAN model describes the radiated and scattered sound field around an infinitely long cylinder with a cut-out sector of variable angular width. While the analytical calculation of the 2D sound field allows different angular cut-out widths and arbitrarily positioned line sources, the computational cost associated with the solution of this problem is similar to a 1D problem because of a modal formulation of the sound field in the PAC-MAN model.

  6. Charge frustration in complex fluids and in electronic systems

    NASA Astrophysics Data System (ADS)

    Carraro, Carlo

    1997-02-01

    The idea of charge frustration is applied to describe the properties of such diverse physical systems as oil-water-surfactant mixtures and metal-ammonia solutions. The minimalist charge-frustrated model possesses one energy scale and two length scales. For oil-water-surfactant mixtures, these parameters have been determined starting from the microscopic properties of the physical systems under study. Thus, microscopic properties are successfully related to the observed mesoscopic structure.

  7. Sparsity-promoting inversion for modeling of irregular volcanic deformation source

    NASA Astrophysics Data System (ADS)

    Zhai, G.; Shirzaei, M.

    2016-12-01

    Kīlauea volcano, Hawai'i Island, has a complex magmatic system. Nonetheless, kinematic models of the summit reservoir have so far been limited to first-order analytical solutions with pre-determined geometry. To investigate the complex geometry and kinematics of the summit reservoir, we apply a multitrack multitemporal wavelet-based InSAR (Interferometric Synthetic Aperture Radar) algorithm and a geometry-free time-dependent modeling scheme considering a superposition of point centers of dilatation (PCDs). Applying Principal Component Analysis (PCA) to the time-dependent source model, six spatially independent deformation zones (i.e., reservoirs) are identified, whose locations are consistent with previous studies. Time-dependence of the model also allows identifying periods of correlated or anti-correlated behavior between reservoirs. Hence, we suggest that the reservoirs are likely connected, forming a single complex magmatic reservoir [Zhai and Shirzaei, 2016]. To obtain a physically meaningful representation of the complex reservoir, we devise a new sparsity-promoting modeling scheme assuming active magma bodies are well-localized melt accumulations (i.e., outliers in the background crust). The major steps are to invert the surface deformation data using a hybrid L-1 and L-2 norm regularization approach to solve for a sparse volume change distribution, and then to implement a BEM-based method to solve for the opening distribution on a triangular mesh representing the complex reservoir. Using this approach, we are able to constrain the internal excess pressure of a magma body with irregular geometry, satisfying a uniformly pressurized boundary condition on the surface of the magma chamber. The inversion method with sparsity constraint is tested using five synthetic source geometries, including a torus, a prolate ellipsoid, and a sphere, as well as horizontal and vertical L-shaped bodies. The results show that source dimension, depth, and shape are well recovered.
    Afterward, we apply this modeling scheme to deformation observed at the Kīlauea summit to constrain the magmatic source geometry and revise the kinematics of Kīlauea's shallow plumbing system. Such a model is valuable for understanding the physical processes in a magmatic reservoir, and the method can readily be applied to other volcanic settings.
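
The hybrid L1/L2 idea can be sketched with the simplest sparsity-promoting solver, iterative soft thresholding (ISTA) for min ||Gx - d||² + λ||x||₁. The synthetic 20x10 problem below is purely illustrative and unrelated to the Kīlauea data; in a real application G would hold Green's-function kernels relating source volume changes to surface deformation.

```python
import random, math

random.seed(0)
m, n = 20, 10
G = [[random.gauss(0, 1) for _ in range(n)] for _ in range(m)]

# Sparse "true" volume-change vector: two active sources (outliers).
x_true = [0.0] * n
x_true[2], x_true[7] = 2.0, -1.5
d = [sum(G[i][j] * x_true[j] for j in range(n)) for i in range(m)]

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

def rmatvec(A, v):  # A^T v
    return [sum(A[i][j] * v[i] for i in range(len(A))) for j in range(len(A[0]))]

# Largest eigenvalue of G^T G via power iteration (sets the ISTA step size).
z = [1.0] * n
for _ in range(50):
    z = rmatvec(G, matvec(G, z))
    nz = math.sqrt(sum(t * t for t in z))
    z = [t / nz for t in z]
alpha, lam = 1.0 / nz, 0.05

def soft(t, s):
    """Soft-thresholding: the proximal operator of the L1 penalty."""
    return math.copysign(max(abs(t) - s, 0.0), t)

x = [0.0] * n
for _ in range(1000):
    r = matvec(G, x)
    grad = rmatvec(G, [r[i] - d[i] for i in range(m)])
    x = [soft(x[j] - alpha * grad[j], alpha * lam) for j in range(n)]

print([round(v, 2) for v in x])  # large entries only at indices 2 and 7
```

The L1 term drives most components exactly to zero, so the recovered model is a few well-localized sources rather than a smeared-out volume-change field, which is the point of the sparsity constraint.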

  8. Time scale variations of the physical parameters of the Si IV resonance lines in the case of the Be star HD 50138

    NASA Astrophysics Data System (ADS)

    Stathopoulos, D.

    2012-01-01

    As is well known, many lines in the spectra of hot emission stars (Be and Oe) present peculiar and very complex profiles. As a result, we cannot find a classical theoretical distribution to fit these profiles, and so we are not able to calculate the physical parameters of the regions where these lines are created. In this paper, using the Gauss-Rotation model (GR model; Danezis et al.), which proposes that these complex profiles consist of a number of independent Discrete or Satellite Absorption Components (DACs, SACs), we study the UV Si IV (λλ 1393.755, 1402.77 Å) resonance lines of the Be star HD 50138 in three different periods. From this analysis we calculate the values of a group of physical parameters: the apparent rotational and radial velocities, the random velocities of the thermal motions of the ions, as well as the Full Width at Half Maximum (FWHM) and the absorbed energy of the independent regions of matter which produce the main and the satellite components of the studied spectral line. Finally, we calculate the time-scale variations of the above physical parameters.

  9. An approach for modelling snowcover ablation and snowmelt runoff in cold region environments

    NASA Astrophysics Data System (ADS)

    Dornes, Pablo Fernando

    Reliable hydrological model simulations are the result of numerous complex interactions among hydrological inputs, landscape properties, and initial conditions. Determination of the effects of these factors is one of the main challenges in hydrological modelling. This situation becomes even more difficult in cold regions due to the ungauged nature of subarctic and arctic environments. This research work is an attempt to apply a new approach for modelling snowcover ablation and snowmelt runoff in complex subarctic environments with limited data while retaining integrity in the process representations. The modelling strategy is based on the incorporation of both detailed process understanding and inputs along with information gained from observations of basin-wide streamflow phenomenon; essentially a combination of deductive and inductive approaches. The study was conducted in the Wolf Creek Research Basin, Yukon Territory, using three models, a small-scale physically based hydrological model, a land surface scheme, and a land surface hydrological model. The spatial representation was based on previous research studies and observations, and was accomplished by incorporating landscape units, defined according to topography and vegetation, as the spatial model elements. Comparisons between distributed and aggregated modelling approaches showed that simulations incorporating distributed initial snowcover and corrected solar radiation were able to properly simulate snowcover ablation and snowmelt runoff whereas the aggregated modelling approaches were unable to represent the differential snowmelt rates and complex snowmelt runoff dynamics. Similarly, the inclusion of spatially distributed information in a land surface scheme clearly improved simulations of snowcover ablation. 
Application of the same modelling approach at a larger scale using the same landscape based parameterisation showed satisfactory results in simulating snowcover ablation and snowmelt runoff with minimal calibration. Verification of this approach in an arctic basin illustrated that landscape based parameters are a feasible regionalisation framework for distributed and physically based models. In summary, the proposed modelling philosophy, based on the combination of an inductive and deductive reasoning, is a suitable strategy for reliable predictions of snowcover ablation and snowmelt runoff in cold regions and complex environments.

  10. Combining Statistics and Physics to Improve Climate Downscaling

    NASA Astrophysics Data System (ADS)

    Gutmann, E. D.; Eidhammer, T.; Arnold, J.; Nowak, K.; Clark, M. P.

    2017-12-01

    Getting useful information from climate models is an ongoing problem that has plagued climate science and hydrologic prediction for decades. While it is possible to develop statistical corrections for climate models that mimic current climate almost perfectly, this does not necessarily guarantee that future changes are portrayed correctly. In contrast, convection permitting regional climate models (RCMs) have begun to provide an excellent representation of the regional climate system purely from first principles, providing greater confidence in their change signal. However, the computational cost of such RCMs prohibits the generation of ensembles of simulations or long time periods, thus limiting their applicability for hydrologic applications. Here we discuss a new approach combining statistical corrections with physical relationships for a modest computational cost. We have developed the Intermediate Complexity Atmospheric Research model (ICAR) to provide a climate and weather downscaling option that is based primarily on physics for a fraction of the computational requirements of a traditional regional climate model. ICAR also enables the incorporation of statistical adjustments directly within the model. We demonstrate that applying even simple corrections to precipitation while the model is running can improve the simulation of land atmosphere feedbacks in ICAR. For example, by incorporating statistical corrections earlier in the modeling chain, we permit the model physics to better represent the effect of mountain snowpack on air temperature changes.
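
The kind of simple running correction described above can be illustrated with empirical quantile mapping, a standard statistical adjustment for precipitation: a model value is replaced by the observed value at the same empirical quantile. The synthetic data and the linear bias are fabricated for illustration; ICAR's internal implementation is not described in the abstract.

```python
import random
from bisect import bisect_left

random.seed(1)
# "Observed" precipitation climatology and a systematically biased model climate.
obs = sorted(random.expovariate(1 / 3.0) for _ in range(1000))
model = sorted(0.6 * v + 0.5 for v in obs)   # model: damped and offset

def quantile_map(x, model_sorted, obs_sorted):
    """Map model value x to the observed value at the same empirical quantile."""
    k = bisect_left(model_sorted, x)
    k = min(max(k, 0), len(model_sorted) - 1)
    return obs_sorted[k]

raw = [0.6 * v + 0.5 for v in obs]                      # biased model series
corrected = [quantile_map(x, model, obs) for x in raw]  # corrected series

mean_obs = sum(obs) / len(obs)
mean_cor = sum(corrected) / len(corrected)
print(mean_obs, mean_cor)  # corrected mean ~ observed mean
```

Applying such a mapping inside the model loop, rather than to the finished output, is what lets the corrected precipitation feed back into the simulated land-atmosphere coupling, as the abstract describes.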

  11. A wave model test bed study for wave energy resource characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Zhaoqing; Neary, Vincent S.; Wang, Taiping

    This paper presents a test bed study conducted to evaluate best practices in wave modeling to characterize energy resources. The model test bed off the central Oregon Coast was selected because of the high wave energy and available measured data at the site. Two third-generation spectral wave models, SWAN and WWIII, were evaluated. A four-level nested-grid approach—from global to test bed scale—was employed. Model skills were assessed using a set of model performance metrics based on comparing six simulated wave resource parameters to observations from a wave buoy inside the test bed. Both WWIII and SWAN performed well at the test bed site and exhibited similar modeling skills. The ST4 package with WWIII, which represents better physics for wave growth and dissipation, out-performed ST2 physics and improved wave power density and significant wave height predictions. However, ST4 physics tended to overpredict the wave energy period. The newly developed ST6 physics did not improve the overall model skill for predicting the six wave resource parameters. Sensitivity analysis using different wave frequencies and direction resolutions indicated the model results were not sensitive to spectral resolutions at the test bed site, likely due to the absence of complex bathymetric and geometric features.
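
Skill assessments like the one above typically reduce to a handful of standard error metrics comparing simulated and observed resource parameters. The sketch below computes bias, RMSE, scatter index, and Pearson correlation for a significant-wave-height series; the numbers are made up, and the abstract does not specify the exact metric set used in the study.

```python
import math

def skill_metrics(sim, obs):
    """Bias, RMSE, scatter index, and correlation of simulated vs observed values."""
    n = len(obs)
    mo = sum(obs) / n
    ms = sum(sim) / n
    bias = ms - mo
    rmse = math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, obs)) / n)
    si = rmse / mo                       # scatter index: RMSE normalized by obs mean
    cov = sum((s - ms) * (o - mo) for s, o in zip(sim, obs)) / n
    vs = math.sqrt(sum((s - ms) ** 2 for s in sim) / n)
    vo = math.sqrt(sum((o - mo) ** 2 for o in obs) / n)
    return bias, rmse, si, cov / (vs * vo)

# Illustrative significant wave height (m): buoy observations vs. model output.
obs = [2.0, 2.5, 3.0, 2.2, 2.8]
sim = [2.1, 2.4, 3.2, 2.3, 2.7]
print(skill_metrics(sim, obs))
```

Computing the same metrics for each of the six resource parameters (wave power density, significant wave height, energy period, etc.) gives a compact skill table for comparing model configurations such as ST2, ST4, and ST6.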

  12. Development of a One-Equation Eddy Viscosity Turbulence Model for Application to Complex Turbulent Flows

    NASA Astrophysics Data System (ADS)

    Wray, Timothy J.

    Computational fluid dynamics (CFD) is routinely used in performance prediction and design of aircraft, turbomachinery, automobiles, and in many other industrial applications. Despite its wide range of use, deficiencies in its prediction accuracy still exist. One critical weakness is the accurate simulation of complex turbulent flows using the Reynolds-Averaged Navier-Stokes equations in conjunction with a turbulence model. The goal of this research has been to develop an eddy viscosity type turbulence model to increase the accuracy of flow simulations for mildly separated flows, flows with rotation and curvature effects, and flows with surface roughness. It is accomplished by developing a new zonal one-equation turbulence model which relies heavily on the flow physics; it is now known in the literature as the Wray-Agarwal one-equation turbulence model. The effectiveness of the new model is demonstrated by comparing its results with those obtained by the industry-standard one-equation Spalart-Allmaras model, the two-equation Shear-Stress Transport k-ω model, and experimental data. Results for subsonic, transonic, and supersonic flows in and about complex geometries are presented. It is demonstrated that the Wray-Agarwal model can provide the industry and CFD researchers an accurate, efficient, and reliable turbulence model for the computation of a large class of complex turbulent flows.

  13. Unexpected Results are Usually Wrong, but Often Interesting

    NASA Astrophysics Data System (ADS)

    Huber, M.

    2014-12-01

    In climate modeling, an unexpected result is usually wrong, arising from some sort of mistake. Despite the fact that we all bemoan uncertainty in climate, the field is underlain by a robust, successful body of theory and any properly conducted modeling experiment is posed and conducted within that context. Consequently, if results from a complex climate model disagree with theory or from expectations from simpler models, much skepticism is in order. But, this exposes the fundamental tension of using complex, sophisticated models. If simple models and theory were perfect there would be no reason for complex models--the entire point of sophisticated models is to see if unexpected phenomena arise as emergent properties of the system. In this talk, I will step through some paleoclimate examples, drawn from my own work, of unexpected results that emerge from complex climate models arising from mistakes of two kinds. The first kind of mistake, is what I call a 'smart mistake'; it is an intentional incorporation of assumptions, boundary conditions, or physics that is in violation of theoretical or observational constraints. The second mistake, a 'dumb mistake', is just that, an unintentional violation. Analysis of such mistaken simulations provides some potentially novel and certainly interesting insights into what is possible and right in paleoclimate modeling by forcing the reexamination of well-held assumptions and theories.

  14. Quasi-dynamic earthquake fault systems with rheological heterogeneity

    NASA Astrophysics Data System (ADS)

    Brietzke, G. B.; Hainzl, S.; Zoeller, G.; Holschneider, M.

    2009-12-01

    Seismic risk and hazard estimates mostly use purely empirical, stochastic models of earthquake fault systems tuned specifically to the vulnerable areas of interest. Although such models allow for reasonable risk estimates, they do not permit physical statements about the described seismicity. In contrast to such empirical stochastic models, physics-based earthquake fault system models allow for a physical reasoning and interpretation of the produced seismicity and system dynamics. Recently, different fault system earthquake simulators based on frictional stick-slip behavior have been used to study effects of stress heterogeneity, rheological heterogeneity, or geometrical complexity on earthquake occurrence, spatial and temporal clustering of earthquakes, and system dynamics. Here we present a comparison of characteristics of synthetic earthquake catalogs produced by two different formulations of quasi-dynamic fault system earthquake simulators. Both models are based on discretized frictional faults embedded in an elastic half-space. While one (1) is governed by rate- and state-dependent friction allowing three evolutionary stages of independent fault patches, the other (2) is governed by instantaneous frictional weakening with scheduled (and therefore causal) stress transfer. We analyze spatial and temporal clustering of events and characteristics of system dynamics by means of physical parameters of the two approaches.
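
The rate- and state-dependent friction of the first formulation can be sketched in isolation: under the Dieterich aging law the state variable relaxes toward its steady value D_c/v, and the steady-state friction follows mu0 + (a - b)·ln(v/v0). The parameter values below are generic laboratory-scale numbers, not those used in the paper.

```python
import math

# Rate-and-state friction with the Dieterich aging law:
#   mu = mu0 + a*ln(v/v0) + b*ln(v0*theta/Dc),   dtheta/dt = 1 - v*theta/Dc
mu0, a, b = 0.6, 0.010, 0.015   # a < b: velocity weakening (stick-slip capable)
v0, Dc = 1e-6, 1e-4             # reference slip rate (m/s), characteristic slip (m)

v = 1e-5                         # imposed constant slip rate
theta, dt = 1.0, 0.1             # initial state (s), time step (s)
for _ in range(100_000):         # forward-Euler over 10,000 s (>> Dc/v = 10 s)
    theta += dt * (1.0 - v * theta / Dc)

mu = mu0 + a * math.log(v / v0) + b * math.log(v0 * theta / Dc)
mu_ss = mu0 + (a - b) * math.log(v / v0)   # analytic steady-state friction
print(theta, mu, mu_ss)
```

Because a < b here, faster steady sliding is weaker (mu_ss < mu0), which is the velocity-weakening condition that allows simulated fault patches to nucleate stick-slip events.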

  15. Evidence of non-extensivity and complexity in the seismicity observed during 2011-2012 at the Santorini volcanic complex, Greece

    NASA Astrophysics Data System (ADS)

    Vallianatos, F.; Tzanis, A.; Michas, G.; Papadakis, G.

    2012-04-01

    Since the middle of summer 2011, an increase in the seismicity rates of the volcanic complex of Santorini Island, Greece, has been observed. In the present work, the temporal distribution of seismicity, as well as the magnitude distribution of earthquakes, has been studied using the concepts of Non-Extensive Statistical Physics (NESP; Tsallis, 2009) along with the evolution of the Shannon entropy H (also called information entropy). The analysis is based on the earthquake catalogue of the Geodynamic Institute of the National Observatory of Athens for the period July 2011-January 2012 (http://www.gein.noa.gr/). Non-Extensive Statistical Physics, which is a generalization of Boltzmann-Gibbs statistical physics, seems a suitable framework for studying complex systems. The observed distributions of seismicity rates at Santorini can be fitted exceptionally well with NESP models. This implies the inherent complexity of the Santorini volcanic seismicity, the applicability of NESP concepts to volcanic earthquake activity, and the usefulness of NESP in investigating phenomena exhibiting multifractality and long-range coupling effects. Acknowledgments. This work was supported in part by the THALES Program of the Ministry of Education of Greece and the European Union in the framework of the project entitled "Integrated understanding of Seismicity, using innovative Methodologies of Fracture mechanics along with Earthquake and non extensive statistical physics - Application to the geodynamic system of the Hellenic Arc. SEISMO FEAR HELLARC". GM and GP wish to acknowledge the partial support of the Greek State Scholarships Foundation (ΙΚΥ).
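
The NESP machinery used here rests on the Tsallis entropy S_q and the q-exponential function, which generalize the Shannon entropy and the ordinary exponential and recover them in the limit q → 1. A minimal sketch of the generic formulas (not the paper's fitted parameters):

```python
import math

def tsallis_entropy(p, q):
    """S_q = (1 - sum p_i^q) / (q - 1); reduces to Shannon entropy as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return -sum(pi * math.log(pi) for pi in p if pi > 0)
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

def q_exp(x, q):
    """q-exponential [1 + (1-q)x]^(1/(1-q)); reduces to exp(x) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0 else 0.0

p = [0.5, 0.5]
print(tsallis_entropy(p, 1.0))   # Shannon limit: ln 2 ~ 0.6931
print(tsallis_entropy(p, 1.5))   # non-extensive value, below ln 2
print(q_exp(-1.0, 1.3))          # heavier tail than exp(-1) ~ 0.3679
```

For q > 1 the q-exponential decays more slowly than the ordinary exponential, which is why q-exponential fits capture the heavy-tailed inter-event time and magnitude distributions of correlated seismicity better than Boltzmann-Gibbs forms.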

  16. Data mining and statistical inference in selective laser melting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamath, Chandrika

    Selective laser melting (SLM) is an additive manufacturing process that builds a complex three-dimensional part, layer-by-layer, using a laser beam to fuse fine metal powder together. The design freedom afforded by SLM comes associated with complexity. As the physical phenomena occur over a broad range of length and time scales, the computational cost of modeling the process is high. At the same time, the large number of parameters that control the quality of a part make experiments expensive. In this paper, we describe ways in which we can use data mining and statistical inference techniques to intelligently combine simulations and experiments to build parts with desired properties. We start with a brief summary of prior work in finding process parameters for high-density parts. We then expand on this work to show how we can improve the approach by using feature selection techniques to identify important variables, data-driven surrogate models to reduce computational costs, improved sampling techniques to cover the design space adequately, and uncertainty analysis for statistical inference. Here, our results indicate that techniques from data mining and statistics can complement those from physical modeling to provide greater insight into complex processes such as selective laser melting.
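
The data-driven surrogate idea can be sketched with the simplest possible case: fit a linear response surface to a handful of (laser power, scan speed) → melt-pool depth samples by ordinary least squares, then query it in place of expensive simulations. The data and the linear response below are fabricated for illustration; real SLM responses are nonlinear and the paper's surrogates are more sophisticated.

```python
# Linear least-squares surrogate: depth ~ c0 + c1*power + c2*speed.
def solve3(A, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[col])]
    return [M[i][3] / M[i][i] for i in range(3)]

# Synthetic training data from a known linear response (illustrative only):
# depth(mm) = 0.05 + 0.002*P(W) - 0.30*V(m/s)
samples = [(P, V, 0.05 + 0.002 * P - 0.30 * V)
           for P in (150, 200, 250, 300) for V in (0.5, 1.0, 1.5)]

# Normal equations X^T X c = X^T y with design rows x = (1, P, V).
XtX = [[0.0] * 3 for _ in range(3)]
Xty = [0.0] * 3
for P, V, y in samples:
    x = (1.0, P, V)
    for i in range(3):
        Xty[i] += x[i] * y
        for j in range(3):
            XtX[i][j] += x[i] * x[j]

c = solve3(XtX, Xty)
surrogate = lambda P, V: c[0] + c[1] * P + c[2] * V
print(c)                     # recovers ~[0.05, 0.002, -0.30]
print(surrogate(225, 0.8))   # cheap prediction between training points
```

Once fitted, the surrogate is evaluated in microseconds, so dense sampling of the design space and uncertainty analysis become affordable even when each underlying simulation is expensive.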

  17. Data mining and statistical inference in selective laser melting

    DOE PAGES

    Kamath, Chandrika

    2016-01-11

    Selective laser melting (SLM) is an additive manufacturing process that builds a complex three-dimensional part, layer-by-layer, using a laser beam to fuse fine metal powder together. The design freedom afforded by SLM comes associated with complexity. As the physical phenomena occur over a broad range of length and time scales, the computational cost of modeling the process is high. At the same time, the large number of parameters that control the quality of a part make experiments expensive. In this paper, we describe ways in which we can use data mining and statistical inference techniques to intelligently combine simulations and experiments to build parts with desired properties. We start with a brief summary of prior work in finding process parameters for high-density parts. We then expand on this work to show how we can improve the approach by using feature selection techniques to identify important variables, data-driven surrogate models to reduce computational costs, improved sampling techniques to cover the design space adequately, and uncertainty analysis for statistical inference. Here, our results indicate that techniques from data mining and statistics can complement those from physical modeling to provide greater insight into complex processes such as selective laser melting.

  18. Untangling Slab Dynamics Using 3-D Numerical and Analytical Models

    NASA Astrophysics Data System (ADS)

    Holt, A. F.; Royden, L.; Becker, T. W.

    2016-12-01

    Increasingly sophisticated numerical models have enabled us to make significant strides in identifying the key controls on how subducting slabs deform. For example, 3-D models have demonstrated that subducting plate width, and the related strength of toroidal flow around the plate edge, exerts a strong control on both the curvature and the rate of migration of the trench. However, the results of numerical subduction models can be difficult to interpret, and many first order dynamics issues remain at least partially unresolved. Such issues include the dominant controls on trench migration, the interdependence of asthenospheric pressure and slab dynamics, and how nearby slabs influence each other's dynamics. We augment 3-D, dynamically evolving finite element models with simple, analytical force-balance models to distill the physics associated with subduction into more manageable parts. We demonstrate that for single, isolated subducting slabs much of the complexity of our fully numerical models can be encapsulated by simple analytical expressions. Rates of subduction and slab dip correlate strongly with the asthenospheric pressure difference across the subducting slab. For double subduction, an additional slab gives rise to more complex mantle pressure and flow fields, and significantly extends the range of plate kinematics (e.g., convergence rate, trench migration rate) beyond those present in single slab models. Despite these additional complexities, we show that much of the dynamics of such multi-slab systems can be understood using the physics illuminated by our single slab study, and that a force-balance method can be used to relate intra-plate stress to viscous pressure in the asthenosphere and coupling forces at plate boundaries. This method has promise for rapid modeling of large systems of subduction zones on a global scale.

  19. Practical modeling approaches for geological storage of carbon dioxide.

    PubMed

    Celia, Michael A; Nordbotten, Jan M

    2009-01-01

The relentless increase of anthropogenic carbon dioxide emissions and the associated concerns about climate change have motivated new ideas about carbon-constrained energy production. One technological approach to control carbon dioxide emissions is carbon capture and storage, or CCS. The underlying idea of CCS is to capture the carbon before it is emitted to the atmosphere and store it somewhere other than the atmosphere. Currently, the most attractive option for large-scale storage is in deep geological formations, including deep saline aquifers. Many physical and chemical processes can affect the fate of the injected CO2, with the overall mathematical description of the complete system becoming very complex. Our approach to the problem has been to reduce complexity as much as possible, so that we can focus on the few truly important questions about the injected CO2, most of which involve leakage out of the injection formation. Toward this end, we have established a set of simplifying assumptions that allow us to derive simplified models, which can be solved numerically or, for the most simplified cases, analytically. These simplified models allow calculation of solutions to large-scale injection and leakage problems in ways that traditional multicomponent multiphase simulators cannot. Such simplified models provide important tools for system analysis, screening calculations, and overall risk-assessment calculations. We believe this is a practical and important approach to model geological storage of carbon dioxide. It also serves as an example of how complex systems can be simplified while retaining the essential physics of the problem.

  20. Modeling Coastal Zone Responses to Sea-Level Rise Using MoCCS: A Model of Complex Coastal System

    NASA Astrophysics Data System (ADS)

    Dai, H.; Niedoroda, A. W.; Ye, M.; Saha, B.; Donoghue, J. F.; Kish, S.

    2011-12-01

    Large-scale coastal systems consisting of several morphological components (e.g. beach, surf zone, dune, inlet, shoreface, and estuary) can be expected to exhibit complex and interacting responses to changes in the rate of sea-level rise and storm climate. We have developed a numerical model of complex coastal systems (MoCCS), derived from earlier morphodynamic models, to represent the large-scale, time-averaged physical processes that shape each component and govern the component interactions. These control the ongoing evolution of the barrier islands, beach and dune erosion, shoal formation and sand withdrawal at tidal inlets, depth changes in the bay, and changes in storm flooding. The model has been used to study the response of an idealized coastal system with physical characteristics and storm climatology similar to Santa Rosa Island on the Florida Panhandle coast. Five sea-level rise (SLR) scenarios have been used, covering the range of recently published projections for the next century. Each scenario has been input with a constant and then a time-varying storm climate. The results indicate that substantial increases in the rate of beach erosion are largely due to increased sand transfer to inlet shoals with increased rates of sea-level rise. The barrier island undergoes cycles of dune destruction and regrowth, leading to sand deposition. This largely maintains island freeboard but is progressively less effective in offsetting bayside inundation and marsh habitat loss at accelerated sea-level rise rates.

  1. Route complexity and simulated physical ageing negatively influence wayfinding.

    PubMed

    Zijlstra, Emma; Hagedoorn, Mariët; Krijnen, Wim P; van der Schans, Cees P; Mobach, Mark P

    2016-09-01

    The aim of this age-simulation field experiment was to assess the influence of route complexity and physical ageing on wayfinding. Seventy-five people (aged 18-28) performed a total of 108 wayfinding tasks (i.e., 42 participants performed two wayfinding tasks and 33 performed one wayfinding task), of which 59 tasks were performed wearing gerontologic ageing suits. Outcome variables were wayfinding performance (i.e., efficiency and walking speed) and physiological outcomes (i.e., heart and respiratory rates). Analysis of covariance showed that persons on more complex routes (i.e., more floor and building changes) walked less efficiently than persons on less complex routes. In addition, simulated elderly participants performed worse in wayfinding than young participants in terms of walking speed (p < 0.001). Moreover, a linear mixed model showed that simulated elderly persons had higher heart and respiratory rates than young people during a wayfinding task, suggesting that the simulated elderly consumed more energy during this task. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Fast Quantum Algorithm for Predicting Descriptive Statistics of Stochastic Processes

    NASA Technical Reports Server (NTRS)

    Williams, Colin P.

    1999-01-01

    Stochastic processes are used as a modeling tool in several sub-fields of physics, biology, and finance. Analytic understanding of the long-term behavior of such processes is tractable only for very simple types of stochastic processes, such as Markovian processes. However, in real-world applications more complex stochastic processes often arise. In physics, the complicating factor might be nonlinearities; in biology it might be memory effects; and in finance it might be the non-random, intentional behavior of participants in a market. In the absence of analytic insight, one is forced to understand these more complex stochastic processes via numerical simulation techniques. In this paper we present a quantum algorithm for performing such simulations. In particular, we show how a quantum algorithm can predict arbitrary descriptive statistics (moments) of N-step stochastic processes in just O(√N) time. That is, the quantum complexity is the square root of the classical complexity for performing such simulations. This is a significant speedup in comparison to the current state of the art.
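As a point of reference, the classical baseline that the paper's O(√N) result improves upon can be sketched as direct Monte Carlo estimation of the moments of an N-step process; the symmetric random walk and function names below are hypothetical illustrations, not the paper's algorithm, and each sample here costs O(N) work per query.

```python
import random

def simulate_walk(n_steps, rng):
    """Simulate one N-step symmetric random walk and return its endpoint."""
    pos = 0
    for _ in range(n_steps):
        pos += rng.choice((-1, 1))
    return pos

def estimate_moment(n_steps, k, n_samples=2000, seed=0):
    """Classically estimate the k-th moment E[X_N^k] by direct simulation.

    Each sample costs O(N) steps; amplitude-estimation-style quantum
    algorithms aim to reduce this dependence to O(sqrt(N)).
    """
    rng = random.Random(seed)
    total = sum(simulate_walk(n_steps, rng) ** k for _ in range(n_samples))
    return total / n_samples

# For a symmetric walk the first moment is near 0 and the second near N.
m1 = estimate_moment(100, 1)
m2 = estimate_moment(100, 2)
```

The sketch makes the classical cost explicit: every one of the `n_samples` draws walks all `n_steps`, which is exactly the per-sample factor the quantum approach targets.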

  3. Planning in Higher Education and Chaos Theory: A Model, a Method.

    ERIC Educational Resources Information Center

    Cutright, Marc

    This paper proposes a model, based on chaos theory, that explores strategic planning in higher education. It notes that chaos theory was first developed in the physical sciences to explain how apparently random activity was, in fact, complexly patterned. The paper goes on to describe how chaos theory has subsequently been applied to the social…

  4. Large eddy simulation of forest canopy flow for wildland fire modeling

    Treesearch

    Eric Mueller; William Mell; Albert Simeoni

    2014-01-01

    Large eddy simulation (LES) based computational fluid dynamics (CFD) simulators have obtained increasing attention in the wildland fire research community, as these tools allow the inclusion of important driving physics. However, due to the complexity of the models, individual aspects must be isolated and tested rigorously to ensure meaningful results. As wind is a...

  5. Biological system interactions.

    PubMed Central

    Adomian, G; Adomian, G E; Bellman, R E

    1984-01-01

    Mathematical modeling of cellular population growth, interconnected subsystems of the body, blood flow, and numerous other complex biological systems involves nonlinearities and, generally, randomness as well. Such problems have been dealt with by mathematical methods that often change the actual model to make it tractable. The method presented in this paper (and referenced works) allows much more physically realistic solutions. PMID:6585837

  6. PREFACE: 3rd International Conference on Mathematical Modeling in Physical Sciences (IC-MSQUARE 2014)

    NASA Astrophysics Data System (ADS)

    2015-01-01

    The third International Conference on Mathematical Modeling in Physical Sciences (IC-MSQUARE) took place in Madrid, Spain, from Thursday 28 to Sunday 31 August 2014. The Conference was attended by more than 200 participants and hosted about 350 oral, poster, and virtual presentations; there were also more than 600 pre-registered authors. The third IC-MSQUARE consisted of diverse workshops and thus covered various research fields where Mathematical Modeling is used, such as Theoretical/Mathematical Physics, Neutrino Physics, Non-Integrable Systems, Dynamical Systems, Computational Nanoscience, Biological Physics, Computational Biomechanics, Complex Networks, Stochastic Modeling, Fractional Statistics, DNA Dynamics, Macroeconomics etc. The scientific program was rather full: after the Keynote and Invited Talks in the morning, three parallel oral sessions and one poster session ran every day. Nevertheless, according to all attendees, the program was excellent, with a high level of talks and a fruitful scientific environment that gave everyone a creative time. We would like to thank the Keynote Speaker and the Invited Speakers for their significant contribution to IC-MSQUARE. We also would like to thank the Members of the International Advisory and Scientific Committees as well as the Members of the Organizing Committee.

  7. PREFACE: 4th International Conference on Mathematical Modeling in Physical Sciences (IC-MSquare2015)

    NASA Astrophysics Data System (ADS)

    Vlachos, Dimitrios; Vagenas, Elias C.

    2015-09-01

    The 4th International Conference on Mathematical Modeling in Physical Sciences (IC-MSQUARE) took place in Mykonos, Greece, from Friday 5th June to Monday 8th June 2015. The Conference was attended by more than 150 participants and hosted about 200 oral, poster, and virtual presentations; there were more than 600 pre-registered authors. The 4th IC-MSQUARE consisted of diverse workshops and thus covered various research fields where Mathematical Modeling is used, such as Theoretical/Mathematical Physics, Neutrino Physics, Non-Integrable Systems, Dynamical Systems, Computational Nanoscience, Biological Physics, Computational Biomechanics, Complex Networks, Stochastic Modeling, Fractional Statistics, DNA Dynamics, Macroeconomics etc. The scientific program was rather intense: after the Keynote and Invited Talks in the morning, three parallel oral sessions and one poster session ran every day. Nevertheless, according to all attendees, the program was excellent, with high-quality talks creating an innovative and productive scientific environment for all attendees. We would like to thank the Keynote Speaker and the Invited Speakers for their significant contribution to IC-MSQUARE. We also would like to thank the Members of the International Advisory and Scientific Committees as well as the Members of the Organizing Committee.

  8. Coupling of a distributed stakeholder-built system dynamics socio-economic model with SAHYSMOD for sustainable soil salinity management - Part 1: Model development

    NASA Astrophysics Data System (ADS)

    Inam, Azhar; Adamowski, Jan; Prasher, Shiv; Halbe, Johannes; Malard, Julien; Albano, Raffaele

    2017-08-01

    Effective policies, leading to sustainable management solutions for land and water resources, require a full understanding of interactions between socio-economic and physical processes. However, the complex nature of these interactions, combined with limited stakeholder engagement, hinders the incorporation of socio-economic components into physical models. The present study addresses this challenge by integrating the physical Spatial Agro Hydro Salinity Model (SAHYSMOD) with a participatory group-built system dynamics model (GBSDM) that includes socio-economic factors. A stepwise process to quantify the GBSDM is presented, along with governing equations and model assumptions. Sub-modules of the GBSDM, describing agricultural, economic, water and farm management factors, are linked together with feedbacks and finally coupled with the physically based SAHYSMOD model through commonly used tools (i.e., MS Excel and a Python script). The overall integrated model (GBSDM-SAHYSMOD) can be used to help facilitate the role of stakeholders with limited expertise and resources in model and policy development and implementation. Following the development of the integrated model, a testing methodology was used to validate the structure and behavior of the integrated model. Model robustness under different operating conditions was also assessed. The model structure was able to produce anticipated real behaviours under the tested scenarios, from which it can be concluded that the formulated structures generate the right behaviour for the right reasons.

  9. Application of machine learning techniques to analyse the effects of physical exercise in ventricular fibrillation.

    PubMed

    Caravaca, Juan; Soria-Olivas, Emilio; Bataller, Manuel; Serrano, Antonio J; Such-Miquel, Luis; Vila-Francés, Joan; Guerrero, Juan F

    2014-02-01

    This work presents the application of machine learning techniques to analyse the influence of physical exercise on the physiological properties of the heart during ventricular fibrillation. To this end, different kinds of classifiers (linear and neural models) are used to classify between trained and sedentary rabbit hearts. The use of these classifiers in combination with a wrapper feature selection algorithm allows knowledge to be extracted about the most relevant features in the problem. The obtained results show that neural models outperform linear classifiers (better performance indices and better dimensionality reduction). The most relevant features for describing the benefits of physical exercise are those related to myocardial heterogeneity, mean activation rate and activation complexity. © 2013 Published by Elsevier Ltd.
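The wrapper feature selection idea mentioned above can be illustrated as a greedy forward search wrapped around any classifier score; the `toy_score` function and the feature names below are purely hypothetical stand-ins for the study's classifiers and electrophysiological features.

```python
def forward_wrapper_selection(features, score_fn, max_features=None):
    """Greedy forward wrapper: repeatedly add the feature that most
    improves the wrapped classifier's score; stop when nothing helps."""
    selected = []
    remaining = list(features)
    best_score = score_fn(selected)
    while remaining and (max_features is None or len(selected) < max_features):
        scored = [(score_fn(selected + [f]), f) for f in remaining]
        top_score, top_feat = max(scored)
        if top_score <= best_score:
            break  # no remaining feature improves the score
        selected.append(top_feat)
        remaining.remove(top_feat)
        best_score = top_score
    return selected, best_score

# Hypothetical scorer: pretend only 'heterogeneity' and 'rate' are
# informative, and every extra feature slightly hurts performance.
def toy_score(subset):
    informative = {'heterogeneity': 0.3, 'rate': 0.2}
    return 0.5 + sum(informative.get(f, -0.01) for f in subset)

chosen, score = forward_wrapper_selection(
    ['heterogeneity', 'rate', 'noise1', 'noise2'], toy_score)
# chosen == ['heterogeneity', 'rate']
```

Because the search calls the scorer on whole candidate subsets, the same skeleton works unchanged whether `score_fn` cross-validates a linear model or a neural one, which is the sense in which wrapper selection "wraps" the classifier.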

  10. Self-organized pattern formation at organic-inorganic interfaces during deposition: Experiment versus modeling

    NASA Astrophysics Data System (ADS)

    Szillat, F.; Mayr, S. G.

    2011-09-01

    Self-organized pattern formation during physical vapor deposition of organic materials onto rough inorganic substrates is characterized by a complex morphological evolution as a function of film thickness. We employ a combined experimental-theoretical study using atomic force microscopy and numerically solved continuum rate equations to address morphological evolution in the model system: poly(bisphenol A carbonate) on polycrystalline Cu. As the key ingredients for pattern formation, (i) curvature and interface potential driven surface diffusion, (ii) deposition noise, and (iii) interface boundary effects are identified. Good agreement of experiments and theory, fitting only the Hamaker constant and diffusivity within narrow physical parameter windows, corroborates the underlying physics and paves the way for computer-assisted interface engineering.

  11. wfip2.model/realtime.hrrr_esrl.graphics.01 (Model: Real Time)

    DOE Data Explorer

    Macduff, Matt

    2017-10-27

    The primary purpose of the WFIP2 Model Development Team is to improve existing numerical weather prediction models in a manner that leads to improved wind forecasts in regions of complex terrain. Improvements in the models will come through better understanding of the physics associated with the wind flow in and around the wind plant across a range of temporal and spatial scales, which will be gained through WFIP2’s observational field study and analysis.

  12. wfip2.model/realtime.rap_esrl.icbc.01 (Model: Real Time)

    DOE Data Explorer

    Macduff, Matt

    2017-10-27

    The primary purpose of the WFIP2 Model Development Team is to improve existing numerical weather prediction models in a manner that leads to improved wind forecasts in regions of complex terrain. Improvements in the models will come through better understanding of the physics associated with the wind flow in and around the wind plant across a range of temporal and spatial scales, which will be gained through WFIP2’s observational field study and analysis.

  13. wfip2.model/refcst.01.fcst.02 (Model: Year-Long Reforecast)

    DOE Data Explorer

    Macduff, Matt

    2017-10-27

    The primary purpose of the WFIP2 Model Development Team is to improve existing numerical weather prediction models in a manner that leads to improved wind forecasts in regions of complex terrain. Improvements in the models will come through better understanding of the physics associated with the wind flow in and around the wind plant across a range of temporal and spatial scales, which will be gained through WFIP2’s observational field study and analysis.

  14. wfip2.model/refcst.coldstart.icbc.02 (Model: Year-Long Reforecast)

    DOE Data Explorer

    Macduff, Matt

    2017-10-27

    The primary purpose of the WFIP2 Model Development Team is to improve existing numerical weather prediction models in a manner that leads to improved wind forecasts in regions of complex terrain. Improvements in the models will come through better understanding of the physics associated with the wind flow in and around the wind plant across a range of temporal and spatial scales, which will be gained through WFIP2’s observational field study and analysis.

  15. wfip2.model/realtime.hrrr_esrl.icbc.01 (Model: Real Time)

    DOE Data Explorer

    Macduff, Matt

    2017-10-27

    The primary purpose of the WFIP2 Model Development Team is to improve existing numerical weather prediction models in a manner that leads to improved wind forecasts in regions of complex terrain. Improvements in the models will come through better understanding of the physics associated with the wind flow in and around the wind plant across a range of temporal and spatial scales, which will be gained through WFIP2’s observational field study and analysis.

  16. wfip2.model/realtime.rap_esrl.graphics.01 (Model: Real Time)

    DOE Data Explorer

    Macduff, Matt

    2017-10-27

    The primary purpose of the WFIP2 Model Development Team is to improve existing numerical weather prediction models in a manner that leads to improved wind forecasts in regions of complex terrain. Improvements in the models will come through better understanding of the physics associated with the wind flow in and around the wind plant across a range of temporal and spatial scales, which will be gained through WFIP2’s observational field study and analysis.

  17. wfip2.model/refcst.01.fcst.01 (Model: Year-Long Reforecast)

    DOE Data Explorer

    Macduff, Matt

    2017-10-27

    The primary purpose of the WFIP2 Model Development Team is to improve existing numerical weather prediction models in a manner that leads to improved wind forecasts in regions of complex terrain. Improvements in the models will come through better understanding of the physics associated with the wind flow in and around the wind plant across a range of temporal and spatial scales, which will be gained through WFIP2’s observational field study and analysis.

  18. wfip2.model/refcst.coldstart.icbc.01 (Model: Year-Long Reforecast)

    DOE Data Explorer

    Macduff, Matt

    2017-10-27

    The primary purpose of the WFIP2 Model Development Team is to improve existing numerical weather prediction models in a manner that leads to improved wind forecasts in regions of complex terrain. Improvements in the models will come through better understanding of the physics associated with the wind flow in and around the wind plant across a range of temporal and spatial scales, which will be gained through WFIP2’s observational field study and analysis.

  19. wfip2.model/refcst.02.fcst.02 (Model: Year-Long Reforecast)

    DOE Data Explorer

    Macduff, Matt

    2017-10-27

    The primary purpose of the WFIP2 Model Development Team is to improve existing numerical weather prediction models in a manner that leads to improved wind forecasts in regions of complex terrain. Improvements in the models will come through better understanding of the physics associated with the wind flow in and around the wind plant across a range of temporal and spatial scales, which will be gained through WFIP2’s observational field study and analysis.

  20. Stochastic Spatial Models in Ecology: A Statistical Physics Approach

    NASA Astrophysics Data System (ADS)

    Pigolotti, Simone; Cencini, Massimo; Molina, Daniel; Muñoz, Miguel A.

    2018-07-01

    Ecosystems display a complex spatial organization. Ecologists have long tried to characterize them by looking at how different measures of biodiversity change across spatial scales. Ecological neutral theory has provided simple predictions accounting for general empirical patterns in communities of competing species. However, while neutral theory in well-mixed ecosystems is mathematically well understood, spatial models still present several open problems, limiting the quantitative understanding of spatial biodiversity. In this review, we discuss the state of the art in spatial neutral theory. We emphasize the connection between spatial ecological models and the physics of non-equilibrium phase transitions and how concepts developed in statistical physics translate in population dynamics, and vice versa. We focus on non-trivial scaling laws arising at the critical dimension D = 2 of spatial neutral models, and their relevance for biological populations inhabiting two-dimensional environments. We conclude by discussing models incorporating non-neutral effects in the form of spatial and temporal disorder, and analyze how their predictions deviate from those of purely neutral theories.

  1. Stochastic Spatial Models in Ecology: A Statistical Physics Approach

    NASA Astrophysics Data System (ADS)

    Pigolotti, Simone; Cencini, Massimo; Molina, Daniel; Muñoz, Miguel A.

    2017-11-01

    Ecosystems display a complex spatial organization. Ecologists have long tried to characterize them by looking at how different measures of biodiversity change across spatial scales. Ecological neutral theory has provided simple predictions accounting for general empirical patterns in communities of competing species. However, while neutral theory in well-mixed ecosystems is mathematically well understood, spatial models still present several open problems, limiting the quantitative understanding of spatial biodiversity. In this review, we discuss the state of the art in spatial neutral theory. We emphasize the connection between spatial ecological models and the physics of non-equilibrium phase transitions and how concepts developed in statistical physics translate in population dynamics, and vice versa. We focus on non-trivial scaling laws arising at the critical dimension D = 2 of spatial neutral models, and their relevance for biological populations inhabiting two-dimensional environments. We conclude by discussing models incorporating non-neutral effects in the form of spatial and temporal disorder, and analyze how their predictions deviate from those of purely neutral theories.

  2. wfip2.model/refcst.02.fcst.01

    DOE Data Explorer

    Macduff, Matt

    2017-10-26

    The primary purpose of the WFIP2 Model Development Team is to improve existing numerical weather prediction models in a manner that leads to improved wind forecasts in regions of complex terrain. Improvements in the models will come through better understanding of the physics associated with the wind flow in and around the wind plant across a range of temporal and spatial scales, which will be gained through WFIP2’s observational field study and analysis.

  3. Welding arc plasma physics

    NASA Technical Reports Server (NTRS)

    Cain, Bruce L.

    1990-01-01

    The problems of weld quality control and weld process dependability continue to be relevant issues in modern metal welding technology. These become especially important for NASA missions which may require the assembly or repair of larger orbiting platforms using automatic welding techniques. To extend present welding technologies for such applications, NASA/MSFC's Materials and Processes Lab is developing physical models of the arc welding process with the goal of providing both a basis for improved design of weld control systems, and a better understanding of how arc welding variables influence final weld properties. The physics of the plasma arc discharge is reasonably well established in terms of transport processes occurring in the arc column itself, although recourse to sophisticated numerical treatments is normally required to obtain quantitative results. Unfortunately, the rigor of these numerical computations often obscures the physics of the underlying model due to its inherent complexity. In contrast, this work has focused on a relatively simple physical model of the arc discharge to describe the gross features observed in welding arcs. Emphasis was placed on deriving analytic expressions for the voltage along the arc axis as a function of known or measurable arc parameters. The model retains the essential physics for a straight-polarity, diffusion-dominated free-burning arc in argon, with major simplifications of collisionless sheaths and simple energy balances at the electrodes.

  4. A Complex Systems Model Approach to Quantified Mineral Resource Appraisal

    USGS Publications Warehouse

    Gettings, M.E.; Bultman, M.W.; Fisher, F.S.

    2004-01-01

    For federal and state land management agencies, mineral resource appraisal has evolved from value-based to outcome-based procedures wherein the consequences of resource development are compared with those of other management options. Complex systems modeling is proposed as a general framework in which to build models that can evaluate outcomes. Three frequently used methods of mineral resource appraisal (subjective probabilistic estimates, weights of evidence modeling, and fuzzy logic modeling) are discussed to obtain insight into methods of incorporating complexity into mineral resource appraisal models. Fuzzy logic and weights of evidence are most easily utilized in complex systems models. A fundamental product of new appraisals is the production of reusable, accessible databases and methodologies so that appraisals can easily be repeated with new or refined data. The data are representations of complex systems and must be so regarded if all of their information content is to be utilized. The proposed generalized model framework is applicable to mineral assessment and other geoscience problems. We begin with a (fuzzy) cognitive map using (+1,0,-1) values for the links and evaluate the map for various scenarios to obtain a ranking of the importance of various links. Fieldwork and modeling studies identify important links and help identify unanticipated links. Next, the links are given membership functions in accordance with the data. Finally, processes are associated with the links; ideally, the controlling physical and chemical events and equations are found for each link. After calibration and testing, this complex systems model is used for predictions under various scenarios.
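The first modeling step described above, evaluating a (fuzzy) cognitive map with (+1, 0, -1) link values under various scenarios, can be sketched as iterating signed link weights until concept activations settle; the concepts, links, and clamping squash below are hypothetical illustrations, not the authors' actual map.

```python
def step(activations, links):
    """One update of a cognitive map: each concept adds its weighted
    inputs to its current value and is clamped to the range [-1, 1]."""
    new = {}
    for node in activations:
        total = activations[node] + sum(
            w * activations[src]
            for (src, dst), w in links.items() if dst == node)
        new[node] = max(-1.0, min(1.0, total))
    return new

def run_map(activations, links, n_iter=20):
    """Iterate the map until (in practice) activations settle."""
    for _ in range(n_iter):
        activations = step(activations, links)
    return activations

# Hypothetical three-link map: deposits promote alteration (+1),
# alteration promotes the resource score (+1), erosion suppresses it (-1).
links = {('deposit', 'alteration'): 1,
         ('alteration', 'score'): 1,
         ('erosion', 'score'): -1}
state = run_map({'deposit': 1.0, 'alteration': 0.0,
                 'score': 0.0, 'erosion': 0.5}, links)
# state['score'] settles at 1.0: the +1 path outweighs the -1 link here.
```

Re-running `run_map` with different initial activations plays the role of the scenario evaluation in the text: comparing settled states across scenarios ranks how strongly each link influences the outcome.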

  5. Simulation modelling for new gas turbine fuel controller creation.

    NASA Astrophysics Data System (ADS)

    Vendland, L. E.; Pribylov, V. G.; Borisov, Yu A.; Arzamastsev, M. A.; Kosoy, A. A.

    2017-11-01

    State-of-the-art gas turbine fuel flow control systems are based on the throttle principle. A major disadvantage of such systems is that they require a high-pressure fuel intake. A different approach to fuel flow control is to use a regulating compressor, and for this approach, because of the interaction between the controller and the gas turbine, a specific regulating compressor is required. Difficulties emerge as early as the requirement definition stage: to define requirements for a new object, its properties must be known. Simulation modelling helps to overcome these difficulties. At the requirement definition stage the most simplified mathematical model is used; the mathematical models will become more complex and detailed as the planned work advances. In the future, it is planned to adjust the regulating compressor physical model to work with a virtual gas turbine and a physical control system.

  6. Constructing high-accuracy intermolecular potential energy surface with multi-dimension Morse/Long-Range model

    NASA Astrophysics Data System (ADS)

    Zhai, Yu; Li, Hui; Le Roy, Robert J.

    2018-04-01

    Spectroscopically accurate Potential Energy Surfaces (PESs) are fundamental for explaining and making predictions of the infrared and microwave spectra of van der Waals (vdW) complexes, and the model used for the potential energy function is critically important for providing accurate, robust and portable analytical PESs. The Morse/Long-Range (MLR) model has proved to be one of the most general, flexible and accurate one-dimensional (1D) model potentials, as it has physically meaningful parameters, is flexible, smooth and differentiable everywhere to all orders, and extrapolates sensibly at both long and short range. The Multi-Dimensional Morse/Long-Range (mdMLR) potential energy model described herein is based on that 1D MLR model, and has proved effective and accurate in constructing potentials for various types of vdW complexes. In this paper, we review the current status of development of the mdMLR model and its application to vdW complexes. The future of the mdMLR model is also discussed. This review can serve as a tutorial for the construction of an mdMLR PES.
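For reference, the 1D MLR radial potential on which the mdMLR model builds has the standard form (a sketch from Le Roy's notation, supplied here for orientation rather than taken from this abstract):

```latex
V_{\mathrm{MLR}}(r) \;=\; \mathfrak{D}_e\left[1 \;-\; \frac{u_{\mathrm{LR}}(r)}{u_{\mathrm{LR}}(r_e)}\, e^{-\beta(r)\, y_p^{\mathrm{eq}}(r)}\right]^2,
\qquad
y_p^{\mathrm{eq}}(r) \;=\; \frac{r^p - r_e^p}{r^p + r_e^p},
```

where $\mathfrak{D}_e$ is the well depth, $r_e$ the equilibrium distance, $u_{\mathrm{LR}}(r)$ a sum of attractive inverse-power terms defining the long-range tail, and $\beta(r)$ a polynomial in the radial variable $y_p$ constrained so that $V_{\mathrm{MLR}}(r) \to \mathfrak{D}_e - u_{\mathrm{LR}}(r)$ at long range, which is what gives the model its physically meaningful parameters and sensible extrapolation.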

  7. Resolving the Antarctic contribution to sea-level rise: a hierarchical modelling framework.

    PubMed

    Zammit-Mangion, Andrew; Rougier, Jonathan; Bamber, Jonathan; Schön, Nana

    2014-06-01

    Determining the Antarctic contribution to sea-level rise from observational data is a complex problem. The number of physical processes involved (such as ice dynamics and surface climate) exceeds the number of observables, some of which have very poor spatial definition. This has led, in general, to solutions that utilise strong prior assumptions or physically based deterministic models to simplify the problem. Here, we present a new approach for estimating the Antarctic contribution, which only incorporates descriptive aspects of the physically based models in the analysis and in a statistical manner. By combining physical insights with modern spatial statistical modelling techniques, we are able to provide probability distributions on all processes deemed to play a role in both the observed data and the contribution to sea-level rise. Specifically, we use stochastic partial differential equations and their relation to geostatistical fields to capture our physical understanding and employ a Gaussian Markov random field approach for efficient computation. The method, an instantiation of Bayesian hierarchical modelling, naturally incorporates uncertainty in order to reveal credible intervals on all estimated quantities. The estimated sea-level rise contribution using this approach corroborates those found using a statistically independent method. © 2013 The Authors. Environmetrics Published by John Wiley & Sons, Ltd.

  8. Resolving the Antarctic contribution to sea-level rise: a hierarchical modelling framework†

    PubMed Central

    Zammit-Mangion, Andrew; Rougier, Jonathan; Bamber, Jonathan; Schön, Nana

    2014-01-01

    Determining the Antarctic contribution to sea-level rise from observational data is a complex problem. The number of physical processes involved (such as ice dynamics and surface climate) exceeds the number of observables, some of which have very poor spatial definition. This has led, in general, to solutions that utilise strong prior assumptions or physically based deterministic models to simplify the problem. Here, we present a new approach for estimating the Antarctic contribution, which only incorporates descriptive aspects of the physically based models in the analysis and in a statistical manner. By combining physical insights with modern spatial statistical modelling techniques, we are able to provide probability distributions on all processes deemed to play a role in both the observed data and the contribution to sea-level rise. Specifically, we use stochastic partial differential equations and their relation to geostatistical fields to capture our physical understanding and employ a Gaussian Markov random field approach for efficient computation. The method, an instantiation of Bayesian hierarchical modelling, naturally incorporates uncertainty in order to reveal credible intervals on all estimated quantities. The estimated sea-level rise contribution using this approach corroborates those found using a statistically independent method. © 2013 The Authors. Environmetrics Published by John Wiley & Sons, Ltd. PMID:25505370

  9. Spectrum simulation in DTSA-II.

    PubMed

    Ritchie, Nicholas W M

    2009-10-01

    Spectrum simulation is a useful practical and pedagogical tool. Particularly with complex samples or trace constituents, a simulation can help to understand the limits of the technique and the instrument parameters for the optimal measurement. DTSA-II, software for electron probe microanalysis, provides both easy-to-use and flexible tools for simulating common and less common sample geometries and materials. Analytical models based on φ(ρz) curves provide quick simulations of simple samples. Monte Carlo models based on electron and X-ray transport provide more sophisticated models of arbitrarily complex samples. DTSA-II provides a broad range of simulation tools in a framework with many different interchangeable physical models. In addition, DTSA-II provides tools for visualizing, comparing, manipulating, and quantifying simulated and measured spectra.
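    As a rough illustration of the Monte Carlo idea, the toy below tabulates a normalized depth distribution of "ionization" events from a single-scattering random walk. This is nothing like DTSA-II's actual transport physics (no screened-Rutherford cross sections, no energy loss); the step length and absorption probability are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

def toy_depth_distribution(n_electrons=20000, mean_free_path=0.05,
                           absorb_prob=0.1, n_bins=20, max_depth=1.0):
    """Toy transport: each electron steps an exponential distance along a
    direction re-randomized at every collision, depositing one 'ionization'
    per collision until it is absorbed or leaves the slab. Illustrative only."""
    counts = np.zeros(n_bins)
    for _ in range(n_electrons):
        z, cos_t = 0.0, 1.0              # start at the surface, moving down
        while True:
            z += rng.exponential(mean_free_path) * cos_t
            if z < 0 or z >= max_depth:  # backscattered or transmitted
                break
            counts[int(z / max_depth * n_bins)] += 1
            if rng.random() < absorb_prob:
                break
            cos_t = rng.uniform(-1, 1)   # isotropic re-scatter
    return counts / counts.max()         # normalized, phi(rho z)-like shape

phi = toy_depth_distribution()
```

    Real codes track energy loss and realistic scattering angles, but the histogram-over-trajectories structure is the same.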

  10. Dynamics of embedded curves by doubly-nonlocal reaction-diffusion systems

    NASA Astrophysics Data System (ADS)

    von Brecht, James H.; Blair, Ryan

    2017-11-01

    We study a class of nonlocal, energy-driven dynamical models that govern the motion of closed, embedded curves from both an energetic and dynamical perspective. Our energetic results provide a variety of ways to understand physically motivated energetic models in terms of more classical, combinatorial measures of complexity for embedded curves. This line of investigation culminates in a family of complexity bounds that relate a rather broad class of models to a generalized, or weighted, variant of the crossing number. Our dynamic results include global well-posedness of the associated partial differential equations, regularity of equilibria for these flows as well as a more detailed investigation of dynamics near such equilibria. Finally, we explore a few global dynamical properties of these models numerically.

  11. Implications of Biospheric Energization

    NASA Astrophysics Data System (ADS)

    Budding, Edd; Demircan, Osman; Gündüz, Güngör; Emin Özel, Mehmet

    2016-07-01

    Our physical model relating to the origin and development of lifelike processes from very simple beginnings is reviewed. This molecular ('ABC') process is compared with the chemoton model, noting the role of the autocatalytic tuning to the time-dependent source of energy. This substantiates a Darwinian character to evolution. The system evolves from very simple beginnings to a progressively more highly tuned, energized and complex responding biosphere that grows exponentially, albeit with a very low net growth factor. Rates of growth and complexity in the evolution raise disturbing issues of inherent stability. Autocatalytic processes can include a fractal character in their development, allowing recapitulative effects to be observed. This property, in allowing similarities of pattern to be recognized, can be useful in interpreting complex (lifelike) systems.

  12. Error and Uncertainty Quantification in the Numerical Simulation of Complex Fluid Flows

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2010-01-01

    The failure of numerical simulation to predict physical reality is often a direct consequence of the compounding effects of numerical error arising from finite-dimensional approximation and physical model uncertainty resulting from inexact knowledge and/or statistical representation. In this topical lecture, we briefly review systematic theories for quantifying numerical errors and restricted forms of model uncertainty occurring in simulations of fluid flow. A goal of this lecture is to elucidate both positive and negative aspects of applying these theories to practical fluid flow problems. Finite-element and finite-volume calculations of subsonic and hypersonic fluid flow are presented to contrast the differing roles of numerical error and model uncertainty for these problems.

  13. Particle Dark Matter constraints: the effect of Galactic uncertainties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benito, Maria; Bernal, Nicolás; Iocco, Fabio

    2017-02-01

    Collider, space, and Earth-based experiments are now able to probe several extensions of the Standard Model of particle physics which provide viable dark matter candidates. Direct and indirect dark matter searches rely on inputs of astrophysical nature, such as the local dark matter density or the shape of the dark matter density profile in the target object. The determination of these quantities is highly affected by astrophysical uncertainties. The latter, especially those for our own Galaxy, are ill-known, and often not fully accounted for when analyzing the phenomenology of particle physics models. In this paper we present a systematic, quantitative estimate of how astrophysical uncertainties on Galactic quantities (such as the local galactocentric distance, circular velocity, or the morphology of the stellar disk and bulge) propagate to the determination of the phenomenology of particle physics models, thus eventually affecting the determination of new physics parameters. We present results in the context of two specific extensions of the Standard Model (the Singlet Scalar and the Inert Doublet) that we adopt as case studies for their simplicity in illustrating the magnitude and impact of such uncertainties on the parameter space of the particle physics model itself. Our findings point toward very relevant effects of current Galactic uncertainties on the determination of particle physics parameters, and urge a systematic estimate of such uncertainties in more complex scenarios, in order to achieve constraints on the determination of new physics that realistically include all known uncertainties.

  14. A Bayesian approach for parameter estimation and prediction using a computationally intensive model

    DOE PAGES

    Higdon, Dave; McDonnell, Jordan D.; Schunck, Nicolas; ...

    2015-02-05

    Bayesian methods have been successful in quantifying uncertainty in physics-based problems in parameter estimation and prediction. In these cases, physical measurements y are modeled as the best fit of a physics-based model η(θ), where θ denotes the uncertain, best input setting. Hence the statistical model is of the form y = η(θ) + ε, where ε accounts for measurement, and possibly other, error sources. When nonlinearity is present in η(·), the resulting posterior distribution for the unknown parameters in the Bayesian formulation is typically complex and nonstandard, requiring computationally demanding approaches such as Markov chain Monte Carlo (MCMC) to produce multivariate draws from the posterior. Although generally applicable, MCMC requires thousands (or even millions) of evaluations of the physics model η(·). This requirement is problematic if the model takes hours or days to evaluate. To overcome this computational bottleneck, we present an approach adapted from Bayesian model calibration. This approach combines output from an ensemble of computational model runs with physical measurements, within a statistical formulation, to carry out inference. A key component of this approach is a statistical response surface, or emulator, estimated from the ensemble of model runs. We demonstrate this approach with a case study in estimating parameters for a density functional theory model, using experimental mass/binding energy measurements from a collection of atomic nuclei. Lastly, we also demonstrate how this approach produces uncertainties in predictions for recent mass measurements obtained at Argonne National Laboratory.
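    The emulator-based calibration loop described above can be sketched as follows, with a deliberately cheap stand-in for η(θ) (this is a generic illustration, not the authors' Gaussian-process machinery): fit a response surface to a small ensemble of model runs, then run Metropolis MCMC against the emulator only.

```python
import numpy as np

rng = np.random.default_rng(2)

# Pretend eta(theta) is expensive; here it is a hidden 'physics model'.
def eta(theta):
    return np.sin(theta) + 0.5 * theta

# 1. Ensemble of model runs at design points (the only calls to eta).
design = np.linspace(-2, 2, 15)
runs = eta(design)

# 2. Cheap emulator: a polynomial response surface fit to the ensemble.
coef = np.polyfit(design, runs, deg=3)
emulator = lambda t: np.polyval(coef, t)

# 3. Calibration data y = eta(theta_true) + eps.
theta_true, sigma = 0.8, 0.05
y = eta(theta_true) + sigma * rng.normal(size=5)

def log_post(t):                       # flat prior on [-2, 2]
    if not -2 <= t <= 2:
        return -np.inf
    return -0.5 * np.sum((y - emulator(t))**2) / sigma**2

# 4. Metropolis MCMC on the emulator: no further eta evaluations needed.
chain, t = [], 0.0
lp = log_post(t)
for _ in range(4000):
    prop = t + 0.2 * rng.normal()
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        t, lp = prop, lp_prop
    chain.append(t)
post = np.array(chain[1000:])          # discard burn-in
```

    The posterior draws then reflect both the data noise and (in a full treatment) the emulator's own interpolation uncertainty.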

  15. DiffPy-CMI: Python libraries for the Complex Modeling Initiative

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Billinge, Simon; Juhas, Pavol; Farrow, Christopher

    2014-02-01

    Software to manipulate and describe crystal and molecular structures and to set up structural refinements from multiple experimental inputs. It supports calculation and simulation of structure-derived physical quantities and provides a library for creating customized refinements of atomic structures from available experimental and theoretical inputs.

  16. Experimental econophysics: Complexity, self-organization, and emergent properties

    NASA Astrophysics Data System (ADS)

    Huang, J. P.

    2015-03-01

    Experimental econophysics is concerned with the statistical physics of humans in the laboratory: controlled human experiments, developed by physicists to study problems related to economics or finance, are combined with agent-based modeling (for computer simulations and/or analytical theory) in an attempt to reveal general cause-effect relationships between specific conditions and emergent properties of real economic/financial markets (a kind of complex adaptive system). Here I review the latest progress in the field, namely, stylized facts, herd behavior, contrarian behavior, spontaneous cooperation, partial information, and risk management. I also highlight the connections between this progress and other topics of traditional statistical physics. The main theme of the review is to show the diverse emergent properties of laboratory markets, originating from self-organization due to the nonlinear interactions among heterogeneous humans or agents (complexity).

  17. Shock tubes and blast injury modeling.

    PubMed

    Ning, Ya-Lei; Zhou, Yuan-Guo

    2015-01-01

    Explosive blast injury has become the most prevalent injury in recent military conflicts and terrorist attacks. The magnitude and complexity of this kind of polytrauma stem from the basic physics of blast and the surrounding environment. Therefore, development of a stable, reproducible, and controllable animal model using an appropriate blast-simulation device is key to blast injury research. The present review addresses the modeling of blast injury and applications of shock tubes.
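    Shock-tube exposures are usually characterized against an idealized free-field blast wave. A common choice is the Friedlander waveform, sketched below with made-up parameter values (the review itself does not prescribe this form or these numbers):

```python
import numpy as np

def friedlander(t, p_peak=250.0, t_dur=2.0e-3, b=1.0):
    """Idealized Friedlander blast overpressure (kPa) at time t (s):
    instantaneous rise to p_peak, exponential decay through a positive
    phase of duration t_dur, then a negative phase. Parameter values
    here are illustrative, not taken from any experiment."""
    t = np.asarray(t, dtype=float)
    p = p_peak * (1.0 - t / t_dur) * np.exp(-b * t / t_dur)
    return np.where(t < 0, 0.0, p)

t = np.linspace(0.0, 6e-3, 601)
p = friedlander(t)
# Positive-phase impulse, a standard injury-severity metric.
impulse = float(np.sum(np.clip(p, 0.0, None)) * (t[1] - t[0]))
```

    Peak overpressure, positive-phase duration, and impulse are the quantities typically matched between a shock tube and the free-field wave it is meant to reproduce.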

  18. Mock Data Challenge for the MPD/NICA Experiment on the HybriLIT Cluster

    NASA Astrophysics Data System (ADS)

    Gertsenberger, Konstantin; Rogachevsky, Oleg

    2018-02-01

    Simulation of data processing before receiving the first experimental data is an important issue in high-energy physics experiments. This article presents the current Event Data Model and the Mock Data Challenge for the MPD experiment at the NICA accelerator complex, which uses ongoing simulation studies to stress-test the distributed computing infrastructure and experiment software in the full production environment, from simulated data through to physics analysis.

  19. Complexity in built environment, health, and destination walking: a neighborhood-scale analysis.

    PubMed

    Carlson, Cynthia; Aytur, Semra; Gardner, Kevin; Rogers, Shannon

    2012-04-01

    This study investigates the relationships between the built environment, the physical attributes of the neighborhood, and the residents' perceptions of those attributes. It focuses on destination walking and self-reported health, and does so at the neighborhood scale. The built environment, in particular sidewalks, road connectivity, and proximity of local destinations, correlates with destination walking, and similarly destination walking correlates with physical health. It was found, however, that the built environment and health metrics may not be simply, directly correlated but rather may be correlated through a series of feedback loops that may regulate risk in different ways in different contexts. In particular, evidence for a feedback loop between physical health and destination walking is observed, as well as separate feedback loops between destination walking and objective metrics of the built environment, and destination walking and perception of the built environment. These feedback loops affect the ability to observe how the built environment correlates with residents' physical health. Previous studies have investigated pieces of these associations, but are potentially missing the more complex relationships present. This study proposes a conceptual model describing complex feedback relationships between destination walking and public health, with the built environment expected to increase or decrease the strength of the feedback loop. Evidence supporting these feedback relationships is presented.

  20. Scalable Methods for Uncertainty Quantification, Data Assimilation and Target Accuracy Assessment for Multi-Physics Advanced Simulation of Light Water Reactors

    NASA Astrophysics Data System (ADS)

    Khuwaileh, Bassam

    High fidelity simulation of nuclear reactors entails large scale applications characterized by high dimensionality and tremendous complexity, where various physics models are integrated in the form of coupled models (e.g. neutronic with thermal-hydraulic feedback). Each of the coupled modules represents a high fidelity formulation of the first principles governing the physics of interest. Therefore, new developments in high fidelity multi-physics simulation and the corresponding sensitivity/uncertainty quantification analysis are paramount to the development and competitiveness of reactors, achieved through enhanced understanding of the design and safety margins. Accordingly, this dissertation introduces efficient, scalable algorithms for performing Uncertainty Quantification (UQ), Data Assimilation (DA) and Target Accuracy Assessment (TAA) for large scale, multi-physics reactor design and safety problems. This dissertation builds upon previous efforts for adaptive core simulation and reduced order modeling algorithms and extends these efforts towards coupled multi-physics models with feedback. The core idea is to recast the reactor physics analysis in terms of reduced order models. This can be achieved by identifying the important/influential degrees of freedom (DoF) via subspace analysis, such that the required analysis can be recast by considering the important DoF only. In this dissertation, efficient algorithms for lower dimensional subspace construction have been developed for single physics and multi-physics applications with feedback. Then the reduced subspace is used to solve realistic, large scale forward (UQ) and inverse problems (DA and TAA). Once the elite set of DoF is determined, the uncertainty/sensitivity/target accuracy assessment and data assimilation analysis can be performed accurately and efficiently for large scale, high dimensional multi-physics nuclear engineering applications.
Hence, in this work a Karhunen-Loeve (KL) based algorithm previously developed to quantify the uncertainty for single physics models is extended to large scale multi-physics coupled problems with feedback effects. Moreover, a non-linear surrogate based UQ approach is developed, used and compared to the performance of the KL approach and a brute force Monte Carlo (MC) approach. On the other hand, an efficient Data Assimilation (DA) algorithm is developed to assess information about the model's parameters: nuclear data cross-sections and thermal-hydraulics parameters. Two improvements are introduced in order to perform DA on high dimensional problems. First, a goal-oriented surrogate model can be used to replace the original models in the depletion sequence (MPACT - COBRA-TF - ORIGEN). Second, approximating the complex and high dimensional solution space with a lower dimensional subspace makes the sampling process necessary for DA possible for high dimensional problems. Moreover, safety analysis and design optimization depend on the accurate prediction of various reactor attributes. Predictions can be enhanced by reducing the uncertainty associated with the attributes of interest. Accordingly, an inverse problem can be defined and solved to assess the contributions from sources of uncertainty, and experimental effort can subsequently be directed to further improve the uncertainty associated with these sources. In this dissertation a subspace-based, gradient-free and nonlinear algorithm for inverse uncertainty quantification, namely the Target Accuracy Assessment (TAA), has been developed and tested. The ideas proposed in this dissertation were first validated using lattice physics applications simulated with the SCALE6.1 package (Pressurized Water Reactor (PWR) and Boiling Water Reactor (BWR) lattice models).
Ultimately, the algorithms proposed here were applied to perform UQ and DA for assembly level (CASL progression problem number 6) and core wide problems representing Watts Bar Nuclear 1 (WBN1) for cycle 1 of depletion (CASL Progression Problem Number 9), modeled using VERA-CS, which consists of several multi-physics coupled models. The analysis and algorithms developed in this dissertation were encoded and implemented in a newly developed toolkit, the Reduced Order Modeling based Uncertainty/Sensitivity Estimator (ROMUSE).
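    The subspace-construction step underlying such reduced order modeling can be sketched with a plain SVD/POD of a snapshot matrix. This is a generic toy, not the dissertation's KL algorithm; the dimensions, energy tolerance, and hidden rank are all invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# Snapshot matrix: columns are outputs of (hypothetical) model runs with
# perturbed inputs; here 200-dim responses that secretly live near a
# 3-dimensional active subspace.
n_out, n_runs, true_rank = 200, 40, 3
basis_true = rng.normal(size=(n_out, true_rank))
snapshots = basis_true @ rng.normal(size=(true_rank, n_runs))
snapshots += 1e-6 * rng.normal(size=(n_out, n_runs))   # numerical noise

# POD/KL: SVD of the snapshots; keep modes capturing 99.9% of the energy.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1
basis = U[:, :r]                     # reduced-order basis

# Any new response can now be compressed to r coordinates.
new_run = basis_true @ rng.normal(size=true_rank)
coords = basis.T @ new_run
reconstruction = basis @ coords
err = np.linalg.norm(new_run - reconstruction) / np.linalg.norm(new_run)
```

    Once the influential DoF are captured in `basis`, sampling for UQ or DA operates on the r-dimensional coordinates instead of the full state, which is what makes the high dimensional inverse problems tractable.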

  1. Climate refugia: The physical, hydrologic and disturbance basis

    NASA Astrophysics Data System (ADS)

    Holden, Z. A.; Maneta, M. P.; Forthofer, J.

    2015-12-01

    Projected changes in global climate and associated shifts in vegetation have increased interest in understanding species persistence at local scales. We examine the climatic and physical factors that could mediate changes in the distribution of vegetation in regions of complex topography. Using massive networks of low-cost temperature and humidity sensors, we developed topographically-resolved daily historical gridded temperature data for the US Northern Rockies. We used the WindNinja model to create daily historical wind speed maps across the same domain. Using a spatially distributed ecohydrology model (ECH2O) we examine separately the sensitivity of modeled evapotranspiration and soil moisture to wind, radiation, soil properties, minimum temperature and humidity. A suite of physical factors including lower wind speeds, cold air drainage, solar shading and increased soil depth reduce evapotranspiration and increase late season moisture availability in valley bottoms. Evapotranspiration shows strong sensitivity to spatial variability in surface wind speed, suggesting that sheltering effects from winds may be an important factor contributing to mountain refugia. Fundamental to our understanding of patterns of vegetation change is the role of stand-replacing wildfires, which modify the physical environment and subsequent patterns of species persistence and recruitment. Using satellite-derived maps of burn severity for recent fires in the US Northern Rockies we examined relationships between wind speed, cold air drainage potential and soil depth and the occurrence of unburned and low severity fire. Severe fire is less likely to occur in areas with high cold air drainage potential and low wind speeds, suggesting that sheltered valley bottoms have mediated the severity of recent wildfires. Our findings highlight the complex physical mechanisms by which mountain weather and climate mediate fire-induced vegetation changes in the US Northern Rocky Mountains.

  2. Constrained Total Energy Expenditure and Metabolic Adaptation to Physical Activity in Adult Humans.

    PubMed

    Pontzer, Herman; Durazo-Arvizu, Ramon; Dugas, Lara R; Plange-Rhule, Jacob; Bovet, Pascal; Forrester, Terrence E; Lambert, Estelle V; Cooper, Richard S; Schoeller, Dale A; Luke, Amy

    2016-02-08

    Current obesity prevention strategies recommend increasing daily physical activity, assuming that increased activity will lead to corresponding increases in total energy expenditure and prevent or reverse energy imbalance and weight gain [1-3]. Such Additive total energy expenditure models are supported by exercise intervention and accelerometry studies reporting positive correlations between physical activity and total energy expenditure [4] but are challenged by ecological studies in humans and other species showing that more active populations do not have higher total energy expenditure [5-8]. Here we tested a Constrained total energy expenditure model, in which total energy expenditure increases with physical activity at low activity levels but plateaus at higher activity levels as the body adapts to maintain total energy expenditure within a narrow range. We compared total energy expenditure, measured using doubly labeled water, against physical activity, measured using accelerometry, for a large (n = 332) sample of adults living in five populations [9]. After adjusting for body size and composition, total energy expenditure was positively correlated with physical activity, but the relationship was markedly stronger over the lower range of physical activity. For subjects in the upper range of physical activity, total energy expenditure plateaued, supporting a Constrained total energy expenditure model. Body fat percentage and activity intensity appear to modulate the metabolic response to physical activity. Models of energy balance employed in public health [1-3] should be revised to better reflect the constrained nature of total energy expenditure and the complex effects of physical activity on metabolic physiology. Copyright © 2016 Elsevier Ltd. All rights reserved.
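    The contrast between the Additive and Constrained models can be illustrated by fitting a saturating curve against a straight line on synthetic data (the numbers below are invented, not the study's doubly labeled water measurements):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic illustration: total energy expenditure (TEE) that rises with
# physical activity (PA) at low levels but plateaus at high levels.
pa = rng.uniform(0, 10, 300)                     # activity, arbitrary units
tee_true = 2500 + 600 * (1 - np.exp(-pa / 2.0))  # constrained response
tee = tee_true + 80 * rng.normal(size=300)       # kcal/day, with noise

def fit_constrained(pa, tee, c_grid=np.linspace(0.5, 5, 46)):
    """Fit TEE = a + b*(1 - exp(-PA/c)): grid over the saturation scale c,
    ordinary least squares for (a, b) at each candidate c."""
    best = None
    for c in c_grid:
        X = np.column_stack([np.ones_like(pa), 1 - np.exp(-pa / c)])
        beta, *_ = np.linalg.lstsq(X, tee, rcond=None)
        sse = np.sum((tee - X @ beta)**2)
        if best is None or sse < best[0]:
            best = (sse, c, beta)
    return best

sse_con, c_hat, (a_hat, b_hat) = fit_constrained(pa, tee)

# Compare with the additive (straight-line) model.
Xl = np.column_stack([np.ones_like(pa), pa])
beta_l, *_ = np.linalg.lstsq(Xl, tee, rcond=None)
sse_add = np.sum((tee - Xl @ beta_l)**2)
```

    On data generated from a plateauing response, the saturating fit achieves a lower residual sum of squares than the additive line, which is the qualitative signature the study reports.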

  3. Complexity, accuracy and practical applicability of different biogeochemical model versions

    NASA Astrophysics Data System (ADS)

    Los, F. J.; Blaas, M.

    2010-04-01

    The construction of validated biogeochemical model applications as prognostic tools for the marine environment involves a large number of choices, particularly with respect to the level of detail of the physical, chemical and biological aspects. Generally speaking, enhanced complexity might enhance veracity, accuracy and credibility. However, very complex models are not necessarily effective or efficient forecast tools. In this paper, models of varying degrees of complexity are evaluated with respect to their forecast skills. In total 11 biogeochemical model variants have been considered based on four different horizontal grids. The applications vary in spatial resolution, in vertical resolution (2DH versus 3D), in nature of transport, in turbidity and in the number of phytoplankton species. Included models range from 15-year-old applications with relatively simple physics up to present state-of-the-art 3D models. The same year, 2003, has been simulated with all applications. During the model intercomparison it has been noticed that the 'OSPAR' Goodness of Fit cost function (Villars and de Vries, 1998) leads to insufficient discrimination of different models. This results in models obtaining similar scores although closer inspection of the results reveals large differences. In this paper therefore, we have adopted the target diagram by Jolliff et al. (2008), which provides a concise and more contrasting picture of model skill on the entire model domain and for the entire period of the simulations. Correctness in predicting the mean and the variability is assessed separately, enhancing insight into model functioning. Using the target diagrams it is demonstrated that recent models are more consistent and have smaller biases. Graphical inspection of time series confirms this, as the level of variability appears more realistic, also given the multi-annual background statistics of the observations.
Nevertheless, whether the improvements are all genuine for the particular year cannot be judged due to the low sampling frequency of the traditional monitoring data at hand. Specifically, the overall results for chlorophyll-a are rather consistent throughout all models, but regionally recent models are better; resolution is crucial for the accuracy of transport and more important than the nature of the forcing of the transport; SPM strongly affects the biomass simulation and species composition, but even the most recent SPM results do not yet obtain a good overall score; coloured dissolved organic matter (CDOM) should be included in the calculation of the light regime; more complexity in the phytoplankton model improves the chlorophyll-a simulation, but the simulated species composition needs further improvement for some of the functional groups.
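    The target diagram mentioned above reduces model skill to two numbers per model. A minimal implementation of the Jolliff et al. (2008) coordinates might look like the following (bias on the y-axis; unbiased RMSD, signed by the standard-deviation comparison, on the x-axis; normalizing by the observed standard deviation is one common convention, not the only one):

```python
import numpy as np

def target_coords(model, obs):
    """Target-diagram coordinates following Jolliff et al. (2008):
    returns (x, y) where y is the normalized bias and x is the unbiased
    RMSD, given a negative sign when the model variability is smaller
    than the observed variability."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    bias = model.mean() - obs.mean()
    m_anom, o_anom = model - model.mean(), obs - obs.mean()
    urmsd = np.sqrt(np.mean((m_anom - o_anom)**2))
    sign = 1.0 if model.std() >= obs.std() else -1.0
    return sign * urmsd / obs.std(), bias / obs.std()

obs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
good = obs + 0.1          # pure bias, perfect variability and phasing
x, y = target_coords(good, obs)
```

    Distance from the origin summarizes total error, which is why the diagram discriminates models that a single aggregate cost function scores identically.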

  4. Trends in the salience of data collected in a multi user virtual environment: An exploratory study

    NASA Astrophysics Data System (ADS)

    Tutwiler, M. Shane

    In this study, by exploring patterns in the degree of physical salience of the data the students collected, I investigated the relationship between the level of students' tendency to frame explanations in terms of complex patterns and evidence of how they attend to and select data in support of their developing understandings of causal relationships. I accomplished this by analyzing longitudinal data collected as part of a larger study of 143 7th grade students (clustered within 36 teams, 5 teachers, and 2 schools in the same Northeastern school district) as they navigated and collected data in an ecosystems-based multi-user virtual environment curriculum known as the EcoMUVE Pond module (Metcalf, Kamarainen, Tutwiler, Grotzer, & Dede, 2011). Using individual growth modeling (Singer & Willett, 2003), I found no direct link between student pre-intervention tendency to offer explanations containing complex causal components and patterns of physical salience-driven data collection (average physical salience level, number of low physical salience data points collected, and proportion of low physical salience data points collected), though prior science content knowledge did affect the initial status and rate of change of outcomes in the average physical salience level and proportion of low physical salience data collected over time. The findings of this study suggest two issues for consideration about the use of MUVEs to study student data collection behaviors in complex spaces. Firstly, the structure of the curriculum in which the MUVE is embedded might have a direct effect on what types of data students choose to collect. This undercuts our ability to make inferences about student-driven decisions to collect specific types of data, and suggests that a more open-ended curricular model might be better suited to this type of inquiry.
Secondly, differences between teachers' choices in how to facilitate the units likely contribute to the variance in student data collection behaviors between students with different teachers. This foreshadows external validity issues in studies that use behaviors of students within a single class to develop "detectors" of student latent traits (e.g., Baker, Corbett, Roll, Koedinger, 2008).

  5. Challenges of Representing Sub-Grid Physics in an Adaptive Mesh Refinement Atmospheric Model

    NASA Astrophysics Data System (ADS)

    O'Brien, T. A.; Johansen, H.; Johnson, J. N.; Rosa, D.; Benedict, J. J.; Keen, N. D.; Collins, W.; Goodfriend, E.

    2015-12-01

    Some of the greatest potential impacts from future climate change are tied to extreme atmospheric phenomena that are inherently multiscale, including tropical cyclones and atmospheric rivers. Extremes are challenging to simulate in conventional climate models due to existing models' coarse resolutions relative to the native length-scales of these phenomena. Studying the weather systems of interest requires an atmospheric model with sufficient local resolution, and sufficient performance for long-duration climate-change simulations. To this end, we have developed a new global climate code with adaptive spatial and temporal resolution. The dynamics are formulated using a block-structured conservative finite volume approach suitable for moist non-hydrostatic atmospheric dynamics. By using both space- and time-adaptive mesh refinement, the solver focuses computational resources only where greater accuracy is needed to resolve critical phenomena. We explore different methods for parameterizing sub-grid physics, such as microphysics, macrophysics, turbulence, and radiative transfer. In particular, we contrast the simplified physics representation of Reed and Jablonowski (2012) with the more complex physics representation used in the System for Atmospheric Modeling of Khairoutdinov and Randall (2003). We also explore the use of a novel macrophysics parameterization that is designed to be explicitly scale-aware.
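    The refinement decision in such an adaptive mesh solver is, at its core, a tagging criterion applied cell by cell. The 1-D sketch below uses a generic undivided-gradient test (not the criterion of this particular code; the threshold is arbitrary):

```python
import numpy as np

def flag_cells_for_refinement(u, rel_tol=0.1):
    """1-D sketch of AMR tagging: flag cells whose undivided gradient
    exceeds rel_tol times the solution range. Real block-structured AMR
    codes tag cells, then cluster the flags into refined blocks and
    subcycle in time; only the tagging criterion is shown here."""
    grad = np.abs(np.diff(u))
    thresh = rel_tol * (u.max() - u.min())
    flags = np.zeros_like(u, dtype=bool)
    flags[:-1] |= grad > thresh    # flag the left cell of a steep face
    flags[1:] |= grad > thresh     # ...and the right cell
    return flags

x = np.linspace(0.0, 1.0, 101)
u = np.tanh((x - 0.5) / 0.02)      # sharp front at x = 0.5
flags = flag_cells_for_refinement(u)
```

    Only the handful of cells straddling the front get flagged, which is the mechanism by which the solver concentrates resolution on tropical cyclones or atmospheric rivers while leaving quiescent regions coarse.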

  6. Reactive transport simulation via combination of a multiphase-capable transport code for unstructured meshes with a Gibbs energy minimization solver of geochemical equilibria

    NASA Astrophysics Data System (ADS)

    Fowler, S. J.; Driesner, T.; Hingerl, F. F.; Kulik, D. A.; Wagner, T.

    2011-12-01

    We apply a new, C++-based computational model for hydrothermal fluid-rock interaction and scale formation in geothermal reservoirs. The model couples the Complex System Modelling Platform (CSMP++) code for fluid flow in porous and fractured media (Matthäi et al., 2007) with the Gibbs energy minimization numerical kernel GEMS3K of the GEM-Selektor (GEMS3) geochemical modelling package (Kulik et al., 2010) in a modular fashion. CSMP++ includes interfaces to commercial file formats, accommodating complex geometry construction using CAD (Rhinoceros) and meshing (ANSYS) software. The CSMP++ approach employs finite element-finite volume spatial discretization, implicit or explicit time discretization, and operator splitting. GEMS3K can calculate complex fluid-mineral equilibria based on a variety of equation of state and activity models. A selection of multi-electrolyte aqueous solution models, such as extended Debye-Hückel, Pitzer (Harvie et al., 1984), EUNIQUAC (Thomsen et al., 1996), and the new ELVIS model (Hingerl et al., this conference), makes it well-suited for application to a wide range of geothermal conditions. An advantage of the GEMS3K solver is simultaneous consideration of complex solid solutions (e.g., clay minerals), gases, fluids, and aqueous solutions. Each coupled simulation results in a thermodynamically-based description of the geochemical and physical state of a hydrothermal system evolving along a complex P-T-X path. The code design allows efficient, flexible incorporation of numerical and thermodynamic database improvements. We demonstrate the coupled code workflow and applicability to compositionally and physically complex natural systems relevant to enhanced geothermal systems, where temporally and spatially varying chemical interactions may take place within diverse lithologies of varying geometry.
References: Engesgaard, P. & Kipp, K. L. (1992). Water Res. Res. 28: 2829-2843. Harvie, C. E., Møller, N. & Weare, J. H. (1984). Geochim. Cosmochim. Acta 48: 723-751. Kulik, D. A., Wagner, T., Dmytrieva, S. V., et al. (2010). GEM-Selektor home page, Paul Scherrer Institut. Available at http://gems.web.psi.ch. Matthäi, S. K., Geiger, S., Roberts, S. G., Paluszny, A., Belayneh, M., Burri, A., Mezentsev, A., Lu, H., Coumou, D., Driesner, T. & Heinrich, C. A. (2007). Geol. Soc. London, Spec. Publ. 292: 405-429. Thomsen, K., Rasmussen, P. & Gani, R. (1996). Chem. Eng. Sci. 51: 3675-3683.

  7. The adaption and use of research codes for performance assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liebetrau, A.M.

    1987-05-01

    Models of real-world phenomena are developed for many reasons. The models are usually, if not always, implemented in the form of a computer code. The characteristics of a code are determined largely by its intended use. Realizations or implementations of detailed mathematical models of complex physical and/or chemical processes are often referred to as research or scientific (RS) codes. Research codes typically require large amounts of computing time. One example of an RS code is a finite-element code for solving complex systems of differential equations that describe mass transfer through some geologic medium. Considerable computing time is required because computations are done at many points in time and/or space. Codes used to evaluate the overall performance of real-world physical systems are called performance assessment (PA) codes. Performance assessment codes are used to conduct simulated experiments involving systems that cannot be directly observed. Thus, PA codes usually involve repeated simulations of system performance in situations that preclude the use of conventional experimental and statistical methods. 3 figs.

  8. wfip2.model/retro.hrrr.01.fcst.01 (Model: 10-Day Retrospective)

    DOE Data Explorer

    Macduff, Matt

    2017-10-27

    The primary purpose of the WFIP2 Model Development Team is to improve existing numerical weather prediction models in a manner that leads to improved wind forecasts in regions of complex terrain. Improvements in the models will come through better understanding of the physics associated with the wind flow in and around the wind plant across a range of temporal and spatial scales, which will be gained through WFIP2’s observational field study and analysis.

  9. wfip2.model/retro.hrrr.02.fcst.01 (Model: 10-Day Retrospective)

    DOE Data Explorer

    Macduff, Matt

    2017-10-27

    The primary purpose of the WFIP2 Model Development Team is to improve existing numerical weather prediction models in a manner that leads to improved wind forecasts in regions of complex terrain. Improvements in the models will come through better understanding of the physics associated with the wind flow in and around the wind plant across a range of temporal and spatial scales, which will be gained through WFIP2’s observational field study and analysis.

  10. wfip2.model/retro.hrrr.02.fcst.02 (Model: 10-Day Retrospective)

    DOE Data Explorer

    Macduff, Matt

    2017-10-27

    The primary purpose of the WFIP2 Model Development Team is to improve existing numerical weather prediction models in a manner that leads to improved wind forecasts in regions of complex terrain. Improvements in the models will come through better understanding of the physics associated with the wind flow in and around the wind plant across a range of temporal and spatial scales, which will be gained through WFIP2’s observational field study and analysis.

  11. wfip2.model/retro.rap.01.fcst.01 (Model: 10-Day Retrospective)

    DOE Data Explorer

    Macduff, Matt

    2017-10-27

    The primary purpose of the WFIP2 Model Development Team is to improve existing numerical weather prediction models in a manner that leads to improved wind forecasts in regions of complex terrain. Improvements in the models will come through better understanding of the physics associated with the wind flow in and around the wind plant across a range of temporal and spatial scales, which will be gained through WFIP2’s observational field study and analysis.

  12. wfip2.model/realtime.hrrr_wfip2.graphics.02 (Model: Real Time)

    DOE Data Explorer

    Macduff, Matt

    2017-10-27

    The primary purpose of the WFIP2 Model Development Team is to improve existing numerical weather prediction models in a manner that leads to improved wind forecasts in regions of complex terrain. Improvements in the models will come through better understanding of the physics associated with the wind flow in and around the wind plant across a range of temporal and spatial scales, which will be gained through WFIP2’s observational field study and analysis.

  13. wfip2.model/retro.rap.02.fcst.01 (Model: 10-Day Retrospective)

    DOE Data Explorer

    Macduff, Matt

    2017-10-27

    The primary purpose of the WFIP2 Model Development Team is to improve existing numerical weather prediction models in a manner that leads to improved wind forecasts in regions of complex terrain. Improvements in the models will come through better understanding of the physics associated with the wind flow in and around the wind plant across a range of temporal and spatial scales, which will be gained through WFIP2’s observational field study and analysis.

  14. wfip2.model/realtime.hrrr_wfip2.icbc.02 (Model: Real Time)

    DOE Data Explorer

    Macduff, Matt

    2017-10-27

    The primary purpose of the WFIP2 Model Development Team is to improve existing numerical weather prediction models in a manner that leads to improved wind forecasts in regions of complex terrain. Improvements in the models will come through better understanding of the physics associated with the wind flow in and around the wind plant across a range of temporal and spatial scales, which will be gained through WFIP2’s observational field study and analysis.

  15. wfip2.model/retro.hrrr.01.fcst.02 (Model: 10-Day Retrospective)

    DOE Data Explorer

    Macduff, Matt

    2017-10-27

    The primary purpose of the WFIP2 Model Development Team is to improve existing numerical weather prediction models in a manner that leads to improved wind forecasts in regions of complex terrain. Improvements in the models will come through better understanding of the physics associated with the wind flow in and around the wind plant across a range of temporal and spatial scales, which will be gained through WFIP2’s observational field study and analysis.

  16. Multi-physics CFD simulations in engineering

    NASA Astrophysics Data System (ADS)

    Yamamoto, Makoto

    2013-08-01

    Nowadays, Computational Fluid Dynamics (CFD) software is adopted as a design and analysis tool in a great number of engineering fields, and single-physics CFD can be considered sufficiently mature from a practical point of view. The main target of existing CFD software is single-phase flows such as water and air. However, many multi-physics problems exist in engineering. Most of them consist of flow coupled with other physics, and the interactions between the different physics are very important. Multi-physics phenomena are clearly critical in developing machines and processes, yet they are complex and difficult to predict simply by adding other physics to a flow simulation. Multi-physics CFD techniques are therefore still under research and development. This is partly because the processing speed of current computers is not high enough to conduct multi-physics simulations, and partly because physical models other than flow physics have not been suitably established. In the near future, we must therefore develop various physical models and efficient CFD techniques in order to make multi-physics simulations in engineering successful. In the present paper, I describe the present state of multi-physics CFD simulation and then show some numerical results, such as ice accretion and the electro-chemical machining of a three-dimensional compressor blade, obtained in my laboratory. Multi-physics CFD simulation is likely to be a key technology in the near future.

  17. Quantum-like Probabilistic Models Outside Physics

    NASA Astrophysics Data System (ADS)

    Khrennikov, Andrei

    We present a quantum-like (QL) model in which contexts (complexes of, e.g., mental, social, biological, economic, or even political conditions) are represented by complex probability amplitudes. This approach makes it possible to apply the mathematical quantum formalism to probabilities induced in any domain of science. In our model, quantum randomness appears not as irreducible randomness (as is commonly accepted in conventional quantum mechanics, e.g., by von Neumann and Dirac), but as a consequence of incomplete information about a system. We pay particular attention to the QL description of the processing of incomplete information. Our QL model can be useful in the cognitive, social, and political sciences, as well as in economics and artificial intelligence. In this paper we consider in more detail one special application: QL modeling of the brain's functioning, in which the brain is modeled as a QL computer.
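
    The hallmark of such QL models is that context-conditioned probabilities combine through complex amplitudes, producing an interference correction to the classical additive rule for total probability. A minimal sketch, with illustrative probabilities and phase (not values from the paper):

```python
import cmath
import math

def ql_probability(p1, p2, theta):
    """Probability from a sum of complex amplitudes: the classical
    total probability p1 + p2 plus the context-dependent interference
    term 2*sqrt(p1*p2)*cos(theta)."""
    psi = math.sqrt(p1) + math.sqrt(p2) * cmath.exp(1j * theta)
    return abs(psi) ** 2

# theta = pi/2 kills the interference term and recovers the classical
# additive rule; other phases over- or under-shoot it.
classical = ql_probability(0.3, 0.2, math.pi / 2)      # 0.5
constructive = ql_probability(0.3, 0.2, 0.0)           # > 0.5
```

    The phase theta encodes the "contextual" information that a classical probabilistic description discards.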

  18. Intertwining evidence- and model-based reasoning in physics sensemaking: An example from electrostatics

    NASA Astrophysics Data System (ADS)

    Russ, Rosemary S.; Odden, Tor Ole B.

    2017-12-01

    Our field has long valued the goal of teaching students not just the facts of physics, but also the thinking and reasoning skills of professional physicists. The complexity inherent in scientific reasoning demands that we think carefully about how we conceptualize for ourselves, enact in our classes, and encourage in our students the relationship between the multifaceted practices of professional science. The current study draws on existing research in the philosophy of science and psychology to advocate for intertwining two important aspects of scientific reasoning: using evidence from experimentation and modeling. We present a case from an undergraduate physics course to illustrate how these aspects can be intertwined productively and describe specific ways in which these aspects of reasoning can mutually reinforce one another in student learning. We end by discussing implications of this work for instruction in introductory physics courses and for research on scientific reasoning at the undergraduate level.

  19. Clinician burnout and satisfaction with resources in caring for complex patients.

    PubMed

    Whitebird, Robin R; Solberg, Leif I; Crain, A Lauren; Rossom, Rebecca C; Beck, Arne; Neely, Claire; Dreskin, Mark; Coleman, Karen J

    To describe primary care clinicians' self-reported satisfaction, burnout, and barriers in treating complex patients, we conducted a survey of 1554 primary care clinicians in 172 primary care clinics in 18 health care systems across 8 states prior to the implementation of a collaborative model of care for patients with depression and diabetes and/or cardiovascular disease. Among the clinicians who responded to the survey (n=709; 46%), a substantial minority (31%) were experiencing burnout, which was associated with lower career satisfaction (P<.0001) and lower satisfaction with resources to treat complex patients (P<.0001). Less than 50% of clinicians rated their ability to treat complex patients as very good to excellent, with 21% rating their ability as fair to poor. The majority of clinicians (72%) thought that a collaborative model of care would be very helpful for treating complex patients. Burnout remains a problem for primary care clinicians and is associated with low job satisfaction and low satisfaction with resources to treat complex patients. A collaborative care model for patients with mental and physical health problems may provide the resources needed to improve the quality of care for these patients. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. Deep Drawing Simulations With Different Polycrystalline Models

    NASA Astrophysics Data System (ADS)

    Duchêne, Laurent; de Montleau, Pierre; Bouvier, Salima; Habraken, Anne Marie

    2004-06-01

    The goal of this research is to study anisotropic material behavior during forming processes, represented by both complex yield loci and kinematic-isotropic hardening models. The first part of this paper describes the main concepts of the `Stress-strain interpolation' model that has been implemented in the non-linear finite element code Lagamine. This model consists of a local description of the yield locus based on the texture of the material, obtained through the full-constraints Taylor model. The texture evolution due to plastic deformation is computed throughout the FEM simulations. This `local yield locus' approach was initially linked to the classical isotropic Swift hardening law. Recently, a more complex hardening model was implemented: the physically-based microstructural model of Teodosiu. It takes into account intergranular heterogeneity due to the evolution of dislocation structures, which affects both isotropic and kinematic hardening. The influence of the hardening model is compared with that of texture evolution by means of deep drawing simulations.
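
    The Swift isotropic hardening law mentioned above expresses the yield stress as a power law of accumulated plastic strain, sigma_y = K (eps_0 + eps_p)^n. A minimal sketch with illustrative parameter values (not those used in the paper):

```python
import numpy as np

def swift_yield_stress(eps_p, K=500.0, eps0=0.01, n=0.2):
    """Swift isotropic hardening law: yield stress (MPa) as a power-law
    function of accumulated plastic strain. K, eps0, and n here are
    illustrative, typical of a mild steel, not the paper's values."""
    return K * (eps0 + eps_p) ** n

# Yield stress grows monotonically with plastic strain (hardening).
eps_p = np.linspace(0.0, 0.5, 6)
stress = swift_yield_stress(eps_p)
```

    In the `local yield locus' approach described in the abstract, such a scalar law scales the texture-based yield locus isotropically; the Teodosiu model replaces it with evolving internal variables for dislocation structures.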

  1. Progress and challenges in coupled hydrodynamic-ecological estuarine modeling.

    PubMed

    Ganju, Neil K; Brush, Mark J; Rashleigh, Brenda; Aretxabaleta, Alfredo L; Del Barrio, Pilar; Grear, Jason S; Harris, Lora A; Lake, Samuel J; McCardell, Grant; O'Donnell, James; Ralston, David K; Signell, Richard P; Testa, Jeremy M; Vaudrey, Jamie M P

    2016-03-01

    Numerical modeling has emerged over the last several decades as a widely accepted tool for investigations in environmental sciences. In estuarine research, hydrodynamic and ecological models have moved along parallel tracks with regard to complexity, refinement, computational power, and incorporation of uncertainty. Coupled hydrodynamic-ecological models have been used to assess ecosystem processes and interactions, simulate future scenarios, and evaluate remedial actions in response to eutrophication, habitat loss, and freshwater diversion. The need to couple hydrodynamic and ecological models to address research and management questions is clear, because dynamic feedbacks between biotic and physical processes are critical interactions within ecosystems. In this review we present historical and modern perspectives on estuarine hydrodynamic and ecological modeling, consider model limitations, and address aspects of model linkage, skill assessment, and complexity. We discuss the balance between spatial and temporal resolution and present examples using different spatiotemporal scales. Finally, we recommend future lines of inquiry, approaches to balance complexity and uncertainty, and model transparency and utility. It is idealistic to think we can pursue a "theory of everything" for estuarine models, but recent advances suggest that models for both scientific investigations and management applications will continue to improve in terms of realism, precision, and accuracy.

  2. Progress and challenges in coupled hydrodynamic-ecological estuarine modeling

    USGS Publications Warehouse

    Ganju, Neil K.; Brush, Mark J.; Rashleigh, Brenda; Aretxabaleta, Alfredo L.; del Barrio, Pilar; Grear, Jason S.; Harris, Lora A.; Lake, Samuel J.; McCardell, Grant; O'Donnell, James; Ralston, David K.; Signell, Richard P.; Testa, Jeremy; Vaudrey, Jamie M. P.

    2016-01-01

    Numerical modeling has emerged over the last several decades as a widely accepted tool for investigations in environmental sciences. In estuarine research, hydrodynamic and ecological models have moved along parallel tracks with regard to complexity, refinement, computational power, and incorporation of uncertainty. Coupled hydrodynamic-ecological models have been used to assess ecosystem processes and interactions, simulate future scenarios, and evaluate remedial actions in response to eutrophication, habitat loss, and freshwater diversion. The need to couple hydrodynamic and ecological models to address research and management questions is clear because dynamic feedbacks between biotic and physical processes are critical interactions within ecosystems. In this review, we present historical and modern perspectives on estuarine hydrodynamic and ecological modeling, consider model limitations, and address aspects of model linkage, skill assessment, and complexity. We discuss the balance between spatial and temporal resolution and present examples using different spatiotemporal scales. Finally, we recommend future lines of inquiry, approaches to balance complexity and uncertainty, and model transparency and utility. It is idealistic to think we can pursue a “theory of everything” for estuarine models, but recent advances suggest that models for both scientific investigations and management applications will continue to improve in terms of realism, precision, and accuracy.

  3. Progress and challenges in coupled hydrodynamic-ecological estuarine modeling

    PubMed Central

    Ganju, Neil K.; Brush, Mark J.; Rashleigh, Brenda; Aretxabaleta, Alfredo L.; del Barrio, Pilar; Grear, Jason S.; Harris, Lora A.; Lake, Samuel J.; McCardell, Grant; O’Donnell, James; Ralston, David K.; Signell, Richard P.; Testa, Jeremy M.; Vaudrey, Jamie M.P.

    2016-01-01

    Numerical modeling has emerged over the last several decades as a widely accepted tool for investigations in environmental sciences. In estuarine research, hydrodynamic and ecological models have moved along parallel tracks with regard to complexity, refinement, computational power, and incorporation of uncertainty. Coupled hydrodynamic-ecological models have been used to assess ecosystem processes and interactions, simulate future scenarios, and evaluate remedial actions in response to eutrophication, habitat loss, and freshwater diversion. The need to couple hydrodynamic and ecological models to address research and management questions is clear, because dynamic feedbacks between biotic and physical processes are critical interactions within ecosystems. In this review we present historical and modern perspectives on estuarine hydrodynamic and ecological modeling, consider model limitations, and address aspects of model linkage, skill assessment, and complexity. We discuss the balance between spatial and temporal resolution and present examples using different spatiotemporal scales. Finally, we recommend future lines of inquiry, approaches to balance complexity and uncertainty, and model transparency and utility. It is idealistic to think we can pursue a “theory of everything” for estuarine models, but recent advances suggest that models for both scientific investigations and management applications will continue to improve in terms of realism, precision, and accuracy. PMID:27721675

  4. Leveraging this Golden Age of Remote Sensing and Modeling of Terrestrial Hydrology to Understand Water Cycling in the Water Availability Grand Challenge for North America

    NASA Astrophysics Data System (ADS)

    Painter, T. H.; Famiglietti, J. S.; Stephens, G. L.

    2016-12-01

    We live in a time of increasing strains on our global fresh water availability due to increasing population, warming climate, changes in precipitation, and extensive depletion of groundwater supplies. At the same time, we have seen enormous growth in capabilities to remotely sense the regional to global water cycle and model complex systems with physically based frameworks. The GEWEX Water Availability Grand Challenge for North America is poised to leverage this convergence of remote sensing and modeling capabilities to answer fundamental questions on the water cycle. In particular, we envision an experiment that targets the complex and resource-critical Western US from California to just into the Great Plains, constraining physically-based hydrologic modeling with the US and international remote sensing capabilities. In particular, the last decade has seen the implementation or soon-to-be launch of water cycle missions such as GRACE and GRACE-FO for groundwater, SMAP for soil moisture, GPM for precipitation, SWOT for terrestrial surface water, and the Airborne Snow Observatory for snowpack. With the advent of convection-resolving mesoscale climate and water cycle modeling (e.g. WRF, WRF-Hydro) and mesoscale models capable of quantitative assimilation of remotely sensed data (e.g. the JPL Western States Water Mission), we can now begin to test hypotheses on the nature and changes in the water cycle of the Western US from a physical standpoint. In turn, by fusing water cycle science, water management, and ecosystem management while addressing these hypotheses, this golden age of remote sensing and modeling can bring all fields into a markedly less uncertain state of present knowledge and decadal scale forecasts.

  5. High Fidelity Modeling of Field Reversed Configuration (FRC) Thrusters

    DTIC Science & Technology

    2017-04-22

    signatures which can be used for direct, non-invasive comparison with experimental diagnostics can be produced. This research will be directly... experimental campaign is critical to developing general design philosophies for low-power plasmoid formation, the complexity of non-linear plasma processes...advanced space propulsion. The work consists of numerical method development, physical model development, and systematic studies of the non-linear

  6. A MODEL FOR DIFFUSION CONTROLLED BIOAVAILABILITY OF CRUDE OIL COMPONENTS

    EPA Science Inventory

    Crude oil is a complex mixture of several different structural classes of compounds including alkanes, aromatics, heterocyclic polar compounds, and asphaltenes. The rate and extent of microbial degradation of crude oil depends on the interaction between the physical and biochemi...

  7. Development of Civic Engagement: Theoretical and Methodological Issues

    ERIC Educational Resources Information Center

    Lerner, Richard M.; Wang, Jun; Champine, Robey B.; Warren, Daniel J. A.; Erickson, Karl

    2014-01-01

    Within contemporary developmental science, models derived from relational developmental systems (RDS) metatheory emphasize that the basic process of human development involves mutually-influential relations, termed developmental regulations, between the developing individual and his or her complex and changing physical, social, and cultural…

  8. Use of Dynamic Traffic Assignment in FSUTMS in Support of Transportation Planning in Florida

    DOT National Transportation Integrated Search

    2012-06-01

    Transportation planning is based on the physical structure of roadway networks and, less tangibly, on choices individuals make about their transportation needs and use of the roads. For a task this complex, computer modeling is essential. I...

  9. Physical bases of the generation of short-term earthquake precursors: A complex model of ionization-induced geophysical processes in the lithosphere-atmosphere-ionosphere-magnetosphere system

    NASA Astrophysics Data System (ADS)

    Pulinets, S. A.; Ouzounov, D. P.; Karelin, A. V.; Davidenko, D. V.

    2015-07-01

    This paper describes the current understanding of the interaction between geospheres through a complex set of physical and chemical processes under the influence of ionization. The sources of ionization include the Earth's natural radioactivity and its intensification before earthquakes in seismically active regions, anthropogenic radioactivity caused by nuclear weapons testing and accidents at nuclear power plants and radioactive waste storage sites, the impact of galactic and solar cosmic rays, and active geophysical experiments using artificial ionization equipment. This approach treats the environment as an open complex system with dissipation, where the inherent processes can be considered within a synergistic framework. We demonstrate the synergy between the evolution of thermal and electromagnetic anomalies in the Earth's atmosphere, ionosphere, and magnetosphere. This makes it possible to determine the direction of the interaction process, which is especially important in applications related to short-term earthquake prediction. That is why the emphasis in this study is on the processes preceding the final stage of earthquake preparation; the effects of other ionization sources are used to demonstrate that the model is versatile and broadly applicable in geophysics.

  10. Low frequency complex dielectric (conductivity) response of dilute clay suspensions: Modeling and experiments.

    PubMed

    Hou, Chang-Yu; Feng, Ling; Seleznev, Nikita; Freed, Denise E

    2018-09-01

    In this work, we establish an effective medium model to describe the low-frequency complex dielectric (conductivity) dispersion of dilute clay suspensions. We use previously obtained low-frequency polarization coefficients for a charged oblate spheroidal particle immersed in an electrolyte as the building block for the Maxwell Garnett mixing formula to model the dilute clay suspension. The complex conductivity phase dispersion exhibits a near-resonance peak when the clay grains have a narrow size distribution. The peak frequency is associated with the size distribution as well as the shape of clay grains and is often referred to as the characteristic frequency. In contrast, if the size of the clay grains has a broad distribution, the phase peak is broadened and can disappear into the background of the canonical phase response of the brine. To benchmark our model, the low-frequency dispersion of the complex conductivity of dilute clay suspensions is measured using a four-point impedance measurement, which can be reliably calibrated in the frequency range between 0.1 Hz and 10 kHz. By using a minimal number of fitting parameters when reliable information is available as input for the model and carefully examining the issue of potential over-fitting, we found that our model can be used to fit the measured dispersion of the complex conductivity with reasonable parameters. The good match between the modeled and experimental complex conductivity dispersion allows us to argue that our simplified model captures the essential physics for describing the low-frequency dispersion of the complex conductivity of dilute clay suspensions. Copyright © 2018 Elsevier Inc. All rights reserved.
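
    The Maxwell Garnett mixing formula that underpins the model can be sketched in its simplest form. The paper builds the mixing rule on polarization coefficients for charged oblate spheroids; the sketch below uses the classical spherical-inclusion special case, with illustrative (not measured) complex conductivities:

```python
import numpy as np

def maxwell_garnett(eps_host, eps_incl, f):
    """Maxwell Garnett effective permittivity (or, equivalently,
    complex conductivity) for a dilute suspension of spherical
    inclusions at volume fraction f. The spheroidal-grain case in the
    paper replaces the Clausius-Mossotti factor beta with
    shape-dependent polarization coefficients."""
    beta = (eps_incl - eps_host) / (eps_incl + 2 * eps_host)
    return eps_host * (1 + 2 * f * beta) / (1 - f * beta)

# Illustrative complex conductivities sigma* at one frequency.
sig_brine = 1.0 + 0.001j        # host electrolyte
sig_clay = 0.01 + 0.1j          # polarizable clay grain
sig_eff = maxwell_garnett(sig_brine, sig_clay, f=0.05)
phase_mrad = 1e3 * np.angle(sig_eff)   # phase shift added by inclusions
```

    Sweeping frequency-dependent grain polarization coefficients through such a mixing rule is what produces the near-resonance phase peak described in the abstract.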

  11. Evaluation of integration methods for hybrid simulation of complex structural systems through collapse

    NASA Astrophysics Data System (ADS)

    Del Carpio R., Maikol; Hashemi, M. Javad; Mosqueda, Gilberto

    2017-10-01

    This study examines the performance of integration methods for hybrid simulation of large and complex structural systems in the context of structural collapse due to seismic excitation. The target application is not necessarily real-time testing, but rather models that involve large-scale physical substructures and highly nonlinear numerical models. Four case studies are presented and discussed. In the first case study, the accuracy of integration schemes, including two widely used methods, namely a modified implicit Newmark scheme with a fixed number of iterations (iterative) and the operator-splitting method (non-iterative), is examined through pure numerical simulations. The second case study presents the results of 10 hybrid simulations repeated with the two aforementioned integration methods, considering various time steps and fixed iteration counts for the iterative method. The physical substructure in these tests consists of a single-degree-of-freedom (SDOF) cantilever column with replaceable steel coupons that provides repeatable, highly nonlinear behavior including fracture-type strength and stiffness degradation. In case study three, the implicit Newmark scheme with a fixed number of iterations is applied to hybrid simulations of a 1:2 scale steel moment frame that includes a relatively complex nonlinear numerical substructure. Lastly, a more complex numerical substructure is considered by coupling a nonlinear computational model of a moment frame to a hybrid model of a 1:2 scale steel gravity frame. The last two case studies are conducted on the same prototype structure, and the selection of time steps and fixed iteration counts is closely examined in pre-test simulations. The generated unbalanced forces are used as an index to track the equilibrium error and predict the accuracy and stability of the simulations.
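
    The iterative scheme discussed in the case studies, an implicit Newmark integrator that performs a fixed number of Newton corrections per step instead of iterating to convergence, can be sketched for an SDOF system. The softening tanh spring below is an illustrative stand-in for the restoring force fed back from a physical substructure, and all parameter values are hypothetical:

```python
import numpy as np

def newmark_fixed_iter(m, c, k_t, f_restoring, f_ext, dt, nsteps,
                       n_iter=3, beta=0.25, gamma=0.5):
    """Implicit Newmark (average acceleration) with a fixed number of
    Newton iterations per step. In hybrid simulation, f_restoring(u)
    would be measured from the test specimen; here it is a function."""
    u = v = 0.0
    a = (f_ext[0] - f_restoring(u) - c * v) / m
    hist = [u]
    for i in range(1, nsteps):
        # Predictor (terms known at the start of the step)
        u_p = u + dt * v + dt ** 2 * (0.5 - beta) * a
        v_p = v + dt * (1 - gamma) * a
        u_new = u_p
        for _ in range(n_iter):                 # fixed iteration count
            a_new = (u_new - u_p) / (beta * dt ** 2)
            v_new = v_p + gamma * dt * a_new
            r = f_ext[i] - f_restoring(u_new) - c * v_new - m * a_new
            k_eff = (k_t(u_new) + gamma * c / (beta * dt)
                     + m / (beta * dt ** 2))
            u_new += r / k_eff                  # Newton correction
        a = (u_new - u_p) / (beta * dt ** 2)
        v = v_p + gamma * dt * a
        u = u_new
        hist.append(u)
    return np.array(hist)

# Illustrative softening spring: f = k0 * tanh(u), tangent k0 / cosh^2(u)
k0, m, c = 100.0, 1.0, 0.5
f_r = lambda u: k0 * np.tanh(u)
k_t = lambda u: k0 / np.cosh(u) ** 2
f_ext = 50.0 * np.ones(200)                     # step load
u = newmark_fixed_iter(m, c, k_t, f_r, f_ext, dt=0.01, nsteps=200)
```

    Because the iteration count is capped, each step leaves a small residual; tracking that unbalanced force, as the abstract describes, indicates whether the chosen time step and iteration count keep the simulation accurate and stable.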

  12. Utility and Scope of Rapid Prototyping in Patients with Complex Muscular Ventricular Septal Defects or Double-Outlet Right Ventricle: Does it Alter Management Decisions?

    PubMed

    Bhatla, Puneet; Tretter, Justin T; Ludomirsky, Achi; Argilla, Michael; Latson, Larry A; Chakravarti, Sujata; Barker, Piers C; Yoo, Shi-Joon; McElhinney, Doff B; Wake, Nicole; Mosca, Ralph S

    2017-01-01

    Rapid prototyping facilitates comprehension of complex cardiac anatomy. However, determining when this additional information proves instrumental in patient management remains a challenge. We describe our experience with patient-specific anatomic models created using rapid prototyping from various imaging modalities, suggesting their utility in surgical and interventional planning in congenital heart disease (CHD). Virtual and physical 3-dimensional (3D) models were generated from CT or MRI data, using commercially available software for patients with complex muscular ventricular septal defects (CMVSD) and double-outlet right ventricle (DORV). Six patients with complex anatomy and uncertainty of the optimal management strategy were included in this study. The models were subsequently used to guide management decisions, and the outcomes reviewed. 3D models clearly demonstrated the complex intra-cardiac anatomy in all six patients and were utilized to guide management decisions. In the three patients with CMVSD, one underwent successful endovascular device closure following a prior failed attempt at transcatheter closure, and the other two underwent successful primary surgical closure with the aid of 3D models. In all three cases of DORV, the models provided better anatomic delineation and additional information that altered or confirmed the surgical plan. Patient-specific 3D heart models show promise in accurately defining intra-cardiac anatomy in CHD, specifically CMVSD and DORV. We believe these models improve understanding of the complex anatomical spatial relationships in these defects and provide additional insight for pre/intra-interventional management and surgical planning.

  13. Development of a physically-based planar inductors VHDL-AMS model for integrated power converter design

    NASA Astrophysics Data System (ADS)

    Ammouri, Aymen; Ben Salah, Walid; Khachroumi, Sofiane; Ben Salah, Tarek; Kourda, Ferid; Morel, Hervé

    2014-05-01

    The design of integrated power converters calls for prototype-less approaches, in which specific simulations are used for investigation and validation. Simulation relies on models of active and passive devices; models of planar devices, for instance, are still not available in power simulation tools, which limits the simulation of integrated power systems. This paper focuses on the development of a physically-based planar inductor model and its validation inside a power converter during transient switching. The planar inductor remains a complex device to model, particularly when skin, proximity, and parasitic capacitance effects are taken into account. A heterogeneous simulation scheme, including both circuit and device models, was successfully implemented in the VHDL-AMS language and simulated on the Simplorer platform. The mixed simulation results compare favorably with practical measurements: the multi-domain simulation results and measurement data are in close agreement.

  14. Stochastic Modeling and Generation of Partially Polarized or Partially Coherent Electromagnetic Waves

    NASA Technical Reports Server (NTRS)

    Davis, Brynmor; Kim, Edward; Piepmeier, Jeffrey; Hildebrand, Peter H. (Technical Monitor)

    2001-01-01

    Many new Earth remote-sensing instruments are embracing both the advantages and the added complexity that result from interferometric or fully polarimetric operation. To increase instrument understanding and functionality, a model of the signals these instruments measure is presented. A stochastic model is used because it recognizes the non-deterministic nature of any real-world measurement while also providing a tractable mathematical framework. A stationary, Gaussian-distributed model structure is proposed. Temporal and spectral correlation measures provide a statistical description of the physical properties of coherence and polarization state. From this relationship the model is mathematically defined, and it is shown to be unique for any set of physical parameters. A method of realizing the model (necessary for applications such as synthetic calibration-signal generation) is given, and computer simulation results are presented. The signals are constructed using the output of a multi-input multi-output linear filter system driven with white noise.
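
    The realization method described above, coloring white noise with a multi-input multi-output linear filter so the output reproduces target correlation measures, reduces in the memoryless case to multiplying complex white noise by a Cholesky factor of the desired 2x2 coherency (polarization) matrix. A minimal sketch with an illustrative target matrix (not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def correlated_fields(cov, n):
    """Generate n samples of a two-channel, zero-mean complex Gaussian
    signal whose coherency matrix is `cov`, by coloring unit-variance
    complex white noise with the Cholesky factor of `cov`. This is the
    memoryless special case of the linear-filter construction."""
    L = np.linalg.cholesky(cov)
    w = rng.standard_normal((2, n)) + 1j * rng.standard_normal((2, n))
    w /= np.sqrt(2)                    # unit-variance complex white noise
    return L @ w

# Illustrative target: unequal channel powers, complex correlation 0.6.
J = np.array([[1.0, 0.6 * np.exp(1j * 0.3)],
              [0.6 * np.exp(-1j * 0.3), 0.8]])
x = correlated_fields(J, 200_000)
J_est = x @ x.conj().T / x.shape[1]    # sample coherency matrix
```

    A frequency-dependent (partially coherent) target would replace the constant matrix factor with a filter whose spectral response is a matrix square root of the target cross-spectral density at each frequency.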

  15. Quantifying Florida Bay habitat suitability for fishes and invertebrates under climate change scenarios.

    PubMed

    Kearney, Kelly A; Butler, Mark; Glazer, Robert; Kelble, Christopher R; Serafy, Joseph E; Stabenau, Erik

    2015-04-01

    The Florida Bay ecosystem supports a number of economically important ecosystem services, including several recreational fisheries, which may be affected by changing salinity and temperature due to climate change. In this paper, we use a combination of physical models and habitat suitability index models to quantify the effects of potential climate change scenarios on a variety of juvenile fish and lobster species in Florida Bay. The climate scenarios include alterations in sea level, evaporation and precipitation rates, coastal runoff, and water temperature. We find that the changes in habitat suitability vary in both magnitude and direction across the scenarios and species, but are on average small. Only one of the seven species we investigate (Lagodon rhomboides, i.e., pinfish) sees a sizable decrease in optimal habitat under any of the scenarios. This suggests that the estuarine fauna of Florida Bay may not be as vulnerable to climate change as other components of the ecosystem, such as those in the marine/terrestrial ecotone. However, these models are relatively simplistic, looking only at single species effects of physical drivers without considering the many interspecific interactions that may play a key role in the adjustment of the ecosystem as a whole. More complex models that capture the mechanistic links between physics and biology, as well as the complex dynamics of the estuarine food web, may be necessary to further understand the potential effects of climate change on the Florida Bay ecosystem.
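
    Habitat suitability index models of the kind used here typically map each physical driver (e.g., salinity, temperature) through a single-variable suitability curve and combine the per-variable indices, often as a geometric mean. A minimal sketch with hypothetical breakpoints, not the paper's species-specific curves:

```python
import numpy as np

def suitability(x, lo, opt_lo, opt_hi, hi):
    """Trapezoidal suitability curve: 0 outside [lo, hi], 1 on the
    optimal plateau [opt_lo, opt_hi], linear ramps in between."""
    return np.interp(x, [lo, opt_lo, opt_hi, hi], [0.0, 1.0, 1.0, 0.0])

def hsi(salinity, temperature):
    """Composite habitat suitability index: geometric mean of the
    single-variable indices. Breakpoints below are illustrative."""
    s_sal = suitability(salinity, 5.0, 20.0, 35.0, 45.0)
    s_tmp = suitability(temperature, 15.0, 22.0, 30.0, 34.0)
    return np.sqrt(s_sal * s_tmp)

# Inside both optimal plateaus the index is 1; a driver outside its
# tolerance range drives the composite index to 0.
best = hsi(25.0, 26.0)
hypersaline = hsi(50.0, 26.0)
```

    Applying such curves to salinity and temperature fields from the physical model under each climate scenario yields the spatial maps of optimal habitat compared in the paper.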

  16. Modelling fungal growth in heterogeneous soil: analyses of the effect of soil physical structure on fungal community dynamics

    NASA Astrophysics Data System (ADS)

    Falconer, R.; Radoslow, P.; Grinev, D.; Otten, W.

    2009-04-01

    Fungi play a pivotal role in soil ecosystems, contributing to plant productivity. The underlying soil physical and biological processes responsible for community dynamics are interrelated and, at present, poorly understood. If these complex processes can be understood, that knowledge can be managed with the aim of providing more sustainable agriculture. Our understanding of microbial dynamics in soil has long been hampered by the lack of a theoretical framework and by difficulties in observation and quantification. We will demonstrate how the spatial and temporal dynamics of fungi in soil can be understood by linking mathematical modelling with novel techniques that visualise the complex structure of the soil. The combination of these techniques and mathematical models opens up new possibilities to understand how the physical structure of soil affects fungal colony dynamics and how fungal dynamics in turn affect soil structure. We quantify, using X-ray tomography, soil structure for a range of artificially prepared microcosms, and characterise the soil structures using metrics such as porosity, fractal dimension, and the connectivity of the pore volume. Furthermore, we use the individual-based fungal colony growth model of Falconer et al. (2005), which is based on the physiological processes of fungi, to assess the effect of soil structure on microbial dynamics by quantifying biomass abundances and distributions. We demonstrate how soil structure can critically affect fungal species interactions, with consequences for biological control and fungal biodiversity.

  17. Quantifying Florida Bay Habitat Suitability for Fishes and Invertebrates Under Climate Change Scenarios

    NASA Astrophysics Data System (ADS)

    Kearney, Kelly A.; Butler, Mark; Glazer, Robert; Kelble, Christopher R.; Serafy, Joseph E.; Stabenau, Erik

    2015-04-01

    The Florida Bay ecosystem supports a number of economically important ecosystem services, including several recreational fisheries, which may be affected by changing salinity and temperature due to climate change. In this paper, we use a combination of physical models and habitat suitability index models to quantify the effects of potential climate change scenarios on a variety of juvenile fish and lobster species in Florida Bay. The climate scenarios include alterations in sea level, evaporation and precipitation rates, coastal runoff, and water temperature. We find that the changes in habitat suitability vary in both magnitude and direction across the scenarios and species, but are on average small. Only one of the seven species we investigate (Lagodon rhomboides, i.e., pinfish) sees a sizable decrease in optimal habitat under any of the scenarios. This suggests that the estuarine fauna of Florida Bay may not be as vulnerable to climate change as other components of the ecosystem, such as those in the marine/terrestrial ecotone. However, these models are relatively simplistic, looking only at single species effects of physical drivers without considering the many interspecific interactions that may play a key role in the adjustment of the ecosystem as a whole. More complex models that capture the mechanistic links between physics and biology, as well as the complex dynamics of the estuarine food web, may be necessary to further understand the potential effects of climate change on the Florida Bay ecosystem.

  18. Automated Design of Complex Dynamic Systems

    PubMed Central

    Hermans, Michiel; Schrauwen, Benjamin; Bienstman, Peter; Dambre, Joni

    2014-01-01

    Several fields of study are concerned with uniting the concept of computation with that of the design of physical systems. For example, a recent trend in robotics is to design robots in such a way that they require a minimal control effort. Another example is found in the domain of photonics, where recent efforts try to benefit directly from complex nonlinear dynamics to achieve more efficient signal processing. The underlying goal of these and similar research efforts is to internalize a large part of the necessary computations within the physical system itself by exploiting its inherent nonlinear dynamics. This, however, often requires the optimization of large numbers of system parameters, related both to the system's structure and to its material properties. In addition, many of these parameters are subject to fabrication variability or to variations through time. In this paper we apply a machine learning algorithm to optimize physical dynamic systems. We show that such algorithms, which are normally applied to abstract computational entities, can be extended to the field of differential equations and used to optimize an associated set of parameters which determine their behavior. We show that machine learning training methodologies are highly useful in designing robust systems, and we provide a set of both simple and complex examples using models of physical dynamical systems. Interestingly, the derived optimization method is intimately related to direct collocation, a method known in the field of optimal control. Our work suggests that the application domains of both machine learning and optimal control have a largely unexplored overlapping area which envelopes a novel design methodology for smart and highly complex physical systems. PMID:24497969

  19. Non-Archimedean reaction-ultradiffusion equations and complex hierarchic systems

    NASA Astrophysics Data System (ADS)

    Zúñiga-Galindo, W. A.

    2018-06-01

    We initiate the study of non-Archimedean reaction-ultradiffusion equations and their connections with models of complex hierarchic systems. From a mathematical perspective, the equations studied here are the p-adic counterpart of the integro-differential models for phase separation introduced by Bates and Chmaj. Our equations are also generalizations of the ultradiffusion equations on trees studied in the 1980s by Ogielski, Stein, Bachas, Huberman, among others, and of the master equations of the Avetisov et al. models, which describe certain complex hierarchic systems. From a physical perspective, our equations are gradient flows of non-Archimedean free energy functionals, and their solutions describe the macroscopic density profile of a bistable material whose space of states has an ultrametric structure. Some of our results are p-adic analogs of well-known results in the Archimedean setting; however, the mechanism of diffusion is completely different because it occurs in an ultrametric space.

  20. Oscillations and Multiple Equilibria in Microvascular Blood Flow.

    PubMed

    Karst, Nathaniel J; Storey, Brian D; Geddes, John B

    2015-07-01

    We investigate the existence of oscillatory dynamics and multiple steady-state flow rates in a network with a simple topology and in vivo microvascular blood flow constitutive laws. Unlike many previous analytic studies, we employ the most biologically relevant models of the physical properties of whole blood. Through a combination of analytic and numeric techniques, we predict in a series of two-parameter bifurcation diagrams a range of dynamical behaviors, including multiple equilibria flow configurations, simple oscillations in volumetric flow rate, and multiple coexistent limit cycles at physically realizable parameters. We show that complexity in network topology is not necessary for complex behaviors to arise and that nonlinear rheology, in particular the plasma skimming effect, is sufficient to support oscillatory dynamics similar to those observed in vivo.

  1. Will electrical cyber-physical interdependent networks undergo first-order transition under random attacks?

    NASA Astrophysics Data System (ADS)

    Ji, Xingpei; Wang, Bo; Liu, Dichen; Dong, Zhaoyang; Chen, Guo; Zhu, Zhenshan; Zhu, Xuedong; Wang, Xunting

    2016-10-01

    Whether realistic electrical cyber-physical interdependent networks will undergo a first-order transition under random failures remains an open question. To reflect the reality of the Chinese electrical cyber-physical system, a "partial one-to-one correspondence" interdependent networks model is proposed, and the connectivity vulnerabilities of three realistic electrical cyber-physical interdependent networks are analyzed. The simulation results show that, owing to the service demands of the power system, the topologies of the power grid and its cyber network are highly inter-similar, which can effectively avoid the first-order transition. By comparing the vulnerability curves of the electrical cyber-physical interdependent networks with those of their single-layer networks, we find that complex network theory remains useful in the vulnerability analysis of electrical cyber-physical interdependent networks.

  2. Toward Supersonic Retropropulsion CFD Validation

    NASA Technical Reports Server (NTRS)

    Kleb, Bil; Schauerhamer, D. Guy; Trumble, Kerry; Sozer, Emre; Barnhardt, Michael; Carlson, Jan-Renee; Edquist, Karl

    2011-01-01

    This paper begins the process of verifying and validating computational fluid dynamics (CFD) codes for supersonic retropropulsive flows. Four CFD codes (DPLR, FUN3D, OVERFLOW, and US3D) are used to perform various numerical and physical modeling studies toward the goal of comparing predictions with a wind tunnel experiment specifically designed to support CFD validation. Numerical studies run the gamut in rigor from code-to-code comparisons to observed order-of-accuracy tests. Results indicate that for this complex flowfield, which involves time-dependent shocks and vortex shedding, design order of accuracy is not clearly evident. Also explored is the extent of physical modeling necessary to predict the salient flowfield features found in high-speed Schlieren images and surface pressure measurements taken during the validation experiment. Physical modeling studies include geometric items such as wind tunnel wall and sting mount interference, as well as turbulence modeling that ranges from a two-equation RANS (Reynolds-Averaged Navier-Stokes) model to DES (Detached Eddy Simulation) models. These studies indicate that tunnel wall interference is minimal for the cases investigated; model mounting hardware effects are confined to the aft end of the model; and sparse grid resolution and turbulence modeling can damp or entirely dissipate the unsteadiness of this self-excited flow.

  3. On entropy, financial markets and minority games

    NASA Astrophysics Data System (ADS)

    Zapart, Christopher A.

    2009-04-01

    The paper builds upon an earlier statistical analysis of financial time series with Shannon information entropy, published in [L. Molgedey, W. Ebeling, Local order, entropy and predictability of financial time series, European Physical Journal B-Condensed Matter and Complex Systems 15/4 (2000) 733-737]. A novel generic procedure is proposed for making multistep-ahead predictions of time series by building a statistical model of entropy. The approach is first demonstrated on the chaotic Mackey-Glass time series and later applied to Japanese Yen/US dollar intraday currency data. The paper also reinterprets Minority Games [E. Moro, The minority game: An introductory guide, Advances in Condensed Matter and Statistical Physics (2004)] within the context of physical entropy, and uses models derived from minority game theory as a tool for measuring the entropy of a model in response to time series. This entropy conditional upon a model is subsequently used in place of information-theoretic entropy in the proposed multistep prediction algorithm.
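The core quantity here, the Shannon entropy of a symbolized series, can be computed directly. The sketch below uses a toy up/down symbolization of price moves (illustrative data, not the paper's Yen/USD set); applied over a sliding window, low entropy would indicate locally ordered, more predictable dynamics.

```python
import math
from collections import Counter

def shannon_entropy(symbols):
    """Shannon entropy (bits) of a discrete symbol sequence."""
    n = len(symbols)
    return -sum(c / n * math.log2(c / n) for c in Counter(symbols).values())

# Symbolize a toy price series as up/down moves.
prices = [100, 101, 103, 102, 104, 104.5, 103, 105, 106, 105]
moves = ['u' if b > a else 'd' for a, b in zip(prices, prices[1:])]
print(round(shannon_entropy(moves), 3))  # -> 0.918 (6 ups, 3 downs)
```

A maximally unpredictable binary series would give 1 bit; the modest deficit here reflects the series' slight upward bias.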

  4. 3D anisotropic modeling and identification for airborne EM systems based on the spectral-element method

    NASA Astrophysics Data System (ADS)

    Huang, Xin; Yin, Chang-Chun; Cao, Xiao-Yue; Liu, Yun-He; Zhang, Bo; Cai, Jing

    2017-09-01

    The airborne electromagnetic (AEM) method has a high sampling rate and survey flexibility. However, traditional numerical modeling approaches must use high-resolution physical grids to guarantee modeling accuracy, especially for complex geological structures such as anisotropic earth. This can lead to huge computational costs. To solve this problem, we propose a spectral-element (SE) method for 3D AEM anisotropic modeling, which combines the advantages of spectral and finite-element methods. Thus, the SE method has accuracy as high as that of the spectral method and the ability to model complex geology inherited from the finite-element method. The SE method can improve the modeling accuracy within discrete grids and reduce the dependence of modeling results on the grids. This helps achieve high-accuracy anisotropic AEM modeling. We first introduced a rotating tensor of anisotropic conductivity to Maxwell's equations and described the electrical field via SE basis functions based on GLL interpolation polynomials. We used the Galerkin weighted residual method to establish the linear equation system for the SE method, and we took a vertical magnetic dipole as the transmission source for our AEM modeling. We then applied fourth-order SE calculations with coarse physical grids to check the accuracy of our modeling results against a 1D semi-analytical solution for an anisotropic half-space model and verified the high accuracy of the SE. Moreover, we conducted AEM modeling for different anisotropic 3D abnormal bodies using two physical grid scales and three orders of SE to obtain the convergence conditions for different anisotropic abnormal bodies. Finally, we studied the identification of anisotropy for single anisotropic abnormal bodies, anisotropic surrounding rock, and single anisotropic abnormal body embedded in an anisotropic surrounding rock. This approach will play a key role in the inversion and interpretation of AEM data collected in regions with anisotropic geology.

  5. 3D Numerical simulation of bed morphological responses to complex in-streamstructures

    NASA Astrophysics Data System (ADS)

    Xu, Y.; Liu, X.

    2017-12-01

    In-stream structures are widely used in stream restoration for both hydraulic and ecological purposes. The geometries of the structures are usually designed to be extremely complex and irregular, so as to provide nature-like physical habitat. The aim of this study is to develop a numerical model to accurately predict the bed-load transport and the morphological changes caused by complex in-stream structures. The model is developed on the OpenFOAM platform. For the hydrodynamics, it utilizes different turbulence models to capture the detailed turbulence information near the in-stream structures. An immersed boundary method (IBM) is efficiently implemented in the model to describe the movable bed and the rigid solid body of the in-stream structures. With the IBM, the difficulty of mesh generation on the complex geometry is greatly alleviated, and the bed surface deformation can be coupled into the flow system. The morphodynamics model is first validated on simple structures, such as the scour morphology around a log-vane structure. It is then applied to a more complex structure, an engineered log jam (ELJ), which consists of multiple logs piled together. The numerical results, including turbulence flow information and bed morphological responses, are evaluated against experimental measurements under the same flow conditions.

  6. Modelling the Interplay between Lifestyle Factors and Genetic Predisposition on Markers of Type 2 Diabetes Mellitus Risk.

    PubMed

    Walker, Celia G; Solis-Trapala, Ivonne; Holzapfel, Christina; Ambrosini, Gina L; Fuller, Nicholas R; Loos, Ruth J F; Hauner, Hans; Caterson, Ian D; Jebb, Susan A

    2015-01-01

    The risk of developing type 2 diabetes mellitus (T2DM) is determined by a complex interplay involving lifestyle factors and genetic predisposition. Despite this, many studies do not consider the relative contributions of this complex array of factors to identify relationships which are important in progression or prevention of complex diseases. We aimed to describe the integrated effect of a number of lifestyle changes (weight, diet and physical activity) in the context of genetic susceptibility, on changes in glycaemic traits in overweight or obese participants following 12-months of a weight management programme. A sample of 353 participants from a behavioural weight management intervention were included in this study. A graphical Markov model was used to describe the impact of the intervention, by dividing the effects into various pathways comprising changes in proportion of dietary saturated fat, physical activity and weight loss, and a genetic predisposition score (T2DM-GPS), on changes in insulin sensitivity (HOMA-IR), insulin secretion (HOMA-B) and short and long term glycaemia (glucose and HbA1c). We demonstrated the use of graphical Markov modelling to identify the importance and interrelationships of a number of possible variables changed as a result of a lifestyle intervention, whilst considering fixed factors such as genetic predisposition, on changes in traits. Paths which led to weight loss and change in dietary saturated fat were important factors in the change of all glycaemic traits, whereas the T2DM-GPS only made a significant direct contribution to changes in HOMA-IR and plasma glucose after considering the effects of lifestyle factors. This analysis shows that modifiable factors relating to body weight, diet, and physical activity are more likely to impact on glycaemic traits than genetic predisposition during a behavioural intervention.

  7. An uncertainty analysis of the hydrogen source term for a station blackout accident in Sequoyah using MELCOR 1.8.5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gauntt, Randall O.; Bixler, Nathan E.; Wagner, Kenneth Charles

    2014-03-01

    A methodology for using the MELCOR code with the Latin Hypercube Sampling method was developed to estimate uncertainty in various predicted quantities, such as hydrogen generation or release of fission products, under severe accident conditions. In this case, the emphasis was on estimating the range of hydrogen sources in station blackout conditions in the Sequoyah Ice Condenser plant, taking into account uncertainties in the modeled physics known to affect hydrogen generation. The method uses user-specified likelihood distributions for uncertain model parameters, which may include uncertainties of a stochastic nature, to produce a collection of code calculations, or realizations, characterizing the range of possible outcomes. Forty MELCOR code realizations of Sequoyah were conducted that included 10 uncertain parameters, producing a range of in-vessel hydrogen quantities. The range of total hydrogen produced was approximately 583 kg ± 131 kg. Sensitivity analyses revealed the expected trends with respect to the parameters of greatest importance; however, considerable scatter was observed when results were plotted against any of the uncertain parameters, with no parameter manifesting a dominant effect on hydrogen generation. It is concluded that, with respect to the physics parameters investigated, further reducing the predicted hydrogen uncertainty would require reducing all physics parameter uncertainties similarly, bearing in mind that some parameters are inherently uncertain within a range. It is suspected that some residual uncertainty associated with modeling complex, coupled, and synergistic phenomena is an inherent aspect of complex systems and cannot be reduced to point-value estimates. Probabilistic analyses such as the one demonstrated in this work are important to properly characterize the response of complex systems such as severe accident progression in nuclear power plants.
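Latin Hypercube Sampling of the kind described can be sketched in a few lines of stdlib Python: each parameter's unit range is split into as many strata as there are realizations, and each stratum is sampled exactly once. The parameter mapping at the end is a made-up illustration, not a MELCOR input.

```python
import random

def latin_hypercube(n_samples, n_params, rng):
    """Unit-cube Latin hypercube: each parameter's range is split into
    n_samples equal strata, and each stratum is sampled exactly once."""
    cols = []
    for _ in range(n_params):
        col = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(col)
        cols.append(col)
    return [tuple(col[i] for col in cols) for i in range(n_samples)]

rng = random.Random(42)
sample = latin_hypercube(40, 10, rng)   # 40 realizations, 10 uncertain parameters

# Map one unit-cube coordinate onto a physical range, e.g. a hypothetical
# multiplier in [0.5, 2.0] on some uncertain model parameter (illustrative).
multiplier = [0.5 + u[0] * 1.5 for u in sample]
```

Compared with independent random draws, the stratification guarantees that even 40 realizations cover the full range of every parameter, which is why LHS is the standard choice for expensive codes.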

  8. The Limitations of Model-Based Experimental Design and Parameter Estimation in Sloppy Systems.

    PubMed

    White, Andrew; Tolman, Malachi; Thames, Howard D; Withers, Hubert Rodney; Mason, Kathy A; Transtrum, Mark K

    2016-12-01

    We explore the relationship among experimental design, parameter estimation, and systematic error in sloppy models. We show that the approximate nature of mathematical models poses challenges for experimental design in sloppy models. In many models of complex biological processes it is not known which physical mechanisms must be included to explain system behaviors. As a consequence, models are often overly complex, with many practically unidentifiable parameters. Furthermore, which mechanisms are relevant or irrelevant varies among experiments. By selecting complementary experiments, experimental design may inadvertently make details that were omitted from the model become relevant. When this occurs, the model will have a large systematic error and fail to give a good fit to the data. We use a simple hyper-model of model error to quantify a model's discrepancy and apply it to two models of complex biological processes (EGFR signaling and DNA repair) with optimally selected experiments. We find that although parameters may be accurately estimated, the discrepancy in the model renders it less predictive than it was in the sloppy regime where systematic error is small. We introduce the concept of a sloppy system: a sequence of models of increasing complexity that become sloppy in the limit of microscopic accuracy. We explore the limits of accurate parameter estimation in sloppy systems and argue that identifying the underlying mechanisms controlling system behavior is better approached by considering a hierarchy of models of varying detail rather than by focusing on parameter estimation in a single model.
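The hallmark of sloppiness, Fisher-information eigenvalues spread over many decades, is easy to see in a toy fit. The sketch below uses a two-exponential model with made-up rates (not the paper's EGFR or DNA-repair models) and inspects the eigenvalues of J^T J in log-parameters.

```python
import numpy as np

# Toy sloppy model: y(t) = exp(-k1 t) + exp(-k2 t) with similar rates.
# J holds sensitivities of the predictions to the log-parameters; the
# eigenvalues of J^T J (the Fisher information for unit noise) then span
# orders of magnitude, the signature of sloppiness.
t = np.linspace(0.1, 5.0, 50)
k1, k2 = 1.0, 1.2                           # hypothetical rate constants
J = np.stack([-k1 * t * np.exp(-k1 * t),    # d y / d log k1
              -k2 * t * np.exp(-k2 * t)],   # d y / d log k2
             axis=1)
eig = np.linalg.eigvalsh(J.T @ J)           # ascending order
ratio = eig[-1] / eig[0]
print(ratio > 100)  # stiff-to-sloppy eigenvalue ratio spans ~2+ decades
```

Only the stiff combination (roughly the sum of the log-rates) is well constrained by data on y(t); the sloppy combination (their difference) is practically unidentifiable, which is exactly the regime the abstract describes.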

  9. Aerosol Complexity and Implications for Predictability and Short-Term Forecasting

    NASA Technical Reports Server (NTRS)

    Colarco, Peter

    2016-01-01

    There are clear NWP and climate impacts from including aerosol radiative and cloud interactions. Changes in dynamics and cloud fields affect aerosol lifecycle, plume height, long-range transport, the overall forcing of the climate system, etc. Inclusion of aerosols in NWP systems benefits surface field biases (e.g., T2m, U10m). Including aerosol effects has an impact on analysis increments and can have statistically significant impacts on, e.g., tropical cyclogenesis. These points apply especially to aerosol radiative interactions, but aerosol-cloud interaction is a bigger signal on the global system. Many of these impacts are realized even in models with relatively simple (bulk) aerosol schemes (approx. 10-20 tracers). Simple schemes, though, imply a simple representation of aerosol absorption and, importantly for aerosol-cloud interaction, of the particle-size distribution. Even so, more complex schemes exhibit a lot of diversity between different models, with issues such as size selection both for emitted particles and for modes. There are prospects for complex sectional schemes to tune modal (and even bulk) schemes toward better selection of size representation. This is a ripe topic for more research: systematic documentation of the benefits of no vs. climatological vs. interactive (direct, then direct+indirect) aerosols; documentation of aerosol impacts on analysis increments and their inclusion in the NWP data assimilation operator; and further refinement of baseline assumptions in model design (e.g., absorption, particle size distribution). Not covered here are model resolution and the interplay of other physical processes with aerosols (e.g., moist physics, which is obviously important, and chemistry).

  10. On validating remote sensing simulations using coincident real data

    NASA Astrophysics Data System (ADS)

    Wang, Mingming; Yao, Wei; Brown, Scott; Goodenough, Adam; van Aardt, Jan

    2016-05-01

    The remote sensing community often requires data simulation, either via spectral/spatial downsampling or through virtual, physics-based models, to assess systems and algorithms. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) model is one such first-principles, physics-based model for simulating imagery for a range of modalities. Complex simulation of vegetation environments has become possible as scene rendering technology and software have advanced. This in turn has raised questions about the validity of such complex models, with phenomena such as multiple scattering and the bidirectional reflectance distribution function (BRDF) potentially impacting results in the case of complex vegetation scenes. We selected three sites located in the Pacific Southwest domain (Fresno, CA) of the National Ecological Observatory Network (NEON). These sites represent oak savanna, hardwood forests, and conifer-manzanita-mixed forests. We constructed corresponding virtual scenes, using airborne LiDAR and imaging spectroscopy data from NEON, ground-based LiDAR data, and field-collected spectra to characterize the scenes. Imaging spectroscopy data for these virtual sites were then generated using the DIRSIG simulation environment. This simulated imagery was compared to real AVIRIS imagery (15 m spatial resolution; 12 pixels/scene) and NEON Airborne Observation Platform (AOP) data (1 m spatial resolution; 180 pixels/scene). These tests were performed using a distribution-comparison approach for select spectral statistics, e.g., those establishing the spectra's shape, for each simulated versus real distribution pair. The initial comparison results indicated that the shapes of the spectra of the virtual and real sites were closely matched.

  11. Conditional Probabilities of Large Earthquake Sequences in California from the Physics-based Rupture Simulator RSQSim

    NASA Astrophysics Data System (ADS)

    Gilchrist, J. J.; Jordan, T. H.; Shaw, B. E.; Milner, K. R.; Richards-Dinger, K. B.; Dieterich, J. H.

    2017-12-01

    Within the SCEC Collaboratory for Interseismic Simulation and Modeling (CISM), we are developing physics-based forecasting models for earthquake ruptures in California. We employ the 3D boundary element code RSQSim (Rate-State Earthquake Simulator of Dieterich & Richards-Dinger, 2010) to generate synthetic catalogs with tens of millions of events that span up to a million years each. This code models rupture nucleation by rate- and state-dependent friction and Coulomb stress transfer in complex, fully interacting fault systems. The Uniform California Earthquake Rupture Forecast Version 3 (UCERF3) fault and deformation models are used to specify the fault geometry and long-term slip rates. We have employed the Blue Waters supercomputer to generate long catalogs of simulated California seismicity from which we calculate the forecasting statistics for large events. We have performed probabilistic seismic hazard analysis with RSQSim catalogs that were calibrated with system-wide parameters and found a remarkably good agreement with UCERF3 (Milner et al., this meeting). We build on this analysis, comparing the conditional probabilities of sequences of large events from RSQSim and UCERF3. In making these comparisons, we consider the epistemic uncertainties associated with the RSQSim parameters (e.g., rate- and state-frictional parameters), as well as the effects of model-tuning (e.g., adjusting the RSQSim parameters to match UCERF3 recurrence rates). The comparisons illustrate how physics-based rupture simulators might assist forecasters in understanding the short-term hazards of large aftershocks and multi-event sequences associated with complex, multi-fault ruptures.
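Once a long synthetic catalog exists, a conditional probability of this kind is just counting. The sketch below builds a random toy catalog (uniform times and magnitudes, purely illustrative, not RSQSim output) and estimates the empirical probability that one large event is followed by another within a time window.

```python
import random

# Toy synthetic catalog of (time_yr, magnitude) events, standing in for a
# long simulator-generated catalog (purely illustrative, not RSQSim data).
rng = random.Random(1)
catalog = sorted((rng.uniform(0.0, 1.0e5), rng.uniform(5.0, 8.0))
                 for _ in range(5000))

def p_followup(catalog, m_trigger, m_target, window_yr):
    """Empirical P(an M >= m_target event occurs within window_yr
    after an M >= m_trigger event)."""
    hits = trials = 0
    for i, (t0, m0) in enumerate(catalog):
        if m0 < m_trigger:
            continue
        trials += 1
        for t1, m1 in catalog[i + 1:]:
            if t1 - t0 > window_yr:
                break
            if m1 >= m_target:
                hits += 1
                break
    return hits / trials if trials else 0.0

p = p_followup(catalog, 7.0, 7.0, 10.0)
```

In a physics-based catalog, unlike this Poisson-like toy, stress transfer between faults makes such conditional probabilities deviate from the unconditional rate, which is precisely what the comparison against UCERF3 probes.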

  12. Simulation of Propellant Loading System Senior Design Implement in Computer Algorithm

    NASA Technical Reports Server (NTRS)

    Bandyopadhyay, Alak

    2010-01-01

    Propellant loading from the Storage Tank to the External Tank is one of the very important and time-consuming pre-launch ground operations for the launch vehicle. The propellant loading system is a complex integrated system involving many physical components: the storage tank filled with cryogenic fluid at a very low temperature, the long pipeline connecting the storage tank with the external tank, the external tank along with the flare stack, and vent systems for releasing the excess fuel. Some of the parameters most useful for design purposes are predictions of the pre-chill time, the loading time, the amount of fuel lost, the maximum pressure rise, etc. The physics involved in the mathematical modeling is quite complex: the process is unsteady, there is phase change as some of the fuel passes from the liquid to the gas state, and there is conjugate heat transfer within the pipe walls as well as between the solid and fluid regions. The simulation is also tedious and time-consuming. Overall, this is a complex system, and the objective of the work is the students' involvement in the parametric study and optimization of the numerical modeling toward the design of such a system. The students first have to become familiar with the physical process, the related mathematics, and the numerical algorithm. The work involves exploring (i) improved algorithms to make the transient simulation computationally efficient (reduced CPU time) and (ii) a parametric study to evaluate design parameters by changing the operational conditions.

  13. Wave modeling for the Beaufort and Chukchi Seas

    NASA Astrophysics Data System (ADS)

    Rogers, W.; Thomson, J.; Shen, H. H.; Posey, P. G.; Hebert, D. A.

    2016-02-01

    In this presentation, we will discuss the development and application of numerical models for prediction of wind-generated surface gravity waves in the Arctic Ocean, and specifically the Beaufort and Chukchi Seas, for which the Office of Naval Research (ONR) has supported two major field campaigns, in 2014 and 2015. The modeling platform is the spectral wave model WAVEWATCH III (R) (WW3). We will begin by reviewing progress with the model numerics in 2007 and 2008 that permits efficient application at high latitudes. Then, we will discuss more recent progress (2012 to 2015) adding new physics to WW3 for ice effects. The latter includes two parameterizations for dissipation by turbulence at the ice/water interface, and a more complex parameterization that treats the ice as a viscoelastic fluid. With these new physics, the primary challenge is to find observational data suitable for calibration of the parameterizations, and there are concerns about the validity of applying any calibration to the wide variety of ice types that exist in the Arctic (or Southern Ocean). Quality of input is another major challenge, for which some recent progress has been made (at least for ice concentration and the ice edge) with data-assimilative ice modeling at NRL. We will discuss our recent work to invert for dissipation rate using data from a 2012 mooring in the Beaufort Sea, how the results vary by season (ice retreat vs. advance), and what this tells us in the context of the complex physical parameterizations used by the model. We will summarize plans for further development of the model, such as adding scattering by floes through collaboration with IFREMER (France), and improving on the simple "proportional scaling" treatment of the open water source functions in the presence of partial ice cover. Finally, we will discuss lessons learned for wave modeling from the autumn 2015 R/V Sikuliaq cruise supported by ONR.

  14. PULSE: A numerical model for the simulation of snowpack solute dynamics to capture runoff ionic pulses during snowmelt

    NASA Astrophysics Data System (ADS)

    Costa, D.; Pomeroy, J. W.; Wheater, H. S.

    2017-12-01

    Early ionic pulses in spring snowmelt can cause the temporary acidification of streams and account for a significant portion of the total annual nutrient export, particularly in seasonally snow-covered areas where frozen ground may limit runoff-soil contact and cause the rapid delivery of these ions to streams. Ionic pulses are a consequence of snow ion exclusion, a process induced by snow metamorphism in which ions are segregated from the snow grains losing mass to the surface of the grains gaining mass. While numerous studies have provided quantitative evidence of this process, few mechanistic mathematical models have been proposed for diagnosis and prediction. A few early modelling attempts captured this process by assuming transport through porous media with variable porosity; however, their implementation is difficult because they require complex models of snow physics to resolve the evolution of in-snow properties and processes during snowmelt, such as heat conduction, metamorphism, melt and water flow. Furthermore, initial snowpack to snow-surface ion concentration ratios are difficult to measure but are required to initialize these models, and ion exclusion processes are not represented in a physically based, transparent fashion. In this research, a standalone numerical model has been developed to capture ionic pulses in snowmelt by emulating solute leaching from snow grains during melt and its subsequent transport by the percolating meltwater. Estimating snow porosity and water content dynamics is shown to be a viable alternative to deploying complex snow physics models for this purpose. The model was applied to four study sites, in the Arctic and in the Sierra Nevada, to test different climatic and hydrological conditions. The model compared very well with observations and captured both the timing and magnitude of early melt ionic pulses accurately. 
This study demonstrates how physically based approaches can provide successful simulations of the spatial and temporal fluxes of snowmelt ions, which can be used to improve the prediction of nutrient export in cold regions for the spring freshet.
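
    The first-flush behaviour described above can be illustrated with a toy wash-out calculation. This is a minimal sketch assuming exponential depletion of the snowpack ion store with melt fraction; it is not the published PULSE formulation, and `first_flush` and its parameters are hypothetical names chosen for illustration:

    ```python
    import numpy as np

    def first_flush(k=5.0, n=100):
        """Meltwater ion concentration vs. melt fraction, assuming the
        snowpack ion store is washed out exponentially (illustrative
        idealization; not the published PULSE equations)."""
        melt_frac = np.linspace(0.0, 1.0, n + 1)
        store = np.exp(-k * melt_frac)          # ion mass left in the pack
        released = -np.diff(store)              # ions carried by each melt slice
        conc = released / np.diff(melt_frac)    # concentration factor
        return melt_frac[1:], released, conc

    f, rel, conc = first_flush()
    assert np.all(np.diff(conc) < 0)              # concentrations decline steadily
    assert rel[f <= 0.2].sum() > 0.6 * rel.sum()  # first 20% of melt carries most ions
    ```

    Even this crude idealization reproduces the qualitative signature of the ionic pulse: an early, strongly enriched fraction followed by rapidly diluting meltwater.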

  16. An ocean scatter propagation model for aeronautical satellite communication applications

    NASA Technical Reports Server (NTRS)

    Moreland, K. W.

    1990-01-01

    In this paper an ocean scattering propagation model, developed for aircraft-to-satellite (aeronautical) applications, is described. The purpose of the propagation model is to characterize the behavior of sea reflected multipath as a function of physical propagation path parameters. An accurate validation against the theoretical far field solution for a perfectly conducting sinusoidal surface is provided. Simulation results for typical L band aeronautical applications with low complexity antennas are presented.

  17. The Limitations of Model-Based Experimental Design and Parameter Estimation in Sloppy Systems

    PubMed Central

    Tolman, Malachi; Thames, Howard D.; Mason, Kathy A.

    2016-01-01

    We explore the relationship among experimental design, parameter estimation, and systematic error in sloppy models. We show that the approximate nature of mathematical models poses challenges for experimental design in sloppy models. In many models of complex biological processes, it is unknown which physical mechanisms must be included to explain system behaviors. As a consequence, models are often overly complex, with many practically unidentifiable parameters. Furthermore, which mechanisms are relevant or irrelevant varies among experiments. By selecting complementary experiments, experimental design may inadvertently make details that were omitted from the model become relevant. When this occurs, the model will have a large systematic error and fail to give a good fit to the data. We use a simple hyper-model of model error to quantify a model's discrepancy and apply it to two models of complex biological processes (EGFR signaling and DNA repair) with optimally selected experiments. We find that although parameters may be accurately estimated, the discrepancy in the model renders it less predictive than it was in the sloppy regime where systematic error is small. We introduce the concept of a sloppy system: a sequence of models of increasing complexity that become sloppy in the limit of microscopic accuracy. We explore the limits of accurate parameter estimation in sloppy systems and argue that identifying the underlying mechanisms controlling system behavior is better approached by considering a hierarchy of models of varying detail rather than by focusing on parameter estimation in a single model. PMID:27923060
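
    The notion of sloppiness can be made concrete with a tiny worked example: for a two-parameter model y(t) = exp(-k1 t) + exp(-k2 t) with nearly degenerate decay rates, the eigenvalues of the sensitivity matrix J^T J span orders of magnitude, so one parameter combination is far harder to identify than the other. This is an illustrative sketch, not the paper's hyper-model:

    ```python
    import numpy as np

    # Toy model y(t) = exp(-k1*t) + exp(-k2*t) with nearly degenerate decay
    # rates: the eigenvalues of J^T J (J = sensitivity of y to k1, k2) span
    # several decades, so one parameter direction is nearly unidentifiable.
    t = np.linspace(0.0, 5.0, 50)
    k1, k2 = 1.0, 1.05
    J = np.column_stack([-t * np.exp(-k1 * t),   # dy/dk1
                         -t * np.exp(-k2 * t)])  # dy/dk2
    eigvals = np.linalg.eigvalsh(J.T @ J)
    ratio = eigvals.max() / eigvals.min()
    assert ratio > 100.0   # stiff vs. sloppy directions differ by > 2 decades
    ```

    The stiff eigendirection (roughly k1 + k2) is well constrained by data, while the sloppy one (roughly k1 - k2) is not, which is the regime the abstract describes.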

  18. Digital Rock Physics of hydrate-bearing sediments: Determination of effective elastic properties on the microscale

    NASA Astrophysics Data System (ADS)

    Sell, Kathleen; Saenger, Erik H.; Quintal, Beatriz; Enzmann, Frieder; Kersten, Michael

    2017-04-01

    To date, very little is known about the distribution of natural gas hydrates in sedimentary matrices and its influence on the seismic properties of the host rock, in particular at low hydrate concentration. Digital rock physics offers a unique approach to this issue, yet requires good-quality, high-resolution 3D representations for the accurate modelling of petrophysical and transport properties. Although such models are readily available via in-situ synchrotron radiation X-ray tomography, the analysis of such data demands complex workflows and high computational power to obtain valuable results. More recently, digital rock physics has also taken on data from a fairly new group of techniques focused on in-situ studies recreating complex settings that cannot be easily accessed by conventional means. Here, we present a best-practice procedure complementing high-resolution synchrotron-tomography data of hydrate-bearing sedimentary matrices from Chaouachi et al. (2015) with data post-processing, including image enhancement and segmentation, as well as exemplary numerical simulations of acoustic wave propagation in 3D on realistic rock using the derived results. A combination of the tomography and 3D modelling opens a path to a more reliable deduction of the properties of gas hydrate-bearing sediments without reliance on idealised and frequently imprecise models (Sell et al., 2016). The advantage of this method over traditional, often oversimplified models lies in a more faithful description of the complex pore geometries and microstructures found in natural formations (Andrä et al., 2013b, a). References: Chaouachi, M., Falenty, A., Sell, K., Enzmann, F., Kersten, M., Haberthür, D., and Kuhs, W. F.: Microstructural evolution of gas hydrates in sedimentary matrices observed with synchrotron x-ray computed tomographic microscopy, Geochem. Geophy. Geosy., 16, 1711-1722, 2015. Sell, K., Saenger, E. H., Falenty, A., Chaouachi, M., Haberthür, D., Enzmann, F., Kuhs, W. F., and Kersten, M.: On the path to the digital rock physics of gas hydrate-bearing sediments - processing of in situ synchrotron-tomography data, Solid Earth, 7(4), 1243-1258, 2016. Andrä, H., Combaret, N., Dvorkin, J., Glatt, E., Han, J., Kabel, M., Keehm, Y., Krzikalla, F., Lee, M., Madonna, C., Marsh, M., Mukerji, T., Saenger, E. H., Sain, R., Saxena, N., Ricker, S., Wiegmann, A., and Zhan, X.: Digital rock physics benchmarks - Part II: Computing effective properties, Comput. Geosci., 50, 33-43, 2013a. Andrä, H., Combaret, N., Dvorkin, J., Glatt, E., Han, J., Kabel, M., Keehm, Y., Krzikalla, F., Lee, M., Madonna, C., Marsh, M., Mukerji, T., Saenger, E. H., Sain, R., Saxena, N., Ricker, S., Wiegmann, A., and Zhan, X.: Digital rock physics benchmarks - Part I: Imaging and segmentation, Comput. Geosci., 50, 25-32, 2013b.

  19. Close-range laser scanning in forests: towards physically based semantics across scales.

    PubMed

    Morsdorf, F; Kükenbrink, D; Schneider, F D; Abegg, M; Schaepman, M E

    2018-04-06

    Laser scanning with its unique measurement concept holds the potential to revolutionize the way we assess and quantify three-dimensional vegetation structure. Modern laser systems used at close range, be it on terrestrial, mobile or unmanned aerial platforms, provide dense and accurate three-dimensional data whose information just waits to be harvested. However, the transformation of such data into information is not as straightforward as for airborne and space-borne approaches, where typically empirical models are built using ground truth of target variables. Simpler variables, such as diameter at breast height, can be readily derived and validated. More complex variables, e.g. leaf area index, need a thorough understanding and consideration of the physical particularities of the measurement process and semantic labelling of the point cloud. Quantified structural models provide a framework for such labelling by deriving stem and branch architecture, a basis for many of the more complex structural variables. The physical information of the laser scanning process is still underused, and we show how it could play a vital role in conjunction with three-dimensional radiative transfer models to shape the information retrieval methods of the future. Using such a combined forward and physically based approach will make methods robust and transferable. In addition, it avoids replacing observer bias from field inventories with instrument bias from different laser instruments. Still, an intensive dialogue with the users of the derived information is mandatory to potentially re-design structural concepts and variables so that they profit most from the rich data that close-range laser scanning provides.

  20. 25 Years of Self-Organized Criticality: Solar and Astrophysics

    NASA Astrophysics Data System (ADS)

    Aschwanden, Markus J.; Crosby, Norma B.; Dimitropoulou, Michaila; Georgoulis, Manolis K.; Hergarten, Stefan; McAteer, James; Milovanov, Alexander V.; Mineshige, Shin; Morales, Laura; Nishizuka, Naoto; Pruessner, Gunnar; Sanchez, Raul; Sharma, A. Surja; Strugarek, Antoine; Uritsky, Vadim

    2016-01-01

    Shortly after the seminal paper "Self-Organized Criticality: An explanation of 1/f noise" by Bak et al. (1987), the idea was applied to solar physics, in "Avalanches and the Distribution of Solar Flares" by Lu and Hamilton (1991). In the following years, an inspiring cross-fertilization from complexity theory to solar and astrophysics took place, where the SOC concept was initially applied to solar flares, stellar flares, and magnetospheric substorms, and later extended to the radiation belt, the heliosphere, lunar craters, the asteroid belt, the rings of Saturn, pulsar glitches, soft X-ray repeaters, blazars, black-hole objects, cosmic rays, and boson clouds. SOC concepts have been applied through numerical cellular automaton simulations, through analytical calculations of statistical (power-law-like) distributions based on physical scaling laws, and through observational tests of theoretically predicted size distributions and waiting time distributions. Attempts have been made to import physical models into the numerical SOC toy models, such as the discretization of magneto-hydrodynamic (MHD) processes. The novel applications also stimulated vigorous debates about the discrimination between SOC models, SOC-like, and non-SOC processes, such as phase transitions, turbulence, random-walk diffusion, percolation, branching processes, network theory, chaos theory, fractality, multi-scale, and other complexity phenomena. We review SOC studies from the last 25 years and highlight new trends, open questions, and future challenges, as discussed during two recent ISSI workshops on this theme.
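
    The numerical cellular automaton simulations mentioned above are typified by the original Bak-Tang-Wiesenfeld sandpile, which can be sketched in a few lines; the grid size and drop count here are illustrative choices, not parameters from any of the reviewed studies:

    ```python
    import numpy as np

    def avalanche_sizes(n=11, drops=2000, seed=1):
        """Bak-Tang-Wiesenfeld sandpile: drop grains at random sites; any
        site holding >= 4 grains topples, sending one grain to each
        neighbour (grains fall off the open boundary). Returns the number
        of topplings triggered by each dropped grain."""
        rng = np.random.default_rng(seed)
        z = np.zeros((n, n), dtype=int)
        sizes = []
        for _ in range(drops):
            i, j = rng.integers(0, n, size=2)
            z[i, j] += 1
            topples = 0
            while (z >= 4).any():
                for a, b in zip(*np.where(z >= 4)):
                    z[a, b] -= 4
                    topples += 1
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        if 0 <= a + da < n and 0 <= b + db < n:
                            z[a + da, b + db] += 1
            sizes.append(topples)
        return np.array(sizes)

    s = avalanche_sizes()
    assert s.min() == 0 and s.max() > 10   # most drops do nothing; rare large avalanches
    ```

    Collecting `sizes` over many drops and plotting its histogram on log-log axes yields the heavy-tailed avalanche-size statistics that motivated the SOC programme.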

  1. A Statistician's View of Upcoming Grand Challenges

    NASA Astrophysics Data System (ADS)

    Meng, Xiao Li

    2010-01-01

    In this session we have seen some snapshots of the broad spectrum of challenges, in this age of huge, complex, computer-intensive models, data, instruments, and questions. These challenges bridge astronomy at many wavelengths, basic physics, machine learning, and statistics. At one end of our spectrum, we think of 'compressing' the data with non-parametric methods. This raises the question of creating 'pseudo-replicas' of the data for uncertainty estimates. What would be involved in, e.g., bootstrap and related methods? Somewhere in the middle are non-parametric methods for encapsulating the uncertainty information. At the far end, we find more model-based approaches, with the physics model embedded in the likelihood and analysis. The other distinctive problem is the 'black-box' problem, where one has a complicated, e.g. fundamental physics-based, computer code, or 'black box', and one needs to know how changing the parameters at input -- due to uncertainties of any kind -- will map to changes in the output. All of these connect to challenges in complexity of data and computation speed. Dr. Meng will highlight ways to 'cut corners' with advanced computational techniques, such as Parallel Tempering and Equal Energy methods. There are also cautionary tales of running automated analysis on real data -- where "30 sigma" outliers due to data artifacts can be more common than the astrophysical event of interest.
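
    Parallel Tempering, one of the corner-cutting techniques named above, can be sketched on a toy double-well energy. The temperatures, proposal width and swap schedule below are illustrative choices for the sketch, not a recommended setup:

    ```python
    import math, random

    def parallel_tempering(steps=20000, temps=(0.05, 1.0), seed=0):
        """Two Metropolis chains on the double-well E(x) = (x^2 - 1)^2,
        with periodic replica-swap moves; returns cold-chain samples."""
        rng = random.Random(seed)
        E = lambda x: (x * x - 1.0) ** 2
        x = [1.0, 1.0]                        # one walker per temperature
        cold = []
        for step in range(steps):
            for i, T in enumerate(temps):     # local Metropolis move per chain
                prop = x[i] + rng.gauss(0.0, 0.3)
                if rng.random() < math.exp(min(0.0, -(E(prop) - E(x[i])) / T)):
                    x[i] = prop
            if step % 10 == 0:                # attempt a replica swap
                d = (1.0 / temps[0] - 1.0 / temps[1]) * (E(x[0]) - E(x[1]))
                if rng.random() < math.exp(min(0.0, d)):
                    x[0], x[1] = x[1], x[0]
            cold.append(x[0])
        return cold

    xs = parallel_tempering()
    # the cold chain visits both wells despite the barrier at x = 0
    assert min(xs) < -0.5 and max(xs) > 0.5
    ```

    The hot chain crosses the barrier freely, and swap moves hand those crossings to the cold chain, which on its own would remain stuck in one mode.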

  2. When physics is not "just physics": complexity science invites new measurement frames for exploring the physics of cognitive and biological development.

    PubMed

    Kelty-Stephen, Damian; Dixon, James A

    2012-01-01

    The neurobiological sciences have struggled to resolve the physical foundations for biological and cognitive phenomena with a suspicion that biological and cognitive systems, capable of exhibiting and contributing to structure within themselves and through their contexts, are fundamentally distinct or autonomous from purely physical systems. Complexity science offers new physics-based approaches to explaining biological and cognitive phenomena. In response to controversy over whether complexity science might seek to "explain away" biology and cognition as "just physics," we propose that complexity science serves as an application of recent advances in physics to phenomena in biology and cognition without reducing or undermining the integrity of the phenomena to be explained. We highlight that physics is, like the neurobiological sciences, an evolving field and that the threat of reduction is overstated. We propose that distinctions between biological and cognitive systems from physical systems are pretheoretical and thus optional. We review our own work applying insights from post-classical physics regarding turbulence and fractal fluctuations to the problems of developing cognitive structure. Far from hoping to reduce biology and cognition to "nothing but" physics, we present our view that complexity science offers new explanatory frameworks for considering physical foundations of biological and cognitive phenomena.

  3. On the predictivity of pore-scale simulations: Estimating uncertainties with multilevel Monte Carlo

    NASA Astrophysics Data System (ADS)

    Icardi, Matteo; Boccardo, Gianluca; Tempone, Raúl

    2016-09-01

    A fast method with tunable accuracy is proposed to estimate errors and uncertainties in pore-scale and Digital Rock Physics (DRP) problems. The overall predictivity of these studies can be, in fact, hindered by many factors including sample heterogeneity, computational and imaging limitations, model inadequacy and not perfectly known physical parameters. The typical objective of pore-scale studies is the estimation of macroscopic effective parameters such as permeability, effective diffusivity and hydrodynamic dispersion. However, these are often non-deterministic quantities (i.e., results obtained for a specific pore-scale sample and setup are not totally reproducible by another "equivalent" sample and setup). The stochastic nature can arise due to the multi-scale heterogeneity, the computational and experimental limitations in considering large samples, and the complexity of the physical models. These approximations, in fact, introduce an error that, being dependent on a large number of complex factors, can be modeled as random. We propose a general simulation tool, based on multilevel Monte Carlo, that can drastically reduce the computational cost of computing accurate statistics of effective parameters and other quantities of interest under any of these random errors. This is, to our knowledge, the first attempt to include Uncertainty Quantification (UQ) in pore-scale physics and simulation. The method can also provide estimates of the discretization error, and it is tested on three-dimensional transport problems in heterogeneous materials, where the sampling procedure is done by generation algorithms able to reproduce realistic consolidated and unconsolidated random sphere and ellipsoid packings and arrangements. A totally automatic workflow is developed in an open-source code [1] that includes rigid-body physics and random packing algorithms, unstructured mesh discretization, finite volume solvers, extrapolation and post-processing techniques. 
The proposed method can be efficiently used in many porous media applications for problems such as stochastic homogenization/upscaling, propagation of uncertainty from microscopic fluid and rock properties to macro-scale parameters, robust estimation of Representative Elementary Volume size for arbitrary physics.
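
    The multilevel Monte Carlo idea underlying the method, namely telescoping the expectation over levels of increasing accuracy while spending geometrically fewer samples on the expensive fine levels, can be sketched on a toy integrand. Here `f_level` is a hypothetical stand-in for a pore-scale solver output, not the code of the paper:

    ```python
    import numpy as np

    def f_level(x, level):
        """Hypothetical stand-in for a pore-scale solver output: a smooth
        functional with an O(h) discretization error at step size h."""
        h = 2.0 ** -level
        return np.sin(x) + h * x

    def mlmc(levels=8, n0=40000, seed=0):
        """Telescoping MLMC estimate of E[f] for x ~ U(0,1): coarse-level
        mean plus corrections E[f_l - f_{l-1}] on shared samples."""
        rng = np.random.default_rng(seed)
        est = 0.0
        for l in range(levels + 1):
            n = max(n0 // 4 ** l, 100)        # geometrically fewer fine samples
            x = rng.uniform(0.0, 1.0, n)
            if l == 0:
                est += f_level(x, 0).mean()
            else:
                est += (f_level(x, l) - f_level(x, l - 1)).mean()  # coupled pair
            # the level-l correction has small variance because both
            # evaluations use the same random inputs x
        return est

    true_mean = 1.0 - np.cos(1.0)             # exact E[sin(x)], x ~ U(0,1)
    assert abs(mlmc() - true_mean) < 0.02
    ```

    The key design choice is that each correction term evaluates the fine and coarse approximations on the same samples, so its variance, and hence the required sample count, shrinks with the level.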

  4. Fast Physically Accurate Rendering of Multimodal Signatures of Distributed Fracture in Heterogeneous Materials.

    PubMed

    Visell, Yon

    2015-04-01

    This paper proposes a fast, physically accurate method for synthesizing multimodal, acoustic and haptic, signatures of distributed fracture in quasi-brittle heterogeneous materials, such as wood, granular media, or other fiber composites. Fracture processes in these materials are challenging to simulate with existing methods, due to the prevalence of large numbers of disordered, quasi-random spatial degrees of freedom, representing the complex physical state of a sample over the geometric volume of interest. Here, I develop an algorithm for simulating such processes, building on a class of statistical lattice models of fracture that have been widely investigated in the physics literature. This algorithm is enabled through a recently published mathematical construction based on the inverse transform method of random number sampling. It yields a purely time domain stochastic jump process representing stress fluctuations in the medium. The latter can be readily extended by a mean field approximation that captures the averaged constitutive (stress-strain) behavior of the material. Numerical simulations and interactive examples demonstrate the ability of these algorithms to generate physically plausible acoustic and haptic signatures of fracture in complex, natural materials interactively at audio sampling rates.
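
    The inverse transform method referred to above maps uniform random numbers through an inverse CDF. A minimal sketch with an exponential waiting-time distribution follows; the lattice-model statistics in the paper would supply a different CDF:

    ```python
    import math, random

    def sample_waiting_times(rate, n, seed=42):
        """Inverse transform sampling: u ~ U(0,1) mapped through the
        inverse CDF of an exponential distribution, here read as waiting
        times between stress-drop events."""
        rng = random.Random(seed)
        return [-math.log(1.0 - rng.random()) / rate for _ in range(n)]

    w = sample_waiting_times(rate=2.0, n=50000)
    mean = sum(w) / len(w)
    assert abs(mean - 0.5) < 0.02              # E[wait] = 1/rate
    ```

    Because each draw costs a single logarithm, the construction supports the interactive, audio-rate synthesis the abstract describes.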

  5. Complex networks as a unified framework for descriptive analysis and predictive modeling in climate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steinhaeuser, Karsten J K; Chawla, Nitesh; Ganguly, Auroop R

    The analysis of climate data has relied heavily on hypothesis-driven statistical methods, while projections of future climate are based primarily on physics-based computational models. However, in recent years a wealth of new datasets has become available. Therefore, we take a more data-centric approach and propose a unified framework for studying climate, with an aim towards characterizing observed phenomena as well as discovering new knowledge in the climate domain. Specifically, we posit that complex networks are well-suited for both descriptive analysis and predictive modeling tasks. We show that the structural properties of climate networks have useful interpretation within the domain. Further, we extract clusters from these networks and demonstrate their predictive power as climate indices. Our experimental results establish that the network clusters are statistically significantly better predictors than clusters derived using a more traditional clustering approach. Using complex networks as data representation thus enables the unique opportunity for descriptive and predictive modeling to inform each other.
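
    The network construction implied above can be sketched as follows: correlate the time series at each pair of nodes, threshold to obtain an adjacency matrix, and read clusters off the connected components. The threshold and the synthetic series are illustrative, not the methodology of the paper:

    ```python
    import numpy as np

    def correlation_network(series, threshold=0.5):
        """Link nodes whose time series correlate above the threshold and
        label connected components (candidate climate-index clusters)."""
        c = np.corrcoef(series)
        adj = (np.abs(c) > threshold) & ~np.eye(len(c), dtype=bool)
        labels = -np.ones(len(c), dtype=int)
        for start in range(len(c)):           # flood-fill each component
            if labels[start] >= 0:
                continue
            frontier, labels[start] = [start], start
            while frontier:
                node = frontier.pop()
                for nb in np.where(adj[node])[0]:
                    if labels[nb] < 0:
                        labels[nb] = start
                        frontier.append(nb)
        return labels

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 10.0, 200)
    series = np.array([np.sin(t) + 0.1 * rng.standard_normal(200) for _ in range(3)]
                      + [np.cos(3 * t) + 0.1 * rng.standard_normal(200) for _ in range(3)])
    labels = correlation_network(series)
    assert len(set(labels)) == 2               # the two coherent groups are recovered
    ```

    In the climate setting the nodes would be grid points and the cluster-mean series would serve as candidate indices.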

  6. Multiagent model and mean field theory of complex auction dynamics

    NASA Astrophysics Data System (ADS)

    Chen, Qinghua; Huang, Zi-Gang; Wang, Yougui; Lai, Ying-Cheng

    2015-09-01

    Recent years have witnessed a growing interest in analyzing a variety of socio-economic phenomena using methods from statistical and nonlinear physics. We study a class of complex systems arising from economics, lowest unique bid auction (LUBA) systems, a recently emerged class of online auction games. Through analyzing large empirical data sets of LUBA, we identify a general feature of the bid price distribution: an inverted J-shaped function with exponential decay in the large bid price region. To account for the distribution, we propose a multi-agent model in which each agent bids stochastically in the field of the winner's attractiveness, and we develop a theoretical framework to obtain analytic solutions of the model based on mean field analysis. The theory produces bid-price distributions that are in excellent agreement with those from the real data. Our model and theory capture the essential features of human behavior in the competitive environment exemplified by LUBA, and may provide significant quantitative insights into complex socio-economic phenomena.
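
    The LUBA winner rule itself is simple to state in code; the bid lists below are illustrative:

    ```python
    from collections import Counter

    def luba_winner(bids):
        """Winning bid in a lowest unique bid auction: the smallest amount
        chosen by exactly one bidder (None if no bid is unique)."""
        counts = Counter(bids)
        unique = [b for b in bids if counts[b] == 1]
        return min(unique) if unique else None

    assert luba_winner([1, 1, 2, 3, 3, 4]) == 2   # 1 is lowest but not unique
    assert luba_winner([5, 5, 5]) is None         # no unique bid at all
    ```

    It is this rule, where the cheapest bid loses unless it is also unique, that produces the strategic tension behind the inverted J-shaped bid distribution.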

  7. Models and observations of Arctic melt ponds

    NASA Astrophysics Data System (ADS)

    Golden, K. M.

    2016-12-01

    During the Arctic melt season, the sea ice surface undergoes a striking transformation from vast expanses of snow-covered ice to complex mosaics of ice and melt ponds. Sea ice albedo, a key parameter in climate modeling, is largely determined by the complex evolution of melt pond configurations. In fact, ice-albedo feedback has played a significant role in the recent declines of the summer Arctic sea ice pack. However, understanding melt pond evolution remains a challenge to improving climate projections. It has been found that as the ponds grow and coalesce, the fractal dimension of their boundaries undergoes a transition from 1 to about 2 at a critical pond size of roughly 100 square meters in area. As the ponds evolve they take on complex, self-similar shapes with boundaries resembling space-filling curves. I will outline how mathematical models of composite materials and statistical physics, such as percolation and Ising models, are being used to describe this evolution and predict key geometrical parameters that agree very closely with observations.

  8. Is complexity of work associated with risk of dementia? The Canadian Study of Health And Aging.

    PubMed

    Kröger, Edeltraut; Andel, Ross; Lindsay, Joan; Benounissa, Zohra; Verreault, René; Laurin, Danielle

    2008-04-01

    The authors evaluated the association of complexity of work with data, people, and things with the incidence of dementia, Alzheimer's disease, and vascular dementia in the Canadian Study of Health and Aging, while adjusting for work-related physical activity. The Canadian Study of Health and Aging is a 10-year population study, from 1991 to 2001, of a representative sample of persons aged 65 years or older. Lifetime job history allowed application of complexity scores and classification of work-related physical activity. Analyses included 3,557 subjects, of whom 400 were incident dementia cases, including 299 with Alzheimer's disease and 93 with vascular dementia. In fully adjusted Cox regression models, high complexity of work with people or things reduced risk of dementia (hazard ratios were 0.66 (95% confidence interval: 0.44, 0.98) and 0.72 (95% confidence interval: 0.52, 0.99), respectively) but not Alzheimer's disease. For vascular dementia, hazard ratios were 0.36 (95% confidence interval: 0.15, 0.90) for high complexity of work with people and 0.50 (95% confidence interval: 0.25, 1.00) for high complexity of work with things. Subgroup analyses according to median duration (23 years) of principal occupation showed that associations with complexity varied according to duration of employment. High complexity of work appears to be associated with risk of dementia, but effects may vary according to subtype.

  9. Large eddy simulation modeling of particle-laden flows in complex terrain

    NASA Astrophysics Data System (ADS)

    Salesky, S.; Giometto, M. G.; Chamecki, M.; Lehning, M.; Parlange, M. B.

    2017-12-01

    The transport, deposition, and erosion of heavy particles over complex terrain in the atmospheric boundary layer is an important process for hydrology, air quality forecasting, biology, and geomorphology. However, in situ observations can be challenging in complex terrain due to spatial heterogeneity. Furthermore, there is a need to develop numerical tools that can accurately represent the physics of these multiphase flows over complex surfaces. We present a new numerical approach to accurately model the transport and deposition of heavy particles in complex terrain using large eddy simulation (LES). Particle transport is represented through solution of the advection-diffusion equation including terms that represent gravitational settling and inertia. The particle conservation equation is discretized in a cut-cell finite volume framework in order to accurately enforce mass conservation. Simulation results will be validated with experimental data, and numerical considerations required to enforce boundary conditions at the surface will be discussed. Applications will be presented in the context of snow deposition and transport, as well as urban dispersion.
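
    A one-dimensional column analogue of the advection-diffusion treatment with gravitational settling can be sketched with an explicit upwind finite-volume step. All parameter values are illustrative, and the cut-cell machinery for complex terrain is omitted:

    ```python
    import numpy as np

    def settle_column(nz=50, dz=1.0, dt=0.1, nt=200, ws=0.5, K=0.2):
        """Explicit 1D column: upwind gravitational settling (toward index
        0) plus turbulent diffusion for a heavy-particle concentration.
        All values are illustrative, not taken from the study."""
        c = np.zeros(nz)
        c[nz // 2] = 1.0 / dz                 # unit mass released mid-column
        for _ in range(nt):
            flux = ws * c                     # settling flux leaving each cell
            c = c - dt / dz * flux
            c[:-1] += dt / dz * flux[1:]      # gain from the cell above
            lap = np.zeros(nz)                # diffusion, zero-flux boundaries
            lap[1:-1] = c[2:] - 2.0 * c[1:-1] + c[:-2]
            lap[-1] = c[-2] - c[-1]
            c = c + dt * K / dz ** 2 * lap
        return c

    c = settle_column()
    z = np.arange(50) * 1.0
    com = (z * c).sum() / c.sum()             # centre of mass of suspended mass
    assert com < 20.0                         # settled below the release height (25)
    ```

    The finite-volume form makes the mass bookkeeping explicit, which is the property the cut-cell discretization described above is designed to preserve on irregular terrain-following cells.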

  10. Use of Dynamic Traffic Assignment in FSUTMS in Support of Transportation Planning in Florida [Summary

    DOT National Transportation Integrated Search

    2012-01-01

    Transportation planning is based on the physical structure of roadway networks and, less tangibly, on choices individuals make about their transportation needs and use of the roads. For a task this complex, computer modeling is essential. I...

  11. One-Dimensional Transport with Equilibrium Chemistry (OTEQ) - A Reactive Transport Model for Streams and Rivers

    USGS Publications Warehouse

    Runkel, Robert L.

    2010-01-01

    OTEQ is a mathematical simulation model used to characterize the fate and transport of waterborne solutes in streams and rivers. The model is formed by coupling a solute transport model with a chemical equilibrium submodel. The solute transport model is based on OTIS, a model that considers the physical processes of advection, dispersion, lateral inflow, and transient storage. The equilibrium submodel is based on MINTEQ, a model that considers the speciation and complexation of aqueous species, acid-base reactions, precipitation/dissolution, and sorption. Within OTEQ, reactions in the water column may result in the formation of solid phases (precipitates and sorbed species) that are subject to downstream transport and settling processes. Solid phases on the streambed may also interact with the water column through dissolution and sorption/desorption reactions. Consideration of both mobile (waterborne) and immobile (streambed) solid phases requires a unique set of governing differential equations and solution techniques that are developed herein. The partial differential equations describing physical transport and the algebraic equations describing chemical equilibria are coupled using the sequential iteration approach. The model's ability to simulate pH, precipitation/dissolution, and pH-dependent sorption provides a means of evaluating the complex interactions between instream chemistry and hydrologic transport at the field scale. This report details the development and application of OTEQ. Sections of the report describe model theory, input/output specifications, model applications, and installation instructions. OTEQ may be obtained over the Internet at http://water.usgs.gov/software/OTEQ.
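
    The sequential iteration coupling of transport and equilibrium can be caricatured by operator splitting: advect the dissolved phase, then re-impose a local equilibrium. Here a simple solubility cap stands in for the full MINTEQ speciation solve; this is a sketch of the coupling pattern, not the OTEQ code:

    ```python
    import numpy as np

    def transport_with_equilibrium(n=100, nt=300, u=1.0, dt=0.5, dx=1.0,
                                   c_in=2.0, c_eq=1.0):
        """Operator splitting: upwind advection of the dissolved phase,
        then a local equilibrium step in which any concentration above
        the solubility c_eq precipitates into an immobile solid phase."""
        c = np.zeros(n)                        # dissolved concentration
        s = np.zeros(n)                        # accumulated precipitate
        for _ in range(nt):
            c[1:] = c[1:] - u * dt / dx * (c[1:] - c[:-1])   # upwind advection
            c[0] = c_in                        # over-saturated inflow
            excess = np.clip(c - c_eq, 0.0, None)
            s += excess                        # precipitation
            c -= excess                        # water held at equilibrium
        return c, s

    c, s = transport_with_equilibrium()
    assert c.max() <= 1.0 + 1e-12              # water column capped at solubility
    assert s[0] > 0.0                          # solid builds up at the inflow
    ```

    OTEQ's actual scheme iterates the transport and equilibrium solves within each time step rather than applying them once in sequence, but the division of labour between the two sub-problems is the same.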

  12. Position-based dynamic of a particle system: a configurable algorithm to describe complex behaviour of continuum material starting from swarm robotics

    NASA Astrophysics Data System (ADS)

    dell'Erba, Ramiro

    2018-04-01

    In a previous work, we considered a two-dimensional lattice of particles and calculated its time evolution by using an interaction law based on the spatial positions of the particles themselves. The model reproduced the behaviour of deformable bodies according to both the standard Cauchy model and second gradient theory; this success led us to use the method in more complex cases. This work is the natural evolution of the previous one: we consider energy aspects and coherence with the principle of Saint-Venant, and we begin to develop a more general tool that can be adapted to different physical phenomena, supporting complex effects such as lateral contraction, anisotropy and elastoplasticity.
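
    A one-dimensional caricature of such a position-based interaction law, in the spirit of position-based dynamics rather than the paper's exact scheme, projects each neighbour-pair constraint toward a lattice rest length:

    ```python
    import numpy as np

    def relax_chain(n=10, rest=1.0, iters=200):
        """Position-based relaxation of a 1D particle chain: each
        neighbour-pair constraint is projected toward the rest length by
        moving both particles symmetrically; no forces are integrated."""
        x = np.cumsum(np.full(n, 1.5))        # chain stretched to spacing 1.5
        x -= x[0]
        for _ in range(iters):
            for i in range(n - 1):
                corr = 0.5 * ((x[i + 1] - x[i]) - rest)
                x[i] += corr                  # symmetric pairwise projection
                x[i + 1] -= corr
        return x

    x = relax_chain()
    gaps = np.diff(x)
    assert np.allclose(gaps, 1.0, atol=1e-3)  # lattice relaxes to rest spacing
    ```

    The update works directly on positions, with no explicit force or stress computation, which is what makes the approach easy to reconfigure for different constitutive behaviours.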

  13. On the shelf resonances of the Gulf of Carpentaria and the Arafura Sea

    NASA Astrophysics Data System (ADS)

    Webb, D. J.

    2012-09-01

    A numerical model is used to investigate the resonances of the Gulf of Carpentaria and the Arafura Sea, and the additional insights that come from extending the analysis into the complex angular velocity plane. When the model is forced at the shelf edge with physically realistic real values of the angular velocity, the response functions at points within the region show maxima and other behaviour which imply that resonances are involved but provide little additional information. The study is then extended to complex angular velocities, and the results then show a clear pattern of gravity wave and Rossby wave like resonances. The properties of the resonances are investigated and used to reinterpret the response at real values of angular velocity. It is found that in some regions the response is dominated by modes trapped between the shelf edge and the coast or between opposing coastlines. In other regions the resonances show cooperative behaviour, possibly indicating the importance of other physical processes.

  14. Network-Physics (NP) BEC DIGITAL(#)-VULNERABILITY; ``Q-Computing"=Simple-Arithmetic;Modular-Congruences=SignalXNoise PRODUCTS=Clock-model;BEC-Factorization;RANDOM-# Definition;P=/=NP TRIVIAL Proof!!!

    NASA Astrophysics Data System (ADS)

    Pi, E. I.; Siegel, E.

    2010-03-01

    Siegel[AMS Natl.Mtg.(2002)-Abs.973-60-124] digits logarithmic-law inversion to ONLY BEQS BEC:Quanta/Bosons=#: EMP-like SEVERE VULNERABILITY of ONLY #-networks(VS.ANALOG INvulnerability) via Barabasi NP(VS.dynamics[Not.AMS(5/2009)] critique);(so called)``quantum-computing''(QC) = simple-arithmetic (sans division); algorithmic complexities: INtractibility/UNdecidability/INefficiency/NONcomputability/HARDNESS(so MIScalled) ``noise''-induced-phase-transition(NIT)ACCELERATION: Cook-Levin theorem Reducibility = RG fixed-points; #-Randomness DEFINITION via WHAT? Query(VS. Goldreich[Not.AMS(2002)] How? mea culpa)= ONLY MBCS hot-plasma v #-clumping NON-random BEC; Modular-Arithmetic Congruences = Signal x Noise PRODUCTS = clock-model; NON-Shor[Physica A,341,586(04)] BEC logarithmic-law inversion factorization: Watkins #-theory U statistical-physics); P=/=NP C-S TRIVIAL Proof: Euclid!!! [(So Miscalled) computational-complexity J-O obviation (3 millennia AGO geometry: NO:CC,``CS'');``Feet of Clay!!!'']; Query WHAT?: Definition: (so MIScalled)``complexity''=UTTER-SIMPLICITY!! v COMPLICATEDNESS MEASURE(S).

  15. Global Coordinates and Exact Aberration Calculations Applied to Physical Optics Modeling of Complex Optical Systems

    NASA Astrophysics Data System (ADS)

    Lawrence, G.; Barnard, C.; Viswanathan, V.

    1986-11-01

    Historically, wave optics computer codes have been paraxial in nature. Folded systems could be modeled by "unfolding" the optical system. Calculation of optical aberrations is, in general, left for the analyst to do with off-line codes. While such paraxial codes were adequate for the simpler systems being studied 10 years ago, current problems such as phased arrays, ring resonators, coupled resonators, and grazing incidence optics require a major advance in analytical capability. This paper describes extension of the physical optics codes GLAD and GLAD V to include a global coordinate system and exact ray aberration calculations. The global coordinate system allows components to be positioned and rotated arbitrarily. Exact aberrations are calculated for components in aligned or misaligned configurations by using ray tracing to compute optical path differences and diffraction propagation. Optical path lengths between components and beam rotations in complex mirror systems are calculated accurately so that coherent interactions in phased arrays and coupled devices may be treated correctly.

  16. An Agent-Based Modeling Framework and Application for the Generic Nuclear Fuel Cycle

    NASA Astrophysics Data System (ADS)

    Gidden, Matthew J.

    Key components of a novel methodology and implementation of an agent-based, dynamic nuclear fuel cycle simulator, Cyclus, are presented. The nuclear fuel cycle is a complex, physics-dependent supply chain. To date, existing dynamic simulators have not treated constrained fuel supply, time-dependent, isotopic-quality based demand, or fuel fungibility particularly well. Utilizing an agent-based methodology that incorporates sophisticated graph theory and operations research techniques can overcome these deficiencies. This work describes a simulation kernel and agents that interact with it, highlighting the Dynamic Resource Exchange (DRE), the supply-demand framework at the heart of the kernel. The key agent-DRE interaction mechanisms are described, which enable complex entity interaction through the use of physics and socio-economic models. The translation of an exchange instance to a variant of the Multicommodity Transportation Problem, which can be solved feasibly or optimally, follows. An extensive investigation of solution performance and fidelity is then presented. Finally, recommendations for future users of Cyclus and the DRE are provided.

  17. A Variational Formalism for the Radiative Transfer Equation and a Geostrophic, Hydrostatic Atmosphere: Prelude to Model 3

    NASA Technical Reports Server (NTRS)

    Achtemeier, Gary L.

    1991-01-01

    The second step in development of MODEL III is summarized. It combines the four radiative transfer equations of the first step with the equations for a geostrophic and hydrostatic atmosphere. This step is intended to bring radiance into a three dimensional balance with wind, height, and temperature. The use of the geostrophic approximation in place of the full set of primitive equations allows for an easier evaluation of how the inclusion of the radiative transfer equation increases the complexity of the variational equations. Seven different variational formulations were developed for geostrophic, hydrostatic, and radiative transfer equations. The first derivation was too complex to yield solutions that were physically meaningful. For the remaining six derivations, the variational method gave the same physical interpretation (the observed brightness temperatures could provide no meaningful input to a geostrophic, hydrostatic balance) at least through the problem solving methodology used in these studies. The variational method is presented and the Euler-Lagrange equations rederived for the geostrophic, hydrostatic, and radiative transfer equations.

  18. Migration of cells in a social context

    PubMed Central

    Vedel, Søren; Tay, Savaş; Johnston, Darius M.; Bruus, Henrik; Quake, Stephen R.

    2013-01-01

    In multicellular organisms and complex ecosystems, cells migrate in a social context. Whereas this is essential for the basic processes of life, the influence of neighboring cells on the individual remains poorly understood. Previous work on isolated cells has observed a stereotypical migratory behavior characterized by short-time directional persistence with long-time random movement. We discovered a much richer dynamic in the social context, with significant variations in directionality, displacement, and speed, which are all modulated by local cell density. We developed a mathematical model based on the experimentally identified “cellular traffic rules” and basic physics that revealed that these emergent behaviors are caused by the interplay of single-cell properties and intercellular interactions, the latter being dominated by a pseudopod formation bias mediated by secreted chemicals and pseudopod collapse following collisions. The model demonstrates how aspects of complex biology can be explained by simple rules of physics and constitutes a rapid test bed for future studies of collective migration of individual cells. PMID:23251032

  20. A preliminary Monte Carlo study for the treatment head of a carbon-ion radiotherapy facility using TOPAS

    NASA Astrophysics Data System (ADS)

    Liu, Hongdong; Zhang, Lian; Chen, Zhi; Liu, Xinguo; Dai, Zhongying; Li, Qiang; Xu, Xie George

    2017-09-01

    In medical physics it is desirable to have a Monte Carlo code that is less complex and reliable, yet flexible for dose verification, optimization, and component design. TOPAS is a newly developed Monte Carlo simulation tool which combines the extensive radiation physics libraries available in the Geant4 code with easy-to-use geometry and support for visualization. Although TOPAS has been widely tested and verified in simulations of proton therapy, there has been no reported application to carbon ion therapy. To evaluate the feasibility and accuracy of TOPAS simulations for carbon ion therapy, a licensed TOPAS code (version 3_0_p1) was used to carry out a dosimetric study of therapeutic carbon ions. Depth-dose profiles based on different physics models were obtained and compared with measurements. It is found that the G4QMD model is at least as accurate as the TOPAS default BIC physics model for carbon ions, and when the energy is increased to relatively high levels such as 400 MeV/u, the G4QMD model shows preferable performance. Also, simulations of special components used in the treatment head at the Institute of Modern Physics facility were conducted to investigate the spread-out Bragg peak (SOBP) dose distribution in water. The physical dose of the SOBP in water was found to be consistent with the design aim of the 6 cm ridge filter.

  1. Motor Competence and its Effect on Positive Developmental Trajectories of Health.

    PubMed

    Robinson, Leah E; Stodden, David F; Barnett, Lisa M; Lopes, Vitor P; Logan, Samuel W; Rodrigues, Luis Paulo; D'Hondt, Eva

    2015-09-01

    In 2008, Stodden and colleagues took a unique developmental approach toward addressing the potential role of motor competence in promoting positive or negative trajectories of physical activity, health-related fitness, and weight status. The conceptual model proposed synergistic relationships among physical activity, motor competence, perceived motor competence, health-related physical fitness, and obesity with associations hypothesized to strengthen over time. At the time the model was proposed, limited evidence was available to support or refute the model hypotheses. Over the past 6 years, the number of investigations exploring these relationships has increased significantly. Thus, it is an appropriate time to examine published data that directly or indirectly relate to specific pathways noted in the conceptual model. Evidence indicates that motor competence is positively associated with perceived competence and multiple aspects of health (i.e., physical activity, cardiorespiratory fitness, muscular strength, muscular endurance, and a healthy weight status). However, questions related to the increased strength of associations across time and antecedent/consequent mechanisms remain. An individual's physical and psychological development is a complex and multifaceted process that synergistically evolves across time. Understanding the most salient factors that influence health and well-being and how relationships among these factors change across time is a critical need for future research in this area. This knowledge could aid in addressing the declining levels of physical activity and fitness along with the increasing rates of obesity across childhood and adolescence.

  2. Modelling the Burstiness of Complex Space Plasmas Using Linear Fractional Stable Motion

    NASA Astrophysics Data System (ADS)

    Watkins, N. W.; Rosenberg, S. J.; Chapman, S. C.; Sanchez, R.; Credgington, D.

    2009-12-01

    The Earth's magnetosphere is quite clearly “complex" in the everyday sense of the word. However, in the last 15 to 20 years there has been a growing thread in space physics (e.g. Freeman & Watkins [Science, 2002], Chapman & Watkins [Space Science Reviews, 2001]) using and developing some of the emerging science of complex systems (e.g. Sornette, 2nd Edition, 2004). A particularly well-studied set of system properties has been derived from those used in the study of critical phenomena, notably correlation functions, power spectra, distributions of bursts above a threshold, and so on (e.g. Watkins [Nonlinear Processes in Geophysics, 2002]). These have revealed behaviours familiar from many other complex systems, such as burstiness, long-range dependence, heavy-tailed probability distributions and so forth. The results of these studies are typically interpreted within existing paradigms, most notably self-organised criticality. However, just as in other developing areas of complexity science (Sornette, op. cit.; Watkins & Freeman [Science, 2008]), it is increasingly being realised that the diagnostics in use have not been extensively studied outside the context in which they were originally proposed. This means that, for example, it is not well established what the expected distribution of bursts above a fixed threshold will be for time series other than Brownian (or fractional Brownian) motion. We will describe some preliminary investigations (Watkins et al [Physical Review E, 2009]) into the burst distribution problem, using Linear Fractional Stable Motion as a controllable toy model of a process exhibiting both long-range dependence and heavy tails. A by-product of the work was a differential equation for LFSM (Watkins et al, op cit), which we also briefly discuss.
Current and future work will also focus on the thorny problem of distinguishing turbulence from SOC in natural datasets (Watkins et al; Uritsky et al [Physical Review Letters, 2009]) with limited dynamic range, an area which will also be briefly discussed.
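
    The notion of a "burst above a threshold" used above is easy to make concrete for ordinary Brownian motion, the reference case named in the abstract (LFSM itself would require heavy-tailed stable increments). The threshold value and series length below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(200_000))   # discrete Brownian motion
threshold = 0.0

# a burst = a maximal run of consecutive samples above the threshold;
# record each burst's duration and size (integrated exceedance)
above = x > threshold
edges = np.flatnonzero(np.diff(above.astype(int))) + 1   # state changes
segments = np.split(np.arange(len(x)), edges)
durations = np.array([len(s) for s in segments if above[s[0]]])
sizes = np.array([(x[s] - threshold).sum() for s in segments if above[s[0]]])
```

    For Brownian motion the burst durations are themselves heavy-tailed (first-return times follow a -3/2 power law), which is exactly the kind of null baseline one needs before interpreting burst statistics measured in data.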

  3. PREFACE: 2nd International Conference on Mathematical Modeling in Physical Sciences 2013 (IC-MSQUARE 2013)

    NASA Astrophysics Data System (ADS)

    2014-03-01

    The second International Conference on Mathematical Modeling in Physical Sciences (IC-MSQUARE) took place at Prague, Czech Republic, from Sunday 1 September to Thursday 5 September 2013. The Conference was attended by more than 280 participants and hosted about 400 oral, poster, and virtual presentations while counted more than 600 pre-registered authors. The second IC-MSQUARE consisted of different and diverging workshops and thus covered various research fields where Mathematical Modeling is used, such as Theoretical/Mathematical Physics, Neutrino Physics, Non-Integrable Systems, Dynamical Systems, Computational Nanoscience, Biological Physics, Computational Biomechanics, Complex Networks, Stochastic Modeling, Fractional Statistics, DNA Dynamics, Macroeconomics. The scientific program was rather heavy since after the Keynote and Invited Talks in the morning, three parallel sessions were running every day. However, according to all attendees, the program was excellent with high level of talks and the scientific environment was fruitful, thus all attendees had a creative time. We would like to thank the Keynote Speaker and the Invited Speakers for their significant contribution to IC-MSQUARE. We also would like to thank the Members of the International Advisory and Scientific Committees as well as the Members of the Organizing Committee. Further information on the editors, speakers and committees is available in the attached pdf.

  4. Unforced decadal fluctuations in a coupled model of the atmosphere and ocean mixed layer

    NASA Technical Reports Server (NTRS)

    Barnett, T. P.; Del Genio, A. D.; Ruedy, R. A.

    1992-01-01

    Global average temperature in a 100-year control run of a model used for greenhouse gas response simulations showed low-frequency natural variability comparable in magnitude to that observed over the last 100 years. The model variability was found to be barotropic in the atmosphere, and located in the tropical strip with largest values near the equator in the Pacific. The model variations were traced to complex, low-frequency interactions between the meridional sea surface temperature gradients in the eastern equatorial Pacific, clouds at both high and low levels, and features of the tropical atmospheric circulation. The variations in these and other model parameters appear to oscillate between two limiting climate states. The physical scenario accounting for the oscillations on decadal time scales is almost certainly not found in the real world on shorter time scales due to limited resolution and the omission of key physics (e.g., equatorial ocean dynamics) in the model. The real message is that models with dynamical limitations can still produce significant long-term variability. Only a thorough physical diagnosis of such simulations and comparisons with decadal-length data sets will allow one to decide if faith in the model results is, or is not, warranted.

  5. Modelling radiation fluxes in simple and complex environments: basics of the RayMan model.

    PubMed

    Matzarakis, Andreas; Rutz, Frank; Mayer, Helmut

    2010-03-01

    Short- and long-wave radiation flux densities absorbed by people have a significant influence on their energy balance. The heat effect of the absorbed radiation flux densities is parameterised by the mean radiant temperature. This paper presents the physical basis of the RayMan model, which simulates the short- and long-wave radiation flux densities from the three-dimensional surroundings in simple and complex environments. RayMan has the character of a freely available radiation and human-bioclimate model. The aim of the RayMan model is to calculate radiation flux densities, sunshine duration, shadow spaces and thermo-physiologically relevant assessment indices using only a limited number of meteorological and other input data. A comparison between measured and simulated values for global radiation and mean radiant temperature shows that the simulated data closely resemble measured data.
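
    The mean radiant temperature mentioned above is conventionally obtained by converting the absorbed short- and long-wave flux densities into the temperature of an equivalent black-body enclosure. A minimal sketch follows; the six-direction view factors, absorption coefficients and emissivity are typical textbook values for a standing person, not RayMan's internals.

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def mean_radiant_temperature(k_fluxes, l_fluxes, view_factors,
                             a_k=0.7, a_l=0.97, emissivity=0.97):
    """Mean radiant temperature (deg C) from directional short-wave (k)
    and long-wave (l) flux densities weighted by angular view factors."""
    s_str = sum(a_k * k * f for k, f in zip(k_fluxes, view_factors)) + \
            sum(a_l * l * f for l, f in zip(l_fluxes, view_factors))
    return (s_str / (emissivity * SIGMA)) ** 0.25 - 273.15

# sanity check: an isotropic 300 K long-wave field with no short-wave
# radiation must give back 300 K (i.e. 26.85 deg C)
fp = [0.06, 0.06, 0.22, 0.22, 0.22, 0.22]   # six-direction view factors
t_mrt = mean_radiant_temperature([0.0] * 6, [SIGMA * 300.0 ** 4] * 6, fp)
```
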

  6. Mid-IR Imaging of Orion BN/KL: Modeling of Physical Conditions and Energy Balance

    NASA Astrophysics Data System (ADS)

    Gezari, Daniel; Varosi, Frank; Dwek, Eli; Danchi, William; Tan, Jonathan; Okumura, Shin-Ichiro

    We have modeled two mid-infrared imaging photometry data sets to determine the spatial distribution of physical conditions in the BN/KL infrared complex. We observed the BN/KL region using the 10-m Keck I telescope and the LWS in the direct imaging mode, over a 13'' × 19'' field (Figure 1, left). We also modeled images obtained with COMICS (Kataza et al. 2000) at the 8.2-m SUBARU telescope, over a total field of view of 31'' × 41'' (Figure 1, right), in a total of nine bands: 7.8, 8.8, 9.7, 10.5, 11.7, 12.4, 18.5, 20.8 and 24.8 μm with ~1 μm bandwidth interference filters.

  7. Neuronal avalanches and learning

    NASA Astrophysics Data System (ADS)

    de Arcangelis, Lucilla

    2011-05-01

    Networks of living neurons represent one of the most fascinating systems of biology. While the physical and chemical mechanisms at the basis of the functioning of a single neuron are quite well understood, the collective behaviour of a system of many neurons is an extremely intriguing subject. A crucial ingredient of this complex behaviour is the plasticity of the network, namely its capacity to adapt and evolve depending on the level of activity. This plasticity is now believed to be at the basis of learning and memory in real brains. Spontaneous neuronal activity has recently shown features in common with other complex systems. Experimental data have, in fact, shown that electrical information propagates in a cortex slice via an avalanche mode. These avalanches are characterized by power-law distributions of size and duration, features found in other problems in the physics of complex systems, and successful models have been developed to describe their behaviour. In this contribution we discuss a statistical mechanical model for the complex activity in a neuronal network. The model implements the main physiological properties of living neurons and is able to reproduce recent experimental results. We then discuss the learning abilities of this neuronal network. Learning occurs via plastic adaptation of synaptic strengths through a non-uniform negative feedback mechanism. The system is able to learn all the tested rules, in particular the exclusive OR (XOR) and a random rule with three inputs. The learning dynamics exhibits universal features as a function of the strength of plastic adaptation. Any rule can be learned provided that the plastic adaptation is sufficiently slow.

  8. Multi-element least square HDMR methods and their applications for stochastic multiscale model reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Lijian, E-mail: ljjiang@hnu.edu.cn; Li, Xinping, E-mail: exping@126.com

    Stochastic multiscale modeling has become a necessary approach to quantify uncertainty and characterize multiscale phenomena for many practical problems such as flows in stochastic porous media. The numerical treatment of stochastic multiscale models can be very challenging owing to the complex uncertainty and multiple physical scales in the models. To address this difficulty efficiently, we construct a computational reduced model. To this end, we propose a multi-element least square high-dimensional model representation (HDMR) method, through which the random domain is adaptively decomposed into a few subdomains, and a local least square HDMR is constructed in each subdomain. These local HDMRs are represented by a finite number of orthogonal basis functions defined in low-dimensional random spaces. The coefficients in the local HDMRs are determined using least square methods. We paste all the local HDMR approximations together to form a global HDMR approximation. To further reduce computational cost, we present a multi-element reduced least-square HDMR, which improves both efficiency and approximation accuracy under certain conditions. To effectively treat heterogeneity properties and multiscale features in the models, we integrate multiscale finite element methods with multi-element least-square HDMR for stochastic multiscale model reduction. This approach significantly reduces the original model's complexity in both the resolution of the physical space and the high-dimensional stochastic space. We analyze the proposed approach, and provide a set of numerical experiments to demonstrate the performance of the presented model reduction techniques. - Highlights: • Multi-element least square HDMR is proposed to treat stochastic models. • Random domain is adaptively decomposed into subdomains to obtain an adaptive multi-element HDMR. • Least-square reduced HDMR is proposed to enhance computational efficiency and approximation accuracy under certain conditions. • Integrating MsFEM and multi-element least square HDMR can significantly reduce computational complexity.
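
    The least-square HDMR construction described above — expand each low-order component in orthogonal basis functions and determine the coefficients by least squares over samples — can be sketched for a single-element, first-order expansion. The toy test function, the Legendre basis, and the sample count are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(1)

def f(x):                       # toy model on [-1, 1]^2, exactly additive
    return 1.0 + x[:, 0] ** 2 + 0.5 * x[:, 1]

# first-order HDMR ansatz: f0 + f1(x1) + f2(x2); each univariate component
# expanded in Legendre polynomials P1..P3 (P0 is absorbed into f0)
X = rng.uniform(-1.0, 1.0, size=(500, 2))
cols = [np.ones(len(X))]
for d in range(2):
    for k in range(1, 4):
        c = np.zeros(4)
        c[k] = 1.0                      # select the k-th Legendre polynomial
        cols.append(legendre.legval(X[:, d], c))
A = np.column_stack(cols)               # 500 samples x 7 basis functions
coef, *_ = np.linalg.lstsq(A, f(X), rcond=None)
pred = A @ coef
```

    Because the toy model is exactly additive, the first-order expansion reproduces it to machine precision; the multi-element variant of the paper would instead split the random domain adaptively and build one such local fit per subdomain.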

  9. Analysis of macromolecules, ligands and macromolecule-ligand complexes

    DOEpatents

    Von Dreele, Robert B [Los Alamos, NM

    2008-12-23

    A method for determining atomic level structures of macromolecule-ligand complexes through high-resolution powder diffraction analysis and a method for providing suitable microcrystalline powder for diffraction analysis are provided. In one embodiment, powder diffraction data is collected from samples of polycrystalline macromolecule and macromolecule-ligand complex and the refined structure of the macromolecule is used as an approximate model for a combined Rietveld and stereochemical restraint refinement of the macromolecule-ligand complex. A difference Fourier map is calculated and the ligand position and points of interaction between the atoms of the macromolecule and the atoms of the ligand can be deduced and visualized. A suitable polycrystalline sample of macromolecule-ligand complex can be produced by physically agitating a mixture of lyophilized macromolecule, ligand and a solvent.

  10. Proceedings of the Quantum Computation for Physical Modeling Workshop Held in North Falmouth, Massachusetts on October 18-19, 2000

    DTIC Science & Technology

    2002-01-01

    1-3], a task that is exponen- algorithms to model quantum mechanical systems. tially complex in the number of particles treated and A starting point ...cell size approaches zero). There- tion were presented by Succi and Benzi [10,11] and fore, from the point -of-view of the modeler, there ex- by... point regarding this particular In both cases, the model behaves as expected. gate is that when measurements are periodically made Third, in Section 4

  11. Physical and cognitive effort discounting across different reward magnitudes: Tests of discounting models

    PubMed Central

    Ostaszewski, Paweł

    2017-01-01

    The effort required to obtain a rewarding outcome is an important factor in decision-making. Describing the reward devaluation by increasing effort intensity is substantial to understanding human preferences, because every action and choice that we make is in itself effortful. To investigate how reward valuation is affected by physical and cognitive effort, we compared mathematical discounting functions derived from research on discounting. Seven discounting models were tested across three different reward magnitudes. To test the models, data were collected from a total of 114 participants recruited from the general population. For one-parameter models (hyperbolic, exponential, and parabolic), the data were explained best by the exponential model as given by a percentage of explained variance. However, after introducing an additional parameter, data obtained in the cognitive and physical effort conditions were best described by the power function model. Further analysis, using the second order Akaike and Bayesian Information Criteria, which account for model complexity, allowed us to identify the best model among all tested. We found that the power function best described the data, which corresponds to conventional analyses based on the R2 measure. This supports the conclusion that the function best describing reward devaluation by physical and cognitive effort is a concave one and is different from those that describe delay or probability discounting. In addition, consistent magnitude effects were observed that correspond to those in delay discounting research. PMID:28759631
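
    Model comparison of the sort described above is straightforward to reproduce in outline: fit each candidate discounting function to the observed values and compare variance explained. The data points below are made-up illustrative numbers, not the study's data, and only two of the seven candidate one-parameter models are shown.

```python
import numpy as np
from scipy.optimize import curve_fit

def hyperbolic(e, k):           # V = 1 / (1 + k*e)
    return 1.0 / (1.0 + k * e)

def exponential(e, k):          # V = exp(-k*e)
    return np.exp(-k * e)

# hypothetical group medians: relative subjective value of a fixed reward
# at increasing effort intensities (illustrative numbers only)
effort = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
value = np.array([0.90, 0.80, 0.62, 0.40, 0.15])

results = {}
for name, fn in (("hyperbolic", hyperbolic), ("exponential", exponential)):
    (k,), _ = curve_fit(fn, effort, value, p0=[0.1])
    ss_res = ((value - fn(effort, k)) ** 2).sum()
    r2 = 1.0 - ss_res / ((value - value.mean()) ** 2).sum()
    results[name] = (k, r2)
```

    Comparing R2 directly is only fair when the models have the same number of parameters, as here; once two-parameter models such as the power function enter, complexity-penalizing criteria like the second-order AIC or BIC are needed, as the abstract notes.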

  12. Physically based modeling in catchment hydrology at 50: Survey and outlook

    NASA Astrophysics Data System (ADS)

    Paniconi, Claudio; Putti, Mario

    2015-09-01

    Integrated, process-based numerical models in hydrology are rapidly evolving, spurred by novel theories in mathematical physics, advances in computational methods, insights from laboratory and field experiments, and the need to better understand and predict the potential impacts of population, land use, and climate change on our water resources. At the catchment scale, these simulation models are commonly based on conservation principles for surface and subsurface water flow and solute transport (e.g., the Richards, shallow water, and advection-dispersion equations), and they require robust numerical techniques for their resolution. Traditional (and still open) challenges in developing reliable and efficient models are associated with heterogeneity and variability in parameters and state variables; nonlinearities and scale effects in process dynamics; and complex or poorly known boundary conditions and initial system states. As catchment modeling enters a highly interdisciplinary era, new challenges arise from the need to maintain physical and numerical consistency in the description of multiple processes that interact over a range of scales and across different compartments of an overall system. This paper first gives an historical overview (past 50 years) of some of the key developments in physically based hydrological modeling, emphasizing how the interplay between theory, experiments, and modeling has contributed to advancing the state of the art. The second part of the paper examines some outstanding problems in integrated catchment modeling from the perspective of recent developments in mathematical and computational science.

  13. Time scale variation of NV resonance line profiles of HD203064

    NASA Astrophysics Data System (ADS)

    Strantzalis, A.

    2012-01-01

    Hot emission star, such as Be and Oe, present many spectral lines with very complex and peculiar profiles. Therefore, we cannot find a classical distribution to fit theoretically those physical line profiles. So, many physical parameters of the regions, where spectral lines are created, are difficult to estimate. Here, in this poster paper we study the UV NV (λλ 1238.821, 1242.804 A) resonance lines of the Be star HD203064 at three different dates. We using the Gauss-Rotation model, that proposed the idea that these complex profiles consist of a number of independent Discrete or Satellite Absorption Components (DACs, SACs). Our purpose is to calculate the values of a group of physical parameters as the apparent rotational, radial, and random velocities of the thermal motions of the ions. Also the Full Width at Half Maximum (FWHM) and the column density, as well as the absorbed energy of the independent regions of matter, which produce the main and the satellite components of the studied spectral lines. In addition, we determine the time scale variations of the above physical parameters.

  14. Internet Based Simulations of Debris Dispersion of Shuttle Launch

    NASA Technical Reports Server (NTRS)

    Bardina, Jorge; Thirumalainambi, Rajkumar

    2004-01-01

    Because the debris dispersion problem is highly heterogeneous and interrelated with various factors, 3D graphics combined with physical models are useful in understanding the complexity of launch and range operations. Modeling and simulation in this area mainly focuses on orbital dynamics and range safety concepts, including destruct limits, telemetry and tracking, and population risk. Particle explosion modeling is the process of simulating an explosion by breaking the rocket into many pieces. The particles are scattered and followed through their motion using the laws of physics until they eventually come to rest. The size of the footprint indicates the type of explosion and the distribution of the particles. The shuttle launch and range operations in this paper are discussed based on the operations of the Kennedy Space Center, Florida, USA. Java 3D graphics provides geometric and visual content with suitable modeling behaviors of Shuttle launches.
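
    The particle-explosion idea above — break the vehicle into fragments, give each a random velocity kick, and propagate it under the laws of physics to the ground — is a small Monte Carlo exercise. The drag-free ballistic dynamics and all numbers below are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(42)

def debris_footprint(n, h0=2000.0, vx0=100.0, dv=50.0, g=9.81):
    """Downrange landing positions (m) of n fragments from an explosion at
    altitude h0 (m) with initial horizontal speed vx0 (m/s); each fragment
    gets a Gaussian velocity kick of std dv (m/s), and drag is neglected."""
    vx = vx0 + dv * rng.standard_normal(n)
    vz = dv * rng.standard_normal(n)
    # impact time: positive root of h0 + vz*t - g*t^2/2 = 0
    t = (vz + np.sqrt(vz ** 2 + 2.0 * g * h0)) / g
    return vx * t

x_land = debris_footprint(10_000)
```

    The spread of `x_land` is the footprint size: a larger velocity kick `dv` (a more energetic explosion) widens it, which is the qualitative link between footprint and explosion type that the abstract describes.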

  15. Ultracold Nonreactive Molecules in an Optical Lattice: Connecting Chemistry to Many-Body Physics.

    PubMed

    Doçaj, Andris; Wall, Michael L; Mukherjee, Rick; Hazzard, Kaden R A

    2016-04-01

    We derive effective lattice models for ultracold bosonic or fermionic nonreactive molecules (NRMs) in an optical lattice, analogous to the Hubbard model that describes ultracold atoms in a lattice. In stark contrast to the Hubbard model, which is commonly assumed to accurately describe NRMs, we find that the single on-site interaction parameter U is replaced by a multichannel interaction, whose properties we elucidate. Because this arises from complex short-range collisional physics, it requires no dipolar interactions and thus occurs even in the absence of an electric field or for homonuclear molecules. We find a crossover between coherent few-channel models and fully incoherent single-channel models as the lattice depth is increased. We show that the effective model parameters can be determined in lattice modulation experiments, which, consequently, measure molecular collision dynamics with a vastly sharper energy resolution than experiments in a free-space ultracold gas.

  16. A Density Perturbation Method to Study the Eigenstructure of Two-Phase Flow Equation Systems

    NASA Astrophysics Data System (ADS)

    Cortes, J.; Debussche, A.; Toumi, I.

    1998-12-01

    Many interesting and challenging physical mechanisms are tied to the mathematical notion of eigenstructure. In two-fluid models, complex phasic interactions yield a complex eigenstructure which may raise numerous problems in numerical simulations. In this paper, we develop a perturbation method to examine the eigenvalues and eigenvectors of two-fluid models. This original method, based on the stiffness of the density ratio, provides a convenient tool for studying the relevance of pressure momentum interactions and allows us to obtain precise approximations of the whole flow eigendecomposition at minor computational cost. A Roe scheme is successfully implemented and some numerical tests are presented.
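
    The generic idea underlying such methods can be sketched with textbook eigenvalue perturbation theory: write A(ε) = A0 + εA1 and correct each eigenvalue of A0 to first order. This is only the standard symmetric-matrix version, not the authors' density-ratio expansion for two-fluid Jacobians:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A0 = rng.normal(size=(n, n)); A0 = (A0 + A0.T) / 2   # symmetric base matrix
A1 = rng.normal(size=(n, n)); A1 = (A1 + A1.T) / 2   # symmetric perturbation
eps = 1e-4

lam0, V = np.linalg.eigh(A0)
# First-order correction: lambda_i(eps) ~ lambda_i + eps * v_i^T A1 v_i
lam_approx = lam0 + eps * np.einsum('ij,jk,ki->i', V.T, A1, V)
lam_exact = np.linalg.eigvalsh(A0 + eps * A1)
err = np.max(np.abs(np.sort(lam_approx) - lam_exact))  # O(eps^2) residual
```

    The payoff is the same as in the paper: an accurate approximation of the full eigendecomposition without re-solving the eigenproblem for each perturbed state.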

  17. Curvature and temperature of complex networks.

    PubMed

    Krioukov, Dmitri; Papadopoulos, Fragkiskos; Vahdat, Amin; Boguñá, Marián

    2009-09-01

    We show that heterogeneous degree distributions in observed scale-free topologies of complex networks can emerge as a consequence of the exponential expansion of hidden hyperbolic space. Fermi-Dirac statistics provides a physical interpretation of hyperbolic distances as energies of links. The hidden space curvature affects the heterogeneity of the degree distribution, while clustering is a function of temperature. We embed the internet into the hyperbolic plane and find a remarkable congruency between the embedding and our hyperbolic model. Besides proving our model realistic, this embedding may be used for routing with only local information, which holds significant promise for improving the performance of internet routing.
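
    The model described can be sketched in a few lines: nodes are placed quasi-uniformly in a hyperbolic disk, and each pair is linked with the Fermi-Dirac probability of its hyperbolic distance (the "energy" of the link). The parameters below (node count, disk radius R, temperature T) are illustrative choices, not the values used in the paper:

```python
import numpy as np

def hyperbolic_graph(n=500, R=12.0, T=0.5, seed=0):
    """Sample n nodes with exponential radial density (~ uniform in the
    hyperbolic disk of radius R) and connect pairs with Fermi-Dirac
    probability p(x) = 1 / (1 + exp((x - R) / (2T))), x = hyperbolic
    distance. Returns the degree sequence."""
    rng = np.random.default_rng(seed)
    r = np.log(1 + rng.random(n) * (np.exp(R) - 1))  # radial coordinates
    theta = rng.uniform(0, 2 * np.pi, n)             # angular coordinates
    deg = np.zeros(n, dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            dth = np.pi - abs(np.pi - abs(theta[i] - theta[j]))
            # Standard large-r approximation to the hyperbolic distance
            x = r[i] + r[j] + 2 * np.log(max(dth / 2, 1e-9))
            if rng.random() < 1.0 / (1.0 + np.exp((x - R) / (2 * T))):
                deg[i] += 1; deg[j] += 1
    return deg

deg = hyperbolic_graph()
```

    Nodes near the disk center act as hubs, so the degree sequence comes out heterogeneous even though placement is (quasi-)uniform, which is the paper's central point.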

  18. Hyperpolarized 89Y NMR spectroscopic detection of yttrium ion and DOTA macrocyclic ligand complexation: pH dependence and Y-DOTA intermediates

    NASA Astrophysics Data System (ADS)

    Ferguson, Sarah; Kiswandhi, Andhika; Niedbalski, Peter; Parish, Christopher; Kovacs, Zoltan; Lumata, Lloyd

    Dissolution dynamic nuclear polarization (DNP) is a rapidly emerging physics technique used to enhance the signal strength in nuclear magnetic resonance (NMR) and imaging (MRI) experiments by >10,000-fold for nuclear spins such as yttrium-89. One of the most common and stable MRI contrast agents used in the clinic is Gd-DOTA. In this work, we have investigated the binding of yttrium and the DOTA ligand as a model for complexation of the Gd ion and the DOTA ligand. The macrocyclic ligand DOTA is special because its complexation with lanthanide ions such as Gd3+ or Y3+ is highly pH dependent. Using this physics technology, we have tracked the complexation kinetics of hyperpolarized Y-triflate and the DOTA ligand in real time and detected the Y-DOTA intermediates. Different kinds of buffers were used (lactate, acetate, citrate, oxalate), and the pseudo-first-order complexation kinetic calculations will be discussed. The authors would like to acknowledge the support by US Dept of Defense Award No. W81XWH-14-1-0048 and Robert A. Welch Foundation Grant No. AT-1877.

  19. Physical Complexity and Cognitive Evolution

    NASA Astrophysics Data System (ADS)

    Jedlicka, Peter

    Our intuition tells us that there is a general trend in the evolution of nature, a trend towards greater complexity. However, there are several definitions of complexity, and hence it is difficult to argue for or against the validity of this intuition. Christoph Adami has recently introduced a novel measure called physical complexity that assigns low complexity to both ordered and random systems and high complexity to those in between. Physical complexity measures the amount of information that an organism stores in its genome about the environment in which it evolves. The theory of physical complexity predicts that evolution increases the amount of `knowledge' an organism accumulates about its niche. It might be fruitful to generalize Adami's concept of complexity to evolution as a whole (including the evolution of man). Physical complexity fits nicely into the philosophical framework of cognitive biology, which considers biological evolution as a progressing process of accumulation of knowledge (a gradual increase of epistemic complexity). According to this paradigm, evolution is a cognitive `ratchet' that pushes organisms unidirectionally towards higher complexity. A dynamic environment continually creates problems to be solved. To survive in the environment means to solve the problem, and the solution is embodied knowledge. Cognitive biology (as well as the theory of physical complexity) uses the concepts of information and entropy and views evolution from both the information-theoretical and thermodynamical perspectives. Concerning humans as conscious beings, it seems necessary to postulate the emergence of a new kind of knowledge - a self-aware and self-referential knowledge. The appearance of self-reflection in evolution indicates that the human brain has reached a new qualitative level of epistemic complexity.
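
    Adami's measure can be made concrete for a population of sequences: physical complexity is, roughly, the summed per-site information, i.e. the maximal entropy minus the observed Shannon entropy at each genome position. The snippet below is a toy estimator under that reading, not Adami's original implementation:

```python
import numpy as np

def physical_complexity(population, alphabet="ACGT"):
    """Adami-style estimate C = sum_i (log2 |A| - H_i), where H_i is the
    Shannon entropy of site i across the population of genomes."""
    pop = np.array([list(g) for g in population])  # rows: genomes
    hmax = np.log2(len(alphabet))
    complexity = 0.0
    for site in pop.T:                             # iterate over sites
        _, counts = np.unique(site, return_counts=True)
        p = counts / counts.sum()
        complexity += hmax + np.sum(p * np.log2(p))  # hmax - H_i
    return complexity

# Fully conserved sites carry maximal information about the niche
print(physical_complexity(["ACGT", "ACGT", "ACGT", "ACGT"]))  # -> 8.0 (4 sites x 2 bits)
```

    A maximally random population (every symbol equally likely at every site) scores zero, matching the requirement that both ordered and random limits be distinguished from the informative regime in between.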

  20. Complex systems: physics beyond physics

    NASA Astrophysics Data System (ADS)

    Holovatch, Yurij; Kenna, Ralph; Thurner, Stefan

    2017-03-01

    Complex systems are characterised by specific time-dependent interactions among their many constituents. As a consequence they often manifest rich, non-trivial and unexpected behaviour. Examples arise both in the physical and non-physical worlds. The study of complex systems forms a new interdisciplinary research area that cuts across physics, biology, ecology, economics, sociology, and the humanities. In this paper we review the essence of complex systems from a physicist's point of view, and try to clarify what makes them conceptually different from systems that are traditionally studied in physics. Our goal is to demonstrate how the dynamics of such systems may be conceptualised in quantitative and predictive terms by extending notions from statistical physics, and how they can often be captured in a framework of co-evolving multiplex network structures. We mention three areas of complex-systems science that are currently studied extensively: the science of cities, the dynamics of societies, and the representation of texts as evolutionary objects. We discuss why these areas form complex systems in the above sense. We argue that there exists plenty of new ground for physicists to explore and that methodical and conceptual progress is needed most.

  1. A novel multilayer model for missing link prediction and future link forecasting in dynamic complex networks

    NASA Astrophysics Data System (ADS)

    Yasami, Yasser; Safaei, Farshad

    2018-02-01

    The traditional complex network theory focuses on network models in which all network constituents are treated equivalently, failing to consider the supplementary information related to the dynamic properties of the network interactions. This is a main constraint leading to incorrect descriptions of some real-world phenomena or incomplete capture of the details of certain real-life problems. To cope with the problem, this paper addresses the multilayer aspects of dynamic complex networks by analyzing the properties of intrinsically multilayered co-authorship networks, DBLP and Astro Physics, and presenting a novel multilayer model of dynamic complex networks. The model examines the evolution of layers (layer birth/death processes and lifetimes) throughout the network evolution. In particular, this paper models the evolution of each node's membership in different layers by an Infinite Factorial Hidden Markov Model considering feature cascade, and thereby formulates the link generation process for intra-layer and inter-layer links. Although adjacency matrices are useful to describe traditional single-layer networks, such a representation is not sufficient to describe and analyze multilayer dynamic networks. This paper therefore extends a generalized mathematical infrastructure to address the problems posed by multilayer complex networks. The model inference is performed using Markov chain Monte Carlo sampling strategies, given synthetic and real complex network data. Experimental results indicate a substantial improvement in the performance of the proposed multilayer model in terms of sensitivity, specificity, positive and negative predictive values, positive and negative likelihood ratios, F1-score, Matthews correlation coefficient, and accuracy for two important applications: missing link prediction and future link forecasting. The experimental results also indicate the strong predictive power of the proposed model for the application of cascade prediction in terms of accuracy.
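
    The evaluation metrics listed above all derive from the confusion matrix of predicted versus actual links; a compact reference implementation (illustrative, not the authors' code):

```python
import numpy as np

def link_prediction_metrics(y_true, y_pred):
    """Confusion-matrix metrics commonly reported for link prediction."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    sens = tp / (tp + fn)        # sensitivity (recall)
    spec = tn / (tn + fp)        # specificity
    ppv = tp / (tp + fp)         # positive predictive value (precision)
    npv = tn / (tn + fn)         # negative predictive value
    f1 = 2 * ppv * sens / (ppv + sens)
    mcc = (tp * tn - fp * fn) / np.sqrt(
        float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    acc = (tp + tn) / len(y_true)
    return dict(sensitivity=sens, specificity=spec, ppv=ppv, npv=npv,
                f1=f1, mcc=mcc, accuracy=acc)

# Toy example: 6 candidate links, ground truth vs predictions
m = link_prediction_metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0])
```

    In practice the positive class (existing links) is heavily outnumbered by absent links, which is why MCC and the likelihood ratios are reported alongside plain accuracy.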

  2. Calabi-Yau structures on categories of matrix factorizations

    NASA Astrophysics Data System (ADS)

    Shklyarov, Dmytro

    2017-09-01

    Using tools of complex geometry, we construct explicit proper Calabi-Yau structures, that is, non-degenerate cyclic cocycles on differential graded categories of matrix factorizations of regular functions with isolated critical points. The formulas involve the Kapustin-Li trace and its higher corrections. From the physics perspective, our result yields explicit 'off-shell' models for categories of topological D-branes in B-twisted Landau-Ginzburg models.

  3. Introducing ab initio based neural networks for transition-rate prediction in kinetic Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Messina, Luca; Castin, Nicolas; Domain, Christophe; Olsson, Pär

    2017-02-01

    The quality of kinetic Monte Carlo (KMC) simulations of microstructure evolution in alloys relies on the parametrization of point-defect migration rates, which are complex functions of the local chemical composition and can be calculated accurately with ab initio methods. However, constructing reliable models that ensure the best possible transfer of physical information from ab initio calculations to KMC is a challenging task. This work presents an innovative approach in which the transition rates are predicted by artificial neural networks trained on a database of 2000 migration barriers, obtained with density functional theory (DFT) in place of interatomic potentials. The method is tested on copper precipitation in thermally aged iron alloys by means of a hybrid atomistic-object KMC model. For the object part of the model, the stability and mobility properties of copper-vacancy clusters are analyzed by means of independent atomistic KMC simulations driven by the same neural networks. The cluster diffusion coefficients and mean free paths are found to increase with size, confirming the dominant role of coarsening of medium- and large-sized clusters in the precipitation kinetics. The evolution under thermal aging agrees better with experiments than a previous interatomic-potential model, especially concerning the experimental time scales. However, the model underestimates the solubility of copper in iron because of the excessively high solution energy predicted by the chosen DFT method. Nevertheless, this work proves the capability of neural networks to transfer complex ab initio physical properties to higher-scale models, and it facilitates the extension to systems of increasing chemical complexity, setting the ground for reliable microstructure evolution simulations in a wide range of alloys and applications.
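
    The coupling between predicted barriers and KMC can be sketched with the standard residence-time (Gillespie-type) algorithm: each barrier (here a stand-in for a neural-network prediction) is converted to an Arrhenius rate, one jump is drawn in proportion to its rate, and the clock advances by an exponential waiting time. The attempt frequency, temperature, and barrier values below are illustrative assumptions:

```python
import numpy as np

KB = 8.617333e-5  # Boltzmann constant, eV/K

def kmc_step(barriers, attempt_freq=6.0e12, T=600.0, seed=2):
    """One residence-time KMC step: Arrhenius rates from migration barriers
    (eV), event drawn proportionally to its rate, exponential time advance."""
    rng = np.random.default_rng(seed)
    rates = attempt_freq * np.exp(-np.asarray(barriers) / (KB * T))
    total = rates.sum()
    event = rng.choice(len(rates), p=rates / total)  # which jump occurs
    dt = rng.exponential(1.0 / total)                # physical time elapsed
    return event, dt

# Hypothetical barriers (eV) for the four candidate vacancy jumps at a site
event, dt = kmc_step([0.62, 0.55, 0.70, 0.58])
```

    In the hybrid scheme described above, the barrier list would be produced per local environment by the trained neural network instead of being hard-coded.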

  4. Computer Simulations and Theoretical Studies of Complex Systems: from complex fluids to frustrated magnets

    NASA Astrophysics Data System (ADS)

    Choi, Eunsong

    Computer simulations are an integral part of research in modern condensed matter physics; they serve as a direct bridge between theory and experiment by systematically applying a microscopic model to a collection of particles that effectively imitate a macroscopic system. In this thesis, we study two very different condensed-matter systems, namely complex fluids and frustrated magnets, primarily by simulating the classical dynamics of each system. In the first part of the thesis, we focus on ionic liquids (ILs) and polymers--the two complementary classes of materials that can be combined to provide various unique properties. The properties of polymer/IL systems, such as conductivity, viscosity, and miscibility, can be fine-tuned by choosing an appropriate combination of cations, anions, and polymers. However, designing a system that meets a specific need requires a concrete understanding of the physics and chemistry that dictate a complex interplay between polymers and ionic liquids. In this regard, molecular dynamics (MD) simulation is an efficient tool that provides a molecular-level picture of such complex systems. We study the behavior of poly(ethylene oxide) (PEO) and imidazolium-based ionic liquids using MD simulations and statistical mechanics. We also discuss our efforts to develop reliable and efficient classical force fields for PEO and the ionic liquids. The second part is devoted to studies of geometrically frustrated magnets. In particular, a microscopic model which gives rise to an incommensurate spiral magnetic ordering observed in a pyrochlore antiferromagnet is investigated. The validation of the model is made via a comparison of the spin-wave spectra with the neutron scattering data. Since the standard Holstein-Primakoff method is difficult to employ in such a complex ground state structure with a large unit cell, we carry out classical spin dynamics simulations to compute spin-wave spectra directly from the Fourier transform of spin trajectories.
We conclude the study by showing an excellent agreement between the simulation and the experiment.

  5. Limits on efficient computation in the physical world

    NASA Astrophysics Data System (ADS)

    Aaronson, Scott Joel

    More than a speculative technology, quantum computing seems to challenge our most basic intuitions about how the physical world should behave. In this thesis I show that, while some intuitions from classical computer science must be jettisoned in the light of modern physics, many others emerge nearly unscathed; and I use powerful tools from computational complexity theory to help determine which are which. In the first part of the thesis, I attack the common belief that quantum computing resembles classical exponential parallelism, by showing that quantum computers would face serious limitations on a wider range of problems than was previously known. In particular, any quantum algorithm that solves the collision problem---that of deciding whether a sequence of n integers is one-to-one or two-to-one---must query the sequence Ω(n^(1/5)) times. This resolves a question that was open for years; previously no lower bound better than constant was known. A corollary is that there is no "black-box" quantum algorithm to break cryptographic hash functions or solve the Graph Isomorphism problem in polynomial time. I also show that relative to an oracle, quantum computers could not solve NP-complete problems in polynomial time, even with the help of nonuniform "quantum advice states"; and that any quantum algorithm needs Ω(2^(n/4)/n) queries to find a local minimum of a black-box function on the n-dimensional hypercube. Surprisingly, the latter result also leads to new classical lower bounds for the local search problem. Finally, I give new lower bounds on quantum one-way communication complexity, and on the quantum query complexity of total Boolean functions and recursive Fourier sampling. The second part of the thesis studies the relationship of the quantum computing model to physical reality. I first examine the arguments of Leonid Levin, Stephen Wolfram, and others who believe quantum computing to be fundamentally impossible.
I find their arguments unconvincing without a "Sure/Shor separator"---a criterion that separates the already-verified quantum states from those that appear in Shor's factoring algorithm. I argue that such a separator should be based on a complexity classification of quantum states, and go on to create such a classification. Next I ask what happens to the quantum computing model if we take into account that the speed of light is finite---and in particular, whether Grover's algorithm still yields a quadratic speedup for searching a database. Refuting a claim by Benioff, I show that the surprising answer is yes. Finally, I analyze hypothetical models of computation that go even beyond quantum computing. I show that many such models would be as powerful as the complexity class PP, and use this fact to give a simple, quantum computing based proof that PP is closed under intersection. On the other hand, I also present one model---wherein we could sample the entire history of a hidden variable---that appears to be more powerful than standard quantum computing, but only slightly so.

  6. Reduction of a linear complex model for respiratory system during Airflow Interruption.

    PubMed

    Jablonski, Ireneusz; Mroczka, Janusz

    2010-01-01

    The paper presents a methodology for reducing a complex model to a simpler version - an identifiable inverse model. Its main tool is a numerical procedure of sensitivity analysis (structural and parametric) applied to the forward linear equivalent designed for the conditions of the interrupter experiment. The final result - the reduced analog for the interrupter technique - is especially noteworthy, as it fills a major gap in occlusional measurements, which typically use simple one- or two-element physical representations. The proposed reduced electrical circuit, a structural combination of resistive, inertial, and elastic properties, can be seen as a candidate for reliable reconstruction and quantification (in the time and frequency domains) of the dynamic behavior of the respiratory system in response to a quasi-step excitation by valve closure.
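
    The parametric part of such a sensitivity analysis can be sketched generically: perturb each parameter of a forward model, estimate normalized sensitivities by central differences, and flag low-sensitivity parameters as candidates for elimination during model reduction. The toy model below is a hypothetical stand-in, not the authors' respiratory model:

```python
import numpy as np

def param_sensitivities(model, p0, rel_step=1e-4):
    """Normalized first-order sensitivities S_i = (p_i / y) * dy/dp_i,
    estimated by central finite differences around the nominal point p0."""
    p0 = np.asarray(p0, dtype=float)
    y0 = model(p0)
    sens = np.zeros_like(p0)
    for i, p in enumerate(p0):
        h = rel_step * abs(p)
        up, dn = p0.copy(), p0.copy()
        up[i] += h; dn[i] -= h
        sens[i] = (model(up) - model(dn)) / (2 * h) * p / y0
    return sens

# Toy forward model: the first parameter dominates the output,
# the second barely matters (a candidate for elimination)
model = lambda p: p[0] + 0.01 * p[1]
s = param_sensitivities(model, [5.0, 2.0])
```

    Parameters whose normalized sensitivity stays near zero over the operating range contribute little identifiable information and can be merged or dropped from the reduced circuit.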

  7. Tidal Simulations of an Incised-Valley Fluvial System with a Physics-Based Geologic Model

    NASA Astrophysics Data System (ADS)

    Ghayour, K.; Sun, T.

    2012-12-01

    Physics-based geologic modeling approaches use fluid flow in conjunction with sediment transport and deposition models to devise evolutionary geologic models that focus on the underlying physical processes and attempt to resolve them at pertinent spatial and temporal scales. Physics-based models are particularly useful when the evolution of a depositional system is driven by the interplay of autogenic processes and their response to allogenic controls. This interplay can potentially create complex reservoir architectures with high-permeability sedimentary bodies bounded by a hierarchy of shales that can effectively impede flow in the subsurface. The complex stratigraphy of tide-influenced fluvial systems is an example of such co-existing and interacting environments of deposition. The focus of this talk is a novel formulation of boundary conditions for hydrodynamics-driven models of sedimentary systems. In tidal simulations, a time-accurate boundary treatment is essential for proper imposition of tidal forcing and fluvial inlet conditions where the flow may reverse at times within a tidal cycle. As such, the boundary treatment at the inlet has to accommodate a smooth transition from inflow to outflow and vice versa without creating numerical artifacts. Our numerical experiments showed that boundary condition treatments based on a local (frozen) one-dimensional approach along the boundary normal, which does not account for the variation of flow quantities in the tangential direction, often lead to unsatisfactory results corrupted by numerical artifacts. In this talk, we propose a new boundary treatment that retains all spatial and temporal terms in the model and, as such, is capable of accounting for nonlinearities and sharp variations of model variables near boundaries. The proposed approach borrows heavily from the idea set forth by J. Sesterhenn [1] for the compressible Navier-Stokes equations. The methodology is successfully applied to a tide-influenced incised-valley fluvial system, and the resulting stratigraphy is shown and discussed for different tide amplitudes. [1] Sesterhenn, J.: "A characteristic-type formulation of the Navier-Stokes equations for high-order upwind schemes", Computers & Fluids 30 (1), 37-67, 2001.

  8. Development of property-transfer models for estimating the hydraulic properties of deep sediments at the Idaho National Engineering and Environmental Laboratory, Idaho

    USGS Publications Warehouse

    Winfield, Kari A.

    2005-01-01

    Because characterizing the unsaturated hydraulic properties of sediments over large areas or depths is costly and time consuming, development of models that predict these properties from more easily measured bulk-physical properties is desirable. At the Idaho National Engineering and Environmental Laboratory, the unsaturated zone is composed of thick basalt flow sequences interbedded with thinner sedimentary layers. Determining the unsaturated hydraulic properties of sedimentary layers is one step in understanding water flow and solute transport processes through this complex unsaturated system. Multiple linear regression was used to construct simple property-transfer models for estimating the water-retention curve and saturated hydraulic conductivity of deep sediments at the Idaho National Engineering and Environmental Laboratory. The regression models were developed from 109 core sample subsets with laboratory measurements of hydraulic and bulk-physical properties. The core samples were collected at depths of 9 to 175 meters at two facilities within the southwestern portion of the Idaho National Engineering and Environmental Laboratory: the Radioactive Waste Management Complex, and the Vadose Zone Research Park southwest of the Idaho Nuclear Technology and Engineering Center. Four regression models were developed using bulk-physical property measurements (bulk density, particle density, and particle size) as the potential explanatory variables. Three representations of the particle-size distribution were compared: (1) textural-class percentages (gravel, sand, silt, and clay), (2) geometric statistics (mean and standard deviation), and (3) graphical statistics (median and uniformity coefficient). The four response variables, estimated from linear combinations of the bulk-physical properties, included saturated hydraulic conductivity and three parameters that define the water-retention curve.
For each core sample, values of each water-retention parameter were estimated from the appropriate regression equation and used to calculate an estimated water-retention curve. The degree to which the estimated curve approximated the measured curve was quantified using a goodness-of-fit indicator, the root-mean-square error. Comparison of the root-mean-square-error distributions for each alternative particle-size model showed that the estimated water-retention curves were insensitive to the way the particle-size distribution was represented. Bulk density, the median particle diameter, and the uniformity coefficient were chosen as input parameters for the final models. The property-transfer models developed in this study allow easy determination of hydraulic properties without the need for their direct measurement. Additionally, the models provide the basis for development of theoretical models that rely on physical relationships between the pore-size distribution and the bulk-physical properties of the media. With this adaptation, the property-transfer models should have greater application throughout the Idaho National Engineering and Environmental Laboratory and other geographic locations.
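
    The regression workflow described above can be sketched with synthetic data: fit log-conductivity on bulk density, median particle diameter, and uniformity coefficient by ordinary least squares, then score the fit with the root-mean-square error. All coefficients and ranges below are invented for illustration and do not reproduce the report's models:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 109  # same sample count as the report, values synthetic
# Hypothetical bulk-physical predictors: bulk density (g/cm^3),
# median particle diameter d50 (mm), uniformity coefficient Cu
rho_b = rng.uniform(1.2, 1.9, n)
d50 = rng.uniform(0.01, 2.0, n)
cu = rng.uniform(2.0, 20.0, n)
# Synthetic response: log10 of saturated hydraulic conductivity
log_ksat = (-2.0 - 1.5 * rho_b + 0.8 * np.log10(d50) - 0.02 * cu
            + rng.normal(0, 0.1, n))

# Ordinary least squares with an intercept column
X = np.column_stack([np.ones(n), rho_b, np.log10(d50), cu])
beta, *_ = np.linalg.lstsq(X, log_ksat, rcond=None)
pred = X @ beta
rmse = np.sqrt(np.mean((pred - log_ksat) ** 2))  # goodness-of-fit indicator
```

    The same pattern, one regression per response variable, covers the three water-retention parameters as well; the report's RMSE comparison across particle-size representations corresponds to refitting with different predictor columns.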

  9. One ring to rule them all: storm time ring current and its influence on radiation belts, plasmasphere and global magnetosphere electrodynamics

    NASA Astrophysics Data System (ADS)

    Buzulukova, Natalia; Fok, Mei-Ching; Glocer, Alex; Moore, Thomas E.

    2013-04-01

    We report studies of the storm time ring current and its influence on the radiation belts, plasmasphere, and global magnetospheric dynamics. The near-Earth space environment is described by multiscale physics that reflects a variety of processes and conditions occurring in magnetospheric plasma. A successful description of such a plasma requires a complex solution in which multiple physics domains are described by multiple physical models. A key population of the inner magnetosphere is the ring current plasma. Ring current dynamics affects magnetic and electric fields in the entire magnetosphere, the distribution of cold ionospheric plasma (the plasmasphere), and radiation belt particles. To study the electrodynamics of the inner magnetosphere, we present an MHD model (the BATSRUS code) coupled with an ionospheric solver for the electric field and with a ring current-radiation belt model (the CIMI code). The model will be used as a tool to reveal details of the coupling between different regions of the Earth's magnetosphere. A model validation will also be presented, based on comparison with data from the THEMIS, POLAR, GOES, and TWINS missions. INVITED TALK

  10. Verification of Electromagnetic Physics Models for Parallel Computing Architectures in the GeantV Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amadio, G.; et al.

    An intensive R&D and programming effort is required to accomplish the new challenges posed by future experimental high-energy particle physics (HEP) programs. The GeantV project aims to narrow the gap between the performance of the existing HEP detector simulation software and the ideal performance achievable, exploiting the latest advances in computing technology. The project has developed a particle detector simulation prototype capable of transporting particles in parallel through complex geometries, exploiting instruction-level microparallelism (SIMD and SIMT), task-level parallelism (multithreading) and high-level parallelism (MPI), leveraging both multi-core and many-core opportunities. We present preliminary verification results concerning the electromagnetic (EM) physics models developed for parallel computing architectures within the GeantV project. In order to exploit the potential of vectorization and accelerators and to make the physics models effectively parallelizable, advanced sampling techniques have been implemented and tested. In this paper we introduce a set of automated statistical tests in order to verify the vectorized models by checking their consistency with the corresponding Geant4 models and to validate them against experimental data.
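
    One common form of such an automated statistical test is a two-sample comparison between the distributions produced by the reference and the vectorized implementation. The sketch below uses a Kolmogorov-Smirnov test on synthetic stand-in samples; it illustrates the idea only and is not the GeantV test suite:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(4)
# Stand-ins for a sampled physics observable (e.g. secondary energies) from a
# reference implementation and two candidate vectorized re-implementations
ref = rng.exponential(1.0, 20000)
vec_ok = rng.exponential(1.0, 20000)    # consistent sampling model
vec_bad = rng.exponential(1.2, 20000)   # subtly wrong sampling model

# Large p-value: no evidence of inconsistency; tiny p-value: flag a regression
p_ok = ks_2samp(ref, vec_ok).pvalue
p_bad = ks_2samp(ref, vec_bad).pvalue
print(f"consistent: p={p_ok:.3f}, inconsistent: p={p_bad:.2e}")
```

    In an automated suite, the test would run per model and per particle/energy bin, failing the build when the p-value drops below a chosen significance level.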

  11. Application of Intervention Mapping to the Development of a Complex Physical Therapist Intervention.

    PubMed

    Jones, Taryn M; Dear, Blake F; Hush, Julia M; Titov, Nickolai; Dean, Catherine M

    2016-12-01

    Physical therapist interventions, such as those designed to change physical activity behavior, are often complex and multifaceted. In order to facilitate rigorous evaluation and implementation of these complex interventions into clinical practice, the development process must be comprehensive, systematic, and transparent, with a sound theoretical basis. Intervention Mapping is designed to guide an iterative and problem-focused approach to the development of complex interventions. The purpose of this case report is to demonstrate the application of an Intervention Mapping approach to the development of a complex physical therapist intervention, a remote self-management program aimed at increasing physical activity after acquired brain injury. Intervention Mapping consists of 6 steps to guide the development of complex interventions: (1) needs assessment; (2) identification of outcomes, performance objectives, and change objectives; (3) selection of theory-based intervention methods and practical applications; (4) organization of methods and applications into an intervention program; (5) creation of an implementation plan; and (6) generation of an evaluation plan. The rationale and detailed description of this process are presented using an example of the development of a novel and complex physical therapist intervention, myMoves-a program designed to help individuals with an acquired brain injury to change their physical activity behavior. The Intervention Mapping framework may be useful in the development of complex physical therapist interventions, ensuring the development is comprehensive, systematic, and thorough, with a sound theoretical basis. This process facilitates translation into clinical practice and allows for greater confidence and transparency when the program efficacy is investigated. © 2016 American Physical Therapy Association.

  12. Development of a Three-Dimensional, Unstructured Material Response Design Tool

    NASA Technical Reports Server (NTRS)

    Schulz, Joseph C.; Stern, Eric C.; Muppidi, Suman; Palmer, Grant E.; Schroeder, Olivia

    2017-01-01

    A preliminary verification and validation of a new material response model is presented. This model, Icarus, is intended to serve as a design tool for the thermal protection systems of re-entry vehicles. Currently, the capability of the model is limited to simulating the pyrolysis of a material as a result of the radiative and convective surface heating imposed on the material by the surrounding high-enthalpy gas. Since the major focus behind the development of Icarus has been model extensibility, the hope is that additional physics can be quickly added. This extensibility is critical since thermal protection systems are becoming increasingly complex, e.g., woven carbon polymers. Additionally, as a three-dimensional, unstructured, finite-volume model, Icarus is capable of modeling complex geometries. In this paper, the mathematical and numerical formulation is presented, followed by a discussion of the software architecture and some preliminary verification and validation studies.

  13. A summary of computational experience at GE Aircraft Engines for complex turbulent flows in gas turbines

    NASA Astrophysics Data System (ADS)

    Zerkle, Ronald D.; Prakash, Chander

    1995-03-01

    This viewgraph presentation summarizes some CFD experience at GE Aircraft Engines for flows in the primary gaspath of a gas turbine engine and in turbine blade cooling passages. It is concluded that application of the standard k-epsilon turbulence model with wall functions is not adequate for accurate CFD simulation of aerodynamic performance and heat transfer in the primary gas path of a gas turbine engine. New models are required in the near-wall region which include more physics than wall functions. The two-layer modeling approach appears attractive despite its added computational complexity. In addition, improved CFD simulation of film cooling and turbine blade internal cooling passages will require anisotropic turbulence models. New turbulence models must be practical in order to have a significant impact on the engine design process. A coordinated turbulence modeling effort between NASA centers would be beneficial to the gas turbine industry.

  14. A summary of computational experience at GE Aircraft Engines for complex turbulent flows in gas turbines

    NASA Technical Reports Server (NTRS)

    Zerkle, Ronald D.; Prakash, Chander

    1995-01-01

    This viewgraph presentation summarizes some CFD experience at GE Aircraft Engines for flows in the primary gaspath of a gas turbine engine and in turbine blade cooling passages. It is concluded that application of the standard k-epsilon turbulence model with wall functions is not adequate for accurate CFD simulation of aerodynamic performance and heat transfer in the primary gas path of a gas turbine engine. New models are required in the near-wall region which include more physics than wall functions. The two-layer modeling approach appears attractive because it captures more near-wall physics at a manageable computational cost. In addition, improved CFD simulation of film cooling and turbine blade internal cooling passages will require anisotropic turbulence models. New turbulence models must be practical in order to have a significant impact on the engine design process. A coordinated turbulence modeling effort between NASA centers would be beneficial to the gas turbine industry.

  15. Physically-Based Modelling and Real-Time Simulation of Fluids.

    NASA Astrophysics Data System (ADS)

    Chen, Jim Xiong

    1995-01-01

    Simulating physically realistic complex fluid behaviors presents an extremely challenging problem for computer graphics researchers. Such behaviors include the effects of driving boats through water, blending differently colored fluids, rain falling and flowing on a terrain, fluids interacting in a Distributed Interactive Simulation (DIS), etc. Such capabilities are useful in computer art, advertising, education, entertainment, and training. We present a new method for physically-based modeling and real-time simulation of fluids in computer graphics and dynamic virtual environments. By solving the 2D Navier-Stokes equations using a CFD method, we map the surface into 3D using the corresponding pressures in the fluid flow field. This achieves realistic real-time fluid surface behaviors by employing the physical governing laws of fluids while avoiding extensive 3D fluid dynamics computations. To complement the surface behaviors, we calculate fluid volume and external boundary changes separately to achieve full 3D general fluid flow. To simulate physical activities in a DIS, we introduce a mechanism which uses a uniform time scale proportional to the clock-time and variable time-slicing to synchronize physical models such as fluids in the networked environment. Our approach can simulate many different fluid behaviors by changing the internal or external boundary conditions. It can model different kinds of fluids by varying the Reynolds number. It can simulate objects moving or floating in fluids. It can also produce synchronized general fluid flows in a DIS. Our model can serve as a testbed for simulating many other fluid phenomena which have not previously been modeled successfully.
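
    The surface-mapping step described above can be sketched very simply: given a 2D pressure field from the flow solve, the free-surface height is displaced in proportion to the local pressure deviation. The field, constants, and hydrostatic mapping below are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

rho, g, h0 = 1000.0, 9.81, 0.0    # water density, gravity, rest height (assumed values)

# Stand-in pressure field [Pa]; in the approach described above this would
# come from a 2D Navier-Stokes (CFD) solve of the flow.
nx, ny = 64, 64
x = np.linspace(-1.0, 1.0, nx)
y = np.linspace(-1.0, 1.0, ny)
X, Y = np.meshgrid(x, y)
p = 200.0 * np.exp(-(X**2 + Y**2) / 0.1)      # a pressure bump, e.g. under a hull

# Map 2D pressure deviations to a 3D surface height field: high pressure
# depresses the free surface, low pressure raises it (hydrostatic balance).
h = h0 - (p - p.mean()) / (rho * g)
```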

  16. Complex Organic Parents during Star-Forming Infall

    NASA Astrophysics Data System (ADS)

    Drozdovskaya, Maria; Walsh, Catherine; Visser, Ruud; Harsono, Daniel; van Dishoeck, Ewine

    2013-07-01

    Stars are born upon the gravitational infall of clumps in molecular clouds. Complex organic compounds have been observed to accompany star formation and are also believed to be the simplest ingredients of life. Therefore, understanding complex organics under star-forming conditions is fundamentally interesting. This work models the formation and distribution of several potential parent species for complex organic compounds, such as formaldehyde (H2CO) and methanol (CH3OH), along trajectories of matter parcels as they undergo infall from the cold outer envelope towards the hot core region and eventually onto the disk. The code from Visser et al. (2009, 2011) serves as the basis for this research. The gas-phase chemistry network has now been expanded with grain-surface reactions to form CH3OH and, ultimately, larger organics such as methyl formate (HCOOCH3) and dimethyl ether (CH3OCH3). The intention behind this work is to obtain information on complex organic parents in the star formation scenario by means of a physically and chemically robust model. The availability of complex organic compounds will vary depending on where the parent species are abundant, such as in the pre-stellar stage, the hot core, or only in the disk, and where they are available for a sufficient amount of time for complexity enhancement. Such model-based conclusions can then be used to explain the observational data on complex organic compounds.

  17. An Idealized Test of the Response of the Community Atmosphere Model to Near-Grid-Scale Forcing Across Hydrostatic Resolutions

    NASA Astrophysics Data System (ADS)

    Herrington, A. R.; Reed, K. A.

    2018-02-01

    A set of idealized experiments is developed using the Community Atmosphere Model (CAM) to understand the vertical velocity response to reductions in forcing scale that is known to occur when the horizontal resolution of the model is increased. The test consists of a set of rising bubble experiments, in which the horizontal radius of the bubble and the model grid spacing are simultaneously reduced. The test is performed with moisture, through incorporating moist physics routines of varying complexity, although convection schemes are not considered. Results confirm that the vertical velocity in CAM is, to first order, proportional to the inverse of the horizontal forcing scale, which is consistent with a scale analysis of the dry equations of motion. In contrast, experiments in which the coupling time step between the moist physics routines and the dynamical core (i.e., the "physics" time step) is relaxed back to more conventional values result in severely damped vertical motion at high resolution, degrading the scaling. A set of aqua-planet simulations using different physics time steps is found to be consistent with the results of the idealized experiments.

  18. Interwell Connectivity Evaluation Using Injection and Production Fluctuation Data

    NASA Astrophysics Data System (ADS)

    Shang, Barry Zhongqi

    The development of multiscale methods for computational simulation of biophysical systems represents a significant challenge. Effective computational models that bridge physical insights obtained from atomistic simulations and experimental findings are lacking. An accurate passing of information between these scales would enable: (1) an improved physical understanding of structure-function relationships, and (2) enhanced rational strategies for molecular engineering and materials design. Two approaches are described in this dissertation to facilitate these multiscale goals. In Part I, we develop a lattice kinetic Monte Carlo model to simulate cellulose decomposition by cellulase enzymes and to understand the effects of spatial confinement on enzyme kinetics. An enhanced mechanistic understanding of this reaction system could improve the design of cellulose bioconversion technologies for renewable and sustainable energy. Using our model, we simulate the reaction up to experimental conversion times of days, while simultaneously capturing the microscopic kinetic behaviors. Therefore, the influence of molecular-scale kinetics on the macroscopic conversion rate is made transparent. The inclusion of spatial constraints in the kinetic model represents a significant advance over classical mass-action models commonly used to describe this reaction system. We find that restrictions due to enzyme jamming and substrate heterogeneity at the molecular level play a dominant role in limiting cellulose conversion. We identify that the key rate limitations are the slow rates of enzyme complexation with glucan chains and the competition between enzyme processivity and jamming. We show that the kinetics of complexation, which involves extraction of a glucan chain end from the cellulose surface and threading through the enzyme active site, occurs slowly on the order of hours, while intrinsic hydrolytic bond cleavage occurs on the order of seconds. 
We also elucidate the subtle trade-off between processivity and jamming. Highly processive enzymes cleave a large fraction of a glucan chain during each processive run but are prone to jamming at obstacles. Less processive enzymes avoid jamming but cleave only a small fraction of a chain. Optimizing this trade-off maximizes the cellulose conversion rate. We also elucidate the molecular-scale kinetic origins for synergy among cellulases in enzyme mixtures. In contrast to the currently accepted theory, we show that the ability of an endoglucanase to increase the concentration of chain ends for exoglucanases is insufficient for synergy to occur. Rather, endoglucanases must enhance the rate of complexation between exoglucanases and the newly created chain ends. This enhancement occurs when the endoglucanase is able to partially decrystallize the cellulose surface. We show generally that the driving forces for complexation and jamming, which govern the kinetics of pure exoglucanases, also control the degree of synergy in endo-exo mixtures. In Part II, we focus our attention on a different multiscale problem. This challenge is the development of coarse-grained models from atomistic models to access larger length- and time-scales in a simulation. This problem is difficult because it requires a delicate balance between maintaining (1) physical simplicity in the coarse-grained model and (2) physical consistency with the atomistic model. To achieve these goals, we develop a scheme to coarse-grain an atomistic fluid model into a fluctuating hydrodynamics (FHD) model. The FHD model describes the solvent as a field of fluctuating mass, momentum, and energy densities. The dynamics of the fluid are governed by continuum balance equations and fluctuation-dissipation relations based on the constitutive transport laws. 
The incorporation of both macroscopic transport and microscopic fluctuation phenomena could provide richer physical insight into the behaviors of biophysical systems driven by hydrodynamic fluctuations, such as hydrophobic assembly and crystal nucleation. We further extend our coarse-graining method by developing an interfacial FHD model using information obtained from simulations of an atomistic liquid-vapor interface. We illustrate that a phenomenological Ginzburg-Landau free energy employed in the FHD model can effectively represent the attractive molecular interactions of the atomistic model, which give rise to phase separation. For argon and water, we show that the interfacial FHD model can reproduce the compressibility, surface tension, and capillary wave spectrum of the atomistic model. Via this approach, simulations that explore the coupling between hydrodynamic fluctuations and phase equilibria with molecular-scale consistency are now possible. In both Parts I and II, the emerging theme is that the combination of bottom-up coarse graining and top-down phenomenology is essential for enabling a multiscale approach to remain physically consistent with molecular-scale interactions while simultaneously capturing the collective macroscopic behaviors. This hybrid strategy enables the resulting computational models to be both physically insightful and practically meaningful. (Abstract shortened by UMI.).
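
    The competition described above between slow complexation and fast processive cleavage can be caricatured with a two-state Gillespie-style kinetic Monte Carlo loop. The rates below are order-of-magnitude stand-ins (complexation in hours, bond cleavage in seconds), not the dissertation's fitted parameters, and the single-enzyme, single-chain setting omits jamming between enzymes.

```python
import random

random.seed(1)

# Toy two-state kinetic Monte Carlo (Gillespie) model of processive hydrolysis:
# an enzyme must first complex with a chain end (slow), then cleaves bonds
# one by one (fast) until it detaches. Rates are illustrative, not fitted.
k_complex = 1.0 / 3600.0   # complexation, ~hours^-1  (per second)
k_cleave  = 1.0            # bond cleavage, ~seconds^-1
k_off     = 1.0 / 600.0    # detachment while processing

def simulate(chain_length=1000):
    bonds, t, complexed = chain_length, 0.0, False
    while bonds > 0:
        if not complexed:
            t += random.expovariate(k_complex)        # wait for complexation
            complexed = True
        else:
            total = k_cleave + k_off
            t += random.expovariate(total)            # wait for next bond event
            if random.random() < k_cleave / total:
                bonds -= 1                            # processive cleavage
            else:
                complexed = False                     # detach, must re-complex
    return t

t_total = simulate()
```

    Even in this toy setting, most of the simulated time is typically spent waiting for complexation events, mirroring the rate limitation identified above.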

  19. Meta-Modeling: A Knowledge-Based Approach to Facilitating Model Construction and Reuse

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.; Dungan, Jennifer L.

    1997-01-01

    In this paper, we introduce a new modeling approach called meta-modeling and illustrate its practical applicability to the construction of physically-based ecosystem process models. As a critical adjunct to modeling codes, meta-modeling requires explicit specification of certain background information related to the construction and conceptual underpinnings of a model. This information formalizes the heretofore tacit relationship between the mathematical modeling code and the underlying real-world phenomena being investigated, and gives insight into the process by which the model was constructed. We show how the explicit availability of such information can make models more understandable and reusable and less subject to misinterpretation. In particular, background information enables potential users to better interpret an implemented ecosystem model without direct assistance from the model author. Additionally, we show how the discipline involved in specifying background information leads to improved management of model complexity and fewer implementation errors. We illustrate the meta-modeling approach in the context of the Scientists' Intelligent Graphical Modeling Assistant (SIGMA), a new model construction environment. As the user constructs a model using SIGMA, the system adds appropriate background information that ties the executable model to the underlying physical phenomena under investigation. Not only does this information improve the understandability of the final model, it also serves to reduce the overall time and programming expertise necessary to initially build and subsequently modify models. Furthermore, SIGMA's use of background knowledge helps eliminate coding errors resulting from scientific and dimensional inconsistencies that are otherwise difficult to avoid when building complex models. As a demonstration of SIGMA's utility, the system was used to reimplement and extend a well-known forest ecosystem dynamics model: Forest-BGC.
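
    One concrete way to catch the dimensional inconsistencies mentioned above is to carry units alongside values and refuse to combine incompatible quantities. The small class below is a generic illustration of that idea, not SIGMA's actual mechanism.

```python
# Minimal dimensional bookkeeping: each Quantity carries its value plus a
# dict of base-unit exponents, and addition of mismatched dimensions fails.
class Quantity:
    def __init__(self, value, dims):
        self.value = value
        self.dims = dims            # e.g. {"m": 1, "s": -1} for velocity

    def __add__(self, other):
        if self.dims != other.dims:
            raise ValueError(f"dimension mismatch: {self.dims} vs {other.dims}")
        return Quantity(self.value + other.value, self.dims)

    def __mul__(self, other):
        dims = dict(self.dims)
        for d, p in other.dims.items():
            dims[d] = dims.get(d, 0) + p
        # drop dimensions that cancel to exponent zero
        return Quantity(self.value * other.value, {d: p for d, p in dims.items() if p})

velocity = Quantity(3.0, {"m": 1, "s": -1})
time_ = Quantity(2.0, {"s": 1})
distance = velocity * time_          # dims {"m": 1} -- consistent
```

    Adding `velocity + time_` raises an error at model-construction time, which is exactly the class of mistake the paper argues background knowledge should catch.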

  20. Patterns of Circulating Inflammatory Biomarkers in Older Persons with Varying Levels of Physical Performance: A Partial Least Squares-Discriminant Analysis Approach

    PubMed Central

    Marzetti, Emanuele; Landi, Francesco; Marini, Federico; Cesari, Matteo; Buford, Thomas W.; Manini, Todd M.; Onder, Graziano; Pahor, Marco; Bernabei, Roberto; Leeuwenburgh, Christiaan; Calvani, Riccardo

    2014-01-01

    Background: Chronic, low-grade inflammation and declining physical function are hallmarks of the aging process. However, previous attempts to correlate individual inflammatory biomarkers with physical performance in older people have produced mixed results. Given the complexity of the inflammatory response, the simultaneous analysis of an array of inflammatory mediators may provide more insights into the relationship between inflammation and age-related physical function decline. This study was designed to explore the association between a panel of inflammatory markers and physical performance in older adults through a multivariate statistical approach. Methods: Community-dwelling older persons were categorized into “normal walkers” (NWs; n = 27) or “slow walkers” (SWs; n = 11) groups using 0.8 m s−1 as the 4-m gait speed cutoff. A panel of 14 circulating inflammatory biomarkers was assayed by multiplex analysis. Partial least squares-discriminant analysis (PLS-DA) was used to identify patterns of inflammatory mediators associated with gait speed categories. Results: The optimal complexity of the PLS-DA model was found to be five latent variables. The proportion of correct classification was 88.9% for NW subjects (74.1% in cross-validation) and 90.9% for SW individuals (81.8% in cross-validation). Discriminant biomarkers in the model were interleukin 8, myeloperoxidase, and tumor necrosis factor alpha (all higher in the SW group), and P-selectin, interferon gamma, and granulocyte–macrophage colony-stimulating factor (all higher in the NW group). Conclusion: Distinct profiles of circulating inflammatory biomarkers characterize older subjects with different levels of physical performance. The dissection of these patterns may provide novel insights into the role played by inflammation in the disabling cascade and possible new targets for interventions. PMID:25593902

  1. Design criteria for extraction with chemical reaction and liquid membrane permeation

    NASA Technical Reports Server (NTRS)

    Bart, H. J.; Bauer, A.; Lorbach, D.; Marr, R.

    1988-01-01

    The design criteria for heterogeneous chemical reactions in liquid/liquid systems formally correspond to those of classical physical extraction. More complex models are presented which describe the material exchange at the individual droplets in an extraction with chemical reaction and in liquid membrane permeation.

  2. Comparisons between physics-based, engineering, and statistical learning models for outdoor sound propagation.

    PubMed

    Hart, Carl R; Reznicek, Nathan J; Wilson, D Keith; Pettit, Chris L; Nykaza, Edward T

    2016-05-01

    Many outdoor sound propagation models exist, ranging from highly complex physics-based simulations to simplified engineering calculations, and more recently, highly flexible statistical learning methods. Several engineering and statistical learning models are evaluated by using a particular physics-based model, namely, a Crank-Nicholson parabolic equation (CNPE), as a benchmark. Narrowband transmission loss values predicted with the CNPE, based upon a simulated data set of meteorological, boundary, and source conditions, act as simulated observations. In the simulated data set sound propagation conditions span from downward refracting to upward refracting, for acoustically hard and soft boundaries, and low frequencies. Engineering models used in the comparisons include the ISO 9613-2 method, Harmonoise, and Nord2000 propagation models. Statistical learning methods used in the comparisons include bagged decision tree regression, random forest regression, boosting regression, and artificial neural network models. Computed skill scores are relative to sound propagation in a homogeneous atmosphere over a rigid ground. Overall skill scores for the engineering noise models are 0.6%, -7.1%, and 83.8% for the ISO 9613-2, Harmonoise, and Nord2000 models, respectively. Overall skill scores for the statistical learning models are 99.5%, 99.5%, 99.6%, and 99.6% for bagged decision tree, random forest, boosting, and artificial neural network regression models, respectively.
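
    The skill scores quoted above compare each model's error against the error of the stated reference (homogeneous atmosphere over rigid ground). A common MSE-based definition, assumed here since the abstract does not give the formula, is:

```python
import numpy as np

def skill_score(y_true, y_model, y_baseline):
    # 1 - MSE(model)/MSE(baseline): 1.0 is perfect, 0.0 matches the
    # baseline, negative is worse than the baseline. (Assumed definition.)
    mse_model = np.mean((y_true - y_model) ** 2)
    mse_base = np.mean((y_true - y_baseline) ** 2)
    return 1.0 - mse_model / mse_base

# Toy transmission-loss values [dB], purely for illustration.
tl_true = np.array([40.0, 55.0, 62.0, 48.0])
tl_model = np.array([41.0, 54.0, 63.0, 47.0])     # a close-fitting model
tl_free = np.array([50.0, 50.0, 50.0, 50.0])      # homogeneous-atmosphere baseline

ss = skill_score(tl_true, tl_model, tl_free)
```

    Under this definition a perfect model scores 1 (100%), the baseline itself scores 0, and a model worse than the baseline scores negative, which is consistent with the small or negative engineering-model scores reported.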

  3. Validated Predictions of Metabolic Energy Consumption for Submaximal Effort Movement

    PubMed Central

    Tsianos, George A.; MacFadden, Lisa N.

    2016-01-01

    Physical performance emerges from complex interactions among many physiological systems that are largely driven by the metabolic energy demanded. Quantifying metabolic demand is an essential step for revealing the many mechanisms of physical performance decrement, but accurate predictive models do not exist. The goal of this study was to investigate if a recently developed model of muscle energetics and force could be extended to reproduce the kinematics, kinetics, and metabolic demand of submaximal effort movement. Upright dynamic knee extension against various levels of ergometer load was simulated. Task energetics were estimated by combining the model of muscle contraction with validated models of lower limb musculotendon paths and segment dynamics. A genetic algorithm was used to compute the muscle excitations that reproduced the movement with the lowest energetic cost, which was determined to be an appropriate criterion for this task. Model predictions of oxygen uptake rate (VO2) were well within experimental variability for the range over which the model parameters were confidently known. The model's accurate estimates of metabolic demand make it useful for assessing the likelihood and severity of physical performance decrement for a given task as well as investigating underlying physiologic mechanisms. PMID:27248429
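
    The optimization step described above (a genetic algorithm searching for muscle excitations that minimize an energetic criterion) can be sketched with a simple truncation-selection GA. The quadratic cost below is an invented stand-in for the musculoskeletal simulation, chosen so the optimum is known in closed form (target / 1.1).

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in cost: reproduce the movement (tracking term) at the lowest
# metabolic-like effort (penalty term). Not the study's actual model.
def cost(x, target):
    tracking = np.sum((x - target) ** 2)
    effort = 0.1 * np.sum(x ** 2)
    return tracking + effort

def genetic_minimize(target, pop_size=40, n_gen=200, sigma=0.1):
    dim = target.size
    pop = rng.uniform(0.0, 1.0, (pop_size, dim))       # excitations in [0, 1]
    for _ in range(n_gen):
        fitness = np.array([cost(ind, target) for ind in pop])
        order = np.argsort(fitness)
        parents = pop[order[: pop_size // 2]]          # truncation selection (elitist)
        children = parents + rng.normal(0.0, sigma, parents.shape)  # Gaussian mutation
        pop = np.clip(np.vstack([parents, children]), 0.0, 1.0)
    fitness = np.array([cost(ind, target) for ind in pop])
    return pop[np.argmin(fitness)]

target = np.array([0.2, 0.5, 0.8])   # hypothetical excitation pattern to track
best = genetic_minimize(target)
```

    Retaining the parents each generation keeps the best individual found so far, so the minimum cost is non-increasing over generations.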

  4. A consistent approach to estimate the breakdown voltage of high voltage electrodes under positive switching impulses

    NASA Astrophysics Data System (ADS)

    Arevalo, L.; Wu, D.; Jacobson, B.

    2013-08-01

    The main purpose of this paper is to present a physical model of long air gap electrical discharges under positive switching impulses. The development and progression of discharges in long air gaps are attributable to two intertwined physical phenomena, namely, the leader channel and the streamer zone. Experimental studies have been used to develop empirical and physical models capable of representing the streamer zone and the leader channel. The empirical ones have led to improvements in the electrical design of high voltage apparatus and insulation distances, but they cannot take into account factors associated with fundamental physics and/or the behavior of materials. The physical models have been used to describe and understand the discharge phenomena of laboratory and lightning discharges. However, because of the complex simulations necessary to reproduce real cases, they are not in widespread use in the engineering of practical applications. Hence, the aim of the work presented here is to develop a model based on the physics of the discharge capable of validating and complementing the existing engineering models. The model presented here proposes a new geometrical approximation for the representation of the streamer and the calculation of the accumulated electrical charge. The model considers a variable streamer region that changes with the temporal and spatial variations of the electric field. The leader channel is modeled using the non-local thermodynamic equilibrium equations. Furthermore, statistical delays before the inception of the first corona, and random distributions to represent the tortuous nature of the path taken by the leader channel, were included based on the behavior observed in experimental tests, with the intention of ensuring the discharge behaved in a realistic manner. For comparison purposes, two different gap configurations were simulated. A reasonable agreement was found between the physical model and the experimental test results.

  5. Quantum theory in real Hilbert space: How the complex Hilbert space structure emerges from Poincaré symmetry

    NASA Astrophysics Data System (ADS)

    Moretti, Valter; Oppio, Marco

    As earlier conjectured by several authors and much later established by Solèr (relying on partial results by Piron, Maeda-Maeda and other authors), from the lattice theory point of view, Quantum Mechanics may be formulated in real, complex or quaternionic Hilbert spaces only. Stückelberg provided some physical, but not mathematically rigorous, reasons for ruling out the real Hilbert space formulation, assuming that any formulation should encompass a statement of Heisenberg principle. Focusing on this issue from another — in our opinion, deeper — viewpoint, we argue that there is a general fundamental reason why elementary quantum systems are not described in real Hilbert spaces. It is their basic symmetry group. In the first part of the paper, we consider an elementary relativistic system within Wigner’s approach defined as a locally-faithful irreducible strongly-continuous unitary representation of the Poincaré group in a real Hilbert space. We prove that, if the squared-mass operator is non-negative, the system admits a natural, Poincaré invariant and unique up to sign, complex structure which commutes with the whole algebra of observables generated by the representation itself. This complex structure leads to a physically equivalent reformulation of the theory in a complex Hilbert space. Within this complex formulation, differently from what happens in the real one, all selfadjoint operators represent observables in accordance with Solèr’s thesis, and the standard quantum version of Noether theorem may be formulated. In the second part of this work, we focus on the physical hypotheses adopted to define a quantum elementary relativistic system relaxing them on the one hand, and making our model physically more general on the other hand. 
We use a physically more accurate notion of irreducibility regarding the algebra of observables only, we describe the symmetries in terms of automorphisms of the restricted lattice of elementary propositions of the quantum system and we adopt a notion of continuity referred to the states viewed as probability measures on the elementary propositions. Also in this case, the final result proves that there exists a unique (up to sign) Poincaré invariant complex structure making the theory complex and completely fitting into Solèr’s picture. This complex structure reveals a nice interplay of Poincaré symmetry and the classification of the commutant of irreducible real von Neumann algebras.

  6. Circular analysis in complex stochastic systems

    PubMed Central

    Valleriani, Angelo

    2015-01-01

    Ruling out observations can lead to wrong models. This danger occurs unwittingly when one selects observations, experiments, simulations or time-series based on their outcome. In stochastic processes, conditioning on the future outcome biases all local transition probabilities and makes them consistent with the selected outcome. This circular self-consistency leads to models that are inconsistent with physical reality. It is also the reason why models built solely on macroscopic observations are prone to this fallacy. PMID:26656656
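
    The selection bias described above is easy to reproduce numerically: estimate the step probability of an unbiased random walk, but only from runs whose final outcome was positive. A minimal sketch:

```python
import random

random.seed(0)

# Estimate the up-step probability of an unbiased +/-1 random walk two ways:
# from all runs, and only from runs that happened to end above their start.
def walk(n_steps=20):
    return [1 if random.random() < 0.5 else -1 for _ in range(n_steps)]

all_up = all_n = sel_up = sel_n = 0
for _ in range(20000):
    steps = walk()
    ups = sum(1 for s in steps if s == 1)
    all_up += ups
    all_n += len(steps)
    if sum(steps) > 0:                 # conditioning on the future outcome...
        sel_up += ups
        sel_n += len(steps)

p_all = all_up / all_n                 # unconditional estimate, ~0.5
p_selected = sel_up / sel_n            # outcome-conditioned estimate, > 0.5
```

    The unconditional estimate stays near 0.5, while the outcome-conditioned estimate is pushed well above it: the local transition probabilities have been made consistent with the selected outcome, exactly the circularity the abstract warns against.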

  7. Extending the Riemann-Solver-Free High-Order Space-Time Discontinuous Galerkin Cell Vertex Scheme (DG-CVS) to Solve Compressible Magnetohydrodynamics Equations

    DTIC Science & Technology

    2016-06-08

    forces. Plasmas in hypersonic and astrophysical flows are one of the most typical examples of such conductive fluids. Though MHD models are a low...remain powerful tools in helping researchers to understand the complex physical processes in the geospace environment. For example, the ideal MHD...vertex level within each physical time step. For this reason and the method’s DG ingredient, the method was named as the space-time discontinuous Galerkin

  8. Investigating low flow process controls, through complex modelling, in a UK chalk catchment

    NASA Astrophysics Data System (ADS)

    Lubega Musuuza, Jude; Wagener, Thorsten; Coxon, Gemma; Freer, Jim; Woods, Ross; Howden, Nicholas

    2017-04-01

    The typical streamflow response of Chalk catchments is dominated by groundwater contributions due to the high degree of groundwater recharge through preferential flow pathways. The groundwater store attenuates the precipitation signal, which causes a delay between the corresponding high and low extremes in the precipitation and the stream flow signals. Streamflow responses can therefore be quite out of phase with the precipitation input to a Chalk catchment. Modelling approaches that characterise such catchment systems therefore need to reproduce these percolation- and groundwater-dominated pathways in order to capture the dominant flow paths. The simulation of low flow conditions for chalk catchments in numerical models is especially difficult due to the complex interactions between various processes that may not be adequately represented or resolved in the models. Periods of low stream flows are particularly important due to competing water uses in the summer, including agriculture and water supply. In this study we apply and evaluate the physically-based Penn State Integrated Hydrologic Model (PIHM) on the River Kennet, a sub-catchment of the Thames Basin, to demonstrate how the simulations of a chalk catchment are improved by a physically-based system representation. We also use an ensemble of simulations to investigate the sensitivity of various hydrologic signatures (relevant to low flows and droughts) to the different parameters in the model, thereby inferring the levels of control exerted by the processes that the parameters represent.

  9. Progress and challenges in the development of physically-based numerical models for prediction of flow and contaminant dispersion in the urban environment

    NASA Astrophysics Data System (ADS)

    Lien, F. S.; Yee, E.; Ji, H.; Keats, A.; Hsieh, K. J.

    2006-06-01

    The release of chemical, biological, radiological, or nuclear (CBRN) agents by terrorists or rogue states in a North American city (densely populated urban centre) and the subsequent exposure, deposition and contamination are emerging threats in an uncertain world. The modeling of the transport, dispersion, deposition and fate of a CBRN agent released in an urban environment is an extremely complex problem that encompasses potentially multiple space and time scales. The availability of high-fidelity, time-dependent models for the prediction of a CBRN agent's movement and fate in a complex urban environment can provide the strongest technical and scientific foundation for support of Canada's more broadly based effort at advancing counter-terrorism planning and operational capabilities. The objective of this paper is to report the progress of developing and validating an integrated, state-of-the-art, high-fidelity multi-scale, multi-physics modeling system for the accurate and efficient prediction of urban flow and dispersion of CBRN (and other toxic) materials discharged into these flows. Development of this proposed multi-scale modeling system will provide the real-time modeling and simulation tool required to predict injuries, casualties and contamination and to make relevant decisions (based on the strongest technical and scientific foundations) in order to minimize the consequences of a CBRN incident in a populated centre.

  10. SPARK: A Framework for Multi-Scale Agent-Based Biomedical Modeling.

    PubMed

    Solovyev, Alexey; Mikheev, Maxim; Zhou, Leming; Dutta-Moscato, Joyeeta; Ziraldo, Cordelia; An, Gary; Vodovotz, Yoram; Mi, Qi

    2010-01-01

    Multi-scale modeling of complex biological systems remains a central challenge in the systems biology community. A method of dynamic knowledge representation known as agent-based modeling enables the study of higher level behavior emerging from discrete events performed by individual components. With the advancement of computer technology, agent-based modeling has emerged as an innovative technique to model the complexities of systems biology. In this work, the authors describe SPARK (Simple Platform for Agent-based Representation of Knowledge), a framework for agent-based modeling specifically designed for systems-level biomedical model development. SPARK is a stand-alone application written in Java. It provides a user-friendly interface, and a simple programming language for developing Agent-Based Models (ABMs). SPARK has the following features specialized for modeling biomedical systems: 1) continuous space that can simulate real physical space; 2) flexible agent size and shape that can represent the relative proportions of various cell types; 3) multiple spaces that can concurrently simulate and visualize multiple scales in biomedical models; 4) a convenient graphical user interface. Existing ABMs of diabetic foot ulcers and acute inflammation were implemented in SPARK. Models of identical complexity were run in both NetLogo and SPARK; the SPARK-based models ran two to three times faster.
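
    A minimal continuous-space agent-based sketch, loosely in the spirit of the features listed above (continuous space, per-agent size, a second coupled "space"), is given below. It is a generic Python illustration, not SPARK code or its API; the agent behavior and deposited "signal" field are invented.

```python
import random

random.seed(0)

# Agents random-walk in a continuous unit square and deposit a signal
# (think of a secreted inflammatory mediator) into a coarse grid that
# plays the role of a second, coupled space.
class Agent:
    def __init__(self, x, y, radius, deposit):
        self.x, self.y = x, y
        self.radius = radius          # relative cell size (unused by this toy dynamics)
        self.deposit = deposit        # how much signal this cell type secretes per step

    def step(self, field, grid_n):
        self.x = min(1.0, max(0.0, self.x + random.uniform(-0.05, 0.05)))
        self.y = min(1.0, max(0.0, self.y + random.uniform(-0.05, 0.05)))
        i = min(grid_n - 1, int(self.x * grid_n))
        j = min(grid_n - 1, int(self.y * grid_n))
        field[i][j] += self.deposit

grid_n = 10
field = [[0.0] * grid_n for _ in range(grid_n)]
agents = [Agent(random.random(), random.random(), 0.01, 1.0) for _ in range(50)]
for _ in range(100):
    for a in agents:
        a.step(field, grid_n)

total_signal = sum(map(sum, field))   # 50 agents x 100 steps x 1.0 deposited
```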

  11. Managing Analysis Models in the Design Process

    NASA Technical Reports Server (NTRS)

    Briggs, Clark

    2006-01-01

    Design of large, complex space systems depends on significant model-based support for exploration of the design space. Integrated models predict system performance in mission-relevant terms given design descriptions and multiple physics-based numerical models. Both the design activities and the modeling activities warrant explicit process definitions and active process management to protect the project from excessive risk. Software and systems engineering processes have been formalized and similar formal process activities are under development for design engineering and integrated modeling. JPL is establishing a modeling process to define development and application of such system-level models.

  12. Preparing the Dutch delta for future droughts: model based support in the national Delta Programme

    NASA Astrophysics Data System (ADS)

    ter Maat, Judith; Haasnoot, Marjolijn; van der Vat, Marnix; Hunink, Joachim; Prinsen, Geert; Visser, Martijn

    2014-05-01

    Keywords: uncertainty, policymaking, adaptive policies, fresh water management, droughts, Netherlands, Dutch Deltaprogramme, physically-based complex model, theory-motivated meta-model To prepare the Dutch Delta for future droughts and water scarcity, a nation-wide 4-year project, called Delta Programme, is established to assess impacts of climate scenarios and socio-economic developments and to explore policy options. The results should contribute to a national adaptive plan that is able to adapt to future uncertain conditions, if necessary. For this purpose, we followed a model-based step-wise approach, wherein both physically-based complex models and theory-motivated meta-models were used. First step (2010-2011) was to make a quantitative problem description. This involved a sensitivity analysis of the water system for drought situations under current and future conditions. The comprehensive Dutch national hydrological instrument was used for this purpose and further developed. Secondly (2011-2012) our main focus was on making an inventory of potential actions together with stakeholders. We assessed efficacy, sell-by date of actions, and reassessed vulnerabilities and opportunities for the future water supply system if actions were (not) taken. A rapid assessment meta-model was made based on the complex model. The effects of all potential measures were included in the tool. Thirdly (2012-2013), with support of the rapid assessment model, we assessed the efficacy of policy actions over time for an ensemble of possible futures including sea level rise and climate and land use change. Last step (2013-2014) involves the selection of preferred actions from a set of promising actions that meet the defined objectives. These actions are all modeled and evaluated using the complex model. The outcome of the process will be an adaptive management plan. 
The adaptive plan describes a set of preferred policy pathways - sequences of policy actions - to achieve targets under changing conditions. The plan commits to short-term actions and identifies signpost indicators and trigger values to assess whether the next actions of the identified policy pathways need to be implemented or whether the plan needs to be reassessed. For example, river discharges could be measured to monitor changes in low discharges as a result of climate change, and to assess whether policy options such as diverting more water to the main fresh water lake (IJsselmeer) need to be implemented sooner, later, or not at all. The adaptive plan of the Delta Programme will be presented in 2014. First lessons of this part of the Delta Programme can already be drawn: both the complex model and the meta-model had their own purpose in each phase. The meta-model was particularly useful for identifying promising policy options and for consultation with stakeholders, thanks to its instant response. The complex model offered many more possibilities for assessing the impacts of regional policy actions, and was favored by regional stakeholders, who recognized their areas better in this model. Different sector impact assessment modules are also included in the workflow of the complex model. However, the complex model has a long runtime (i.e., three days for a 1-year simulation, or more than 100 days for a 35-year time series simulation), which makes it less suitable for supporting a dynamic policy process on demand and interactively.
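The signpost-and-trigger monitoring logic described above can be sketched generically; the threshold value and action name below are hypothetical illustrations, not taken from the Delta Programme:

```python
def next_action(observed_low_discharge, trigger_threshold, pathway):
    """Return the next policy action on the pathway if the monitored signpost
    (here: a low-discharge statistic, e.g. in m3/s) crosses its trigger value;
    otherwise return None (keep monitoring)."""
    if observed_low_discharge < trigger_threshold and pathway:
        return pathway[0]
    return None
```

In an adaptive plan, such checks are re-run as monitoring data arrive, so that actions are triggered by observed change rather than by a fixed schedule.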

  13. Quaternionic Kähler Detour Complexes and N = 2 Supersymmetric Black Holes

    NASA Astrophysics Data System (ADS)

    Cherney, D.; Latini, E.; Waldron, A.

    2011-03-01

    We study a class of supersymmetric spinning particle models derived from the radial quantization of stationary, spherically symmetric black holes of four dimensional N = 2 supergravities. By virtue of the c-map, these spinning particles move in quaternionic Kähler manifolds. Their spinning degrees of freedom describe mini-superspace-reduced supergravity fermions. We quantize these models using BRST detour complex technology. The construction of a nilpotent BRST charge is achieved by using local (worldline) supersymmetry ghosts to generate special holonomy transformations. (An interesting byproduct of the construction is a novel Dirac operator on the superghost extended Hilbert space.) The resulting quantized models are gauge invariant field theories whose fields are sections of special quaternionic vector bundles. They underlie and generalize the quaternionic version of Dolbeault cohomology discovered by Baston. In fact, Baston's complex is related to the BPS sector of the models we write down. Our results rely on a calculus of operators on quaternionic Kähler manifolds that follows from BRST machinery and, although directly motivated by black hole physics, can be broadly applied to any model relying on quaternionic geometry.

  14. The CEOP Inter-Monsoon Studies (CIMS)

    NASA Technical Reports Server (NTRS)

    Lau, William K. M.

    2003-01-01

    Prediction of climate relies on models, and better model prediction depends on good model physics. Improving model physics requires the maximal utilization of climate data of the past, present and future. CEOP provides the first example of a comprehensive, integrated global and regional data set, consisting of globally gridded data, reference site in-situ observations, model location time series (MOLTS), and integrated satellite data for a two-year period covering two complete annual cycles of 2003-2004. The monsoon regions are the most important socio-economically in terms of devastation by floods and droughts, and of potential impacts from climate change and fluctuations of the hydrologic cycle. Scientifically, they are the most challenging, because of the complex interactions of atmosphere, land and oceans, and of local vs. remote forcings contributing to climate variability and change in the region. Given that many common features and physical teleconnections exist among different monsoon regions, an international research focus on monsoons must be coordinated and sustained. Current models of the monsoon are grossly inadequate for regional predictions. For improvement, models must be confronted with relevant observations, and model physics developers must be made aware of the wealth of information from existing climate data, field measurements, and satellite data that can be used to improve models. Model transferability studies must be conducted. CIMS is a major initiative under CEOP to engage the modeling and the observational communities in a coordinated effort to study the monsoons. The objectives of CIMS are (a) to provide a better understanding of fundamental physical processes (diurnal cycle, annual cycle, and intraseasonal oscillations) in monsoon regions around the world and (b) to demonstrate the synergy and utility of CEOP data in providing a pathway for model physics evaluation and improvement. 
In this talk, I will present the basic concepts of CIMS and the key scientific problems facing monsoon climates and provide examples of common monsoon features, and possible monsoon induced teleconnections linking different parts of the world.

  15. Hydrology or biology? Modeling simplistic physical constraints on lake carbon biogeochemistry to identify when and where biology is likely to matter

    NASA Astrophysics Data System (ADS)

    Jones, S.; Zwart, J. A.; Solomon, C.; Kelly, P. T.

    2017-12-01

    Current efforts to scale lake carbon biogeochemistry rely heavily on empirical observations and rarely consider physical or biological inter-lake heterogeneity that is likely to regulate terrestrial dissolved organic carbon (tDOC) decomposition in lakes. This may in part result from a traditional focus of lake ecologists on in-lake biological processes OR physical-chemical pattern across lake regions, rather than on process AND pattern across scales. To explore the relative importance of local biological processes and physical processes driven by lake hydrologic setting, we created a simple, analytical model of tDOC decomposition in lakes that focuses on the regulating roles of lake size and catchment hydrologic export. Our simplistic model can generally recreate patterns consistent with both local- and regional-scale patterns in tDOC concentration and decomposition. We also see that variation in lake hydrologic setting, including the importance of evaporation as a hydrologic export, generates significant, emergent variation in tDOC decomposition at a given hydrologic residence time, and creates patterns that have been historically attributed to variation in tDOC quality. Comparing predictions of this `biologically null model' to field observations and more biologically complex models could indicate when and where biology is likely to matter most.
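The abstract does not give the model equations; a common minimal ("biologically null") form treats the lake as a well-mixed reactor in which first-order tDOC decay at rate k competes with hydrologic export at rate 1/HRT, so the steady-state fraction of the tDOC load decomposed is k/(k + 1/HRT). Evaporation exports water but not carbon, which lengthens the carbon residence time relative to the water residence time. A sketch under those assumptions (parameter values hypothetical):

```python
def fraction_decomposed(k_per_day, hrt_days, evap_fraction=0.0):
    """Steady-state fraction of the tDOC load decomposed in a well-mixed lake.

    First-order decay (rate k) competes with hydrologic export; if a fraction
    of the water budget leaves by evaporation (which removes water but not
    carbon), the effective carbon flushing rate drops to (1 - evap_fraction) / HRT.
    """
    flush = (1.0 - evap_fraction) / hrt_days
    return k_per_day / (k_per_day + flush)
```

Note that increasing the evaporative fraction raises the decomposed fraction at a fixed hydrologic residence time, consistent with the emergent variation the abstract describes.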

  16. Improvements to Fidelity, Generation and Implementation of Physics-Based Lithium-Ion Reduced-Order Models

    NASA Astrophysics Data System (ADS)

    Rodriguez Marco, Albert

    Battery management systems (BMS) require computationally simple but highly accurate models of the battery cells they are monitoring and controlling. Historically, empirical equivalent-circuit models have been used, but increasingly researchers are focusing their attention on physics-based models due to their greater predictive capabilities. These models have high intrinsic computational complexity and so must undergo some kind of order-reduction process to make their use by a BMS feasible: we favor methods based on a transfer-function approach to battery cell dynamics. In prior works, transfer functions have been found from full-order PDE models via two simplifying assumptions: (1) a linearization assumption--a fundamental necessity in order to make transfer functions--and (2) an assumption made out of expedience that decouples the electrolyte-potential and electrolyte-concentration PDEs in order to enable solving for the transfer functions from the PDEs. This dissertation improves the fidelity of physics-based models by eliminating the need for the second assumption and by linearizing nonlinear dynamics around different constant currents. Electrochemical transfer functions are infinite-order and cannot be expressed as a ratio of polynomials in the Laplace variable s. Thus, for practical use, these systems need to be approximated using reduced-order models that capture the most significant dynamics. This dissertation improves the generation of physics-based reduced-order models by introducing different realization algorithms, which produce a low-order model from the infinite-order electrochemical transfer functions. Physics-based reduced-order models are linear and describe cell dynamics when operated near the setpoint at which they were generated. Hence, multiple physics-based reduced-order models need to be generated at different setpoints (i.e., state-of-charge, temperature and C-rate) in order to extend the cell operating range. 
This dissertation improves the implementation of physics-based reduced-order models by introducing different blending approaches that combine the pre-computed models generated (offline) at different setpoints in order to produce good electrochemical estimates (online) along the cell state-of-charge, temperature and C-rate range.
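The dissertation's specific realization algorithms are not spelled out in the abstract; one standard choice for producing a low-order state-space model from sampled impulse-response data is the Eigensystem Realization Algorithm (ERA), sketched here for a SISO system with NumPy:

```python
import numpy as np

def era(markov, order):
    """Eigensystem Realization Algorithm: build a discrete-time state-space
    model (A, B, C) of the requested order from impulse-response (Markov)
    parameters h[1], h[2], ... of a SISO system, via a Hankel-matrix SVD."""
    m = len(markov) // 2
    H0 = np.array([[markov[i + j] for j in range(m)] for i in range(m)])
    H1 = np.array([[markov[i + j + 1] for j in range(m)] for i in range(m)])
    U, s, Vt = np.linalg.svd(H0)
    U, s, Vt = U[:, :order], s[:order], Vt[:order, :]
    sqrt_s = np.sqrt(s)
    A = (U / sqrt_s).T @ H1 @ (Vt.T / sqrt_s)   # S^-1/2 U^T H1 V S^-1/2
    B = sqrt_s[:, None] * Vt[:, :1]             # first column of S^1/2 V^T
    C = U[:1, :] * sqrt_s                       # first row of U S^1/2
    return A, B, C
```

The returned (A, B, C) triple is a low-order model whose impulse response reproduces the data, suitable for fast time-domain simulation by a BMS; in the dissertation's setting the Markov parameters would come from the electrochemical transfer functions.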

  17. Plasma and Electro-energetic Physics

    DTIC Science & Technology

    2012-03-07

    Dynamical equations (with complex surfaces): the relativistic Lorentz force law for relativistic momentum p and velocity u, together with Maxwell's curl equations in Gaussian units, ∇×H = (1/c)∂D/∂t + (4π/c)J and ∇×E = −(1/c)∂B/∂t. 3D, high-fidelity, parallel modeling of high energy density fields and particles in complex geometry with some surface effects; cathodes (500 µm separation). Tang, AFRL/RD. DISTRIBUTION A: Approved for public release; distribution is unlimited. ICEPIC simulations; equipotential plots.

  18. Exploiting non-covalent π interactions for catalyst design

    NASA Astrophysics Data System (ADS)

    Neel, Andrew J.; Hilton, Margaret J.; Sigman, Matthew S.; Toste, F. Dean

    2017-03-01

    Molecular recognition, binding and catalysis are often mediated by non-covalent interactions involving aromatic functional groups. Although the relative complexity of these so-called π interactions has made them challenging to study, theory and modelling have now reached the stage at which we can explain their physical origins and obtain reliable insight into their effects on molecular binding and chemical transformations. This offers opportunities for the rational manipulation of these complex non-covalent interactions and their direct incorporation into the design of small-molecule catalysts and enzymes.

  19. The formal de Rham complex

    NASA Astrophysics Data System (ADS)

    Zharinov, V. V.

    2013-02-01

    We propose a formal construction generalizing the classic de Rham complex to a wide class of models in mathematical physics and analysis. The presentation is divided into a sequence of definitions and elementary, easily verified statements; proofs are therefore given only in the key case. Linear operations are everywhere performed over a fixed number field F = R, C. All linear spaces, algebras, and modules, although not stipulated explicitly, are by definition or by construction endowed with natural locally convex topologies, and their morphisms are continuous.

  20. Molecular Structure and Sequence in Complex Coacervates

    NASA Astrophysics Data System (ADS)

    Sing, Charles; Lytle, Tyler; Madinya, Jason; Radhakrishna, Mithun

    Oppositely-charged polyelectrolytes in aqueous solution can undergo associative phase separation, in a process known as complex coacervation. This results in a polyelectrolyte-dense phase (coacervate) and polyelectrolyte-dilute phase (supernatant). There remain challenges in understanding this process, despite a long history in polymer physics. We use Monte Carlo simulation to demonstrate that molecular features (charge spacing, size) play a crucial role in governing the equilibrium in coacervates. We show how these molecular features give rise to strong monomer sequence effects, due to a combination of counterion condensation and correlation effects. We distinguish between structural and sequence-based correlations, which can be designed to tune the phase diagram of coacervation. Sequence effects further inform the physical understanding of coacervation, and provide the basis for new coacervation models that take monomer-level features into account.

  1. Cross-comparison of spacecraft-environment interaction model predictions applied to Solar Probe Plus near perihelion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marchand, R.; Miyake, Y.; Usui, H.

    2014-06-15

    Five spacecraft-plasma models are used to simulate the interaction of a simplified-geometry Solar Probe Plus (SPP) satellite with the space environment under representative solar wind conditions near perihelion. By considering similarities and differences between results obtained with different numerical approaches under well defined conditions, the consistency and validity of our models can be assessed. The impact on model predictions of physical effects of importance in the SPP mission is also considered by comparing results obtained with and without these effects. Simulation results are presented and compared with increasing levels of complexity in the physics of interaction between the solar environment and the SPP spacecraft. The comparisons focus particularly on spacecraft floating potentials, contributions to the currents collected and emitted by the spacecraft, and on the potential and density spatial profiles near the satellite. The physical effects considered include spacecraft charging, photoelectron and secondary electron emission, and the presence of a background magnetic field. Model predictions obtained with our different computational approaches are found to be in agreement within 2% when the same physical processes are taken into account and treated similarly. The comparisons thus indicate that, with the correct description of important physical effects, our simulation models should have the required skill to predict details of satellite-plasma interaction physics under relevant conditions with a good level of confidence. Our models concur in predicting a negative floating potential V_fl ≈ −10 V for SPP at perihelion. They also predict a “saturated emission regime” whereby most emitted photo- and secondary electrons will be reflected by a potential barrier near the surface back to the spacecraft, where they will be recollected.

  2. Inter-model analysis of tsunami-induced coastal currents

    NASA Astrophysics Data System (ADS)

    Lynett, Patrick J.; Gately, Kara; Wilson, Rick; Montoya, Luis; Arcas, Diego; Aytore, Betul; Bai, Yefei; Bricker, Jeremy D.; Castro, Manuel J.; Cheung, Kwok Fai; David, C. Gabriel; Dogan, Gozde Guney; Escalante, Cipriano; González-Vida, José Manuel; Grilli, Stephan T.; Heitmann, Troy W.; Horrillo, Juan; Kânoğlu, Utku; Kian, Rozita; Kirby, James T.; Li, Wenwen; Macías, Jorge; Nicolsky, Dmitry J.; Ortega, Sergio; Pampell-Manis, Alyssa; Park, Yong Sung; Roeber, Volker; Sharghivand, Naeimeh; Shelby, Michael; Shi, Fengyan; Tehranirad, Babak; Tolkova, Elena; Thio, Hong Kie; Velioğlu, Deniz; Yalçıner, Ahmet Cevdet; Yamazaki, Yoshiki; Zaytsev, Andrey; Zhang, Y. J.

    2017-06-01

    To help produce accurate and consistent maritime hazard products, the National Tsunami Hazard Mitigation Program organized a benchmarking workshop to evaluate the numerical modeling of tsunami currents. Thirteen teams of international researchers, using a set of tsunami models currently utilized for hazard mitigation studies, presented results for a series of benchmarking problems; these results are summarized in this paper. Comparisons focus on physical situations where the currents are shear and separation driven, and are thus de-coupled from the incident tsunami waveform. In general, we find that models of increasing physical complexity provide better accuracy, and that low-order three-dimensional models are superior to high-order two-dimensional models. Inside separation zones and in areas strongly affected by eddies, the magnitude of both model-data errors and inter-model differences can be the same as the magnitude of the mean flow. Thus, we make arguments for the need of an ensemble modeling approach for areas affected by large-scale turbulent eddies, where deterministic simulation may be misleading. As a result of the analyses presented herein, we expect that tsunami modelers now have a better awareness of their ability to accurately capture the physics of tsunami currents, and therefore a better understanding of how to use these simulation tools for hazard assessment and mitigation efforts.

  3. Topics in Complexity: Dynamical Patterns in the Cyberworld

    NASA Astrophysics Data System (ADS)

    Qi, Hong

    Quantitative understanding of mechanisms in complex systems is a common "difficult" problem across many fields, such as the physical, biological, social and economic sciences. Investigation of the underlying dynamics of complex systems and the building of individual-based models have recently been fueled by the big data resulting from advancing information technology. This thesis investigates complex systems in social science, focusing on civil unrests on streets and relevant activities online. The investigation consists of collecting data on unrests from open digital sources, characterizing the underlying dynamical patterns, making predictions, and constructing models. A simple law governing the progress of two-sided confrontations is proposed from data on activity at the micro level. Unraveling the connections between organizing activity online and the outburst of unrests on streets gives rise to a further meso-level pattern of human behavior, through which adversarial groups evolve online and hyper-escalate ahead of real-world uprisings. Based on the patterns found, a noticeable improvement in the prediction of civil unrests is achieved. Meanwhile, a novel model created from the combination of mobility dynamics in the cyberworld and a traditional contagion model can better capture the characteristics of modern civil unrests and other contagion-like phenomena than the original model.

  4. Climate, bleaching and connectivity in the Coral Triangle.

    NASA Astrophysics Data System (ADS)

    Curchitser, E. N.; Kleypas, J. A.; Castruccio, F. S.; Drenkard, E.; Thompson, D. M.; Pinsky, M. L.

    2016-12-01

    The Coral Triangle (CT) is the apex of marine biodiversity and supports the livelihoods of millions of people. It is also one of the most threatened of all reef regions in the world. We present results from a series of high-resolution, numerical ocean models designed to address physical and ecological questions relevant to the region's coral communities. The hierarchy of models was designed to optimize the model performance in addressing questions ranging from the role of internal tides in larval connectivity to distinguishing the role of interannual variability from decadal trends in thermal stress leading to mass bleaching events. In this presentation we will show how combining ocean circulation with models of larval dispersal leads to new insights into the interplay of physics and ecology in this complex oceanographic region, which can ultimately be used to inform conservation efforts.

  5. Verification of Functional Fault Models and the Use of Resource Efficient Verification Tools

    NASA Technical Reports Server (NTRS)

    Bis, Rachael; Maul, William A.

    2015-01-01

    Functional fault models (FFMs) are a directed graph representation of the failure effect propagation paths within a system's physical architecture and are used to support development and real-time diagnostics of complex systems. Verification of these models is required to confirm that the FFMs are correctly built and accurately represent the underlying physical system. However, a manual, comprehensive verification process applied to the FFMs was found to be error prone due to the intensive and customized process necessary to verify each individual component model and to require a burdensome level of resources. To address this problem, automated verification tools have been developed and utilized to mitigate these key pitfalls. This paper discusses the verification of the FFMs and presents the tools that were developed to make the verification process more efficient and effective.

  6. Modeling the surface tension of complex, reactive organic-inorganic mixtures

    NASA Astrophysics Data System (ADS)

    Schwier, A. N.; Viglione, G. A.; Li, Z.; McNeill, V. F.

    2013-01-01

    Atmospheric aerosols can contain thousands of organic compounds which impact aerosol surface tension, affecting aerosol properties such as cloud condensation nuclei (CCN) ability. We present new experimental data for the surface tension of complex, reactive organic-inorganic aqueous mixtures mimicking tropospheric aerosols. Each solution contained 2-6 organic compounds, including methylglyoxal, glyoxal, formaldehyde, acetaldehyde, oxalic acid, succinic acid, leucine, alanine, glycine, and serine, with and without ammonium sulfate. We test two surface tension models and find that most reactive, complex, aqueous organic mixtures which do not contain salt are well-described by a weighted Szyszkowski-Langmuir (S-L) model which was first presented by Henning et al. (2005). Two approaches for modeling the effects of salt were tested: (1) the Tuckermann approach (an extension of the Henning model with an additional explicit salt term), and (2) a new implicit method proposed here which employs experimental surface tension data obtained for each organic species in the presence of salt, used with the Henning model. We recommend the use of method (2) for surface tension modeling because the Henning model (using data obtained from organic-inorganic systems) and the Tuckermann approach provide similar modeling fits and goodness-of-fit (χ2) values, yet the Henning model is a simpler and more physical approach to modeling the effects of salt, requiring fewer empirically determined parameters.
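For reference, the single-solute Szyszkowski-Langmuir form, and one plausible mole-fraction-weighted mixture version in the spirit of the Henning model, can be sketched as follows (the precise weighting used by Henning et al. (2005) may differ, and the parameter values in the test are hypothetical fitting constants, not measured ones):

```python
import math

def szyszkowski_langmuir(c, sigma_w, a, b):
    """Single-solute Szyszkowski-Langmuir surface tension:
    sigma = sigma_w - a * ln(1 + b * c), with fitted parameters a, b
    and solvent (water) surface tension sigma_w."""
    return sigma_w - a * math.log(1.0 + b * c)

def weighted_sl(concentrations, sigma_w, a_params, b_params):
    """Weighted S-L model for an organic mixture: each solute's depression
    term is weighted by its fraction of the total organic concentration,
    evaluated at the total concentration."""
    c_tot = sum(concentrations)
    if c_tot == 0.0:
        return sigma_w
    return sigma_w - sum((c / c_tot) * a * math.log(1.0 + b * c_tot)
                         for c, a, b in zip(concentrations, a_params, b_params))
```

In the implicit salt method (2), the per-solute parameters would simply be refit to surface tension data measured for each organic in the presence of salt, leaving the functional form unchanged.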

  7. A network model for characterizing brine channels in sea ice

    NASA Astrophysics Data System (ADS)

    Lieblappen, Ross M.; Kumar, Deip D.; Pauls, Scott D.; Obbard, Rachel W.

    2018-03-01

    The brine pore space in sea ice can form complex connected structures whose geometry is critical in the governance of important physical transport processes between the ocean, sea ice, and surface. Recent advances in three-dimensional imaging using X-ray micro-computed tomography have enabled the visualization and quantification of the brine network morphology and variability. Using imaging of first-year sea ice samples at in situ temperatures, we create a new mathematical network model to characterize the topology and connectivity of the brine channels. This model provides a statistical framework where we can characterize the pore networks via two parameters, depth and temperature, for use in dynamical sea ice models. Our approach advances the quantification of brine connectivity in sea ice, which can help investigations of bulk physical properties, such as fluid permeability, that are key in both global and regional sea ice models.
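The paper's network model characterizes pore networks statistically by depth and temperature; the basic topological question such a model helps answer (is the surface connected to the ocean through brine channels?) can be illustrated with a toy graph search (node labels hypothetical):

```python
from collections import defaultdict, deque

def percolates(edges, surface_nodes, ocean_nodes):
    """Breadth-first search over a brine-channel graph: True if any path
    connects a surface node to an ocean node (a proxy for the connectivity
    that governs fluid permeability)."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen = set(surface_nodes)
    queue = deque(surface_nodes)
    targets = set(ocean_nodes)
    while queue:
        u = queue.popleft()
        if u in targets:
            return True
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return False
```

In the statistical framework of the paper, an ensemble of such graphs would be drawn as a function of depth and temperature and the connectivity statistics passed on to dynamical sea ice models.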

  8. From Rivers to Oceans and Back: Linking Models to Encompass the Full Salmon Life Cycle

    NASA Astrophysics Data System (ADS)

    Danner, E.; Hendrix, N.; Martin, B.; Lindley, S. T.

    2016-02-01

    Pacific salmon are a promising study subject for investigating the linkages between freshwater and coastal ocean ecosystems. Salmon use a wide range of habitats throughout their life cycle as they move with water from mountain streams through mainstem rivers, estuaries, and bays to the coastal ocean, with adult fish swimming back through the same migration route they took as juveniles. Conditions in one habitat can have growth and survival consequences that manifest in the following habitat, so it is key that full life cycle models are used to further our understanding of salmon population dynamics. Given the wide range of habitats and potential stressors, this approach requires the coordination of a multidisciplinary suite of physical and biological models, including climate, hydrologic, hydraulic, food web, circulation, bioenergetic, and ecosystem models. Here we present current approaches to linking physical and biological models that capture the foundational drivers for salmon in complex and dynamic systems.

  9. Dissociative recombination by frame transformation to Siegert pseudostates: A comparison with a numerically solvable model

    NASA Astrophysics Data System (ADS)

    Hvizdoš, Dávid; Váňa, Martin; Houfek, Karel; Greene, Chris H.; Rescigno, Thomas N.; McCurdy, C. William; Čurík, Roman

    2018-02-01

    We present a simple two-dimensional model of the indirect dissociative recombination process. The model has one electronic and one nuclear degree of freedom, and it can be solved to high precision, without making any physically motivated approximations, by employing the exterior complex scaling method together with the finite-element method and a discrete variable representation. The approach is applied to solve a model for dissociative recombination of H2+ in the singlet ungerade channels, and the results serve as a benchmark to test the validity of several physical approximations commonly used in the computational modeling of dissociative recombination for real molecular targets. The second, approximate, set of calculations employs a combination of multichannel quantum defect theory and frame transformation into a basis of Siegert pseudostates. The cross sections computed with the two methods are compared in detail for collision energies from 0 to 2 eV.

  10. Physics-Based Modeling of Electric Operation, Heat Transfer, and Scrap Melting in an AC Electric Arc Furnace

    NASA Astrophysics Data System (ADS)

    Opitz, Florian; Treffinger, Peter

    2016-04-01

    Electric arc furnaces (EAF) are complex industrial plants whose actual behavior depends upon numerous factors. Due to its energy intensive operation, the EAF process has always been subject to optimization efforts. For these reasons, several models have been proposed in the literature to analyze and predict different modes of operation. Most of these models focused on the processes inside the vessel itself. The present paper introduces a dynamic, physics-based model of a complete EAF plant which consists of four subsystems: vessel, electric system, electrode regulation, and off-gas system. Furthermore, the solid phase is not treated as homogeneous; instead, a simple spatial discretization is employed. Hence it is possible to simulate the energy input by electric arcs and fossil fuel burners depending on the state of the melting progress. The model is implemented in the object-oriented, equation-based language Modelica. The simulation results are compared to literature data.

  11. In vitro experimental investigation of voice production

    PubMed Central

    Horáček, Jaromír; Brücker, Christoph; Becker, Stefan

    2012-01-01

    The process of human phonation involves a complex interaction between the physical domains of structural dynamics, fluid flow, and acoustic sound production and radiation. Given the high degree of nonlinearity of these processes, even small anatomical or physiological disturbances can significantly affect the voice signal. In the worst cases, patients can lose their voice and hence the normal mode of speech communication. To improve medical therapies and surgical techniques it is very important to understand better the physics of the human phonation process. Due to the limited experimental access to the human larynx, alternative strategies, including artificial vocal folds, have been developed. The following review gives an overview of experimental investigations of artificial vocal folds within the last 30 years. The models are sorted into three groups: static models, externally driven models, and self-oscillating models. The focus is on the different models of the human vocal folds and on the ways in which they have been applied. PMID:23181007

  12. A calcium-driven mechanochemical model for prediction of force generation in smooth muscle.

    PubMed

    Murtada, Sae-Il; Kroon, Martin; Holzapfel, Gerhard A

    2010-12-01

    A new model for the mechanochemical response of smooth muscle is presented. The focus is on the response of the actin-myosin complex and on the related generation of force (or stress). The chemical (kinetic) model describes the cross-bridge interactions with the thin filament, in which the calcium-dependent myosin phosphorylation is the only regulatory mechanism. The new mechanical model is based on Hill's three-component model, and it includes one internal state variable that describes the contraction/relaxation of the contractile units. It is characterized by a strain-energy function and an evolution law incorporating only a few material parameters with clear physical meaning. The proposed model satisfies the second law of thermodynamics. The results of the coupled model are broadly consistent with isometric and isotonic experiments on smooth muscle tissue. The simulations suggest that the matrix in which the actin-myosin complex is embedded does have a viscous property. The model is straightforward to implement in a finite element program in order to solve more complex boundary-value problems, such as the control of short-term changes in the lumen diameter of arteries due to mechanochemical signals.
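The abstract describes a cross-bridge kinetic scheme regulated solely by calcium-dependent myosin phosphorylation; the classic four-state model of this type is the Hai-Murphy scheme, which the paper's kinetic model resembles. A forward-Euler sketch of such a scheme follows (the rate constants in the test are placeholders, not the paper's fitted values):

```python
def cross_bridge_step(state, k, dt):
    """One forward-Euler step of a Hai-Murphy-type four-state kinetic model.

    States: M (detached myosin), Mp (detached, phosphorylated),
    AMp (attached, phosphorylated), AM (attached 'latch' bridge).
    k: dict of rate constants k1..k7; species fractions sum to 1.
    """
    M, Mp, AMp, AM = state
    dM   = -k['k1'] * M + k['k2'] * Mp + k['k7'] * AM
    dMp  =  k['k1'] * M - (k['k2'] + k['k3']) * Mp + k['k4'] * AMp
    dAMp =  k['k3'] * Mp - (k['k4'] + k['k5']) * AMp + k['k6'] * AM
    dAM  =  k['k5'] * AMp - (k['k6'] + k['k7']) * AM
    return (M + dt * dM, Mp + dt * dMp, AMp + dt * dAMp, AM + dt * dAM)
```

Calcium dependence enters through the phosphorylation rates (k1, k6), and the generated force is typically taken proportional to the attached fractions AMp + AM, which the mechanical (Hill-type) part of the model then couples to tissue stress.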

  13. A finite element model of rigid body structures actuated by dielectric elastomer actuators

    NASA Astrophysics Data System (ADS)

    Simone, F.; Linnebach, P.; Rizzello, G.; Seelecke, S.

    2018-06-01

    This paper presents finite element (FE) modeling and simulation of dielectric elastomer actuators (DEAs) coupled with articulated structures. DEAs have proven to be an effective transduction technology for realizing large-deformation, low-power, and fast mechatronic actuators. However, the complex dynamic behavior of the material, characterized by nonlinearities and rate-dependent phenomena, makes it difficult to accurately model and design DEA systems. The problem is further complicated when the DEA is used to actuate articulated structures, which increase both the system complexity and the implementation effort of numerical simulation models. In this paper, we present a model-based tool for effectively implementing and simulating complex articulated systems actuated by DEAs. A first prototype of a compact switch actuated by DEA membranes is chosen as a reference study to introduce the methodology. The commercially available FE software COMSOL is used for implementing and coupling a physics-based dynamic model of the DEA with the external structure, i.e., the switch. The model is then experimentally calibrated and validated under both quasi-static and dynamic loading conditions. Finally, preliminary results on how to use the simulation tool to optimize the design are presented.
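    The electromechanical coupling in a DEA membrane is commonly lumped into an equivalent electrostatic (Maxwell) pressure, p = ε0·εr·(V/t)². The sketch below evaluates it for an assumed membrane; the permittivity, thickness, and voltage are illustrative values, not parameters from the paper.

```python
# Equivalent electrostatic (Maxwell) pressure on a DEA membrane:
#   p = eps0 * eps_r * (V / t)**2
# All numerical values below are illustrative assumptions.

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def maxwell_pressure(voltage, thickness, eps_r):
    """Compressive stress (Pa) squeezing the membrane through its thickness."""
    e_field = voltage / thickness          # electric field, V/m
    return EPS0 * eps_r * e_field ** 2

# Example: 3 kV across a 50-um silicone membrane with eps_r ~ 3
p = maxwell_pressure(3e3, 50e-6, 3.0)    # ~96 kPa
```

    A pressure of roughly 100 kPa is comparable to the elastic moduli of soft elastomer membranes, which is why thin membranes driven at kV-range voltages produce the large deformations the abstract mentions.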

  14. Nutritional Status of Rural Older Adults is Linked to Physical and Emotional Health

    PubMed Central

    Jung, Seung Eun; Bishop, Alex J; Kim, Minjung; Hermann, Janice; Kim, Giyeon; Lawrence, Jeannine

    2017-01-01

    Background Although nutritional status is influenced by multi-dimensional aspects encompassing physical and emotional well-being, there is limited research on this complex relationship. Objective The purpose of this study was to examine the interplay between indicators of physical health (perceived health status and self-care capacity) and emotional well-being (depressive affect and loneliness) on rural older adults’ nutritional status. Design The cross-sectional study was conducted from June 1, 2007 to June 1, 2008. Participants/setting A total of 171 community-dwelling older adults, 65 years and older, who resided within non-metro rural communities in the U.S. participated in this study. Main outcome measures Participants completed validated instruments measuring self-care capacity, perceived health status, loneliness, depressive affect, and nutritional status. Statistical analyses performed Structural equation modeling (SEM) was employed to investigate the complex interplay of physical and emotional health status with nutritional status among rural older adults. The chi-square statistic, CFI, RMSEA, and SRMR were used to assess model fit. Results The chi-square statistic and the other model fit indices showed the hypothesized SEM model provided a good fit to the data (χ²(2)=2.15, p=0.34; CFI=1.00; RMSEA=0.02; SRMR=0.03). Self-care capacity was significantly related with depressive affect (γ = −0.11, p=0.03), whereas self-care capacity was not significantly related with loneliness. Perceived health status had a significant negative relationship with both loneliness (γ = −0.16, p=0.03) and depressive affect (γ = −0.22, p=0.03). Although loneliness showed no significant direct relationship with nutritional status, it showed a significant direct relationship with depressive affect (β = 0.46, p<0.01). Finally, the results demonstrated that depressive affect had a significant negative relationship with nutritional status (β = −0.30, p<0.01).
The results indicated physical health and emotional indicators have significant multi-dimensional associations with nutritional status among rural older adults. Conclusions The present study provides insights into the importance of addressing both physical and emotional well-being together to reduce potential effects of poor emotional well-being on nutritional status, particularly among rural older adults with impaired physical health and self-care capacity. PMID:28274787

  15. Nutritional Status of Rural Older Adults Is Linked to Physical and Emotional Health.

    PubMed

    Jung, Seung Eun; Bishop, Alex J; Kim, Minjung; Hermann, Janice; Kim, Giyeon; Lawrence, Jeannine

    2017-06-01

    Although nutritional status is influenced by multidimensional aspects encompassing physical and emotional well-being, there is limited research on this complex relationship. The purpose of this study was to examine the interplay between indicators of physical health (perceived health status and self-care capacity) and emotional well-being (depressive affect and loneliness) on rural older adults' nutritional status. The cross-sectional study was conducted from June 1, 2007, to June 1, 2008. A total of 171 community-dwelling older adults, aged 65 years and older, residing within nonmetro rural communities in the United States participated in this study. Participants completed validated instruments measuring self-care capacity, perceived health status, loneliness, depressive affect, and nutritional status. Structural equation modeling was employed to investigate the complex interplay of physical and emotional health status with nutritional status among rural older adults. The χ² test, comparative fit index, root mean square error of approximation, and standardized root mean square residual were used to assess model fit. The χ² test and the other model fit indexes showed the hypothesized structural equation model provided a good fit to the data (χ²(2)=2.15; P=0.34; comparative fit index=1.00; root mean square error of approximation=0.02; and standardized root mean square residual=0.03). Self-care capacity was significantly related with depressive affect (γ=-0.11; P=0.03), whereas self-care capacity was not significantly related with loneliness. Perceived health status had a significant negative relationship with both loneliness (γ=-0.16; P=0.03) and depressive affect (γ=-0.22; P=0.03). Although loneliness showed no significant direct relationship with nutritional status, it showed a significant direct relationship with depressive affect (β=.46; P<0.01).
Finally, the results demonstrated that depressive affect had a significant negative relationship with nutritional status (β=-.30; P<0.01). The results indicated physical health and emotional indicators have significant multidimensional associations with nutritional status among rural older adults. The present study provides insights into the importance of addressing both physical and emotional well-being together to reduce potential effects of poor emotional well-being on nutritional status, particularly among rural older adults with impaired physical health and self-care capacity. Copyright © 2017 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.

  16. Modeling socio-cultural processes in network-centric environments

    NASA Astrophysics Data System (ADS)

    Santos, Eunice E.; Santos, Eugene, Jr.; Korah, John; George, Riya; Gu, Qi; Kim, Keumjoo; Li, Deqing; Russell, Jacob; Subramanian, Suresh

    2012-05-01

    The major focus in the field of modeling & simulation for network-centric environments has been on the physical layer, while making simplifications for the human-in-the-loop. However, the human element has a big impact on the capabilities of network-centric systems. Taking into account the socio-behavioral aspects of processes such as team building and group decision-making is critical to realistically modeling and analyzing system performance. Modeling socio-cultural processes is a challenge because of the complexity of the networks, dynamism in the physical and social layers, feedback loops, and uncertainty in the modeling data. We propose an overarching framework to represent, model, and analyze various socio-cultural processes within network-centric environments. The key innovation in our methodology is to simultaneously model the dynamism in both the physical and social layers while providing functional mappings between them. We represent socio-cultural information such as friendships, professional relationships, and temperament by leveraging the Culturally Infused Social Network (CISN) framework. The notion of intent is used to relate the underlying socio-cultural factors to observed behavior. We model intent using Bayesian Knowledge Bases (BKBs), a probabilistic reasoning network, which can represent incomplete and uncertain socio-cultural information. We leverage previous work on a network performance modeling framework called Network-Centric Operations Performance and Prediction (N-COPP) to incorporate dynamism in various aspects of the physical layer, such as node mobility and transmission parameters. We validate our framework by simulating a suitable scenario, incorporating relevant factors and providing analyses of the results.
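    Relating unobserved intent to observed behavior is, at its core, a probabilistic inversion. The toy Bayes update below illustrates that idea with invented intent states and likelihoods; it is not the BKB formalism used in the paper, which additionally handles incomplete and inconsistent rule sets.

```python
# Toy illustration of inferring an unobserved intent from observed behavior
# via Bayes' rule. All states and probabilities are invented for illustration.

def posterior(prior, likelihood, observation):
    """P(intent | observation) for each intent state."""
    joint = {i: prior[i] * likelihood[i][observation] for i in prior}
    z = sum(joint.values())                 # normalizing constant
    return {i: p / z for i, p in joint.items()}

prior = {"cooperate": 0.6, "defect": 0.4}   # assumed base rates
likelihood = {  # P(observed behavior | intent), assumed
    "cooperate": {"shares_info": 0.8, "withholds_info": 0.2},
    "defect":    {"shares_info": 0.3, "withholds_info": 0.7},
}
post = posterior(prior, likelihood, "withholds_info")
```

    Observing withheld information shifts the belief from the 60/40 prior toward "defect", which is the kind of intent revision a socio-cultural reasoning layer feeds back into the network model.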

  17. Science, Semantics, and Social Change.

    ERIC Educational Resources Information Center

    Lemke, J. L.

    Social semiotics suggests that social and cultural formations, including the language and practice of science and the ways in which new generations and communities advance them, develop as an integral part of the evolution of social ecosystems. Some recent models of complex dynamic systems in physics, chemistry, and biology focus more on the…

  18. Do You Have a Strategy?

    ERIC Educational Resources Information Center

    Kalina, David

    2006-01-01

    Education is undergoing a transformation across the country as it responds to new understandings of the mechanisms for learning. These changes are affecting the physical environments where learning occurs, from individual rooms to entire building complexes. The impact of these trends on facilities is dramatic. Old classroom models will not support…

  19. Differentiating Instruction in Physical Education: Personalization of Learning

    ERIC Educational Resources Information Center

    Colquitt, Gavin; Pritchard, Tony; Johnson, Christine; McCollum, Starla

    2017-01-01

    Differentiated instruction (DI) is a complex conceptual model and philosophy that is implemented in many traditional classroom settings. The primary focus of DI is to personalize the learning process by taking into account individual differences among students' varied levels of readiness, interest and learning profile. Varied assessments are used…

  20. Representing Energy. II. Energy Tracking Representations

    ERIC Educational Resources Information Center

    Scherr, Rachel E.; Close, Hunter G.; Close, Eleanor W.; Vokos, Stamatis

    2012-01-01

    The Energy Project at Seattle Pacific University has developed representations that embody the substance metaphor and support learners in conserving and tracking energy as it flows from object to object and changes form. Such representations enable detailed modeling of energy dynamics in complex physical processes. We assess student learning by…
